Navigating the Medicare Part D Website

PartD_as_Subway_5-21-2013.png

The transport map above is a representation of Medicare Part D [http://medicare.gov/part-d/index.html], as it rendered on May 21, 2013. It is my first draft*, a simple, rather crude attempt to organize key pages of a website** using a visual notation with a very concrete yet familiar context - one I have never used for the purpose of communicating the essence of user experience to stakeholders. I feel that this and other experience diagrams can be very helpful as facilitators of experience architecture.

The map above should be familiar, as it brings to mind similar maps of transportation systems worldwide. In one glance stakeholders can get a good and accurate sense of key 'lines' (8 in this map) and 'stations' (39 here), and these translate nicely to web pages. One can also assess the complexity and size of this section of the website, the navigability paths from one station to another, transfer options and so on. It is the user journey represented in a format that immediately transcends the high fences of the boxes and arrows I typically use.

I argue that UX needs to experiment with new formats of visualization in addition to the artifacts we typically produce. Boxes & arrows diagrams and their permutations were never meant to capture and transmit the user experience. When decently executed, they are effective in communicating organization, flows, decision points and other aspects of the site's structure. But how can we invoke emotions, which is what experience is all about? Here I feel that by superimposing familiar models of representation, such as subway maps and others I will explore later, the audience - most importantly our clients - is provided with an opportunity to think 'out of the box', to see their property in fresh yet very familiar ways.

In discussing the issue of UX diagramming with colleagues and clients, a sentiment of dissatisfaction and urgency is common: Increasingly we are involved in Mobile-First, a design strategy that requires device-agnostic, device-appropriate development of multiple experiences - simultaneously. It is becoming darn hard to create diagrams for complex sites, especially those that are highly contextual, personalized and localized to the user, in addition to supporting high variability of exceptions to business rules.

Everyone on a project, from executive sponsors through business stakeholders and on, talks about the importance of the user experience - indeed, user experience is the driver for many transformational enterprise projects. So we need means to capture and talk about the user experience directly, since in early phases of conceptualization diagrams are extremely important: they serve as the foundation guiding work on wireframes and prototypes.

Back to the map above: We all understand what it means to negotiate public transport. We have a specific goal - getting from where we are to where we want to be - it is a very goal-oriented type of activity, for most of us at least. For example, I remember staring at the NYC subway map trying to figure it out...something about the brown or green line perhaps. A sense of urgency (a need to be somewhere on time), of realizing this will need some time to figure out - time that I did not have or want to spend. And a sense of being somewhat helpless, finally realizing that asking a passer-by is probably the best bet. This and other life experiences are triggered by looking at the map above. It is possible to overlay the personal, physical experience with that of the website. Which is why I hope using a transport map can work for user experience.

Consider the audience of the Medicare website, and specifically Part D, a section that outlines the specifics of drug coverage: eligible people (anyone with Medicare), their families, caregivers, health care professionals and others. Imagine sending users on a trip in this transport system, then look at a screen capture from the actual site, below:

A=We are in Drug Coverage (Part D) - but the tab does not provide any visual indication.

B=In the left-rail we selected the section 'How to get drug coverage'.

C=And we are reading the section 'How to change your Medicare drug plan' - the left-rail has this part highlighted in yellow, and there is a nice title at the top of the text.

D=This is the kicker: The breadcrumbs do not correspond to the path we selected (A>B>C). It is an alternate, system-generated path (the 1st breadcrumb is 'Home', the 2nd is 'Signup/change plan' - neither part of our ABC path) that conflicts with the user-driven path. It is an intersection of two rail systems, and it is confusing.

PartD_as_Subway_Orient-1.png

In the next iteration of this experience map I will try to represent 'D'. I can already tell you that we will experience how the magnitude of the entire Medicare site impacts this specific Part D section, and what having to navigate a much larger set of content can do to the user experience.

----------------------------------

* = To create this model I used the trial version of ConceptDraw Pro, which is easy to use and is relatively inexpensive. 

 ** = I use the terms website or sites to also include applications and all forms of software user interfaces.

Initial Thoughts on Logistics of Responsive Design and Rapid Prototyping

Finally, we (UX practitioners) had everything going for us: After years of having to toil with static drawing tools such as Visio, or agonize over tools intended for developers, rapid prototyping tools such as Axure made it possible for a non-developer to dream up and demonstrate compelling interfaces. During a brief period of bliss, between 2007 and 2010 or so, while things were far from perfect, everyone was happy: stakeholders, developers, users and of course - us, UX people.

But clouds of trouble gathered with the rapid proliferation of mobile devices, starting with the iPhone, iPad and the now numerous offerings from others. For our clients, making sure their websites and software work well on all devices quickly changed from 'nice to have' to 'absolutely must have'. Ensuring your software is device-agnostic has moved from providing some level of competitive advantage to being a financial deep-hole: marginal advantage when offered, punishing consequences if not.

On the surface, this situation would suggest a gold mine for anyone in the UX profession, since the implication is that there are tons of work we need to help with. And indeed, one might argue that the opposite situation would be worse, and I agree. However, as device-agnostic design becomes a core demand from clients, a non-trivial mission is being viewed by the people who pay for it as trivial. In other words - they want to pay less, to get more, and as usual - faster. It makes sense, from the client's perspective, because when you look at a good simple design, it looks so simple, so easy to do...just like competitive ice-skating, perhaps. Only those who went through a design project can appreciate the complexities.

In the grand scheme of things, however, these are early days. Adding to the mess created by multiple operating systems, web browsers and other competing standards, we now have to deal with more operating systems for mobile devices, and the devices themselves of course. The situation is not sustainable. Period. Customers demand, and rightly so, access using their device, and they don't care, and should not care, about the incompatibility issues. Responsive design is the latest savior of the day because it promises to reduce the costs and complexities associated with developing and maintaining apps on all devices known to humankind, with some clever techniques that make a website respond to the device it is used on. No dedicated apps are needed.
So back to us, UX practitioners, who got comfortable with our dedicated no-programming-needed prototyping tools. Responsive design requires multiple distinct experiences that will be displayed dynamically on the device, based on its size and orientation. If you use a tool such as Axure, it may be tempting to prototype the various experience permutations AS WELL AS the dynamic, fluid transformation from one state to another. I have seen several examples posted here and there, and the approaches are well intentioned, clever and in some cases even appropriate.
But for any project of meaningful size, responsive design should be approached with excitement mixed with dread: 
  • Do our stakeholders understand what we are getting ourselves into?
  • Do we understand?
In speaking with colleagues in the US and Europe, I'm getting the impression that UX might be sliding into a state of chaos, with companies scrambling to develop presence across the new mobile frontier while lacking the time, budget and expertise to get this done. Most of our prototyping tools are not well equipped for the demands of a single complex, multi-phased, multi-release project, and the compounding weight of simultaneous interfaces is not sustainable.
    

 

Domain Agnostic Naming Convention for Axure Based UX Projects [part 1]

Introduction

The following naming convention scheme is domain agnostic or in other words - appropriate for any type of software UX being modeled. Aspects of the convention are specific to Axure but I think they could be easily modified to fit other UX prototyping tools.


This convention is offered for review, discussion and adoption by others in the UX community in the belief that there is no need to reinvent the wheel for each new project and that we could all benefit from some standardization. Comments and suggestions are most welcome.

Benefits:

  1. Identify each element of the wireframe with a unique identifier and help:
    1. The UX team: troubleshoot interactions and quality of wireframe construction. It is inherent in the process of naming things that one must consider aspects of structure - hierarchy, placement and efficiency - which consequently yields well-formed wireframes.
    2. Developers, BAs and other stakeholders who consume the Word specification document or the CSV output.
  2. Quickly determine, when reading the Word spec or CSV output, whether we are looking at a page, master, dynamic panel or state - just by the ID.
  3. Consistency, which is critical for large projects with multiple designers working on a shared project file. It eliminates the 'Tower of Babel' syndrome in which each UX designer labels items differently.
  4. Avoid the "Unlabeled" plague that makes it extremely hard to create advanced interactions, or to make sense of the specification document.
  5. Internationalize - Greatly improve the process of localization. Instead of relying on a label, which may not be unique, IDs make it easy for teams across the globe to reference any item of the UI with confidence.
  6. Productive - When used from the project get-go, saves the UX team a significant amount of time in downstream consistency enforcement and tons of rework.
  7. Transferable - Use on any type of UX project.
Throughout this post I will use a simplified UX project example to contextualize and clarify some of the concepts. The example: an electronic medical records (EMR) application that is being developed by a team of UX experts and is divided into 4 workstreams:

  • UI Framework (all shared UX widgets, patterns etc.)
  • Medical Staff Portal (doctors, nurses, etc.)
  • Patients Portal (Charts, lab results, prescriptions, etc.)
  • Pharmacy Portal (Prescription Tracking, etc.)

The Namespace

1. Workstream Prefix

We'll start by assigning each workstream a 2-letter code prefix:

  • UI Framework = FW
  • Medical Staff Portal = MD
  • Patients Portal = PP
  • Pharmacy Portal = RX
The process of coming up with the 2-letter prefix should be straightforward, although it is important to keep in mind:

  • Domain nomenclature (Pharmacy could be PR, but RX makes more sense)
  • Avoiding confusion in the future (Patient Portal could be PT, for example, but might be confused with a sub-section under MD for Physical Therapy, widely known as PT)
  • Naming conventions of other partners in the project that may have started their work before the UX team. Often, the business process team is in place early on. The goal is to gain alignment and reduce confusion.
Why 2 and not 3 letters? To make the overall id as short as possible. As you will see later, the id can become quite long.

2. Workstream Number Range

Each workstream is assigned a numerical range that will be used for issuing unique IDs for pages and masters. Just by looking at the wireframe or widget number it becomes easy to associate the object with its workstream. Note that framework wireframes are assigned the highest range, the 900s, mostly to strengthen the visibility of framework elements - they play an important role as global elements across the whole UI.

  • FW = 900 to 999
  • MD = 100 to 199
  • PP = 200 to 299
  • RX = 300 to 399
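The prefix and range assignments above lend themselves to a simple lookup. A minimal sketch in Python (the function name `workstream_for` is mine, purely illustrative):

```python
# Workstream prefixes and ID ranges from the EMR example above.
WORKSTREAMS = {
    "FW": range(900, 1000),  # UI Framework
    "MD": range(100, 200),   # Medical Staff Portal
    "PP": range(200, 300),   # Patients Portal
    "RX": range(300, 400),   # Pharmacy Portal
}

def workstream_for(page_number):
    """Return the workstream code that owns a page number, or None."""
    for code, rng in WORKSTREAMS.items():
        if page_number in rng:
            return code
    return None

print(workstream_for(301))  # RX
print(workstream_for(950))  # FW
```

A helper like this can be handy when auditing a CSV export for pages numbered outside their workstream's range.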
Once the prefix is in place, the Sitemap panel should have the following top-level pages that will help each UX workstream organize its pages under the shared sitemap. Note that Axure's sitemap panel (version 5.6) does not support folders the way the Masters panel does.


Continue by creating the corresponding nodes in the Masters panel - here we can take advantage of folders to structure the wireframes without creating extraneous pages. Note that we add "M-" to all masters to help distinguish between masters and pages.


3. Pages

Pages are wireframes of entire screens and are typically constructed of widgets, masters and dynamic panels. In the context of Axure's html output, pages are the only wireframes directly accessible via the left-nav sitemap.
Names should be meaningful and self-explanatory to provide maximal clarity to all stakeholders. Keep in mind that you need to communicate with others, so avoid shortcuts and ambiguous names. From our example:
The RX team has a wireframe page with a list of drugs and a detail page for each drug, which is presented when the user clicks a row in the list.


The 2 pages should be named:

  • RX-301 Drug List
  • RX-302 Drug Detail
The clarity communicated by the name also helps members of the UX team - for example, when on-boarding new team members who must be able to digest a lot of new material quickly. Brief UX team members who work across workstreams regularly about changes to the sitemap. Use proper capitalization and space words to maximize readability. Remember - the goal is effective communication.


4. Masters


4.1 Workstream Masters
These are masters of limited use - only on pages related to a particular workstream. For example, the MD workstream has several pages related to Board certification of doctors and nurses. These are relevant only to the MD portal, and the ID of masters used there will communicate this fact:

4.2 Framework masters
These are typically global navigation, header and footer elements - masters that are consumed by all pages in the framework template. 
For example: M-FW-900.1, etc. Note that the 3-digit number for masters does not advance. Rather, it is an indicator of the workstream, not of a particular page, because masters can be used on multiple pages: M-MD-100.1 for the first master of the MD workstream, and so on.

5. Dynamic Panels
Dynamic panels inherit the prefix of the page or master they are placed on. For example, RX-232_1 DP Name references the first dynamic panel on page 232, which is part of the RX workstream.

6. States
States inherit the prefix of the dynamic panel they are part of: RX-232_1.1 State Name.
Note that the IDs of masters and dynamic panels are very likely to change: Masters may start in one of the workstreams but later be used across other parts of the UI and become part of the UI Framework. Dynamic panels may initially be used on a single page, but then be converted to a master. It is important to rename those IDs ASAP to reduce confusion.
Unfortunately, Axure does not allow page notes for dynamic panels and their states. As a result, the ability to reference any element of a dynamic panel with accuracy is important; it can be done at the master level, which can be cumbersome but allows documenting those elements.

7. Nesting
Nested masters and dynamic panels are unavoidable and we can easily extend the naming convention to account for nesting: RX-232_2.4_1.3 State Name - we are looking at the wireframe associated with the 3rd state of the 1st dynamic panel that 'lives' in the 4th state of the 2nd dynamic panel that 'lives' on page RX-232. The naming convention thus communicates the construction structure of the wireframe in addition to providing a unique reference and ownership association by workstream.
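Since the convention is fully mechanical, a nested ID can be decoded programmatically. A sketch, assuming IDs follow the panel.state grammar described above (the `decode` helper and its regular expression are mine, not part of Axure):

```python
import re

# Decode a nested ID such as "RX-232_2.4_1.3" per the convention above.
ID_PATTERN = re.compile(
    r"^(?P<ws>[A-Z]{2})-(?P<page>\d+)(?P<nesting>(?:_\d+\.\d+)*)$"
)

def decode(uid):
    """Split an ID into workstream, page number and (panel, state) levels."""
    m = ID_PATTERN.match(uid)
    if not m:
        raise ValueError(f"not a valid ID: {uid}")
    levels = [tuple(map(int, part.split(".")))
              for part in m.group("nesting").split("_") if part]
    return {"workstream": m.group("ws"),
            "page": int(m.group("page")),
            "levels": levels}  # outermost panel first

print(decode("RX-232_2.4_1.3"))
```

Decoding 'RX-232_2.4_1.3' yields workstream RX, page 232, then (panel 2, state 4) and (panel 1, state 3) - the same reading spelled out in the paragraph above.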

8. Widgets
Widgets are associated with the page, master or dynamic panel they live on.

Axure Built-In Widgets
For the built-in widget in Axure use the abbreviations listed here, or create your own.

Image= IMG
Text Panel= TPNL
Hyperlink= LNK
Button= BTN
Table*= TBL
Text Field= FLD        
Text Area= TXTA
Droplist= DL
List Box= LBX
Checkbox= CBX
Radio Button*= RBTN
Horizontal Line= HR
Vertical Line= VR
Image Map= IMP
Inline Frame= IFRM
Menu Vertical= MNV
Menu Horiz= MNH
Tree= TRE

Widgets inherit their parent's ID and look like this:
M-PT-305-BTN.1 Button Name, or more complex...
M-RX-211_1.3_1.1-DL.2 Quantity
Keep in mind that if you named widgets on a dynamic panel that later became part of a master, renaming will be required. While tedious, it will pay off when generating a specifications document that enables clear communication between developers, UX and all other stakeholders involved in consuming the documentation.
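Because the scheme is regular, widget IDs can be validated automatically before generating a spec. A sketch (the regex and the trimmed abbreviation set are mine; extend the set with the full table above):

```python
import re

# Validate widget IDs such as "M-PT-305-BTN.1" against the convention.
# ABBREVS is a subset of the abbreviation table above.
ABBREVS = {"IMG", "TPNL", "LNK", "BTN", "TBL", "FLD", "CBX", "DL"}

WIDGET_ID = re.compile(
    r"^(M-)?"              # optional master marker
    r"[A-Z]{2}-\d+"        # workstream code and page/master number
    r"(?:_\d+\.\d+)*"      # optional nested panel.state pairs
    r"-(?P<abbr>[A-Z]+)"   # widget abbreviation
    r"\.\d+$"              # widget sequence number
)

def is_valid_widget_id(uid):
    m = WIDGET_ID.match(uid)
    return bool(m) and m.group("abbr") in ABBREVS

print(is_valid_widget_id("M-PT-305-BTN.1"))         # True
print(is_valid_widget_id("M-RX-211_1.3_1.1-DL.2"))  # True
print(is_valid_widget_id("M-PT-305 Unlabeled"))     # False
```

Running such a check over the CSV output is one way to catch the 'Unlabeled' plague early, before the specification document goes out.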


----------
To be continued in part 2.

------
Portions of this convention were developed in collaboration with my colleagues and friends Elizabeth Srail and Katrina Benco.

Evaluating Axure RP - Talking Points

I'm often asked by colleagues and clients about my experience with rapid UX prototyping tools in general, and Axure specifically. I always preface by saying that I’ve been using Axure as a primary wireframing tool for the past 3+ years, and, although I occasionally evaluate new prototyping tools, such as ProtoShare or SketchFlow, I don’t have the hard-core experience that comes from daily work with a tool.

The landscape of rapid prototyping is changing rapidly, with frequent announcements of new players, some of which look very promising. These are very exciting times for our profession because finally, user experience architects and designers can master a specialized UX tool instead of tools borrowed from desktop publishing or app development.
So one must be open-minded to change and take advantage of whatever does the job. Yet any tool is a strategic investment - of our time, and of our ability to successfully execute and deliver on demanding challenges. In other words - the stuff needs to really work when a large team of UX designers, spread all over the country, needs to collaborate under tight schedules. Making things change colors on mouse-over is not enough.

So here is a deck I created recently and have used on several occasions to help guide an evaluation process. It is of course in favor of Axure - not because the product is perfect or because the company pays me to promote it (they don't!). It is their uniquely amazing customer support that won me over.


Download the PDF.


Comments are most welcome. 

Initial thoughts on Agile with Axure



Part 1: 
The Settings (Or, making software is like making sausage - you don't really want to know how it is made, but you should)
Immediately one faces the fact that there are way more flavors of Agile than Baskin-Robbins ice cream...So perhaps we should begin with the key drivers that lead to typical UX projects. Typically it is the business side of the organization that drives the change, with the IT group either resisting change or locked into a technology-flavor mindset (.Net, Java, etc.), and in many cases the communications between the two parts of the organization are poor. There are many exceptions, but I want to address the realities of large-scale projects. In established organizations, despite the desire to use Agile, there are still hard-wired protocols and bureaucracy around traceability, change management and sign-offs.
Axure, like the typical UX team, sits at the intersection of messy cultures and is used as a communication tool to articulate, 'sell' and specify both vision and reality - this is where some of the key risks and opportunities are for us as UX practitioners. By that I mean that the initial Axure prototype is probably very aggressive about rich interactions, features and functionality. The business end gets excited because customers who are exposed to the new vision press for improvements. The sales department is always hungry for a better product to sell.
Thus, strategic plans and budgets are set around unrealistic delivery targets, often with very little awareness of the full UX development process. In fact, the UX team is often engaged after the project plan, budget and delivery dates were set (What?! We need to pay for usability tests?!). Moreover, large projects are often broken into phases, which means you cannot expect a clean process of creating a prototype, finalizing it and going to the beach: work on following phases often begins before coding of the previous phase has ended, which means that everything is in flux, and the Axure file should be well formed to handle change.
So who are the players and consumers of Axure output now that the project is in flight?
  • The UX team (or perhaps you are a single practitioner) - The organizational association of the team is critical: is the team part of the business unit, or part of the IT organization? I've seen both, and I've seen the team floating with no clear association, which can be worse.
  • The business owners - Depending on the organization, they may be spread all over the US or worldwide, often with conflicting motives and requirements. The UX team must make sure that UX requirements are captured in a very formal way. Axure can be used for that, but there are some issues around managing requirements in Axure. While it is not a requirements-gathering tool, it makes so much sense to capture UX requirements in it that we may get tempted, and have to live with the consequences later on. Not a big deal if we planned for it, a mess if we did not.
  • The IT team - In a large organization this may be a Hydra, with sub-units in charge of different aspects of the technology; they may not like each other or, for that matter, even communicate much on a regular basis. And offshore teams are the norm these days, so we have to keep in mind that the output of our Axure project may be consumed by people for whom English is not a native language.
  • The BAs - In some organizations each group might have its own team of BAs, so the documents produced include business requirements and specifications, technical requirements and specifications, etc. The UX team typically adds UX requirements and specifications. All of these are often extremely long documents that no one really reads, because everyone is busy trying to beat the unrealistic deadlines mentioned above. So an opportunity that Axure affords is generating specifications that are easy to consume, which is not so trivial, and I'm looking forward to the upcoming release for some enhancements. Agile does not mean that specs and requirements are not needed - in fact the issue becomes worse: how to track and manage changes from scrums and sprints such that developers and business partners don't get lost in the sea of documentation?
  • The QA team - This team (if it exists beyond being a placeholder) has the worst job: testing scripts are often rendered useless because changes happen all the time, so they need to be able to follow the rapid changes and update the testing library in time for testing.
This post was originally posted on the Axure Discussion board.

Comparing Apples to Oranges - The NYT Bestsellers Lists and Kindle

The June 7th issue of the New York Times Book Review, print edition, had the following Amazon ad on page 21: an arrow pointing at the bestsellers list and the text:
"in the time it takes to skim the bestseller list, you can wirelessly download an entire book." A couple of inches below that text was an image of the Kindle accompanied by the text:
"Choose from 275,000 of the most popular books, magazines and newspapers. Free wireless delivery in less than 60 seconds."

In the print edition of the Times the bestsellers list is spread across 3 pages:

Page | Bestseller Category              | List Category       | # of Books
18   | Best Sellers                     | Fiction             | 15
18   | Best Sellers                     | Nonfiction          | 15
20   | Paperback Best Sellers           | Trade Fiction       | 20
20   | Paperback Best Sellers           | Mass Market Fiction | 20
21   | Paperback Best Sellers           | Nonfiction          | 20
21   | Advice, How-To and Miscellaneous | Hardcover           | 10
21   | Advice, How-To and Miscellaneous | Paperback           | 10

Totals: 3 pages, 3 bestseller categories, 6 list categories, 110 books
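The 110-book total can be verified with quick arithmetic; a sketch:

```python
# Per-list book counts from the seven NYT lists tabulated above.
counts = [15, 15, 20, 20, 20, 10, 10]
total = sum(counts)
print(total)  # 110
```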

The Amazon ad suggests that the act of downloading a single book is equivalent to the act of browsing a list of books (perhaps to determine which book to purchase). But it is really a comparison between (a) a pre-decision activity of browsing the list of books, and (b) a post-decision activity, since the download happens after one has chosen, purchased and clicked the 'Download' button - it is the device that does the work.

Let's consider Amazon's value proposition, which is divided into 2 phases:

  1. Choose from 275,000 of the most popular 
    1. Books
    2. Magazines
    3. Newspapers
  2. Free wireless delivery in less than 60 seconds.

In phase 1, the ad touts a clear quantitative advantage for Amazon: a choice of 275,000 bestseller books compared to the meager 110 books of the NYT. In fact, the ad is positioned on page 21, which only lists 20 books, compared to page 20, which lists 40 books, and page 18, which lists 30 books - the quantitative advantage is visually enhanced.

But as Barry Schwartz (not a relative) suggests in his book 'The Paradox of Choice', 'More is Less'.

In the print edition, one only has to choose between books. Indeed, making the choice even from this short list is confusing. What's odd about the NYT bestsellers list is the classification confusion in both the bestseller category and the list category: format (paperback, hardcover) is intermixed with genre (advice, fiction, nonfiction) and sales channel (mass market, trade).

This list is clearly not organized with the user (the reader) in mind. It is hard to imagine a reader pondering which mass market fiction book to get for her summer holiday. But at least the choice is among books. The Amazon ad offers, in addition to the large quantity of books, a range of publication types - books and magazines and newspapers - clearly a scope beyond that of the print list, but also a completely different type of choice and context of choice.

Finally, the interface of the NYT list in print requires the reader to read. Each item includes:

  • Title
  • Author
  • Publisher
  • List price
  • A short blurb
There may be a bias to select the top-ranking books, thinking that they are also the best books, but because the lists are short and easy to read, it is not an effort to cover them all.


Now, the user interface for browsing the same NYT bestsellers list on Amazon's Kindle section does not require one to read. Rather, one may be compelled to make the choice by the covers, ignoring the wise proverb 'Don't judge a book by its cover'.

Here, each item includes:

  • Cover photo (title and author quite visible)
  • Title
  • List price
  • Kindle price

And here is the interface for all the bestseller books (54) in the fiction category:


So, how many books could you download by the time you finish browsing the NYT bestsellers list on the Kindle website?

What We Learn From Animators About Prototyping

In animation, much like in software, everything that we see on the screen needs to be artificially created. In other words, as opposed to live action film, where the camera captures massive amounts of extra detail because it is part of the physical world, animators must create the ground the characters are walking on, the sky, and everything in between.

The production process of animation can be excruciatingly slow, even in modern, computer-generated productions. 'Snow White', Disney's first full-length feature film, took 4 years to produce (1934-37) and ended up costing nearly $1.5 million, a significant sum for a feature film back in 1937. In fact, a list compiled by Forbes of the 25 most expensive films up to 2006, adjusted for inflation, includes one full-animation film and many with heavy use of special effects, which is a form of animation.

In a recent interview with Terry Gross, Pete Docter described the creative process of the animation team behind the Pixar movie 'Up' ($175M), a process that in many respects is very similar to the process established by animators in the early days of animation, at the dawn of the 20th century.
For example, the team created a story reel, which Docter, the movie's co-director, described as a "comic book version of the movie". The idea is to build the visual sequence of the movie using rough 'keyframes' - drawings that define the start and end points of a sequence. Team members record the dialog and add it to the visual reel. In the case of 'Up' the team used immediately available resources, such as Docter's daughter, whose recording ended up in the released movie.

Animation has always been slow and expensive to produce because it is labor and technology intensive. Thus the story reel provides the stakeholders and the production team with a good idea of the narrative flow from start to end - before production begins. Gaps and flaws can be identified, and the script, character models and animation properties can be modified accordingly.

One second of animation at 24 frames per second takes 12 to 24 unique images. According to an article published in January 1938 in Popular Mechanics Monthly, over 1.5 million drawings were created for Snow White. Fast-forward 70 years, and the production of computer-generated animation is still as demanding - prototyping before actual production is critical.
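The gap between projected frames and total drawings is worth a back-of-the-envelope check. A sketch, where the roughly 83-minute runtime is my assumption for illustration, not a figure from the article:

```python
# Back-of-the-envelope: projected frames vs. total drawings.
# The ~83-minute runtime is an assumption, not from the article.
fps = 24
runtime_minutes = 83
projected_frames = runtime_minutes * 60 * fps   # 119,520 frames
drawings = 1_500_000                            # figure cited above
layers_per_frame = drawings / projected_frames  # roughly a dozen
print(projected_frames, round(layers_per_frame, 1))
```

The ratio suggests that, on average, each projected frame was built from many separately drawn elements (characters, effects, overlays), which is why the drawing count dwarfs the frame count.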

So in addition to the story reel, animators also use a technique called pencil testing. The pencil test helps evaluate the animation quality within a scene. The animators shoot a scene using keyframes and in-betweens (the sequence of drawings that connect two keyframes), review the result to identify flaws in the animation - jerkiness, action that goes too fast or too slow, etc. - and make the necessary changes. Once the pencil sketches were approved, production moved on to retrace those rough pencil outlines in ink, yielding high-quality drawings, and these were in turn traced onto clear acetate cels and painted. A long process indeed.
Another technique that was an absolute must in the days of hand-drawn animation was 'flipping'. The animator, hunched over his animation table, would quickly flip through a stack of drawings - sometimes as few as six or twelve - to get a sense of the flow within a sequence. This was very helpful during the process of creating the in-between drawings.

The similarities between animation prototyping techniques and user interface and user experience design are interesting:
  1. The story reel is like a complete interactive prototype, one that lets us step through tasks and interactions from login to logoff - check for overall consistency of interaction patterns and usability, and identify gaps in requirements and patterns. It is seeing the forest and also the trees, and is important at the project level.
  2. The pencil test is like testing a single UI widget or screen - is it working according to business requirements? Does it comply with our established interaction and visual design patterns? If not, iterate until approved. This is like seeing the tree but not the forest, and is useful at the work-stream level.
  3. Flipping, the quick testing of interaction flow within a ui widget, for example, a dynamic panel in Axure. This is useful on a team member, UXA level.

Choosing a Prototyping Tool


It seems like only yesterday that the mainstream prototyping option favored by user experience practitioners was Visio. Also common were heated arguments about the greatness of paper, PowerPoint and other low-fidelity tools and techniques as the main prototyping instruments. I recall a 2007 uphill battle with colleagues around the use of Axure for a large RIA project, where I was met with skepticism and concerns about the validity of the approach. They favored Visio.

Fast forward to 2009 and there seems to be an influx of new tools, and with them, new possibilities to express ourselves to our colleagues, business and development partners. This trend signals that the practice of user experience has finally matured and is large enough to attract software developers. The same happened with word processors, desktop publishing, illustration, video editing, browsers, web authoring and many others. Eventually the market settles on a couple of applications that become the de facto tools of the trade, at least until a game-changer enters the field. From this perspective, Axure is a game-changer, emerging when pressure on UX to simulate rich interaction rendered tools like Visio useless.
A few points to consider:

  • What is our interest as a professional community? I would argue that as information architecture and interaction design get more complex while deadlines continue to shrink, we want our prototyping tools to be powerful yet easy to use: We need to demonstrate our value by expressing complex applications correctly - and fast. The tools need to handle the various facets of our work products: As we know, there is a lot more to user experience design than just mocking up screens and simulating rich interaction. Our deliverables include extensive documentation that is consumed by a range of stakeholders.

  • Features and complexity. I would argue that the successful tool must be feature-rich and fit the granularity of prototyping throughout the design process. By that I mean that we typically start with high-level concepts - fast wireframes and flows. Gradually, and with feedback from research and stakeholders, more depth and detail are added to the prototype, including interactions and detailed annotations. While we want to simulate rich interactions, I think it is desirable to avoid becoming programmers, or at least to minimize the need to rely on a scripting language such as ActionScript or JavaScript. A concern is that the more effort is spent on making the prototype interactive, the less flexible the design becomes, because we are getting involved in development instead of design. It is possible to create fairly complex prototypes with Axure without ever using raised events and variables, but these features are available to power users. Few of the new tools offer this flexibility. Finally, beyond dragging and dropping some UI widgets on a canvas and simulating RIA interaction, it is the proven ability to support team work, rich interaction specifications, and reuse of visual and interaction patterns (to name some key capabilities) that sets a tool like Axure apart from the new crop of tools.

  • Proficiency and professional benefits. This is especially relevant to situations where a team of interaction designers is assembled and required to conceptualize, test and document (fast...) a large, complex application. It makes a great difference if all team members can - in addition to quickly getting up to speed on the prototyping tool - master it and maximize its potential. For example, Axure seems to be gaining awareness in the UX community, so it is easier to find UX professionals who are familiar with it and can 'hit the ground running'. Another important aspect is that practitioners want to leverage expertise gained in one project when moving to another employer or project. If one uses tool A on one project, tool B on another and tool C on the next, there is little benefit in terms of best practice and expertise from a professional perspective.

  • Shared projects, regardless of the prototyping tool, are not trivial, and best practice is still evolving as knowledge around this new, emerging capability spreads within the community. Developers of prototyping tools that do not support sharing miss out on the experience gained from having to deal with the challenges of collaborative work, especially issues that relate to management of assets, management of multiple releases, etc. Keep in mind that implementing solutions in the tool takes time and feedback from practitioners - see the list of desired functionality for Axure to get an idea of how much more we want...

  • Cost. As others and I have noted elsewhere in this forum, cost plays a major role in the acceptance and adoption of any tool. As we know, cost is not just the price of the application, but also the time invested in reaching proficiency and in dealing with work-arounds if the tool lacks needed features, or if it is buggy. There is also an interesting phenomenon with price: If the tool is too inexpensive, it tends to be dismissed by IT organizations. From this perspective, Axure's price point makes it affordable to the single practitioner while remaining a palatable purchase for large teams.

  • Community and customer support. Last but not least - the prototyping application and its files become critical to our ability to deliver on time. As I wrote elsewhere, the confidence that Axure will respond to an urgent crisis is a major, major point of differentiation for me. I know that postings on this board or direct mail to Support will be addressed. I also learn all the time from reading the tips and techniques that other practitioners post regularly. In fairness to the developers of the new tools, they will have an opportunity to prove their commitment to their customer base. Ultimately, the success of one tool over another can often be attributed to the strength of the community formed around it.
To be continued here.

This post was originally written as a response to another post in a thread on Axure's
discussion board.
---------------------------------
Disclaimer: I am not an employee of Axure nor am I compensated by the company in any way, shape or form. Rather, I have a vested interest in its continued development as an avid user of the application on a daily basis. (Disclaimer text by Dingle)

High Fidelity and Low Fidelity Prototyping


Magritte's painting "Ceci n'est pas une pipe" ("This is not a pipe") continues to be the source of delicious musings on art and semiotics almost a century after Magritte created the series of paintings called The Treachery of Images.


The point here is that the prototype is not the application, and keeping this in mind can guide the user interaction team in developing a prototype that is rich and effective, yet not so involved as to introduce complexities to the project.


We are witnessing a dramatic change in the landscape of prototyping tools available to practitioners, and with the tools, business acceptance of and demand for increased visualization of the proof of concepts before development begins.


Ideally, the prototyping process should be continuous and evolutionary, meaning that it is possible to iterate on the prototype file, progressively adding depth and specification. It is then a matter of developing a prototyping process that is effective and appropriate to the phase of the project. Typically, low fidelity works well in the very early days of the design process:

  • Sketches on paper, cards, post-its, etc.
  • Sketches in PowerPoint, Visio, Illustrator, etc.

The purpose of these quick sketches is mostly to give the designer an initial handle on the concept and a way to quickly experiment with approaches.

To be continued here.



* As a side note, a search for 'this is not a pipe' yields a result set that demonstrates some of the issues Walter Benjamin brought up in 'The Work of Art in the Age of Its Technological Reproducibility'. Which image is the pipe of 'This Is Not A Pipe'?


Lessons from History on Prototyping

A decade ago the discipline of UX did not exist. Not that we did not practice it, but terminology was still evolving, user-centered design was on the horizon, and Donald Norman's 'The Design of Everyday Things' was becoming a hit among those of us who found ourselves responsible for making software easier to use by introducing the wild concept of accounting for the users in the process.

It is true that personal computers have not been around for long either, but as the use of computers spread worldwide, several generations of users suffered the consequences of software with terrible user interfaces at all levels - from operating systems on up - software that was designed with little consideration for ease of use, accessibility and real productivity. This is a generalization that is unfair to those who did care about the user, the user interface and the outcome - the user experience - but the statement does apply, in my opinion, to the majority of software vendors.

This is not unlike the situation in physical architecture. Of the billions of private residences, public buildings and industrial structures, probably only a few ever benefited from the design of an architect. Not that the solutions were necessarily bad - in fact, many of the structures we see today evolved successfully over millennia. People build their own homes - individually or as a communal effort. Read Donald Harington's 'The Architecture of the Arkansas Ozarks' for a wonderful account of such an evolutionary process.

In the classic text 'On the Art of Building in Ten Books', Leon Battista Alberti mentions that Julius Caesar "completely demolished a house on his estate in Nemi, because it did not totally meet with his approval." and continues to recommend "the time-honored custom, practiced by the best builders, of preparing not only drawings and sketches but also models of wood or any other material." (1).

Back in the fifteenth century, Alberti described an event that took place in the first century BC. Substitute 'building' with 'user interface', and the business value, best practice and positive impact on the end product are still the same. The amazing find here is the reference to a prototype, a model, that builders and their clients used early on as a means of communicating the desired end result.

Alberti writes further that "Having constructed those models, it will be possible to examine clearly and consider thoroughly the relationship between the site and the surrounding district, the shape of the area, the number and order of the parts of a building... It will also allow one to increase or decrease the size of those elements freely, to exchange them, and make new proposals and alterations until everything fits together well and meets with approval. Furthermore, it will provide a surer indication of the likely costs - which is not unimportant - by allowing one to calculate costs".

In another example of the customary use of prototyping, Baxandall writes about the fifteenth-century painter Filippo Lippi, who in 1457 was commissioned to paint a triptych for Giovanni di Cosimo de' Medici, the Italian banker and patron of the arts (1). In a letter to Giovanni, Filippo writes "...And to keep you informed, I send a drawing of how the triptych is made of wood, and with its height and breadth..."

So we did not quite invent the prototyping wheel, and I'd propose that instead of floating complex ROI equations and fancy technical terminology as a means to convince our business partners that investment in interactive prototyping is worthwhile, we can reference the past and the lessons learned from the art of building and from fine art.


To be continued here.

Best Practices for Shared Axure Projects

While it is important to develop tool-agnostic practices, in reality we are always empowered and limited by our choice of tools. Although this post references Axure-specific functionality, it also covers general aspects, the first and most important of which is communication.
Regular and productive communication is the most important contributor to successful team work, yet it is easier said than done. This is especially true with virtual teams of individuals who work remotely from their homes and with on-site teams spread across several geographical locations. But all too often, even people who work in close proximity fail to exchange meaningful information.
As much as possible, it is important to allocate time for staff development to ensure that all team members possess a level of proficiency that will not only make them productive, but also prevent loss of work due to errors caused by an unknowledgeable team member messing up the shared file. As we know, such calamities tend to happen just before a major deadline.

  1. Team members should understand how to work with shared projects. All should be comfortable with the various options under the 'Share' menu and the difference between options such as 'Get all changes..." and "Get Changes...", for example.
  2. New team members should have an on-boarding deep-dive session with a knowledgeable team member to cover the structure of the sites. In large, intense projects, new members are often thrown into the cold waters of a shared project file to sink or swim, because the team is at the height of some crunch. Disoriented and under pressure to get up to speed asap, the incoming member can easily get lost in the intricacies and work-arounds.
  3. All team members should participate in a weekly status meeting that covers the structure of the sitemap, variables (since those are global and limited) and other important changes. Use web sharing to view the file, and make sure that members understand how their colleagues structure their compositions.
  4. Despite looming deadlines...it is important to be careful and pay attention before checking in and out. A few seconds of concentration can save hours of lost work.
  5. Team members should avoid unsafe check outs -- checking out pages that are already checked out by another team member - this is critical.
  6. Before you begin work on a page, make sure to 'Get ALL changes from shared directory' - this will ensure you have the latest copy.
  7. Update your file frequently by getting all changes.
  8. When done editing a page or master you checked out, check it in so that it will be available for other team members.
  9. Check out only what's needed for your design work, check in as soon as done and check out the next chunk you are going to work on: Avoid hogging files you are not working on but still checked out.
  10. If possible, structure the sitemap and masters in sections such that team members can work on chunks of the file in parallel. Agree on unique page and master IDs and a naming convention to help team members access the right files and communicate.
  11. Make sure to back up the shared file.
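The check-out/check-in discipline in the list above can be illustrated with a toy lock model. To be clear, this is not how Axure implements shared projects - the class and method names below are hypothetical - but it shows why an unsafe check-out should be refused rather than allowed to silently overwrite a colleague's work:

```python
# A generic sketch of check-out/check-in discipline for a shared project.
# Hypothetical model, NOT Axure's actual mechanism: each page can be held
# by at most one team member at a time.

class SharedProject:
    def __init__(self):
        self.locks = {}  # page name -> team member who checked it out

    def check_out(self, page: str, member: str) -> bool:
        holder = self.locks.get(page)
        if holder is not None and holder != member:
            # Unsafe check-out refused: another member holds this page.
            return False
        self.locks[page] = member
        return True

    def check_in(self, page: str, member: str) -> bool:
        if self.locks.get(page) != member:
            return False
        del self.locks[page]  # page becomes available to the team again
        return True

repo = SharedProject()
assert repo.check_out("Login Page", "alice")
assert not repo.check_out("Login Page", "bob")  # refused, no lost work
assert repo.check_in("Login Page", "alice")     # checked in promptly
assert repo.check_out("Login Page", "bob")      # now safe for bob
```

The point of the sketch is point 9 above: check out only what you need, check it in as soon as you are done, and the lock table stays short enough that nobody is blocked for long.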


Note: Sections of this entry were first published on Axure's discussion board, but I had requests to post it here.

Disclaimer: I am not an employee of Axure nor am I compensated by the company in any way, shape or form. Rather, I have a vested interest in its continued development as an avid user of the application on a daily basis. (Disclaimer text by 'dingle', a frequent contributor to the Axure discussion board)

DATA.GOV

A recent editorial in the NYT informed me about the federal government's new resource - Data.gov. As noted in the editorial, the site is still new and does not yet provide any direct data visualization and manipulation widgets, although sometimes there are links to other sites where such widgets are available. Still, this "one-stop shop for free access to data generated across all federal agencies," as Peter Orszag describes it, promises information architects and user experience designers an unparalleled opportunity to experiment and develop new paradigms of data visualization. For the most part, access to very large sets of data is not readily available, or so easily found.
Below is the search result for the term 'flu'.

The link to the CDC's library of widgets shows a surprising wealth of them readily available for consumption. Embedded in this post is the CDC's FluIQ widget, as a useful example... Check it out:

On Monetizing

On May 21, 2009 I started a new blog dedicated to user experience prototyping. As part of the Settings flow, I decided that it will be interesting to witness the evolution in the context of ads that Google's AdSense feeds to the blog. Also, I am hoping to get really wealthy as visitors to my blog click away from it on their way to some other destination...

I was really shocked when I first checked the results after publishing my first post, and my knee-jerk reaction was to stop showing the ads. The reason, as you may guess, was that the ads were more suitable to, shall we say, interactions of a physical nature than to a site dedicated to user experience and interaction with software, an activity that typically does not involve bodily fluids.
I reactivated the ads a couple of days later, and here is the result: As you can see, it is not likely to be of interest to my target audience. But perhaps I'm wrong - a topic for another entry. I am hoping to update this post over time, and am really curious about what is going to transpire.
Update on May 25th:
It is still too soon for Google's bots to discover the great contribution of the blog to the practice of user experience design, because the automated ads are clearly not contextual to the site, which now has a couple of posts and some links to relevant content.

Update on May 28th
Already on the 26th there was a noticeable change in the quality of the ads Google generated and displayed on the site: They were all contextual to the blog's content. The illustration below is a comparison of the ads. One of the lessons of this experiment is the importance of conditioning a site to be as productive as possible from a search engine perspective.

I am not sure how many user experience practitioners are versed in the craft of website optimization and web analytics. In my experience, work on a commercial B2C project typically involves heightened awareness of analytics. It does appear that not enough information architects and user experience designers consider analytics during the design process. Rather, analytics professionals handle optimization as a technical aspect of the site, and only after the site has been redesigned and launched.

Avinash Kaushik's blog Occam's Razor provides important insights, many of which are really relevant from an interaction design perspective. Since the demand (or daemons?) for monetizing anything web-related is becoming the norm, it is important to structure the information architecture in a way that will be effective, providing value 'under the hood' while avoiding the transformation of the site or application into a 'Times Square'. Best-practice approaches can be adopted from analytics and further developed for the purposes of improved user experience.

But back to the nature of the ads that appear by default on the site before Google and other search engines have had time to index its content.

As you can see in the capture above, Blogger (and, I'm assuming, other publishing tools) provides the ability to indicate to the bots that the site contains adult content. But despite the fact that from its inception my blog was set to 'No', the ads in the first few days assumed (cynically?) that it does, or that visitors to the site would be interested.

Faceted Browsing and Taxonomy

This entry is a work in progress intended as a tentative study of the current use of faceted (guided) navigation in e-commerce settings and how it exposes the underlying taxonomy to the user. This blog entry is NOT a critique of the sites discussed here, but rather an exploration of navigation paths, taxonomy-facilitated browsing and assumptions made regarding the use of the underlying information architecture and its impact on clarity and usability (directly impacting conversion and retention rates).

1. Lowe's.com [Captured January 2009]
There are several ways to navigate the site by browsing. The left column provides groupings that parallel the top horizontal menu. In the left column the user sees items grouped by Departments and, below that, items grouped by Rooms. Departments map to the store, corresponding to a mental model a user might have of the store, and Rooms map to a home, corresponding to a mental model the user might have of a home. Providing multiple browsing models is a nice feature because it supports self-identification - the user benefits from the flexibility to be at their comfort level, not the site's.
One can assume that the user is more familiar with the concept of a home, so it is a pity that the navigation the user is more comfortable with is secondary to the store's model. On the other hand, some users may also be very familiar with the store model - for example, store clerks or customer service reps. (But in my experience, in-store terminals are generally not similar to a company's public e-commerce store.) In any case, the site exposes and organizes the highest level of its taxonomy and access to its products in two ways, which increases the flexibility and the probability that the user will select one path to work with and not abandon the site.
The list of items under the Rooms model is obviously much shorter than the one under Departments, and moreover, the Laundry Room is one of the items listed right there on this first level. But, browsing top-down, path 1 (see image below) is the first the user will encounter, clicking the Appliances link under 'Departments'.
>>> Assumption: The user would know/guess that a dryer is an appliance.
Path 1:
L1.1 - Departments
L1.2 - Appliances (click to get to L2)

If the user is more inquisitive and visually scrolls down to the Rooms section, the obvious, explicit selection is right there. (See image below.)
Path 2:
L1.3 - Rooms
L1.4 - Laundry Room (click to get to L2)


Both browse paths involve two clicks, so no efficiency is gained in terms of physical effort.
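The two browse paths can be sketched as a single catalog filtered by different facets. The product records and facet values below are hypothetical stand-ins for the Lowe's example; the point is that one product is reachable through either mental model:

```python
# A minimal sketch of faceted browsing: one catalog, two browse paths.
# Product names and facet values are hypothetical, loosely modeled on
# the Lowe's example discussed above.

products = [
    {"name": "Gas Dryer",   "department": "Appliances", "room": "Laundry Room"},
    {"name": "Ceiling Fan", "department": "Lighting",   "room": "Living Room"},
]

def browse(facet: str, value: str) -> list:
    """Filter the catalog by a single facet value."""
    return [p["name"] for p in products if p[facet] == value]

# Path 1: Departments > Appliances
print(browse("department", "Appliances"))  # ['Gas Dryer']
# Path 2: Rooms > Laundry Room -- same product, different mental model
print(browse("room", "Laundry Room"))      # ['Gas Dryer']
```

Because the product carries both facet values, the site never forces the user to learn the store's organization; either vocabulary leads to the dryer.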

Talking Taxonomy To Kids

Everyone knows that you need to use simple words when talking to little kids, so 'big' words like classification, clustering, facets and hierarchy (a distasteful term which even some grownups find difficult to spell) are out. So let's start with a tree. Imagine a tree. What's a tree really? You could say that a tree is like a 'parent', and it has 'children': A trunk that splits into several big branches that in turn split into smaller twigs that split into even smaller twigs where leaves sprout and fruit grow. If the child is really curious, you can talk about the parts of the tree that only moles could see if moles could see: The taproot, which is the main root that grows vertically into the ground; the lateral roots, which parallel the branches; the radicles, which is just a word for small roots that parallel twigs; and the root hair zone, which is like the leaves. But let's not make things complicated, because we are talking to a child who happens to speak English. Had the child spoken German, we'd have to use words like Stamm (trunk), Zweig (branch) and Zweig again for twig, because it seems that the Germans don't have a special word for it, or at least that's what you get from online translators.


So, is taxonomy like a tree? Wait, wait... because there is another way to describe a tree: The bole is the part of the tree between the ground and the first branch, the crown is the part of the tree from the first branch to the top, and the top is the highest part of the tree.

And... a tree is a plant, and there are all kinds of trees - here are just a few: Redwood, Ash, Fir, Spruce, Sequoia. There are banana trees, apple trees, orange trees - and how exactly is a Sequoia related to an Avocado tree? And even more: A collection of trees can form a forest, grove, garden, or park, which are themselves not just collections of trees but wider concepts. So of course, the development of a taxonomy involves research, and among other things one can find that for many domains - especially in the life sciences, law, government and many others - it is possible to start the process with an existing taxonomy; see for example the taxonomywarehouse.

It is clear that the scope of concepts in the world is endless due to the human instinct to stereotype and classify, first explored by Aristotle - or at least, his is our first written record of an arrangement into classes, subclasses and so on as a means to understand our world. But if so, let's keep in mind that children must have an inherent understanding of classification and, by extension, of taxonomy. It is only a matter of vocabulary then.

Taxonomy is a communication device, and a tricky one, because it is important to make sure that the person who communicates the taxonomy and the audience for the taxonomy understand each other. So the first thing is to understand who will be using your taxonomy, and how.

When you talk to a child, you want to talk about the tree using words like branch, twigs, leaves, etc., and you don't want to discuss apical dominance, foliage and phloem, because this is not the vocabulary of an eight-year-old. And since we clearly can apply elementary school education to user experience design, I would add that developing a taxonomy is art as much as it is science, and on that you can read more in Bowker and Star's great book 'Sorting Things Out'.

As a communication device, taxonomy's principal use is in navigation systems and in facilitating good search results. But because a taxonomy maps to a known mental model shared by the user and the system, it is important that the appropriate taxonomy be exposed to the user in navigation systems, drop-lists, and other actionable interface objects. Such a system allows the user to self-identify - I'm a kid, I'm a teacher, or I'm a parent/guardian - and the system renders the relevant taxonomies based on appropriate synonym mapping. The multidimensionality of relations within a taxonomic plane is supported by explicit content tagging as well as by folksonomy - a taxonomy created by users - which provides the necessary flexibility.
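The synonym-mapping idea can be shown with a toy sketch: the same taxonomy nodes rendered in audience-appropriate vocabulary after the user self-identifies. The vocabulary table below is purely illustrative:

```python
# A toy illustration of audience-appropriate synonym mapping: the same
# taxonomy nodes, rendered in different vocabularies depending on who
# the user says they are. All terms here are illustrative only.

taxonomy = {
    "trunk":  {"kid": "trunk",  "botanist": "bole"},
    "branch": {"kid": "branch", "botanist": "limb"},
    "leaves": {"kid": "leaves", "botanist": "foliage"},
}

def labels_for(audience: str) -> dict:
    """Render the tree vocabulary for a self-identified audience."""
    return {node: names[audience] for node, names in taxonomy.items()}

print(labels_for("kid"))       # plain words for the eight-year-old
print(labels_for("botanist"))  # {'trunk': 'bole', 'branch': 'limb', 'leaves': 'foliage'}
```

The underlying nodes never change; only the labels exposed in navigation, drop-lists and search results do, which is exactly what lets the system meet each audience at its own comfort level.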

Towards A Unified UI Testing Model

This entry was inspired by a post by Avinash Kaushik 'Experiment or Die. Five Reasons And Awesome Testing Ideas'.

Although 'User' is the operative word in 'User Interface', it took several decades to get usability off the ground as a service companies are willing to pay for. It is true that some companies pioneered user-centered design years ago, but I think it is safe to say that the 'main street' of companies involved in any substantial software project considered (and many still do consider) the user interface mere eye candy. But the evidence for an evolution is the accepted legitimacy of roles such as information and user experience architects, usability engineers, interface designers and so on.

As a result of the higher awareness of the UI throughout a software's life cycle, testing the UI during development is now increasingly common, as the tools needed to conduct reasonable testing are more affordable and testing goals are more practical. Consumer-facing interface re/design projects are increasingly adding usability testing as part of the pre-launch process, and there is certainly a shift from pseudo-scientific testing of eye-movement tracking or user response time to on-screen events, toward measures of task-flow efficiency and task-completion success.

Usability testing software such as Morae and UserVue substantially reduces the expense and limitations of UI testing that were common just a few years ago, when usability labs had to be rented by the hour and were extremely expensive. In early 2006 I was handed sixteen audio cassettes of ninety minutes each after finishing a couple of days in a usability lab. The client spent over $10K for the testing, and yet the budget did not allow for video taping, and there was no time or budget allocated to go over the audio tapes after the sessions. While we learned a lot from the sessions, and $10K was a drop in the bucket for a multi-million dollar project, the singularity of such an exercise turned it into an expensive line item that was difficult to sell to many clients, whose budget for UI work was limited to begin with.
The truth is that the technology was just not there in terms of the computing power for the real-time audio and video capture possible now, and best practices were thin, since performing lab tests was a rare occasion for most practitioners. But the big drawback, in my mind, was the limited demographic and geographic distribution of the test participants, due to their need to be in relatively close proximity to the testing facility. Today, with web-based testing, we are no longer limited to a physical location and are able to sample a spread that is an accurate reflection of an application's user audience. Methodologies and best practices for UI testing are evolving rapidly, and acceptance of this effort is so high that it is no longer questioned, as long as the cost is reasonable. UI testing prior to (and maybe during) development makes all the sense in the world.

What I often find is a reality in which organizations contract UI design services - especially interaction designers and information architects. As a result, navigation systems, page layouts and behavior patterns of landing pages are set during the concept development phase. Companies will pay for some iterations of user validation, but there is always real budget pressure to release asap and cut costs. I have yet to see a project plan that seriously accounts for sufficient exploration and testing, and I have to fight for it time and again. It is not that clients don't see the value, but they don't want to pay unless the concept is seriously off target.

To be realistic and practical - it takes significant time and labor (=$$$) to determine and preserve patterns of consistent interaction, a visual design approach and the possible variations. The effort can be significantly bigger when you are dealing with a multi-national presence, where one needs to account for many stakeholders as well as contrasting cultural sensibilities. It is very rare to have such luxury, and moreover, critics may argue that the best evolution of the redesigned UI will take place in deployment, not in the 'lab'.
And so, in many cases, the UI design consulting firm leaves around deployment time, after handoff to the internal development team, and this is where the brand new UI begins to fall apart - there is no one internally with the skill set, time or budget to take charge of testing the evolving interface as it is being readied for deployment. I doubt that the style guides and UI specs are used much; the cynical phrase 'No one reads' is not far off reality, partially because specs are difficult to produce and hard to consume. But that is another story.

As it turns out, the UI often gets tested again once in production. This is especially true for commercial B2B and B2C RIAs. However, this round of testing, and the decisions about modifications to the UI, are often done outside of the context of usability and without the involvement of the UI team that architected it (due to the fact that, often, the consultants who were hired to develop the application's UI are not retained after the launch). In fact, the people who do this round of testing often know very little about UI practice, or barely even look at the UI they test and attempt to improve.

Usability testing:
  • The testing is performed by usability professionals, part of a concentrated, focused UI effort.
  • The testing is typically qualitative because the sample of participants is relatively small.
  • The testing is typically done on a low-fidelity clickable prototype, a semi-functional POC or, for redesign purposes, on the deployed software.
  • The testing validates the design concept and triggers stakeholders' sign-off, or guides improvements to the existing or redesigned software.
Web Analytics Conversion testing:
  • The testing is performed by web analytics professionals and the effort is typically not related to a UI effort: The testing is not really focused on the user interface from a usability perspective, but from an optimization perspective.
  • The testing is quantitative, based on actual web analytics data derived from deployment usage.
  • The tested user interface is the production UI.
Analytics testing takes time: time to plan the testing strategy and prepare it, but most of all, time to execute and wait to see if trends are changing. We cannot assume that the change will take place overnight. Is there a way to attribute the time factor to the success or failure of a tested approach? Was it a single element that contributed to the change, or a combination? Or is it the latent impact of the brand, of market drivers, of cost reductions, and so on?
During development, usability testing is iterative, fast and qualitative. Often this is where testing ends for many organizations: they stop using the consultants and move to analytics testing that is performed by a web analytics consultant or, more likely, by someone in-house. Analytics testing is ongoing and quantitative, and it can be like stabbing in the dark - trying to figure out the Why without tying it to usability.

There is clearly a gap in the interaction design discourse when it comes to web analytics (and testing for optimization). Analytics is regarded as a 'post' event, not as something you can be proactive about during the design process. What I hope to see is more dialog between the user experience community and the web analytics community around practical ways to integrate testing and develop a full life-cycle approach that combines usability and analytics considerations throughout. More to come.

Real-Time, Quantitative Capture of User Response to Streaming Content

1. Introduction

Usability studies utilize both qualitative and quantitative methods for capturing user response to the user interface that is being tested. We can measure mouse clicks, time on task, task completion rates and other valuable data. We can also collect verbal feedback related to ease of use, visual design, layout and other subjective responses. The processing of collected verbal data is expensive because recordings have to be transcribed, tagged and often edited for readability. This is a labor-intensive process, and if the testing is done with users who speak different languages, translation is also required. Moreover, even when interviews are carefully scripted and prompts are consistent, responses are often difficult to reconcile: participants' answers can be inconsistent, vague, and generally difficult to analyze and interpret.
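To make the quantitative side concrete, here is a minimal sketch of how task completion rate and time on task might be computed. The session records and field layout are hypothetical, for illustration only - they do not come from any specific tool's export:

```python
# Hypothetical session records: (participant, task_completed, seconds_on_task).
# The layout is illustrative, not from any particular logging tool.
sessions = [
    ("P1", True, 42.0),
    ("P2", True, 55.5),
    ("P3", False, 90.0),
    ("P4", True, 38.2),
]

# Completion rate counts all participants; time on task averages successes only,
# a common (though not universal) convention.
completed = [s for s in sessions if s[1]]
completion_rate = len(completed) / len(sessions)
mean_time_on_task = sum(s[2] for s in completed) / len(completed)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Mean time on task (successes only): {mean_time_on_task:.1f}s")
```

Numbers like these are cheap to produce compared with transcribing and coding verbal feedback, which is the contrast the paragraph above draws.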

Verbal feedback is also used to capture participants' response to streaming content and to gauge their level of engagement with that content. Typically the tester pauses the media and prompts the participant for her or his opinion. The benefit of this method is that the feedback is contextually related to the content which has just been displayed and is fresh in the mind of the respondent. The disadvantage is the labor-intensive post-session processing and interpretation of the information gathered. Alternatively, a user can be given a questionnaire at the end of the streaming content. The benefit of a questionnaire is that it is easier to process and measure the responses, but the drawback is that the participant is not likely to recall in detail their response to the content, or their sense of engagement with content that was displayed minutes ago.

This paper describes a method I developed to capture in real-time participants' response to streaming content as well as their engagement levels throughout the presentation. The key benefits of this method are:

  1. Capture in real-time users' responses to streaming content such as web seminars, tutorials and demos, where the user interface itself plays a smaller role in the interaction.
  2. Significantly lower the time and labor costs associated with processing the feedback, which may help budget for larger samples.
  3. Capture response to streaming content by setting up your own test pages, or on any website or application.
The method involves the use of TechSmith's Morae*, which is currently the only commercial, out-of-the-box software package for usability testing. The method leverages Morae's capability to capture, among other things, mouse movement and mouse clicks.

2. Methods
2.1 Create your own test page/s
The first approach is to create your own test pages. This scenario works well when you:
  1. Wish to hide the tested content from the associated company's identity by isolating it from the rest of the company's site and the site's URL.
  2. Are testing several draft variations of the content, don't want to bother site admins with helping you post the material, and need to run it locally off your machine.
An added benefit is that you can perform the test without worrying about the quality of bandwidth in the test location, or an internet connection altogether.
Some technical skills involving the creation of a standard web page are required for setting up your own test pages, but a typical page is really simple, composed of the embedded streaming content - typically a Flash file (so you will need the SWF file) - and a single graphic that is used to capture the feedback for content and engagement. See image 1 below:
You need to create an image that will be used to capture the user's responses to content and the user's engagement level. This graphic can be as fancy as you wish, but my suggestion is to keep it simple and remember that the main event on the page is the streaming content, not these graphics. Here is an image I typically use:

The image is divided into 2 sections:

  1. Left side - Response to content. A rating scale from 1 to 7, with 1 being "I don't care -- trivial content" to 7 being "Really important -- Tell me more!"
  2. Right side - Engagement level. A rating scale from 1 to 7, with 1 being "I'm bored" to 7 being "I'm fully engaged"
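As a sketch of how simple such a test page can be, the following snippet generates the HTML described above. The filenames (`clip.swf`, `rating_bar.png`), dimensions and function name are placeholders, not values from the original study:

```python
# Generate a minimal test page: the embedded Flash clip plus the static
# rating image below it. All filenames and sizes are placeholders.
def build_test_page(swf="clip.swf", rating_img="rating_bar.png",
                    width=640, height=360):
    return f"""<!DOCTYPE html>
<html>
<head><title>Test Page</title></head>
<body>
  <!-- The streaming content under test -->
  <object type="application/x-shockwave-flash" data="{swf}"
          width="{width}" height="{height}">
    <param name="movie" value="{swf}" />
  </object>
  <!-- Static rating image: left half = content 1-7, right half = engagement 1-7 -->
  <img src="{rating_img}" width="{width}" alt="Content and engagement rating bars" />
</body>
</html>"""

# Write the page so it can be opened locally, with no server or internet needed.
with open("test_page.html", "w") as f:
    f.write(build_test_page())
```

Because the rating image sits directly below the clip and at the same width, clicks on it are easy to distinguish from clicks on the media when reviewing the recording.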
Morae Study Configuration
To maximize efficiency of logging sessions in Morae Manager, it is best to prepare the study configuration in advance. See image below:

[Image: Morae study configuration with the content and engagement markers defined]
For a 7-point rating scale, prepare 7 markers for content and 7 markers for engagement, and label them Content 1, Content 2, etc. Change the letter association for the markers to a sequence that will make it easy for you to use shortcuts during the logging. Finally, assign one color to all content markers, and a different one to all engagement markers. These distinct colors will provide clear differentiation once you finish placing all the markers.

How it Works:
Ask the user to click the relevant ratings on the content and engagement bars as the content streams, clicking as many times as makes sense. Morae captures the mouse clicks on the bars, which are easy to see and log. (The red triangle in the image below is generated by Morae Recorder during the session.)
In logging the session it is possible to identify with a high degree of accuracy which section of the streaming content the participant rated, and of course, the assigned value. With a big enough sample you can get good insight into participants' opinions about the content - both narration and visuals - as well as their engagement level throughout the streaming.
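Once the markers are logged, turning them into a timeline of ratings is mechanical. A sketch, assuming the markers have been exported as (seconds, label) pairs using the 'Content N' / 'Engagement N' naming above - the export format itself is an assumption, not Morae's native one:

```python
# Hypothetical marker export: (timestamp_seconds, marker_label) pairs,
# where labels follow the "Content N" / "Engagement N" scheme.
markers = [
    (12.5, "Content 6"), (14.0, "Engagement 5"),
    (47.2, "Content 3"), (49.0, "Engagement 2"),
    (83.1, "Content 7"), (85.4, "Engagement 6"),
]

def summarize(markers, bucket_seconds=30):
    """Group each channel's ratings into time buckets and average them."""
    buckets = {}  # (channel, bucket_index) -> list of ratings
    for t, label in markers:
        channel, rating = label.rsplit(" ", 1)
        key = (channel, int(t // bucket_seconds))
        buckets.setdefault(key, []).append(int(rating))
    return {key: sum(r) / len(r) for key, r in sorted(buckets.items())}

for (channel, bucket), avg in summarize(markers).items():
    start = bucket * 30
    print(f"{channel} {start}-{start + 30}s: avg {avg:.1f}")
```

The bucket width is a tuning choice: narrower buckets localize reactions to specific moments in the clip, wider ones smooth out noise from sparse clicking.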

Some production tips:
  1. For the screen to be aesthetically pleasing and professional looking, I adjust the width of the image so that it is the same as the width of the embedded content I'm testing.
  2. The buttons on the bar should be clear, easy to see, and easy to click on.
  3. The labels should be clear and easy to read.
  4. This is a static image - there is no need to create mouse-over states.
  5. Keep the number of shades and colors used for the buttons to a minimum: the participant needs to focus on the media, not the buttons, so minimize visual overload.
  6. Differentiate between the low and high scores. I use a gradual shift from white (1) to yellow (7).
  7. Make sure you have good speakers so that the participant can hear the narration clearly.
2.2 Capture any web page
The second approach makes it possible to capture users' responses to any streaming content, on any site. This scenario works when:
  1. You want to test content that is on a production site but you don't have the media file locally.
  2. You want to capture response to a section of a competitor's site.
  3. You want to capture response to streaming content but are also conducting a traditional usability test for the site (navigation, workflow, tasks and so on).
  4. For some reason you cannot use self-created test pages.
Just keep in mind that an internet connection will be required for the testing - try to avoid a wireless connection at all costs and opt for an Ethernet cable, if available.

How it Works:
Since a measurement bar graphic cannot be used, I suggest a low tech solution - drafting tape. The simplest method: Apply a strip of drafting tape directly to the monitor, above the clip you want to test. With a sharpie, write 'Content' in the top-center, the number 1 on the left, 2 in the middle and 3 on the right. Apply a second strip on the bottom of the clip, write 'Engagement' and the 3 numbers.

The strips help guide the user to well-defined areas of the screen where you want them to click. The strip is semi-transparent, so the user can see the mouse pointer when they click, and since they click in areas that are not part of the content object, the streaming is not interrupted by the clicks. When you view the recorded session later, the drafting tape strips will obviously not be there, but since you know their meaning, the clusters of clicks on the top and bottom of the clip - and to the left, middle and right - will help you collect the relevant data as effectively as if there were a graphic there. Once this section of the study is done, you can peel the tape off the screen and move on to another topic.
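Decoding those click clusters afterward amounts to comparing each click's coordinates against the clip's on-screen bounding box. A sketch of that mapping - the pixel coordinates are assumptions for illustration:

```python
# Decode clicks from the drafting-tape setup. The clip's bounding box
# (in screen pixels) is an assumed value for illustration.
CLIP_LEFT, CLIP_TOP, CLIP_RIGHT, CLIP_BOTTOM = 200, 150, 840, 510

def decode_click(x, y):
    """Map a click to (channel, rating 1-3), or None if it hit the clip itself."""
    if y < CLIP_TOP:
        channel = "content"        # strip taped above the clip
    elif y > CLIP_BOTTOM:
        channel = "engagement"     # strip taped below the clip
    else:
        return None                # click landed on the content object
    third = (CLIP_RIGHT - CLIP_LEFT) / 3
    if x < CLIP_LEFT + third:
        rating = 1                 # left third of the strip
    elif x < CLIP_LEFT + 2 * third:
        rating = 2                 # middle third
    else:
        rating = 3                 # right third
    return (channel, rating)
```

The same left/middle/right logic is what you apply by eye when reviewing the recording; scripting it only pays off for larger samples.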

While the example above works best for a 3-point rating system, you can set up a more granular system using the left and right sides of the box. However, keep in mind that you want to keep it simple, and that adding too much tape around the clip may mask too much of the screen. Also think about the accuracy of logging the clicks.

What you need:

  1. 1" 3M™ Scotch® 230 drafting tape - this tape sticks to the screen but is easy to peel off. You can get it at any office supply store.
  2. Ultra- or extra-fine-tip Sharpies - I use blue for the content strip, black for the engagement strip, and red for the numbers. (Avoid using red and green for labels because they carry inherent associations of bad (red) and good (green), which may confuse the user.)
  3. Small scissors (to cut the tape nicely).
  4. Lens cleaner solution to wipe the screen after peeling off the tape.
Keep in mind:
  1. Don't be sloppy: cut the strips with scissors. If you have to tear the tape, fold about 1/2" on each side to give the strip a straight edge.
  2. Apply the tape as horizontally as you can (a level is not needed...).
  3. Demonstrate to the user how you want them to act during the recording and make sure they are comfortable with the mouse going 'under' the tape while they click it.

3. What's next?
Once you capture and tag the sessions, it is possible to translate the data into valuable information. There are many interesting ways to slice and dice the data, well beyond the scope of this document. However, as you can see in the graphs below, aggregating session data makes a compelling story about the response to content and the level of engagement with existing or proposed streaming media. It helps present important analysis to stakeholders and supports the development of content strategies.

[Graphs: aggregated session data showing content ratings and engagement levels across the streaming timeline]
------------------------------------------------------------------------
* Can be used with Morae 2 and 3.

The Analog Threat

Google's new browser has a privacy mode called 'Incognito'. In this mode, sites open in a new window and do not "...appear in your browser history or search history, and they won't leave other traces, like cookies, on your computer after you close the incognito window."

Google warns users that going incognito doesn't affect the behavior of other people, servers, or software, and advises them to be wary of:
  • Websites that collect or share information about you
  • Internet service providers or employers that track the pages you visit
  • Malicious software that tracks your keystrokes in exchange for free smileys
  • Surveillance by secret agents
These are all sophisticated electronic transgression methods, and they contrast sharply with the last point:
  • People standing behind you
This point is needed, perhaps, because, to quote Voltaire, "Common sense is not so common." Interestingly, this is the only threat most users CAN do something about, if they pay attention...