Thursday, April 9, 2009

Measuring Gov 2.0 (Via Web 1.0): Forrester

In two previous posts (Brookings and ForeSee), I have explored common methods for measuring websites in a Web 1.0 world in order to find applicability for Web 2.0. Essentially, I am providing a summary in order to educate myself and share what I learn with personnel from government agencies and other organizations that are beginning to think about social media metrics and analytics. Each of these measurement methods is included in a presentation that Ari Herzog and I delivered at the "Social Media for Government" conference a few weeks ago.

Another way that an organization may analyze its website is by using the “Web Site Review Scorecard 7.0” developed by Forrester Research, Inc. As a quick aside, I owe my appreciation to Alan Webber, formerly a government website reviewer with Forrester who is launching his own venture (called Ronin Research Group), for reviewing this summary of Forrester's review methods.

Whereas ForeSee directly asks customers to complete a survey, Forrester engages an expert to evaluate a website using a scorecard review instrument. The expert assumes the role of the customer rather than being the customer him/herself. Technically, it's a "heuristic evaluation," which is a review of the user experience in light of acknowledged usability principles.

The scorecard begins by asking an organization to articulate its “evaluated user goals” to ground the review in a set of clearly defined outcomes. With this foundation, each review question is scored on a -2 to +2 scale:

• +2 = Strong Pass (Best practice)
• +1 = Pass (No problems found)
• -1 = Fail (One major problem or several minor problems)
• -2 = Strong Fail (Two or more major problems, or one major problem and several minor problems)

The scorecard then asks several questions related to four elements. Specific criteria under each question produce a score, with a total of 25 (+1 on each criterion) constituting a pass; a rough tallying sketch appears after the lists. The elements and some of their associated questions are found below:

Value
• Is essential content available where needed?
• Is essential function available where needed?
• Are essential content and function given priority in the display?

Navigation
• Do menu categories immediately expose or describe their subcategories?
• Is the wording in hyperlinks and controls clear and informative?
• Are keyword-based searches comprehensive and precise?

Presentation
• Does the site use graphics, icons, and symbols that are easy to understand?
• Do layouts use space effectively?
• Are interactive elements easily recognizable, and do they behave as expected?

Trust
• Does the site present privacy and security policies in context?
• Does site functionality provide clear feedback in response to user actions?
• Does the site help users avoid and recover from errors?
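
To make the scoring arithmetic concrete, here is a minimal sketch in Python of how per-criterion scores might be tallied into element and overall totals. The element names come from the scorecard above, but the individual scores, the helper function, and the handful of example criteria are illustrative assumptions, not Forrester's actual instrument (which scores 25 criteria in total).

# Illustrative sketch only: the scores and helper below are assumptions
# based on the description above, not Forrester's actual tool.

# Each criterion is scored on the -2 to +2 scale described earlier.
scorecard = {
    "Value":        [1, 2, 1],
    "Navigation":   [1, -1, 1],
    "Presentation": [2, 1, 1],
    "Trust":        [1, 1, -2],
}

def summarize(scorecard):
    """Tally per-element totals and an overall total for a heuristic review."""
    element_totals = {name: sum(scores) for name, scores in scorecard.items()}
    overall = sum(element_totals.values())
    return element_totals, overall

element_totals, overall = summarize(scorecard)
criteria_count = sum(len(scores) for scores in scorecard.values())

print(element_totals)
print(f"Overall: {overall} out of a possible {2 * criteria_count}")
# In the real scorecard, +1 on each of the 25 criteria (a total of 25) is a pass;
# this toy example only includes a few criteria per element.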

The review scorecard is a bit more comprehensive, and presents yet another way of thinking about an organization’s Web presence. Per Forrester’s own description of the review, it “uncovers flaws that prevent users from accomplishing key goals on Web sites….To get the most out of the Web Site Review, site owners should identify user goals that drive business metrics, review their sites using the tools available on Forrester's Web site, and fix usability problems identified in the review.”

So what are the implications and applications for Government and Web 2.0?

A. Agencies may consider gathering feedback from both end users (ForeSee) and experts (Forrester). Alan suggested that "agencies should use multiple paths, including detailed analytics, usability reviews, user feedback" - much of which can be done internally.

B. Forrester has a Blog Review tool that is accessible to its clients. Agencies that are current Forrester clients may want to examine this review tool and conduct their own analysis if they use blogs to communicate with citizens.

C. Consider how some of the questions above apply when evaluating the placement of social media tools on your agency's website. Under "Value": Where should you place a video or RSS stream in light of its relative importance to your goals and objectives in communicating with constituents? Under "Presentation": Are you using only the icon for a Delicious or Twitter link rather than spelling out a description of that tool? We cannot assume that our end users know how to navigate a page adeptly or understand what these icons represent.

D. Start with the end in mind. Once your agency has decided to launch a social media tool, use the four Forrester elements and ask again: "How does the use of this tool on our website or elsewhere on the Web connect to our mission, goals and objectives?" That's where Forrester starts its evaluation - with the "why". So should you.

6 comments:

Jed Sundwall said...

Thanks for posting this, Andrew. I agree that there is a need for multiple levels/flavors of analysis, but I'm a big fan of heuristic analyses and detailed analytics. I'd prefer a combination of the two over user testing in most instances.

User testing is extremely costly to execute. It's hard to fathom its cost across all government sites. Instead, it'd be great if we could provide incentives for agencies to adopt best practices identified by site analytics and heuristic analysis.

Andrew Krzmarzick said...

Jed - Thanks for your feedback. Where can one find those best practices that you cite? Is there a place where agencies can review and replicate the work of their compatriots? If so, I'd love to see it and learn more.

Gwynne said...

@Jed, I respectfully disagree about the relative value of user testing. While it can be costly, I have NEVER EVER sat through a user testing session without being floored or surprised by something. Something that turned my thinking on its face.

User testing can be done simply using paper prototypes or by doing live A/B or multivariate testing. I am dying to try the Google tools [see http://www.google.com/websiteoptimizer] once the TOS issues are resolved.

Heuristics are good, analytics are good, user testing is good.

Andrew Krzmarzick said...

Gwynne,
Thanks for your thoughts. I agree with your summary - all of these evaluative tools/methods are good in combination. The "who" (champion for project, contributors to dynamic content, and the constituents/customers/citizens that are the ultimate end users) that comes right after the "why" (mission, goals, objectives) in implementing social media is critical to ensuring that the final "product" is appropriate and effective. Another question for both of you: What are less costly ways of accomplishing user testing?

Jed said...

I'm against the ropes! ;)

I'm apparently using a very narrow definition of user testing that excludes a/b and multivariate testing, which I tend to look at as extensions of analytics. I've probably got my definition wrong.

While I think gathering users into a room to do card sorting exercises and examine paper prototypes can be extremely valuable at the outset of a large project, I think analytics (complete with a/b testing etc.) is ultimately a better resource for ongoing improvement.

Like Gwynne, I'm crazy about multivariate testing, and I'm thrilled to know that tools like Web Site Optimizer are on govt web managers' radars. If we can allow persistent cookies, we'll gain access to a very low cost way to user test via analytics.
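
To make that concrete, here is a rough sketch of how a persistent cookie could be used to bucket visitors into test variants. The cookie value, variant names, and hashing approach below are placeholders for illustration, not how Website Optimizer or any particular analytics product actually works.

# Placeholder sketch: variant names, the hashing scheme, and the cookie value
# are illustrative assumptions, not a real product's implementation.
import hashlib

VARIANTS = ["control", "new_homepage"]  # hypothetical A/B variants

def assign_variant(visitor_id):
    """Deterministically bucket a visitor so the same persistent cookie
    always sees the same variant."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# The visitor_id would come from a persistent cookie set on first visit.
print(assign_variant("cookie-value-abc123"))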

And Andy, there's some best practices gold at webcontent.gov. With regard specifically to web 2.0, I'm hoping that the Social Media Subcouncil Wiki will become a great resource as well.

Gwynne said...

Jed, Since I got you on the ropes, I am gonna finish you off with my right cross.

I don't think that user testing has to be project based. If we only talk to and interact with our users at the beginning of a project, we miss improvement opportunities. The question is how to elicit the nuggets? In the past, I have tested very narrow--but critical--questions in the hallway during a conference of my target group, armed with a clipboard and some full color screen shots.

These conversations taught me--and my fellow testers--boatloads about people's expectations and the way they worked. A little, cheap interaction, especially if frequent, while being anecdotal and hard to quantify, can go a long way. Frankly, I wish we did this all the time!

See also Gerry McGovern; for example, Too Close to Your Website, Too Far from Your Customers.

Boom! (okay, I don't think I delivered the knockout, since we are really in agreement, but it's a fun analogy)