Presentation Transcript

Taxonomy Testing & Usability: 

Joseph A. Busch

Agenda: 

- Qualitative methods
- Quantitative methods

Qualitative taxonomy testing methods: 

Walk-through method— Show & explain: 

Example: a faceted taxonomy for a computer vendor site (ABC Computers.com):

- Audience: All; Business; Dell Employee; Education; Gaming Enthusiast; Home; Investor; Job Seeker; Media; Partner; Shopper (First Time, Experienced, Advanced); Supplier
- Line of Business: All; Home & Home Office; Gaming; Government, Education & Healthcare; Medium & Large Business; Small Business
- Region-Country: All; Asia-Pacific; Canada; Dell EMEA; Japan; Latin America & Caribbean; United States
- Product Family: Desktops; MP3 Players; Monitors; Networking; Notebooks; Printers; Projectors; Servers; Services; Storage; Televisions; Non-Dell Brands
- Content Type: Award; Case Study; Contract & Warranty; Demo; Magazine; News & Event; Product Information; Services; Solution; Specification; Technical Note; Tool; Training; White Paper; Other Content Type
- Competency (Proficiency): Business & Finance; Interpersonal Development; IT Professionals Technical Training; IT Professionals Training & Certification; PC Productivity; Personal Computing
- Industry: Banking & Finance; Communications; E-Business; Education; Government; Healthcare; Hospitality; Manufacturing; Petrochemicals; Retail / Wholesale; Technology; Transportation; Other Industries
- Service: Assessment, Design & Implementation; Deployment; Enterprise Support; Client Support; Managed Lifecycle; Asset Recovery & Recycling; Training

Walk-through method— Editorial rules consistency check: 

- Abbreviations
- Ampersands
- Capitalization
- General…, More…, Other…
- Languages & character sets
- Length limits
- Multiple parents
- Plural vs. singular form
- Scope notes
- Serial comma
- Sources of terms
- Spaces
- Synonyms & acronyms
- Term order (Alphabetic or …)
- Term label order (Direct vs. inverted)
- …
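A few of these rules lend themselves to automated checking. Below is a minimal sketch; the specific rules encoded (a length limit, ampersand style, capitalization, stray spaces) and the sample terms are illustrative assumptions, not the deck's full rule set.

```python
MAX_LABEL_LENGTH = 40  # hypothetical length limit

def check_term(label: str) -> list[str]:
    """Return editorial-rule violations for one term label."""
    problems = []
    if label != label.strip() or "  " in label:
        problems.append("stray spaces")
    if len(label) > MAX_LABEL_LENGTH:
        problems.append(f"exceeds {MAX_LABEL_LENGTH} characters")
    if " and " in label:
        problems.append("house style uses '&', not 'and'")
    if label[:1].islower():
        problems.append("should be capitalized")
    return problems

for term in ["Banking & Finance", "retail and wholesale  "]:
    print(f"{term!r}: {check_term(term) or 'OK'}")
```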

Usability testing method— Task-based card sorting (1): 

- 15 representative questions were selected, reflecting:
  - the perspective of various organizational units
  - the most frequent website searches
  - the most frequently accessed website content
- Correct answers to the questions were agreed in advance by the team.
- 15 users were tested; they did not work for the organization and represented target audiences.
- Testers were asked “where would you look for …”: first “under which facet… Topic, Commodity, or Geography?”, then “…under which category?”, then “…under which sub-category?”
- Tester choices were recorded.
- Testers were asked to “think aloud,” and notes were taken on what they said.
- Pre- and post-test questions were asked, and tester answers were recorded.
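Choices recorded this way are easy to capture as structured records for later analysis. A minimal sketch; the field names and the sample record are assumptions for illustration, not the study's actual instrument.

```python
from dataclasses import dataclass

# One recorded choice per tester per question; fields mirror the
# facet -> category -> sub-category question sequence described above.
@dataclass
class Trial:
    tester: int
    question: str
    facet: str          # Topic, Commodity, or Geography
    category: str
    subcategory: str
    think_aloud: str    # notes on what the tester said

trials = [
    Trial(1, "Q3: average farm income in your state?",
          "Topics", "Farm Financial Conditions", "Farm Income",
          "hesitated between Topics and Geography"),
]
print(trials[0].facet, ">", trials[0].category, ">", trials[0].subcategory)
```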

Usability testing method— Task-based card sorting (2): 

Sample question 3: What is the average farm income level in your state?

Facets: 1. Topics; 2. Commodities; 3. Geographic Coverage

1. Topics
1.1 Agricultural Economy
1.2 Agriculture-Related Policy
1.3 Diet, Health & Safety
1.4 Farm Financial Conditions
1.5 Farm Practices & Management
1.6 Food & Agricultural Industries
1.7 Food & Nutrition Assistance
1.8 Natural Resources & Environment
1.9 Rural Economy
1.10 Trade & International Markets

1.4 Farm Financial Conditions
1.4.1 Costs of Production
1.4.2 Commodity Outlook
1.4.3 Farm Financial Management & Performance
1.4.4 Farm Income
1.4.5 Farm Household Financial Well-being
1.4.6 Lenders & Financial Markets
1.4.7 Taxes

Analysis of task-based card sorting (1): 

Analysis of task-based card sorting (2): 

In 80% of the trials, users looked for information under the categories where we expected them to look. Breaking up topics into facets makes it easier to find information, especially information related to commodities.

Analysis of task-based card sorting (3): 

- Possible change required.
- Change required.
- Possible error in categorization of this question, because 64% thought the answer should be “Commodity Trade.”
- On these trials, only 50% looked in the right category, & only 27-36% agreed on the category.
- Policy of “Traceability” needs to be clarified. Use quasi-synonyms.
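Percentages like these come from tallying, per question, how many testers matched the agreed answer and how many converged on the same category. A sketch of that tally, with made-up stand-in data rather than the study's actual trials:

```python
from collections import Counter

trials = {"Q3": ["Farm Income"] * 9 + ["Commodity Trade"] * 5 + ["Taxes"]}
expected = {"Q3": "Farm Income"}

for question, choices in trials.items():
    counts = Counter(choices)
    n = len(choices)
    correct = counts[expected[question]] / n    # share matching the agreed answer
    top_cat, top_n = counts.most_common(1)[0]   # modal category = agreement
    print(f"{question}: {correct:.0%} correct; {top_n / n:.0%} agreed on {top_cat!r}")
    if correct < 0.5 or top_cat != expected[question]:
        print("  flag: possible change required")
```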

User satisfaction method— Card Sort Questionnaire (1): 

- Was it easy, medium or difficult to choose the appropriate Topic? (Easy / Medium / Difficult)
- Was it easy, medium or difficult to choose the appropriate Commodity? (Easy / Medium / Difficult)
- Was it easy, medium or difficult to choose the appropriate Geographic Coverage? (Easy / Medium / Difficult)

User satisfaction method— Card Sort Questionnaire (2): 

(Response options: Easier / More Difficult)

User interface survey— Which search UI is ‘better’?: 

Criteria:
- User satisfaction
- Success completing tasks
- Confidence in results
- Fewer dead ends

Methodology:
- Design tasks from specific to general
- Time performance
- Calculate success rates
- Survey subjective criteria
- Pay attention to survey hygiene: participant selection, counterbalancing, t-scores

Source: Yee, Swearingen, Li, & Hearst
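On the "survey hygiene" point, comparing two UIs on a timed task typically ends in a significance test. A sketch using an independent two-sample t-test; the timing data is invented for illustration, not from Yee et al.:

```python
from scipy import stats

# Hypothetical task-completion times (seconds) for two search UIs.
baseline_ui = [94, 120, 88, 101, 133, 97, 110, 125]
faceted_ui = [71, 85, 90, 66, 78, 82, 95, 74]

# Independent two-sample t-test on time performance.
t_stat, p_value = stats.ttest_ind(baseline_ui, faceted_ui)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# For a counterbalanced within-subject design, a paired test
# (stats.ttest_rel) would be the better fit.
```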

User interface survey — Results (1): 

Source: Yee, Swearingen, Li, & Hearst

User interface survey — Results (2): 

Source: Yee, Swearingen, Li, & Hearst

Tagging samples— How many items?: 

Quantitative methods require large amounts of tagged content. This requires specialists, or software, to do the tagging. Results may be very different from how “real” users would categorize content.

Tagging samples— Manually tagged metadata sample: 

Tagging samples— Spreadsheet for tagging 10’s-100’s of items: 

1) Clickable URLs for sample content
2) Review small sample and describe
3) Drop-down for tagging (including an ‘Other’ entry for the unexpected)
4) Flag questions

Rough Bulk Tagging— Facet Demo (1): 

Collections: 4 content sources (NTRS, SIRTF, Webb, Lessons Learned).
Taxonomy: converted from MultiTes format into RDF for Seamark.
Metadata: converted from existing metadata on web pages, or created using a simple automatic classifier (string matching with terms & synonyms).
Scale: 250k items, ~12 metadata fields, 1.5 weeks of effort.
UI: out-of-the-box (OOTB) Seamark user interface, plus logo.
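A "simple automatic classifier (string matching with terms & synonyms)" can be as plain as the sketch below; the term list and synonyms here are illustrative assumptions, not the demo's actual taxonomy:

```python
# Map each taxonomy term to the strings that should trigger it.
TERMS = {
    "Propulsion": ["propulsion", "rocket engine", "thruster"],
    "Telescopes": ["telescope", "mirror assembly", "optics"],
}

def rough_tag(text: str) -> list[str]:
    """Assign every taxonomy term whose label or synonym appears in the text."""
    lowered = text.lower()
    return [term for term, synonyms in TERMS.items()
            if any(s in lowered for s in synonyms)]

print(rough_tag("Testing the thruster and mirror assembly alignment"))
# -> ['Propulsion', 'Telescopes']
```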

Rough Bulk Tagging— OOTB Facet Demo (2): 

Agenda: 

- Qualitative methods
- Quantitative methods

How evenly does it divide the content?: 

- Documents do not distribute uniformly across categories
- A Zipf (1/x) distribution is the expected behavior
- 80/20 rule in action (actually a 70/20 rule here)
- Chart callouts: the largest category is the leading candidate for splitting; the smallest are leading candidates for merging

How evenly does it divide the content?: 

Methodology: 115 randomly selected URLs from the corporate intranet search index were manually categorized. Inaccessible files and ‘junk’ were removed.
Results: slightly more uniform than a Zipf distribution; above the curve is better than expected.
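One way to run this check is to compare each category's actual share of documents against the share a Zipf (1/x) distribution would predict for its rank. A sketch with made-up counts; the study's per-category breakdown is not in the transcript:

```python
counts = sorted([38, 21, 14, 11, 9, 8, 6, 4, 3, 1], reverse=True)
total = sum(counts)

# Normalizing constant so the 1/rank weights sum to 1.
harmonic = sum(1 / r for r in range(1, len(counts) + 1))
for rank, n in enumerate(counts, start=1):
    actual = n / total
    zipf = (1 / rank) / harmonic        # expected share at this rank
    note = ""
    if actual > 1.5 * zipf:
        note = "  <- split candidate"
    elif actual < 0.5 * zipf:
        note = "  <- merge candidate"
    print(f"rank {rank:2d}: actual {actual:5.1%}  zipf {zipf:5.1%}{note}")
```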

How intuitive (repeatable) are the categorizations?: 

Methodology: closed card sort, for an alpha test of a grocery site.
- 15 testers put each of 71 best-selling product types into one of 10 pre-defined categories.
- Products where fewer than 14 of the 15 testers chose the same category were flagged.
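The flagging rule is mechanical once placements are tallied. A sketch, with hypothetical products and placements standing in for the grocery-site data:

```python
from collections import Counter

# Flag any product where fewer than 14 of 15 testers chose the same category.
placements = {
    "Orange juice": ["Beverages"] * 15,
    "Frozen pizza": ["Frozen Foods"] * 11 + ["Ready Meals"] * 4,
}

THRESHOLD = 14
for product, cats in placements.items():
    top_cat, top_n = Counter(cats).most_common(1)[0]
    status = "OK" if top_n >= THRESHOLD else f"FLAG ({top_n}/15 on {top_cat!r})"
    print(f"{product}: {status}")
```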

How does taxonomy “shape” match that of content?: 

Background: hierarchical taxonomies allow comparison of the “fit” between content and taxonomy areas.
Methodology: 25,380 resources were tagged with a taxonomy of 179 terms (an average of 2 terms per resource). Counts of terms and documents were summed within the taxonomy hierarchy.
Results: roughly Zipf-distributed (top 20 terms: 79%; top 30 terms: 87%). Mismatches between term % and document % were flagged.
Source: courtesy Keith Stubbs, U.S. Dept. of Ed.
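The roll-up works by summing document counts and term counts within each branch of the hierarchy, then comparing shares. A sketch on a toy taxonomy; the names, counts, and mismatch threshold are illustrative assumptions:

```python
# Each term maps to its child terms; leaf terms have no children.
taxonomy = {
    "Topics": ["Funding", "Assessment"],
    "Funding": ["Grants", "Loans"],
    "Assessment": [],
    "Grants": [], "Loans": [],
}
doc_counts = {"Grants": 900, "Loans": 100, "Funding": 50, "Assessment": 200}

def rollup(term):
    """Total documents tagged at this term or anywhere beneath it."""
    return doc_counts.get(term, 0) + sum(rollup(c) for c in taxonomy[term])

def subtree_size(term):
    """Number of terms in this branch of the taxonomy."""
    return 1 + sum(subtree_size(c) for c in taxonomy[term])

total_docs, total_terms = rollup("Topics"), subtree_size("Topics")
for branch in taxonomy["Topics"]:
    doc_pct = rollup(branch) / total_docs
    term_pct = subtree_size(branch) / total_terms
    flag = "  <- mismatch" if abs(doc_pct - term_pct) > 0.2 else ""
    print(f"{branch}: {doc_pct:.0%} of docs vs {term_pct:.0%} of terms{flag}")
```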

Pop Quiz: 

What is the #1 underused source of quantitative information on how to improve your taxonomy?

Answer: Query Logs & Click Trails.

Query Log & Click Trail Examination— Who are the users & what are they looking for?: 

- Only 30-40% of organizations regularly examine their logs*.
- Sophisticated software is available, but don’t wait: 80% of the value comes from basic reports.

Query logs: 

UltraSeek Reporting provides:
- Top queries
- Queries with no results
- Queries with no click-through
- Most requested documents
- Query trend analysis
- Complete server usage summary
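Most of these reports fall out of a few passes over the raw log. A sketch over a hypothetical log of (query, result count, clicked) tuples; the field layout is an assumption, not UltraSeek's actual log format:

```python
from collections import Counter

log = [
    ("taxonomy", 42, True),
    ("taxonomy", 42, False),
    ("facet demo", 0, False),
    ("card sort", 7, False),
]

queries = Counter(q for q, _, _ in log)
no_results = Counter(q for q, n, _ in log if n == 0)
no_click = Counter(q for q, n, clicked in log if n > 0 and not clicked)

print("Top queries:", queries.most_common(3))
print("No results:", list(no_results))
print("No click-through:", list(no_click))
```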

Click Trail Packages: 

- iWebTrack
- NetTracker
- OptimalIQ
- SiteCatalyst
- Visitorville
- WebTrends

Start a “Measure & Improve” mindset: 

Taxonomy changes do not stand alone:
- Search system improvements
- Navigation improvements
- Content improvements
- Process improvements

Questions:

Joseph A. Busch
jbusch@taxonomystrategies.com
http://www.taxonomystrategies.com

Bibliography: 

K. Yee, K. Swearingen, K. Li, and M. Hearst. "Searching and organizing: Faceted metadata for image search and browsing." Proceedings of the Conference on Human Factors in Computing Systems (April 2003). http://bailando.sims.berkeley.edu/papers/flamenco-chi03.pdf

R. Daniel and J. Busch. "Benchmarking Your Search Function: A Maturity Model." http://www.taxonomystrategies.com/presentations/maturity-2005-05-17%28as-presented%29.ppt
