ACCESSIBILITY UX RESEARCH

Beyond Adaptive Sports: Challenges & Opportunities
to Improve Accessibility and Analytics


Company: Carnegie Mellon University

Abstract:

A recent surge in sensing platforms for sports has been accompanied by drastic improvements in the quality of data analytics. This improved quality has catalyzed notable progress in training techniques, athletic performance tracking, real-time strategy management, and even better refereeing. However, despite sustained growth in the number of para-athletes, there has been little exploration into the accessibility and data analytics needs of adaptive sports. We interviewed 18 participants in different roles (athletes, coaches, and high-performance managers) across six adaptive sports. We probed them on their current practices, existing challenges, and analytical needs. We uncovered common themes prevalent across all six sports and further examined findings in three groups: (1) blind sports; (2) wheelchair sports; and (3) adaptive sports with high equipment involvement. Our study highlights the challenges faced by different adaptive sports and unearths opportunities for future research to improve accessibility and address specific needs for each sport.


Output:

  1. Published article at ACM ASSETS

METHODS
PILOT

We first developed a list of around eight questions to elicit challenges and needs in adaptive sports. Each sport is governed by a different set of rules and regulations, uses different equipment, and is likely to have a mix of challenges both shared with and unique from other adaptive sports. These questions were used as probes in our semi-structured interview process. We ran two pilot interviews with an athlete and a coach in adaptive sports. The interviewees discussed issues with accessibility and a lack of resources, confirming our speculation that adaptive sports lag far behind their able-bodied counterparts in sports analytics.

STUDY PROCEDURE

We conducted 18 semi-structured interviews to understand the challenges and opportunities surrounding improving the accessibility of adaptive sports and gathering analytics. Based on our pilot interviews, we added three new probes to our interview protocol. The major themes of our questions were: (1) current practices and technology; (2) technological and analytical needs; (3) athlete tendencies and anecdotal data; and (4) role of the equipment in their respective sport. We used these probes but dove deeper into certain topics when required. Other questions were tailored specifically to each different sport.

PARTICIPANTS

We used snowball sampling to recruit interviewees. We tried to balance interviewees in terms of the role they play in a particular sport (manager, coach, athlete, analyst) within each sport. We recruited participants from six different adaptive sports. We also tried to capture representation from sports with different kinds of disability (see Table 1). We found it hard to quantify and compare years of experience, but we had a mix of participants including coaches/athletes who only play in leagues, athletes who play competitively at the national level, athletes who have been playing for only 3-4 years, and coaches with over 25 years of experience.

DATA ANALYSIS

All interviews were audio recorded and transcribed. We performed open coding on the transcribed interviews to identify major themes in the data. We followed a collaborative coding process: multiple members of the research team were involved in coding and reached consensus on the codes generated through this process. We used thematic analysis on our codes in three different subsets: (1) blind sports; (2) wheelchair-based sports; and (3) adaptive sports with high equipment involvement. We identified challenges and opportunities to improve accessibility and gather analytics in each group.

Nonvisual Interaction Techniques at the Keyboard Surface


Company: Carnegie Mellon University

Abstract:

Web user interfaces today leverage many common GUI design patterns, including navigation bars and menus (hierarchical structure), tabular content presentation, and scrolling. These visual-spatial cues enhance the interaction experience of sighted users. However, the linear nature of screen translation tools currently available to blind users makes it difficult to understand or navigate these structures. In this work we evaluated the usability of a novel accessibility tool for nonvisual access called Spatial Region Interaction Techniques (SPRITEs): a method for navigating two-dimensional structures using the keyboard surface. SPRITEs 1) preserve spatial layout, 2) enable bimanual interaction, and 3) improve the end user experience. We used a series of design probes to explore different methods for keyboard surface interaction. Our evaluation of SPRITEs shows that three times as many participants were able to complete spatial tasks with SPRITEs than with their preferred current technology.

Output:

  1. Published article at ACM CHI

METHODS
PARTICIPANTS

We conducted a study comparing the performance of SPRITEs to each participant's preferred accessibility tool. A secondary goal was to explore how SPRITEs impacted participants' understanding of webpage organization and spatial layout. We recruited ten visually impaired participants via word of mouth, including three low-vision participants who used screen magnifiers. Participants' years of experience with their assistive technology ranged from 6 to 32 (mean = 17.2, SD = 8.83).

STUDY PROCEDURE AND ANALYSIS

To compare the participants' own setup to SPRITEs, we used a counterbalanced, within-subjects design. Participants completed one block of eight tasks with their own assistive technology (PAT condition), and one block of eight tasks using SPRITEs. The eight tasks are shown in Table 2 and were designed to include both target acquisition and questions about structure and organization. They were also selected to include tasks that require understanding of both spatial and more linear or hierarchical components. Tasks were always assigned in the same order. To vary the content between conditions, three different Wikipedia pages with similar structure and content organization were used. The Wikipedia pages were randomly assigned to each technique block.
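The block assignment described above can be sketched in code. This is a minimal illustration, not the study's actual tooling: the condition order is counterbalanced by participant parity (one plausible counterbalancing scheme), and the page identifiers are hypothetical placeholders for the three similarly structured Wikipedia pages.

```python
import random

CONDITIONS = ["PAT", "SPRITEs"]  # participant's assistive technology vs. SPRITEs
# Hypothetical placeholders for the three similarly structured Wikipedia pages
PAGES = ["wiki_page_A", "wiki_page_B", "wiki_page_C"]

def assign_session(participant_index, rng=random):
    """Return a list of (condition, page) pairs for one participant.

    Condition order alternates by participant parity to counterbalance
    order effects; each technique block gets a distinct, randomly
    chosen page so content varies between conditions.
    """
    order = CONDITIONS if participant_index % 2 == 0 else CONDITIONS[::-1]
    pages = rng.sample(PAGES, k=2)  # two distinct pages, one per block
    return list(zip(order, pages))

# Example: even-indexed participants start with their own technology (PAT),
# odd-indexed participants start with SPRITEs.
session = assign_session(0)
```

With ten participants this yields five sessions per condition order, a standard way to balance practice and fatigue effects across the two technique blocks.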

For each task, we recorded the start and end time (when the participant thought they were done) and key press events. From this we calculated the time to complete the task. If a participant failed to start or complete a task, we discarded their time data. We also recorded subjective ratings and general feedback from the participants. Because the webpage relocate task (W6) has two distinct steps, we recorded times for each step as W6a and W6b. After each block, participants rated the condition (PAT, SP) on a seven-point Likert scale.
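The timing rule above can be sketched as a small helper. This is an illustrative reconstruction, not the study's analysis code; the record structure and the sample timestamps are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskRecord:
    task_id: str            # e.g. "W6a", "W6b" for the two-step relocate task
    start: Optional[float]  # seconds since session start; None if never started
    end: Optional[float]    # None if the participant never completed the task

def completion_time(rec: TaskRecord) -> Optional[float]:
    """Task time is end minus start; returns None (discarded) when the
    task was not started or not completed, mirroring the protocol."""
    if rec.start is None or rec.end is None:
        return None
    return rec.end - rec.start

# Hypothetical example: step W6a completed, step W6b abandoned.
records = [
    TaskRecord("W6a", 12.0, 47.5),
    TaskRecord("W6b", 47.5, None),
]
times = {r.task_id: completion_time(r) for r in records}
```

Keeping incomplete tasks as `None` rather than zero avoids skewing per-condition timing averages toward artificially fast values.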

Participants rated their experience with each task separately, because condition performance varied by task. At the end of the session, the participants ranked the techniques for different kinds of tasks (searching, and interacting/browsing) and provided qualitative feedback. All questionnaires were administered verbally, and participant feedback was recorded using a combination of audio recording and pen-and-paper notes.