Year: 2015 | Clemson University | Grad Team Members: Jianyan Yang, Pallavi Karan, Anish Joshi | Skills: Survey, Interview, Cognitive Walkthrough, Rapid Prototyping, Usability Testing | Role: PhD Leader
INTRODUCTION
There are many situations in which people need help from those around them, such as getting a jump start for their car, pushing a wheelchair, lifting something heavy, finding shelter, getting first aid, or changing a tire. From a review of prior papers, a survey, and follow-up interviews, the researchers found that help seekers may encounter three problems: 1) they cannot always find appropriate people to help because of their limited network; 2) they worry that others will reject them, which can reduce their self-esteem, especially when the rejection is face-to-face; and 3) low efficiency: even when the help seeker successfully finds a helper, the helper may not have the skills or tools to help. Consequently, the researchers decided to address these problems with a helping application for mobile devices.
HelpR provides a platform that 1) finds an appropriate person to help the user, giving the user resources beyond his or her own social network; 2) spares the user from being rejected directly or face-to-face by a specific person when seeking help; and 3) connects the user with someone who has the specific skills or equipment needed, so help is efficient and does not require asking many people and wasting time. It also aims to provide satisfaction to people who find happiness in helping others. In addition, to motivate users to help others, the application includes a generous-points reward mechanism: users who help others improve their own chances of being helped, following the idea of 'help yourself by helping others'.
The scope of this application is to post help requests and to help others via notifications within a given radius. From a product perspective, the goal is to help the people around you and spread kindness.
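The source does not specify how requests are matched to nearby helpers; purely as an illustrative sketch, the radius-based notification idea could look roughly like the code below. All names here (Helper, HelpRequest, notify_helpers, haversine_km) are hypothetical and are not taken from the HelpR design.

# Illustrative sketch only: match a help request to helpers within a given
# radius who have the needed skill. Names and data layout are hypothetical.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

@dataclass
class Helper:
    user_id: str
    lat: float
    lon: float
    skills: set

@dataclass
class HelpRequest:
    seeker_id: str
    lat: float
    lon: float
    needed_skill: str
    radius_km: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def notify_helpers(request, helpers):
    """Return helpers inside the request radius who have the needed skill."""
    return [
        h for h in helpers
        if h.user_id != request.seeker_id
        and request.needed_skill in h.skills
        and haversine_km(request.lat, request.lon, h.lat, h.lon) <= request.radius_km
    ]

In a deployed app this filtering would more likely be done with a geospatial query on the server, but the sketch captures the radius constraint described above.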
HCC CYCLE STAGE 1: DEFINE USERS' NEEDS
Design Requirements
Help the user find an appropriate person to ask for help, providing resources beyond the user's own social network.
Spare the user from direct or face-to-face rejection by a specific person when seeking help.
Provide efficient help from a person who has the specific skills or tools needed, without asking many people and wasting time.
The design is a mobile application, because in most situations users will use this product outdoors.
HCC CYCLE STAGE 2: DESIGN LOW-FIDELITY PROTOTYPE
Low-fidelity Prototype Screen Samples
HCC CYCLE STAGE 3: EXPERT EVALUATION & REDESIGN ITERATIONS ON LOW-FI
In this stage, the low-fidelity prototype was evaluated by six experts and redesigned over three iterations. The method the team used was the cognitive walkthrough. In each iteration, the researchers tested the prototype with two experts. The experts were PhD students and visiting scholars in Human-Centered Computing and Computer Science at Clemson University. The evaluations in this stage were formative, focusing on finding problems and gathering qualitative feedback.
HCC CYCLE STAGE 4: DESIGN HIGH-FIDELITY PROTOTYPE
After fixing the problems found in the low-fidelity prototype, the team built a high-fidelity prototype based on its final version using the Justinmind prototyping tool.
High-fidelity Prototype Screen Samples
HCC CYCLE STAGE 5: USABILITY TESTING
In this stage, the high-fidelity prototype was evaluated by eight users and redesigned over two iterations (two users per iteration). The eight participants were team members' classmates and friends. They were all students at Clemson University, aged 21 to 28. In each iteration, the users performed the six tasks below:
As a helper, search for a problem and confirm the problem request
As a help seeker, post a problem request using the current location
Navigate to and check the personal information profile
Navigate to and check the ongoing problems (including in-progress problems and unsolved problems)
Check how many generous points the account currently has
Navigate to and check the settings page
Quantitative Data
Performance Data Measurements:
Time spent on each task
Number of errors that occurred in each task
Whether each task was completed successfully
Preference Data Measurements:
Rate the ease or difficulty of performing this task (1 = very difficult, 5 = very easy)
Rate the time it took to complete this task (1 = more time than expected, 5 = less time than expected)
Rate the likelihood that you would use this feature/task (1 = not likely at all, 5 = very likely)
Post-test Questionnaire:
SUS (System Usability Scale)
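As a sketch only (not part of the original study materials), the performance measures and SUS responses listed above could be summarized as follows. The function names and the example numbers are hypothetical; the SUS scoring rule itself is the standard one (odd items contribute score minus 1, even items contribute 5 minus score, and the sum is scaled by 2.5 to a 0-100 score).

# Illustrative sketch only: summarize per-task performance data and score SUS.
from statistics import mean

def summarize_task(times_sec, error_counts, successes):
    """Summarize one task across participants: mean time, mean errors, success rate."""
    return {
        "mean_time_sec": mean(times_sec),
        "mean_errors": mean(error_counts),
        "success_rate": sum(successes) / len(successes),
    }

def sus_score(responses):
    """Compute the SUS score (0-100) from one participant's ten 1-5 ratings."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example usage with made-up numbers:
print(summarize_task(times_sec=[42, 55, 60, 38], error_counts=[0, 1, 0, 2], successes=[1, 1, 1, 0]))
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0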
Qualitative Data
The participants were also asked to think aloud while performing the tasks so that the researchers could collect qualitative feedback.
Results
From the data collected, the team found the following. First, most of the preference scores in this stage were not significantly higher than those for the previous version of the high-fidelity prototype, and some measures were even lower. One reason is that the interface of the implemented application did not correspond exactly to the final version of the high-fidelity prototype. Second, it is worth mentioning that the team expected at most one error per task; however, three errors occurred in task 1, finding a problem to solve. Participants said it was confusing when they clicked "location specific problems". In the future, the implemented application should be redesigned according to the usability testing feedback.