Project Description
The main assumption about personal predictions, although an untested one, has long rested on the rationale that people want to be as accurate as possible. Armor et al. set out to test this assumption in 2008 and found that, in reality, personal predictions about the future are often optimistically biased. In this paper, we attempted to replicate their research. We share the study design we adopted, the methods we applied, and the results we obtained, along with our commentary on certain aspects of the original study, its first replication by van 't Veer et al., and our own replication.
In the context of the Research Methodologies in Humanities and Science course of the Master in Cognitive Systems and Interactive Media, we had to pick an experiment from the Reproducibility Project of the Open Science Framework, which we could replicate during a semester. We chose Prescribed Optimism — Is it right to be wrong about the future? originally conducted by Armor, Massey & Sackett (2008), and later reproduced by Lassetter, Brandt & van 't Veer (2016).
User Experience
Challenge
We needed to recruit participants, conduct the experiment, and collect the resulting data to run statistical analyses.
Investigation
While browsing the vast number of projects from which we could pick, we had to keep in mind the team's varied technical skill levels. We also had to make sure we could reproduce the experiments with the facilities to which we had access. Some projects required specific hardware—e.g. EEG scanners—while others were longitudinal studies.
Our experiment required a survey consisting mostly of randomized questions, which participants answered by selecting a value on a scale. We looked at different survey platforms, such as SoSci Survey and Google Forms; however, at the time, none of them could fulfill all of these requirements:
- Easily switch language within the same survey: our survey was offered in three languages: English, Spanish, and Catalan;
- Randomly pick a group of questions: the survey was based on four different scenarios, and each participant would only see questions relating to one of them;
- Randomize the questions within the selected group: following the guidance of our professor, we wanted to randomize the order in which participants saw the questions.
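The last two requirements can be sketched in a few lines of JavaScript. This is an illustrative example only, not the project's actual code: the `scenarios` structure and function names are hypothetical, and the shuffle is a standard Fisher-Yates implementation.

```javascript
// Hypothetical sketch: pick one scenario at random, then shuffle
// the questions within it. Names here are illustrative, not taken
// from the actual application code.

function pickScenario(scenarios) {
  // Each participant is assigned exactly one scenario at random.
  return scenarios[Math.floor(Math.random() * scenarios.length)];
}

function shuffle(items) {
  // Fisher-Yates shuffle; returns a new array, leaving the input untouched.
  const result = items.slice();
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

const scenarios = [
  { name: "scenario-1", questions: ["q1", "q2", "q3"] },
  { name: "scenario-2", questions: ["q4", "q5", "q6"] },
];

const chosen = pickScenario(scenarios);
const ordered = shuffle(chosen.questions);
```

In this sketch, every participant sees all the questions of their assigned scenario, but in a different order, which mitigates order effects across participants.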
Solution
While the original study and its replication asked participants to complete the study in a paper-and-pencil format, we opted to create a web-based questionnaire. Our reasoning was supported by a few arguments:
- Avoiding manual data entry: Before designing our experiment, we hesitated between using paper questionnaires, which would have meant manually entering the data into a database afterwards, or software that would save responses to the database directly. We chose the latter option and developed the software ourselves. The code of the application has been open-sourced and is publicly available for review.
- Easier and cheaper to fix issues: Working on an online application also allowed us to improve it and correct issues along the way as they occurred, at no additional cost. Had we gone the paper route, we would have had to discard the printed copies whenever an error was found, and pay to have them reprinted.
- Easier to convince participants: The original study had people complete the study in a laboratory. We did not have access to this kind of facility, and we knew this would be time consuming for participants, thus a likely deterrent.
Creating our own software also allowed us to track information that participants were never explicitly asked for, such as how long they took to complete the survey.
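Tracking completion time can be done with a simple session record, as in the sketch below. The function names and structure are hypothetical, not taken from the actual application.

```javascript
// Hypothetical sketch: measure how long a participant takes to
// complete the survey, without asking them anything extra.

function startSession() {
  // Called when the participant opens the questionnaire.
  return { startedAt: Date.now() };
}

function completeSession(session) {
  // Called on final submission; elapsed time is derived, not asked.
  const completedAt = Date.now();
  return { ...session, completedAt, elapsedMs: completedAt - session.startedAt };
}
```

On submission, the derived `elapsedMs` would be stored alongside the answers, giving the analysis an extra variable for free.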
Custom software also enabled us to adapt: after a few people took the survey, we received comments that the interface gave no indication of progress. We added a progress bar at the top of each page, which reduced the dropout rate from that point on.
In order to invite participants, we created small paper handouts that had a short paragraph in three languages and a hyperlink to the online questionnaire. This was simpler than forcing people to sit down at a computer while we stared at them; they could answer the questions at their own pace.
Credits
- Research Design: Mat Janson Blanchet, Dimitar Karageorgiev, Pol Ricart, Lida Zacharopoulou
- UX Design, Full-Stack Development: Mat Janson Blanchet
- Translations: Pol Ricart
- Data Analysis: Mat Janson Blanchet, Lida Zacharopoulou
- Paper Writing: Mat Janson Blanchet, Dimitar Karageorgiev, Lida Zacharopoulou
Role
Research Design, UX Design, Development, Data Analysis
Context
While studying Cognitive Systems and Interactive Media at Universitat Pompeu Fabra (Barcelona, ES)
Circa
2018
Project Link
- https://mat.jansonblanchet.com/archives/upf-questionnaire/