Project Implicit Study Development Checklist

Is current?: Yes

The study development checklist summarizes the key milestones that must be completed for any study created and launched in the Project Implicit infrastructure.

Study Creation

1.    For studies that are intended for the Project Implicit research pool, consult with the PI team about the rules and design principles for using the pool in advance of development.

2.    Create the study components (e.g., consent, explicit tasks, implicit tasks, debriefing).[1]

3.    Create the experiment file that integrates all study components into the experimental design.[2]

Study Organization

4.    All study files must be in a single folder that is dedicated to one and only one study.[3]

5.    File names for study materials should be informative yet concise.[4]

6.    File names must be in lowercase letters and cannot start with a number (a quick check is sketched at the end of this section).[5]

7.    Only the final versions of files associated with your study should be in the study folder stored on the test and production servers.[6]

8.    If your study includes images, place them in a subfolder of the study folder named “images”.
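
The file-naming rules in steps 5 and 6 (and in footnote [5]) are easy to check in bulk before uploading. The following is a minimal sketch in Python, not part of the Project Implicit tooling; the study folder path is a placeholder, and the pattern for the experiment file follows the affect2.expt.xml example from footnote [4].

    import re
    from pathlib import Path

    STUDY_FOLDER = Path("affect2")  # placeholder: path to your local study folder

    # Lowercase names, no leading digit, hyphens allowed; periods appear only in
    # the experiment file (e.g., affect2.expt.xml), per footnotes [4] and [5].
    PLAIN_NAME = re.compile(r"^[a-z][a-z0-9-]*\.[a-z0-9]+$")
    EXPT_NAME = re.compile(r"^[a-z][a-z0-9-]*\.expt\.xml$")

    for path in sorted(STUDY_FOLDER.rglob("*")):
        if path.is_dir():
            continue
        if not (PLAIN_NAME.match(path.name) or EXPT_NAME.match(path.name)):
            print(f"Check naming: {path}")

Running this from the folder that contains the study folder prints any file whose name breaks the conventions above, including files in the images subfolder.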

Study Testing

9.    Validate all study files with the "Check js syntax" and "Run study validator" links to confirm that they are coded correctly.

10.   Test that each study component appears and operates correctly in Firefox, Chrome, Safari, and Internet Explorer.[7]

11.  Confirm consistent formatting across all tasks (e.g., equal margins, consistent fonts).

12.  Test that data are recorded correctly for each study component. This includes sending data to the database, following your variable naming decisions without overlapping variable names, and having the correct response options for each variable.[8]

13. Test the study to make sure that the results feedback appears in the debriefing and that the result is correct.[9]

14.  Test every condition of the study from start to finish, ensuring that (a) all tasks that should appear do appear, (b) tasks appear in the expected order, (c) randomization occurs when expected, and (d) random selection occurs as expected (see the data-checking sketch at the end of this section).[10]

15.  Confirm that the study ID is unique, i.e., not redundant with that of any other study run on Project Implicit.[11]
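
For steps 12 and 14, a short script can supplement manual run-throughs once you have downloaded test data (for example via the [RDE or resdata.jsp link]). The sketch below is not part of the Project Implicit tooling; it assumes a CSV export with one row per test session, and the file name, expected variable names, and condition column are all placeholders to be replaced with your own.

    import csv
    from collections import Counter

    EXPORT_FILE = "testdata.csv"       # placeholder: your downloaded test data
    EXPECTED_VARS = {"consent1", "age", "iat1", "cond"}  # placeholder variable names
    CONDITION_VAR = "cond"             # placeholder: your condition variable

    with open(EXPORT_FILE, newline="") as f:
        reader = csv.DictReader(f)
        rows = list(reader)
        header = reader.fieldnames or []

    # Step 12: each variable name should appear exactly once, with nothing missing.
    duplicates = [name for name, n in Counter(header).items() if n > 1]
    missing = sorted(EXPECTED_VARS - set(header))
    print("Duplicate variable names:", duplicates or "none")
    print("Missing expected variables:", missing or "none")

    # Step 14: tally how often each condition was assigned across test sessions.
    print("Condition counts:", Counter(row.get(CONDITION_VAR, "") for row in rows))

Heavily unbalanced condition counts after many test runs, or duplicate and missing variable names, are exactly the kinds of errors worth catching before the study reaches production.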

Study Approval

16.  Obtain approval for the final, tested study from all collaborators before moving it to production. 

17.  For studies to go into the Project Implicit research pool, send a test link to studysubmission@projectimplicit.net to get feedback and, ultimately, approval from Calvin Lai. In your email, include the link to the study, your planned inclusion/exclusion rules, the planned sample size, and a justification for that sample size.[12]

Moving to production

18. Fill out and submit the Study Submission form. 

Once your study is on production

19. When your study is on production, you will receive an email from Rick Klein. At that time, test your study from start to finish on dev1. The dev1 link for your study is exactly the same as the dev2 link you were using for testing, except that https becomes http. Email Rick Klein immediately if errors are detected.

20.  Watch completion rates using the [details.jsp link] to detect any unusual patterns that suggest errors in the study.

21.  After a couple of days, download the data from production and analyze it. Confirm that all variables are present as expected. Email Rick Klein immediately if errors are detected.
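
Because steps 20 and 21 are repeated as data accumulate, it helps to script the check so it can be rerun on each download. The sketch below is a Python example rather than part of the Project Implicit tooling; the file name, the expected variable names, and the session_status column with "C" marking a completed session are assumptions to be adjusted to whatever your production export actually contains.

    import csv
    from collections import Counter

    PRODUCTION_FILE = "production_download.csv"  # placeholder: downloaded data
    EXPECTED_VARS = {"consent1", "age", "iat1", "cond"}  # placeholder variable names
    STATUS_VAR = "session_status"                # assumed completion-status column
    COMPLETE_CODE = "C"                          # assumed code for a completed session

    with open(PRODUCTION_FILE, newline="") as f:
        reader = csv.DictReader(f)
        rows = list(reader)
        header = set(reader.fieldnames or [])

    # Step 21: confirm that every expected variable made it into the production data.
    missing = sorted(EXPECTED_VARS - header)
    print("Missing expected variables:", missing or "none")

    # Step 20: an unusually low completion rate can signal an error partway through.
    statuses = Counter(row.get(STATUS_VAR, "") for row in rows)
    completed = statuses.get(COMPLETE_CODE, 0)
    total = sum(statuses.values())
    if total:
        print(f"Completed {completed} of {total} sessions ({completed / total:.0%})")
    else:
        print("No sessions found")

If either check fails, that is the point at which to email Rick Klein with the details.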

[1] Instructions and tasks should be clear, concise, and attentive to the capacity of the sample. Study materials are usually created and partially tested on the researcher’s own machine and then moved to the test server (pi or dw2) for additional testing.

[2] If your experiment file has several conditions, it is wise to create a separate experiment file for each condition for easier testing later.

[3] The only exceptions are files in the common folder. Those files are used but not edited by individual researchers.

[4] The experiment file itself should have the study name as part of its file name (e.g., affect2.expt.xml), but other files do not need to follow this practice.

[5] Avoid special characters (&, %, ~, _), except for hyphens (-) and the occasional period (used only in the experiment file, e.g., affect2.expt.xml).

[6] You can create a pilot folder for preparation and then a final folder for the study materials. IMPORTANT: Test the study AFTER all extraneous files have been removed from the study folder.

[7] Functionality testing can require some time: does each part operate correctly? Reusing existing materials can increase confidence that the study will operate correctly across browsers.

[8] To check the test data for a single page, use httpfox. To check the data for an entire study, use the [RDE or resdata.jsp link].

[9] It is possible, for example, for the IAT feedback to be presented as the opposite of the actual result if the IAT.xml file is not correct. Testers should deliberately obtain a particular result and confirm that it is reported accurately. This is particularly important for new IAT files, and one IAT file could be erroneous while others in the study are correct.

[10] Often this step requires running through the study many times quickly to check randomization. Alternatively, you can create multiple versions of the study to test individual conditions systematically. 

[11] Use your username as part of the studyID.  That way, you need only compare against other studies that you have conducted.

[12] Visit the design principles wiki entry to anticipate what these reviewers need to see for studies to be approved for the research pool.