Best Practices/Rapid Signup


Advice on getting workers fast, plus tips on general best practices.

Credit for this information to David Rand at Yale, who originally wrote this document,
and the TurkShop Google Group which surfaced it.


There are a lot of MTurk workers who just want to click through the study as fast as
possible, and you don’t want these people in your study. One way to screen them out is to 
begin the HIT by having subjects transcribe a few sentences of handwritten nonsense text 
into a text box. 
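One way to grade such a transcription screen automatically is to fuzzy-match the response against the target text. Below is a minimal sketch using Python's standard-library `difflib`; the target sentence and the 0.9 similarity threshold are illustrative choices, not values from the original text.

```python
import difflib

# Hypothetical target text the worker was asked to transcribe.
TARGET = "the quick brown fox jumps over the lazy dog"

def transcription_ok(response: str, target: str = TARGET,
                     threshold: float = 0.9) -> bool:
    """Return True if the transcription is close enough to the target.

    The 0.9 threshold is an illustrative cutoff; tune it against a
    sample of real responses before relying on it.
    """
    ratio = difflib.SequenceMatcher(None, response.lower().strip(),
                                    target).ratio()
    return ratio >= threshold

# A careful worker passes; a click-through response fails.
print(transcription_ok("the quick brown fox jumps over the lazy dog"))
print(transcription_ok("asdf"))
```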

 

 It’s important to include comprehension questions about the game/task/stimuli to make 
sure they actually read/understood it (you can either exclude based on comprehension, or 
just include a control for it; the latter is what I typically do these days). 

 

 If you repost a HIT, many of the same people who did the previous posting(s) will try to 
do it again. So it’s very important that you have some mechanism in place to screen out 
repeat participants. One such way is to have the first question in the survey ask them to 
enter their MTurk WorkerID, and then match that against a list of previous participants’ 
WorkerIDs. See the attached appendix put together by Alex Peysakhovich showing how 
to do this in Qualtrics using embedded data. [Ed note: This appendix is not included in this 
page. For alternate solutions see our pages on preventing workers from participating multiple times.]
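The WorkerID-matching step can also be done after the fact on exported data. A minimal sketch, assuming you keep one WorkerID per line in a text file from each previous posting (the file format and IDs below are hypothetical):

```python
# Sketch of screening self-reported WorkerIDs against prior postings.

def load_previous_ids(path: str) -> set[str]:
    """Read one WorkerID per line from a previous posting's records."""
    with open(path) as f:
        return {line.strip().upper() for line in f if line.strip()}

def is_repeat(worker_id: str, previous_ids: set[str]) -> bool:
    # Normalize case and whitespace: self-reported IDs are often typed sloppily.
    return worker_id.strip().upper() in previous_ids

# Hypothetical IDs standing in for load_previous_ids("previous_workers.txt").
previous = {"A1B2C3D4E5F6G7", "A9X8Y7Z6W5V4U3"}
print(is_repeat(" a1b2c3d4e5f6g7 ", previous))
print(is_repeat("ANEWWORKERID123", previous))
```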

 

 MTurk tries hard to prevent people from having multiple IDs, but sometimes it fails. 
So once you get all your data, it’s important to filter out repeated observations from the 
same IP address (which means it’s also important to capture each subject’s IP address – 
most survey software has this functionality). 
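The IP filter itself is a one-pass deduplication that keeps each IP's first submission. A minimal sketch; the field names (`ip`, `worker_id`) are hypothetical and should match your survey export:

```python
# Sketch: keep only the first submission per IP address.

def dedupe_by_ip(rows: list[dict]) -> list[dict]:
    """Drop all but the first row seen for each IP (rows in submission order)."""
    seen: set[str] = set()
    kept = []
    for row in rows:
        if row["ip"] not in seen:
            seen.add(row["ip"])
            kept.append(row)
    return kept

# Hypothetical export rows.
data = [
    {"worker_id": "W1", "ip": "203.0.113.5"},
    {"worker_id": "W2", "ip": "198.51.100.7"},
    {"worker_id": "W3", "ip": "203.0.113.5"},  # same IP as W1: dropped
]
print([r["worker_id"] for r in dedupe_by_ip(data)])
```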

 

 Workers are willing to wait a few days to receive their bonuses without getting annoyed. 
But they hate waiting to have their HITs approved (that is, to get their show-up fees). So I 
try to accept submitted HITs as they come in (I check every 30 minutes or so and approve 
all the new HITs) and then pay the bonuses within a couple of days of the last data being 
collected. 
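The approve-as-they-come-in routine can be scripted. A real script would use boto3's MTurk client (`list_assignments_for_hit`, `approve_assignment`); the stand-in client below is a hypothetical stub so the control flow can be shown and run without AWS credentials:

```python
# Sketch of the "approve submitted HITs as they come in" loop.
# FakeMTurkClient mimics the relevant calls of boto3's MTurk client.

class FakeMTurkClient:
    def __init__(self, submitted):
        self._submitted = list(submitted)
        self.approved = []

    def list_assignments_for_hit(self, HITId, AssignmentStatuses):
        return {"Assignments": [{"AssignmentId": a} for a in self._submitted]}

    def approve_assignment(self, AssignmentId):
        self.approved.append(AssignmentId)

def approve_pending(client, hit_id: str) -> int:
    """Approve every assignment currently in Submitted status; return the count."""
    resp = client.list_assignments_for_hit(HITId=hit_id,
                                           AssignmentStatuses=["Submitted"])
    for a in resp["Assignments"]:
        client.approve_assignment(AssignmentId=a["AssignmentId"])
    return len(resp["Assignments"])

# Hypothetical IDs; in practice this would run on a timer (e.g. every 30 min).
client = FakeMTurkClient(submitted=["ASSIGN1", "ASSIGN2"])
print(approve_pending(client, hit_id="HIT123"))
```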

 

 Building up a reputation as a good requester (via ratings received from workers on sites 
like Turkopticon) can be good for getting lots of workers to do your HITs quickly. But I 
think it can also be bad, in that it encourages the same workers to do lots of your HITs, 
and may also change the subset of Turkers willing to do your HITs. So I recommend 
changing your requester name with some regularity. 

 

 Another reason to change your name / actually make a new account with some regularity 
is that people can also sign up to receive notifications every time you post a new HIT. If 
you run lots of similar things and don’t want the same people in all your studies, this can 
be problematic. 

 

 Related to both of these issues, it’s a good idea to include a question in the demographics 
questionnaire about previous experience with similar studies, again to use either as a 
control or an exclusion criterion. 

 

 Also, you should be aware that if you are running a high-paying HIT, it may get posted 
on forums (for example, Reddit has a section on good HITs). This affects both the type of 
person that takes the HIT (redditors are weird) and also opens the possibility of crosstalk 
between subjects in different conditions. So it might not be a bad idea to ask in the 
demographics questionnaire whether they found the HIT through a forum, and again either 
exclude or control for this. 

 

 If you do experiments that involve interacting with other MTurk workers, it may be a 
good idea to ask at the end to what extent they believed the other person was real. But at 
the same time, it’s hard to take these responses at face value because of things like 
ex-post justification of being selfish, etc. 

 

 Keep in mind that MTurk data is typically noisier than data from the lab, so you want 
more subjects than you might in a lab study (I’d say 50-100 per condition as a vague rule 
of thumb).
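That 50–100-per-condition rule of thumb is roughly what a standard power calculation gives for a medium effect. A minimal sketch using the usual normal-approximation formula for a two-group comparison (the effect sizes below are illustrative, not from the original text):

```python
# Approximate per-group sample size for a two-group comparison,
# using the normal-approximation power formula.
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05,
                power: float = 0.8) -> int:
    """n per group = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2, rounded up."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# A medium effect (d = 0.5) needs roughly 60-65 per group at 80% power;
# smaller effects need far more -- hence the extra padding for noisy MTurk data.
print(n_per_group(0.5))
print(n_per_group(0.3))
```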