> Karpinska, Marzena, et al. 'The Perils of Using Mechanical Turk to Evaluate Open-Ended Text Generation'. _ArXiv:2109.06835 [Cs]_, Sept. 2021. _arXiv.org_, [http://arxiv.org/abs/2109.06835](http://arxiv.org/abs/2109.06835).
# The Perils of Using Mechanical Turk to Evaluate Open-Ended Text Generation
> Recent text generation research has increasingly focused on open-ended domains such as story and poetry generation. Because models built for such tasks are difficult to evaluate automatically, most researchers in the space justify their modeling choices by collecting crowdsourced human judgments of text quality (e.g., Likert scores of coherence or grammaticality) from Amazon Mechanical Turk (AMT). In this paper, we first conduct a survey of 45 open-ended text generation papers and find that the vast majority of them fail to report crucial details about their AMT tasks, hindering reproducibility. We then run a series of story evaluation experiments with both AMT workers and English teachers and discover that even with strict qualification filters, AMT workers (unlike teachers) fail to distinguish between model-generated text and human-generated references. We show that AMT worker judgments improve when they are shown model-generated output alongside human-generated references, which enables the workers to better calibrate their ratings. Finally, interviews with the English teachers provide deeper insights into the challenges of the evaluation process, particularly when rating model-generated text.
## Observations
1. AMT ratings do not reliably distinguish model-generated text from human-generated text unless workers are asked to rate both side-by-side, which allows them to better calibrate their ratings.
2. Ratings from an identical task (same AMT parameters and input data) launched on different days of the week exhibit high variance and can lead to dubious conclusions (e.g., that reference texts are of lower quality than GPT-2-generated text).
3. Many AMT workers do not carefully read the text that they are evaluating. Even after enabling multiple qualifications to exclude low-quality workers, 42% of workers on average take fewer than 40 seconds to complete each task. Filtering out these workers can have a significant impact on the overall ratings, but also notably reduces the number of data points.
4. Even expert raters struggle to read and judge model-generated text: the time they spend per example increases significantly compared to that for references, and inter-annotator agreement also drops.
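The time-based filtering in observation 3 is straightforward to apply to collected annotations. Below is a minimal sketch, assuming a hypothetical record format with `work_time_sec` and `likert` fields (these names are illustrative, not from the paper):

```python
# Hypothetical sketch: drop annotations completed suspiciously fast,
# then compare the aggregate Likert rating before and after filtering.
# Field names (work_time_sec, likert) are assumed for illustration.

def filter_fast_workers(annotations, min_seconds=40):
    """Keep only annotations that took at least min_seconds to complete."""
    return [a for a in annotations if a["work_time_sec"] >= min_seconds]

def mean_rating(annotations):
    """Average Likert score; NaN if no annotations remain."""
    ratings = [a["likert"] for a in annotations]
    return sum(ratings) / len(ratings) if ratings else float("nan")

annotations = [
    {"worker": "w1", "work_time_sec": 12, "likert": 5},   # likely skimmed
    {"worker": "w2", "work_time_sec": 95, "likert": 3},
    {"worker": "w3", "work_time_sec": 20, "likert": 5},   # likely skimmed
    {"worker": "w4", "work_time_sec": 130, "likert": 2},
]

kept = filter_fast_workers(annotations)
print(len(kept), mean_rating(kept))  # → 2 2.5
```

As the paper notes, such filtering can shift the overall ratings noticeably (here the mean drops from 3.75 to 2.5) while also shrinking the sample, so the threshold involves a cost–quality trade-off.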
## Task parameter recommendations
- Each experiment should use a completely different set of workers to prevent them from judging the same story multiple times
- All experiments should be launched on weekdays at the same time of day to eliminate potential weekday-versus-weekend variance
- Hiring expert teachers is not much more expensive and yields replicable results