Every academic is familiar with the constant battle to keep email inboxes manageable. For politicians, the public’s designated representatives, the struggle must be even more daunting – especially during a pandemic.
In that regard, it was hardly surprising when some UK Members of Parliament recently cried foul after learning that academics had sent them emails from fictional constituents as part of a study into politicians’ responsiveness to the financial strains caused by lockdowns. The study, the politicians argued, was a waste of their time and taxpayers’ money.
“Audit studies” such as this are not new. The use of individuals or correspondence (such as emails or job applications) to accomplish real tasks and “audit” decision-makers’ responses to specific characteristics is a standard method in the social sciences, especially in areas such as race or gender discrimination.
For example, researchers might put the names Jamal Washington and Jake Warner on otherwise identical résumés and examine whether there are disparities in interview callback rates. This would allow them to estimate the causal effect of being a Black applicant on hiring outcomes. In the aggregate, it could provide a rigorous measure of discrimination against Black jobseekers in the labour market.
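For the statistically minded, the sketch below shows how such a design is typically analysed. The callback counts are entirely hypothetical, invented for illustration, and the statsmodels-based test is just one conventional choice. The key point is that, because names are randomly assigned to otherwise identical résumés, the raw gap in callback rates is itself the estimate of discrimination.

```python
# Illustrative analysis of a hypothetical resume audit.
# The counts below are invented for exposition, not real data.
from statsmodels.stats.proportion import proportions_ztest

applications = [1000, 1000]  # resumes sent per name (white-coded, Black-coded)
callbacks = [120, 80]        # hypothetical interview callbacks received

# With random assignment, the difference in callback rates estimates
# the causal effect of the name on the resume.
gap = callbacks[0] / applications[0] - callbacks[1] / applications[1]
print(f"Estimated callback gap: {gap:.1%}")

# A two-proportion z-test asks whether a gap this large could plausibly
# arise by chance alone.
z_stat, p_value = proportions_ztest(callbacks, applications)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```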
Indeed, audits partly originated in government attempts to understand whether anti-discrimination policies change behaviour. The UK Parliament conducted the first known significant audit in the late 1960s to examine discrimination on the basis of race and immigration status in employment, housing and services. The US federal government conducted its own audit of racial discrimination in housing in the late 1970s, with further audits in 1989, 2000 and 2012. And both the US Supreme Court and several circuit courts have recognised the value of audits in providing uniquely compelling evidence of discrimination.
However, the lack of informed consent among audit subjects makes such studies controversial. Many professional associations, including the American Sociological Association and the American Political Science Association, have ethics codes that require informed consent except when the research cannot be done without deception and there is minimal harm to the subjects.
It’s hard to study discrimination without deception. Alternative approaches such as surveys and interviews are subject to social desirability bias, meaning that subjects may adjust their responses to appear less prejudiced. Moreover, biases may operate at a subconscious level, so respondents might not accurately predict how they would behave or respond. Audits avoid these shortcomings because they record subjects’ behaviour without their knowledge, allowing researchers to make strong causal claims about discrimination that cannot be made with observational data. Audits have illuminated the existence of discrimination and bias against numerous groups in myriad contexts, including the labour and housing markets, politics and other economic transactions.
But do audit studies meet the minimal-harm criterion? It is philosophically and mathematically difficult to calculate all potential harms to subjects in an audit study. Researchers must, though, reflect on at least three characteristics of any individual study: context, sample size and outcome(s).
First, certain contexts pose greater potential harms to third parties than to the subjects themselves. In employment audits, for instance, researcher applications may bump real applicants out of the callback pool. Time requirements also vary across contexts: an employer may spend more time reviewing a résumé than a bureaucrat spends answering an email. Harm may likewise vary with the subjects studied. Is it better to “waste” the time of a human resources employee than that of a politician’s aide? In that regard, it is worth noting that the only known survey on the matter suggests that both politicians and citizens see experiments on politicians as “unproblematic”.
Second, large sample sizes may increase harm by increasing both the total amount of time subjects spend responding and the likelihood that subjects become aware of their involvement in a research study, an awareness that can contaminate results and force researchers to scrap their studies. On the other hand, researchers often need large sample sizes for adequate statistical power and to minimise inferential errors.
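To make that trade-off concrete, here is one standard power calculation, again with hypothetical numbers: suppose a researcher expects callback rates of 12 per cent and 8 per cent. The sketch below, using conventional defaults, shows roughly how many applications each condition would need for the study to detect that gap reliably.

```python
# How many applications does an audit need? A standard power calculation
# with conventional defaults (alpha = 0.05, power = 0.80) and
# hypothetical callback rates of 12% vs 8%.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.12, 0.08)  # Cohen's h for the two rates

# Solve for the number of applications per condition needed to detect
# the 4-point gap 80% of the time at the 5% significance level.
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"About {n_per_group:.0f} applications per condition")  # roughly 440
```

Note that halving the expected gap roughly quadruples the required sample, which is precisely the tension described above: statistical credibility pushes sample sizes, and thus the aggregate burden on subjects, upward.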
Third, the type of outcome measured, even within the same context, affects potential harm. A standard email enquiry to a bureaucrat requesting information likely affects no one but the bureaucrat. If all the enquiries concern a timely policy issue, however, they might influence the bureaucrat’s behaviour by providing an inaccurate signal of constituent opinion.
Researchers struggle with these decisions and worry about unintended consequences, particularly as audits have become more common. We suggest that researchers exercise greater caution when designing audit studies and focus more on potential ethical issues in the preregistration plans that are often filed before study implementation. We also encourage more open discussion of these issues within and across disciplines.
One critical question is how many audit studies are occurring simultaneously. Institutional review boards do not know whether other researchers are conducting similar audits on similar subjects at the same time, so they cannot weigh those cumulative harms against the benefits of our studies. This is a problem we must address, likely with a centralised registry similar to preregistration.
But while we can and should do better in designing and conducting audits, we cannot discard them completely. To do so would be to set aside our best tool for examining “what”, “when”, “where” and even “why” discrimination occurs. That would be especially short-sighted at a time when many forms of discrimination persist across the world.
Michael Gaddis is an assistant professor of sociology at the University of California, Los Angeles, and Charles Crabtree is assistant professor of government at Dartmouth College. This article is co-authored with Marc Bendick, Jr., Patrick Button, John Holbein, Joanna N. Lahey, Michelangelo Landgrave, Donald Moynihan, David Pedulla, Natasha Quadlin and Kate Weisshaar.