Chapter 2:
Jobs

In Online Searches, Big Data Systems Reproduce Racial Bias

Digital indicators of race, religion, or sexual preference can easily be observed or inferred online. In some ways, these indicators are just like those an employer might pick up when scanning a person’s resume. [41] However, a recent study has revealed that, when combined with complex big data systems, these indicators can foster “discriminatory outcomes or giv[e] preference to members of one group over another.” [42]


Latanya Sweeney, a computer science professor at Harvard who recently served as Chief Technologist at the Federal Trade Commission, described how Google ads discriminate based on the name of the person searched. [43] When searching for her own name on Google, Dr. Sweeney noticed ads referencing arrest records. This prompted her to design a study to learn whether searches for white-identifying names prompted the same sorts of ads as searches for black-identifying names did. She found that a greater percentage of ads with “arrest” in their text appeared for black-identifying names than for white-identifying names, to an extent that could not plausibly be explained by chance. [44] She concluded that “[t]here is discrimination in delivery of these ads.” [45]

This happens because Google’s software automatically learns which ad combinations are most effective (and most profitable) by tracking how often users click on each ad. These user behaviors, in aggregate, reflect the biases that currently exist across society. InstantCheckmate.com, a leading company that sells arrest records, denied that it had ever tried to connect a name with race. But it would not need to for this outcome to occur. [46]

Ads that are more often clicked on automatically receive a higher “quality score”—and are more often displayed—in Google’s system. [47] Google and InstantCheckmate may thus find themselves automatically reinforcing the racial biases that their audience’s click patterns reflect. Dr. Sweeney explains: “If Google’s Adsense service learns which ad combinations are more effective, it would first serve the arrest-related ads to all names at random. But this would change” as the algorithm automatically adapted to a pattern in which “click-throughs are more likely when these ads are served against a black-identifying name.” [48]
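The feedback loop described above can be illustrated with a small simulation. The code below is a hypothetical sketch, not Google’s actual algorithm: the ad names, click probabilities, and the simple scoring rule are all illustrative assumptions. It shows how a system that merely rewards clicks, with no reference to race anywhere in its logic, can end up displaying one ad far more often than another when users click on it at a higher rate.

```python
import random

def simulate(rounds, click_prob, seed=0):
    """Simulate ad selection where display odds track past clicks.

    click_prob maps an ad template to an (assumed) probability that
    a user clicks it when shown. Returns the fraction of rounds each
    ad was displayed.
    """
    rng = random.Random(seed)
    # Every ad starts with the same neutral "quality score".
    score = {ad: 1.0 for ad in click_prob}
    shown = {ad: 0 for ad in click_prob}
    for _ in range(rounds):
        # Pick an ad with probability proportional to its score,
        # so early serving is effectively random.
        total = sum(score.values())
        r = rng.uniform(0, total)
        for ad, s in score.items():
            r -= s
            if r <= 0:
                break
        shown[ad] += 1
        # A click nudges the score up, so that ad is shown more often.
        if rng.random() < click_prob[ad]:
            score[ad] += 1.0
    return {ad: shown[ad] / rounds for ad in shown}

shares = simulate(
    rounds=10_000,
    # Assumed asymmetry in user behavior: the arrest-themed ad
    # is clicked five times as often as the neutral one.
    click_prob={"arrest_ad": 0.10, "neutral_ad": 0.02},
)
```

Nothing in the code intends to discriminate; the biased click pattern alone is enough to make the arrest-themed ad dominate over time, which is the structural dynamic Dr. Sweeney describes.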

These sorts of structural discrimination issues are particularly troubling as employers—and others in positions of power and responsibility—increasingly consult the Internet when making the decisions that shape people’s lives. [49] Although potential employees have some legal protections today, it would be difficult for a job applicant harmed by the subliminal effects of biased ads to trace that harm to its cause. A quick glance (or many such glances) by a hiring professional is likely to go unnoticed. The same concerns may arise in situations involving promotions, special awards, or other forms of professional advancement, or in different settings such as the search for a roommate.

Lawyers do caution employers to tread carefully online. “I advise employers that it’s not a good idea to use social media as a screening tool,” says James McDonald, a specialist in employment law. [50] “[Employers] need to control the information,” he says, but the ease of a Google search may be hard to resist. “By and large, employers avoid asking questions about these traits in interviews. But now technology makes it easier to find that information,” observes Prof. Alessandro Acquisti of Carnegie Mellon University. [51]

Dr. Sweeney’s research shows that racism can be perpetuated inadvertently by complex online systems, even when the companies that create these systems do not intend to discriminate.

[41] David R. Francis, Employers’ Replies to Racial Names, National Bureau of Economic Research (2003), http://www.nber.org/digest/sep03/w9873.html.

[42] Latanya Sweeney, Discrimination in Online Ad Delivery, 11 Queue 10 (2013), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2208240.

[43] Id.

[44] Id.

[45] Id.

[46] Racism is Poisoning Online Ad Delivery, Says Harvard Professor, MIT Technology Review (2013), http://www.technologyreview.com/view/510646/racism-is-poisoning-online-ad-delivery-says-harvard-professor.

[47] See Google, Check and understand Quality Score, https://support.google.com/adwords/answer/2454010 (explaining to advertisers that “your ad’s expected CTR [click through rate]” contributes to its Quality Score, which in turn “typically lead[s] to lower costs and better ad positions” for ads with high Quality Scores).

[48] Racism is Poisoning Online Ad Delivery, Says Harvard Professor, supra note 46.

[49] Nick Fishman, Survey Shows 48% of Employers Conduct Social Media Background Checks, IQ Blog (2012), http://www.employeescreen.com/iqblog/48-of-employers-conduct-social-media-background-checks.

[50] Jennifer Valentino-Devries, Bosses May Use Social Media to Discriminate Against Job Seekers, Wall St. J. (2013), http://online.wsj.com/news/articles/SB10001424052702303755504579208304255139392.

[51] Id.