Algorithms and Intentional Discrimination or Implicit Bias? Balancing the Risks and Benefits of Facial Recognition and Artificial Intelligence During the COVID-19 Pandemic and Civil Unrest

Alerts / June 26, 2020

Employers and other businesses are increasingly looking for technological solutions to minimize the burden of testing and security during the COVID-19 pandemic. Indeed, the use of facial recognition has surged during the pandemic as a type of “one-stop shop” technology that both provides added security and gauges the employee’s temperature.[1] Yet just as this technology is increasingly used in the workplace, the legal view of facial recognition continues to grow more complex.

San Francisco banned police use of facial recognition technology even before the nationwide protests, with lawmakers arguing that such technology disproportionately targets African Americans.[2] Boston announced its own ban on June 24, 2020, joining San Francisco, Oakland, and Cambridge.[3] Other major cities, such as Portland and Santa Cruz, have either passed bans or are considering them.[4] Microsoft and Amazon have similarly backed away from providing the technology to law enforcement, at least temporarily.[5] IBM, for its part, has announced it will exit the facial recognition business entirely.[6] While these measures are focused on law enforcement and other government entities, the trend is clear: lawmakers and others may view use of this technology as discriminatory or as implicating privacy concerns.

In 2018, Senate Democrats sent a letter to the EEOC warning of the potentially discriminatory nature of facial recognition technology.[7] The letter outlined a series of potential grievances with the technology, from security to job placement. Employers can expect that pressure to ramp up given the current skepticism surrounding facial recognition and the heightened media attention on perceived discrimination. Just this week, it was widely reported that the first known wrongful arrest involving facial recognition software occurred.[8] According to the report, the algorithm returned a false “match” for a jewelry store robbery, and officers arrested an innocent African American man who looked nothing like the suspect.

This is a reminder for employers that artificial intelligence (AI) carries real employment law risks. The theory behind such a claim is that the code used in these technologies is itself laced with discriminatory animus.[9] For example, in one study, an AI used to make hiring decisions initiated callbacks 50% more often if the individual had a traditionally Caucasian name rather than a traditionally African American name.[10] The EEOC is currently investigating at least two discrimination claims based on the discriminatory use of algorithms.[11] The same problem potentially exists in facial recognition algorithms: an employer may unknowingly be using a system that prefers the “faces” of one group over those of another. Aside from claims of intentional discrimination, it is important to consider the potential for disparate impact claims and ongoing efforts to transform intentional discrimination standards. While data may speak for itself and is arguably color-blind, advocates are seeking to change employment law to focus more on social justice outcomes and less on intent. Advocates are also seeking to expand the definition of intentional discrimination to include concepts of implicit bias.[12] It is important for businesses to consider and evaluate such legal risks before utilizing AI and other new technologies that affect employees. Formal risk assessments and bias impact studies may provide a safe harbor for manufacturers of such technology and the businesses that use it.

Employers should use caution when implementing facial recognition or any other type of AI. Facial recognition technology was in the crosshairs of lawmakers even before the recent nationwide protests over law enforcement; now employers can expect it to be truly under the microscope. The heightened pressure to view this new technology through a discriminatory lens is a troubling sign for facial recognition’s application in the private sector. Companies may soon see proposed national legislation banning facial recognition technology for many workplace applications, and the plaintiffs’ bar is watching.

The pandemic will end. What will continue, however, is increasing legal scrutiny of facial recognition and other forms of AI. Employers should therefore be cautious about incorporating these new technologies into the workplace. While now may seem like a good time to invest in this technology, its legal future is unclear, and employers may ultimately be left with a costly gadget that serves only to invite unwanted litigation. Accordingly, employers should weigh the present benefit against the future risk and adopt such technologies only after evaluating the legal risks. AI, algorithms and related technology can be useful in the employment context, but it is critical for information officers, IT, HR and employment counsel to work closely together to consider the legal risks and practical benefits of such technology.

Authorship Credit: M. Scott McIntyre

[1] Tech That Allows Restaurant Customers to ‘Pay with Their Face’ Is Gaining Traction, US Chamber of Commerce.
[2] San Francisco Bans Facial Recognition Technology, The New York Times.
[3] Boston Bans Municipal Use of Facial Recognition, Siladitya Ray, Forbes (June 24, 2020).
[4] Boston Police Support the Effort to Ban Facial Recognition Technology — for Now; Portland Delays Action on Facial Recognition Ban as It Kicks Off Review of Police Practices, Bangor Daily News; Community Leaders and Santa Cruz Police Meet to Discuss Future of Policing, KSBW 8.
[5] Amazon to Ban Police Use of Facial Recognition Software for a Year.
[6] IBM Quits Facial Recognition, Joins Call for Police Reforms, Matt O’Brien, Seattle Times (June 23, 2020).
[7] Letter to the EEOC, Sen. Kamala Harris, Sen. Patty Murray, Sen. Elizabeth Warren (Sept. 17, 2018).
[8] This May Be America’s First Known Wrongful Arrest Involving Facial Recognition, Brian Fung and Rachel Metz, CNN Business (June 24, 2020).
[9] Why Algorithms Can Be Racist and Sexist, Rebecca Heilweil, Vox (Feb. 18, 2020).
[10] Biased Algorithms Are Easier to Fix Than Biased People, Sendhil Mullainathan, The New York Times (Dec. 6, 2019).
[11] Punching In: Workplace Bias Police Look at Hiring Algorithms, Bloomberg Law (Oct. 28, 2019).
[12] Benson Cooper v. KSHB-TV et al., No. 4:17-cv-00041-BP (W.D. Mo. Jan. 17, 2019); EEOC v. Wal-Mart Stores, Inc., 2010 WL 583681 (E.D. Ky. 2010); Karlo v. Pittsburgh Glass Works, LLC, 849 F.3d 61, 84-85 (3d Cir. 2017) (finding the district court did not abuse its discretion in excluding expert testimony on implicit bias); White v. BNSF Railway Co., 726 F. App’x 603 (9th Cir. 2018); Jones v. National Council of Young Men’s Christian, 34 F. Supp. 3d 896, 900 (N.D. Ill. 2014). But see Samaha v. Wash. State Dep’t of Transp., No. CV-10-175-RMP, 2012 WL 11091843, at *4 (E.D. Wash. Jan. 3, 2012) (finding an expert’s testimony on “concepts of implicit bias and stereotypes is relevant to the issue of whether an employer intentionally discriminated against an employee”).

Baker & Hostetler LLP publications are intended to inform our clients and other friends of the firm about current legal developments of general interest. They should not be construed as legal advice, and readers should not act upon the information contained in these publications without professional counsel. The hiring of a lawyer is an important decision that should not be based solely upon advertisements. Before you decide, ask us to send you written information about our qualifications and experience.