Artificial intelligence tools have the potential to drive great advances throughout society. But the tools have downsides, and some, including applicant screening tools, employee monitoring systems, and video interview scoring tools, may create bias-related risks.
Employers should take advantage of AI tools’ many benefits, but they must also ensure proper oversight and risk management planning to reduce litigation and regulatory exposure.
IRS Bias Findings
A 2023 study by the Stanford Institute for Economic Policy Research revealed racial disparities in the IRS's auditing process, showing how bias can enter AI tools unintentionally. Although the IRS doesn't ask taxpayers about their race, Black taxpayers claiming the earned income tax credit were three to five times more likely to be audited than non-Black taxpayers, according to the study.
The reason? A predictive algorithm used for audit selection. It was set up to flag tax returns with potential mistakes, which are more common when taxpayers prepare their own returns or claim the earned income tax credit, a credit designed to support low- and middle-income working families.
The exact reason for the disparity remains unclear, but the study suggests Black taxpayers could have been targeted at higher rates because they’re more likely to claim the earned income tax credit and file their own tax returns. The study also observed subpopulation disparities among earned income tax credit claimants—for unmarried claimants with dependents, the audit rate for Black men was over 4% higher than for non-Black men.
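To make that kind of check concrete, the short Python sketch below shows how an analyst might compute group-level audit rates and a disparity ratio from selection outcomes. The counts are invented for illustration; they aren't figures from the Stanford study or IRS data.

```python
# Minimal sketch: group-level audit rates and a disparity ratio from
# selection outcomes. The counts are hypothetical illustrations, not
# figures from the Stanford study or IRS data.

audit_outcomes = {
    # group: (number audited, number of filers)
    "group_a": (435, 10_000),
    "group_b": (110, 10_000),
}

rates = {g: audited / total for g, (audited, total) in audit_outcomes.items()}
for group, rate in rates.items():
    print(f"{group}: audit rate {rate:.2%}")

# A ratio well above 1.0 signals a disparity worth investigating, even
# when the selection algorithm never sees a protected attribute.
disparity_ratio = max(rates.values()) / min(rates.values())
print(f"disparity ratio: {disparity_ratio:.1f}x")
```

The point of such a check is that an algorithm can produce a large disparity ratio without ever observing race, which is exactly the pattern the study describes.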
The IRS acknowledged the disparate audit outcomes in its 2024 annual report and vowed to overhaul its compliance efforts and dedicate resources toward identifying disparities across dimensions of race, gender, ethnicity, age, and geography.
However, this problem could affect any organization, and the IRS case corroborates the National Institute of Standards and Technology's findings in July that bias can exist in AI tools.
Legal Implications
The main risk for employers using AI tools comes from a possible disparate impact claim. The Stanford study notes that the IRS’s audit disparity isn’t likely driven by disparate treatment, given that the agency doesn’t observe or even ask about a filer’s race.
Unlike a disparate treatment claim, which requires direct or circumstantial evidence of discriminatory intent, the disparate impact theory of liability established in Griggs v. Duke Power Co. requires only an initial showing that a facially neutral practice produced statistically disparate outcomes for members of a protected class.
Even if an employer doesn't intend its AI tools to discriminate against a protected class, disproportionate outcomes alone may supply grounds for liability.
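In hiring specifically, a common first-pass screen for that kind of statistical showing is the EEOC's four-fifths rule: a selection rate for one group below 80% of the most-favored group's rate is generally treated as evidence of adverse impact. The Python sketch below runs that arithmetic on hypothetical screening-tool outcomes; the counts are invented for illustration.

```python
# Minimal sketch of the EEOC four-fifths rule as a first-pass screen
# for disparate impact. All counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the screening tool advanced."""
    return selected / applicants

# Hypothetical outcomes from an AI resume-screening tool.
rates = {
    "group_a": selection_rate(300, 1_000),
    "group_b": selection_rate(180, 1_000),
}

most_favored = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / most_favored
    status = "review for adverse impact" if impact_ratio < 0.8 else "within guideline"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} ({status})")
```

A failing ratio doesn't itself establish liability, but it is the kind of statistical disparity that can carry a plaintiff past the initial showing.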
To address these risks in the employment context, lawmakers and courts have focused their efforts on regulating AI use. Last year, President Joe Biden issued a comprehensive executive order on the use of AI and other technologies, including directives to the Department of Labor. The DOL-related provisions addressed hiring discrimination, the AI workforce, employee displacement, workplace monitoring systems, and employee well-being. The Office of Federal Contract Compliance Programs also released a guide on AI bias in line with the executive order.
State legislatures and courts similarly have tried to address AI bias in the workplace. The California state legislature failed last year to pass Assembly Bill 331, which focused on regulating automated decision-making, including AI and algorithm-driven tools. The bill would have subjected employers using automated decision-making to notification and disclosure requirements to reduce the risk of bias.
In July, the US District Court for the Northern District of California opened the door to holding software companies that use AI-powered screening tools liable for employment discrimination as “agents” of employers, denying Workday’s second motion to dismiss in Mobley v. Workday, Inc.
Mitigation Strategies
Despite the potential legal liability, AI tools can yield significant benefits. But employers should have plans in place to prevent potential bias.
In drafting and adopting a risk management plan, employers should work with experienced counsel when reviewing AI systems to allow for privileged exchange of information and options. Along with counsel, employers can look to NIST's AI Risk Management Framework, developed in partnership with the Department of Commerce and in collaboration with the private and public sectors.
The NIST framework organizes AI risk management into four functions: govern, by building a holistic risk-management culture and strategy; map, by identifying risks in the context where a system is used; measure, by assessing, analyzing, and tracking those risks; and manage, by prioritizing risks and acting on them, including through continuous monitoring of impact. The framework's purpose isn't to provide a concrete checklist to manage risk, but to offer a flexible risk model that can suit an organization's specific needs and uses.
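As one illustration of that flexibility, the sketch below organizes a minimal risk register around the four functions. The structure, field names, and sample entry are all hypothetical; NIST prescribes no particular format or data structure.

```python
# Hypothetical sketch: a minimal AI risk register organized around the
# NIST AI RMF's four functions (govern, map, measure, manage). Field
# names and the sample entry are invented for illustration.

from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    tool: str     # the AI system under review
    risk: str     # the harm being tracked
    govern: str   # policy and ownership decisions
    map: str      # context in which the risk arises
    measure: str  # metric used to track the risk
    manage: str   # mitigation if the metric degrades

register = [
    AIRiskEntry(
        tool="resume screener",
        risk="disparate impact on protected classes",
        govern="counsel reviews findings quarterly under privilege",
        map="high-volume hourly hiring pipeline",
        measure="four-fifths impact ratio per posting cycle",
        manage="suspend auto-rejection and escalate to human review",
    ),
]

for entry in register:
    print(f"{entry.tool}: track '{entry.measure}'; fallback '{entry.manage}'")
```

The value of a register like this is less the data structure than the discipline: each tool gets a named risk, a named metric, and a named response before a problem surfaces.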
While the NIST’s risk management framework provides a proactive approach to identifying and managing AI risk, employers should have solutions ready if bias is identified. Although these solutions will vary depending on the organization, algorithm, and bias involved, the Stanford study highlighted a few potential solutions.
The report suggested the IRS could shift further toward regression-based algorithms rather than a model that incorporates both classification-based and regression-based algorithms. By focusing on the predicted dollar magnitude of underreporting (regression) rather than on whether underreporting exceeds $100 or whether a taxpayer claims the earned income tax credit (classification), the model could better deter large-scale tax evaders and avoid skewing audit selection toward Black taxpayers.
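The sketch below illustrates that distinction on synthetic returns; it is not the IRS's model or the study's data. A classification-style selector ranks returns by the probability of any flaggable error, while a regression-style selector ranks them by expected dollars of underreporting, which naturally surfaces larger-dollar cases.

```python
# Illustrative sketch of classification- vs. regression-style audit
# selection, using synthetic returns. Not the IRS model or study data.

returns = [
    # (return id, probability of any flaggable underreporting,
    #  predicted dollars underreported if present)
    ("A", 0.90, 400),     # likely small error, e.g., a self-prepared credit claim
    ("B", 0.35, 25_000),  # less certain, but large dollars at stake
    ("C", 0.80, 600),
    ("D", 0.30, 40_000),
]

# Classification-style selection: audit the returns most likely to
# contain any error at all, regardless of size.
by_probability = sorted(returns, key=lambda r: r[1], reverse=True)

# Regression-style selection: audit the returns with the largest
# expected dollars of underreporting (probability * magnitude).
by_expected_dollars = sorted(returns, key=lambda r: r[1] * r[2], reverse=True)

print("classification picks:", [r[0] for r in by_probability[:2]])      # A, C
print("regression picks:    ", [r[0] for r in by_expected_dollars[:2]])  # D, B
```

The classifier concentrates audits on small, easy-to-flag returns, the pattern the study associates with self-prepared earned income tax credit filings, while the dollar-weighted ranking redirects scrutiny toward larger potential evasion.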
The Stanford report also said the model could weight underreporting from other refundable credits and income sources equally, and that the IRS could expand its resources to accommodate audits of more complex earned income tax credit returns, such as those with business income. These analytical and remediation strategies show that employers can improve how they evaluate the myriad AI tools promising superlative results.
Employers should look to use AI tools despite the liability concerns. Predictive AI yields benefits such as staffing efficiency, cost reduction, and greater profitability. These tools also can improve overall performance and help businesses navigate market complexities.
With proper oversight, risk management planning, prepared remediation strategies, and legal counsel, AI tools offer employers a wide range of benefits without turning the employer into another cautionary tale.