On 16 October 2024, the Department of Labor (DOL) published comprehensive guidance regarding the use of artificial intelligence (AI) tools in employment. The guidance, entitled “Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers”1 (the DOL AI Guidance), builds on the DOL’s May 2024 AI guidance (the May Guidance) and fulfills the agency’s obligations under President Biden’s October 2023 executive order on AI.2 The DOL AI Guidance also follows the agency’s endorsement of the Partnership on Employment & Accessible Technology (PEAT)’s AI & Inclusive Hiring Framework,3 as well as publications by the Equal Employment Opportunity Commission (EEOC)4 and the DOL’s Office of Federal Contract Compliance Programs (OFCCP).5
The DOL makes clear in its disclaimer that the DOL AI Guidance is not binding and does not supersede, modify, or direct an interpretation of any statute, regulation, or policy. However, the publication is the DOL’s most comprehensive AI publication to date, and following the guidance will help employers use AI without running afoul of existing equal employment opportunity and other laws.
Principles and Best Practices
The DOL AI Guidance expands on the eight AI principles (Principles) contained in the May Guidance by providing best practices (Best Practices) that employers6 can follow to implement these Principles.
Centering Worker Empowerment
First, to “center worker empowerment” by ensuring that “[w]orkers and their representatives, especially those from underserved communities, [are] informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems in the workplace,” employers should regularly integrate input from workers. By incorporating workers into the process, from design through use, employers can balance the benefits of AI with worker protection and strive to use AI to improve workers’ job quality and enable business success.
Ethically Developing AI
Second, to ethically develop AI, employers should develop a strong foundation consisting of ethical standards, guidelines, and an internal review process to help “ensure AI and automated systems…meet safety, security, and trustworthy standards for their customers, customers’ workers, and the public.” To do this, employers should do the following:
Carry out impact assessments and independent audits of the AI programs and publish the results.
Assess the risks of algorithmic discrimination.
Document negative impacts on workers’ job quality and well-being.
Monitor the AI programs on an ongoing basis and prioritize human oversight over the tools and employment decisions that involve those tools.
Ensure that any jobs created to review and analyze AI comply with domestic and international labor standards.
Establishing AI Governance and Human Oversight
Third, to establish sufficient AI governance and appropriate human oversight of AI tools, employers should do the following:
Establish empowered governance structures that incorporate worker input into decision-making and continually review and evaluate AI systems that affect workers.
Offer appropriate training on AI systems to a broad range of employees, including processes for raising concerns.
Not rely solely on AI and automated systems, or the information collected through them, to make significant employment decisions.
Identify and document the types of significant employment decisions informed by AI systems, including procedures for human consideration and remedies for decisions that adversely impact employees.
Ensuring Transparency in AI Use
Fourth, to ensure transparency in AI use, employers should do the following:
Provide workers and their representatives advance notice and appropriate disclosure that AI systems are in use. This information should be clear and accessible, and it should conspicuously notify workers of what data will be collected and stored about them and how that data will be used.
Allow employees to view, dispute, and submit corrections for their individually identifiable data without fear of retaliation.
According to the DOL, this transparency will “foster greater trust and job security, prepare workers to effectively use AI, and open channels for workers to provide input to improve the technology or correct errors.”
Protecting Labor and Employment Rights
Fifth, to ensure AI tools do not interfere with employees’ labor organizing, reduce employees’ wages, or put employees’ health and safety at risk, employers should do the following:
Not use AI tools to reduce wages, break time, or benefits.
Audit AI systems for disparate or adverse impacts on individuals with protected characteristics to comply with anti-discrimination requirements, including offering reasonable accommodations when requested.
Using AI to Enable Workers
Sixth, to use AI to enable workers, employers should do the following:
Create AI pilot programs for employees to use and test tools before conducting large-scale rollouts to ensure the tools are assisting and complementing workers and improving job quality.
Not use AI tools to engage in invasive monitoring of employees, especially when assessing worker performance.
Consider how to balance enhanced productivity through the use of AI tools while benefiting workers, such as through “increased wages, improved benefits, increased training, fair compensation for the collection and use of worker data or reduced working hours without loss of pay.”
Supporting Workers Impacted by AI
Seventh, to support workers impacted by AI, employers should do the following:
Train employees on AI systems to upskill workers instead of replacing them.
Work to preserve jobs for those at risk of displacement due to AI by offering training, education, and professional development opportunities for workers to learn how to use and work with AI systems.
Ensuring Responsible Use of Worker Data
Eighth, to ensure responsible use of worker data, employers should do the following:
Develop safeguards for protecting employee data from internal and external threats, with an emphasis on mitigating privacy risks for workers.
Ensure that AI tools have “safeguards for securing and protecting data.”
Avoid collecting unnecessary data.
Not share data outside of the business.
Takeaways
The DOL stresses that employers should apply each of these eight Principles “during the whole lifecycle of AI – from design to development, testing, training, deployment and use, oversight, and auditing.” Further, the DOL clarifies in the DOL AI Guidance that the eight Principles and the Best Practices it outlines are not intended to be an “exhaustive list” and, as noted above, are not binding. However, the document provides a useful “guiding framework” that employers can follow as they refine how best to use AI in employment decisions.
Recommendations
Employers that are implementing or considering implementing AI systems and procedures should examine the DOL AI Guidance to ensure their systems and procedures track the purposes and policies outlined in the Principles and Best Practices. Employers also should continue examining requirements of other federal agencies—such as the EEOC and the OFCCP, if applicable—as well as state laws to ensure their systems meet all appropriate legal requirements.
Our Labor, Employment, and Workplace Safety practice group lawyers regularly counsel clients on a wide variety of topics related to emerging issues in labor, employment, and workplace safety law, and they are well-positioned to provide guidance and assistance to clients on AI developments.
K&L Gates LLP
https://natlawreview.com/article/dols-ai-hiring-framework-provides-employers-helpful-guidance-how-decrease-legal