For several years, I’ve been thinking and writing about the ethical use of artificial intelligence, both within the enterprise and beyond. While those topics remain of paramount interest to me, AI’s rapid deployment at all levels of the enterprise makes it prudent to share some guidance on AI use in the context of employment, particularly from the employer’s perspective. The U.S. Department of Labor’s Wage and Hour Division recently issued Field Assistance Bulletin No. 2024-1, “Artificial Intelligence and Automated Systems in the Workplace under the Fair Labor Standards Act and Other Federal Labor Standards.” The Bulletin provides relevant information not about the acquisition of AI systems, but about the actual use of AI-powered systems under existing laws and regulations. It states clearly that “As new technological innovations emerge, the federal laws administered and enforced by W[age and] H[our] D[ivision] continue to apply, and employees are entitled to the protections these laws provide, regardless of the tools and systems used in their workplaces.” I couldn’t have said it better myself.
For the purpose of this guidance, the terms “artificial intelligence” or “AI” are defined under 15 U.S.C. § 9401(3) as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to - (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.”
Many employers recognize that AI-based tools and other automated functions can handle the often-cumbersome tasks of employee scheduling, timekeeping, employee location tracking and wage calculation. And while such tools may be sold as useful for measuring employee productivity, as soon as the employer makes the leap to replacing and/or reinforcing human judgment with them, ethical issues can come into play in very meaningful—and potentially litigious—ways. Measurements like keystrokes, mouse clicks or employee presence in front of a camera are not necessarily accurate measures of productivity, and judging employee performance on these metrics alone is fraught with danger. It’s the same as using my step-counting watch to determine my level of exercise for the day. For all the watch knows, I might just as well put it in a padded bag and spin it through the dryer for an hour; the watch simply cannot tell whether I was actually running or not. Should I qualify for the Boston Marathon based on the info provided by my watch? Hardly.
Other danger areas for AI or automated systems in the employment context include calculating break times and wages, measuring wait time while an automated system decides the next task to assign and, among others, employee location tracking. Both federal and state labor laws impose very specific, legally enforceable obligations on employers related to the measurement of actual time worked. In many instances, pay is tied directly to hours worked, so the employer is required to measure work time accurately.
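To make the risk concrete, here is a minimal sketch in Python (all names and values are hypothetical, not drawn from the Bulletin) of a common failure mode: an automated timekeeping rule that deducts a fixed meal break from every shift will undercount compensable hours whenever an employee works through that break.

```python
from datetime import datetime, timedelta

# Hypothetical rule: the system deducts a fixed 30-minute meal break
# from every shift, whether or not the employee actually took it.
AUTO_BREAK_DEDUCTION = timedelta(minutes=30)

def system_paid_hours(clock_in: datetime, clock_out: datetime) -> float:
    """Hours the automated system pays: shift length minus the fixed deduction."""
    return (clock_out - clock_in - AUTO_BREAK_DEDUCTION).total_seconds() / 3600

def actual_hours_worked(clock_in: datetime, clock_out: datetime,
                        break_taken: timedelta) -> float:
    """Hours actually worked, based on the break the employee really took."""
    return (clock_out - clock_in - break_taken).total_seconds() / 3600

shift_start = datetime(2024, 6, 3, 9, 0)
shift_end = datetime(2024, 6, 3, 17, 30)

paid = system_paid_hours(shift_start, shift_end)
worked = actual_hours_worked(shift_start, shift_end, break_taken=timedelta(0))

# The employee worked through the break, so the system undercounts time
# that federal and state law may treat as compensable.
if worked > paid:
    print(f"Flag for human review: paid {paid:.2f} h, worked {worked:.2f} h")
```

The half hour of difference here is exactly the kind of discrepancy that only a human review step, not the automated rule itself, will catch.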
Labor law is only one of the many areas where over-reliance on AI and/or automated outputs can lead to significant harm and legal costs. On the federal level, determinations of compliance with obligations defined under the Family and Medical Leave Act, the PUMP Act and the Employee Polygraph Protection Act (EPPA) are fraught with risk when entrusted to AI, particularly in the employment context. With respect to the polygraph law, according to a recent article in the National Law Review, some AI-enabled tools may collect “eye measurements, voice analysis, micro-expressions, or other body movements to suggest if someone is lying or to detect deception. An employer’s use of any lie detector test, including any such device that utilizes or incorporates AI technology, would be prohibited by the EPPA unless used in accordance with the limited exemptions provided for in the law.”
As I have commented in previous posts, one of the many challenges created by the use of AI tools is that the output generated from data fed into an AI algorithm, regardless of the algorithm’s level of sophistication, is only as good as the original data itself. And there are many questions to answer about the quality and characteristics of that training information: How old is the original data? What are the sources for the data? Why those sources? How were those sources vetted? How were biases detected, or was bias detection part of the data-gathering and training at all? Without being fully informed on these points, strict reliance on AI output exposes employers to violations of labor laws that carry stiff penalties. AI and/or automated outputs are only as good as the data and complex math upon which they are built. Without human input, supervision and oversight, unvalidated AI-generated output is nothing more than a number.
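Purely as an illustration (the Bulletin prescribes no such mechanism), those questions could be captured as a simple provenance checklist that an employer completes before relying on a vendor’s tool; in this hypothetical sketch, any unanswered field is a reason to withhold reliance.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class TrainingDataAudit:
    """Hypothetical checklist mirroring the questions above.
    All field names are illustrative, not a legal standard."""
    data_age: Optional[str] = None          # How old is the original data?
    data_sources: Optional[str] = None      # What are the sources for the data?
    source_rationale: Optional[str] = None  # Why those sources?
    vetting_process: Optional[str] = None   # How were those sources vetted?
    bias_detection: Optional[str] = None    # How were biases detected, if at all?

    def unanswered(self) -> list:
        """Any open question is a reason not to rely on the tool's output."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

audit = TrainingDataAudit(data_age="2019-2023",
                          data_sources="vendor timecard corpus")
if audit.unanswered():
    print("Do not rely on output yet; open questions:", audit.unanswered())
```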
AI tools can be powerful when how they work, and where their underlying data come from, are well understood. But without appropriate human oversight, AI outputs can lead to conclusions that violate labor laws, health care regulations and many other elements of state and federal law. The ultimate takeaway, and solid legal advice to any employer using these outputs, is that human oversight of these tools is absolutely required.
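One way to operationalize that advice, sketched here in Python with assumed names, is to treat every consequential AI output as a recommendation that takes no effect until a named human reviewer approves it.

```python
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    """An AI system's output, held as a recommendation pending human sign-off."""
    employee_id: str
    action: str     # e.g., "deduct 30-minute break"
    rationale: str  # what the system based the recommendation on

def apply_with_oversight(rec: AIRecommendation, reviewer: str,
                         approved: bool) -> str:
    # Nothing takes effect without a named human reviewer's decision.
    if approved:
        return f"{rec.action} for {rec.employee_id}: approved by {reviewer}"
    return f"{rec.action} for {rec.employee_id}: rejected by {reviewer}; escalate"

rec = AIRecommendation("E-1042", "deduct 30-minute break",
                       "badge-swipe gap detected")
print(apply_with_oversight(rec, reviewer="payroll.manager", approved=False))
```

The point of the pattern is not the code but the audit trail: every automated decision carries a human name next to it.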