Read Time: 4 minutes
Federal Employment Law
AI in the federal workplace

Once a far-off dream of science fiction, artificial intelligence (AI) has become a regular part of life for many employees today.

Thanks to the explosion of AI’s capabilities and accessibility in recent years, many employers and employees have eagerly integrated automated technology into the workplace to boost efficiency and productivity.

However, despite AI’s exciting potential to improve the modern workflow, it also comes with legal and operational risks. Understanding the ongoing compliance concerns around AI at work is crucial for ensuring employees and their organizations protect themselves while achieving productivity goals. 

In this blog post, we’ll explain what federal employees should know about using artificial intelligence in the workplace. We’ll discuss the pros and cons of this technology and how federal law impacts the role of AI in the workplace. If you still have questions, contact our office by calling (833) 833-3529 or using our online contact form.

The Current State of Artificial Intelligence in the Workplace

Artificial intelligence is a form of computer technology that can perform tasks generally thought to require human intelligence. Many of today’s AI tools are trained on large amounts of data, using algorithms to “learn” to recognize patterns, solve problems, analyze language, and generate text with minimal human intervention. 

In the workplace, artificial intelligence can serve many different functions, including:

  • Recruiting and hiring—preparing job descriptions, sorting and ranking applicants, conducting initial interviews; 
  • Onboarding employees—conducting training, answering common questions, personalizing educational content;
  • Outsourcing repetitive tasks—scheduling meetings, completing data entry, organizing files, responding to common customer queries;
  • Analyzing data—conducting research, identifying patterns in productivity, resource allocation, and other operational metrics;
  • Drafting routine documents—generating and proofreading emails, memos, presentations, contracts; and
  • Managing employees—assessing individual performance, gauging workforce satisfaction, predicting personnel or operational issues.

This wealth of capabilities makes AI an increasingly valuable and popular tool for employers and employees. 

Potential Pitfalls Involving AI in the Workplace 

Despite the technological capabilities of AI, these tools have limits and drawbacks. Some basic concerns raised around integrating AI tools into the work environment include the following:

  • Outdated or incorrect information. Errors or gaps in an AI’s training dataset can lead to inaccurate or misleading results. For example, a generative AI tool trained on data last updated in 2022 can’t provide truly up-to-date information about current events.
  • Chatbot “hallucination.” Some generative AI chatbots fabricate information, a phenomenon known as “hallucinating.” Unless a user is an expert in a given topic, it can be easy to fall prey to seemingly credible but false conclusions.
  • Privacy concerns. Information shared with some interactive AI systems may become part of the technology’s working knowledge base. When that information is confidential or proprietary, there’s a risk it could be exposed to unintended recipients.

With the ongoing expansion of publicly available automated and generative AI tools, many organizations are working to develop policy guardrails to avoid these risks while still reaping the benefits of this technology. 

Federal Law and AI

As AI has become more sophisticated and widely available, lawmakers have begun to take action to respond to some of the concerns about what this technology can do. Let’s look at some current legislative and administrative attempts to regulate AI and how they could impact employees.

Algorithmic Accountability Act 

This bill was first introduced in 2019 and is currently under consideration by Congress. It would require certain employers that use AI to study the technology’s potential impact on employees before making critical employment decisions.

If passed, the bill would require companies to report any bias, accuracy, discrimination, privacy, or security concerns with a tool to the Federal Trade Commission (FTC). 

EEOC Guidance on AI and the ADA

In 2022, the Equal Employment Opportunity Commission (EEOC) released a statement offering guidance on maintaining Americans with Disabilities Act (ADA) compliance with AI tools in the workplace. In it, the EEOC flagged the following ways that AI may put employees with disabilities at risk of discrimination:

  • AI tools fail to provide reasonable accommodations. For example, any digital recruiting tools using AI must offer accommodations or alternatives for individuals with hearing, sight, or other impairments. 
  • AI tools screen qualified individuals with disabilities out of hiring. Technology that uses strict criteria to identify qualified candidates may inadvertently exclude applicants with certain disabilities. For instance, an AI tool that evaluates speech fluency might score a qualified candidate with a speech impairment poorly. 
  • AI tools conduct illegal disability or medical inquiries. AI-based employment questionnaires can’t explicitly or implicitly encourage someone to disclose a disability or medical issue. If this happens before a job offer, it risks violating the ADA’s disability discrimination protections.

Although this guidance does not carry the force of law, it offers important clarification for employers considering using AI in hiring and recruiting.

EEOC Guidance on AI and Title VII

The EEOC also issued technical guidance addressing AI and potential unintended discrimination under Title VII. It specifically points out how AI as a hiring tool could perpetuate biases and prejudices in recruiting.

Algorithms trained on past hiring data may base future decisions on criteria historically slanted toward specific groups. For example, prioritizing candidates based on education, geographic region, and job titles could skew toward white applicants and away from other racial groups.

As a result, the EEOC recommends that employers conduct regular bias assessments of any AI hiring tools to ensure they don’t select members of protected groups at substantially lower rates than other applicants.
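To make the idea of a bias assessment concrete, here is a minimal sketch of a selection-rate comparison in the spirit of the EEOC’s “four-fifths rule” (from the Uniform Guidelines on Employee Selection Procedures), under which a selection rate for one group that falls below roughly 80% of the highest group’s rate may signal adverse impact. The group names and counts below are hypothetical, and a real audit would involve far more rigorous statistical and legal analysis.

```python
# Hypothetical selection-rate audit sketch (four-fifths rule).
# Group labels and counts are invented for illustration only.

def selection_rates(applicants, selected):
    """Compute each group's selection rate (selected / applicants)."""
    return {group: selected[group] / applicants[group] for group in applicants}

def flag_adverse_impact(rates, threshold=0.8):
    """Return groups whose rate is below threshold * the highest group's rate."""
    top_rate = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * top_rate}

# Hypothetical counts of applicants screened in by an AI hiring tool.
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60, "group_b": 24}

rates = selection_rates(applicants, selected)
# group_a: 60/200 = 0.30; group_b: 24/150 = 0.16
flagged = flag_adverse_impact(rates)
print(flagged)  # group_b is flagged: 0.16 < 0.8 * 0.30 = 0.24
```

A regular audit along these lines, run each time the tool’s model or criteria change, is one practical way an employer could act on the EEOC’s recommendation.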

Trusted Guide and Defender for Federal Employees 

AI is a promising tool with great potential to improve the daily lives of federal employees and agencies. However, it shouldn’t be used thoughtlessly.

Although legislation seems to be a step behind technology, the misuse of AI risks opening the door to serious legal complications for federal employees and employers. If you’re concerned about AI compliance in your workplace, contact the Federal Employment Law Firm of Aaron D. Wersing PLLC.

Aaron Wersing has spent years helping federal employees understand and exercise their rights under complex government employment regulations. Contact our office to schedule a consultation and learn more. 

Aaron Wersing, Attorney at Law

Aaron Wersing is the founder of the Law Office of Aaron D. Wersing. Mr. Wersing graduated from the Georgia State University College of Law with a Doctorate in Jurisprudence and was the recipient of the CALI Excellence for the Future Award. Mr. Wersing previously attended the University of Georgia, where he received a Bachelor of Business Administration degree in Accounting. Mr. Wersing is an active member of his local community. Mr. Wersing acts as a volunteer attorney with Houston Volunteer Lawyers, the pro bono legal aid organization of the Houston Bar Association. He is also a member of professional legal organizations such as the National Employment Lawyers Association and the American Inns of Court. To reach Aaron for a consultation, please call him at (833) 833-3529.
