EEOC's "Algorithm" for AI in the Workplace

The rapid advancement of artificial intelligence (AI) has unquestionably made things easier for employers. For example, human resource departments often use AI technologies to streamline tasks such as resume screening, candidate shortlisting, employee onboarding, and performance tracking.

However, the use of AI-powered tools in hiring and other employment decisions must still comply with Federal equal employment opportunity laws. That is, AI-driven employment decisions cannot discriminate against applicants or employees based on race, color, religion, sex, national origin, age (40 and over), disability, or any other legally protected characteristic or activity.

Recently, the EEOC released technical guidance addressing the intersection of AI and workplace discrimination and clarifying that employers remain responsible for ensuring that the AI technologies they use comply with Federal law.

Here are a few ways employers can leverage the benefits of AI technologies while mitigating the risk that AI becomes a high-tech pathway to discrimination:

Do not rely on AI technology entirely. While AI can automate various HR processes, human oversight is critical to preventing potential biases, errors, or unintended consequences. Employees should still apply critical thinking, informed by their knowledge, expertise, and experience, when providing input into decisions.

Work with vendors. The EEOC has made clear that employers could be on the hook for vendor-designed or vendor-administered technologies that result in discriminatory employment decisions. Employers should work with third-party vendors to ensure that AI systems comply with Federal law. For example, employers may wish to confirm that vendors have evaluated whether an AI tool could cause a substantially lower selection rate for individuals with a protected characteristic, such as race or sex. Employers also should have their vendor contracts reviewed by legal counsel.
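For context, the kind of "substantially lower selection rate" check described above is commonly expressed through the long-standing four-fifths (80%) rule of thumb: a group whose selection rate is less than 80% of the highest group's rate may warrant a closer look for adverse impact. The sketch below is a minimal, illustrative way to run that comparison on hiring data; the data format, column names, and 0.8 threshold are assumptions for this example, not a prescribed format from the EEOC or any particular vendor, and such a rule of thumb is not a substitute for legal advice.

```python
# Illustrative four-fifths (80%) rule check on selection data.
# The record format ("group", "selected") and the 0.8 threshold are
# assumptions for this sketch, not a standard required by the EEOC.

from collections import defaultdict

def selection_rates(records):
    """Compute the selection rate (selected / total applicants) for each group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        if rec["selected"]:
            selected[rec["group"]] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(records, threshold=0.8):
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: (rate, rate / best < threshold) for g, rate in rates.items()}

# Example: 100 applicants in group A (40 selected), 100 in group B (25 selected).
data = (
    [{"group": "A", "selected": i < 40} for i in range(100)]
    + [{"group": "B", "selected": i < 25} for i in range(100)]
)
for group, (rate, flagged) in four_fifths_check(data).items():
    print(f"{group}: selection rate {rate:.0%}, flagged: {flagged}")
# Group B's 25% rate is 62.5% of group A's 40% rate, below the 80% mark, so it is flagged.
```

A flag from a check like this does not by itself establish discrimination; it simply identifies a disparity that the employer and vendor should investigate further with counsel.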

Establish clear guidelines. Employers should establish clear guidelines on how AI technology is used in the workplace. These could include, among other things, the purpose of each AI tool, procedures for data collection and analysis, and evaluation procedures to monitor fairness in AI decision-making.

Evaluate and adjust AI technology. Employers should be aware of the potential for AI to be biased, particularly if its algorithms were trained on biased data. Businesses should regularly evaluate AI tools’ performance to identify potential biases, confirm the accuracy of the underlying data, and ensure compliance with Federal, State, and local laws and regulations. If discrepancies are identified, employers should work with vendors to adjust the algorithms or data sets, or explore new AI technologies that reduce the risk of discrimination.

Train employees. As with any other work process or tool, employees must be trained to use and work alongside AI technology effectively. Training in areas such as data literacy, analytics, and unconscious bias can help employees use AI properly.

Do not forget to accommodate. Employers using AI are still required under disability and religious discrimination laws to provide reasonable accommodations to applicants and employees. Failing to provide accommodations could result in a qualified individual being unfairly or inaccurately rated by an algorithm monitoring work performance, or being unable to use the AI technology to apply or interview for a job. Employers should be sure that any AI tools they adopt can accommodate qualified individuals with disabilities to prevent unlawful discrimination.

EEOC's "Algorithm" for AI in the Workplace

Without question, the rapid advancement of artificial intelligence (AI) has certainly made things easier for employers. For example, human resource departments often use AI technologies to streamline tasks, such as resume screening, candidate shortlisting, facilitating employee onboarding, and tracking employee performance.

However, use of AI-powered tools in hiring and other employment decisions must still comply with Federal equal employment opportunity laws. That is, AI-inspired employment decisions cannot discriminate against applicants or employees based on race, color, religion, sex, national origin, age (40+), disability, or any other legally protected characteristic or activity.

Recently, the EEOC released technical guidance to address the intersection of AI and workplace discrimination, and to clarify that employers must ensure that AI technologies are in compliance with Federal law.

Here are a few ways for employers to mitigate potential risks and safeguard against AI as a high-tech pathway to discrimination while effectively leveraging the benefits of AI technologies:

Do not rely on AI technology entirely. While AI can automate various HR processes, human oversight is critical to helping prevent potential biases, errors, or unintended consequences. Employees should still use critical thinking skills, based on their knowledge, expertise, and experience, for input into decisions.

Work with vendors. The EEOC has made clear that employers could be on the hook for vendor-designed or administered technologies that result in discriminatory employment decisions. Employers should work with third-party vendors to ensure that AI systems comply with Federal law. For example, employers may wish to confirm that vendors have taken steps to evaluate whether the AI tool could cause a substantially lower selection rate of individuals with protected characteristics, such as race or sex. Employers also should have their contracts with vendors reviewed by legal counsel.

Establish clear guidelines. Employers should establish clear guidelines on how AI technology is utilized in the workplace. This could include, but is not limited to, defining the purpose of using AI tools, procedures for data collection/analysis, and evaluation procedures to monitor the fairness in AI decision-making.

Evaluate and adjust AI technology. Employers should be aware of the potential for AI to be biased, particularly if the algorithms used were based on biased data. It is important that businesses regularly evaluate AI tools’ performance to identify any potential biases, ensure accuracy of data and compliance with Federal, State, and local laws and regulations. If discrepancies are identified, employers should work with vendors to adjust AI algorithms or data sets, or explore new AI technologies that decrease the risk of discrimination.

Train employees. As with any other work process or tool, employees must be trained effectively to utilize and work alongside AI technology. Training employees in areas such as data literacy, analytics, and unconscious bias may assist employees in utilizing AI properly.

Do not forget to accommodate. Employers using AI are still required under disability and religious discrimination laws to provide reasonable accommodations to applicants and employees. Failure to provide accommodations could result in a qualified individual being unfairly or inaccurately rated by an algorithm monitoring work performance or being unable to use the AI technology to apply/interview for a job. Employers should be sure that any AI tools adopted can accommodate qualified individuals with disabilities to prevent unlawful discrimination.

EEOC's "Algorithm" for AI in the Workplace

Without question, the rapid advancement of artificial intelligence (AI) has certainly made things easier for employers. For example, human resource departments often use AI technologies to streamline tasks, such as resume screening, candidate shortlisting, facilitating employee onboarding, and tracking employee performance.

However, use of AI-powered tools in hiring and other employment decisions must still comply with Federal equal employment opportunity laws. That is, AI-inspired employment decisions cannot discriminate against applicants or employees based on race, color, religion, sex, national origin, age (40+), disability, or any other legally protected characteristic or activity.

Recently, the EEOC released technical guidance to address the intersection of AI and workplace discrimination, and to clarify that employers must ensure that AI technologies are in compliance with Federal law.

Here are a few ways for employers to mitigate potential risks and safeguard against AI as a high-tech pathway to discrimination while effectively leveraging the benefits of AI technologies:

Do not rely on AI technology entirely. While AI can automate various HR processes, human oversight is critical to helping prevent potential biases, errors, or unintended consequences. Employees should still use critical thinking skills, based on their knowledge, expertise, and experience, for input into decisions.

Work with vendors. The EEOC has made clear that employers could be on the hook for vendor-designed or administered technologies that result in discriminatory employment decisions. Employers should work with third-party vendors to ensure that AI systems comply with Federal law. For example, employers may wish to confirm that vendors have taken steps to evaluate whether the AI tool could cause a substantially lower selection rate of individuals with protected characteristics, such as race or sex. Employers also should have their contracts with vendors reviewed by legal counsel.

Establish clear guidelines. Employers should establish clear guidelines on how AI technology is utilized in the workplace. This could include, but is not limited to, defining the purpose of using AI tools, procedures for data collection/analysis, and evaluation procedures to monitor the fairness in AI decision-making.

Evaluate and adjust AI technology. Employers should be aware of the potential for AI to be biased, particularly if the algorithms used were based on biased data. It is important that businesses regularly evaluate AI tools’ performance to identify any potential biases, ensure accuracy of data and compliance with Federal, State, and local laws and regulations. If discrepancies are identified, employers should work with vendors to adjust AI algorithms or data sets, or explore new AI technologies that decrease the risk of discrimination.

Train employees. As with any other work process or tool, employees must be trained effectively to utilize and work alongside AI technology. Training employees in areas such as data literacy, analytics, and unconscious bias may assist employees in utilizing AI properly.

Do not forget to accommodate. Employers using AI are still required under disability and religious discrimination laws to provide reasonable accommodations to applicants and employees. Failure to provide accommodations could result in a qualified individual being unfairly or inaccurately rated by an algorithm monitoring work performance or being unable to use the AI technology to apply/interview for a job. Employers should be sure that any AI tools adopted can accommodate qualified individuals with disabilities to prevent unlawful discrimination.

EEOC's "Algorithm" for AI in the Workplace

Without question, the rapid advancement of artificial intelligence (AI) has certainly made things easier for employers. For example, human resource departments often use AI technologies to streamline tasks, such as resume screening, candidate shortlisting, facilitating employee onboarding, and tracking employee performance.

However, use of AI-powered tools in hiring and other employment decisions must still comply with Federal equal employment opportunity laws. That is, AI-inspired employment decisions cannot discriminate against applicants or employees based on race, color, religion, sex, national origin, age (40+), disability, or any other legally protected characteristic or activity.

Recently, the EEOC released technical guidance to address the intersection of AI and workplace discrimination, and to clarify that employers must ensure that AI technologies are in compliance with Federal law.

Here are a few ways for employers to mitigate potential risks and safeguard against AI as a high-tech pathway to discrimination while effectively leveraging the benefits of AI technologies:

Do not rely on AI technology entirely. While AI can automate various HR processes, human oversight is critical to helping prevent potential biases, errors, or unintended consequences. Employees should still use critical thinking skills, based on their knowledge, expertise, and experience, for input into decisions.

Work with vendors. The EEOC has made clear that employers could be on the hook for vendor-designed or administered technologies that result in discriminatory employment decisions. Employers should work with third-party vendors to ensure that AI systems comply with Federal law. For example, employers may wish to confirm that vendors have taken steps to evaluate whether the AI tool could cause a substantially lower selection rate of individuals with protected characteristics, such as race or sex. Employers also should have their contracts with vendors reviewed by legal counsel.

Establish clear guidelines. Employers should establish clear guidelines on how AI technology is utilized in the workplace. This could include, but is not limited to, defining the purpose of using AI tools, procedures for data collection/analysis, and evaluation procedures to monitor the fairness in AI decision-making.

Evaluate and adjust AI technology. Employers should be aware of the potential for AI to be biased, particularly if the algorithms used were based on biased data. It is important that businesses regularly evaluate AI tools’ performance to identify any potential biases, ensure accuracy of data and compliance with Federal, State, and local laws and regulations. If discrepancies are identified, employers should work with vendors to adjust AI algorithms or data sets, or explore new AI technologies that decrease the risk of discrimination.

Train employees. As with any other work process or tool, employees must be trained effectively to utilize and work alongside AI technology. Training employees in areas such as data literacy, analytics, and unconscious bias may assist employees in utilizing AI properly.

Do not forget to accommodate. Employers using AI are still required under disability and religious discrimination laws to provide reasonable accommodations to applicants and employees. Failure to provide accommodations could result in a qualified individual being unfairly or inaccurately rated by an algorithm monitoring work performance or being unable to use the AI technology to apply/interview for a job. Employers should be sure that any AI tools adopted can accommodate qualified individuals with disabilities to prevent unlawful discrimination.

EEOC's "Algorithm" for AI in the Workplace

Without question, the rapid advancement of artificial intelligence (AI) has certainly made things easier for employers. For example, human resource departments often use AI technologies to streamline tasks, such as resume screening, candidate shortlisting, facilitating employee onboarding, and tracking employee performance.

However, use of AI-powered tools in hiring and other employment decisions must still comply with Federal equal employment opportunity laws. That is, AI-inspired employment decisions cannot discriminate against applicants or employees based on race, color, religion, sex, national origin, age (40+), disability, or any other legally protected characteristic or activity.

Recently, the EEOC released technical guidance to address the intersection of AI and workplace discrimination, and to clarify that employers must ensure that AI technologies are in compliance with Federal law.

Here are a few ways for employers to mitigate potential risks and safeguard against AI as a high-tech pathway to discrimination while effectively leveraging the benefits of AI technologies:

Do not rely on AI technology entirely. While AI can automate various HR processes, human oversight is critical to helping prevent potential biases, errors, or unintended consequences. Employees should still use critical thinking skills, based on their knowledge, expertise, and experience, for input into decisions.

Work with vendors. The EEOC has made clear that employers could be on the hook for vendor-designed or administered technologies that result in discriminatory employment decisions. Employers should work with third-party vendors to ensure that AI systems comply with Federal law. For example, employers may wish to confirm that vendors have taken steps to evaluate whether the AI tool could cause a substantially lower selection rate of individuals with protected characteristics, such as race or sex. Employers also should have their contracts with vendors reviewed by legal counsel.

Establish clear guidelines. Employers should establish clear guidelines on how AI technology is utilized in the workplace. This could include, but is not limited to, defining the purpose of using AI tools, procedures for data collection/analysis, and evaluation procedures to monitor the fairness in AI decision-making.

Evaluate and adjust AI technology. Employers should be aware of the potential for AI to be biased, particularly if the algorithms used were based on biased data. It is important that businesses regularly evaluate AI tools’ performance to identify any potential biases, ensure accuracy of data and compliance with Federal, State, and local laws and regulations. If discrepancies are identified, employers should work with vendors to adjust AI algorithms or data sets, or explore new AI technologies that decrease the risk of discrimination.

Train employees. As with any other work process or tool, employees must be trained effectively to utilize and work alongside AI technology. Training employees in areas such as data literacy, analytics, and unconscious bias may assist employees in utilizing AI properly.

Do not forget to accommodate. Employers using AI are still required under disability and religious discrimination laws to provide reasonable accommodations to applicants and employees. Failure to provide accommodations could result in a qualified individual being unfairly or inaccurately rated by an algorithm monitoring work performance or being unable to use the AI technology to apply/interview for a job. Employers should be sure that any AI tools adopted can accommodate qualified individuals with disabilities to prevent unlawful discrimination.

EEOC's "Algorithm" for AI in the Workplace

Without question, the rapid advancement of artificial intelligence (AI) has certainly made things easier for employers. For example, human resource departments often use AI technologies to streamline tasks, such as resume screening, candidate shortlisting, facilitating employee onboarding, and tracking employee performance.

However, use of AI-powered tools in hiring and other employment decisions must still comply with Federal equal employment opportunity laws. That is, AI-inspired employment decisions cannot discriminate against applicants or employees based on race, color, religion, sex, national origin, age (40+), disability, or any other legally protected characteristic or activity.

Recently, the EEOC released technical guidance to address the intersection of AI and workplace discrimination, and to clarify that employers must ensure that AI technologies are in compliance with Federal law.

Here are a few ways for employers to mitigate potential risks and safeguard against AI as a high-tech pathway to discrimination while effectively leveraging the benefits of AI technologies:

Do not rely on AI technology entirely. While AI can automate various HR processes, human oversight is critical to helping prevent potential biases, errors, or unintended consequences. Employees should still use critical thinking skills, based on their knowledge, expertise, and experience, for input into decisions.

Work with vendors. The EEOC has made clear that employers could be on the hook for vendor-designed or administered technologies that result in discriminatory employment decisions. Employers should work with third-party vendors to ensure that AI systems comply with Federal law. For example, employers may wish to confirm that vendors have taken steps to evaluate whether the AI tool could cause a substantially lower selection rate of individuals with protected characteristics, such as race or sex. Employers also should have their contracts with vendors reviewed by legal counsel.

Establish clear guidelines. Employers should establish clear guidelines on how AI technology is utilized in the workplace. This could include, but is not limited to, defining the purpose of using AI tools, procedures for data collection/analysis, and evaluation procedures to monitor the fairness in AI decision-making.

Evaluate and adjust AI technology. Employers should be aware of the potential for AI to be biased, particularly if the algorithms used were based on biased data. It is important that businesses regularly evaluate AI tools’ performance to identify any potential biases, ensure accuracy of data and compliance with Federal, State, and local laws and regulations. If discrepancies are identified, employers should work with vendors to adjust AI algorithms or data sets, or explore new AI technologies that decrease the risk of discrimination.

Train employees. As with any other work process or tool, employees must be trained effectively to utilize and work alongside AI technology. Training employees in areas such as data literacy, analytics, and unconscious bias may assist employees in utilizing AI properly.

Do not forget to accommodate. Employers using AI are still required under disability and religious discrimination laws to provide reasonable accommodations to applicants and employees. Failure to provide accommodations could result in a qualified individual being unfairly or inaccurately rated by an algorithm monitoring work performance or being unable to use the AI technology to apply/interview for a job. Employers should be sure that any AI tools adopted can accommodate qualified individuals with disabilities to prevent unlawful discrimination.