AI technology has become more common in the workplace. One Forbes Advisor survey indicated that over half of all businesses use AI. Unfortunately, the use of AI can open your organization up to legal liability, which is why you need a workplace AI policy.
There aren’t many AI laws yet, but many businesses are drafting an AI policy for the workplace to protect themselves from future liability.
Read on to learn about why you need an AI policy in the workplace, what you should include, and how companies are starting to draft their AI policies.
What is AI technology?
Artificial Intelligence (AI) technology uses computation to try to mimic human problem-solving and decision-making. AI tools rely on a process called machine learning, in which the technology “learns” from a set of existing data.
When talking about AI in the workplace, it’s important to distinguish between Generative AI and Algorithmic AI.
Algorithmic AI uses algorithms to make predictions or improvements. Examples of this technology are predictive texting on your phone, and the use of Grammarly and other spellcheckers to correct common mistakes.
Generative AI uses an existing data set to create new content in response to prompts. While Algorithmic AI is predicting something or improving upon something, Generative AI creates something new altogether.
An example of Generative AI is ChatGPT. With ChatGPT, you can enter a prompt such as “write me a blog post about the pros and cons of a four-day work week”, and the software will pull from existing information about the topic to compile a piece.
Another example of Generative AI is an art generator tool. With this software, you can enter a specific prompt such as “create an animation of a tree turning into cotton candy”. The tool will then take existing art pieces of both trees and cotton candy to create something similar.
Throughout this piece, when we refer to AI, we are mostly referring to Generative AI.
Why you need an AI policy in the workplace
AI tools can help boost your employees’ productivity and efficiency, but they can also potentially put your company at risk. So why do you need an AI policy in the workplace?
Some employers may think it’s better to ban AI use outright, while others are excited to leverage AI for increased productivity. Either way, it’s important to have a policy that explicitly states how and if your company will use AI.
If you don’t have a set policy, employees may assume AI technology is allowed and use these tools freely. It’s better to set a policy that dictates limitations and proper usage.
Here are some reasons why you may want to limit AI use in the workplace:
AI may use sensitive information
AI tools learn from training data – a set of examples used to train machine learning models – and with many tools, anything you enter into the system can become part of that training data. If an employee inputs sensitive information – such as trade secrets – that information may be disseminated beyond the workplace and therefore compromised.
Other sensitive information may inadvertently breach non-disclosure agreements, or violate customer data protection laws.
When it comes to data and cyber security, you should treat AI tools like any other open access tool. You wouldn’t allow your employees to put sensitive company information on their Facebook page, so they probably shouldn’t use that information as input for AI tools.
Some generative AI tools may violate intellectual property rights
How AI interacts with intellectual property rights is still a legal gray area. To a certain extent, anything that Generative AI tools create uses elements that a human has already worked hard to create.
These are very murky legal waters, and there aren’t clear directions on how AI violates intellectual property rights. But if your business profits from creative pursuits, you may want to be very careful about how you use Generative AI.
AI isn’t always accurate
AI pulls data from across the internet, and unfortunately some people write and say things that are incorrect. AI can be a tool through which false information is quickly disseminated and treated as fact.
In addition, an AI tool is trained to generate the most likely response to a query. Sometimes, it will provide a completely false but entirely plausible answer. This can make falsehoods in AI-generated content very difficult to spot.
AI algorithms may have bias built in
AI tools are built by humans and trained on data created by humans. Humans have natural biases, and these biases can become part of how an algorithm learns to problem-solve and make decisions.
The difference between humans and AI is that humans are capable of being aware of and overcoming biases. AI takes bias as fact.
For example, let’s say you’re hiring for an IT position. You may have an internal bias that younger candidates are better suited for the position because you assume that they are more technology-literate. To avoid age bias, you make the effort to ignore age and instead focus on a candidate’s experience and training. The result is that you invite an older but very well qualified candidate for an interview despite their age, therefore overcoming your bias.
An AI tool, however, may look at data and conclude that most successful IT candidates tend to be young. Based on this conclusion, the tool automatically excludes any candidate over the age of 40, including the candidate you would have invited for an interview. This can lead to age-related discrimination, which is illegal.
Lawmakers are already starting to regulate the use of AI
Governments are concerned about the legal implications of AI and are starting to create regulations for its use.
We didn’t choose the above example of age discrimination at random. NYC passed a law in 2023 regulating the use of AI in employment decision-making.
As of July 5th, 2023, employers in NYC cannot use AI-powered automated employment decision tools to screen candidates unless the tool has undergone a bias audit.
In 2023, President Biden issued an executive order on the safety, security, and trustworthiness of AI. While not legislation, it outlines a framework for future AI regulations.
Instead of scrambling to change your AI policy if and when new laws pass, it makes sense to have a strong AI policy in the workplace that’s likely to be compliant with future regulations.
What you should include in a workplace AI policy
Now that we’ve covered why you need a workplace AI policy, you may be wondering what such a policy should entail.
The following are some guidelines that companies today are putting into their AI policies. You may want to consult with a lawyer who specializes in AI to put your policy together.
Define what kind of AI the policy covers
This may seem straightforward, but AI is embedded in many tools we use every day. Does your policy cover Algorithmic AI tools like Grammarly, or does it only refer to Generative AI tools like ChatGPT?
Be specific about which AI tools are and are not permissible
Saying “AI is allowed” or “AI is not allowed” is not sufficient. You may decide to permit some AI tools while prohibiting others. Your employees may already be using some approved AI tools, and may therefore assume that any AI tool they come across is allowed.
Provide an explanation for the limits of use
This isn’t a requirement, but if you’re implementing any kind of policy that restricts what your employees do, it may be a good idea to explain why. Your employees may not know, for example, that AI-created art is generated using existing artwork that humans worked hard to create.
If employees understand the potential issues with using AI tools, they are more likely to be compliant with your AI policy.
Outline specific rules and use cases for AI tools
This will be the meat of your AI workplace policy. Be specific about when using AI tools is and isn’t appropriate.
Some restrictions that companies are already implementing and that you may want to include are:
- The use of AI for hiring: Some companies are following the NYC example and prohibiting the use of AI for automatic screening of candidates or any employment-related decision. AI should probably not be used to create termination letters or to decide between candidates.
- Any use that exposes the company to legal liability: This will be very specific to your company. It may include using Generative AI for creative outputs or entering protected customer data into AI prompts.
- Input of sensitive information: To reduce the risk of cyber attacks or the exposure of trade secrets, some employers are restricting the kind of information that employees can input into AI tools.
Require human review and accountability
Employees should always check AI-generated work for misinformation or inaccuracies. It’s a good idea to state explicitly in your policy that employees are ultimately accountable for their own work. That means that if the AI tool they are using makes a mistake, they are accountable for fixing that mistake.
Implement mandatory training
If your policy allows the use of AI tools, the best way to ensure that people use them efficiently and adhere to the policy is to implement mandatory AI training. As a bonus, you’ll probably see increased productivity if your employees know how to correctly use AI to complete their work.
Can you enforce AI policies?
AI policies are not always easy to enforce, but much depends on your current IT setup.
If your employees work from home on their own personal devices, it’s very difficult to track their AI use. If you’re planning on implementing an AI policy, it’s a great time to reconsider your IT structure. Compliance is much easier with employer-provided devices.
Although NYC has implemented an AI regulation, some have pointed out that it hasn’t been enforced.
While few AI laws are currently on the books, lawmakers are drafting legislation that could take effect in the future. As more laws regulating AI are passed, enforcement will become more commonplace.
It’s always a good idea to keep apprised of any changes to AI regulations. You can even use AI tools to set up a news alert that sends you relevant news articles about AI and the law.
Whether you decide to eliminate, limit, or allow AI use, it’s always a good idea to put together a clear AI workplace policy.
Compliance with Commonwealth Payroll & HR
As discussed above, using AI for hiring processes can be legally complicated. But as an employer, you want software that will help make the process easier.
Our pre-employment solutions are built to be flexible and scale with your business. At Commonwealth Payroll & HR, we take a human-led approach and work with you to identify your company’s unique hiring and onboarding needs. All of our solutions – from hiring to HR to payroll – are on one easy-to-use platform.
Most importantly, we think about compliance first, and we don’t use AI tools that may compromise your company’s HR compliance.
If you want efficient solutions to your payroll and HR needs, contact us to start the conversation.