Mitigating the Risk of Generative AI at Work: Steps to Creating Effective Company Policy

June 13, 2023

No matter what industry you’re in, it’s nearly impossible that the topic of ChatGPT or generative AI (artificial intelligence) hasn’t come up in your business discussions. It’s the shiny new object that both excites and intimidates us, and for some, downright scares us. The fears run the spectrum from AI replacing humans and eliminating jobs to AI becoming smarter than us and learning to think beyond what we prompt it to do.

When a trend goes as viral as this one has, the impact and risks are not always evident at first. Many companies are now restricting or outright banning the use of generative AI tools such as ChatGPT: Apple, JPMorgan Chase, Verizon, Amazon, Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, and Wells Fargo, to name a few.

Assessing the Risks of Generative AI

While the risks of using the technology are real, such as unintentionally sharing proprietary or confidential information, generative AI can also bring unprecedented automation capabilities to businesses, helping you achieve your goals faster.

However, any new technology calls for thoughtful consideration and policymaking to ensure its use is beneficial and safe. In this article, we lay out important steps to consider when creating workplace policies on the use of generative AI, so you can reap the benefits without exposing your company to unnecessary risk.

Steps to Consider When Crafting Your Policy on Generative AI at Work

  1. Understanding Generative AI: An Essential Foundation for Policy Development – Before formulating a comprehensive policy, it is paramount to gain a solid understanding of generative AI and its capabilities. These systems learn patterns from data and generate novel content based on what they have learned. With the ability to analyze massive datasets and mimic human creativity, generative AI holds immense potential. However, it is crucial to acknowledge that without proper training and supervision, these systems can inadvertently produce biased or inappropriate content.
  2. Identifying Objectives and Benefits: Guiding Policy Formation with Purpose – To shape your policy effectively, it is essential to define the specific objectives and benefits your organization aims to achieve through the use of generative AI. Whether your goals involve enhancing creativity, improving productivity, or streamlining processes, understanding these desired outcomes will guide policy implementation. Aligning the policy with the strategic vision of your company will maximize its impact and ensure a unified approach.
  3. Identifying Use Cases: Tailoring Policies to Optimize Business Operations – A critical step in policy development is identifying the specific use cases for generative AI within your organization. Whether it pertains to content generation for marketing, creative design, or process optimization, gaining a clear understanding of how generative AI can enhance your business operations is vital. This understanding will help shape your policy and enable you to leverage the technology effectively.
  4. Ethical Considerations: Integrating Ethical Safeguards from the Outset – Ethics must be at the forefront of your company policy from the beginning. Generative AI has the potential to amplify existing biases or inadvertently create inappropriate and harmful content. To mitigate these risks, your policy should include guidelines for training data selection, bias detection, and mitigation strategies. Striving for inclusivity, diversity, and fairness in the generated content is essential for responsible AI deployment.
  5. Data Privacy and Security: Safeguarding Sensitive Information – Generative AI often requires access to vast amounts of data for training purposes. Establishing robust protocols and guidelines to protect sensitive data and ensure compliance with privacy regulations is imperative. Clearly defining the types of data that can be used, implementing secure storage protocols, and establishing data retention policies will uphold the highest standards of privacy and security.
  6. Training and Supervision: Nurturing Expertise and Oversight – Developing comprehensive guidelines for the training and supervision of generative AI systems is crucial. Assigning responsibilities to trained personnel who have a deep understanding of the technology and can actively oversee its operations is essential. Implementing mechanisms for ongoing monitoring, evaluation, and adjustment of the AI models will ensure their accuracy, relevance, and ethical compliance.
  7. Intellectual Property Rights: Addressing Ownership and Legal Considerations – Considering the implications of generative AI on intellectual property rights is paramount. Deciding who owns the generated content and clarifying licensing agreements are crucial steps. Consulting legal experts to ensure compliance with copyright laws and safeguarding original content will protect your organization’s intellectual property.
  8. Transparency and Explainability: Building Trust through Open Communication – Promoting transparency and explainability within your policy is key. Clearly outlining the expectations for communicating the use of generative AI to stakeholders, including employees, customers, and partners, will build trust and mitigate concerns related to the blurring of human and AI-generated content. Distinguishing between machine-generated and human-generated content is essential for transparency.
  9. Employee Training and Awareness: Fostering Responsible AI Usage – Developing comprehensive training programs to educate employees about generative AI, its benefits, and its limitations is crucial. Fostering a culture of responsible AI usage by providing guidelines on appropriate utilization and ensuring employees are aware of potential biases and ethical implications will enhance the ethical deployment of generative AI within your organization.
  10. Continuous Evaluation and Improvement: Navigating an Evolving Landscape – Creating a company policy on generative AI is an ongoing process. Cultivating a culture of continuous evaluation and improvement is essential. Regularly reviewing the policy, seeking feedback from employees, and staying informed about emerging trends and best practices will allow you to adapt the policy to align with evolving ethical and technological considerations.


Crafting an effective company policy on the use of generative AI is crucial to harnessing its benefits while upholding ethical standards. By considering the ethical implications, promoting transparency, protecting privacy, and fostering employee awareness, businesses can confidently embrace generative AI as a tool for innovation and creative enhancement. As the technology continues to evolve, it’s imperative to remain vigilant, adapt to new challenges, and ensure that the policy aligns with the ethical values and goals of your organization. With a well-defined policy in place, companies can leverage the power of generative AI while mitigating potential risks and maximizing its potential for positive impact.

How Commonwealth Payroll & HR Keeps You Informed:

Stay up to date on the ever-changing guidelines, laws and news on PAYROLL, HR & BENEFITS! Subscribe to our informative monthly email that provides employer-relevant news, resources, and insights about Human Capital Management. You will also receive important compliance alerts containing updates to federal, state and local regulations affecting employers and employees.


*The information provided in this article does not, and is not intended to, constitute legal advice; instead, all information is for general informational purposes only. Information in this article may not constitute the most up-to-date legal or other information. This article contains links to other third-party websites provided only for the convenience of the reader.
