Essential Policies for Managing Gen AI in Your Workplace


Generative AI is quickly becoming a crucial part of today’s organisational landscape. Whether your organisation actively uses AI or not, it’s essential to have policies in place to manage both its opportunities and risks. This blog provides a practical guide for creating these necessary policies.

Gen AI tools like ChatGPT are becoming more accessible, impacting many areas of organisations—from operations to external communications. Boards and Executives need to recognise that a comprehensive approach is needed to ensure AI is used ethically, effectively, and in alignment with the organisation’s values.

Internal Policies for Staff Use of AI

Employees are already experimenting with AI tools like ChatGPT, so it’s crucial to have policies outlining responsible use. Think of these policies like ‘social media use’ guidelines—balancing the benefits of innovation against risks such as privacy breaches or reputational damage.

Opportunities and Risks: Policies should encourage AI exploration while addressing security, privacy, and ethical concerns.

Resources to Consider:

  • The UK government’s official guidance on how civil servants should use this technology

Procurement Policies for AI Tools

With numerous vendors entering the AI space, guidelines for evaluating potential tools are essential. Executives need procurement policies grounded in industry best practice so that AI investments align with organisational strategy.

Public Statements on AI Use

Stakeholders may want to know how your organisation uses AI. Publishing your internal AI policies, or a public statement summarising them, supports transparency. In sectors such as journalism, reassurance about AI’s ethical use is particularly important.

Recent research from the Centre for Data Ethics and Innovation found that most people are comfortable with AI being used in the public sector for simple tasks, provided a human checks the results and takes responsibility for them.

Policies for AI in Digital Services

Gen AI can enhance digital services, but clear policies are needed to ensure it is deployed ethically and effectively. If your organisation already has a machine learning policy, now is the time to update it to cover generative models.

Focus Areas:

  • Responsible development guidelines
  • Data privacy and ethical considerations

Guidelines for AI Use by Applicants

AI-generated content is now common in job applications, tenders, and other submitted documents. Executives need to create guidelines that clarify how applicants may use AI tools in their submissions.

A few other interesting things on the governance of generative AI

  • Great research from Harvard Law School surveying governance of AI in corporate organisations
  • Microsoft offers various kinds of governance support for organisations using their AI products 
  • credo.ai is a tool for tracking whether the AI systems deployed in an organisation comply with its policies. You could arguably do this with a spreadsheet, but it’s notable that dedicated tooling now exists for the task
  • Wikipedia policy for editors on LLM use

Boards and Executives need to understand that having clear AI policies is essential for guiding staff, managing risk, and ensuring transparency. By developing specific guidelines for staff use, procurement, digital services, and possibly public statements, your organisation can position itself to leverage AI effectively while upholding ethical standards.

Start by reviewing existing AI policies, considering areas like staff use and procurement, and adapting them to include Gen AI. Effective AI governance helps foster innovation while protecting your organisation’s values.

Key sources: The Civic AI Observatory, Responsible Artificial Intelligence Institute
