Generative AI is quickly becoming a crucial part of today’s organisational landscape. Whether your organisation actively uses AI or not, it’s essential to have policies in place to manage both its opportunities and risks. This blog provides a practical guide for creating these necessary policies.
Why You Need Organisational AI Policies
Gen AI tools like ChatGPT are becoming more accessible, impacting many areas of organisations—from operations to external communications. Boards and Executives need to recognise that a comprehensive approach is needed to ensure AI is used ethically, effectively, and in alignment with the organisation’s values.
Types of AI Policies Your Organisation Needs
Internal Policies for Staff Use of AI
Employees are already experimenting with AI tools like ChatGPT, so it’s crucial to have policies outlining responsible use. Think of these policies like ‘social media use’ guidelines: they balance the benefits of innovation against risks such as privacy breaches or reputational damage.
Opportunities and Risks: Policies should encourage AI exploration while addressing security, privacy, and ethical concerns. Examples include the UK government’s official guidance on AI use for civil servants.
Resources to Consider:
- The UK government’s official guidance on how civil servants should use this technology
- A sample policy from a smaller organisation covering ethical guidelines for AI use in campaigning
- One-Pager for Staff from London’s Office of Technology
- A sample policy from the Society for Innovation, Technology and Modernisation
- AI Policy Template from the Responsible AI Institute
Policies for Procuring Gen AI Products/Services
With numerous vendors entering the AI space, guidelines for evaluating potential tools are essential. Executives need to develop procurement policies based on industry best practices to ensure AI investments are aligned with organisational strategy.
Resources:
- Guidelines for Procurement of AI Solutions from the World Economic Forum
- Commentary on US public sector procurement of AI from the Centre for Democracy & Technology
- A snapshot of AI procurement challenges in government from GovLab
Public Statements on AI Use
Stakeholders may want to know how your organisation uses AI. For transparency, publishing internal AI policies or statements is helpful. In sectors like journalism, reassurance around AI’s ethical use is particularly important.
Examples:
- The Guardian’s approach to generative AI
- How WIRED Will Use Generative AI Tools
- Letter from the editor on generative AI, from the Financial Times
- Generative AI at the BBC
Recent research from the Centre for Data Ethics and Innovation found that most people are comfortable with AI being used in the public sector for simple tasks, as long as a human checks the results and takes responsibility for them.
Developing Digital Service Policies Involving AI
Gen AI can enhance digital services, but clear policies are needed to ensure ethical use and effective deployment. If your organisation already has a machine learning policy, now is the time to update it to cover generative models.
Focus Areas:
- Responsible development guidelines
- Data privacy and ethical considerations
Resources:
- Generative AI: 5 Guidelines for Responsible Development from Salesforce
- Seven Principles for ‘Responsible’ Generative AI by the UK’s Competition & Markets Authority
Policies for AI in Written Submissions
AI-generated content is now common in job applications, tenders, and other documents. Executives need to create guidelines to clarify how applicants should use AI tools for submissions.
Resource Example:
- Funders joint statement: use of generative AI tools in funding applications and assessment
A few other interesting things on the governance of generative AI
- Great research from Harvard Law School surveying governance of AI in corporate organisations
- Microsoft offers various kinds of governance support for organisations using their AI products
- credo.ai is a tool for tracking whether AI systems deployed in an organisation comply with its policies. You could perhaps do this with a spreadsheet, but it’s notable that a dedicated tool for this now exists
- Wikipedia policy for editors on LLM use
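To make the spreadsheet-style approach mentioned above concrete, here is a minimal sketch of what a compliance register might look like in code. The system names, fields, and review logic are all hypothetical illustrations, not credo.ai’s actual data model:

```python
# Illustrative sketch only: a spreadsheet-style register of AI systems
# and their compliance status. All names and fields are hypothetical.

from dataclasses import dataclass

@dataclass
class AISystem:
    name: str            # e.g. an internal chatbot or a vendor product
    policy: str          # which organisational policy governs it
    last_reviewed: str   # ISO date of the last compliance review
    compliant: bool      # outcome of that review

def non_compliant(register):
    """Return the names of systems that failed their last review."""
    return [s.name for s in register if not s.compliant]

register = [
    AISystem("HR screening assistant", "Staff AI use policy", "2024-01-10", True),
    AISystem("Marketing copy generator", "Public statements policy", "2023-11-02", False),
]

print(non_compliant(register))  # -> ['Marketing copy generator']
```

Even a register this simple surfaces the key governance question: which deployed systems have not been reviewed against which policy.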
Conclusion: Taking Action on AI Policies
Boards and Executives need to understand that having clear AI policies is essential for guiding staff, managing risk, and ensuring transparency. By developing specific guidelines for staff use, procurement, digital services, and possibly public statements, your organisation can position itself to leverage AI effectively while upholding ethical standards.
Start by reviewing existing AI policies, considering areas like staff use and procurement, and adapting them to include Gen AI. Effective AI governance helps foster innovation while protecting your organisation’s values.
Key sources: The Civic AI Observatory, Responsible Artificial Intelligence Institute
For a copy of the “AI Policy Template” by the Responsible AI Institute, click here.