Introduction
This template is a preliminary guide to creating an acceptable use policy for generative AI in your nonprofit organization. It includes checklists for each step, along with examples and process questions to consider.
As you create your generative AI acceptable use policy, you may want to visit The Generative AI Glossary for Business Leaders (From A-Z) to review commonly used terms.
1. Ethical AI Principles
This section of your policy will contain value statements that identify your organization’s ethical AI use principles. Ethical AI principles help guide decisions, norms, and behaviours when using generative AI. Your value statements should closely align with your organization’s mission, core beliefs, norms, culture, and interactions with stakeholders. They are based on an understanding of the benefits and challenges of AI use at your nonprofit. For examples of value statements that reflect an organization’s ethical AI principles, visit Google’s AI Principles page, or the Government of Ontario’s Principles for Ethical Use of AI.
Strive to accomplish the following as you create this section of your policy:
Articulate the core values guiding your organization's AI use
Address the "dividend of time" concept: How will your organization reinvest time saved by AI?
Define your stance on human-centred AI use
Review the following examples of ethical AI principles and evaluate how they relate to your organization’s value statements:
- Human-Centred: We use generative AI to augment human intelligence, experience, expertise, creativity, and decision-making. We reinvest the “dividend of time” or time savings into human-focused work such as relationship-building and mission-aligned work.
- Co-Intelligence: We commit to maintaining human oversight and decision-making when using generative AI tools. Humans are always in the loop and in charge.
- Bias: We will strive to minimize bias in generative AI outputs and ensure equitable outcomes for all. We'll carefully review AI-generated content through an equity lens, considering stereotypes around race, gender expression, sexual orientation, ethnicity, and disability.
- Equity & Access: Generative AI will not be used in ways that create new inequities internally or externally.
- Privacy and Confidentiality: Stakeholders' personally identifiable data is safeguarded through organizational data security practices and an understanding of the privacy policies of selected platforms, and internal document confidentiality is preserved when using generative AI tools.
- Accuracy: We uphold the highest standards of accuracy, truthfulness, and reliability in our work through careful human review of all generative AI outputs.
- Transparency: Generative AI use is disclosed when creating externally facing text, visual, or multimedia content to maintain stakeholder trust.
- Intellectual Property: We will ensure that we are not inadvertently violating IP rights.
- Sustainability: We consider the long-term sustainability and environmental impact of generative AI, especially large language models (LLMs), and will encourage sustainable generative AI use practices.
- Fair Work: We consider the working conditions of the low-wage workers who perform the critical data work that underpins the development of the AI systems we all use, and we encourage Fairwork principles.
Questions to consider:
How do these principles align with your organization's mission?
What unique ethical considerations – perhaps not listed above – arise from your organization’s specific work?
2. Organizational Norms
Organizational norms are the shared expectations that guide staff actions and interactions. These norms influence how work is performed and how staff communicate and collaborate, contributing to the organization's overall mission, culture, and operations. Your acceptable use policy should detail human-centred norms, use cases, and tool selection and provisioning.
Strive to incorporate the following as you create this section of your policy:
Establish criteria for selecting AI tools
Define how AI tools will be integrated into workflows
Specify approved and prohibited AI tools
Outline the process for tool provisioning and reimbursement
Define acceptable use cases for AI, such as:
Content creation and editing
Translation
Summarization
Meeting notes
Research
Brainstorming
Analysis
Examples of human-centred norms:
- Thoughtful Workflow Integration: Generative AI is strategically integrated into workflows to avoid over-reliance and to preserve human connection in the workplace. We understand which tasks we as humans do best and which generative AI can do well (with human oversight).
- Staying Up to Date: In the rapidly evolving field of generative AI, we encourage and support staff in keeping informed of new developments, tools, and ethical considerations through regular training sessions, workshops, and self-directed learning.
- Values-Aligned Tool Selection: The criteria for selecting tools will not be based solely on features or price, but also on the values and ethics of the vendor.
Questions to consider:
Which tasks should always involve human oversight?
How will you encourage staff to identify beneficial AI use cases?
How will you promote knowledge sharing about AI usage among staff?
3. Guardrails
Guardrails are rules that support the ethical principles in Section 1 and reduce risk. This section should provide specific guidance on how employees should use generative AI. It should also be included as part of onboarding and introduction to the tools, as well as ongoing training.
Strive to establish clear guardrails for the following as you create this section of your policy:
Equity and access in AI use
Bias detection and mitigation
Privacy protection
Confidentiality preservation
Ensuring the accuracy of AI outputs
Respecting intellectual property
Transparency in AI use
Promoting sustainability
Supporting Fairwork practices
Any other guardrails that surface in your discussion
Below are examples of commonly used guardrails in each of these areas:
Equity & Access
- Verify that AI-generated content and tools, especially job descriptions or resume screening tools, are not screening people out based on race, gender presentation, sexual orientation, ethnicity, or disability
- Ensure that all staff have access to generative AI tools and the support to use them
Bias
- Always review output from large language models (LLMs) with an equity lens
- Understand the potential bias that could appear in an LLM's output, and refine prompts to adjust the output
- Ensure that content from an LLM is reviewed by a diverse team
- Understand how LLMs (ChatGPT, Copilot, Claude, Gemini, etc.) are fine-tuned to address bias
Privacy
- Redact any stakeholder personally identifiable information (PII) such as names, email addresses, phone numbers, or other sensitive data connected to an individual in all prompts or documents shared with an LLM
- Create a "traffic light" system for data use in prompts and include examples (a minimal sketch in code follows this list):
- Green: Safe to use with AI
- Yellow: Use with caution, may require redaction
- Red: Never use with AI
- Opt out of having your documents or prompts train the LLM (ChatGPT is opt-out; Claude is opt-in)
- Only use PIPEDA-compliant voice-to-text apps if you are dealing with personally identifiable health (PIH) data. If your organization operates internationally/in the U.S., strive to ensure the apps you’re using are HIPAA-compliant and/or VAWA-compliant. (Note that ChatGPT, at the time of publication, is not compliant with HIPAA.)
- If you’re using meeting participation metrics generated by voice-to-text apps for internal staff performance reviews, disclose this to staff
- If you’re using generative AI voice-to-text apps for meeting notes, get consent from all attendees to record the meeting
- Decide whether the app is on or off by default, and train staff on how to start or stop it
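To help staff internalize the traffic light system, some teams pair it with a lightweight pre-prompt redaction check. The Python sketch below is a minimal illustration, not a vetted tool: the data categories, regex patterns, and function names are hypothetical, and simple patterns like these miss many identifiers (names, addresses, account numbers), so human review still applies.

```python
import re

# Hypothetical "traffic light" classification for a few example data types.
# Replace these categories with your organization's own data inventory.
DATA_CLASSIFICATION = {
    "published blog posts": "green",     # safe to use with AI
    "internal meeting notes": "yellow",  # use with caution; redact first
    "donor contact records": "red",      # never use with AI
}

# Simple regex patterns for two common kinds of PII. Patterns like these
# miss names and many other identifiers; prefer a vetted redaction tool.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def allowed_in_prompt(data_type: str) -> bool:
    """Return True unless the data type is classified red (or unknown)."""
    return DATA_CLASSIFICATION.get(data_type, "red") != "red"

def redact(text: str) -> str:
    """Replace detected PII with labeled placeholders before prompting."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize: Jane (jane@example.org, 555-123-4567) asked about dues."
    print(redact(prompt))
    # Prints: Summarize: Jane ([REDACTED EMAIL], [REDACTED PHONE]) asked about dues.
```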
Confidentiality
- Do not share confidential documents with LLMs and clearly designate which documents are confidential
- Define a “redaction” policy if confidential documents can be used with LLMs
- If generative AI voice-to-text notetaking apps are used for confidential meetings, disable sharing settings of transcript and summary
Accuracy
- Do not share output from an LLM or voice-to-text apps without a complete and thorough human review
- Use prompting techniques with an LLM to mitigate hallucinations, such as asking the LLM to work through the task step by step, to include sources, or to place in brackets any information it isn’t 100% confident in (an example prompt follows this list)
- Establish procedures for regular audits of generative AI outputs by use case. For example, if used for translation, the output should be reviewed by someone fluent in the language
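As an illustration of those prompting techniques, here is one hypothetical prompt template that asks the model to work step by step, cite sources, and bracket low-confidence claims. The wording is an assumed example, not a guaranteed safeguard, and it does not replace human review.

```python
# A hypothetical prompt template applying the accuracy techniques above.
# Adapt the task and rules to your use case; always review the output.
GROUNDED_SUMMARY_PROMPT = """\
Work through the following task step by step.

Task: Summarize the attached report for our newsletter.

Rules:
1. Cite the source passage for each factual claim.
2. Place any statement you are not fully confident in inside [square brackets].
3. If the source does not contain an answer, reply "not found in source".
"""
```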
Intellectual Property
- Do not share copyrighted materials that are not owned by your organization with LLMs
- Avoid using copyrighted text verbatim in your prompts – paraphrase or summarize concepts instead
- Avoid using prompts that request a specific artist’s style by name
- Review and modify LLM-generated content with your own unique expression before using it to avoid reproducing copyrighted text
- For sensitive topics, consider using open-source models or models trained on IP-cleared datasets
- The legal landscape around AI and IP is evolving. Keep up with the latest rulings and best practices
Transparency
- Become familiar with guidelines such as the Partnership on AI’s Responsible Practices for Synthetic Media framework
- Determine how to disclose the use of generative AI for externally facing content
- Don't present LLM outputs as entirely human-created work
Sustainability
- Encourage sustainable generative AI practices, such as using prompts sparingly
- Advocate for sustainable generative AI approaches in your field
Fair Work
- Become familiar with AI Fairwork principles and advocate for fair work practices in your field
Questions to consider:
How do these guardrails support your organization’s ethical principles?
What unique risks does your organization face that require specific guardrails?
4. Implementation and Evolution
This section of your policy identifies your organization’s adoption strategy, including how you will roll out AI use across your organization. It will also cover opportunities for peer learning, training, experimentation, and open learning. Lastly, this section will spell out guidance for updating and reviewing your policy on a regular basis as additional use cases, platforms, and systems are adopted.
Below is a sample plan for an acceptable AI usage policy rollout:
Identify a pilot group
Design basic training in AI prompting skills
Create opportunities for peer learning
Develop an internal AI playbook or hub:
Include workflow examples
Provide prompting tips
Create cheat sheets and other resources
Establish a policy review schedule
Create mechanisms for staff feedback
Define metrics for evaluating AI policy effectiveness
Plan for staying updated on legal and compliance issues
Allocate resources for continued adoption, including budget, staff time, and training
Additional questions to consider:
How will you encourage experimentation within safe boundaries?
What resources will you allocate for ongoing AI adoption and training?
How will you adapt the policy as AI technology evolves?
Resources
For a curated and annotated selection of articles, resources, and training materials to further develop your acceptable use policy, you may refer to this document.