Key considerations for integrating generative AI at your organization

As a nonprofit leader, you may see adopting artificial intelligence (AI) as a future consideration for your organization. However, recent studies reveal a striking reality: generative AI tools like ChatGPT have already permeated the workplace, often without an official acceptable use policy in place.

The 2024 Work Trend Index Annual Report from Microsoft and LinkedIn found that 75% of knowledge workers, eager for improved productivity, are already using AI on the job, and that 78% of AI users are bringing their own AI tools to work. Meanwhile, in a 2024 study by Asana and Anthropic, 82% of employees said their organizations have not yet provided acceptable use guidelines or training.

What does this mean for your nonprofit? Your staff may already be using ChatGPT without your awareness, and while this adoption shows the potential to enhance productivity, it also poses significant risks without proper oversight.

Consider the implications: Without guidelines, AI-generated content might be used externally without proper vetting, risking your reputation. Unchecked AI use could also lead to unintended biases or ethical missteps that contradict your mission. 

This is where an interim acceptable use policy for generative AI becomes essential. It's about channelling the enthusiasm for AI into safe, effective, and human-centred practices for completing job tasks. Your policy sets ethical boundaries while enabling safe experimentation and training.

It is referred to as an interim policy because, as your organization adopts AI-powered systems for broader organizational use (e.g., internal systems, databases, program delivery), you will need to layer on more comprehensive policies covering data governance, ethical monitoring, and chatbot guidelines. Use this article as a starting point, and treat your generative AI acceptable use policy as a "living document," updating it regularly as technology evolves and new use cases emerge.

This article will guide you through the initial steps to create an acceptable use policy for a small to midsize nonprofit and to begin integrating generative AI into your workflow.

1: Ethical AI principles

Start by outlining the values that will guide your nonprofit's use of AI, reflecting your organization's mission and culture. This outline should also cover the specific benefits and ethical challenges of AI use for your nonprofit.

A key benefit of using generative AI is that it creates what's known as a "dividend of time": it reduces the time spent on labour-intensive tasks so staff can focus on mission-critical work that requires uniquely human skills.

Generative AI does not replace humans. Your policy should include an ethical principle that captures your organization's perspective on human-centred AI. For example, the Coastal Watershed Council, a small California-based nonprofit dedicated to environmental education, included the following value in its policy: "We use AI to reduce our time in front of screens in order to increase time outside educating stakeholders about the San Lorenzo River conservation."

Generative AI tools also come with ethical risks. For example, these tools are trained on internet-scraped information and can amplify bias – particularly concerning for nonprofits working with marginalized communities. Indiscriminate use could unknowingly perpetuate stereotypes or provide biased information conflicting with your mission. Your policy should articulate a principle about mitigating bias, such as: 

"We will strive to minimize bias in generative AI outputs and ensure equitable outcomes for all. We'll carefully review AI-generated content through an equity lens, considering stereotypes around race, gender expression, sexual orientation, ethnicity, and disability."

However, bias is not the only ethical challenge. Your policy should also address privacy, confidentiality, accuracy, and disclosure, along with the other principles included in the policy template. Strive to involve your team in adapting these principles to your organization's unique needs and mission.

2: Norms: Shared expectations 

Workplace norms help weave generative AI into the fabric of your nonprofit's culture. Think of this step as setting the stage for a new collaboration – one between your dedicated staff and some pretty impressive digital assistants. Your role is to guide this partnership in a direction that is always human-centred.

First, consider how AI fits into your workflow. The goal isn't to replace the human touch but to free up your team to focus on what they do best. Encourage critical thinking about tasks: Which could benefit from AI assistance, and which need irreplaceable human insight?

When it comes to choosing AI tools, remember that it's not just about bells and whistles. Features and price matter, but so do ethics and values. Ask yourself, "Does the company behind the tool share our commitment to making the world a better place?"

Your policy should also identify which tools are approved, how they will be provisioned, and the specific use cases they support. For a set of detailed questions to help you work through these topics with your team, visit the generative AI acceptable use policy checklist on HR Intervals.
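For example (an illustrative list, not a prescription), approved use cases at a small nonprofit might include drafting first versions of newsletters and social media posts, summarizing meeting notes, brainstorming program names, or outlining grant proposals, with a human always reviewing and editing the output before it is used.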

3: Guardrails: Boundaries for responsible AI use

Guardrails are the specific rules and practices that bring your ethical AI principles to life. Think of guardrails as safety features that keep your organization on track and empower your team to use AI tools responsibly.

Let's take privacy as an example. In the nonprofit world, we often handle sensitive information about donors, beneficiaries, and our work. A privacy guardrail might read: "Never share personally identifiable information (PII) in your prompts when using generative AI tools. PII includes names, email addresses, phone numbers, or any sensitive data connected to an individual."

This guardrail creates a clear boundary – PII is off-limits for AI tools – while still encouraging the use of AI for tasks that don't involve sensitive data. You might complement this with a "traffic light" system for content and/or data: green is safe to use with AI, yellow requires caution and possibly redaction, and red is never to be used with AI.
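For instance, the categories might look something like this (illustrative examples only; adapt them to your own data): green could cover content that is already public, such as published blog posts and program descriptions; yellow could cover internal documents, such as meeting notes, used only after names and identifying details are redacted; and red would cover donor records, client case files, and anything else containing PII.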

By setting up such guardrails, you're not just protecting sensitive information; you're also giving your team the confidence to explore generative AI's potential without fear of inadvertently causing harm. They know where the boundaries are, which allows them to work freely within them.

Remember, guardrails aren't about saying "no" to AI use. They're about saying "yes, and here's how to do it safely and ethically." By clearly communicating these guidelines and incorporating them into your AI training and onboarding processes, you're setting the stage for responsible AI adoption that amplifies your nonprofit's impact while staying true to your values. The generative AI acceptable use policy checklist on HR Intervals provides example guardrails; however, it's crucial to involve your team in customizing them to fit your specific needs.

4: Implementation and evolution

As you prepare to put your generative AI policy into action, it's time to think about how you'll roll it out. 

Consider starting with a pilot group: a small team of enthusiastic staff members who can test the waters, try out your new guardrails, and develop initial use cases. Think of this as creating a playground with a high fence – a safe space for experimentation and learning. As this group gains experience, they'll become your AI champions, helping to inspire others.

Generative AI can be easier to pick up than many other technologies because you interact with it in plain language. Provide your team with basic skills in crafting effective AI prompts, and create opportunities for peer learning where staff can share tips and success stories.
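A basic training example might look like this (a hypothetical prompt; adapt it to your own work): "You are helping a small environmental nonprofit. Draft a 150-word thank-you email to volunteers who joined our fall river cleanup. Use a warm, conversational tone, and do not include any names or personal details." Prompts like this work well because they give the tool a role, a task, a length, a tone, and a clear boundary.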

But remember, completing a training session is not the same as building a regular practice. The more your team engages with AI tools, the better they'll understand how to leverage them effectively.

To support this learning process, consider creating an internal AI playbook or hub. This could be as simple as a shared Google folder containing workflow examples, prompting tips, cheat sheets, and other resources. Remember, you don’t necessarily have to create everything from scratch – explore the many free resources available online for sample prompts, AI use case examples, insights, and learnings tailored specifically to nonprofits.

As AI technology evolves rapidly, so too should your policy. Set a regular schedule for reviewing and updating your AI guidelines. Create channels for staff feedback and evaluate the effectiveness of your organization’s AI use. Stay informed about legal and compliance issues related to AI use in your sector.

By approaching AI adoption with a spirit of openness, continuous learning, and collaboration, you're setting your nonprofit up for success in the AI age. This technology has the potential to significantly amplify your impact – and with a thoughtful, well-implemented interim policy, you'll be well-positioned to harness that potential responsibly.
