AI & LLMs: What Nonprofits Need to Know

Posted Mar 27, 2026 09:15 AM
There's a lot of talk about Artificial Intelligence (AI), especially around how it can help staff teams with limited resources accomplish more. This article is written by Jordan Dwight, founder of J. Dwight Labs, which helps nonprofits leverage software to achieve their goals and further their missions. Jordan has worked in tech for 10+ years at Amazon and at startups.
This article seeks to demystify Artificial Intelligence (AI) so you can confidently use it to grow your programs (if you so choose). Over the past 3 years, Jordan has used many AI tools, read extensively about the technology, and even built things he couldn't build before (like his website in a day). He's a big believer in what AI can unlock, but he's also conscious of the risks and problems it can create in the world. Read on to learn how you can use it effectively, if you decide to use it at all.

What is AI and why does it matter to you?

Before we get started, no, this was not written by AI 😉

Large Language Models (LLMs), often just called models, are the technology that powers services like ChatGPT and Google's Gemini. First, let's look at how these models are created, at a high level, because understanding that will help you use them more confidently and know their limitations.
Step One: Model training. The first step in creating a model is training: the model processes enormous amounts of the Internet's public data to learn about the world. This is the most costly step for model developers because it requires huge numbers of specialized computer chips (GPUs and TPUs) to process all that data. Think of this as the knowledge base of the LLM, similar to how we learn things and can draw from those memories in the future.

Step Two: Fine-tuning. After training, the model "knows" a tremendous amount, but it needs to be taught how to use that knowledge to be helpful to users. This is when thousands of examples of helpful and unhelpful interactions are fed to the model, often with human feedback steering the process. Progressively, the model learns how to respond to different topics and prompts, and also what not to do. (A prompt is simply what you type into the text box.) Responsible companies, like Anthropic, address safety here as well, such as teaching the model how to handle someone raising an illegal topic. This step truly defines the user experience when chatting with the LLM, whereas the first step determines what content the model can draw from.

Probability: The key insight for today is that LLMs are probabilistic, not deterministic. They predict, piece by piece, the most likely response to your prompt based on their training. While most LLMs are quite good at this across a range of topics, they can still make mistakes, and do so with confidence ("hallucination" is the term for that).
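If you're curious what "probabilistic" means in practice, here is a tiny toy sketch in Python. It is purely an illustration of sampling from learned probabilities: real LLMs use a neural network over tens of thousands of word pieces, not a hand-written lookup table, and the words and weights below are made up.

```python
import random

# Toy "model": for each word, the possible next words and how likely
# each one is, as if learned from a tiny pretend training set.
next_words = {
    "the": [("cat", 0.6), ("dog", 0.3), ("idea", 0.1)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
}

def predict_next(word, rng):
    """Sample the next word according to its learned probabilities."""
    choices, weights = zip(*next_words[word])
    return rng.choices(choices, weights=weights, k=1)[0]

# Because the choice is sampled, not fixed, repeated runs can differ,
# and an unlikely-but-wrong continuation is always possible.
rng = random.Random()
print([predict_next("the", rng) for _ in range(5)])
```

The point of the sketch: even a "good" probabilistic predictor will sometimes pick a low-probability continuation, which is one intuition for why confident-sounding mistakes happen.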

Jordan's advice: Trust your instincts when a model provides a response. It may sound confident, but if something smells fishy, you're probably right. Here are some tips for digging deeper:
  • Perplexity is a great tool for important research or situations where you cannot accept bad information. It breaks down information in a useful way and links to sources, so you can easily investigate primary sources yourself.
  • Try submitting the same prompt to a different model (Claude, ChatGPT, and Gemini are the big 3). If you get a similar answer, there’s a better chance you’re getting good information.
  • Resubmit your prompt, but tell the model you are doing something important where you need facts, and ask it to provide links to sources. Better context goes a long way toward improving outcomes with models.
In the coming months, Jordan will share more about how you can practically use AI at your organization.

Do you have feedback or ideas for a future article? Email jordan@jdwightlabs.com or visit jdwightlabs.com.