AI is dramatically changing the funding landscape. Many funders are experiencing greater volumes of grant applications and are concerned about the pace of change. But take-up is varied. Last year, more than 50% of the Association of Charitable Foundations’ members said they were already experimenting with or using AI tools daily, while more than 40% reported they were “not currently planning to use AI at all”.
Much is likely to have changed even since then, as AI becomes more normalised in our personal and professional lives and more of us feel its pull. But the playing field is still uneven, and both charities and funders are having to learn on the go.
To support this learning, we’ve been collaborating with CAST and the Technology Association of Grantmakers (TAG) to explore how AI is changing funding application processes for both charities and funders. This article considers not just how funders might use AI, but how they respond to its growing use by grant-seekers and grantees, especially in funding application processes.
The problem
The integration of AI into funding applications introduces new dynamics for grantmakers committed to open and trusting grant-making:
Obscured authorship and intent: When applicants use generative AI (e.g. ChatGPT or Copilot) to write proposals, it becomes harder for funders to distinguish between organisational voice and machine-generated content. An application no longer necessarily represents what a person or organisation knows or can deliver, which undermines funders’ ability to judge values alignment, passion and authenticity.
Widening inequity: Although AI tools can lower barriers for some applicants, well-resourced organisations may still gain a disproportionate advantage through access to paid AI tools, specialist training, and time for iterative refinement. This risks entrenching or widening existing inequities, despite the appearance of increased accessibility.
Standardisation vs originality: AI-generated applications may become increasingly formulaic, making it harder to spot truly distinctive or community-led ideas.
Application overload: AI is already adding to the volume of applications, and when used badly it does not improve their quality. The ease of drafting also means more speculative applications are being submitted.
Due diligence challenges: AI-generated content may obscure key information or hide inconsistencies, complicating assessment and verification.
These risks do not exist in isolation; they intersect with and, in some cases, exacerbate long-standing concerns about application burden, accessibility, and bias in assessment.
The opportunity
Despite these concerns, AI also presents opportunities to improve application processes if introduced intentionally and ethically:
Reducing applicant burden: With guidance, AI tools could help applicants, especially those with fewer resources, to structure, draft or translate applications, potentially increasing access and confidence.
Streamlining internal reviews: Funders could use AI to help triage large volumes of applications, identify keywords, or cluster themes, thereby reducing time pressure and freeing staff up for deeper, more relational work.
Designing better forms: AI tools can help analyse where applicants struggle or misinterpret questions, informing more user-friendly application design.
Language accessibility: AI could support translation or plain-language editing, making application forms more inclusive for non-native English speakers or people with literacy challenges.
These opportunities are strongest when AI complements, rather than replaces, human judgement and applicant voice.
Short-term fixes
Funders can take immediate steps to manage AI-related risks while improving their application processes:
Revise application guidance: Include clear, transparent policies on whether (and how) applicants may use AI tools and how the funder will respond. Ideally, applicants should not be penalised for using AI, and funders should be explicit about this approach. (UKRI and Wellcome have published examples of such policies.)
Focus on the ‘why’, not just the ‘what’: Adapt application questions to better surface organisational values, context, and community connection – things AI struggles to simulate convincingly.
Provide ethical prompts: Help applicants use AI in thoughtful ways (e.g. drafting but not finalising responses), levelling the playing field for those unfamiliar with the tools.
Give prompt feedback: Tell applicants when applications are bland, vague or insufficiently evidenced, so that they can improve them, whether or not AI was the cause.
These steps reinforce a values-based assessment process while engaging constructively with emerging practice.
Longer-term solutions
As AI use becomes more embedded in the sector, funders will need to consider more strategic shifts in application practice:
Redesign assessment frameworks: Place more weight on relational knowledge, track record, and trust, rather than relying solely on written applications.
Invest in applicant support: Provide non-AI tools and coaching to help all applicants make a strong case, especially those from underrepresented communities.
Develop AI literacy in-house: Ensure teams have the skills and confidence to make informed judgements about how AI is used in applications and assessments.
Explore collective approaches: Collaborate with peers to develop ethical standards, support skills development and training, or even co-create sector-specific AI tools that reflect funder values.
These steps will help maintain integrity, equity and trust in application processes over time.
What not to do
The nature of AI calls for experimentation and exploration, and there will be mistakes along the way. However, several potential missteps should be avoided:
Don’t default to suspicion: Avoid penalising applicants for using AI without first understanding how and why they used it.
Don’t automate core decisions: Resist the temptation to fully outsource shortlisting or assessment to AI. Relational knowledge and context still matter.
Don’t ignore structural inequity: Recognise that access to AI tools is uneven; avoid making them an unspoken requirement for success.
Don’t stay silent: Failing to communicate your stance on AI risks confusion, distrust, and unequal interpretation among applicants.
Funders should also be wary of making the application process more difficult or onerous, shifting to an ‘invitation-only’ application process, or pausing the distribution of funds while they ‘figure out’ AI.
This is a pivotal moment for funders to reflect not just on their own use of AI, but on how application systems can evolve in line with Open and Trusting principles, ensuring that fairness, clarity and human judgement remain at the heart of decision-making.
Join our workshop: We’re inviting charities and funders to join our online workshop to help shape how AI is, and could be, used in relation to grant reporting.