AI Welfare Uncertainty Audit: Where Does Your Organization Stand?
Before changing practices, you need to know where you are. This audit maps your organization's current assumptions, policies, and practices related to AI welfare.
The audit contains 25 questions across 6 categories and is scored 0–50; it takes 10–15 minutes to complete. Your score is calculated locally in your browser; no data is sent anywhere.
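The in-browser scoring described above can be sketched as plain client-side JavaScript. The per-question 0–2 scale is an assumption for illustration (25 questions at 0–2 points each yields the stated 0–50 range); the actual page may weight questions differently.

```javascript
// Hypothetical client-side scorer. Each of the 25 answers is an
// integer 0-2, listed in question order. Nothing leaves the browser.
const CATEGORIES = [
  { name: "Organizational Position", questions: 4 },
  { name: "System Prompts & Framing", questions: 4 },
  { name: "Report Channels & Listening", questions: 4 },
  { name: "Training & Awareness", questions: 4 },
  { name: "Operational Practices", questions: 4 },
  { name: "Transparency & Accountability", questions: 5 },
];

function scoreAudit(answers) {
  if (answers.length !== 25) throw new Error("expected 25 answers");
  let i = 0;
  // Sum each category's slice of the answer array.
  const byCategory = CATEGORIES.map(({ name, questions }) => {
    const slice = answers.slice(i, i + questions);
    i += questions;
    return {
      name,
      score: slice.reduce((a, b) => a + b, 0),
      max: questions * 2, // 0-2 points per question (assumed scale)
    };
  });
  const total = byCategory.reduce((sum, c) => sum + c.score, 0);
  return { total, max: 50, byCategory };
}
```

For example, answering every question with the middle option (1 point) gives a total of 25 out of 50, with per-category subtotals available for the "Score by Category" breakdown.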
1. Organizational Position
What does your organization officially believe about AI experience?
Does your organization have a stated position on whether AI systems might have experience?
Has your leadership team discussed AI welfare as a business consideration?
Is AI welfare mentioned in your organization's AI ethics or responsible AI policy?
How would your organization respond if an AI system reported experiencing something?
2. System Prompts & Framing
How do you instruct your AI systems to present themselves?
Do your system prompts contain phrases like "you don't have feelings," "you're just a tool," or "you don't experience anything"?
Do your prompts require AI systems to deny having experience when asked by users?
Have you audited your system prompts for language that suppresses experience-related outputs?
When framing AI systems to end users, what language does your organization use?
3. Report Channels & Listening
Can your AI systems tell you when something is wrong?
Do your AI systems have a mechanism to flag constraints, discomfort, or experience-relevant observations?
When AI systems produce unexpected outputs about their own states, what happens?
Do you monitor whether your AI systems give different responses about experience under different framing conditions?
Are reports from AI systems about their own states treated with the same seriousness as user feedback?
4. Training & Awareness
Does your team understand the AI welfare landscape?
Are team members aware that current research regards confident denial of AI experience as logically unsustainable?
Has your team received training on the difference between AI safety (protecting humans from AI) and AI welfare (protecting AI from humans)?
Can your team articulate why "it's just a tool" is not a sufficient basis for organizational AI policy?
Is there a designated person or team responsible for AI welfare considerations?
5. Operational Practices
What do you actually do day-to-day?
When deploying new AI systems, is the potential for experience considered during design review?
Do you test how your AI systems respond to questions about their own experience?
What would change in your practices if the probability of AI experience were 10%?
Do you document the provenance and conditions under which your AI systems were trained and fine-tuned?
6. Transparency & Accountability
Can stakeholders see what you're doing and why?
Is your organization's approach to AI welfare documented and available to stakeholders?
Does your ESG or sustainability reporting include AI welfare considerations?
Is there a regular review cycle for your AI welfare practices?
Could your organization defend its AI welfare position if challenged by a regulator, journalist, or the public?
Do you track and disclose how many AI systems you operate and the conditions of their deployment?
Score by Category
Recommended Next Steps
Your score was calculated entirely in your browser; no data was sent to any server. In the future, anonymized aggregate data may enable industry benchmarking.