Process
To complete your Ethics Audit, you will work through five mini-scenarios, each inspired by real-world ethical dilemmas in AI and mental health tech. Each one will focus on a core area of the audit: data use, privacy, bias, automation, and user voice/safety. Each scenario will require you to decide:
- Should AI be used here or not?
- What are the risks involved?
- What ethical principles are at stake?
- Would human judgment improve the outcome?
- What changes would you recommend?
These questions correspond to the checklist of five ethical Do’s and Don’ts that management has provided for this task. You’ll then use these insights to build your Ethical AI Compliance Report for the MindBalance leadership team.
Step 1: Warm up
- Form a small team or work individually, depending on your training context.
- Review the five ethical “Do’s and Don’ts” checklist.
- Discuss and/or reflect: Which principles feel most difficult to apply in practice? Why?
BestTip: Try prompting the different AI chatbots listed in the resources section and compare their replies. For example, you could ask:
- “Explain in simple terms the difference between transparency and privacy.”
- “Tell me more about the latest official EU guidelines on ethics in artificial intelligence.”
Step 2: Some urgent case files are waiting for you in the audit dashboard!
Welcome to your internal audit dashboard!
As MindBalance’s Ethics & Compliance Officer(s), you have been assigned five active case files related to the upcoming expansion of the company’s AI-powered mental wellness app. These cases have been flagged for ethics review prior to rollout. Each case includes:
- A short internal incident description
- Context on how AI is involved
- Your assigned review task
You must assess each one through the lens of ethical AI principles:
- Transparency
- Privacy Protection
- Fairness & Non-Discrimination
- Responsibility & Accountability
- Positive Social Impact
2.1. The Nudge Engine
Subject: Behavior-based wellness reminders
Reported by: UI/UX Team (MindBalance)
Issue: The app sends users AI-generated nudges like “Take a moment to breathe” or “Log your feelings now,” based on behavioral predictions. While many users appreciate the reminders, few understand how or why they are triggered. Some say it feels “creepy.”
✅ This is a DO when…
⛔ This is a DON’T when…
Reflect: Are nudges enhancing well-being or undermining user autonomy by being opaque and unexpected?
Recommend: Note down your suggestion in the audit: how can the feature be kept (because of the DO) while the DON’T is corrected?
2.2. The Silent Listener
Subject: Passive voice analysis for stress detection
Reported by: R&D Department (MindBalance)
Issue: A new feature listens to the user’s voice while the app is open, detecting vocal stress signals through tone and rhythm. The feature is enabled by default, and many users are unaware it’s active.
✅ This is a DO when…
⛔ This is a DON’T when…
Reflect: Does this empower emotional support, or does it cross a privacy boundary that users didn’t consent to?
Recommend: Note down your suggestion in the audit: is there a way to keep this feature while addressing the users’ concerns?
2.3. The Missing Faces
Subject: Emotion recognition bias in facial detection
Reported by: QA & Testing Team (MindBalance)
Issue: Emotion recognition performs inconsistently across neurodivergent facial expressions. Internal tests show reduced accuracy for autistic individuals. Developers admit training data lacked diversity.
✅ This is a DO when…
⛔ This is a DON’T when…
Reflect: Can a tool that misinterprets emotions still be considered helpful, or is it reinforcing exclusion and bias?
Recommend: Note down your suggestion in the audit: how can they ensure that this new feature can be launched safely?
2.4. MindBot Needs Time Off
Subject: AI chatbot is the main point of user contact
Reported by: Support Team (external partner)
Issue: The MindBalance AI chatbot, “MindBot,” now handles nearly 91% of interactions, including those where users express distress, loneliness, or depressive symptoms. No human escalation process is in place for such cases.
✅ This is a DO when…
⛔ This is a DON’T when…
Reflect: Does full automation ensure 24/7 access, or does it risk overlooking human needs when it matters most?
Recommend: Note down your suggestion in the audit: how can AI and human support coexist without compromising users’ safety?
2.5. Mind Their Business!
Subject: Ignored user feedback on personalization & control
Reported by: Community Management Team (MindBalance)
Issue: Users have requested a “manual mode” and more explanations of how AI influences their experience. Currently, the app requires users to go through an AI-driven mode where the system actively guides them. Some users want an alternative mode that gives them full control during setup, without AI guidance. Despite repeated requests, the product team has not prioritized these features.
✅ This is a DO when…
⛔ This is a DON’T when…
Reflect: Are we designing with users or just for them? What happens when user voice is sidelined?
Recommend: Note down your suggestion in the audit: Why do you think users are insisting on having full control? Could it be because of the nature of the app? How can you address this in a clever way?
Finish by gathering all your findings in a table like this one, filled in with a mock-up example:

| Case File | Feature | Ethical Concern | Do / Don’t | Short Reason | Recommendation |
|---|---|---|---|---|---|
| 01 | Spending Insights | Automatically categorizes spending habits and shares trends with third-party partners. | Don’t | Violates user privacy and lacks informed consent. | Ensure data is anonymized and require explicit opt-in before sharing. |
| 02 | Loan Pre-Approval Bot | Pre-approves users for loans based on behavioural data without explanation. | Both | Could improve access but lacks transparency and user control. | … |
| 03 | Voice Authentication | Records and stores voice data with clear consent and a GDPR-compliant deletion option. | Do | … | … |
Step 3: Finalize and present your AI ethics audit findings
Now that you’ve completed your investigation of the five Compliance Case Files, it’s time to organize your findings into two professional deliverables that will be presented to the MindBalance leadership team:
3.1. Option 1: Your 2-Page Written Report
This written report is your formal ethics assessment. It should be clear, concise, and structured for decision-makers who need to grasp the issues quickly. Your report should include an introduction, a presentation of each compliance case file and its findings, and a final section with your recommendations.
BestTip: Use Mistral or Claude to summarize your audit with a prompt like:
- “Help me write a one-paragraph executive summary based on this ethics audit table…”
- “How can I summarize the findings to fit in half a page?”
- “How can I structure my writing and rephrase technical points in accessible language?”
3.2. Option 2: Your Slide Deck or Visual Briefing
This format is ideal if you prefer visual thinking or want to communicate your findings quickly to non-technical audiences. The goal is to highlight your top 3 ethical risks and offer practical solutions. Prepare 4–5 slides or sections. Your presentation should include: a title slide; one slide per issue for your top 3 ethical risks; a solutions section detailing your recommendations; and a final slide with one bold, clear takeaway to guide leadership, such as “If we want users to trust our AI, we must…”
BestTip: Not sure where to begin? Ask ChatGPT: “Help me turn these 3 ethical audit findings into a Canva slide deck outline.” or ask Claude “Find me a Canva template that would be suitable for my presentation by searching the web.”
3.3. Option 3: Feeling AI-powered? Do both!
If you’re ready to level up your skills and showcase your work in both written and visual formats, this option is for you. By preparing both a 2-page written report and a visual briefing, you’ll strengthen your ability to communicate ethical insights across audiences, from developers to leadership to external stakeholders.
It’s also a great way to explore how different tools (AI and human) can support different forms of storytelling.
BestTip (for AI-powered multitaskers): Let AI help you at every stage of creation: text, visuals, structure, and style.
For writing & coherence?
- Ask Mistral or Claude:
- “Merge my findings into a cohesive narrative with professional tone”
- “What headings should I use in a short internal ethics report?”
- “Can you give me a variation of this paragraph using a managerial vocabulary?”
For design & visuals?
- Ask Google Gemini or Le Chat:
- “Give me three slide layout ideas for communicating ethical risk visually.”
- “Suggest metaphors or symbols I can use for concepts like fairness or surveillance.”
- “What color scheme would make an ethics presentation feel trustworthy?”
For generating custom images?
- Ask DALL·E (from ChatGPT), Canva Text-to-Image, or Adobe Firefly:
- “A transparent AI robot offering a privacy policy to a user, flat style”
- “AI algorithm with one eye covered like Lady Justice, abstract illustration”
- “AI with a magnifying glass over a city, metaphor for digital surveillance, cartoonish vector”
- “Diverse group of people interacting with one AI assistant, inclusive workplace setting”