Module 1: Why AI Ethics Matter
AI Ethics and Safety
Nobody programmed Amazon's AI recruiting tool to be sexist. Nobody intended the outcome. The training data was biased, the system learned the bias, and it took years to notice. Amazon's recruiting team was sophisticated: if they could build a sexist AI accidentally, anyone can.
This is why AI ethics isn't a philosophical luxury. It's a practical necessity.
The Scale Problem
When a single human recruiter is biased, they affect dozens of candidates. When an AI system is biased, it affects millions. That's the fundamental ethics equation: AI multiplies both the benefits and the harms of any decision at unprecedented scale.
COMPAS, a recidivism risk-assessment algorithm used in US courts to inform bail and sentencing decisions, was found by ProPublica to falsely label Black defendants as high-risk at nearly twice the rate of white defendants. The system had influenced decisions for over a million people before the bias was publicly documented.
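The disparity ProPublica measured is a difference in false positive rates: among people who did not reoffend, how many were wrongly labelled high-risk in each group? A minimal sketch of that calculation, using made-up illustrative records rather than the real COMPAS data:

```python
# Each record: (group, labelled_high_risk, actually_reoffended).
# Hypothetical data chosen so group "A" has twice the false positive rate of "B".
records = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(group):
    # FPR = people wrongly labelled high-risk / all people who did NOT reoffend
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    false_positives = [r for r in non_reoffenders if r[1]]
    return len(false_positives) / len(non_reoffenders)

for g in ("A", "B"):
    print(f"group {g}: FPR = {false_positive_rate(g):.2f}")
```

On this toy data, group A's rate is double group B's even though the system was never told which group anyone belongs to: the disparity comes entirely from the data.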
Scale turns small errors into systemic injustice. That's why we can't treat AI ethics as an afterthought.
The "It's Just a Tool" Fallacy
You'll hear this constantly: "AI is just a tool. It's neutral. Ethics depend on how people use it."
This is wrong, and dangerously so. Here's why:
Tools have values embedded in their design. A hammer doesn't, but a recommendation algorithm that optimises for engagement over accuracy has a value baked in: attention matters more than truth. That's not neutral.
AI makes autonomous decisions. A spreadsheet doesn't deny your mortgage application. An AI system can and does, often without explanation. When a tool makes decisions, it needs ethical scrutiny.
Opacity obscures responsibility. When an AI denies someone a loan, who's responsible? The developer? The company? The data scientists? The training data? This ambiguity is a feature of AI systems, not a bug, and it's an ethical problem.
What AI Ethics Actually Covers
AI ethics isn't one thing. It's a cluster of interconnected issues:
- Bias and fairness: Who gets harmed when AI reflects societal prejudices?
- Privacy and surveillance: What happens when AI can track, identify, and predict everyone?
- Transparency: Can people understand and challenge AI decisions that affect them?
- Safety: How do we prevent AI systems from causing harm?
- Accountability: When AI causes damage, who's responsible?
- Labour and economics: What do we owe people whose livelihoods AI disrupts?
- Environmental impact: What's the carbon cost of training AI systems?
- Autonomy: When should AI make decisions, and when should humans?
This course covers all of these. Not as abstract philosophy, but as practical problems with real consequences.
Why You Should Care (Even If You're Not Building AI)
You don't need to build AI to be affected by AI ethics. You're already affected:
- Your social media feed is shaped by algorithms optimising for engagement
- Your credit score may incorporate AI-driven assessments
- Your job application may be screened by AI before a human sees it
- Your insurance premiums may be influenced by AI risk models
Understanding AI ethics makes you a better citizen, a better professional, and a harder person to exploit. Ignorance isn't protection; it's vulnerability.
---
Spot the Ethics Issue
I run a small online retail business and I'm considering using AI for these three applications:
1. Personalised product recommendations based on browsing history
2. AI-generated product descriptions for 500 items
3. Automated customer service chatbot for returns and complaints
For each, identify: the ethical considerations I should think about, the potential harms if implemented carelessly, and the specific safeguards I should put in place. Be practical: I'm a small business, not Google.
The Amazon Recruiting Case Study
Amazon's AI recruiting tool discriminated against women because it was trained on historical hiring data that reflected gender bias. Walk me through:
1. Exactly how this happened technically (in non-technical terms)
2. Why the developers didn't catch it sooner
3. What safeguards could have prevented it
4. How I can check for similar bias issues if I'm using AI in hiring or decision-making at my company
Be specific about practical steps, not just principles.
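One concrete way to check for bias in your own hiring pipeline is the "four-fifths rule", a screening heuristic from US employment guidelines: if one group's selection rate falls below 80% of the most-favoured group's rate, the process warrants investigation. A minimal sketch on hypothetical numbers (the figures below are invented for illustration, not drawn from the Amazon case):

```python
# Hypothetical outcomes from an AI resume screener.
def selection_rate(selected, applicants):
    return selected / applicants

def four_fifths_check(rate_group, rate_reference):
    # Flags potential disparate impact when one group's selection rate
    # is below 80% of the most-favoured group's rate.
    ratio = rate_group / rate_reference
    return ratio, ratio >= 0.8

men = selection_rate(selected=60, applicants=100)    # 0.60
women = selection_rate(selected=30, applicants=100)  # 0.30

ratio, passes = four_fifths_check(women, men)
print(f"impact ratio = {ratio:.2f}, passes four-fifths rule: {passes}")
```

Here the impact ratio is 0.50, well below the 0.8 threshold, so this screener would fail the check. The rule is a coarse first filter, not proof of bias, but it's a practical test a small company can run with nothing more than its own application logs.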
Personal AI Ethics Audit
Here are the AI tools I use regularly:
- ChatGPT for work emails and reports
- Google Maps for navigation
- Instagram (algorithm-driven feed)
- Spotify (AI-recommended playlists)
- My bank's app (fraud detection AI)
For each, explain: what ethical tradeoffs I'm making by using it, what data I'm providing, and what I should be aware of. Don't try to scare me: just inform me honestly.
1. List every AI system you interact with in a typical week (be thorough: include recommendations, navigation, autocomplete, spam filters, etc.)
2. For each, answer: Who benefits? Who could be harmed? What data is being used?
3. Pick the one that concerns you most and research it for 10 minutes
4. Write a one-paragraph assessment: is the tradeoff worth it?
Most people are stunned by how many AI systems they interact with daily without thinking about it. Awareness is the first step.
---
- AI multiplies both benefits and harms at unprecedented scale: small biases become systemic injustice
- AI is NOT a neutral tool: values are embedded in design choices, training data, and optimisation targets
- Real-world examples (Amazon recruiting, COMPAS sentencing) show these aren't theoretical concerns
- AI ethics covers bias, privacy, transparency, safety, accountability, labour, environment, and autonomy
- You don't need to build AI to need AI ethics literacy: you're already affected by these systems daily