Module 2: The Automation Equation – What Determines If Your Job Gets Automated
Which Jobs Survive AI
Why do some jobs survive while others disappear? Because whether a job gets automated isn't just about whether AI can do it. It's about a specific equation involving capability, economics, and barriers. Understanding this equation is the single most useful framework for evaluating your own career risk.
The Automation Equation
AI Capability × Economic Incentive ÷ Barriers = Automation Likelihood
Each factor matters. Miss one and your prediction will be wrong.
Factor 1: AI Capability
Can AI actually perform the task at acceptable quality? "Acceptable" is doing heavy lifting in that sentence. It doesn't mean perfect. It means good enough for the use case.
Levels of capability:
- Fully capable now: Transcription, basic translation, data entry, simple customer queries, standard code generation, document summarisation
- Mostly capable (needs human review): Legal document analysis, medical image reading, financial report generation, marketing copy, code review
- Partially capable: Complex creative work, strategic planning, nuanced negotiation, novel research
- Not yet capable: Physical tasks in unstructured environments, deep relationship management, genuine innovation, moral judgment in ambiguous situations
The key insight: the "mostly capable" category is where the most disruption happens, because it allows one human to do the work of five. The AI handles the volume; the human handles the exceptions.
Factor 2: Economic Incentive
How much money does automation save? This is brutally simple math.
High incentive (automation pressure is intense):
- High labour costs relative to output (e.g., legal research at $200+/hour)
- High volume, repetitive tasks (e.g., processing thousands of insurance claims)
- 24/7 demand that requires shift work (e.g., customer support)
- Labour shortages making hiring difficult (e.g., cybersecurity analysts)
Low incentive (automation pressure is weak):
- Labour is already cheap relative to the cost of AI systems
- Low volume doesn't justify automation investment
- Customers explicitly want human interaction (e.g., luxury retail, therapy)
- The cost of errors exceeds the cost of human labour (e.g., surgical decisions)
The Klarna math: 700 agents × an average cost of ~$40,000-$50,000 each = $28-35 million annually. Their AI costs a fraction of that. The incentive was overwhelming.
The radiologist math: AI can flag abnormalities, but a misdiagnosis lawsuit costs millions. The liability barrier outweighs the cost savings. So AI augments radiologists rather than replacing them, making them faster, not redundant.
Factor 3: Barriers
What stands between AI capability and actual deployment?
Regulatory barriers. Healthcare, finance, law, and aviation have strict rules about who (or what) can make decisions. The FDA doesn't let an AI diagnose independently. Financial regulators require human oversight for major decisions. These barriers slow automation significantly.
Trust barriers. Would you let an AI represent you in court? Handle your child's education? Manage your parent's care? Even when AI is capable, human trust lags. This is especially strong in high-stakes, emotionally charged situations.
Infrastructure barriers. Many organisations run on legacy systems that can't easily integrate AI. A hospital using 1990s-era records systems can't deploy cutting-edge AI overnight, regardless of capability.
Union and political barriers. The Hollywood writers' strike demonstrated that organised labour can set limits on AI use. Similar dynamics exist in education, government, and transportation.
Physical barriers. AI can plan a renovation perfectly. It can't swing a hammer. Until robotics catches up to AI cognition, physical jobs retain significant protection.
Applying the Equation
Let's run three examples:
Data entry clerk: AI capability = high. Economic incentive = high (repetitive, volume-based). Barriers = low (minimal regulation, no trust requirement). Result: Very high automation risk.
Trial lawyer: AI capability = moderate (research, document prep). Economic incentive = moderate (expensive labour but high-stakes). Barriers = very high (regulatory, trust, accountability). Result: Transformation, not replacement. AI handles research; lawyer handles courtroom and judgment.
Plumber: AI capability = low for physical tasks. Economic incentive = moderate. Barriers = high (physical, infrastructure). Result: Low automation risk. AI might help with scheduling and diagnostics, but the core job is protected.
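The three assessments above can be run through the equation directly. This is a minimal sketch; the numeric scores are illustrative assumptions on a 1-10 scale, not measured values:

```python
# Sketch of the Automation Equation applied to the three example roles.
# All scores below are illustrative assumptions on a 1-10 scale.

def automation_risk(capability: float, incentive: float, barriers: float) -> float:
    """Risk = (AI Capability x Economic Incentive) / Barriers."""
    return (capability * incentive) / barriers

# (capability, incentive, barriers) -- assumed ratings, not data
examples = {
    "Data entry clerk": (9, 9, 2),  # high capability and incentive, low barriers
    "Trial lawyer":     (5, 5, 9),  # moderate capability/incentive, very high barriers
    "Plumber":          (2, 5, 8),  # low capability for physical work, high barriers
}

for role, (cap, inc, bar) in examples.items():
    print(f"{role}: {automation_risk(cap, inc, bar):.1f}")
```

With these assumed ratings, the clerk scores around 40 while the lawyer and plumber both land well under 3, matching the qualitative verdicts above.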
---
Take your task inventory from Module 1 and apply the equation:
| Task | AI Capability (1-10) | Economic Incentive (1-10) | Barriers (1-10) | Risk Score |
|------|---------------------|--------------------------|-----------------|----------|
| Task 1 | | | | |
| Task 2 | | | | |
| Task 3 | | | | |
For each task, calculate: (Capability × Incentive) ÷ Barriers = Risk Score
With each factor rated 1-10, scores can range from 0.1 to 100. Tasks scoring above 7 need immediate attention. Start planning how to shift your time toward lower-risk tasks.
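The scoring exercise can be sketched as a short helper that flags tasks above the threshold of 7. The task names and ratings here are placeholders; substitute your own Module 1 inventory:

```python
# Flag tasks from a task inventory whose risk score exceeds the threshold of 7.
# Task names and ratings are placeholders, not recommendations.

def risk_score(capability: float, incentive: float, barriers: float) -> float:
    """(Capability x Incentive) / Barriers, each factor rated 1-10."""
    return (capability * incentive) / barriers

tasks = [
    # (task, capability, incentive, barriers) -- placeholder ratings
    ("Invoice processing",    9, 8, 2),
    ("Weekly status reports", 8, 7, 3),
    ("Client negotiations",   4, 6, 8),
]

for name, cap, inc, bar in tasks:
    score = risk_score(cap, inc, bar)
    flag = "needs immediate attention" if score > 7 else "lower risk"
    print(f"{name}: {score:.1f} ({flag})")
```

Note that because barriers divide rather than subtract, even a highly capable, high-incentive task stays low-risk when barriers are strong.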
---
Key Takeaways
1. Automation likelihood = AI Capability × Economic Incentive ÷ Barriers
2. "Can AI do it?" isn't enough: economics and barriers determine whether it *will* be automated
3. The "mostly capable" zone is most disruptive: one human + AI replaces five humans
4. Regulatory, trust, physical, and political barriers significantly slow automation
5. Apply this equation to your specific tasks, not your job title, for an accurate risk assessment