Protect Yourself in the Age of AI
AI is powerful. It's also being used by bad actors. Here's what you need to know to stay safe.
Deepfakes
What are deepfakes?
AI-generated videos, images, or audio that make it look like someone said or did something they never did. The technology is now good enough to fool most people.
Where you'll encounter them
- Fake celebrity endorsements selling products
- Political misinformation during elections
- Scam calls using cloned voices of family members
- Fake news videos on social media
How to spot deepfakes
- Unnatural facial movements or expressions
- Audio that doesn't sync with lip movements
- Strange lighting, or blurred edges around the face
- A general sense that something is "off" — trust your instincts and verify through other sources
AI-Powered Scams
Scammers now use AI to write flawless phishing emails, clone voices from short audio samples, and spin up convincing fake websites in minutes. Here are the most common AI-enhanced scams:
Voice cloning scams
"Grandma, I'm in trouble and need money." The voice sounds exactly like your grandchild — because AI cloned it from their social media videos.
Protection: Establish a family code word. Always verify through a second channel.
CEO fraud / Business email compromise
Emails that perfectly mimic your boss's writing style, asking you to transfer money or share sensitive information.
Protection: Verify unusual requests by phone. Don't trust email alone for financial decisions.
AI chatbot phishing
Fake customer service chats that seem helpful while stealing your login credentials or payment information.
Protection: Go directly to official websites. Don't click links in emails or messages.
Investment scams
AI-generated fake testimonials, deepfake celebrity endorsements, and convincing fake trading platforms.
Protection: If it sounds too good to be true, it is. Verify through official financial regulators.
Protecting Your Data
How AI uses your data
When you use AI tools, you're often sharing:
- Your questions and prompts
- Documents you upload
- Your writing patterns and preferences
- Information about your work and life
Some AI companies use this data to train future models; others don't. Check the privacy policy of each tool so you know which kind you're dealing with.
Best practices
- Don't share sensitive information — passwords, financial details, confidential work documents
- Read privacy settings — most tools let you opt out of training
- Use enterprise versions for work — they typically have better data protection
- Assume anything you type could become public — act accordingly
AI and Children
Risks for young people
- Inappropriate content — AI can generate anything
- AI companions — chatbots that form unhealthy relationships
- Homework fraud — using AI to cheat rather than learn
- Manipulation — AI chatbots giving dangerous advice
- Deepfake bullying — fake images or videos used to harass
What parents can do
- Know which AI tools your children use
- Have ongoing conversations about AI and fake content
- Set expectations about AI use for schoolwork
- Create an environment where they'll tell you if something goes wrong
- Model good verification habits yourself
Common Questions
How can I tell if a video or image is a deepfake?
Look for unnatural facial movements, audio sync issues, strange lighting, or blurred edges around the face. If something feels "off," trust your instincts and verify through other sources.

What should I do if I've been targeted by a scam?
Stop communication immediately. Report it to Action Fraud (0300 123 2040) and to your bank if money was involved. Change any compromised passwords. Don't be embarrassed — these scams are sophisticated.

Is it safe to use AI tools at work?
It depends on your company policy and what data you're sharing. Never paste confidential information into AI tools. Check whether your company has approved AI tools and use those.

How do I keep my children safe around AI?
Have ongoing conversations about AI and fake content. Set appropriate boundaries on AI tool usage. Model good verification habits. Create an environment where they'll tell you if something goes wrong.
Want to learn more?
Our courses include detailed safety guidance specific to your situation.