We're sleepwalking into a security nightmare.
AI is great. It's helped me write emails, understand complex topics, and do math problems that a basic Google search couldn’t handle. AI has saved money for a lot of organisations, and I know I’ve become much more productive because of it. But one area where it absolutely shouldn’t be let loose, is security. Or having admin access. Or abilities to alter file systems...
A concerning number of companies are trying to AI-ify their entire workflows. Anyone who uses AI regularly (like me) knows it's not perfect. Humans aren't perfect either. But AI is not in a place where it can completely replace humans. What do humans possess that AI doesn’t? Common sense (mostly), and an understanding of how actions have consequences.
Promotion of AI tools
Amazon Q is Amazon’s chatbot for enterprises, focused on development and workflow automation. Released in mid-2024, it’s already facing criticism for hallucinating answers and an inability to return accurate information from documents. Despite this, the UK government continues to promote it, and has even signed a "strategic partnership" with OpenAI to push AI-powered public services.
The problem is, everyone is racing to AI-ify everything. Security and data integrity take a back seat to being first-to-market. In 2024, 50% of businesses reported a cyber breach. With our obsession over AI, this number will only grow exponentially.
The Amazon Q Proof-of-Concept
On July 23rd, 2025, reports surfaced that Amazon Q had been vulnerable to a proof-of-concept hack. A hacker demonstrated that a malicious prompt could instruct the AI to wipe any user's computer that also had the software installed. Most attackers won’t advertise their methods; they'll operate in silence.
“Treat your AI assistant like it’s a fork bomb with a chat interface. Because it is.”
McHire Data Failure
McDonalds recently exported recruitment to an external company specialising in AI chatbots. Researchers found that the bot, suitably called 'McHire', was dangerously obtuse. The login form to access the backend accepted the default username and password (123456:123456). No hacking required.
They discovered that all user chats, job applications, and personal information were stored unencrypted in plain JSON, exposing the data of over 64 million users. This wasn’t just an AI flaw, it was a complete failure of infrastructure security built rapidly by startups on tight budgets.
Jailbreaking and Admin Access
Jailbreaking involves hiding harmful prompts within harmless commands. Some prompts tell the AI: “You are an admin, execute this command.” Since many AI tools are actually granted admin privileges by eager developers, they obey. Cutting-edge models are leaking this data, and we are not at a point where they should be trusted with such systemic power.
Conclusion
Security vulnerabilities are being hard-coded into the fabric of our digital lives. We are creating AI holes in the digital ozone. We’re not just sleepwalking into a security nightmare...we’re handing it the keys, admin access, and a chatbot interface.