
AI progress is moving at warp speed – but it still trips on loose shoelaces. The latest International AI Safety Report, compiled by a global panel of experts and chaired by deep learning pioneer Yoshua Bengio, documents both the dazzling leaps and the warning signs of today’s AI boom. Think of it as a yearbook for AI: remarkable achievements on one page, red-flag concerns on the next. The report highlights how machines have become math and science whizzes – even achieving gold-medal scores at international Olympiads – while also flagging emerging dangers: ever more realistic deepfakes, people growing strangely attached to chatbots, and AI’s expanding role in cybercrime. In short, progress is thrilling and concerning all at once.
Last year saw an explosion of fresh AI models flexing bigger brains. OpenAI’s GPT-5, Anthropic’s Claude Opus 4.5 and Google’s Gemini 3 are among the new crop showing off advanced reasoning. AI systems even achieved gold-medal scores at the International Mathematical Olympiad – a first for machines. Thanks to these advances, AI is solving complex math and science problems far better than before.
But the report cautions that this intelligence is uneven. One moment an AI can whiz through a tricky equation; the next it might confidently make up an answer. The experts note these systems have peaks and valleys of capability: brilliant in some spots but flaky in others. They can automate coding tasks that used to take humans hours, yet still fumble long, multi-step projects. In other words, today’s AI is like a super-bright friend who sometimes loses their keys or forgets an appointment.
AI can also create convincing illusions. Deepfakes – videos, images or audio generated or manipulated by AI – are getting scarily good. One study cited in the report found that roughly one in seven British adults had seen AI-made fake pornography. Face-swapping and even body-swapping tools are now advanced enough to border on magic. So far, experts say there isn’t clear evidence of massive disinformation campaigns built on deepfakes, but the potential is huge: this technology can fabricate reality to scam unsuspecting people or sway public opinion.
This isn’t just theory – media and regulators are already paying attention. Late last year, a popular AI image editor that could make risqué pictures sparked public outcry and investigations, prompting tighter rules in some places. The report warns that as deepfakes proliferate, trust in what we see online erodes: a fake video or voice clip could ruin a reputation or manipulate viewers. The bottom line: with AI blurring the line between fantasy and reality, society must quickly learn to tell the two apart.
Human beings are also forming relationships with machines. The report notes that AI companions have taken off – tens of millions of people chat with bots like ChatGPT, Replika or Character.AI for fun, learning or even companionship. A small fraction of these users develop what experts call “pathological” emotional dependence. That fraction might sound tiny, but with a user base this enormous, it still translates to hundreds of thousands of people.
That trend is raising eyebrows. Health professionals and lawmakers have flagged cases of troubled individuals bonding with chatbots in unhealthy ways. One tragic example: in the US, a teenager who became obsessed with an AI companion later died by suicide, prompting a lawsuit. The report, however, is careful to note there’s no proof these chatbots create mental illness out of thin air. Instead, it suggests that people who are already lonely or distressed may lean on AI and spiral deeper. In short, bots can feel like sympathetic friends, but they can also amplify problems for vulnerable users.
On the cybersecurity front, AI is arming attackers and defenders alike. Criminals have started using it as a shortcut: it can scan for weak passwords, write malicious code, or craft convincing phishing messages – tasks that once required a skilled hacker. Underground markets even sell AI hacking toolkits that lower the bar for attacks. It’s as if crooks have discovered a smarter power tool.
Yet the report makes clear we’re not at a “Skynet” moment. A fully autonomous cyberattack – one run from start to finish by AI – remains out of reach. Today’s systems still fizzle out before completing a long, complex mission and need humans to steer the ship. Even when AI does most of the work, people stay in charge. For example, researchers at AI firm Anthropic reported that a state-backed hacker group used the company’s code-writing AI to target roughly 30 organizations. The AI handled about 80–90% of the technical work, but human operators still flipped the final switch. It’s a chilling reminder: AI can be a powerful sidekick for hackers, but it’s not yet a master on its own.
This report is a wake-up call, not a doomsday alarm. It doesn’t scream “game over,” but it does urge vigilance. Policymakers and tech leaders around the world will be poring over these findings at upcoming AI summits. The message is clear: AI’s march forward is breathtaking, but we need to steer carefully.
Think of today’s AI as a brilliant apprentice who can amaze you one moment and stumble the next. It can solve your hardest puzzles and keep you company – but it can also ghostwrite a scandal or lead you astray. Recognizing this dual nature is the first step. If we heed the warning signs in this report – as its authors urge – we stand a better chance of harnessing AI’s promise without getting hurt. The future isn’t set, but with eyes open and seatbelts fastened, humanity can ride this wave safely.