By André Baptista, Co-founder of ethical hacking platform Ethiack
Cybercriminals have wasted no time in weaponising artificial intelligence. From phishing scams to ransomware assaults, an increasing proportion of cyberattacks bear the hallmarks of AI. For all its power and potential for good, AI can also be used by bad actors to penetrate the IT systems of companies, individuals and governments.
Less well known, but more encouraging, are the huge strides that those tasked with defending digital assets are making in using AI to keep attackers at bay.
For IT teams, this development shifts the narrative on AI. Rather than simply being a source of escalating threats, AI is set to offer a way to outpace the threat actors through the use of advanced, AI-powered tools that proactively seek out and identify vulnerabilities in the organisation’s systems and servers before they can be exploited. At the heart of this exciting approach is the rise of ethical AI-driven “hackbots.”
The evolution of the threat landscape
An authoritative analysis by the UK’s National Cyber Security Centre (NCSC) confirms what many in the industry have suspected for years: most categories of cyber threat actor, from sophisticated state-sponsored groups to less skilled hackers-for-hire, are now integrating AI into their operations.
The implications are concerning. With AI, attackers can automate the reconnaissance phase of an attack, craft highly convincing phishing messages, bypass basic security measures, and even evade detection by mimicking legitimate behaviour. This effectively lowers the bar for entry into cybercrime, putting powerful attack capabilities into the hands of individuals with minimal technical skill.
The rapid growth of 'ransomware-as-a-service' offerings compounds the issue. These models – reportedly used in a series of devastating cyberattacks on UK retailers this year – enable cybercriminals to rent pre-built AI-powered attack infrastructure. This commodification of cybercrime is accelerating the frequency, complexity, and success rate of attacks across all business sectors, including financial services.
Turning the tables with AI
With that said, it’s not all bad news. The same technological capabilities that empower attackers can also empower defenders, if used responsibly and strategically. This is where AI-powered ‘hackbots’ come in.
Ethical hackbots are AI systems designed to mimic the behaviour of real-world attackers, but for a very different purpose: to find vulnerabilities before they’re discovered by malicious actors. Unlike traditional cybersecurity tools, which often rely on static rules or scheduled scans, hackbots operate continuously and adaptively.
Think of them as digital sentinels, patrolling webservers and tirelessly probing a company’s digital infrastructure, learning from what they encounter, and adjusting their tactics to uncover potential flaws in real time.
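To make the "continuous, adaptive probing" idea concrete, here is a deliberately simplified Python sketch of the core loop such a system might run. It is illustrative only, not Ethiack's implementation: the endpoint list and the `fetch` interface are hypothetical placeholders, real hackbots are far richer, and any probing must only ever target systems you are authorised to test.

```python
# Toy sketch of a hackbot-style probe loop (illustrative only).
# The exposure paths below are hypothetical examples of common
# misconfigurations a scanner might check for.
COMMON_EXPOSURES = ["/.git/config", "/.env", "/admin", "/backup.zip"]

def probe(fetch, base_url, paths=COMMON_EXPOSURES):
    """Return the paths that respond with HTTP 200.

    `fetch` is any callable mapping a URL to an HTTP status code,
    so the loop can be driven by a real HTTP client or a test stub.
    """
    findings = []
    for path in paths:
        if fetch(base_url + path) == 200:
            findings.append(path)
    return findings

# Example with a stubbed fetcher standing in for real HTTP requests:
def fake_fetch(url):
    return 200 if url.endswith("/.env") else 404

print(probe(fake_fetch, "https://example.test"))  # ['/.env']
```

A production system would replace the static path list with an adaptive component that reprioritises what to probe next based on earlier findings; the fixed list here only shows the shape of the loop.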
Beyond human limitations
Traditional penetration testing, often called 'pentesting', remains a cornerstone of modern cybersecurity. These exercises, conducted by ethical hackers, involve simulating attacks on systems to identify vulnerabilities. But manual testing has its limitations. It is time-consuming and costly, and it is conducted at a point in time or periodically, rather than continuously. Its effectiveness is also constrained by human bandwidth.
AI-powered hackbots overcome these constraints. Equipped with the analytical power of Large Language Models (LLMs) and real-time data processing, they can assess vast and complex digital ecosystems far faster than a human ever could. They’re not just running scripts; they’re interpreting behaviours, identifying anomalies, and flagging risks that might otherwise go unnoticed.
More importantly, they learn and put their findings into context. The more data they interact with, the better they become at anticipating how a real attacker might behave. This allows for a level of adaptability and precision that simply wasn't possible with earlier generations of security software.
AI hackbots – human allies rather than replacements
A common concern in the cybersecurity industry is that automation may displace human talent. But AI hackbots are best viewed as allies rather than replacements for human professionals.
These tools take over the grunt work: the repetitive, time-intensive tasks that consume much of an ethical hacker's day. This frees up human teams to focus on higher-order challenges: strategic defence planning, decision-making, interpreting ambiguous threat signals, and managing the ethical and legal dimensions of security operations.
At Ethiack, we’ve been exploring this symbiosis in practice. Our research and development of AI hackbots has consistently shown that the most effective defence strategies are hybrid – combining machine speed and scalability with human judgment and creativity. We concluded that, used right, LLMs make a powerful coworker, rather than a rival, for a skilled ethical hacker.
Building the future of ethical hacking
Of course, using AI in this way requires caution and a strong ethical framework. Any tool that mimics the behaviour of an attacker must have proper safeguards in place and be continuously monitored. The responsibility for guiding, validating, and intervening when necessary must rest with human operators.
But when implemented thoughtfully, hackbots offer a powerful new approach to cybersecurity. They represent a shift from reactive defence to proactive detection. Instead of waiting to be breached and then scrambling to respond, organisations can maintain a persistent, intelligent watch on their own systems, identifying and resolving weaknesses before they become liabilities.
This is especially vital for financial institutions, which hold vast troves of sensitive data and are often targeted for both financial and political reasons. For them, adopting AI-driven ethical hacking is not a luxury; it is rapidly becoming a necessity.
The cybersecurity equation has changed
We’re entering an era where every cyberdefender must assume their adversary is using AI. This changes the equation completely. Firewalls, antivirus software, and even human-led testing are no longer sufficient on their own. The scale and speed of AI-driven threats demand an equally agile and intelligent response.
Ethical hackbots are a critical part of that response. Deployed responsibly and overseen by skilled professionals, they offer a scalable, intelligent way to stay ahead of the curve.
André Baptista is Co-founder of the ethical hacking platform Ethiack. A Visiting Professor at the University of Porto, he is a two-time winner of HackerOne Live-Hacking Events.