AI has supercharged the cybersecurity arms race over the past year. And the coming 12 months will provide no respite. This has major implications for corporate cybersecurity teams and their employers, as well as everyday web users. While AI technology helps defenders to improve security, malicious actors are wasting no time in tapping into AI-powered tools, so we can expect an uptick in scams, social engineering, account fraud, disinformation and other threats.
Here’s what you can expect from 2025.
At the start of 2024, the UK’s National Cyber Security Centre (NCSC) warned that AI is already being used by every type of threat actor, and would “almost certainly increase the volume and impact of cyberattacks in the next two years.” The threat is most acute in social engineering, where generative AI (GenAI) can help malicious actors craft highly convincing campaigns in faultless local languages, and in reconnaissance, where AI can automate the large-scale identification of vulnerable assets.
While these trends will certainly continue into 2025, we may also see AI used for:
AI will not just be a tool for threat actors over the coming year. It could also introduce an elevated risk of data leakage. LLMs require huge volumes of text, images and video to train them. Some of that data will inadvertently be sensitive: think biometrics, healthcare information or financial data. In some cases, social media and other companies may change their T&Cs to use customer data to train models.
Once it has been hoovered up by an AI model, this information represents a risk to individuals if the AI system itself is hacked, or if the information is shared with others via GenAI apps running atop the LLM. There’s also a concern for corporate users that they might unwittingly share sensitive work-related information via GenAI prompts. According to one poll, a fifth of UK companies have accidentally exposed potentially sensitive corporate data through employees’ GenAI use.
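To illustrate the kind of guardrail this implies, here is a minimal, hypothetical sketch of a DLP-style filter that scans outgoing GenAI prompts for obviously sensitive patterns before they leave the corporate network. The patterns, names and thresholds are illustrative assumptions, not any particular vendor’s API, and real deployments would rely on a vetted classification engine rather than simple regexes.

```python
import re

# Hypothetical patterns for the sake of illustration only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive substrings before the prompt is sent to any GenAI service."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    safe_prompt, hits = redact(
        "Summarise this complaint from jane.doe@example.com, card 4111 1111 1111 1111."
    )
    print(hits)         # ['email', 'card_number']
    print(safe_prompt)  # sensitive values replaced before the prompt leaves the network
```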
The good news is that AI will play an ever-greater role in the work of cybersecurity teams over the coming year, as it gets built into new products and services. Building on a long history of AI-powered security, these new offerings will help to:
However, IT and security leaders must also understand the limitations of AI and the importance of human expertise in the decision-making process. A balance between human and machine will be needed in 2025 to mitigate the risk of hallucinations, model degradation and other potentially negative consequences. AI is not a silver bullet. It must be combined with other tools and techniques for optimal results.
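As a rough illustration of that human-machine balance, the hypothetical sketch below routes only high-confidence AI verdicts to automated action and escalates anything uncertain to a human analyst. The threshold values, class names and scores are assumptions made for the example, not a prescription for any specific product.

```python
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.95  # illustrative cut-off, tuned per environment

@dataclass
class Alert:
    alert_id: str
    description: str
    model_score: float  # hypothetical malicious-probability from an ML classifier

def triage(alert: Alert) -> str:
    """Decide whether an AI verdict is acted on automatically or reviewed by a human."""
    if alert.model_score >= AUTO_ACTION_THRESHOLD:
        return "auto-contain"          # machine acts, human audits later
    if alert.model_score >= 0.5:
        return "escalate-to-analyst"   # human makes the final call
    return "log-only"                  # kept for retrospective threat hunting

if __name__ == "__main__":
    for a in [
        Alert("A-1", "possible credential phishing", 0.97),
        Alert("A-2", "unusual login location", 0.71),
        Alert("A-3", "benign admin script", 0.12),
    ]:
        print(a.alert_id, triage(a))
```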
The threat landscape and development of AI security don’t happen in a vacuum. Geopolitical changes in 2025, especially in the US, may even lead to deregulation in the technology and social media sectors. This in turn could empower scammers and other malicious actors to flood online platforms with AI-generated threats.
Meanwhile in the EU, there is still some uncertainty over AI regulation, which could make life more difficult for compliance teams. As legal experts have noted, codes of practice and guidance still need to be worked out, and liability for AI system failures calculated. Lobbying from the tech sector could yet alter how the EU AI Act is implemented in practice.
However, what is clear is that AI will radically change the way we interact with technology in 2025, for good and bad. It offers huge potential benefits to businesses and individuals, but also new risks that must be managed. It’s in everyone’s interests to work more closely together over the coming year to make sure that happens. Governments, private sector enterprises and end users must all play their part to harness AI’s potential while mitigating its risks.