
The Future of Cybersecurity: Is Artificial Intelligence a Friend or Foe?

Pradhyumna Prakash




Abstract


In 2025, one of the largest data breaches in history exposed 16 billion records, including login credentials from major platforms such as Apple, Google, Facebook, and Telegram, affecting millions of lives. Defenses once considered impenetrable barriers are now crumbling against a new surge of sophisticated cyber threats. In this landscape, one question takes center stage: What is the future of cybersecurity, and more specifically, is artificial intelligence a friend or foe? AI offers significant benefits, enhancing the accuracy and efficiency of protective measures by aiding security teams in identifying patterns and detecting anomalies within vast datasets.

However, artificial intelligence cuts both ways. As AI-integrated cybersecurity networks become more prevalent, cybercriminals are increasingly abusing the same technologies for their own benefit, often offsetting the defensive advantages. Furthermore, the ethical implications surrounding the storage and management of personal data by AI systems are complex, with public opinion divided on the wisdom of entrusting AI with sensitive information. This paper discusses both aspects in depth. Through a systematic review, we analyze AI’s integral role in the future of cybersecurity and offer insights into its responsible integration.


1. Introduction



1.1 What is Cybersecurity?


Every day, more than 600 million cyberattacks are launched against users and organizations worldwide by a diverse range of actors, including independent cybercriminals and nation-states. A cyberattack is an intentional effort to steal, expose, alter, disable, or destroy data, applications, or other digital assets through unauthorized access. The financial consequences are staggering: the average data breach costs a company $4.88 million, and the most destructive attacks reach well into the billions.

With the rapid pace of digital transformation, individuals and businesses are placing increasing trust in computer networks for storing and managing sensitive information. Consequently, the protection of computer systems, networks, programs, and devices—otherwise known as cybersecurity—has evolved into a vital sector within the broader field of Computer Science and Information Technology. As cyberattacks become more numerous, sophisticated, and diverse, cybersecurity measures must continuously evolve to keep pace.


1.2 Key Areas for Improvement


With hackers launching a new cyberattack every three seconds, it is critical for cybersecurity defenses to remain up-to-date. Yet, despite advancements in the field, the frequency and cost of cybercrime have continued to skyrocket, indicating a stagnation in the effectiveness of traditional methods.


Annual Cost of Cybercrime Worldwide

Year    Estimated Annual Cost of Cybercrime (in trillions of USD)
2018    $0.86
2019    $1.16
2020    $2.95
2021    $5.49
2022    $7.08
2023    $8.15
2024    $9.22
2025    $10.29
2026    $11.36
2027    $12.43
2028    $13.82

This alarming rise in cybercrime can be attributed to several factors:

  • AI-Driven Threats: Attackers are leveraging AI to automate and scale their attacks.

  • Increasing Complexity of Cloud & IoT Security: The expansion of cloud services and Internet of Things (IoT) devices has created a larger, more complex attack surface.

  • Supply Chain Vulnerabilities: Weaknesses in third-party vendors, software, and hardware create indirect entry points for attackers.

  • Sophisticated and Complex Attacks: Modern cyberattacks are increasingly multi-faceted and difficult to detect with conventional tools.

Clearly, the cybersecurity field has struggled to keep pace. Traditional automated measures are often insufficient against the adaptive and relentless force of modern cybercriminals. For this reason, the integration of Artificial Intelligence into cybersecurity has accelerated dramatically. Studies show that AI can reduce the success rate of cyberattacks by 73% and detect threats 60% faster than traditional methods. Furthermore, 67% of organizations are currently using AI as part of their cybersecurity strategy, with 31% relying on it extensively.

But is that the end of the story? Has the adoption of AI benefited cybersecurity enough to justify its inherent risks, such as biased outputs, opaque decision-making, the potential for adversarial attacks, and the malicious use of AI by attackers?


1.3 Objectives


This paper aims to systematically discuss the following:

  • The benefits and necessity of Artificial Intelligence in specific cybersecurity domains.

  • The objections, risks, and hindrances of using Artificial Intelligence in cybersecurity tasks.

  • The ethical considerations of AI accessing and protecting user data.

  • A framework for the responsible integration of AI in cybersecurity.


2. The Dual Aspects of AI in Cybersecurity: Two Case Studies



Case Study 1: Bad Rabbit Ransomware Attack - A Lack of AI Integration


In 2017, a devastating ransomware attack known as "Bad Rabbit" targeted users primarily in Russia, Bulgaria, and Ukraine. Disguised as an Adobe Flash Player update and distributed through compromised media websites, the malware spread rapidly upon infection. It encrypted users' files, effectively locking them out of their own systems, and demanded a ransom payment for their release.

Bad Rabbit’s success is largely attributed to the inadequate security infrastructure of its targets, particularly the absence of AI-integrated defense networks. This incident highlights a key weakness of traditional, non-AI cybersecurity: without intelligent automation, it is extremely difficult to monitor, detect, and block novel and sophisticated threats in real time. Cyberattacks that leverage social engineering (psychologically manipulating people into performing actions) and evolving malware strains (new variants of malicious software such as worms or viruses) can often bypass static, rule-based security structures set up by humans.


Case Study 2: The Arup Deepfake - Adversarial Exploitation of AI


In early 2024, the renowned British engineering firm Arup fell victim to a highly sophisticated deepfake scam, resulting in fraudulent transfers totaling approximately $25 million. An employee in Arup's Hong Kong office was deceived by fraudsters who used AI-generated audio and video to convincingly impersonate a senior executive during a video conference call. This incident starkly illustrates the increasing sophistication of deepfake technology and its power when combined with social engineering to compromise businesses.

On a deeper level, it highlights one of the most significant negative aspects of AI in this field: adversarial exploitation. The rise of AI models in defensive cybersecurity has been mirrored by the adoption of the same technology by attackers. This case study underscores the unfortunate reality that as defenders adopt AI, so do cybercriminals, leading to an arms race where AI is used to both perpetrate and prevent attacks.


3. The Benefits of Artificial Intelligence in Cybersecurity


A significant majority of cybersecurity professionals—80%—believe AI is beneficial to security, and 85% of IT stakeholders assert that the only way to effectively counter AI-generated threats is through AI-driven cybersecurity solutions. Before delving into the specific benefits, it is essential to understand the key domains within cybersecurity where AI is making an impact.


3.1 Key Cybersecurity Domains



Network Security


Network security involves protecting the communication infrastructure and all devices connected to an internal or external network. Common threats include malware and phishing (online scams using fraudulent communications to impersonate legitimate sources). Key network security measures include:

  • Firewall: A network security system that monitors and controls incoming and outgoing traffic based on predefined security rules, acting as a barrier between a trusted internal network and an untrusted external network (a minimal rule-matching sketch follows this list).

  • Intrusion Prevention System (IPS): An advanced version of a firewall that provides real-time threat detection and prevention. It inspects traffic content using more sophisticated parameters and can actively block malicious traffic.

  • Virtual Private Networks (VPNs): A VPN creates a secure, encrypted tunnel over a public network, allowing users to connect safely while masking critical details like their IP address.
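
To make the firewall concept concrete, here is a minimal Python sketch of static, rule-based packet filtering, the traditional mechanism that the AI-driven firewalls of Section 3.2 extend. The rule table, addresses, and ports are hypothetical, chosen only for illustration.

    import ipaddress

    # Hypothetical rule table: first matching rule wins, ending in a default deny.
    RULES = [
        {"action": "deny",  "src": "0.0.0.0/0",  "dport": 23},    # block Telnet from anywhere
        {"action": "allow", "src": "10.0.0.0/8", "dport": 443},   # internal hosts may use HTTPS
        {"action": "deny",  "src": "0.0.0.0/0",  "dport": None},  # default deny (matches any port)
    ]

    def decide(src_ip: str, dport: int) -> str:
        """Return 'allow' or 'deny' for a packet, using first-match semantics."""
        for rule in RULES:
            in_network = ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
            port_match = rule["dport"] in (None, dport)
            if in_network and port_match:
                return rule["action"]
        return "deny"

    print(decide("10.1.2.3", 443))  # allow: internal HTTPS
    print(decide("8.8.8.8", 23))    # deny: Telnet is blocked

A static table like this cannot recognize a novel threat arriving over an allowed port, which is precisely the gap AI-based traffic inspection aims to close.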


Cloud Security


Cloud security is a set of procedures and technologies designed to protect cloud computing systems from both internal and external threats. Key measures include:

  • Identity and Access Management (IAM): Tools that manage and protect digital identities and user access permissions, ensuring that only authorized individuals access specific resources at appropriate times.

  • Zero Trust Security Models: Instead of trusting connections within a network perimeter, these models enforce strict security policies for every individual connection, verifying identity and authority regardless of location.


Internet of Things (IoT) Security


IoT security focuses on strategies to protect interconnected IoT devices and the often-vulnerable networks they use. Since many IoT devices lack built-in security, external measures are critical. A key strategy is:

  • Network Segmentation: This practice divides a computer network into smaller, isolated sub-networks using internal firewalls, Access Control Lists (ACLs), and Virtual Local Area Networks (VLANs). This contains potential breaches and improves overall security.
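
As a minimal illustration of the segmentation logic, the sketch below models an internal allow-list between segments, an ACL in miniature. The host names and segment policy are hypothetical.

    # Hypothetical segment map and inter-segment allow-list.
    SEGMENT = {"camera-01": "iot", "db-01": "core", "laptop-07": "corp"}
    ALLOWED_PAIRS = {("corp", "core"), ("corp", "iot")}  # permitted (source, destination) segments

    def may_connect(src_host: str, dst_host: str) -> bool:
        """Allow traffic only if the segment pair is explicitly permitted."""
        return (SEGMENT[src_host], SEGMENT[dst_host]) in ALLOWED_PAIRS

    print(may_connect("laptop-07", "db-01"))  # True: corporate laptops may reach the core
    print(may_connect("camera-01", "db-01"))  # False: a compromised camera cannot reach the core

Even if the IoT camera is fully compromised, the breach is contained within its own sub-network.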


3.2 How AI Enhances Network Security


The primary advantage AI brings to network security is its ability to analyze massive volumes of data, identifying subtle patterns and correlations that are impossible for a human analyst to detect. AI-powered Network Detection and Response (NDR) solutions can therefore identify novel cyberattacks without relying on predefined signatures. AI's speed in analyzing network traffic through deep packet inspection (analyzing the content of data packets, not just headers) is unmatched by human capabilities.

  • Large Language Models (LLMs) like GPT-5 are being used to identify suspicious behavior in real time by recognizing subtle anomalies in user activity.

  • For anomaly detection, Deep Belief Networks (DBNs) and autoencoders are used in Intrusion Detection Systems for their ability to identify new attacks without needing labeled training data.
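
The reconstruction-error idea behind autoencoder-based anomaly detection can be sketched in a few lines of NumPy: an autoencoder trained only on normal traffic reconstructs normal inputs well and fails on anything unfamiliar. The feature vectors below are synthetic stand-ins for real flow statistics; a production IDS would use a deep-learning framework and genuine traffic data.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, (1000, 8))  # synthetic stand-in for "normal" traffic features

    # Tiny autoencoder (8 -> 3 -> 8): learns to reconstruct normal traffic only.
    W1 = rng.normal(0, 0.1, (8, 3)); b1 = np.zeros(3)
    W2 = rng.normal(0, 0.1, (3, 8)); b2 = np.zeros(8)
    lr = 0.05
    for _ in range(500):
        H = np.tanh(X @ W1 + b1)      # encode
        R = H @ W2 + b2               # decode
        dR = 2 * (R - X) / len(X)     # gradient of mean squared reconstruction error
        dH = dR @ W2.T * (1 - H ** 2)
        W2 -= lr * H.T @ dR; b2 -= lr * dR.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

    def reconstruction_error(x):
        h = np.tanh(x @ W1 + b1)
        return np.mean((h @ W2 + b2 - x) ** 2, axis=-1)

    # Flag anything reconstructed worse than 99% of the training data.
    threshold = np.percentile(reconstruction_error(X), 99)
    anomaly = rng.normal(4.0, 1.0, (1, 8))            # out-of-distribution sample
    print(reconstruction_error(anomaly) > threshold)  # expected: [ True]

No labeled attack data is needed, which is exactly why this family of models suits the detection of novel attacks.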

Specific integrations include:

  • AI-Powered Firewalls: Next-Generation Firewalls (NGFWs) now use supervised learning and deep learning architectures like Convolutional Neural Networks (CNNs) to effectively identify unknown threats and prevent zero-day attacks (exploits that target vulnerabilities before a patch is available).

  • AI in Intrusion Prevention Systems (IPS): Supervised machine learning models have demonstrated striking results. In one study, the K-Nearest Neighbors (K-NN), Random Forest, and Logistic Regression algorithms achieved up to 99.89% accuracy in classifying and preventing cyberattacks (see the classification sketch after this list). Unsupervised models like K-means also showed high accuracy in detecting novel attacks.

  • AI-Integrated VPNs: The integration of AI has enabled modern VPNs to achieve connection security accuracies of over 90%. AI-based routing also optimizes connection speeds and enhances security by connecting users to the most secure and efficient servers.
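
The supervised approach described above can be sketched with scikit-learn as follows. The synthetic dataset merely stands in for labeled intrusion data (flow features such as durations, byte counts, and flag ratios); the 99.89% figure belongs to the cited study, not to this toy example.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a labeled intrusion dataset (label 1 = attack).
    X, y = make_classification(n_samples=5000, n_features=20, n_informative=12,
                               weights=[0.8, 0.2], random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                        random_state=42)

    # Random Forest, one of the supervised models the study evaluated.
    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

In a real IPS, the trained classifier would sit inline on the traffic path, scoring flows and blocking those classified as attacks.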


3.3 How AI Enhances Cloud and IoT Security


Gartner predicts that through 2025, 99% of cloud breaches will stem from misconfigurations and other preventable human errors. With over 90% of organizations using the cloud, preventing these breaches is critical.

AI, especially generative AI, enhances cloud security by improving threat detection, automating management, and streamlining the deployment of security controls aligned with company policies.

  • AI in Identity and Access Management (IAM): AI has led to more efficient approval requests, improved anomaly detection, and automated application onboarding. The use of modern protocols like OAuth 2.0 further secures connections between systems.

  • AI-Integrated Zero Trust Systems: AI enhances Zero Trust models by analyzing user behavior to make real-time access control decisions. It enables Just-in-Time (JIT) and Just-Enough-Access (JEA), granting temporary and minimal permissions to reduce risk.
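
A minimal sketch of the JIT/JEA idea: access is granted only for a short window, only for explicitly listed actions, and only when a behavioral risk score is low. The names, the 15-minute TTL, the 0.7 threshold, and the risk score itself (standing in for the output of a behavioral-analytics model) are all illustrative assumptions.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class AccessGrant:
        user: str
        resource: str
        actions: frozenset      # just-enough-access: minimal permitted actions
        expires_at: datetime    # just-in-time: short-lived grant

    def issue_grant(user, resource, actions, ttl_minutes=15, risk_score=0.0):
        """Grant temporary, minimal access; deny outright if behavioral risk is high."""
        if risk_score > 0.7:  # hypothetical threshold from a behavioral model
            raise PermissionError(f"risk too high for {user}")
        expiry = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
        return AccessGrant(user, resource, frozenset(actions), expiry)

    def is_allowed(grant, user, resource, action):
        return (grant.user == user and grant.resource == resource
                and action in grant.actions
                and datetime.now(timezone.utc) < grant.expires_at)

    grant = issue_grant("alice", "billing-db", {"read"}, risk_score=0.2)
    print(is_allowed(grant, "alice", "billing-db", "read"))   # True
    print(is_allowed(grant, "alice", "billing-db", "write"))  # False: never granted (JEA)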

A similar story is found in IoT Security. AI reduces the need for human intervention, lowering costs while improving efficiency. AI-powered IoT solutions can personalize security based on user behavior and are a vital component of advanced network segmentation.


3.4 Traditional vs. AI-Enhanced Cybersecurity


  • Threat Detection Speed: Traditional methods are often manual and reactive, leading to slower detection; AI-enhanced systems detect threats in real time or near-real time.

  • Data Analysis Volume: Traditional methods are limited by human capacity and process smaller datasets; AI-enhanced systems analyze petabytes of data continuously.

  • Incident Response Time: Traditional, manual processes result in slower containment; AI-enhanced response is automated and orchestrated, enabling rapid containment.

  • Human Effort Required: Traditionally high, requiring extensive manual investigation; reduced with AI automating routine tasks and flagging critical alerts.

  • Predictive Capability: Traditionally minimal, relying on known signatures and past events; high with AI, which predicts emerging threats and attack patterns.

  • Vulnerability Prioritization: Traditionally manual and based on generalized risk scores; AI prioritizes intelligently based on context, exploitability, and asset criticality.

Table adapted from Palo Alto Networks.


4. The Risks and Challenges of AI in Cybersecurity



4.1 AI-Powered Attacks: The Offensive vs. Defensive Arms Race


It is an unfortunate reality that as organizations increase their use of AI for defense, cybercriminals adopt the same tools for offense. The use of generative and deep learning models in cybercrime has led to more effective and scalable attacks that can overwhelm both human analysts and defending AI models.

Furthermore, hackers have developed new strategies to undermine defensive AI. By implanting false or biased data into a model's training set, a technique known as data poisoning, they can corrupt its learning process. This can lead to the model becoming inefficient, misclassifying real cyberattacks as legitimate traffic (a false negative) or flagging benign activity as malicious (a false positive).
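The effect of such poisoning can be demonstrated on synthetic data: flipping the labels of a fraction of the training set measurably degrades a simple classifier. This is an illustrative label-flipping sketch, not a reconstruction of any real attack.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

    clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Poisoning: the attacker flips the labels on 30% of the training set.
    rng = np.random.default_rng(1)
    flip = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
    y_poisoned = y_tr.copy()
    y_poisoned[flip] = 1 - y_poisoned[flip]
    poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

    print("clean accuracy:   ", clean.score(X_te, y_te))
    print("poisoned accuracy:", poisoned.score(X_te, y_te))  # expected: noticeably lower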

Finally, as seen in the Arup case study, the creation of highly convincing deepfakes and AI-driven phishing scams has transformed the landscape of social engineering. This is a major source of public distress and a primary reason many are hesitant to fully embrace AI in cybersecurity.


4.2 Ethical Concerns: The Fear of Bias


The power of AI models is derived from the vast amounts of data they are trained on. If this training data contains historical biases—unintentional or otherwise—the model will learn and perpetuate those biases. This can lead to an increase in false positives that unfairly target certain user groups or false negatives that overlook threats common to underrepresented demographics.

Moreover, the "black box" nature of certain AI models, like GPT-5, presents a significant challenge. Due to proprietary intellectual property and algorithmic complexity, it is often difficult for users and even developers to understand the exact reasoning behind a model's decision. This lack of transparency means that biases and errors can go unnoticed and uncorrected.


4.3 Are We Becoming Too Reliant?


Given that many AI models are not fully transparent, human oversight remains crucial. Over-reliance on automation can lead to a decline in critical thinking skills among security professionals. In a survey by N-able, 58.1% of respondents reported an increased reliance on AI for decision-making.

This is problematic because supervised learning models often struggle to classify completely novel cyberattacks. Studies have also shown that for the average cybersecurity model, over 30% of alerts are false positives, requiring human intervention to validate.

Finally, the shift toward AI has created a skills gap. There is a shortage of cybersecurity personnel capable of developing, deploying, and managing these complex models. While Goldman Sachs predicts AI will put 300 million full-time jobs at risk of automation by 2030, an (ISC)² survey in 2024 revealed that 25% of respondents reported layoffs in their cybersecurity departments, suggesting a shift from human analysts to AI-driven systems.


4.4 Data Privacy


Perhaps the most pressing argument against AI in cybersecurity is data privacy. To function effectively, these models must be trained on large quantities of personal and sensitive data, including network traffic, user behavior, and threat intelligence. This requires organizations to implement robust protective measures to prevent data breaches of the very systems designed to protect them—creating a paradox of securing the security measures themselves.


5. The Final Verdict: A Path Toward Responsible Integration


From this systematic review, we can draw several important conclusions. Although AI in cybersecurity is a double-edged sword, its integration is no longer optional but essential for modern defense. Cybercriminals will continue to leverage AI regardless of the defensive landscape, making it imperative for organizations to adopt it as well.

Therefore, the focus must shift to the responsible integration of AI, aiming to mitigate its negatives while maximizing its positives.

Here are key principles for a path forward:

  1. Continuous Security and Maintenance: The entire AI model pipeline—from data collection to deployment—must be continuously monitored and secured to prevent tampering by cybercriminals. Models must be regularly maintained and updated to remain effective against evolving threats.

  2. A Hybrid AI Approach: A healthy balance between different AI models (e.g., supervised and unsupervised learning) should be implemented. This creates multiple checkpoints for threat detection and maximizes efficiency by leveraging the strengths of different algorithmic approaches.

  3. Data Integrity and Privacy: Training data must be rigorously audited to prevent overfitting and the inclusion of biases. Crucially, it should not contain private user data unless absolutely necessary and anonymized. Privacy-preserving techniques like federated learning should be prioritized (see the sketch after this list).

  4. Human-in-the-Loop: Human oversight must remain a core component of the process. This adds a critical layer of protection, allows for the validation of AI-driven decisions, handles complex edge cases, and provides essential employment for skilled professionals who can manage and interpret AI systems.
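
As a minimal sketch of the federated-learning principle referenced in point 3, the toy example below trains a shared logistic-regression model across three "organizations" that exchange only model weights, never their raw data. The datasets, model size, and round counts are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def local_update(w, X, y, lr=0.1, steps=50):
        """Full-batch gradient descent for logistic regression on local data."""
        for _ in range(steps):
            p = 1 / (1 + np.exp(-X @ w))
            w = w - lr * X.T @ (p - y) / len(y)
        return w

    # Three organizations with private data that never leaves their site.
    datasets = []
    for _ in range(3):
        X = rng.normal(size=(200, 5))
        true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
        y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)
        datasets.append((X, y))

    w_global = np.zeros(5)
    for _ in range(10):  # federated rounds
        local_ws = [local_update(w_global, X, y) for X, y in datasets]
        w_global = np.mean(local_ws, axis=0)  # coordinator averages weights only

    print("learned weights:", np.round(w_global, 2))

The coordinator never sees network logs or user records, only weight vectors, which directly addresses the data-privacy concern raised in Section 4.4.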


6. Conclusion


The role of Artificial Intelligence in cybersecurity is complex, with both profound benefits and significant detriments. While its integration into modern cyber defense is inevitable and will only increase, it is crucial to understand the context from both perspectives. By adopting a framework of responsible integration—one that emphasizes security, fairness, privacy, and human oversight—we can harness AI's incredible potential. In doing so, artificial intelligence can truly shape the cybersecurity industry for the better, creating a safer digital future for all.



