IDC

VOICE OF SECURITY 2025:

Security Leaders’ Perspectives on AI Adoption, Team Performance, and Job Satisfaction

March 2025 | us53204125

Christopher Kissel

Research Vice President, Security & Trust Products

Product Type: IDC White Paper
Sponsored by: Tines, in partnership with AWS

Introduction

Security teams are under more pressure than ever. They must manage increasing workloads without growing their teams; stay ahead of threats; and ensure their tech stacks, including AI and automation tools, help them achieve their goals rather than create new obstacles.

This paper captures insights from 900+ security leaders across the United States, Europe, and Australia, uncovering what drives job satisfaction and high performance within security teams. It examines how automation and AI are helping teams tackle increasingly complex challenges and whether teams’ extensive tech stacks simplify or complicate their jobs. It also highlights where teams genuinely succeed with AI, where they feel the technology’s potential falls short, and how automation and AI fit into their plans for a rapidly evolving future.

Executive Summary

The contemporary security operations center (SOC) is undergoing a dramatic transformation as it begins to realize the benefits of generative artificial intelligence (GenAI) and to adopt the early manifestations of truly autonomous agentic AI.

Additionally, the promise of security automation is coming to fruition. In theory and in practice, security automation should shorten the time SOCs spend investigating and mitigating alerts.

However, the tried-and-true saying about technology still applies: Cybersecurity relies on the combination of people, processes, and technology. For some time, AI and security automation have delivered gains, but there have also been occasional setbacks.

This paper is organized into sections devoted to the current composition of security operations teams and the attitudes of security leaders, the role of security automation, and perceptions about artificial intelligence. While a single profile or narrative about the leaders, security automation, or AI would be convenient, there is no one-size-fits-all solution for businesses, and the insights differ according to the business’s country, region, size, and type.

The Security Operations Center and Its Teams’ Attitudes

Security operations centers are centers of controlled mayhem. When alerts occur, security leaders learn whether their tooling and training are aligned to produce positive security outcomes. Security leaders must position themselves and their colleagues for success when the team addresses alerts. This job requires a diverse set of skills, including critical and novel thinking and grace under pressure.

The typical cybersecurity leader is meticulous and devoted. They view their exposure to continuous learning, skills development, and problem solving as positive job characteristics and not points of frustration. Generally, the cybersecurity leader feels satisfied and self-empowered even when facing an expanding digital footprint, the need to integrate new tooling, and an influx of data. In short, while security leaders derive satisfaction from their job, they face an increasing workload.

How many people are on a security team?

The tendency to think of a cybersecurity operation as a large SOC with centrally managed workflows and team members in specialized roles is incorrect. In truth, most cybersecurity teams are smaller: roughly three in five companies have security teams with fewer than 10 members, and 54.8% manage 20–49 tools (see Figure 1A).

In what ways is your team working well?

Security operations are under high levels of scrutiny; without this scrutiny, a security team might work in anonymity until a severe error occurs. Cybersecurity leaders hold various perspectives but believe their teams are working well in a number of areas.

Which top factors drive job satisfaction for security leaders, and which bring dissatisfaction?

While security leaders consider it a positive attribute that their jobs are subject to changing conditions, they feel slightly misunderstood, as they believe their contributions are not conveyed to other groups within the organization. Conditions such as toxic work environments and poor relationships with their direct managers also seem to affect morale (see Table 1).

Staffing Levels Appear Sufficient, But Workloads Are Increasing

Security operations take place in a necessarily kinetic environment. While workplace experiences are generally positive, workload management remains an area for improvement. A tipping point may be on the horizon, and security automation may hold the key (see Figure 2A, B, and C).

“Tines helped us realize time savings. It now takes 30 minutes for task analysis when it used to take eight hours. Additionally, the onboarding process, which took an hour, now takes four to five minutes.”

Maritime Shipping and Travel Cybersecurity Company Director

What are the top 5 skills a security leader believes a successful security analyst requires?

Before we present the skills security leaders consider requisite for success, we should consider the point of view of such a leader. The security leader is responsible for the tools, architecture, and culture of a business’s cybersecurity. They must find answers to challenges ranging from the ephemeral (e.g., how to block a threat) to the more permanent (e.g., how to return devices or a network to a state of innocence after a security incident).

Automation accomplishes several important aspects of cybersecurity. First, it combines many of the manual actions a team needs to undertake to investigate an alert, saving time and ensuring accuracy. Second, it consolidates and expedites the response. Last, it presents a powerful technology that allows security leaders to be more tactical and less reactive in their approach to cybersecurity.

Security leaders identified that the most important skills for a successful security analyst involve tactical thinking. The top 3 skills are keeping up to date on threat actors’ TTPs, mastering threat hunting, and understanding malware analysis techniques. These skills relate to different aspects of threat intelligence (TI) and the tactical exploration of a potential attack vector. This makes a great deal of sense, as security operations teams must understand adversary intention and how to contain the blast radius. The next two skills security leaders seek are keeping security training and certifications current and mastering computer forensics techniques (see Figure 3).

Best Practices in Security Automation

End users, devices, and applications intersect on the network. Applications can be single purpose, but in cybersecurity, many platforms connect to provide visibility. The most common example of how applications and platforms are connected in networks is a security information and event management (SIEM) platform that uses a series of connectors to accept logs from firewalls, NetFlow, endpoint data, cloud instances, and other sources of telemetry. The SIEM indexes and stores data from its ingress and can directly suggest or initiate actions by sending instructions back to these applications.
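To make the connector pattern above concrete, the following minimal Python sketch shows how a normalizer layer might map vendor-specific log keys onto one common schema before indexing. The schema fields, raw-log keys, and function names here are illustrative assumptions, not taken from any particular SIEM product:

```python
from dataclasses import dataclass

# Hypothetical common event schema; the field names and raw-log keys below
# are illustrative placeholders, not drawn from any specific SIEM.
@dataclass
class NormalizedEvent:
    source: str     # where the log came from, e.g., "firewall" or "endpoint"
    timestamp: str  # event time as reported by the source
    host: str       # affected host or source address
    action: str     # what the source says happened

def normalize_firewall(raw: dict) -> NormalizedEvent:
    # Firewall logs often use vendor-specific keys such as "src" and "verdict".
    return NormalizedEvent("firewall", raw["ts"], raw["src"], raw["verdict"])

def normalize_endpoint(raw: dict) -> NormalizedEvent:
    # Endpoint agents may report the same facts under different key names.
    return NormalizedEvent("endpoint", raw["event_time"], raw["hostname"], raw["event_type"])

# The connector layer dispatches each raw log to the right normalizer so the
# SIEM indexes one consistent shape regardless of the telemetry source.
NORMALIZERS = {"firewall": normalize_firewall, "endpoint": normalize_endpoint}

def ingest(source: str, raw: dict) -> NormalizedEvent:
    return NORMALIZERS[source](raw)
```

In a real deployment, each telemetry source (NetFlow, Syslog, cloud instances, and so on) would contribute its own normalizer to this dispatch table.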

Security automation binds platforms and improves workflow. It is straightforward and draws on two operative words associated with automation: “automatic” and “autonomy.” Automation means that actions are taken without human intervention. An incident occurs, and the automatic response is to remove all users from the network and require them to reauthenticate. This seems like a benign process until it affects several thousand users. Autonomy is the idea that security automation can collect and correlate data without requiring a human to execute on every alert. For example, when an alert is generated in security operations, the security team initiates an investigation into its cause. They gather details such as when the alert was triggered, which devices are likely to be affected, which applications have been accessed, and where an end user or device has traveled on the internet or internal network in proximity to the alert. This is called triage, and a particularly powerful aspect of security automation is the information gathered automatically during triage.
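The triage steps described above can be sketched in a few lines of Python. The enrichment helpers here are placeholders standing in for real API calls to an asset inventory, identity provider, or proxy-log store; none of the names or return values come from a specific product:

```python
# Illustrative sketch of automated triage: collect the who/what/where/when
# for an alert without a human performing each lookup by hand.

def lookup_device(alert: dict) -> dict:
    # Placeholder: in practice, query the asset inventory for owner/context.
    return {"host": alert["host"], "owner": "jdoe"}

def recent_activity(alert: dict) -> list:
    # Placeholder: in practice, pull proxy/DNS history around the alert time.
    return ["login 09:01", "file download 09:03"]

def triage(alert: dict) -> dict:
    """Assemble an enriched triage record from the raw alert."""
    return {
        "triggered_at": alert["time"],
        "device": lookup_device(alert),
        "activity": recent_activity(alert),
    }

enriched = triage({"time": "09:05", "host": "laptop-7"})
```

The value of automation here is that the analyst receives the enriched record, rather than spending the first phase of every investigation assembling it.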

Security automation has been an important advancement toward helping security teams achieve more accurate and faster outcomes. This section describes the attitudes and expectations associated with security automation going forward.

What tasks would security leaders like to see automated?

There is a direct correlation between the activities that security leaders are spending the most time on and those they most wish to automate. Respondents indicated that network security is the top task they would prioritize for automation, followed by threat detection and cloud security (see Figure 5).

“From what we can confidently measure, the company saves 10 hours a month in ‘zero-touch’ automations and 230 hours a month in man hours related to other automations that Tines directly affects.”

Alex Windle – Manager, Security Triage and Automation Team, Snowflake

Which challenges could the right automation solve?

The question of which challenges the right automation could solve relates to that of which tasks are the top priorities for automation. Time spent on manual tasks is the most frequently mentioned challenge, but streamlining data is also important. Security leaders suggest there is too much data and not enough information and that there are too many alerts and logs (see Figure 6).

Business Orchestration and Automation Technology

Hyperautomation and business orchestration and automation technologies offer solutions that not only help security teams strengthen defenses but also improve collaboration between security and closely knit business units, such as IT and DevOps, with support for custom code, among other capabilities.

Security leaders find immense value in the IT functions that automation could improve. Roughly 83% of security leaders thought automating coding was a good idea or were very enthusiastic about the prospect. However, the prospect of automating shared IT and security responsibilities (patch management, installing firewall rules, etc.) was met with the most enthusiasm (see Figure 8). IDC believes this indicates that silos still exist between the network operations center and the SOC.

How would security teams reallocate time that automation saves?

In security operations, extra time is something of a rarity. Security teams prioritize reallocating time toward security policy development, training, and team development as well as incident response planning over other worthy activities (see Figure 9).

The Expanding Role of AI in Cybersecurity

Artificial intelligence has been a part of cybersecurity since the beginning of the 21st century. A close cousin to AI, machine learning has been used in IT, security, and operational platforms for two decades. User behavioral analytics (UBA) establishes baseline behaviors and creates alerts when deviations occur. When it was implemented a decade ago, UBA was a standalone technology; it has since been integrated into almost all cybersecurity detection and response systems.
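The baseline-and-deviation idea behind UBA can be illustrated with a simple statistical check. This is a minimal sketch of the concept only; production UBA systems model many signals, not a single count, and the threshold value here is an arbitrary illustrative choice:

```python
import statistics

def is_anomalous(history: list, value: float, threshold: float = 3.0) -> bool:
    """Flag a reading that deviates from the baseline by more than
    `threshold` standard deviations: a classic UBA-style check."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # Perfectly flat history: any change at all is a deviation.
        return value != mean
    return abs(value - mean) / stdev > threshold

# e.g., a user's recent daily login counts form the baseline;
# a sudden spike in activity would trigger an alert.
baseline = [4, 5, 6, 5, 4, 5, 6]
```

A value of 40 against this baseline would be flagged, while another reading of 5 would not.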

AI is a very broad term, and it could include UBA and basic machine learning. For the purposes of this paper, AI refers to the adoption of GenAI and agentic AI and how cybersecurity leaders perceive these technologies.

A Brief Introduction to Cybersecurity AI

There is a general sense of optimism about the adoption of AI in cybersecurity. However, it is important to understand the hurdles that must be addressed before the use of GenAI and agentic AI among security teams spreads even further:

  • Accepting the conclusions AI reaches.
    Most people use the term “hallucination” to describe when an AI model reaches an obviously wrong conclusion. The term is partly a misnomer, as AI is probabilistic: when an AI engine ingests poor, inapplicable, or incomplete data sets, its analytics can only draw conclusions from that information. Bias is another concern, as AI engines may draw conclusions from past data sets and old assumptions. Remember, AI’s greatest advantage will be in bringing an order and structure to data sets that humans are incapable of achieving. This vision has not yet been fully realized.
  • Implementing the proper guardrails for AI outputs.
    AI analytics are more than simply mechanical outputs, but they are far from sentient. Exposure of personally identifiable information such as social security numbers, medical records, or credit card numbers is already a concern in non-AI-based technologies; this concern is magnified when information is generated faster than it can be assimilated.
  • Safeguarding the AI data lake.
    Malicious actors already use various tactics to steal digital assets. The potential exists for adversaries to add or subtract critical pieces of information to disrupt the proper function of GenAI/agentic AI. However, that concern is almost secondary. Often, the AI training data set contains proprietary or sensitive information. Containing AI involves both limiting permissions to authenticated users and developing advanced data loss prevention techniques.
  • Paying for AI.
    AI carries a significant cost, and it is unclear what a proper cost strategy would be. An advanced GPU core could cost $50,000. For a given company, is it better to buy compute hours or tokens? Is it more cost effective for cybersecurity teams to use a digital AI assistant (or “copilot”) continuously or as a batch collection tool?
  • Training humans to be proficient with AI.
    In the near future, simple voice commands using natural language processing (NLP) could become precise enough to replace typing (which, in many cases, is too slow). Trial and experimentation will shape how AI performs when humans use it.
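One guardrail from the list above, keeping personally identifiable information out of AI outputs and prompts, can be sketched as a redaction pass applied before text reaches a GenAI service. The two patterns below are deliberately simplified illustrations, not a complete data loss prevention ruleset:

```python
import re

# Illustrative guardrail: scrub common PII patterns from text before it is
# sent to a GenAI service. Real DLP rulesets cover far more formats and use
# validation (e.g., card checksums) beyond simple regular expressions.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known PII pattern with a labeled token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text
```

Placing a pass like this between the data lake and the model is one concrete way to limit what a prompt, or a training set, can leak.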

Despite these hurdles, tangible evidence shows that AI can significantly help security leaders. Digital assistants help security leaders articulate issues using natural language processing to investigate anomalies. AI is helping leaders classify data and draw initial conclusions. The human remains in the loop, and the speed of machine learning empowers them.

For practical purposes, let’s envision how AI is used and will be used in the SOC. The level 1 analyst is the first line of defense. They are responsible for identifying the “who/what/where/when” that caused an alert. During this investigation, if the analyst can find the proper response (e.g., by isolating the endpoint and asking an end user to reboot), they can initiate the proper action. Increasingly, GenAI is subsuming this task. However, if additional insights are needed, the alert ticket is elevated to a level 2/3 analyst who may use further investigative techniques such as examining a file in a sandbox, developing a firewall rule, or requesting a software patch.
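The tiered handling described above reduces to a simple decision: apply a known level 1 playbook when one exists, and escalate everything else. The playbook entries and alert names in this sketch are hypothetical examples, not taken from any vendor's runbooks:

```python
# Sketch of tiered alert handling: a level 1 step applies a known playbook
# response when one exists; anything unrecognized escalates to level 2/3.
PLAYBOOKS = {
    "malware_on_endpoint": "isolate the endpoint and ask the user to reboot",
    "impossible_travel": "force reauthentication for the account",
}

def handle_alert(alert_type: str) -> str:
    action = PLAYBOOKS.get(alert_type)
    if action is not None:
        # A known pattern: the automated first-line response fires.
        return f"level 1 (automated): {action}"
    # No playbook match: a human analyst must investigate further.
    return "escalate to level 2/3 analyst for deeper investigation"
```

As GenAI subsumes more of the level 1 role, the practical effect is that the playbook table grows while the escalation branch fires less often.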

One of this paper’s goals was to measure the anticipation, expectations, and possible apprehensions regarding AI. We found that, of the security teams adopting AI, 49.2% are implementing AI in a few areas, 29.8% are using AI extensively in multiple areas, and 19.5% are exploring potential use cases. Many insights follow, but it should be noted that security leaders see AI bringing disruption as well as benefits. For security leaders, time savings are invaluable. AI may lessen the manual workload, but the time saved is reabsorbed by more proactive activities such as fine-tuning detection and response rules.

Security Teams’ Most Common AI Tasks

Summarizing security data and threat intelligence are the most common AI use cases for security teams. Level 1 triage, advanced triage, and EDR are standard features within detection and response platforms, as security appliances assimilate data around the MITRE ATT&CK framework.

Summarizing security data is a valued use case for security leaders. Modern businesses may ingest petabytes of data, and finding security context from this constant ingestion is impossible to achieve manually. Aggregating data from disparate sources such as NetFlow, Syslog, and firewall logs is difficult because these are all received in different formats. In addition to security logs, there are endpoint logs. Last, TI analysis is a classic problem for the SOC, as information about threats, tactics, and procedures may not be tailored to one’s own business environment (see Figure 12).

How do security leaders view the current AI adoption level?

It may be reassuring that the most common response to this question was that security leaders are ultimately optimistic about AI, though they have concerns. Training is the top concern; this is exacerbated in the SOC, where operations are at the point of attack. Businesses struggle to allot time away from the front line.

AI adds capabilities, but it also adds governance layers. One-quarter of respondents suggested that AI is not properly leveraged for security (see Figure 19).

Advice and Recommendations

Returns from real-world implementations have dampened the initial unbridled enthusiasm for AI. Return on investment has been hard to prove, as implementing AI for business use cases is not always an intuitive process. For cybersecurity, none of this is particularly new. Cybersecurity leaders have gone through similar cycles in machine learning and user behavioral analytics.

The adage that cybersecurity is the combination of people, processes, and technology is as relevant as ever. While it is tempting to suggest that human beings are the weakness in the loop (audacious insomuch as a human is writing this and other humans will read it), humans provide resiliency and redundancy as problems with data mining, interpretation, and economies of scale work themselves out.

  • Security leaders are realistic about AI.
    Security leaders expressed very little trepidation about AI in security environments. By its nature, security operations is an evergreen profession where AI automation can help redistribute the gift of time, allowing staff to focus on training or prevention planning rather than manual tasks.
  • Failing to plan is planning to fail.
    A security operations team should chart out security automation objectives. For instance, they could decide to automate 80% of all security responses.
  • In AI adoption, it is better to be right than to be a first adopter.
    When working with GenAI, starting with modest data sets and learning what works before expanding the model seems preferable to presenting massive data sets and asking GenAI (and soon agentic AI) to simply model the data sets to the best of its understanding.
  • Security automation can prevent errors as well as provide a perfect conduit to maximize the utility of AI.
    When it is undertaken correctly, security automation creates a lossless tunnel between applications. Automation serves as a boundary and perimeter for AI functions.
  • While there are outsized expectations for AI in cybersecurity, there are also outsized responsibilities.
    The regulatory environment is unsparing. If a data breach compromises the sovereignty of digital citizens, the regulator won’t care if AI caused it, and neither will a business’s customers. While AI creates efficiencies and generates unique insights, the data must remain pristine. Best-in-class data access and data protection techniques are required to realize the promise of AI.

Conclusion

While AI can help draw meaningful insights from the huge amount of available data, considerable human intervention is required to realize AI’s benefits.

In IT and its close cousin cybersecurity, new technology is often met with regulatory concerns, challenges in training, and concerns about exposure. These dynamics remain for GenAI and agentic AI.

No one strategy will fit all use cases. Size of business, country/region, and type of business influence AI strategies.

Security automation must walk hand in hand with businesses’ extended use of AI. Data lakes are too big to derive business outcomes from them manually. Automation can help refine the parameters to create meaningful outcomes.

The continuous flow of data in cybersecurity makes it too hard to establish baselines or “golden states” of devices or even maintain a fresh set of detections and responses. Automation will first be generative in helping establish context and then agentic in helping automate new rules and protocols.

Message from the Sponsor

Tines’ workflow and AI orchestration platform enables security teams to operate more effectively, mitigate risk, and focus on what matters most.

Tines powers thousands of mission-critical workflows for customers like Canva, Elastic, and McKesson.

Security teams rely on Tines for everything from automating incident response to orchestrating event remediation in real time. By connecting to all their internal and external systems, Tines helps maximize the ROI of their existing tech stacks.

Amazon Web Services (AWS) is a comprehensive and broadly adopted cloud, offering over 200 fully featured services from data centers globally. Millions of customers, including the fastest-growing startups, largest enterprises, and leading government agencies, are using AWS to lower costs, become more agile, and innovate faster.

Tines integrates seamlessly with the full stack of AWS services, optimizing workflows while reducing operational overhead. Together, AWS and Tines empower organizations to work smarter and respond faster with confidence.
