Why AI Won’t Replace Your SOC

It’s January 15th, 2009. US Airways Flight 1549 departs LaGuardia Airport in New York City at 3:24 p.m., destined for Charlotte Douglas International Airport. Moments after takeoff, a catastrophic bird strike knocks out both engines at an extremely low altitude. The flight controls are operated via automated drone capabilities, and the AI calculates a 53% chance of success in returning to LaGuardia, adjusting the heading accordingly. Failing to account for unknown variables and the ethical weight of the 47% chance of failure, Flight 1549 crashes in the Bronx, killing all 150 passengers and five crew members, along with countless people on the ground in the heavily populated area, and injuring hundreds more. Damage not seen since September 11th covers the Bronx as jet fuel-coated buildings and city streets burn.

Of course, this is not how the “Miracle on the Hudson” played out in our dimension of the multiverse. Instead, the critical thinking of Captain Chesley “Sully” Sullenberger and First Officer Jeffrey Skiles defied the post-accident simulated odds and avoided every one of the calamities and casualties that could have been. Situations like these demonstrate that AI won’t replace humans in many roles, including SOC analysts. Humans excel in novel or uncertain situations. AI struggles when data is incomplete or contradictory, such as “save the 150 lives onboard at the risk of killing more on the ground.” AI doesn’t understand context or ethics, two things needed for good security monitoring and response.

In addition to leading a DFIR team here at SecurIT360, I teach in cybersecurity programs at colleges and universities designed to train the next generation of cyber talent. This experience has given me deep insight into the education and training side of today’s cybersecurity workforce. Many of the faculty members I speak with are concerned about the lack of critical thinking skills among today’s college students. Critical thinking is the top skill a cybersecurity analyst needs to possess, more so than most technical skills, which can be picked up along the way. These mental skills are what differentiate a quality security team from the rest. Give me a room of thinkers over a room of “Paper MCSEs” any day.

Tactics change, but the goals remain the same. With AI, there will always be a lag between the discovery of new tactics and the implementation of detections. This is not new, and it aligns with other aspects of security. But in the interim, critical thinking during those gaps keeps networks safe. It’s the second O and the D of the OODA loop: Orient and Decide. Critical thinking enables out-of-the-box ideas drawn from gathered experience, whereas AI relies on patterns, past data, and predetermined desired outcomes. Humans can bridge that gap, whereas AI may reinforce past practices, grow stale, or force false successes. Let’s not forget that if the data AI is trained on is skewed or contaminated, its value erodes exponentially.

Speaking of the accuracy of AI, a recent study from Anthropic and the Alan Turing Institute found that as few as 250 malicious documents can poison an AI model of any size. With AI-based security tooling collecting thousands of records from hundreds or thousands of devices daily, 250 is a strikingly fragile threshold for compromising trustworthiness. If these numbers hold, a valid AI model for security monitoring could be compromised and rendered useless without anyone noticing. Instant blindness where we thought we had insight: the ultimate stealth mode. This, in turn, would cascade into the automated workflows where AI claims to outperform humans and ruin those as well. The whole process breaks, and cloaked Klingons storm the galaxy unnoticed.
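To see why a fixed threshold is so fragile, here is a minimal illustrative sketch in Python. The corpus sizes are assumptions chosen purely for illustration; the 250 figure is the approximate constant reported by the study, whose key finding was that the number of poison documents needed does not grow with the size of the training data.

```python
# Illustrative only: assumed corpus sizes, paired with the ~250-document
# poisoning threshold reported by Anthropic / the Alan Turing Institute.
POISON_DOCS = 250

for corpus_size in (1_000_000, 100_000_000, 10_000_000_000):
    fraction = POISON_DOCS / corpus_size
    print(f"Corpus of {corpus_size:>14,} docs: poison share = {fraction:.8%}")

# The attacker's required share of the training data shrinks toward zero
# as the corpus grows -- more data does not buy more safety.
```

In other words, the bigger the haystack, the cheaper the needles become for the attacker.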

The adage goes, “The Threat Actor/Red Team only needs to be right once, whereas the Blue Team needs to be right every time.” An AI system is nothing more than a Golden Retriever with laser-like hunting focus: it will fetch whatever you ask and bring back needles from haystacks, but its focus is on pleasing, not on being accurate. We need accuracy in almost everything cyber, especially in network security, detection, and response. In this modern era of threats, systems prone to AI hallucinations (i.e., all of them) or contaminated data are unreliable and risky to depend on. Detection and response times could increase, potentially taking our security posture backward at a time when adversaries need less dwell time to cause significant harm. And speaking of potential corruption of AI data, how would any work performed by AI ever be admissible in court (civil or criminal) if you can’t prove it hasn’t been altered?

Investigating false positives while false negatives have a chance to fester takes network security backward. You are not reducing SOC staffing, one of the major plot points of the AI/CEO erotica. Even if an AI-based system achieves an accuracy rate above 90%, a SOC analyst (or someone else) must still review 100% of the events to validate the findings and feed corrections back to the system to improve its accuracy. That isn’t the promised efficiency gain; it’s added work for the existing SOC analysts.

AI companies want you to think that the 23rd-century computers depicted in Star Trek are here today. They aren’t, and we’re a long way off. Does anyone remember what happened when Mr. Scott used automation to do the work of the entire crew of the Enterprise in “Star Trek III: The Search for Spock”? At its first brush with a real adversary, the Enterprise had to be scuttled. Overdependence on overly complicated automation is a lesson we already know and should keep learning from.
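To put rough numbers on that validation burden, here is a minimal back-of-the-envelope sketch in Python. The daily event volume and the accuracy figure are assumptions for illustration only, not measurements from any particular product.

```python
# Back-of-the-envelope triage math with assumed, illustrative numbers.
DAILY_EVENTS = 10_000   # assumed alert-worthy events ingested per day
AI_ACCURACY = 0.95      # assumed (generous) classification accuracy

misclassified = DAILY_EVENTS * (1 - AI_ACCURACY)
print(f"Events misclassified per day: {misclassified:.0f}")
# -> 500 events/day are wrong in some direction: false positives that
#    burn analyst time, or false negatives left to fester.

# Since no one knows *which* 500 are wrong without looking, a human
# still has to review the full stream to trust the output.
print(f"Events still requiring human review: {DAILY_EVENTS:,}")
```

Even at a generous 95% accuracy, that is 500 wrong answers a day, and because the wrong ones don’t announce themselves, the full event stream still needs human eyes.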

Overdependence on AI also erodes the core skill sets and institutional knowledge organizations have built and trained upon, transforming them into something else. IBM experienced this when it laid off 8,000 employees in favor of AI, only to find it had to hire 8,000 more to handle the new workload AI created. There were no cost savings, just buckets moved around.

Software engineers, sales professionals, and marketing specialists became essential as IBM pivoted toward more strategic operations requiring human creativity and critical thinking.*

Critical thinking—there are those words again. Now, studies are showing that heavy use of AI tools can erode users’ critical thinking skills over time.* 

Is AI totally worthless for security operations? No. It’s another arrow in the quiver, but not the bow. The core of AI, machine learning, has been around for decades; that is nothing new. “AI” is just the updated marketing buzzword for the technology. What is new is the easier means of entering queries and retrieving results. I often refer to AI as “Search Engine 2.0”. Its search capabilities are next-level compared to traditional search engines, and those capabilities can deliver meaningful and measurable efficiencies.

While speaking at a security conference a few years ago, just as OpenAI hit the scene, I was asked by several consultants from a well-known company to share my perspective, given my more than 20 years of experience in IT and security. My opinion hasn’t changed between then and now. I told them the area where AI can benefit most is smarter and faster search results. Earlier in my career, if I needed a script to automate a task, I would spend a significant amount of time searching sites, copying code, and then modifying it to suit my needs. With AI, that task goes from potentially hours to just a few minutes. Junior-level analysts can now take on more senior-level responsibilities for specific tasks. The consultants must have agreed with me: the same talking points appeared in their presentation at that conference the following year, and in their annual report.

AI is a powerful tool, but critical thinking ensures we use it wisely. It’s the difference between having information and knowing what to do with it. There is no price on experience in security, which is why there are numerous information-sharing avenues: sharing experiences so we can all improve at defense. AI’s documented shortcomings and inaccurate tendencies mean that SOCs with human staff will always be needed. This is why AI won’t replace your SOC.

Notes:

Simulator Evaluations for US Airways A320 Flight 1549 Accident, Ditching in Hudson River, 1/15/09 (NTSB # DCA09MA026)

IBM laid off 8,000 employees to replace them with AI, but what they didn’t expect was having to rehire as many due to AI.

ChatGPT’s Impact On Our Brains According to an MIT Study | TIME

It Takes Only 250 Documents to Poison Any AI Model
