Working Smarter, Not Harder: Bridging the Cyber Security Skills Gap
The Most Effective Security Teams Aren’t Necessarily the Largest or the Most Experienced

From WannaCry to NotPetya, 2017 brought a new wave of cyber-threats, with machine-speed attacks regularly dominating the headlines. But while much of the commentary in the aftermath of these ransomware attacks focused on attributing the attacks or lamenting our failure to patch, a bigger issue also emerged.
Security teams struggle to react quickly enough in the face of automated attacks; as defenders, we simply cannot keep pace. With a skills gap of over a million cyber security professionals worldwide, how can organizations stay ahead of sophisticated, fast-moving attacks?
Let’s take a look at some tactics that may help you do more with the same resources in 2018.

Let AI Do the Heavy Lifting

We are facing a dramatic cyber skills shortage, with demand for skilled practitioners consistently outstripping supply.
Companies struggle to find the right people for the job, and beyond that, existing analysts have to stay motivated and avoid alert fatigue and burnout. AI technology can not only make existing teams more efficient, but also help with retention by doing the heavy lifting and enabling security teams to focus on higher-level, strategic work. By dramatically reducing false positives and surfacing genuine threats, these technologies ensure your security team can focus on researching and remediating the most serious threats on your network.
Additionally, in an era where employers struggle to hire security analysts, this helps keep your cyber professionals engaged and staves off burnout.

Be Creative in Your Hiring

Consider rethinking your hiring strategy.
Traditionally, most security teams have consisted of seasoned security professionals and cyber analysts, who use their experience to identify indicators of threats. However, armed with AI technology, budding cyber security experts can also catch even the most pernicious threats. The most effective security teams aren’t necessarily the largest or the most experienced, but the most diverse – complete with skilled cyber professionals, engineers, analysts, and intuitive business thinkers.
In 2018, we need to restructure and train our teams to work in tandem with new AI technologies that catch and respond to threats.

Find Out What’s Happening on the Inside

Some of the most impactful breaches start with an insider gone rogue, armed with a badge into the building and a password to the network, and yet these are often the most difficult threats to detect.
A recent Ponemon study found that, on average, it takes organizations 50 days to remediate a malicious insider attack. Yet it might take just one day for an employee with the right access level to obtain a proprietary drug formula, the details of an upcoming merger, or the launch date of a new project and exfiltrate that information to a competitor. In light of this, you should be asking yourself a critical question: do I have a tool in my stack that can detect insider threats?
All too often, organizations lack an understanding of their own employees’ normal patterns of behavior, let alone those of rogue devices or third parties. Without this knowledge, early indicators of a threat are lost in the noise, not to be discovered until the problem becomes a crisis. The days of retrospective cyber defense have to be over.
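To make the idea of “normal patterns” concrete, here is a minimal sketch of behavioral baselining in Python. It is an illustration of the general technique, not any particular vendor’s method; the user history, the metric (daily megabytes transferred), and the three-sigma threshold are all hypothetical assumptions.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Summarize a user's past daily activity (here, MB transferred) as mean/stdev."""
    return mean(history), stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations above the user's norm."""
    mu, sigma = baseline
    if sigma == 0:
        return value > mu
    return (value - mu) / sigma > threshold

# Hypothetical history: an employee who normally moves ~40-60 MB per day.
history = [45, 52, 48, 60, 41, 55, 50]
baseline = build_baseline(history)

print(is_anomalous(54, baseline))   # a typical day: not flagged
print(is_anomalous(900, baseline))  # a bulk, exfiltration-sized transfer: flagged
```

Real products model many signals at once (login times, destinations, peer-group behavior), but the principle is the same: the alert is triggered by deviation from that user’s own baseline, not by a static signature.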
In order to accurately detect insider threats, we need teams and technology that can quickly identify, understand, and report on threatening user and device behavior, alerting our teams to shifts indicative of early-stage cyber-threats.

Less is More: Prioritize Threats in Order of Severity

We are drowning in data.
ESG research found that 38 percent of organizations collect, process, and analyze more than 10 terabytes of data as part of security operations each month, while an Ovum report found that over a third of banks receive more than 200,000 security alerts daily. For security teams, finding an indicator of the next NotPetya or WannaCry is like searching for a needle in a haystack. Organizations need to find that threat not just eventually, but before it starts inflicting damage; in other words, in real time.
But how can you find the subtle threat lurking in your network when your team is sifting through 200,000 alerts a day? Our security teams face the insurmountable task of triaging thousands of false positives, pivoting between web proxy logs, anti-virus logs, SIEM logs, and more, only to end up with an incomplete picture of what transpired. The last thing a SOC needs is yet another tool producing a profusion of alerts.
Investing in methods to effectively visualize and prioritize threats by severity can prove the difference between finding a threat as it emerges and finding it hundreds of days later. By implementing a system that ranks genuine threats by their level of deviation from ‘normal’, security teams of all sizes can rapidly investigate, remediate, and move on to the next incident, saving hours and producing a more effective workflow. Equifax is not the only company to have identified a hack long after the damage was done.
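The ranking step described above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions: the alerts, field names, deviation scores (0 to 1), and the 0.5 noise floor are all invented for the example, not drawn from any real product.

```python
# Hypothetical alerts carrying a model-assigned deviation score in [0, 1].
alerts = [
    {"host": "laptop-17", "event": "unusual SMB writes", "deviation": 0.92},
    {"host": "server-03", "event": "failed logins", "deviation": 0.35},
    {"host": "db-01", "event": "off-hours data export", "deviation": 0.78},
]

def triage_queue(alerts, floor=0.5):
    """Drop low-deviation noise, then rank the rest most-anomalous first."""
    genuine = [a for a in alerts if a["deviation"] >= floor]
    return sorted(genuine, key=lambda a: a["deviation"], reverse=True)

for a in triage_queue(alerts):
    print(f'{a["deviation"]:.2f}  {a["host"]}: {a["event"]}')
```

The payoff is workflow, not cleverness: the analyst always works from the top of a short, severity-ordered queue instead of scanning 200,000 undifferentiated alerts.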
As attacks become faster and hackers become smarter, we need to evolve as well, thinking creatively and finding ways to buy back time for our security teams. Artificial intelligence can do much of the heavy lifting for us, prioritizing alerts and autonomously responding to slow-moving threats, while strategic hiring can make our teams more efficient and effective.
These strategies can provide us and our teams with the time to focus on priorities and strategic initiatives, enabling us to take a more proactive approach to cyber defense.
- WannaCry (www.securityweek.com)
- NotPetya (www.securityweek.com)
- cyber skills shortage (www.securityweek.com)
- study (www.accenture.com)
- found (research.esg-global.com)
- Equifax (www.securityweek.com)
- Artificial intelligence (www.securityweek.com)
- Hunting the Snark with Machine Learning, Artificial Intelligence, and Cognitive Computing (www.securityweek.com)