Regardless of how long you’ve been involved in security and/or data-protection projects, you’ll have asked yourself the question “are we protected?” Furthermore, you’ll have repeated that very same question each and every time you read of a similar company in your market having been compromised. In the past several years, legislation has mandated that companies publicly disclose any data breaches. Such breaches have caused some companies to cease their business entirely, grossly affected market confidence and negatively impacted overall brand value, so you are probably checking on your security posture more now than ever before.
By Richard Cassidy
But how can you prevent being affected by the increased number of threats we are seeing?
The good news, of sorts, is that threats haven’t really changed since we first became more publicly conscious of data breaches. For decades, opportunistic attackers, script kiddies and cyber criminals have followed much the same methodologies of attack as they do today. We still see performance-based attacks through DoS/DDoS, we still see operating-system (OS) vulnerability exploits, and we still see application attacks, albeit in far greater numbers these days. The methodology of attack has remained very similar over time, with social networking still the route favored by the majority of attackers seeking to distribute malware. Cyber criminals range from those that adopt a mass-market approach, reaching as many organizations as possible so that vulnerabilities can be exploited and the data monetized as quickly as possible, to more-sophisticated attackers. These attackers are more targeted and measured: they conduct a period of reconnaissance against their targets to identify weaknesses, which they then exploit with cleverly crafted methods to exfiltrate confidential data or intellectual property, or to hold corporations ransom. The volume of these attacks is lower, but each yields a higher financial gain.
So why are threats seemingly getting worse, when we’ve advanced at a rate of knots in technology and capability when it comes to threat protection?
Many organizations have implemented a breadth of security technologies from multiple vendors to try to get one step ahead of the problem—from host-based antivirus solutions to gateway-scanning tools to log-management and log-monitoring products. At the same time, hackers are becoming more advanced, and organizations have yet to fully comprehend the anatomy of a cyber-attack and/or the mindset of who and what they’re up against in terms of hacker cells, cyber criminals and hacktivists. The greatest victories in the history of battles never came down to sheer size and force; rather, they came down to deep understanding of the motivations and behavior of the target, as well as the landscape and the effective use of the tools at hand.
But the sheer volume of these technologies operating across complex infrastructure environments, coupled with the lack of resources to maintain these systems and delve deep into understanding the data these technologies produce, is creating the gap that commercially motivated hackers are looking to exploit.
Analysts estimate that about 80% of the vulnerabilities hackers exploit are preventable with basic security practices: patching, upgrades and enforceable security policies. By eliminating this 80%, organizations could put their scarce resources and budgets to better use, focusing on defending their security and compliance posture against the remaining, more sophisticated attack methods.
The “cyber kill chain for intrusion detection” is a wonderfully succinct concept that models intrusions in a network, from initial reconnaissance through to actions on the target system. Hackers leave digital footprints every step of the way (if you know what you are looking for), so visibility into every corner of your infrastructure is critical to overall threat protection and compliance assurance. Making sure you can see all the routes data transactions take into and out of your infrastructure, whether on premises or in the cloud, is the priority. Logging every transaction at the user, server, network and application levels will provide a goldmine of data that you can use to pinpoint even the most advanced persistent threats that might target your organization. Because every stage of the kill chain, when exploited, leaves a digital footprint, we need to implement technologies that allow us to scan and capture that data across the network, spanning host-based security tools, gateway-inspection systems, and authentication and encryption tools.
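As a concrete illustration of mining that logged data for kill-chain footprints, the minimal sketch below flags source addresses with repeated authentication failures, a crude indicator of the reconnaissance/brute-force stage. The log format, event names and threshold are all assumptions for illustration, not any particular product's schema:

```python
from collections import Counter

# Hypothetical log records: (source_ip, event_type) tuples.
# A burst of authentication failures from a single source is a crude
# footprint of the reconnaissance stage of the kill chain.
def flag_recon_sources(events, threshold=5):
    failures = Counter(ip for ip, event in events if event == "auth_failure")
    return sorted(ip for ip, count in failures.items() if count >= threshold)

events = [("10.0.0.5", "auth_failure")] * 6 + [("10.0.0.9", "auth_success")]
suspects = flag_recon_sources(events)  # ['10.0.0.5']
```

A real deployment would of course stream events from collectors rather than a list, but the principle is the same: each kill-chain stage has observable indicators, and capturing them is what makes detection possible.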
But implementing technology doesn’t complete the picture. You may have the latest logging, host-scanning, IDS, IPS, NAC, SIEM, firewall, content-filter and system/file-monitoring tools at your disposal—tools that, when you purchased them, boasted immense capacity to do x, y and z—but you were probably left to configure and implement them yourself (or invested a great deal of capital into professional services to do it for you). You may then have had to resource-up to be able to maintain and manage those tools: updating signatures, correlation rules and advanced analytics, and understanding behavioral analysis; the list goes on. You must review the thousands of logs and events generated, then correlate the data and combine it with contextual intelligence so that true security threats (rather than false positives) can be prioritized for further investigation and remediation.
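One way to picture that correlation step: weight each alert's severity by contextual intelligence about the affected asset, so true threats rise above the noise of false positives. The scoring scheme, asset inventory and field names below are illustrative assumptions, not any specific SIEM's API:

```python
# Assumed asset-criticality context (a hypothetical inventory).
ASSET_CRITICALITY = {"db-server": 3, "web-server": 2, "workstation": 1}

def prioritize(alerts, min_score=4):
    """Score alerts by severity x asset criticality; drop low scorers."""
    scored = [
        (alert["severity"] * ASSET_CRITICALITY.get(alert["host"], 1), alert)
        for alert in alerts
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [alert for score, alert in scored if score >= min_score]

alerts = [
    {"host": "db-server", "severity": 2, "rule": "unusual-data-export"},
    {"host": "workstation", "severity": 1, "rule": "adware-signature"},
]
top = prioritize(alerts)  # only the db-server alert survives
```

The point of the sketch is the design choice, not the arithmetic: context (what the asset is worth) has to be combined with the raw event before a human ever sees it, or analysts drown in low-value alerts.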
You must also understand your policies and mitigation procedures when threat data or compliance breaches emerge; are they agile enough? How quickly can change control be approved for serious incidents? What if the main approvers are offline? Can multiple incidents be handled effectively? Consider the fact that Heartbleed is still a risk within many organizations today, with systems still unpatched or exploitable. Organizations are simply struggling to keep up with the rate of threats and have uncovered important concerns related to antiquated response procedures and poorly implemented best-practice security operations when developing new workloads and applications. And we must not forget the importance of repeated, almost parrot-fashion education of users on the risks of cyber-threats, what to be on the lookout for and how to respond to unusual or suspicious activity, e-mails or communications. Incredibly weak user passwords are still one of the top vectors of successful attacks to date, and there is no sign of a decrease anytime soon!
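On the weak-password point, even a minimal enforceable policy check catches the worst offenders before they reach production. The specific rules below (length, mixed case, digit, symbol) are just an illustrative baseline, not a recommendation of a particular standard:

```python
import re

def meets_policy(password):
    """Check a password against a simple, illustrative policy."""
    checks = [
        len(password) >= 12,                   # minimum length
        re.search(r"[a-z]", password),         # a lowercase letter
        re.search(r"[A-Z]", password),         # an uppercase letter
        re.search(r"\d", password),            # a digit
        re.search(r"[^A-Za-z0-9]", password),  # a symbol
    ]
    return all(bool(c) for c in checks)
```

Such a check is cheap to enforce at account creation, which is exactly the kind of "enforceable security policy" that closes off the preventable majority of attacks discussed earlier.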
The age of “as-a-service” has come about because of the many cost and efficiency savings gained by putting DevTest and production workloads into hosted or public-cloud environments. And with budgets being squeezed and resources stretched, managed security services are rapidly becoming a major component of many organizations’ cybersecurity strategy.
It is critical to understand your organization’s threat vectors and to review whether existing security and compliance technologies actually mitigate them in the shortest time frame possible. Leaning on services that bring context to the multitude of threats we face is a good idea, and can save a great deal of operational expenditure. Reviewing internal tools, policies and best-practice approaches to security and compliance, and measuring their effectiveness against those threats, will quickly close any gaps already open across the organization.
Ultimately, we live in the age of shared responsibility. As such, we need to learn to share responsibility while focusing on the parts that can be achieved best within the organization. More often than not, new hardware or software by itself is not the answer to security and compliance challenges; it is effective people and processes around how the data that hardware and software creates is used that will separate the organizations that recover from a breach from those that don’t.