Signal-to-Noise and the Future of Not Wasting the Information Security Officer’s Time

Too much detailed data and too few big-picture overviews have been a problem since the first network admin was hired. In the modern digital business, information signals flow from entrenched (e.g. personal computers, servers, access cards, and keypads), emerging (e.g. mobile devices and cloud services), and nascent (e.g. IoT devices and localization beacons) technologies. Expecting a single admin to understand all of these signals (and the inevitable alarms they generate) is plainly infeasible, yet assigning a specialist to each individual technology sacrifices big-picture conclusions.

Monitoring and alarm tools remain focused primarily on individual verticals – server monitoring, network traffic monitoring, mobile device monitoring – with only a handful of companies attempting to draw cross-domain conclusions.

In the future I envision, machine-rate detection covers every potential attack vector (e.g. endpoints, network, etc.) and, more importantly, information from all detection systems is combined to construct a cohesive model of the entire environment. This cohesive model lets a central intelligence both filter out false alarms from individual systems and increase sensitivity to important signals in noisy, high-volume environments. Successful integrations will be synergistic: each security solution will be able to use knowledge from other systems to improve its own sensitivity and specificity.
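To make the idea concrete, here is a minimal sketch of how a central intelligence might fuse alert confidences from independent detectors so that weak signals about the same asset reinforce each other. Everything here (the `Alert` type, the noisy-OR combination rule, the field names) is illustrative, not a description of any particular product:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # e.g. "endpoint", "network", "mobile"
    entity: str        # the asset or user the alert concerns
    confidence: float  # the detector's own estimate, 0.0-1.0

def fused_confidence(alerts):
    """Combine independent detector confidences with a noisy-OR:
    P(threat) = 1 - product(1 - c_i). Two weak, independent signals
    about the same entity yield a stronger combined signal."""
    p_benign = 1.0
    for a in alerts:
        p_benign *= (1.0 - a.confidence)
    return 1.0 - p_benign

# Neither alert would clear a 0.6 threshold alone; together they do.
alerts = [Alert("network", "host-7", 0.4), Alert("endpoint", "host-7", 0.5)]
print(round(fused_confidence(alerts), 2))  # 0.7
```

A real system would also need to decide when alerts refer to the same entity and whether the detectors are truly independent, which is exactly where a shared model of the environment earns its keep.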

There are two primary impediments to transitioning from isolated mechanisms to this cohesive system: the lack of a standard language for reasoning about security events, and the abundance of pre-existing, standalone monitoring and detection systems. Both are substantial challenges, requiring industry-standard definitions and adoption that could take many years.

The right short-term step is to work with what we have now and deploy security solutions that promote compatibility and integration, are designed to operate as components of a larger whole, and include mechanisms for utilizing knowledge from external systems. Having these interfaces available, even if they do not yet speak a standard language, is one step toward a future of high-volume, near-real-time security spanning the entire attack surface of your organization.
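One hedged sketch of what "a mechanism to utilize knowledge from external systems" could look like: a detector that accepts pluggable context providers and blends their risk hints into its own local score. The `ContextProvider` interface, the blending formula, and all names are hypothetical placeholders for whatever integration points a given product actually exposes:

```python
from typing import Protocol

class ContextProvider(Protocol):
    """Anything another security system implements to share context."""
    def risk_hint(self, entity: str) -> float:
        """Return a 0.0-1.0 risk hint for an entity."""
        ...

class EndpointDetector:
    def __init__(self, providers: list[ContextProvider]):
        self.providers = providers

    def score(self, entity: str, local_score: float) -> float:
        # Blend local evidence with the strongest external hint,
        # capping the result at 1.0. The 0.5 weight is arbitrary here;
        # a real system would tune or learn it.
        hints = [p.risk_hint(entity) for p in self.providers]
        boost = max(hints, default=0.0)
        return min(1.0, local_score + 0.5 * boost)
```

The point is not the scoring arithmetic but the shape: because the detector depends only on a narrow interface, providers can be swapped in later as standard languages and new systems emerge, without rewriting the detector itself.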

By Dr. Hamilton Turner, OptioLabs Director of Malware Research
