Although one of the greatest benefits of machine learning solutions is their independence from human interaction, they cannot reach their full potential without some form of help. A User and Entity Behaviour Analytics (UEBA) solution that uses machine learning doesn't require you to create policies, filters, or rules to detect potentially malicious behaviour; it uses its own statistical models to deduce whether or not a user is doing something out of the ordinary.
Human interaction is required, however, to tell the UEBA solution whether or not it is correct. Adding a thumbs-up/thumbs-down result (to put it simply) to the equation teaches the UEBA whether it is on the right track, or whether it is throwing up false positives and needs to recognize the action under scrutiny as legitimate.
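To make the feedback loop concrete, here is a minimal, purely illustrative sketch of the idea: a per-user statistical baseline flags unusual activity, and an analyst's "thumbs-up" folds the flagged value back into the baseline so similar behaviour stops generating false positives. The class and method names are hypothetical, and a real UEBA product would use far richer models than a z-score over daily counts.

```python
import statistics

class BehaviourBaseline:
    """Toy per-user baseline: flag activity counts far from the user's norm.

    Illustrative only -- real UEBA solutions model many features, not just
    a single daily count, and use more sophisticated statistics.
    """

    def __init__(self, threshold=3.0):
        self.threshold = threshold  # z-score cutoff for "anomalous"
        self.history = {}           # user -> list of observed daily counts

    def observe(self, user, count):
        """Record a routine observation for a user."""
        self.history.setdefault(user, []).append(count)

    def is_anomalous(self, user, count):
        """True if `count` deviates sharply from this user's history."""
        past = self.history.get(user, [])
        if len(past) < 2:
            return False  # not enough data to judge yet
        mean = statistics.mean(past)
        stdev = statistics.pstdev(past) or 1.0
        return abs(count - mean) / stdev > self.threshold

    def mark_legitimate(self, user, count):
        """Analyst thumbs-up: fold a flagged value into the baseline so
        similar activity is no longer treated as an outlier."""
        self.observe(user, count)
```

For example, after observing a user's normal activity, a spike is flagged; once an analyst marks it legitimate, the same value no longer trips the alarm:

```python
baseline = BehaviourBaseline()
for count in [10, 12, 11, 9]:
    baseline.observe("alice", count)
baseline.is_anomalous("alice", 100)   # flagged as anomalous
baseline.mark_legitimate("alice", 100)
baseline.is_anomalous("alice", 100)   # no longer flagged
```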
A UEBA solution with machine learning feeds more accurate events into your cybersecurity operations centre. The net result of this added accuracy is far less time spent by your security analysts triaging noise and hand-crafting detection criteria for a SIEM, and far more time spent investigating events that actually matter.
While security monitoring solutions that leverage machine learning can find events which may escape conventional detection measures, can they also augment digital forensics? There is at least one clear use case. If you are performing multiple analyses on varying sets of data pulled from several endpoints, building up a database of findings is beneficial. Leveraging big data and machine learning enables forensic analysts to store vast amounts of objective data about their findings in a repository and detect abnormalities in an automated fashion. For example, if you have a user who has been up to no good, or has been compromised by a malicious actor, you can take memory and file system dumps from your forensic image, normalize or index the data, and have your machine learning solution digest it. The end result is the potential for faster, more objective, more accurate forensic analysis. User validation comes in the form of searching for the flagged data, verifying it, and providing input back into the system to help weed out false positives. The same model used for security monitoring applies, just run historically instead of in real time.
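As a rough illustration of the historic-analysis idea, the sketch below scores indexed forensic artefacts by rarity: a (process, path) combination seen on almost no endpoints is a candidate abnormality worth an analyst's attention. The record fields and the `min_support` cutoff are assumptions for the example; a production pipeline would index far more attributes and use learned models rather than simple frequency counts.

```python
from collections import Counter

def rare_artefacts(records, min_support=2):
    """Flag (process, path) pairs seen fewer than `min_support` times
    across all indexed endpoint dumps -- rarity as a cheap anomaly signal.

    `records` is a list of dicts with hypothetical "process" and "path"
    keys, as might come from normalized memory/file-system dumps.
    """
    counts = Counter((r["process"], r["path"]) for r in records)
    return sorted(pair for pair, n in counts.items() if n < min_support)

# Hypothetical indexed findings from three endpoints: a common system
# process appears everywhere, a one-off binary appears once.
findings = [
    {"process": "svchost.exe", "path": r"C:\Windows\System32"},
    {"process": "svchost.exe", "path": r"C:\Windows\System32"},
    {"process": "svchost.exe", "path": r"C:\Windows\System32"},
    {"process": "updater.exe", "path": r"C:\Users\alice\AppData\Temp"},
]
flagged = rare_artefacts(findings)
```

Here the one-off `updater.exe` entry is flagged for review while the ubiquitous `svchost.exe` entry is not; analyst verification of each flag then feeds back into the system, exactly as in the real-time monitoring loop.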
Machine learning is valuable because it is proactive, but proactive detection is worth little without the ability to react appropriately to the security events and incidents it surfaces. In addition to machine learning, we also need common sense. While common sense is not all that common, we can build it into our cybersecurity practice through the introduction of processes and standards. An incident response plan, especially one that specifically accounts for the insider threat, is one of the keys to success in cybersecurity. If your machine learning solution picks up a potential insider threat, how will your response team react? Will they light their torches and grab their pitchforks, or will they have a calculated, uniform response that includes HR, management, and IT?
Proactivity gives us the jump on any potential attackers, whether inside or out. The inability to properly respond, however, will still leave something to be desired. In short, ensure that alongside the next-generation machine learning solution you implement within your environment resides a tried, tested, and true incident response plan.
Speak to our experts on how ZoneFox can enhance your incident response and protect your users and critical data, or if you'd like to hear about how we enabled a client to enhance threat detection capabilities and bolster visibility around risky behaviour, grab our case study.