By James Pittman
My young kids are already interested in making the leap to multiplayer online games. So, I decided to set some ground rules, because multiplayer online games are very different from playing with each other or with their buddies.
Before they start to play, I told them, they need to observe and learn so they can test and refine their own strategies. Taking the time to understand what other players are doing, and whether those players are doing what they should be doing, will make them better competitors. They’ll be dropped into an environment where different people and devices are talking to each other, and they’ll be interacting and competing with lots of other players in new and different ways.
There are a lot of parallels between that advice and one of my favorite parts of my job as a sales engineer at Netography – helping SecOps, NetOps, and CloudOps teams leverage our Netography Fusion® context creation models. The goal is to find the needle in the haystack, tailor detections to their environment, and take sensible action.
The environment security teams operate in today has changed dramatically, and legacy threat detection models and traditional approaches to context can’t keep up.
Traditional threat detection models are basically one and done: if you see this, do this. Most of these rules come out-of-the-box and can’t be tweaked to fit your environment. The result is an alert cannon: when you’re dealing with thousands of events, customers tell us this approach doesn’t meet their needs because of the massive amount of noise it generates.
More modern detection models are predicated on having some sense of context. Instead of relying on source/destination IPs and ports to determine whether something should be allowed or denied, security and network teams today want relatable names and relevant groups that identify devices and users. This is a step in the right direction, but too often context is still treated as a singular, static state: create a label once and assume it stays true thereafter. That approach to context is based on what a device, application, or user is designed or supposed to do, not on real-time behavioral patterns.
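To make that distinction concrete, here’s a minimal Python sketch (with made-up hosts and labels, not how Fusion is implemented) contrasting a static label assigned once with labels derived from what the host is actually observed doing:

```python
# Hypothetical sketch: static context vs. context derived from observed behavior.

# Static context: a label assigned once (say, from a CMDB export) and assumed to stay true.
static_labels = {"10.0.5.20": "web-server"}

# Observed flow records for the same host: (src, dst, dst_port) -- made-up sample data.
observed_flows = [
    ("10.0.9.7", "10.0.5.20", 443),    # clients hitting it over HTTPS, as expected
    ("10.0.5.20", "203.0.113.9", 25),  # it is also sending SMTP, which is not its job
]

def behavioral_labels(host, flows):
    """Derive labels from what the host is actually doing on the wire."""
    labels = set()
    for src, dst, port in flows:
        if dst == host and port == 443:
            labels.add("serves-https")
        if src == host and port == 25:
            labels.add("sends-smtp")
    return labels

host = "10.0.5.20"
print(static_labels[host])                              # what it is supposed to be
print(sorted(behavioral_labels(host, observed_flows)))  # what it is actually doing right now
```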
Additionally, no one has perfect knowledge of all the participants across their entire environment. We may have some hypotheses drawn from authoritative systems such as the CMDB, DNS, authentication systems, and IAM, along with network models, maps, and architectures. But not every source is completely up to date. And none of them answer questions like: Is this server only doing these things? Is it doing other things as well? Those are the kinds of questions we need to be able to ask.
Here’s where Netography context creation models come in, along with two important concepts: dynamic context and chaining logic.
When I work with a customer that already has some labels and tags from cloud providers, CSPM tools such as Wiz, endpoint software like CrowdStrike, and other systems in their infrastructure and security applications, we can bring those into Netography Fusion and augment from there. But if you are starting from ground zero or lack confidence in the sources you have, we can start with a clean slate.
Within a few minutes, Netography Fusion allows you to see the activity of the different entities across your entire network – cloud and on-prem – and label them. Very quickly you have more context, including lists of server types by services, the client types hitting them, and more. When you search your enriched metadata records, context models surface everything with that label, so you can verify what it is doing and what is happening to it, with updates in real-time as the participants in your environment change. Built for dynamic environments, our context creation models address concerns about visibility gaps, untrusted third-party sources, or data sources that may not be updated quickly enough.
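As a rough illustration of what that augmentation step looks like conceptually (hypothetical source names and field layout, not Fusion’s actual API), you can think of it as merging the labels you already trust with labels derived from observed activity:

```python
# Hypothetical sketch: merge labels from existing sources, then augment with
# labels derived from observed network activity.

from collections import defaultdict

# Labels already available from other systems (cloud tags, CSPM, endpoint tools).
external_labels = {
    "10.0.5.20": {"wiz": ["prod", "pci"], "crowdstrike": ["server"]},
    "10.0.7.31": {"aws": ["env:prod", "role:db"]},
}

# Flow metadata observed on the network: (src, dst, dst_port).
flows = [
    ("10.0.9.7", "10.0.5.20", 443),
    ("10.0.5.20", "10.0.7.31", 1433),
]

# Tiny port-to-service hint table, just for this sketch.
PORT_HINTS = {443: "https-server", 1433: "mssql-server"}

def build_context(external, flows):
    context = defaultdict(set)
    # Start from whatever trusted sources already exist.
    for ip, sources in external.items():
        for source, labels in sources.items():
            context[ip].update(f"{source}:{label}" for label in labels)
    # Augment with what each host is observed doing.
    for src, dst, port in flows:
        if port in PORT_HINTS:
            context[dst].add(f"observed:{PORT_HINTS[port]}")
            context[src].add(f"observed:client-of-{PORT_HINTS[port]}")
    return context

for ip, labels in sorted(build_context(external_labels, flows).items()):
    print(ip, sorted(labels))
```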
With tags and labels in place, we can initiate more complex queries and ask very specific questions. We start with a single query and layer additional questions as part of an interactive discovery phase to focus on what the customer cares about and filter out noise. Let’s look at an example.
Say the customer wants to know more about what’s going on with their database servers. In a normal architecture, we expect certain things to hit the DB server. If we see a web server talking to a DB server, that is usually normal. But if we see a bunch of desktops hitting the DB server directly, we need to take a closer look at that activity because it is abnormal.
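In plain code, that first check might look something like this sketch (hypothetical labels and flow fields; in practice this is expressed as a query over the enriched flow metadata):

```python
# Hypothetical sketch: flag flows where a desktop talks directly to a database server.

flows = [
    {"src": "10.0.5.20", "dst": "10.0.7.31", "dst_port": 1433},  # web server -> DB: expected
    {"src": "10.0.3.15", "dst": "10.0.7.31", "dst_port": 1433},  # desktop -> DB: take a closer look
]

labels = {
    "10.0.5.20": {"web-server"},
    "10.0.3.15": {"desktop"},
    "10.0.7.31": {"db-server"},
}

def desktops_hitting_db(flows, labels):
    """Web servers talking to the DB is normal; desktops doing it directly is worth a look."""
    return [
        f for f in flows
        if "db-server" in labels.get(f["dst"], set())
        and "desktop" in labels.get(f["src"], set())
    ]

for flow in desktops_hitting_db(flows, labels):
    print("review:", flow)
```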
For example, with Microsoft SQL Server, the SQL Browser service is used for certain functions, while direct access runs SQL queries against the engine on a different port. So if we dig deeper and see a few SQL Browser lookups hitting the server, that may be activity from the reporting team. But if we see a desktop hitting the backend directly with SQL queries, that’s likely a database admin. So now, we can ask more questions about that admin.
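Continuing that sketch with Microsoft SQL Server’s default ports (UDP 1434 for the SQL Browser service, TCP 1433 for the database engine; your environment may use different ports), the two kinds of access can be separated by protocol and port:

```python
# Hypothetical sketch: separate SQL Browser lookups from direct engine queries
# using the default Microsoft SQL Server ports (UDP 1434 vs. TCP 1433).

flows = [
    {"src": "10.0.3.15", "dst": "10.0.7.31", "dst_port": 1434, "proto": "udp"},
    {"src": "10.0.3.16", "dst": "10.0.7.31", "dst_port": 1433, "proto": "tcp"},
]

def classify_db_access(flow):
    if flow["proto"] == "udp" and flow["dst_port"] == 1434:
        return "sql-browser-lookup"   # instance discovery: likely reporting tools
    if flow["proto"] == "tcp" and flow["dst_port"] == 1433:
        return "direct-sql-query"     # direct engine access: likely a DBA, worth more questions
    return "other"

for flow in flows:
    print(flow["src"], "->", flow["dst"], classify_db_access(flow))
```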
By chaining logic and building more complex queries, we can exclude more noise and produce a stronger signal. When the combined signal from chaining together probabilistic clues exceeds the threshold for your environment, you can create high-fidelity detection models.
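One way to picture that threshold idea is a simple weighted score over the clues we’ve chained together; the clue names and weights below are made up for illustration and would be tuned to each environment:

```python
# Hypothetical sketch: chain individual clues, weight them, and alert only
# when the combined score crosses a per-environment threshold.

clues = {
    "src_is_desktop": 0.3,          # desktops rarely need direct DB access
    "direct_sql_query": 0.3,        # TCP 1433, not a SQL Browser lookup
    "src_not_in_dba_group": 0.4,    # source isn't labeled as a known DB admin workstation
    "outside_business_hours": 0.2,  # timing adds a little more weight
}

THRESHOLD = 0.8  # tuned per environment

def score(observed):
    """Sum the weights of the clues that fired for this activity."""
    return sum(weight for clue, weight in clues.items() if clue in observed)

observed = {"src_is_desktop", "direct_sql_query", "src_not_in_dba_group"}
total = score(observed)
print(f"score={total:.1f}", "ALERT" if total >= THRESHOLD else "no action")
```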
The days of being stuck with pre-canned rules, or living with gaps in network visibility and understanding, are gone. Netography Fusion context creation models give you the insight to find the contours of complex behaviors and tailor your detection models to trigger events and initiate playbooks on the things that matter.