Geographically Improbable Access Detected for Privileged Accounts

Description

Detecting when the same account logs in twice within a short period of time from locations that are very far apart is key to finding account compromise or credential sharing for your privileged accounts.


Use Case

Insider Threat

Category

Insider Threat, Account Sharing

Security Impact

When the same account is logged into within a short time period from distant locations, that can indicate one of two different problems. The first is account compromise -- threat actors who successfully acquire a user's credentials will usually log in from the same general region the user lives in to reduce suspicion, but they sometimes make mistakes (or are simply less diligent), and sometimes users travel to other regions without the threat actor noticing. The other big scenario this detection can find is intentional account sharing. Suppose an executive can't be bothered to follow the standard procedure for granting her EA access to her account, and just shares her password. When that executive travels to a distant region but the EA stays at home, this search will alert.

Alert Volume

Low

SPL Difficulty

Advanced

Journey

Stage 4

MITRE ATT&CK Tactics

Privilege Escalation
Persistence

MITRE ATT&CK Techniques

Valid Accounts

MITRE Threat Groups

APT18
APT28
APT3
APT32
APT33
APT39
APT41
Carbanak
Dragonfly 2.0
FIN10
FIN4
FIN5
FIN6
FIN8
Leviathan
Night Dragon
OilRig
PittyTiger
Soft Cell
Stolen Pencil
Suckfly
TEMP.Veles
Threat Group-1314
Threat Group-3390
menuPass

Data Sources

Authentication
AWS
Audit Trail

How to Implement

First, use the "Pull List of Privileged Users" content to generate a list of privileged users (link). Then you just need a log source that provides external IP addresses. If you are using SFDC data, as we are in the live example, it will work easily. Otherwise, any data source that is compliant with the Splunk Common Information Model (so that it contains a src_ip field) should work automatically.
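Before wiring up the full detection, a quick sanity check like the sketch below can confirm that your data exposes src_ip and that your privileged-user lookup resolves. The index, sourcetype, and the privileged_users.csv lookup name are assumptions here, not shipped values; substitute the output of the "Pull List of Privileged Users" content and your own authentication data.

index=salesforce sourcetype="sfdc:loginhistory" ```placeholder index and sourcetype -- point this at your own authentication data```
| lookup privileged_users.csv user OUTPUT risk_score ```hypothetical lookup name; use whatever "Pull List of Privileged Users" produces```
| where risk_score > 0
| stats count by user, src_ip

If this returns rows with populated src_ip values for your privileged users, the full search will have everything it needs.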

Known False Positives

There are two big buckets of false positives. One is where the GeoIP data is unreliable -- particularly outside of major economic areas (e.g., the US and the larger countries in Western Europe), the free MaxMind GeoIP database that ships with Splunk Enterprise tends to be less accurate, which leads some customers to add the paid version to their Splunk installations. The other big category is where IP addresses are centralized, such as someone in the US using a Korean VPN service, or using a network service that originates nationwide traffic from the same set of IPs (for example, years ago all traffic from a major US cellular carrier originated from the same IP space, which was geolocated to Ohio).

How To Respond

When this fires, you should reach out to the user involved to see if they're aware of why their account was used in two places. You should also review what actions were taken, particularly if one of the locations was unusual. If the user is not aware of the reason, it's important to ask whether they have shared their credentials with anyone else. You can also look at what other activity occurred from the same remote IP addresses.

Help

Geographically Improbable Access Detected for Privileged Accounts Help

This example leverages the Simple Search assistant. Our example dataset is a collection of anonymized Salesforce.com logs, during which someone logs in from opposite ends of the earth. Our live search looks for the same activity in the standard index and sourcetype for SFDC data. For this use case, though, you can use almost any authentication data source, including VPN logs.

SPL for Geographically Improbable Access Detected for Privileged Accounts

Demo Data

First we bring in our basic demo dataset -- in this case, the anonymized Salesforce.com login logs described above. We're using a macro called Load_Sample_Log_Data to wrap around | inputlookup, just to keep loading the demo data clean.
Next we pull the last src_ip for the same user using streamstats (sorted based on the user).
Next we look up the user in our Privileged User lookup. This will add a number of fields, including the risk_score field we'll use in the next command.
Here we filter for privileged users (where the risk score is greater than 0), and for events in a short enough time range that it would be difficult to travel to distant parts of the globe.
Here we resolve the last src_ip to a physical location and store it in a field so that we can conveniently use it.
Now we resolve the *current* src_ip.
Now we calculate the distance using an approximation that accounts for the curvature of the earth. The calculation is adapted from https://answers.splunk.com/answers/317935/calculating-distances-between-points-with-geoip-us.html#answer-568451
Here we pull the date of the event, to make this easier to run over longer time windows.
Finally we use stats to collect all of the values into one line per user, per day, and per set of locations. We're using some source-specific fields here -- if you're using a log source like VPN, then you might choose other fields. A sketch of the full demo search, with the assumed pieces called out, follows these steps.
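Putting those steps together, a minimal sketch of the demo search could look like the following. The Load_Sample_Log_Data macro comes from the description above, but its argument, the privileged_users.csv lookup name, the eight-hour travel window, and the output field names are assumptions to adjust for your environment. The inline comments mark the pieces that are guesses rather than part of the shipped content.

`Load_Sample_Log_Data("Geographically Improbable Access Detected for Privileged Accounts")`
| sort 0 user _time
| streamstats current=f window=1 last(src_ip) as last_src_ip last(_time) as last_time by user
| lookup privileged_users.csv user OUTPUT risk_score ```hypothetical lookup built by "Pull List of Privileged Users"```
| where risk_score > 0 AND _time - last_time < 8*3600 ```eight hours is an assumed travel window```
| iplocation last_src_ip
| rename lat as last_lat, lon as last_lon, City as last_city, Country as last_country
| iplocation src_ip
| eval rlat1 = pi()*lat/180, rlat2 = pi()*last_lat/180, dlat = pi()*(lat-last_lat)/180, dlon = pi()*(lon-last_lon)/180
| eval a = sin(dlat/2)*sin(dlat/2) + cos(rlat1)*cos(rlat2)*sin(dlon/2)*sin(dlon/2)
| eval distance_km = round(6371 * 2 * atan2(sqrt(a), sqrt(1-a)), 0) ```great-circle approximation adapted from the answers.splunk.com post above```
| eval date = strftime(_time, "%Y-%m-%d")
| stats values(last_city) as previous_city values(City) as current_city max(distance_km) as distance_km count by user, date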

Live Data

First we bring in our basic dataset -- the same Salesforce.com login activity, this time from your live index and sourcetype.
Next we pull the last src_ip for the same user using streamstats (sorted based on the user).
Next we look up the user in our Privileged User lookup. This will add a number of fields, including the risk_score field we'll use in the next command.
Here we filter for privileged users (where the risk score is greater than 0), and for events in a short enough time range that it would be difficult to travel to distant parts of the globe.
Here we resolve the last src_ip to a physical location and store it in a field so that we can conveniently use it.
Now we resolve the *current* src_ip.
Now we calculate the distance using an approximation that accounts for the curvature of the earth. The calculation is adapted from https://answers.splunk.com/answers/317935/calculating-distances-between-points-with-geoip-us.html#answer-568451
Here we pull the date of the event, to make this easier to run over longer time windows.
Finally we use stats to collect all of the values into one line per user, per day, and per set of locations. We're using some source-specific fields here -- if you're using a log source like VPN, then you might choose other fields. A sketch showing how the live search differs from the demo sketch follows these steps.
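The live version differs from the demo sketch only in the first line: instead of the Load_Sample_Log_Data macro, start from wherever your real login events live. The index and sourcetype below are placeholders, not necessarily what your Salesforce add-on uses.

index=salesforce sourcetype="sfdc:loginhistory" ```placeholders -- substitute the index and sourcetype of your live SFDC, VPN, or other authentication data```
| sort 0 user _time
| streamstats current=f window=1 last(src_ip) as last_src_ip last(_time) as last_time by user

From the streamstats command onward, the pipeline is identical to the demo sketch above.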

Screenshot of Demo Data