Symantec Endpoint Protection Logs

Data Source Onboarding Guide Overview

Overview  

Welcome to the Splunk Data Source Onboarding Guides (DSOGs)!

Splunk has lots of docs, so why are we creating more? The primary goal of the DSOGs is to provide you with a curated, easy-to-digest view of the most common ways that Splunk users ingest data from our most popular sources, including how to configure the systems that will send us data (such as turning on AWS logging or Windows Security's process-launch logs). While these guides won't cover every single possible option for installation or configuration, they will give you the most common, easiest way forward.

How to use these docs: We've broken the docs out into different segments that get linked together. Many of them will be shared across multiple products. We suggest clicking the "Mark Complete" button above to remind yourself of those you've completed. Since this info will be stored locally in your browser, you won't have to worry about it affecting anyone else's view of the document. And when you're reading about ingesting Sysmon logs, for example, it's a convenient way to keep track of the fact that you already installed the forwarder in order to onboard your Windows Security logs.

So, go on and dive right in! And don't forget, Splunk is here to make sure you're successful. Feel free to ask questions of your Sales Engineer or Professional Services Engineer, if you run into trouble. You can also look for answers or post your questions on https://answers.splunk.com/.

General Infrastructure

Instruction Expectations and Scaling  

Expectations

This doc is intended to be an easy guide to onboarding data into Splunk, as opposed to a comprehensive set of docs. We've specifically chosen only straightforward technologies to implement here (avoiding ones that have lots of complications), but if at any point you feel like you need more traditional documentation for the deployment or usage of Splunk, Splunk Docs has you covered with over 10,000 pages of docs (and that's before counting other languages!).

Because simpler is almost always better when getting started, we are also not worrying about more complicated capabilities like Search Head Clustering, Indexer Clustering, or anything else of a similar vein. If you do have those requirements, Splunk Docs is a great place to get started, and you can also always avail yourself of Splunk Professional Services so that you don't have to worry about any of the setup.

Scaling

While Splunk scales to hundreds or thousands of indexers with ease, we usually have some pretty serious architecture conversations before ordering tons of hardware. That said, these docs aren't just for lab installs. We've found that they will work just fine for most customers in the 5 GB to 500 GB range, and even for some larger environments! Regardless of whether you have a single Splunk box doing everything, or a distributed install with a Search Head and a set of Indexers, you should be able to get the data and the value flowing quickly.

There's one important note: the first request we get for orchestration as customers scale is to distribute configurations across many different universal forwarders. Imagine that you've just vetted out the Windows Process Launch Logs guide on a few test systems, and it's working great. Now you want to deploy it to 500, or 50,000, other Windows boxes. Well, there are a variety of ways to do this:

  • The standard Splunk answer is to use the Deployment Server. The Deployment Server is designed for exactly this task, and is free with Splunk. We aren't going to document it here, mostly because it's extremely well documented by our EDU and on docs.splunk.com, here. (A minimal serverclass.conf sketch follows this list for reference.)
  • If you are a decent-sized organization, you've probably already got a way to deploy configurations and code, like Puppet, Chef, SCCM, Ansible, etc. All of those tools are used to deploy Splunk on a regular basis. Now, you might not want to go down this route if it requires onerous change control, reliance on other teams, etc. -- many large Splunk environments with well-developed software deployment systems prefer to use the Deployment Server because it can be owned by the Splunk team and is optimized for Splunk's needs. But many customers are very happy with using Puppet to distribute Splunk configurations.
Ultimately, Splunk configurations are almost all just text files, so you can distribute the configurations with our packaged software, with your own favorite tools, or even by just copying configuration files around.
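As a rough illustration of the Deployment Server approach, the serverclass.conf below maps a group of forwarders to an app. This is only a sketch: the server class name, hostname pattern, and app name are placeholders, not values from this guide.

Sample serverclass.conf
# Map all forwarders whose hostnames match the pattern to a server class
[serverClass:windows_endpoints]
whitelist.0 = *.mycompany.local

# Deploy this app (placed in $SPLUNK_HOME/etc/deployment-apps on the
# deployment server) to every member of the class, restarting splunkd
# on the forwarder after it arrives
[serverClass:windows_endpoints:app:Splunk_TA_windows]
restartSplunkd = true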

Indexes and Sourcetypes Overview  

Overview

The DSOGs talk a lot about indexes and sourcetypes. Here's a quick overview.

Splexicon (Splunk's Lexicon, a glossary of Splunk-specific terms) defines an index as the repository for data in Splunk Enterprise. When Splunk Enterprise indexes raw event data, it transforms the data into searchable events. Indexes are the collections of flat files on the Splunk Enterprise instance. That instance is known as an Indexer because it stores data. Splunk instances that users log into and run searches from are known as Search Heads. When you have a single instance, it takes on both the search head and indexer roles.

"Sourcetype" is defined as a default field that identifies the data structure of an event. A sourcetype determines how Splunk Enterprise formats the data during the indexing process. Example sourcetypes include access_combined and cisco_syslog.

In other words, an index is where we store data, and the sourcetype is a label given to similar types of data. All Windows Security logs will have a sourcetype of WinEventLog:Security, which means you can always search for sourcetype=wineventlog:security (when searching, the word sourcetype is case sensitive, but the value is not).

Why is this important? We're going to guide you to use indexes that our professional services organization recommends to customers as an effective starting point. Using standardized sourcetypes (those shared by other customers) makes it much easier to use Splunk and avoid headaches down the road. Splunk will allow you to use any sourcetype you can imagine, which is great for custom log sources, but for common log sources, life is easier sticking with standard sourcetypes. These docs will walk you through standard sourcetypes.

Implementation

Below is a sample indexes.conf that will prepare you for all of the data sources we use in these docs. You will note that we separate OS logs from Network logs and Security logs from Application logs. The idea here is to separate them for performance reasons, but also for isolation purposes: you may want to expose the application or system logs to people who shouldn't view security logs. Putting them in separate indexes prevents that.

To install this configuration, you should download the app below and put it in the apps directory.

For Windows systems, this will typically be: c:\Program Files\Splunk\etc\apps. Once you've extracted the app there, you can restart Splunk via the Services Control Panel applet, or by running "c:\Program Files\Splunk\bin\splunk.exe" restart.

For Linux systems, this will typically be /opt/splunk/etc/apps/. Once you've extracted the app there, you can restart Splunk by running /opt/splunk/bin/splunk restart.
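For example, on a Linux indexer the whole process might look like the two commands below. The downloaded filename is a placeholder; if the app arrives as a .zip instead of a .tgz, use unzip rather than tar.

# extract the downloaded app into the apps directory
tar -xvzf dsog_indexes_app.tgz -C /opt/splunk/etc/apps/
# restart Splunk so it picks up the new indexes
/opt/splunk/bin/splunk restart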

You can view the indexes.conf below, but it's easiest to just use the "Click here to download a Splunk app with this indexes.conf" link below.

Splunk Cloud Customers: You won't copy the files onto your Splunk servers because you don't have access. You could go one-by-one through the UI and create all of the indexes below, but it might be easiest if you download the app, and open a ticket with CloudOps to have it installed.


Sample indexes.conf
# Overview. Below you will find the basic indexes.conf settings for
# setting up your indexes in Splunk. We separate into different indexes 
# to allow for performance (in some cases) or data isolation in others. 
# All indexes come preconfigured with a relatively short retention period 
# that should work for everyone, but if you have more disk space, we 
# encourage (and usually see) longer retention periods, particularly 
# for security customers.

# Endpoint Indexes used for Splunk Security Essentials. 
# If you have the sources, other standard indexes we recommend include:
# epproxy - Local Proxy Activity

[epav]
coldPath = $SPLUNK_DB/epav/colddb
homePath = $SPLUNK_DB/epav/db
thawedPath = $SPLUNK_DB/epav/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[epfw]
coldPath = $SPLUNK_DB/epfw/colddb
homePath = $SPLUNK_DB/epfw/db
thawedPath = $SPLUNK_DB/epfw/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[ephids]
coldPath = $SPLUNK_DB/ephids/colddb
homePath = $SPLUNK_DB/ephids/db
thawedPath = $SPLUNK_DB/ephids/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[epintel]
coldPath = $SPLUNK_DB/epintel/colddb
homePath = $SPLUNK_DB/epintel/db
thawedPath = $SPLUNK_DB/epintel/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[oswin]
coldPath = $SPLUNK_DB/oswin/colddb
homePath = $SPLUNK_DB/oswin/db
thawedPath = $SPLUNK_DB/oswin/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[oswinsec]
coldPath = $SPLUNK_DB/oswinsec/colddb
homePath = $SPLUNK_DB/oswinsec/db
thawedPath = $SPLUNK_DB/oswinsec/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[oswinscript]
coldPath = $SPLUNK_DB/oswinscript/colddb
homePath = $SPLUNK_DB/oswinscript/db
thawedPath = $SPLUNK_DB/oswinscript/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[oswinperf]
coldPath = $SPLUNK_DB/oswinperf/colddb
homePath = $SPLUNK_DB/oswinperf/db
thawedPath = $SPLUNK_DB/oswinperf/thaweddb
frozenTimePeriodInSecs = 604800 
#7 days

[osnix]
coldPath = $SPLUNK_DB/osnix/colddb
homePath = $SPLUNK_DB/osnix/db
thawedPath = $SPLUNK_DB/osnix/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[osnixsec]
coldPath = $SPLUNK_DB/osnixsec/colddb
homePath = $SPLUNK_DB/osnixsec/db
thawedPath = $SPLUNK_DB/osnixsec/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[osnixscript]
coldPath = $SPLUNK_DB/osnixscript/colddb
homePath = $SPLUNK_DB/osnixscript/db
thawedPath = $SPLUNK_DB/osnixscript/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[osnixperf]
coldPath = $SPLUNK_DB/osnixperf/colddb
homePath = $SPLUNK_DB/osnixperf/db
thawedPath = $SPLUNK_DB/osnixperf/thaweddb
frozenTimePeriodInSecs = 604800 
#7 days

# Network Indexes used for Splunk Security Essentials
# If you have the sources, other standard indexes we recommend include:
# netauth - for network authentication sources
# netflow - for netflow data
# netids - for dedicated IPS environments
# netipam - for IPAM systems
# netnlb - for non-web server load balancer data (e.g., DNS, SMTP, SIP, etc.)
# netops - for general network system data (such as Cisco IOS non-netflow logs)
# netvuln - for Network Vulnerability Data

[netdns]
coldPath = $SPLUNK_DB/netdns/colddb
homePath = $SPLUNK_DB/netdns/db
thawedPath = $SPLUNK_DB/netdns/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[mail]
coldPath = $SPLUNK_DB/mail/colddb
homePath = $SPLUNK_DB/mail/db
thawedPath = $SPLUNK_DB/mail/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[netfw]
coldPath = $SPLUNK_DB/netfw/colddb
homePath = $SPLUNK_DB/netfw/db
thawedPath = $SPLUNK_DB/netfw/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[netops]
coldPath = $SPLUNK_DB/netops/colddb
homePath = $SPLUNK_DB/netops/db
thawedPath = $SPLUNK_DB/netops/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[netproxy]
coldPath = $SPLUNK_DB/netproxy/colddb
homePath = $SPLUNK_DB/netproxy/db
thawedPath = $SPLUNK_DB/netproxy/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[netvpn]
coldPath = $SPLUNK_DB/netvpn/colddb
homePath = $SPLUNK_DB/netvpn/db
thawedPath = $SPLUNK_DB/netvpn/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days


# Splunk Security Essentials doesn't have examples of Application Security, 
# but if you want to ingest those logs, here are the recommended indexes:
# appwebint - Internal WebApp Access Logs
# appwebext - External WebApp Access Logs
# appwebintrp - Internal-facing Web App Load Balancers
# appwebextrp - External-facing Web App Load Balancers
# appwebcdn - CDN logs for your website
# appdbserver - Database Servers
# appmsgserver - Messaging Servers
# appint - App Servers for internal-facing apps 
# appext - App Servers for external-facing apps 

Validation

Once this is complete, you will be able to find the list of indexes that the system is aware of by logging into Splunk and going to Settings -> Indexes.
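If you prefer to check from the search bar instead of the UI, a quick option is the REST endpoint search below (this assumes your role has permission to query the indexes endpoint on the search peers):

| rest /services/data/indexes | dedup title | table title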

Forwarder on Windows Systems  

Overview

Installing the Windows forwarder is a straightforward process, similar to installing any Windows program. These instructions will walk you through a manual installation to get you started (perfect for a lab, a few laptops, or when you're just getting started on domain controllers).

Implementation

Note for larger environments: When you want to automatically roll out the forwarder to hundreds (or thousands, or hundreds of thousands) of systems, you will want to leverage your traditional software-deployment techniques. The Splunk forwarder is an MSI package, and we have docs on recommended ways to deploy it.

Of course, you can also deploy it with traditional system-configuration management software. This can vary a lot from environment to environment. For this doc we'll just walk you through the installation so that you know what's coming.

The first thing to do is download the Universal Forwarder from Splunk's website (https://www.splunk.com/en_us/download/universal-forwarder.html). This is a separate download from the main Splunk installer, as the universal forwarder is lightweight, so it can be installed on all of the systems in your environment. Most users today will download the x64 version as an MSI installer.

When you double click the downloaded file, the standard MSI installer will appear.

The initial installer screen for the Splunk Forwarder. Click Next to continue; don't worry about customizing the options. You can also install the package silently, as shown below.
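If you do eventually want to script the install, a typical silent installation looks something like the line below. The MSI filename is a placeholder for whatever version you downloaded, and additional properties (such as the receiving indexer) can be set on the command line, but here we rely on the outputs.conf app deployed later.

msiexec.exe /i splunkforwarder-x64-release.msi AGREETOLICENSE=Yes /quiet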

Don't worry about the Cloud checkbox -- we will use the same settings for both.

While you can click "Customize Options" here and manually insert the address of your indexers, manually choose the log sources you would like to index, and so on, we generally don't recommend that unless you're never going to move beyond the one source you're looking at. (It's harder to go find those settings later and apply them to other systems.) Ignore "Customize Options" and click "Next." The setup will now go through its process, and you'll be finished with a freshly installed forwarder. There are three more steps you'll want to take before you can see the data in Splunk:

  • You will need an outputs.conf to tell the forwarder where to send data (next section)
  • You will need an inputs.conf to tell the forwarder what data to send (below, in the Splunk Configuration for Data Source)
  • You will need an indexes.conf on the indexers to tell them where to put the data that is received (Previous section)

Validation

You can now check Task Manager, where you should see splunkd.exe running. Alternatively, check under Services in the Control Panel: you will see SplunkForwarder listed and started.
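From an elevated command prompt, you can also confirm the service state directly (this assumes the default service name, SplunkForwarder):

sc query SplunkForwarder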

Sending Data from Forwarders to Indexers  

Overview

Every Splunk system in the environment that is not an indexer (i.e., any system that doesn't store its data locally) should have an outputs.conf that points to your indexers, whether it's a Universal Forwarder on a Windows host, a Linux heavy forwarder pulling the more difficult AWS logs, or even a dedicated Search Head that dispatches searches to your indexers.

Implementation

Fortunately, the outputs.conf will be the same across the entire environment, and it is fairly simple. There are three steps:

  1. Create the app using the button below (Splunk Cloud customers: use the app you received from Splunk Cloud).
  2. Extract the downloaded file (it will be a zip file).
  3. Place the extracted app in the etc/apps directory.

For Windows systems, this will typically be: c:\Program Files\Splunk\etc\apps. Once you've extracted the app there, you can restart Splunk via the Services Control Panel applet, or by running "c:\Program Files\Splunk\bin\splunk.exe" restart.

For Linux systems, this will typically be /opt/splunkforwarder/etc/apps/. Once you've extracted the app there, you can restart Splunk by running /opt/splunkforwarder/bin/splunk restart.

For customers not using Splunk Cloud:

Sample outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = MySplunkServer.mycompany.local:9997

[tcpout-server://MySplunkServer.mycompany.local:9997]

Here is the completed folder.
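If you have more than one indexer, you can list them comma-separated in the same stanza, and the forwarder will automatically load-balance between them. The hostnames below are placeholders:

[tcpout:default-autolb-group]
server = splunkidx1.mycompany.local:9997, splunkidx2.mycompany.local:9997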

Validation

Run a search in the Splunk environment for the host you've installed the forwarder on. E.g., index=* host=mywinsystem1*

You can also review all hosts that are sending data via | metadata type=hosts index=*

Splunk Configuration for Data Source

Overview  

Most Symantec environments will have a Windows server running the Symantec Endpoint Protection Manager (SEPM), which is where you install the Universal Forwarder (UF), the Symantec Technology Add-on (TA), and the inputs.conf. You don't need to apply this configuration to all of the devices in your environment.

SEPM creates log files, called dump files, in the local file system. The UF is configured to ingest these log files through a deployed configuration file called inputs.conf.

Sizing Estimate  

Sizing of the SEPM logs depends on policy, activity, and the number of clients. In table 1-6 (page 22) of the Symantec Endpoint Protection 14 Sizing and Scalability Best Practices White Paper, Symantec gives an example of average events per log. Based on this example, the daily ingest into Splunk for virus logs could be roughly 0.5 MB per 1,000 clients per day (so a 20,000-client environment, for example, would generate on the order of 10 MB of virus log data per day).

Install the Technology Add-On -- TA  

Overview

Splunk has a detailed TA that supports ingesting all the different data types generated by your Symantec Endpoint Protection Manager. Like all Splunk TAs, it also includes everything needed to parse out the fields and give them names that are compliant with Splunk’s Common Information Model (CIM), so they can easily be used by the searches in Splunk Security Essentials (SSE), along with searches you will find in other community-supported and premium apps.

Implementation

Find the TA, along with all your other Splunk app needs, on SplunkBase. You can go to https://splunkbase.splunk.com/ and search for it, or follow the direct link here.

As with all Splunk TAs, we recommend you deploy it to all parts of your Splunk environment, for simplicity and uniformity. To install the app, start by downloading the file from the SplunkBase URL just shown and then extract it into the %SPLUNK_HOME%\etc\apps folder. For the universal forwarder on the SEPM host, that will typically be C:\Program Files\SplunkUniversalForwarder\etc\apps.

Note: The app itself is a .tgz file, or a gzipped tarball. If you're running in a pure Windows environment, this means that you will need a third-party program to extract it. Fortunately, .tgz is one of the most common archive formats after .zip, so virtually any extraction program you have (WinZip, 7-Zip, WinRAR, etc.) will extract it.

Once you’ve extracted the app, you can restart Splunk via the Services Control Panel applet, or by just running:

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" restart
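If you are also deploying the TA to Linux indexers or search heads, the equivalent steps look roughly like the commands below. The filename is a placeholder for the version you downloaded from SplunkBase.

# extract the TA into the apps directory
tar -xvzf splunk-add-on-for-symantec-endpoint-protection.tgz -C /opt/splunk/etc/apps/
# restart Splunk to load the new add-on
/opt/splunk/bin/splunk restart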

Validation

You can make sure that Splunk has picked up the presence of the app by running:

"C:\program files\splunk\bin\splunk.exe" display app
, will, after asking you to log in, provide you with a list of installed apps. Usually, if you see the folder listed alongside the other apps (learned, search, splunk_httpinput, etc.) you will know that it’s there successfully.

Splunk Cloud Customers: you won't be copying any files or folders to your indexers or search heads, but good news! Even though the Splunk Add-on for Symantec Endpoint Protection is not Cloud Self-Service Enabled, you will still be able to open a ticket with Cloud Ops and be ready to go in short order.

Symantec Indexes and Sourcetypes  

Overview

Among Splunk's 15,000+ customers, we've done a lot of implementations, and we've learned a few things along the way. While you can use any sourcetypes or indexes that you want in "Splunk land," we've found that the most successful customers follow specific patterns, as it sets them up for success moving forward.

Implementation

The most common SEPM data types are the Security Log, System Log, and Application Log, but there are a few others as well. Here is a list of the recommended indexes and sourcetypes.

Data Type                             Input (inputs.conf, below)   Sourcetype                    Index
Client scan data                      agt_scan.tmp                 symantec:ep:scan:file         epav
Client risk data                      agt_risk.tmp                 symantec:ep:risk:file         epav
Client proactive threat data          agt_proactive.tmp            symantec:ep:proactive:file    epav
Application and device control data   agt_behavior.tmp             symantec:ep:behavior:file     ephids
Client security data                  agt_security.tmp             symantec:ep:security:file     ephids
Server client data                    scm_agent_act.tmp            symantec:ep:agent:file        ephids
Client traffic data                   agt_traffic.tmp              symantec:ep:traffic:file      epfw
Client packet data                    agt_packet.tmp               symantec:ep:packet:file       epfw
Server system data                    scm_system.tmp               symantec:ep:scm_system:file   epav
Client system data                    agt_system.tmp               symantec:ep:agt_system:file   epav
Server policy data                    scm_policy.tmp               symantec:ep:policy:file       epav
Server administration data            scm_admin.tmp                symantec:ep:admin:file        epav

If you have already started ingesting these data sources into another index, you can usually proceed (do consider, however, whether you should separate security logs from administration, application, and system logs, based on who will likely need access and who should be prevented from having it). If you have already started ingesting data with a different sourcetype, we recommend you switch over to the standardized sourcetypes, if possible. If you're not using the Splunk TA for SEPM to ingest data, keep in mind that you may need to do extra work to align field names in order to get value out of Splunk Security Essentials and other Splunk content.

To support your SEPM sources, follow the procedure mentioned above in “General Infrastructure--Indexes and Sourcetypes” to add the new indexes for the data you will be bringing in.

For the sourcetypes and monitor statements, we will show those next in “Symantec Configuration Files.”

Symantec Configuration Files  

Overview

Configuration files for SEP Manager inputs tend to be pretty simple. In this case, we just have a single inputs.conf file that will go on the Windows SEPM hosts you will be monitoring. As detailed above in Instruction Expectations and Scaling, you will need some mechanism to distribute these files to the hosts you're monitoring. For initial tests or deployments to just your most sensitive systems, it is easy to copy the files to the hosts. For larger distributions, you can use the Splunk Deployment Server, or use another code distribution system such as SCCM, Puppet, Chef, Ansible, or others.

Implementation

Distribute the below inputs.conf file to your hosts in the %SPLUNK_HOME%\etc\apps\Splunk_TA_symantec_ep\local folder. If the folder doesn't exist, you will need to create it. For most customers, the path to this file will end up being C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_symantec_ep\local\inputs.conf.
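For an initial manual test, getting the file in place can be as simple as the following from an elevated command prompt on the SEPM host (this assumes the inputs.conf sits in your current working directory and that the TA is installed in the default path):

rem create the local folder (skip this if it already exists)
mkdir "C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_symantec_ep\local"
rem copy the inputs.conf from your working directory into it
copy inputs.conf "C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_symantec_ep\local\"
rem restart the forwarder so it picks up the new inputs
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" restart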

Example of inputs.conf that can be deployed to the Splunk UF on the SEP Manager system:


inputs.conf (Download File)
[monitor://C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\agt_scan.tmp]
disabled = false
index = epav
sourcetype = symantec:ep:scan:file

[monitor://C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\scm_admin.tmp]
index = epav
sourcetype = symantec:ep:admin:file
disabled = false

[monitor://C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\agt_behavior.tmp]
index = ephids
sourcetype = symantec:ep:behavior:file
disabled = false

[monitor://C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\scm_agent_act.tmp]
index = ephids
sourcetype = symantec:ep:agent:file
disabled = false

[monitor://C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\scm_policy.tmp]
index = epav
sourcetype = symantec:ep:policy:file
disabled = false

[monitor://C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\scm_system.tmp]
index = epav
sourcetype = symantec:ep:scm_system:file
disabled = false

[monitor://C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\agt_packet.tmp]
index = epfw
sourcetype = symantec:ep:packet:file
disabled = false

[monitor://C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\agt_proactive.tmp]
index = epav
sourcetype = symantec:ep:proactive:file
disabled = false

[monitor://C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\agt_risk.tmp]
index = epav
sourcetype = symantec:ep:risk:file
disabled = false

[monitor://C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\agt_security.tmp]
index = ephids
sourcetype = symantec:ep:security:file
disabled = false

[monitor://C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\agt_system.tmp]
index = epav
sourcetype = symantec:ep:agt_system:file
disabled = false

[monitor://C:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\data\dump\agt_traffic.tmp]
index = epfw
sourcetype = symantec:ep:traffic:file
disabled = false
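Once the file is in place and the forwarder has been restarted, you can confirm it has picked up the monitor stanzas by running the following on the SEPM host (it will prompt you to log in to the local forwarder):

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list monitor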

Splunk Configuration for Data Source References  

System Configuration

Enabling Logging in Symantec Endpoint Protection Manager  

Overview

To maintain a good security posture and to leverage the examples provided in SSE, we recommend having the following logging configured in the SEPM.

Implementation

Step One
  • Open the admin console on Symantec Endpoint Protection Manager (SEPM).
  • Open the admin panel, click Servers, select your site, and select "Configure External Logging."
  • Enable "Export logs to a dump file."

Here we are configuring the External Logging Policy.
Step Two
  • Click on the Log Filter tab and select the logs to export to a file.
  • The Info severity level should be enabled as well, as it provides information on, for example, privileged account access (to SEPM), files submitted to Symantec, and functions being enabled or disabled.

Here we are defining the log filter policy.
Step Three

The last step (often overlooked!) is to verify that log handling is configured, so that all events on the endpoint are sent to the SEPM.

  • Open the Policies panel, select the Virus and Spyware Protection Policy that is in use, and open it.
  • Open Miscellaneous and select the Log Handling tab.
  • Select "show all virus and spyware protection events" and select all event types.

Here we are configuring the policy to send events from endpoints to the server.

Validation

Once everything is configured correctly, the events should flow into Splunk. Below is an example of how the source, sourcetype, and index should look. Search example:

index=ep* | stats count by source, sourcetype, index

Confirmed -- we can see events in the ep* indexes!
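If you want to confirm each individual feed, a slightly more targeted variant of the search above (assuming the standard indexes and sourcetypes from this guide) is:

index=epav OR index=epfw OR index=ephids sourcetype=symantec:ep:* | stats count by index, sourcetype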