Windows Security Logs
Data Source Onboarding Guide Overview
Welcome to the Splunk Data Source Onboarding Guides (DSOGs)!
Splunk has lots of docs, so why are we creating more? The primary goal of the DSOGs is to provide you with a curated, easy-to-digest view of the most common ways that Splunk users ingest data from our most popular sources, including how to configure the systems that will send us data (such as turning on AWS logging or Windows Security's process-launch logs, for example). While these guides won't cover every single possible option for installation or configuration, they will give you the most common, easiest way forward.
How to use these docs: We've broken the docs out into different segments that get linked together. Many of them will be shared across multiple products. We suggest clicking the "Mark Complete" button above to remind yourself of those you've completed. Since this info will be stored locally in your browser, you won't have to worry about it affecting anyone else's view of the document. And when you're reading about ingesting Sysmon logs, for example, it's a convenient way to keep track of the fact that you already installed the forwarder in order to onboard your Windows Security logs.
So, go on and dive right in! And don't forget, Splunk is here to make sure you're successful. Feel free to ask questions of your Sales Engineer or Professional Services Engineer, if you run into trouble. You can also look for answers or post your questions on https://answers.splunk.com/.
This doc is intended to be an easy guide to onboarding data into Splunk, as opposed to a comprehensive set of docs. We've specifically chosen only straightforward technologies to implement here (avoiding ones that have lots of complications), but if at any point you feel like you need more traditional documentation for the deployment or usage of Splunk, Splunk Docs has you covered with over 10,000 pages of docs (let alone other languages!).
Because simpler is almost always better when getting started, we are also not worrying about more complicated capabilities like Search Head Clustering, Indexer Clustering, or anything else of a similar vein. If you do have those requirements, Splunk Docs is a great place to get started, and you can also always avail yourself of Splunk Professional Services so that you don't have to worry about any of the setup.
While Splunk scales to hundreds or thousands of indexers with ease, we usually have some pretty serious architecture conversations before ordering tons of hardware. That said, these docs aren't just for lab installs. We've found that they work just fine for most customers in the 5 GB to 500 GB range, and even some larger ones! Regardless of whether you have a single Splunk box doing everything, or a distributed install with a Search Head and a set of Indexers, you should be able to get the data and the value flowing quickly.
There's one important note: the first request we get for orchestration as customers scale is to distribute configurations across many different universal forwarders. Imagine that you've just vetted out the Windows Process Launch Logs guide on a few test systems, and it's working great. Now you want to deploy it to 500, or 50,000, other Windows boxes. There are a variety of ways to do this:
- The standard Splunk answer is to use the Deployment Server. The Deployment Server is designed for exactly this task, and is free with Splunk. We aren't going to document it here, mostly because it's extremely well documented by our EDU courses and on docs.splunk.com.
- If you are a decent-sized organization, you've probably already got a way to deploy configurations and code, like Puppet, Chef, SCCM, or Ansible. All of those tools are used to deploy Splunk on a regular basis. Now, you might not want to go down this route if it requires onerous change control, reliance on other teams, etc. -- many large Splunk environments with well-developed software deployment systems prefer to use the Deployment Server because it can be owned by the Splunk team and is optimized for Splunk's needs. But many customers are very happy using Puppet to distribute Splunk configurations.
The DSOGs talk a lot about indexes and sourcetypes. Here's a quick overview.
Splexicon (Splunk's Lexicon, a glossary of Splunk-specific terms) defines an index as the repository for data in Splunk Enterprise. When Splunk Enterprise indexes raw event data, it transforms the data into searchable events. Indexes are the collections of flat files on the Splunk Enterprise instance. That instance is known as an Indexer because it stores data. Splunk instances that users log into and run searches from are known as Search Heads. When you have a single instance, it takes on both the search head and indexer roles.
"Sourcetype" is defined as a default field that identifies the data structure of an event. A sourcetype determines how Splunk Enterprise formats the data during the indexing process. Example sourcetypes include access_combined and cisco_syslog.
In other words, an index is where we store data, and the sourcetype is a label given to similar types of data. All Windows Security logs will have a sourcetype of WinEventLog:Security, which means you can always search for sourcetype=wineventlog:security (when searching, the field name sourcetype is case sensitive; the value is not).
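For example, once data is flowing, a quick sanity-check search that counts Windows Security events by host (using the oswinsec index recommended later in these docs) would look like this:

```
index=oswinsec sourcetype=wineventlog:security
| stats count by host
```

If that returns rows for each of your monitored hosts, indexing and sourcetyping are working as expected.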
Why is this important? We're going to guide you to use indexes that our professional services organization recommends to customers as an effective starting point. Using standardized sourcetypes (those shared by other customers) makes it much easier to use Splunk and avoid headaches down the road. Splunk will allow you to use any sourcetype you can imagine, which is great for custom log sources, but for common log sources, life is easier sticking with standard sourcetypes. These docs will walk you through standard sourcetypes.
Below is a sample indexes.conf that will prepare you for all of the data sources we use in these docs. You will note that we separate OS logs from Network logs, and Security logs from Application logs. The idea here is to separate them for performance reasons, but also for isolation purposes -- you may want to expose the application or system logs to people who shouldn't view security logs. Putting them in separate indexes prevents that.
To install this configuration, you should download the app below and put it in the apps directory.
For Windows systems, this will typically be: c:\Program Files\Splunk\etc\apps. Once you've extracted the app there, you can restart Splunk via the Services Control Panel applet, or by running "c:\Program Files\Splunk\bin\splunk.exe" restart.
For Linux systems, this will typically be /opt/splunk/etc/apps/. Once you've extracted the app there, you can restart Splunk by running /opt/splunk/bin/splunk restart.
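As a sketch of what that looks like on disk, the commands below build the expected app layout in a scratch directory (the app name base_indexes is purely illustrative; on a real Linux install the base path would be /opt/splunk/etc/apps):

```shell
# Build the expected app layout: <apps dir>/<app name>/local/indexes.conf.
# Using mktemp so this is safe to run anywhere.
APPS="$(mktemp -d)/apps"
mkdir -p "$APPS/base_indexes/local"   # "base_indexes" is an illustrative name

cat > "$APPS/base_indexes/local/indexes.conf" <<'EOF'
[oswinsec]
homePath   = $SPLUNK_DB/oswinsec/db
coldPath   = $SPLUNK_DB/oswinsec/colddb
thawedPath = $SPLUNK_DB/oswinsec/thaweddb
frozenTimePeriodInSecs = 2592000
EOF

ls "$APPS/base_indexes/local"
# On a real system you would then restart Splunk, e.g. /opt/splunk/bin/splunk restart
```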
You can view the indexes.conf below, but it's easiest to just click the "Click here to download a Splunk app with this indexes.conf" link below.
Splunk Cloud Customers: You won't copy the files onto your Splunk servers because you don't have access. You could go one-by-one through the UI and create all of the indexes below, but it might be easiest if you download the app, and open a ticket with CloudOps to have it installed.
# Overview. Below you will find the basic indexes.conf settings for
# setting up your indexes in Splunk. We separate into different indexes
# to allow for performance (in some cases) or data isolation in others.
# All indexes come preconfigured with a relatively short retention period
# that should work for everyone, but if you have more disk space, we
# encourage (and usually see) longer retention periods, particularly
# for security customers.

# Endpoint Indexes used for Splunk Security Essentials.
# If you have the sources, other standard indexes we recommend include:
# epproxy - Local Proxy Activity

[epav]
coldPath = $SPLUNK_DB/epav/colddb
homePath = $SPLUNK_DB/epav/db
thawedPath = $SPLUNK_DB/epav/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[epfw]
coldPath = $SPLUNK_DB/epnet/colddb
homePath = $SPLUNK_DB/epnet/db
thawedPath = $SPLUNK_DB/epnet/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[ephids]
coldPath = $SPLUNK_DB/epmon/colddb
homePath = $SPLUNK_DB/epmon/db
thawedPath = $SPLUNK_DB/epmon/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[epintel]
coldPath = $SPLUNK_DB/epweb/colddb
homePath = $SPLUNK_DB/epweb/db
thawedPath = $SPLUNK_DB/epweb/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[oswin]
coldPath = $SPLUNK_DB/oswin/colddb
homePath = $SPLUNK_DB/oswin/db
thawedPath = $SPLUNK_DB/oswin/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[oswinsec]
coldPath = $SPLUNK_DB/oswinsec/colddb
homePath = $SPLUNK_DB/oswinsec/db
thawedPath = $SPLUNK_DB/oswinsec/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[oswinscript]
coldPath = $SPLUNK_DB/oswinscript/colddb
homePath = $SPLUNK_DB/oswinscript/db
thawedPath = $SPLUNK_DB/oswinscript/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[oswinperf]
coldPath = $SPLUNK_DB/oswinperf/colddb
homePath = $SPLUNK_DB/oswinperf/db
thawedPath = $SPLUNK_DB/oswinperf/thaweddb
frozenTimePeriodInSecs = 604800 # 7 days

[osnix]
coldPath = $SPLUNK_DB/osnix/colddb
homePath = $SPLUNK_DB/osnix/db
thawedPath = $SPLUNK_DB/osnix/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[osnixsec]
coldPath = $SPLUNK_DB/osnixsec/colddb
homePath = $SPLUNK_DB/osnixsec/db
thawedPath = $SPLUNK_DB/osnixsec/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[osnixscript]
coldPath = $SPLUNK_DB/osnixscript/colddb
homePath = $SPLUNK_DB/osnixscript/db
thawedPath = $SPLUNK_DB/osnixscript/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[osnixperf]
coldPath = $SPLUNK_DB/osnixperf/colddb
homePath = $SPLUNK_DB/osnixperf/db
thawedPath = $SPLUNK_DB/osnixperf/thaweddb
frozenTimePeriodInSecs = 604800 # 7 days

# Network Indexes used for Splunk Security Essentials
# If you have the sources, other standard indexes we recommend include:
# netauth - for network authentication sources
# netflow - for netflow data
# netids - for dedicated IPS environments
# netipam - for IPAM systems
# netnlb - for non-web server load balancer data (e.g., DNS, SMTP, SIP, etc.)
# netops - for general network system data (such as Cisco iOS non-netflow logs)
# netvuln - for Network Vulnerability Data

[netdns]
coldPath = $SPLUNK_DB/netdns/colddb
homePath = $SPLUNK_DB/netdns/db
thawedPath = $SPLUNK_DB/netdns/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[mail]
coldPath = $SPLUNK_DB/mail/colddb
homePath = $SPLUNK_DB/mail/db
thawedPath = $SPLUNK_DB/mail/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[netfw]
coldPath = $SPLUNK_DB/netfw/colddb
homePath = $SPLUNK_DB/netfw/db
thawedPath = $SPLUNK_DB/netfw/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[netops]
coldPath = $SPLUNK_DB/netops/colddb
homePath = $SPLUNK_DB/netops/db
thawedPath = $SPLUNK_DB/netops/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[netproxy]
coldPath = $SPLUNK_DB/netproxy/colddb
homePath = $SPLUNK_DB/netproxy/db
thawedPath = $SPLUNK_DB/netproxy/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

[netvpn]
coldPath = $SPLUNK_DB/netvpn/colddb
homePath = $SPLUNK_DB/netvpn/db
thawedPath = $SPLUNK_DB/netvpn/thaweddb
frozenTimePeriodInSecs = 2592000 # 30 days

# Splunk Security Essentials doesn't have examples of Application Security,
# but if you want to ingest those logs, here are the recommended indexes:
# appwebint - Internal WebApp Access Logs
# appwebext - External WebApp Access Logs
# appwebintrp - Internal-facing Web App Load Balancers
# appwebextrp - External-facing Web App Load Balancers
# appwebcdn - CDN logs for your website
# appdbserver - Database Servers
# appmsgserver - Messaging Servers
# appint - App Servers for internal-facing apps
# appext - App Servers for external-facing apps
Once this is complete, you will be able to find the list of indexes that the system is aware of by logging into Splunk, and going into Settings -> Indexes.
Overview
Installing the Windows forwarder is a straightforward process, similar to installing any Windows program. These instructions will walk you through a manual installation (perfect for a lab, a few laptops, or when you're just getting started on domain controllers).
Note for larger environments: When you want to automatically roll out the forwarder to hundreds (or thousands or hundreds of thousands) of systems, you will want to leverage your traditional software-deployment techniques. The Splunk forwarder is an MSI package and we have docs on recommended ways to deploy it:
- Via a logon script that runs a silent CLI installation: http://docs.splunk.com/Documentation/Forwarder/7.0.1/Forwarder/InstallaWindowsuniversalforwarderfromthecommandline
- With a static configuration via CLI: http://docs.splunk.com/Documentation/Forwarder/7.0.1/Forwarder/InstallaWindowsuniversalforwarderremotelywithastaticconfiguration
- How to bake it into your gold image: http://docs.splunk.com/Documentation/Forwarder/7.0.1/Forwarder/Makeauniversalforwarderpartofahostimage
Of course, you can also deploy it with traditional system-configuration management software. This can vary a lot from environment to environment. For this doc we'll just walk you through the installation so that you know what's coming.
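For reference, a silent command-line install (per the docs linked above) looks roughly like the line below. The MSI file name and indexer address are placeholders for your own values:

```
msiexec.exe /i splunkforwarder-x64.msi AGREETOLICENSE=Yes RECEIVING_INDEXER="MySplunkServer.mycompany.local:9997" /quiet
```

This is the kind of one-liner you would drop into a logon script or software-deployment job; see the Splunk Forwarder docs above for the full list of supported MSI properties.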
The first thing to do is download the Universal Forwarder from Splunk's website (https://www.splunk.com/en_us/download/universal-forwarder.html). This is a separate download from the main Splunk installer, as the universal forwarder is lightweight, so it can be installed on all of the systems in your environment. Most users today will download the x64 version as an MSI installer.
When you double click the downloaded file, the standard MSI installer will appear.
Don't worry about the Cloud checkbox -- we will use the same settings for both.
While you can click "Customize Options" here and manually insert the address of your indexers, manually choose the log sources you would like to index, etc., we generally don't recommend that unless you're never going to move beyond the one source you're looking at. (It's harder to go find those settings later and apply them to other systems.) Ignore "Customize Options" and click "Next." The setup will now run through its process, and you'll be finished with a freshly installed forwarder. There are three more steps you'll want to take before you can see the data in Splunk:
- You will need an outputs.conf to tell the forwarder where to send data (next section)
- You will need an inputs.conf to tell the forwarder what data to send (below, in the Splunk Configuration for Data Source)
- You will need an indexes.conf on the indexers to tell them where to put the data that is received (Previous section)
You can now check Task Manager and you should see Splunk running. Alternatively, check under Services in the Control Panel. You will see Splunk listed and started.
Every Splunk system in the environment that is not an indexer (i.e., any system that doesn't store its data locally) should have an outputs.conf that points to your indexers. That applies whether it's a Universal Forwarder on a Windows host, a Linux heavy forwarder pulling the more difficult AWS logs, or even a dedicated Search Head that dispatches searches to your indexers.
Fortunately the outputs.conf will be the same across the entire environment, and is fairly simple. There are three steps:
- Create the app using the button below (SplunkCloud customers: use the app you received from SplunkCloud).
- Extract the file (it will download a zip file).
- Place it in the etc/apps directory.
For Windows systems, this will typically be: c:\Program Files\Splunk\etc\apps. Once you've extracted the app there, you can restart Splunk via the Services Control Panel applet, or by running "c:\Program Files\Splunk\bin\splunk.exe" restart.
For Linux systems, this will typically be /opt/splunkforwarder/etc/apps/. Once you've extracted the app there, you can restart Splunk by running /opt/splunkforwarder/bin/splunk restart.
For customers not using Splunk Cloud: Sample outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = MySplunkServer.mycompany.local:9997

[tcpout-server://MySplunkServer.mycompany.local:9997]
Run a search in the Splunk environment for the host you've installed the forwarder on. E.g., index=* host=mywinsystem1*
You can also review all hosts that are sending data via: | metadata index=* type=hosts
Here are links from this section:
- Splunk Docs
- Deployment Server Overview from Splunk Docs
- The Splexicon -- a dictionary for Splunk terms
- Splexicon: Event Data
- Splexicon: Event
- Splexicon: Indexer
- Splexicon: Search Head
- Splunk Docs: Install Forwarder via CLI
- Splunk Docs: Install Splunk Forwarder Remotely with a Static Configuration
- Splunk Docs: Incorporate Forwarder into your System Images
- Download Universal Forwarder
Splunk Configuration for Data Source
Windows event volume can vary greatly based on the type of host. At a very high level, common ranges we’ve seen are:
- Workstation: 4-6 MB/day (Including Application, System, and Security Logs)
- Application Servers: 25-50 MB/day
- Domain Controllers: 50-500 MB/day depending on the number of users
Obviously, these ranges can vary dramatically. For high-horsepower AD controllers with thousands of simultaneous users, you could see more volume.
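To turn those ranges into a back-of-the-envelope daily-license estimate, multiply per-host averages by host counts. The host counts below are hypothetical, and the MB/day figures are simply the midpoints of the ranges above:

```python
# Rough daily-ingest estimate from per-host averages.
# Counts are made-up examples; replace with your own inventory.
hosts = {
    "workstations":       (2000, 5.0),    # (host count, MB/day per host)
    "app_servers":        (150, 37.5),
    "domain_controllers": (6, 275.0),
}

total_mb = sum(count * mb_per_day for count, mb_per_day in hosts.values())
print(f"Estimated ingest: {total_mb / 1024:.1f} GB/day")
# -> Estimated ingest: 16.9 GB/day
```

Even a rough estimate like this is useful when sizing a license or deciding which host tiers to onboard first.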
A common follow-on question we often get is what the expected volume is just for the Process Launch Logs (Event ID 4688) – this can vary based on how many new processes spin up, of course, but it is usually a rounding error on event volume. It’s often said that Event ID 4688 is the best bang for the buck in all of security logging!
The last question we get when having these discussions is: "What first?" If you have a formal risk assessment process, it's always best to start there, but generally speaking we see most customers start with Domain Controllers (as they have the most sensitive information), move on to member servers, and reach the desktops last.
Splunk has a detailed Technology Add-on that supports ingesting all manner of Windows logs. Like all Splunk Technology Add-ons, it also includes everything needed in order to parse out the fields, and give them names that are compliant with Splunk’s Common Information Model, so they can easily be used by the searches in Splunk Security Essentials, along with searches you will find in other community supported and premium apps.
Find the TA along with all your other Splunk apps / needs on SplunkBase. You could go to https://splunkbase.splunk.com/ and search for it, or you could just follow the direct link here: https://splunkbase.splunk.com/app/742/.
As with all Splunk TAs, we recommend you deploy it to all parts of your Splunk environment for simplicity and uniformity, so plan to install the TA on your Search Head, Indexers, and any Windows Forwarders in your environment.
- To install the app, start by downloading the file from the SplunkBase link just shown, and then extract it. Note: The app itself is a tgz file, or a gzipped tarball. If you're in a pure Windows environment, this means that you will need a third-party program to extract it – fortunately, tgz is the most common format in the world behind zip, so virtually any extraction program you have (WinZip, 7z, WinRAR, etc.) will extract it.
- Once you have the extracted folder, move it into the %SPLUNK_HOME%\etc\apps folder. For most modern Splunk environments, that will be C:\Program Files\SplunkUniversalForwarder\etc\apps.
- Once you’ve extracted the app, you can restart Splunk via the Services Control Panel applet, or by just running "c:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" restart.
You can make sure that Splunk has picked up the presence of the app by running "c:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" display app, which, after asking you to log in, will provide a list of installed apps. Usually, though, if you see the folder listed alongside the other apps (learned, search, splunk_httpinput, etc.), you'll know that it's there successfully.
Splunk Cloud Customers: you won't be copying any files or folders to your indexers or search heads, but good news! The Splunk Add-on for Microsoft Windows is Cloud Self-Service Enabled, so you can just go to Find Apps and be up and running in seconds.
Amongst Splunk’s 15000+ customers, we’ve done a lot of implementations, and we’ve learned a few things along the way. While you can use any sourcetypes or indexes that you want in the land of Splunk, we’ve found that the most successful customers follow specific patterns, as it sets them up for success moving forward.
The most common Windows data types are the Security Log, System Log, and Application Log, but there are a few others as well including Microsoft Sysmon. Here are our most commonly used Windows Data Types and the recommended indexes and sourcetypes.
| Data Type | Input (inputs.conf, below) | Sourcetype | Index | Notes |
| --- | --- | --- | --- | --- |
| Windows Security Logs | WinEventLog://Security | wineventlog:security | oswinsec | We leverage a blacklist for common “noise” events, below. |
| Windows Application Logs | WinEventLog://Application | wineventlog:application | oswin | |
| Windows System Logs | WinEventLog://System | wineventlog:system | oswin | |
| Windows Update Log | monitor://$WINDIR\WindowsUpdate.log | WindowsUpdateLog | oswinsec | |
| Microsoft Sysmon Logs | WinEventLog://Microsoft-Windows-Sysmon/Operational | XmlWinEventLog:Microsoft-Windows-Sysmon/Operational | epintel | Based on the Sysmon Sysinternals tool; not collected out of the box. |
If you have already started ingesting the data sources into another index, then you can usually proceed (though consider if you should separate Windows Security logs from Process Launch Logs and both from Application and System logs, based on who likely will need access or be prohibited access). If you have already started ingesting data with a different sourcetype, we would recommend you switch over to the standardized sourcetypes if at all possible. If you're not using the Splunk TA for Windows to ingest data, then keep in mind you may need to go through extra work to align field names to get value out of Splunk Security Essentials, and other Splunk content.
To support your Windows sources, follow the procedure mentioned above in General Infrastructure - Indexes and Sourcetypes to add the new indexes for the data you will be bringing in (generally it’s easiest if you just create oswin, oswinsec, epintel).
For the sourcetypes and monitor statements, we will show those next in the Configuration Files.
Configuration files for Windows inputs tend to be pretty simple. In this case, we just have a single inputs.conf file that will go on the Windows hosts you will be monitoring. As detailed above in Instruction Expectations and Scaling, you will need some mechanism to distribute these files to the hosts you’re monitoring. For initial tests, or deployments to just your most sensitive systems, it is easy to copy the files to the hosts. For larger distributions, you can use the Splunk Deployment Server, or another code-distribution system such as SCCM, Puppet, Chef, or Ansible.
Distribute the inputs.conf file below to the %SPLUNK_HOME%\etc\apps\Splunk_TA_windows\local folder on all of the hosts that you will be monitoring. If the folder doesn’t exist, you will need to create it. For most customers, the path to this file will end up being: C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows\local\inputs.conf.
[WinEventLog://Security]
disabled = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist3 = EventCode="4688" Message="New Process Name: (?i)(?:[C-F]:\\Program Files\\Splunk(?:UniversalForwarder)?\\bin\\(?:btool|splunkd|splunk|splunk-(?:MonitorNoHandle|admon|netmon|perfmon|powershell|regmon|winevtlog|winhostinfo|winprintmon|wmi))\.exe)"
index = oswinsec

[WinEventLog://Application]
disabled = 0
checkpointInterval = 5
index = oswin

[WinEventLog://System]
disabled = 0
checkpointInterval = 5
index = oswin

[monitor://$WINDIR\WindowsUpdate.log]
disabled = 0
sourcetype = WindowsUpdateLog
index = oswinsec

[WinHostMon://Service]
interval = 3600
disabled = 0
type = Service
index = oswinscript
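To see what the 4688 blacklist above is doing, here is a simplified stand-alone check of the same idea in Python. The pattern is adapted for illustration (it is not the literal line Splunk evaluates): it drops process-launch events whose new process is one of Splunk’s own binaries, so the forwarder doesn’t log itself:

```python
import re

# Simplified version of the 4688 blacklist: match "New Process Name"
# values that point at Splunk's own binaries under Program Files.
splunk_binary = re.compile(
    r"New Process Name:\s+(?i:[C-F]:\\Program Files\\Splunk"
    r"(?:UniversalForwarder)?\\bin\\"
    r"(?:btool|splunkd|splunk|splunk-(?:MonitorNoHandle|admon|netmon|perfmon|"
    r"powershell|regmon|winevtlog|winhostinfo|winprintmon|wmi))\.exe)"
)

noisy = r"New Process Name:  C:\Program Files\SplunkUniversalForwarder\bin\splunkd.exe"
interesting = r"New Process Name:  C:\Windows\System32\cmd.exe"

print(bool(splunk_binary.search(noisy)))        # True  (would be filtered out)
print(bool(splunk_binary.search(interesting)))  # False (event is kept)
```

The same whitelist-by-exclusion pattern is worth reusing if you find other predictable, high-volume 4688 noise in your environment.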
To maintain a good Security Posture, and to leverage the examples provided in Splunk Security Essentials, we recommend following Microsoft’s official guidance for “Stronger” security visibility. The Audit Policy Recommendations page from Microsoft TechNet provides very detailed configuration settings per operating system from Windows 7 / Server 2008 and up here: https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations
Important Note: Splunk is a monitoring product, and not an Active Directory system, so while we’re working hard to centralize some of the recommendations you should follow in one place to make your life easy, we cannot offer support for the actual configuration of anything other than Splunk itself, and strongly recommend that you leverage trained Microsoft resources when making any changes. That’s in large part why we’re pointing you to Microsoft docs for the nitty gritty details!
If you are new to configuring auditing on Microsoft systems, there are two primary ways in which you can go about configuring auditing: a one-off (typically lab) system via the Local Security Policy, or a managed system via Group Policy. Virtually all Splunk customers will configure their Windows audit logging via Group Policy, but you absolutely can use Local Security Policy if you only have a small number of machines, or you are trialing on a few systems.
If you do want to configure via Local Security Policy, you can click Start (or highlight Cortana Search) and then type in “Local Security Policy” to open the policy editor. Finding the right configuration settings is straightforward: expand “System Audit Policies – Local Group Policy” at the bottom of the list, and then the next item with the same name. If you compare these items to the link above (also included under “References”), you will find that they map directly, and you can proceed to mirror what Microsoft recommends – use the Stronger column for adequate security visibility.
To configure via Group Policy, you should open the Group Policy Editor for a group policy that covers any computer accounts that are in scope for monitoring. Most medium-to-large organizations that we work with have a separate admin or group for configuring these types of Group Policy settings, so it’s usually easiest to send the quoted paragraph below over to that group to apply the settings.
If you do manage Group Policy as well, here’s how you can make the changes on your own – note that Microsoft has different recommendations for servers versus workstations, so if possible it’s best to apply separate policies to each (when in doubt, we usually opt for more visibility):
- Open the Group Policy Manager by opening the Microsoft Management Console (mmc.exe)
- Add the Group Policy Management snap-in (File -> Add)
- Choose or create a policy that is applied to your in-scope systems, and right-click to Edit the policy.
- Find the policy settings that match the Microsoft Document under: Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Advanced Audit Policy Configuration -> Audit Policy
- Go through the Microsoft Doc to implement their recommendations for the stronger audit policy.
- You might have noticed a warning about one other key that needs to be set in order for these audit policies to take effect. Don’t worry: it has defaulted to on since Vista, but you might as well configure it explicitly, under Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Local Policies -> Security Options -> Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings.
If you can’t (or don’t want to) run through the above on your own, here’s a paragraph you can just send to whoever in your organization manages Active Directory.
Usually the first thing people will see when deploying audit policies is either new systems showing up in Splunk, or at least an increase in system log messages. If you already have some logs coming in and want to validate that you’re getting the new ones, look for the delta between your old policy and your new one, and google “Windows
Here are links from this section: