Linux Auth Logs
Data Source Onboarding Guide Overview
Overview
Welcome to the Splunk Data Source Onboarding Guides (DSOGs)!
Splunk has lots of docs, so why are we creating more? The primary goal of the DSOGs is to provide you with a curated, easy-to-digest view of the most common ways that Splunk users ingest data from our most popular sources, including how to configure the systems that will send us data (such as turning on AWS logging or Windows Security's process-launch logs). While these guides won't cover every single possible option for installation or configuration, they will give you the most common, easiest way forward.
How to use these docs: We've broken the docs out into different segments that get linked together. Many of them will be shared across multiple products. We suggest clicking the "Mark Complete" button above to remind yourself of those you've completed. Since this info will be stored locally in your browser, you won't have to worry about it affecting anyone else's view of the document. And when you're reading about ingesting Sysmon logs, for example, it's a convenient way to keep track of the fact that you already installed the forwarder in order to onboard your Windows Security logs.
So, go on and dive right in! And don't forget, Splunk is here to make sure you're successful. Feel free to ask questions of your Sales Engineer or Professional Services Engineer if you run into trouble. You can also look for answers or post your questions on https://answers.splunk.com/.
General Infrastructure
Instruction Expectations and Scaling
Expectations
This doc is intended to be an easy guide to onboarding data into Splunk, as opposed to a comprehensive set of docs. We've specifically chosen only straightforward technologies to implement here (avoiding ones that have lots of complications), but if at any point you feel like you need more traditional documentation for the deployment or usage of Splunk, Splunk Docs has you covered with over 10,000 pages of docs (not counting translations into other languages!).
Because simpler is almost always better when getting started, we are also not worrying about more complicated capabilities like Search Head Clustering, Indexer Clustering, or anything else of a similar vein. If you do have those requirements, Splunk Docs is a great place to get started, and you can also always avail yourself of Splunk Professional Services so that you don't have to worry about any of the setup.
Scaling
While Splunk scales to hundreds or thousands of indexers with ease, we usually have some pretty serious architecture conversations before ordering tons of hardware. That said, these docs aren't just for lab installs. We've found that they work just fine for most customers in the 5 GB to 500 GB range, and even some larger! Regardless of whether you have a single Splunk box doing everything, or a distributed install with a Search Head and a set of Indexers, you should be able to get the data and the value flowing quickly.
There's one important note: the first request we get for orchestration as customers scale is to distribute configurations across many different universal forwarders. Imagine that you've just vetted the Windows Process Launch Logs guide on a few test systems, and it's working great. Now you want to deploy it to 500, or 50,000, other Windows boxes. There are a variety of ways to do this:
- The standard Splunk answer is to use the Deployment Server. The deployment server is designed for exactly this task, and it's free with Splunk. We aren't going to document it here, mostly because it's extremely well documented by our EDU courses and on docs.splunk.com.
- If you are a decent-sized organization, you've probably already got a way to deploy configurations and code, like Puppet, Chef, SCCM, or Ansible. All of those tools are used to deploy Splunk on a regular basis. Now, you might not want to go down this route if it requires onerous change control, reliance on other teams, etc. -- many large Splunk environments with well-developed software deployment systems still prefer the Deployment Server because it can be owned by the Splunk team and is optimized for Splunk's needs. But many customers are very happy using Puppet to distribute Splunk configurations.
Indexes and Sourcetypes Overview
Overview
The DSOGs talk a lot about indexes and sourcetypes. Here's a quick overview.
Splexicon (Splunk's Lexicon, a glossary of Splunk-specific terms) defines an index as the repository for data in Splunk Enterprise. When Splunk Enterprise indexes raw event data, it transforms the data into searchable events. Indexes are the collections of flat files on the Splunk Enterprise instance. That instance is known as an Indexer because it stores data. Splunk instances that users log into and run searches from are known as Search Heads. When you have a single instance, it takes on both the search head and indexer roles.
"Sourcetype" is defined as a default field that identifies the data structure of an event. A sourcetype determines how Splunk Enterprise formats the data during the indexing process. Example sourcetypes include access_combined and cisco_syslog.
In other words, an index is where we store data, and the sourcetype is a label given to similar types of data. All Windows Security Logs will have a sourcetype of WinEventLog:Security, which means you can always search for sourcetype=wineventlog:security (when searching, the field name sourcetype is case sensitive; the value is not).
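For example, both of these searches return the same Windows Security events, because the sourcetype value is matched case-insensitively (the | stats tail is just a quick way to summarize what came back):

sourcetype=WinEventLog:Security | stats count by host
sourcetype=wineventlog:security | stats count by host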
Why is this important? We're going to guide you to use indexes that our professional services organization recommends to customers as an effective starting point. Using standardized sourcetypes (those shared by other customers) makes it much easier to use Splunk and avoid headaches down the road. Splunk will allow you to use any sourcetype you can imagine, which is great for custom log sources, but for common log sources, life is easier sticking with standard sourcetypes. These docs will walk you through standard sourcetypes.
Implementation
Below is a sample indexes.conf that will prepare you for all of the data sources we use in these docs. You will note that we separate OS logs from Network logs, and Security logs from Application logs. The idea is to separate them for performance reasons, but also for isolation purposes: you may want to expose the application or system logs to people who shouldn't view security logs. Putting them in separate indexes prevents that.
To install this configuration, you should download the app below and put it in the apps directory.
For Windows systems, this will typically be: c:\Program Files\Splunk\etc\apps. Once you've extracted the app there, you can restart Splunk via the Services Control Panel applet, or by running "c:\Program Files\Splunk\bin\splunk.exe" restart.
For Linux systems, this will typically be /opt/splunk/etc/apps/. Once you've extracted the app there, you can restart Splunk by running /opt/splunk/bin/splunk restart.
You can view the indexes.conf below, but it's easiest to just use the "Click here to download a Splunk app with this indexes.conf" link below.
Splunk Cloud Customers: You won't copy the files onto your Splunk servers because you don't have access. You could go one-by-one through the UI and create all of the indexes below, but it might be easiest if you download the app, and open a ticket with CloudOps to have it installed.
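For self-managed installs on Linux, the whole process is a couple of commands. Here's a minimal sketch; the archive name is hypothetical, so substitute the filename you actually downloaded:

cd /opt/splunk/etc/apps
tar zxvf ~/Downloads/dsog_indexes_app.tgz    # or: unzip, if your download is a zip file
/opt/splunk/bin/splunk restart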
Sample indexes.conf
# Overview. Below you will find the basic indexes.conf settings for
# setting up your indexes in Splunk. We separate into different indexes
# to allow for performance (in some cases) or data isolation in others.
# All indexes come preconfigured with a relatively short retention period
# that should work for everyone, but if you have more disk space, we
# encourage (and usually see) longer retention periods, particularly
# for security customers.

# Endpoint Indexes used for Splunk Security Essentials.
# If you have the sources, other standard indexes we recommend include:
#   epproxy - Local Proxy Activity

[epav]
coldPath = $SPLUNK_DB/epav/colddb
homePath = $SPLUNK_DB/epav/db
thawedPath = $SPLUNK_DB/epav/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[epfw]
coldPath = $SPLUNK_DB/epnet/colddb
homePath = $SPLUNK_DB/epnet/db
thawedPath = $SPLUNK_DB/epnet/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[ephids]
coldPath = $SPLUNK_DB/epmon/colddb
homePath = $SPLUNK_DB/epmon/db
thawedPath = $SPLUNK_DB/epmon/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[epintel]
coldPath = $SPLUNK_DB/epweb/colddb
homePath = $SPLUNK_DB/epweb/db
thawedPath = $SPLUNK_DB/epweb/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[oswin]
coldPath = $SPLUNK_DB/oswin/colddb
homePath = $SPLUNK_DB/oswin/db
thawedPath = $SPLUNK_DB/oswin/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[oswinsec]
coldPath = $SPLUNK_DB/oswinsec/colddb
homePath = $SPLUNK_DB/oswinsec/db
thawedPath = $SPLUNK_DB/oswinsec/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[oswinscript]
coldPath = $SPLUNK_DB/oswinscript/colddb
homePath = $SPLUNK_DB/oswinscript/db
thawedPath = $SPLUNK_DB/oswinscript/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[oswinperf]
coldPath = $SPLUNK_DB/oswinperf/colddb
homePath = $SPLUNK_DB/oswinperf/db
thawedPath = $SPLUNK_DB/oswinperf/thaweddb
# 7 days
frozenTimePeriodInSecs = 604800

[osnix]
coldPath = $SPLUNK_DB/osnix/colddb
homePath = $SPLUNK_DB/osnix/db
thawedPath = $SPLUNK_DB/osnix/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

# bash command history (used by the Command History input later in this guide)
[osnixbash]
coldPath = $SPLUNK_DB/osnixbash/colddb
homePath = $SPLUNK_DB/osnixbash/db
thawedPath = $SPLUNK_DB/osnixbash/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[osnixsec]
coldPath = $SPLUNK_DB/osnixsec/colddb
homePath = $SPLUNK_DB/osnixsec/db
thawedPath = $SPLUNK_DB/osnixsec/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[osnixscript]
coldPath = $SPLUNK_DB/osnixscript/colddb
homePath = $SPLUNK_DB/osnixscript/db
thawedPath = $SPLUNK_DB/osnixscript/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[osnixperf]
coldPath = $SPLUNK_DB/osnixperf/colddb
homePath = $SPLUNK_DB/osnixperf/db
thawedPath = $SPLUNK_DB/osnixperf/thaweddb
# 7 days
frozenTimePeriodInSecs = 604800

# Network Indexes used for Splunk Security Essentials
# If you have the sources, other standard indexes we recommend include:
#   netauth - for network authentication sources
#   netflow - for netflow data
#   netids - for dedicated IPS environments
#   netipam - for IPAM systems
#   netnlb - for non-web server load balancer data (e.g., DNS, SMTP, SIP, etc.)
#   netops - for general network system data (such as Cisco IOS non-netflow logs)
#   netvuln - for Network Vulnerability Data

[netdns]
coldPath = $SPLUNK_DB/netdns/colddb
homePath = $SPLUNK_DB/netdns/db
thawedPath = $SPLUNK_DB/netdns/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[mail]
coldPath = $SPLUNK_DB/mail/colddb
homePath = $SPLUNK_DB/mail/db
thawedPath = $SPLUNK_DB/mail/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[netfw]
coldPath = $SPLUNK_DB/netfw/colddb
homePath = $SPLUNK_DB/netfw/db
thawedPath = $SPLUNK_DB/netfw/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[netops]
coldPath = $SPLUNK_DB/netops/colddb
homePath = $SPLUNK_DB/netops/db
thawedPath = $SPLUNK_DB/netops/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[netproxy]
coldPath = $SPLUNK_DB/netproxy/colddb
homePath = $SPLUNK_DB/netproxy/db
thawedPath = $SPLUNK_DB/netproxy/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

[netvpn]
coldPath = $SPLUNK_DB/netvpn/colddb
homePath = $SPLUNK_DB/netvpn/db
thawedPath = $SPLUNK_DB/netvpn/thaweddb
# 30 days
frozenTimePeriodInSecs = 2592000

# Splunk Security Essentials doesn't have examples of Application Security,
# but if you want to ingest those logs, here are the recommended indexes:
#   appwebint - Internal WebApp Access Logs
#   appwebext - External WebApp Access Logs
#   appwebintrp - Internal-facing Web App Load Balancers
#   appwebextrp - External-facing Web App Load Balancers
#   appwebcdn - CDN logs for your website
#   appdbserver - Database Servers
#   appmsgserver - Messaging Servers
#   appint - App Servers for internal-facing apps
#   appext - App Servers for external-facing apps
Validation
Once this is complete, you will be able to find the list of indexes that the system is aware of by logging into Splunk and going to Settings -> Indexes.
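If you prefer the search bar, a standard SPL one-liner lists the non-internal indexes your indexers know about (eventcount is a built-in search command):

| eventcount summarize=false index=* | dedup index | table index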
Forwarder on Linux Systems
Overview
Installing the Linux forwarder is a straightforward process, similar to installing any Linux program. These instructions will walk you through a manual installation (perfect for a lab, a few servers, or when you're just getting started). You will have three options for how to proceed -- using an RPM package (easiest for any Red Hat or similar system with rpm), using a DEB package (easiest for any Ubuntu or similar system with dpkg), or using just the compressed .tgz file (which will work across Linux platforms).
Note: For full and latest information on installing a forwarder, please follow the instructions in the Linux installation manual:
http://docs.splunk.com/Documentation/Forwarder/latest/Forwarder/Installanixuniversalforwarder
Implementation
Prerequisites: You will need elevated permissions (root or sudo) to install and configure the software correctly.
Installation using the RPM package:
Make sure you have downloaded the universal forwarder package from Splunk’s website: https://www.splunk.com/en_us/download/universal-forwarder.html and have it on the system you want to install Splunk on.
Run: rpm -i splunkforwarder<version>.rpm
This will install the Splunk forwarder into the default directory of /opt/splunkforwarder
To enable Splunk to run each time your server is restarted use the following command:
/opt/splunkforwarder/bin/splunk enable boot-start
Installation using the DEB package:
Make sure you have downloaded the universal forwarder package from Splunk’s website: https://www.splunk.com/en_us/download/universal-forwarder.html and have it on the system on which you want to install Splunk.
Run: dpkg -i splunkforwarder<version>.deb
This will install the Splunk forwarder into the default directory of /opt/splunkforwarder
To enable Splunk to run each time your server is restarted use the following command:
/opt/splunkforwarder/bin/splunk enable boot-start
Installation using the .tgz file:
Make sure you have copied the tarball (or appropriate package for your system) to the target host; you will extract it into the /opt directory.
Run: tar zxvf <splunk_tarball_file.tgz> -C /opt
[root@ip-172-31-94-210 ~]# tar zxvf splunkforwarder-7.0.1-2b5b15c4ee89-Linux-x86_64.tgz -C /opt
splunkforwarder/
splunkforwarder/etc/
splunkforwarder/etc/deployment-apps/
splunkforwarder/etc/deployment-apps/README
splunkforwarder/etc/apps/
Check your extraction:
Run: ls -l /opt
[root@ip-172-31-94-210 apps]# ls -l /opt
total 8
drwxr-xr-x 8 splunk splunk 4096 Nov 29 20:21 splunkforwarder
If you would like Splunk to run at startup, execute the following command:
/opt/splunkforwarder/bin/splunk enable boot-start
After following any of the above three options, you will have a fully installed Splunk forwarder. There are three more steps you’ll want to take before you can see the data in Splunk:
- You will need an outputs.conf to tell the forwarder where to send data (next section)
- You will need an inputs.conf to tell the forwarder what data to send (below, in the "Splunk Configuration for Data Source")
- You will need an indexes.conf on the indexers to tell them where to put the data received. (You just passed that section.)
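As a quick sanity check after any of the three installation paths, you can ask the forwarder for its status using the standard CLI:

/opt/splunkforwarder/bin/splunk status

If the install succeeded, you should see splunkd reported as running.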
Sending Data from Forwarders to Indexers
Overview
Every Splunk system in the environment that is not an indexer (i.e., any system that doesn't store its data locally) should have an outputs.conf that points to your indexers -- whether it's a Universal Forwarder on a Windows host, a Linux heavy forwarder pulling the more difficult AWS logs, or a dedicated Search Head that dispatches searches to your indexers.
Implementation
Fortunately, the outputs.conf will be the same across the entire environment, and it's fairly simple. There are three steps:
- Create the app using the button below (SplunkCloud customers: use the app you received from SplunkCloud).
- Extract the file (it will download as a zip file).
- Place it in the etc/apps directory.
For Windows systems, this will typically be: c:\Program Files\Splunk\etc\apps. Once you've extracted the app there, you can restart Splunk via the Services Control Panel applet, or by running "c:\Program Files\Splunk\bin\splunk.exe" restart.
For Linux systems, this will typically be /opt/splunkforwarder/etc/apps/. Once you've extracted the app there, you can restart Splunk by running /opt/splunkforwarder/bin/splunk restart.
For customers not using SplunkCloud:
Sample outputs.conf

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = MySplunkServer.mycompany.local:9997

[tcpout-server://MySplunkServer.mycompany.local:9997]

Validation
Run a search in the Splunk environment for the host you've installed the forwarder on. E.g., index=* host=mywinsystem1*
You can also review all hosts that are sending data via | metadata index=* type=hosts
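On the forwarder itself, you can also confirm where it is configured to send data (a standard CLI command; it will prompt you to authenticate):

/opt/splunkforwarder/bin/splunk list forward-server

Active forward destinations indicate a healthy connection to your indexers; configured-but-inactive ones usually point to a network or port issue.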
Splunk Configuration for Data Source
Sizing Estimate
Linux event volume can vary greatly based on the type of host. At a very high level, common ranges we’ve seen are:
- Workstation: 4-6 MB/day
- Application Servers: 25-50 MB/day
Obviously, these ranges can vary dramatically. For high-horsepower Linux servers with thousands of simultaneous users, you may see dramatically more volume.
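As a rough worked example using the ranges above: 500 workstations at ~5 MB/day plus 50 application servers at ~40 MB/day comes to about 2.5 GB + 2 GB = 4.5 GB/day of Linux data.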
A common follow-on question is about the expected volume for the Process Launch Logs alone, pulled from auditd. This can vary based on how many new processes spin up, of course, but it is usually a rounding error on event volume. It’s often said that process auditing is the best bang for the buck in all of security logging!
The last question we get when having these discussions is: "What first?" If you have a formal risk-assessment process, or information-security audit policy or standards documentation, it's always best to start there. However, we generally see most customers start with the servers that contain the most sensitive information and move on from there. Importantly, remember that some organizations run Linux desktops too, so don’t miss those.
Install the Technology Add-On -- TA
Overview
Splunk has a detailed technology add-on (Splunk add-on for Unix and Linux) that supports ingesting all manner of Linux logs. Like all Splunk technology add-ons, it also includes everything needed in order to parse out the fields and give them names that are compliant with Splunk’s Common Information Model (Common Information Model Overview), so they can easily be used by the searches in Splunk Security Essentials. It also includes searches you will find in other community-supported and premium apps.
Implementation
Find the TA along with all your other Splunk apps/needs on SplunkBase. You can go to https://splunkbase.splunk.com/ and search for it or follow the direct link here: https://splunkbase.splunk.com/app/833/.
As with all Splunk TAs, we recommend you deploy it to all parts of your Splunk environment for simplicity and uniformity. So plan to install the TA on your search head, indexers, and any Linux forwarders in your environment.
- To install the app, start by downloading the file from SplunkBase (linked above) and extracting it.
Note: The app itself is a .tgz file, or a gzipped tarball. Fortunately, Linux systems make extraction easy using the tar command (tar zxvf <filename.tgz>), installed on most distributions.
- Once you have the extracted folder, move it into the $SPLUNK_HOME/etc/apps/ folder. If you follow the default installation path, this will be /opt/splunkforwarder/etc/apps.
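Put together, the steps might look like this on a forwarder (a sketch only; the archive filename is hypothetical, so use the name of the file you actually downloaded from SplunkBase):

cd /tmp
tar zxvf splunk-add-on-for-unix-and-linux.tgz
mv Splunk_TA_nix /opt/splunkforwarder/etc/apps/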
Here is an example of the Linux TA extracted into the proper location (note the path) on a Linux universal forwarder.
[root@ip-172-31-94-210 apps]# cd /opt/splunkforwarder/etc/apps/
[root@ip-172-31-94-210 apps]# ls -l
total 24
drwxr-xr-x 4 root root 4096 Nov 29 20:14 introspection_generator_addon
drwxr-xr-x 4 root root 4096 Nov 29 20:14 learned
drwxr-xr-x 4 root root 4096 Nov 29 20:14 search
drwxr-xr-x 3 root root 4096 Nov 29 20:14 splunk_httpinput
drwxr-xr-x 9 root root 4096 Jan 25 13:34 Splunk_TA_nix
drwxr-xr-x 4 root root 4096 Nov 29 20:14 SplunkUniversalForwarder
Once you’ve extracted the app, you can restart Splunk using the command $SPLUNK_HOME/bin/splunk restart ($SPLUNK_HOME is /opt/splunkforwarder if using our default location).
Validation
You can make sure that Splunk has picked up the presence of the app by running $SPLUNK_HOME/bin/splunk display app, which will, after asking you to log in, provide you with a list of installed apps. If you see Splunk_TA_nix listed alongside the other apps (learned, search, splunk_httpinput, etc.), you will know that it’s installed successfully.
Splunk Cloud Customers: you won't be copying any files or folders to your indexers or search heads, but good news! The Splunk Add-on for Unix and Linux is Cloud Self-Service Enabled, so you can just go to Find Apps and be up and running in seconds.
Linux Indexes and Sourcetypes
Overview
Amongst Splunk’s 15,000+ customers, we’ve done a lot of implementations, and we’ve learned a few things along the way. While you can use any sourcetypes or indexes that you want in the "land of Splunk," we’ve found that the most successful customers follow specific patterns, as it sets them up for success moving forward.
Implementation
The most common Linux data types are listed below, along with the recommended indexes and sourcetypes. Other inputs may be included in your configuration by enabling them in local/inputs.conf (as described in the "Configuration Files" section below).
Data Type | Input (inputs.conf, below) | Sourcetype | Index | Notes |
---|---|---|---|---|
Running Processes | script://./bin/ps.sh | ps | osnixscript | Scripted input; running processes sampled every 30s |
Network Ports | script://./bin/netstat.sh | netstat | osnixscript | Network port status sampled every 2 minutes |
Open Files | script://./bin/lsof.sh | lsof | osnixperf | Open-files-to-process-ID map sampled every 5 minutes |
Audited Events | script://./bin/rlog.sh | auditd | osnixsec | System events (adds, moves, and changes), privilege escalation, etc. |
System Log Directory | monitor:///var/log | syslog | osnix | DHCP leases, scheduled tasks, service information, etc. |
Authentication Log (Red Hat family) | monitor:///var/log/secure | syslog | osnixsec | Authentication events on distributions that log to /var/log/secure |
System Connections | monitor:///var/log/auth.log | syslog | osnixsec | Multiple protocols (sshd, logind, cron, sudo, etc.) |
Command History | monitor:///root/.bash_history | bash_history | osnixbash | All commands typed in the bash shell |
If you have already started ingesting these data sources into another index, you can usually proceed (though consider whether you should separate the logs, based on who will need access and who should be prohibited from it). If you have already started ingesting data with a different sourcetype, we recommend switching over to the standardized sourcetypes, if possible. If you're not using the Splunk TA for Linux to ingest data, keep in mind that you may need to do extra work to align field names in order to get value out of Splunk Security Essentials and other Splunk content.
To support your Linux sources, follow the procedure mentioned above in General Infrastructure - Indexes and Sourcetypes to add the new indexes for the data you will be bringing in.
We will show the sourcetypes and monitor statements next in the configuration files.
Configuration Files
Overview
Configuration files for Linux inputs tend to be relatively simple. In this case, we just have a single inputs.conf file that will go on the Linux hosts you will be monitoring. As detailed above in Instruction Expectations and Scaling, you will need some mechanism to distribute these files to the hosts you’re monitoring. For initial tests or deployments to only your most sensitive systems, it is easy to copy the files to the hosts. For larger distributions, you can use the Splunk deployment server or use another code-distribution system, such as Puppet, Chef, Ansible, or others.
Implementation
- Download the Splunk_TA_nix app and extract it, as described earlier
- Create a folder called "local" in the app
- Download the inputs.conf from below, and place it into the local folder. It should now be stored at Splunk_TA_nix/local/inputs.conf
- Distribute the Splunk_TA_nix directory (and its contents) to all of the hosts you will be monitoring.
inputs.conf (Download File)

### bash history
[monitor:///root/.bash_history]
sourcetype = bash_history
index = osnixbash
disabled = 0

[monitor:///home/.../.bash_history]
sourcetype = bash_history
index = osnixbash
disabled = 0

[script://$SPLUNK_HOME/etc/apps/Splunk_TA_nix/bin/netstat.sh]
interval = 120
sourcetype = netstat
source = netstat
index = osnixscript
disabled = 0

[script://$SPLUNK_HOME/etc/apps/Splunk_TA_nix/bin/lsof.sh]
interval = 300
sourcetype = lsof
source = lsof
index = osnixperf
disabled = 0

[monitor:///var/log]
whitelist = (log$|messages|mesg$|cron$|acpid$|\.out)
blacklist = (\.gz$|\.zip$|\.bz2$|auth\.log|lastlog|secure|anaconda\.syslog)
index = osnix
sourcetype = syslog
disabled = 0

[monitor:///var/log/secure]
blacklist = (\.gz$|\.zip$|\.bz2$)
index = osnixsec
sourcetype = syslog
source = secure
disabled = 0

[monitor:///var/log/auth.log*]
blacklist = (\.gz$|\.zip$|\.bz2$)
index = osnixsec
sourcetype = syslog
disabled = 0

# This script reads the auditd logs translated with ausearch
[script://./bin/rlog.sh]
sourcetype = auditd
source = auditd
interval = 60
index = osnixsec
disabled = 0

[script://./bin/ps.sh]
interval = 30
sourcetype = ps
source = ps
index = osnixscript
disabled = 0
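To confirm that Splunk will actually read your new inputs, you can run btool on the host (btool is a standard Splunk CLI utility; the stanza shown is from the file above):

/opt/splunkforwarder/bin/splunk btool inputs list monitor:///var/log/secure --debug

The --debug flag shows which file each setting came from, so you can verify that your local/inputs.conf settings are winning.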
(Optional) Deploy Least Privilege
Overview
At Splunk, we advise you not to run our software with any more privilege than necessary. However, on monitoring systems, there is an argument for doing just that. Consider this scenario: an attacker gains unauthorized access to your systems, with the next objective of escalating privileges to system administrator. Once that's achieved, the attacker can manipulate the permissions on the monitored files (the ones that alert you to an intruder's presence) so that your monitoring tool can no longer read them and is effectively blind.
Implementation
If you do wish to restrict the user, consider the following solution:
- Change the group permission set to allow the "splunk" group to read, and ensure that only the splunk user belongs to that group. (We have a monitoring solution for that!)
By default, the secure file has no read access for anyone other than the owner (root).
- If you followed our best practice for installing Splunk on a Linux system, you will already have a user and group called "splunk." If not, create them:
[root@ip-172-31-94-210 ec2-user]# groupadd splunk
[root@ip-172-31-94-210 ec2-user]# useradd -g splunk splunk
- Amend the permission set on the files you need to monitor to allow users in the "splunk" group to read them (see the sketch after this list).
- Provided Splunk is running as the splunk user and is a member of the "splunk" group, you will now be able to monitor the files successfully.
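Here's a minimal sketch of granting that access on a Red Hat-style system, assuming the splunk group created above (mode 640 leaves the file readable and writable by root, and readable by the splunk group):

chgrp splunk /var/log/secure
chmod 640 /var/log/secure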
One "gotcha" to this solution is that the files will often be automatically rolled and (alas) your changes will be lost. To avoid this situation, you should amend the logrotate configuration to generate the new file with the permission set we have just configured. As another sneak preview, the "create" statement will help. However, please see the man page and (like always) be sure to thoroughly test and seek the appropriate approvals.
System Configuration
Enabling Monitoring
Overview
To maintain a good security posture and to leverage the examples provided in Splunk Security Essentials, we recommend following your own security or audit policy. In the absence of that, there are a number of industry standard guides to help.
Implementation
Important Note: Splunk is a monitoring product, so while we’re working hard to centralize some of the recommendations you should follow in one place to make your life easier, we cannot offer support for the actual configuration of anything other than Splunk itself. We strongly recommend that you leverage trained expert resources when making any changes. That’s in large part why we’re pointing you to the documentation for the nitty-gritty details!
Fortunately, on most Linux distributions, the inputs we typically see monitored with Splunk are already written to log files, ready for you to pick up. Typically, some of the audit controls may be resident in different files, but using the inputs.conf above, you’ve already got that covered!
If, after reviewing your company’s policies or an industry-standard guide, you need a finer-grained level of monitoring, you should look to configure the auditd daemon. On most distributions, auditd is installed by default; on Ubuntu 14.04, you would need to install it. Here’s a document that explains how to do that. As a sneak preview, "apt-get install auditd" will be your friend! Once installed (if it is not already), you may configure it to monitor a number of events, such as user and process tracking. The Ubuntu configuration guide is here. The daemon is pretty universal across distributions. Once you have amended your audit policy, these events will be reported in the audit.log that we are monitoring as part of our auditd sourcetype in inputs.conf.
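For example, here is a sketch of audit rules that enable process-launch tracking (the key name process_launch is our own label, not a standard; persist the rules under /etc/audit/rules.d/ once you've tested them):

# log every execve() syscall, i.e., every new process launch
auditctl -a always,exit -F arch=b64 -S execve -k process_launch
# on systems that run 32-bit binaries, add the b32 variant as well
auditctl -a always,exit -F arch=b32 -S execve -k process_launch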
Important Note: As with all system-administration tasks and auditing controls, make sure you seek the appropriate authorization and go through testing first, to ensure the right level of monitoring and of system resource usage.
Validation
Usually the first thing people will see when deploying audit policies is either new systems showing up in Splunk or, at least, an increase in system log messages. If you already have some logs coming in and want to validate that you’re getting the new ones, look for the delta between your old policy and your new one.
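For example, this search (the index and sourcetype names assume the inputs.conf above) gives you a quick per-host count of audited events, which should jump once your new policy is live:

index=osnixsec sourcetype=auditd | stats count by host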