Cisco ASA Logs

Data Source Onboarding Guide Overview

Overview  

Welcome to the Splunk Data Source Onboarding Guides (DSOGs)!

Splunk has lots of docs, so why are we creating more? The primary goal of the DSOGs is to provide you with a curated, easy-to-digest view of the most common ways that Splunk users ingest data from our most popular sources, including how to configure the systems that will send us data (such as turning on AWS logging or Windows Security's process-launch logs). While these guides won't cover every single possible option for installation or configuration, they will give you the most common, easiest way forward.

How to use these docs: We've broken the docs out into different segments that get linked together. Many of them will be shared across multiple products. We suggest clicking the "Mark Complete" button above to remind yourself of those you've completed. Since this info will be stored locally in your browser, you won't have to worry about it affecting anyone else's view of the document. And when you're reading about ingesting Sysmon logs, for example, it's a convenient way to keep track of the fact that you already installed the forwarder in order to onboard your Windows Security logs.

So, go on and dive right in! And don't forget, Splunk is here to make sure you're successful. Feel free to ask questions of your Sales Engineer or Professional Services Engineer, if you run into trouble. You can also look for answers or post your questions on https://answers.splunk.com/.

General Infrastructure

Instruction Expectations and Scaling  

Expectations

This doc is intended to be an easy guide to onboarding data into Splunk, as opposed to a comprehensive set of docs. We've specifically chosen only straightforward technologies to implement here (avoiding ones that have lots of complications), but if at any point you feel like you need more traditional documentation for the deployment or usage of Splunk, Splunk Docs has you covered with over 10,000 pages of docs (let alone other languages!).

Because simpler is almost always better when getting started, we are also not worrying about more complicated capabilities like Search Head Clustering, Indexer Clustering, or anything else of a similar vein. If you do have those requirements, Splunk Docs is a great place to get started, and you can also always avail yourself of Splunk Professional Services so that you don't have to worry about any of the setup.

Scaling

While Splunk scales to hundreds or thousands of indexers with ease, we usually have some pretty serious architecture conversations before ordering tons of hardware. That said, these docs aren't just for lab installs. We've found that they work just fine for most customers in the 5 GB to 500 GB range, and even some larger ones! Regardless of whether you have a single Splunk box doing everything, or a distributed install with a Search Head and a set of Indexers, you should be able to get the data and the value flowing quickly.

There's one important note: the first request we get for orchestration as customers scale is to distribute configurations across many different universal forwarders. Imagine that you've just vetted out the Windows Process Launch Logs guide on a few test systems, and it's working great. Now you want to deploy it to 500, or 50,000 other Windows boxes. Well, there are a variety of ways to do this:

  • The standard Splunk answer is to use the Deployment Server. The deployment server is designed for exactly this task and is free with Splunk. We aren't going to document it here, mostly because it's extremely well documented by our EDU courses and on docs.splunk.com.
  • If you are a decent-sized organization, you've probably already got a way to deploy configurations and code, like Puppet, Chef, SCCM, Ansible, etc. All of those tools are used to deploy Splunk on a regular basis. Now, you might not want to go down this route if it requires onerous change control, reliance on other teams, etc. -- many large Splunk environments with well-developed software deployment systems prefer to use the Deployment Server because it can be owned by the Splunk team and is optimized for Splunk's needs. But many customers are very happy using Puppet to distribute Splunk configurations.
Ultimately, Splunk configurations are almost all just text files, so you can distribute them with our packaged software, with your own favorite tools, or even by just copying configuration files around.
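
If you do go the copy-files route for a first rollout, the whole operation is just placing an app folder and restarting the forwarder. Here's a minimal sketch, assuming SSH access, a hypothetical app folder named DSOG_inputs, and a hypothetical host list in forwarders.txt (names are illustrative, not part of any Splunk product):

    while read host; do
      # copy the app into the forwarder's apps directory
      rsync -a DSOG_inputs "$host":/opt/splunkforwarder/etc/apps/
      # restart the forwarder so it picks up the new configuration
      ssh "$host" /opt/splunkforwarder/bin/splunk restart
    done < forwarders.txt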

Indexes and Sourcetypes Overview  

Overview

The DSOGs talk a lot about indexes and sourcetypes. Here's a quick overview.

Splexicon (Splunk's Lexicon, a glossary of Splunk-specific terms) defines an index as the repository for data in Splunk Enterprise. When Splunk Enterprise indexes raw event data, it transforms the data into searchable events. Indexes are the collections of flat files on the Splunk Enterprise instance. That instance is known as an Indexer because it stores data. Splunk instances that users log into and run searches from are known as Search Heads. When you have a single instance, it takes on both the search head and indexer roles.

"Sourcetype" is defined as a default field that identifies the data structure of an event. A sourcetype determines how Splunk Enterprise formats the data during the indexing process. Example sourcetypes include access_combined and cisco_syslog.

In other words, an index is where we store data, and the sourcetype is a label given to similar types of data. All Windows Security logs will have a sourcetype of WinEventLog:Security, which means you can always search for sourcetype=WinEventLog:Security (when searching, the field name sourcetype is case sensitive, but its value is not).

Why is this important? We're going to guide you to use indexes that our professional services organization recommends to customers as an effective starting point. Using standardized sourcetypes (those shared by other customers) makes it much easier to use Splunk and avoid headaches down the road. Splunk will allow you to use any sourcetype you can imagine, which is great for custom log sources, but for common log sources, life is easier sticking with standard sourcetypes. These docs will walk you through standard sourcetypes.

Implementation

Below is a sample indexes.conf that will prepare you for all of the data sources we use in these docs. You will note that we separate OS logs from Network logs, and Security logs from Application logs. The idea here is to separate them for performance reasons, but also for isolation purposes: you may want to expose the application or system logs to people who shouldn't view security logs. Putting them in separate indexes prevents that.

To install this configuration, you should download the app below and put it in the apps directory.

For Windows systems, this will typically be: c:\Program Files\Splunk\etc\apps. Once you've extracted the app there, you can restart Splunk via the Services Control Panel applet, or by running "c:\Program Files\Splunk\bin\splunk.exe" restart.

For Linux systems, this will typically be /opt/splunk/etc/apps/. Once you've extracted the app there, you can restart Splunk by running /opt/splunk/bin/splunk restart.

You can view the indexes.conf below, but it's easiest to just use the "Click here to download a Splunk app with this indexes.conf" link below.

Splunk Cloud Customers: You won't copy the files onto your Splunk servers because you don't have access. You could go one-by-one through the UI and create all of the indexes below, but it might be easiest if you download the app, and open a ticket with CloudOps to have it installed.
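
If you're on a single non-Cloud instance and just want one or two of these indexes quickly, the Splunk CLI can create them too. A quick sketch (note this creates indexes with default retention settings rather than the tuned ones in the app, so the app remains the recommended route):

    /opt/splunk/bin/splunk add index netfw
    /opt/splunk/bin/splunk add index netops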


Sample indexes.conf
# Overview. Below you will find the basic indexes.conf settings for
# setting up your indexes in Splunk. We separate into different indexes 
# to allow for performance (in some cases) or data isolation in others. 
# All indexes come preconfigured with a relatively short retention period 
# that should work for everyone, but if you have more disk space, we 
# encourage (and usually see) longer retention periods, particularly 
# for security customers.

# Endpoint Indexes used for Splunk Security Essentials. 
# If you have the sources, other standard indexes we recommend include:
# epproxy - Local Proxy Activity

[epav]
coldPath = $SPLUNK_DB/epav/colddb
homePath = $SPLUNK_DB/epav/db
thawedPath = $SPLUNK_DB/epav/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[epfw]
coldPath = $SPLUNK_DB/epfw/colddb
homePath = $SPLUNK_DB/epfw/db
thawedPath = $SPLUNK_DB/epfw/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[ephids]
coldPath = $SPLUNK_DB/ephids/colddb
homePath = $SPLUNK_DB/ephids/db
thawedPath = $SPLUNK_DB/ephids/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[epintel]
coldPath = $SPLUNK_DB/epintel/colddb
homePath = $SPLUNK_DB/epintel/db
thawedPath = $SPLUNK_DB/epintel/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[oswin]
coldPath = $SPLUNK_DB/oswin/colddb
homePath = $SPLUNK_DB/oswin/db
thawedPath = $SPLUNK_DB/oswin/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[oswinsec]
coldPath = $SPLUNK_DB/oswinsec/colddb
homePath = $SPLUNK_DB/oswinsec/db
thawedPath = $SPLUNK_DB/oswinsec/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[oswinscript]
coldPath = $SPLUNK_DB/oswinscript/colddb
homePath = $SPLUNK_DB/oswinscript/db
thawedPath = $SPLUNK_DB/oswinscript/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[oswinperf]
coldPath = $SPLUNK_DB/oswinperf/colddb
homePath = $SPLUNK_DB/oswinperf/db
thawedPath = $SPLUNK_DB/oswinperf/thaweddb
frozenTimePeriodInSecs = 604800 
#7 days

[osnix]
coldPath = $SPLUNK_DB/osnix/colddb
homePath = $SPLUNK_DB/osnix/db
thawedPath = $SPLUNK_DB/osnix/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[osnixsec]
coldPath = $SPLUNK_DB/osnixsec/colddb
homePath = $SPLUNK_DB/osnixsec/db
thawedPath = $SPLUNK_DB/osnixsec/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[osnixscript]
coldPath = $SPLUNK_DB/osnixscript/colddb
homePath = $SPLUNK_DB/osnixscript/db
thawedPath = $SPLUNK_DB/osnixscript/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[osnixperf]
coldPath = $SPLUNK_DB/osnixperf/colddb
homePath = $SPLUNK_DB/osnixperf/db
thawedPath = $SPLUNK_DB/osnixperf/thaweddb
frozenTimePeriodInSecs = 604800 
#7 days

# Network Indexes used for Splunk Security Essentials
# If you have the sources, other standard indexes we recommend include:
# netauth - for network authentication sources
# netflow - for netflow data
# netids - for dedicated IPS environments
# netipam - for IPAM systems
# netnlb - for non-web server load balancer data (e.g., DNS, SMTP, SIP, etc.)
# netops - for general network system data (such as Cisco IOS non-netflow logs)
# netvuln - for Network Vulnerability Data

[netdns]
coldPath = $SPLUNK_DB/netdns/colddb
homePath = $SPLUNK_DB/netdns/db
thawedPath = $SPLUNK_DB/netdns/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[mail]
coldPath = $SPLUNK_DB/mail/colddb
homePath = $SPLUNK_DB/mail/db
thawedPath = $SPLUNK_DB/mail/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[netfw]
coldPath = $SPLUNK_DB/netfw/colddb
homePath = $SPLUNK_DB/netfw/db
thawedPath = $SPLUNK_DB/netfw/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[netops]
coldPath = $SPLUNK_DB/netops/colddb
homePath = $SPLUNK_DB/netops/db
thawedPath = $SPLUNK_DB/netops/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[netproxy]
coldPath = $SPLUNK_DB/netproxy/colddb
homePath = $SPLUNK_DB/netproxy/db
thawedPath = $SPLUNK_DB/netproxy/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[netvpn]
coldPath = $SPLUNK_DB/netvpn/colddb
homePath = $SPLUNK_DB/netvpn/db
thawedPath = $SPLUNK_DB/netvpn/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days


# Splunk Security Essentials doesn't have examples of Application Security, 
# but if you want to ingest those logs, here are the recommended indexes:
# appwebint - Internal WebApp Access Logs
# appwebext - External WebApp Access Logs
# appwebintrp - Internal-facing Web App Load Balancers
# appwebextrp - External-facing Web App Load Balancers
# appwebcdn - CDN logs for your website
# appdbserver - Database Servers
# appmsgserver - Messaging Servers
# appint - App Servers for internal-facing apps 
# appext - App Servers for external-facing apps 

Validation

Once this is complete, you will be able to find the list of indexes that the system is aware of by logging into Splunk, and going into Settings -> Indexes.
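
You can also confirm from the search bar that the new indexes are visible to your search peers; this standard SPL lists every index the system knows about:

    | eventcount summarize=false index=* | dedup index | fields index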

Forwarder on Linux Systems  

Overview

Installing the Linux forwarder is a straightforward process, similar to installing any other Linux program. These instructions will walk you through a manual installation to get you started (perfect for a lab, a few laptops, or a handful of servers). You will have three options for how to proceed -- using an RPM package (easiest for any Red Hat or similar system with rpm), using a DEB package (easiest for any Ubuntu or similar system with dpkg), or using just the compressed .tgz file (which will work across Linux platforms).

Note: For full and latest information on installing a forwarder, please follow the instructions in the Linux installation manual:
http://docs.splunk.com/Documentation/Forwarder/latest/Forwarder/Installanixuniversalforwarder

Implementation

Prerequisites
  1. You will need elevated permissions to install the software and configure it correctly
Installation using an RPM file:

Make sure you have downloaded the universal forwarder package from Splunk’s website: https://www.splunk.com/en_us/download/universal-forwarder.html and have it on the system you want to install Splunk on.

Run: rpm -i splunkforwarder<version>.rpm

This will install the Splunk forwarder into the default directory of /opt/splunkforwarder

To enable Splunk to run each time your server is restarted, use the following command:
    /opt/splunkforwarder/bin/splunk enable boot-start

Installation using a DEB file:

Make sure you have downloaded the universal forwarder package from Splunk’s website: https://www.splunk.com/en_us/download/universal-forwarder.html and have it on the system on which you want to install Splunk.   

Run: dpkg -i splunkforwarder<version>.deb

This will install the Splunk forwarder into the default directory of /opt/splunkforwarder

To enable Splunk to run each time your server is restarted, use the following command:
    /opt/splunkforwarder/bin/splunk enable boot-start

Installation using the .tgz file:

Make sure you have copied the tarball (or the appropriate package for your system) to the target system, then extract it into the /opt directory.

Run: tar zxvf <splunk_tarball_file.tgz> -C /opt

[root@ip-172-31-94-210 ~]# tar zxvf splunkforwarder-7.0.1-2b5b15c4ee89-Linux-x86_64.tgz -C /opt
splunkforwarder/
splunkforwarder/etc/
splunkforwarder/etc/deployment-apps/
splunkforwarder/etc/deployment-apps/README
splunkforwarder/etc/apps/

Check your extraction:

Run: ls -l /opt

[root@ip-172-31-94-210 apps]# ls -l /opt
total 8
drwxr-xr-x 8 splunk splunk 4096 Nov 29 20:21 splunkforwarder

If you would like Splunk to run at startup, execute the following command:
    /opt/splunkforwarder/bin/splunk enable boot-start
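
Whichever method you used, the first start of the forwarder will prompt you to accept the license, which you can do non-interactively. A sketch, where the -user flag assumes you've created a dedicated splunk user to run the forwarder:

    # accept the license on first start without an interactive prompt
    /opt/splunkforwarder/bin/splunk start --accept-license
    # configure boot-start to run as the (assumed) splunk user
    /opt/splunkforwarder/bin/splunk enable boot-start -user splunk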

Wrap Up

After following any of the above three options, you will have a fully installed Splunk forwarder. There are three more steps you’ll want to take before you can see the data in Splunk:

  • You will need an outputs.conf to tell the forwarder where to send data (next section)
  • You will need an inputs.conf to tell the forwarder what data to send (below, in the "Splunk Configuration for Data Source")
  • You will need an indexes.conf on the indexers to tell them where to put the data received. (You just passed that section.)

Sending Data from Forwarders to Indexers  

Overview

For any Splunk system in the environment -- whether it's a Universal Forwarder on a Windows host, a Linux Heavy Forwarder pulling the more difficult AWS logs, or even a dedicated Search Head that dispatches searches to your indexers -- every system that is not an indexer (i.e., any system that doesn't store its data locally) should have an outputs.conf that points to your indexers.

Implementation

Fortunately, the outputs.conf will be the same across the entire environment, and it is fairly simple. There are three steps:

  1. Create the app using the button below (SplunkCloud customers: use the app you received from SplunkCloud).
  2. Extract the file (it downloads as a zip file).
  3. Place it in the etc/apps directory.

For Windows systems, this will typically be: c:\Program Files\Splunk\etc\apps. Once you've extracted the app there, you can restart Splunk via the Services Control Panel applet, or by running "c:\Program Files\Splunk\bin\splunk.exe" restart.

For Linux systems, this will typically be /opt/splunkforwarder/etc/apps/. Once you've extracted the app there, you can restart Splunk by running /opt/splunkforwarder/bin/splunk restart.

For customers not using SplunkCloud:

Sample outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = MySplunkServer.mycompany.local:9997

[tcpout-server://MySplunkServer.mycompany.local:9997]

Here is the completed folder.

Validation

Run a search in the Splunk environment for the host you've installed the forwarder on. E.g., index=* host=mywinsystem1*

You can also review all hosts that are sending data via | metadata index=* type=hosts
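
If the host doesn't show up, verify that the forwarder actually loaded your outputs.conf; btool ships with every Splunk install and prints the effective configuration along with the file each setting came from:

    /opt/splunkforwarder/bin/splunk btool outputs list --debug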

System Configuration

Syslog Overview  

While Splunk can receive syslog messages directly from syslog devices like Cisco ASA, Palo Alto Networks, and others, this is not a best practice for production deployments. Using a separate syslog server means that Splunk configuration reloads and restarts (which take longer and happen more often than syslog restarts) don't interrupt data collection, and it lets you leverage best-of-breed tools. We recommend either utilizing an existing syslog server or deploying one, such as rsyslog or syslog-ng on Linux, or Kiwi Syslog Server on Windows. There are many example configurations available for ingesting data with any of these technologies, but for convenience we will provide detailed setup instructions for setting up rsyslog on Linux to ingest data for Splunk in line with our best practices.

We will set up an rsyslog server that writes files per sourcetype to disk, placing a .conf file in /etc/rsyslog.d/ with a config to receive logs over UDP. Logrotate should be configured on this server to prevent logs from flooding the disk. We will also walk through configuring the Splunk Universal Forwarder on this rsyslog server to forward logs to Splunk.

If you are setting this up in a lab and just want to get started quickly, you can follow the online documentation about ingesting data directly into Splunk via syslog.

Configuring rsyslog  

Overview

Before we configure the Cisco ASA device, we need to set up an rsyslog server. The supplied config assumes a "vanilla" install of rsyslog. If you already have an rsyslog server in place, you will need to validate this config against your deployment.

Implementation

Many customers have experienced issues because the version of rsyslog shipped with RHEL and Ubuntu is out of date and no longer supported. Run rsyslogd -v and validate that it reports version 8.32.0 or higher:

rsyslog version of at least 8.32.0.

If the version is not 8.32.0 or higher, even after updating your distribution, please add the rsyslog repositories to your distribution and update again. These repos can be found on the rsyslog project's website.

Install the rsyslog server on your Linux flavor of choice using the appropriate command, i.e., yum install rsyslog or apt-get install rsyslog. Some distributions come with rsyslog preinstalled. Make sure that your /etc/rsyslog.conf has a rule that reads $IncludeConfig /etc/rsyslog.d/*.conf.

Copy the following two files into /etc/rsyslog.d/ -- we recommend calling them splunk.conf and splunk-cisco_asa.conf. splunk.conf will contain all of the global rsyslog configuration, while splunk-cisco_asa.conf contains all of the Cisco ASA-specific configuration.


splunk.conf (Download File)
# Provides UDP syslog reception, leave these out if your server is already listening to a network port for receiving syslog.
# for parameters see http://www.rsyslog.com/doc/imudp.html
# alternatively these can be commented out from /etc/rsyslog.conf
module(load="imudp") # needs to be done just once
input(type="imudp" port="514")
 
# Provides TCP syslog reception Optional; in case you would like to use TCP as a preferred transport mechanism.
# for parameters see http://www.rsyslog.com/doc/imtcp.html
# module(load="imtcp") # needs to be done just once
# input(type="imtcp" port="514")

splunk-cisco_asa.conf (Download File)
# Rsyslog configuration file
# The purpose of this config is to receive data from a Cisco ASA firewall on a vanilla rsyslog environment
# and stage it for a Splunk universal forwarder
# For an existing server read the comments
# 2018-01-26 Filip Wijnholds fwijnholds@splunk.com
# 2018-01-30 Modified for Cisco ASA - Kyle Champlin kchamplin@splunk.com


module(load="builtin:omfile")
$Umask 0022
# If you are running Splunk in limited privilege mode, make sure to configure the file ownership:
# $FileOwner splunk
# $FileGroup splunk
 
#Filters Data and writes to files per Sourcetype:
$template asa,"/var/log/rsyslog/cisco/asa/%HOSTNAME%-%$MINUTE%.log"


#From splunk transforms.conf in Cisco ASA TA
#[force_sourcetype_for_cisco_asa]
#DEST_KEY = MetaData:Sourcetype
#REGEX = %ASA-\d-\d{6}
#FORMAT = sourcetype::cisco:asa

# Looks for the ASA message pattern in the data and writes matches to the asa template path
:msg, regex, "%ASA-\d-\d{6}" ?asa

Restart the rsyslog service by running: sudo service rsyslog restart
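
If the service fails to come back up, you can sanity-check the configuration without restarting; rsyslog's -N flag runs it in config-checking-only mode:

    rsyslogd -N1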

Validation

After the daemon is restarted and traffic is sent to rsyslog, you should see at least this directory being created:

  • /var/log/rsyslog/cisco/asa/
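
If your ASA isn't pointed at the server yet, you can simulate a message locally to exercise the filter. A sketch using the util-linux logger (the message body mimics the %ASA-severity-ID pattern the rsyslog rule matches; the ID shown is illustrative):

    # send a fake ASA event over UDP to the local rsyslog listener
    logger -n 127.0.0.1 -P 514 -d "%ASA-6-302013: Built outbound TCP connection (test message)"
    # confirm a per-host file appeared
    ls -l /var/log/rsyslog/cisco/asa/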

Configuring logrotate  

Overview

With the above configuration, we will be able to receive data via UDP syslog from our Cisco ASA device. However, left to its own whims, rsyslog would fill up all the disk space available to it -- not desirable. The most common way to handle this is to use logrotate, which is ubiquitous on Linux. The below configuration will automatically rotate all of your Cisco ASA log files every day, compress the older ones, and then delete the oldest files.

Implementation

Create a file called splunk-ciscoasa in the logrotate directory /etc/logrotate.d/ with the following contents, or just click download below.


splunk-ciscoasa (Download File)
/var/log/rsyslog/cisco/asa/*.log
{
    daily
    compress
    delaycompress
    rotate 3
    ifempty
    maxage 4
    nocreate
    missingok
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}

For 99.9% of environments, you should be set now, as logrotate will run regularly. If you don't see the log files being rotated (see Validation below), you may fall into the 0.1% of environments where you need to configure logrotate to run yourself. The easy way to do this is using crontab (another of those ubiquitous Linux tools).

To edit your crontab, from the terminal run: crontab -e. For a daily schedule that rolls the log at midnight, the following cron job will do:

0 0 * * * /usr/sbin/logrotate -f /etc/logrotate.d/splunk-ciscoasa

If you are unfamiliar with cron and want a different schedule, visit this website: https://crontab.guru/#0_0_*_*_*.

Validation

After 24 hours you should see a .1 appended to the log files when you run: ls -l /var/log/rsyslog/cisco/asa/
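
You don't have to wait a full day to check the configuration itself; logrotate's -d flag does a debug dry run, printing what it would do without touching any files:

    logrotate -d /etc/logrotate.d/splunk-ciscoasa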

General Logging Configuration for Cisco ASA  

The Cisco ASA should utilize TCP as the syslog transport and will maintain an open TCP connection with the rsyslog server. (If you go this route, uncomment the imtcp lines in the splunk.conf above, since the supplied config listens on UDP by default.) Note: A load balancer must not be placed between the ASA and the syslog server.

DNS Prerequisites:

  • For each IP address assigned for management of the ASA, ensure both A and PTR records exist and match
  • For each egress NAT address assigned to the device, ensure A and PTR records exist and match
  • For each ingress NAT address assigned to the device, ensure the PTR record matches the internal destination's A record. An A record for this IP is not required (and might cause communication issues)

Update the ASA configuration to direct syslog messages to the rsyslog server. You can do this either via the command line or via the Cisco Adaptive Security Device Manager (ASDM) / IPS Device Manager (IDM) GUI.
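
For the command-line route, a minimal sketch of the relevant ASA commands (the interface name, server IP, and port here are illustrative assumptions; verify them against your environment and ASA version):

    logging enable
    ! send messages at level 6 (informational) and below
    logging trap informational
    ! point syslog at the rsyslog server reachable via the inside interface
    logging host inside 192.0.2.10 tcp/514
    ! identify the ASA by hostname in each message
    logging device-id hostname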

Here is an example of the Cisco Logging Configuration:

Cisco ASA Logging Setup Configuration

Here is an example of the Cisco Syslog Server Configuration:

Cisco ASA Syslog Server Configuration

Enable Logging of Events  

Cisco ASA provides many configuration options for logging and thus can dictate how much visibility you have on your network. Generally speaking, the following are best practices for balancing between load on the system, logging fidelity, and data volume.


Specific Subsystem Logging Levels

  • Edge Firewall:
    • Ensure the default deny rule from external interface is not configured to log
    • Ensure the inbound allow rule for well-known servers (public web, SMTP, VPN, DNS) is not configured to log
  • Border Firewall:
    • Ensure the default deny rule is configured to log in all directions
    • Configure rule logging
    • Ensure ICMP egress is logged
    • Ensure egress for the following protocols are not logged from specifically authorized servers:
      • HTTP(S) from web proxy servers
      • SMTP from email gateway servers
      • DNS from internal recursion servers
    • Ensure ingress for the following protocols are not logged to specifically authorized servers:
      • HTTP(S) web servers
      • DNS authoritative servers
      • SMTP to email gateway servers
  • VPN
    • Ensure all authentication and tunnel status events are logged for accept and failure
    • Ensure all user names are logged without masking
    • Ensure ingress/egress rules are logged except the following:
      • Traffic to the web proxy
      • Traffic to the internal DNS servers

Here is an example of the Cisco Logging Filters Configuration:

Cisco ASA Logging Filters Configuration

Here is an example of the Cisco Access Rules Logging Configuration:

Cisco ASA Access Rules Logging Configuration

Splunk Configuration for Data Source

Sizing Estimate  

There is wide variability in the size of Cisco ASA logs. Each firewall message is typically around 230 bytes. We typically see one message per connection (we recommend logging allowed connections along with denied connections). The volume depends on the size of your ASA deployment: it can range from roughly 10 MB/day for a branch office to north of 50 GB/day for a main datacenter cluster.

Using only Cisco's built-in tools, the show ip inspect statistics command will tell you how many connections there have been since last reset. So, one way of estimating event volume is to check that number at the same time on subsequent days and then calculate the number of connections you typically see per day. When multiplied by the general 230 byte number, you will get a decent expectation for data size.
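
As a worked example with purely illustrative numbers: if the counter shows about 10,000,000 more connections than it did at the same time yesterday, the estimate is 10,000,000 events x 230 bytes, or roughly 2.3 GB/day.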

Another common approach is to implement the rsyslog configuration referenced above and then track the size of the files created on disk to determine volume.

These estimates are predicated on a logging configuration of level 6 (informational), which is detailed later in "Cisco ASA Configuration":

  • Edge firewall: Negligible
  • Zone-based firewall: 230 bytes per event
  • VPN Services: 10 KB per session, plus firewall activity
  • Operational: Variable, but typically < 200 MB per day, per Cisco ASA

Install the Cisco ASA Technology Addon -- TA  

Overview

Splunk has a detailed technology add-on that supports ingesting all the different data types generated by your Cisco ASA firewall. Like all Splunk technology add-ons, it also includes everything needed to parse out the fields and give them names that are compliant with Splunk's Common Information Model, so they can easily be used by the searches in Splunk Security Essentials, along with searches you will find in other community-supported and premium apps.

Implementation

Find the TA, along with all your other Splunk apps, on SplunkBase. You can visit https://splunkbase.splunk.com/ and search for it, or you can just follow the direct link: https://splunkbase.splunk.com/app/1620/

As with all Splunk TAs, we recommend you deploy it to all parts of your Splunk environment, for simplicity and uniformity. To install the app, start by downloading the file from the SplunkBase location just shown, and then extract it into the apps folder. On Windows systems, this will be the %SPLUNK_HOME%\etc\apps folder, usually C:\Program Files\Splunk\etc\apps. On Linux systems, this will be $SPLUNK_HOME/etc/apps, usually /opt/splunk/etc/apps.
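
On Linux, the whole installation is a download and an extract; a quick sketch, where the exact filename varies with the version you download from SplunkBase:

    # extract the TA into the apps folder, then restart so Splunk loads it
    tar zxvf splunk-add-on-for-cisco-asa_*.tgz -C /opt/splunk/etc/apps
    /opt/splunk/bin/splunk restart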

Validation

You can make sure that Splunk has picked up the presence of the app by running $SPLUNK_HOME/bin/splunk display app (or on Windows, %SPLUNK_HOME%\bin\splunk display app), which will, after asking you to log in, provide you with a list of installed apps. If you see the app listed alongside the defaults (learned, search, splunk_httpinput, etc.), you'll know it installed successfully.

Splunk Cloud Customers: you won't be copying any files or folders to your indexers or search heads, but good news! The Splunk Add-on for Cisco ASA is Cloud Self-Service Enabled, so you can just go to Find Apps and be up and running in seconds.

Indexes and Sourcetypes  

Overview

Amongst Splunk’s 15,000+ customers, we’ve done a lot of implementations, and we’ve learned a few things along the way. While you can use any sourcetypes or indexes that you want in the "land of Splunk," we’ve found that the most successful customers follow specific patterns, as it sets them up for success moving forward.

Implementation

The following is a table of sourcetypes from Splunk documentation. The Splunk Add-on for Cisco ASA provides the index-time and search-time knowledge for Cisco ASA, Cisco PIX, and Cisco Firewall Services Module (FWSM) devices, using the following sourcetypes. We will focus on the cisco:asa sourcetype.

Sourcetype: cisco:asa
Index: netfw
Description: The system logs of Cisco ASA record user authentication, user session, VPN, and intrusion messages.
CIM Data Models: Authentication, Change Analysis, Network Sessions, Network Traffic, Malware

Sourcetype: cisco:fwsm
Index: netfw
Description: The system logs of Cisco FWSM record user authentication, user session, and firewall messages.
CIM Data Models: Authentication, Network Sessions, Network Traffic

Sourcetype: cisco:pix
Index: netfw
Description: The system logs of Cisco PIX record user authentication, user session, and intrusion messages.
CIM Data Models: Authentication, Network Sessions, Network Traffic

For our index, we will standardize on the netfw index to store all firewall logs. If you went through the indexes configuration above in "Indexes and Sourcetypes Overview," you already have this index configured across your environment. If you are forging your own path, we recommend creating the netfw index on any search heads or indexers in your environment, or on your single-instance server.

Cisco ASA inputs.conf  

Overview

Configuration files for Cisco ASA inputs tend to be pretty simple. In this case, we just have a single inputs.conf file that will go on the syslog server you will be monitoring. As detailed above in Instruction Expectations and Scaling, you will need some mechanism to distribute these files to the hosts you're monitoring. For initial tests or deployments to only your most sensitive systems, it is easy to copy the files to the hosts. For larger distributions, you can use the Splunk deployment server or another code-distribution system, such as SCCM, Puppet, Chef, Ansible, or others.

Implementation

Distribute the below inputs.conf file to the Universal Forwarder installed on your syslog server (only where you actually have the rsyslog information). You should create a "local" folder inside of the TA folder. For most customers, the path to this file will end up being /opt/splunkforwarder/etc/apps/Splunk_TA_cisco-asa/local/inputs.conf (or on Windows, C:\Program Files\Splunk\etc\apps\Splunk_TA_cisco-asa\local\inputs.conf). You can click Download File below to grab the file.


inputs.conf (Download File)
[monitor:///var/log/rsyslog/cisco/asa/*.log]
host_regex = asa/(.*?)-\d+\.log$
sourcetype = cisco:asa
index = netfw
disabled = 0

# search to check
# index=netfw sourcetype=cisco:asa
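
Validation

After distributing the file and restarting the forwarder, you can confirm the monitor stanza was loaded using btool on the syslog server's forwarder:

    /opt/splunkforwarder/bin/splunk btool inputs list --debug | grep cisco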