Stream DNS Logs

Data Source Onboarding Guide Overview

Overview  

Welcome to the Splunk Data Source Onboarding Guides (DSOGs)!

Splunk has lots of docs, so why are we creating more? The primary goal of the DSOGs is to provide you with a curated, easy-to-digest view of the most common ways that Splunk users ingest data from our most popular sources, including how to configure the systems that will send us data (such as turning on AWS logging or Windows Security's process-launch logs, for example). While these guides won't cover every single possible option for installation or configuration, they will give you the most common, easiest way forward.

How to use these docs: We've broken the docs out into different segments that get linked together. Many of them will be shared across multiple products. We suggest clicking the "Mark Complete" button above to remind yourself of those you've completed. Since this info will be stored locally in your browser, you won't have to worry about it affecting anyone else's view of the document. And when you're reading about ingesting Sysmon logs, for example, it's a convenient way to keep track of the fact that you already installed the forwarder in order to onboard your Windows Security logs.

So, go on and dive right in! And don't forget, Splunk is here to make sure you're successful. Feel free to ask questions of your Sales Engineer or Professional Services Engineer, if you run into trouble. You can also look for answers or post your questions on https://answers.splunk.com/.

General Infrastructure

Instruction Expectations and Scaling  

Expectations

This doc is intended to be an easy guide to onboarding data into Splunk, as opposed to a comprehensive set of docs. We've specifically chosen only straightforward technologies to implement here (avoiding ones that have lots of complications), but if at any point you feel like you need more traditional documentation for the deployment or usage of Splunk, Splunk Docs has you covered with over 10,000 pages of material (in multiple languages, no less!).

Because simpler is almost always better when getting started, we are also not worrying about more complicated capabilities like Search Head Clustering, Indexer Clustering, or anything else of a similar vein. If you do have those requirements, Splunk Docs is a great place to get started, and you can also always avail yourself of Splunk Professional Services so that you don't have to worry about any of the setup.

Scaling

While Splunk scales to hundreds or thousands of indexers with ease, we usually have some pretty serious architecture conversations before ordering tons of hardware. That said, these docs aren't just for lab installs. We've found that they work just fine for most customers in the 5 GB to 500 GB range, and even for some larger environments. Regardless of whether you have a single Splunk box doing everything, or a distributed install with a Search Head and a set of Indexers, you should be able to get the data and the value flowing quickly.

There's one important note: the first request we get for orchestration as customers scale is to distribute configurations across many different universal forwarders. Imagine that you've just vetted the Windows Process Launch Logs guide on a few test systems, and it's working great. Now you want to deploy it to 500, or 50,000, other Windows boxes. There are a variety of ways to do this:

  • The standard Splunk answer is to use the Deployment Server. The Deployment Server is designed for exactly this task, and is free with Splunk. We aren't going to document it here, mostly because it's extremely well documented by our EDU team and on docs.splunk.com.
  • If you are a decent-sized organization, you've probably already got a way to deploy configurations and code, like Puppet, Chef, SCCM, or Ansible. All of those tools are used to deploy Splunk on a regular basis. Now, you might not want to go down this route if it requires onerous change control or reliance on other teams -- many large Splunk environments with well-developed software deployment systems still prefer the Deployment Server, because it can be owned by the Splunk team and is optimized for Splunk's needs. But many customers are very happy using Puppet to distribute Splunk configurations.
Ultimately, Splunk configurations are almost all just text files, so you can distribute the configurations with our packaged software, with your own favorite tools, or even by just copying configuration files around.
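As a sketch of the Deployment Server approach, a serverclass.conf on the deployment server might look something like the following. The server class name, app name, and hostname pattern here are hypothetical; see the Deployment Server docs for the full set of options.

```ini
# serverclass.conf on the Deployment Server (hypothetical example)
[serverClass:windows_uf]
# Match universal forwarders by hostname pattern
whitelist.0 = win-*

[serverClass:windows_uf:app:my_windows_inputs]
# Push the app and restart splunkd so the new inputs take effect
stateOnClient = enabled
restartSplunkd = true
```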

Indexes and Sourcetypes Overview  

Overview

The DSOGs talk a lot about indexes and sourcetypes. Here's a quick overview.

Splexicon (Splunk's Lexicon, a glossary of Splunk-specific terms) defines an index as the repository for data in Splunk Enterprise. When Splunk Enterprise indexes raw event data, it transforms the data into searchable events. Indexes are the collections of flat files on the Splunk Enterprise instance. That instance is known as an Indexer because it stores data. Splunk instances that users log into and run searches from are known as Search Heads. When you have a single instance, it takes on both the search head and indexer roles.

"Sourcetype" is defined as a default field that identifies the data structure of an event. A sourcetype determines how Splunk Enterprise formats the data during the indexing process. Example sourcetypes include access_combined and cisco_syslog.

In other words, an index is where we store data, and the sourcetype is a label given to similar types of data. All Windows Security Logs will have a sourcetype of WinEventLog:Security, which means you can always search for sourcetype=wineventlog:security (when searching, the field name sourcetype is case sensitive, but the value is not).
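For example, assuming you've routed Windows Security logs to the oswinsec index (as in the sample indexes.conf later in this doc), these two searches return the same events, because the sourcetype value matches case-insensitively:

```text
index=oswinsec sourcetype=WinEventLog:Security
index=oswinsec sourcetype=wineventlog:security
```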

Why is this important? We're going to guide you to use indexes that our professional services organization recommends to customers as an effective starting point. Using standardized sourcetypes (those shared by other customers) makes it much easier to use Splunk and avoid headaches down the road. Splunk will allow you to use any sourcetype you can imagine, which is great for custom log sources, but for common log sources, life is easier sticking with standard sourcetypes. These docs will walk you through standard sourcetypes.

Implementation

Below is a sample indexes.conf that will prepare you for all of the data sources we use in these docs. You will note that we separate OS logs from Network logs, and Security logs from Application logs. The idea is to separate them for performance reasons, but also for isolation purposes: you may want to expose the application or system logs to people who shouldn't view security logs. Putting them in separate indexes prevents that.

To install this configuration, you should download the app below and put it in the apps directory.

For Windows systems, this will typically be: c:\Program Files\Splunk\etc\apps. Once you've extracted the app there, you can restart Splunk via the Services Control Panel applet, or by running "c:\Program Files\Splunk\bin\splunk.exe" restart.

For Linux systems, this will typically be /opt/splunk/etc/apps/. Once you've extracted the app there, you can restart Splunk by running /opt/splunk/bin/splunk restart.

You can view the indexes.conf below, but it's easiest to just use the "Click here to download a Splunk app with this indexes.conf" link below.

Splunk Cloud Customers: You won't copy the files onto your Splunk servers because you don't have access. You could go one-by-one through the UI and create all of the indexes below, but it might be easiest if you download the app, and open a ticket with CloudOps to have it installed.


Sample indexes.conf
# Overview. Below you will find the basic indexes.conf settings for
# setting up your indexes in Splunk. We separate into different indexes 
# to allow for performance (in some cases) or data isolation in others. 
# All indexes come preconfigured with a relatively short retention period 
# that should work for everyone, but if you have more disk space, we 
# encourage (and usually see) longer retention periods, particularly 
# for security customers.

# Endpoint Indexes used for Splunk Security Essentials. 
# If you have the sources, other standard indexes we recommend include:
# epproxy - Local Proxy Activity

[epav]
coldPath = $SPLUNK_DB/epav/colddb
homePath = $SPLUNK_DB/epav/db
thawedPath = $SPLUNK_DB/epav/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[epfw]
coldPath = $SPLUNK_DB/epfw/colddb
homePath = $SPLUNK_DB/epfw/db
thawedPath = $SPLUNK_DB/epfw/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[ephids]
coldPath = $SPLUNK_DB/ephids/colddb
homePath = $SPLUNK_DB/ephids/db
thawedPath = $SPLUNK_DB/ephids/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[epintel]
coldPath = $SPLUNK_DB/epintel/colddb
homePath = $SPLUNK_DB/epintel/db
thawedPath = $SPLUNK_DB/epintel/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[oswin]
coldPath = $SPLUNK_DB/oswin/colddb
homePath = $SPLUNK_DB/oswin/db
thawedPath = $SPLUNK_DB/oswin/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[oswinsec]
coldPath = $SPLUNK_DB/oswinsec/colddb
homePath = $SPLUNK_DB/oswinsec/db
thawedPath = $SPLUNK_DB/oswinsec/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[oswinscript]
coldPath = $SPLUNK_DB/oswinscript/colddb
homePath = $SPLUNK_DB/oswinscript/db
thawedPath = $SPLUNK_DB/oswinscript/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[oswinperf]
coldPath = $SPLUNK_DB/oswinperf/colddb
homePath = $SPLUNK_DB/oswinperf/db
thawedPath = $SPLUNK_DB/oswinperf/thaweddb
frozenTimePeriodInSecs = 604800 
#7 days

[osnix]
coldPath = $SPLUNK_DB/osnix/colddb
homePath = $SPLUNK_DB/osnix/db
thawedPath = $SPLUNK_DB/osnix/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[osnixsec]
coldPath = $SPLUNK_DB/osnixsec/colddb
homePath = $SPLUNK_DB/osnixsec/db
thawedPath = $SPLUNK_DB/osnixsec/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[osnixscript]
coldPath = $SPLUNK_DB/osnixscript/colddb
homePath = $SPLUNK_DB/osnixscript/db
thawedPath = $SPLUNK_DB/osnixscript/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[osnixperf]
coldPath = $SPLUNK_DB/osnixperf/colddb
homePath = $SPLUNK_DB/osnixperf/db
thawedPath = $SPLUNK_DB/osnixperf/thaweddb
frozenTimePeriodInSecs = 604800 
#7 days

# Network Indexes used for Splunk Security Essentials
# If you have the sources, other standard indexes we recommend include:
# netauth - for network authentication sources
# netflow - for netflow data
# netids - for dedicated IPS environments
# netipam - for IPAM systems
# netnlb - for non-web server load balancer data (e.g., DNS, SMTP, SIP, etc.)
# netops - for general network system data (such as Cisco iOS non-netflow logs)
# netvuln - for Network Vulnerability Data

[netdns]
coldPath = $SPLUNK_DB/netdns/colddb
homePath = $SPLUNK_DB/netdns/db
thawedPath = $SPLUNK_DB/netdns/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[mail]
coldPath = $SPLUNK_DB/mail/colddb
homePath = $SPLUNK_DB/mail/db
thawedPath = $SPLUNK_DB/mail/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[netfw]
coldPath = $SPLUNK_DB/netfw/colddb
homePath = $SPLUNK_DB/netfw/db
thawedPath = $SPLUNK_DB/netfw/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[netops]
coldPath = $SPLUNK_DB/netops/colddb
homePath = $SPLUNK_DB/netops/db
thawedPath = $SPLUNK_DB/netops/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[netproxy]
coldPath = $SPLUNK_DB/netproxy/colddb
homePath = $SPLUNK_DB/netproxy/db
thawedPath = $SPLUNK_DB/netproxy/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days

[netvpn]
coldPath = $SPLUNK_DB/netvpn/colddb
homePath = $SPLUNK_DB/netvpn/db
thawedPath = $SPLUNK_DB/netvpn/thaweddb
frozenTimePeriodInSecs = 2592000
#30 days


# Splunk Security Essentials doesn't have examples of Application Security, 
# but if you want to ingest those logs, here are the recommended indexes:
# appwebint - Internal WebApp Access Logs
# appwebext - External WebApp Access Logs
# appwebintrp - Internal-facing Web App Load Balancers
# appwebextrp - External-facing Web App Load Balancers
# appwebcdn - CDN logs for your website
# appdbserver - Database Servers
# appmsgserver - Messaging Servers
# appint - App Servers for internal-facing apps 
# appext - App Servers for external-facing apps 

Validation

Once this is complete, you will be able to find the list of indexes that the system is aware of by logging into Splunk, and going into Settings -> Indexes.
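You can also confirm this from the search bar. One common admin idiom (assuming you have permission to search all indexes) lists every index the indexers know about, along with its event count:

```text
| eventcount summarize=false index=*
| dedup index
| table index count
```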

Forwarder on Windows Systems  

Overview

Installing the Windows forwarder is a straightforward process, similar to installing any Windows program. These instructions will walk you through a manual installation to get started (perfect for a lab, a few laptops, or when you're just getting started on domain controllers).

Implementation

Note for larger environments: When you want to automatically roll out the forwarder to hundreds (or thousands, or hundreds of thousands) of systems, you will want to leverage your traditional software-deployment techniques. The Splunk forwarder is an MSI package, and we have docs on recommended ways to deploy it.

Of course, you can also deploy it with traditional system-configuration management software. This can vary a lot from environment to environment. For this doc we'll just walk you through the installation so that you know what's coming.
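For reference, a silent install from an elevated command prompt typically looks something like the line below. The MSI filename and indexer hostname are placeholders; check the universal forwarder installation docs for the full list of supported MSI flags.

```text
msiexec.exe /i splunkforwarder-x64.msi AGREETOLICENSE=Yes RECEIVING_INDEXER="MySplunkServer.mycompany.local:9997" /quiet
```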

The first thing to do is download the Universal Forwarder from Splunk's website (https://www.splunk.com/en_us/download/universal-forwarder.html). This is a separate download from the main Splunk installer, as the universal forwarder is lightweight, so it can be installed on all of the systems in your environment. Most users today will download the x64 version as an MSI installer.

When you double click the downloaded file, the standard MSI installer will appear.

The initial installer screen for the Splunk Forwarder. Click Next to continue, don't worry about customizing the options. You can also install the package silently.

Don't worry about the Cloud checkbox -- we will use the same settings for both.

While you can click "Customize Settings" here and manually insert the address of your indexers, manually choose the log sources you would like to index, etc., we generally don't recommend that unless you're never going to move beyond the one source you're looking at. (It's harder to track down those settings later and apply them to other systems.) Ignore "Customize Options" and click "Next." The setup will now go through its process, and you'll be finished with a freshly installed forwarder. There are three more steps you'll want to take before you can see the data in Splunk:

  • You will need an outputs.conf to tell the forwarder where to send data (next section)
  • You will need an inputs.conf to tell the forwarder what data to send (below, in the Splunk Configuration for Data Source)
  • You will need an indexes.conf on the indexers to tell them where to put the data that is received (Previous section)

Validation

You can now check Task Manager and you should see Splunk running. Alternatively, check under Services in the Control Panel. You will see Splunk listed and started.

Sending Data from Forwarders to Indexers  

Overview

Every system in the environment that is not an indexer (i.e., any system that doesn't store its data locally) should have an outputs.conf that points to your indexers -- whether it's a Universal Forwarder on a Windows host, a Linux heavy-weight forwarder pulling the more difficult AWS logs, or even a dedicated Search Head that dispatches searches to your indexers.

Implementation

Fortunately the outputs.conf will be the same across the entire environment, and is fairly simple. There are three steps:

  1. Download the app using the button below (SplunkCloud customers: use the app you received from SplunkCloud).
  2. Extract the downloaded zip file.
  3. Place the extracted app in the etc/apps directory.

For Windows systems, this will typically be: c:\Program Files\Splunk\etc\apps. Once you've extracted the app there, you can restart Splunk via the Services Control Panel applet, or by running "c:\Program Files\Splunk\bin\splunk.exe" restart.

For Linux systems, this will typically be /opt/splunkforwarder/etc/apps/. Once you've extracted the app there, you can restart Splunk by running /opt/splunkforwarder/bin/splunk restart.

For customers not using SplunkCloud:

Sample outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = MySplunkServer.mycompany.local:9997

[tcpout-server://MySplunkServer.mycompany.local:9997]

Here is the completed folder.
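If you have more than one indexer, you can list them all on the server line and the forwarder will automatically load balance across them. A sketch, assuming two hypothetical indexer hostnames:

```ini
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
# The forwarder will automatically load balance across these indexers
server = idx1.mycompany.local:9997, idx2.mycompany.local:9997
```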

Validation

Run a search in the Splunk environment for the host you've installed the forwarder on. E.g., index=* host=mywinsystem1*

You can also review all hosts that are sending data via | metadata type=hosts index=*

Splunk Configuration for Data Source

Stream Overview  

Splunk Stream is a great way to monitor network traffic from a host or via a network tap or SPAN port. The software acts as a network traffic "sniffer." The web GUI interface allows you to choose individual metadata fields that are specific to a network protocol and write that metadata to your Splunk indexers for searching.

This means that you can capture all kinds of useful metadata through Splunk Stream, and even do limited full packet capture! The top use cases for Stream are DNS and DHCP (both protocols where logging is notoriously weak), but many people use Stream to capture HTTP transactions, database queries, emails, and more. Check out all the protocols that Stream can handle!

The simplest way to get Stream set up is as a full, standalone Splunk instance with the Stream app installed. While this initially will act as an indexer, you will add an output configuration to direct Splunk to send the data out to indexers in the Splunk environment. This converts the instance into what is known as a "heavy-weight forwarder" and is the most common way to set up a new Stream environment.

The most common scenario for Stream is to install it directly on the host that's generating the traffic you want to capture, frequently a Windows domain controller serving the DHCP and DNS Server roles. The next most common model is to install Stream on a host attached to a SPAN port or a network tap, allowing you to have an out-of-band Stream host monitoring the network. The Splunk configuration for that setup is identical; you only need to lean on your network team to help make it happen. Finally, advanced users might configure Stream on a universal forwarder (see the Multi-Forwarder Environments section at the bottom).

Sizing for Splunk Stream  

Stream is perhaps the most difficult product to properly estimate sizing for inside of the Splunk world, for two reasons. The first is that you can choose what protocols you want to capture, and even then you can apply filters so that you only get a certain percentage of that traffic (for example, only HTTP transactions headed out to the internet). The second reason is that even once you've decided to capture a data stream (such as DNS logs), you can then decide what individual fields you're looking for -- just the query, type, and response? Or everything?

For this reason, the easiest way to size your Stream ingest is to just give it a shot. Leave the outputs.conf off your heavy-weight forwarder so the data stays local, and see what volume it brings in. Or send it into your Splunk environment and closely monitor the volume.
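If you do send it into your environment, one way to watch the volume (assuming you can search the _internal index) is to sum license usage by sourcetype, so you can see exactly how much each Stream sourcetype is contributing:

```text
index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY st
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB
```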

Stream Indexes and Sourcetypes  

Overview

Amongst Splunk's 15,000+ customers, we've done a lot of implementations, and we've learned a few things along the way. While you can use any sourcetypes or indexes that you want in the "land of Splunk," we've found that the most successful customers follow specific patterns, as it sets them up for success moving forward.

Implementation

Below is a list of the Stream data types from Splunk docs, along with the recommended sourcetypes and indexes.

Protocol | Description | Sourcetype | Index
AMQP | Advanced Messaging Queuing Protocol | stream:amqp | netfw
DHCP | Dynamic Host Configuration Protocol | stream:dhcp | netipam
DIAMETER | Authentication Protocol | stream:diameter | streamsec
DNS | Domain Name Service | stream:dns | netdns
FTP | File Transfer Protocol | stream:ftp | stream
HTTP | Hypertext Transfer Protocol | stream:http | netproxy / appwebint / appwebext as appropriate
ICMP | Internet Control Message Protocol | stream:icmp | stream
IMAP | Internet Message Access Protocol | stream:imap | mail
IP | Internet Protocol | stream:ip | stream
IRC | Internet Relay Chat | stream:irc | streamsec
LDAP | Lightweight Directory Access Protocol | stream:ldap | stream
MAPI | Messaging Application Programming Interface | stream:mapi | mail
MySQL | MySQL client/server protocol | stream:mysql | db
NetBIOS | Network Basic Input/Output System | stream:netbios | stream
NFS | Network File System | stream:nfs | stream
POP3 | Post Office Protocol v3 | stream:pop3 | mail
Postgres | PostgreSQL | stream:postgres | db
RADIUS | Remote Authentication Dial In User Service | stream:radius | streamsec
RTP | Real-time Transport Protocol | stream:rtp | stream
SIP | Session Initiation Protocol | stream:sip | stream
SMB | Server Message Block | stream:smb | stream
SMPP | Short Message Peer to Peer | stream:smpp | stream
SNMP | Simple Network Management Protocol | stream:snmp | stream
TCP | Transmission Control Protocol | stream:tcp | stream
TDS | Tabular Data Stream - Sybase/MSSQL | stream:tds | db
TNS | Transparent Network Substrate (Oracle) | stream:tns | db
UDP | User Datagram Protocol | stream:udp | stream
XMPP | Extensible Messaging and Presence Protocol | stream:xmpp | streamsec

Install the Stream App  

Log into Splunk and click Splunk Apps.

Click Splunk Apps to find Splunk Stream.

Search for "Splunk Stream." Click the Install button.

Search for Splunk Stream in Apps

After installation, click Restart Now.

When asked, restart Splunk.

Log back into Splunk and select the Splunk Stream app. Accept the defaults and click Let's Get Started.

Now you're ready to configure Stream to monitor the relevant network interface on your Windows server and forward the resulting DNS metadata to your Splunk indexers.

Configure a New DNS Stream  

Implementation

Within the Splunk Stream app, select Configuration > Configure Streams.

Click Configure Streams, under Configuration.

The Configure Streams dashboard will display the default settings for protocol information to be collected. You'll want to disable the defaults, then select the protocol and details to create your new stream.

You can select all of the available protocols and disable them all at once, by clicking the checkbox next to Name on the title bar.

Select All, and then click Disable to turn off the default streams.

After selecting all of the protocols, click the Disable option.

Now that you've disabled the defaults, create a new stream for collecting the DNS details that you'd like to capture. Start by selecting the New Stream button, then Metadata Stream.

Most value from Stream comes from the Metadata Streams, here we create a new Metadata Stream.

This will bring you into a workflow that allows you to configure the stream.

Select DNS as the protocol in the first step.

We are basing our new stream off the DNS protocol

Once DNS is selected, give it a name and description with some context to help you to identify the data. Click Next.

Give that stream a name and description

On the aggregation step, ensure that No is selected for aggregation, then click Next. (You don't want aggregation because you want to see the individual DNS records.)

Select No for aggregation, because we aren't generating summary statistics.

On the Fields screen, you'll select the fields (specific to DNS) that you want to collect and store in Splunk. Note that some, but not all, fields are selected by default.

Enable or disable any fields you want to collect on DNS traffic.

Once you've selected the DNS fields that you'd like to collect, click Next.

You define filtering of the collected data on the Filters screen. The filters are based on the fields you selected on the previous screen. For instance, if you only wanted Stream to capture data from type "A" queries, you could define that here.

If you want to apply filters for what kinds of queries to record, you can apply those.

Filters are something that you may want to go back and tweak later, once you've collected data for a while and know what you have and what you'd like to keep (or discard).

An example filter, if you decided you only cared about A records (not generally recommended).

After defining filters, select the Next button again to go to the Settings screen, where you'll define the destination index for your DNS data.

Select the destination index from the dropdown menu. You should select netdns in this dropdown. If you don't see netdns listed here, it is because you missed a step in the Indexes and Sourcetypes section. We recommend installing the standard set of indexes on any Heavy-weight Forwarder running Splunk Stream not because it will actually store data in those indexes, but because it will show up in the dropdown here. If you missed that step, don't fret -- you're almost done. Go ahead and finish this section, then go up to Indexes and Sourcetypes Overview to install the indexes. You can come back here and edit your stream afterwards.

Selecting the netdns index for our new stream configuration.

After selecting the destination index, you can choose to save the configuration in Disabled mode, if you're not quite ready to begin collecting data. You can also put it into Estimate mode to get an idea of how much data you'll be collecting once the configuration is enabled.

We will select Enabled here, because we're ready to ingest!

On the Groups screen, you'll have the ability to select a group with which to associate the Stream configuration. In this case, because you are not configuring distributed forwarders, you have not created separate groups, so leave defaultgroup selected. Finally, click Create Stream to save your configuration. You're done!

Everything is done, we can create the stream!

Validation

If you've enabled the configuration, you should now be collecting DNS data. You can validate this by searching for:

index=netdns sourcetype=stream:dns

You should be able to see beautiful JSON blobs of DNS transactions, with fields available on the left.

Lovely, Lovely DNS data.
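Once data is flowing, you can start asking questions of it. For example, assuming you enabled the query field on the Fields screen earlier, this search shows your most frequently queried domains:

```text
index=netdns sourcetype=stream:dns
| stats count BY query
| sort - count
| head 20
```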

(Optional) Multi-Forwarder Environments  

Overview

If you only have a small handful of Stream hosts, it is by far easiest to just install the heavy-weight forwarder and manually configure it. But if you are planning to roll out a fleet of Stream sensors throughout your network, you will want to manage them centrally. While the Stream software itself can be deployed via the Deployment Server, the actual Stream configuration is managed via a different model. We will walk through that model below, but the high-level summary is that you can deploy the Stream Technology Add-on (TA) onto Universal Forwarders (no heavy-weight forwarder required for the TA), and tell them all to point to a central Stream configuration server over your standard Splunk web port (by default, HTTP on port 8000). See below for the full setup.

Implementation

Note that there are two primary components in Splunk Stream. First is the Splunk Stream app, which provides the web interface and allows stream configuration. This component exposes the configuration you build to clients. The client (Splunk_TA_stream) gets its configuration from the Splunk Stream app via REST API. In the above example of a standalone configuration, both of these components are installed (Splunk_TA_stream comes as part of the Stream app that you download from Splunkbase). In a standalone configuration, the request and transfer of configuration information from server to client takes place on the local network stack. In a distributed configuration, the request and transfer of configuration takes place over the wire.

Stream traffic: Management Plane at the top, Data Plane at the bottom.

The location of the Splunk Stream management server is stored in inputs.conf.

A sample inputs.conf, directing the stream server to localhost:8000. Generally this would be a dedicated stream configuration server in your environment.

You'll need the Splunk_TA_stream app for a forwarder configuration. The custom inputs.conf that resides in that app should point to your remote Stream server, as below.

[streamfwd://streamfwd]
splunk_stream_app_location = http://remote_stream_server:8000/en-us/custom/splunk_app_stream/
stream_forwarder_id = 
disabled = 0

Don't forget to modify the protocol if you're using SSL/TLS on your Stream server.

Final notes: when using this configuration, don't forget that your Stream forwarders will need to connect home to the Splunk Stream server, so network access will be required. You may also need to adjust the frequency at which they call home if you deploy a large number (hundreds or thousands), which you can do by adding the "pingInterval" setting in streamfwd.conf. The default value is 5 seconds, but in larger environments an interval of many minutes is usually more than sufficient.
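For example, to have forwarders check in every ten minutes instead, you might add something like this to streamfwd.conf on each forwarder (assuming the interval is expressed in seconds, as the default above implies; check the streamfwd.conf spec for your version):

```ini
[streamfwd]
# Check in with the Stream configuration server every 600 seconds
# (default is 5; units assumed to be seconds)
pingInterval = 600
```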

Splunk Configuration for Data Source References  

Here are links from this section: