Is anyone using Fluentd as an abstraction layer between hosts and Splunk? If so, what are the trials and tribulations you were faced with? Is it safe to say that metadata is going to be an issue? I am sure it can be solved, because this is spunk after all :)
I realize the Universal Forwarder is the best way to go but I am not sure we will have this luxury given some environment requirements.
Any help/thoughts/prayers welcome :)
↧
Is anyone using Fluentd as an abstraction layer between host and Spunk indexes?
↧
Splunk Add-on for Microsoft Exchange: In the configuration stanza, what is "time_before_close = 0"?
Hi there,
I've been playing with the Splunk Add-on for Microsoft Exchange which has stanzas with the following in them:
time_before_close = 0
The Universal Forwarders don't like this value, however, and say they're going to revert to the default value of 3. Just wondering if the value of 0 is supposed to work? Or is this deprecated and the add-on needs to be updated or what?
Thanks.
↧
↧
How to resolve "err=not_connected" error in Deployment Server configurations?
Hi
In the Deployment Server (DS):
- I copied an app to the /opt/splunk/etc/deployment-apps/
In the Universal Forwarder (UF), I configured it as a Deployment Client:
- splunk set deploy-poll 10.10.10.117:8089
Telnet from the UF to DS in 8089 works fine.
In the DS, I get the errors
"DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected"
Any ideas?
Thanks in advance
↧
App and Add-on for cassandra cluster monitoring: Why are cassandra logs not generated or forwarded?
HI, I am trying to configure the App for cassandra cluster monitoring and Add-on for cassandra cluster monitoring to monitor cassandra cluster. I have universal forwarder on each node.
I have installed Add-on for cassandra cluster monitoring on universal forwarder.
I get following log in splunkd.log
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - New scheduled exec process: /opt/splunkforwarder/etc/apps/Splunk_TA_cassandra/bin/cache.pl
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - interval: 60000 ms
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - New scheduled exec process: /opt/splunkforwarder/etc/apps/Splunk_TA_cassandra/bin/compactionhistory.pl
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - interval: 60000 ms
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - New scheduled exec process: /opt/splunkforwarder/etc/apps/Splunk_TA_cassandra/bin/cpu_perf.pl
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - interval: 60000 ms
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - New scheduled exec process: /opt/splunkforwarder/etc/apps/Splunk_TA_cassandra/bin/mem_perf.pl
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - interval: 60000 ms
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - New scheduled exec process: /opt/splunkforwarder/etc/apps/Splunk_TA_cassandra/bin/netstats.pl
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - interval: 60000 ms
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - New scheduled exec process: /opt/splunkforwarder/etc/apps/Splunk_TA_cassandra/bin/process.pl
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - interval: 60000 ms
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - New scheduled exec process: /opt/splunkforwarder/etc/apps/Splunk_TA_cassandra/bin/readwrite.pl
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - interval: 60000 ms
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - New scheduled exec process: /opt/splunkforwarder/etc/apps/Splunk_TA_cassandra/bin/status.pl
01-16-2017 13:58:15.883 +0000 INFO ExecProcessor - interval: 60000 ms
I don't know if its issue with the forwarder which is not sending data to server or if the server is not receiving it.
I think server is receiving other data fine when I search for
host=<> index=_internal
I do get results but I don't see any of Cassandra source / sourcetype
Is the Add-on for cassandra cluster monitoring working? where does it write logs? in Perl script, i just see print(). does this mean its only STDOUT? how do i check where is the issue? in forwarder or at receiver?
↧
Why does attempting to install the Universal Forwarder on Windows via CLI fail?
I'm trying to perform a simple command line install for Windows Universal Forwarder (UF) and can't seem to get the install to work. All I want is a basic quiet install that points the UF to our deployment server in order to receive the rest of the configurations. The command line I have used is:
**msiexec.exe /i splunkforwarder-6.4.2-00f5bb3fa822-x64-release.msi AGREETOLICENSE="YES" DEPLOYMENT_SERVER=":8089" /quiet**
The command executes but completes immediately. It was run as admin on the system. I also set logging for the command and received the following:
=== Verbose logging started: 1/17/2017 11:00:07 Build type: SHIP UNICODE 5.00.7601.00 Calling process: C:\Windows\system32\msiexec.exe ===
MSI (c) (70:18) [11:00:07:414]: Resetting cached policy values
MSI (c) (70:18) [11:00:07:414]: Machine policy value 'Debug' is 0
MSI (c) (70:18) [11:00:07:414]: ------- RunEngine:
------- Product: C:/Users/-user-/Desktop/splunkforwarder-6.4.2-00f5bb3fa822-x64-release.msi
------- Action:
------- CommandLine: -------------
MSI (c) (70:18) [11:00:07:414]: Client-side and UI is none or basic: Running entire install on the server.
MSI (c) (70:18) [11:00:07:414]: Grabbed execution mutex.
MSI (c) (70:18) [11:00:07:445]: Cloaking enabled.
MSI (c) (70:18) [11:00:07:445]: Attempting to enable all disabled privileges before calling Install on Server
MSI (c) (70:18) [11:00:07:460]: Incrementing counter to disable shutdown. Counter after increment: 0
MSI (s) (FC:A8) [11:00:07:460]: Running installation inside multi-package transaction C:/Users//Desktop/splunkforwarder-6.4.2-00f5bb3fa822-x64-release.msi
MSI (s) (FC:A8) [11:00:07:460]: Grabbed execution mutex.
MSI (s) (FC:20) [11:00:07:460]: Resetting cached policy values
MSI (s) (FC:20) [11:00:07:460]: Machine policy value 'Debug' is 0
MSI (s) (FC:20) [11:00:07:460]: ------- RunEngine:
------- Product: C:/Users/-user-/Desktop/splunkforwarder-6.4.2-00f5bb3fa822-x64-release.msi
------- Action:
------- CommandLine: ------------
MSI (s) (FC:20) [11:00:07:460]: Machine policy value 'DisableUserInstalls' is 0
MSI (s) (FC:20) [11:00:07:476]: SRSetRestorePoint skipped for this transaction.
MSI (s) (FC:20) [11:00:07:476]: Note: 1: 1314 2: /Users/-user-/Desktop/splunkforwarder-6.4.2-00f5bb3fa822-x64-release.msi
MSI (s) (FC:20) [11:00:07:476]: MainEngineThread is returning 2
MSI (s) (FC:A8) [11:00:07:476]: No System Restore sequence number for this installation.
MSI (s) (FC:A8) [11:00:07:476]: User policy value 'DisableRollback' is 0
MSI (s) (FC:A8) [11:00:07:476]: Machine policy value 'DisableRollback' is 0
MSI (s) (FC:A8) [11:00:07:476]: Incrementing counter to disable shutdown. Counter after increment: 0
MSI (s) (FC:A8) [11:00:07:476]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2
MSI (s) (FC:A8) [11:00:07:492]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2
MSI (s) (FC:A8) [11:00:07:492]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2
MSI (s) (FC:A8) [11:00:07:492]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2
MSI (s) (FC:A8) [11:00:07:492]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
MSI (s) (FC:A8) [11:00:07:492]: Restoring environment variables
MSI (c) (70:18) [11:00:07:492]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
MSI (c) (70:18) [11:00:07:492]: MainEngineThread is returning 2
=== Verbose logging stopped: 1/17/2017 11:00:07 ===
I can install the UF fine without using the command line but I would like to include this in a package to perform the install remotely and quietly. Any help is greatly appreciated.
↧
↧
Why is the Splunk universal forwarder not pushing data to indexer?
I recently upgraded a workstation to Win10 Enterprise. I installed the Splunk universal forwarder, however I am not collecting any data from the workstation at the indexer. I believe it has something to do with certificates, but I am not very well versed in the product. I'm afraid the documentation isn't helping much either.
↧
How to make the deployment server manage all Universal Forwarders' server.conf account for system unique fields like "sslKeysfilePassword ” and “pass4SymmKey”?
The goal is to have the deployment server manage server.conf on all Universal Forwarders, like it does with inputs/outputs.conf. Automation is preferred as there are over 300 Windows systems.
E.g. When we make certificate updates, change the sslVersions, and/or the allowed cipherSuite, we want the deployment server to handle it all.
This is an issue as the server.conf includes four fields that appear to be unique to *each system*, and based on our understanding the deployment server updates the whole file, not per stanza:
- sslKeysfilePassword
- sslPassword
- pass4SymmKey
- serverName
How do deployment servers handle system unique fields so the deployment server doesn’t just overwrite them and cause configuration issues? Any tips for what direction I need to look in? I would appreciate any help as manually updating all universal forwarders would be insanely time consuming.
#### Here is a scrubbed version of the relevant fields for our deployment server's ~/default/server.conf: ####
[sslConfig]
enableSplunkdSSL = true
useClientSSLCompression = true
useSplunkdClientSSLCompression = true
# enableSplunkSearchSSL has been moved to web.conf/[settings]/enableSplunkWebSSL
#Allow only sslv3 and above connections to the HTTP server
sslVersions = *,-ssl2
sslVersionsForClient = *,-ssl2
sendStrictTransportSecurityHeader = false
allowSslCompression = true
allowSslRenegotiation = true
# For the HTTP server, Diable ciphers lower than 128-bit and disallow ciphers that
# don't provide authentication and/or encryption.
# Use 'openssl ciphers -v' to generate a list of supported ciphers
# Allow only TLSv1 cipher with 'high' encryption suits, i.e. whose key lengths are
# larger than or equal to 128 bits
cipherSuite = TLSv#+HIGH:TLSv#.2+HIGH:@STRENGTH
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = ######
caCertFile = $SPLUNK_HOME/etc/auth/cacert.pem
certCreateScript = $SPLUNK_HOME/bin/splunk, createssl, server-cert
# DEPRECATED
caPath = $SPLUNK_HOME/etc/auth
[applicationsManagement]
updateTimeout = #h
sslVersions = tls#.#
caCertFile = $SPLUNK_HOME/etc/auth/#####.pem
sslVerifyServerCert = true
sslCommonNameToCheck = apps.splunk.com, cdn.apps.splunk.com
sslAltNameToCheck = splunkbase.splunk.com, apps.splunk.com, cdn.apps.splunk.com
cipherSuite = TLSv#+HIGH:@STRENGTH
[clustering]
mode = disabled
pass4SymmKey =
register_replication_address =
register_forwarder_address =
register_search_address =
executor_workers = 10
manual_detention = false
encrypt_fields = "server: :sslKeysfilePassword", "server: :sslPassword", "server: :pass4SymmKey", "server: :password", "outputs:tcpout:sslPassword", "outputs:indexer_discovery:pass4SymmKey", "inputs:SSL:password$
#### Here is a scrubbed version belonging to one of the windows systems: ####
[general]
serverName =
pass4SymmKey = $1$###############
[sslConfig]
sslKeysfilePassword = $###############
↧
How can I create a filter to capture certain events from security logs?
Hi All,
I'm a newbie to the Splunk world and trying to figure out a couple things. I currently have Splunk Light installed and used the "Remote Event Log Collection" option to collect logs from my system. My question is: Can I create filters to only capture certain events from the security logs? Or do I need to configure the universal forwarder to collect the logs from my systems then configure filters prior to data getting indexed? Thanks, any info you can provide would be great.
↧
Forwarder Management Host Names Fail to be FQDNs
I've noticed that among all of the universal forwarders checking in with my deployment server, there is no consistency in which hosts are fully qualified, e.g. **myserver.mydomain.com**, and which hosts just check in with short names, e.g. **myserver.**
Can someone tell me how to control this function? Ideally, I want all servers checking in with the deployment server to use FQDNs.
Thanks in advance!
↧
↧
How to make sure all servers checking in with the deployment server use FQDN?
I've noticed that among all of the universal forwarders checking in with my deployment server, there is no consistency in which hosts are fully qualified domain names (FQDN), e.g. **myserver.mydomain.com**, and which hosts just check in with short names, e.g. **myserver.**
Can someone tell me how to control this function? Ideally, I want all servers checking in with the deployment server to use FQDNs.
Thanks in advance!
↧
How to deploy scripted inputs on different OS architectures?
I have two scripted inputs, one bash script for Linux and one batch script for Windows. Both scripts are written to read a static configuration file and output the data for Splunk to ingest. Both scripts work without issue.
Should I deploy both script inputs in the same app? As you know the bash script will not run on Windows and batch script will not run on LInux. Besides the error I get in the splunkd.log, is there anything I should worry about with the scripts executing on the wrong OS architecture?
↧
Can we write UDP or TCP streams directly to indexer ports rather than using a Universal Forwarder in between?
Can we write UDP or TCP streams directly to indexer ports rather than using a Universal Forwarder in between?
↧
Why am I receiving "SOFTWARE PROGRAM ERROR" in Splunk universal forwarder that are AIX servers?
I'm running error script on a bunch of AIX servers but have encountered the "SOFTWARE PROGRAM ERROR" on few of the servers. After getting the internal logs of the servers I have found that whenever there is an automatic restart of that server, this "SOFTWARE PROGRAM ERROR" generates. What should I do to prevent this error from coming up in the servers? Also, what could be the other possible root causes for this getting this error?
↧
↧
How to properly configure Universal Forwarder, located on the same machine as my Splunk instance?
Hi. I am trying to install an universal forwarder on the same machine as my Splunk instance just to see how Universal Forwarder (UF) works. I understand that you can collect the logs locally but just to understand how UF works I am trying to do it. I have followed the installation wizard and entered the receiver details as 127.0.0.1 and 9997 as the port. I left the deployment server details empty. I also configured receiver on the Indexer but I am still unable to see Windows event logs when searched. Could someone please help? I am new to Splunk.
↧
How to convert "_internal" field "date_zone" to time zone?
I am trying to convert the field "date_zone" reported by our Universal Forwarders (UF) in "index=_internal" from +0900 to KRW. Everything I have tried returns my account's local time zone (TZ). The time and date_zone in the event are accurate for our Korea UFs (and other geo locations) but the conversion attempts always return the local zone. I can search for the field date_zone all day, and works fine every time. Changes to my time zone when I try to convert from %z to %Z
We have hundreds of UFs spread across many TZ's and need to monitor and report that they are and continue to have their TZ offset set properly but am trying to make it more friendly to read (KRW is more meaningful than +0900)
↧
How to resolve error "SRC did not 'startsrc splunkweb' on our behalf: exit code=1" when restarting a 6.0.13 universal forwarder on AIX?
My Splunk Deploy Server is CentOS 6.7
The UF, Splunk Universal Forwarder 6.0.13 running on server, AIX 7.1.
If you register boot-start enable after installing the agent, the service is registered in SRC.
However, when you deploy an app from deployment server, an Errpt message is generated on the AIX server.
This issue only occurs when boot-start enable is enabled. It does not occur when disable. It seems to happen when registering Splunk service in SRC.
Splunkd.log
01-18-2016 17:20:54.956 +0900 WARN Restarter - Restarting splunkweb; SRC did not 'stopsrc splunkweb' on our behalf: exit code=1
01-18-2016 17:22:12.337 +0900 ERROR Restarter - Restarting splunkweb; SRC did not 'startsrc splunkweb' on our behalf: exit code=1
01-18-2016 17:45:38.121 +0900 WARN Restarter - Restarting splunkweb; SRC did not 'stopsrc splunkweb' on our behalf: exit code=1
↧
Is it possible to define a custom location for universal forwarder local configurations?
My Splunk Forwarder is installed on a share, which can be mapped to all the servers in my environment. Therefore, I am wondering if it is possible to use binaries out this common location, but have configs installed elsewhere, locally on each server. If so, I would not need to worry about deploying Splunk Forwarders to all the servers, I will be simply pushing configs as needed.
Is it possible to define a custom location for Universal Forwarder local configs (.../etc/system/local)? For example, set this as an ENV variable prior to starting the forwarder, or maybe passing-in this location as an argument, such as:
.../splunkforwarder/bin/splunk start -local-conf /my/custom/config/location/etc/system/local
And similarly for the logs location?
I realize I can probably use symlinks to achieve this. But I was wondering if Splunk supports the ability to define custom config/log paths.
↧
↧
Permission denied
I am running Splunk enterprise 6.3.1 and universal forwarder. We deploy the universal forwarder onto a Linux machine it runs under the account of Splunk.
Splunk is started with the account Splunk and that has the following
uid=880(splunk) gid=880(splunk) groups=880(splunk),600(dba),1201(buildgrp)
But it appears that it can not see directories or files owned by the dba group ie
drwxr-x--- 8 oracle dba 4096 Jan 24 22:15 par-01
drwxr-xr-x 3 oracle dba 4096 Jan 24 21:15 par-02
drwxr-xr-x 3 oracle dba 4096 Jan 24 21:15 par-03
drwxr-xr-x 3 oracle dba 4096 Jan 24 21:15 par-04
it can see par-02 to par-04 but not par-01
↧
Using the forwarder and stream like a dash-cam. Can I capture a set size of logs and only send those to Splunk when triggered?
I have an interesting scenario. Does anyone know if it is possible to process logs collected from the universal forwarder like a dashcam? For instance, in this case I want to let stream run on a box, collect say a 24 hour rolling window worth of data and discard anything older but not send those logs to the indexes unless something that happens to warrant the collection. The thought being here that the stream data will contain forensics about an event but I don't want thousands of endpoints sending stream data unnecessarily all the time. If however I find an event that warrants inspection, I want that/those endpoints to send their stream data for analysis.
I thought about doing something with a forwarding queue and disabling forwarding for that source until I need it and managing that enable/disable through some mechanism, either DS or something manual.
I'd love to see any thoughts around this.
Thanks!
↧
How do I remove host from Data Summary screen but keep data?
Hello,
I'm looking for advice on how to handle systems that are removed from the network.
We have several hundred Windows systems with the UniversalForwarder installed, sending log data to our Splunk server. As systems are decommissioned, I want to keep the log data from those retired systems in Splunk for compliance reasons. But I no longer want the retired system's host name to appear in the Data Summary window in Splunk Search. I only want live production systems to appear on that screen.
Is it just a matter of deleting the client name from the Forwarder Management screen?
Thanks,
Greg
↧