I have two Linux VMs set up, one with a Universal Forwarder and one with an Indexer. I have a script that generates dummy data (on the forwarder) that needs a custom sourcetype set up in order to parse the events correctly.
On the Universal Forwarder, props.conf is currently empty and inputs.conf contains:
[monitor:///home/splunk/data/data1*.soap]
_TCP_ROUTING = SOAP
disabled = false
sourcetype = soaptype
On the Indexer, props.conf contains:
[soaptype]
BREAK_ONLY_BEFORE =
As of right now, my events aren't making it into the indexer at all. If I remove the sourcetype from inputs.conf and the stanza from props.conf, data appears, but the events are split incorrectly.
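For reference, a minimal sketch of the two sides, assuming a `SOAP` routing group actually exists in outputs.conf and that each event begins with an XML-style envelope tag (both are assumptions on my part):

```ini
# inputs.conf on the Universal Forwarder (from the question, plus comments).
# _TCP_ROUTING = SOAP requires a matching [tcpout:SOAP] stanza in
# outputs.conf, otherwise events are routed nowhere.
[monitor:///home/splunk/data/data1*.soap]
_TCP_ROUTING = SOAP
disabled = false
sourcetype = soaptype

# props.conf on the Indexer. BREAK_ONLY_BEFORE needs a non-empty regex;
# "<soapenv:Envelope" is a guess at what starts each event.
[soaptype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = <soapenv:Envelope
```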
Any suggestions? Many thanks!
↧
How do I configure custom sourcetypes on Universal Forwarders and Indexers?
↧
Splunk App for Windows Infrastructure: How to troubleshoot why 4 out of 11 domain controllers with universal forwarders stop reporting data?
Hi guys,
Currently, in the project I am working on, the client has 11 Domain Controllers, 1 of them being the master node. From what I was told, the Splunk App for Windows Infrastructure has a PowerShell script that runs at a 15-minute interval to collect relevant data from these DCs and populate the Active Directory Overview and Domain Controller dashboards. However, only 7 out of these 11 are returning results.
Each of these DCs has a Splunk Universal Forwarder installed, and whenever we redeploy the Windows Infrastructure app to these clients, results from all 11 DCs show up for the first 15 minutes; after that, only 7 remain.
I have tried reinstalling the Splunk Universal Forwarder on one of the 4 DCs that is not returning results, but once again it only works once, right after I redeploy the app to it.
We have run out of troubleshooting ideas, and I am hoping someone has had a similar experience or, even better, a solution to this issue. Any help would be greatly appreciated!
Thank you!
↧
How to deploy a Splunk Universal Forwarder through GPO?
Does anyone have any script to share?
Splunk Enterprise 6.3.2
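No official script is attached to the question; a minimal sketch of a GPO computer-startup batch script, with the file share, MSI name, and deployment server as placeholders, might look like:

```bat
REM Hypothetical GPO startup script; all names below are placeholders.
REM Installs silently and skips hosts where the forwarder already exists.
if exist "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" goto :eof
msiexec /i "\\fileserver\share\splunkforwarder-6.3.2-aaff59bb082c-x64-release.msi" ^
  AGREETOLICENSE=Yes DEPLOYMENT_SERVER="deploy.example.com:8089" /quiet
```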
↧
Is it possible to reconfigure an existing universal forwarder to low-privilege mode?
Is it possible to reconfigure an existing universal forwarder to low-privilege mode? We installed our UFs as Local System and are being asked to change them to run as a user in low-privilege mode.
↧
TA-nmon: How to resolve ArchiveContext error in splunkd.log on an AIX universal forwarder: "which: 0652-141 There is no python in" (PATH listed)?
New setup of a Universal Forwarder on AIX, with the TA-nmon app installed. All seems to be working, but I'm getting an increasing error count on the NMON home screen/dashboard, and splunkd.log shows the following:
02-04-2016 15:26:28.424 +1100 ERROR ArchiveContext - From archive='/opt/splunkforwarder/var/run/nmon/var/nmon_repository/"hostname"_160204_1434.nmon': which: 0652-141 There is no python in /opt/splunkforwarder/bin /usr/bin /etc /usr/sbin /usr/ucb /usr/bin/X11 /sbin /usr/java5/jre/bin /usr/java5/bin /opt/ibm/director/bin.
I have checked the various scripts/files in `/opt/splunkforwarder/etc/apps/TA-nmon/bin`. Two of these files have a "which python" statement included, but both send their output to /dev/null:
# grep "which python" *
nmon2csv.sh:PYTHON=`which python` >/dev/null 2>&1
nmon_cleaner.sh:PYTHON=`which python` >/dev/null 2>&1
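A likely explanation, hedged as my reading rather than confirmed TA-nmon behavior: in those two lines the redirect sits *outside* the backticks, so it applies to the (empty) assignment, while stderr from `which` itself still escapes and ends up in splunkd.log. A sketch of the fix, with the error message text as my own placeholder:

```shell
# Suggested rewrite (not the shipped TA-nmon code): the stderr redirect
# goes inside the command substitution so "which: 0652-141 ..." from
# AIX's which never leaks to the calling process's stderr.
PYTHON=$(which python 2>/dev/null)
if [ -z "$PYTHON" ]; then
    echo "python interpreter not found in PATH" >&2
fi
```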
↧
How to troubleshoot why security events from one domain controller are getting indexed with a delay of 5 hours?
Good day,
We have one domain controller that is always about 5 hours behind in having the logs available in Splunk. This is our busiest domain controller and the security event log file is set to 1GB in size. We have already tuned the queue sizes on the heavy forwarders and indexers and all other events come in quickly, which makes us think the issue must be on the universal forwarder (latest version 6.3.2).
The output queue on the DC hovers around 200 KB/s, which makes us think it's not working hard enough to parse the log file in time.
Any suggestions?
↧
Splunk App for Windows Infrastructure: Why do events sent from Microsoft Windows Event Collectors via universal forwarders appear to be broken?
Hello;
I am running several Microsoft Windows Event Collectors, and the data in the App for Windows Infrastructure (mostly events) appears to be broken. If I search my data on "ComputerName" instead of "host", my searches seem to work; I haven't tried in a dashboard or report.
Do I have to change the sourcetype in my event collectors' inputs.conf, modify its transforms, or rewrite the dashboard searches?
I am looking for the easiest and most efficient route here, one that will not break later with an upgrade.
Thank you!
↧
Why is my universal forwarder reporting "INFO WatchedFile - Resetting fd to re-extract header"?
One of my servers running a universal forwarder is spitting out this message quite frequently:
02-04-2016 16:48:49.607 -0500 INFO WatchedFile - Resetting fd to re-extract header.
What is this telling me? Each file does have a header, which we ignore via the FIELD_HEADER_REGEX parameter. Is it telling me that the header is being extracted? (These files roll over quite a bit).
↧
How to configure inputs.conf on a universal forwarder to ignore monitoring and indexing folders that are older than 1 day?
Hi
I am monitoring a folder that has a high level of nesting, and thousands of folders get created daily. Each folder name is unique, based on some ID. I am seeing a delay of 10-12 hours in getting the logs that are placed deep in the nth-level folders. I believe this is because Splunk checks each and every folder sequentially for a match. Can we ignore folders older than 1 day so that Splunk does not search inside old folders? I am using a universal forwarder with a good bunch of indexers to index the data. There is no throughput issue; the daily ingestion is around 1-2 GB.
Below is my inputs.conf stanza
[monitor:///]
_TCP_ROUTING = prod
ignoreOlderThan = 2d
whitelist = .log
index = index1
sourcetype = sample_sourcetype
disabled = 0
Please provide your inputs on this issue.
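For comparison, a sketch of a tighter stanza; the monitor path is a placeholder of mine, and only the anchored whitelist and the 1-day cutoff differ from the question's config:

```ini
# Hypothetical sketch. Monitoring /// (the filesystem root) forces the
# forwarder to walk every directory on the host; pointing at the actual
# log root and anchoring the whitelist regex cuts the scan down. Note
# that ignoreOlderThan skips *files* by modification time, not folders.
[monitor:///data/app/logs]
_TCP_ROUTING = prod
ignoreOlderThan = 1d
whitelist = \.log$
index = index1
sourcetype = sample_sourcetype
disabled = 0
```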
↧
Download link for the 6.3.3 Mac universal forwarder is broken, kaput, non-functional
Has anyone had any success downloading the 6.3.3 universal forwarder for Mac?
↧
Why am I getting a handshake error between my deployment server and 5 out of 10 universal forwarders?
Hello,
I've read a few threads on this topic, but none seem to relate to Splunk 6.3 or have worked for me.
I am taking over a deployment of 10 servers that forward data to a heavy forwarder, which then forwards the data to my main Splunk indexer. All of the servers have a Universal Forwarder installed. 5 of the servers are throwing this message in the splunkd.log file:
02-05-2016 11:46:13.528 -0500 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
Any suggestions on what this could be? I've checked the files to make sure they are all the same, but I am not finding the issue. Maybe I'm overlooking something.
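For reference, the deployment-client configuration each UF should carry looks roughly like this (the URI is a placeholder); diffing this file, and the output of `splunk btool deploymentclient list --debug`, between the 5 working and 5 failing hosts is a reasonable next step:

```ini
# Sketch of etc/system/local/deploymentclient.conf on a forwarder.
# deploy.example.com:8089 stands in for the real DS management port.
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy.example.com:8089
```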
↧
Running a universal forwarder in low privilege mode, why am I getting error "Deployment Server not available on a dedicated forwarder"?
Our admin created a regular domain user for me to test low-privilege mode and assigned it these privileges:
• Permission to log on as a service.
• Permission to log on as a batch job.
• Permission to replace a process-level token.
• Permission to act as part of the operating system.
• Permission to bypass traverse checking
I run this to test the automation:
msiexec /i splunkforwarder-6.3.2-aaff59bb082c-x64-release.msi AGREETOLICENSE=Yes INSTALLDIR=c:\SplunkUniversalForwarder RECEIVING_INDEXER=heavy.forwarder:9997 DEPLOYMENT_SERVER=deploy.server:8089 SET_ADMIN_USER=0 LOGON_USERNAME=DOMAIN\splunklpuser LOGON_PASSWORD=somethingclever /quiet /log lar.txt
The lar.txt log shows a 1603 permissions error and the `appdata\local\temp\splunk.log` shows this as the failure point:
Deployment Server not available on a dedicated forwarder
The communication path to the deployment server is open, and if I install as LocalSystem, the install succeeds.
What is my `DOMAIN\splunklpuser` userid missing?
↧
How to configure a universal forwarder to add multiple fields to events being forwarded via _meta?
We're trying to find a way to have the universal forwarder send data to the indexer essentially pre-marked with a small number of custom fields (or the like) that we can later search on. For example, a particular computer might be from project-X and be in an environment of test, prod, or development. Since VMs come and go, we can't do any persistent mapping of which computer has these added characteristics (host-n.n.n.n might be dev today, prod tomorrow), but the 'data' is persistent.
I stumbled across the _meta construct in inputs.conf, which works well enough for 'one' custom field. Just like specifying which index to use, I also specify `_meta = somename::value` in inputs.conf.
The question I have is, how could I have 'multiple' such added fields specified by the universal forwarder? I know there is folklore saying doing this on the forwarder side is somehow evil or something, but we're talking about adding under a half-dozen custom fields (?) for all the events coming from the forwarder computer.
Any suggestions other than pointers to the impossibly unreadable/abstract/no-examples docs which I've wasted tens of hours on already?
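For what it's worth, `_meta` accepts multiple space-separated `key::value` pairs in a single setting. A sketch, with field names and values as examples, plus the fields.conf entries the search tier needs so these resolve as indexed fields:

```ini
# inputs.conf on the forwarder -- several pairs on one _meta line
[monitor:///var/log/app]
index = main
_meta = project::project-X environment::dev owner::team-a

# fields.conf on the search head -- one stanza per added field,
# so searches like environment=dev treat them as indexed fields
[project]
INDEXED = true

[environment]
INDEXED = true

[owner]
INDEXED = true
```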
↧
How to configure proper line breaking in props.conf on the universal forwarder for my sample data?
Hi beloved Splunkers,
I'm currently trying to set up a data connection between one of our servers and my Splunk deployment. Unfortunately, I encountered some problems when it comes to Splunk recognizing line-endings and -beginnings.
Let's take a closer look at my current problem.
I have a data file with events that look something like this:
<666> this, is, the, event, number, 1,<666> this, is, the, event, number, 2,<666> this, is, the, event, number, 3, but, it, is, slightly, longer, than, the, others,<666> this, is, the, event, number, 4,<666> splunk, fast, like, a, f-18, bro,<666> this, is, the, event, number, 6,
What you can see here is that all those events have something in common.
Yeah, it's the "*<666>*" part.
Splunk is flawless, I give you that, but for some reason it sometimes combines two single events into one.
So I was thinking that I need to configure a stanza in props.conf on the forwarder to tell Splunk how to decide when a new event starts.
I did write one, but failed... maybe?!
[source::/path/to/file/]
BREAK_ONLY_BEFORE = (\<\d+\>)
SHOULD_LINEMERGE = True
I would love to know if someone out there is brave enough to tell me the right solution.
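A sketch of an alternative stanza, with two caveats: a Universal Forwarder does not apply line-breaking props (the stanza belongs on the indexer or a heavy forwarder), and `LINE_BREAKER` with merging disabled is generally cheaper than `BREAK_ONLY_BEFORE`. The source path is the question's placeholder:

```ini
# Goes on the indexer / heavy forwarder, not the UF. The empty capture
# group marks the break position just before each <digits> tag, so
# nothing is discarded from the events themselves.
[source::/path/to/file/*]
SHOULD_LINEMERGE = false
LINE_BREAKER = ()<\d+>
```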
Thank you for your help, bro/sis!
Regards,
pyro_wood
----------
Splunk> like a F-18, bro ♥
↧
Caching events to disk on a Universal Forwarder
Hi!
According to the documentation on outputs.conf, maxQueueSize sets the amount of RAM the queue can consume when the indexer is down.
But I need to be able to cache large amounts of events, for example 5 or 10 GB, and I want to do it using the HDD.
How can I do that?
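That reading of the docs is right: `maxQueueSize` in outputs.conf is an in-memory queue only. Disk-backed ("persistent") queues are configured per input in inputs.conf and apply to network and scripted inputs; file monitors simply pause and resume reading, so they need no disk cache. A sketch, with the port and sizes as examples:

```ini
# inputs.conf -- the persistent queue spills to disk under
# $SPLUNK_HOME/var/run/splunk once the in-memory queue fills.
[udp://514]
queueSize = 10MB
persistentQueueSize = 10GB
```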
↧
Why is SSL not working on our Splunk 6.3.0 Windows universal forwarder with error "SSL clause not found or servercert not provided"?
We've been trying to get the Splunk Universal Forwarder for Windows (v6.3.0) to work on a Windows 2008 R2 server and we consistently get the following error.
TcpInputConfig - SSL clause not found or servercert not provided - SSL ports will not be available
We turned on debug logs and saw a little more detail but we're still having issues.
02-15-2016 12:42:55.522 -0600 DEBUG TcpOutputProc - Found group : splunkssl
02-15-2016 12:42:55.522 -0600 DEBUG TcpOutputProc - confifuring ssl for cert path :D:/Program Files/SplunkUniversalForwarder/etc/auth/server.pem
02-15-2016 12:42:55.522 -0600 INFO TcpOutputProc - tcpout group splunkssl using Auto load balanced forwarding
02-15-2016 12:42:55.522 -0600 INFO TcpOutputProc - Group splunkssl initialized with maxQueueSize=512000 in bytes.
First, we've tried all sorts of iterations for the .pem file paths in the outputs.conf file. (We are using the `D:\Program Files\SplunkUniversalForwarder\etc\system\local\outputs.conf` file.) This is what the current version looks like, but we've tried lots of different variations (quoted, unquoted, double backslashes, forward slashes):
[tcpout]
defaultGroup = splunkssl
[tcpout:splunkssl]
server = X.X.X.X:9997
sslRootCAPath = D:/Program Files/SplunkUniversalForwarder/etc/auth/cacert.pem
sslCertPath = D:/Program Files/SplunkUniversalForwarder/etc/auth/server.pem
sslPassword = {encrypted text removed}
sslVerifyServerCert = true
We are using self-signed certificates, but we found that we had to rename them to cacert.pem and server.pem or else we generated a completely different error.
02-15-2016 10:01:50.425 -0600 ERROR SSLCommon - Can't read key file D:\Program Files\SplunkUniversalForwarder\etc\auth\server.pem errno=101077092 error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt.
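The "bad decrypt" error usually means the private key is not being decrypted with the password Splunk was given. A quick check outside Splunk, with the file name and password as placeholders for the real ones in outputs.conf:

```shell
# Hedged check: ask OpenSSL to decrypt the key directly. If this also
# reports "bad decrypt", the sslPassword doesn't match the key, and the
# problem is the certificate material rather than the Splunk config.
KEYFILE=${1:-server.pem}
KEYPASS=${2:-yourpassword}
if openssl rsa -in "$KEYFILE" -passin "pass:$KEYPASS" -noout 2>/dev/null; then
    echo "key and password match"
else
    echo "key cannot be decrypted with this password" >&2
fi
```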
I expect that someone has this working. Any Windows-specific recommendations?
↧
Why are universal forwarders reporting error "Metric with the name thruput:thruput already registered"?
Hi there,
By examining the _internal logs, I found the following metrics error:
ERROR Metrics - Metric with name thruput:thruput already registered
It is reported by the Universal Forwarders of several clients, spread over the entire day (with peaks in the morning hours, so I suppose it's related to the clients' start-up).
The interesting thing is that all of these clients are still reporting events to the indexers...
Questions:
Why does this happen?
And how can I avoid this?
thx
↧
When will AIX 7.2 be supported for universal forwarders?
Hi,
I see from the release notes that AIX 7.1 is supported in the current universal forwarder, but there is no mention of AIX 7.2.
When will AIX 7.2 be officially supported?
Has anyone tried the UF on 7.2 and been willing to share their findings?
Thanks ...Laurie:{)
↧
How to configure a universal forwarder to add search-time metadata to all events?
Hi Everyone,
Our setup is a universal forwarder --> heavy forwarder --> indexer. I am looking to modify a universal forwarder config so I can search on static metadata in Splunk Web. For example, I'd like to be able to search for an `app_name`, `build_version`, or `environment_name` that would be set when the instance comes up.
I have seen various posts on this site about accomplishing that and most of them come back to the link below. This seems like the correct path, but many of the keys are out of date. I have finally settled on the structure below for my files, but I am not seeing anything in Splunk Web. Is this outcome just not possible with Splunk, or am I missing something?
props.conf:
[host::i-e420f63c]
TRANSFORMS-test = MYTRANSFORM
transforms.conf:
[MYTRANSFORM]
REGEX = .*?
SOURCE_KEY = _raw
FORMAT = instance::app_name
https://answers.splunk.com/answers/39405/adding-static-field-value-using-props-transforms-based-on-source.html?sort=newest
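For reference, the index-time variant of that transform usually needs `WRITE_META = true` plus a fields.conf entry, and it must live on the first parsing tier (the heavy forwarder in this setup; a universal forwarder won't apply it). A sketch, with `my-app` as a placeholder value:

```ini
# props.conf and transforms.conf on the heavy forwarder
[host::i-e420f63c]
TRANSFORMS-test = MYTRANSFORM

[MYTRANSFORM]
REGEX = .
FORMAT = app_name::my-app
WRITE_META = true

# fields.conf on the search head, so app_name=my-app works as a search
[app_name]
INDEXED = true
```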
↧