Recently Splunk isn't indexing one specific log file among many other identical log files
Hi Splunk Experts,
I have configured a monitoring path in my Splunk Enterprise environment using the Splunk Universal Forwarder. For the last 2 days I have been facing an issue where one particular log file is not being indexed in my Splunk environment, even though the rest of my log files are just like that log file: the pattern, naming convention, and type are all the same.
I thought there was a problem in the indexing phase or in inputs.conf. Many of you will tell me to add **crcSalt** to inputs.conf, but I already added it because I faced this kind of issue previously.
But this time the issue is on my Splunk Universal Forwarder. When I checked the forwarder's **splunkd.log** file, I found the errors explaining why the log file was not getting indexed in my Splunk environment.
The error log is:
**(Date and time) WARN TailReader - Access error while handling path: failed to open for checksum: [My monitoring Log Path] (The system cannot find the file specified)
(Date and time) INFO TailReader - File descriptor cache is full (100), trimming...
(Date and time) INFO TailReader - File descriptor cache is full (100), trimming...
(Date and time) ERROR TcpOutputFd - Read error. An established connection was aborted by the software in your host machine.
(Date and time) INFO TcpOutputProc - Connection to xx.xxx.xx.xx:9997 closed. Read error. An established connection was aborted by the software in your host machine.**
I don't know how to fix this issue, and the important part is that this same configuration was set up a long time ago, at least about 2 months back, and it was working properly. I don't know what happened on my Universal Forwarder server that it is now showing this issue.
Please help me with this matter, and if you have relevant Splunk documentation, please attach the URL as well. Both my Universal Forwarder and my Splunk Enterprise environment are on Windows.
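One detail in that log worth noting: the repeated "File descriptor cache is full (100), trimming..." lines. The forwarder's tailing file-descriptor cache defaults to 100, and if the monitored path holds more active files than that, raising it in limits.conf on the forwarder is a hedged first step (a sketch only, not a confirmed fix for the access error):
# limits.conf on the Universal Forwarder; 100 is the default cache size
[inputproc]
max_fd = 256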
Thanks,
@saibal6
↧
Universal Forwarder on FreeBSD ARM (Netgate3100 - pfSense)
Hi All,
I would like to install a UF on a pfSense appliance (Netgate 3100). It runs FreeBSD on ARM.
In the UF download section, I could only find a UF for FreeBSD x86.
Is there an ARM version in development yet? Or is there a trick to install it with what's already available?
Thanks in advance !
↧
↧
limits.conf on universal forwarder OR indexer servers?
We have a universal forwarder that monitors JSON files with more than 500 keys. We need to parse these at index time, since we don't want to affect performance at search time. By default Splunk only extracts 100 fields, and I need to add the configurations below in limits.conf to increase this.
[kv]
avg_extractor_time = 500
limit = 1000
max_extractor_time = 1000
maxchars = 51200
maxcols = 1024
**My question is: where do I need to add these configurations, on the universal forwarder or on the indexer servers?**
I referred to **"4. Detail Diagram - UF/LWF to Indexer"** on this page [https://wiki.splunk.com/Community:HowIndexingWorks][1]. But it doesn't say exactly where limits.conf should be configured.
One more thing: the configurations below are already added in props.conf on the universal forwarder to parse the JSON data.
[sourcetype]
INDEXED_EXTRACTIONS = json
KV_MODE=none
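One way to see which instance actually applies a given limits.conf value, and from which file, is btool on each box; a minimal sketch assuming a default install:
# run on the forwarder, then again on an indexer, and compare
$SPLUNK_HOME/bin/splunk btool limits list kv --debug
Note that with INDEXED_EXTRACTIONS set, the structured parsing happens on the universal forwarder itself, so forwarder-side settings are not automatically irrelevant here.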
[1]: https://wiki.splunk.com/Community:HowIndexingWorks
↧
Can the universal/heavy forwarder monitor a folder that is receiving thousands of files every 15 minutes?
Hi, our use case is that we'll either be monitoring approximately 6 thousand files that update at random intervals, or monitoring a folder that will receive 6 thousand files per 15 minutes with a retention period of 3 months. License-wise, the latter is the better option, but I'm worried about its performance.
We are planning on using either a universal or a heavy forwarder for this. Will the heavy/universal forwarder's system requirements specified in Splunk Docs be enough in this case? Will adjusting the ulimits be enough to monitor a folder in the latter case?
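One setting that may help keep the actively tailed set small under a long retention window, sketched with a hypothetical path: the monitor input can be told to skip files whose modification time has fallen outside a cutoff.
# inputs.conf on the forwarder; path, cutoff, and index are examples
[monitor:///data/incoming]
ignoreOlderThan = 7d
index = main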
Thank you and have a nice day!
↧
↧
How to establish secure connection between Universal Forwarders and Heavy Forwarders in a distributed environment?
Hi,
Good day!
We have a distributed Splunk Enterprise setup, and we are trying to establish secure SSL communication between UF -> HF -> Indexer.
We have certificates configured for the search heads, indexers, and heavy forwarders. We have also opened the required receiving ports on both the indexer and the HF.
On the other hand, we have around 200 UFs. Can someone please tell me whether we need to generate 200 client certificates, or whether we can use a common certificate that we deploy to all 200 UFs for communication between the UFs and the HF?
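For what it's worth, a single shared client certificate deployed to all forwarders is a common pattern unless your security policy requires per-host identities. A minimal outputs.conf sketch, assuming 7.x attribute names and example host/paths:
# outputs.conf on each UF; group name, host, and paths are examples
[tcpout:ssl_group]
server = hf.example.com:9997
sslCertPath = $SPLUNK_HOME/etc/auth/mycerts/uf-client.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
sslPassword = <certificate key password>
sslVerifyServerCert = true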
Thanks,
D Vijaya
↧
/local/inputs.conf Not Being Read
Hello all!
I'm experiencing an issue in my initial roll-out of my Splunk Universal Forwarder. While I had no issues in my test environment, I am now seeing an issue regarding /local/inputs.conf.
When I run btool ($SPLUNK_HOME/bin/splunk btool inputs list --debug), it is not seeing my local/inputs.conf file. Everything appears to be the same as in my lab environment, including permissions.
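In case it helps, two hedged checks, assuming the forwarder really runs from the SPLUNK_HOME you expect: validate the configuration for syntax problems, then confirm which files btool is actually reading.
# validate conf files across all apps for typos and misplaced stanzas
$SPLUNK_HOME/bin/splunk btool check
# list every contributing file for inputs settings
$SPLUNK_HOME/bin/splunk btool inputs list --debug | head -40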
Does anyone have any suggestions as to what could be the cause of this issue? Thank you!
↧
How to use the current Deployment Server to configure remote UFs with a new Deployment Server IP?
Hi,
I have not found the post if it already exists...
But I have to reconfigure a lot of UFs to check in with a new DS.
Unfortunately the original DS was not configured with an FQDN.
Is there a method to send the UFs an app (e.g. "update_DS_IP") that will configure the UF to connect to the new DS (i.e. replace the DS IP)?
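The commonly used pattern (sketched here with a hypothetical app name and address) is to push a small app from the current DS whose deploymentclient.conf points at the new one; once clients download it, they phone home to the new address:
# etc/deployment-apps/update_DS_IP/local/deploymentclient.conf on the old DS
[deployment-client]

[target-broker:deploymentServer]
targetUri = 10.1.2.3:8089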
Thank you!
↧
rsyslog server with UF not sending events to Splunk
Hi. At Splunk's recommendation, I have a centralized syslog server (using rsyslog) that writes to /logs/hostname/year/month/day/file.log
This works fine.
However, I cannot get the Universal Forwarder to send the events to the Splunk Indexer. I added my stanza to /opt/splunkforwarder/etc/system/local/inputs.conf. When that didn't work, I created an app and put the stanza into /opt/splunkforwarder/etc/apps/syslog/local/inputs.conf
Didn't work.
Here is my stanza:
[monitor:///logs/*]
disabled = false
host_segment = 2
index = main
sourcetype = syslog
That looks straightforward to me.
I checked the Splunk logs on the Indexer and there's no sign that it's ever receiving these events.
In the UF logs I see that it has added a watch to /logs:
INFO TailingProcessor - Parsing configuration stanza: monitor:///logs/*.
INFO TailingProcessor - Adding watch on path: /logs.
I have verified that the port is open between the UF and the Indexer.
Indexer is running 7.2.4 and UF is running 7.1.2.
Am I missing something?
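In case it narrows things down, two hedged checks on the UF, assuming the default install path: confirm an output is configured and active, and look for tailing or output errors.
# is a forward-server configured and connected?
/opt/splunkforwarder/bin/splunk list forward-server
# recent tailing/output activity and errors
grep -iE "TailReader|TcpOutputProc" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20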
Thank you in advance!
↧
↧
UF silent installation with own certificates and password
Hello
We are trying to install the latest UF silently on our Windows machines using the following command:
msiexec /i splunkforwarder-7.1.2-a0c72a66db66-x64-release.msi
DEPLOYMENT_SERVER=":8089"
LAUNCHSPLUNK=0
SERVICESTARTTYPE=auto
CERTFILE="%CD%\server.pem"
CERTPASSWORD=
ROOTCACERTFILE="%CD%\cacert.pem"
SPLUNKPASSWORD=
AGREETOLICENSE=yes /quiet
The service starts; however, we get the following error in splunkd.log, and the forwarder will not connect to the deployment server.
03-28-2019 13:28:07.041 +0100 ERROR SSLCommon - Can't read key file C:\Program Files\SplunkUniversalForwarder\etc\auth\server.pem errno=101077092 error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt.
03-28-2019 13:28:07.041 +0100 ERROR HTTPServer - SSL context could not be created - error in cert or password is wrong
03-28-2019 13:28:07.041 +0100 ERROR HTTPServer - SSL will not be enabled
When I paste the cert password into etc/system/local/server.conf and restart the service, it comes up correctly and connects to the deployment server to receive its apps.
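For comparison, the working manual state presumably looks roughly like this (a sketch; splunkd re-encrypts the plain-text value on the next restart):
# etc\system\local\server.conf on the UF
[sslConfig]
sslPassword = <certificate key password>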
Where is the error?
Regards
Klaus
↧
Splunk forwarder to Splunk Cloud trial version
Hi All,
I am trying to send data to a Splunk Cloud trial with the help of a Universal Forwarder. I followed this doc:
[https://docs.splunk.com/Documentation/SplunkCloud/7.0.2/User/ForwardDataToSplunkCloudFromWindows][1]
I have successfully completed up to step 3 of this doc, but in step 4 I am getting this error:
![image][2]
I have also configured **output.conf** with my log files:
[monitor://C:\Pulse\_LOGS\*.log]
disabled=false
index=testaeg_2
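For completeness, a hedged version of that monitor stanza with an explicit sourcetype (note that [monitor://...] stanzas are read from inputs.conf, not outputs.conf; the sourcetype name here is a placeholder, and the index must already exist in the Cloud instance):
[monitor://C:\Pulse\_LOGS\*.log]
disabled = false
index = testaeg_2
sourcetype = pulse:log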
thanks in advance
[1]: https://docs.splunk.com/Documentation/SplunkCloud/7.0.2/User/ForwardDataToSplunkCloudFromWindows
[2]: /storage/temp/271715-error.jpg
↧
Why does Splunk universal forwarder have high CPU usage on system?
I added an app recently to pull in PowerShell Transcription logs that are output to C:\Logs\YYYYMMDD\YYYYMMDDHHSS.randomstring.log
So I created the following app:
[monitor://C:\Logs\*\*.txt]
followTail = false
disabled = false
sourcetype = ps_transcript
index = powershell
On some systems, PowerShell is run constantly by certain program/script updates (10k runs in 24 hours on one server in particular). This creates a lot of small files that the Splunk universal forwarder (UF) picks up. However, the Splunk UF's CPU and memory usage has been going crazy with this. It isn't the size of the events; I believe it's more the number of files it has to monitor. Is this accurate? Is there a way to return the CPU usage to normal while still consuming the PS logs?
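If the transcripts are write-once files rather than logs that keep growing, one commonly suggested alternative (a sketch, reusing the names above) is a batch input, which indexes each file once and then deletes it, so the tailing processor never accumulates thousands of watched files:
# inputs.conf sketch; batch requires move_policy = sinkhole and DELETES files
[batch://C:\Logs\*\*.txt]
move_policy = sinkhole
sourcetype = ps_transcript
index = powershell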
↧
↧
Splunk build (SPLUNK_BUILD) for 7.1.2
I need to run a custom Docker build, and it requires the build hash to grab the release. Thanks.
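For what it's worth, the 7.1.2 packages quoted elsewhere in this digest carry the build hash a0c72a66db66, so a download URL along these lines should work (the exact path and filename are an assumption based on Splunk's usual naming):
# sketch; filename pattern inferred from the MSI name quoted above
wget https://download.splunk.com/products/universalforwarder/releases/7.1.2/linux/splunkforwarder-7.1.2-a0c72a66db66-Linux-x86_64.tgz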
↧
Will Outputs.conf reflect the timestamp parameters?
Hello Splunkers,
I have outputs.conf on my Universal Forwarder at \etc\system\local\, and I am monitoring some log files, having given the monitor path in inputs.conf.
Now, just as we specify timestamp parameters in props.conf,
can I add the same here in outputs.conf at SplunkUniversalForwarder\etc\system\local\?
Ex:
[sourcetype / source]
DATETIME_CONFIG = none
SHOULD_LINEMERGE = true
Will I be able to get the data cooked with these parameters?
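For context, a hedged note: these are props.conf attributes rather than outputs.conf ones, and in a UF pipeline most parsing settings only take effect at the first full parsing tier (indexer or heavy forwarder). A sketch of where they would normally live, with a placeholder stanza name:
# props.conf on the indexer or heavy forwarder that first parses this data
[my_sourcetype]
DATETIME_CONFIG = NONE
SHOULD_LINEMERGE = true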
Thanks in advance.
Keep Splunkning :-)
↧
Does props.conf have any effect in a custom app on the forwarder?
Hey Splunkers!
I have a doubt: when we create a custom app in Splunk for any purpose, let's say for log monitoring,
will the default props.conf be effective, or if I update something in my custom app's props.conf at the UF level, will that be effective for my particular sourcetype?
As I read somewhere, if the sourcetype specified in the inputs.conf of the Splunk UF is not declared in props.conf on the Splunk indexer or Splunk HF, the attributes of the sourcetype will take all the default props.conf settings (LINE_BREAKER, TIME_FORMAT, etc.) of the Universal Forwarder.
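One concrete, hedged data point: a UF honors only a handful of props.conf attributes itself (most parsing runs on the indexer or HF), and one that does apply on UF 6.5+ is the event breaker used for distributing data across indexers. In the custom app that might look like this (the stanza name is a placeholder):
# props.conf in the custom app on the UF
[my_custom_sourcetype]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)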
Thanks in advance!
Keep Splunking
↧
Universal forwarder using wildcard monitor statements over deep file systems
Hi
I read a post saying "**Using wildcard monitor statements over deep file systems has a significant performance impact, so if this can be avoided it would be of benefit**."
I'd like to better understand what exactly that means. What kind of "performance impact" is it: CPU, memory, disk, I/O?
We have a UF 6.5 running on a Linux box, monitoring a folder with about 460 files. The folder has 8 levels of sub-folders, below which come the log files. Is this a DEEP file system?
**When I put the wildcard at the second level of sub-folders and monitor the whole folder tree in one stanza, it shows huge memory consumption, and the log server comes close to freezing.
When I specify every individual log file in its own stanza without using wildcards, everything works well without any performance issue.
The issue is that the second-level sub-folder names are dynamic, so we have to use an ad-hoc script to rebuild the configuration file for all directories/files every day. We'd really like a better solution that avoids this daily manual intervention.**
Which makes me wonder: when the UF monitors one big folder tree, does it process it all in one thread?
Any other explanation for this, and any solution?
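One hedged alternative to per-file stanzas, sketched with placeholder names: anchor a single monitor at a static tree root and constrain matching with a whitelist regex, which is generally recommended over wildcards embedded in the path itself:
# inputs.conf sketch; path and regex are examples
[monitor:///data/app_logs]
recursive = true
whitelist = \.log$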
Thanks...
↧
↧
JSON is truncated as soon as a timestamp is found
I'm trying to read a JSON file generated by a ps1 script on Windows, but the UF keeps truncating the JSON as soon as it finds a valid timestamp. Removing the timestamp 'fixes' the problem, but I need the timestamps. I have a similar script running on a Unix machine and the issue does not appear there. My props is the following:
[sourcetype_name]
TRUNCATE = 0
BREAK_ONLY_BEFORE_DATE = false
DATETIME_CONFIG = NONE
MAX_TIMESTAMP_LOOKAHEAD = 0
INDEXED_EXTRACTIONS = JSON
I'm starting to think that it's a bug on Windows UFs.
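One hedged way to test whether the Windows tailing path is at fault (paths and names below are placeholders): replay the same file through the same props with a one-shot upload from the UF and compare the result.
REM run on the Windows UF; sends the file once through the same pipeline
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" add oneshot "C:\scripts\output.json" -sourcetype sourcetype_name -index main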
↧
Latest App installed on UF
Hi,
Greetings!
Please help me with the two queries below (see the SPL sketch after the list):
1. When was the latest app installed on a UF, with time and app name?
2. When was the last time a UF was restarted?
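A hedged SPL starting point, assuming the UF's internal logs reach your indexers; the host name is a placeholder, and the DeployedApplication component name may vary by version:
index=_internal sourcetype=splunkd host=my_uf "Splunkd starting" | head 1
index=_internal sourcetype=splunkd host=my_uf component=DeployedApplication | head 20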
Thanks in Advance!
↧
Help with downloading App in zip format
Hello,
I would like to deploy the Metricator App. For that I also need the TA-metricator-for-nmon (technical add-on) on the source host where the universal forwarder is running.
How would I get the TA-metricator-for-nmon in zip form so I can move it to my source Linux host?
I mean, I downloaded the Metricator App from Splunkbase and deployed it on my SH, and it also includes the TA-metricator-for-nmon, so I can monitor my SH itself. I can also see, at the OS level under ~/etc/apps, the directory called TA-metricator-for-nmon. But I need this app (TA) on the source systems with the forwarder.
Would I just tar and zip the TA-metricator-for-nmon directory from the SH and copy it to my source system?
Is it as simple as that?
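A sketch of that copy flow, under assumed default paths and a hypothetical host name:
# on the search head: package the add-on
cd $SPLUNK_HOME/etc/apps
tar -czf /tmp/TA-metricator-for-nmon.tgz TA-metricator-for-nmon
scp /tmp/TA-metricator-for-nmon.tgz user@source-host:/tmp/
# on the source host: unpack into the UF apps directory and restart
tar -xzf /tmp/TA-metricator-for-nmon.tgz -C /opt/splunkforwarder/etc/apps
/opt/splunkforwarder/bin/splunk restart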
Kind Regards,
Kamil
↧