Hi,
Looking for advice on troubleshooting the cause of an issue we are experiencing and how to solve it.
We have a few Splunk UFs monitoring a large number of big files and forwarding them to our 4 load-balanced Heavy Forwarders.
This setup was working until last week, when files started to be ingested with a large delay, 3-6 hours depending on file size. Previously ingestion took minutes.
To the best of our knowledge there were no network, OS, or Splunk-related changes on the day the issue started.
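For context, the forwarding setup on the UFs looks roughly like the sketch below; the monitored path, sourcetype, index, host names, and port are placeholders rather than our real values:

# inputs.conf on the UF: monitoring the large files
[monitor:///data/feeds/]
whitelist = \.log$
sourcetype = app_logs
index = main

# outputs.conf on the UF: automatic load balancing across the 4 Heavy Forwarders
[tcpout]
defaultGroup = hf_lb

[tcpout:hf_lb]
server = hf01.example.com:9997, hf02.example.com:9997, hf03.example.com:9997, hf04.example.com:9997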
We tried:
1. Restarted the Splunk process on the UF servers
2. Rebooted the UF servers
3. Per Splunk support, we changed server.conf on the UF servers by adding parallelIngestionPipelines and the following queue sizes (shown consolidated after this list):
parallelIngestionPipelines = 2

[queue]
maxSize = 1GB

[queue=aq]
maxSize = 20MB

[queue=aeq]
maxSize = 20MB
4. Per Splunk support, we also modified limits.conf by adding max_fd; thruput was already set to unlimited (also in the consolidated snippet below):
[thruput]
maxKBps = 0

[inputproc]
max_fd = 200
None of the above fixed the issue.
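For completeness, this is roughly how the relevant stanzas look after the changes above; the only detail not copied verbatim from the snippets is that parallelIngestionPipelines sits under the [general] stanza, which is where server.conf expects it:

# server.conf on the UF
[general]
parallelIngestionPipelines = 2

[queue]
maxSize = 1GB

[queue=aq]
maxSize = 20MB

[queue=aeq]
maxSize = 20MB

# limits.conf on the UF
[thruput]
maxKBps = 0

[inputproc]
max_fd = 200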
Maybe you have experienced a similar issue; it would be great to know how it was solved.
Any advice will be appreciated!