Channel: Questions in topic: "universal-forwarder"
Viewing all 1551 articles

Is there an app that can restart the Splunk universal forwarder service on Windows every 30 minutes?

Hi, I need to deploy an app from the deployment server that will restart the splunkd UF service installed on a Windows server. Can someone please help me with what I should put in the $SPLUNK_HOME/etc/deployment-apps/restart_app/local folder? Thanks. Vikram.
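One commonly suggested sketch (the app and script names below are assumptions, not a tested recipe): deploy an app containing a scripted input that runs a restart batch file every 30 minutes. Note that restarting splunkd from inside one of its own scripted inputs can be fragile; a Windows Task Scheduler job (schtasks) is often the more robust option.

```
# restart_app/local/inputs.conf (hypothetical app layout)
# Runs the script below every 1800 seconds (30 minutes)
[script://$SPLUNK_HOME\etc\apps\restart_app\bin\restart_uf.bat]
interval = 1800
disabled = 0
```

```
REM restart_app/bin/restart_uf.bat (hypothetical)
REM "SplunkForwarder" is the default UF service name on Windows
net stop SplunkForwarder
net start SplunkForwarder
```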

Few forwarders not sending data

Hey everyone, I have installed UF agents on 180 servers, and yesterday I could see data from all of them coming into Splunk. Now I've noticed that three of them are not sending data; I only see 177 hosts in Splunk. How can I find out which three UFs have stopped sending? I configured all of them using the deployment server, with the same index and sourcetype. Also, is there a way to get an alert when a forwarder stops sending data or runs into a problem? I am using Splunk 6.3. Thank you.
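One way to spot the silent forwarders is a search built on the `metadata` command, sketched below (the index name and the 60-minute threshold are placeholders to adapt):

```
| metadata type=hosts index=your_index
| eval minutes_since_last_event = round((now() - recentTime) / 60)
| where minutes_since_last_event > 60
| table host, minutes_since_last_event
```

Saving a search like this as an alert gives you a notification whenever a forwarder goes quiet; the Distributed Management Console in 6.3 may also offer forwarder-level views worth checking.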

UF compatibility for Knoppix and Fedora

Could you suggest a compatible UF package for the Knoppix and Fedora operating systems? I have checked this link, but those OS flavors are not listed there: http://docs.splunk.com/Documentation/Splunk/7.0.1/Installation/Systemrequirements Please advise. Regards, Arun

How do you run script WinNetMon on universal forwarder?

I want to run WinNetMon on a UF, and I added the stanza to SplunkUniversalForwarder\etc\system\local\inputs.conf.

Uninstall universal forwarder error: "Splunk Installer was unable to enable event log monitoring. Splunk exitcode='1'"

I am trying to uninstall Universal Forwarder 6.1.3 and it gives me the error "Splunk Installer was unable to enable event log monitoring. Splunk exitcode='1'". Does anyone know how to fix this so I can remove it and update to the new version?

Is it possible to get Cisco eStreamer data processed by the Splunk universal forwarder?

Hi, is it possible to get Cisco eStreamer data processed by the Splunk Universal Forwarder? Is there a step-by-step guide anywhere? Many thanks.

Universal forwarder not sending my Windows event logs

Well! I have configured my Splunk server to accept logs on port 9997 from remote hosts, and I have configured my universal forwarder to forward logs to my Splunk server on port 9997.

My outputs.conf is:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 10.0.71.250:9997

[tcpout-server://10.0.71.250:9997]

and my inputs.conf is:

[default]
host = splunk1-PC

[script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled = 0

[WinEventLog:Application]
disable = false

[WinEventLog:Security]
disable = false

[WinEventLog:System]
disable = false

Running netstat -n on both the Splunk server and the Windows system (universal forwarder), I can see the connection from both sides:

Local Address           Foreign Address         State
10.0.70.70:51137        10.0.71.250:9997        ESTABLISHED

Apache logs are coming in from the Windows system (universal forwarder), but Windows events are not. I am unable to find the exact problem. Kindly help!!
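One detail worth checking in the inputs.conf above: the attribute is spelled `disabled`, not `disable`, so those lines are likely being ignored (the inputs may still default to enabled, but a corrected version removes the ambiguity). A hedged sketch:

```
[WinEventLog:Application]
disabled = 0

[WinEventLog:Security]
disabled = 0

[WinEventLog:System]
disabled = 0
```

If events still do not arrive, the UF's own splunkd.log (grep for WinEventLog) usually shows whether the inputs loaded and whether the Splunk service account has permission to read the Security log.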

Unable to read large input file from Universal Forwarder

We have a Linux server receiving our syslog traffic, and on that machine a universal forwarder reads all of the syslog files and sends them off to our Splunk indexers. The syslog server receives from 300+ different devices, and a few of the files get to be very large. There is a separate file for each device, and each rolls over to a new file at midnight. This is where the issue occurs. The universal forwarder is hitting this error on some of the files:

**WARN TailReader - Enqueuing a very large file**

It says that for each of the large files. Some of them do seem to get read eventually, but the data is behind at that point, and others are not read at all. What can I do on the universal forwarder to keep these files from being read in batch mode (which is how the ones that do eventually get read are processed) and instead just tail them as they grow? And how can I ensure that all of the files are getting picked up? Thanks.
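The batch-versus-tail decision is governed by `min_batch_size_bytes` in limits.conf on the forwarder (default 20 MB): files larger than this at discovery time are queued for batch reading. Raising it is one hedged way to keep the rolled files in tail mode; the value below is an example, not a recommendation:

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[inputproc]
# Files larger than this many bytes are read in batch mode.
# Default is 20971520 (20 MB); this example raises it to 100 MB.
min_batch_size_bytes = 104857600
```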

Can a universal forwarder be restarted via REST API?

Can a UF be restarted via the REST API? What else can be done to a UF via the REST API?
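Yes — splunkd on a UF exposes the same management port (8089 by default) as a full instance, so a restart can be triggered with the documented `server/control/restart` endpoint. A sketch (host and credentials are placeholders):

```
curl -k -u admin:yourpassword -X POST \
    https://uf-host.example.com:8089/services/server/control/restart
```

Other endpoints available on a UF include, for example, `/services/server/info` (version, GUID) and `/services/admin/inputstatus` for input state; anything that requires the web or search tier is not available on a forwarder.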

Universal forwarder is listening on the wrong port for the splunkd process

We are rolling out the UF to our Windows servers; no apps yet, just the UF. The deploymentclient.conf only has the deployment server: targetUri = xxx.xxx.xxx.xxx:8089. This is causing some issues with another instance of Splunk our business folks have running. How do I change the port that the deployment server listens on? And how do I push this change to all the UFs if they are not in a server class and have no applications? Thanks!
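The port in `targetUri` is the deployment server's splunkd management port, which is set by `mgmtHostPort` in web.conf on the deployment server. A hedged sketch of both sides (8090 is an arbitrary example):

```
# web.conf on the deployment server
[settings]
mgmtHostPort = 0.0.0.0:8090

# deploymentclient.conf on each UF
[target-broker:deploymentServer]
targetUri = xxx.xxx.xxx.xxx:8090
```

Because each UF only learns about the deployment server from its local deploymentclient.conf, changing that file generally has to happen out-of-band (GPO, SCCM, or a script) rather than through the deployment server itself; alternatively, deploy the new deploymentclient.conf as an app to a catch-all server class before changing the port.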

Universal Forwarder doesn't write events to the persistent queue on graceful service shutdown

I'm using distributed Universal Forwarders at remote locations to collect events from remote sites. To prevent data loss I configured a persistent queue on disk for specific inputs.

inputs.conf:

[udp://514]
connection_host = ip
index = remotelogs
queueSize = 1MB
persistentQueueSize = 10MB
sourcetype = syslog

Everything works perfectly except in the following case. While the UF is disconnected from the Splunk server, events received by the UF are stored in memory. Even when the UF is gracefully stopped using _$SPLUNK_HOME/bin/splunk stop_, the events in memory are not saved to the persistent queue on disk. Does anyone know if this is a known issue or a bug? I didn't find any references to it. Evaluated versions: 7.0.1 for both server and UF.

How to configure inputs.conf to send data from 1 directory to 2 different clusters with different index/sourcetype

We have a scenario where we need to forward data from one directory to two different indexer clusters. While this is achievable through TCP routing in inputs.conf, I believe that only works if everything else stays the same in the monitor stanza. We need to send the data to the two clusters with different index/sourcetype configurations. Is this possible in the same inputs.conf file? We have observed that setting up two different stanzas for the same monitored directory results in only one of the stanzas being respected. Here is the configuration:

[monitor:///A/B/C]
index = index1
sourcetype = st1
_TCP_ROUTING = cluster1

[monitor:///A/B/C]
index = index2
sourcetype = st2
_TCP_ROUTING = cluster2

With this configuration the data only flows to cluster2. We tried differentiating the two stanzas by putting an asterisk at the end of the directory name, but it didn't make a difference.
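With a universal forwarder alone this isn't achievable: duplicate monitor stanzas merge, as observed, and a UF cannot rewrite metadata per destination. One hedged sketch is to insert a heavy forwarder and clone each event, retargeting the clone (all stanza, group, and index names here are placeholders):

```
# props.conf (heavy forwarder)
[st1]
TRANSFORMS-clone = clone_to_st2

[st2]
TRANSFORMS-route = route_st2, set_index2

# transforms.conf
[clone_to_st2]
REGEX = .
CLONE_SOURCETYPE = st2

[route_st2]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = cluster2

[set_index2]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = index2
```

Under this sketch the original events keep index1/st1 and their default route to the cluster1 output group, while each clone becomes st2, is routed to the cluster2 group, and lands in index2.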

Universal Forwarder resending event log data

If the IP address for a host changes or if it gets a new GUID, would the forwarder resend the entire Windows event log?

Why does the UF think my file is binary?

In my environment, a UF monitors a file and forwards it to Splunk. It was able to ingest the file without problems before, but after a version upgrade of the software that writes the monitored log, the character encoding changed from Shift_JIS to UTF-16LE (with BOM), and the file is no longer ingested. Checking the UF's internal log, there is a message saying "it was a binary file, so ignored it". Is this a bug? Is there any workaround other than upgrading? If anyone knows, I would greatly appreciate it if you could tell me. UF version: 6.2.0
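Assuming the UF honors props.conf for this file at input time (charset conversion and the binary check happen where the file is read), a hedged workaround is to declare the encoding explicitly and relax the binary check (the sourcetype name is a placeholder):

```
# props.conf on the universal forwarder
[your_sourcetype]
CHARSET = UTF-16LE
NO_BINARY_CHECK = true
```

UTF-16LE with a BOM is normally auto-detected on more recent versions, so upgrading the 6.2.0 UF is also worth considering.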

When installing the UF, should THP be disabled or not?

This is related to the following answer: is it recommended to disable THP after all? https://answers.splunk.com/answers/523835/turn-thp-off-on-universal-forwarder.html If THP only affects indexing and search performance, it would seem there is no need to disable it on a forwarder. If disabling it is recommended, I would also like to know how THP affects the UF. I hope someone can tell me.
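For reference, THP can be disabled at runtime like this on most distributions (requires root; persisting across reboots needs a boot parameter or init script — this is a general Linux sketch, not Splunk-specific guidance):

```
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```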

Unable to add more sources to Universal Forwarder

Hi, I'm pretty new to Splunk. I've installed a small Splunk environment on a virtual machine and a Universal Forwarder on my own machine (both are Windows). When I try to add more sources to the forwarder, such as a log file, I can see the changes in the app's inputs.conf file, but no events are getting into the system. I restarted the splunkd service on my machine and it didn't help. What am I missing here?

Where does the Universal Forwarder's forwarded log data end up?

Syslog server (+ Universal Forwarder) → Splunk server. As shown in the diagram above, I installed the forwarder on the syslog server and confirmed that the logs are being ingested by the Splunk server without problems. Where on the Splunk server are the logs forwarded by the forwarder stored?
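For context (standard defaults, not specific to this setup): forwarded events are written into index buckets under the Splunk server's data path, not kept as plain log files.

```
# Default on-disk location of indexed data on the Splunk server
#   $SPLUNK_HOME/var/lib/splunk/<index_name>/db/
# e.g. events sent to the default "main" index end up under
#   $SPLUNK_HOME/var/lib/splunk/defaultdb/db/
```

The data itself is retrieved through search (e.g. `index=main host=<syslog_server>`), not by reading those bucket files directly.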

Manipulating data before indexing

I have multiple forwarders (heavy and universal) and I want to manipulate the data they send to my indexers. For each event I want to add a field whose value is based on the event content and other information. I could add this field at search time, but I would prefer to do it before the event is indexed, to make searching easier. Is this possible?
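At index time this is done with an indexed field extraction, which requires parsing — so it applies on heavy forwarders and on indexers (for data arriving from universal forwarders), not on the UFs themselves. A hedged sketch with placeholder names and regex:

```
# transforms.conf (heavy forwarder or indexer)
[add_myfield]
REGEX = session=(\w+)
FORMAT = myfield::$1
WRITE_META = true

# props.conf
[your_sourcetype]
TRANSFORMS-addfield = add_myfield

# fields.conf (on the search heads)
[myfield]
INDEXED = true
```

Indexed fields grow the index and cannot be changed after the fact, which is why search-time extraction is usually preferred unless the field is needed for fast filtering.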

Universal Forwarder - Linux server - multiple processes running

Hi there, maybe a simple question for the pros. I have installed the UF on several Linux servers to collect logs and events. I noticed on these servers that Splunk is running with 40 processes (splunkd -p 8089 start) at the same time. Is this normal behavior? Can I reduce the number of running processes? I'm using version 7.0.0 for both the UF and the IDX. Thanks!

Universal Forwarder - configured but inactive forwarders

I have a fresh install of `7.0.x` in our QA environment to test with. I have an indexer/search head/deployment server running on a RHEL7 box and one Universal Forwarder on a Windows Server 2012 R2 box. I have configured the indexer to listen on port 9997, and it reports it is properly doing so when I run `splunk display listen`. I have the forwarder pointed at the indexer on that same port, but when I run the list forward-server command I get the following:

Active forwards:
    None
Configured but inactive forwards:
    indexer.domain.com:9997

where `indexer.domain.com:9997` matches `splunk show default-hostname`. When I run `lsof -i TCP:9997` on my indexer I get back the following:

COMMAND   PID    USER    FD    TYPE  DEVICE   SIZE/OFF  NODE  NAME
splunkd   86629  splunk  111u  IPv4  2544734  0t0       TCP   *:palace-6 (LISTEN)

When I run `splunk btool inputs list splunktcp --debug` I get back the following:

/opt/splunk/etc/system/default/inputs.conf  [splunktcp]
/opt/splunk/etc/system/default/inputs.conf  _rcvbuf = 1572864
/opt/splunk/etc/system/default/inputs.conf  acceptFrom = *
/opt/splunk/etc/system/default/inputs.conf  connection_host = ip
/opt/splunk/etc/system/local/inputs.conf    host = indexer.domain.com
/opt/splunk/etc/system/default/inputs.conf  index = default
/opt/splunk/etc/system/default/inputs.conf  route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:indexQueue;absent_key:_linebreaker:parsingQueue
/opt/splunk/etc/apps/launcher/local/inputs.conf  [splunktcp://9997]
/opt/splunk/etc/system/default/inputs.conf  _rcvbuf = 1572864
/opt/splunk/etc/apps/launcher/local/inputs.conf  connection_host = ip
/opt/splunk/etc/apps/launcher/local/inputs.conf  disabled = 0
/opt/splunk/etc/system/local/inputs.conf    host = indexer.domain.com
/opt/splunk/etc/system/default/inputs.conf  index = default

From my point of view everything is configured correctly. The firewall ports are still open from when we decommissioned our `6.5` QA machines.

When I check the `splunkd.log` on the indexer I can see these events after configuring the listener:

01-24-2018 17:11:04.311 -0600 INFO TcpInputConfig - IPv4 port 9997 is reserved for splunk 2 splunk
01-24-2018 17:11:04.311 -0600 INFO TcpInputConfig - IPv4 port 9997 will negotiate s2s protocol level 3
01-24-2018 17:11:04.312 -0600 INFO TcpInputProc - Creating fwd data Acceptor for IPv4 port 9997 with Non-SSL

You can see the contents of my `inputs.conf` in the btool output above. The contents of my forwarder's `outputs.conf` look like this:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = indexer.domain.com:9997

[tcpout-server://indexer.domain.com:9997]

The `splunkd.log` on my forwarder contains a lot of the following:

01-24-2018 17:59:06.807 -0600 WARN TcpOutputProc - Cooked connection to ip=10.2.1.12:9997 timed out
01-24-2018 17:59:07.136 -0600 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected

These show the right IP and port, but I don't understand why it's timing out. The firewall is configured properly because it hasn't been changed since we upgraded from 6.5 to 7.0 in this environment, and we are using the same ports. Any thoughts, comments, or advice is greatly appreciated. Thank you.

