Channel: Questions in topic: "universal-forwarder"
Viewing all 1551 articles

Why am I receiving an error when deploying a new Splunk forwarder?

Hi, I'm trying to deploy a new forwarder since I updated my indexer to 7.0.3. I ran into some problems and found answers on this forum, but I still haven't been able to solve this. Below is the error message in splunkd.log:

```
04-13-2018 13:22:44.069 +0000 INFO TcpOutputProc - Removing quarantine from idx=IPAddress:9997
04-13-2018 13:22:44.072 +0000 ERROR TcpOutputFd - Read error. Connection reset by peer
04-13-2018 13:22:44.074 +0000 ERROR TcpOutputFd - Read error. Connection reset by peer
04-13-2018 13:22:44.074 +0000 WARN TcpOutputProc - Applying quarantine to ip=IPAddress port=9997 _numberOfFailures=2
04-13-2018 13:22:51.491 +0000 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_IPAddress_8089_Hostname_ShortName_E4BC416F-983F-4CEF-AA47-45BA28ED0FF3
04-13-2018 13:22:51.503 +0000 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_IPAddress_8089_Hostname_ShortName_E4BC416F-983F-4CEF-AA47-45BA28ED0FF3
04-13-2018 13:23:51.505 +0000 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_IPAddress_8089_Hostname_ShortName_E4BC416F-983F-4CEF-AA47-45BA28ED0FF3
04-13-2018 13:23:51.517 +0000 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_IPAddress_8089_Hostname_ShortName_E4BC416F-983F-4CEF-AA47-45BA28ED0FF3
04-13-2018 13:24:17.921 +0000 WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group splunkssl has been blocked for 600 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
```
And on my indexer:

```
04-13-2018 15:24:50.665 +0200 INFO ClientSessionsManager:Listener_AppEvents - Received count=1 AppEvent from DC ip=172.25.225.49 name=E4BC416F-983F-4CEF-AA47-45BA28ED0FF3
04-13-2018 15:26:42.372 +0200 ERROR TcpInputProc - Error encountered for connection from src=IPAddress:47781. error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
```

Ports 8089 and 9997 are listening, and telnet between the two hosts works.

**Forwarder outputs.conf**

```
[tcpout]

[tcpout:splunkssl]
server = indexer:9997

[tcpout-server://indexer:9997]
sslCertPath = /opt/splunkforwarder/etc/certs/splunk-sys-forwarder.pem
sslCommonNameToCheck = indexer
sslPassword = CaCertPassword
sslRootCAPath = /opt/splunkforwarder/etc/certs/cacert.pem
sslVerifyServerCert = false
```

**Indexer inputs.conf**

```
[splunktcp-ssl:9997]
disabled = 0
connection_host = ip

[SSL]
serverCert = /opt/splunk/etc/certs/splunk-sys-indexer.pem
sslPassword = CaCertPassword
requireClientCert = false
```
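The `wrong version number` SSL error typically means one side of the connection is speaking plain TCP while the other expects TLS. In the config above the SSL settings live only under the `[tcpout-server://indexer:9997]` stanza; one common fix is to put them directly in the output group so they unambiguously apply to the connection. A sketch, assuming the same cert paths and password as above:

```
# Forwarder outputs.conf — sketch, SSL settings moved into the output group
[tcpout:splunkssl]
server = indexer:9997
sslCertPath = /opt/splunkforwarder/etc/certs/splunk-sys-forwarder.pem
sslRootCAPath = /opt/splunkforwarder/etc/certs/cacert.pem
sslPassword = CaCertPassword
sslVerifyServerCert = false
```

After changing outputs.conf, restart the forwarder and re-check splunkd.log for the `TcpOutputFd` errors.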

What is the Windows universal forwarder product ID?

Anyone know the product ID for UF 7.0.1? I have this for 6.6.5:

```
Package Splunk665
{
    # Ensure = Present
    Path      = "C:\Software\splunkforwarder-6.6.5-b119a2a8b0ad-x64-release.msi"
    Name      = "UniversalForwarder"
    ProductId = "9D47FB24-B169-437D-9D8B-F0CC951644D7"
    Arguments = "AGREETOLICENSE=Yes /quiet"
}
```

Why is the Windows universal forwarder not showing in forwarder management?

I am trying to create a new universal package for our windows servers. The log data from our test server is showing up in Splunk the way it should; however, I don't see the server name in Forwarder Management. Our old package which was "lost" did populate the forwarder management list. Any troubleshooting recommendations or advice? We do not have server classes or apps configured in the deployment server at this time. The current forwarders only show as clients.

How to switch between Splunk Universal Forwarders?

Hi, we have a production environment and a disaster recovery environment, and the Splunk universal forwarder is installed in both. When the production system goes down, the UF on the production system has to be stopped and the UF in the DR environment has to start and take over data ingestion to the index. Is there any way to set up this mechanism? If yes, how do we set it up or configure it? Please let me know if any additional details are required. Thank you, Best regards, KK.

Windows Universal Forwarder Hide Domain Password

Hello, I need to deploy Windows Universal Forwarders with a domain account and I am wondering:

- Is there any way to not have LOGON_PASSWORD explicit in clear text?
- Is the domain password stored encrypted somewhere?
- Any ideas how to deploy this without having the cleartext password in the installation package?

```
msiexec.exe /i splunkuniversalforwarder_x64.msi /l*v install_splunkforwarder-6.1-201357-x64-release.msi.log LOGON_USERNAME=adtest1\lowpriv-testuser LOGON_PASSWORD=win1@splunk AGREETOLICENSE=Yes SET_ADMIN_USER=0 /quiet
```

Thank you

Options on installing universal forwarders on "Windows Machines"

Hello All, I'm a bit confused about the installation of a UF on a Windows machine. According to the documents, there are two methods to install the Splunk UF: one with a local account and one with a domain account. Now, my question is: let's say I'm setting it up on a web server. Should I select the local account or the domain account to pull the web server logs, in this case the IIS logs? Also, what exactly do they mean when they say one can monitor logs from another domain account as well if it is provided with a domain username and password? Is it like I have a server and there are multiple people logging on to that specific server? Also, if I install it as a domain user, will it not collect the system logs by default? And importantly, just by installing a UF on one server with domain credentials, does it mean I don't have to install a UF on all the servers?
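Whichever account the service runs as, the UF reads files with that account's permissions, so the local/domain choice mostly matters for remote or restricted sources. For local IIS logs, a monitor stanza like the following would apply; this is a sketch, and the path (the IIS default), index name, and sourcetype are assumptions:

```
# inputs.conf on the Windows UF — hypothetical index/sourcetype names
[monitor://C:\inetpub\logs\LogFiles]
sourcetype = iis
index = web
disabled = 0
```

Each server whose local files or event logs you want must run its own UF; a domain account does not let one UF read another machine's local logs by itself.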

How to import this kind of CSV data?

I've a CSV file like the one reported below, and on my UF I've added the following props, but on the search heads the events are not parsed.

props.conf

```
[sourcetype]
FIELD_HEADER_REGEX = #LineAboveHeader\n(.*)
FIELD_DELIMITER = ,
```

CSV example

```
#LineAboveHeader
"Header1","Header2","Header3","Header4"
"Field1", "Field2", "Field3", "Field4"
"Field1", "Field2", "Field3", "Field4"
"Field1", "Field2", "Field3", "Field4"
```

What I would like is for Splunk to see the headers and import the field names, and then create an event for each line.
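One way to get header-based field extraction on a UF is Splunk's structured-data parsing, which (unlike most parsing) runs on the universal forwarder itself. A sketch, reusing the sourcetype stanza from the question and assuming the `#LineAboveHeader` line should simply be skipped:

```
# props.conf on the UF — structured parsing happens at the forwarder
[sourcetype]
INDEXED_EXTRACTIONS = csv
PREAMBLE_REGEX = ^#LineAboveHeader
FIELD_DELIMITER = ,
```

With `INDEXED_EXTRACTIONS = csv`, the first non-preamble line is treated as the header and each subsequent line becomes one event with those field names.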

How do I configure a UF on Linux to receive and forward windows events?

I need to configure a Linux-based UF to receive Windows events and then forward those to the indexers. I am guessing that there are an inputs.conf and an outputs.conf needing to be configured; I'm just not sure how to configure these stanzas, mostly inputs.conf. This would receive events from Windows servers in a web zone, so we only need to open the firewall for the UF. Thanks!
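A UF can act as an intermediate forwarder by listening for Splunk-to-Splunk traffic from the Windows UFs and relaying it onward. A minimal sketch; the port and indexer hostnames are assumptions:

```
# inputs.conf on the intermediate Linux UF — receive S2S traffic
[splunktcp://9997]
disabled = 0

# outputs.conf on the same Linux UF — relay to the indexers
[tcpout:indexers]
server = indexer1.example.com:9997,indexer2.example.com:9997
```

The Windows UFs then point their own outputs.conf at the Linux UF's address and port, so only that one path needs a firewall opening.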

Find source types from UF to HF...

Hi all... one of my heavy forwarders is relaying a lot of data; we are using it as an intermediate forwarding tier to Splunk Cloud, and many UFs are sending to this HF. I need to run a search to find what sourcetypes the universal forwarders are sending to this heavy forwarder. The heavy forwarder is not running in preview mode. I've run plenty of searches that report both UF/HF activity to the SH... but I really want to understand what is going through this HF without bouncing it and putting it into local indexing/preview mode. Thanks for any input!
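If you know which UF hosts send through that HF, you can summarize what they ship without touching the HF by searching the indexed data itself. A sketch; the host names are assumptions, and `index=*` over a long time range can be expensive:

```
| tstats count where index=* (host=uf-host-1 OR host=uf-host-2)
  by host, sourcetype
```

Since the UFs preserve the originating host on each event, grouping by host and sourcetype here reflects exactly what each forwarder is sending through the intermediate tier.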

How to selectively forward the log files to specific indexes in Splunk?

Is it possible to selectively forward log files to specific indexes in Splunk? I want to forward the logs of a Docker container running 3 services to a Splunk indexer. The problem is that if I use the Docker logging driver, all the data written to STDOUT goes to the same index and data segregation is not possible. Instead, I've set up a forwarder and am able to send logs, but they all go to the same index. I want to configure the Splunk forwarder to send specific logs to a specific index.
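The `index` setting can be set per monitor stanza, so each service's log path can land in its own index. A sketch; the paths and index names are assumptions, and the indexes must already exist on the indexer:

```
# inputs.conf on the forwarder — one stanza per service
[monitor:///var/log/service-a/*.log]
index = service_a
sourcetype = service_a

[monitor:///var/log/service-b/*.log]
index = service_b
sourcetype = service_b

[monitor:///var/log/service-c/*.log]
index = service_c
sourcetype = service_c
```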

Why is Docker Splunk UF sending logs with 2 different hostnames?

Docker-compose:

```
splunkuf:
  image: splunk/universalforwarder:7.0.2
  network_mode: host
  environment:
    SPLUNK_START_ARGS: --accept-license --answer-yes
    SPLUNK_USER: root
    SPLUNK_CMD: install app /tmp/splunkclouduf.spl -auth admin:changeme
    SPLUNK_DEPLOYMENT_SERVER: XXXX.cloud.splunk.com:8089
    SPLUNK_ADD_1: monitor /docker/log
    SPLUNK_ADD_2: monitor /mnt/logs/postgres
  volumes:
    - /opt/splunk/etc
    - /opt/splunk/var
    - /var/log:/docker/log
    - /mnt/logs/postgres
    - $DATA_DIR/logs/postgres:/mnt/logs/postgres
    - $DATA_DIR/certs/splunkclouduf.spl:/tmp/splunkclouduf.spl
```

The container is running on an Ubuntu instance. In Splunk Cloud I can see 2 hostnames for the same instance:

1. ubuntu
2. The real hostname

Any reason why this happens?
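One possibility is that the forwarder picked up the container's own hostname at first start and baked it into its config, while some events later carried the host's real name. Pinning the host value explicitly inside the container is one workaround; a sketch, where the hostname is an assumption you would replace:

```
# inputs.conf inside the UF container — force a single host value
[default]
host = the-real-hostname
```

Because `/opt/splunk/etc` is a persisted volume, any hostname captured at first launch survives container restarts, so the setting may need to be corrected there rather than by recreating the container.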

Universal Forwarder

Before I start, this is a serious case of the blind leading the blind. Currently we have a VMware VM running Windows Server 2016 hosting Splunk Enterprise; to date we have managed to get the forwarder installed on Windows 7, Windows 2003, Windows 2008, Solaris and Mint Linux (just for a laugh). Without much administration it all works well, but when we come to RHEL 7, for some reason we cannot get it to work, even though everything appears to be okay. We installed the RPM forwarder, but nothing appears to be happening. As this is a test system we have disabled both server and client firewalls, and we can ping the server in both directions, but we still can't seem to get it to work. The only thing that we have managed to find using Google is a potential issue with SELinux, so we have disabled that. Any suggestions? This would save the sanity of the "intern".

Is there any way to stop the Splunk Universal Forwarder from generating dump files, so that I can generate a dump on demand?

Hi, I have installed the Splunk universal forwarder to send logs from my IIS web server to Splunk. The universal forwarder ran for some days with success, but after that it began to crash and generate a lot of dumps. I want to disable the generation of the dumps. I have tried to follow this post https://answers.splunk.com/answers/478637/how-to-stop-splunkdexe-from-creating-crash-dump-fi.html (mainly: http://superuser.com/questions/1069578/disable-werfault-exe-on-crashes-for-specific-processes-on-windows-10), but without success. Is there any way to stop the Splunk universal forwarder from generating dump files, so that I can generate a dump on demand? Fernando.

Can someone help me understand how my current outputs.conf settings work?

A Splunk engineer told us to deploy an app with the deployment server (to universal forwarders) that contained the outputs.conf file. The problem is that even with this app deployed, running btool still shows that the config is being taken from system/local/outputs.conf, which is what I expected based on the documents from Splunk... maybe he was confused. Anyway, the real problem is that I don't understand the correlation between a few values in outputs.conf:

```
[target-broker:deploymentServer]
targetUri = clustermaster:8089

[tcpout]
defaultGroup = my_indexers

[tcpout: my_indexers]
server = 1.1.1.1:9997,1.1.1.2:9997

[tcpout-server://1.1.1.1:9997]

[tcpout: my_LB_indexers]
autoLBFrequency = 30
server = 1.1.1.1:9997,1.1.1.2:9997,1.1.1.3:9997
useACK = true
```

After reading the outputs.conf documentation, I'm confused about a few things:

1) What is the result of the above config? Does the defaultGroup attribute mean that it will never send to the my_LB_indexers group? Or does the universal forwarder always send to all output groups? Since duplicate values exist, would it still only send to each one once?

2) Is the single-server stanza even needed? I saw one Splunk document that said it was optional and another that said it's always needed.

3) If the config makes it only send to my_indexers, does it ever fail over to another group if the my_indexers group becomes unavailable?
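On question 1: `defaultGroup` names the group(s) used when nothing else routes an event, so with this config `my_LB_indexers` would only receive data if something explicitly selects it, for example per-input routing. A sketch of such routing; the monitor path is an assumption:

```
# inputs.conf on the forwarder — send one input to the other group
[monitor:///var/log/special.log]
_TCP_ROUTING = my_LB_indexers
```

Without a `_TCP_ROUTING` override anywhere, all data goes to `my_indexers` only, and the forwarder load-balances across the servers listed within that one group.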

Rotated log file to another directory causes duplication

**Test inputs.conf**

```
[monitor:///var/log/application/active/*.log]
disabled = 0
sourcetype = application
index = application

[monitor:///var/log/application/rotated/*.log]
disabled = 0
sourcetype = application
index = application
```

**Expected result:** If I understand the CRC that Splunk calculates, when `/var/log/application/active/application.log` is rotated to `/var/log/application/rotated/application.20171231.log`, the log events should not be duplicated, because the first 256 bytes remain the same.

**Actual result:** Instead, my entire file is duplicated, with splunkd.log stating:

```
Normal record was not found for initCrc=0xbd68c9187f8e7490.
```

Is this because it's in a different directory or a different inputs.conf stanza? I'm not using `initCrc=`, so I did not expect the directory to make a difference. Can anyone explain the detail I'm missing here?

Universal Forwarder on Chromebooks?

Hi all, long time lurker here! Has anyone had any luck installing a universal forwarder on a Chromebook? My company will most likely be purchasing some of these, and I'd like to be able to monitor them if possible. I know Chrome OS isn't listed as a supported operating system, but it is Linux-based so I'm not sure if there are workarounds (we're a Windows-only shop so I don't have much experience in that area). Thanks!

How to filter logs from the source with a universal forwarder?

Hi, I have UFs on a few AWS EC2 instances, reading logs from /temp. I want to use a regex to send only logs containing ERROR and WARN on to the HF and then on to the indexers. I want the filtering to occur as close to the source as possible, to reduce the amount of data being sent. Is it possible to apply a regex in the inputs.conf of the UF? If so, please explain. Thank you
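A universal forwarder has no parsing pipeline, so regex event filtering normally happens at the first heavy forwarder or indexer rather than on the UF itself. The documented pattern routes everything to the null queue and then re-queues matching events; a sketch, assuming a hypothetical sourcetype of `mylogs`:

```
# props.conf on the HF (or indexer)
[mylogs]
TRANSFORMS-filter = drop_all, keep_error_warn

# transforms.conf on the same instance — order matters: drop, then keep
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_error_warn]
REGEX = ERROR|WARN
DEST_KEY = queue
FORMAT = indexQueue
```

Since your events already pass through a HF, applying this there still discards the unwanted events before they reach the indexers, though not before they leave the EC2 instances.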

Universal forwarder (Windows) does not send logs even though "active"

Hi Folks, I am testing log forwarding using a universal forwarder from Windows to Splunk but can't seem to receive any logs. My test environment has the Splunk Enterprise OVA (standalone) as the server and Windows 2012 (with a universal forwarder) as the client. Steps I followed (not necessarily in that order): On the Windows client (universal forwarder): * Installed the universal forwarder * Configured it as a deployment client * Added a firewall rule to allow destination port 9997 * Checked using "splunk list forward-server" to confirm the server is listed in the "active" section On the Splunk OVA Enterprise server: * Configured listening on port 9997 using the web console * Added a forwarder input using Settings -> "Data Inputs" -> "Forwarded Inputs" -> "Windows Event Logs" (could see my desired deployment client in the list). Selected Application, Security & System events * Stopped the iptables service (just to ensure it's not blocking traffic) * Followed [this][1] link to receive logs from the forwarder Testing: * Created a user in Windows (client) and checked local event logs. The local log can be seen in "Security" events * Ran a search on the server (web console) to see this event. It says "no events found" for the specific index [1]: https://answers.splunk.com/answers/49833/splunk-forwarder-connection-refused-from-splunk-indexer.html
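One quick check is whether the forwarder's own internal logs are arriving at the indexer, since UFs forward `_internal` by default; if they show up, the pipe works and the problem is the event-log input itself. A sketch; the host name is an assumption:

```
index=_internal host=my-windows-client source=*splunkd.log* (ERROR OR WARN)
```

If nothing comes back for that host, the connection or outputs config is the issue; if errors come back, they usually name the failing input.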

Splunk Docker Logging driver vs Universal Forwarder

What is the best option between the Splunk logging driver for Docker and a universal forwarder (running on the host or inside a container) for sending logs to an indexer server? What are the limitations of the Splunk logging driver for Docker?

After installing a new UF, why is it not forwarding logs to the Indexers?

I just installed a new UF, but it's not forwarding logs to the indexers, and $SPLUNK_HOME/var/log/splunk/splunkd.log shows the error messages below. The IP in the error messages is that of the indexer:

```
05-10-2018 15:13:13.954 +0000 ERROR TcpOutputProc - Error initializing SSL context - invalid sslCertPath for server 45.125.XXX.X:9997
05-10-2018 15:13:13.959 +0000 ERROR SSLCommon - Can't read key file /opt/splunkforwarder/etc/auth/server.pem errno=151429224 error:0906A068:PEM routines:PEM_do_header:bad password read.
```

The UF is connecting to the deployment server and getting configs, but not sending logs to the indexers. I need help understanding what is happening. I have reinstalled the UF but still got the same error messages. The certs are the default Splunk certs. Thanks
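A `bad password read` on the default server.pem usually means the `sslPassword` in outputs.conf doesn't match the certificate's passphrase; for Splunk's bundled default certificate the passphrase is the literal string `password`. A sketch of the relevant stanza, where the group name is an assumption and the server address is taken from the error above:

```
# outputs.conf on the UF — default-cert settings (passphrase "password")
[tcpout:primary_indexers]
server = 45.125.XXX.X:9997
sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslPassword = password
```

On restart, Splunk hashes `sslPassword` in place; if an app pushed from the deployment server ships an already-hashed password from a different instance, the forwarder cannot decrypt it, which produces exactly this error.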