What is the best option between the Splunk logging driver for Docker and a Universal Forwarder running on the host or inside the container for sending logs to an indexer? And what are the limitations of the Splunk logging driver for Docker?
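For context, my understanding is that the logging driver is configured per container, roughly like this (the URL and token are placeholders):

docker run --log-driver=splunk \
    --log-opt splunk-url=https://splunk.example.com:8088 \
    --log-opt splunk-token=00000000-0000-0000-0000-000000000000 \
    my-image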
Hi Splunkers,
I am getting this splunkd log entry on only one Splunk forwarder.
05-09-2018 08:11:39.579 +0000 INFO ThruputProcessor - Current data throughput (258 kb/s) has reached maxKBps. As a result, data forwarding may be throttled. Consider increasing the value of maxKBps in limits.conf.
Please let me know how to solve this for one particular Splunk forwarder only. Can I put the entry below in the configuration of just the forwarder I want to change, or do I need to set it on the indexer? (And if on the indexer, how would I scope it to a single forwarder host rather than all of them?)
[thruput]
# 0 means unlimited
maxKBps = 0
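For reference, on the forwarder I would expect this to live in a local limits.conf, roughly like so (the 512 value is just an illustrative bounded alternative to unlimited):

# $SPLUNK_HOME/etc/system/local/limits.conf on the affected forwarder
[thruput]
# default is 256; 0 means unlimited, or raise it to a bounded value
maxKBps = 512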
I installed the Splunk Universal Forwarder and Splunk Enterprise on my C: drive. I created a sample file, modified inputs.conf as described in one of the answers (link below), and enabled receiving by setting the port to 9997. Do we have to modify/create an outputs.conf file? I tried creating outputs.conf too, but with no luck: in outputs.conf I gave the server name as localhost and the port as 9997. Am I missing something? Also, do we have to modify anything in distributed search? I assume my Splunk Enterprise instance is acting as both SH and indexer.
I have referred to the answer below, but it didn't resolve my issue:
https://answers.splunk.com/answers/490343/how-to-properly-configure-universal-forwarder-loca.html#answer-656030
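For reference, the outputs.conf I created on the forwarder looks roughly like this (assuming the indexer is listening on localhost:9997, as above):

# $SPLUNK_HOME/etc/system/local/outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = localhost:9997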
I am trying to index new data and it is not happening.
I am indexing a single log file that the server writes to whenever new events are added.
I put this statement into the MSIADDED inputs on the universal forwarder, because that is where my current inputs live.
This is what I added:
[Monitor://D:\Software\Waratek\HR-Config\HR.log]
disabled = 0
sourcetype = waratek
index = main
This is a sample of the file:
2018-05-02 11:02:09,851 CEF:0|ARMR:CWE-114: Process Control|CWE-114: Process Control|1.0|Process Forking - 02|Load Rule|Low|outcome=success
2018-05-02 11:02:13,252 CEF:0|ARMR:CWE-114: Process Control|CWE-114: Process Control|1.0|Process Forking - 02|Link Rule|Low|outcome=success
2018-05-02 11:02:13,263 CEF:0|ARMR:CWE-114: Process Control|CWE-114: Process Control|1.0|Process Forking - 03|Load Rule|Low|outcome=success
2018-05-02 11:02:14,135 CEF:0|ARMR:CWE-114: Process Control|CWE-114: Process Control|1.0|Process Forking - 03|Link Rule|Low|outcome=success
I can see the sourcetype show up in the data summary; however, when I search for the data, there is nothing there. Any suggestions?
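For reference, I'm searching along these lines, over both the last 24 hours and All Time (in case the events landed with unexpected timestamps):

index=main sourcetype=waratek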
Hi Everyone,
I am testing universal forwarding in our test environment and have also installed the universal forwarder on one of our Windows servers, but I can't get the desired logs.
My test environment consists of the Splunk Enterprise OVA as the server and a Windows server (with the universal forwarder installed) as the client. I used the deployment server command (set deploy-poll) and then restarted.
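For reference, the commands I ran on the forwarder were along these lines (the hostname is a placeholder):

splunk set deploy-poll splunk-ova.example.com:8089
splunk restart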
On the Splunk Enterprise OVA server:
I added a forwarder input using Settings -> "Data Inputs" -> "Forwarded Inputs" -> "Windows Event Logs" -> New (I could see my desired deployment client in the list), and selected Application, Security & System events.
Tested:
1. I checked the Event Viewer; events are being generated there.
2. I checked a tcpdump; logs are also coming in from the Windows server.
I am also getting these messages:
- Skipped indexing of internal audit event. Will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may cause the indexer to block.
- Forwarding to the indexer group default-autolb-group blocked for 10 seconds.
I have installed the UF on a number of servers and configured it to monitor the Windows event logs (Application, System, Security). It looks like the UF has only picked up event logs starting from when it was installed. Is there a way to tell the UF to ingest all of the event logs from the past?
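For context, my current stanzas look roughly like the sketch below (one channel shown), and I'm wondering whether settings like these are what I need; my assumption is that start_from = oldest with current_only = 0 pulls in historical events:

[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0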
I'm reading through all of the API docs, and I am executing GET API calls against my search head successfully. However, I want to restart a separate universal forwarder and edit its inputs.conf via the API, but I can't figure out how to enable the REST API on it. There are no Splunk accounts on it, so what do I need to configure here?
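For reference, what I'm hoping to be able to do is something along these lines against the UF's management port (host and credentials are placeholders; I'm assuming the default management port 8089):

curl -k -u admin:changeme https://uf-host.example.com:8089/services/server/info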
I am trying to install Splunk Insights. I installed the Splunk server, but when I tried to install the forwarder I was not allowed to in my environment, so I installed the agent manually, and it installed successfully. However, the added host is not showing in the web UI. Can you please help me out here?
Thanks,
NAK
Trying to set up the Universal Forwarder on the web server to forward IIS logs to Splunk.
The Windows event logs ARE forwarding correctly. My IIS logs are NOT stored in the default location, so I'm trying to figure out the correct stanza to use.
My actual IIS log directory structure is:
E:\weblogs\w3svc1\*.log
E:\weblogs\w3svc2\*.log
E:\weblogs\w3svc3\*.log
Etc. (multiple web sites)
I tried the following stanzas; neither seemed to work:
[monitor://E:\weblogs\\*\\*.log]
disabled = 0
[monitor://E:\weblogs\\...\\*.log]
disabled = 0
I even tried to log just a single site:
[monitor://E:\weblogs\\w3svc1\\*.log]
disabled = 0
I restarted the Splunk forwarder after each change to the path.
If I run 'splunk list monitor', for all of these stanzas I get:
E:\weblogs\*.log
No logs are being imported that I can tell
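For what it's worth, I'm now wondering whether single backslashes with the '...' recursive wildcard are what's needed, i.e. a stanza like this (untested sketch):

[monitor://E:\weblogs\...\*.log]
disabled = 0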
I'd appreciate any assistance anyone can provide.
-MARK-
Hi - I saw these errors in splunkd.log. Our UF is currently down and cannot be restarted. I'm not sure whether these errors affect the UF itself, but what does it mean when these errors appear in the UF's splunkd.log? Could they cause the UF to go down?
The UF went down 30 minutes after these errors appeared.
05-21-2018 00:01:42.952 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_apps_inputs/bin/service.sh" Note: This output shows SysV services only and does not include native
05-21-2018 00:01:42.952 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_apps_inputs/bin/service.sh" systemd services. SysV configuration data might be overridden by native
05-21-2018 00:01:42.952 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_apps_inputs/bin/service.sh" systemd configuration.
05-21-2018 00:01:42.952 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_apps_inputs/bin/service.sh" If you want to list systemd services use 'systemctl list-unit-files'.
05-21-2018 00:01:42.952 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_apps_inputs/bin/service.sh" To see services enabled on particular target use
05-21-2018 00:01:42.952 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_apps_inputs/bin/service.sh" 'systemctl list-dependencies [target]'.
I know it is possible to install a UF on the same machine as my Splunk instance, as stated in these posts:
1. https://answers.splunk.com/answers/131245/running-a-universal-forwarder-on-the-same-server-as-the-enterprise-server.html
2. https://answers.splunk.com/answers/471936/install-both-universal-forwarder-and-splunk-enterp.html
but I would like to know if there are notable reasons to do so or not.
- Are there any benefits to having both on the same machine or otherwise?
- What is the best practice and why is that so?
- Which approach is most prone to errors?
Thanks in advance! :)
We're trying to determine if Splunk is appropriate for our scenario, which is to monitor our own agent that runs on our users' PCs and Macs. We have several million customers, and it seems like it would be burdensome (based on the posted system requirements) to deploy a universal forwarder onto every user's machine (plus I'm not sure how we would integrate this into the existing installer & upgrader features of our app).
All we really need to do is to periodically upload (either daily or hourly) a .json file containing some structured data for metrics that describe the current state of the app during that interval, as well as some exception events (crashes, thrown exceptions of note, etc.). In theory, this would just be an HTTPS call to our Splunk instance with the appropriate authentication, but I can't locate any online documentation that describes this - the REST API seems to be more about controlling existing collectors and doing extraction & analysis of collected data.
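For what it's worth, the shape of call we have in mind follows Splunk's HTTP Event Collector convention, if that is indeed the right mechanism (URL, port, and token below are placeholders):

curl -k https://splunk.example.com:8088/services/collector/event \
    -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
    -d '{"sourcetype": "_json", "event": {"app_version": "1.2.3", "crashes": 0}}'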
When you deploy Splunk Insights for Infrastructure, you use a specific script to install a forwarder. Can we use the Splunk Universal Forwarder to collect and send metrics to Splunk Insights for Infrastructure and, at the same time, send other logs to a different Splunk instance?
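Conceptually, I'm imagining two output groups on the forwarder, along these lines (hostnames are placeholders, and whether SII tolerates a second group is exactly my question):

# outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = sii

[tcpout:sii]
server = sii-host.example.com:9997

[tcpout:enterprise]
server = splunk-host.example.com:9997

...with selected inputs routed to the second group via _TCP_ROUTING = enterprise in inputs.conf.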
Our Qualys report detected various SSL certificate vulnerabilities for any devices using the Splunk universal forwarder via 8090. We have a deployment server configured to push configuration to servers running the Splunk agent. After doing some research, it appears we need to create a certificate on the deployment server and distribute it to every server running the Splunk agent. I'm curious to know which certificates I need to distribute. I was able to create self-signed certificates on the deployment server, and I would like to resolve the vulnerabilities detected by Qualys. I found the following documentation stating that certificate authentication is not recommended for deployment servers and clients: https://docs.splunk.com/Documentation/Splunk/7.1.0/Security/Securingyourdeploymentserverandclients
Additional information:
http://docs.splunk.com/Documentation/Splunk/7.1.0/Security/Howtoself-signcertificates
http://docs.splunk.com/Documentation/Splunk/7.1.0/Security/HowtoprepareyoursignedcertificatesforSplunk
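For context, my current assumption is that the place to point each instance at a new certificate is the sslConfig stanza in server.conf (paths and password below are placeholders):

# $SPLUNK_HOME/etc/system/local/server.conf
[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myServerCert.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/myCACert.pem
sslPassword = <certificate password>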
Qualys Vulnerabilities:
• X.509 Certificate SHA1 Signature Collision Vulnerability
• SSL Certificate - Self-Signed Certificate
• SSL Certificate - Expired
• SSL Certificate - Subject Common Name Does Not Match Server FQDN
• SSL Certificate - Signature Verification Failed Vulnerability
• HTTP Security Header Not Detected
I have a UF installed on my local machine, and I installed a different UF on a server that I connect to remotely. Whenever I forward files from the remote server it works well, but instead of the "host" field value showing the server name, it shows my local machine name, and I don't know why. Since I am forwarding from the server, I expected the host value to be the server name. Am I missing something? Is there a way to make the host value the server name instead of my local machine name?
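Is it perhaps just a matter of overriding the host on the remote UF, along these lines? (The value below is a placeholder.)

# $SPLUNK_HOME/etc/system/local/inputs.conf on the remote UF
[default]
host = remote-server-name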
I noticed on the download page that Splunk Enterprise is supported on OSX 10.13 but the Universal Forwarder is not. Setting aside the kerfuffle caused by the new OSX logging mechanisms, is there any reason not to use UF 7.1.1 on OSX 10.13?
What's the official stance on that?
I did find the reference to SPL-129734, and I would like to add a vote for requesting that functionality, but bothering the support folks with a formal ticketed request seems excessive.
Thanks all!
Hi there,
I followed the install [instructions](https://docs.splunk.com/Documentation/Splunk/7.0.3/Admin/Integrateauniversalforwarderontoasystemimage) for installing the Splunk UF in our Citrix environment.
We used this command for the installation on the master:
`msiexec.exe /i splunkforwarder-7.0.3-fa31da744b51-x64-release.msi DEPLOYMENT_SERVER=":8089" AGREETOLICENSE=yes LAUNCHSPLUNK=0 /quiet`
The preparation of the master image worked fine.
After we started the first provisioned server from this image, we saw the UF communicate with the deployment server and receive the prepared inputs.conf and outputs.conf, and a few minutes later we received some events from the event log on the indexer. So far, everything worked as expected.
But when we restart the provisioned server, its image is reset and the previously generated GUID is therefore gone. Once the Splunk UF service starts again, it generates a new GUID.
On the Splunk master server, in the Forwarders: Deployment view, we see a new entry for the UF after each reboot of the provisioned server.
Is there a way to make the provisioned servers always use the same GUID? Is it possible to deploy instance.cfg via the deployment server?
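What I'm imagining is a fixed instance.cfg baked into the image, roughly like this (the GUID value is a placeholder):

# $SPLUNK_HOME/etc/instance.cfg
[general]
guid = 11111111-2222-3333-4444-555555555555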
Any help / info is welcome.
Thanks.
I am running two setups of Splunk: one in a datacenter and one in AWS.
DC: 2 search heads, 3 indexers, 1 deployment server & license manager
AWS: 2 search heads, 3 indexers, 1 deployment server & license manager
I am trying to add the AWS indexer cluster to the DC search heads. If this is possible, we will stop the AWS-hosted SHs, because we want to keep only one SH cluster that can search across the two distinct indexer clusters.
Please note that there is no replication or any connection between the AWS hosted and DC hosted indexer cluster. We don't want to setup multisite indexer clustering.
Can this be done?
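From what I've read about multi-cluster search, I suspect something like this on each DC search head is the mechanism, but I'd like confirmation (URIs and keys are placeholders):

# server.conf on each DC search head
[clustering]
mode = searchhead
master_uri = clustermaster:dc, clustermaster:aws

[clustermaster:dc]
master_uri = https://dc-master.example.com:8089
pass4SymmKey = <dc cluster key>

[clustermaster:aws]
master_uri = https://aws-master.example.com:8089
pass4SymmKey = <aws cluster key>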