What happens when the forwarder is configured to send data to a non-existent index?
Hello,
I would like to know what happens when the forwarder is configured to send data to a non-existent index, either with or without indexer acknowledgement enabled. All other parameters are left at their defaults.
I tried sending data to an index that had in fact not yet been created, but I couldn't find any error message showing me that something was wrong (I looked in the metrics.log and the splunkd.log of the forwarder).
Did I miss something?
Thank you in advance.
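(For anyone checking the same thing: in my experience the complaint is logged on the indexer side, not on the forwarder. A search along these lines may surface it; the exact message text can vary between versions, so treat this as a sketch to adapt:)

```
index=_internal source=*splunkd.log* "received event for unconfigured/disabled/deleted index"
```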
↧
Why is the universal forwarder shutting down?
I am installing universal forwarder 6.6.2 on Windows servers. On reboot splunkd starts, reports in, and then after syncing with the deployment server the service attempts to restart. After 360 seconds of failed attempts to phone home, the admin handler server control forces a shutdown and the service is no longer in a running state.
Thanks!
↧
Need help configuring I/O monitoring to capture data from universal forwarders
I'm looking for a procedure (or any help) to configure I/O monitoring so I can capture this data from universal forwarders.
Currently the iostat sourcetype is not showing any disk I/O; it only shows CPU and memory.
Can you guide me on setting this up so I can collect disk I/O?
Thanks
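For context, on *nix hosts disk I/O is usually collected by the iostat.sh scripted input that ships with the Splunk Add-on for Unix and Linux. A sketch of the inputs.conf stanza to enable it (the interval and index here are illustrative; confirm the script path in your installed version):

```
[script://./bin/iostat.sh]
interval = 60
sourcetype = iostat
source = iostat
index = os
disabled = 0
```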
↧
How to get data in Enterprise from universal forwarder
I installed Splunk Enterprise 7.0 on a Unix machine and wish to get data from a Windows machine (any data would suffice for now, since I'm new to Splunk and trying to grasp the concept of it all).
Some configs I did using the documentation available:
**Splunk Enterprise server (unix system)**
$ cat inputs.conf
[default]
host = SPLUNK
[splunktcp://9997]
disabled = 0
**Splunk Universal Forwarder (Windows Server machine)**
-> splunk add forward-server :9997
-> splunk set deploy-poll :9997
-> Added some config in 'inputs.conf'
# Windows platform specific input processor.
[WinEventLog://Application]
disabled = 0
[WinEventLog://Security]
disabled = 0
[WinEventLog://System]
disabled = 0
[monitor:///apache/*.log]
disabled = 0
-> splunk enable eventlog System
Specified input collection has been enabled
Now I want to add a Forwarder using the Splunk Web on my Enterprise system.
I log on to the website and select 'Add data' > 'Forward', but I get: 'There are currently no forwarders configured as deployment clients to this instance.'
Not sure what I'm doing wrong. However, when I search for data, I do see some results there from the Windows machine!
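One thing worth double-checking in setups like this: the deployment client talks to the deployment server's management port (8089 by default), which is separate from the receiving port 9997. A deploymentclient.conf sketch (the hostname is a placeholder):

```
[deployment-client]

[target-broker:deploymentServer]
targetUri = deployment-server.example.com:8089
```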
↧
Customize Splunk App for *nix
Hi All,
Hope you are doing good.
We have the Splunk App for *nix installed on our Linux application servers, and it is used to monitor stats. We have the TaniumClient software installed on those servers, and the partition belonging to this software doesn't have read permission for the Splunk user. Because of this, we see "Permission denied" error messages when the df.sh script from the Splunk App for *nix runs.
So we decided to blacklist the /opt/Tanium/ partition in the df.sh script. Could you please help me with how to blacklist this partition? Is it the same way we blacklist logs in the monitor stanza of inputs.conf?
Thanks in advance.
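As a sketch of the kind of filter sometimes added inside df.sh (the sample df output below is made up, and the app may offer a cleaner exclusion mechanism, so check its documentation first), a `grep -v` on the mount point drops the unreadable partition before the output is parsed:

```shell
# Simulated df output; drop any line for the unreadable /opt/Tanium mount.
printf '%s\n' \
  '/dev/sda1  52403200  10485760  /' \
  '/dev/sdb1  10485760   5242880  /opt/Tanium' \
  '/dev/sdc1  20971520   1048576  /var' |
grep -v '/opt/Tanium'
```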
↧
Is it possible to fetch application logs at the UF directly to my SH?
I want to fetch DNS and DHCP logs from my server directly to my local system, where I have my Splunk Enterprise instance, without implementing a HF or anything else.
Is it possible to do so? If yes, then how? Kindly help!
↧
Need to change lines in custom app
I generated an app today with an inputs.conf to push:
[monitor://]
index=
sourcetype=
recursive=true
but when it is pushed, it appears like:
[monitor://]index=sourcetype=recursive=true
This is the reason it is not working; however, when we changed the config back onto separate lines and restarted the UF, it worked.
Is there any way I can push the config exactly as written in the deployment-apps folder?
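For what it's worth, settings collapsing onto one line like this usually means the file was written without real newline characters (e.g. by a tool that mangled line endings). A quick sketch for regenerating the stanza with explicit newlines and verifying them (the path, index, and sourcetype are placeholders):

```shell
# Rebuild the stanza with one setting per line and explicit newlines.
printf '%s\n' \
  '[monitor:///var/log/myapp]' \
  'index = myindex' \
  'sourcetype = myapp_log' \
  'recursive = true' > inputs.conf

# Count the lines to confirm the newlines survived.
wc -l < inputs.conf
```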
↧
Universal Forwarder client showing up in wrong server class
Out of our deployment of about 1,000 UF clients, a handful of systems are reporting data to the wrong indexes -- even though they are clearly configured to point to the correct ones.
Here are the observations:
Lookup:
Name: daniels
Address: 10.14.108.60
Green_windows Server Class defined as:
10.14.96.*, 10.14.104.*, 10.14.105.*, 10.14.106.*, 10.14.107.*, 10.14.108.* <-- note the 108.*
Red_windows server class:
10.14.120.*, 10.14.121.*, 10.14.112.*, 10.14.12.*
BUT, he's showing up with Red's configurations:
daniels
Apps
Red_base_config, Red_windows
Server Classes
Red_windows
But wait, there's more: he's actually sending logs to two indexes: Red_windows AND Orange_windows... (but not Green's)
Sanity check -- the server class for Orange_windows is configured as:
10.14.40.*, 10.14.56.*, 10.14.72.*, 10.14.62.*, 10.14.78.*, 10.14.64.*, 10.14.13.*
We've confirmed the packages being deployed all point to the correct indexes -- and others in the same range are actually working properly!
Client is a Windows 10 system, if that matters...
Thoughts?
Thanks!
↧
Splunk Universal Forwarder missing events
Hi all,
Have you ever seen a UF missing events? I’ve observed some of our UF’s missing ~8 seconds of events and then picking up halfway through the event they reach. The gaps are creating some muddy data and it doesn’t seem to be limited to one server, I’ve got a list of 100 or so across all of our environments and corresponding Splunk clusters.
Here's a 3 line example of what Splunk is seeing in the source(/app/search/show_source?blah). I've been able to manually confirm that there is a gap and plenty of logs between.
2017-12-03 22:25:37 GET /Something/Something/1 from=2017-12-02&to=2017-12-04 80 - 0.0.0.0 HTTP/1.1 - - Some.url.was.here.com.au 200 0 0 00000 000 00 - HasedKeyWasHere ServiceName -
0.0.0.0 HTTP/1.1 - - ome.url.was.here.com.au 200 0 0 000 000 0 - HasedKeyWasHere ServiceName -
202017-12-03 22:25:45 GET /Something/Something/1 from=2017-12-02&to=2017-12-04 80 - 0.0.0.0 HTTP/1.1 - - Some.url.was.here.com.au 200 0 0 00000 000 00 - HasedKeyWasHere ServiceName -
I've tried this with and without line-breaking logic in props.conf to see if it would make any difference, with no success. Which is not entirely surprising in hindsight.
It is worth mentioning that these are all IIS logs being forwarded to a 6-peer-node cluster with no heavy forwarders in between.
↧
Is there a way to install universal forwarders on a bunch of servers at a time? Thank you
Is there a way to install universal forwarders on a bunch of servers at a time? Thank you
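For Windows targets, one common route (assuming you have a software-distribution tool such as SCCM, Group Policy, or Ansible to push the command) is a silent MSI install; the flags below are from the Splunk install docs as I recall them, so verify against the current manual, and the version string and hostname are placeholders:

```
msiexec.exe /i splunkforwarder-<version>-x64-release.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="ds.example.com:8089" /quiet
```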
↧
Can a single UF forward data to multiple HFs?
Is it possible to send data from a universal forwarder to multiple heavy forwarders?
If yes, how can I specify the HF group?
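For reference, a single tcpout group in outputs.conf can list several receivers, and the UF will automatically load-balance across them; a sketch with placeholder hostnames:

```
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = hf1.example.com:9997, hf2.example.com:9997
```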
↧
Timeout talking to Deployment Server Windows
I'm seeing this message in the splunkd.log file just before a Universal Forwarder starts a shutdown.
11-25-2017 18:38:11.690 -0800 INFO NetUtils - Connect timeout - waited for 5 seconds. ip=aaa.bbb.ccc.ddd port=8089
11-25-2017 18:38:11.690 -0800 WARN HTTPClient - Connect to=deployment_server:8089 timed out; exceeded 5sec
11-25-2017 18:38:11.690 -0800 WARN HTTPClient - Download file cancelled due to: Connect to=deployment_server:8089 timed out; exceeded 5sec
I'm trying to figure out whether the value of 5 seconds can be changed. I haven't found any configuration file containing it. Any ideas where the 5-second value comes from?
TIA,
Joe
↧
How does a UF handle both metrics and event data?
I have my UF and indexer set up, and what I want to do is send both metrics and event data from the UF to the indexer.
From my understanding, what I could do is set up two stanzas in **inputs.conf** on the indexer, like below:
[tcp://9997]
connection_host = dns
index = event_index
sourcetype = syslog
[tcp://9998]
connection_host = dns
index = metric_index
sourcetype = syslog
and the ideal situation would be sending metric data to `:9998` and event data to `:9997` separately,
but the problem is that it seems impossible to achieve this by configuring **outputs.conf** alone (I could send both kinds of data to both ports using the data-cloning technique mentioned in the docs, but that's not ideal).
So is there a way to achieve this separation of data forwarding?
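One technique that may fit is per-input routing on the forwarder: define two tcpout groups in **outputs.conf** and pin each input to a group with `_TCP_ROUTING` in **inputs.conf**. A sketch (the hostnames, paths, and group names are illustrative):

```
# outputs.conf on the UF
[tcpout:event_group]
server = indexer.example.com:9997

[tcpout:metric_group]
server = indexer.example.com:9998

# inputs.conf on the UF
[monitor:///var/log/app.log]
_TCP_ROUTING = event_group

[monitor:///var/metrics/app_metrics.log]
_TCP_ROUTING = metric_group
```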
↧
UF needs to be restarted every time to get data
We have configured our UFs to send data from a particular folder.
But every time, the UF needs to be stopped and started again, after which it starts sending data.
I am surprised by this behavior, as it is not feasible to restart the services every time we want to get data into Splunk.
↧
Need an app to restart Splunk UF service on Windows every 30 min
Hi,
I need to deploy an app from the deployment server that will restart the Splunk UF service installed on Windows servers.
Can someone please help me with what I should put in the $SPLUNK_HOME/etc/deployment-apps/restart_app/local folder?
Thanks.
Vikram.
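A rough sketch of one way this is sometimes attempted: a scripted input with a 1800-second interval that calls a batch file invoking `splunk restart`. All paths and names below are placeholders, and note that a Splunk instance restarting itself from its own scripted input can be fragile, so test carefully before deploying widely:

```
# restart_app/local/inputs.conf
[script://.\bin\restart_uf.bat]
interval = 1800
disabled = 0

# restart_app/bin/restart_uf.bat
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" restart
```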
↧
When a universal forwarder is unable to connect to an indexer, will the forwarder still be collecting data from the server?
Hi Team,
We have a log file on one of our servers which keeps being generated in the directory every 10 minutes, as below:
12/13/17 10:10 log1213171010
12/13/17 10:20 log1213171020
12/13/17 10:30 log1213171030
12/13/17 10:40 log1213171040
...........
...........
12/13/17 11:50 log1213171150 and keeps going.
We had an issue where our Splunk indexer was down for about 2 hours; we have since fixed the indexer issue. But we noticed that the above logs are not in Splunk for the particular span of time when the indexer was down, even though the forwarder was up and running fine the whole time.
I have a few questions about this:
1. When the universal forwarder is not able to connect to its indexer (standalone), will the forwarder still be collecting data from the server?
2. If the forwarder is collecting the data, will it resend the old data once the connection with the indexer is re-established?
Please help me on this.
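For what it's worth on question 2: with a monitor input, the UF keeps a per-file checkpoint (the so-called fishbucket) and, when the output queue blocks because the indexer is unreachable, it stops reading and later resumes from the checkpoint, so data is normally recovered unless the files rotate away first. The output queue size in outputs.conf is sometimes raised for extra in-memory buffering; the value below is illustrative:

```
[tcpout]
maxQueueSize = 10MB
```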
↧