Can someone please provide an example of what the outputs.conf file would look like on a universal forwarder in an index clustered environment?
For example: 1 search head, 2 indexers, 1 cluster master, and 4 nodes with universal forwarders ready to send data once the setup is complete.
Replication factor 2, search factor 2.
1) idx1:9997
2) idx2:9997
3) clustermaster:8089
I've been searching Splunk documentation, but it only provides examples for load balancing forwarders.
Can someone please provide an example of what the outputs.conf file should look like?
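From reading the clustering docs, my best guess is one of the two sketches below, using the hostnames from my example above, but I am not sure which is right. A static list with load balancing across the cluster peers:
[tcpout]
defaultGroup = cluster_peers
[tcpout:cluster_peers]
server = idx1:9997, idx2:9997
Or, using indexer discovery so the cluster master hands out the current peer list (the pass4SymmKey below is a placeholder):
[tcpout]
defaultGroup = cluster_peers
[indexer_discovery:cluster1]
master_uri = https://clustermaster:8089
pass4SymmKey = <discovery_key>
[tcpout:cluster_peers]
indexerDiscovery = cluster1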
↧
Forwarding to Splunk cloud from AWS and on prem
Hi,
Our setup is as follows:
- Managed Splunk Cloud instance
- Heavy Forwarder (on-prem)
- Syslog server (on-prem)
Our on-prem servers have universal forwarders on them and forward to the HF, which then sends to Splunk Cloud.
We are starting to spin up EC2 instances in AWS and want to do the same monitoring, so a UF installed on each instance and forwarding to Splunk Cloud.
My question is how do we do this?
It seems a bit daft to send our logs back to our on-premises HF to then send to the cloud.
So should we create an HF in our AWS VPC and point all our EC2 instances towards that?
How has everyone else tackled this issue?
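For reference, my understanding is that a UF can also send straight to the Splunk Cloud indexers by installing the forwarder credentials app from our cloud stack, which ships an outputs.conf roughly like this (the stack name below is a made-up placeholder):
[tcpout:splunkcloud]
server = inputs1.example.splunkcloud.com:9997, inputs2.example.splunkcloud.com:9997
compressed = true
So the question is really whether to do that from every EC2 instance or to funnel everything through an HF in the VPC.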
Cheers,
Fraser
↧
Ensure regex filter in transforms.conf and stanza in props.conf only apply to a specific input
Hello. As I understand it, my props.conf and transforms.conf (below) should, in theory, let me filter out the events that match the specified regex.
props.conf
[filter_out_auth_logs]
TRANSFORMS-tonull = filter_out_word
transforms.conf
[filter_out_word]
REGEX = WORD\[.*?\]:
DEST_KEY = queue
FORMAT = nullQueue
What I am unsure of is how to ensure this filter is applied only to a specific input.
For example, given the following entries in the inputs.conf file, where do I specify that the input sending logs to myindex1 should use the filtering specified in the props and transforms configs?
[monitor:///var/log/syslog]
index = myindex1
sourcetype = syslog
[monitor:///var/log/syslog.log]
index = myindex2
sourcetype = syslog
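My current guess, from the props.conf spec, is that the stanza header itself has to name a host, source, or sourcetype; and since both inputs share sourcetype=syslog, keying on source:: seems like the only way to hit just one of them. So scoping the transform to only the first input would look something like this, replacing my [filter_out_auth_logs] stanza:
props.conf
[source::/var/log/syslog]
TRANSFORMS-tonull = filter_out_word
Is that correct?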
↧
Can default certificate be used for communication between universal forwarder and heavy forwarder in Splunk cloud?
I am pretty new to Splunk. We are implementing a heavy forwarder on an EC2 instance, which receives data from UFs and forwards it to Splunk Cloud. I am trying to test the data forwarding by configuring the default Splunk certs in the HF's inputs.conf and the UF's outputs.conf, but I am seeing the errors below on the HF. Any pointers would be much appreciated.
WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server certificate B', alert_description='unknown CA'.
ERROR TcpOutputFd - Connection to host=xxx.xxx.xxx.xxx:9997 failed. sock_error = 0. SSL Error = error:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.
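For reference, what I have configured so far is roughly the following sketch, using the out-of-the-box certificates, so the paths and the password are the Splunk defaults rather than anything we generated:
inputs.conf on the HF:
[splunktcp-ssl:9997]
[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
outputs.conf on the UF:
[tcpout:hf_group]
server = <HF_IP>:9997
clientCert = $SPLUNK_HOME/etc/auth/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslPassword = password
From what I have read, the "unknown CA" alert usually means the client cannot validate the server certificate against the CA bundle in sslRootCAPath, but I am not sure how that applies to the default certs.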
↧
Hurricane Labs Add-On for Unified2 compatibility?
Is this app compatible with the latest versions of Splunk and the Splunk UF? Is it intended to replace the need for barnyard2?
↧
Failed to set up Universal forwarder with docker compose
I want to set up a universal forwarder that receives logs from a syslog server (via a shared volume) and sends them to a receiver.
For some reason I get the error below on my forwarder container:
splunk-forwarder_1 | TASK [splunk_universal_forwarder : Disable indexing on the current node] *******
splunk-forwarder_1 | fatal: [localhost]: FAILED! => {"cache_control": "no-store, no-cache, must-revalidate, max-age=0", "changed": false, "connection": "Close", "content": "In handler 'conf-outputs': Could not flush changes to disk: /nobody/system/outputs/indexAndForward/index: ConfPathMapper: /opt/splunkforwarder/etc/system/local", "content_length": "279", "content_type": "text/xml; charset=UTF-8", "date": "Tue, 06 Aug 2019 08:23:31 GMT", "elapsed": 0, "expires": "Thu, 26 Oct 1978 00:00:00 GMT", "msg": "Status code was 500 and not [201, 409]: HTTP Error 500: Internal Server Error", "redirected": false, "server": "Splunkd", "status": 500, "url": "https://127.0.0.1:8089/servicesNS/nobody/system/configs/conf-outputs", "vary": "Cookie, Authorization", "x_content_type_options": "nosniff", "x_frame_options": "SAMEORIGIN"}
The outputs.conf on the forwarder:
[tcpout:splunkreceiver]
server=splunkreceiver:9997
**When I remove this file, the error is gone, so I guess the problem is in this file.**
My docker-compose.yml:
syslog-server:
  build: './collector'
  ports:
    - '8081:8081'
  volumes:
    - syslog-logs:/var/log/syslog-ng
  depends_on:
    - splunk-forwarder
splunk-forwarder:
  hostname: splunkuniversalforwarder
  image: splunk/universalforwarder
  ports:
    - '8082:8082'
  volumes:
    - ./forwarder/inputs.conf:/opt/splunkforwarder/etc/system/local/inputs.conf
    - ./forwarder/outputs.conf:/opt/splunkforwarder/etc/system/local/outputs.conf
    - syslog-logs:/opt/splunkforwarder/var/log
  env_file:
    - ./forwarder/forwarder.env
  depends_on:
    - splunk-receiver
splunk-receiver:
  hostname: splunkreceiver
  image: splunk/splunk:latest
  ports:
    - '8083:8083'
  env_file:
    - ./receiver/receiver.env
  volumes:
    - ./receiver/inputs.conf:/opt/splunk/etc/system/local/inputs.conf
Any Ideas?
------------------
More files:
The inputs.conf on the forwarder:
[monitor:///opt/splunkforwarder/var/log]
index=my-index
sourcetype=my-source-type
disabled = 0
The inputs.conf on the receiver:
[splunktcp://9997]
disabled = 0
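One thing I may try: dropping the mounted outputs.conf entirely and letting the container generate it, since the docker-splunk images document an environment variable for the forward target. A sketch for forwarder.env, assuming SPLUNK_STANDALONE_URL behaves as documented (values are placeholders):
# forwarder.env
SPLUNK_START_ARGS=--accept-license
SPLUNK_PASSWORD=<admin_password>
# Tell the UF's Ansible provisioning where to forward data
SPLUNK_STANDALONE_URL=splunkreceiver:9997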
↧
Problems running the file_meta_data app on AIX 7.x
Hi, I am trying to run the file_meta_data app on AIX and keep getting an exit code of 1 from introspection.
It runs successfully for me on Linux, so I believe my basic config setup is working properly.
Versions:
splunk universal forwarder version 7.2.6
AIX version 7.1
python version 2.6.2
file_meta_data app version 1.4.2
inputs.conf config sample
[file_meta_data://blah837P]
file_path = /npc/clients/blah837P/
host=
interval = 2m
recurse = 1
only_if_changed = 1
include_file_hash = 0
depth_limit = 10000
# file_filter = 999*
index = app_custom
sourcetype = db:meta:files
[file_meta_data://ntst277CA]
file_path = /npc/clients/blah/277CA/
host=
interval = 2m
recurse = 1
only_if_changed = 1
include_file_hash = 0
depth_limit = 10000
# file_filter = 999*
index = app_custom
sourcetype = db:meta:files
error message from splunkd.log:
08-07-2019 11:08:32.784 -0400 ERROR ModularInputs - Introspecting scheme=file_meta_data: script running failed (exited with code 1).
08-07-2019 11:08:32.784 -0400 ERROR ModularInputs - Unable to initialize modular input "file_meta_data" defined inside the app "ntst_app_file_meta_data": Introspecting scheme=file_meta_data: script running failed (exited with code 1).
Searching index=_internal ExecProcessor "file_meta_data" sourcetype=splunkd yields no results for this host.
Searching index=_internal sourcetype=file_meta_data_modular_input also yields no results for this host.
It acts as though it is unable to run the Python script.
Any thoughts on how to fix or troubleshoot?
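One check I am planning, in case it helps others: run the introspection by hand to see the real traceback. This is a sketch, assuming the modular input script sits at the usual bin path inside the app (the script filename is my guess):
# Run the scheme introspection directly and watch for a Python traceback
/opt/splunkforwarder/bin/splunk cmd python /opt/splunkforwarder/etc/apps/ntst_app_file_meta_data/bin/file_meta_data.py --scheme
If that fails because no interpreter is found, that might itself be the answer, since my understanding is that universal forwarders do not ship with a bundled Python.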
↧
Universal Forwarder
I installed the UF on Windows 10 following the steps on the Splunk website.
After finishing, I cannot find the program anywhere, although it appears in the list of installed programs in Control Panel.
I have tried this many times with no change.
Please let me know what happened and what the solution is.
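The only checks I have found to try so far, assuming the default install path (I gather the UF has no GUI of its own, so there may be nothing to launch from the Start menu):
sc query SplunkForwarder
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" status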
↧
inputs.conf monitor stanza for Windows Universal Forwarder with wildcards not working
I'm facing a problem with writing a stanza that would collect log files from a directory tree. The tree is (example):
D:\Log\App\Module1\Log\%timestamp%-actual.log
D:\Log\App\Module2\Log\%timestamp%-actual.log
D:\Log\App\Module3\Log\%timestamp%-actual.log
I wish to grab the .log files from the tree.
Thus I wrote into inputs.conf:
[MonitorNoHandle://D:\Log\App\*\Log\*.log]
This isn't really working. In fact, I've tried several ways, and none work (just two examples below):
[MonitorNoHandle://D:\Log\App\...]
whitelist = \\*\.log$
[MonitorNoHandle://D:\Log\App\Module\Log]
whitelist = \\*\.log$
Below each of the above stanzas I also place:
disabled = 0
index = test
sourcetype = app-log
Can anyone help with the stanza wildcards?
I've read several posts on the forums already, not to mention the documentation, and this doesn't seem to work.
There are no obvious errors (log_level > INFO) after `splunk reload deploy-server`; the app is downloaded to the forwarders, but the logs are not coming in.
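My next attempt, for what it's worth, is the plain monitor input, which from my reading of the docs does support `...` recursion and whitelists (whereas MonitorNoHandle appears to target single files only). A sketch with my paths:
[monitor://D:\Log\App\...\Log]
whitelist = \.log$
disabled = 0
index = test
sourcetype = app-log
Would that be the right direction?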
↧
splunk-perfmon.exe errors: Counter is not found
I noticed that after updating the Universal Forwarder to 7.3.1 on Windows 10 Pro (64-bit) Version 1809 (not sure whether it was that update or a Windows update), I get about 2735 lines of the following type, around the same time each day, in the forwarder's splunkd.log. Anyone got an idea of how to fix this?
08-16-2019 20:56:04.314 -0700 ERROR ExecProcessor - message from "D:\SplunkUniversalForwarder\bin\splunk-perfmon.exe" splunk-perfmon - OutputHandler::composeOutput: Counter is not found: IO Data Bytes/sec
08-16-2019 20:56:04.314 -0700 ERROR ExecProcessor - message from "D:\SplunkUniversalForwarder\bin\splunk-perfmon.exe" splunk-perfmon - OutputHandler::composeOutput: Counter is not found: IO Other Bytes/sec
08-16-2019 20:56:04.314 -0700 ERROR ExecProcessor - message from "D:\SplunkUniversalForwarder\bin\splunk-perfmon.exe" splunk-perfmon - OutputHandler::composeOutput: Counter is not found: % Processor Time
08-16-2019 20:56:04.314 -0700 ERROR ExecProcessor - message from "D:\SplunkUniversalForwarder\bin\splunk-perfmon.exe" splunk-perfmon - OutputHandler::composeOutput: Counter is not found: % User Time
08-16-2019 20:56:04.314 -0700 ERROR ExecProcessor - message from "D:\SplunkUniversalForwarder\bin\splunk-perfmon.exe" splunk-perfmon - OutputHandler::composeOutput: Counter is not found: % Privileged Time
08-16-2019 20:56:04.314 -0700 ERROR ExecProcessor - message from "D:\SplunkUniversalForwarder\bin\splunk-perfmon.exe" splunk-perfmon - OutputHandler::composeOutput: Counter is not found: Page Faults/sec
... and more lines ...
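From what I have read, "Counter is not found" usually points at the Windows performance counter registry missing those counters rather than at Splunk itself. The commonly suggested repair, which I have not yet tested myself, is to rebuild the counters from an elevated command prompt:
lodctr /R
After that, restarting the SplunkForwarder service should let splunk-perfmon.exe re-resolve the counters. Can anyone confirm?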
↧
Squid proxy & universal forwarder
Hello,
I'm trying to send data from a directory on a server to Splunk Cloud using the universal forwarder. This traffic goes through a Squid proxy. I've tried to configure the proxy in server.conf:
[proxyConfig]
http_proxy = http//:8080
https_proxy = https//:8080
Port 8080 is open for tcp traffic.
I am able to connect from the server to the proxy using telnet, but I am not able to connect to the indexers using telnet. However, this should still be possible when connecting from the universal forwarder using the forwarder credentials package, right?
The forwarder seems to be unable to connect to the indexers. splunkd file has the following warnings:
TcpOutputProc - 'sslCertPath' deprecated; use 'clientCert' instead..
Cooked connection to ip=:9997 timed out.
I also don't see anything about the proxy I configured in splunkd.log; should it show up there?
Does anyone have an idea on how to troubleshoot this issue?
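One thing I am unsure about: my reading of the docs is that [proxyConfig] only covers splunkd's HTTP(S) traffic (deployment server, app requests), while forwarder-to-indexer traffic is raw TCP and can only traverse a SOCKS5 proxy, set per output group in outputs.conf. A sketch, with hypothetical hostnames:
[tcpout:splunkcloud]
server = inputs1.example.splunkcloud.com:9997
socksServer = proxy.example.com:1080
Does that match other people's experience with Squid in between?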
Thanks, kind regards,
Willem Jongeneel
↧
Splunk unexpected timestamp parsing behavior
Greetings,
In my environment, I have set up a Universal Forwarder that monitors a single server .log file, which is then forwarded to a Splunk indexer instance for parsing as a specific sourcetype (log4j). My Universal Forwarder configuration is as follows:
inputs.conf
[default]
host = 1
[monitor://server.log]
sourcetype=log4j
index= targetIndex
On the indexer, I have noticed several issues, with both timestamp parsing and event breaking. As you can see in the following image, there are events mixed in whose local timestamps date back 3 hours, yet Splunk has assigned the current time to them. On top of that, Splunk has made separate events out of the Headers: and Payload: entries, which should have been part of the event below them. Note that these events all come from the same host and all have the same sourcetype. ![alt text][1]
For additional context, the following image shows the format of the .log file as seen on the forwarding instance. Note the slight gap between the second event's Content-Type and Headers fields, which, I believe, is what causes Splunk to assign them to a separate event.
![alt text][2]
[1]: /storage/temp/274494-question2.jpg
[2]: /storage/temp/274493-question1.jpg
Here is the props.conf that I currently have set on my indexer instance:
[log4j]
BREAK_ONLY_BEFORE=\d\d\d\d-\d\d-\d\d\s\d\d:\d\d:\d\d.\d\d\d
MAX_TIMESTAMP_LOOKAHEAD=23
TZ=Europe/Riga
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%3NZ
TIME_PREFIX=^
SHOULD_LINEMERGE=true
As well as limits.conf, although to my understanding it shouldn't affect parsing behavior:
limits.conf
[search]
max_rawsize_perchunk = 0
To summarize:
1. Splunk is unexpectedly breaking up events;
2. There are events dated back exactly 3 hours mixed in with current events;
Could this be a timezone issue? Both instances appear to have the same timezone (EEST), yet there are events dated back exactly 3 hours mixed in with current events. What could be the cause?
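For what it's worth, one variant I am considering is turning off line merging and breaking explicitly on the leading timestamp, which I understand is the generally recommended pattern. A sketch, assuming every event starts with that timestamp (I also dropped the literal Z from TIME_FORMAT, since the raw events do not appear to contain one):
[log4j]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
TZ = Europe/Riga
Would that address both the event breaking and the 3-hour offset?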
Thanks in advance!
↧
JSON fields are extracted/displayed twice
JSON fields are extracted twice.
On the Universal Forwarder (7.0.3), the `props.conf` settings are as follows:
[my_sourcetype]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
INDEXED_EXTRACTIONS=json
KV_MODE=none
category=Structured
disabled=false
pulldown_type=true
TIMESTAMP_FIELDS=timestamp
On the Search Head (7.2.6), I have tried all combinations of the following in `props.conf`:
[my_sourcetype]
INDEXED_EXTRACTIONS=json
KV_MODE=none
AUTO_KV_JSON = false
↧
How to figure out if forwarders are utilizing props or transforms?
We have Universal Forwarder on our windows servers varying in version from 6.2.3 to 7.1.3. Our Splunk Enterprise version is 7.0.1 (upgrading soon).
I was always under the impression that formatting data on a UF was impossible, but I learned today that in some rare circumstances (structured data) it can be done.
https://docs.splunk.com/Documentation/Splunk/6.1.2/Data/Extractfieldsfromfileheadersatindextime#Forwa
My question: is there a way to tell, with a search, which forwarders (if any) are utilizing props or transforms?
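The closest thing I have found so far is not a search: on any single forwarder, btool prints the effective props/transforms configuration and which file each setting comes from. A sketch, assuming the default Windows install path:
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool props list --debug
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool transforms list --debug
But running that across hundreds of forwarders does not scale, hence the question about doing it with a search.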
↧
How do I forward and delete logs?
I would like to be able to forward logs and then delete them using a UF. How can I do this?
For the sake of the Splunk community, it would be nice if this question had a run-anywhere solution. However, I will also detail my use case specifically.
I am using Windows Event Forwarding (WEF) to collect 4800/4801 Windows security logs from 2000 of our workstations into a Windows Event Collector (WEC) that has a UF on it. I only spun up the WEC VM with an 80GB disk, as there is no reason to assign more disk space to merely a collection node, and storage is money. I can forward the logs from the WEC without a problem, but I need to be able to purge the logs after forwarding.
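The mechanism I believe is relevant, as a run-anywhere sketch: the batch input reads files once and, with move_policy = sinkhole, deletes them after indexing. The path and index below are hypothetical:
[batch://C:\WEC\export\*.log]
move_policy = sinkhole
index = wineventlog
disabled = 0
I realize this only applies if the events land on disk as files; if the WEC keeps them in the ForwardedEvents channel, that would need a different approach (e.g., capping the channel size), which is part of what I am asking.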
↧
Splunk Windows universal forwarder zip file
Hi Team,
I am facing issues with the Splunk Universal Forwarder installation in a Windows environment.
Going through the Splunk docs, I came to understand that the ZIP distribution of the Universal Forwarder for Windows is provided only by the Splunk team.
Could you please help me with this installation and with obtaining the ZIP file?
Best Regards,
Indudhar
↧
Determine which Active servers with a Universal Forwarder are NOT sending logs to Splunk
We have a bunch of servers with UFs installed. These servers may have different operational states. For example, "Active", "Build in Progress", "Decommissioned", and "Decom in Progress". We use ServiceNow for the asset inventory.
This is the search query we use to determine the versions of the UFs installed:
index=_internal source=*metrics.log component=Metrics group=tcpin_connections | dedup hostname | table hostname sourceIp os arch version
I would like to be able to get the Active servers that have the Splunk UF installed but are NOT reporting to Splunk.
Also, I am looking for a way to dynamically update the list of Active servers, since this list changes whenever new servers are onboarded or old servers are decommissioned.
I've looked at the Splunk App/Add-on for ServiceNow and could not find an option to do this.
I've also looked at https://docs.splunk.com/Documentation/Splunk/7.3.1/DMC/Configureforwardermonitoring, but am not sure how to go about configuring it to dynamically update the asset list with the Active servers.
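One pattern I am considering, assuming I can export the ServiceNow data into a lookup (a hypothetical active_servers.csv with a host column): take the Active list and subtract every host that has phoned home recently, reusing the metrics search above.
| inputlookup active_servers.csv
| search NOT [ search index=_internal source=*metrics.log component=Metrics group=tcpin_connections earliest=-24h | dedup hostname | rename hostname AS host | fields host ]
The lookup itself could then be refreshed on a schedule from whatever ServiceNow export is available, which would handle the onboarding/decommissioning churn.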
Any help will be much appreciated.
↧
What is procedure to upgrade universal and heavy forwarders?
Hello,
We have around 13 heavy forwarders. How does the upgrade process work: should we log into each instance and upgrade it, or is there a way to upgrade through the deployment server? Likewise, we have 500+ universal forwarders; what is the recommended way to upgrade every UF?
Thanks in advance.
↧