Why is the Universal Forwarder executing regmon, powershell, and others without them being explicitly configured?
Hi,
why is my UF on Windows executing various splunk-* tools without them being configured in any input?
Every few minutes I see them in Sysmon:
splunk-powershell.exe
splunk-regmon.exe
splunk-powershell.exe
splunk-netmon.exe
splunk-admon.exe
splunk-MonitorNoHandle.exe
splunk-winprintmon.exe
I do not see them in any inputs.conf.
thx
afx
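For reference (an editorial aside, not part of the original post): one way to confirm which inputs the forwarder has actually merged from all apps, rather than just the local inputs.conf, is to dump the effective configuration with btool. The install path below assumes the default Windows location:
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list --debug
The --debug flag prints which file each setting comes from, which helps spot an app shipping Windows inputs you did not add yourself.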
↧
How to configure universal forwarder on Linux to send a log file to Splunk heavy forwarder?
Hi,
I installed and configured a UF on a Linux server to send syslog to a Splunk HF. I am now trying to also send an application log on the same server, say /opt/application/applog.log, to the HF. What do I need to modify in the UF .conf file(s)?
Thanks.
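A minimal sketch of what this typically involves, assuming the forwarder already sends its existing syslog data to the HF and only the new file input is missing. The index, sourcetype, and host:port below are placeholders:
# $SPLUNK_HOME/etc/system/local/inputs.conf on the UF
[monitor:///opt/application/applog.log]
index = main
sourcetype = applog
disabled = 0
# outputs.conf only needs touching if the HF is not already the forwarding target
[tcpout]
defaultGroup = hf_group
[tcpout:hf_group]
server = hf.example.com:9997
Restart the forwarder after editing so the new monitor stanza is picked up.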
↧
start request repeated too quickly for splunk.service
● splunk.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
Loaded: loaded (/etc/systemd/system/splunk.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Fri 2020-02-21 14:11:39 PST; 785ms ago
Process: 30472 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/%n (code=exited, status=0/SUCCESS)
Process: 30469 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/%n (code=exited, status=0/SUCCESS)
Process: 30468 ExecStart=/opt/splunk/splunkforwarder/bin/splunk _internal_launch_under_systemd (code=exited, status=1/FAILURE)
Main PID: 30468 (code=exited, status=1/FAILURE)
Feb 21 14:11:39 localhost systemd[1]: splunk.service: main process exited, code=exited, status=1/FAILURE
Feb 21 14:11:39 localhost systemd[1]: Unit splunk.service entered failed state.
Feb 21 14:11:39 localhost systemd[1]: splunk.service failed.
Feb 21 14:11:39 localhost systemd[1]: splunk.service holdoff time over, scheduling restart.
Feb 21 14:11:39 localhost systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Feb 21 14:11:39 localhost systemd[1]: start request repeated too quickly for splunk.service
Feb 21 14:11:39 localhost systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Feb 21 14:11:39 localhost systemd[1]: Unit splunk.service entered failed state.
Feb 21 14:11:39 localhost systemd[1]: splunk.service failed.
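A hedged troubleshooting sketch (not part of the original post): status=1 from ExecStart usually means splunkd itself failed to start, so running it by hand as the service user often surfaces the real error, and regenerating the unit fixes a stale path or user. The paths and the splunk user below are taken from the unit file above and may differ on your host:
# Run splunk directly to see the underlying startup error
sudo -u splunk /opt/splunk/splunkforwarder/bin/splunk start
# Regenerate the systemd unit if the install path or user has changed
sudo /opt/splunk/splunkforwarder/bin/splunk disable boot-start
sudo /opt/splunk/splunkforwarder/bin/splunk enable boot-start -user splunk -systemd-managed 1
sudo systemctl daemon-reload
sudo systemctl start splunk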
↧
Blacklist WinEventLog://Security events with user names ending in $
I'm trying to get a blacklist entry that works on Universal Forwarders to filter out specific event codes whose user field ends in $.
What I have now works in my test environment with uploaded sample logs, but not directly on the Universal Forwarder itself:
blacklist1 = EventCode="(4624|4634)" user=".*\$"
blacklist2 = EventCode="4672" Account_Name=".*\$"
What can I do to get this right so it actually works? I know that in the raw event log the matching line is space-indented and looks something like:
...
Subject:
Security ID: S-1-5-18
Account Name: something$
Account Domain: domain
...
Thank you!
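A hedged sketch of one common workaround: on the forwarder, WinEventLog blacklists only accept a fixed set of keys (EventCode, Message, User, and so on), not search-time field names such as user or Account_Name, which could explain why the rules behave differently on the UF. Matching the account suffix against the rendered Message text instead might look like this (illustrative regex, not a tested rule):
[WinEventLog://Security]
blacklist1 = EventCode="(4624|4634)" Message="(?s).*Account Name:\s+\S+\$"
blacklist2 = EventCode="4672" Message="(?s).*Account Name:\s+\S+\$"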
↧
Request for a Splunk upgrade shell script for Linux/Unix
Hi Team, I am currently working on a UF auto-installation script that has to automatically upgrade the UF package from v6.5.3 to v7.3.4 on all Linux boxes.
The script should work as follows:
- Check for any existing Splunk UF version on the Linux box; if UF v6.5.3 is already running, stop the UF agent, upgrade the Splunk UF package to v7.3.4 (untar the splunkforwarder.tgz package), and then start the Splunk services.
- After that, it should connect to a DS (updating deploymentclient.conf) with the DS host and port 8090.
- If the Linux box doesn't have any Splunk UF package installed, the script should do a fresh install of the UF v7.3.4 package on that server and then connect to the DS.
I wanted to check whether you have any reference shell script for the above upgrade/installation. Please note I will use the script for reference purposes only and won't use it directly, as I don't know much about shell scripting syntax. Request your help on this.
regards, Santosh
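For reference, a rough sketch of the logic described above; the package path, install prefix, and DS address are assumptions to adjust, and the script would need to run as root or via sudo:
#!/bin/bash
# Sketch only -- not production-ready.
SPLUNK_HOME=/opt/splunkforwarder
PKG=/tmp/splunkforwarder-7.3.4-Linux-x86_64.tgz        # hypothetical package file
DS=deploymentserver.example.com:8090                   # hypothetical deployment server
# If an existing UF is present, stop it before untarring the new release over it
if [ -x "$SPLUNK_HOME/bin/splunk" ]; then
    "$SPLUNK_HOME/bin/splunk" stop
fi
tar -xzf "$PKG" -C /opt
# Point the forwarder at the deployment server
mkdir -p "$SPLUNK_HOME/etc/system/local"
cat > "$SPLUNK_HOME/etc/system/local/deploymentclient.conf" <<EOF
[target-broker:deploymentServer]
targetUri = $DS
EOF
"$SPLUNK_HOME/bin/splunk" start --accept-license --answer-yes --no-prompt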
↧
Limiting RAM, CPU, and disk utilization on a Universal Forwarder
Hello Splunkers,
I want to know if we can limit the RAM, CPU, and disk utilization of the Universal Forwarder on the servers where it is installed.
Currently, based on my research, I understand that the bandwidth limit can be controlled using limits.conf.
Background: since I am installing the UF on production servers (*nix, Windows), I don't want it to choke resource utilization on the server; instead, I would like to limit RAM and CPU utilization to 2-5% and disk to 10-15%.
Thanks in advance.
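For what it's worth (a hedged aside): limits.conf caps data throughput (maxKBps), but the UF has no built-in RAM or CPU percentage caps, so those are usually enforced by the operating system. On a systemd-based Linux host, one option is a drop-in for the forwarder's unit; the unit name and the numbers below are assumptions:
# /etc/systemd/system/SplunkForwarder.service.d/limits.conf   (hypothetical unit name)
[Service]
CPUQuota=5%
MemoryMax=512M
# apply with: systemctl daemon-reload && systemctl restart SplunkForwarder
Older systemd versions use MemoryLimit= instead of MemoryMax=. On Windows, an OS-level mechanism would likewise be needed.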
↧
How to filter Windows event logs on the forwarder based on event codes?
Hi,
I am trying to pull event logs from remote machines using universal forwarders, and I have done the configuration in the inputs.conf files.
Below is the configuration in my inputs.conf file:
[WinEventLog://Application]
disabled = 0
index = win_events
crcSalt = <SOURCE>
[WinEventLog://Security]
disabled = 0
index = win_events
crcSalt = <SOURCE>
[WinEventLog://System]
disabled = 0
index = win_events
crcSalt = <SOURCE>
[WinEventLog://Setup]
disabled = 0
index = win_events
crcSalt = <SOURCE>
Now I don't want all event codes from these logs; I only need 4800 and 4801.
Is there any way to forward only the events with those two event codes to the index?
Thanks
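A hedged sketch of one way to do this on the UF (shown for the Security channel; adjust if 4800/4801 appear in a different log): the same stanza accepts a whitelist of event codes, so only those events are collected and forwarded:
[WinEventLog://Security]
disabled = 0
index = win_events
whitelist = 4800,4801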
↧
Splunk Forwarder enable boot start not working on Windows XP
Hi, I have the Universal Forwarder on my Windows XP machine. I enabled boot-start upon installation, but after rebooting the machine the Splunk forwarder is not running and has to be started manually. Has anyone encountered this type of problem, and is there a solution? Thanks!
↧
Splunk Universal Forwarder 7.2.x compatible with Linux kernel 4.x / RHEL 8?
Are the Splunk UF 7.2.x releases compatible with Linux kernel 4.x, specifically RHEL 8?
↧
One UF isn't connecting to the indexer
One of my forwarders is not connecting with the indexers. Another system that is identical is connecting just fine. I keep getting errors about the message being rejected because it's too big, but I can't figure out where to adjust the allowed message size. This error is from the indexer:
03-04-2020 15:45:26.122 +0000 ERROR TcpInputProc - Message rejected. Received unexpected message of size=369295616 bytes from src=x.x.x.x:57186 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
↧
Splunk forwarder preventing Docker rebuild
I am wondering if anyone has come across this issue before:
System and application versions:
• Docker version 18.09.4
• Splunk version 7.2.6 (?)
• Windows Server 2019 1809 Build
A summary of what we’ve discovered and background information:
• Splunk seems to prevent docker from starting docker containers, they are stuck in a “Created” state
• We do not use Splunk explicitly as our docker logging service, i.e. Splunk is not referenced in any docker config
• Docker and the SplunkForwarder service both start up on host boot
• Changing the dependencies on the service (i.e. docker start first or splunk start first) doesn’t fix the issue
• Stopping splunk while docker is running and then creating the containers works
o As soon as one container has started successfully, we can start splunk and still create more containers
• Restarting splunk while docker is running and then creating the containers does not work
Steps to reproduce on a machine:
1. Boot server up, docker and splunk start automatically
2. Attempt to run docker-compose to create our containers with no containers already running or in an exited state, docker gets stuck with containers in a “Created” state
Steps to mitigate issue:
1. When there are no containers running, stop the splunk service
2. Run docker-compose to create at least one container successfully
3. Start the splunk service
4. Run docker-compose to bring up any remaining containers
Any help or ideas for a workaround would be appreciated.
TIA
↧
How does a Splunk universal forwarder talk to an indexer?
Total newb here, so please be gentle. We are on the Windows platform and have Splunk Universal Forwarder 8.0.2 installed on many Windows 10 workstations as well as on a number of Windows Server 2012 R2 machines. I am aware of the C:\Program Files\SplunkUniversalForwarder\etc\system\local directory and know to modify files there rather than in the default location.
My question is: if all we have defined in the deploymentclient.conf file is
[target-broker:deploymentServer]
targetUri = OurDeploymentServer:8089
how are our logs getting to our cluster of indexers? By the way, everything is working fine; I joined this team after they had already built and configured our Splunk environment, so I am just trying to catch up. I was upgrading the universal forwarders to the latest version, hence my question: how does the data get to the indexers when all the forwarders know about is the deployment server?
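For reference (an editorial aside; hostnames below are placeholders): the deployment server only distributes configuration. The indexer addresses live in an outputs.conf, which the DS typically pushes down inside a deployed app under etc\apps\<appname>\local, and that is how a forwarder that only "knows about" the DS still forwards data. A typical pushed outputs.conf looks roughly like:
[tcpout]
defaultGroup = primary_indexers
[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997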
↧
Should I upgrade my universal forwarders after I upgrade my HF?
Hi team!
Should I upgrade my universal forwarders after I upgrade my HF?
Data > UF > HF > Indexer
Right now all is in 6.5.2 version. Indexer and HF will be in 7.3.4 soon.
Thanks!
Salut
↧
JSON file event breaking parsing on universal forwarder
I have a JSON file.
When I upload the file on the search head using the stanza below in props.conf, it is indexed properly.
Splunk 7.3.4
[json_test]
CHARSET = UTF-8
DATETIME_CONFIG = CURRENT
SEDCMD-cut_footer = s/\]\,\n\s*\"total\":.*$//g
SEDCMD-cut_header = s/^\{\n\s*\"matches\":\s\[//g
category = Structured
disabled = false
HEADER_FIELD_LINE_NUMBER = 3
SHOULD_LINEMERGE = 0
TRUNCATE = 0
INDEXED_EXTRACTIONS = json
KV_MODE = none
When the data is uploaded via the UF, however, it does not break into events.
**Universal Forwarder**
props.conf
[json_test]
CHARSET = UTF-8
INDEXED_EXTRACTIONS = json
inputs.conf
[monitor:///tmp/*.json]
disabled = 0
sourcetype = json_test
index = test_hr
crcSalt = REINDEXMEPLEASE
initCrcLength = 780
**Indexer**
props.conf
[json_test]
DATETIME_CONFIG = CURRENT
SEDCMD-cut_footer = s/\]\,\n\s*\"total\":.*$//g
SEDCMD-cut_header = s/^\{\n\s*\"matches\":\s\[//g
category = Structured
disabled = false
HEADER_FIELD_LINE_NUMBER = 3
SHOULD_LINEMERGE = 0
TRUNCATE = 0
**Search Head**
props.conf
[json_test]
KV_MODE = none
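One hedged observation (not from the original post): with INDEXED_EXTRACTIONS, structured parsing, including header handling and event breaking, happens on the forwarder, and that data then largely bypasses index-time parsing on the indexer, so the SEDCMD and breaking settings placed there may never run. A sketch of moving the relevant settings from the search-head test onto the UF's props.conf (note that SEDCMD is generally not applied by a universal forwarder, so the header/footer trimming may need a different approach, such as parsing on a heavy forwarder):
[json_test]
CHARSET = UTF-8
INDEXED_EXTRACTIONS = json
DATETIME_CONFIG = CURRENT
HEADER_FIELD_LINE_NUMBER = 3
SHOULD_LINEMERGE = 0
TRUNCATE = 0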
↧
Universal Forwarder vs Heavy Forwarder
Hi All,
Is there any recent test, .conf discussion, or doc that updates the Splunk blog from 2016 mentioned below:
https://www.splunk.com/en_us/blog/tips-and-tricks/universal-or-heavy-that-is-the-question.html
Is it still about 6 times lower with a UF?
↧
Reading large Files using Splunk UF
Hi,
I am currently trying to read a log file of about 10 GB in size. I have changed thruput to 0, but it still takes about 30 minutes to an hour for Splunk to finish reading the file. Is there a way to further increase the read speed of the Splunk UF?
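Two hedged knobs that are sometimes combined for this (values are illustrative, and the second assumes the host has spare CPU):
# limits.conf on the UF -- already in place per the post; 0 removes the throughput cap
[thruput]
maxKBps = 0
# server.conf on the UF -- adds a second ingestion pipeline
[general]
parallelIngestionPipelines = 2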
↧
Reset password of the Splunk service account (used for installing UFs)
Hi everyone,
I have an issue with Splunk UF installation on Windows regarding the user account. Previously I did all UF installations with a Splunk service account (a domain account) on all Windows servers. Now I have forgotten the password of that service account, and I'm not able to install with a local account (company policy). My question: if I reset the password of the service account, will that affect the behavior of the existing UFs?
↧
Splunk Universal Forwarder Upgrade
Hi ,
I am looking for some information on Splunk Universal Forwarder upgrades. We have 3000+ forwarders that need a mass upgrade. What things do I need to consider for the upgrade, such as backups? I am also looking for scripts to remotely upgrade the forwarders on Linux systems; please let me know if anyone has some available. I am looking for the same information for Windows as well.
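A hedged sketch of a remote Linux upgrade loop; the host list, package path, and install prefix are assumptions, and it only covers hosts that already have a UF installed:
#!/bin/bash
# Sketch only: push a new UF tarball to each host and restart the forwarder.
PKG=/tmp/splunkforwarder-8.0.2-Linux-x86_64.tgz    # hypothetical package file
while read -r host; do
  scp "$PKG" "$host:/tmp/"
  # -n keeps ssh from consuming the host list on stdin
  ssh -n "$host" "sudo /opt/splunkforwarder/bin/splunk stop; \
    sudo tar -xzf /tmp/splunkforwarder-8.0.2-Linux-x86_64.tgz -C /opt && \
    sudo /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt"
done < hosts.txt                                   # hypothetical host list, one hostname per line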
↧