Hi,
I am trying to install Linux auditd on a universal forwarder. The app has been installed by support on Splunk Cloud.
The UF is installed on a syslog server and forwards data directly to Splunk Cloud, with no HF or indexer in between. I referred to
github.com/doksu/splunk_auditd/wiki/Installation-and-Configuration and did not find any info about installing on a UF.
After installing the app on Splunk Cloud, the Unix logs (some non-audit logs as well) are getting tagged with eventtype=auditd.
I would like to know what changes need to be made on the UF.
Is a change required to the inputs.conf file, and if so, what should be added there?
Any other helpful tips would be great.
Here is a sample log:
Aug 21 20:24:34 10.10.0.1 <133>XXX: NetScreen device_id=XXX [Root]system-notification-00257(traffic): start_time="2017-08-21 15:03:59" duration=0 policy_id=320001 service=proto:112/port:0 proto=112 src zone=Null dst zone=self action=Deny sent=0 rcvd=56 src=YYYY dst=ZZZZ session_id=0
action = Deny dst = ZZZZZ eventtype = auditd file os resource unix eventtype = auditd_events eventtype = nix-all-logs host = YYYY sent = 0 service = proto:112/port:0 source = /logs/YYYYY/2017/08/21/user.log sourcetype = syslog src = YYYYY tag = file tag = os tag = resource tag = unix
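For what it's worth, a minimal monitor stanza on the UF might look like the sketch below. The path and sourcetype are assumptions (the default Linux audit log location and a plausible sourcetype name); verify both against the splunk_auditd documentation before using.

```
# Sketch only: collect the Linux audit log on the UF.
# Path and sourcetype are assumptions; confirm what splunk_auditd expects.
[monitor:///var/log/audit/audit.log]
sourcetype = auditd
disabled = 0
```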
Thanks in advance.
↧
Linux auditD install on Universal forwarder
↧
Can I use a Splunk universal forwarder to monitor memory, disk I/O, and CPU consumption?
Hello Splunkers,
I want to ask about monitoring the Splunk Universal Forwarder's memory, CPU, and disk I/O consumption on client machines. I can do this for a full Splunk Enterprise instance using a DMC server, but I can't do it for a Splunk universal forwarder.
Is there any solution? Thanks in advance.
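As a starting point, forwarders can emit their own resource-usage introspection data (this may need to be enabled on the UF). A search sketch against it, assuming the standard `_introspection` field names:

```
index=_introspection sourcetype=splunk_resource_usage component=PerProcess
    host=<your_forwarder> data.process=splunkd
| timechart avg(data.pct_cpu) avg(data.pct_memory)
```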
↧
↧
Why are there many duplicate events in the indexer cluster?
I have a single site cluster that contains 5 indexers, 4 search heads, a master node, and a deployer. There are also some universal forwarders with load balancing.
All events in the indexer cluster come from universal forwarders. The data flow is as follows (the most common cluster architecture):
Server/Host (UF installed here) ————TCP————> indexer cluster
Server/Host (syslog) ————> Universal Forwarder ————TCP————> indexer cluster
Server/Host (UF monitors a file) ————TCP————> indexer cluster
So here are my questions:
1. Why do my searches return duplicate events? Is it because I'm using TCP? https://answers.splunk.com/answers/537368/why-is-there-event-duplication-via-tcp-port.html
2. I have disabled the useACK setting in outputs.conf on the UF.
3. What are the common causes of duplicate events? Please list them so I can rule them out one by one. Thank you.
Forgive me for my English
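To gauge the scale of the problem, a rough search like this sketch can surface exact-duplicate events (the index name is a placeholder; adjust the split-by fields to your data):

```
index=<your_index>
| stats count by host, source, _time, _raw
| where count > 1
```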
↧
Why am I seeing these extra fields when I log a BZ2 file?
One of the log files being monitored by Splunk is a bz2 file. It is being read by the UF on the server. The local/props.conf in the add-on to process the events looks like this:
[mvm:csv]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = csv
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = mvm_assets_last_found_date_time
TRUNCATE = 999999
TZ = UTC
When I examine the events I see the following:
"Rising_Column","mvm_resultcode","mvm_assetid","mvm_hostid","vulnstartdate","vulnenddate","dest_ip","xref","cve","signature","mvm_description","mvm_observation","mvm_recommendation","mvm_addeddate","mvm_patch_type","vendor_product","os_app","mvm_basescorevalue","mvm_baseexploitvalue","mvm_baseimpactvalue","mvm_site_id","mvm_scantype","mvm_dmzhost","mvm_devicetype","mvm_dc_host","mvm_mit_patch","mvm_osname","mvm_nbname","mvm_dnsname","mvm_macaddress","mvm_patch_status","mvm_region","mvm_assets_last_found_date_time"
And there is a blank line. When I unzipped the bz2 file, the "blank" line turned out to be a Ctrl-Z character. I haven't figured out how to remove the header line or the line with the Ctrl-Z on it. Any ideas?
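One possible direction (an untested sketch): a SEDCMD in the same props.conf stanza to strip the Ctrl-Z (0x1A) character before indexing. Note that with INDEXED_EXTRACTIONS the parsing happens on the UF, so this would need to live on the forwarder; treat the regex as an assumption.

```
[mvm:csv]
# Sketch: remove Ctrl-Z characters from events before indexing
SEDCMD-strip_ctrlz = s/\x1a//g
```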
TIA,
Joe
↧
↧
Is it okay to run a universal forwarder without an inputs.conf?
I am the security guy and Splunk admin. I am running 6.6.x universal forwarders on all my Windows servers. I just found out that the server admins are cloning boxes willy-nilly. When I was trying to figure out why SERVER05 wasn't reporting in, it turned out its inputs.conf had "host = SERVER01". I was getting my data; it was just hiding under the wrong host.
Googling around, I found that the solution is to delete inputs.conf and server.conf, then restart the UF. This seems to work. The UF recreates server.conf, but not inputs.conf.
My question is: is this a problem? All of my inputs are managed in apps via a deployment server. Do I need an inputs.conf that specifies the hostname? I don't see any problems right now, but I wanted to ask the community.
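For reference, when an explicit host override is wanted, it normally lives in $SPLUNK_HOME\etc\system\local\inputs.conf as a default stanza like the sketch below; absent that, splunkd falls back to the machine's own hostname at startup. The hostname shown is just an example.

```
# etc/system/local/inputs.conf (example value)
[default]
host = SERVER05
```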
↧
Optimising CPU + RAM usage on Universal Forwarder
Hello guys,
I've been looking around the existing questions, and most of them are about forwarders causing high CPU usage because of a bug or misconfiguration. My question is about optimising and tweaking a universal forwarder that is working well, in order to reduce its CPU impact.
So anyone who has tips and tricks to share, they are very welcome. Even system-level tips for Linux/Windows are welcome!
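One commonly cited knob is the forwarder's thruput limit in limits.conf, which indirectly caps how hard splunkd works; note that lowering it trades CPU for indexing latency, and the value below is just an example.

```
# limits.conf on the UF; 256 KB/s is an example value (and the UF default)
[thruput]
maxKBps = 256
```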
Best regards,
David
↧
Can my lookups be forwarded to a Splunk Cloud search head from a local forwarder?
Hi,
We are in the process of migrating on-premise apps to Splunk Cloud.
One app contains a few scripts which continuously update lookup files (by accessing a local directory) for use on the search head.
We see two options: EITHER place the scripts on local universal forwarders, where they would update the lookup files locally (by accessing CIFS mounts), and then find some mechanism to continuously forward those lookup files to the Splunk Cloud search head; OR place the scripts directly on the Cloud search head.
The second option won't work, because scripts can't be placed on the Cloud search head, and they need access to the local filer (CIFS) mount points to update the lookup data.
So: is there any mechanism by which updated lookup files can be continuously forwarded from local universal forwarders to a Splunk Cloud search head?
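One caveat worth noting: a UF can forward the CSV's contents as indexed events with a monitor stanza like the sketch below (path and index are made up for illustration), but that indexes the data rather than updating a lookup file on the search head, so it may not fit the use case as-is.

```
# Sketch: index the generated CSV as events (hypothetical path/index)
[monitor:///mnt/cifs/lookups/assets.csv]
sourcetype = csv
index = lookup_staging
```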
Thanks
↧
How can we monitor changes to inputs.conf file on our universal forwarders?
Using Splunk Enterprise 6.2.2
The Problem: No data ingested.
We have several deployed apps and would like to monitor changes to the inputs.conf file on our universal forwarders. We have created a new app called confMonitor. Its inputs file is shown below.
[monitor://C:\Program Files\splunkuniversalforwarder\etc\apps\windows\local\inputs.conf]
disabled = false
sourcetype = syslog
index = testdata
There are three apps on this universal forwarder: confMonitor, windows, and sendtoindexer; only the latter two function.
The splunkd.log file shows the following; no other messages exist about this app or its inputs file.
08-XX-20XX 10:23:56.277 -0400 INFO TailingProcessor - Adding watch on path: C:\Program Files\splunkuniversalforwarder\etc\apps\windows\local\inputs.conf.
sourcetype=syslog is a valid sourcetype; index=testdata is a valid index. We tried using crcSalt = ; we've tried csv as a sourcetype. We have stopped/started the universal forwarder to re-read the apps on it. We do not use a deployment server. It looks like fschange from previous versions of Splunk may have worked, but I think it's been deprecated. Help is appreciated.
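For completeness, the crcSalt variant usually suggested for small, frequently rewritten files uses the literal string <SOURCE>, as in this sketch of the same stanza:

```
[monitor://C:\Program Files\splunkuniversalforwarder\etc\apps\windows\local\inputs.conf]
disabled = false
sourcetype = syslog
index = testdata
crcSalt = <SOURCE>
```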
↧
↧
How to send Windows events to a third-party server using Splunk Universal Forwarder?
Hello,
I'm trying to send Windows events to a 3rd-party system using a Universal Forwarder.
I configured outputs.conf as shown below:
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1:9997,indexer2:9997, etc
autoLB = true
compressed = true

[tcpout:exernal]
server = 10.10.10.10:514
sendCookedData = false
The forwarder has an inputs.conf which monitors WinEventLog://Security. The events are reaching the Splunk indexers successfully... but not the 3rd-party server. The 3rd-party server is only receiving Splunk internal events, which tells me that the outputs.conf stanza is correct and I have connectivity between the two machines.
Is there anything specific I need to configure in order to forward the Windows events to the 3rd-party server as well? I only need to send the raw events; no other parsing/transformation is needed. Any suggestions would be highly appreciated.
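Since only the defaultGroup receives event data by default, one direction to explore (a sketch, reusing the stanza names from the config above) is routing the input to both groups, either per-input with _TCP_ROUTING or by widening defaultGroup:

```
# inputs.conf on the UF (sketch): send this input to both output groups
[WinEventLog://Security]
_TCP_ROUTING = primary_indexers,exernal
```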
Thanks!
↧
Universal forwarder on Windows servers
We are in the process of planning our Splunk deployment. We have somewhere around 5,000 Windows servers that will use the UF to forward data. Currently in our DEV space we are sending to the indexer with no filtering of events. We are doing an exercise to collect only what we need for reporting or correlation, so our plan is to send through a heavy forwarder.
Can I filter at the heavy forwarder for Windows?
Are there some docs to help me with configuration?
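Yes, a heavy forwarder can filter Windows events at parse time. A typical nullQueue sketch is shown below; the sourcetype and EventCode are example values, not a recommendation of what to drop.

```
# props.conf on the heavy forwarder (example sourcetype)
[WinEventLog:Security]
TRANSFORMS-dropnoise = drop_noise

# transforms.conf
[drop_noise]
REGEX = EventCode=4662
DEST_KEY = queue
FORMAT = nullQueue
```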
↧
Is there any way to replicate the whitelist settings on the deployment server?
Hi,
I installed the universal forwarder agent on some servers for monitoring and would like to add a whitelist filter on the Windows Security event log.
When I add the "whitelist" line to the inputs.conf file in "C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local" on a server where I installed the agent, the filter works.
As it is configured now, I need to edit the inputs.conf files on every server where I installed the agent to add the whitelist.
Is there any way to replicate the whitelist settings on the deployment server?
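In principle, yes: the whitelist can live in an app under etc/deployment-apps on the deployment server, which then pushes it to every client in the matching server class. A sketch, with a made-up app name and example event codes:

```
# On the DS: etc/deployment-apps/win_sec_inputs/local/inputs.conf (hypothetical app)
[WinEventLog://Security]
whitelist = 4624,4625,4688
```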
Thanks.
↧
↧
Why did all of my servers stop sending logs? Configuration issue?
Hello Guys,
I have a curious case that is really bugging our production environment. I have deployed around 12 Windows UFs to monitor Security event logs on AD servers located behind a firewall. The UFs are currently version 5.0.2, and I have set the input and output configurations using a deployment server.
After the first deployment, I could see all 12 servers sending logs just fine. After several hours, the number of servers dropped to 7. The drop continued until no server was sending logs at all.
I tried using a single server as a test and found that it only sends logs for about 3-4 hours max before stopping completely. No errors or warnings were found in splunkd.log on the forwarder or my indexer; the splunkd.log entries were only "Connected to ..." and "... phone home ...". I also did not see any blocking events in metrics.log.
My configurations are like this:
**inputs.conf**
[WinEventLog://Security]
disabled = 0
index = app_ad
sourcetype = tseladscrt
start_from = oldest
current_only = 0
_TCP_ROUTING = loadheavyfwd
**outputs.conf**
[tcpout:loadheavyfwd]
compressed = true
server = :9997
sslCertPath = D:\Program Files\SplunkUniversalForwarder\etc\auth\cert.pem
sslPassword = xxxxxxxxxxxxx
sslRootCAPath = D:\Program Files\SplunkUniversalForwarder\etc\auth\CoreCA.pem
sslVerifyServerCert = true
Where should I start to troubleshoot?
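As a first step, a search along these lines on the indexer can show whether the forwarders' connections drop off over time (a sketch; group=tcpin_connections is the standard metrics.log forwarder-connection data):

```
index=_internal source=*metrics.log* group=tcpin_connections
| timechart dc(hostname) AS connected_forwarders
```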
Thank you.
↧
What's best practice for monitoring bash_history of all users in the system?
Hello, all!
Has anyone set up tracking of the bash_history files of all users, i.e. /home/*/.bash_history?
I experimented with fschange, but the splunkforwarder doesn't send the data to the server.
The splunk user has read access to the .bash_history files.
Can anybody help me with this?
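For comparison, a plain monitor input (rather than fschange) is the more common approach for this. A sketch, with the sourcetype name as an assumption:

```
# inputs.conf on the forwarder (sourcetype is an example name)
[monitor:///home/*/.bash_history]
sourcetype = bash_history
disabled = 0
```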
Thanks!
↧
Is there a way in Splunk universal forwarder to limit the CPU and memory consumption of the splunkd process, so that it doesn't keep exceeding threshold limits?
We have more than 3,000 forwarders in our environment. A few weeks back the Unix team published a report showing the top processes by CPU and memory usage.
Splunkd was among the top 3. We need to somehow restrict splunkd from using so many resources; on a few hosts the report showed splunkd taking more than 90% of memory.
Please suggest a way to mitigate this issue.
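Splunk itself offers no hard CPU/memory cap for the splunkd process, so OS-level controls are the usual route. On systemd-based Linux hosts, a unit drop-in like this sketch can cap the service (the unit name and the values are assumptions for your install):

```
# /etc/systemd/system/SplunkForwarder.service.d/limits.conf (sketch)
[Service]
CPUQuota=25%
MemoryMax=512M
```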
↧
↧
How to set up a universal forwarder using Puppet?
I am looking for information or examples on how to install and configure universal forwarder on Windows using Puppet.
I had built a PowerShell script for devices not supported by Puppet, but I also need something for Puppet.
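As a starting point, the UF MSI supports unattended installs, so a Puppet package or exec resource can wrap a command like this sketch (the file name, deployment server address, and flags shown are examples; check the current MSI flag list for your version):

```
msiexec.exe /i splunkforwarder-x64-release.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="ds.example.com:8089" /quiet
```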
Thanks!
↧
How to find universal forwarder IP address?
Hello. I downloaded the free universal forwarder from the Splunk website and installed it on my PC, but what is the IP address for that instance, and how do I find it?
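The forwarder has no IP address of its own; it uses the host machine's address (e.g. `ipconfig` on Windows shows it). From the receiving side, a search sketch over the forwarder-connection metrics can also reveal it:

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(sourceIp) by hostname
```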
↧
Splunk Cloud: do custom universal forwarder certificates ever expire?
Does the Universal Forwarder custom certificate for Splunk Cloud ever expire? If so, when does it expire?
↧