The Splunk Add-on for Microsoft Cloud Services documentation at http://docs.splunk.com/Documentation/AddOns/released/MSCloudServices/Install seems to state that you must configure the input on the search head if you are using a Universal Forwarder. Just below that, however, it says that if you are installing on a search head cluster, you should configure the input on a forwarder.
What are you supposed to do when you are using both a search head cluster and the (unsupported) Universal Forwarder?
↧
Splunk Add-on for Microsoft Cloud Services: Where to install when using search head cluster and universal forwarder?
↧
What does the splunk parameter "--auto-ports" do?
This parameter is used in auto-install scripts for the Universal Forwarder, but I could not find any information about it in the documentation. Is this something used in older versions of Splunk?
The Linux strings command shows "--auto-ports" in /opt/splunk/bin/splunk, so it is recognized (like, e.g., --accept-license).
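For reference, the exact check I ran was this (a minimal sketch, assuming a default /opt/splunk install):

strings /opt/splunk/bin/splunk | grep -- --auto-ports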
↧
↧
Varonis vs Universal Forwarder: If an organization is collecting server logs in Varonis, should Varonis be integrated with Splunk, or should Universal Forwarders be installed on all servers?
I have a situation where the organization has Varonis and is in the process of deploying Splunk. Since all the server logs are already being collected in Varonis, would it be better to just integrate it with Splunk, or to continue down the path of installing the Universal Forwarder on all servers?
Thanks for the input!
↧
Is it essential to restart the Splunk universal forwarder after deleting monitored logs?
Hello fellow ninjas,
Good day. I'd like to ask whether a Splunk UF restart is essential after I delete a log file that is being monitored by Splunk. Every time I delete the file, Splunk keeps reading it, resulting in high resource usage.
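If it helps, this is one way to confirm that the forwarder still holds the deleted file open (a sketch, assuming a Linux host; on Windows the equivalent would be a handle-inspection tool):

# list file handles held by splunkd that point at already-deleted files
lsof -c splunkd | grep -i deleted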
![alt text][1]
Thanks,
Dan
[1]: /storage/temp/208581-capture.png
↧
Universal Forwarder with Sysmon not forwarding correctly
Hi,
I'm trying to study the activities of some malware, so I created the following environment using VirtualBox. But I could not get the forwarder to work correctly: I only get one event, when I reboot Guest 2. Did I miss some other configuration?
**Host**
Disabled the VirtualBox Host-Only Network so that guest and host cannot ping each other, but guest-to-guest traffic still works.
**Guest 1:**
IE8WIN7, SP1, IE Version 8.0.7601.17514
Network: NAT Network
IP: 10.0.2.15
Installed Splunk Enterprise
Opened port 9998 to receive events (set up at http://localhost:8000/en-US/manager/search/data/inputs/tcp/cooked)
Set the firewall to allow inbound and outbound traffic for 10.0.2.4 on port 9998.
**Guest 2:**
IE8WIN7, SP1, IE Version 8.0.7601.17514
IP: 10.0.2.4
Installed Splunk Universal Forwarder
Installed Sysmon via the CLI: sysmon -i -n -accepteula
Added the following to the universal forwarder's inputs.conf:
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = true
Set the firewall to allow inbound and outbound traffic for 10.0.2.15 on port 9998.
I only got one event, right after Guest 2 rebooted. After that, no matter what programs I open on Guest 2, no events are seen from Guest 1.
![alt text][1]
[1]: /storage/temp/206846-capture.png
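In case it matters, here is roughly what I believe the outputs.conf on Guest 2 needs so it actually sends to Guest 1 (a sketch; the group name sysmon_out is just a placeholder):

[tcpout]
defaultGroup = sysmon_out

[tcpout:sysmon_out]
server = 10.0.2.15:9998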
↧
↧
How to index lost data from a specific date range?
Hi Splunkers,
I have an AD server integrated with our Splunk deployment that is indexing Active Directory logs, and we've lost data from a specific date range even though logging was active on those dates. Is there any way to index the lost AD logs? The dates are July 13-15, 2017, to be specific.
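For reference, a search along these lines is how I confirmed the gap (a sketch; the index name wineventlog is my assumption):

index=wineventlog earliest="07/13/2017:00:00:00" latest="07/16/2017:00:00:00" | timechart span=1h count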
Thanks,
Dan
↧
Upgrading Universal Forwarder from 5.0.2 to 6.5.4
Hello Splunk Experts,
Recently, I've been tasked with upgrading a distributed Splunk environment with the following conditions:
- Search Head & Indexer version: 6.1.3
- Universal Forwarder version: 5.0.2
There are two things I need confirmation on:
1. Is it possible to upgrade the UF directly from 5.0.2 to 6.5.4? (The procedure I have in mind is sketched below.)
2. Could I upgrade the UFs first, before upgrading the search head and then the indexer?
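For context, this is the per-host procedure I had in mind (a sketch, assuming a Linux .tgz install under /opt/splunkforwarder; the package file name is a placeholder):

# stop the old forwarder and back up its configuration
/opt/splunkforwarder/bin/splunk stop
cp -rp /opt/splunkforwarder/etc /tmp/splunkforwarder-etc-backup
# untar the new version over the existing installation, then restart
tar xzf splunkforwarder-6.5.4-<build>-Linux-x86_64.tgz -C /opt
/opt/splunkforwarder/bin/splunk start --accept-license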
Thank you and please advise.
↧
How to collect logs from a DC with a Splunk Universal Forwarder and a third-party WMI agent at the same time?
Hi,
Our customer has several DCs (Domain Controllers). They already collect logs from the DCs using SyslogNG Agent over WMI. They are planning to implement a Splunk system and want to collect logs with UFs (Universal Forwarders) as well, so the SyslogNG Agent over WMI and the Universal Forwarder will operate at the same time on the DCs.
Can this scenario lead to any problems (compatibility, performance, etc.)?
Does anyone have any experience with the subject?
I forgot to mention: the main focus is only on security logs, at least for now.
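To be concrete, the only UF input we plan to enable is this (a minimal sketch of inputs.conf):

[WinEventLog://Security]
disabled = 0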
Regards,
István
↧
Forwarding data to a third party from a universal forwarder
Hello,
I currently have some Windows servers with the Universal Forwarder installed that are sending data to our indexer. I am now in a situation where I need the forwarder to also send the data to a third-party server. According to the documentation, the following in outputs.conf should send all data:
[tcpout]
[tcpout:fastlane]
server = 10.1.1.2:1517
sendCookedData = false
However, while the third-party server is getting data, it is only receiving "INFO"-type logs, which appear to be internal activity from the Splunk forwarder itself, not the actual log data (Windows events, IIS, etc.) that I am sending into Splunk and need forwarded.
Am I missing something, or will the universal forwarder not send that data?
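In case the problem is how the groups are declared, here is the fuller outputs.conf I believe is needed so the same event stream goes to both destinations (a sketch; the primary_indexers group name and the indexer address are placeholders):

[tcpout]
defaultGroup = primary_indexers, fastlane

[tcpout:primary_indexers]
server = myindexer:9997

[tcpout:fastlane]
server = 10.1.1.2:1517
sendCookedData = false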
Thanks
↧
↧
Why would INDEXED_EXTRACTIONS=JSON in props.conf result in duplicate values?
I'm using Splunk to analyze Bro network transaction data in JSON format. I noticed that the stats command and field summary stats would show a count of 2 for unique session IDs, although search results only show one event. After a lot of verification, I'm certain my event source does not contain duplicate events.
Thanks to this post: https://answers.splunk.com/answers/223095/why-is-my-sourcetype-configuration-for-json-events.html, I started experimenting with my JSON settings in props.conf. I thought this would be my fix, but I found the opposite of the scenario described there to be true for me.
In short, I'm seeing that index-time JSON field extractions result in duplicate field values, whereas search-time JSON field extractions do not.
In props.conf, this produces duplicate values, visible in the stats command and in field summaries:
INDEXED_EXTRACTIONS=JSON
KV_MODE=none
AUTO_KV_JSON=false
If I disable indexed extractions and use search-time extractions instead, no more duplicate field values:
#INDEXED_EXTRACTIONS=JSON
KV_MODE=json
AUTO_KV_JSON=true
From what I can tell, this behavior is different from what others reported in earlier posts. I'm running Splunk 6.6.2 Enterprise on a Debian VM and a 6.6.2 Universal Forwarder on another VM. Maybe there is a deployment client configuration I have wrong somewhere that is causing weird behavior for index-time extractions, but no luck finding it so far.
Using search-time extractions seems to work fine, but I'm wondering if anyone else is seeing this, or has any ideas on the root cause.
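For reference, the split I understood from the docs is roughly this (a sketch; the sourcetype name bro_json is mine):

# props.conf on the universal forwarder (index-time structured parsing)
[bro_json]
INDEXED_EXTRACTIONS = json

# props.conf on the search head (avoid a second, search-time pass)
[bro_json]
KV_MODE = none
AUTO_KV_JSON = false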
Thanks.
↧
How to resolve "ERROR ExecProcessor...No such file or directory" error from a python script through a universal forwarder?
Hi all,
I installed, via a universal forwarder, an app on a monitored server that uses a Python script to load data about CPU, disk, etc.
The app contains a file, table2event.py, that starts with:
#!/usr/bin/python
import time
import logging
import urllib2 as u
import os
import subprocess
import sys
global log
In splunkd.log I see several errors like:
ERROR ExecProcessor - message from "python /opt/splunkforwarder/etc/apps/ta-adapter/bin/table2event.py iostat.sh" /bin/sh: \r: No such file or directory
How can I solve this?
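The \r in the message makes me suspect Windows (CRLF) line endings in the invoked script. A sketch of how I plan to check and strip them (assuming iostat.sh sits in the same bin directory as table2event.py):

# report whether the files have CRLF line terminators
file /opt/splunkforwarder/etc/apps/ta-adapter/bin/table2event.py /opt/splunkforwarder/etc/apps/ta-adapter/bin/iostat.sh
# strip carriage returns in place
sed -i 's/\r$//' /opt/splunkforwarder/etc/apps/ta-adapter/bin/iostat.sh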
Thanks,
Andrea
↧
Splunk Monitor Stanzas
Hi Team,
We have a distributed environment with several forwarders managed by a deployment server. Recently, I configured a few log paths as below:
[monitor:///appl_*/logs/wserv/]
whitelist = access_logs_wasLC*_PROD_\.\d{8}
blacklist = \.gz
recursive = true
index = was_logs
sourcetype = ibm:was:app

[monitor:///appl_*/logs/wserv/]
whitelist = access_logs_wasCC*_LUAT_\.\d{8}
blacklist = \.gz
recursive = true
index = luat_was_logs
sourcetype = ibm:was:app
Both the Prod and LUAT environment servers are in the same server class. The issue is that even the Prod logs are going to the LUAT index. Could you please help me resolve this? Is there any precedence rule when the monitor path is the same?
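If it helps frame the question: my understanding is that two [monitor] stanzas with an identical path get merged, so only one set of attributes wins. One alternative I am considering is a single monitor stanza plus index-time routing in props/transforms on the indexers (a sketch; the regex and stanza names are my guesses):

# props.conf
[ibm:was:app]
TRANSFORMS-route_luat = route_luat_index

# transforms.conf
[route_luat_index]
SOURCE_KEY = MetaData:Source
REGEX = access_logs_wasCC.*_LUAT_
DEST_KEY = _MetaData:Index
FORMAT = luat_was_logs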
↧
Linux Auditd: How to override the default configurations for props.conf?
When the Linux Auditd app is installed on a Splunk Enterprise indexer, does the props.conf in TA_linux-auditd/default override anything by default? I am confused about how overriding works.
Splunk documentation says the following:
Note: If you forward data, and you want to assign a source type for a source, you must assign the source type in props.conf ***on the forwarder***. If you do it in props.conf on the receiver, the override has no effect.
So if I have the Linux Auditd app installed on an indexer and a universal forwarder sending audit log data to that indexer, will any configuration I add in TA_linux-auditd/local be applied to data received from forwarders, or to data that my indexer itself is forwarding?
The note above makes it sound like I need to install the Linux Auditd app on my forwarder, not just my indexer.
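For concreteness, this is the forwarder-side layout I am considering (a sketch; the monitor path is the standard auditd location, and the sourcetype name is my assumption about what the TA expects):

# inputs.conf on the universal forwarder
[monitor:///var/log/audit/audit.log]
sourcetype = linux:audit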
↧
↧
Syslog Servers in Red Hat Cluster with Splunk Universal Forwarders
We are planning to implement Red Hat Cluster (RHCS) for our syslog servers. They will run in active-passive mode controlled through heartbeat and will have Universal Forwarders installed.
There are two options that we need to select from:
1. Shared SAN storage between the two servers: in case the primary goes down, the syslog daemon is started on the secondary automatically. We can auto-start the Splunk service on the secondary syslog server in case the primary is not available.
2. Separate storage on each server, with the Splunk service running on both: in case the primary goes down, the secondary syslog server will collect the logs and then transmit the data to the indexer through heavy forwarders.
Which of these is the better approach?
I assume many of us have implemented syslog with failover to send data to Splunk. Please advise.
↧
Universal forwarder doesn't show up in my Cloud instance
I created a Splunk Light Cloud instance and downloaded/installed the universal forwarder for Windows. I downloaded the credentials file and configured authentication. There is no indication of an error, but the forwarder doesn't show up in my Cloud dashboard.
How do I troubleshoot and fix this?
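So far, the only checks I know to run on the forwarder host are these (a sketch, assuming the default Windows install path):

"C:\Program Files\SplunkUniversalForwarder\bin\splunk" list forward-server
findstr /i "error tcpout" "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log"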
Thanks
↧
Host-Only Guest to Host-Only Guest forwarding not working?
Hi,
Is it possible to forward data from a vmnet1 Host-Only VM guest to another vmnet1 Host-Only VM guest? I actually set it up this way, and "splunk list forward-server" shows the ip:port as active, but my other guest (the collector) does not see any data coming in.
After I switched both machines to vmnet8 (NAT), I could see my data coming in, but I need to block my data source from the internet. How do I do that without the vmnet1 Host-Only network configuration?
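For troubleshooting on the host-only network, the reachability test I have been using is along these lines (a sketch, assuming Windows guests; 9997 is a placeholder for whatever port the collector listens on):

# PowerShell, run from the sending guest
Test-NetConnection -ComputerName <collector-ip> -Port 9997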
↧
How do I forward to a VM and forward it out again?
Hi,
How does one forward something like Sysmon data from one VM (Guest 1) to another VM (Guest 2), and then out to another PC (outside the network)?
Do I install the universal forwarder and Sysmon on Guest 1, and use a deployment server to send the data out to the PC outside the network?
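My current sketch of the chain, in case it clarifies the question (host names and ports are placeholders):

# outputs.conf on Guest 1 (UF with Sysmon), sending to Guest 2
[tcpout]
defaultGroup = to_guest2

[tcpout:to_guest2]
server = guest2:9997

# inputs.conf on Guest 2 (acting as an intermediate forwarder)
[splunktcp://9997]
disabled = 0

# outputs.conf on Guest 2, sending on to the outside PC
[tcpout]
defaultGroup = to_outside

[tcpout:to_outside]
server = outside-pc:9997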
↧
↧
How to edit outputs.conf for a universal forwarder in Linux
Hi,
I was trying to edit outputs.conf for a universal forwarder, but when I searched for the outputs.conf file in
etc/system/local
I can see only:
README
inputs.conf
server.conf
deploymentclient.conf
Does this mean I need to change outputs.conf on the deployment server? If I need to change it on the deployment server, do I need to change it in an app? If so, what is the exact path where I can edit outputs.conf for the forwarder on the deployment server?
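For example, is the expected layout on the deployment server something like this (a sketch; the app name uf_outputs and the indexer address are placeholders)?

# $SPLUNK_HOME/etc/deployment-apps/uf_outputs/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = myindexer:9997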
Thanks
↧
How can I find a listing of all universal forwarders that I have in my Splunk environment?
Hello. How can I find a listing of all universal forwarders that I have in my Splunk environment?
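A search along these lines is what I have seen suggested (a sketch; it reads the indexers' internal metrics, so it only lists forwarders that have connected within the search window):

index=_internal source=*metrics.log* group=tcpin_connections
| dedup hostname
| table hostname sourceIp fwdType version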
↧
Sourcetype Assignment
Hello All,
I have two servers with hostnames H1 & H2, both of which have the same log file, "/apps/logs/log.log".
I have set line breaking based on the source file name in my props.conf. For example:
[source::///apps/logs/log.log]
But the log.log files on H1 & H2 are in different time zones.
Even though I assign separate sourcetypes for H1 & H2 in inputs.conf, the source-based configuration in props.conf is still applied.
How can I overcome this conflict?
In the example I have quoted just two hosts, but in our environment we have 100 such servers.
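What I would like is something along these lines, keeping line breaking per source and the time zone per host (a sketch; the TZ values are examples):

[source::///apps/logs/log.log]
# existing line-breaking settings remain in this stanza

[host::H1]
TZ = America/New_York

[host::H2]
TZ = Europe/London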
Regards,
BK
↧