Linux Universal Forwarder - Security Recommendations
Hello Splunk Community,
For months we have been discussing with our Linux admins whether it is OK to install the Splunk Universal Forwarder on Linux (Red Hat) or not.
We just want to collect Tomcat/Apache logs from various Linux hosts, and we really don't know the best way to do it.
The main concern is managing the permissions needed (per host/application, across about 1,000 Linux systems) to give the forwarder access to the application log directories. We don't want to run the forwarder as root.
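For illustration, the kind of non-root setup we are considering would rely on POSIX ACLs; this is only a sketch, and the splunkfwd user and log path are assumptions on our side:
# Grant the unprivileged splunkfwd service account read access to an app's log directory.
setfacl -R -m u:splunkfwd:rX /var/log/tomcat
# Also set a default ACL so newly created/rotated log files stay readable.
setfacl -R -d -m u:splunkfwd:rX /var/log/tomcat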
So what are you doing? Do you have any best practices?
I can't believe we are the only ones facing this discussion.
Thank you
PS: As a side note, on Windows it seems to be acceptable to run the forwarder as a system service...
↧
↧
Deploying and updating Splunkbase apps using a deployment server?
I'm running Splunk Enterprise 7.3.0 on Ubuntu 18.04 as a demo deployment with a sales trial license. It's a single-instance deployment with only a handful of hosts, but the production deployment will separate the roles onto different servers.
I would like to deploy the Splunk App for Windows Infrastructure and the other Windows add-ons to my Windows Universal Forwarders, as listed here: https://docs.splunk.com/Documentation/MSApp/1.5.2/MSInfra/HowtodeploytheSplunkAppforWindowsInfrastructure (not enough karma for links, sorry). It's my understanding that I would have to do the following to prep an app for deployment:
1. Download the "Splunk Add-on for Windows" from Splunkbase (App 742) .tgz file.
2. Manually extract and copy the contents of the app to $SPLUNK_HOME/etc/deployment-apps/.
3. Manually have Splunk rescan the directory with "splunk reload deploy-server"; this step is required and not automatic (sketched below).
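In shell terms, that prep workflow looks roughly like this (a sketch; the exact archive filename is whatever Splunkbase hands you):
# extract the downloaded package straight into the deployment-apps directory
tar -xzf splunk-add-on-for-microsoft-windows.tgz -C $SPLUNK_HOME/etc/deployment-apps/
# tell the deployment server to rescan its apps
$SPLUNK_HOME/bin/splunk reload deploy-server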
This procedure is completely different from the easy GUI-based approach for adding apps to my search head:
1. Click Apps -> Find More Apps
2. Search for the App through Splunkbase, even seeing which apps are already installed.
3. Click Install. Authenticate and accept the T&Cs.
4. Click Restart if needed.
If there's an update to an app installed via Splunkbase, and the app is visible, I can click the update button in the app list on the home page. To update the same deployed app on the same Splunk instance, it appears I have to repeat the manual process.
Since my search head is also my deployment server, shouldn't installing deployable apps have the same ease and functionality? If I want to update a deployed app that's on Splunkbase, do I have to do this manual process for each Splunkbase app? Is there a GUI based way to install apps for deployment, be it either from Splunkbase or manually written? Am I missing something in my workflow? Is there an app that offers this functionality, or at least notifies me if a Splunkbase deployed app is out of date? I don't want to deploy outdated, broken, or exploitable apps, especially if there's a newer version available.
I can understand the need to maintain older versions of deployed apps, and not wanting them to update whenever a Splunkbase maintainer updates their app, but I think there should at least be an option to update the app through the GUI, or a notification that an update is available.
↧
↧
What is the admin account for on a Universal Forwarder?
I have UFs on some "sensitive" servers, and the owners who did the install are questioning the purpose of the admin account.
I have just accepted the fact that all Splunk nodes require credentials and an account.
Is there an official document or explanation of why a UF needs one?
These are Windows servers.
Thank you.
↧
Universal Forwarder Stops sending data
Hi,
We have a Universal Forwarder on our Linux rsyslog server. It was working fine until two weeks ago, when it started to stop sending data to the indexer while showing no errors in splunkd.log. When we restarted it, it would send a burst of data over the course of 4-5 minutes and then stop sending again.
Over the past two weeks we have replaced the rsyslog server with a new server. The new server has 8 cores, plenty of memory, and a 10 Gb network connection to the Splunk indexer. Once we installed the forwarder, it ran non-stop for two days catching up on the data that had been missed over the two-week period.
At 6pm last night it stopped forwarding data again, so we're now back to the same problem we started with. We get a burst of log data on restart, but then it just stops. No errors, nothing to suggest we've hit any limits. The forwarder's splunkd process is still running. What we DO notice is that splunkd holds the files open, and the number of open files continues to climb once it stops forwarding data. Some of these files are large, but we don't get any error messages about batch.
In limits.conf we have this set:
maxKBps = 0
max_fd = 10240
The ulimits on the server are set to 100000; we're averaging about 4,500-5,000 open files before the forwarder stops forwarding.
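For reference, the open-file count we're tracking comes from something like this on the forwarder host (the pid lookup is approximate):
# count file descriptors held by the forwarder's splunkd process
lsof -p $(pgrep -o splunkd) | wc -l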
Indexer 7.3.1
Universal Forwarder 7.3.1
We have about 35 Windows forwarders working on servers with no issues at all. It's this one Linux forwarder that's not working correctly.
Any help you can give would be appreciated. Let me know if any additional information is needed.
↧
↧
Can multiple Splunk Universal Forwarders use the same NAT IP for sending data to a Heavy Forwarder?
We have around 100 Universal Forwarders in a specific Office location A and another 50 Universal Forwarders in Office location B. We are trying to use a single NAT IP (192.168.10.20) for Office location A and a single NAT IP (192.168.10.30) for Office Location B for sending data from these Universal forwarders to a Heavy Forwarder placed in a different Office location C.
Can Splunk distinguish each Universal Forwarder by its own host IP even though it is communicating and sending data to the HF via a single NAT IP?
Is the TCP connection handling between the Splunk UF and Splunk HF capable of managing multiple TCP client connections coming from the same NAT IP?
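For context, my assumption is that events are identified by the host field each forwarder stamps on them (set at install time or in inputs.conf), not by the source IP of the TCP connection, e.g.:
# etc/system/local/inputs.conf on one location-A forwarder (hostname hypothetical)
[default]
host = ufhost-a01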
↧
Universal Forwarder to report to 2 indexers
What is the best way to route security events to security indexers and the rest of the sourcetypes to operational indexers?
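For the routing part, the pattern I have seen described uses per-input _TCP_ROUTING on the UF pointing at named output groups (a sketch; the group names and hosts are hypothetical):
# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = ops_indexers

[tcpout:ops_indexers]
server = ops-idx01:9997,ops-idx02:9997

[tcpout:security_indexers]
server = sec-idx01:9997,sec-idx02:9997

# inputs.conf: route only the Security event log to the security indexers
[WinEventLog://Security]
_TCP_ROUTING = security_indexers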
And can we manage a universal forwarder with two deployment servers?
↧
Universal Forwarder - Tag or add identifier to data to distinguish environment
Hey everyone,
Summary of the long post:
On universal forwarders, I need to add some kind of identifier, like a tag or metadata value, to all data before it is sent, in order to distinguish the environment it is coming from, make it searchable by that value, and let a heavy/intermediate forwarder use props/transforms to change and forward data based on this value.
I'm currently working on a large environment that will have multiple environments' universal forwarders reporting to my environment. The way we are set up:
- Have about 10 customers with their own environments
- Each environment will have roughly 10-50 servers in AWS
- Each server will have a universal forwarder installed, pointing to my Splunk environment
- The universal forwarders will use data cloning to send data to my indexers and to an intermediate forwarder on the edge of my environment.
- All of the data will be indexed on my indexers and certain inputs that are sent to the intermediate forwarder will be sent to another environment for security monitoring.
- The intermediate forwarder has props and transforms set up to forward data to the external Splunk environment based on sourcetype, but now that we are adding multiple customer environments that want the security monitoring and will use different indexes, the transforms need to be modified.
So here is my question:
Is there a way to tag data or add an identifier within the universal forwarders in an environment, so the intermediate forwarder can forward it to a specific index in the external Splunk environment?
The intermediate is a heavy forwarder without local indexing and is the only connection that has routes to the external Splunk environment. The reason for all of this, and the way it's constructed, is the level of security requirements from our primary.
For example:
- Customer A has 20 servers with universal forwarder installed. Universal forwarders add an identifier to all data as it is sent that matches the customer's environment name like CustA.
- Customer B has 40 servers and much like Customer A, the forwarders add an identifier to all data, CustB.
- The inputs for both environments are configured to go to their respective indexes on my indexers; Customer A to customerA_data and Customer B to customerB_data. The data is then forwarded to both my indexers and the intermediate forwarder.
- The indexes customerA_data and customerB_data exist on my indexers and receive the data, but the external Splunk security environment has custA_security and custA_application, and custB_security and custB_application.
- The intermediate forwarder would use props and transforms. When it receives data with sourcetype=linux_audit and the identifier CustA, it sends that data to the external environment's custA_security index; when it receives sourcetype=nginx (or any application source) and the identifier CustB, it sends that data to the custB_application index in the external environment.
- While all of this is occurring on the intermediate, all data is still being sent from the universal forwarders to my indexers, indexed there, and searchable using CustA or CustB.
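As far as I can tell, the tagging half could be done with an indexed field set on every UF in a customer's environment (a sketch; the field and value names are mine):
# inputs.conf on every Customer A forwarder
[default]
_meta = customer::CustA

# fields.conf on the search head, so customer=CustA is searchable as an indexed field
[customer]
INDEXED = true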
Thanks in advance, it's a lot of information.
↧
Is it possible to update the Splunk Universal Forwarder but not change anything else?
I have some old versions of the Splunk Universal Forwarder lying around and want to just do an update, without changing the directory being monitored or anything else.
How can I do that?
The Sudplunk required items in the pillar want me to change more than I want.
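For what it's worth, my understanding is that an in-place package upgrade keeps everything under $SPLUNK_HOME/etc untouched, along these lines (the RPM filename/version is hypothetical):
# upgrade the forwarder package in place; existing inputs.conf etc. are preserved
rpm -U splunkforwarder-<version>-linux-2.6-x86_64.rpm
/opt/splunkforwarder/bin/splunk restart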
↧
↧
Monitor all remaining files not specifically matched
We have several syslog-ng collectors with UFs on them. The UF monitors the paths and files that syslog-ng generates that we point it to, but I know there are probably several systems sending syslog data that we are missing. Is there a way to point a UF monitor stanza at the top-level file path and tell it to monitor everything not matched elsewhere, sending it to a specific index so we can search that index to see what data we're missing?
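Something like this is what I have in mind, if blacklisting the already-covered paths is the right mechanism (the path, index name, and blacklist regex are placeholders):
# inputs.conf: catch everything under the syslog-ng root directory
[monitor:///data/syslog]
index = syslog_catchall
# regex of file paths already covered by the specific stanzas
blacklist = (knownhost1|knownhost2)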
↧
Monitoring Registry via universal forwarder not working
Hi,
I am trying to monitor a registry key on a remote server using a universal forwarder. No matter what I put in my inputs.conf, I just cannot get it to work. This is my inputs.conf:
[WinRegMon://Registry]
disabled = 0
hive = HKEY_LOCAL_MACHINE\\SOFTWARE\\WOW6432NODE\\SOPHOS\\AUTOUPDATE\\UPDATESTATUS\\.*
proc = .*
type = set
I can see the following error in my splunkd.log:
message from ""Program Files\SplunkUniversalForwarder\bin\splunk-regmon.exe" --driver-path "Program Files\SplunkUniversalForwarder\bin"" splunk-regmon - No enabled entries have been found for regmon or procmon in the conf file.
I must be missing something simple! Please help!
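One thing I noticed while searching: the WinRegMon examples in inputs.conf.spec use the kernel-style hive path rather than the HKEY_LOCAL_MACHINE prefix, so perhaps the stanza should look like this (an untested guess on my part):
[WinRegMon://Registry]
disabled = 0
hive = \\REGISTRY\\MACHINE\\SOFTWARE\\Wow6432Node\\Sophos\\AutoUpdate\\UpdateStatus\\.*
proc = .*
type = set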
Many thanks,
Michael
↧
How to externally trigger a universal forwarder to send data to an indexer using PowerShell modular input
I have server "X" on which is installed a universal forwarder.
Typically, I'd use the universal forwarder's cron functionality to trigger the execution of a PowerShell script. The script is implemented using the PowerShell modular input to send data to an indexer, i.e., the script emits a stream of .NET objects and Splunk does the right thing with them.
Now, I have a PowerShell script whose execution is triggered by an event external to the universal forwarder. This script will also emit a stream of .NET objects, and I want to use the PowerShell modular input to send the data to an indexer.
How can I externally trigger the universal forwarder to send data to an indexer using the PowerShell modular input?
I would appreciate it if you'd provide locations and examples of *.conf files.
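For context, the scheduled variant I use today is along these lines in inputs.conf (the stanza name and script path are examples):
# inputs.conf: PowerShell modular input run on a schedule
[powershell://ScheduledCollector]
script = . "$SplunkHome\etc\apps\my_app\bin\collect.ps1"
# schedule takes a cron expression (check inputs.conf.spec for the exact field format)
schedule = 0 */5 * * *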
↧
Cannot see the data that is being forwarded/indexed in the Splunk web interface
Hi everyone,
I am currently facing an issue that I cannot get my head around. I have installed the universal forwarder on Windows Server 2012 R2 to send every log to the Splunk server. However, in the Splunk web interface, I cannot see the data that is being forwarded/indexed. I have run a tcpdump to monitor traffic on port 9997.
I can see that communication is taking place between the Splunk server and the Windows machine on that port; however, I cannot see the data being indexed or displayed. Can anyone tell me where the collected data is usually stored? Is it indexed in the default index or somewhere else? So far I cannot find it in the default index or anywhere else.
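For what it's worth, my understanding is that inputs with no explicit index setting land in the main index, and the forwarder's own logs end up in _internal, so I have been checking with searches like these (the hostname is a placeholder):
index=main host=<my-windows-host>
index=_internal host=<my-windows-host> source=*splunkd.log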
Thanks in advance.
↧
↧
Can't see a list of files that Splunk is currently monitoring
I want to list the current data inputs, so I ran the following command:
C:\Program Files\SplunkUniversalForwarder\bin>splunk list monitor
Splunk prompted me for a username and password; I entered my admin credentials, but I did not see a list of files that Splunk is currently monitoring.
Instead, the command prompt reverted back to:
C:\Program Files\SplunkUniversalForwarder\bin
What am I doing wrong? Thanks for your help
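As a fallback, would reading the on-disk configuration be equivalent? For example (I'm not sure it reflects runtime state):
C:\Program Files\SplunkUniversalForwarder\bin>splunk btool inputs list monitor --debug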
↧
Upgrade UF package credential
Hi all,
We are trying to upgrade the UF package credentials on our intermediate forwarders (including HFs).
Please find below the steps I followed:
1. Login to SH
2. Go to Apps --> Universal Forwarder
3. Downloaded the splunkclouduf.spl file and installed it on the HFs via the GUI.
But when it came to the UFs, I was not sure whether the GUI was enabled, or whether a UF even has a GUI, so I copied the 100_splunkcloud app folder (from an HF) and pasted it into the apps folder on the UF.
Then I restarted the UF and started getting the error: can't read key file.
I am sure I am missing something while doing this task on the UF.
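For example, should the package have been installed on each UF from the command line instead of copying the folder (this is just my assumption)?
$SPLUNK_HOME/bin/splunk install app /tmp/splunkclouduf.spl
$SPLUNK_HOME/bin/splunk restart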
Could someone please help?
Thanks in advance.
Regards,
Tejas
↧
Universal Forwarder 7.3.1 install failing with no logging
I have used this script with previous versions with no issues:
msiexec.exe /i splunkforwarder-7.3.1-bd63e13aa157-x64-release.msi DEPLOYMENT_SERVER="mydeploymnetserver:8089" SPLUNKPASSWORD="mypassword" AGREETOLICENSE=Yes /quiet
When I try this with the new forwarder, it does nothing and does not even log anything to C:\Windows\Temp\splunk.log like it normally does during this install.
Is there something new with 7.3.1 that is required for the script to work?
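To capture MSI-level logging, I am planning to rerun the same command with a verbose log file (a standard msiexec switch):
msiexec.exe /i splunkforwarder-7.3.1-bd63e13aa157-x64-release.msi DEPLOYMENT_SERVER="mydeploymnetserver:8089" SPLUNKPASSWORD="mypassword" AGREETOLICENSE=Yes /quiet /L*v C:\Windows\Temp\uf_install_verbose.log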
↧
↧
Universal Forwarder requires restart after registering new WinEventLog source
We are running a Universal Forwarder on our Windows servers, which host several of our applications. Each application logs to the same Windows Event Log but uses a different source, so we can determine the origin of the logging.
We have configured the UF to forward all messages from this event log to the heavy forwarders according to the inputs.conf snippet below:
[WinEventLog://ServerApps]
disabled = 0
sourcetype = ServerAppLogs
source = ServerApp
renderXml = false
The problem is, whenever a new application is deployed, it registers a new source with the Windows Event Log (using PowerShell: `[System.Diagnostics.EventLog]::CreateEventSource($eventSourceName, $eventLogName)`). But the UF does not pick up this new source until it is restarted, even though we did not specify any whitelisting or blacklisting.
Is it possible to make the UF listen to new sources in the Windows Event Log without having to restart it?
PS: We are using version 7.3.1 of the Universal Forwarder.
↧
Not getting internal logs from forwarder
Hello,
We are not getting any internal logs from one of our forwarders, but it is phoning home, and we can still add or delete apps on it through the deployment server remotely. The forwarder is ingesting logs to one of our indexes, but not continuously. This all started after we tried to ingest logs from a folder on that server. Let me know if anyone has any ideas.
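To be precise, by internal logs I mean what normally shows up with a search like this (the hostname is a placeholder):
index=_internal host=<forwarder_host> source=*splunkd.log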
Thanks.
↧
How to configure the universal forwarder to Heavy forwarder then to an Indexer?
Hi,
Can someone help with the steps I need to take if I have the flow below?
Universal Forwarder ------- Heavy Forwarder ------- Indexer
I also need help with how to parse the traffic when the logs arrive at the heavy forwarder from the Universal Forwarder.
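For reference, my current understanding of the basic wiring is below (hostnames and ports are examples); parsing-time props/transforms would then live on the heavy forwarder, since it is the first parsing instance in this chain:
# outputs.conf on the universal forwarder
[tcpout:hf_group]
server = heavyfwd01:9997

# inputs.conf on the heavy forwarder: listen for forwarder traffic
[splunktcp://9997]
disabled = 0

# outputs.conf on the heavy forwarder
[tcpout:indexer_group]
server = idx01:9997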
↧