Three questions:
Can I use syslog directly for everything, enabling it on each machine, without using a universal forwarder or heavy forwarder?
What is the advantage of using syslog directly rather than installing a UF, for instance?
What is the difference between a heavy forwarder and an indexer, for example?
Also, must I use the add-on together with a universal forwarder, or can I install it without one?
Would everything work the same? What are add-ons used for? Is it something like a DSM?
↧
Help with choice of forwarder
↧
Trouble forwarding splunkd.log output to syslog.
I must be missing something very simple here so bear with me.
I am running a Splunk universal forwarder instance, and I would like to forward its internal logs (e.g. splunkd.log) to my own Syslog server.
I would like to do this for my own internal audit logging separate from the indexer that is connected to the forwarder. Is there any support for this?
I have tried to set up rsyslog to watch splunkd.log but there seem to be some inotify shenanigans going on and rsyslog fails to recognize that the log file is being updated.
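One possible workaround for the inotify behavior is to switch rsyslog's file input to polling mode. A minimal sketch, assuming a default UF install path and a hypothetical destination host:

```
# /etc/rsyslog.d/30-splunkd.conf
# Load imfile in polling mode to sidestep inotify issues (10-second interval).
module(load="imfile" mode="polling" PollingInterval="10")

input(type="imfile"
      File="/opt/splunkforwarder/var/log/splunk/splunkd.log"
      Tag="splunkd:"
      Severity="info"
      Facility="local7")

# Forward the tagged messages to the remote syslog server (placeholder host).
if $syslogtag == 'splunkd:' then @@syslog.example.com:514
```

Polling is slightly less efficient than inotify but is not sensitive to how Splunk rotates and rewrites its own log files.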
↧
↧
Splunk Universal Forwarder vs Deployment Server troubleshooting
I'm having some issues getting Universal Forwarders to talk to the Deployment Server, and I'm looking for some troubleshooting pointers. Here's the scenario, pretty basic setup.
Splunk Enterprise 7.3 on Ubuntu - also configured as a Deployment Server.
Universal Forwarder installed on Windows workstation with the following command:
msiexec /i splunkforwarder-7.3.2-x64.msi AGREETOLICENSE=yes /quiet RECEIVING_INDEXER="splunkslogs:9997" DEPLOYMENT_SERVER="splunkslogs:8089"
On the server, when I run netstat, I see established connections on ports 8089 and 9997. But I don't see the client listed under "Forwarder Management" in the Splunk GUI. Suggestions?
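A few hedged checks you can run on the UF host (paths assume a default Windows install) to confirm the deployment client is actually configured and phoning home:

```
cd "C:\Program Files\SplunkUniversalForwarder\bin"

:: Which deployment server is the UF actually pointed at?
splunk show deploy-poll

:: Which config file supplied that setting?
splunk btool deploymentclient list --debug

:: Look for handshake/phonehome activity (or errors) in the UF's own log:
type ..\var\log\splunk\splunkd.log | findstr /i DeploymentClient
```

An established TCP connection on 8089 only proves reachability; a client appears in Forwarder Management only after a successful phonehome, so errors in splunkd.log are usually more telling than netstat output.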
↧
Applying quarantine and removing quarantine
Hi All,
This is a similar issue to the one mentioned at the link below, but since it went unanswered I am posting it again.
https://answers.splunk.com/answers/211112/applying-quarantine-removing-quarantine.html
We have installed a Universal Forwarder on a new server to send logs to Splunk Cloud. Since we don't have direct connectivity to the indexers, we are sending logs through a heavy forwarder; we also don't have connectivity to the deployment server, so we are configuring /etc/system/local manually. However, we are getting the errors below in the UF:
11-01-2019 16:37:21.594 +0800 INFO TcpOutputProc - Removing quarantine from idx=xx.xx.xx.xx:9997
11-01-2019 16:37:21.780 +0800 ERROR TcpOutputFd - Read error. An existing connection was forcibly closed by the remote host.
11-01-2019 16:37:21.966 +0800 ERROR TcpOutputFd - Read error. An existing connection was forcibly closed by the remote host.
11-01-2019 16:37:21.967 +0800 WARN TcpOutputProc - Applying quarantine to ip=xx.xx.xx.xx port=9997 _numberOfFailures=2
11-01-2019 16:37:22.142 +0800 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
11-01-2019 16:37:25.062 +0800 WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group splunk has been blocked for 300 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
We have other universal forwarders sending logs to the same heavy forwarder and they work fine.
Thanks in advance; any help will be much appreciated.
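Since other UFs reach the same heavy forwarder fine, it may be worth ruling out network or SSL differences specific to this host. Some hedged checks (host names are placeholders):

```
# On the UF: which outputs are actually in effect, and from which files?
/opt/splunkforwarder/bin/splunk btool outputs list --debug

# Raw TCP reachability from the UF to the HF's receiving port:
nc -vz hf.example.com 9997

# On the heavy forwarder: confirm it is listening for forwarded data:
/opt/splunk/bin/splunk display listen
```

"Read error. An existing connection was forcibly closed" often points to an SSL mismatch (one side expecting TLS on 9997 and the other not) or to an intermediate firewall resetting the session.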
↧
Linux deployment of Universal Forwarder: issue with being prompted to create a user
Hi - I am trying to deploy the universal forwarder to Linux. We have Altiris to deploy both the script and the package, and a service account on the machines we want to deploy to, so I don't need the complete end-to-end scripts I've been seeing on the Splunk Answers board while researching this.
My issue with this script is that no matter how I structure it, it always prompts to create a user, even though the "edit user" command is what the Splunk documentation gives for configuring the user.
Any ideas or a workaround for this? I could be misunderstanding something, so feel free to rework this if you think I am approaching it incorrectly.
#!/bin/sh
tar xvzf /tmp/splunkforwarder-8.0.0-1357bef0a7f6-Linux-x86_64.tgz -C /opt
/opt/splunkforwarder/bin/splunk edit user admin -password fakepassword -auth admin:fakepassword --accept-license --answer-yes
/opt/splunkforwarder/bin/splunk enable boot-start -user serviceaccount
/opt/splunkforwarder/bin/splunk set deploy-poll "172.16.182.76:8089"
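Since Splunk 7.1, a first start with no admin user triggers the interactive prompt, and `edit user` cannot run until the instance is already up and authenticated. A hedged variant that seeds the credentials before first start (same password and paths as the script above):

```sh
#!/bin/sh
tar xvzf /tmp/splunkforwarder-8.0.0-1357bef0a7f6-Linux-x86_64.tgz -C /opt

# Seed the admin user so the first start never prompts.
cat > /opt/splunkforwarder/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = fakepassword
EOF

/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt
/opt/splunkforwarder/bin/splunk set deploy-poll "172.16.182.76:8089" -auth admin:fakepassword
/opt/splunkforwarder/bin/splunk enable boot-start -user serviceaccount
```

user-seed.conf is consumed on first start and the credentials it contains become the admin account, so no interactive input is needed.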
↧
↧
Unable to install Splunk Universal Forwarder on Network drive
Hi Team,
Currently, I am facing the following issue:
- I would like to install the Splunk UF package (6.5.3) on a network drive on a Windows system.
Windows server IP: 10.23.97.2
- I was able to copy the UF package to the above Windows server under Installer.
- I have mapped the network drive path to the I: drive on the local system:
\\10.23.97.2\Installer\Splunk_UF_6.5.3.msi
- When I double-click the .msi package, I get the message "Unable to install the package as it is not a local drive."
I can't log in to the server using Remote Desktop.
I am unable to install the UF package on this server. Kindly help me with how I can install the UF package.
Does Splunk support network installation? Regards, Santosh
↧
Does a file monitor input work even if the log being monitored is open for writing by the application that manages it?
Hello all,
As the title states, I'd like to know whether a file input continues to index a log even though that file is open for writing by the application that manages it. I'm busy evaluating whether to keep UFs on source systems with file inputs active, or whether it might be better to externalize those logs through a secondary process and index those to avoid performance issues.
Best regards,
Andrew
↧
Splunk forwarder - When will the log file be sent to the Splunk indexer?
Hi,
I'm currently monitoring log files on a Unix server. A job application writes log files to a directory.
I want to monitor only log files that have finished being written. How can I do that?
I don't want Splunk to compare the beginning of the file to decide whether a file was already sent to the indexer, because if new lines are added to the file the forwarder will not see them.
If the forwarder compares the last lines to decide whether a file is new, it will send the same file multiple times, right?
Can you help me, please?
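One common pattern for "index only finished files" is to have the job move completed files into a handoff directory and use a batch input, which reads each file once and then deletes it. A sketch, where the path and sourcetype are assumptions:

```
# inputs.conf on the forwarder (path and sourcetype are placeholders)
[batch:///var/log/jobs/done/*.log]
move_policy = sinkhole
sourcetype = job_logs
disabled = false
```

With `sinkhole`, the file is removed after indexing, so this should only point at copies the job has finished writing, never at the live files.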
↧
Universal Forwarder Windows 2019 Server core: Domain account set up
Is Windows Server 2019 Core supported for the universal forwarder?
I need to install the universal forwarder in another domain to get security logs from the domain controller.
What domain account would I need to set up?
↧
↧
Deployment of Universal Forwarder to Apple Mac fleet
Our company operates a fleet of Apple Macs. We would like to automate the deployment and configuration of the Universal Forwarder agent to these Macs via our MDM platform, but there is very little information provided by Splunk on how to automatically configure the MacOS Universal Forwarder to communicate with our Splunk infrastructure. Given the size of the Mac fleet we ideally do not wish to have a technician install and configure the Universal Forwarder on every machine manually.
The only documentation we've been able to locate is what is posted on this Splunk web page: "docs.splunk.com/Documentation/Forwarder/8.0.0/Forwarder/Installanixuniversalforwarder#Install_the_universal_forwarder_on_Mac_OS_X" - which unfortunately does not provide any guidance on automatically applying the custom configuration settings during the install.
For the MSI (Windows) version of the Universal Forwarder installer there are a number of parameters available, such as 'DEPLOYMENT_SERVER', 'AGREETOLICENSE', 'SPLUNKUSERNAME' and 'SPLUNKPASSWORD' (ref: "docs.splunk.com/Documentation/Forwarder/latest/Forwarder/InstallaWindowsuniversalforwarderfromthecommandline"). Does anyone know if these parameters are also available for the macOS version of the Universal Forwarder installer?
If anyone has experience with deploying the Universal Forwarder to a large Mac fleet we'd be keen to hear how you've automated that process. If indeed it is possible to do so...
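The tgz install can be scripted much as on Linux, so an MDM post-install script is one option. A minimal sketch, where the install path, deployment server address, and credentials are all assumptions rather than documented macOS installer parameters:

```sh
#!/bin/sh
INSTALL_DIR="/Applications/SplunkForwarder"

# Point the UF at the deployment server before first start (placeholder host).
mkdir -p "$INSTALL_DIR/etc/system/local"
cat > "$INSTALL_DIR/etc/system/local/deploymentclient.conf" <<'EOF'
[target-broker:deploymentServer]
targetUri = ds.example.com:8089
EOF

# Seed admin credentials so the first start is non-interactive.
cat > "$INSTALL_DIR/etc/system/local/user-seed.conf" <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = changeme
EOF

"$INSTALL_DIR/bin/splunk" start --accept-license --answer-yes --no-prompt
"$INSTALL_DIR/bin/splunk" enable boot-start
```

Since all remaining configuration can then be pushed from the deployment server, the MDM script only has to lay down these two files and start the service.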
↧
Splunk UF Deployment - Possible Issues
Hello. We are planning to deploy UFs across our enterprise (~3,000 systems). Currently we have deployed UFs to 50 systems and have seen no issues. Before doing a large deployment covering the entire enterprise, I was curious whether anyone has seen issues arise from deploying UFs at scale.
↧
Heavy Forwarder Configuration Query
Hi All,
I have inherited Splunk Enterprise at my company, which includes 3 indexers, 2 search heads, and one each of a deployment server, license master, and cluster master.
Now, in order to receive events from more than 250 servers, do I need to set up a separate heavy forwarder (server), or can we use the existing setup and have one of those instances act as a heavy forwarder?
Thanks,
↧
Splunk Forwarder support by Splunk
Hi - We are upgrading Splunk to 7.2.8 since 7.0 is out of support. The universal forwarders are not mentioned on the Splunk support page, which refers only to Splunk Enterprise. Do the UFs need to be upgraded to be covered under Splunk support?
https://www.splunk.com/en_us/legal/splunk-software-support-policy.html
↧
↧
Execute a command through the CLI on a remote system
When I run `splunk cmd`, I can execute any external system command using Splunk's context.
I want to combine that with the `-uri` parameter to be able to send remote commands to Universal Forwarders.
However, the `cmd` engine treats `-uri` as part of the command itself; for example:
`splunk cmd dir -uri https://uf_hostname:8089`
`dir: cannot access https\://uf_hostname\:8089: No such file or directory`
How can I send the command to a remote Splunk instance?
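`-uri` is handled by Splunk's own CLI commands rather than by `cmd`, which simply executes the external binary locally, so arbitrary system commands cannot be shipped to a remote host this way. Two hedged alternatives, with placeholder credentials:

```
# Supported administrative CLI commands do accept -uri against a remote instance:
splunk list forward-server -uri https://uf_hostname:8089 -auth admin:changeme

# Or query the UF's management REST API on port 8089 directly:
curl -k -u admin:changeme https://uf_hostname:8089/services/server/info
```

Both approaches are limited to what the management interface exposes; running arbitrary OS commands remotely would need a separate remote-execution mechanism.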
↧
Syslog vs Forwarders
If I have an environment with an rsyslog collection server that is working just fine and collecting from thousands of endpoints, should I keep that or get rid of that and collect events using a UF on each endpoint?
↧
How to run basic PowerShell script on universal forwarder
I'm trying to do something very simple, but for some reason I cannot get it to work. I'm trying to run the basic PowerShell command below on a universal forwarder (on a Windows 10 workstation), but the output is not reaching Splunk.
One question I have is what sourcetype I should be using. Each PowerShell command will have a different output, so do I need a sourcetype for each command I run?
(I have read the article, but it's just not clicking for me: https://docs.splunk.com/Documentation/Splunk/8.0.0/Data/MonitorWindowsdatawithPowerShellscripts)
Key points:
*Workstation is connected to the deployment server
*I am using a very basic custom add-on app that hosts the PowerShell command
*Custom add-on app info:
Two directories: local and metadata. The local folder has two files: app.conf and inputs.conf (shown below).
[powershell://test-script]
script = Get-Process | Select-Object Handles, NPM, PM, WS, VM, Id, ProcessName -Last 5
schedule = **system is not showing this correctly but it polls every minute**
sourcetype = Windows:Process
↧
How to remove the Windows message description
Found a great article on how to remove the Windows message description - https://www.hurricanelabs.com/splunk-tutorials/windows-event-log-filtering-design-in-splunk# - and followed the article to create the following props/transforms conf files:
props.conf:
[source::WinEventLog:Security]
TRANSFORMS-removedescription = removeEventDesc1
transforms.conf:
[removeEventDesc1]
LOOKAHEAD = 16128
REGEX = (?msi)(.*)This event is generated
DEST_KEY = _raw
FORMAT = $1
Waited some time for the UFs to phone home and pick up the change, but when I search the Windows events, I still see the description in the event.
Any idea or insights as to why would be greatly appreciated.
Thx
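As a quick offline sanity check, the transforms.conf regex above can be exercised with Python's `re` module, whose inline flags `(?msi)` behave compatibly here. The sample event text below is made up for illustration:

```python
import re

# Hypothetical Windows Security event text; only the part before the
# boilerplate marker should survive the transform.
raw = ("An account was successfully logged on.\n"
       "Subject:\n"
       "\tSecurity ID: S-1-5-18\n"
       "This event is generated when a logon session is created.")

# Same pattern as the transforms.conf REGEX; group 1 ($1) becomes the new _raw.
pattern = re.compile(r"(?msi)(.*)This event is generated")
match = pattern.search(raw)
new_raw = match.group(1) if match else raw
print(repr(new_raw))
```

If the regex strips the description here but indexed events still contain it, the props/transforms are probably not deployed where parsing actually happens, which is typically the indexers or a heavy forwarder rather than the UFs.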
↧
↧
Is there a way to delay splunk universal forwarder from monitoring specific files?
Hello,
We have an issue monitoring os_metrics logs, where the log entries are generated by the Windows wmic command and written to a file under `D:\catmstarFiles\systems\main\logs\os_metrics*.log`.
The issue is that events are distorted even after placing the props (see below) on our heavy forwarder and search head cluster. The same set of files is read correctly if we copy them to a test server and monitor them there; in real time, however, the events are not breaking correctly as expected.
So, I just wanted to know if there is an attribute that can be used in inputs.conf to slow down UF file reading/monitoring, or whether something should be done at the source end to delay writing files to this particular path.
Can anyone please advise? If it's something to be done at the source end, I will reach out to the team concerned and get it discussed. Thanks in advance.
[sourcetype]
SHOULD_LINEMERGE=true
NO_BINARY_CHECK=true
CHARSET=AUTO
BREAK_ONLY_BEFORE=\w+\s+\d+\/\d+\/\d+\s+\d+:\d+:\d+.\d+
disabled=false
TIME_PREFIX=\w+\s
TIME_FORMAT=%m/%d/%Y %H:%M:%S.%N
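One way to narrow this down is to test the BREAK_ONLY_BEFORE pattern offline against real lines from the file. A minimal Python sketch, where the sample lines are invented and should be replaced with lines from an actual os_metrics log:

```python
import re

# Same pattern as BREAK_ONLY_BEFORE in the props above. Note that the "."
# before the milliseconds is unescaped, so it matches any character, not
# only a literal dot; escaping it (\.) would be stricter.
break_re = re.compile(r"\w+\s+\d+/\d+/\d+\s+\d+:\d+:\d+.\d+")

# Hypothetical sample lines; substitute real lines from os_metrics*.log.
lines = [
    "CPU 11/01/2019 16:37:21.594 usage=42",  # should start a new event
    "    free_mem=1024",                     # continuation, should not match
]
print([bool(break_re.match(line)) for line in lines])  # -> [True, False]
```

If the pattern matches correctly against saved copies of the file but events still break badly in real time, the distortion is more likely in how the file is written (partial lines flushed mid-event) than in the props themselves.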
↧
How to monitor and alert when the Splunk universal forwarder service has been stopped or modified?
On my universal forwarders, I want the ability to monitor and alert when the Splunk Universal Forwarder service has been stopped or modified.
Any options for how to do this?
I am already looking into basic Windows event monitoring for Windows services, but I wondered whether there is a Splunk-specific way to do this.
Perhaps a Splunk app or something?
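One Splunk-native option is to alert when a forwarder stops sending its internal logs, since a stopped service also stops phoning home. A hedged SPL sketch, where the 15-minute threshold is an assumption to tune:

```
| metadata type=hosts index=_internal
| eval minutesSinceLastSeen = round((now() - recentTime) / 60, 0)
| where minutesSinceLastSeen > 15
| table host, minutesSinceLastSeen
```

Saved as a scheduled alert, this flags hosts whose UF has gone quiet; packaged apps such as the Splunk Deployment Monitor perform similar missing-forwarder checks.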
↧
Splunk UF Backlog
Hey everyone, quick UF question here... If a UF stops for whatever reason and then comes back online later, will it send the backlog it missed while the service was offline?
↧