Channel: Questions in topic: "universal-forwarder"
Viewing all 1551 articles

Splunk indexing more than normal amount of data after re-installation of the universal forwarder

The universal forwarder installed on "server A" was uninstalled on 14th May due to an issue, so after 14th May, logs from "server A" were not being indexed in Splunk. On 30th May, we re-installed the universal forwarder on "server A", but there was a huge spike in the data ingested over the next couple of days: if the daily ingestion rate had been 1GB per day, it started ingesting at approx. 15GB per day for the next 2 days. Moreover, the source from which the logs are ingested on "server A" keeps only 1 day's worth of data. Can somebody please explain how, in the above scenario, the indexing of data increased almost 15 times?

Windows Event Collection and Splunk

So I have read many of the posts here regarding Windows Event Collection and Splunk. So far I have not been able to find what I'm looking for, which is probably pretty basic stuff, but I haven't been able to get Splunk to do what I need. Here are my questions:

1. How do I get Splunk to override the host with the computer name? I have tried setting this up in props and transforms on my indexer (not the WEC server running the Universal Forwarder). I copied props.conf and transforms.conf to */splunk/etc/system/local* and edited those, as per the warning in the files; I assume that is the correct location. I have tried both of these (one at a time) and neither worked. Am I supposed to be setting this up on the indexer or on the WEC server where the forwarder is installed?

[WinEventLog:*]
TRANSFORMS-change_host = WinEventHostOverride

[(?:::){0}WinEventLog:...]
TRANSFORMS-FixWinEventLogHost = WinEventLog-SetForwarderName,WinEventLog-SetOriginatingHost

When my WEC server receives security events from various Windows boxes, those events get forwarded to Splunk; however, they show up as coming from the WEC server, not from the individual computer name.

2. Is it possible to get the Universal Forwarder to NOT forward all of its metrics info, etc.? When I do a search in Splunk for things from my WEC server, I see page after page of this.

3. When I install the forwarder, should I be selecting "Forwarded Events" and "Security Events", or just one or the other? I only want Security Events; however, they are forwarded from other systems.

Thanks for any assistance!
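For reference, a commonly cited sketch of such a host override. This is an illustration, not a confirmed fix: it assumes the raw events contain a `ComputerName=` field, and index-time transforms like this take effect on the first full Splunk instance that parses the data (the indexer or a heavy forwarder), not on a Universal Forwarder:

```
# props.conf
[WinEventLog:Security]
TRANSFORMS-change_host = WinEventHostOverride

# transforms.conf
[WinEventHostOverride]
REGEX = ComputerName=(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1
```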

Is there a way to determine the install date for Splunk universal forwarders?

We are using SCCM to install Splunk Universal Forwarder in our organization and via our Deployment server, I can keep track of when the UF is installed on endpoints. Is there a way via a search or using the REST API to see what the install date is for each UF? Being that we're doing a rolling install I'd like to keep track of which date the UF was installed on each endpoint. Thx
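One hedged approach, assuming your `_internal` retention covers the rollout window: approximate each forwarder's install date by the first time it sent internal logs. The index and sourcetype below are standard, but treat this as a sketch rather than a definitive answer:

```
| tstats earliest(_time) AS first_seen WHERE index=_internal sourcetype=splunkd BY host
| eval install_date=strftime(first_seen, "%Y-%m-%d")
| fields host install_date
```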

No event from WinEventLog://Microsoft-Windows-TaskScheduler/Operational

Hello, I have the following entry in inputs.conf in the Universal Forwarder app to collect all the Windows Task Scheduler logs:

[WinEventLog://Microsoft-Windows-TaskScheduler/Operational]
disabled = false
sourcetype = WinEventLog:TaskScheduler
index = prod_swift

and I have the following entry in the HF app's props.conf just to route them to the correct index cluster:

[WinEventLog:TaskScheduler]
TRANSFORMS-routing2 = routeC2
disabled = false

Other logs on the same server with the same routing are coming in just fine, but I don't see a single event for the Task Scheduler, even though we can see new events in the Event Viewer every minute or so. There's no error message in splunkd.log, and I can see the following entry in metrics.log:

*06-20-2019 22:20:57.646 +1000 INFO Metrics - group=per_sourcetype_thruput, series="**wineventlog:microsoft-windows-taskscheduler/operational**", kbps=0.259293, eps=1.032258, kb=8.038086, ev=32, avg_age=1.000000, max_age=1*

So I suppose it should have been coming in somewhere, but I just couldn't find it. It's not in the target index, nor the default index, so now I'm not entirely sure it's actually being read. The splunkforwarder is already running as "Local System" on the box, and we're using v6.1.1 at the moment. Any ideas, please?

Unable to initialize modular input "WinEventLog" after server restart

Having an intermittent problem with the UF on multiple servers where it occasionally fails to start the WinEventLog component after a system restart. This is happening on a number of servers, and we only started seeing it after upgrading them to Windows Server 2016. When the service starts, it logs these two lines:

06-23-2019 04:44:20.122 +0000 ERROR ModularInputs - Unable to initialize modular input "WinEventLog" defined in the system context: Introspecting scheme=WinEventLog: script running failed (exited with code 255).
06-23-2019 04:44:19.575 +0000 ERROR ModularInputs - Introspecting scheme=WinEventLog: killing process, because executing it took too long (over 30000 msecs).

When this happens, other input modules continue to read events. For example, _internal, stream, and other data continues to get sent from the system, but nothing is processed from the Event Log. Restarting the Splunk UF service on the server instantly fixes the problem, so I know it's not a problem with inputs.conf or anything else. It simply seems that some component fails to start within 30 seconds and Splunk gives up on it. The fact that this happens intermittently on the same system (some restarts everything is fine, other times this happens) confirms it. Things I've tried:

- Changing the service to Delayed Start: no change. I found some obscure documentation that in Server 2016 Microsoft configured services launched with Delayed Start to run with lowest priority: https://blogs.technet.microsoft.com/askperf/2008/02/02/ws2008-startup-processes-and-delayed-automatic-start/ . Relevant quote: "The Service Control manager also sets the priority of the initial thread for these delayed services to THREAD_PRIORITY_LOWEST. This causes all of the disk I/O performed by the thread to be very low priority."
- Upgraded from 7.1.3 to 7.2.x: no change.
- Opened a ticket with support: there are no tunable parameters for this.

Turning on debug logging for this module ("category.ModularInputs=DEBUG") did not reveal any additional helpful information. The only idea I have left is to brute-force this and add a scheduled task to restart the service 10-15 minutes after a system restart, but before I do that, any suggestions from the community?

Can you use the Splunk universal forwarder to forward data to a Splunk Enterprise instance running locally?

I am trying to get a Universal Forwarder installed on a server to forward some log data to my Splunk Enterprise instance that is running on localhost (not hosted on any server). Is this possible? If I forward the data from the Universal Forwarder to my host's IP address and enable receiving, will it work?
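For context, a minimal outputs.conf sketch for this scenario. The IP address is a placeholder for the machine running Splunk Enterprise, and receiving on port 9997 must be enabled on that instance (Settings > Forwarding and receiving > Configure receiving):

```
# outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = local-enterprise

[tcpout:local-enterprise]
server = 192.168.1.50:9997
```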

About configuring forwarding to use SSL

I want to ask about a couple of points.

1. When using the default certificate, `sslVerifyServerCert` in `outputs.conf` is `false`, and `requireClientCert` in `inputs.conf` is `true` by default. In this case, there is no verification on the server side; it seems that only the client side is verified. Is such a setting recommended?
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Inputsconf
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Outputsconf

2. In the steps of the following manual, `requireClientCert` is configured as `false` on the indexer side, and `requireClientCert` is also configured as `false` on the forwarder side. In this case, I think it isn't necessary to configure `clientCert` and `serverCert`; am I wrong?
https://docs.splunk.com/Documentation/Splunk/7.3.0/Security/ConfigureSplunkforwardingtousethedefaultcertificate
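For reference, the linked manual's default-certificate setup boils down to roughly the following sketch. The indexer address is a placeholder, and `password` is the well-known default passphrase for the certificates that ship with Splunk; treat this as an illustration of the manual's steps, not a recommended production configuration:

```
# outputs.conf on the forwarder
[tcpout:group1]
server = 10.1.12.112:9997
clientCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password

# inputs.conf on the indexer
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
```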

Universal Forwarder Upgrades and our Deployment Server

I've been working on automating our UF upgrade process and have found what appears to be an issue with a deprecated key, sslKeysfilePassword ... When I upgrade an old 6.1 or 6.2 host beyond Splunk 6.5, I've found that while the UF can still maintain forwarding over SSL to our indexers, it can no longer handshake with our deployment server. After spending most of my week on this, I've come across a workaround: prior to performing the upgrade (stopping splunk; tar -zxf blah), if I remove the deprecated key **sslKeysfilePassword** from etc/system/local/server.conf, the handshake problem is no longer an issue. The odd thing here is that this is the **only** thing that had to be changed to rectify the issue, but my understanding of a deprecated setting is that it would just be ignored. That doesn't appear to be the case in this instance. So this isn't really a question per se, but has anyone ever run up against this before?

How to start indexing log files based on date

I have a new installation of Splunk Enterprise and we're about ready to start indexing our log files from our various applications. Currently, if we point our various Splunk Forwarders to our log directories, with file name filters, the volume of data imported would exceed our quota within a few hours. How do we configure the import so we only import log files created on or after a specified date and ignore everything else?
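One hedged option is `ignoreOlderThan` in inputs.conf, which filters on the file's modification time (not on dates embedded inside the file). Note the documented caveat that a file skipped this way is never indexed later, even if it is subsequently updated. The path, index, and sourcetype below are placeholders:

```
# inputs.conf on the forwarder
[monitor:///var/log/myapp/*.log]
index = main
sourcetype = myapp_log
ignoreOlderThan = 7d
```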

Splunk UF on VDI image re-ingests old data when image is reloaded

I have no doubt this is a configuration problem, but unfortunately I can't find how to proceed. The problem occurs when a new Citrix image is put out to the user base. The image is updated, saved, and then pushed out to the environment. Once this is done and a system boots the image, Splunk starts to re-ingest all of the data that was ingested previously; the UF is unaware this data was already ingested. I looked into a couple of items such as followTail (recommended against) as well as ignoreOlderThan (I believe the file is appended to, not rolled over). Normally I wouldn't really be bothered by this, but it causes around an additional 30GB to be indexed. It doesn't occur too often (1-2 times per month), but it does trigger a license warning when it happens. Thanks for any help!

Splunk UF Not reading logs.

Hi, I am in a situation now: my Splunk Universal Forwarder is sometimes sending logs and sometimes it isn't. I checked that logs are generated constantly on the systems and that the input path mentioned is correct. We didn't change anything; the last logs we got were on July 1. Then we restarted a couple of forwarders and reloaded the deployment server, and after some time 2 events came in. The next day, no events, and today, July 5, we received only 5 events, which is way below normal. Sometimes it sends very few logs and sometimes none at all. I checked the _internal index and the splunkd logs and didn't find any error messages. What might be the issue here? Before July 1 everything was normal; log ingestion, input paths, everything is the same as today.

Configured but inactive forwarders

Hello Splunkers! I'm in doubt: I have installed the UF on a Windows server, but when I run list forward-server it says there are no active forwarders, even though one is configured, on port 9997, and also the deploy with 8088. What issue do you think it is? Is there a way to activate the forwarder? Thanks

Monitoring file truncates earlier than expected, but uploading file works just fine.

I have an input set up on a universal forwarder where I am monitoring a log file. The monitor on Splunk seems to read the file line-by-line and is truncating log entries way too early. Uploading the file into Splunk works just fine, though. Here is an example of a log entry that I am trying to read:

2019-07-08 22:25:42.314 INFO [MessageHandler.java:91] Processing the following message from Queue ------------------------------------ 2019-07-07T23:11:39.0002019-08-04T23:36:29.0002019-07-07T22:49:51.0002019-08-04T23:58:50.000test.bsp......... ------------------------------------

However, when I search for this data, I find an entry that looks like this:

2019-07-11 17:00:27.192 INFO [MessageHandler.java:91] Processing the following message from Queue ------------------------------------

Then, when I search for the time within the `` XML tag, I've found this as a standalone event: 2019-07-07T23:11:39.000

It seems that the monitor is reading the file line-by-line instead of respecting the line-break rules defined in props.conf, and the date parser then takes the incorrect time as the log time. The logger might be flushing the file to disk after each line, but that is completely out of my control. Here is the sourcetype in my props.conf:

[my-sourcetype]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n\s]*)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{3}\s[\w+\s\[\w\.]+(\:\d+)?\]
MAX_EVENTS = 2000
MAX_TIMESTAMP_LOOKAHEAD = 23
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
TIME_PREFIX = ^
TRUNCATE = 50000
category = Custom
description = This is my source type yo
pulldown_type = true

When I test this by uploading the file directly, this source type works just fine AND reads the entire log entry as a single event. How can I adjust it so it works for the monitor:// input type as well?
-----------------

Edit: here are the properties from the upload: ![screenshot of file upload props][1]

Edit 2: where does the event breaking and fitting to a sourcetype actually occur? I did change the sourcetype on my universal forwarder instance. Do I need to rename the sourcetype on the UF, or will the indexer/searcher update the sourcetype? Or does it not matter?

[1]: https://i.imgur.com/2AlJ6bJ.png

Is the HEC a loosely coupled solution?

One of our clients wonders which solution is more loosely coupled: the Universal Forwarder or HEC. I see the decoupling in the Universal Forwarder solution, as the writer of the logs and the reader (the UF) are completely independent of each other. However, I'm not sure about the HEC solution. From [Loose coupling][1]: "In computing and systems design a loosely coupled system is one in which each of its components has, or makes use of, little or no knowledge of the definitions of other separate components. Subareas include the coupling of classes, interfaces, data, and services. Loose coupling is the opposite of tight coupling."

[1]: https://en.wikipedia.org/wiki/Loose_coupling

Universal Forwarder invoke powershell script on-demand

How can I run a PowerShell script on a Universal Forwarder on-demand, instead of scheduling it in inputs.conf and restarting the splunkforwarder service to pick up the change and run it? I am able to schedule my scripts in inputs.conf; however, when testing those scripts it is sometimes inconvenient to set them up in inputs.conf and then restart the splunkforwarder service, as the restart may execute other scripts that are also in that inputs.conf. Is there a way for me to execute a script on-demand through PowerShell and have it send data to the index without restarting the Universal Forwarder service?

[powershell://GetJobStatus]
script = . "C:\SplunkAutomation\GetJobsStatus.ps1"
schedule = 0 40 5 * * ?
sourcetype = Windows:Powershell
index = powershell

NMON Performance Monitor for Unix and Linux Systems: TA_nmon app producing data on universal forwarder but not going to indexer

I have a 50G dev license sandbox where I've installed NMON on the indexer and TA_nmon on one of the universal forwarders (manually, since my dev instance doesn't seem to allow a deployment server). But I never see data arrive at the indexer. On the forwarder, I can see csv files cyclically come and go in `/opt/splunkforwarder/var/log/nmon/var/csv_repository/`, but nothing ever shows up on the indexer. E.g., `index=mon` or `index=*mon*` show no results. *[Note that the above is under* .../var/log/ *on my install and not* .../var/run/ *per the troubleshooting article.]* If I search on `index=_internal host=myUFHost *nmon*` I see lots of results saying things like:

WatchedFile - Checksum for seekptr didn't match, will re-read entire file='/opt/splunkforwarder/var/log/nmon/var/csv_repository/dev-app01_57_VM.nmon.csv'.

and

WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/nmon/var/csv_repository/dev-app01_11_VM.nmon.csv'.

If I constrain the search to a given `file=`, I can see that at least some of these messages repeat roughly hourly for a given file name. (I'm guessing the numbers are the minute within the given hour?) I did some searching on these messages and saw some suggestion that perhaps the UF tries to read the file before it's populated? Or perhaps it's getting deleted before processing completes? With some help from folks on the Splunk Slack #getting-data-in channel, I blithely tried `index=_internal "drop" "index"` and got a few hits like this on sourcetype=mongod:

2019-07-18T22:01:01.226Z I STORAGE [conn967] dropCollection: s_nmon1Dpb033BBAauqdcA1GXmim53_kv_nmoyLxvM60i16Ei2OkLQ@wn5GLC.c (7bdb7e61-4fa5-48ff-bf30-2fe97841eaa6) - index namespace 's_nmon1Dpb033BBAauqdcA1GXmim53_kv_nmoyLxvM60i16Ei2OkLQ@wn5GLC.c.$_UserAndKeyUniqueIndex' would be too long after drop-pending rename. Dropping index immediately.

Any guidance would be greatly appreciated.

Platform:
- Splunk Enterprise 7.0.3
- Linux RHEL5 64bit (2.6.18-419.el5)

Places I've looked:
- https://answers.splunk.com/answers/400165/nmon-performance-monitor-for-unix-and-linux-system-5.html
- http://nmonsplunk.wikidot.com/documentation:userguide:troubleshoot:troubleguide
- https://answers.splunk.com/answers/126878/what-more-can-i-do-to-solve-file-too-small-to-check-seekcrc-probably-truncated-will-

Thanks!

Is there a way to force UF to phone home to DS?

Hi all, I just want to ask if there's a way to force a UF to phone home to the DS. We want to initiate a forced phone home without editing the phone-home interval in deploymentclient.conf; currently the UF is set to phone home to the DS every 6 hours, but we have a requirement that sometimes it must phone home to the DS asap. I've already tried restarting the UF and running ./splunk reload deploy-server on the DS to see if that triggers the UF to phone home, but it does not work. Is there a way to do this? Any suggestion would help. Thanks and regards, ...

How much data to send to one forwarder

Hello, I am setting up a log collector with a Universal Forwarder attached for collecting network logs (syslog-ng) and then sending them to Splunk Cloud. I am wondering if there is a good rule of thumb/best practice as to how many devices, or how much data should be sent to one collector/forwarder. I plan to collect logs from: 6 firewalls, 32 routers, 165 switches, as well as some software logs like Cisco ISE. All of those devices are spread around the world. Should I set up collectors in regional data-centers, or would I be OK sending everything to one?

What is the admin account for on a Universal Forwarder?

I have UFs on some "sensitive" servers, and the owners who did the install are questioning the purpose of the admin account. I have just accepted the fact that all Splunk nodes require credentials and an account. Is there an official document or explanation for why a UF needs one? These are Windows servers. Thank you.
