Questions in topic: "universal-forwarder"

(Beginner) How do I use the Splunk universal forwarder?

Hello. I want to import some data into Splunk Enterprise: not Kubernetes logs or metrics, but Git commit information (who committed, when, how many files, file names, ...) collected on a recurring schedule. I've heard I should use the Splunk universal forwarder. I found that a Docker container for it exists, but I can't find a Helm chart for it.

My first idea was to run the forwarder container every x minutes from Jenkins, but I can't find any example of that, and I realized I would need a persistent volume for it. My second idea is to run the forwarder container as a daemon on Kubernetes, but I can't find an example (or a Helm chart) for that either.

What is the best way to use the forwarder in this case? Is there a plan to provide a Helm chart for the forwarder? Many thanks.
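One pattern that sidesteps the Helm question entirely: let the scheduled job (Jenkins or a Kubernetes CronJob) append commit records to a file, and let a single long-running forwarder monitor that file. A rough sketch, where the path, index, and sourcetype are all hypothetical:

```
# run every x minutes by the scheduler; appends one line per commit
git log --since="15 minutes ago" --pretty=tformat:'%H %an %aI %s' >> /var/log/git-commits/commits.log
```

```
# inputs.conf on the forwarder
[monitor:///var/log/git-commits/*.log]
disabled = false
index = git
sourcetype = git_commits
```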

Dealing with a UF client that is sending too much data

I have a number of Windows clients using the universal forwarder to send a small log file to Splunk, typically around 15 KB per day per client. However, while testing this I found a client that is sending almost 1 GB a day rather than the expected 15 KB. It appears this client is having issues and is writing a massive number of errors to the log daily.

If I scale the deployment of the UF for this app out to more clients, I am concerned that multiple clients with this issue could push my data ingest up to an unsustainable level. I need to reduce the amount of data this client (and any future clients with the same issue) sends, but I don't want to exclude it entirely, because then I won't be able to see which clients have this manic log-writing issue.

What is the best way to solve this? Can I limit the total data that can be forwarded per client for this app, or can I de-duplicate the data prior to forwarding to reduce the amount sent? It writes the same log lines repeatedly within the same timestamp. Thanks for any advice you can offer.
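Not a full answer, but one relevant knob is the forwarder's throughput cap in limits.conf, which rate-limits rather than drops data; a minimal sketch:

```
# limits.conf on the universal forwarder
[thruput]
# delay reads once forwarding exceeds ~256 KB/s; data is throttled, not dropped
maxKBps = 256
```

Note this caps the rate, not the daily total, so a runaway client would still trickle everything through eventually.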

Is the universal forwarder 8.0 supported on Windows 2012 R2?

The [Forwarder Manual 8.0][1]'s system requirements section links to the [Splunk Enterprise Installation Manual 8.0][2], which only lists Windows Server 2016 and 2019. But I'm unclear whether that just means indexers and search heads are only supported on the newer Windows versions, or whether the forwarder is likewise only supported there. Has Windows Server 2012 R2 support been dropped from the universal forwarder in 8.0?

[1]: https://docs.splunk.com/Documentation/Forwarder/8.0.0/Forwarder/Systemrequirements
[2]: https://docs.splunk.com/Documentation/Splunk/8.0.0/Installation/Systemrequirements

Will an updated datetime.xml temporarily solve the Y2K timestamp issue?

I recently migrated to Splunk Cloud and completed the necessary version upgrades to ensure we are covered by the timestamp-issue patching. However, I still have an on-prem instance of Splunk (still widely used by teams) that will be decommissioned in the next few months, once loose ends with the cloud instance are tied up. The on-prem environment runs version 6.6.3. Rather than upgrade to a compatible version, can I simply apply the updated datetime.xml to each on-prem Splunk server to solve the Y2K timestamp issue? Obviously this would be a temporary solution, just long enough to let me complete the cloud migration and decommission the on-prem environment. Thanks!
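For reference, applying the patched file is a copy plus a restart per instance; a sketch, assuming the updated datetime.xml from Splunk has already been downloaded to the server:

```
# on each on-prem Splunk server (default *nix install path assumed)
cp /tmp/datetime.xml $SPLUNK_HOME/etc/datetime.xml
$SPLUNK_HOME/bin/splunk restart
```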

Universal forwarder error from splunk-wmi.exe

I have been troubleshooting this problem for a little while now with no luck. Does anyone have guidance on what is causing the following error? It is raised by the splunk-wmi.exe process.

```
WMI - Error occurred while trying to retrieve results from a WMI query
(error="Specified class is not valid." HRESULT=80041010)
(root\cimv2: SELECT Name, IDProcess, PrivateBytes, PercentProcessorTime FROM Win32_PerfFormattedData_PerfProc_Process)
```
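That HRESULT (WBEM_E_INVALID_CLASS) usually points at corrupt performance-counter classes on the Windows host rather than at Splunk; a commonly cited repair, sketched here and best tried on a test box first:

```
REM run in an elevated command prompt on the affected host
REM rebuild the performance counter registry from its backup store
lodctr /R
REM re-register performance libraries with WMI
winmgmt /resyncperf
```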

What is the max file size that a universal forwarder can input via a batch stanza?

The Splunk universal forwarder's inputs.conf batch stanza is attempting to read CSV files ranging in size from 10 MB to 2 GB. On the forwarder, splunkd.log shows "Stale file handle" and CRC-calculation related warnings and errors on the larger files (e.g. 800 MB and 1.4 GB). Those files are not indexed, and then they are deleted. Are there hard or configurable file-size limits? And what might cause these issues other than file size?
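While the size-limit question stands, CRC trouble on large, similar files is often about the hashed window; a sketch of the sort of batch stanza in play, with a hypothetical path, index, and sourcetype, plus a wider CRC window:

```
# inputs.conf on the forwarder -- path, index, and sourcetype are hypothetical
[batch:///data/csv_drop/*.csv]
move_policy = sinkhole
index = main
sourcetype = csv
# hash more than the default 256 bytes so files with identical headers are told apart
initCrcLength = 1024
```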

How does the Universal Forwarder work?

Hi all, I have a question about the universal forwarder. I have to switch the target URIs in deploymentclient.conf and outputs.conf because I created a new cluster master (the new one is the production environment). If the switch does not work, I will change the URIs back to the original.

Will I lose data during the switchover? Or will the UF send the backlog once it is pointed at the new cluster? And if the UF sends data to the new cluster and I then point it back at the original cluster, will the data that was already sent to the new cluster be missing from the original? Thank you for helping me.
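For clarity, these are the two settings being switched; hostnames are placeholders:

```
# deploymentclient.conf on the UF
[target-broker:deploymentServer]
targetUri = new-deployment-server.example.com:8089

# outputs.conf on the UF
[tcpout:primary_indexers]
server = new-idx1.example.com:9997,new-idx2.example.com:9997
```

Broadly speaking, for monitored files the UF tracks read positions locally and pauses when its output is unreachable, so a switchover tends to delay rather than lose data; events already indexed in one cluster do not flow back to the other.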

Universal Forwarder props.conf and transforms.conf settings

I am trying to get the output of a Python script to the indexer, so I added transforms.conf and props.conf under C:\Program Files\SplunkUniversalForwarder\etc\system\local:

transforms.conf

```
[myexternaltable]
REGEX = (.)
external_cmd = addnum.py $1
DEST_KEY = queue
FORMAT = indexQueue
```

props.conf

```
[sitescope_daily2_log]
TRANSFORMS-runscript = myexternaltable
```

But it's not working. Can anyone please help me with the correct settings needed on the UF? Thanks, Niloo
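One note while this awaits an answer: `external_cmd` belongs to lookup-table transforms, and a universal forwarder does not run parsing-time TRANSFORMS at all; indexing a script's output is normally done with a scripted input. A hypothetical sketch (app name and interval invented):

```
# inputs.conf on the UF, inside a hypothetical app
[script://$SPLUNK_HOME\etc\apps\myapp\bin\addnum.py]
interval = 300
index = main
sourcetype = sitescope_daily2_log
```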

How to configure the Splunk forwarder to monitor a file whose name changes daily

Hi all, I am trying to monitor a log file that is generated in a fixed path every day at 23:55 by a Python script. My problem is that the file name changes every day, because the script appends the date to it. For example, today the file name is "eswitch_16122019_235501_7000.log"; tomorrow it will be "eswitch_17122019_235501_7000.log". My inputs.conf is:

```
[monitor:///opt/home/splunk_eswitch/eswitch_*.log]
disabled = false
index = test2
sourcetype = eswitch
```

When I run `splunk list monitor` I see:

```
/opt/home/splunk_eswitch/eswitch_*.log
/opt/delphi/splunk_eswitch/eswitch_16122019_235501_7000.log
```

My question: will the forwarder send tomorrow's newly created log file to the indexer without any issue, given that yesterday's file will no longer be present in the same path? And is there a better pattern to use in inputs.conf than the one above?
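The wildcard shown should pick up each new day's file on its own; if old files linger or reappear, one optional refinement (same stanza, one added setting) is to skip anything stale:

```
[monitor:///opt/home/splunk_eswitch/eswitch_*.log]
disabled = false
index = test2
sourcetype = eswitch
# optional: ignore files not modified in the last 2 days
ignoreOlderThan = 2d
```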

How do I copy forwarder inputs from one indexer to another indexer?

I'm working on load-balancing the universal forwarders and want to make sure the additional indexer that will now receive inputs from forwarders is configured to accept them.
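For reference, "configured to accept" on the indexer side is the splunktcp input; a minimal sketch using the default receiving port:

```
# inputs.conf on the additional indexer (or via the CLI:
#   $SPLUNK_HOME/bin/splunk enable listen 9997)
[splunktcp://9997]
disabled = false
```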

Splunk datetime issue - does this affect Universal Forwarders forwarding to Splunk Cloud?

We use Splunk Cloud and have 3 heavy forwarders (which I updated yesterday with the new datetime.xml). We also have about 10 universal forwarders (most of them on Windows). Do I need to apply the datetime.xml fix to those as well? Thanks!

How to keep a PowerShell process alive

Hello, I've created a PowerShell script that I use to monitor a folder. It all works as it's supposed to, but when I deploy it as a Splunk app, Splunk starts the script but doesn't keep the PowerShell process alive. Here are the inputs.conf and .path files I've used.

inputs.conf

```
[script://$SPLUNK_HOME\etc\apps\TA_TEST\bin\FolderMonitor.path]
disabled = false
interval = -1
index = winlogs
```

FolderMonitor.path

```
$Systemroot\System32\WindowsPowerShell\v1.0\powershell.exe -executionpolicy bypass -Command " & '$SPLUNK_HOME\etc\apps\TA_TEST\bin\FolderMonitor.ps1'"
```

I've tried several things: changing the .path file to `powershell.exe -noexit -noprofile -executionpolicy bypass -Command`, which didn't work when deployed by Splunk (it does work if I run it directly from a command prompt), and changing the interval from -1 to 0, but that just kept starting new PowerShell processes, and I need the original process to stay alive. Any tips or help would be greatly appreciated. With kind regards, Patrick
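One approach sometimes used instead of keeping the process alive from the outside: loop inside the script itself, so the single launch from `interval = -1` never exits. A hypothetical wrapper around the existing FolderMonitor logic:

```
# FolderMonitor.ps1 -- sketch; the real folder-inspection work goes inside the loop
while ($true) {
    # ... check the folder and write events to stdout for Splunk to ingest ...
    Start-Sleep -Seconds 30   # poll interval instead of exiting
}
```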

Best Practices for SNMP traps from Universal Forwarder

I am trying to send SNMP traps from Cisco wireless controllers to our universal forwarder, which has net-snmp installed. I have it working and data is reaching the indexer, but I have the problems listed below. Note that I cannot send traps directly to Splunk; all data needs to hit the UF first.

**SNMP output is not clean**

With STRING, INTEGER, and other type tokens mixed between the key-value pairs, Splunk parses them incorrectly. A hacky fix would be to strip this data with SEDCMD in props.conf, but it is not working; my assumption was that SEDCMD doesn't work on a Windows indexer, but I've been told it should. Is there a better way with net-snmp to prevent this?

```
2019-12-27 10:14:28
Agent_Address = 0.0.0.0
Agent_Hostname = UDP: [10.20.20.10]:44369->[10.20.20.200]:162
PDU_Attribute_Value_Pair_Array:
sysUpTimeInstance = Timeticks: (1440866000) 166 days, 18:24:20.00
snmpTrapOID.0 = OID: bsnDot11StationAssociate
bsnStationAPMacAddr.0 = STRING: 5c:83:8f:79:6d:40
bsnStationAPIfSlotId.0 = INTEGER: 1
bsnUserIpAddress.0 = IpAddress: 10.20.196.141
bsnStationUserName.0 = STRING: limguest
bsnStationMacAddress.0 = STRING: 78:7e:61:d1:d0:f8
bsnAPName.0 = STRING: "uslcoAP2302"
```

**Breaker lines not working**

Multiple events appear under a single event in Splunk.

props.conf

```
[snmptrapd]
DATETIME_CONFIG =
KV_MODE = none
LINE_BREAKER = ([\r\n]+)Agent_Address\s=
MAX_TIMESTAMP_LOOKAHEAD = 60
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = Date\s=\s
TZ = UTC
category = Custom
description = parse snmptrapd logging with custom kvpair splunk formatting
disabled = false
pulldown_type = true
EXTRACT-node = ^[^\[\n]*\[(?P<node>[^\]]+)
REPORT-snmptrapd = snmptrapd_kv
```
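For the first problem, the SEDCMD being alluded to would look roughly like the sketch below; the regex is a guess at the type tokens in the sample, and it must live on the instance that parses the data (indexer or heavy forwarder), since a UF does not apply SEDCMD:

```
# props.conf on the parsing tier
[snmptrapd]
# strip "STRING:", "INTEGER:", "OID:", etc. so key=value pairs parse cleanly
SEDCMD-strip_asn_types = s/ (STRING|INTEGER|OID|IpAddress|Timeticks): / /g
```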

Install Universal forwarder from Splunk Deployment Server?

Hi, I want to monitor many devices, both at my local site and at remote sites. Can I push the universal forwarder agent installation to these devices from the Splunk deployment server?

*Nix add-on with official universal forwarder docker: cannot run cpu.sh nor install sar/mpstat in splunk's official container

We're able to partially get the official Splunk universal forwarder Docker container to run the official *Nix add-on so an endpoint can collect and send its basic host metrics, but some of the add-on's collector scripts fail, such as `cpu.sh`:

```
[ansible@alpha bin]$ cat debug--cpu.sh--Wed_Jan__1_12-35-08_UTC_2020
Not found any of commands [sar mpstat] on this host, quitting
```

Most scripts (e.g. `netstat`/`top`/`ps`) run fine since we use `docker run --pid=host`. However, the official container appears to be stripped down, so `cpu.sh` has missing dependencies as above. We were just going to `apt-get install sar`... except we see no apt-get/apt/apk/yum.

- Is there an alternate universal forwarder container we can put on these endpoints? This feels like the usual "alpine vs slim" issue, and other enterprise projects do dual releases here, but I couldn't find any.
- Is there some other way to install those packages while keeping the forwarder in a slim container?
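One possible route, sketched with no guarantees: rebuild the image with sysstat baked in, assuming your tag's base still ships a package manager (recent Red Hat based tags include microdnf; if yours truly has none, the packages would have to be copied in from another image):

```
# hypothetical Dockerfile -- verify the base image and package manager for your tag
FROM splunk/universalforwarder:latest
USER root
RUN microdnf install -y sysstat && microdnf clean all
USER splunk
```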

Getting a list of the universal forwarders

Hi there, I want to get a list of forwarders from the metrics logs, but the base events have me confused. Below is a sample. For the same hostname ("hostname=ip-10-142-xx-29.us-west-2.compute.internal") I see different "sourceIp" and "sourceHost" values. Can anybody help me understand which one is the actual identifier for a forwarder: sourceIp, sourceHost, or hostname?

```
01-03-2020 16:11:41.894 +0000 INFO Metrics - group=tcpin_connections, ingest_pipe=1, 10.xx.xx.107:6018:9997, connectionType=cooked, sourcePort=6018, sourceHost=10.xx.xx.107, sourceIp=10.xx.xx.107, destPort=9997, kb=1.6796875, _tcp_Bps=132.10136144345114, _tcp_KBps=0.12900523578462025, _tcp_avg_thruput=0.12900523578462025, _tcp_Kprocessed=1.6796875, _tcp_eps=0.5376218198279988, _process_time_ms=0, evt_misc_kBps=0, evt_raw_kBps=0, evt_fields_kBps=0, evt_fn_kBps=0, evt_fv_kBps=0, evt_fn_str_kBps=0, evt_fn_meta_dyn_kBps=0, evt_fn_meta_predef_kBps=0, evt_fn_meta_str_kBps=0, evt_fv_num_kBps=0, evt_fv_str_kBps=0, evt_fv_predef_kBps=0, evt_fv_offlen_kBps=0, evt_fv_fp_kBps=0, build=8f0ead9ec3db, version=7.1.1, os=xyz, arch=x86_64, hostname=ip-10-142-xx-29.us-west-x.compute.internal, fwdType=uf, ssl=true, lastIndexer=None, ack=true
```
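As an aside, a search along these lines tabulates the connected forwarders so the three fields can be compared side by side (a sketch, not tested against your data):

```
index=_internal source=*metrics.log* group=tcpin_connections fwdType=uf
| stats latest(sourceIp) AS sourceIp, latest(sourceHost) AS sourceHost, latest(version) AS version BY hostname
```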

Assigning sourcetype by host - UF

Hi all, I have a UF that receives syslog over UDP port 514. I am trying to set sourcetypes based on the sending hosts' IPs, but I can't figure it out. For example, for [host::192.168.0.1] I want to set a sourcetype of "wineventlog". Note: I don't have the option of separating the logs into different folders by host. Thanks!
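For reference, per-host sourcetype overrides are typically a props/transforms pair keyed on the host metadata, and they must run where parsing happens (an indexer or heavy forwarder, not the UF). A sketch for the example host:

```
# props.conf on the parsing tier
[source::udp:514]
TRANSFORMS-set_st_by_host = set_wineventlog_for_host

# transforms.conf on the parsing tier
[set_wineventlog_for_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::192\.168\.0\.1$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::wineventlog
```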

Stop UF service to delete and reinstall app via Deployment Server

I have an issue deploying the Splunk Stream app. The Stream apps are already installed on the UFs, but when I reload the deployment server I get an error: the config can't be overwritten because a running file (NPF) blocks the change. I don't have shell access to these servers, so I need to resolve this using either the deployment server or an app with a scripted input, but I have no idea how to accomplish this. I just need to stop that service, then delete and reinstall the Stream app, if possible. Thanks in advance.
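In case it helps to sketch the scripted-input route: a one-shot script pushed by the deployment server could stop the service before the Stream app is redeployed. Everything here is hypothetical (the app name, the script, and the assumption that NPF is a stoppable Windows service):

```
# inputs.conf in a hypothetical "stop_npf" app pushed from the deployment server
[script://$SPLUNK_HOME\etc\apps\stop_npf\bin\stop_npf.bat]
# -1 = run once when splunkd loads the input
interval = -1
disabled = false
```

stop_npf.bat:

```
REM hypothetical: stop the NPF service/driver that holds the lock
sc stop npf
```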

Not able to read CSV from Universal forwarder

I am trying to read a CSV from one of my universal forwarders. Below is my inputs file:

```
[monitor://D:\DUMP\Updated_Dump*.CSV]
sourcetype = csv
disabled = false
index = xyz
crcSalt =
```

Checking splunkd.log I see the events below, but nothing is indexed:

```
INFO TailingProcessor - Adding watch on path: D:\DUMP
INFO TailingProcessor - Parsing configuration stanza: monitor://D:\DUMP\Updated_Dump*.CSV
```

Please let me know how this can be resolved.
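While this awaits an answer, two quick checks that often narrow such cases down; the commands assume a default install path:

```
REM confirm what the forwarder thinks of each file it watches
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list inputstatus

REM show the effective monitor stanza after all config layering
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list --debug
```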

How to do custom encryption and decryption on the Splunk universal forwarder

I am trying to do custom encryption and decryption of data on the universal forwarders. I want to configure the Splunk UF to use my own certificates and forward the encrypted data to a third-party system (a Java socket). The reason I am doing this is so the third-party side can recover the Splunk event logs over the Java socket connection by decrypting them. How can I do this on the Splunk UF?
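For the certificate half of this, the UF's TLS settings live in outputs.conf; note the forwarder encrypts in transit with TLS but has no hook for arbitrary custom payload encryption. A sketch with placeholder host and paths (setting names vary slightly by version; older releases call the client certificate setting sslCertPath):

```
# outputs.conf on the universal forwarder -- placeholder host and paths
[tcpout:java_receiver]
server = receiver.example.com:9997
# send plain (uncooked) data, since the receiver is not a Splunk instance
sendCookedData = false
clientCert = /opt/splunkforwarder/etc/auth/mycerts/forwarder.pem
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/ca.pem
sslPassword = password_for_the_certificate_key
sslVerifyServerCert = true
```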