Channel: Questions in topic: "universal-forwarder"
Viewing all 1551 articles

Why does introspection on a Splunk 6.3.3 AIX universal forwarder not give data on Resource usage (PerProcess, Hostwide, or Dispatch)?

Hi. After having enabled introspection_generator_addon on a Universal Forwarder on AIX, I get data for Partitions and FishBucket, but not for PerProcess, Hostwide, or Dispatch. My Universal Forwarder is version 6.3.3, and the app has been enabled using the CLI on the AIX host. After enabling, the UF was restarted. Is anyone else out there having the same problem, and has anyone found a workaround? I do not have access to the AIX machine and have almost no knowledge of AIX. Kind regards, Lars

Why is my universal forwarder on Windows server 2012 R2 not collecting log files with my attempted configurations?

I have a monitor that isn't working. I turned debug on in log.cfg, and the Universal Forwarder reports no match on the whitelist. The following two configurations have been tried:

```ini
[monitor://E:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\LogFiles]
disabled = false
index = app_ops_prod
whitelist = ReportServerService*.log
sourcetype = mssql:ilink:rptsvrsvc
ignoreOlderThan = 3d
```

or

```ini
[monitor://E:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\LogFiles\ReportServerService*.log]
disabled = false
index = app_ops_prod
sourcetype = mssql:ilink:rptsvrsvc
ignoreOlderThan = 3d
```

splunkd.log says that it matches the stanza and then skips the file:

```
04-07-2016 17:34:20.348 +0000 DEBUG TailingProcessor - Item '' matches stanza: /E:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\LogFiles.
04-07-2016 17:34:20.348 +0000 DEBUG TailingProcessor - Not using stanza for this item (File did not match whitelist 'ReportServerService*.log'.).
```

The file is `E:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\LogFiles\ReportServerService__test.log`. The host is Windows Server 2012 R2, and the UF is at version 6.2.6.
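A likely explanation, though not confirmed in the thread: `whitelist` in inputs.conf is matched as a regular expression, not a shell glob. As a regex, `ReportServerService*.log` means "Servic", zero or more "e", any single character, then "log", which never matches `ReportServerService__test.log`. A regex form of the first stanza would be:

```ini
[monitor://E:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\LogFiles]
disabled = false
index = app_ops_prod
# regex, not glob: ".*" for "any characters", "\." for a literal dot
whitelist = ReportServerService.*\.log$
sourcetype = mssql:ilink:rptsvrsvc
ignoreOlderThan = 3d
```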

How do I configure a universal forwarder to send data to the Splunk Cloud free trial?

Hi, I recently started using the Splunk Cloud free trial. I installed a universal forwarder locally and authorized it with the credential downloaded from Splunk Cloud. I don't see any option in the Splunk Cloud UI to configure a receiving port. How do I make the forwarder send data to Splunk Cloud? Thanks, Saravana

Universal Forwarder has not removed itself from the DMC

I had a host go down in AWS a few weeks ago that was not recoverable, and the universal forwarder is still showing as missing in the Distributed Management Console. Does anyone know how to force its removal?

Why am I unable to disable a Deployment Client using a "splunk" user account?

Hello guys, I have installed a Splunk Universal Forwarder in my environment and set the deployment server. I also have an account named "splunk" which owns /opt/splunkforwarder. However, if I sudo to splunk and then try to disable the deployment client, I'm not able to do so; I get a permission denied error. If I sudo to root, I am able to disable the deployment client. Any idea why that is? Regards

I don't want to monitor or forward the Apache log files to Splunk server anymore. Is there any solution to stop it or delete it?

```
sudo /opt/splunkforwarder/bin/splunk add monitor /var/log/apache2 -index main -sourcetype Apache2
```

I don't want to monitor or forward the Apache log files from the Universal Forwarder to the Splunk server anymore. Is there any solution to stop it or delete it?
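Assuming the monitor was added with the CLI as shown, it can be removed the same way. Note this only stops future forwarding; events already indexed remain on the Splunk server unless you delete them there.

```
sudo /opt/splunkforwarder/bin/splunk remove monitor /var/log/apache2
sudo /opt/splunkforwarder/bin/splunk restart
```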

How to troubleshoot why a universal forwarder is forwarding duplicate events for monitored CSV files?

We are processing CSV files to index in Splunk, but the Splunk forwarder is always forwarding files twice. Can you please guide us on how to avoid this duplicate indexing? If we keep a low number of files in the directory, we don't see the duplicate indexing; if the number of files is huge, we see duplicate entries. We rsync the files from Google storage, and that is causing this issue. Currently we have more than 90,000 CSV files in one directory. Can you please suggest how to handle this case?

Forwarder config:

**inputs.conf**

```ini
[monitor:///opt/apps/appdata/apps/test/]
index = mobileapps
sourcetype = mobilegpcsv
crcSalt =
whitelist = \.csv$
```

**props.conf**

```ini
[mobilegpcsv]
CHARSET = UCS-2-INTERNAL
INDEXED_EXTRACTIONS = csv
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = Date
TIME_FORMAT = %Y-%m-%d
disabled = false
pulldown_type = true
```

Also noticed this in the splunkd.log file:

```
04-11-2016 12:39:00.266 -0400 WARN UTF8Processor - Using charset UTF-8, as the monitor is believed over the raw text which may be UCS-2-INTERNAL - data_source="/opt/apps/appdata/apps/test/installs_com.test.aaaa_201206_overview.csv", data_host="node.abc.xyz.com", data_sourcetype="mobilegpcsv"
04-11-2016 12:39:03.270 -0400 INFO WatchedFile - Will begin reading at offset=0 for file='/opt/apps/appdata/apps/test/installs_com.test.aaaa_201206_overview.csv'.
04-11-2016 12:39:03.272 -0400 WARN UTF8Processor - Using charset UTF-8, as the monitor is believed over the raw text which may be UCS-2-INTERNAL - data_source="/opt/apps/appdata/apps/test/installs_com.test.aaaa_201206_overview.csv", data_host="node.abc.xyz.com", data_sourcetype="mobilegpcsv"
```

With DEBUG logging enabled:

```
04-11-2016 16:32:49.354 -0400 DEBUG WatchedFile - setting trailing nulls to false via 'true' or 'false' from conf'
04-11-2016 16:32:49.354 -0400 DEBUG WatchedFile - Loading state from fishbucket.
04-11-2016 16:32:49.354 -0400 DEBUG WatchedFile - Attempting to load indexed extractions config from conf=source::/opt/apps/appdata/apps/test/installs_com.test.aaaa_201206_overview.csv|host::node.abc.xyz.com|mobilegpcsv|6 ...
04-11-2016 16:32:49.354 -0400 DEBUG WatchedFile - Loaded indexed extractions settings: mode=2 HEADER_FIELD_LINE_NUMBER=0 HEADER_FIELD_DELIMITER=',' HEADER_FIELD_QUOTE='"' FIELD_DELIMITER=',' FIELD_QUOTE='"'
04-11-2016 16:32:49.354 -0400 DEBUG WatchedFile - Reading for CSV initCrc...
04-11-2016 16:32:49.354 -0400 DEBUG WatchedFile - CSV initCrc: skip_bytes=127 at have_read=256. Note that skip_bytes might be different to the actual number of bytes skipped in the file because of utf-8 conversion during utf8Converter parsing.
04-11-2016 16:32:49.354 -0400 DEBUG WatchedFile - Reading for CSV initCrc...
04-11-2016 16:32:49.354 -0400 DEBUG WatchedFile - CSV initCrc: checksum_bytes=61 after consumed=67 at have_read=512.
04-11-2016 16:32:49.354 -0400 DEBUG WatchedFile - Reading for CSV initCrc...
04-11-2016 16:32:49.354 -0400 DEBUG WatchedFile - Reading for CSV initCrc...
04-11-2016 16:32:49.354 -0400 DEBUG WatchedFile - Reading for CSV initCrc...
04-11-2016 16:32:49.354 -0400 DEBUG WatchedFile - Reading for CSV initCrc...
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - Reading for CSV initCrc...
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - Reading for CSV initCrc...
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - Reading for CSV initCrc...
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - Reading for CSV initCrc...
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - Reading for CSV initCrc...
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - initcrc has changed to: 0x58eed70075603e5f.
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - Record found, will advance file by offset=4552 initcrc=0x58eed70075603e5f.
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - Attempting to load indexed extractions config from conf=source::/opt/apps/appdata/apps/test/installs_com.test.aaaa_201206_overview.csv|host::node.abc.xyz.com|mobilegpcsv|300 ...
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - Loaded indexed extractions settings: mode=2 HEADER_FIELD_LINE_NUMBER=0 HEADER_FIELD_DELIMITER=',' HEADER_FIELD_QUOTE='"' FIELD_DELIMITER=',' FIELD_QUOTE='"'
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - min_batch_size_bytes set to 20971520
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - seeking /opt/apps/appdata/apps/test/installs_com.test.aaaa_201206_overview.csv to off=4552
04-11-2016 16:32:49.355 -0400 INFO WatchedFile - Resetting fd to re-extract header.
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - Saving off=4552 before processing header...
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - seeking /opt/apps/appdata/apps/test/installs_com.test.aaaa_201206_overview.csv to off=0
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - Loaded structured data settings: configured=1 mode=2 HEADER_FIELD_LINE_NUMBER=0 HEADER_FIELD_DELIMITER=',' HEADER_FIELD_QUOTE='"' FIELD_DELIMITER=',' FIELD_QUOTE='"'.
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - Restoring off=4552 after processing header.
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - seeking /opt/apps/appdata/apps/test/installs_com.test.aaaa_201206_overview.csv to off=4552
04-11-2016 16:32:49.355 -0400 DEBUG WatchedFile - Reached EOF: /opt/apps/appdata/apps/test/installs_com.test.aaaa_201206_overview.csv (read 0 bytes)
04-11-2016 16:32:49.356 -0400 DEBUG WatchedFile - setting trailing nulls to false via 'true' or 'false' from conf'
```
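One setting commonly suggested for duplicate-ingestion cases like this (an assumption, not a confirmed fix for this rsync scenario): note that the `crcSalt` line in the config above has no value. Setting it to the special token `<SOURCE>` makes the file path part of the checksum, so many CSV files that begin with identical headers are not mistaken for one another. The trade-off is that if rsync writes files under temporary names and renames them, `<SOURCE>` can itself cause re-reads, which would match the "Will begin reading at offset=0" log line.

```ini
[monitor:///opt/apps/appdata/apps/test/]
index = mobileapps
sourcetype = mobilegpcsv
# literal token <SOURCE>: mixes the file path into the initial CRC
crcSalt = <SOURCE>
whitelist = \.csv$
```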

How to troubleshoot why a universal forwarder is not forwarding data to Splunk Cloud?

Hi All,

**Universal Forwarder**

- I got my Splunk Cloud free trial login.
- Downloaded the universal forwarder app.
- Installed the app using the credentials downloaded as an .spl file.
- Added a particular directory to monitor.

**Using Splunk Enterprise Forwarder**

- Configured the Splunk Cloud instance and port in the forwarding section of my Splunk Enterprise.
- Not able to see a receiving port section in the Splunk Cloud instance.

When I do `list monitor`, the directory appears in the list of monitored directories, but the data is not available in search on Splunk Cloud. Please let me know where the problem might be. Thanks, Saravana

How to edit my configuration to collect Windows event logs with a universal forwarder to send to a syslog collector?

Yes, this question has been asked a hundred times. I have looked at all of the examples, but my grasp of the different conf files and their interactions is lacking.

First: I have a Windows device with the Universal Forwarder installed (version 6.3). My destination device is a syslog server (TIBCO LogLogic, accepts standard syslog). My config files are as follows (this is the entire config, not snippets):

**inputs.conf**

```ini
[default]
host = $decideOnStartup
connection_host = "ip"

[script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled = 1

[WinEventLog://Security]
index = winevt
disabled = 0
current_only = 0
```

**transforms.conf**

```ini
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group
```

**props.conf**

```ini
[host::10*]
TRANSFORMS-mine = send_to_syslog

[source:*]
SEDCMD-rmlines = s/[\n\r\t]/ /g
```

**outputs.conf**

```ini
[tcpout:group1]
server = 172.17.1.12:514
sendCookedData = false

[syslog:my_syslog_group]
server = 172.17.1.12:514
type = tcp
timestampformat = %b %e %H:%M:%S
```

The problems I am having:

1. I was hoping for something much simpler, just something in outputs.conf: Winevent in, syslog out.
2. I get a lot of junk information (it looks like Splunk application info) with "INFO" or "WARN" that has nothing to do with Windows events.
3. Most importantly: my Windows logs are broken into newlines! A single Winevent takes 15 or so lines. My transforms.conf seems to do nothing, nor do any of the other examples I have seen.

So yes, I am getting Windows logs as syslog, but the data is not usable to the end user due to the newlines. Any help would be greatly appreciated!
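If I remember the forwarder architecture correctly (worth verifying against the docs for your version): `_SYSLOG_ROUTING` transforms, `SEDCMD`, and `[syslog:...]` output groups are applied on a parsing instance such as a heavy forwarder, not on a universal forwarder, which would explain why the transforms appear to do nothing. A sketch of the heavy-forwarder side, reusing the group name from the question (also note the props stanza would normally be written `[source::*]`, with two colons):

```ini
# outputs.conf on a heavy forwarder (not a UF) that receives from the UF
[syslog:my_syslog_group]
server = 172.17.1.12:514
type = tcp
timestampformat = %b %e %H:%M:%S
```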

Splunk App for Stream: How to enable payload data extraction on Universal Forwarder?

I have installed the Splunk App for Stream on the search head and Splunk TA stream on a universal forwarder. I also installed Splunk TA stream on the indexer. Now I need to extract the payload data as well. I am trying to enable it; however, nothing is working. What is the CLI option to enable payload extraction on the UF so that it is visible on the SH?

Where Is Timezone Offset Information on Universal Forwarder?

I am trying to determine why some of my forwarders sending in data from Windows virtual desktop instances are having their data time-offset at the indexer while others are not. I know the documentation says that post-6.0 infrastructure will respect the timezone information dictated by the forwarder. Where is this information specified on the forwarder? I don't remember configuring anything like that. Is it something that the installer obtains from the local machine at install time? I'm trying to confirm whether this setting is in place on the forwarder or whether the offset issue is occurring on the indexer side.

Why are Spool Mail contents only shown partially in Splunk?

We have set up a Splunk monitor for getting the contents of `/var/spool/mail/root` into Splunk. We are running a Splunk 6.2.8 Universal Forwarder on all the Linux hosts, and the Splunk Enterprise version on the indexer is 6.2.1.

```
splunk add monitor /var/spool/mail
```

Though we are seeing the contents of root's mail in Splunk, they are partial, as shown in the attachment. How do we make sure we get the full contents of root's mail rather than just the first few lines?

![alt text][1]

[1]: /storage/temp/122225-splunk-image.png

Why am I getting "Login failed" trying to add a Splunk universal forwarder?

I am using Splunk Enterprise (Amazon Marketplace AMI). I have added a forwarding receiving port, 9997, and installed a universal forwarder, but adding the forward-server fails (xx.xx.xxx.xx is my server IP):

```
PRODUCTION [root@jenkins bin]$ ./splunk add forward-server xx.xx.xxx.xx:9997 -auth admin:abcdef@123
Login failed
```

But using the console at xx.xx.xxx.xx:8000 with the same username and password, I am able to log in. Please help.
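A possible cause, not confirmed here: the `-auth` flag authenticates against the local forwarder's own splunkd, not against the remote Splunk Enterprise instance, so the web-console password is irrelevant. On a freshly installed 6.x universal forwarder, the local default was `admin:changeme`:

```
# On the forwarder host; credentials are for the *local* splunkd,
# not the Splunk Enterprise web login (6.x UF default was admin:changeme):
./splunk add forward-server xx.xx.xxx.xx:9997 -auth admin:changeme
```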

How do I fix a large amount of duplicate events that are locking out my instance?

I've been tasked with installing Splunk Cloud on our hosted Windows environment, and I'm running into issues getting all of the forwarding working properly. I have two Universal Forwarders sending data to a Heavy Forwarder acting as a Gateway Forwarder. This Gateway Forwarder is then communicating with our cloud SaaS. On the Universal Forwarder instance I get the following error: No connection could be made because the target machine actively refused it. The problem is I already have the Gateway Forwarder set to accept connections on this port, and additionally, there are no firewall rules to block the communication. The logs on the Gateway Forwarder report that essentially all of the logs coming through it are possible duplicates, and after some point, the cloud SaaS blocks communications temporarily. This duplicate entry issue is appearing for Splunk's own logs as well as the logs for our application. I've tried reinstalling the Universal Forwarders, but are there any other steps that I could follow or configurations that I could change? Thanks in advance!

How to edit local a universal forwarder configuration that was pushed via deployment server?

I use my deployment server to deploy the Splunk Add-on for Microsoft Windows to Universal Forwarders.

```
Splunk_TA_windows/
├── default
│   └── inputs.conf   # unchanged defaults
├── local
│   └── inputs.conf   # edited
```

I enabled the Security log in local/inputs.conf, like:

```ini
[WinEventLog://Security]
disabled = 0
```

Everything works great. However, I have one user that wants to enable a few more things. Let's say that he wants:

```ini
[WinEventLog://Application]
disabled = 0
```

Where would he make that change? Wouldn't the deployment server overwrite Splunk_TA_windows/local/inputs.conf if he made the change there?
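One commonly used pattern (an assumption about your setup, not something from the thread): keep deployment-server-managed apps pristine and put machine-specific overrides in a separate app, since Splunk merges inputs.conf stanzas across all apps. The app name below is made up for illustration:

```ini
# $SPLUNK_HOME/etc/apps/my_local_windows_inputs/local/inputs.conf
# (hypothetical app; created locally, not deployed by the deployment
#  server, so it survives updates to Splunk_TA_windows)
[WinEventLog://Application]
disabled = 0
```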

Do we need to install a universal forwarder on our MySQL machine, or only install the Splunk Add-on for MySQL on the indexer and enable DB Connect inputs?

Hi :) I have read the Splunk MySQL docs, but I have a question: do we have to install a universal forwarder on the MySQL machine to get the MySQL general and error logs, or can we just install the add-on on the indexer and enable DB Connect inputs? Thanks

How to configure a universal forwarder to send data to a specific index on our Splunk Cloud instance?

Hi, I'm trying to send data to a specific index on our Splunk Cloud instance. I've tried several methods found on answers.splunk.com, but still with no apparent success. What I've tried:

```
/opt/splunkforwarder/bin/splunk add monitor /home/oracle/workdir/*csv -index top10
Parameters must be in the form '-parameter value'
```

```
# cat /opt/splunkforwarder/etc/system/local/inputs.conf
[default]
host = (hostname omitted, but it is there)

[monitor:///home/oracle/workdir/*csv]
sourcetype=csv
index=top10
```

The latter was followed by a restart of the forwarder. In Splunk, an all-time search of `index=top10` yields 0 results. Not sure what I'm missing.
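One approach that may help, assuming the `top10` index already exists on the Splunk Cloud side (events sent to a nonexistent index are dropped): wildcards in monitor paths can be finicky, so monitor the directory itself and select files with `whitelist`, which is a regular expression:

```ini
[monitor:///home/oracle/workdir]
# regex against the full path; matches files ending in .csv
whitelist = \.csv$
sourcetype = csv
index = top10
```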

How to troubleshoot why universal forwarders are reporting "Could not send data to output queue (parsingQueue), retrying..."?

I'm getting the messages below in the Universal Forwarders' splunkd.log:

```
INFO BatchReader - Could not send data to output queue (parsingQueue), retrying...
INFO TailingProcessor - Could not send data to output queue (parsingQueue), retrying...
INFO TailReader - Could not send data to output queue (parsingQueue), retrying...
```

I did follow these steps:

1. `grep "*blocked=true*" /opt/app/splunkforwarder/var/log/splunk/metrics.log*` — I don't see any blocked queues.
2. I added limits.conf in /opt/apps/splunkforwarder/etc/system/local:

   ```ini
   [thruput]
   maxKBps = 0
   ```

Still I see the message: Could not send data to output queue (parsingQueue), retrying... What should I look at next to resolve this?
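A sketch of the next checks, assuming the usual cause (the forwarder's network output to the indexer is blocked, which backs up into parsingQueue). Note also that the leading `*` in the quoted grep pattern makes it an invalid/literal regex, so it would miss real `blocked=true` lines; a fixed-string grep avoids that:

```
grep -F "blocked=true" /opt/app/splunkforwarder/var/log/splunk/metrics.log*
grep -iE "tcpout|connect" /opt/app/splunkforwarder/var/log/splunk/splunkd.log
```

If the second grep shows connection failures or timeouts to the indexer, the problem is downstream (receiver port, firewall, or an overloaded indexer), not the forwarder's thruput limit.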

Why is one universal forwarder reporting "Error writing to "/opt/app/splunkforwarder/var/log/splunk/metrics.log": No space left on device"?

I saw this error in splunkd.log on one of the Universal Forwarders:

```
04-13-2016 19:42:38.555 -0500 ERROR Logger - Error writing to "/opt/app/splunkforwarder/var/log/splunk/metrics.log": No space left on device
```

I ran `df -h` on the box and don't see any space constraints. What kind of space is the Splunk log referring to? Can anyone please shed some light on this? Thanks!
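One thing worth checking (a common cause of this symptom, though not confirmed here): ENOSPC with free blocks in `df -h` often means the filesystem has run out of inodes rather than bytes, which still fails every file creation or append.

```shell
# Check inode usage rather than block usage; a 100% IUse% column here
# explains "No space left on device" despite free space in `df -h`.
# Replace / with the filesystem that holds the forwarder's var/log directory.
df -i /
```

A filesystem remounted read-only after an error can produce similar write failures, so `mount` output for that filesystem is worth a look too.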

How to add Cisco devices to the Cisco Networks App for Splunk Enterprise?

I have Cisco logs coming into my syslog-ng server, and I added the log file on a universal forwarder to monitor and send to a Splunk server. How do I check whether or not data is being dumped into the indexer? I also want to add Cisco devices to the Cisco Networks App in Splunk. How do I do this?