Channel: Questions in topic: "universal-forwarder"
Viewing all 1551 articles

I installed a universal forwarder on a Windows host, but why do I not see this host in the list of forwarders under "Add data"?

I have installed a Splunk universal forwarder on a Windows host and started the service. But when adding data under "Add data" in my Splunk app, I do not see the Windows machine in the list of forwarders. Do I need to edit inputs.conf on the forwarder? Could someone share the steps to send logs from a Windows machine to a Splunk server (Linux)?
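A minimal sketch of the usual setup (the hostname, port, and event log channel below are placeholders, not taken from the question):

```
# outputs.conf on the Windows forwarder
# (C:\Program Files\SplunkUniversalForwarder\etc\system\local\outputs.conf)
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = linux-splunk-server.example.com:9997

# inputs.conf on the forwarder -- e.g. collect the Application event log
[WinEventLog://Application]
disabled = 0
```

The Linux server also needs a receiving port enabled (Settings > Forwarding and receiving > Configure receiving, commonly 9997). Hosts typically only appear in forwarder lists after the forwarder has successfully connected at least once.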

Why are multiple host names being reported for the same host?

'Morning... I have a v6.5, clustered environment (deployment server), Universal Forwarder on all hosts. Several Linux systems are reporting in with two names, shortname and FQDN. But not all of them are doing this, even members of the same server class. All the shortnames are only pulling a **sourcetype** of **syslog** or **linux_messages_syslog** and are only **source=/var/log/messages**. The FQDNs are showing the appropriate sourcetypes and sources (all under **/var/log/** -- but NOT messages). I have a very simple **inputs.conf** being deployed:

```
[monitor:///var/log]
index = servers
disabled = 0
```

I confirmed that syslog is not configured on these hosts to also send to my heavy forwarders. They report in to the Forwarder Management interface as one system (a mixture of short and FQDN names). I haven't found many mentions of this here -- I guess this is not very common...? Thoughts? Thanks! Michael
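A quick way to see what each name variant is sending — a sketch assuming the `servers` index from the question:

```
index=servers
| stats values(sourcetype) AS sourcetypes values(source) AS sources by host
```

If each affected machine shows up twice in the `host` column, comparing the two rows' sources should confirm whether two different inputs (for example a local syslog pipeline versus the deployed monitor stanza) are each stamping their own host value.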

How to install the Monitoring of Java Virtual Machines on a Universal Forwarder?

Hi All, We plan to install SPLUNK4JMX on the universal forwarder so that the app runs on the UF's local machine and sends the data to the indexer. The reason for this is that our JVM uses the Java 1.6.0 JDK and we don't want to open the ephemeral port range in the firewall for sending the data. So far we have deployed these folders/files to the UF, as mentioned in this question (https://answers.splunk.com/answers/91051/splunk4jmx-universalforwarder-installation.html):

```
SplunkUniversalForwarder/etc/apps/SPLUNK4JMX/bin/*
SplunkUniversalForwarder/etc/apps/SPLUNK4JMX/default/inputs.conf
SplunkUniversalForwarder/etc/apps/SPLUNK4JMX/default/app.conf
SplunkUniversalForwarder/etc/apps/SPLUNK4JMX/logs
SplunkUniversalForwarder/etc/apps/SPLUNK4JMX/local
```

We set config.xml to run with host=localhost and the RMI port that was set in the JVM. Currently there isn't any data received in the index, and we are not sure what needs to be put in inputs.conf, or whether there are other things that need to be configured for this to work. The UF we use is version 6.4.2. Thanks

Is there a test to compare CPU and memory consumption of a heavy forwarder versus a universal forwarder?

Hi all, I want to use a Splunk heavy forwarder in my company, but I wonder what it would cost me to run an HF. Is there any test or benchmark comparing CPU and I/O consumption, etc.?

How to create a golden image of Windows 2008R2 with a Splunk universal forwarder?

Hello, I am trying to create a golden image of Windows 2008r2 with a Splunk forwarder on it. I tried running the command `SplunkUniversalForwarder\bin\splunk cone-prep-clear-config`, but I got an error stating cone-prep-clear-config is not a valid command. I have successfully run this command on Linux. Am I supposed to use some other command for Windows? Error:

```
PS C:\Program Files\SplunkUniversalForwarder\bin> .\splunk.exe cone-prep-clear-config
Command error: 'cone-prep-clear-config' is not a valid command. Please run 'splunk help' to see the valid commands.

Data forwarding configuration management tools.
Commands:
    enable local-index [-parameter ] ...
    disable local-index [-parameter ] ...
    display local-index
    add [forward-server|search-server] server
    remove [forward-server|search-server] server
    list [forward-server|search-server]
Objects:
    forward-server  a Splunk forwarder to forward data to be indexed
    search-server   a Splunk server to forward searches
    local-index     a local search index on the Splunk server
```
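For what it's worth, the command name in the transcript above is missing an "l": the Splunk CLI command is `clone-prep-clear-config`, on Windows as well as Linux. A sketch of the usual sequence (stop the forwarder first, then run the command and capture the image):

```
PS C:\Program Files\SplunkUniversalForwarder\bin> .\splunk.exe stop
PS C:\Program Files\SplunkUniversalForwarder\bin> .\splunk.exe clone-prep-clear-config
```

If the Linux run used the correctly spelled command, that would explain why it succeeded there.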

Why do special characters "[0[0m" appear in my events?

Hi, I deployed a Splunk forwarder on a JBoss server to forward data to my test Splunk environment. The Universal Forwarder (UF) monitors the server.log file. The line

```
01/12/16 15:11:50,398 INFO [org.jboss.as] (MSC service thread 1-3) JBAS015950: JBoss EAP 6.4.8.GA (AS 7.5.8.Final-redhat-2) stopped in 358ms
```

is transformed into the event below:

```
[0m[0m01/12/16 15:11:50,398 INFO [org.jboss.as] (MSC service thread 1-3) JBAS015950: JBoss EAP 6.4.8.GA (AS 7.5.8.Final-redhat-2) stopped in 358ms
```

Every line is prepended with the characters `[0m[0m` for INFO messages or `[0[31m` when it's an ERROR message. Can someone explain why?
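The `[0m` / `[31m` fragments are ANSI terminal color codes (reset and red, respectively) written by JBoss's console log formatter; the leading escape byte (0x1b) is simply invisible in most viewers. Besides disabling colored output in the JBoss logging configuration, one option is to strip them at parse time with a SEDCMD — a sketch assuming a hypothetical sourcetype name, placed on the indexer or heavy forwarder (SEDCMD is not applied by a universal forwarder):

```
# props.conf on the indexer / heavy forwarder
# [jboss_server_log] is a placeholder -- use the actual sourcetype
[jboss_server_log]
SEDCMD-strip_ansi = s/\x1b\[[0-9;]*m//g
```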

How to combine my two searches to alert on duplicate GUIDs for universal forwarder installations?

Hello, We recently deployed Splunk in our environment and discovered that our engineering teams are cloning systems without clearing out the universal forwarder GUID and related logs prior to cloning the machine. I'm trying to set up a search and email alert to identify these problematic systems. I have the following search that I can run on my deployment server, which gives me back duplicate UF GUIDs and a count:

```
| rest /services/deployment/server/clients count=0 splunk_server=local
| fields hostname name ip dns utsname
| stats count by name
| where count > 1
```

I also have this search that returns all my UF installations from my deployment server:

```
| rest /services/deployment/server/clients count=0 splunk_server=local
| fields hostname name ip dns utsname
| rename name as clientName
```

I need help tying these two searches together. My (non-working) attempt, thinking in SQL terms, looked like:

```
...search... | rest /services/deployment/server/clients count=0 splunk_server=local
| fields hostname name ip dns utsname | stats count by name | where count > 1)
WHERE GUID IN (| rest /services/deployment/server/clients count=0 splunk_server=local
| fields hostname name ip dns utsname | stats count by name | where count > 1)
```

I'm familiar with SQL but still learning SPL, so I'm not sure how to link the two separate searches with the equivalent of a SQL IN clause. Lastly, I want to schedule this search and have it email me a report of machines with duplicate GUIDs (but not email me an empty report). Any help is appreciated. Thank you.
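Since both searches hit the same REST endpoint, one way to avoid a subsearch-style IN entirely is `eventstats`, which attaches the per-GUID count to every row — a sketch:

```
| rest /services/deployment/server/clients count=0 splunk_server=local
| fields hostname name ip dns utsname
| eventstats count AS guid_count by name
| where guid_count > 1
| rename name AS clientName
```

For the alert, saving this as a scheduled search with the trigger condition "number of results is greater than 0" and the send-email action should avoid mailing empty reports.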

How to use BigFix to install and maintain the Universal Forwarder?

I am attempting to use BigFix to install the Universal Forwarder on machines within a multi-tenant environment. I use a single deployment server, and I can manually install the UF on a machine and point it to the deployment server; all works fine if I use the Run as Administrator option. When I attempt to deploy using BigFix to a Windows machine, it appears to attempt the install, but never (re)starts the Splunkd service, and does not actually perform the installation. In fact, it acts very much like a manual install attempted without the Run as Administrator option. My command line for running the install in BigFix is as follows:

```
msiexec.exe /i "\path\to\installer\splunkforwarder-x64.msi" DEPLOYMENT_SERVER="server.domain.com:8089 AGREETOLICENSE=Yes /quiet
```

Has anyone else done this successfully? Am I missing something? I DO want the UF to run as the Local System account, so I am not trying to do anything special in that regard. I am simply trying to install and maintain the UF binaries with BigFix. I am not interested in creating an "image", as these machines are already built and running. Thanks!
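One thing worth checking in the command line as pasted: the closing quote after the DEPLOYMENT_SERVER value is missing, which would make msiexec swallow `AGREETOLICENSE=Yes /quiet` as part of that property's value. A corrected sketch (path and hostname kept as the placeholders from the question):

```
msiexec.exe /i "\path\to\installer\splunkforwarder-x64.msi" DEPLOYMENT_SERVER="server.domain.com:8089" AGREETOLICENSE=Yes /quiet
```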

Why is Universal Forwarder unable to process props.conf configuration for structured data?

I have a customer who wants to index psv files with headers. If I omit the props.conf file on the Universal Forwarder (UF), the entire psv file gets indexed as one event without any parsing. I have a props.conf on the indexer, but it's my understanding that the indexer does not parse forwarded structured data. However, when I add the props.conf to the UFs, no data is indexed. I have tried with UF versions 6.1.2 and 6.4 running on Linux and Sun. My inputs.conf and props.conf on the UFs are as follows:

inputs.conf:

```
[monitor:///tmp/testmetrics*.txt]
crcSalt =
sourcetype = test_pri
index = test
disabled = 0
```

props.conf:

```
[test_pri]
FIELD_DELIMITER=|
HEADER_FIELD_DELIMITER=|
HEADER_FIELD_LINE_NUMBER=1
INDEXED_EXTRACTIONS=psv
NO_BINARY_CHECK=1
SHOULD_LINEMERGE=false
TIMESTAMP_FIELDS=DATETIME
TIME_FORMAT=%Y%m%d/%H%M%S
KV_MODE=none
```

The data is in this format, with CRLF terminations after each line:

```
col1|col2|col3
row1|row11|row1111
row2|row22|row222
row3|row33|row333
```

splunkd.log:

```
12-02-2016 15:02:13.567 -0500 INFO WatchedFile - Will begin reading at offset=0 for file='/tmp/testmetrics.txt'.
12-02-2016 15:03:02.914 DEBUG TailingProcessor - File state notification for path='/tmp/testmetrics.txt' (first time).
12-02-2016 15:03:03.059 DEBUG PropertiesMapConfig - Performing pattern matching for: source::/tmp/testmetrics.txt
12-02-2016 15:03:03.059 DEBUG TailingProcessor - Skipping itemPath='/tmp/testmetrics.txt', does not match path='/proj/unix/cen/tools/splunkforwarder/etc/splunk.version' :Not a directory :Not a symlink
12-02-2016 15:03:03.059 DEBUG TailingProcessor - Skipping itemPath='/tmp/testmetrics.txt', does not match path='/proj/unix/cen/tools/splunkforwarder/var/log/splunk' :Not a directory :Not a symlink
12-02-2016 15:03:03.059 DEBUG TailingProcessor - Skipping itemPath='/tmp/testmetrics.txt', does not match path='/proj/unix/cen/tools/splunkforwarder/var/log/splunk/splunkd.log' :Not a directory :Not a symlink
12-02-2016 15:03:03.059 DEBUG TailingProcessor - Skipping itemPath='/tmp/testmetrics.txt', does not match path='/proj/unix/cen/tools/splunkforwarder/var/spool/splunk' :Not a directory :Not a symlink
12-02-2016 15:03:03.059 DEBUG TailingProcessor - Skipping itemPath='/tmp/testmetrics.txt', does not match path='/proj/unix/cen/tools/splunkforwarder/var/spool/splunk' :Not a directory :Not a symlink
12-02-2016 15:03:03.059 DEBUG TailingProcessor - Item '/tmp/testmetrics.txt' matches stanza: /tmp/testmetrics*.txt.
12-02-2016 15:03:03.059 DEBUG TailingProcessor - Will use CRC salt='/tmp/testmetrics.txt' for this source.
12-02-2016 15:03:03.059 DEBUG FilesystemFilter - Testing path=/tmp/testmetrics.txt(real=/tmp/testmetrics.txt) with global blacklisted paths
12-02-2016 15:03:03.059 DEBUG TailReader - Will attempt to read file: /tmp/testmetrics.txt.
12-02-2016 15:03:03.059 DEBUG PropertiesMapConfig - Performing pattern matching for: source::/tmp/testmetrics.txt
12-02-2016 15:03:03.059 DEBUG FileClassifierManager - Finding type for file: /tmp/testmetrics.txt
12-02-2016 15:03:03.059 DEBUG PropertiesMapConfig - Performing pattern matching for: source::/tmp/testmetrics.txt
12-02-2016 15:03:03.059 DEBUG PropertiesMapConfig - Performing pattern matching for: source::/tmp/testmetrics.txt|test_pri
12-02-2016 15:03:03.059 DEBUG WatchedFile - Storing pending metadata for file=/tmp/testmetrics.txt, sourcetype=test_pri, charset=UTF-8
12-02-2016 15:03:03.059 DEBUG PropertiesMapConfig - Performing pattern matching for: source::/tmp/testmetrics.txt|host::testhost|test_pri|45
12-02-2016 15:03:03.060 DEBUG WatchedFile - Attempting to load indexed extractions config from conf=source::/tmp/testmetrics.txt|host::testhost|test_pri|45 ...
12-02-2016 15:03:03.060 DEBUG VerboseCrc - Checksumming salt_data="/tmp/testmetrics.txt".
12-02-2016 15:03:03.060 DEBUG PropertiesMapConfig - Performing pattern matching for: source::/tmp/testmetrics.txt|host::testhost|test_pri|46
12-02-2016 15:03:03.060 DEBUG WatchedFile - Attempting to load indexed extractions config from conf=source::/tmp/testmetrics.txt|host::testhost|test_pri|46 ...
12-02-2016 15:03:03.060 DEBUG TailReader - About to read data (Opening file: /tmp/testmetrics.txt).
12-02-2016 15:03:03.060 DEBUG WatchedFile - seeking /tmp/testmetrics.txt to off=0
12-02-2016 15:03:03.060 DEBUG WatchedFile - seeking /tmp/testmetrics.txt to off=0
12-02-2016 15:03:03.060 DEBUG PropertiesMapConfig - Performing pattern matching for: source::/tmp/testmetrics.txt|host::testhost|test_pri|46
12-02-2016 15:03:03.060 DEBUG WatchedFile - seeking /tmp/testmetrics.txt to off=14598
12-02-2016 15:03:03.060 DEBUG WatchedFile - Reached EOF: fname=/tmp/testmetrics.txt fishstate=key=0x915b2ffd0a19e405 sptr=14598 scrc=0xf4eb0f294d1af3b2 fnamecrc=0x5fae16cea4aef038 modtime=1480708933
12-02-2016 15:03:03.060 DEBUG FilesystemChangeWatcher - inotify doing infrequent backup polling for healthy path="/tmp/testmetrics.txt"
```

Thanks.

Why am I unable to forward data from a Splunk forwarder to Splunk Cloud on Windows?

Hello, I have been trying for the last 8 hours to forward data to a Splunk Cloud instance. I generated the credentials off the Splunk Cloud instance as directed and attempted to use them on a heavy forwarder, to no avail. I also tried a universal forwarder, but it just won't work. I believe the problem is related to the credentials. One particular message I received (repeated five times) was:

```
12-02-2016 19:27:20.156 -0500 WARN TcpOutputProc - 'sslCertPath' deprecated; use 'clientCert' instead
```

I made a change to the config files to fix this, but it still will not work. In splunkd.log all I see is:

```
12-02-2016 19:38:59.726 -0500 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
12-02-2016 19:39:07.772 -0500 WARN TcpOutputProc - Cooked connection to ip=52.55.109.251:9997 timed out
12-02-2016 19:39:11.737 -0500 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
12-02-2016 19:39:23.739 -0500 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
12-02-2016 19:39:27.664 -0500 WARN TcpOutputProc - Cooked connection to ip=52.204.196.213:9997 timed out
12-02-2016 19:39:35.740 -0500 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
12-02-2016 19:39:44.356 -0500 WARN HttpPubSubConnection - Unable to parse message from PubSubSvr:
12-02-2016 19:39:44.356 -0500 INFO HttpPubSubConnection - Could not obtain connection, will retry after=84.982 seconds.
12-02-2016 19:39:47.553 -0500 WARN TcpOutputProc - Cooked connection to ip=52.44.41.196:9997 timed out
12-02-2016 19:39:47.740 -0500 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
```

Any ideas? Thanks, JG

How to stop splunkd.exe from creating crash dump files under var\log\splunk on a universal forwarder?

On the universal forwarder, splunkd.exe is creating many crash dump files that are filling up disk space, which affects the services on the server. Please let me know if there is any configuration to disable crash dumps.

How to route to an Index based on SourceType AND Host combination in inputs.conf?

I have a setup of Universal Forwarder (UF) - Heavy Forwarder (HF) - Indexer - Search Head (SH), where multiple UFs send data to a single HF, which in turn sends the data to a single indexer. I have the stanza below in the inputs.conf file on my UFs, where XXX is the server name:

```
[perfmon://CPU Load]
counters = % Processor Time;% User Time
object = Processor
instances = _Total
interval = 30
sourcetype = Perfmon
index = idx_XXX_Perfmon_CPU-Load
```

Now, in order to have a common app deployed to all UFs through the deployment server, I have removed the index from the stanza and want to assign the index based on a host + sourcetype combination on the HF using props.conf and transforms.conf. Example:

- If an event comes from Server1 with sourcetype Perfmon, then set index = idx_Server1_Perfmon_CPU-Load
- If an event comes from Server2 with sourcetype Perfmon, then set index = idx_Server2_Perfmon_CPU-Load

Please help me design the correct stanza for this requirement.
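A sketch of one way to do this on the HF with a single transform: key on the sourcetype in props.conf and capture the host name in transforms.conf. This assumes every target index (idx_Server1_Perfmon_CPU-Load, and so on) already exists on the indexer, since events routed to a nonexistent index are dropped by default:

```
# props.conf on the heavy forwarder
[Perfmon]
TRANSFORMS-route_by_host = route_perfmon_by_host

# transforms.conf on the heavy forwarder
[route_perfmon_by_host]
# capture the host from the event's metadata and build the index name from it
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.+)$
DEST_KEY = _MetaData:Index
FORMAT = idx_$1_Perfmon_CPU-Load
```

This avoids maintaining one props/transforms pair per server as new hosts are added.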

How to show the host name from a CSV lookup file when there are no results found?

I have tried various suggestions from this site, but I'm unable to get the desired results. A 3rd party installs UFs (Universal Forwarders) and provides a csv list of hosts that have been deployed. I have this list loaded into Splunk as a csv lookup file. What I need to achieve is to show the host names from the csv file that have no match in the search results; it must also be case insensitive. The csv is very simple:

```
host,owner,os
```

The result should be the hosts that are yet to show up in the search results, so a report can be run and delivered to the vendor to resolve. What I have tried so far is similar to this, but it does not deal with case, and I'm not 100% sure it's giving accurate results:

```
| inputlookup uf_deploy.csv
| search NOT [search index=*_linux OR index=wineventlog host=* | dedup host | fields host]
```

Any help would be great. Regards, Rob
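A sketch that normalizes case on both sides before the NOT, assuming the lookup's host column is named `host` (and using `stats` rather than `dedup` inside the subsearch, which is generally cheaper):

```
| inputlookup uf_deploy.csv
| eval host=lower(host)
| search NOT [ search index=*_linux OR index=wineventlog host=*
    | stats count by host
    | eval host=lower(host)
    | fields host ]
| table host owner os
```

Because both the lookup rows and the subsearch results are lowercased before matching, "WEBSRV01" in the csv will match "websrv01" in the indexed data.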

Does anyone have an example of using Puppet Module to uninstall Universal Forwarder?

Hello guys, does anyone have an example of using a Puppet module to properly uninstall/delete/remove the UF (Universal Forwarder) on Linux and Windows? Thanks.

Should the hardware on my Heavy Forwarder be the same as my Indexer?

My current system is (vastly underpowered, 3.5gig a day tops) a single indexer/search head combo, and 2 heavy forwarders. I have recently been given a requirement to bump this up to ~120GB a day indexed. I am looking at this document to determine hardware requirements: http://docs.splunk.com/Documentation/Splunk/6.5.1/Capacity/Referencehardware but nowhere in here does it comment on a heavy forwarder. My reading tells me that the HF does parsing before it ever sends data to the indexer. So, does that mean if I have a small lightweight VM acting as a heavy forwarder sending 100GB a day to the indexer with 12 cores+64gig ram, my indexer performance is mostly pointless, because my heavy forwarder is my bottleneck? Should I plan my heavy forwarder to be the same spec as the indexer, or make my indexer underpowered and beef up the HF? (No logs go directly to the indexer.) Or, do I keep my underpowered heavy forwarder VM and just convert it to use the universal forwarder? I would then make sure that all transforms/props/etc get placed on the indexers, not the forwarder. The only thing on the forwarder I do that isn't just passthrough is adding a metadata tag "forwarder=locationX", which I guess I would have to find a substitute for. It is useful for me to track where a log originated, though.

What is the best way to collect and monitor Windows 2008 R2 print server events?

I'd like to track print events from a Windows 2008 R2 print server. I have configured my Universal Forwarder (UF) via this blog: http://blogs.splunk.com/2014/04/21/windows-print-monitoring-in-splunk-6/ and I am receiving print event data, but the page print metrics are inconsistent. I am capturing page counts for some users, but most user page counts are zero. My Splunk configuration is : Print Server UF (6.5) ->Heavy Forwarder->SplunkCloud I have also tried this configuration: Print Server UF(6.5)->SplunkCloud Both exhibit the same problem. How do you monitor print events?

How to edit props.conf to override Splunk truncating JSON data?

Hi Guys, So I figured out that my Splunk instance is truncating my JSON data. That's not good, and I'd like to remedy it. From my reading, it looks as though I need to override props.conf by using a local/props.conf file. Since I'm using a Universal Forwarder, it appears I don't need to touch the UF ( http://wiki.splunk.com/Community:HowIndexingWorks ), as the picture shows TRUNCATE happening in the parsing stage on the main server. So on my main server I added the following stanza. I then read ( http://docs.splunk.com/Documentation/Splunk/6.5.1/admin/Propsconf ) that I simply need to run `| extract reload=T` and I should be in business. **Well, it didn't work!** Can someone with a bigger brain please point out my error? On the forwarder side I am monitoring the following file. *FWIW - I also tried editing the UF-side props.conf, and that didn't work either.*

```
/opt/splunk/splunkforwarder/bin/splunk add monitor /var/log/django/ -sourcetype json
```

/opt/splunk/etc/system/local/props.conf:

```
[_json]
pulldown_type = true
INDEXED_EXTRACTIONS = json
TRUNCATE = 30000
KV_MODE = none
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
```

(A screenshot showed the event being truncated at 10000 characters, which is the default.)
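Two general caveats that may apply here, stated as general Splunk behavior rather than a confirmed diagnosis: `| extract reload=T` only reloads search-time extraction config, while TRUNCATE is applied at parse time, so changing it requires a splunkd restart and only affects data indexed afterwards. Also, with `INDEXED_EXTRACTIONS` the structured-data parsing happens on the universal forwarder itself, so the stanza has to live on the UF, and its name must match the actual sourcetype (the monitor command in the question assigns sourcetype `json`, while the stanza shown is `[_json]`). A sketch for the UF side:

```
# props.conf on the universal forwarder
# ($SPLUNK_HOME/etc/system/local/props.conf) -- restart the UF afterwards
[json]
INDEXED_EXTRACTIONS = json
TRUNCATE = 30000
```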

Using Splunk Web, can I search a specific host name or IP address that returns the “Identified UF Version” of that system?

Hello Splunkers - Using Splunk Web, can I search for a specific host name or IP address and get back the "Identified UF Version" of that system? Universal Forwarder 6.4 is already installed. Any assistance would be greatly appreciated, thank you.
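One common approach, sketched from the indexer's own internal metrics data (the hostname filter is a placeholder): every forwarder connection is logged in `_internal`, including the forwarder's version.

```
index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(version) AS uf_version latest(fwdType) AS type by hostname
| search hostname="somehost*"
```

This can be run from the search bar in Splunk Web on the indexer or search head; dropping the final `search` line lists the version for every connecting forwarder.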

Is the checkpointInterval attribute configurable?

We have thousands of Universal Forwarders (UF) in a large virtual desktop environment where we need to minimize the footprint, and particularly the I/O, as much as possible. The question is whether the WinEventLog inputs on our Splunk 6.4.1 UFs on Windows 7 x64 can use a 60-second checkpointInterval. Our current configuration, for example:

```
[WinEventLog://Security]
checkpointInterval = 5
evt_resolve_ad_obj = 0
disabled = 0
```

We believe that for this particular input there's no need to checkpoint every 5 seconds, so we are hoping to raise the interval to reduce disk writes, as below, but Splunk is not taking the new value (checkpointInterval = 60) into account:

```
[WinEventLog://Security]
checkpointInterval = 60
evt_resolve_ad_obj = 0
disabled = 0
```
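`checkpointInterval` is a per-stanza WinEventLog setting, so a value of 60 should be legal; when a new value is not picked up, a usual first step is to check which copy of the stanza actually wins, since a higher-precedence app may still carry the old value. A sketch using btool and the default install path:

```
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list WinEventLog://Security --debug
```

The `--debug` flag prints the file each effective setting came from; a splunkd restart (or deployment-client reload) is also needed after the change.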