I installed the Splunk TA for Solaris 11 on my UF (Universal Forwarder) and left the default collection settings in inputs.conf.
The stanza:
[script://./bin/ldoms.sh]
disabled=0
index = ia
interval=600
source=solaris:ldoms
sourcetype=solaris:ldoms
is the default, but no data is being collected. When I run ldoms.sh as root, it outputs the expected results. I do not see any errors associated with the script in the splunkd.log file.
Any help in troubleshooting this issue would be great.
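For reference, two checks that seem relevant (a sketch only, assuming a default /opt/splunkforwarder install path, that the forwarder runs as the splunk user, and a placeholder for whatever the TA's real directory name is):
# confirm the forwarder actually resolves the stanza, and from which app/file
/opt/splunkforwarder/bin/splunk cmd btool inputs list script --debug | grep ldoms
# run the script under the forwarder's own environment and user instead of root
sudo -u splunk /opt/splunkforwarder/bin/splunk cmd /opt/splunkforwarder/etc/apps/<solaris_TA_dir>/bin/ldoms.sh
It may also be worth confirming that the "ia" index the stanza points to actually exists on the indexer, since events sent to a missing index are dropped there.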
↧
Splunk TA for Solaris 11: How to get the Solaris ldoms.sh script to send data to indexer?
↧
What is the intended behavior when setting the "instances" option for perfmon data in inputs.conf?
In the inputs.conf spec for collecting perfmon data (https://docs.splunk.com/Documentation/Splunk/6.5.1/Admin/Inputsconf#Performance_Monitor ), there is an option called "instances". The description of the option seems to suggest that it lets you specify string patterns that filter the reported perfmon data based on whether the instance field from the host matches the string specified in the stanza. For example, if I wanted to capture perfmon data for all instances of svchost, I would assume this could be done with a stanza like the following:
[perfmon://Process]
counters = Working Set;Virtual Bytes;% Processor Time;Handle Count;Thread Count;Elapsed Time;Creating Process ID;ID Process;
disabled = 0
index = perfmon
instances = svchost*
interval = 30
object = Process
mode = multikv
showZeroValue = 1
Setting up the stanza in this way does not result in all instances of svchost being reported with the prescribed configuration. Instead, the only thing reported back is the perfmon data for the top-level, parent svchost process, and its value for the "instance" field is set to the pattern in the stanza, e.g., "svchost*". None of the child svchost processes (whose instances should be svchost#1, svchost#2, etc.) are reported.
Is this the expected behavior?
I tested this with Splunk Forwarder 6.4.4, Splunk Add-on for Windows version 4.8.0 on Windows 10 64-bit.
Another user (@Yorokobi) reported seeing this on Windows Server 2012 R2.
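For comparison, the two fallbacks I can think of (a sketch only, since `instances` accepts a semicolon-separated list like `counters`, or `*` for everything) would be:
# option 1: collect every instance and filter down to svchost* at search time
instances = *
# option 2: enumerate instance names explicitly (semicolon-separated, like counters)
instances = svchost;svchost#1;svchost#2
The second option obviously doesn't scale when the number of svchost instances changes, which is why the wildcard behaviour of "instances" matters here.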
↧
Why did upgrading my Universal Forwarder result in a license violation?
I am monitoring the directory where IIS logs are stored. The universal forwarder is sending the data to a dedicated index.
To upgrade the universal forwarder, I saved the customization files, uninstalled the previous version, and then installed the latest version, copying the customizations back in.
As a result, the universal forwarder re-indexed all the files in the logs directory, causing a license violation.
Is it possible to avoid this behaviour by saving the previous state of the indexed logs?
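In case it helps frame the question: my understanding is that the forwarder tracks how far it has read each file in its fishbucket database under var/lib/splunk, so a rough sketch of what would need to be carried across the reinstall (assuming the default Windows install path, with the forwarder service stopped for both copies) is:
REM before uninstalling: save the file-tracking database (fishbucket)
robocopy "C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\fishbucket" C:\uf-backup\fishbucket /E
REM after reinstalling, before starting the new forwarder: restore it
robocopy C:\uf-backup\fishbucket "C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\fishbucket" /E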
↧
How to blacklist indexing a security event based on the Account Name?
I'm running the Splunk Universal Forwarder and I've configured the inputs.conf for the Splunk Add-on for Microsoft Windows to monitor the Security event logs for Windows.
At this time, though, I'm looking to blacklist (not index) any security event that contains a specific account name. The account name is "wilmsplunksvc".
I went ahead and created a blacklist within inputs.conf, without any luck. Below is the syntax I used.
blacklist4 = Account_Name="wilmsplunksvc"
Any assistance would be greatly appreciated.
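For context, my understanding (worth verifying against the inputs.conf spec) is that the advanced blacklist format only accepts a fixed set of keys such as EventCode, User and Message, so the account name would have to be matched as a regular expression inside the Message text rather than as an Account_Name key. A sketch of what I mean, against the Security stanza:
[WinEventLog://Security]
disabled = 0
# regex match on the rendered "Account Name:" line inside the event body
blacklist1 = Message="Account Name:\s+wilmsplunksvc"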
↧
How can I monitor my Splunk universal forwarder to make sure the forwarder is working as expected?
Hello!
I recently noticed some universal forwarders hanging and not sending logs to the indexer. How can I monitor my Splunk universal forwarders to make sure they are sending logs and working as expected? I have `index="_internal"`, but is there any search that would help me create a dashboard or alert?
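For example, this is roughly the kind of search I'm imagining (a sketch built on the tcpin_connections metrics that indexers write to _internal; the field names and the 15-minute threshold are assumptions to adjust):
index=_internal sourcetype=splunkd group=tcpin_connections
| stats max(_time) as lastSeen by hostname
| eval minutesSinceLastSeen = round((now() - lastSeen) / 60)
| where minutesSinceLastSeen > 15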
↧
How to uninstall Independent Stream Forwarder?
I did quite a dumb thing: I installed the Independent Stream Forwarder onto my Universal Forwarder. I didn't know that the Universal Forwarder can act as a Stream Forwarder without installing the Independent Stream Forwarder.
Now, my Stream Forwarder isn't working. Is there any way to uninstall the Independent Stream Forwarder?
If anyone wants to try to help me solve the Stream Forwarder not working, please see the log output below.
2016-12-13 08:36:41 INFO [140079871240000] (SnifferReactor/SnifferReactor.cpp:154) stream.SnifferReactor - Starting network capture: sniffer
2016-12-13 08:36:41 ERROR [140079871240000] (SnifferReactor/PcapNetworkCapture.cpp:231) stream.SnifferReactor - SnifferReactor failed to open pcap adapter for device . Error message:
2016-12-13 08:36:41 FATAL [140079871240000] (CaptureServer.cpp:1893) stream.CaptureServer - SnifferReactor was unable to start packet capturesniffer
2016-12-13 08:36:41 INFO [140079871240000] (main.cpp:1084) stream.main - streamfwd has started successfully (version 7.0.0 build 128)
2016-12-13 08:36:41 INFO [140079871240000] (main.cpp:1086) stream.main - web interface listening on port 8889
Unfortunately, the error message itself is blank, so I can't really tell what's wrong; I'm assuming it's because I installed the Independent Stream Forwarder.
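To be clear about what I mean by uninstalling: my assumption (unverified) is that the Independent Stream Forwarder was installed by its install script into its own location, separate from the Universal Forwarder, so removing it would look roughly like this sketch rather than touching the UF itself:
# stop the standalone streamfwd service first (the service name is an assumption)
sudo service streamfwd stop
# remove the standalone install directory; /opt/streamfwd is an assumed default, so verify the path before deleting anything
sudo rm -rf /opt/streamfwd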
↧
Has anyone seen duplicate windows server universal forwarders after update?
I have one forwarder that is showing up as a duplicate on my Splunk server. I updated 3 forwarders, from the v4 UF to the v5 UF, to test them. The other two were fine; the 3rd is having an issue. Over the weekend, between the two "forwarders" there were 7 connections, and it overloaded the license. I uninstalled via the command line and reinstalled the latest forwarder, yet my server is again showing two forwarders with unique GUIDs. I'm not sure how to fix this if an uninstall and reinstall doesn't work.
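In case it is useful for narrowing this down, a sketch of a search that should show both GUIDs and the source IP they connect from (the host name is a placeholder, and the field names rely on the tcpin_connections metrics in _internal, so they may need checking):
index=_internal sourcetype=splunkd group=tcpin_connections hostname="PROBLEM-HOST"
| stats count latest(_time) as lastSeen by hostname guid sourceIp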
↧
How do I debug perfmon:memory missing on a Windows 2012 R2 host?
I have a couple of hosts with the same version of Windows (2012 R2); one produces perfmon:memory data and the other does not. They were installed with the same version of the UF (6.5.0) and they get the same Splunk_TA_windows app from the deployment server. There is no real difference in the data in the _internal index for these hosts, so I'm thinking the problem lies in the host itself. **How do I debug** what the TA is doing so the data gets indexed?
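In case it helps frame an answer, the two checks I can think of (a sketch, assuming the default install path) are confirming what the Memory perfmon stanza resolves to on the bad host, and asking Windows itself whether the Memory counter set is available there:
REM how the forwarder resolves the Memory perfmon input after all config layering
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list perfmon://Memory --debug
REM list the Memory performance counters Windows knows about on this host
typeperf -q Memory
If typeperf cannot enumerate the Memory object at all, my understanding is that the problem would be the host's performance counter registry rather than anything the TA is doing.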
↧
Can we add additional parameters (IP and hostname) to the logs which are collected through a Windows universal forwarder?
I am kind of new to Splunk and I am curious about something. When I install the universal forwarder on a Windows server, it sends only the name or the IP as the host, and by default it sends the name of the server (this can be configured in inputs.conf). I also want to add another field that carries the IP of the server. Since not all servers are in the domain, I can't find the IP address when I try to look it up in DNS. On the other hand, since I am not part of the systems team, seeing only IP addresses doesn't tell me much either. So I need both the IP and the hostname. Can we do that?
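One mechanism that looks relevant here (unverified on my side, and the field name and address below are placeholders) is the `_meta` setting in the forwarder's inputs.conf, which attaches an extra indexed field to everything the forwarder sends:
[default]
host = MYSERVER01
# attach this server's IP as an additional indexed field on every forwarded event
_meta = forwarder_ip::10.1.2.3
If I understand the docs correctly, the search head side would also want a fields.conf entry marking forwarder_ip as an indexed field so that searching on it behaves properly.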
↧
How to restart a universal forwarder (agent) remotely via deployment manager?
We face a few issues whereby our endpoints (clients) may have the Splunk service stopped.
Can we force a restart of the Universal Forwarder (agent), i.e. the "splunk service" or "splunk", from our deployment manager?
Currently we are asking the support team of the respective application to do it for us, but it would be great if we could manage the agents ourselves. (BTW, the agents run under local accounts on the clients.)
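For context, the only built-in hook I'm aware of is having the deployment server push an app with restart enabled, which restarts splunkd on the client after the app is deployed; a sketch of the serverclass.conf side (the class and app names are placeholders) is below. As far as I understand, this only works while the client can still phone home, so it would not recover an agent whose service is already stopped:
[serverClass:restart_uf:app:force_restart_app]
# restart splunkd on the deployment client after this app is deployed
restartSplunkd = true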
↧
Why is the Universal Forwarder not loading Splunk Add-on for Unix and Linux?
I'm working on deploying the Splunk Add-on for Unix and Linux to the universal forwarders in my environment using a configuration management system. I packaged the add-on into an RPM for easier management, which simply decompresses the archive into `$SPLUNK_HOME/etc/apps`, so I now have `/opt/splunkforwarder/etc/apps/Splunk_TA_nix` with the application's directories (`appserver`, `bin`, etc.). I've created a `local` directory, copied `default/inputs.conf` into it, and enabled a number of the inputs. However, the single-node Splunk server, which does receive a number of other inputs from this forwarder, is not getting any of the inputs configured in the app.
I've examined the output from splunkd, and during startup it lists that it is reading the various configuration stanzas in `/opt/splunkforwarder/etc/system/local/inputs.conf`, but it does not output anything about the stanzas configured in the Splunk Add-on for Unix and Linux. This makes me think that it's completely ignoring the add-on, but I can't figure out why. I've checked, and the add-on folder is owned by root but is readable by Splunk. Any ideas as to why it's not working?
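In case the answer turns out to be a layering or permissions issue, these are the checks I would expect to be asked for (a sketch, using the paths shown above):
# does btool see any inputs coming from the add-on, and from which file?
/opt/splunkforwarder/bin/splunk btool inputs list --debug | grep Splunk_TA_nix
# any syntax problems in the deployed .conf files?
/opt/splunkforwarder/bin/splunk btool check
# is the app accidentally disabled via app.conf?
/opt/splunkforwarder/bin/splunk btool app list --app=Splunk_TA_nix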
↧
How to restart a universal forwarder remotely via deployment server?
We are facing a few issues where our endpoints (clients) may have the Splunk service stopped.
Can we force a restart of the Universal forwarder (agent) "splunk service" or "splunk" from our deployment server?
Currently, we are asking the support team of the respective application to do it for us, but it would be great if we could manage the agents ourselves. (BTW, the agents run under local accounts on the clients.)
↧
Is it possible to add and correct fields for past events?
Hi,
we just set up our first Universal Forwarder, which now works as expected. It didn't initially, though, before we had everything set up correctly. We now have the problem that the first events we forwarded don't have the fields we defined for that source type. I later added a props.conf for the forwarder, so the new events have the correct fields. I also had a copy & paste error on the first try, so a second batch of events has one misspelled field name.
Is there a way to correct these mistakes? Or should we just delete all old events and re-upload them manually?
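One direction that might avoid re-uploading, if the fields can be handled at search time: since search-time configuration applies retroactively to events that are already indexed, a props.conf stanza on the search head could expose the misspelled field under the correct name. A sketch (the sourcetype and field names are placeholders for whatever you actually used):
[your_sourcetype]
# make the misspelled field searchable under the intended name; applies to old events too
FIELDALIAS-fix_typo = wrongly_speled_field AS correct_field
The very first batch of events, which was indexed without the fields at all, would still need a search-time extraction against the raw text (or re-indexing), since there is nothing there to alias.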
Thanks a lot!
↧
How to set up Splunk DB Connect with Splunk Cloud?
Hi,
I'm just beginning the process of getting Splunk DB Connect and Splunk Cloud working together. I've read the docs, but I'm having a hard time understanding how to get this to work with Splunk Cloud. Could someone put together a list of steps to get it installed and running? Conceptual steps would be ok, just something that I can try to wrap my head around.
Thank you!!!
↧
How to see www* as the host for secure.log and access.log?
Hello Splunkers,
I am forwarding logs from a Universal Forwarder to a Search Peer (standalone Indexer) and searching from a standalone Search Head. I have configured things as far as my understanding goes. **How can I see access.log and secure.log with the host set to www1 through www9?**
Below is the inputs.conf of my UF (log path: /opt/logs/www1 through www9):
[default]
host = UF-01-248
[monitor:///opt/log/www*/secure.log]
disabled = 0
host_segment = 5
sourcetype = secure.log
index = main
[monitor:///opt/log/www*/access.log]
disabled = 0
host_segment = 9
sourcetype = access.log
index = web
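For reference, my understanding of host_segment is that it counts path segments starting at 1, so for a file like /opt/log/www1/secure.log the www1 directory is segment 3, not 5 or 9. A sketch of the stanzas with that change (assuming the /opt/log/... path in the monitor stanzas is the correct one, since the note above says /opt/logs/...):
[monitor:///opt/log/www*/secure.log]
disabled = 0
# /opt(1)/log(2)/www*(3)/secure.log(4): take segment 3 as the host
host_segment = 3
sourcetype = secure.log
index = main
[monitor:///opt/log/www*/access.log]
disabled = 0
host_segment = 3
sourcetype = access.log
index = web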
↧
How to configure an Intermediate Forwarder and the inputs.conf and outputs.conf files in the Application servers?
Hi All
We currently have the universal forwarder installed on our 3 application servers to forward application logs to the indexer.
The inputs.conf file on each of the application servers looks like this:
[monitor://C:\logs\logfiles\Application\Applog_*]
sourcetype = business_iis
index = business_idx1
The outputs.conf file on each of the application servers looks like this:
[tcpout:LoadBalancedIndexers]
defaultGroup = LoadBalancedIndexers
server = splunkbusinessindexer.info.com:13071
We are trying to implement the concept of an intermediate forwarder for the 3 application servers.
We will have an intermediate universal Splunk forwarder which will receive the log data from the universal Splunk forwarders installed on each application server and forward it to the indexer.
For that, I am trying to configure the inputs.conf and outputs.conf files on the application servers and on the intermediate forwarder.
I am not able to understand which IP and port number should be configured in which file, compared to what we already have.
Can someone please help me write the correct configuration?
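To make the question concrete, this is the shape I think the three files need to take, with placeholder addresses (the intermediate forwarder's IP and its listening port 9997 below are only illustrative). On each application server, outputs.conf would point at the intermediate forwarder instead of the indexer; on the intermediate forwarder, inputs.conf would listen on that port and outputs.conf would keep the existing indexer target:
# application servers - outputs.conf (send to the intermediate forwarder)
[tcpout]
defaultGroup = intermediate_fwd
[tcpout:intermediate_fwd]
server = <intermediate-forwarder-ip>:9997
# intermediate forwarder - inputs.conf (listen for forwarded data)
[splunktcp://9997]
disabled = 0
# intermediate forwarder - outputs.conf (unchanged existing indexer target)
[tcpout]
defaultGroup = LoadBalancedIndexers
[tcpout:LoadBalancedIndexers]
server = splunkbusinessindexer.info.com:13071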
Thanks
Nirmalya
↧
How to troubleshoot why my heavy forwarder is not receiving Windows event logs from the universal forwarder?
I want to send "wineventlog:security" logs to the **heavy forwarder (KIWISERVER)**, and below are the configuration files that I have created on the **universal forwarder**:
**inputs.conf:**
[WinEventLog://Security]
disabled = 0
index = activedirectory
sourcetype=adlog_003
**outputs.conf:**
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = xxx.xx.xxx.xx:9997
[tcpout-server://xxx.xx.xxx.xx9997]
When I look at the splunkd log, it shows "**Connected to idx=xxx.xx.xxx.xx:9997**", but I'm unable to see the events in a Splunk search for *index=active**.
**Sample splunkd log file:**
12-17-2016 01:09:30.162 -0500 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='C:\Program Files\SplunkUniversalForwarder\var\log\splunk\license_usage_summary.log'.
12-17-2016 01:09:30.162 -0500 INFO WatchedFile - Will begin reading at offset=424312 for file='C:\Program Files\SplunkUniversalForwarder\var\log\splunk\metrics.log'.
12-17-2016 01:09:30.178 -0500 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='C:\Program Files\SplunkUniversalForwarder\var\log\splunk\remote_searches.log'.
12-17-2016 01:09:30.178 -0500 INFO WatchedFile - Will begin reading at offset=854 for file='C:\Program Files\SplunkUniversalForwarder\var\log\splunk\conf.log'.
12-17-2016 01:09:30.287 -0500 INFO TcpOutputProc - Connected to idx=xxx.xx.xxx.xx:9997
Please let me know what mistake I have made.
![noresults][1]
[1]: /storage/temp/173422-results.png
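For completeness, the checks that seem most relevant next (sketches; the index name comes from the inputs.conf above) are confirming that the activedirectory index actually exists on whatever is doing the indexing, and searching across all indexes for the sourcetype in case the events landed somewhere unexpected:
| eventcount summarize=false index=* | search index=activedirectory
index=* sourcetype=adlog_003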
↧
Why are no events showing on any indexers after using "Add Data" on the universal forwarder?
Hello,
I have 2 indexers along with 1 search head. Both indexers are added as distributed search peers. From a universal forwarder, I followed the method to add data from Files and Directories under Forwarded Inputs. After adding the inputs, no events are showing on any of the indexers. At the same time, I got the message below:
Search peer INDEXER-02 has the following message: Received event for unconfigured/disabled/deleted index=abc with source="source::/opt/log/abc/abc.log" host="host::abc" sourcetype="sourcetype::abc.log". So far received events from 1 missing index(es).
Note that I have an additional index named "abc" on INDEXER-01. Is this message because the index "abc" is not found on INDEXER-02?
Why is the added data not available at the Search Head? Thanks in advance for your help.
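If the cause really is that the abc index only exists on INDEXER-01, then my understanding is that it has to be defined on every indexer that can receive the forwarded data. A sketch of the indexes.conf stanza that would also need to exist on INDEXER-02 (paths shown are the usual defaults):
[abc]
homePath   = $SPLUNK_DB/abc/db
coldPath   = $SPLUNK_DB/abc/colddb
thawedPath = $SPLUNK_DB/abc/thaweddb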
↧
Why is SHOULD_LINEMERGE not allowing me to set it to "false"?
I'm using the Universal Forwarder, and I have a requirement to log events under a specific Source Type using specified line breaks, while at the same time sending some events to the nullQueue. From what I understand, as I'm using the Universal Forwarder, I should be configuring my Splunk server instance to parse my logs.
On disk, the log is formatted as PSV, so I cloned this Source Type and renamed it. The only advanced settings that I added are as follows -
> LINE_BREAKER = (\r\n)
> TRANSFORMS-set = setnull_CheckLive
After doing this, I noticed that nothing was getting logged, so I removed the advanced setting for `TRANSFORMS-set` and tried again. This time I did see logging, but it was not as expected; rather than each event being logged separately, a whole bunch were logged together, suggesting that my `LINE_BREAKER` advanced setting was being ignored.
Upon further investigation, I've found that whenever I add the `LINE_BREAKER` advanced setting, the default setting `SHOULD_LINEMERGE` is set to `true` and I'm **unable to amend that value** (whenever I change it and click "Save", it just changes back). This is odd, because the docs explicitly state the following -
> When using LINE_BREAKER to delimit events, SHOULD_LINEMERGE should be set to false, to ensure no further combination of delimited events occurs.
Please note I'm unable to access the server that hosts the splunk instance, so I can't provide an extract from **props.conf**, and because I'm new, I'm not allowed to upload a screen shot of the settings from the Splunk console.
I've found some other answers that address this issue, but none with an accepted answer, and sadly none that help with my issue.
Is anyone able to suggest what may be going wrong here? I'm happy to provide more information if required.
Please note that I did try to include some links in here, but it seems that I'm not allowed to do that either.
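For reference, this is the props.conf shape I'm trying to end up with on the server side (the sourcetype name is a stand-in for my cloned one), in case it helps pinpoint why the UI keeps reverting SHOULD_LINEMERGE:
[my_cloned_psv_sourcetype]
# break events on CRLF and do not re-merge the resulting lines
LINE_BREAKER = (\r\n)
SHOULD_LINEMERGE = false
# route matching events to the nullQueue via the existing transform
TRANSFORMS-set = setnull_CheckLive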
Thanks,
David
↧
How to troubleshoot the Universal Forwarder when it is not sending events to the indexer?
We have an existing Splunk infrastructure where events are forwarded from multiple Linux boxes to Splunk indexers.
We recently installed the Splunk **forwarder** on a **Windows** box. When we search in Splunk using that host name, we don't see the events.
We have checked the logs, with the following observations:
- It is picking up new monitor config.
- No error is reported in Splunkd.log
Can you please share the **troubleshooting steps** for the forwarder? Can the **forwarder log files** help us pinpoint **whether the forwarder is sending the events to the indexer at all?**
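For what it's worth, a sketch of where such a checklist might start (the path below assumes a default install) is asking the forwarder itself whether its output to the indexers is active:
REM on the Windows forwarder: is the configured output to the indexers active?
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list forward-server
On the search side, `index=_internal host=<the new Windows host> source=*splunkd.log*` should return the forwarder's own internal logs if anything at all is reaching the indexers, which answers the "is it sending at all" part of the question.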
↧