Channel: Questions in topic: "universal-forwarder"
Viewing all 1551 articles

Several of my forwarders are having issues blacklisting the _internal index

Several of my forwarders are having issues blacklisting the _internal index. In the forwarder's **\etc\system\local** folder, I have an outputs.conf file with the following:

    [tcpout]
    defaultGroup = default-autolb-group
    forwardedindex.3.blacklist = (_internal|_audit)

I use this same configuration on my workstations with successful results. However, on this representative machine, even after confirming (by looking at splunkd.log) that the blacklist entry is being processed by the forwarder, it still doesn't blacklist the _internal index. I have tried a more aggressive filter (forwardedindex.3.blacklist = _.*), but that doesn't work either. I'm a bit stumped as to where to check next and how to correct this. Any help would be appreciated. Thank you!
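For reference, forwardedindex filters in outputs.conf are evaluated in ascending numeric order, with the last matching rule winning, and the shipped defaults already define the low-numbered entries. A sketch of an explicit chain that drops the internal indexes (the numbering and the _audit re-admit entry mirror typical shipped defaults, which vary by version; check the effective chain with `splunk btool outputs list tcpout` rather than trusting this sketch):

```
[tcpout]
defaultGroup = default-autolb-group
# 0: start by allowing everything
forwardedindex.0.whitelist = .*
# 1: drop all internal indexes (names starting with an underscore)
forwardedindex.1.blacklist = _.*
# 2: re-admit _audit (as some shipped defaults do)
forwardedindex.2.whitelist = _audit
# 3: last matching rule wins: drop _internal and _audit again
forwardedindex.3.blacklist = (_internal|_audit)
```

Note these filters only affect what the forwarder sends; events the indexer generates in its own _internal index are unaffected.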

Help with installing two universal forwarders on the same Windows box - service shutting down on second install

I need to install two separate universal forwarders on the same Windows box. I have the installs built, one via MSI and the other via a scripted process. On one install, the service shuts down. I connected both services to one deployment server and that seemed fine; when I change the deployment client to point to the other deployment server, the service also shuts down. Here is the log, where you can see it removing the app and then splunkd restarting:

    09-20-2017 12:29:54.412 -0400 INFO DeployedApplication - Removing app=Splunk_TA_windows at='C:\program files\splunk-PI\etc\apps\Splunk_TA_windows'
    09-20-2017 12:29:54.537 -0400 WARN BundlesUtil - C:\program files\splunk-PI\etc\apps\SplunkUniversalForwarder\metadata\local.meta already exists but with different casing: C:\Program Files\splunk-PI\etc\apps\SplunkUniversalForwarder\metadata\local.meta
    09-20-2017 12:29:54.537 -0400 WARN BundlesUtil - C:\program files\splunk-PI\etc\system\metadata\local.meta already exists but with different casing: C:\Program Files\splunk-PI\etc\system\metadata\local.meta
    09-20-2017 12:29:54.552 -0400 WARN BundlesUtil - C:\program files\splunk-PI\etc\apps\learned\metadata\local.meta already exists but with different casing: C:\Program Files\splunk-PI\etc\apps\learned\metadata\local.meta
    09-20-2017 12:29:54.552 -0400 WARN DC:DeploymentClient - Restarting Splunkd...
    09-20-2017 12:29:54.552 -0400 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_10.210.147.150_8090_L74B00-PC0ETLVM.prod.travp.net_L74B00-PC0ETLVM_22D4D347-CE64-48BC-A1F0-352E78032799
    09-20-2017 12:29:55.956 -0400 INFO PipelineComponent - Performing early shutdown tasks
    09-20-2017 12:29:55.956 -0400 INFO loader - Shutdown HTTPDispatchThread
    09-20-2017 12:29:55.956 -0400 INFO ShutdownHandler - Shutting down splunkd
    09-20-2017 12:29:55.956 -0400 INFO ShutdownHandler - shutting down level "ShutdownLevel_Begin"
    09-20-2017 12:29:55.956 -0400 INFO ShutdownHandler - shutting down level "ShutdownLevel_FileIntegrityChecker"

Files not indexing due to fast rotation

Hi all, hope you are doing well. I have come across a difficult situation indexing a file. We have a few Universal Forwarders on which files rotate very quickly (within seconds) around midnight. Once they reach the specified size limit, they are gzipped and moved to an archive folder (we are not monitoring that folder). Due to this fast rotation, we are unable to see the logs from those files for that particular time window (they may not be getting indexed). The inputs.conf stanza is configured as below:

    [monitor:///logs/user/*.op]
    blacklist = (\.\d+|\.gz)
    index = index
    sourcetype = sourcetype
    recursive = true

We have the default throughput value on the Universal Forwarders. Could you please help me resolve this issue? Thanks in advance.
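One knob that is often involved in cases like this is the forwarder's thruput cap in limits.conf: a universal forwarder defaults to 256 KBps, which can leave the tailing processor behind fast-rotating files so they are archived before they are fully read. A sketch of lifting the cap (the value 0 means unlimited; whether this is the actual cause here is an assumption worth verifying against metrics.log):

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[thruput]
# UF default is 256 KBps; 0 removes the limit
maxKBps = 0
```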

Will Splunk run a modular input using system Python on a Universal Forwarder?

If I have a modular input written in Python, will Splunk attempt to execute it on a Universal Forwarder if the host has Python installed?

How to automate Splunk Universal Forwarder installation with a Windows script?

Hi, I'm seeking assistance on how to automate Splunk forwarder installation using a Windows script. Can I add this command to a Windows script?

    msiexec.exe /i splunkforwarder-6.6.1-aeae3fe0c5af-x64-release.msi INSTALLDIR="C:\Directory" AGREETOLICENSE=Yes DEPLOYMENT_SERVER="ip:8089" /quiet

Cheers, Dan
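A command like that can indeed be wrapped in a batch file. A hedged sketch (the file name, install directory, and deployment-server address are taken from the question and are placeholders; `/l*v` just adds a verbose install log for troubleshooting):

```
@echo off
REM install_uf.cmd - illustrative wrapper; adjust MSI name, paths, and DS address
msiexec.exe /i "%~dp0splunkforwarder-6.6.1-aeae3fe0c5af-x64-release.msi" ^
    INSTALLDIR="C:\Directory" ^
    AGREETOLICENSE=Yes ^
    DEPLOYMENT_SERVER="ip:8089" ^
    /quiet /l*v "%TEMP%\uf_install.log"
```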

Whitelisting for universal forwarder not working in 6.6.3.0

I am using UF 6.6.3.0 on my domain controller, and the following is my inputs.conf. The whitelisting part is not working; I am seeing all event codes.

    [WinEventLog://Security]
    disabled = 0
    start_from = newest
    current_only = 1
    evt_resolve_ad_obj = 0
    checkpointInterval = 5
    # only index events with these event IDs.
    whitelist = 4723,4724,4740,4782
    index = wineventlog
    renderXml = false
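For comparison, WinEventLog inputs accept the simple comma-separated event-ID format as well as an advanced format keyed on a field such as EventCode. A sketch showing both (the event IDs are the ones from the question; note that whitelist/blacklist settings only take effect on a restart, and only for 6.x-era inputs):

```
[WinEventLog://Security]
disabled = 0
# simple format: comma-separated event IDs (ranges like 4723-4740 also work)
whitelist = 4723,4724,4740,4782
# advanced format alternative, matching on the EventCode field:
# whitelist = EventCode=%^(4723|4724|4740|4782)$%
index = wineventlog
```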

How can I change my alerts so they do not resend once they've already been triggered?

Hi all, we have the query below, which is triggered every day based on UF servers flagged as missing in a lookup table, and it creates a ticket for each one. Currently this alert creates a ticket multiple times for the same forwarder, but we need to open a ticket only once per server. For example, if testsplunk1 is "missing", it should open a ticket after 7 days. On the 8th day, if it is not resolved, it currently opens another ticket. The change should make Splunk aware that it has already opened a ticket for testsplunk1, so that it doesn't open another one the next day. Current search query:

    | inputlookup forwarder_assets
    | makemv delim=" " avg_tcp_kbps_sparkline
    | eval sum_kb = if (status == "missing", "N/A", sum_kb)
    | eval avg_tcp_kbps_sparkline = if (status == "missing", "N/A", avg_tcp_kbps_sparkline)
    | eval avg_tcp_kbps = if (status == "missing", "N/A", avg_tcp_kbps)
    | eval avg_tcp_eps = if (status == "missing", "N/A", avg_tcp_eps)
    | rename_forwarder_type(forwarder_type)    <-- this is a macro (I have removed the backticks)
    | eval current_time=now()
    | eval diff_time=(current_time - last_connected)
    | search status=missing
    | fields hostname, forwarder_type, version, os, arch, status, sum_kb, avg_tcp_kbps_sparkline, avg_tcp_kbps, avg_tcp_eps, current_time, last_connected, diff_time
    | search status=missing diff_time>604800

Kindly guide me on how to modify the query so it creates a ticket once per server instead of creating multiple tickets for the same server.
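One common pattern for this is to keep a small state lookup of hosts that have already been ticketed and filter them out before alerting. A sketch, assuming a hypothetical lookup named ticketed_forwarders (with fields hostname and ticketed) that you create and that this search maintains:

```
| inputlookup forwarder_assets
| eval diff_time = now() - last_connected
| search status=missing diff_time>604800
| lookup ticketed_forwarders hostname OUTPUT ticketed
| where isnull(ticketed)
| eval ticketed = 1
| outputlookup append=true ticketed_forwarders
```

Hosts that already appear in ticketed_forwarders are dropped by the `where isnull(ticketed)` step, so only newly missing hosts reach the alert action; a companion search would remove hosts from the lookup once they reconnect.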

Why are some Windows Security events not logging in Splunk?

I have a UF set up on a Windows 2012 server. I am collecting Windows Security logs, but I see some events in Event Viewer that are not making it into Splunk. How can I get all the logs from the Windows server into Splunk?

Why does the universal forwarder container require docker.sock to be mounted?

Can someone explain why the Docker universal forwarder container requires docker.sock to be mounted? Is there a specific reason for this, and is there a way to get around it? From everything I've read, it's pretty much recommended that you don't mount the Docker socket unless it's absolutely necessary and you are absolutely sure you can trust the security of the container mounting it. Since there's always some degree of not trusting anything anymore, I find it hard to find anything that meets those requirements. So, bottom line: do you have to mount the Docker socket? Does the forwarder still function without it, or does it become useless?

Splunk Universal Forwarder TCPOUT Cutting Events in Transit

I have a UF monitoring 5 rather large (200 MB to 12 GB) files and sending uncooked data via tcpout to an rsyslog server. However, it appears that some of the events are getting split randomly. I suspect it's due to the autoLB function, but I want to ask here before I resort to sending additional tens of GBs of data per hour to a single server. Inputs on the UF:

    [batch:///output/file.txt]
    move_policy = sinkhole
    crcSalt =
    _TCP_ROUTING = senddata

Outputs on the UF:

    [tcpout]

    [tcpout:senddata]
    server = 1.1.1.1:515, 1.1.1.2:515, 1.1.1.3:515, 1.1.1.4:515
    sendCookedData = false
    disabled = false

To add to this, I have verified that the data is NOT cut in the raw text file before the UF picks it up. There are about 5-19 cuts per file (so I'm losing about 5-19 events per hour per file), which makes me suspect that autoLB for tcpout is load balancing in the middle of an event. All of these events are single-line JSON; the data is cut randomly throughout the events, sometimes 5 characters in, sometimes in the middle of a JSON object, and sometimes it's only the very last "}" of the JSON that is missing.
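For what it's worth, with unparsed forwarding the autoLB switch point is not event-aware by default; universal forwarders 6.5 and later support per-sourcetype event breakers in props.conf that let the forwarder switch receivers only on event boundaries. A sketch, assuming the input is given an illustrative sourcetype name:

```
# props.conf on the forwarder
[my_json_sourcetype]
# allow load balancing to cut only at event boundaries
EVENT_BREAKER_ENABLE = true
# single-line JSON events: break between events at newlines
EVENT_BREAKER = ([\r\n]+)
```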

IIS filter transform not processing when forwarded from universal forwarder, but does with manual file input?

I've found many entries on the subject of filtering IIS logs, with people saying X has worked; however, I'm not able to get it fully working. If I copy an IIS log that should be filtered to the server and import it manually, it works (as far as I can tell; I only went to preview), but if I use a UF on a Windows Server 2003 box (so an older UF version), forwarding to the Splunk server on Windows 2012 (6.6.3), it doesn't get filtered. Any help here?

Props.conf:

    [iis]
    TRANSFORMS-ignoredpages = iis_ignoredpages

Transforms.conf:

    [iis_ignoredpages]
    #SOURCE_KEY = field:cs_uri_stem
    REGEX = (Page1|Page2)
    DEST_KEY = queue
    FORMAT = nullQueue

Page1 and Page2 are only part of the cs-uri-stem (that's its name in the IIS logs, but Splunk seems to turn it into cs_uri_stem); the actual values are like companyname.product.page1/service.asmx or companyname.product/page2.asmx. I've tried placing the props and transforms files in the system/local directory of both the UF and the Splunk receiver, restarted both, and it continued to process the unwanted pages. I understand that the UF itself can't filter these lines, but it should process them sufficiently to get past props and transforms on the Splunk machine. **I assume there's a way I can make the Universal Forwarder send the logs raw so the Splunk box will go "OH, W3C, process normally," but how do I do that?**

---- Less relevant ----

Filtering out these pages is absolutely critical, as they're hundreds of thousands of internal calls that would spam the Splunk logs and overwhelm the 500 MB/day limit that I need to stay under for proof of concept.
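As a point of comparison, a null-queue transform on the indexing tier normally matches against _raw unless SOURCE_KEY says otherwise, so the regex has to match the raw event text. A sketch of the indexer-side pair (case-insensitive matching and the literal page names here are assumptions based on the question; note that if the sourcetype uses INDEXED_EXTRACTIONS, parsing moves to the forwarder and these files must live there instead):

```
# props.conf on the indexing tier
[iis]
TRANSFORMS-ignoredpages = iis_ignoredpages

# transforms.conf on the indexing tier
[iis_ignoredpages]
# matches anywhere in _raw by default
REGEX = (?i)(page1|page2)
DEST_KEY = queue
FORMAT = nullQueue
```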

Search logs show up only when I restart universal forwarder on domain controller

Hi guys, I have installed Splunk UF 6.3.3 on our Domain Controller (Windows Server 2012), and the following is my inputs.conf:

    [WinEventLog://Security]
    disabled = 0
    start_from = newest
    current_only = 1
    evt_resolve_ad_obj = 0
    checkpointInterval = 5
    # exclude these event IDs from being indexed.
    blacklist = 4634,4648,5156,4776,5145,4769,5158,5140,4658,4768,4661,4771,4672,5136,4770,4932,4933,4760,4625,4656,4663,4690,5154,4670,5152,5157,4724,4738,4931
    index = wineventlog
    renderXml = false

The ISSUE is: in the data summary, I can see the event count for this sourcetype increasing in real time, so events are getting indexed, but a search does not show any new events. Only when I restart the UF do I begin to see logs, which then stop again, and I have to keep restarting splunkd on the UF to see new logs in search. Any help would be appreciated. Thanks in advance.

Error messages when I try to connect the universal forwarder

Hi, I'm brand new to Splunk and have been given an existing Splunk environment to manage. I need to get a universal forwarder installed on a couple of servers. This environment already has several universal forwarders in place. I installed the forwarders and selected the Windows Application, Security, and System logs. The deployment is set up to listen on port 9997. In the splunkd log on the forwarder server, I see these lines repeated and am not sure what they mean. I'd appreciate any help, and keep in mind I'm still very new to this. Thanks!

    09-28-2017 18:45:47.694 -0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
    09-28-2017 18:45:59.695 -0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
    09-28-2017 18:46:02.913 -0400 WARN HttpPubSubConnection - HTTP client error in http pubsub Connection closed by peer uri=https://team-splunk01:9997/services/broker/connect/A917C286-95F0-4285-9F0C-8FDE5F9C5596/TEAM-SV-FILE01/c8a78efdd40f/windows-x64/8089/7.0.0/A917C286-95F0-4285-9F0C-8FDE5F9C5596/universal_forwarder/TEAM-SV-FILE01
    09-28-2017 18:46:02.913 -0400 WARN HttpPubSubConnection - Unable to parse message from PubSubSvr:
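One detail worth noticing in logs like these: the pubsub URI points at port 9997, which is normally the data-receiving port, while deployment-client traffic goes to the deployment server's management port (8089 by default). A sketch of the client-side stanza (the host name is taken from the log and the port is the default management port, both assumptions to verify against your environment):

```
# deploymentclient.conf on the forwarder
[target-broker:deploymentServer]
# management port of the deployment server, not the 9997 receiving port
targetUri = team-splunk01:8089
```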

Splunk for Blue Coat ProxySG: why can't I import using a universal forwarder?

http://docs.splunk.com/Documentation/AddOns/released/BlueCoatProxySG/Releasenotes I am using the Splunk Add-on for Blue Coat ProxySG. I can successfully import using the GUI; however, importing via a universal forwarder does not work. Does anyone know anything about this? I think the commented-out part is not working well.

Trouble setting up universal forwarder for Windows Log Collection

I am trying to set up my Splunk Enterprise 6.6.1 instance to ingest Windows logs from remote PCs, but I'm not having much luck. I know I am missing or not comprehending something, but I can't figure it out. So far, I have configured the receiver on my indexer on TCP port 9997. I have installed the Windows universal forwarder v7.0.0 on the Windows PC I want to collect the logs from and enabled collection of both the System and Application logs. I am seeing the following in the splunkd log file on the client where the universal forwarder is installed:

    09-29-2017 08:58:23.417 -0400 INFO TcpOutputProc - Connected to idx=10.0.103.210:9997, pset=0, reuse=0.
    09-29-2017 08:58:59.026 -0400 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_10.1.211.25_8089_bens-testbox.patientfirst.com_BENS-TESTBOX_FC09E8A3-4F3E-4CCC-BF5B-8C3D6884D2C4
    09-29-2017 08:59:59.040 -0400 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_10.1.211.25_8089_bens-testbox.patientfirst.com_BENS-TESTBOX_FC09E8A3-4F3E-4CCC-BF5B-8C3D6884D2C4

I have the following in my inputs config on the universal forwarder client:

    [default]
    host = BENS-TESTBOX

    # Windows platform specific input processor.
    [WinEventLog://Application]
    disabled = 0

    [WinEventLog://Security]
    disabled = 1

    [WinEventLog://System]
    disabled = 0

I then have the following in my Splunk Enterprise inputs config file:

    [default]
    host = splunk1

    [splunktcp://9997]
    connection_host = none
    disabled = 0

When I search through my search head (currently a single indexer with a single separate search head) for host: #ipofclientpc, I don't get anything. I have not set up a data input, which I think is my issue, but I can't figure out the correct process to configure that to pull/receive from the forwarder. If anyone can help, I would be most appreciative.
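Two small things to check when verifying a setup like this: WinEventLog inputs without an explicit `index` setting land in the default index (main), and host filters in SPL use `host=` rather than `host:`. A sketch of a verification search that sidesteps both (host name taken from the inputs.conf above):

```
index=* host="BENS-TESTBOX"
```

Also note the forwarder's host value was set to the machine name, so searching for the client's IP as host would return nothing even when data is flowing.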

Windows events not showing up on indexer

A UF was installed on 2 Windows domain controllers. These are in a different Windows forest than my other devices. I had to add them to the windows_eventlog class manually by IP, since the DNS name can't be resolved. I now see them sending to the indexer, but I can't search any of the events. How can I troubleshoot this? Thanks!

File not being read by Splunk in a directory while others are

Hi, I have a directory defined in inputs.conf on a host running the UF: /var/middleware/inventory/var. According to splunkd.log, the directory is being monitored:

    10-04-2017 11:50:50.105 +0200 INFO TailingProcessor - Adding watch on path: /var/middleware/inventory/var.

In this directory there are nine different files, but only eight of them are read. They all have the same permissions, and the content format is also the same. Does anyone know why the last file is not being read by Splunk? There is no log entry about it. Thanks for your help.
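One way to see the tailing processor's per-file view, including files it has decided to skip and why (for example, a CRC collision with an already-read file), is the forwarder's input-status command. A sketch, run on the forwarder host:

```
# ask splunkd which files it is tracking and their read status
$SPLUNK_HOME/bin/splunk list inputstatus
```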

Splunk Universal Forwarder 6.5.2 running at 100% CPU on Solaris

Can someone help me resolve this issue? The universal forwarder's splunkd process is running at 100% CPU. I am monitoring around 50 log files, and the data is not more than 30 GB daily. For the monitor inputs, I am not using any wildcards and have given the full path of each log file.

What is the recommended version of the universal forwarder?

Hi folks, we have various Splunk universal forwarder versions (4.3.1, 5.0.1, 6.1.1) in our environment, and we are planning to upgrade the old versions to the newest Splunk-recommended version. Is there a recommended universal forwarder version for keeping the forwarders stable? Thanks, Sridhar