Channel: Questions in topic: "universal-forwarder"
Viewing all 1551 articles

Filtering Windows Security Events based on blacklist

Hello, I am using Splunk UF 6.1.4 on my Windows domain controllers to monitor Windows events. I've put a working blacklist in place to filter out a number of events, and that works fine. The issue is that I also want to filter out EventCode 4776 where the Error_Code is 0x0:

```
[WinEventLog://Security]
disabled = 0
start_from = oldest
evt_resolve_ad_obj = 1
checkpointInterval = 5
index = soc
ignoreOlderThan = 2d
#whitelist = Category=9
blacklist1 = 4624,4634,4658,4656,4690,4661,4662,5136,5137,538,675,540,566,565,562
blacklist2 = EventCode="4776" Error_Code="0x0"
```

As I say, blacklist1 works. Or should I be setting blacklist2 to `blacklist2 = EventCode="4776" Message="Error Code:*0x0"`?
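For what it's worth, the advanced key=regex blacklist form in the WinEventLog input only accepts a fixed set of keys (EventCode, Message, Category, and so on); `Error_Code` is not among them, and the values are regular expressions rather than globs, so `Error Code:*0x0` would mean "zero or more colons". A hedged sketch of the Message-based variant, which seems the more promising direction:

```
[WinEventLog://Security]
# Message is a supported blacklist key; the value is a regex, so match the
# whitespace after the colon explicitly instead of using a glob-style "*".
blacklist2 = EventCode="4776" Message="Error Code:\s*0x0"
```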

Problem with Line breaking between Splunk 6.2.3 vs 6.3.0

We have a development environment (a replica of prod) running Splunk 6.2.3 (upgraded from 6.1.5). I am testing monitoring of a file containing SNMP traps received via net-snmp snmptrapd on a *nix platform. Earlier this week I upgraded Splunk from 6.1.5 to 6.3.0 on a **new** standalone test instance to validate the new feature set, and importing the SNMP trap file was one of the tests. I am noticing that line breaking doesn't seem to work on the upgraded 6.3.0 release. Is anyone else seeing this? In 6.2.3, only the first event breaks incorrectly; all other events break correctly with or without the TA. In 6.3.0, the events are getting merged. *__Note:__* I added the events using the oneshot method. To force line breaking on both releases I created a props.conf with the default values below, but the behavior is the same:

```
[snmptrap:generic]
TIME_FORMAT = %Y-%m-%d %H:%M:%S
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
```

Sample traps are logged as below:

```
2015-09-25 11:30:13 10.11.12.13(via UDP: [trapforwarder]:162->[traprec]) TRAP, SNMP v1, community testing .1.3.6.1.4.1.6827.10.17.7.1 Enterprise Specific Trap (1035) Uptime: 22 days, 19:41:52.45 .1.3.6.1.4.1.6827.10.17.3.1.1.1.1 = INTEGER: 1
2015-09-25 11:30:13 10.11.12.13(via UDP: [trapforwarder]:162->[traprec]) TRAP, SNMP v1, community testing .1.3.6.1.4.1.6827.10.17.7.1 Enterprise Specific Trap (1034) Uptime: 22 days, 19:41:53.07 .1.3.6.1.4.1.6827.10.17.3.1.1.1.1 = INTEGER: 1
2015-09-25 11:30:14 10.11.12.13(via UDP: [trapforwarder]:162->[traprec]) TRAP, SNMP v1, community testing .1.3.6.1.4.1.6827.10.17.7.1 Enterprise Specific Trap (1035) Uptime: 22 days, 19:41:53.71 .1.3.6.1.4.1.6827.10.17.3.1.1.1.1 = INTEGER: 1
```
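When line merging misbehaves, one workaround is to take SHOULD_LINEMERGE out of the picture entirely and break explicitly on the leading timestamp with LINE_BREAKER. The regex can be sanity-checked outside Splunk; the snippet below uses a simplified reconstruction of the trap lines above, and the props.conf mapping in the comments is a sketch, not a verified fix:

```python
import re

# Break wherever a new line begins with a "YYYY-MM-DD HH:MM:SS" timestamp.
# In props.conf only the first capture group is discarded, so a lookahead
# keeps the timestamp attached to the next event here as well.
LINE_BREAKER = r"([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"

# Simplified reconstruction of the sample traps above.
sample = (
    "2015-09-25 11:30:13 10.11.12.13 TRAP, SNMP v1, Enterprise Specific Trap (1035)\n"
    "2015-09-25 11:30:13 10.11.12.13 TRAP, SNMP v1, Enterprise Specific Trap (1034)\n"
    "2015-09-25 11:30:14 10.11.12.13 TRAP, SNMP v1, Enterprise Specific Trap (1035)\n"
)

# re.split keeps the captured newline runs as separate list elements; drop them.
events = [e for e in re.split(LINE_BREAKER, sample) if e.strip()]

# Equivalent props.conf sketch (an assumption to test, not a confirmed fix):
#   [snmptrap:generic]
#   SHOULD_LINEMERGE = false
#   LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})
```

If the regex yields three events on the sample, the same pattern should at least be a reasonable candidate to try in props.conf on the 6.3.0 instance.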

Is there a plan to release a Universal Forwarder for the Raspberry Pi 2?

Is there a plan to release a Universal Forwarder for the Raspberry Pi 2? With a different processor, it's my understanding that it will need to be recompiled...

How to troubleshoot why an indexer is only receiving data from 50% of forwarders in my environment?

I spent hours trying to figure this out Friday, and it's been bugging me all weekend. So I'm hoping the community can help me figure it out! The info below is all from memory; hopefully I don't miss anything. First off, I'm completely new to Splunk, so if I butcher terminology or concepts, please understand! I am trying to come in and fix something that appears to have never worked. Several months ago, the Splunk universal forwarder was pushed out to all of my Windows machines; I am fairly certain it was pushed out using our patching solution, BigFix. Fast forward to today: I am receiving data from about 150 hosts, but I should be receiving data from closer to 350. My domain controllers are among the systems that are not forwarding data. The guy before me decided to set up a heavy forwarder, something about blowing through our license. I haven't looked into the heavy forwarder much, but I'm assuming it's working since half of the hosts are getting through to the indexer. So far:

1. I've compared local/inputs.conf and local/server.conf on a working system and a not-working system. According to the guy who did the install, those are the only files he touched after the install. On both systems the files are basically identical.
2. On the not-working system and the heavy forwarder, I've run `NETSTAT -an` to verify that the two systems are establishing a connection with each other.
3. I've dug through `var/logs/splunkd.log` on both the working and the non-working system, and I didn't see anything obvious that would indicate what is wrong on the non-working system.
4. I've spent hours making changes to inputs.conf and server.conf, then restarting the Splunk forwarder service, to no avail.

Where else can I look, and what else can I do, to figure out why only half of my systems are able to forward events to the indexer? Any and all help would be greatly appreciated. Thanks!
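One check worth adding to the list: the indexer's own `_internal` index records every forwarder connection, so you can compare the set of hosts that have ever connected against the expected 350. A search along these lines (field names per the `tcpin_connections` group in metrics.log; adjust if your version differs) run on the indexer should show it:

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_connected BY hostname, sourceIp, fwdType, version
```

Hosts absent from this list have never connected at all, which points at the network path, DNS, or outputs.conf on those forwarders rather than their inputs.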

Why are fields not being extracted using props.conf on my universal forwarder?

Hi, I have been using a props.conf file to extract fields from my event logs, but it does not seem to be working. Below is the sample props.conf; the event is shown in the attached image. Any help is much appreciated.

```
C:\Program Files\SplunkUniversalForwarder\etc\apps\my_app\local\props.conf

[Script:WinService]
EXTRACT-service = SERVICE_NAME: (?\S*)
EXTRACT-state = STATE\s*?: [0-9]\s*(?\S*)
```

Many thanks in advance. Regards, Rajnish Kumar
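Two observations, hedged: `EXTRACT-` settings are search-time extractions and are consulted on the search head, not on a universal forwarder, so this props.conf may simply be in the wrong place; and the named capture groups appear to have been stripped from the post (likely eaten as HTML tags, leaving `(?\S*)`). The regexes themselves can be checked outside Splunk. The group names below (`service_name`, `state`) are hypothetical stand-ins, and the event text is reconstructed from typical `sc query` output since the original attachment isn't available:

```python
import re

# Reconstructed sample event; group names are hypothetical stand-ins for
# whatever the stripped (?<...>) names originally were.
event = (
    "SERVICE_NAME: wuauserv\n"
    "        TYPE               : 20  WIN32_SHARE_PROCESS\n"
    "        STATE              : 4  RUNNING\n"
)

service = re.search(r"SERVICE_NAME:\s*(?P<service_name>\S+)", event)
state = re.search(r"STATE\s*:\s*[0-9]+\s*(?P<state>\S+)", event)
```

If the regexes match here but fields still don't appear in Splunk, move the stanza (with Splunk-style `(?<name>...)` groups) into a props.conf on the search head rather than the forwarder.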

Windows Custom Application logs onboarding - Scan all drives and list the filenames

We have a requirement to detect various application logs across multiple Windows boxes. The current data-collection process is too manual: going to specific teams to find the location of application logs, and so on. I wanted to test a "full scan and learn" approach. My plan is:

- Collect the location of any logs (e.g. `*.log`, `*.logs`) on the C drive, D drive, etc.
- With that hint of where the logs live, do a second iteration to collect the specific logs.

1. Has anyone tried this approach?
2. How do I get just the filenames recursively on Windows using a Splunk Universal Forwarder?
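Splunk's monitor input doesn't have a "filenames only" mode, but a scripted input on the UF could do the first pass. A minimal sketch in Python (the drive roots and patterns are assumptions; PowerShell's `Get-ChildItem -Recurse` would do the same job natively):

```python
import fnmatch
import os

def find_logs(roots, patterns=("*.log", "*.logs")):
    """First-pass scan: return matching file paths only, not their contents."""
    hits = []
    for root in roots:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                # Lower-case the name so matching is case-insensitive,
                # as Windows filenames are.
                if any(fnmatch.fnmatch(name.lower(), p) for p in patterns):
                    hits.append(os.path.join(dirpath, name))
    return hits
```

Run as a scripted input with something like `find_logs(["C:\\", "D:\\"])` and print one path per line; each scan result then lands in an index you can review before the second, targeted iteration.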

Why does my Deployment Client not phone home with error "unable to resolve my hostname."?

I have installed a universal forwarder on a Linux machine and configured it as a deployment client to phone home to a Splunk server at 192.168.1.28:8089. Unfortunately, it never does. My deploymentclient.conf is:

```
[deployment-client]
disabled = false

[target-broker:deploymentServer]
targetUri = 192.168.1.28:8089
```

I checked on the client side with `splunk display deploy-client`, which outputs "Deployment Client is enabled." However, when I looked at splunkd.log, searching for DC (Deployment Client), I saw these lines:

```
Creating a DeploymentClient instance
unable to resolve my hostname.
DeploymentClient is disabled.
```

I think this is the problem, but I cannot solve it. I don't know where the "hostname" comes from, so I don't know how to modify it. Can anyone help me out? Cheers.
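"unable to resolve my hostname" generally means the OS hostname doesn't resolve to an address, and splunkd needs that resolution at startup. A quick check is `hostname` followed by `ping $(hostname)`; if resolution fails, one common fix is an /etc/hosts entry mapping the hostname to the machine's address. The hostname and IP below are placeholders:

```
# /etc/hosts -- substitute the output of `hostname` and the machine's real IP
127.0.0.1      localhost
192.168.1.50   myclient.example.com   myclient
```

After fixing resolution, restart the forwarder and check splunkd.log again for the DeploymentClient lines.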

Why is my sourcetype not parsing as CSV and am getting two events: one with a header and one with a raw event?

I'm trying to parse a CSV file, but I'm getting two events: one with the header and one with the raw event. It is driving me nuts. I've tried deleting and reloading the data multiple times. The file has 2 lines, so at least it is small. The file is being loaded via the CLI:

```
splunk add oneshot -sourcetype backtestMetaData -index grb_test
```

On my server, props.conf is in `./etc/apps/<app_name>/local/props.conf`. I've looked for 'backtest' in other props.conf files but don't see any. Nothing special on the forwarder.

```
[ backtestMetaData]
INDEXED_EXTRACTIONS = csv
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = startTime
category = Structured
description = format for csv from testREsutls.csv
disabled = false
pulldown_type = true

[source::.../testResults.csv]
sourcetype=backtestMetaData
```
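Two things worth ruling out, speculatively, since the settings themselves look plausible: INDEXED_EXTRACTIONS is applied where the file enters Splunk, so the stanza must exist on the instance running `splunk add oneshot`; and the posted stanza name carries a leading space (`[ backtestMetaData]`), which may keep it from matching the sourcetype assigned by the `source::` rule. A cleaned-up version of the same stanza:

```
[backtestMetaData]
INDEXED_EXTRACTIONS = csv
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = startTime
category = Structured
disabled = false

[source::.../testResults.csv]
sourcetype = backtestMetaData
```

If the structured settings never applied, the header line falls through to generic line breaking, which would explain the header arriving as its own event.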

How does universal forwarder load balancing work?

Given this in outputs.conf:

```
[tcpout: my_LB_indexers]
server=10.10.10.1:9997,10.10.10.2:9996,10.10.10.3:9995
```

The documentation states: "The universal forwarder will load balance between the three receivers listed. If one receiver goes down, the forwarder automatically switches to another one on the list." The question is: if 10.10.10.1:9997 is always up, does that mean the forwarder won't send data to the other two indexers, and will only change indexers once 10.10.10.1:9997 goes down? Or does it distribute the data across all three indexers regardless of whether any one is up or down?
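As I understand the default behavior: the forwarder uses automatic load balancing, sending to one indexer at a time and switching to another on the list at a fixed interval (`autoLBFrequency`, 30 seconds by default), not only on failure, so over time data is spread across all three. A failure just forces an early switch. A sketch making the default explicit (note also that stanza names shouldn't contain a space after the colon):

```
[tcpout:my_LB_indexers]
server = 10.10.10.1:9997,10.10.10.2:9996,10.10.10.3:9995
# Switch to a different indexer in the list every 30 seconds (the default),
# so all three receive data even while the first stays up.
autoLBFrequency = 30
```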

UF not sending logs from all folders monitored

Hello Splunkers. I have an issue that I've been dealing with for the past two days with no success in solving it. I'm working in a Splunk cluster environment: 3 search heads and 2 indexers. I have a UF installed on a SunOS machine. This UF monitors a file called runlog.098880020 (the number is just an ID; it doesn't really matter). The log can be found at `/export/tsi/tsi/tsiout.1509/runlog.098880020`. The thing is, the application creates a new folder every month (tsiout.1505, tsiout.1506, tsiout.1507, tsiout.1508, tsiout.1509, ...). This is how I've set up my inputs.conf:

```
[monitor:///export/home/tsi/tsi/.../runlog*]
index = tsi
sourcetype = tsi_logs
```

However, when Splunk starts indexing the files, it indexes only a few folders (e.g., tsiout.1406 and tsiout.1409). If I set my inputs.conf as follows, I can see the current log being indexed:

```
[monitor:///export/home/tsi/tsi/tsiout.1509/runlog*]
index = tsi
sourcetype = tsi_logs
```

Do you know why this is happening? Shouldn't the `...` tell Splunk to search every folder for the runlog* file? Thank you!
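Two things to check, both speculative. First, the recursive stanza monitors `/export/home/tsi/tsi` while the quoted example path is `/export/tsi/tsi`; if the real tree lives at the latter, the wildcard stanza simply doesn't match it. Second, when the monthly runlog files begin with identical content, Splunk's initial-CRC check can flag later files as already indexed and skip them; salting the CRC with the file path avoids that:

```
[monitor:///export/home/tsi/tsi/.../runlog*]
index = tsi
sourcetype = tsi_logs
# Speculative fix: if each month's runlog starts with identical content, the
# CRC check marks later files as already seen; salting with the source path
# makes each file's checkpoint unique.
crcSalt = <SOURCE>
```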

How to automate a silent installation of a Splunk universal forwarder on Solaris using the PKG file?

Hello fellow Splunkers, has anyone been able to install the Splunk Universal Forwarder on Solaris using the PKG file? I'm trying to script it so that it installs silently, without any interaction. Has anyone been able to achieve this? It might be a basic question, but I can't seem to find the answer anywhere.
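For what it's worth, System V packaging supports non-interactive installs via an admin file plus `pkgadd -n`. A sketch; the admin-file path and package file name are placeholders, and the keyword values are the usual prompt-suppressing ones rather than anything Splunk-specific:

```
# /tmp/splunk-admin -- pkgadd(1M) admin file that suppresses all prompts
mail=
instance=overwrite
partial=nocheck
runlevel=nocheck
idepend=nocheck
rdepend=nocheck
space=nocheck
setuid=nocheck
conflict=nocheck
action=nocheck
basedir=/opt
```

Then something like `pkgadd -a /tmp/splunk-admin -n -d <downloaded .pkg file> all`, followed by `/opt/splunkforwarder/bin/splunk start --accept-license` for a fully hands-off first start.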

How to install Splunk App for Stream in a test machine without installing Splunk

Hello, I want to install the Splunk App for Stream on a universal forwarder on a local test machine, so it can send data to a Splunk Enterprise instance, without installing full Splunk Enterprise on the test machine. How do I do this? Thank you very much!

Why is my deployment client showing as disabled and says splunkd needs to be up, but it already seems to be?

I'm troubleshooting a deployment client and I've gotten stuck.

Deployment server:

```
$ /splunk/bin/splunk --version
Splunk 6.1.4 (build 233537)
```

Note: this server deploys apps successfully to 125+ clients.

Deployment client:

```
$ /opt/splunkforwarder/bin/splunk --version
Splunk Universal Forwarder 6.1.5 (build 239630)
```

The problem: no deployments.

```
$ /opt/splunkforwarder/bin/splunk display deploy-client
Deployment Client is disabled.
This command [GET /services/messages/restart_required/] needs splunkd to be up, and splunkd is down.

$ service splunk status
Splunk status:
splunkd is running (PID: 4922).
splunk helpers are running (PIDs: 4923).
```

Other than the obvious need to upgrade, does this indicate a glaring mistake I'm missing, or is it something more subtle? I'm currently setting up debug logging to see if I can learn more. Thanks.

Is there any history of the apps downloaded to my universal forwarders from my deployment server?

Is there any history of the apps downloaded to my universal forwarders from my deployment server?

Can someone help me to install and configure a universal forwarder on a Windows 7 machine to forward data to Splunk Cloud?

I need to collect the security logs from a Windows 7 machine and send the data to Splunk Cloud. I am new to Splunk and am not familiar with the product. Thanks!
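At a high level the usual path is: install the universal forwarder on the Windows 7 box, install the Splunk Cloud universal forwarder credentials app (downloadable from your Cloud instance; it carries the correct forwarding target and certificates), and enable the Security event log input. A sketch of the input side; the `[tcpout]` stanza below is a placeholder only, since the credentials app normally supplies the real one:

```
# inputs.conf on the Windows 7 host (app/path names are illustrative)
[WinEventLog://Security]
disabled = 0

# outputs.conf -- placeholder; the Splunk Cloud credentials app provides the
# real [tcpout] stanza, hostname, and certificates for your stack
[tcpout:splunkcloud]
server = inputs.example.splunkcloud.com:9997
```

Restart the forwarder service after the changes, then search your Cloud instance for `source="WinEventLog:Security"` to confirm data is arriving.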

After installing a universal forwarder on Windows 7, why am I only receiving log entries from WinEventLog:Setup?

I have installed the Universal Forwarder on a Windows 7 Enterprise workstation, selecting all the event log sources during installation. It is forwarding events to an indexer running on Linux, but the indexer only seems to be processing data for the WinEventLog:Setup sourcetype. I installed the Splunk Add-on for Microsoft Windows, and everything is at the default settings. I'm not certain why the indexer is only processing this one Windows event log sourcetype. How do I go about testing this?
