Channel: Questions in topic: "universal-forwarder"
Viewing all 1551 articles

Splunk Universal Forwarder Deployment with SCCM

Hello, we are trying to deploy the Splunk Universal Forwarder using Microsoft SCCM. I can successfully install the MSI from the command line using:

```
msiexec /i "splunkforwarder-6.3.0-aa7d4b1ccb80-x64-release.msi" AGREETOLICENSE=Yes DEPLOYMENT_SERVER="*mydeploymentserver*:8089" /quiet
```

However, when our SCCM admin uses the same command in his deployment manager, the installation fails. According to the SCCM log, the error is:

```
[LOG[Failed to clear product advertisement, error code 1603]LOG]! date="10-29-2015" component="execmgr" context="" type="3" thread="17300" file="msiexecution.cpp:264"
```

I know this is most likely an SCCM issue, but I wanted to see if anyone out there has received a similar error or had a similar issue. Thanks!
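Error 1603 is Windows Installer's generic "fatal error during installation" code. One way to narrow it down (a sketch, not SCCM-specific; the log path is arbitrary) is to have SCCM run the same command with verbose MSI logging, then inspect the resulting log for the first genuine error:

```
msiexec /i "splunkforwarder-6.3.0-aa7d4b1ccb80-x64-release.msi" AGREETOLICENSE=Yes DEPLOYMENT_SERVER="*mydeploymentserver*:8089" /quiet /l*v C:\Windows\Temp\splunkuf_install.log
```

Since SCCM typically runs installers as the SYSTEM account, reproducing the failure under `psexec -s` can also reveal permission differences versus an interactive install.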

How to deploy a Splunk Universal Forwarder through GPO and MST setup?

I have been trying to push the Splunk Universal Forwarder out to my client systems via GPO. I would like, however, to generate an MST file that a) accepts the EULA and b) sets a predefined receiving indexer. Using Orca.exe, I have attempted MSTs with the following: under the Property table, I assigned the property AGREETOLICENSE the value Yes. As for the receiving indexer, I tried both of the following: 1) creating a new row in the Property table called RECEIVING_INDEXER with the value *ipaddress:portnumber*, and 2) under the AdminProperties row, modifying ;RECEIVING_INDEXER; to ;RECEIVING_INDEXER=*ipaddress:portnumber*. Neither seemed to work. I also made sure to go to the advanced properties on my GPO and check "Ignore language when deploying this package". Any and all help would be greatly appreciated.
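As a sanity check on the property values themselves, it can help to first confirm they work when passed directly on the msiexec command line (a sketch using the same placeholders as the question) before baking them into an MST:

```
msiexec /i splunkforwarder-x64.msi AGREETOLICENSE=Yes RECEIVING_INDEXER="*ipaddress:portnumber*" /quiet
```

If the command-line install configures the forwarder correctly but the MST does not, the problem is in how the transform records the properties rather than in the values themselves.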

How to configure third party certificates for deployment server and deployment client connection?

I am trying to replace the default Splunk certificates with third-party certificates, following http://docs.splunk.com/Documentation/Splunk/6.3.0/Security/Securingyourdeploymentserverandclients I am a bit confused about the edits required. Do both server.conf edits occur on both the server and the client? Does the web.conf edit occur only on the server? Thanks,
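For reference, the kind of server.conf edit the documentation describes lives under the [sslConfig] stanza (a sketch; the certificate filenames and password below are placeholders, not values from the question):

```
# server.conf -- example only; certificate filenames and password are hypothetical
[sslConfig]
enableSplunkdSSL = true
sslKeysfile = myServerCert.pem
sslKeysfilePassword = mypassword
caCertFile = myCACert.pem
caPath = $SPLUNK_HOME/etc/auth/mycerts
```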

Can I upgrade my Linux universal forwarders directly from Splunk 6.0.3 to 6.3.0?

Hello, just checking to see if it is okay to upgrade my Linux universal forwarders directly from 6.0.3 to 6.3.0, or if I need to make an intermediary jump. Thanks.

Universal Forwarder resends entire Security Event log after upgrade.

I have recently started upgrading Windows universal forwarders from 6.0.3 to 6.2.6. After I upgrade them, they seem to be resending the entire Windows Security log (2 GB) instead of continuing where they left off. I can see evidence of this by viewing the indexed data volume from each host starting after it is upgraded, and by running a report on Windows Security events and seeing multiple events with the same RecordNumber field. Now, I could modify my install script to drop the Security log, upgrade the software, and avoid the licensing issues this is causing, but I'd prefer to get to the root cause. Has anyone seen this?
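As a stopgap while chasing the root cause (a sketch only; `current_only` is a documented WinEventLog input setting), the Security log input can be told to collect only events generated after the forwarder starts, which would at least prevent a full re-read on upgrade:

```
# inputs.conf -- hypothetical stopgap, not a root-cause fix
[WinEventLog://Security]
disabled = 0
current_only = 1
```

The trade-off is that events logged while the forwarder is down are skipped, so this is only suitable while diagnosing the checkpoint issue.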

How do I configure Universal Forwarder to not send INFO Metrics over TCP?

My outputs.conf looks like this:

```
[tcpout]
defaultGroup = logstash
disabled = false
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.blacklist = (_audit|_internal|_introspection)

[tcpout:logstash]
server = localhost:7777
sendCookedData = false
useACK = true
```

As well as my actual events, I'm seeing loads of messages being emitted like this:

```
INFO Metrics - group=thruput, name=uncooked_output, instantaneous_kbps=0.176377, instantaneous_eps=0.096773, average_kbps=0.355449, total_k_processed=44.000000, kb=5.467773, ev=3.000000
INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=0.176377, instantaneous_eps=0.096773, average_kbps=0.371606, total_k_processed=46.000000, kb=5.467773, ev=3.000000, load_average=0.030000
INFO Metrics - group=thruput, name=cooked_output, instantaneous_kbps=0.000000, instantaneous_eps=0.000000, average_kbps=0.000000, total_k_processed=0.000000, kb=0.000000, ev=0.000000
INFO Metrics - group=tcpout_connections, name=logstash:127.0.0.1:7777:0, sourcePort=8090, destIp=127.0.0.1, destPort=7777, _tcp_Bps=186.73, _tcp_KBps=0.18, _tcp_avg_thruput=0.39, _tcp_Kprocessed=46, _tcp_eps=0.10, kb=5.47
```

How can I eliminate these from the forwarder output?
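Those INFO Metrics lines originate from the forwarder monitoring its own $SPLUNK_HOME/var/log/splunk/metrics.log, and the forwardedindex filters act on cooked data, so they may not help when sendCookedData = false. One approach (a sketch, assuming the default internal-log monitor is the source) is to disable that monitor on the forwarder:

```
# inputs.conf on the forwarder -- sketch: stop monitoring Splunk's own metrics log
[monitor://$SPLUNK_HOME/var/log/splunk/metrics.log]
disabled = true
```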

Can I configure a universal forwarder to send syslog messages to a syslog server?

Could someone help me out here? Can I configure a universal forwarder to send syslog messages to a (non-Splunk) syslog server? Right now I have a universal forwarder that is sending data to a Splunk indexer. Can I configure the same forwarder to send the data to another system (non-Splunk) as syslog messages?
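For reference, outputs.conf does define a syslog output stanza; whether a universal forwarder honors it (Splunk's routing documentation discusses syslog output in the context of heavy forwarders) is exactly the question being asked. A sketch, with placeholder names:

```
# outputs.conf -- hypothetical syslog output; group name and server are placeholders
[syslog]
defaultGroup = my_syslog_group

[syslog:my_syslog_group]
server = syslog.example.com:514
type = udp
```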

Universal forwarder is truncating/adding extra line breaks to events output over TCP

We have a universal forwarder set up to forward incoming messages to logstash, TCP -> forwarder -> TCP.

outputs.conf:

```
[tcpout]
defaultGroup = logstash

[tcpout:logstash]
server = localhost:7777
sendCookedData = false
useACK = true
```

And inputs.conf:

```
[tcp://:9997]
sourcetype = _json

[monitor://$SPLUNK_HOME/var/log/splunk/metrics.log]
disabled = true

[monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
disabled = true

[monitor://$SPLUNK_HOME/var/log/splunk/license_audit.log]
disabled = true
```

If we connect our load tester directly to logstash, our messages appear correctly formatted. If we go via the universal forwarder, messages come through with extra line breaks. It's as if Splunk isn't able to tell where one event stops and another begins. Interestingly, if we send the same event multiple times, it seems to get truncated at the same point each time. All our messages originating from the load-testing tool are JSON formatted and newline separated. We have also tried terminating messages with EOT characters, but to no avail. What could be causing this? Is there a key piece of information/documentation we are missing?
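One thing worth ruling out (a sketch; whether these settings take effect for raw TCP pass-through on a universal forwarder is part of the question) is the event-breaking configuration for the sourcetype, since line merging and the default per-line truncation limit can both split or cut long JSON events:

```
# props.conf -- sketch for newline-delimited JSON; applied wherever parsing actually happens
[_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 0
```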

Why are my universal forwarder data inputs to index CSV files not working? (no indexed data, no sign of change in configuration)

Hello fellow Splunk users! I am encountering a problem with indexing .csv files. A bit of background: I am trying to index Windows Server 2003 data. Installing a universal forwarder does not work on this machine (the OS is apparently too old). Therefore, I installed a tool on the machine that forwards the logs to a syslog server. This works flawlessly. The syslog server (Windows Server 2012 R2) stores the logs from the Windows 2003 server in a local folder (C:/syslogServer/). This folder contains subfolders for each machine the syslog server collects data from (C:/syslogServer/win2003), and the subfolders contain .csv files. I would like Splunk to index those files. The syslog server has a universal forwarder installed, and on my deployment server I tried to configure a data input collecting the .csv files. I tried all variants: telling Splunk the path to C:/syslogServer (which should recursively index all subfolders and contained files); telling Splunk the path to C:/syslogServer/win2003; and telling Splunk the path directly to the file I would like to index, C:/syslogServer/win2003/file.csv. See image for details. ![files and directories][1] I also tried uninstalling the universal forwarder on the syslog server and reinstalling it, telling the installer that I want to index the file (thus not using the deployment server, but manually entering the indexer). Result: no data from the created index is found; no data from the given source is found; no data from the given source type is found. Also, I could not find any error messages in the log files (python.log, splunkd.log). Can someone please tell me what to do? Or is there any other way to index data from a Windows 2003 server? [1]: /storage/temp/71179-unbenannt.png
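For comparison, a hand-written monitor stanza for this folder layout on the forwarder would look something like the following (a sketch; the index name, sourcetype, and whitelist are placeholders rather than values from the question):

```
# inputs.conf on the universal forwarder -- sketch; index and sourcetype are hypothetical
[monitor://C:\syslogServer]
disabled = 0
recursive = true
whitelist = \.csv$
index = win2003_syslog
sourcetype = csv
```

One classic cause of the "no errors on the forwarder, no data on the indexer" symptom is the target index not existing on the indexer, so that is worth verifying first.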

Is it possible for Windows event logs to be flagged up on the Active Directory and passed to a Splunk server via universal forwarder?

I have been assigned the task of implementing Splunk on my company network. I have syslog communication with my server with no problems, but I would like to have my Windows devices communicating with Splunk. Using the universal forwarder on my Active Directory server will show changes to the Active Directory config. However, my ultimate aim is to show logs from all the Windows devices on my network. As an example, I would like to determine whether one of the users or computers in my domain has changed their Windows Firewall settings, or whether they have locked their account. I have installed the universal forwarder on my AD server, and have also set up a Group Policy Object to audit events based upon what I need. My results so far are that only changes to my AD are being logged, such as the creation of a new OU, GPO, or user. Is there any possibility for my Windows events to be flagged up on the AD server and passed to my Splunk server through the forwarder? Additionally, does the server running Splunk have to reside on the same domain as the AD and Windows devices?
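One common pattern that matches this goal (sketched here; it assumes Windows Event Forwarding subscriptions are configured separately via WEF/WinRM, which is outside Splunk itself) is to have domain machines forward their events to a collector server, then monitor the collector's ForwardedEvents log with a single universal forwarder:

```
# inputs.conf on the Windows event collector -- sketch; assumes WEF already delivers events here
[WinEventLog://ForwardedEvents]
disabled = 0
```

This avoids installing a forwarder on every workstation, at the cost of maintaining the WEF subscriptions.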

Where should I put my syslog universal forwarder/deployment server with regards to subnets and firewalls in an indexer clustering environment?

Hi folks, I'm planning on installing some new machines running Splunk instances. Two of the machines are going to run an indexer cluster, one a cluster master and one a search head. The last machine is going to run both a deployment server and a syslog universal forwarder, where syslogs are going to be written to file on the forwarder and forwarded to the indexer cluster. The deployment server is going to deploy apps to the forwarders and (probably) to the one search head. What I'm wondering is where it would be most logical to put my syslog forwarder/deployment server? Does it make sense to put it in the same subnet as the indexer cluster, cluster master, and search head to keep it nice and simple, or would this possibly make it difficult for apps/configuration to be deployed to forwarders outside the subnet? The alternative would be to make it "external", outside the subnet of the indexer cluster, cluster master, and search head. I'm sorry that I can't give you any details regarding firewalls, domains, etc., mainly because I don't know myself due to the complexity of the system I'm dealing with. I was just hoping someone had any experience with similar cases? Any input would be much appreciated, thanks!

Splunk Light: After creating a server class to collect Windows event logs from one server, why am I unable to modify it or create an additional server class?

I'm evaluating Splunk Light for purchase and running into some issues collecting Windows event logs from multiple servers. I installed the universal forwarder on a few machines; then, to test the setup, I configured a receiver, created a server class, and set it up to collect logs from the Windows Application, Security, and System logs, which is working great. So I continued installing the forwarder on a number of other machines, and that's where I ran into an issue. From looking through the UI and doing some Googling, it seems that Splunk Light isn't able to manage server classes. It's really odd that you can create one and not modify it, but that would be absolutely fine if I had the ability to manage forwarder clients individually. However, it seems that's not possible either: I can't set up a forwarded data input without using a server class; I can't add a new server to an existing server class; and I can't add multiple server classes with the same Windows event log inputs. When I try, I receive the error "Cannot create another input for the event log "Application", one already exists." So how are you supposed to collect forwarded Windows event logs from an additional server in Splunk Light?

Splunk 6.2.3 Universal Forwarder maxQueueSize: What is the algorithm used to determine the amount of memory to use?

The outputs.conf.spec shows a default value of "auto". The Splunk Universal Forwarder version is 6.2.3 on RHEL 6.6. What is the algorithm used to determine the amount of memory to use? I have OS personnel asking what the maximum possible memory usage for the agent is.
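For context, the setting in question is per-tcpout stanza, and one way to sidestep the question of what "auto" resolves to is to pin the queue size explicitly so the memory ceiling is known (a sketch; the value below is arbitrary):

```
# outputs.conf -- sketch: replace the "auto" default with an explicit cap
[tcpout]
maxQueueSize = 7MB
```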

Fixing Splunk for Symantec and its search criteria

OK, so after fighting with this app for a few days, it seems part of the issue has to do with the actual searches. When using Splunk for Symantec, the best approach seems to be to use the universal forwarder, as this allows you to control the sourcetype. This is key. When syslog is used, the data gets there, but it is all wrong and all searches are broken; at least with the forwarder, you can control the sourcetype on delivery. For the sake of simplicity, I will use a single source. On the forwarder, set inputs.conf:

```
[monitor://<PATH>/data/dump/agt_scan.tmp]
sourcetype = symantec:scan
index = symantec
disabled = false
```

That much is simple, and it gets the SEPM data into Splunk. However, this is where the issue lies, and I hope we can fix it. In the Splunk for Symantec app (the v1.x one), when you click Host Overview -> completed scans today, it shows zero. Underneath that box, if you open in search, the search field pre-populates with `host_overview_scans_today`; you have no data and the count is 0, but the good news is that this is a hyperlink to a new search: click it and select view events. The search box now shows `index=symantec_syslog sourcetype=sep12:scan status=completed`, and below there are no results. From here, there are two issues, one of which is corrected by using the forwarder as opposed to syslog: 1. `sourcetype=sep12:scan`: with syslog, the sourcetype is just sep12, so the search returns nothing; at least with the forwarder, you can set the sourcetype to sep12:scan. 2. `status=completed`: this too is an issue, because the imported data does not have a status field. Remove status= and you end up with results. So, depending on how you can edit an existing app and its searches, Splunk for Symantec would technically work. Otherwise, it would seem the answer is Splunk Enterprise Security.
This appears to be where the 2.x Symantec application comes into play, because looking at the TA, it also utilizes the forwarder; but once the data is in Splunk, the paid Enterprise Security app can then read it. So the 2.x TA has no effect on 1.x. The 1.x app has the pre-built dashboards, but the issue there is that the searches used to build the dashboards do not appear to be accurate, and therefore they display no data. While I have no idea how this can be fixed, at least it makes some sense now as to why the data sent to Splunk cannot be seen in the add-on and instead requires individual searches, negating the GUI's functionality.
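Putting the two search fixes described above together, the corrected version of the broken dashboard search would look something like this (a sketch: the index and sourcetype match the forwarder-delivered data, and `status=completed` is simply dropped because the field is absent):

```
index=symantec sourcetype=sep12:scan
```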

Can you deploy the Splunk App for Unix and Linux from a deployment server to a universal forwarder?

I have the Splunk App for Unix and Linux set up to deploy from my deployment server, and I have been able to successfully deploy it to existing servers in my environment, but all of those servers are running full Splunk Enterprise. I just installed a universal forwarder on a Linux server and I'm able to see internal data being forwarded from it. I tried adding this server to serverclass.conf on my deployment server to push the Unix app to it, but it is not deploying, and I don't see anything in the logs that would indicate why. Does anyone know what I'm missing? Can you deploy from a deployment server to a universal forwarder instance?
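For comparison, a minimal serverclass.conf entry targeting the new forwarder would look something like this (a sketch; the class name, hostname pattern, and app name are placeholders). Also worth noting: the full Splunk App for Unix and Linux contains dashboards a universal forwarder cannot run, so deployments to forwarders generally push only the add-on (TA) portion:

```
# serverclass.conf on the deployment server -- sketch; names are placeholders
[serverClass:unix_forwarders]
whitelist.0 = mylinuxserver*

[serverClass:unix_forwarders:app:Splunk_TA_nix]
restartSplunkd = true
```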

Why is the indexer discovery clear text password not being encrypted?

I've enabled indexer discovery on my 6.3.1 Linux universal forwarders, per http://docs.splunk.com/Documentation/Splunk/6.3.1/Indexer/indexerdiscovery ("3. Configure the forwarders / a. Configure the forwarders to use indexer discovery"). On each forwarder, add these settings to the outputs.conf file:

```
[indexer_discovery:<name>]
pass4SymmKey = <string>
master_uri = <uri>

[tcpout:<group>]
indexerDiscovery = <name>
```

I have noticed that the pass4SymmKey is not being encrypted when the server first starts after it's been added. Is this by design, or is it a flaw?
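A filled-in version of those stanzas would look something like this (a sketch; the discovery name, key, master URI, and group name are all placeholders):

```
# outputs.conf on the forwarder -- sketch; all values are placeholders
[indexer_discovery:master1]
pass4SymmKey = mySecretKey
master_uri = https://clustermaster.example.com:8089

[tcpout:group1]
indexerDiscovery = master1
```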

Can a universal forwarder filter lines from a log?

I've read the docs on how to filter events: http://docs.splunk.com/Documentation/Splunk/4.3.3/Deploy/Routeandfilterdatad The documentation mentions that there are some things the light and universal forwarders cannot do... is this one of those things? If so, where DO you filter this to keep it from getting into the DB? The log lines containing "ipmon" are still being sent. The universal forwarder is running on a Solaris 10 host. My configuration is:

/opt/splunkforwarder/etc/apps/search/local/inputs.conf:

```
[monitor:///var/log/local0/debug]
disabled = false
## filter ipmon logs out of forwarded logs
sourcetype = local0_syslog
queue = parsingQueue
```

/opt/splunkforwarder/etc/system/local/props.conf:

```
[local0_syslog]
TRANSFORMS-null = setnull_ipmon
```

/opt/splunkforwarder/etc/system/local/transform.conf:

```
[setnull_ipmon]
# match anything with ipmon and toss it
REGEX = ipmon
DEST_KEY = queue
FORMAT = nullQueue
```
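For what it's worth, index-time TRANSFORMS run in the parsing pipeline, which the universal forwarder does not have, so nullQueue filtering is normally placed on the indexer (or a heavy forwarder) instead. A sketch reusing the stanza names from the question, and using the standard plural filename transforms.conf:

```
# props.conf on the indexer -- sketch
[local0_syslog]
TRANSFORMS-null = setnull_ipmon
```

```
# transforms.conf on the indexer -- note the plural filename
[setnull_ipmon]
REGEX = ipmon
DEST_KEY = queue
FORMAT = nullQueue
```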

Why are we getting a deployment checksum failure on all universal forwarders with the Splunk Add-on for Microsoft Windows?

After a recent `deploy-server reload`, all of my Splunk_TA_windows clients except for five started showing the following client error:

```
11-12-2015 14:43:24.037 -0800 WARN ClientSessionsManager - ip=10.x.x.x name=946A6046-907B-408A-8887-77350DA2A96C Updating record for sc=Windows app=Splunk_TA_windows: action=Install result=Fail checksum=331432938900507147
```

The five that work are Exchange servers, run by a different system admin, but I can't find anything significantly different about them. We cleared this up before with newly deployed universal forwarders by deleting the Splunk_TA_windows directory on the client machines and bouncing the UF, but I don't want to have to do this to 42 servers! When I did my deploy-server reload, I was only making changes in another app, to set a TZ variable in a "local" props.conf for a particular sourcetype. Does anyone have any clues about how best to fix this error permanently? Thanks

OS Compatibility: Can a Splunk universal forwarder be installed on a machine running SCO UNIXWARE 7.1.4?

I've been asked to install a Splunk Universal Forwarder on a machine running SCO UnixWare 7.1.4. I can't find any details on whether this is supported by Splunk universal forwarders; this is a strange variant of Unix with its own kernel, I believe. Has anyone else come across this in relation to Splunk and knows if it is supported? And if so, by which version?

Splunk Cloud Trial: Why am I getting "ERROR TcpOutputFd - Connection to host=(splunk-cloud-ip):9997 failed" after setting up a universal forwarder on our EC2 instance?

I signed up for a Splunk Cloud trial and set up a universal forwarder on one of our EC2 instances. However, I keep getting this in splunkd.log:

```
ERROR TcpOutputFd - Connection to host=[ip address of input server]:9997 failed
```

I tried telnet to the IP/port and it was successful, so there should be no network-related issues. If I go in the admin console to **Settings -> Forwarding and receiving**, I see the message "There was an error retrieving the configuration, can not process this page". Is there some additional configuration needed on either the admin side or on our EC2 instance (universal forwarder) to get this to work? Or does the Splunk Cloud trial not allow contributing data to the instance?