Channel: Questions in topic: "universal-forwarder"

Which file do I need to modify in the Puppet config to change the Splunk server name in the universal forwarder's outputs.conf file?

Hello, I have a question regarding Puppet and Splunk. I'm planning to install the following module: https://forge.puppet.com/puppetlabs/splunk. I just want to change the Splunk server name in the universal forwarder's outputs.conf file. Which file in the Puppet configuration do I need to modify?
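
For reference, the setting being managed ends up in outputs.conf on the forwarder looking roughly like the sketch below (hostname, port, and group name are placeholders). Which manifest or Hiera key controls it depends on the module version, so check the splunk::forwarder class parameters in the module's README rather than editing the rendered file directly:

    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = splunk-indexer.example.com:9997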

Why would splunk universal forwarder report "ERROR TailReader - File will not be read, is too small to match seekptr checksum" on a file whose events begin with a timestamp?

When restarting the universal forwarder on client servers, splunkd.log reports:

    ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/apps/xxx/xxx/xxx/xxx/logs/systemOut-1.log). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

The events logged in these systemOut files begin with date/timestamps:

    [6/7/17 15:48:32:071 EDT] 00000288 SystemOut O No Response View Handler
    [6/7/17 15:48:40:424 EDT] 0000031d SystemOut O Request On

and they roll to a dated filename (e.g. systemOut-1_17.06.07_11.02.26.log) when they reach about 1 MB in size. Why would Splunk ever think it has seen these files before when each event is unique within the first 25 bytes? On these same servers the universal forwarder also monitors the systemErr files (which also start with a date/time and share the same roll behavior as the systemOut files), and it does not report the same error for them. The only parameters used for each monitor stanza in inputs.conf are the host, index, and sourcetype.
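
The error message itself names the two inputs.conf settings that usually resolve this; by default the forwarder fingerprints only the first 256 bytes of a file, so files whose heads differ only later (or rolled copies of the same file) can collide. A hedged sketch, keeping the elided path from the question and a placeholder sourcetype:

    [monitor:///apps/xxx/xxx/xxx/xxx/logs/systemOut*.log]
    sourcetype = websphere_systemout
    # Mix the full file path into the checksum so a rolled copy is not treated as already read
    crcSalt = <SOURCE>
    # Or widen the initial checksum beyond the default 256 bytes
    initCrcLen = 1024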

Why does the Splunk Universal Forwarder 6.5.1 crash on CentOS 7.3?

I apologize if this is too brief, but I want to provide the information I know first. I have a working Splunk environment that has been running for years without issue. However, when I rolled out a new CentOS 7 box (the previous ones are CentOS 6), the packages all install correctly and everything works (splunkd starts, but it has not been configured yet):

    ./splunk set deploy-poll myserver.domain.com:8089 -auth admin:*******

After setting up my deployment server (the same one I use on every other server), I can see the connections established as expected. Splunk does pull down configs, but then ~5-10 minutes later Splunk crashes. From this point on, at startup I get the following error:

    Invalid key in stanza [tcpout] in /opt/splunkforwarder/etc/apps/XX/default/outputs.conf, line 4: isLoadBalanced (value: False).
    Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'

I have looked at the crash logs (from this point Splunk will start but immediately crash), and they point to the IndexerPipe. My base question is: are there known, obvious caveats to consider when installing a configuration I have used on CentOS 6 servers onto CentOS 7 servers? If there are, I would rather rule out the obvious first than work backwards through tons of errors that do not seem to make any sense. I have been searching this site, as well as the greater internet, for any mention of the issue I am having, but have found a lot of material that does not really match what I am seeing, so I am hoping someone has an idea of what this can be and can give me a fresh lead to run down. Note: just to rule it out, I disabled SELinux and confirmed the behavior remains the same.
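
A quick way to confirm whether that one stale key is the whole problem is to validate the merged configuration on the forwarder. A sketch, assuming the default /opt/splunkforwarder install path shown in the error (XX is the deployed app name as in the message):

    # Flag invalid or inconsistent settings across all deployed apps
    /opt/splunkforwarder/bin/splunk btool check --debug
    # Show which file contributes each resolved outputs.conf key
    /opt/splunkforwarder/bin/splunk btool outputs list --debug

The error indicates isLoadBalanced is no longer a recognized outputs.conf key in 6.5.x, so removing that line from the deployed app (on the deployment server, so it is not pushed back out) is the usual first step; whether it also explains the IndexerPipe crash is worth verifying separately.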

Splunk Universal Forwarder 6.5.3 installed on Windows 10 workstations stops

I am running Splunk Enterprise on Windows Server 2012 R2 and have installed both Splunk Universal Forwarder 6.5.3 and 6.6.1 on Windows 10 workstations. I have noticed that about a week after being installed, the SplunkForwarder service stops. When I try to start the service, it says that it cannot start because of a logon problem. I found that I have to open the service properties and re-enter the password for the account it uses; once I enter the password, I am able to start the service. This happens on a few workstations and sometimes when it is installed on a server. I installed the universal forwarder using a domain service account. Any ideas? I have also uninstalled and reinstalled the UF to rule out a problem with the installation, but the issue still occurs.
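
When a domain service account fails with a logon error like this, the usual suspects are a Group Policy that strips the account's "Log on as a service" right, or a password change on the account that was never re-entered on the service. A hedged way to inspect and re-store the credential from an elevated prompt (the account name and password are placeholders; SplunkForwarder is the service name the UF registers):

    REM Show which account the service runs as and its start type
    sc.exe qc SplunkForwarder
    REM Re-store the password for the service account, then start the service
    sc.exe config SplunkForwarder obj= "DOMAIN\splunk-svc" password= "NewPassword"
    net start SplunkForwarder

Note that, unlike the Services snap-in, sc.exe does not grant the "Log on as a service" right, so if Group Policy keeps removing that right you will need to add the account to it in the relevant GPO.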

Why are my universal forwarder sourcetypes not showing up in the Splunk GUI?

We have a Windows forwarder running on vm02, forwarding data to vm01, which is the main Splunk Enterprise instance. We configured inputs.conf and props.conf at the forwarder level on vm02, and so far we are able to search the events on vm01 coming from vm02. But when we go to the Sourcetypes or Inputs pages in the vm01 GUI, we don't see any of the sourcetypes or inputs that are configured at the forwarder level, even though we can search the events on vm01 using the forwarder's sourcetypes. How can we make the vm01 GUI show the vm02 sourcetypes and inputs?
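
Those GUI pages only list configurations that exist on vm01 itself; sourcetypes and inputs defined solely in the forwarder's .conf files never appear there, even though the indexed events carry them. To see which sourcetypes are actually arriving from vm02, a search like this works (the index wildcard is a placeholder; narrow it as needed):

    | tstats count where index=* host=vm02 by sourcetype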

Splunk UF forwarding to a unidirectional data diode, which then forwards logs to the Splunk server. No longer receiving logs from the UF (air-gapped environment)

Here's a quick rundown of the environment: virtual machines (Linux Splunk instances), no internet connection, an air-gapped environment that uses a unidirectional data diode. There is going to be very little data in this environment, which is why there is just a single instance of Splunk (IDX, SH, and LM) and one universal forwarder. Oh, and for those of you reading along who are new to Splunk/networking and asking "what is a data diode?", here is a short explanation: "The concept of a data diode is simple: specifically designed hardware circuitry within which it is only possible for data to flow in one direction."

In this case the data flows from the UF via UDP 514 to the side A interface of the diode, with an example IP of 192.168.10.15. This interface is supposed to push all of that forwarded data out of side B of the data diode, which then pushes it to the Splunk server, which is configured to listen on a local TCP 514 input. I was told by the engineering team that's just how it is, and I didn't receive an explanation as to why one side was configured UDP and the other TCP.

The problem I have is that ever since we added the diode to the environment, Splunk no longer receives logs, and I have no idea where to begin troubleshooting. The IP addresses on the UF and the Splunk server have been corrected to reflect the change of location in the environment, the new IP addresses rebound, etc. I don't know if this is because of a misconfiguration on my end or because the diode itself isn't properly set up yet. But from what I've explained, is this how the configuration is supposed to be in Splunk?

* Configure the universal forwarder to forward the syslog-ng data to the interface/IP of the data diode via UDP 514
* Have the diode push that information outbound towards the Splunk server
* Splunk listens on TCP 514 for the incoming syslog-ng data

                    Side A of diode    (air gap)    Side B of diode
    UF (x.x.10.25) -----> x.x.10.15 -->   |||||   x.x.13.15 ----------> Splunk server (x.x.13.26)

Splunk should see this data as being sent from the data diode and not the universal forwarder, correct? I would expect the logs to also include the IP addresses of both sides of the data diode as well as the IP of the UF. Am I understanding this correctly, or am I way off base?
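
One concrete thing to verify first is that the protocol matches on each leg: whatever side B of the diode re-emits has to match what the Splunk server's input is listening for (UDP in cannot become TCP out unless the diode itself performs that conversion). A hedged sketch of the two ends, with placeholder group names and the example IP from above; note that emitting plain syslog from Splunk normally requires a heavy forwarder rather than a universal forwarder, so also confirm which component is actually doing the UDP 514 send in your setup:

    # On the forwarding side, outputs.conf - send raw syslog to side A of the diode
    [syslog:diode]
    server = 192.168.10.15:514
    type = udp

    # On the Splunk server, inputs.conf - listen on TCP 514 only if the diode really
    # re-emits the stream as TCP; otherwise use [udp://514]
    [tcp://514]
    sourcetype = syslog
    connection_host = ip

As for the source-IP question: the Splunk server will see the connection coming from whatever address side B of the diode uses to deliver the data, so the original UF address will only appear if it is preserved inside the event payload itself.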

How to create a report that lists all enabled apps on Splunk Universal Forwarders and their versions?

I would like to create a report/dashboard that includes, among other things, the list of Splunk apps installed on universal forwarders and their versions. I created the equivalent report for apps installed on heavy forwarders and other Splunk components using the REST API. Any ideas for universal forwarders? Also, on the deployment server I was not able to find whether that information is indexed anywhere.
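
If the forwarders are deployment clients, one place to pull this from is the deployment server's own REST endpoint for its clients. A sketch, assuming the deployment server is reachable as a search peer named in splunk_server (a placeholder here); the exact field layout varies by version, so run the bare rest call first and inspect the returned columns before building the table:

    | rest /services/deployment/server/clients splunk_server=my-deployment-server
    | table hostname ip utsname lastPhoneHomeTime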

How to stop access to Port 8089 in Splunk or change password on Universal Forwarder?

On all the universal forwarders, any user has the ability to access the REST API page called "Splunk Atom Feed: splunkd". They can reach it on any universal forwarder by browsing to https://localhost:8089 or the loopback address 127.0.0.1:8089. I am trying to disable this feature, or at the very least change the default password. From the research I've done, this port is not being used, since we are not running a deployment server and we currently don't have plans to use one in the future. The interface itself seems to be locked down; you can't make any changes through it, only view it.
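
Both mitigations are per-forwarder configuration changes. A hedged sketch, assuming the UF still has the default admin:changeme credentials (the new password is a placeholder); disableDefaultPort goes in the [httpServer] stanza of server.conf and turns off the 8089 management listener entirely, which is reasonable on a UF that is not a deployment client:

    # Change the default admin password from the CLI
    $SPLUNK_HOME/bin/splunk edit user admin -password 'NewStrongPassword' -auth admin:changeme

    # Or disable the management port: $SPLUNK_HOME/etc/system/local/server.conf
    [httpServer]
    disableDefaultPort = true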

How to configure the Splunk Add-On for BMC Remedy in Splunk Enterprise?

I'm trying to use the "Splunk Add-on for BMC Remedy" under Splunk Enterprise. I have a Remedy server, SplunkFwdr, with the universal forwarder installed, and it identifies my Splunk Enterprise server, SplunkEnt, as the receiving indexer on port 9997. I installed the add-on on the Splunk server and the universal forwarder on my Remedy server. I copied the "rpa-inputs-ta" and "rpa-ta" directories from \\SplunkEnt\Program Files\Splunk\etc\apps\platform_advisor_remedy\appserver\addons\ to the \\SplunkFwdr\Program Files\SplunkUniversalForwarder\etc\apps\ tree and modified the log paths in inputs.conf to reflect the log path on the SplunkFwdr server (e.g. \\SplunkFwdr\Program Files\BMC Software\ARSystem\Arserver\Db\aruser.log). My confusion is that when I try to configure Forwarded Inputs > TCP, I don't see where to specify the input from the "Splunk Add-on for BMC Remedy". I tried specifying the directory the logs sit in, but that did not work. Does anyone have experience with the "Splunk Add-on for BMC Remedy" add-on and how to configure it? Thank you very much. Regards, Paul
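
For what it's worth, file collection from those copied add-on directories is driven entirely by the monitor stanzas in their inputs.conf on the forwarder; the Forwarded Inputs > TCP page on the indexer is for network inputs and will not list them. A hypothetical example of such a stanza for the log mentioned above (the local path, index, and sourcetype here are illustrative only; keep whatever sourcetype the add-on's own inputs.conf specifies):

    [monitor://C:\Program Files\BMC Software\ARSystem\Arserver\Db\aruser.log]
    disabled = 0
    index = main
    # placeholder - use the sourcetype shipped in the add-on's inputs.conf
    sourcetype = aruser_log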

Why can't I find the Universal Forwarder tab to download credentials?

I am trying to follow this tutorial: http://jasonpoon.ca/2017/04/03/kubernetes-logging-with-splunk/ I logged into a Splunk Cloud account (companyName.splunkcloud.com). But I can't find the Universal Forwarder tab to download credentials. I made my own free trial account and was able to find it easily, but I can't connect to the cloud account.

Is it possible for a single splunk universal forwarder to be managed by two different deployment servers?

I was wondering whether it is possible for a single Splunk universal forwarder to be managed by two different deployment servers. I imagine it may not be advisable because of potential configuration clashes, but I wanted to check whether anyone knows for certain what Splunk's stance is on this, and whether anyone has tried it. Thank you!
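
For context, the forwarder's deployment-server binding lives in deploymentclient.conf, and the stanza takes a single targetUri, which is why a forwarder normally answers to only one deployment server at a time (hostname and port below are placeholders):

    # $SPLUNK_HOME/etc/system/local/deploymentclient.conf
    [target-broker:deploymentServer]
    targetUri = deploy-server.example.com:8089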

Universal forwarder support on RHEL 7

Hello, I do not see any version of the Splunk universal forwarder for Linux kernel 3.10+ on the download portal. Is the latest universal forwarder version for Linux (2.6+ kernel) supported on Linux kernel 3.10+ as well? If not, when is Splunk planning to release a supported version?

Do I need to configure a separate receiver port for sysmon data?

I currently have a receiver set up and it's ingesting data from a log source. I am looking to install the Splunk Universal Forwarder on workstations to forward Sysmon data. Do I need a separate receiver port for the Sysmon data, or can I also forward it to port 9997? And how do I send the Sysmon data to its own index? Thanks
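
You can reuse the existing 9997 receiving port; routing to a dedicated index is set per input on the forwarder. A hedged sketch, assuming the standard Sysmon event-log channel name and an index you create on the indexer first (the index name is a placeholder):

    # On the workstation UF: inputs.conf
    [WinEventLog://Microsoft-Windows-Sysmon/Operational]
    disabled = 0
    renderXml = true
    index = sysmon

    # On the indexer: indexes.conf - the target index must exist before data arrives
    [sysmon]
    homePath = $SPLUNK_DB/sysmon/db
    coldPath = $SPLUNK_DB/sysmon/colddb
    thawedPath = $SPLUNK_DB/sysmon/thaweddb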

How to configure the universal forwarder to collect System Properties on a Windows Server?

How can I configure the universal forwarder to collect the host's system properties?
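
One built-in option on Windows forwarders is the Windows host monitoring input, which reports machine-level information such as the operating system, processor, disks, and network adapters. A hedged sketch of an inputs.conf stanza (the interval and the exact type list are illustrative; the supported type values and their spelling are listed in the inputs.conf spec for your version):

    [WinHostMon://system_properties]
    type = Computer;OperatingSystem;Processor;Disk;NetworkAdapter
    interval = 300
    disabled = 0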

Why are there a lot of splunkd processes running on the Splunk Universal Forwarder?

I have Splunk Universal Forwarder 6.2.0 running and I see a lot of splunkd processes upon starting/restarting it. This seems to be causing performance bottlenecks in our setup. Why are there a bunch of splunkd processes running, and what do they do? I have 5 scripted inputs, each running a Python script, and 1 monitor input for a log file. Here is the inputs.conf; our additions are the script and monitor stanzas at the end, and everything else comes from default/inputs.conf:

    [default]
    index = default
    _rcvbuf = 1572864
    host = bleaf3

    [blacklist:$SPLUNK_HOME/etc/auth]

    [monitor://$SPLUNK_HOME/var/log/splunk]
    index = _internal

    [monitor://$SPLUNK_HOME/etc/splunk.version]
    _TCP_ROUTING = *
    index = _internal
    sourcetype = splunk_version

    [batch://$SPLUNK_HOME/var/spool/splunk]
    move_policy = sinkhole
    crcSalt = <SOURCE>

    [batch://$SPLUNK_HOME/var/spool/splunk/...stash_new]
    queue = stashparsing
    sourcetype = stash_new
    move_policy = sinkhole
    crcSalt = <SOURCE>

    [fschange:$SPLUNK_HOME/etc]
    pollPeriod = 600
    signedaudit = true
    recurse = true
    followLinks = false
    hashMaxSize = -1
    fullEvent = false
    sendEventMaxSize = -1
    filesPerDelay = 10
    delayInMills = 100

    [udp]
    connection_host = ip

    [tcp]
    acceptFrom = *
    connection_host = dns

    [splunktcp]
    route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:indexQueue;absent_key:_linebreaker:parsingQueue
    acceptFrom = *
    connection_host = ip

    [script]
    interval = 60.0
    start_by_shell = true

    [SSL]
    cipherSuite = ALL:!aNULL:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM
    allowSslRenegotiation = true
    sslQuietShutdown = false

    # Our additions:
    [script://$SPLUNK_HOME/bin/scripts/path/to/pythonscripts]
    source = sourcename
    sourcetype = sourcename
    interval = 60

    [script://$SPLUNK_HOME/bin/scripts/path/to/pythonscripts]
    source = sourcename
    sourcetype = sourcename
    interval = 60

    [script://$SPLUNK_HOME/bin/scripts/path/to/pythonscripts]
    source = sourcename
    sourcetype = sourcename
    interval = 5

    [monitor:///var/log/eos]
    source = sourcename
    sourcetype = sourcename

    [script://$SPLUNK_HOME/bin/scripts/path/to/pythonscripts]
    source = sourcename
    sourcetype = sourcename
    interval = 30

    [script://$SPLUNK_HOME/bin/scripts/path/to/pythonscripts]
    source = sourcename
    sourcetype = sourcename
    interval = 5

Is Splunk's execprocessor that runs these scripts multithreaded, which would explain the number of splunkd entries showing up in ps? Also, I would like to confirm: if one of these scripts hangs and is stuck past its set interval, does Splunk wait for it to finish before starting a new run, or does it keep launching the script every interval?

Cannot install Universal Forwarder version 6.6.1 on Windows OS

I tried to install the Universal Forwarder on a Windows machine, but the installation stalled partway through and I had to force-quit the installer manually. When I checked splunkd.log, the following message had been output:

    deploymentclient/servicesNS/nobody/SplunkUniversalForwarder/admin/deploymentclient2017-06-22T09:22:38+09:00Splunk0300

How can I resolve this?

How to resolve error "ERROR IndexConfig - stanza=perfmon Required parameter=tstatsHomePath not configured" when starting the indexer?

When using the Windows 2016 Universal Forwarder 6.6.1, I'm running into issues with starting the indexer. splunkd.log indicates the following:

    06-29-2017 11:42:32.517 -0500 INFO loader - Initializing from configuration
    06-29-2017 11:42:32.517 -0500 WARN IndexerService - Indexer was started dirty: splunkd startup may take longer than usual; searches may not be accurate until background fsck completes.
    06-29-2017 11:42:32.517 -0500 ERROR IndexConfig - stanza=perfmon Required parameter=tstatsHomePath not configured
    06-29-2017 11:42:32.517 -0500 FATAL IndexerService - Cannot load IndexConfig: stanza=perfmon Required parameter=tstatsHomePath not configured
    06-29-2017 11:42:32.517 -0500 ERROR IndexConfig - stanza=perfmon Required parameter=tstatsHomePath not configured
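
The error means some indexes.conf on that instance defines a [perfmon] index stanza that is missing the required tstatsHomePath attribute. Since a universal forwarder does not index locally, removing the stray index definition (it most likely arrived with a deployed app) is often the real fix; otherwise, a hedged sketch of a complete stanza using the usual default paths (the _splunk_summaries volume referenced here is defined in the default indexes.conf of full Splunk installs):

    [perfmon]
    homePath   = $SPLUNK_DB/perfmon/db
    coldPath   = $SPLUNK_DB/perfmon/colddb
    thawedPath = $SPLUNK_DB/perfmon/thaweddb
    tstatsHomePath = volume:_splunk_summaries/perfmon/datamodel_summary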

Offline server sending data to Splunk when it has a connection to the Splunk server

We have a standalone system that has a universal forwarder on it. While we are working on the standalone system, it should still be collecting data for Splunk. Once we remove the drive and place it on the network, the forwarder should bring that data into Splunk even though the work was done in an "offline" state, right? We are not seeing that information; we see only the information from the time the drive is back "online". Is there a special configuration that needs to be done? The Splunk docs refer to a useACK=true setting in outputs.conf, but that doesn't work.
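
For what it's worth, useACK is a setting rather than a command, and it belongs in outputs.conf under the [tcpout] or a [tcpout:<group>] stanza; on its own it does not make a disconnected forwarder buffer history. A hedged sketch (the group name and server are placeholders):

    [tcpout]
    defaultGroup = primary

    [tcpout:primary]
    server = splunk-indexer.example.com:9997
    # Re-send anything the indexer never acknowledged; this does not buffer offline data by itself
    useACK = true
    # -1 = block instead of dropping events when the output queue fills, so monitored
    # files are resumed from the saved offset once connectivity returns
    dropEventsOnQueueFull = -1

Whether the missed events ever arrive also depends on the monitored files still being present (and not rolled away) when the drive comes back online, since the forwarder resumes reading from its saved file offsets.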

Why is SSL on Universal Forwarder failing with error "WARN SSLCommon - Received fatal SSL3 alert"?

Hi, I just followed the answer in the post below to configure SSL between my UF and the indexer:

answers.splunk.com/answers/211383/why-am-i-getting-errors-with-my-ssl-configuration.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev

I'm seeing the following errors in splunkd.log when I restart splunkd:

    07-06-2017 16:08:22.151 +0100 ERROR X509Verify - X509 certificate (O=SplunkUser,CN=SplunkCA,O=SplunkInc,L=SanFrancisco,ST=CA,C=US) failed validation; error=19, reason="self signed certificate in certificate chain"
    07-06-2017 16:08:22.151 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server certificate B', alert_description='unknown CA'.
    07-06-2017 16:08:22.151 +0100 ERROR TcpOutputFd - Connection to host=xxx.xxx.xxx.xxx:9778 failed. sock_error = 0. SSL Error = error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
    07-06-2017 16:08:22.193 +0100 ERROR X509Verify - X509 certificate (O=SplunkUser,CN=SplunkCA,O=SplunkInc,L=SanFrancisco,ST=CA,C=US) failed validation; error=19, reason="self signed certificate in certificate chain"

Any pointers on this would be great; I've tried using signed certs and was seeing the same error.
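
The error=19 / "unknown CA" messages mean the forwarder does not trust the certificate chain the indexer presents: the CA that signed the indexer's server certificate is not in the root CA file the forwarder is configured to use. The attribute names moved around between 6.x releases, so treat this forwarder-side outputs.conf sketch as a starting point only (paths, password, and group name are placeholders; sslRootCAPath must contain the signing CA plus any intermediates):

    [tcpout:ssl_group]
    server = xxx.xxx.xxx.xxx:9778
    sslCertPath = $SPLUNK_HOME/etc/auth/mycerts/forwarder.pem
    sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/myCACertChain.pem
    sslPassword = password_set_when_the_cert_was_created
    sslVerifyServerCert = true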

Modern Honey Network: How to use the Splunk Universal Forwarder and what is Splunk Atom Feed: splunkd?

Hi, I'm very confused about how to use Splunk with the Modern Honey Network (MHN) app. I installed it, and when I go to https://ipaddress:8089 I end up on a page that says "Splunk Atom Feed: splunkd"; I know this isn't the way the web interface is supposed to look. I've seen some sites suggest downloading an app, but the problem is I can't navigate to the website through the server because there's no GUI. It's for a college project, and the instructions say to monitor the log file /var/log/mhn-splunk.log with the Splunk Universal Forwarder. I can see the log file and everything looks OK in it; I'm just confused as to how I can see this data in a web interface. If anyone could help me out, I'd really appreciate it! Thanks!
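
To be clear about the pieces: the universal forwarder has no web UI at all, and port 8089 is its management API, which is why you see the Atom feed there. The data only becomes searchable in Splunk Web on a full Splunk Enterprise (or Splunk Cloud) instance that the forwarder sends to, normally on port 8000. A hedged sketch of the forwarder-side configuration for the file the instructions mention (the indexer hostname, group name, and sourcetype are placeholders):

    # inputs.conf on the machine running the universal forwarder
    [monitor:///var/log/mhn-splunk.log]
    disabled = 0
    sourcetype = mhn

    # outputs.conf - point the forwarder at a full Splunk instance listening on 9997
    [tcpout]
    defaultGroup = mhn_indexer

    [tcpout:mhn_indexer]
    server = splunk-enterprise.example.com:9997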