Channel: Questions in topic: "universal-forwarder"
Viewing all 1551 articles

What are the limitations of Splunk Docker Logging driver vs Universal Forwarder?

What is the best option for sending logs to an indexer: the Splunk logging driver for Docker, or a Universal Forwarder running on the host or inside the container? And what are the limitations of the Splunk logging driver for Docker?
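For context on the Docker side of the comparison, the Splunk logging driver is configured through Docker's log options; a minimal sketch of a daemon-wide daemon.json (the token, URL, and index values here are placeholders, not from this post):

```json
{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-token": "00000000-0000-0000-0000-000000000000",
    "splunk-url": "https://splunk.example.com:8088",
    "splunk-index": "main",
    "splunk-insecureskipverify": "true"
  }
}
```

The same options can be passed per container with `--log-driver=splunk` and repeated `--log-opt` flags on `docker run`. Note the driver sends to the HTTP Event Collector, not to the 9997 forwarding port a UF would use.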

Increasing maxKBps for only one Splunk forwarder host

Hi Splunkers, I am getting this splunkd log entry on only one Splunk forwarder:

05-09-2018 08:11:39.579 +0000 INFO ThruputProcessor - Current data throughput (258 kb/s) has reached maxKBps. As a result, data forwarding may be throttled. Consider increasing the value of maxKBps in limits.conf.

Please let me know how to solve this for just this one forwarder. Can I add the entry below on the forwarder I want to change, or do I need to set it on the indexer (and if so, how do I scope it to one forwarder host rather than all of them)?

[thruput]
# 0 means unlimited
maxKBps = 0
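For what it's worth, this setting is read on the forwarder itself, so one way to scope it to a single host (assuming a deployment-server setup, which this post doesn't confirm) is an app deployed only to that forwarder:

```ini
# limits.conf in an app deployed only to the affected forwarder
[thruput]
# 0 removes the limit entirely; a value such as 512 would instead raise the cap to 512 KB/s
maxKBps = 0
```

Setting it on the indexer would not help, since the throttle is applied by the forwarder's own output pipeline.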

How to configure a Universal Forwarder on my personal machine, where Splunk Enterprise is also installed, for learning purposes?

I installed the Splunk Universal Forwarder and Splunk Enterprise on my C drive. I created a sample file, modified inputs.conf as described in the answer linked below, and enabled the receiver on port 9997. Do I also have to modify/create an outputs.conf file? I tried creating outputs.conf too, with the server name as localhost and port 9997, but it made no difference. Am I missing something? Also, do I have to modify anything under distributed search? I assume my Splunk Enterprise instance is acting as both search head and indexer. I have referred to the answer below but didn't find the solution: https://answers.splunk.com/answers/490343/how-to-properly-configure-universal-forwarder-loca.html#answer-656030
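For reference, outputs.conf is indeed required on the forwarder side; a minimal sketch for a UF sending to a local indexer (hostname and port taken from the question):

```ini
# outputs.conf on the Universal Forwarder, e.g. in etc/system/local
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = localhost:9997
```

The UF needs a restart after editing this file. No distributed-search configuration is involved when a single instance acts as both search head and indexer.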

Why am I unable to forward data from Universal forwarder?

I am trying to index new data and it is not happening. I am indexing a single log file that the server writes to whenever new events are added. I put this stanza into the MSIADDED inputs on the universal forwarder, because that is where my current inputs live:

[Monitor://D:\Software\Waratek\HR-Config\HR.log]
disabled = 0
sourcetype = waratek
index = main

This is a sample of the file:

2018-05-02 11:02:09,851 CEF:0|ARMR:CWE-114: Process Control|CWE-114: Process Control|1.0|Process Forking - 02|Load Rule|Low|outcome=success
2018-05-02 11:02:13,252 CEF:0|ARMR:CWE-114: Process Control|CWE-114: Process Control|1.0|Process Forking - 02|Link Rule|Low|outcome=success
2018-05-02 11:02:13,263 CEF:0|ARMR:CWE-114: Process Control|CWE-114: Process Control|1.0|Process Forking - 03|Load Rule|Low|outcome=success
2018-05-02 11:02:14,135 CEF:0|ARMR:CWE-114: Process Control|CWE-114: Process Control|1.0|Process Forking - 03|Link Rule|Low|outcome=success

I can see the sourcetype show up in the data summary; however, when I search for the data there is nothing there. Any suggestions?
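One thing worth double-checking against the inputs.conf spec: the documented stanza prefix is lowercase `monitor://`, not `Monitor://`. A sketch of the same input as the docs write it (sourcetype and index values copied from the question):

```ini
[monitor://D:\Software\Waratek\HR-Config\HR.log]
disabled = 0
sourcetype = waratek
index = main
```

It is also worth verifying that the `main` index being searched is the one on the receiving indexer, and widening the search time range, since the events carry their own timestamps.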

Universal forwarder installed on a Windows server, but can't get the logs from that server

Hi everyone, I am testing universal forwarding in our test environment and have installed the universal forwarder on one of our Windows servers, but I can't get the desired logs. My test environment consists of the Splunk Enterprise OVA as the server and a Windows server (with the universal forwarder installed) as the client. I ran the deployment server command (set deploy-poll) and then restarted. On the Splunk Enterprise OVA I added a forwarded input via Settings -> Data Inputs -> Forwarded Inputs -> Windows Event Logs -> New (I could see my deployment client in the list) and selected the Application, Security & System event logs.

What I have tested:
1. I checked the Event Viewer logs on the client; logs are being generated there.
2. I checked a tcpdump; logs are also arriving from the Windows server.

I am also getting these messages:
- Skipped indexing of internal audit event. Will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may cause the indexer to block.
- Forwarding to the indexer group default-autolb-group blocked for 10 seconds.

Does the Universal Forwarder collect historical Windows event logs?

I have installed the UF on a number of servers and configured it to monitor the Windows event logs (Application, System, Security). It looks like the UF has only picked up the event logs starting from when it was installed. Is there a way to tell the UF to ingest all of the event logs from the past?
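The Windows event log input does have settings that control whether existing events are read; a sketch, assuming the standard WinEventLog input as documented in the inputs.conf spec:

```ini
# inputs.conf on the UF (repeat for System and Security)
[WinEventLog://Application]
disabled = 0
# read existing events from the beginning of the log rather than only new ones
start_from = oldest
current_only = 0
```

The checkpoint the UF keeps per event log means already-collected events are not re-read; these settings mainly matter for what gets picked up on first start.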

How do I enable a UF to accept REST API commands?

I'm reading through all of the API docs, and I can execute GET API calls against my search head successfully. However, I want to restart a separate universal forwarder and edit its inputs.conf via the API, but I can't figure out how to enable the REST API on it. There are no Splunk accounts on it, so what do I need to configure here?
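For reference, the UF exposes the same management REST interface as a full instance on its management port (8089 by default), authenticated with the local admin account whose password is set at install time. A hedged sketch of a restart call (host and credentials are placeholders):

```shell
# restart the UF via its management port; -k skips cert verification for the default self-signed cert
curl -k -u admin:changeme -X POST https://uf-host:8089/services/server/control/restart
```

If the call is refused, it is worth checking that nothing on the UF has disabled the management port and that a firewall is not blocking 8089.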

Universal Forwarder manual installation

I am trying to install Splunk Insights. I installed the Splunk server, but when I try to install the forwarder I am not allowed to do so in my environment, so I installed the agent manually, and it installed successfully. However, the added host is not showing in the web UI. Can you please help me out here? Thanks, NAK

Correct path to IIS logs

Trying to set up the Universal Forwarder on the web server to forward IIS logs to Splunk. The Windows event logs ARE forwarding correctly. My IIS logs are NOT stored in the default location, so I'm trying to figure out the correct stanza to use. My actual IIS log directory structure is:

E:\weblogs\w3svc1\*.log
E:\weblogs\w3svc2\*.log
E:\weblogs\w3svc3\*.log
Etc... (multiple web sites)

I tried the following stanzas; neither seemed to work:

[monitor://E:\weblogs\*\*.log]
disabled = 0

[monitor://E:\weblogs\...\*.log]
disabled = 0

I even tried to log just a single site:

[monitor://E:\weblogs\w3svc1\*.log]
disabled = 0

I restart the Splunk forwarder after changing the path. If I run 'splunk list monitor', for all stanzas I get:

E:\weblogs\*.log

No logs are being imported that I can tell. Appreciate any assistance anyone can provide. -MARK-
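An alternative worth trying for this layout, per the inputs.conf spec, is monitoring the parent directory and filtering with a whitelist instead of embedding wildcards in the stanza path (the sourcetype here is an assumption, not from the post):

```ini
# inputs.conf on the UF: recurse under E:\weblogs and keep only .log files
[monitor://E:\weblogs]
disabled = 0
whitelist = \.log$
sourcetype = iis
```

`whitelist` is a regex matched against the full path, so `\.log$` covers every `w3svcN` subdirectory without a per-site stanza.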

Splunk UF: getting ERROR ExecProcessor messages

Hi - I saw these errors in splunkd.log. Our UF is currently down and cannot be restarted. I'm not sure whether these errors affect the UF itself, but what does it mean when I get these errors in the UF's splunkd.log? Could they cause the UF to go down? The UF went down 30 minutes after these errors appeared.

05-21-2018 00:01:42.952 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_apps_inputs/bin/service.sh" Note: This output shows SysV services only and does not include native
05-21-2018 00:01:42.952 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_apps_inputs/bin/service.sh" systemd services. SysV configuration data might be overridden by native
05-21-2018 00:01:42.952 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_apps_inputs/bin/service.sh" systemd configuration.
05-21-2018 00:01:42.952 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_apps_inputs/bin/service.sh" If you want to list systemd services use 'systemctl list-unit-files'.
05-21-2018 00:01:42.952 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_apps_inputs/bin/service.sh" To see services enabled on particular target use
05-21-2018 00:01:42.952 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_apps_inputs/bin/service.sh" 'systemctl list-dependencies [target]'.

What are the pros and cons of installing a UF on same machine as my Splunk instance?

I know it is possible to install a UF on the same machine as my Splunk instance, as stated in these posts:
1. https://answers.splunk.com/answers/131245/running-a-universal-forwarder-on-the-same-server-as-the-enterprise-server.html
2. https://answers.splunk.com/answers/471936/install-both-universal-forwarder-and-splunk-enterp.html
but I would like to know whether there are notable reasons to do so or not.
- Are there any benefits to having both on the same machine, or otherwise?
- What is the best practice, and why?
- Which approach is most prone to errors?
Thanks in advance! :)

Universal forwarder not forwarding

Hello, I'm trying to forward logs from azLog (Azure Log Integration) into my Splunk indexer. Both are running on AWS instances. Everything seems to be configured correctly, except that I don't see anything on the indexer. Here is the investigation I have done so far.

My indexer has a receiver configured and enabled on 9997, and my instance with the forwarder installed is able to connect to it:

PS C:\Users\Administrator> Test-NetConnection xxx.xxx.xxx -Port 9997

ComputerName     : xxx.xxx.xxx
RemoteAddress    : xx.xx.xx.xx
RemotePort       : 9997
InterfaceAlias   : Ethernet
SourceAddress    : xx.xx.xx.xx
TcpTestSucceeded : True

My inputs file looks like this:

[monitor://C:\Users\azlog\AzureActiveDirectoryJson]
disabled = false
crcSalt =

[monitor://C:\Users\azlog\AzureResourceManagerJson]
disabled = false
crcSalt =

[monitor://C:\Users\azlog\AzureSecurityCenterJson]
disabled = false
crcSalt =

My outputs file looks like this:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
disabled = false
server = xxx.xxx.xxx:9997

[tcpout-server://xxx.xxx.xxx:9997]

splunkd is running, and 'splunk list monitor' shows the correct list of files. Looking at the log for a specific file that should be forwarded, I see:

05-29-2018 08:21:10.878 +0000 DEBUG TailReader - tailreader0 waiting for jobs
05-29-2018 08:21:13.878 +0000 DEBUG TailingProcessor - Returning disposition: 1
05-29-2018 08:21:13.878 +0000 DEBUG TailingProcessor - ****************************************
05-29-2018 08:21:13.878 +0000 DEBUG TailingProcessor - File state notification for path='C:\Users\azlog\AzureResourceManagerJson'.
05-29-2018 08:21:13.878 +0000 DEBUG TailingProcessor - Returning disposition: 1
05-29-2018 08:21:13.878 +0000 DEBUG TailingProcessor - ****************************************
05-29-2018 08:21:13.878 +0000 DEBUG TailingProcessor - File state notification for path='C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json' (first time).
05-29-2018 08:21:13.878 +0000 DEBUG TailingProcessor - Returning disposition: 1
05-29-2018 08:21:13.878 +0000 DEBUG TailReader - Enqueued file=C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log in tailreader0
05-29-2018 08:21:13.878 +0000 DEBUG TailReader - Enqueued file=C:\Users\azlog\AzureResourceManagerJson in tailreader0
05-29-2018 08:21:13.878 +0000 DEBUG TailReader - Enqueued file=C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json in tailreader0
05-29-2018 08:21:13.878 +0000 DEBUG TailReader - Start reading file="C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log" in tailreader0 thread
05-29-2018 08:21:13.878 +0000 DEBUG WatchedFile - Reading for plain initCrc...
05-29-2018 08:21:13.878 +0000 DEBUG WatchedFile - Preserving seekptr and initcrc.
05-29-2018 08:21:13.893 +0000 DEBUG TailReader - Finished reading file='C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log' in tailreader0 thread, disposition=NO_DISPOSITION, deferredBy=3.000
05-29-2018 08:21:13.893 +0000 DEBUG TailReader - Defering notification for file=C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log by 3.000ms
05-29-2018 08:21:13.893 +0000 DEBUG TailReader - Start reading file="C:\Users\azlog\AzureResourceManagerJson" in tailreader0 thread
05-29-2018 08:21:13.893 +0000 DEBUG TailReader - Have seen this item before (since splunkd was restarted).
05-29-2018 08:21:13.893 +0000 DEBUG TailReader - Finished reading file='C:\Users\azlog\AzureResourceManagerJson' in tailreader0 thread, disposition=RECURSE_INTO_THIS_DIRECTORY, deferredBy=0.000
05-29-2018 08:21:13.893 +0000 DEBUG TailReader - Returning disposition=RECURSE_INTO_THIS_DIRECTORY for file=C:\Users\azlog\AzureResourceManagerJson
05-29-2018 08:21:13.893 +0000 DEBUG TailReader - Start reading file="C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json" in tailreader0 thread
05-29-2018 08:21:13.893 +0000 DEBUG TailingProcessor - Skipping itemPath='C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json', does not match path='C:\Users\azlog\AzureSecurityCenterJson' :Not a directory :Not a symlink
05-29-2018 08:21:13.893 +0000 DEBUG TailingProcessor - Item 'C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json' matches stanza: C:\Users\azlog\AzureResourceManagerJson.
05-29-2018 08:21:13.893 +0000 DEBUG TailingProcessor - Storing config 'C:\Users\azlog\AzureResourceManagerJson'.
05-29-2018 08:21:13.893 +0000 DEBUG TailingProcessor - Will use CRC salt='C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json' for this source.
05-29-2018 08:21:13.893 +0000 DEBUG TailingProcessor - Entry is associated with 1 configuration(s).
05-29-2018 08:21:13.893 +0000 DEBUG TailReader - Will attempt to read file: C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json.
05-29-2018 08:21:13.940 +0000 DEBUG TailReader - Got classified_sourcetype='json-6' and classified_charset='AUTO'.
05-29-2018 08:21:13.940 +0000 DEBUG WatchedFile - Storing pending metadata for file=C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json, sourcetype=json-6, charset=AUTO
05-29-2018 08:21:13.940 +0000 DEBUG WatchedFile - setting trailing nulls to true via 'auto'
05-29-2018 08:21:13.940 +0000 DEBUG WatchedFile - Loading state from fishbucket.
05-29-2018 08:21:13.940 +0000 DEBUG WatchedFile - Attempting to load indexed extractions config from conf=source::C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json|host::EC2AMAZ-HOQE95P|json-6|338 ...
05-29-2018 08:21:13.940 +0000 DEBUG WatchedFile - Reading for plain initCrc...
05-29-2018 08:21:13.940 +0000 DEBUG WatchedFile - initcrc has changed to: 0x5e4645810867b257.
05-29-2018 08:21:13.940 +0000 DEBUG WatchedFile - Normal record was not found for initCrc=0x5e4645810867b257.
05-29-2018 08:21:13.940 +0000 DEBUG WatchedFile - Computed initCrc=5e4645810867b257 (old style).
05-29-2018 08:21:13.940 +0000 DEBUG WatchedFile - Normal record was not found for initCrc=0x5e4645810867b257.
05-29-2018 08:21:13.940 +0000 DEBUG WatchedFile - Creating new pipeline input channel with channel id: 339.
05-29-2018 08:21:13.956 +0000 DEBUG WatchedFile - Attempting to load indexed extractions config from conf=source::C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json|host::EC2AMAZ-HOQE95P|json-6|339 ...
05-29-2018 08:21:13.956 +0000 DEBUG TailReader - About to read data (Opening file: C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json).
05-29-2018 08:21:13.956 +0000 DEBUG WatchedFile - seeking C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json to off=0
05-29-2018 08:21:13.956 +0000 DEBUG WatchedFile - Reading for plain initCrc...
05-29-2018 08:21:13.956 +0000 DEBUG WatchedFile - initcrc changed to 0x5e4645810867b257 since file grew past initCrcLen.
05-29-2018 08:21:13.956 +0000 DEBUG WatchedFile - Applying pending meta data
05-29-2018 08:21:13.956 +0000 DEBUG WatchedFile - Clearing pending metadata
05-29-2018 08:21:13.956 +0000 DEBUG WatchedFile - Reached EOF: fname=C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json fishstate=key=0x5e4645810867b257 sptr=12112 scrc=0xc11622e038ef0e51 fnamecrc=0xbe9301895b5e826a modtime=1527582073
05-29-2018 08:21:13.956 +0000 DEBUG TailReader - Skipping sending done key.
05-29-2018 08:21:13.956 +0000 DEBUG TailReader - Will doublecheck EOF (in 3000ms)..
05-29-2018 08:21:13.956 +0000 DEBUG TailReader - Finished reading file='C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json' in tailreader0 thread, disposition=NO_DISPOSITION, deferredBy=3.000
05-29-2018 08:21:13.956 +0000 DEBUG TailReader - Defering notification for file=C:\Users\azlog\AzureResourceManagerJson\20180529T082113_3468468.0000000035.af2ac63e-756c-4c64-ad6d-b7dca46a0ceb.json by 3.000ms
05-29-2018 08:21:13.956 +0000 DEBUG TailReader - tailreader0 waiting for jobs
05-29-2018 08:21:14.893 +0000 DEBUG TailingProcessor - ****************************************

But absolutely nothing arrives on the indexer in the main index. In the internal index I do see the forwarder's own log lines, e.g.:

05-29-2018 08:25:48.948 +0000 DEBUG TailReader - tailreader0 waiting for jobs

Any help with next steps here? Thanks
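One detail that stands out in the inputs above is the empty `crcSalt =` lines. Per the inputs.conf spec, the usual way to salt the checksum with the file path is the literal `<SOURCE>` token; a sketch of one stanza written that way:

```ini
[monitor://C:\Users\azlog\AzureActiveDirectoryJson]
disabled = false
# salt the initial CRC with the full file path, so files with identical
# headers (common for machine-generated JSON) are not treated as duplicates
crcSalt = <SOURCE>
```

An empty value is at best a no-op, so it is worth either setting `<SOURCE>` or removing the line entirely.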

Is it possible to write a lightweight custom forwarder to collect data, and not have to deploy the universal forwarder on every machine that needs monitoring?

We're trying to determine if Splunk is appropriate for our scenario, which is to monitor our own agent that runs on our users' PCs and Macs. We have several million customers, and it seems like it would be burdensome (based on the posted system requirements) to deploy a universal forwarder onto every user's machine (plus I'm not sure how we would integrate this into the existing installer & upgrader features of our app). All we really need to do is to periodically upload (either daily or hourly) a .json file containing some structured data for metrics that describe the current state of the app during that interval, as well as some exception events (crashes, thrown exceptions of note, etc.). In theory, this would just be an HTTPS call to our Splunk instance with the appropriate authentication, but I can't locate any online documentation that describes this - the REST API seems to be more about controlling existing collectors and doing extraction & analysis of collected data.
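What this post describes sounds like Splunk's HTTP Event Collector (HEC), which accepts authenticated HTTPS POSTs of JSON events without any forwarder on the client. A minimal Python sketch of building such a payload (the URL, token, and field names are placeholders, and the actual network call is commented out so the snippet runs offline):

```python
import json

def build_hec_event(host, source, metrics):
    """Wrap app metrics in the HEC event envelope."""
    return {
        "host": host,
        "source": source,
        "sourcetype": "_json",
        "event": metrics,
    }

payload = build_hec_event(
    host="user-pc-0421",
    source="ourapp:metrics",
    metrics={"crashes": 0, "uptime_s": 86400, "version": "2.3.1"},
)
body = json.dumps(payload)

# To actually send (endpoint and token are placeholders):
# import requests
# requests.post(
#     "https://splunk.example.com:8088/services/collector/event",
#     headers={"Authorization": "Splunk 00000000-0000-0000-0000-000000000000"},
#     data=body,
# )
print(body)
```

Each client would need only the HEC URL and a token, which fits the "periodic HTTPS upload" model far better than deploying a UF per machine.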

Can we use the usual Splunk Universal Forwarder to collect and send metrics to Splunk Insights for Infrastructure?

When you deploy Splunk Insights for Infrastructure you use the specific script to install a forwarder. Can we use Splunk Universal Forwarder to collect and send metrics to Splunk Insights for Infrastructure and to send other logs to a different Splunk Instance at the same time?

Qualys scan detecting various SSL certificate vulnerabilities: How to resolve these vulnerabilities?

Our Qualys report detected various SSL certificate vulnerabilities for any devices running the Splunk universal forwarder, via port 8090. We have a deployment server configured to push configuration to the servers running the Splunk agent. After doing some research, it appears we need to create a certificate on the deployment server and distribute it to every server running the Splunk agent. I'm curious to know which certificates I need to distribute. I was able to create self-signed certificates on the deployment server, and I would like to resolve the vulnerabilities detected by Qualys. I found the following documentation stating that certificate authentication is not recommended for deployment servers and clients:
- https://docs.splunk.com/Documentation/Splunk/7.1.0/Security/Securingyourdeploymentserverandclients

Additional information:
http://docs.splunk.com/Documentation/Splunk/7.1.0/Security/Howtoself-signcertificates
http://docs.splunk.com/Documentation/Splunk/7.1.0/Security/HowtoprepareyoursignedcertificatesforSplunk

Qualys vulnerabilities:
• X.509 Certificate SHA1 Signature Collision Vulnerability
• SSL Certificate - Self-Signed Certificate
• SSL Certificate - Expired
• SSL Certificate - Subject Common Name Does Not Match Server FQDN
• SSL Certificate - Signature Verification Failed Vulnerability
• HTTP Security Header Not Detected
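Once certificates exist, pointing splunkd at them is a server.conf change on each instance; a sketch assuming paths in the style of the self-sign docs linked above (the file names and password are placeholders):

```ini
# server.conf on each instance presenting the certificate
[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myServerCert.pem
sslPassword = <private key password>
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/myCACert.pem
```

Note that several of the listed findings (self-signed, CN mismatch, expired) can only be fully cleared with certificates issued by a CA the scanner trusts, with the server's FQDN in the subject or SAN; replacing Splunk's default certs with your own self-signed ones will still trip the "Self-Signed Certificate" check.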

Getting "Universal Forwarder setup failed prematurely" error when upgrading from 6.4.1 to 6.5.2; I tried everything I could find on Splunk Answers

I'm getting a "Universal Forwarder setup failed prematurely" error when I try to upgrade from 6.4.1 to 6.5.2. I am running the installer as administrator.

Why is my server name not displayed as host?

I have a UF installed on my local machine and I installed a different UF on a server which I remotely connect to. Whenever I forward files from the remote server it works well but instead of the "host" field value showing as the server name, it shows my local machine name instead. I don't know why this is. Since I am forwarding from the server I expected that the host value will be the server name. Am I missing something? Is there a way to make the host value the server name instead of my local machine name?
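For reference, the host value is normally baked into the UF's default stanza at install time, so it is worth checking that file on the remote server; a sketch (the server name is a placeholder):

```ini
# %SPLUNK_HOME%\etc\system\local\inputs.conf on the remote server
[default]
host = remote-server-name
```

If the install was cloned from another machine, this stanza can carry the original machine's name, which would explain the symptom described above. A host value can also be set per input stanza to override the default.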

Universal Forwarder Support for Mac OSX 10.13?

I noticed on the download page that Splunk Enterprise is supported on OSX 10.13 but the Universal Forwarder is not. Setting aside the kerfuffle caused by the new OSX logging mechanisms, is there any reason not to use UF 7.1.1 on OSX 10.13? What's the official stance on that? I did find the reference to SPL-129734, and I would like to add a vote for requesting that functionality, but bothering the support folks with a formal ticketed request seems excessive. Thanks all!

Installing the Universal Forwarder on Citrix Provisioning servers

Hi there, I followed the install [instructions](https://docs.splunk.com/Documentation/Splunk/7.0.3/Admin/Integrateauniversalforwarderontoasystemimage) for installing the Splunk UF in our Citrix environment. We used this command for the installation on the master:

`msiexec.exe /i splunkforwarder-7.0.3-fa31da744b51-x64-release.msi DEPLOYMENT_SERVER=":8089" AGREETOLICENSE=yes LAUNCHSPLUNK=0 /quiet`

The preparation of the master image works fine. After we started the first provisioned server with this image, we saw that the UF communicated with the deployment server and received the prepared inputs.conf and outputs.conf, and a few minutes later we received some events from its event log on the indexer. So far everything works as expected. But when we restart the provisioned server, its image is reset and the previously generated GUID is therefore gone. After the Splunk UF service starts again, it generates a new GUID, and on the Splunk master server, under the Forwarders: Deployment menu, we see a new entry for the provisioned server after each reboot. Is there a way to make the provisioned servers always use the same GUID? Is it possible to deploy instance.cfg via the deployment server? Any help / info is welcome. Thanks.
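For reference, the GUID in question lives in etc/instance.cfg, which the integrate-onto-image docs say to delete before sealing the master image so each clone generates its own. The file itself is tiny (the GUID below is a placeholder):

```ini
# $SPLUNK_HOME/etc/instance.cfg - regenerated on first start if absent
[general]
guid = 12345678-1234-1234-1234-123456789012
```

So for the non-persistent-image case described above, one approach to sketch is persisting (or re-injecting) a per-server instance.cfg outside the reset image, so the same GUID survives each reboot rather than being regenerated.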

One search head to search across two separate indexer clusters?

I am running two setups of Splunk, one in our datacenter and one in AWS.

DC: 2 search heads, 3 indexers, 1 deployment server & license manager
AWS: 2 search heads, 3 indexers, 1 deployment server & license manager

I am trying to add the AWS indexer cluster to the DC search heads. If this is possible, we will retire the AWS-hosted SHs, because we want to keep only one SH cluster that can search across the two distinct indexer clusters. Please note that there is no replication or any connection between the AWS-hosted and DC-hosted indexer clusters, and we do not want to set up multisite indexer clustering. Can this be done?
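For context, a search head can be attached to more than one indexer cluster by listing multiple cluster masters in its server.conf; a sketch with placeholder URIs and keys:

```ini
# server.conf on the DC search heads
[clustering]
mode = searchhead
master_uri = clustermaster:dc, clustermaster:aws

[clustermaster:dc]
master_uri = https://dc-master.example.com:8089
pass4SymmKey = <dc cluster key>

[clustermaster:aws]
master_uri = https://aws-master.example.com:8089
pass4SymmKey = <aws cluster key>
```

This is distinct from multisite clustering: the two clusters stay independent, and the search head simply fans searches out to both sets of peers.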

