Channel: Questions in topic: "universal-forwarder"
Viewing all 1551 articles

Setting indexes on a Windows universal forwarder

Hi all, I am trying to configure the Splunk universal forwarder on a Windows machine to send to an index that isn't main. I attempted to set index=windows_index in the inputs.conf file in $SPLUNK_HOME/etc/system/local/. When I set the index there and restart the forwarder, no logs get to Splunk. When I remove it and restart again, logs all pour in. Is this a setting that should be configured on the forwarder?
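A frequent cause of this exact symptom is that the index named on the forwarder does not exist on the indexer, so the arriving events are discarded. A minimal sketch, assuming `windows_index` has also been created on the indexing tier (the input stanza name is illustrative):

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf on the forwarder.
# index= here only labels the events; the index itself must be
# created on the indexer, or events sent to it are dropped there.
[WinEventLog://Security]
index = windows_index
disabled = 0
```

If the index does exist on the indexer, running `splunk btool inputs list --debug` on the forwarder will show whether the `index =` line is actually being picked up and from which file.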

How to include a unique ID to rsyslog client config?

Is there a way to assign a unique ID to each rsyslog client node? I'm trying to build a solution where multiple rsyslog clients send their services' logs to a centralized rsyslog server, and from there those logs are sent to Splunk indexers via a universal forwarder agent. The problem is that a customer can own 2-3 nodes, and once those nodes' logs are sent to rsyslog, how can I segregate logs per customer? I am trying to find out if there is a way to add a unique label like customer_uid on the rsyslog client nodes of each customer.
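One approach, sketched below assuming rsyslog v7+ RainerScript syntax: define a per-customer template on each client node that prefixes every forwarded line with a constant label, which Splunk can then pick up with a simple field extraction. The `customer_uid` value, target host, and port are illustrative:

```text
# /etc/rsyslog.d/50-forward.conf on one customer's client node
template(name="WithCustomerID" type="string"
         string="customer_uid=ACME01 %TIMESTAMP% %HOSTNAME% %syslogtag%%msg%\n")

*.* action(type="omfwd" target="central-syslog.example.com" port="514"
           protocol="tcp" template="WithCustomerID")
```

Each customer's nodes get a different constant in the template string, so segregating events per customer on the Splunk side reduces to extracting the `customer_uid` field.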

Connection closes with INFO TcpOutputProc - Detected connection to 10.x.x.x:9997 closed.

TCP connection closes after a few hours and will not re-establish, even after a Splunk restart. The connection gets re-established by editing outputs.conf, then closes again after a few hours with the logs below:

05-23-2018 00:29:36.512 +0200 ERROR TcpOutputFd - Connection to host=10.x.x.x:9997 failed. sock_error = 0. SSL Error = error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
05-23-2018 00:29:36.512 +0200 INFO TcpOutputProc - Detected connection to 10.x.x.x:9997 closed
05-23-2018 00:29:36.512 +0200 INFO TcpOutputProc - Will close stream to current indexer 10.x.x.x:9997
05-23-2018 00:29:36.512 +0200 INFO TcpOutputProc - Closing stream for idx=10.x.x.x:9997
05-23-2018 00:29:36.767 +0200 ERROR TcpOutputFd - Connection to host=10.x.x.x:9997 failed. sock_error = 0. SSL Error = error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
05-23-2018 00:30:06.567 +0200 ERROR TcpOutputFd - Connection to host=10.x.x.x:9997 failed. sock_error = 0. SSL Error = error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
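The recurring `certificate verify failed` line says the forwarder cannot validate the indexer's server certificate against a CA it trusts, which is why the connection keeps dropping regardless of restarts. A hedged sketch of the relevant outputs.conf settings on the forwarder (stanza name and CA path are illustrative):

```ini
# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = 10.x.x.x:9997
# Point verification at the CA that signed the indexer's certificate:
sslRootCAPath = /opt/splunkforwarder/etc/auth/myCA.pem
sslVerifyServerCert = true
# For a quick isolation test ONLY, verification can be disabled:
# sslVerifyServerCert = false
```

If disabling verification makes the connection stable, the root cause is a certificate/CA mismatch on the indexer side rather than a network problem.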

Universal forwarder is taking about 30GB of memory - is this normal?

Hi, my universal forwarder is taking about 30GB and my IT guys are asking if this is normal. I have just restarted it and then upgraded it to the latest 7.1.1, but within 20 minutes it has gone from 500MB back to 30GB VIRT and RES. This seems like a lot to me, or is this just the way Linux uses memory? Thanks in advance, Robert ![alt text][1] [1]: /storage/temp/251980-2018-06-18-11-41-14.png

Heavy Forwarder vs. Reduced Splunk Enterprise & DB Connect App

Hello everyone! My team and I are attempting to create a service for our department's applications that enables them to easily send logs to our Splunk Enterprise; however, we do not control the Splunk Enterprise instance, since it's handled by another department. We are essentially an intermediary between the Splunk department and our department, creating an easy-to-implement solution. We are also restricted to sending logs only by either Universal Forwarder or Heavy Forwarder. We have seen the discouragement associated with the heavy forwarder, and we would like to get a few things cleared up. Please correct us if we're wrong on any of these bullet points:

1. Universal Forwarder is the way to go. It can only monitor files / directories / system logs. It does not index. It cannot view logs stored IN A DATABASE column.
2. The Heavy Forwarder can be implemented as a "slave" to prevent license usage, so that it acts strictly as a forwarder. It can take HTTP Event Collector data as an input and forward it on to the Splunk department WITHOUT impeding our usage. It can support the DB Connect app for forwarding logs over to the Splunk environment. It does NOT have a web interface.
3. A Splunk Enterprise instance can be configured to be a slave and NOT act as an indexer (how difficult is this?). We would potentially want to do this so that we have access to a web interface, have the capabilities of extraction, and have the ability to access the DB Connect app from an interface view.

One thing to note here: we are creating libraries in Python and Java that can extend applications' loggers to reach our easy-to-implement heavy forwarder or Splunk instance. It would essentially be through either HTTPS, UDP, or TCP. One more question: if we had the DB Connect app on a heavy forwarder, could multiple applications hosted on different machines / servers connect to it? Does Splunk Light come into this at all?
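The "forward-only" instance described above (whether heavy forwarder or full Splunk Enterprise) is typically just a matter of outputs.conf; a minimal sketch, with the group name and indexer address illustrative:

```ini
# outputs.conf on the heavy forwarder: forward everything, keep nothing
# locally, so no license-relevant indexing happens on this box.
[tcpout]
defaultGroup = splunk_dept
indexAndForward = false

[tcpout:splunk_dept]
server = splunk-dept-indexer.example.com:9997
```

One correction to the bullets worth noting: a heavy forwarder is a full Splunk Enterprise installation, so it does have a web interface unless Splunk Web is explicitly disabled.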

TIMESTAMP_FIELDS setting in props.conf is not taken into account

I have the issue that the TIMESTAMP_FIELDS setting in props.conf on the Universal Forwarder is not taken into account. It seems like the field _time is filled in with the time the line is being indexed and not taken from the log line itself.

**Splunk Enterprise:** VERSION=6.6.3 BUILD=e21ee54bc796 PRODUCT=splunk PLATFORM=Linux-x86_64
**Splunk Universal Forwarder:** VERSION=6.6.3 BUILD=e21ee54bc796 PRODUCT=splunk PLATFORM=Linux-x86_64

**Log line example:**
{"Application":"CNIP","CallStatus":"OK","CallType":"TERM-RP","Called":"xxxxxxxxx","Calling":"xxxxxxxxx","Clir":"false","DelayTime":"161","Error":"","ErrorBy":"","ErrorSeverity":"","Name":"xxxxxxxxx","NameBy":"DisDB","OverwriteCli":"","Protocol":"SIPPROXY","SessionId":"xxxxxxxxx","StartTime":"2018-06-20T08:36:00Z","StopTime":"2018-06-20T08:36:00Z","logLevel":1}

**How it is seen in Splunk:** ![alt text][1] [1]: /storage/temp/252009-2018-06-20-10-59-20-search-splunk-663.png

As you can see, the times are not taken from the "StartTime" field in the log line. Here is the config on the forwarder:

**inputs.conf**
[monitor:///locationOnServer/LogFile]
index=csdp_prod_services
source=CNIPService
sourcetype=CnipCallLog.log
ignoreOlderThan=1d

**props.conf**
[CNIPService]
SHOULD_LINEMERGE=false
INDEXED_EXTRACTIONS=json
KV_MODE=none
category=Structured
disabled=false
TIMESTAMP_FIELDS=StartTime
TZ = UTC #I tried with and without this field, same behavior
TIME_FORMAT=%FT%TZ #I tried with and without this field, same behavior

What am I missing here to make this work? I want the _time field to be filled in based on the "StartTime" field in the log lines.
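One detail worth checking, inferred only from the snippets in the question: a bare props.conf stanza name matches the *sourcetype*, and the input here sets `sourcetype=CnipCallLog.log` while the props stanza is `[CNIPService]`, which is the *source*. A sketch with the stanza keyed to the sourcetype instead:

```ini
# props.conf on the forwarder; to match on source, the stanza would
# instead have to be written as [source::CNIPService].
[CnipCallLog.log]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
KV_MODE = none
TIMESTAMP_FIELDS = StartTime
TIME_FORMAT = %FT%TZ
TZ = UTC
```

With INDEXED_EXTRACTIONS the structured parsing happens on the universal forwarder itself, so the stanza must live there, as the question already does.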


Why does installing a forwarder using msiexec keep failing?

We are installing a forwarder to new workstations using the command below:

msiexec /i "splunkforwarder-7.0.0-c8a78efdd40f-x64-release.msi" /qn /l*v %windir%\temp\INSTALL_Splunk.log AGREETOLICENSE=Yes LOGON_USERNAME="domain\Splunk" LOGON_PASSWORD="mypassword" DEPLOYMENT_SERVER="192.168.0.1:8089" WINEVENTLOG_APP_ENABLE=1 WINEVENTLOG_SYS_ENABLE=1 SPLUNKPASSWORD=splunkpassword

I can make this command line work correctly using cmd and PowerShell on my own local machine; however, when using SCCM to push it out, it appears to act like it has no permissions. It appears to hang while doing something in the registry. This will apparently need to use the "system" account, but the flags on the docs.splunk.com page (http://docs.splunk.com/Documentation/Forwarder/7.1.1/Forwarder/InstallaWindowsuniversalforwarderfromthecommandline) show that the username/login is needed. The error message in the MSI log is below:

MSI (s) (50:5C) [12:54:19:999]: Executing op: CustomActionSchedule(Action=RollbackGroupAndRightsFromReg,ActionType=3329,Source=BinaryData,Target=RemoveGroupAndRightsFromRegCA,CustomActionData=SplunkSvcName=SplunkForwarder;FailCA=)
MSI (s) (50:5C) [12:54:19:999]: Executing op: ActionStart(Name=SaveGroupAndRightsToRegistry,,)
MSI (s) (50:5C) [12:54:19:999]: Executing op: CustomActionSchedule(Action=SaveGroupAndRightsToRegistry,ActionType=3073,Source=BinaryData,Target=SaveGroupAndRightsToRegistryCA,CustomActionData=SplunkSvcName=SplunkForwarder;UserName=ODOT\SplunkUF;SetAdminUser=1;FailCA=)
MSI (s) (50:20) [12:54:19:999]: Invoking remote custom action. DLL: C:\windows\Installer\MSI6294.tmp, Entrypoint: SaveGroupAndRightsToRegistryCA
SaveGroupAndRightsToRegistry: Warning: Invalid property ignored: FailCA=.
SaveGroupAndRightsToRegistry: Error: cannot SaveGroupAndRightsToRegistry.
SaveGroupAndRightsToRegistry: Error 0x80004005: Cannot save rights to registry.
CustomAction SaveGroupAndRightsToRegistry returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)

Blacklist files greater than a certain size from inputs.conf

Hi all, I have to monitor a folder containing very large files with automatically generated file names. Is there some way (instead of writing a custom UNIX script that moves only small files to another folder that would then be monitored by the forwarder) to blacklist files that have a size greater than, say, 10 MB? Any other suggestion with Splunk stanza attributes is appreciated. Thanks a lot, Edoardo
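As far as I know, the `blacklist` attribute in inputs.conf matches on path regex only, with no size-based variant, so the script workaround mentioned in the question is the usual route. It can be very small; a Python sketch, where the paths and the 10 MB threshold are illustrative:

```python
# Copy files under 10 MB from a drop folder into a folder the
# universal forwarder monitors, leaving the huge files behind.
import os
import shutil

MAX_BYTES = 10 * 1024 * 1024  # 10 MB size threshold

def stage_small_files(src_dir: str, monitored_dir: str) -> list:
    """Copy files smaller than MAX_BYTES into the monitored folder;
    return the names that were staged."""
    staged = []
    for name in os.listdir(src_dir):
        path = os.path.join(src_dir, name)
        if os.path.isfile(path) and os.path.getsize(path) < MAX_BYTES:
            shutil.copy2(path, os.path.join(monitored_dir, name))
            staged.append(name)
    return staged
```

Run from cron (or moved rather than copied, if the source folder should stay clean), this keeps the forwarder's monitored directory free of the oversized files.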

Forwarder not working on Windows 10.

I configured a Splunk Universal Forwarder on Windows 10. I also installed Splunk Light on another Windows 10 computer. The forwarder is not recognized by Splunk Light. Can you offer any suggestions?

Universal forwarder - gMSA - EventID 7000

Hello, in our environment we've configured the forwarders (Windows, version 6.6.3) to use a gMSA account to run the splunkd service. This account has been granted the correct permissions (as described in the installation documentation). After an (expected) restart on some systems, the service won't start up correctly (Event ID 7000: The SplunkForwarder service failed to start due to the following error: The service did not start due to a logon failure.) When this issue arises, Test-ADServiceAccount returns a true value. The PrincipalsAllowedToRetrieveManagedPassword property has been configured with the correct systems that use the gMSA account. A manual restart will fix this issue. Of course, this can be trapped within a monitoring solution, or with an action tied to this event, but that is working around the issue, IMHO. What's the best way to troubleshoot/fix this issue?


Splunk add monitor not sending logs to Splunk Cloud

Hi, I am a newbie. I have installed the Splunk universal forwarder on a Windows client to forward logs to Splunk Cloud. When I run the command below, it executes without any error, but when I check the /etc/local/inputs.conf file there is no monitor section.

/splunk add monitor "D:\SGN" -index qa -sourcetype test_log -host

Also, if I execute the list monitor command, it does show the monitored directory. How do I debug or find out what's wrong? Note: I am creating an AWS EC2 instance by passing the UF installation scripts in userdata, in case that makes any difference.
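One thing to check: the CLI does not necessarily write to etc/system/local; depending on the app context it may place the stanza under an app's local directory instead, which would explain why `list monitor` shows the input while the file you inspected does not. `btool` shows the merged inputs and which file each stanza came from:

```ini
# Show every inputs.conf stanza splunkd merges, with source files:
#   $SPLUNK_HOME/bin/splunk btool inputs list --debug
#
# Wherever it lands, the stanza equivalent to the CLI command looks like:
[monitor://D:\SGN]
index = qa
sourcetype = test_log
disabled = 0
```

If `list monitor` shows the directory, the input exists; the configuration file is just not where you were looking.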

Splunk_TA_nix cannot open scripts

Hey everyone, I installed Splunk_TA_nix on my Ubuntu 16.04.2 server. After enabling some scripts and not seeing any data being monitored, I checked splunkd.log and I see the following error:

>07-03-2018 16:13:04.110 +0100 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/cpu.sh" /bin/sh: 0: Can't open

For some reason the UF cannot open the .sh script files. As shown below, splunk is the owner of those files and they have execute permissions:

> -rwxrwxr-x 1 splunk splunk 3447 Jul 3 15:21 bandwidth.sh*
> -rwxrwxr-x 1 splunk splunk 3997 Jul 3 15:21 common.sh*

Does anyone know what is wrong here?
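One common cause of dash's `Can't open` error (on Ubuntu, /bin/sh is dash) is a script that was transferred with Windows (CRLF) line endings, which the ownership and permission checks above will not reveal. This is an assumption about the cause, not a confirmed diagnosis; a small Python sketch of the check and the dos2unix-style fix:

```python
# Detect and repair Windows line endings in a shell script,
# the same transformation dos2unix performs.

def has_crlf(path: str) -> bool:
    """Return True if the file contains Windows (CRLF) line endings."""
    with open(path, "rb") as f:
        return b"\r\n" in f.read()

def strip_crlf(path: str) -> None:
    """Rewrite the file in place with Unix (LF) line endings."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(data.replace(b"\r\n", b"\n"))
```

Running `file /opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/*.sh` is an even quicker check: it reports "with CRLF line terminators" when a script is affected.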

Can I send cisco syslog messages to a universal forwarder and make it send logs to an indexer?

Hi, I am already a basic user of Splunk, monitoring our networking equipment's syslogs. Now I want to install a universal forwarder in each branch to collect data when the network goes down and data can't be sent to the Splunk server. I wanted to know: can I send Cisco syslog messages to a universal forwarder and make it send logs to an indexer? And what would happen when the link between the branch and the data center goes down? How can I cache logs to send them after the link is up and running again?
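A universal forwarder can listen for syslog directly over UDP or TCP, and a per-input persistent queue buffers events on disk while the WAN link is down. A sketch, with the sourcetype, index, port, and queue sizes all illustrative:

```ini
# inputs.conf on the branch forwarder: listen for Cisco syslog and
# buffer to disk while the link to the data center is down.
[udp://514]
sourcetype = cisco:syslog
index = network
queueSize = 10MB
persistentQueueSize = 5GB   # on-disk buffer that survives link outages

# outputs.conf: the output layer also queues in memory while the
# indexer is unreachable.
[tcpout:indexers]
server = indexer.example.com:9997
maxQueueSize = 100MB
```

One caveat: syslog arriving while the forwarder process itself is down is still lost, which is why many deployments put a syslog daemon (rsyslog/syslog-ng) in front and have the forwarder monitor the files it writes.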

Using stream forwarder to forward pcap data

Hi, I would like to forward pcap data using tcpreplay on a remote machine which has a stream forwarder installed, so that the pcap data is forwarded to my local machine. On my local machine I have installed Splunk Stream, but I did not receive any pcap data when I ran tcpreplay on my remote machine. E.g., I ran this on my remote machine, but it didn't work:

./streamfwd -r '/root/Desktop/mypacket.pcap' -s http://:8889

So I tried installing a universal forwarder, e.g.:

sudo ./splunk add forward-server :9997

Then I added the directory to monitor:

./splunk add monitor /root/Desktop -sourcetype pcap_capture -index wireshark_pcaptest

(Is that how a universal forwarder works, i.e. it monitors traffic in the Desktop directory, since I'm running tcpreplay on my desktop?) So my question is: how do I receive pcap data both ways as mentioned above? I want to simulate real-time traffic through tcpreplay. (Please clarify my understanding.)

Installed Universal Forwarder 7.1 - Splunk showing "No users exist. Please set up a user."

We noticed that for a newly installed Splunk universal forwarder 7.1 we are unable to connect as user admin, and Splunk shows the error "No users exist. Please set up a user."
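Splunk 7.1 stopped shipping a default admin account, so on a fresh forwarder the credentials have to be seeded before the first start (or created afterwards with `splunk edit user`). A sketch of the seeding file; the password value is a placeholder:

```ini
# $SPLUNK_HOME/etc/system/local/user-seed.conf, created BEFORE the
# first start of splunkd; it is consumed on startup to create admin.
[user_info]
USERNAME = admin
PASSWORD = <choose-a-strong-password>
```

In automated deployments this file is typically dropped in place by the install script before the service is started for the first time.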

Force UF(6.4.5) and Deployment Server(6.2.3) to use TLS 1.2

https://answers.splunk.com/answers/468642/deployment-server-flooded-with-ssl-handshake-error-1.html

Judging by the answers above, I think I should configure as below if I want to force the `Deployment Server` and `Universal Forwarder` to use `TLS 1.2`.

In `Deployment Server`:
[sslConfig]
sslVersions = tls1.2, -ssl2, -ssl3

In `Universal Forwarder`:
[sslConfig]
sslVersionsForClient = tls1.2, -ssl2, -ssl3

However, in my environment the `Universal Forwarder` is ver 6.4.5 and the `Deployment Server` is ver 6.2.3, and the `sslVersionsForClient` setting does not exist in ver 6.2.3. First of all, is the setting above correct? Also, even if the versions are different and there is a setting that does not exist on the other side, will it work without problems? If someone could tell me about this, I would appreciate it.

Is Deployment Client (Universal Forwarder) 6.0 and below is still compatible with Splunk Deployment Server (Splunk Enterprise) 7.0?

Hi all, just want to ask if Deployment Client (Universal Forwarder) 6.0 and below is still compatible with Splunk Deployment Server (Splunk Enterprise) 7.0. Cheers, Dan

Index name entry in inputs.conf

Hello Splunkers, I have seen that system/local/inputs.conf on many servers contains the entry provided below:

root@abchost:~ # `cat /opt/splunkforwarder/etc/system/local/inputs.conf`
[default]
host = abc.com
index = unmanaged

What is the need for providing `index = unmanaged` there? I am simply guessing that it might provide the default index for those monitors which don't have an index name specified. Please let me know if I am right or wrong; if wrong, please let me know what the need for providing that value is.
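That guess matches how [default] works: attributes under a [default] stanza apply to every other stanza of that configuration file type unless a stanza overrides them. A sketch (monitor paths illustrative):

```ini
[default]
host = abc.com
index = unmanaged

# Inherits index=unmanaged from [default]:
[monitor:///var/log/app.log]

# An explicit setting overrides the default:
[monitor:///var/log/other.log]
index = main
```

So any monitor configured without its own `index =` sends its events to the `unmanaged` index, which of course still has to exist on the indexers.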