Channel: Questions in topic: "universal-forwarder"
Viewing all 1551 articles

What is the UF upgrade compatibility?

I need to upgrade several forwarders that are running older versions such as 4.x and 5.x to 7.x. Our distributed environment is running 7.1.4. Do I need to upgrade the UF to 6.5.2 first, or can I upgrade straight to 7.1.4?

Splunk Universal Forwarder performance impact

Hello. Do you know whether there is a table, web page, benchmark, or paper that shows the performance impact on the appliances where a Universal Forwarder is installed?

How to install the Splunk App for Linux without installing the universal forwarder?

Can I use the Splunk App for Linux without installing a universal forwarder on each Linux host whose logs I need?

New install of UF windows, splunkd.log says "sock_error = 10054. SSL Error = No error"

I just installed a new UF on a Windows VM, and I'm getting an error that the connection to the host failed, with "sock_error = 10054. SSL Error = No error". The indexers I'm trying to connect to can talk to a bunch of other Windows VMs. The ports to the indexers are open from the Windows VM that's having the problem, and this configuration worked when I installed it on other VMs. What does "SSL Error = No error" mean?

Monitor files performance

Hello, I need to monitor some Oracle Database agent logs with the Splunk Universal Forwarder. The base directory for finding the logs is $ORACLE_HOME. We're using this configuration to monitor these logs in a Splunk Enterprise environment:

**[monitor://$ORACLE_HOME/log/*/agent/ohasd/oraagent_(grid|oracle)/oraagent_(grid|oracle).log]**

I know we could configure the ORACLE_HOME environment variable in splunk-launch.conf on each UF instance. However, we have already installed all the Universal Forwarders, and we don't know the value of $ORACLE_HOME on the UF hosts. We have about 300 hosts, so to save time we used this configuration instead:

**[monitor:///.../log/*/agent/ohasd/oraagent_(grid|oracle)/oraagent_(grid|oracle).log]**

When I execute **splunk list monitor**, it lists all directories under the **/** partition, even though there is only one log file per host. My questions are:

1. Will Splunk really look into all directories under **/**?
2. If so, will I have performance problems because of the huge number of directories?

Thanks.
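One way to avoid rooting the wildcard at **/** is to anchor the monitor under a common base directory and move the alternation into a whitelist, since monitor paths take wildcards while whitelist takes a regular expression. A minimal, hypothetical sketch, assuming the Oracle installs live under something like /u01/app (the base path and index name here are assumptions, not from the original post):

```ini
# Sketch only: anchor the recursive monitor under an assumed Oracle base
# directory instead of "/", and express the (grid|oracle) alternation as a
# whitelist regex matched against the full path.
[monitor:///u01/app/.../agent/ohasd/]
whitelist = oraagent_(grid|oracle)\.log$
index = oracle
disabled = 0
```

Whether this helps depends on whether a common base directory actually exists across the 300 hosts.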

How to install the Splunk UF without being asked for a password on Linux?

Hi friends, I am installing the Splunk UF 7.2.5, but when I run the command (/opt/splunk/bin/splunk start --accept-license) it asks for a password. Is there a way I can install the UF without being asked for a password? Previously we had the Splunk UF version 6, and when we ran the same command it didn't ask for a password. I think some file is hardcoded to take the default password (changeme). Can you please point me to the file I need to change so that it does not ask for a password, so that I can make this part of an automated script to install the Splunk UF?
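For context, 7.1 and later no longer ship with the "changeme" default and prompt for initial admin credentials on first start. A minimal sketch of the documented user-seed.conf approach for seeding those credentials before first start, assuming $SPLUNK_HOME is /opt/splunkforwarder (the password value is a placeholder):

```ini
# $SPLUNK_HOME/etc/system/local/user-seed.conf
# Seeds the initial admin credentials so the first start does not prompt.
[user_info]
USERNAME = admin
PASSWORD = <your-initial-password>
```

Combined with `./splunk start --accept-license --answer-yes --no-prompt`, this is commonly used to keep unattended install scripts non-interactive.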

Log data of a particular sourcetype from one of the forwarders is missing in Splunk

Hi all, on a server with the UF installed, we have a monitor stanza to read a .log file from a particular source, indexed under one particular sourcetype. I was getting the log feed until 7 days ago, when it suddenly stopped; I can no longer see any log feed from that particular sourcetype only. I am still getting the other types of log files, from roughly 8 sources, from the same UF server to the indexer. I rebooted the UF, but no luck. By running the splunk btool command, I can see the monitor stanza for the missing sourcetype in inputs.conf along with the others. Please guide me on this. Thanks.

Why are there no logs received when the universal forwarder is sending data to a search head?

Hello, I have a problem with the universal forwarder. I set up a universal forwarder to send to a Splunk search head, but I have not received any logs.

Why is indexed extraction not happening when the data comes via the UF?

Hi, we have quite "piggy-backed" data coming from a system, extracted as:

[mysourcetype]
SHOULD_LINEMERGE=false
INDEXED_EXTRACTIONS=CSV
FIELD_NAMES=Date,Time,EmployeeID,EmployeeName
TIMESTAMP_FIELDS=Date,Time

The pipeline is: (A) system data collected using a UF => (B) sent to a heavy forwarder => (C) HF to indexer => (D) clustered SH. We have the inputs.conf on (A), and the props.conf with INDEXED_EXTRACTIONS=CSV on (B), (C), and (D). Directly indexing the file works perfectly on a standalone Splunk instance, but when the data comes via the UF, the indexed extraction does not happen. Any reason for this? Should we add the props.conf to the UF?
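For reference, structured-data parsing (INDEXED_EXTRACTIONS) is documented as happening on the universal forwarder itself rather than downstream, which would explain why having the stanza only on (B), (C), and (D) is not enough. A sketch of the same stanza placed on the UF (the location under system/local is an assumption; an app directory works too):

```ini
# props.conf on the UF (A), e.g. $SPLUNK_HOME/etc/system/local/props.conf
# INDEXED_EXTRACTIONS is applied at the forwarder for structured data,
# so the stanza needs to exist where the file is read.
[mysourcetype]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = CSV
FIELD_NAMES = Date,Time,EmployeeID,EmployeeName
TIMESTAMP_FIELDS = Date,Time
```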

How to restart Universal Forwarder via the deployment server?

Hi Splunkers, is there a way to restart the Splunk agent via the deployment server, by using a particular app or configuration? Please help me with this. Regards.
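One documented route is to have the deployment server restart splunkd on the clients whenever it deploys a given app. A minimal serverclass.conf sketch, where the server class and app names are placeholders, not anything from the original question:

```ini
# serverclass.conf on the deployment server (class/app names are examples)
[serverClass:all_forwarders:app:restart_trigger]
restartSplunkd = true
```

Redeploying (or bumping) the marked app then triggers a restart on every client in the class; the same flag is exposed as the "Restart Splunkd" checkbox in Forwarder Management.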

What happens when we restart the universal forwarder as the root user?

Hi all, what happens when I restart the universal forwarder as the root user on Linux? And if that has been done previously, what needs to be done if anything goes wrong? I am missing one of the log files on a particular host, but the remaining logs from different sources on the same host are working fine. So I restarted the UF as the root user, but it didn't work. Any help? Thanks.

How to execute a custom script on the Universal Forwarder when an event-triggered alert is raised

How can I execute a custom script on the Universal Forwarder when an event-triggered alert is raised? I am monitoring my Linux audit logs; upon receiving an event from the remote client (UF), I want to trigger an alert that executes a custom script on that remote client. I reviewed the following, but I'm not sure whether it would execute the script on the UF or on the SH that detects the event: http://dev.splunk.com/view/dev-guide/SP-CAAAE68 Newbie

Significant sudden slowness in ingesting between servers with Splunk UF to Splunk Heavy Forwarders

Hi, I'm looking for advice on troubleshooting the cause of an issue we are experiencing, and on how to solve it. We have a few Splunk UFs monitoring a large number of big files, forwarding to our 4 load-balanced heavy forwarders. The setup was working until last week, when we started to see the files ingested with a big delay, 3-6 hours depending on size; previously, ingestion took minutes. To the best of our knowledge, there were no network, OS, or Splunk-related changes on the day the issue started. We tried:

1. Restarting the Splunk process on the UF servers.
2. Rebooting the servers running the UF.
3. Per Splunk support, changing server.conf on the UF servers by adding parallelIngestionPipelines and queue sizes:

parallelIngestionPipelines = 2
[queue]
maxSize = 1GB
[queue=aq]
maxSize = 20MB
[queue=aeq]
maxSize = 20MB

4. Per Splunk support, modifying limits.conf by adding max_fd (thruput was already set to unlimited):

[thruput]
maxKBps = 0
[inputproc]
max_fd = 200

None of the above fixed the issue. Maybe you have experienced a similar issue; it would be great to know how it was solved. Any advice will be appreciated!
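When ingestion suddenly lags like this, a common first diagnostic (not listed among the steps above) is to check which pipeline queues on the UFs and HFs are filling up, via the internal metrics.log. A sketch of the usual search, assuming _internal data from these hosts reaches your indexers:

```spl
index=_internal source=*metrics.log group=queue
| timechart span=5m perc95(current_size_kb) by name
```

A queue that sits near its maximum points at the bottleneck stage (parsing, indexing, or the TCP output back to the HFs), which helps distinguish a forwarder-side problem from a receiver-side one.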


Why is my blacklist being so greedy when going through a Universal Forwarder?

I have an inputs.conf file with multiple monitor stanzas, and it appears that the blacklist used in one of the stanzas is being applied to all of them. My aim is to have 4 sourcetypes for the same index, with the last sourcetype (search) not showing the logs from the first 3. Testing locally worked wonders: it grabbed all the logs, put them into their respective sourcetypes, and filtered the blacklisted elements from the 4th. **But** when processed through the Universal Forwarder, the blacklist seems to override the entire file, so the applications, server, and audit logs are never picked up.

inputs.conf:

[monitor:///data/web/defaultroot/newlogs/test/applications.log]
index=test
sourcetype=applications
disabled=0

[monitor:///data/web/defaultroot/newlogs/test/server.log]
index=test
sourcetype=server
disabled=0

[monitor:///data/web/defaultroot/newlogs/test/audit.log]
index=test
sourcetype=audit
disabled=0

[monitor:///data/web/defaultroot/newlogs/test/*.log]
index=test
sourcetype=search
disabled=0
blacklist1=*gz
blacklist2=applications*
blacklist3=server*
blacklist4=audit*
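One detail worth noting about the catch-all stanza: blacklist values in inputs.conf are regular expressions matched against the full path, not shell globs, so `applications*` actually means "application" followed by zero or more "s" characters. A sketch of that last stanza with a single anchored regex instead (paths exactly as in the original):

```ini
# Hypothetical rework of the catch-all stanza: blacklist takes a regex
# matched against the full path, rather than a glob.
[monitor:///data/web/defaultroot/newlogs/test/*.log]
index = test
sourcetype = search
disabled = 0
blacklist = (applications|server|audit)\.log$|\.gz$
```

This does not by itself explain the cross-stanza bleed, but it removes one source of overly broad matching.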

Log file no longer shipping since being deleted

I deleted a rogue log file which had become too large and caused the root partition to fill up. The log file has since been regenerated by the application, but it is now no longer shipping to Splunk. I tried "splunk restart -auth USER:PASSWORD" but receive the error below.

splunkd is not running.
Splunk> Like an F-18, bro.
Checking prerequisites...
Checking mgmt port [8089]: open
Checking conf files for problems...
Invalid key in stanza [tcpout:splunkcloud] in /opt/splunkforwarder/etc/apps/100_splunkcloud/default/outputs.conf, line 16: cipherSuite ( REMOVED).
Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-6.4.1-debde650d26e-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
Bad Option -a
Usage: splunkd [OPTION...]
--nodaemon    causes the system not to daemonize
-c STRING     override the config path
-h            no longer supported
-i            no longer supported
-n STRING     the component name to start with
-p INT        the management port Splunkd will listen on
--debug       start with debug log config
Help options:
-?, --help    Show this help message
--usage       Display brief usage message

splunkd.log:

05-03-2019 05:51:16.268 +0000 ERROR TailReader - File will not be read, seekptr checksum did not match (file=/home/jenkins/consolidation.log). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

Many thanks,
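Following the hint in the TailReader error, a minimal inputs.conf sketch of the "CRC salt" it mentions (the monitor path is taken from the error message; adding crcSalt changes the file's identity to the tailing processor, so the regenerated file is treated as a new source):

```ini
# Sketch of the crcSalt option the error message suggests: salting the
# seek-pointer CRC with the source path so the regenerated file is re-read.
[monitor:///home/jenkins/consolidation.log]
crcSalt = <SOURCE>
```

`<SOURCE>` is the literal documented value, not a placeholder to substitute.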

Splunk Windows TA + Windows Universal Forwarder vs. a clean Windows Universal Forwarder

Hi, are there any differences between a Windows TA + Windows Universal Forwarder installation and a clean Windows Universal Forwarder installation? Could you please specify any? Thank you in advance.

Is there a security reason to upgrade Splunk Universal Forwarder?

I subscribe to an RSS feed for Splunk CVEs and diligently keep my security team in the loop regarding Splunk vulnerabilities. Since I've taken over the Splunk administrator role at my company, I've upgraded everything Splunk except some 6.4 UFs. The documentation states: "Before you upgrade, consider whether you really need to. In most cases, you do not have to upgrade a forwarder. Forwarders are always compatible with later versions of indexers, so you do not need to upgrade them just because you have upgraded the indexers that they send data to." My question is: should I upgrade my UFs? Have there been significant threats since 6.4 that do affect forwarders? If not, is there a blurb (honestly, I'll accept a Splunk Answers blurb) or link out there that I can send my security team to keep them happy?

How to configure a universal forwarder on CentOS 7?

Hello, my problem is that the data I send with the forwarder does not reach Splunk. Here is how I configured the forwarder.

First, I started the forwarder from $SPLUNK_HOME/bin:

./splunk start

Second, I configured the forwarder to connect to a receiving indexer and to a deployment server:

./splunk add forward-server Ip_of_splunk:9997
./splunk set deploy-poll Ip_of_splunk:8089

Third, I configured **inputs.conf** with the logs I wanted to retrieve:

[monitor:///var/log/secure.log]
index = logcentos
sourcetype = secure

[monitor:///var/log/httpd/access.log]
index = logapache
sourcetype = acces_log

Fourth, I configured the firewall:

firewall-cmd --zone=public --add-port=9997/tcp --permanent
firewall-cmd --reload

Fifth, I restarted the forwarder from $SPLUNK_HOME/bin:

./splunk restart

When the restart finished, I checked the Splunk web page and saw that nothing had arrived in the indexes I had just configured. I checked that I didn't make any mistake when writing the index names, and there is none. I checked whether the forward-server is "active", and yes, it is active. So I don't know what the problem is, because I have the "same" configuration as a forwarder on Windows, which works. Thank you in advance for helping me find a solution.
