Questions in topic: "universal-forwarder"

How to ensure logs generated during Universal Forwarder upgrade are not lost or duplicated?

We are about to upgrade several hundred Universal Forwarders (UF) in our environment. We want to make sure that any logs generated during the upgrade of a UF are not lost or duplicated. I did find info on `current_only`, however it seems this applies only to the ***Windows Event Log Monitor***, and not to ***monitor:*** inputs. Is there anything we need to make sure we have in place? How will the upgraded UF know where the old version left off? I have tried to look this up, but with so many posts named simply *Universal Forwarder*, I may have overlooked an earlier question on this.
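For reference, `current_only` is indeed a Windows Event Log input setting; a minimal sketch of where it lives in inputs.conf (the channel and monitor path are illustrative):

```
# inputs.conf -- current_only applies only to WinEventLog stanzas
[WinEventLog://Security]
# 1 = collect only events that arrive after the input starts
# 0 = also collect stored (historical) events
current_only = 0

# [monitor://...] stanzas have no current_only setting; file read
# positions are tracked in the fishbucket instead
[monitor://C:\app\logs]
```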

How does the Splunk Universal Forwarder behave in load-balanced deployment topologies when receivers are down?

One of our customers has a situation where thousands of clients with Universal Forwarders sit in multiple network zones, trying to reach Splunk heavy forwarders that are also in multiple network zones. The network zones have to be specific due to security controls, but it is very hard to determine beforehand which zone a client (UF) is in. As of now, outputs.conf is hand-crafted manually once the customer identifies which zone the UF is based in. I was thinking of pushing an outputs.conf with **all** heavy forwarder servers listed (see the sketch below), but I'm sure some of these cannot be reached from a given client. So my questions are:

1. How does UF load balancing behave when it has all (say 10) servers in its outputs.conf list but can only reach a subset (say 4) of them?
2. Will it throw errors and cause failures on the client, or produce a lot of error logs?
3. Is there a mechanism whereby we can ask the UF not to try a receiver again after it fails N times?
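A minimal sketch of the "list everything" approach, with hypothetical host names; unreachable servers generate connection errors in splunkd.log while the forwarder keeps load-balancing across the ones it can reach:

```
# outputs.conf -- sketch only; host names are hypothetical
[tcpout]
defaultGroup = all_zones_hf

[tcpout:all_zones_hf]
# every heavy forwarder across all zones, reachable or not
server = hf-zone1:9997, hf-zone2:9997, hf-zone3:9997, hf-zone4:9997
autoLB = true
autoLBFrequency = 60
```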

Moving Splunk from one directory to another in Windows

We have several servers where the Universal Forwarder was installed to the wrong drive/directory. During our upgrade window, we want to move these installations to the correct drive/directory. What is the best way to go about this without losing or duplicating data?

Is it possible for a universal forwarder to inject additional data into existing log stream?

I have several universal forwarders (UF) monitoring files on both Windows and Linux endpoints. I would like to "inject data" into the stream of forwarded events that would be made available either by a search-time extraction or injected directly into the log stream as an indexed field.

Here's a specific example: I am monitoring an application that allows a wide range of log verbosity levels. Unfortunately, the application does NOT write the verbosity level into the log stream it generates. (The verbosity level is ONLY available in a registry key or in a text file, depending on the OS. In other words, it can be acquired programmatically.) I'd like to include the value of this verbosity-level variable within the stream of forwarded data, so that I can search against it like I would search against punct or host or sourcetype or what-have-you. In fact, this variable is the most important bit of metadata I'd like to capture in this example; it arguably deserves promotion to an indexed field for this specific use case.

Is it possible to have a UF include/join/inject additional data that isn't part of an existing log stream? If so, is it possible to have the UF pull said data in a programmatic way, like reading it from the registry or from a text file using Python or shell or VBScript, etc.?

Answers and comments that need not be offered:

- Please don't key off my mention of an "indexed field" and hijack the answer. We all know that indexed fields are bad, except when they're not.
- I know I can use a lookup table on my indexer and manually achieve what I'd like to accomplish. I'm only interested in a solution that can be fully automated across a large enterprise of UFs; a lookup table for this purpose will require lots of care and feeding. Let's not go there in this forum, since it's already my fall-back option. If no solution is offered here, I'll answer my own question to close the loop and help any n00bs who stumble upon this answer.
- The developers of this application will not change their log format for me. Again, we all know that modifying the source of a log stream is the easiest way to solve problems; comments to this effect provide little benefit to the Answers community.

Thanks!
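One direction worth noting (a sketch, not a confirmed fit for this use case): inputs.conf supports a `_meta` setting that attaches indexed fields to everything an input stanza produces, so a deployment-side script could rewrite the value before the UF starts. The path, sourcetype, and field name below are hypothetical:

```
# inputs.conf -- a wrapper script (not shown) would read the registry
# key or text file and rewrite the verbosity value below
[monitor://C:\app\logs]
sourcetype = myapp
# attaches an indexed field verbosity::debug to every event from this input
_meta = verbosity::debug
```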

Will my outputs.conf edits work to send both compressed and uncompressed outputs from universal forwarder?

I have a Universal Forwarder (UF) from which I'd like to send both compressed and uncompressed data streams to a single indexer. Would this outputs.conf work?

```
[tcpout]
defaultGroup = index_cluster, index_cluster2

[tcpout:index_cluster]
autoLBFrequency = 60
autoLB = true
useACK = true
compressed = true
server = indexer:9996

[tcpout:index_cluster2]
compressed = false
server = indexer:9997
```

Also, how would I specify which files/data streams should go to target group index_cluster and which ones should go to index_cluster2? Thanks, Mike
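On the routing question, inputs.conf has a `_TCP_ROUTING` setting that ties a monitor stanza to a named tcpout group; a minimal sketch with hypothetical paths:

```
# inputs.conf -- per-input routing to tcpout groups
[monitor:///var/log/app_compressed]
# events from this input go only to the compressed group
_TCP_ROUTING = index_cluster

[monitor:///var/log/app_plain]
_TCP_ROUTING = index_cluster2
```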

Why does enabling the Splunk forwarder on boot in FreeBSD display a "Can't access '/etc/rc.conf': No such file or directory" error?

I'm running a VM firewall based on FreeBSD. I installed the Splunk universal forwarder, and it runs just fine and forwards logs to my Splunk Light instance. However, when I try to enable the forwarder to start on boot with `sudo splunk enable boot-start`, it returns the error: `Can't access "/etc/rc.conf": No such file or directory`. I did find `/etc/rc.d/splunk`, which says it is the "init script for Splunk" and is "generated by 'splunk enable boot-start'." Is something borked with my install?
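For context (a sketch under the standard FreeBSD rc.subr convention, not verified against this particular firewall distribution): `enable boot-start` expects to write an enable line into `/etc/rc.conf`, which stock FreeBSD has but some appliance builds do not. Creating the file with the conventional knob may be enough:

```sh
# assumes the generated /etc/rc.d/splunk script uses rcvar "splunk_enable",
# per the usual rc.subr naming convention -- check the script to confirm
echo 'splunk_enable="YES"' >> /etc/rc.conf
service splunk start
```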

Universal Forwarder and forward-compatibility

In our zest to upgrade our Universal Forwarders (UF), we seem to have inadvertently upgraded to a version newer than our indexer. We are currently running Splunk version 6.4.4, and the UF on some of the servers is now at 6.5. [With the best practices stating][1] that it is "recommended that indexers be at the same or higher version of Splunk Enterprise than the forwarders they are receiving data from," will we see any issues? When they say same version, do they mean just the major release, or both major and minor? [1]: http://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Compatibilitybetweenforwardersandindexers

What do I look at in splunkd.log to troubleshoot deployment client issues?

Hi Splunkers, I have a list of servers that have the Splunk UF running on them. These servers are not showing up in my deployment server. I have verified that the deployment server is enabled with **/opt/splunk/bin/splunk display deploy-server**. I have also verified that my UFs are pointing to the correct IP address/port 8089 of my deployment server with **/opt/splunkforwarder/bin/splunk show deploy-poll**. Finally, I have tested a telnet connection from one of these "problem" UFs to my deployment server over port 8089; the connection was successful. I have asked the UNIX team here to send me a copy of splunkd.log from one of the "problem" servers. What should I be looking for in this file that would clearly show connection issues between the UF and the deployment server? Are there any other troubleshooting steps I should try besides what I've already done? I'm trying to sort out whether this is a Splunk issue vs. something else on the network causing the issue. P.S. Even though these UFs aren't showing up in my deployment server, they ARE successfully sending logs to my indexers over TCP 9997.
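As a starting point for the log review, a sketch of the kind of filter to run (the component names are the ones deployment-client activity is typically logged under; treat the path as illustrative):

```sh
# deployment-client phone-home and app-download activity is logged
# under these components in splunkd.log
grep -iE 'DC:DeploymentClient|HttpPubSubConnection|DeployedApplication' \
    /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -50
```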

Is there a way to configure the Universal Forwarder to prevent duplicate events due to a log file that regenerates?

I am using the universal forwarder to index a log file that regenerates every time a new row is added. In other words, the logging mechanism rewrites the entire file periodically; it doesn't append rows to the previous file. The issue I am having is that when new rows are added, the entire file is re-indexed, which results in duplicate event rows. Is there a way to configure this file (in the inputs and/or props configuration files) to prevent this from happening? Thanks.
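For reference, these are the settings that control how the forwarder decides whether it has already seen a file (a sketch of the relevant knobs, not a guaranteed fix for a fully rewritten file; the path is hypothetical):

```
# props.conf on the forwarder
[source::/var/log/app/regenerated.log]
# how the tailer decides a file changed: endpoint_md5 (default),
# entire_md5, or modtime
CHECK_METHOD = entire_md5

# inputs.conf -- lengthen the initial CRC so rewritten files with a
# short common header aren't mistaken for brand-new files
[monitor:///var/log/app/regenerated.log]
initCrcLength = 1024
```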

Universal forwarder with a Java application as input

Hi, I have built a Java application that reads data in from a bespoke input source and writes it to Splunk Enterprise using the index submit option, e.g.:

```
Service service = Service.connect(loginArgs);
service.login();
Index splunkIndex = service.getIndexes().get("myindex");
// then, for each new record polled:
splunkIndex.submit(jsonAsString);
```

This works perfectly connecting to port 8089 with my credentials on the Splunk Enterprise server. Is there an easy way to get this same piece of code to connect to the universal forwarder and forward to my Enterprise instance? The documentation has me running around in circles at the moment. Am I going about it the wrong way, or do I have to use a Splunk heavy forwarder instead? Any help appreciated. Cheers.

How do we convert our heavy forwarders to universal forwarders?

When we first rolled out Splunk to our forwarders, we installed the full version. We would now like to convert them to Universal Forwarders to reduce the footprint on the servers. All the documentation talks about converting from a light forwarder to a universal forwarder and using the `MIGRATESPLUNK=1` option to convert the checkpoint data. Will that also work when going from a heavy forwarder to a universal forwarder?

Ignoring the header in a CSV file

I want to index and search CSV files in Splunk. Each file has a header on the first line:

```
number1,number2,number3
1,2,3
4,5,6
```

I've created a custom CSV sourcetype in props.conf and defined the custom fields I want to use instead of the header in transforms.conf.

props.conf:

```
[custom-csv]
DATETIME_CONFIG = CURRENT
INDEXED_EXTRACTIONS = csv
NO_BINARY_CHECK = true
category = Structured
pulldown_type = true
SHOULD_LINEMERGE = false
disabled = false
KV_MODE = none
REPORT-custom = REPORT-custom
```

transforms.conf:

```
[REPORT-custom]
DELIMS = ","
FIELDS = number1, number2, number3
```

However, when I run a search I can still see the original field names extracted from the header, plus the new ones I defined in transforms.conf. Is there a way to make Splunk ignore the header line?
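With `INDEXED_EXTRACTIONS`, header handling is governed in props.conf itself rather than through a transforms REPORT; a sketch of the structured-data settings that skip the file's header and supply names explicitly (worth trying, though not verified against this exact setup):

```
# props.conf -- structured-data header controls
[custom-csv]
INDEXED_EXTRACTIONS = csv
# line 1 is the header, so it is consumed rather than indexed as an event
HEADER_FIELD_LINE_NUMBER = 1
# use these names instead of the ones found in the header
FIELD_NAMES = number1, number2, number3
```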

Is there a way to get log files from Splunk and keep them in a folder to create a log bundle?

I am sending debug logs, admin audit logs, and user audit logs (all server log4j log events) from my Apache server to Splunk using the Universal Forwarder. I need to create a log bundle that contains all of these log files. Is there a way to pull these log files from Splunk, keep them in a folder, and create a log bundle? Or is there any way to store all of these files in one folder before they go to the indexer?

Splunk Universal Forwarder TLS certificate update: how to manage it in a phased manner?

We have around 3,000 UFs talking to the deployment server and sending data to indexers using TLS. The current certificate on these clients is going to expire, but the client doesn't want to update all 3,000 servers at the same time :( My worry is, say we upgrade the first 100 clients: the new PEM will be present on the deployment server/indexers, which means it will break either the first 100 or the remaining 2,900.

1. Are there any clever options you have tried for updating certificates in a phased manner? (See the sketch below for one idea.)
2. I'm thinking of starting a separate deployment server instance to cater for the migrated clients. Any better options would be greatly appreciated.
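One common approach (a sketch, assuming both the old and new certificates chain to CAs you hold as PEM files; paths are illustrative): make the server side trust both CA generations at once, so old-cert and new-cert clients both validate for the duration of the rollout:

```
# server.conf on the indexers / deployment server -- point the trust
# store at a bundle that concatenates the old and new CA PEMs,
# e.g. cat old_ca.pem new_ca.pem > combined_ca.pem
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/combined_ca.pem
```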

What are the risks in reducing the fishbucket subdirectory size?

I am currently running Universal Forwarder 6.4.0 (build f2c836328108) on some PureApp hosts with small filesystems. The fishbucket (FB) is currently set to the default limit of 500 MB; right now they are using between 275 and 415 MB. I don't want to clean the FB and re-index data, but I am curious about the implications of lowering the FB limit on these servers to free up some space. If it only holds 300 or 400 MB, what am I at risk of losing? I assume there's a risk of re-indexing, but if someone could help me better understand, I'd appreciate it.

```
-bash-4.1$ ./splunk cmd btool --debug limits list | grep file_track
/opt/splunkforwarder/etc/system/default/limits.conf    file_tracking_db_threshold_mb = 500
```

Thanks
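For completeness, lowering the limit is a one-line local override (a sketch; the value is illustrative, and the risk described in the comment is the trade-off being asked about):

```
# $SPLUNK_HOME/etc/system/local/limits.conf
[inputproc]
# when the fishbucket exceeds this size it is rolled; tracking records
# that are lost can make previously seen files look new and be re-indexed
file_tracking_db_threshold_mb = 300
```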

Forwarder data going to main index

I set up the Universal Forwarder on several Windows servers and pointed it toward my Splunk instance. After installing the forwarder, I went to Splunk Web > Add Data > Forward > Event Logs, selected the 'WindowsServer' server class that I had set up, and selected my index called 'windows'. However, despite the 'windows' index being set, all of the data coming from my universal forwarders is going into my 'main' index. How can I correct this?
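For comparison, this is roughly what the deployed input needs to contain for the data to land in the right index (a sketch; the stanza Add Data actually generated may differ, and the channel name is illustrative):

```
# inputs.conf in the app deployed to the WindowsServer server class
[WinEventLog://Application]
# without an explicit index, events fall back to the default ('main')
index = windows
```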

Different management port for forwarders and indexer cluster

Can we use different management ports on universal forwarders and the indexer cluster? Since we will also be using indexer discovery on the forwarders, is it possible for the forwarders to continue using the default management port 8089 while the indexers are set up to use 8090? If yes, what should the management URI in the forwarder's outputs.conf be? Should the port be 8089 (the mgmt port of the forwarder) or 8090 (the mgmt port of the cluster master)? I think it's the latter, but I want to get that confirmed.
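For reference, the URI in question lives in the indexer discovery stanza and points at the cluster master, so it carries the master's own management port (a sketch; the host name and group name are hypothetical):

```
# outputs.conf on the forwarder
[indexer_discovery:cluster1]
# the cluster master's management URI -- its port, not the forwarder's
master_uri = https://cluster-master.example.com:8090

[tcpout:discovered_indexers]
indexerDiscovery = cluster1
```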

Struggling with universal forwarder Docker container

I had this working at some point, but I am not able to get any of the commands to run after the universal forwarder starts up. At one point I was able to see it add directories, but now I'm not sure what is wrong with my configuration that doesn't allow it to run the startup commands:

```
vsplunk_uf:
  container_name: vsplunk_uf
  image: busybox
  volumes:
    - splunk-etc:/opt/splunk/etc
    - splunk-var:/opt/splunk/var

splunkuniversalforwarder:
  image: splunk/universalforwarder:latest
  hostname: splunkuniversalforwarder
  environment:
    SPLUNK_START_ARGS: --accept-license --answer-yes
    SPLUNK_USER: root
    SPLUNK_CMD: 'add monitor -source /usr/local/localrw/logs/webrtc/* -index webrtc -sourcetype signaler'
  volumes:
    - /var/lib/docker/containers:/host/containers:ro
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - splunk-etc:/opt/splunk/etc
    - splunk-var:/opt/splunk/var
    - signaler-logs:/usr/local/localrw/logs/webrtc
    - kms-logs:/var/log/kurento-media-server/
  depends_on:
    - webrtc.server.1
    - kms.1
    - vsplunk_uf
```

I am seeing the server logs stop here:

```
splunkuniversalforwarder_1 | Checking conf files for problems...
splunkuniversalforwarder_1 | Done
splunkuniversalforwarder_1 | Checking default conf files for edits...
splunkuniversalforwarder_1 | Validating installed files against hashes from '/opt/splunk/splunkforwarder-6.5.3-36937ad027d4-linux-2.6-x86_64-manifest'
splunkuniversalforwarder_1 | All installed files intact.
splunkuniversalforwarder_1 | Done
splunkuniversalforwarder_1 | All preliminary checks passed.
splunkuniversalforwarder_1 |
splunkuniversalforwarder_1 | Starting splunk server daemon (splunkd)...
splunkuniversalforwarder_1 | Done
splunkuniversalforwarder_1 |
```

And when I try to run:

```
docker exec webrtcserver_splunkuniversalforwarder_1 entrypoint.sh splunk list monitor
Authentication needed, run "splunk login"
```

I've seen this run successfully, but I'm not sure what I'm doing wrong at this point or why it doesn't seem to be running the commands set up in the environment variables. Thanks for any tips.

Why does our Universal Forwarder frequently stop forwarding logs?

Hi all, good day. I have a problem with our universal forwarder: it frequently stops forwarding data. When the problem occurs, my temporary resolution is to restart the forwarder, after which it forwards data again; however, the next day the problem occurs once more. It happens almost every day. What could be the solution here?

*Universal Forwarder version: 6.2.6 (build 274160)*

Thanks, Dan
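A first diagnostic step worth sketching (the component names are the typical ones for output trouble; treat the paths as illustrative): check whether the output connection is failing or the queues are blocking before the forwarder goes quiet:

```sh
# connection trouble to the receivers is logged under TcpOutputProc / TcpOutputFd
grep -E 'TcpOutputProc|TcpOutputFd' /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -50

# sustained queue blockage shows up in metrics.log
grep 'blocked=true' /opt/splunkforwarder/var/log/splunk/metrics.log | tail -20
```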

Universal forwarder fails to install credentials package on Linux (RPM / tgz)

I followed the procedures in the **Get Data In** tutorial on an Amazon Linux host. I tried both the RPM and the tarball and got the same error when installing the credentials file, at the same point:

```
[ec2-user@ip-10-10-29-187 bin]$ ./splunk start
[ec2-user@ip-10-10-29-187 bin]$ ./splunk edit user admin -password PWDPWD -auth admin:changeme
User admin edited.
[ec2-user@ip-10-10-29-187 bin]$ ./splunk install app /home/ec2-user/splunkforwarder/splunkclouduf.spl -auth admin:PWDPWD
Error during app install: failed to extract app from /home/ec2-user/splunkforwarder/splunkclouduf.spl to /home/ec2-user/splunkforwarder/splunkforwarder/var/run/splunk/bundle_tmp/eb859b5b37e018d6: No such file or directory
[ec2-user@ip-10-10-29-187 bin]$ sudo ./splunk install app /home/ec2-user/splunkforwarder/splunkclouduf.spl -auth admin:PWDPWD
Error during app install: failed to extract app from /home/ec2-user/splunkforwarder/splunkclouduf.spl to /home/ec2-user/splunkforwarder/splunkforwarder/var/run/splunk/bundle_tmp/c09d0c1b2a0339b3: No such file or directory
```

Is there another universal forwarder for Linux available for download?