Channel: Questions in topic: "universal-forwarder"

Forwarding a log that's constantly updating, how to prevent indexing duplicate events?

Hi, we are currently monitoring a log file that tracks available time and unavailable time using the universal forwarder. The issue we are running into is that we get duplicate events, because Splunk seems to re-index the whole log every minute. The log looks like this:

Unavailable 09.09.2015 18:31:11 - 09.09.2015 18:33:11
Available 09.09.2015 18:34:11 - 10.09.2015 10:49:14
Unavailable 10.09.2015 10:50:14 - 10.09.2015 11:11:14
Available 10.09.2015 11:12:14 - 17.09.2015 16:47:50
Unavailable 17.09.2015 16:48:50 - 17.09.2015 16:48:50
Available 17.09.2015 16:49:50 - 21.01.2016 12:48:27
Unavailable 21.01.2016 12:49:27 - 22.01.2016 17:28:33
Available 22.01.2016 17:29:30 - 22.01.2016 17:29:30
Unavailable 22.01.2016 17:29:33 - 22.01.2016 17:29:33
Available 22.01.2016 17:30:30 - 22.01.2016 17:30:30
Unavailable 22.01.2016 17:30:33 - 22.01.2016 17:30:33
Available 22.01.2016 17:31:30 - 22.01.2016 17:31:30

The way the file is updated: the end time on the last line is rewritten every minute until the status changes to unavailable, at which point a new line is created. Also, we use index time as the event timestamp, because the entries had no usable timestamps for timestamp extraction. Does anyone have any ideas on how we can stop the re-indexing/duplicate events? Thanks
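For context, our monitor stanza looks roughly like this (the path, index, and sourcetype below are illustrative, not our real values), with the settings we have seen suggested for files that are modified in place noted as comments; we have not yet confirmed whether any of them fixes this case:

```
# inputs.conf on the universal forwarder -- path/index/sourcetype are examples
[monitor:///var/log/availability/status.log]
index = main
sourcetype = availability_status
disabled = false

# Candidate settings we have read about but not verified for this scenario:
# initCrcLength = 1024   # widen the initial CRC check beyond the default 256 bytes
# crcSalt = <SOURCE>     # salt the CRC with the file path

# A props.conf [source::...] stanza with CHECK_METHOD (e.g. modtime or
# entire_md5) has also been mentioned for files whose existing content
# changes, though we are unsure it applies to our in-place last-line updates.
```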

