**Our Environment:**
- Multi-site
- Search Head Cluster (X nodes at each site)
- Standalone Search Head with ES
- Indexer Cluster (X nodes at each site)
- Deployment Server (2 nodes; NY is active and ASH is standby)
- Cluster Master (2 nodes; NY is active and ASH is standby)
- Several Universal Forwarders
Since we have a multi-site cluster, we can upgrade to a new release one site at a time.
**High-level steps:**
- Back up configurations.
- Upgrade the master node.
- Upgrade the deployment server.
- Upgrade site1 peers and search heads.
- Upgrade site2 peers and search heads.
**Detailed Steps:**
1. Stop the master.
2. Upgrade the master node.
3. Stop the deployment server.
4. Upgrade the deployment server and do not start it yet.
5. Start the master, accepting all prompts, if it is not already running.
6. Run splunk enable maintenance-mode on the master (see the command sketch after this list). To confirm that the master has entered maintenance mode, run splunk show maintenance-mode. This step prevents unnecessary bucket fix-ups.
7. Stop all the peers (indexers) and search heads on site1 with the splunk stop command.
8. Upgrade the site1 peer nodes (indexers) and search heads.
9. Start the site1 peer nodes and search heads, if they are not already running.
10. Run splunk disable maintenance-mode on the master. To confirm that the master has left maintenance mode, run splunk show maintenance-mode.
11. Wait until the master dashboard shows that both the search factor and replication factor are met.
12. Run splunk enable maintenance-mode on the master. To confirm that the master has entered maintenance mode, run splunk show maintenance-mode.
13. Stop all the peers (indexers) and search heads on site2 with the splunk stop command.
14. Upgrade the site2 peer nodes (indexers) and search heads.
15. Start the site2 peer nodes (indexers) and search heads, if they are not already running.
16. Run splunk disable maintenance-mode on the master. To confirm that the master has left maintenance mode, run splunk show maintenance-mode.
17. Start the deployment server.
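For reference, this is roughly what we plan to run on the master for the maintenance-mode toggling in steps 6, 10, 12, and 16 (a minimal sketch, assuming Splunk is installed under /opt/splunk):

```shell
# Before stopping the peers of a site (steps 6 and 12):
/opt/splunk/bin/splunk enable maintenance-mode
/opt/splunk/bin/splunk show maintenance-mode   # confirm the master reports maintenance mode as on

# After the upgraded peers and search heads of that site are back up (steps 10 and 16):
/opt/splunk/bin/splunk disable maintenance-mode
/opt/splunk/bin/splunk show maintenance-mode   # confirm the master reports maintenance mode as off
```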
You can view the master dashboard to verify that all cluster nodes are up and running.
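We were also thinking of verifying from the master CLI in addition to the dashboard (a sketch; the exact output wording varies by Splunk version):

```shell
# On the master: peer status, plus whether the search factor and
# replication factor are met
/opt/splunk/bin/splunk show cluster-status

# On any upgraded node: confirm the new Splunk version
/opt/splunk/bin/splunk version
```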
**Universal Forwarders Upgrade:**
Compatibility Matrix - http://docs.splunk.com/Documentation/Splunk/6.4.4/Forwarding/Compatibilitybetweenforwardersandindexers
Based on the above compatibility matrix -
A forwarder that is version 6.0 or later can send data to an indexer that is version 5.0 or later.
An indexer that is version 6.0 can receive data from a forwarder that is version 4.3 or later.
So we don't have to upgrade our Universal Forwarders at this time.
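To be sure before deferring the UF upgrade, we would spot-check the installed forwarder versions, roughly like this (a sketch; forwarder-hosts.txt is a hypothetical host list and /opt/splunkforwarder is the assumed default UF install path):

```shell
# Print the Universal Forwarder version installed on each forwarder host
while read -r host; do
  printf '%s: ' "$host"
  ssh -n "$host" '/opt/splunkforwarder/bin/splunk version'
done < forwarder-hosts.txt
```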
Is the sequence of steps accurate, including when to upgrade and when to stop/start the master and deployment servers?
Is our assumption about the UF upgrade accurate?
With respect to backups: is it OK not to back up all of the indexed data, since we may not have the capacity to back up every single indexer? We are planning to back up the entire Splunk installation (/opt/splunk/), though.
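Concretely, the per-node backup we have in mind looks roughly like this (a sketch; /backup is a placeholder destination and the paths assume the default /opt/splunk layout):

```shell
# Back up the whole Splunk installation on a node, but skip the indexed
# data (hot/warm/cold buckets live under var/lib/splunk by default).
# Ideally run while Splunk is stopped on that node.
tar -czf /backup/splunk-$(hostname)-$(date +%F).tar.gz \
    -C /opt --exclude='splunk/var/lib/splunk' splunk
```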
Please confirm/advise!