Add fault tolerance demo docs #22837
base: main
**Comment (PR author):** This page is woefully out of date. There is an ongoing discussion to either remove this page altogether or commit to better maintenance, but for right now it's probably better to add the note than not.
@@ -108,6 +108,10 @@ The [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %})
 CockroachDB {{ site.data.products.standard }} is our new, [enterprise-ready plan](https://www.cockroachlabs.com/pricing), recommended for most applications. You can start small with [provisioned capacity that can scale on demand]({% link cockroachcloud/plan-your-cluster.md %}), along with enterprise-level security and availability. Compute for CockroachDB {{ site.data.products.standard }} is pre-provisioned and storage is usage-based. You can easily switch a CockroachDB {{ site.data.products.basic }} cluster to CockroachDB {{ site.data.products.standard }} in place.
+
+### Fault tolerance demo
+
+CockroachDB {{ site.data.products.advanced }} includes a [built-in fault tolerance demo]({% link {{ page.version.version }}/demo-cockroachdb-resilience.md %}#run-a-guided-demo-in-cockroachdb-cloud) that allows you to monitor query execution during a simulated failure and recovery. The fault tolerance demo is in Preview.
**Comment:** I noticed that "is in Preview" is similar to a line in the Folders section below, but "preview" is not capitalized there. Also, should we add the link?
 ### CockroachDB Cloud Folders
 
 [Organizing CockroachDB {{ site.data.products.cloud }} clusters using folders]({% link cockroachcloud/folders.md %}) is in preview. Folders allow you to organize and manage access to your clusters according to your organization's requirements. For example, you can create top-level folders for each business unit in your organization, and within those folders, organize clusters by geographic location and then by level of maturity, such as production, staging, and testing.
@@ -1,17 +1,38 @@
 ---
 title: CockroachDB Resilience Demo
-summary: Use a local cluster to explore how CockroachDB remains available during, and recovers after, failure.
+summary: Run a demo to explore how CockroachDB remains available during a failure and recovery.
 toc: true
 docs_area: deploy
 ---
-This page guides you through a simple demonstration of how CockroachDB remains available during, and recovers after, failure. Starting with a 6-node local cluster with the default 3-way replication, you'll run a sample workload, terminate a node to simulate failure, and see how the cluster continues uninterrupted. You'll then leave that node offline for long enough to watch the cluster repair itself by re-replicating missing data to other nodes. You'll then prepare the cluster for 2 simultaneous node failures by increasing to 5-way replication, then take two nodes offline at the same time, and again see how the cluster continues uninterrupted.
+This page describes a hands-on demonstration of how CockroachDB's fault-tolerant design keeps services available during a failure and recovery.
**Comment:** Super nit: multiple uses of "how" close together sounds a bit awkward here ("how to see ... how"). I don't have any suggestions for improvements, however.
-## Before you begin
+## Run a guided demo in CockroachDB {{ site.data.products.cloud }}
+
+CockroachDB {{ site.data.products.cloud }} {{ site.data.products.advanced }} includes a built-in fault tolerance demo in the {{ site.data.products.cloud }} Console that automatically runs a sample workload and simulates a node failure on your cluster, showing real-time metrics of query latency and failure rate during the outage and recovery.
+
+{{ site.data.alerts.callout_info }}
+The CockroachDB {{ site.data.products.cloud }} fault tolerance demo is in [Preview]({% link {{ page.version.version }}/cockroachdb-feature-availability.md %}).
**Comment:** Not for this PR, but it seems like we should have prebuilt macros for each of the visibilities. It's annoying that we'd have to add the version and availability to each place we use this. Ideally we could do this and have it link up correctly.
+{{ site.data.alerts.end }}
+
+The following prerequisites are needed to run the fault tolerance demo:
+
+- A [CockroachDB {{ site.data.products.advanced }} cluster]({% link cockroachcloud/create-an-advanced-cluster.md %}) with at least three nodes.
+- All nodes are healthy.
+- The cluster's CPU utilization is below 30%.
+- The cluster does not have a custom [replication zone configuration]({% link {{ page.version.version }}/configure-replication-zones.md %}).
**Comment:** We dropped this one. There are some others, but I don't think most of them are worth listing. One additional prerequisite we should consider is the cluster being in an unlocked state. For example, if a cluster is already undergoing disruption, is scaling, or is under maintenance, the demo won't run; anything that has locked the cluster will prevent the demo from starting. The messaging we show the user in this case is:
+
+To run the fault tolerance demo, open the {{ site.data.products.cloud }} Console and navigate to **Actions > Fault tolerance demo**. Follow the prompts to check that your cluster is eligible and begin the demo.
**Comment:** I wonder if this will be confusing? There aren't really any visible prompts to check eligibility, since we run the checks automatically when you try to start the demo.
+
+## Run a manual demo on a local machine
+
+This guide walks you through a simple demonstration of CockroachDB's resilience on a local cluster deployment. Starting with a 6-node local cluster with the default 3-way replication, you'll run a sample workload, terminate a node to simulate failure, and see how the cluster continues uninterrupted. You'll then leave that node offline for long enough to watch the cluster repair itself by re-replicating missing data to other nodes. You'll then prepare the cluster for 2 simultaneous node failures by increasing to 5-way replication, then take two nodes offline at the same time, and again see how the cluster continues uninterrupted.
+
+### Before you begin
+
+Make sure you have already [installed CockroachDB]({% link {{ page.version.version }}/install-cockroachdb.md %}).
-## Step 1. Start a 6-node cluster
+### Step 1. Start a 6-node cluster
 
 1. In separate terminal windows, use the [`cockroach start`]({% link {{ page.version.version }}/cockroach-start.md %}) command to start six nodes:
@@ -82,7 +103,7 @@ Make sure you have already [installed CockroachDB]({% link {{ page.version.versi
 $ cockroach init --insecure --host=localhost:26257
 ~~~
-## Step 2. Set up load balancing
+### Step 2. Set up load balancing
 
 In this tutorial, you run a sample workload to simulate multiple client connections. Each node is an equally suitable SQL gateway for the load, but it's always recommended to [spread requests evenly across nodes]({% link {{ page.version.version }}/recommended-production-settings.md %}#load-balancing). This section shows how to set up the open-source [HAProxy](http://www.haproxy.org/) load balancer.
@@ -133,7 +154,7 @@ In this tutorial, you run a sample workload to simulate multiple client connecti
 $ haproxy -f haproxy.cfg &
 ~~~
-## Step 3. Run a sample workload
+### Step 3. Run a sample workload
 
 Use the [`cockroach workload`]({% link {{ page.version.version }}/cockroach-workload.md %}) command to run CockroachDB's built-in version of the YCSB benchmark, simulating multiple client connections, each performing mixed read/write operations.
@@ -179,7 +200,7 @@ Use the [`cockroach workload`]({% link {{ page.version.version }}/cockroach-work
 
 After the specified duration (20 minutes in this case), the workload will stop and you'll see totals printed to standard output.
-## Step 4. Check the workload
+### Step 4. Check the workload
 
 Initially, the workload creates a new database called `ycsb`, creates the table `public.usertable` in that database, and inserts rows into the table. Soon, the load generator starts executing approximately 95% reads and 5% writes.
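A reader can spot-check the schema the workload creates with a couple of statements in the SQL shell (a sketch; connect first with `cockroach sql --insecure --host=localhost:26257`):

~~~ sql
SHOW TABLES FROM ycsb;                       -- should list public.usertable
SELECT count(*) FROM ycsb.public.usertable;  -- grows as the workload inserts rows
~~~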
@@ -207,7 +228,7 @@ Initially, the workload creates a new database called `ycsb`, creates the table
 
 <img src="{{ 'images/v26.1/fault-tolerance-6.png' | relative_url }}" alt="DB Console Overview" style="border:1px solid #eee;max-width:100%" />
 
-## Step 5. Simulate a single node failure
+### Step 5. Simulate a single node failure
 
 When a node fails, the cluster waits for the node to remain offline for 5 minutes by default before considering it dead, at which point the cluster automatically repairs itself by re-replicating any of the replicas on the down nodes to other available nodes.
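That waiting period is governed by the `server.time_until_store_dead` cluster setting, which a later step assumes has been lowered so the repair is quicker to observe. A sketch of the statement involved, run through the SQL shell (the `1m15s` value is illustrative):

~~~ sql
-- Shorten how long the cluster waits before declaring an offline node dead.
SET CLUSTER SETTING server.time_until_store_dead = '1m15s';
~~~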
@@ -242,23 +263,23 @@ When a node fails, the cluster waits for the node to remain offline for 5 minute
 kill -TERM 53708
 ~~~
-## Step 6. Check load continuity and cluster health
+### Step 6. Check load continuity and cluster health
 
 Go back to the DB Console, click **Metrics** on the left, and verify that the cluster as a whole continues serving data, despite one of the nodes being unavailable and marked as **Suspect**:
 
 <img src="{{ 'images/v26.1/fault-tolerance-7.png' | relative_url }}" alt="DB Console Suspect Node" style="border:1px solid #eee;max-width:100%" />
 
 This shows that when all ranges are replicated 3 times (the default), the cluster can tolerate a single node failure because the surviving nodes have a majority of each range's replicas (2/3).
-## Step 7. Watch the cluster repair itself
+### Step 7. Watch the cluster repair itself
 
 Click **Overview** on the left:
 
 <img src="{{ 'images/v26.1/fault-tolerance-5.png' | relative_url }}" alt="DB Console Cluster Repair" style="border:1px solid #eee;max-width:100%" />
 
 Because you reduced the time until the cluster considers a down node dead, after a minute or so the down node is marked dead, and you'll see the replica count on the remaining nodes increase and the number of under-replicated ranges decrease to 0. This shows the cluster repairing itself by re-replicating missing replicas.
-## Step 8. Prepare for two simultaneous node failures
+### Step 8. Prepare for two simultaneous node failures
 
 At this point, the cluster has recovered and is ready to handle another failure. However, the cluster cannot handle two _near-simultaneous_ failures in this configuration. Failures are "near-simultaneous" if they are closer together than the `server.time_until_store_dead` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}) plus the time taken for the number of replicas on the dead node to drop to zero. If two failures occurred in this configuration, some ranges would become unavailable until one of the nodes recovers.
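Surviving two simultaneous failures requires 5-way replication, which this step configures through zone configurations. A sketch of the kind of statement involved (the full step also up-replicates internal system ranges, which are not shown here):

~~~ sql
-- Raise the default replication factor from 3 to 5.
ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;
~~~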
 
@@ -290,7 +311,7 @@ To be able to tolerate 2 of 5 nodes failing simultaneously without any service i
 
 This shows the cluster up-replicating so that each range has 5 replicas, one on each node.
-## Step 9. Simulate two simultaneous node failures
+### Step 9. Simulate two simultaneous node failures
 
 Gracefully shut down **2 nodes**, specifying the [process IDs you retrieved earlier](#step-5-simulate-a-single-node-failure):
 
@@ -299,7 +320,7 @@ Gracefully shut down **2 nodes**, specifying the [process IDs you retrieved earl
 kill -TERM {process IDs}
 ~~~
 
-## Step 10. Check cluster status and service continuity
+### Step 10. Check cluster status and service continuity
 
 1. Click **Overview** on the left, and verify the state of the cluster:
 
@@ -343,7 +364,7 @@ kill -TERM {process IDs}
 
 This shows that when all ranges are replicated 5 times, the cluster can tolerate 2 simultaneous node outages because the surviving nodes have a majority of each range's replicas (3/5).
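The majority arithmetic behind steps 6 and 10 generalizes: with replication factor N, a range needs a surviving majority of its replicas, so the cluster tolerates up to (N - 1) / 2 simultaneous node failures. A quick shell check of the two factors used in this demo:

~~~ shell
# With replication factor N, a majority of replicas must survive,
# so up to (N - 1) / 2 nodes can fail simultaneously.
for n in 3 5; do
  echo "replication=$n tolerates $(( (n - 1) / 2 )) simultaneous failure(s)"
done
~~~

This is why the demo moves from 3-way replication (one tolerated failure) to 5-way (two tolerated failures) before killing two nodes at once.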
 
-## Step 11. Clean up
+### Step 11. Clean up
 
 1. In the terminal where the YCSB workload is running, press **CTRL + c**.
**Comment:** I noticed that the headline that produces this page anchor uses the macro `{{ site.data.products.cloud }}`, but the anchor is hard-coded here. That means if we ever changed "cloud", the anchor would break. This is obviously unlikely, and perhaps an automated link scanner would catch it, but it suggests that the current system for linking within pages is lacking. It would be better to have a layer of indirection for each headline that would allow us to change its name without changing its id; then we'd use the id to look up the current name and generate the anchor on the fly. Obviously we wouldn't do any of that in this PR. Just food for thought.