
Commit ffb2dc5

Merge branch 'release-8.5' into pr/21926
Parents: 3bf15bc, 55f24d8

159 files changed: 1943 additions & 560 deletions


.github/workflows/dispatch.yml

Lines changed: 9 additions & 0 deletions
@@ -31,6 +31,15 @@ jobs:
        run: |
          echo "sha=$(sha=${{ github.sha }}; echo ${sha:0:6})" >> $GITHUB_OUTPUT
+     - name: trigger docs-staging/premium-preview workflow
+       run: |
+         curl \
+         -X POST \
+         -H "Accept: application/vnd.github+json" \
+         -H "Authorization: token ${{ secrets.DOCS_STAGING }}" \
+         https://api.github.com/repos/pingcap/docs-staging/actions/workflows/update-premium.yml/dispatches \
+         -d '{"ref":"main","inputs":{"full": "false", "repo":"${{ github.repository }}","branch":"${{ github.ref_name }}"}}'
      - name: trigger docs-staging workflow
        run: |
          curl \
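The added workflow step triggers a `workflow_dispatch` event through the GitHub REST API. The following is a minimal local sketch of what that step computes before the request is sent; the SHA, repository, and branch values below are hypothetical stand-ins for the `${{ ... }}` expressions that GitHub Actions substitutes at runtime:

```shell
#!/usr/bin/env bash
# Hypothetical stand-ins for the ${{ github.sha }}, ${{ github.repository }},
# and ${{ github.ref_name }} expressions resolved by GitHub Actions.
sha="ffb2dc5000000000000000000000000000000000"
repo="pingcap/docs"
branch="release-8.5"

# Same Bash substring expansion the earlier step uses to shorten the SHA
# to its first 6 characters.
short_sha="${sha:0:6}"
echo "sha=${short_sha}"

# The JSON body that the new curl step posts to the
# .../actions/workflows/update-premium.yml/dispatches endpoint.
payload=$(printf '{"ref":"main","inputs":{"full": "false", "repo":"%s","branch":"%s"}}' "$repo" "$branch")
echo "$payload"
```

In the workflow itself, this body is sent with `curl -X POST` and an `Authorization` header built from the `secrets.DOCS_STAGING` token, which tells the `docs-staging` repository which source repository and branch to rebuild.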

TOC-tidb-cloud-essential.md

Lines changed: 1 addition & 1 deletion
@@ -240,7 +240,7 @@
 - [Overview](/vector-search/vector-search-integration-overview.md)
 - AI Frameworks
 - [LlamaIndex](/vector-search/vector-search-integrate-with-llamaindex.md)
-- [Langchain](/vector-search/vector-search-integrate-with-langchain.md)
+- [LangChain](/vector-search/vector-search-integrate-with-langchain.md)
 - AI Services
 - [Amazon Bedrock](/tidb-cloud/vector-search-integrate-with-amazon-bedrock.md)
 - Embedding Models/Services

TOC-tidb-cloud-premium.md

Lines changed: 634 additions & 0 deletions
Large diffs are not rendered by default.

TOC-tidb-cloud-starter.md

Lines changed: 1 addition & 1 deletion
@@ -255,7 +255,7 @@
 - [Overview](/vector-search/vector-search-integration-overview.md)
 - AI Frameworks
 - [LlamaIndex](/vector-search/vector-search-integrate-with-llamaindex.md)
-- [Langchain](/vector-search/vector-search-integrate-with-langchain.md)
+- [LangChain](/vector-search/vector-search-integrate-with-langchain.md)
 - AI Services
 - [Amazon Bedrock](/tidb-cloud/vector-search-integrate-with-amazon-bedrock.md)
 - Embedding Models/Services

TOC-tidb-cloud.md

Lines changed: 2 additions & 1 deletion
@@ -265,7 +265,7 @@
 - [Overview](/vector-search/vector-search-integration-overview.md)
 - AI Frameworks
 - [LlamaIndex](/vector-search/vector-search-integrate-with-llamaindex.md)
-- [Langchain](/vector-search/vector-search-integrate-with-langchain.md)
+- [LangChain](/vector-search/vector-search-integrate-with-langchain.md)
 - AI Services
 - [Amazon Bedrock](/tidb-cloud/vector-search-integrate-with-amazon-bedrock.md)
 - Embedding Models/Services

@@ -309,6 +309,7 @@
 - [Set Up Self-Hosted Kafka Private Link Service in AWS](/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md)
 - [Set Up Self-Hosted Kafka Private Link Service in Azure](/tidb-cloud/setup-azure-self-hosted-kafka-private-link-service.md)
 - [Set Up Self-Hosted Kafka Private Service Connect in Google Cloud](/tidb-cloud/setup-self-hosted-kafka-private-service-connect.md)
+- [Set Up Private Endpoint for Changefeeds](/tidb-cloud/set-up-sink-private-endpoint.md)
 - Disaster Recovery
 - [Recovery Group Overview](/tidb-cloud/recovery-group-overview.md)
 - [Get Started](/tidb-cloud/recovery-group-get-started.md)

TOC.md

Lines changed: 3 additions & 1 deletion
@@ -192,7 +192,9 @@
 - [Integrate with Confluent and Snowflake](/ticdc/integrate-confluent-using-ticdc.md)
 - [Integrate with Apache Kafka and Apache Flink](/replicate-data-to-kafka.md)
 - Reference
-- [TiCDC Architecture](/ticdc/ticdc-architecture.md)
+- TiCDC Architecture
+- [TiCDC New Architecture](/ticdc/ticdc-architecture.md)
+- [TiCDC Classic Architecture](/ticdc/ticdc-classic-architecture.md)
 - [TiCDC Data Replication Capabilities](/ticdc/ticdc-data-replication-capabilities.md)
 - [TiCDC Server Configurations](/ticdc/ticdc-server-config.md)
 - [TiCDC Changefeed Configurations](/ticdc/ticdc-changefeed-config.md)

br/backup-and-restore-overview.md

Lines changed: 1 addition & 1 deletion
@@ -116,7 +116,7 @@ Backup and restore might go wrong when some TiDB features are enabled or disabled.
 | New collation | [#352](https://github.com/pingcap/br/issues/352) | Make sure that the value of the `new_collation_enabled` variable in the `mysql.tidb` table during restore is consistent with that during backup. Otherwise, inconsistent data index might occur and checksum might fail to pass. For more information, see [FAQ - Why does BR report `new_collations_enabled_on_first_bootstrap` mismatch?](/faq/backup-and-restore-faq.md#why-is-new_collation_enabled-mismatch-reported-during-restore). |
 | Global temporary tables | | Make sure that you are using v5.3.0 or a later version of BR to back up and restore data. Otherwise, an error occurs in the definition of the backed global temporary tables. |
 | TiDB Lightning Physical Import | | If the upstream database uses the physical import mode of TiDB Lightning, data cannot be backed up in log backup. It is recommended to perform a full backup after the data import. For more information, see [When the upstream database imports data using TiDB Lightning in the physical import mode, the log backup feature becomes unavailable. Why?](/faq/backup-and-restore-faq.md#when-the-upstream-database-imports-data-using-tidb-lightning-in-the-physical-import-mode-the-log-backup-feature-becomes-unavailable-why). |
-| TiCDC | | BR v8.2.0 and later: if the target cluster to be restored has a changefeed and the changefeed [CheckpointTS](/ticdc/ticdc-architecture.md#checkpointts) is earlier than the BackupTS, BR does not perform the restoration. BR versions before v8.2.0: if the target cluster to be restored has any active TiCDC changefeeds, BR does not perform the restoration. |
+| TiCDC | | BR v8.2.0 and later: if the target cluster to be restored has a changefeed and the changefeed [CheckpointTS](/ticdc/ticdc-classic-architecture.md#checkpointts) is earlier than the BackupTS, BR does not perform the restoration. BR versions before v8.2.0: if the target cluster to be restored has any active TiCDC changefeeds, BR does not perform the restoration. |
 | Vector search | | Make sure that you are using v8.4.0 or a later version of BR to back up and restore data. Restoring tables with [vector data types](/vector-search/vector-search-data-types.md) to TiDB clusters earlier than v8.4.0 is not supported. |

 ### Version compatibility

dashboard/dashboard-cluster-info.md

Lines changed: 4 additions & 0 deletions
@@ -85,3 +85,7 @@ The list includes the following information:
 - Disk Capacity: The total space of the disk on the host on which the instance is running.
 - Disk Usage: The space usage of the disk on the host on which the instance is running.
 - Instance: The instance running on this host.
+
+> **Note:**
+>
+> The **Disks** list might not display disk information for some hosts, depending on the component type, partition configuration, and deployment method. In these cases, a yellow warning icon (⚠️) appears. If you hover over the icon, a tooltip with the message "Failed to get host information" appears. This is expected behavior.

develop/dev-guide-aws-appflow-integration.md

Lines changed: 1 addition & 1 deletion
@@ -244,7 +244,7 @@ test> SELECT * FROM sf_account;

 - If anything goes wrong, you can navigate to the [CloudWatch](https://console.aws.amazon.com/cloudwatch/home) page on the AWS Management Console to get logs.
 - The steps in this document are based on [Building custom connectors using the Amazon AppFlow Custom Connector SDK](https://aws.amazon.com/blogs/compute/building-custom-connectors-using-the-amazon-appflow-custom-connector-sdk/).
-- [{{{ .starter }}}](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-cloud-serverless) is **NOT** a production environment.
+- [{{{ .starter }}}](https://docs.pingcap.com/tidbcloud/select-cluster-tier#starter) is **NOT** a production environment.
 - To prevent excessive length, the examples in this document only show the `Insert` strategy, but `Update` and `Upsert` strategies are also tested and can be used.

 ## Need help?

develop/dev-guide-build-cluster-in-cloud.md

Lines changed: 3 additions & 3 deletions
@@ -9,7 +9,7 @@ summary: Learn how to build a {{{ .starter }}} cluster in TiDB Cloud and connect

 <CustomContent platform="tidb">

-This document walks you through the quickest way to get started with TiDB. You will use [TiDB Cloud](https://www.pingcap.com/tidb-cloud) to create a {{{ .starter }}} (formerly Serverless) cluster, connect to it, and run a sample application on it.
+This document walks you through the quickest way to get started with TiDB. You will use [TiDB Cloud](https://www.pingcap.com/tidb-cloud) to create a {{{ .starter }}} cluster, connect to it, and run a sample application on it.

 If you need to run TiDB on your local machine, see [Starting TiDB Locally](/quick-start-with-tidb.md).

@@ -45,15 +45,15 @@ This document walks you through the quickest way to get started with TiDB Cloud.

 > **Note:**
 >
-> For [{{{ .starter }}}](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-cloud-serverless) clusters, when you connect to your cluster, you must include the prefix for your cluster in the user name and wrap the name with quotation marks. For more information, see [User name prefix](https://docs.pingcap.com/tidbcloud/select-cluster-tier#user-name-prefix).
+> For [{{{ .starter }}}](https://docs.pingcap.com/tidbcloud/select-cluster-tier#starter) clusters, when you connect to your cluster, you must include the prefix for your cluster in the user name and wrap the name with quotation marks. For more information, see [User name prefix](https://docs.pingcap.com/tidbcloud/select-cluster-tier#user-name-prefix).

 </CustomContent>

 <CustomContent platform="tidb-cloud">

 > **Note:**
 >
-> For [{{{ .starter }}}](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-cloud-serverless) clusters, when you connect to your cluster, you must include the prefix for your cluster in the user name and wrap the name with quotation marks. For more information, see [User name prefix](/tidb-cloud/select-cluster-tier.md#user-name-prefix).
+> For [{{{ .starter }}}](https://docs.pingcap.com/tidbcloud/select-cluster-tier#starter) clusters, when you connect to your cluster, you must include the prefix for your cluster in the user name and wrap the name with quotation marks. For more information, see [User name prefix](/tidb-cloud/select-cluster-tier.md#user-name-prefix).

 </CustomContent>
