persist region leader #1253
Conversation
Signed-off-by: Ryan Leung <rleungx@gmail.com>
bool is_in_flashback = 7;
// The start_ts that the current flashback progress is using.
uint64 flashback_start_ts = 8;
Peer leader = 9;
It's better to add a new protobuf message in pdpb.proto that wraps metapb.Region and metapb.Peer for better extensibility.
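The suggested wrapper could look something like the sketch below; the message name `RegionLeaderInfo` is an assumption for illustration, not a name from this PR:

```protobuf
// Hypothetical sketch of a pdpb-level wrapper: metapb.Region stays
// limited to the states TiKV persists, while the leader travels
// alongside it in a PD-specific message.
message RegionLeaderInfo {
  metapb.Region region = 1;
  metapb.Peer leader = 2;
}
```

Adding fields to a new pdpb message avoids widening metapb.Region, so future PD-only metadata can be added without touching the shared region meta.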
metapb.Region must only include states that need to be persisted on the TiKV side, so "leader" should not be added here.
It will cause breaking changes on the PD side. BTW, I think the purpose of metapb.Region is not the same on PD and TiKV.
+1, storing the leader in the region meta is a little weird from the perspective of TiKV. Can PD store a separate key-value for the leader info instead?
The leader field won't be assigned any value on the TiKV side.
I know. If there is no better solution, I can reluctantly accept it.
The purpose of this PR is to persist the leader information on the PD side, so that when the PDs are reloaded, the leader distribution is not affected in a cluster with a large number of regions. Otherwise, heartbeats may not be processed in time, causing redundant scheduling. Besides, the region cache can get the leader as soon as possible.