Cross-Region Replication FAQ
How can I verify if data is replicated between primary and Disaster Recovery (DR) clusters?
You can run `gstatusgraph` from the Linux terminal on both the primary and DR clusters. The vertex and edge counts should match if the data is in sync. Note that if there are loading jobs still running, DR might show a lower count; in that case, check again when the loading job is done.
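For a quick check, you can capture the output of `gstatusgraph` on each cluster and compare the two files. This is only a sketch: the hostname is a placeholder and the exact output format of `gstatusgraph` varies between TigerGraph versions.

```bash
# On the primary cluster (as the tigergraph OS user): record the vertex/edge counts.
gstatusgraph > /tmp/primary_counts.txt

# On the DR cluster: record its counts, then pull the primary's file over and diff.
# "primary-m1" is a placeholder hostname for one of the primary's nodes.
gstatusgraph > /tmp/dr_counts.txt
scp tigergraph@primary-m1:/tmp/primary_counts.txt /tmp/primary_counts.txt
diff /tmp/primary_counts.txt /tmp/dr_counts.txt   # no output means the counts match
```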
Why am I not seeing any loading job declared in DR?
Loading jobs themselves are not replicated to the DR cluster. However, the data loaded by these jobs is replicated to DR.
I’ve run a `DROP ALL` command on primary and now newly added data is not replicated to DR.
`DROP ALL` stops cross-region replication (CRR). You will need to set CRR up again (see the sketch after the list below).
Here is a list of all commands and operations that will stop CRR:
- `gsql drop all`, which clears all data and the schema
- `gsql clear graph store`, which clears only data
- `gsql --reset`, which clears all data, schemas, and users, and even resets the password of the default `tigergraph` user
- `gsql import graph`
- `gsql export graph`
- `gbar restore`
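If CRR has been stopped by one of these operations, it has to be enabled again on the DR cluster, typically after restoring a fresh backup taken on primary. The sketch below follows a TigerGraph 3.x-style flow; the `System.CrossRegionReplication.*` configuration keys, the Kafka port, and the IP placeholders are assumptions to verify against the documentation for your version.

```bash
# On the DR cluster, after restoring the latest backup from primary.
# NOTE: key names and the Kafka port below are assumptions (TigerGraph 3.x-style CRR settings).
gadmin config set System.CrossRegionReplication.Enabled true
gadmin config set System.CrossRegionReplication.PrimaryKafkaIPs "<primary_ip1>,<primary_ip2>,<primary_ip3>"
gadmin config set System.CrossRegionReplication.PrimaryKafkaPort 30002
gadmin config apply -y
gadmin restart all -y
```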
Why is GSQL failing to replay a replica with an "UNAUTHORIZED" error?
It’s most likely that primary and DR have different passwords for the same TigerGraph user. This sometimes happens when you enable CRR without restoring the GBAR backup in DR (since you did not have any data) but DR was installed with a different password than primary. Make sure DR and primary have the same TigerGraph password before enabling CRR.
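One way to bring the clusters back in line is to reset the password on DR so it matches primary. The GSQL `ALTER PASSWORD` command prompts for the new password; the user below is assumed to be the default `tigergraph` user, so adjust the name for your deployment.

```bash
# On the DR cluster, as the tigergraph OS user.
# Prompts for a new GSQL password for the (assumed) default "tigergraph" user;
# enter the same password that primary uses.
gsql 'ALTER PASSWORD tigergraph'
```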
What happens if DR is down, unavailable, or under scheduled maintenance (e.g. VMware vMotion)?
Nothing will happen. As soon as DR is back online, Kafka MirrorMaker will replicate the Kafka topic and GSQL will resume replaying the replicas from where it left off. However, for DR to recover automatically, it has to come back up within the Kafka topic retention time limit. By default, this is set to 168 hours (7 days). You can tune this parameter to fit your needs by running `gadmin config set Kafka.RetentionHours <value_in_hrs>`.
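For example, to extend the retention window to 14 days you could run the following on the primary cluster. `gadmin config apply` persists the change; whether a Kafka restart is needed for it to take effect is an assumption worth checking for your TigerGraph version.

```bash
# Extend the Kafka topic retention limit to 14 days (336 hours).
gadmin config set Kafka.RetentionHours 336
gadmin config apply -y
# A Kafka restart may be required for the new retention to take effect (assumption).
gadmin restart kafka -y
```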
Can I have multiple DRs?
Yes. You can set up multiple DR clusters to sync with one primary cluster. There is no hard limit to the number of DR clusters.
How will my application write to the new primary after DR failover?
We suggest that you handle this with an application load balancer where you can configure the list of DR hosts (for example, if you are using NGINX, you can add the DR hosts in the upstream section). When the load balancer fails its health check against the current primary, it will re-route traffic to the DR host list. You should then manually fail over to the DR cluster.
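As an illustration, a minimal NGINX configuration along these lines might look like the sketch below. The hostnames, the REST++ port 9000, and the health-check behaviour are assumptions: open-source NGINX only performs passive health checks (a server is marked failed after repeated errors), so active health checking would need NGINX Plus or an external checker.

```nginx
# Sketch: route application traffic to primary, fall back to DR hosts marked as "backup".
# Hostnames and port 9000 are placeholder assumptions.
upstream tigergraph_rest {
    server primary-m1:9000 max_fails=3 fail_timeout=10s;
    server primary-m2:9000 max_fails=3 fail_timeout=10s;
    # DR hosts only receive traffic once all primary servers are considered unavailable.
    server dr-m1:9000 backup;
    server dr-m2:9000 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://tigergraph_rest;
    }
}
```

Remember that re-routing traffic does not by itself promote DR; the manual failover step mentioned above is still needed.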