Quickstart with GKE
This is a quick start guide for deploying TigerGraph on Kubernetes with Google Kubernetes Engine (GKE).
Prerequisites
- The gcloud command-line interface (CLI) is installed on your machine.
- The kubectl Kubernetes client command-line tool is installed on your machine.
- A running GKE cluster with nodes that meet the minimum hardware and software requirements for running TigerGraph.
- You have configured cluster access for kubectl.
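If you do not yet have a GKE cluster, the commands below are a minimal sketch of provisioning one with gcloud and configuring kubectl access. The cluster name, zone, machine type, and node count are illustrative placeholders; choose machine types that meet TigerGraph's minimum hardware requirements.
$ gcloud container clusters create tigergraph-cluster \
    --zone us-central1-a \
    --machine-type e2-standard-8 \
    --num-nodes 3
$ gcloud container clusters get-credentials tigergraph-cluster --zone us-central1-a
$ kubectl get nodes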
Single-server deployment
This section describes the steps to deploy, verify, and remove a single-server deployment of TigerGraph on GKE.
Deploy single server
Step 1: Generate the deployment manifest. Clone the TigerGraph ecosystem repository and change into the k8s directory. You can edit the kustomization.yaml file in the gke folder to change the namespace and image name for your deployment; the default namespace is default. If you don't need any changes, you can leave the files as they are.
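For reference, such an edit typically only touches the namespace and images fields of gke/kustomization.yaml. The values below are illustrative assumptions, not defaults; keep whatever image name the generated manifest already uses.
# gke/kustomization.yaml (illustrative values only)
namespace: tigergraph                # namespace to deploy into; the default is "default"
images:
  - name: tigergraph/tigergraph-k8s  # hypothetical image name; match the image referenced by the manifest
    newTag: "3.6.0"                  # hypothetical TigerGraph version tag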
Next, run the ./tg script in the k8s directory to generate the deployment manifest for a single-server deployment. A deployment directory is created automatically, and you should find the manifest, named tigergraph-gke.yaml, inside it.
$ ./tg gke kustomize -s 1
Step 2: Deploy the manifest. Run kubectl apply to create the deployment using the manifest you generated in Step 1.
$ kubectl apply -f deployment/tigergraph-gke.yaml
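While the installer job runs, you can optionally watch the pods or wait for the TigerGraph pod to become ready; the pod name matches the sample output shown below, and the timeout is an arbitrary example.
$ kubectl get pods -w
$ kubectl wait --for=condition=Ready pod/tigergraph-0 --timeout=15m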
Verify single server
Run kubectl get pods to confirm that the pods were created successfully:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
installer-zsnb4 1/1 Running 0 4m11s
tigergraph-0 1/1 Running 0 4m10s
Run kubectl describe service/tg-external-service to find the IP address of the load balancer. You can then make curl calls to port 9000 to make sure that RESTPP is running:
$ curl <load_balancer_ip>:9000/echo | jq .
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 39 100 39 0 0 120 0 --:--:-- --:--:-- --:--:-- 120
{
"error": false,
"message": "Hello GSQL"
}
You can also copy the IP address into your browser and visit port 14240 to make sure that GraphStudio is working.
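If you prefer to stay on the command line, one way to pull the load balancer's external IP and check both endpoints is sketched below; it assumes the service exposes RESTPP on port 9000 and GraphStudio on port 14240 as described above.
$ EXTERNAL_IP=$(kubectl get service tg-external-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl "http://${EXTERNAL_IP}:9000/echo"    # RESTPP health check
$ curl -I "http://${EXTERNAL_IP}:14240"     # expect an HTTP 200 response if GraphStudio is up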
Cluster deployment
Once your GKE cluster is ready, follow the steps below to deploy a TigerGraph cluster on Kubernetes.
Deploy TigerGraph cluster
1. Generate Kubernetes manifest
Clone the TigerGraph ecosystem repository and change into the k8s directory:
$ git clone https://github.com/tigergraph/ecosys.git
$ cd ecosys/k8s
You can customize your deployment by editing the kustomization.yaml file in the gke directory. The tg script in the k8s folder offers a convenient way to make common customizations such as the namespace, TigerGraph version, and cluster size. Run ./tg -h to view the help text on how to use the script.
Use the tg script in the k8s directory of the repo to create a Kubernetes manifest. Use -s or --size to indicate the number of nodes in the cluster. Use the --ha option to indicate the replication factor of the cluster; the partitioning factor will be the number of nodes divided by the replication factor.
For example, the following command creates a manifest that deploys a six-node (3×2) cluster with a replication factor of 2 and a partitioning factor of 3.
$ ./tg gke kustomize -s 6 --ha 2
The command will create a directory named deployment with the manifest inside.
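2. Apply the manifest
As with the single-server deployment, apply the generated manifest with kubectl apply (this assumes the manifest is also written to deployment/tigergraph-gke.yaml):
$ kubectl apply -f deployment/tigergraph-gke.yaml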
Verify cluster
Run kubectl get pods to verify that the pods were created successfully:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
installer-zsnb4 1/1 Running 0 4m11s
tigergraph-0 1/1 Running 0 4m10s
tigergraph-1 1/1 Running 0 75s
Run kubectl describe service/tg-external-service to find the IP address of the load balancer for your GKE cluster. You can make a curl call to port 9000 to make sure that RESTPP is working:
$ curl <load_balancer_ip>:9000/echo | jq .
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 39 100 39 0 0 120 0 --:--:-- --:--:-- --:--:-- 120
{
"error": false,
"message": "Hello GSQL"
}
You can also copy the IP address into your browser and visit port 14240 to make sure that GraphStudio is working.