Quickstart with GKE
This is a quickstart guide for deploying TigerGraph on Kubernetes with Google Kubernetes Engine (GKE).
1. Prerequisites
- The gcloud command-line interface (CLI) is installed on your machine.
- The kubectl Kubernetes client command-line tool is installed on your machine.
- A running GKE cluster with nodes that meet the minimum hardware and software requirements for running TigerGraph.
- You have configured cluster access for kubectl.
- You have the following permissions in your Kubernetes context:
  - Create and delete Pods, Services, StatefulSets, and ConfigMaps
  - Create and delete Jobs and CronJobs
  - Create and delete Service Accounts, Roles, and Role Bindings
Each of the commands below uses kubectl with the default namespace, default.
If you deployed your cluster in a different namespace, explicitly provide it with the --namespace option.
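As a sketch (the namespace name tg-namespace is invented for illustration), scoping a command to a non-default namespace looks like this:

```shell
# Invented namespace name; substitute the one your cluster was deployed into.
NS=tg-namespace

# Without --namespace, kubectl targets the namespace "default".
kubectl get pods --namespace "${NS}"
```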
2. Single-server deployment
This section describes the steps to deploy, verify, and remove a single-server deployment of TigerGraph on GKE.
2.1. Deploy single server
Take the following steps to deploy a single-server instance of TigerGraph on GKE:
2.1.1. Generate deployment manifest
Clone the TigerGraph ecosystems repository and change into the k8s directory.
You can edit the kustomization.yaml file in the gke folder to change the namespace and image name for your deployment.
The default namespace is default.
Next, run the ./tg script in the k8s directory to generate the deployment manifest for a single-server deployment.
You can use the --prefix option to specify a prefix for your pods.
The default prefix is tigergraph.
The script automatically creates a deploy directory, where you can find the generated manifest named tigergraph-gke-default.yaml.
$ ./tg gke kustomize -s 1 -v <version> (1)
(1) -s <number> specifies the number of nodes in your cluster. For a single-server deployment, this number is 1.
    -v <version> specifies the version of TigerGraph you want to provision in your TigerGraph cluster.
2.2. Verify single server
Run kubectl get pods to confirm that the pods were created successfully:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
installer-zsnb4 1/1 Running 0 4m11s
tigergraph-0 1/1 Running 0 4m10s
Run kubectl get services to find the IP addresses of the RESTPP service and the GUI service.
You can then make curl calls to the IP address of tg-rest-service at port 9000 to make sure that RESTPP is running:
$ curl <restpp_ip>:9000/echo | jq .
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 39 100 39 0 0 120 0 --:--:-- --:--:-- --:--:-- 120
{
"error": false,
"message": "Hello GSQL"
}
You can also copy the IP address of the GUI service into your browser and visit port 14240 to make sure that GraphStudio is working.
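The lookups above can be scripted. The sketch below assumes the RESTPP service is named tg-rest-service as shown above, and that the GUI service is named tg-gui-service (an assumption; use whatever name kubectl get services actually reports):

```shell
# Pull the external IPs from the LoadBalancer status of each service.
# tg-gui-service is an assumed name; check `kubectl get services` output.
RESTPP_IP=$(kubectl get service tg-rest-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
GUI_IP=$(kubectl get service tg-gui-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# A healthy RESTPP answers /echo with {"error": false, "message": "Hello GSQL"}.
curl -s "http://${RESTPP_IP}:9000/echo"

# GraphStudio is served on port 14240 of the GUI service.
echo "Open http://${GUI_IP}:14240 in your browser."
```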
2.3. Connect to single server
Use kubectl to get a shell to the container, or log in via ssh:
# Via kubectl
kubectl exec -it tigergraph-0 -- /bin/bash
# Via ssh
ip_m1=$(kubectl get pod -o wide |grep tigergraph-0| awk '{print $6}')
ssh tigergraph@ip_m1
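The ssh variant above takes column 6 of the kubectl get pod -o wide output, which is the pod IP. A sketch of the same lookup with -o jsonpath, which does not depend on column positions:

```shell
# Ask the API server for the pod IP directly instead of parsing columns.
ip_m1=$(kubectl get pod tigergraph-0 -o jsonpath='{.status.podIP}')
ssh "tigergraph@${ip_m1}"
```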
2.4. Remove single server resources
Use the tg script in the k8s directory of the repo to delete all cluster resources.
Replace <namespace_name> with the name of the namespace within which you want to delete the resources.
If you don't specify a namespace, the command deletes the resources in the namespace default:
$ ./tg gke delete -n <namespace_name>
3. Cluster deployment
Once your GKE cluster is ready, you can deploy a TigerGraph cluster to it.
3.1. Deploy TigerGraph cluster
Take the following steps to deploy a cluster of TigerGraph on GKE:
3.1.1. Generate Kubernetes manifest
Clone the TigerGraph ecosystems repository and change into the k8s directory:
$ git clone https://github.com/tigergraph/ecosys.git
$ cd ecosys/k8s
You can customize your deployment by editing the kustomization.yaml file in the gke directory.
The tg script in the k8s folder offers a convenient way to make common customizations such as the namespace, TigerGraph version, and cluster size.
Run ./tg -h to view the help text on how to use the script.
Use the tg script in the k8s directory of the repo to create a Kubernetes manifest.
Use -s or --size to indicate the number of nodes in the cluster.
Use the --ha option to indicate the replication factor of the cluster; the partitioning factor is the number of nodes divided by the replication factor.
For example, the following command creates a manifest that deploys a 3x2 cluster with a replication factor of 2 and a partitioning factor of 3:
$ ./tg gke kustomize -s 6 --ha 2
The command creates a directory named deploy with the manifest inside.
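The sizing rule above can be checked with a quick calculation; with -s 6 and --ha 2:

```shell
SIZE=6   # total nodes, from -s/--size
HA=2     # replication factor, from --ha

# Partitioning factor = number of nodes / replication factor.
PARTITIONS=$((SIZE / HA))
echo "replication=${HA} partitioning=${PARTITIONS}"  # → replication=2 partitioning=3
```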
3.2. Verify cluster
Run kubectl get pods to verify that the pods were created successfully:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
installer-zsnb4 1/1 Running 0 4m11s
tigergraph-0 1/1 Running 0 4m10s
tigergraph-1 1/1 Running 0 75s
Run kubectl get services to find the IP addresses of the RESTPP service and the GUI service.
You can then make curl calls to the IP address of tg-rest-service at port 9000 to make sure that RESTPP is running:
$ curl <restpp_ip>:9000/echo | jq .
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 39 100 39 0 0 120 0 --:--:-- --:--:-- --:--:-- 120
{
"error": false,
"message": "Hello GSQL"
}
You can also copy the IP address of the GUI service into your browser and visit port 14240 to make sure that GraphStudio is working.
3.3. Connect to instances
You can use kubectl to get a shell to the container, or log in via ssh:
# Via kubectl
kubectl exec -it tigergraph-0 -- /bin/bash
# Via ssh
ip_m1=$(kubectl get pod -o wide |grep tigergraph-0| awk '{print $6}')
ssh tigergraph@ip_m1
3.4. Delete cluster resources
Use the tg script in the k8s directory of the repo to delete all cluster resources.
Replace <namespace_name> with the name of the namespace within which you want to delete the resources.
If you don't specify a namespace, the command deletes the resources in the namespace default:
$ ./tg gke delete -n <namespace_name>