Self-Managed TigerGraph CoPilot

TigerGraph CoPilot is open source on GitHub and can be deployed to your own infrastructure. The self-managed deployment includes only the backend service of CoPilot, but you can still access all of its functions through the APIs.

Compared to CoPilot on TigerGraph Cloud, the differences are the absence of the graphical user interface and the extra steps needed to set up the self-managed service.

If you don’t need to extend the source code of CoPilot, the quickest way is to deploy its Docker image with the Docker Compose file in the repo.


Prerequisites

To use TigerGraph CoPilot, you will need:

  • To install:

    • Docker

    • TigerGraph DB 3.9+

      • TigerGraph DB 4.0+ is required for SupportAI. The latest TigerGraph DB is included in the docker compose file.

      • The TigerGraph DB needs to have a graph schema with data.

        For installing Docker and a TigerGraph DB, see our documentation on Get Started with Docker.

        For a graph schema and data, refer to our Example Graphs and see our documentation on Defining a Schema and Loading Data in the GSQL 101 tutorial.

  • An LLM provider API key.

    An LLM provider is a company or organization that offers Large Language Models (LLMs) as a service. The API key verifies the identity of the requester, ensuring that the request comes from a registered and authorized user or application.

    Currently, TigerGraph CoPilot supports the following providers: OpenAI, GCP (Vertex AI), Azure, and AWS Bedrock. Provider-specific configuration is covered below.

1) Setup

First, get the TigerGraph CoPilot image and prepare the environment.

You can either clone the CoPilot repository, which includes the Docker Compose file, or download the Docker Compose file from the repository on its own.
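For example, a minimal sketch of the clone-based route, assuming the repository lives at github.com/tigergraph/CoPilot (verify the URL against GitHub):

    # Clone the CoPilot repo and move into it (URL assumed).
    git clone https://github.com/tigergraph/CoPilot.git
    cd CoPilot

    # The compose file pulls the prebuilt CoPilot image along with
    # TigerGraph DB and Milvus, so no local build is needed.
    docker compose pull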

2) Set up Configuration Files

Next, in the same directory as the Docker Compose file, create and fill in the following configuration files.

  • configs/db_config.json

  • configs/llm_config.json

  • configs/milvus_config.json

configs/db_config.json

Copy the below into configs/db_config.json and edit the hostname and getToken fields to match your database’s configuration.

Set the timeout, memory threshold, and thread limit parameters to control how much of the database’s resources are consumed when answering a question.

You can also disable the consistency_checker, which reconciles Milvus and TigerGraph data, within this config. It is enabled (true) by default.

{
    "hostname": "http://tigergraph",
    "getToken": false,
    "default_timeout": 300,
    "default_mem_threshold": 5000,
    "default_thread_limit": 8,
    "enable_consistency_checker": true
}
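Before starting CoPilot, it can be worth confirming that the database is reachable at the configured hostname. A minimal sketch, assuming TigerGraph's REST++ endpoint is exposed on its default port 9000:

    # Ping TigerGraph's REST++ endpoint; a healthy instance
    # answers with "Hello GSQL". Use localhost instead of
    # tigergraph when testing from outside the compose network.
    curl http://tigergraph:9000/echo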

configs/llm_config.json

Next, copy your LLM provider’s JSON config template below into configs/llm_config.json and fill out the appropriate fields.


  • OpenAI

  • GCP

  • Azure

  • AWS Bedrock

OpenAI

In addition to the OPENAI_API_KEY, the llm_model and model_name can be edited to match your specific configuration details.

{
    "model_name": "GPT-4",
    "embedding_service": {
        "embedding_model_service": "openai",
        "authentication_configuration": {
            "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY_HERE"
        }
    },
    "completion_service": {
        "llm_service": "openai",
        "llm_model": "gpt-4-0613",
        "authentication_configuration": {
            "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY_HERE"
        },
        "model_kwargs": {
            "temperature": 0
        },
        "prompt_path": "./app/prompts/openai_gpt4/"
    }
}
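Before wiring the key into CoPilot, you can sanity-check it directly against OpenAI's public models endpoint; a valid key returns a JSON list of available models:

    # Quick credential check against the OpenAI API.
    export OPENAI_API_KEY="YOUR_OPENAI_API_KEY_HERE"
    curl -s https://api.openai.com/v1/models \
      -H "Authorization: Bearer $OPENAI_API_KEY"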
GCP

  1. Follow the GCP authentication information found here and create a Service Account with VertexAI credentials.

  2. Then, add the following to the docker run command:

    -v $(pwd)/configs/SERVICE_ACCOUNT_CREDS.json:/SERVICE_ACCOUNT_CREDS.json -e GOOGLE_APPLICATION_CREDENTIALS=/SERVICE_ACCOUNT_CREDS.json
  3. Finally, your JSON config should look like the one below:

    {
        "model_name": "GCP-text-bison",
        "embedding_service": {
            "embedding_model_service": "vertexai",
            "authentication_configuration": {}
        },
        "completion_service": {
            "llm_service": "vertexai",
            "llm_model": "text-bison",
            "model_kwargs": {
                "temperature": 0
            },
            "prompt_path": "./app/prompts/gcp_vertexai_palm/"
        }
    }
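The docker run flags from step 2 fit into a full command like the following sketch (the image name is a placeholder; when deploying with Docker Compose instead, add the equivalent volumes and environment entries to the CoPilot service):

    # Mount the service account credentials and point the Google
    # SDK at them (image name is a placeholder, not from this doc).
    docker run -d \
      -v $(pwd)/configs/SERVICE_ACCOUNT_CREDS.json:/SERVICE_ACCOUNT_CREDS.json \
      -e GOOGLE_APPLICATION_CREDENTIALS=/SERVICE_ACCOUNT_CREDS.json \
      <copilot-image>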

Azure

In addition to the AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and azure_deployment, the llm_model and model_name can be edited to match your specific configuration details.

{
    "model_name": "GPT35Turbo",
    "embedding_service": {
        "embedding_model_service": "azure",
        "azure_deployment":"YOUR_EMBEDDING_DEPLOYMENT_HERE",
        "authentication_configuration": {
            "OPENAI_API_TYPE": "azure",
            "OPENAI_API_VERSION": "2022-12-01",
            "AZURE_OPENAI_ENDPOINT": "YOUR_AZURE_ENDPOINT_HERE",
            "AZURE_OPENAI_API_KEY": "YOUR_AZURE_API_KEY_HERE"
        }
    },
    "completion_service": {
        "llm_service": "azure",
        "azure_deployment": "YOUR_COMPLETION_DEPLOYMENT_HERE",
        "openai_api_version": "2023-07-01-preview",
        "llm_model": "gpt-35-turbo-instruct",
        "authentication_configuration": {
            "OPENAI_API_TYPE": "azure",
            "AZURE_OPENAI_ENDPOINT": "YOUR_AZURE_ENDPOINT_HERE",
            "AZURE_OPENAI_API_KEY": "YOUR_AZURE_API_KEY_HERE"
        },
        "model_kwargs": {
            "temperature": 0
        },
        "prompt_path": "./app/prompts/azure_open_ai_gpt35_turbo_instruct/"
    }
}
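As with the other providers, you can sanity-check the Azure credentials before starting the service. A minimal sketch using the same placeholders as the config above and the standard Azure OpenAI completions endpoint:

    # Verify the completion deployment responds; a valid setup
    # returns a JSON completion object.
    curl -s "YOUR_AZURE_ENDPOINT_HERE/openai/deployments/YOUR_COMPLETION_DEPLOYMENT_HERE/completions?api-version=2023-07-01-preview" \
      -H "Content-Type: application/json" \
      -H "api-key: YOUR_AZURE_API_KEY_HERE" \
      -d '{"prompt": "ping", "max_tokens": 1}'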

AWS Bedrock

Specify your configuration details in the sample file below:

    "model_name": "Claude-3-haiku",
    "embedding_service": {
        "embedding_model_service": "bedrock",
        "embedding_model":"amazon.titan-embed-text-v1",
        "authentication_configuration": {
            "AWS_ACCESS_KEY_ID": "ACCESS_KEY",
            "AWS_SECRET_ACCESS_KEY": "SECRET"
        }
    },
    "completion_service": {
        "llm_service": "bedrock",
        "llm_model": "anthropic.claude-3-haiku-20240307-v1:0",
        "authentication_configuration": {
            "AWS_ACCESS_KEY_ID": "ACCESS_KEY",
            "AWS_SECRET_ACCESS_KEY": "SECRET"
        },
        "model_kwargs": {
            "temperature": 0,
        },
        "prompt_path": "./app/prompts/aws_bedrock_claude3haiku/"
    }
}
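To confirm the AWS credentials are valid and that the account can reach Bedrock, the AWS CLI offers a quick check (the bedrock subcommand requires a reasonably recent AWS CLI):

    # Confirm the keys resolve to a valid AWS identity.
    aws sts get-caller-identity

    # List Bedrock foundation models available in your region.
    aws bedrock list-foundation-models --region us-east-1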

configs/milvus_config.json

Copy the below into configs/milvus_config.json and edit the host and port fields to match your Milvus configuration. The host address in the template is used by the docker compose file.

  • username and password can also be configured below if required by your Milvus setup.

  • enabled should always be "true" for now, as Milvus is currently the only supported embedding store.

{
    "host": "milvus-standalone",
    "port": 19530,
    "username": "",
    "password": "",
    "enabled": "true"
}
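To confirm Milvus itself is up before starting CoPilot, you can hit its health endpoint. A sketch, assuming the standalone deployment exposes its default metrics port 9091:

    # A healthy Milvus instance answers "OK". Use localhost in
    # place of milvus-standalone from outside the compose network.
    curl http://milvus-standalone:9091/healthz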

(Optional) Logging

You can also configure logging for the TigerGraph CoPilot service.

Create log configuration file

Copy the below into configs/log_config.json and edit the appropriate values to suit your needs.

{
    "log_file_path": "logs",
    "log_max_size": 10485760,
    "log_backup_count": 10
}

Logs are rotated based on file size (log_max_size), with up to log_backup_count backup files retained. These settings are applied by the LogWriter to the standard Python logging package.

Operational and audit logs are recorded.

Outputs include:
  • log.ERROR

  • log.INFO

  • log.AUDIT-COPILOT
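As an illustration of how the rotation settings above play out (backup names follow Python's standard numeric-suffix convention; the exact listing will vary):

    # After the service has run for a while, the logs directory
    # might look like this; the highest suffix is the oldest file.
    ls logs/
    # log.AUDIT-COPILOT  log.ERROR  log.INFO  log.INFO.1  log.INFO.2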

Configure Logging Level in Dockerfile

To configure the logging level of the service, edit the Dockerfile.

By default, the logging level is set to "INFO":

    ENV LOGLEVEL="INFO"

This line can be changed to support different logging levels.
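If you would rather not rebuild the image, the same variable can usually be overridden at run time; a sketch with a placeholder image name:

    # Override the log level for a single run (image name is a
    # placeholder; use the CoPilot image from the compose file).
    docker run -e LOGLEVEL=DEBUG <copilot-image>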

Table 1. Logging levels

Level       Description
CRITICAL    A serious error.
ERROR       Failure to perform a function.
WARNING     An indication of an unexpected problem, e.g. failure to map a user’s question to the graph schema.
INFO        Confirmation that the service is performing as expected.
DEBUG       Detailed information, e.g. the functions retrieved during the GenerateFunction step.
DEBUG_PII   Finer-grained information that could potentially include PII, such as a user’s question, the complete function call (with parameters), and the LLM’s natural language response.
NOTSET      All messages are processed.

3) Run the Docker Image

Now, simply run docker compose up -d and wait for all the services to start.
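To verify the deployment, you can list the running services and probe the CoPilot API. A sketch, assuming the compose file publishes CoPilot on port 8000 (check the compose file for the actual port):

    # Confirm all services are up.
    docker compose ps

    # CoPilot is served as a REST API; if it exposes interactive
    # API docs at /docs (typical for FastAPI-style services, an
    # assumption here), this should return HTML once it is ready.
    curl -s http://localhost:8000/docs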

Next Steps

Once that is running, you can move on to the five ways to Use TigerGraph CoPilot.

Return to TigerGraph CoPilot for a different topic.