Helm Deployment Guide

Prerequisites

  1. Obtain Helm Charts from Dynamo AI

    • The Dynamo AI Helm charts are not publicly hosted. You must obtain them directly from the Dynamo AI team. Once you have the chart package (e.g., .tgz file), you can either:
      • Host it in an internal/private Helm repository accessible by your cluster, or
      • Reference the chart locally (e.g., helm install dynamoai-base ./dynamoai-base-<VERSION>.tgz).
  2. Container Images in an Accessible Registry

    • Dynamo AI container images must be accessible to your Kubernetes cluster. Mirror the required images into your container registry before deployment; the Dynamo AI team will share scripts that help you mirror the images into your repositories.
    • Verify your cluster nodes have permission to access this registry.
  3. Model Weights in Object Storage

    • Dynamo AI relies on certain model artifacts, which will be shared with you at deployment time. You can store these model weights in your existing object storage solution.
    • Ensure you configure the correct credentials and endpoints in your values.yaml so the application can retrieve the models during runtime.
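The image-mirroring step described above can be sketched as a small dry-run script. This is an illustration only: the registry URLs and image names below are placeholders, not Dynamo AI's actual image list, and it assumes a copy tool such as crane is available. The scripts provided by the Dynamo AI team supersede this sketch.

```shell
#!/usr/bin/env sh
# Hypothetical example: mirror images from a source registry into your own.
# Registries and image names below are placeholders, not the real Dynamo AI list.
SRC_REGISTRY="source.example.com/dynamoai"
DST_REGISTRY="registry.internal.company.com/dynamoai"
IMAGES="base-api:1.0.0 guard-api:1.0.0"

for image in $IMAGES; do
  # Print the copy command as a dry run; drop the echo to actually run crane.
  echo crane copy "$SRC_REGISTRY/$image" "$DST_REGISTRY/$image"
done
```

Running the script prints one copy command per image, which you can review before executing against your registries.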

Dependencies

If they are not already present in your cluster, install the following components to integrate with Dynamo AI.

NVIDIA Device Plugin (For GPU Clusters)

Required only if you intend to run GPU workloads:

# Example using the NVIDIA Helm chart (standard K8s):
helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
helm repo update

helm upgrade --install nvidia-device-plugin nvdp/nvidia-device-plugin \
--version <CHART_VERSION> \
--namespace kube-system

For Red Hat OpenShift, the recommended approach is to install the NVIDIA GPU Operator from OperatorHub.
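As an illustration of what the device plugin enables, the sketch below writes a hypothetical pod spec that requests the nvidia.com/gpu resource the plugin exposes. The pod name and CUDA image tag are placeholders and are not part of Dynamo AI's charts.

```shell
# Illustrative only: a pod that requests the GPU resource exposed by the
# device plugin. Pod name and image tag are placeholders.
cat > gpu-pod-example.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-check
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.3.1-base-ubuntu22.04   # placeholder image tag
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1    # schedules the pod onto a GPU node
EOF
# Apply and check: kubectl apply -f gpu-pod-example.yaml && kubectl logs gpu-check
```

If the pod schedules and nvidia-smi prints the GPU inventory, the device plugin is working.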

Ingress Controller or OpenShift Routes

Dynamo AI services need external access:

  • NGINX Ingress (Generic K8s)
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update

    helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace
  • OpenShift – Use built-in Routes; no separate NGINX required.

KEDA

Dynamo AI integrates with KEDA to autoscale workloads based on custom metrics and events:

helm repo add kedacore https://kedacore.github.io/charts
helm repo update

helm upgrade --install keda kedacore/keda \
--namespace keda \
--create-namespace
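To show the shape of the KEDA resource Dynamo AI's event-driven autoscaling builds on, the sketch below writes a minimal, hypothetical ScaledObject with a Prometheus trigger. The target Deployment name, metric query, and threshold are placeholders; the actual scalers for Dynamo AI workloads come from your Dynamo AI configuration.

```shell
# Hypothetical example of a KEDA ScaledObject; the target workload and
# Prometheus query below are placeholders, not Dynamo AI's actual config.
cat > scaledobject-example.yaml <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaler
  namespace: dynamoai
spec:
  scaleTargetRef:
    name: example-deployment        # placeholder workload
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus-server.monitoring.svc.cluster.local
        query: sum(rate(http_requests_total[2m]))   # placeholder metric
        threshold: "10"
EOF
# Apply with: kubectl apply -f scaledobject-example.yaml
```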

Prometheus (Metrics)

Prometheus is used for metrics-based monitoring:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm upgrade --install prometheus prometheus-community/prometheus \
--namespace monitoring --create-namespace

OpenTelemetry

If you want distributed tracing:

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

helm upgrade --install opentelemetry-operator open-telemetry/opentelemetry-operator \
--namespace opentelemetry --create-namespace

Configuration Variables

Dynamo AI is composed of multiple Helm charts. Below are the relevant variables for the Base (dynamoai-base) and DynamoGuard (dynamoai-dynamoguard) charts. You will typically configure these in your values.yaml or pass them via --set flags.

dynamoai-base Variables

Each variable is listed with its description and a default or example value:

  • dynamoguard_namespace: Namespace to which DynamoGuard roles and references may point (if you plan to install dynamoai-dynamoguard). Default/Example: dynamoai-dynamoguard
  • dynamoai_namespace: Namespace where the Dynamo AI base platform is installed. Default/Example: dynamoai
  • dns_zone: DNS zone for domain mappings (used in certain TLS or route configurations). Example: internal.company.com
  • api_domain: Domain for the API endpoints. Example: api.dynamoai.example.com
  • ui_domain: Domain for the UI endpoints. Example: dynamoai.example.com
  • keycloak_domain: Domain for the Dynamo AI Keycloak IDP (if using bundled Keycloak). Example: sso.dynamoai.example.com
  • minio_domain: Domain for MinIO if using in-cluster MinIO. Example: minio.dynamoai.example.com
  • storage_class: The storage class to use for persistent volume claims (PVCs) in the cluster. Example: gp2, managed-premium
  • license: Your Dynamo AI license string (provided by your Solutions Architect). Default/Example: <LICENSE_STRING>
  • postgres_host: Hostname or IP of the PostgreSQL database (in-cluster or external). Example: postgres.dynamoai.svc.cluster.local
  • postgres_name: Database name used by Dynamo AI. Example: dynamoai_db
  • postgres_username: Postgres username. Default/Example: dynamoai_user
  • postgres_password: Postgres password. Default/Example: ChangeMe123
  • mistral_api_key: API key for Mistral.ai if used as your LLM service. Default/Example: <MISTRAL_KEY>
  • openai_api_key: API key for OpenAI if used. Default/Example: <OPENAI_KEY>
  • data_generation_api_key: API key for a data-generation LLM (if separate from the above). Default/Example: <DATAGEN_KEY>
  • hf_token: Hugging Face token if you wish to pull private models. Default/Example: <HF_TOKEN>
  • keycloak_username: Admin username for Keycloak. Default/Example: keycloak_admin
  • keycloak_password: Admin password for Keycloak. Default/Example: ChangeMe123

dynamoai-dynamoguard Variables

Each variable is listed with its description and a default or example value:

  • dynamoguard_namespace: Namespace in which DynamoGuard will be installed. Default/Example: dynamoai-dynamoguard
  • dynamoai_namespace: Namespace for Dynamo AI base services (must match your dynamoai-base installation). Default/Example: dynamoai
  • api_domain: Domain for DynamoGuard-related endpoints (often the same as the base api_domain). Example: api.dynamoai.example.com
  • ui_domain: Domain for the DynamoGuard UI. Example: guard.dynamoai.example.com
  • keycloak_domain: Keycloak domain (should match the base config). Example: sso.dynamoai.example.com
  • minio_domain: Domain for MinIO if used. Example: minio.dynamoai.example.com
  • storage_class: Storage class for any additional DynamoGuard volumes (same as base). Example: gp2 (AWS), managed-premium (Azure), etc.
  • license: Dynamo AI license (same as base, or can be overridden). Default/Example: <LICENSE_STRING>
  • postgres_host: PostgreSQL endpoint. Example: postgres.dynamoai.svc.cluster.local
  • postgres_name: Database name. Example: dynamoai_db
  • postgres_username: Postgres username. Default/Example: dynamoai_user
  • postgres_password: Postgres password. Default/Example: ChangeMe123
  • mistral_api_key: Mistral API key (if used). Default/Example: <MISTRAL_KEY>
  • openai_api_key: OpenAI API key (if used). Default/Example: <OPENAI_KEY>
  • data_generation_api_key: API key for a data-generation LLM. Default/Example: <DATAGEN_KEY>
  • hf_token: Hugging Face token (if pulling private models). Default/Example: <HF_TOKEN>
  • keycloak_username: Keycloak admin username (if needed for advanced config). Default/Example: keycloak_admin
  • keycloak_password: Keycloak admin password. Default/Example: ChangeMe123
  • storage_account_connection_string: (Azure-specific) Connection string for Azure Blob if used for additional or external model/data storage. Example: DefaultEndpointsProtocol=https;AccountName=...

Note:

  • The storage_account_connection_string is typically Azure-specific. For AWS or GCP, you may leave it blank or remove it from your configuration if not using Azure Blob.
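As a starting point, the sketch below writes a minimal dynamoai-base values file using variable names from the tables above. The flat key layout is an assumption; confirm the exact schema against the values.yaml bundled with your Dynamo AI chart, and replace the placeholder values with your own.

```shell
# Sketch of a dynamoai-base-values.yaml using the variables documented above.
# The flat key layout is an assumption; verify it against the chart's own
# values.yaml before deploying.
cat > dynamoai-base-values.yaml <<'EOF'
dynamoai_namespace: dynamoai
dynamoguard_namespace: dynamoai-dynamoguard
dns_zone: internal.company.com
api_domain: api.dynamoai.example.com
ui_domain: dynamoai.example.com
keycloak_domain: sso.dynamoai.example.com
storage_class: gp2
license: <LICENSE_STRING>
postgres_host: postgres.dynamoai.svc.cluster.local
postgres_name: dynamoai_db
postgres_username: dynamoai_user
postgres_password: ChangeMe123
keycloak_username: keycloak_admin
keycloak_password: ChangeMe123
EOF
```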

Installing Dynamo AI Charts

1. Prepare the Helm Charts

Since the Dynamo AI Helm charts are not hosted online:

  1. Obtain the chart .tgz files (e.g., dynamoai-base-<VERSION>.tgz, dynamoai-dynamoguard-<VERSION>.tgz, etc.) from your Dynamo AI Solutions Architect.
  2. Store them in a local directory or a private Helm repository accessible to your cluster.

2. Deploy the dynamoai-base Chart

  1. Create a dynamoai-base-values.yaml (or equivalent) with the Base Variables configured.

  2. Install locally, for example:

    helm upgrade --install dynamoai-base /path/to/dynamoai-base-<VERSION>.tgz \
    --values ./dynamoai-base-values.yaml \
    --namespace dynamoai \
    --create-namespace

    If you hosted your chart in a private repo, use:

    helm upgrade --install dynamoai-base my-private-repo/dynamoai-base \
    --version <DAI_VERSION> \
    --values ./dynamoai-base-values.yaml \
    --namespace dynamoai \
    --create-namespace
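The local-chart install above can be wrapped in a small script with a basic pre-check. This sketch only prints the helm command as a dry run (remove the echo to execute), and the chart path keeps the <VERSION> placeholder from the text above.

```shell
#!/usr/bin/env sh
# Sketch: wrap the local-chart install with a values-file pre-check.
# Drop the echo to actually run helm; <VERSION> stays a placeholder.
set -eu

CHART="/path/to/dynamoai-base-<VERSION>.tgz"
VALUES="./dynamoai-base-values.yaml"

if [ ! -f "$VALUES" ]; then
  echo "warning: values file $VALUES not found" >&2
fi

echo helm upgrade --install dynamoai-base "$CHART" \
  --values "$VALUES" \
  --namespace dynamoai \
  --create-namespace
```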

3. Deploy the dynamoai-dynamoguard Chart

If you require DynamoGuard:

  1. Create a dynamoai-dynamoguard-values.yaml (or equivalent) with DynamoGuard Variables.
  2. Install:
    helm upgrade --install dynamoai-dynamoguard /path/to/dynamoai-dynamoguard-<VERSION>.tgz \
    --values ./dynamoai-dynamoguard-values.yaml \
    --namespace dynamoai

(Optional) dynamoai-dynamoeval

If you also have DynamoEval:

helm upgrade --install dynamoai-dynamoeval /path/to/dynamoai-dynamoeval-<VERSION>.tgz \
--values ./dynamoai-dynamoeval-values.yaml \
--namespace dynamoai

Post-Installation

  1. Validate Pods

    kubectl get pods -n dynamoai

    Confirm all pods are in a Running/Ready state.

  2. DNS & Ingress Configuration

    • If using an NGINX Ingress on EKS, AKS, or GKE, ensure your domain (api_domain, ui_domain, etc.) has DNS A/CNAME records pointing to the ingress load balancer.
    • On OpenShift, confirm that the Routes are created and externally resolvable. In some cases, the domain will be auto-generated unless you configure custom hostnames.
  3. Access the UI

    • Open a browser and navigate to the UI domain you specified (e.g., https://dynamoai.example.com).
    • Log in using the credentials set in your Helm values.
  4. Integrate AI Systems

    • For real-time guardrails, configure an external or local AI system in the UI or via CRDs.
    • Provide your LLM API keys (OpenAI, Mistral, Databricks, etc.) or local GPU resources if running in-cluster.

Cleanup

Remove Dynamo AI charts:

helm uninstall dynamoai-dynamoguard -n dynamoai
helm uninstall dynamoai-dynamoeval -n dynamoai
helm uninstall dynamoai-base -n dynamoai

(Optional) Remove dependencies if they were installed specifically for Dynamo AI:

helm uninstall nvidia-device-plugin -n kube-system   # GPU device plugin
helm uninstall ingress-nginx -n ingress-nginx # Ingress controller
helm uninstall keda -n keda
helm uninstall prometheus -n monitoring
helm uninstall opentelemetry-operator -n opentelemetry