Integrating Guardrails
Policies from DynamoGuard can be integrated with applications using two methods: Analyze and Managed Inference. These methods provide flexible options for integrating policies tailored to your use case.
Analyze
The Analyze method enables lightweight guardrailing by evaluating the compliance of a piece of text. This can be useful for integrating DynamoGuard into systems with multiple LLM calls, evaluating corpora of text, or performing one-off compliance checks.
How it Works
- Send a request to the `/analyze/` endpoint with the text to be analyzed and the policies to be applied
- DynamoGuard evaluates the text against the configured policies
- The response contains the analysis results, which can be used to determine the appropriate action, such as blocking non-compliant user inputs or model responses
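The flow above can be sketched as a small client. This is a minimal illustration, not the official SDK: the base URL, authentication header, and request/response field names (`text`, `policy_ids`) are assumptions, so check them against your deployment's API reference.

```python
import json
import urllib.request

# Hypothetical base URL and credentials -- substitute your deployment's values.
DYNAMOGUARD_URL = "https://api.dynamo.ai"
API_KEY = "YOUR_API_KEY"


def build_analyze_payload(text: str, policy_ids: list[str]) -> dict:
    """Assemble the request body; the field names here are assumptions."""
    return {"text": text, "policy_ids": policy_ids}


def analyze(text: str, policy_ids: list[str]) -> dict:
    """POST the text to the /analyze/ endpoint and return the parsed analysis."""
    req = urllib.request.Request(
        f"{DYNAMOGUARD_URL}/analyze/",
        data=json.dumps(build_analyze_payload(text, policy_ids)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The caller then inspects the returned analysis results and decides what to do, for example rejecting a non-compliant user input before it ever reaches the LLM.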
Managed Inference
Managed inference refers to the process of using DynamoGuard as a proxy for inferencing a third-party LLM. It provides end-to-end policy enforcement on both user inputs and model responses.
How it Works
- Send a request to the `/chat/` endpoint with the user input
- DynamoGuard evaluates the user input against the configured policies
- Based on the results, the input may be blocked, sanitized, or left unchanged
- Unless blocked, the processed input is forwarded to the configured LLM, which generates a response
- DynamoGuard evaluates the model response against the configured policies
- Based on the results, the response may be blocked, sanitized before being returned, or returned to the user as-is
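Since DynamoGuard acts as a proxy, the client only talks to the `/chat/` endpoint. The sketch below shows one way such a call might look, assuming (hypothetically) a chat-style request body and a response carrying both a guard `action` and the model's `content`; none of these field names are confirmed by this document, so verify them against your deployment's API reference.

```python
import json
import urllib.request

# Hypothetical base URL and credentials -- substitute your deployment's values.
DYNAMOGUARD_URL = "https://api.dynamo.ai"
API_KEY = "YOUR_API_KEY"


def build_chat_payload(user_input: str) -> dict:
    """Assemble the request body; the field names here are assumptions."""
    return {"messages": [{"role": "user", "content": user_input}]}


def extract_reply(response: dict, blocked_message: str = "Request blocked by policy.") -> str:
    """Interpret an assumed response shape: an 'action' field plus model 'content'."""
    if response.get("action") == "BLOCK":
        return blocked_message
    return response.get("content", "")


def chat(user_input: str) -> str:
    """Send the user input through the /chat/ proxy and return the final reply."""
    req = urllib.request.Request(
        f"{DYNAMOGUARD_URL}/chat/",
        data=json.dumps(build_chat_payload(user_input)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return extract_reply(json.load(resp))
```

Because policy enforcement happens inside the proxy, the application never needs separate guardrail logic around its LLM calls: blocked inputs and blocked responses both surface as a single, already-moderated reply.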
Supported Model Providers
Managed inference is currently supported for the following model API providers:
- OpenAI
- Azure OpenAI
- Databricks
- AWS Bedrock
- Mistral
Managed inference is also supported for custom endpoints and providers. For detailed guidance on setting up managed inference for your solution, please reach out to the DynamoAI team.