# LLM Setup

The AI Test Agent leverages Large Language Models (LLMs) to automatically generate fuzz tests for your project. To enable this feature, you need to grant the agent access to a supported LLM provider.
Currently, the AI Test Agent supports models from OpenAI, Microsoft Azure, and Anthropic on AWS Bedrock, including popular models like GPT-4o and Claude 3.5 Sonnet.
## Configuration

You can configure the LLM integration by adding an `llm` section to your `cifuzz.yaml` file. This is the recommended way to set up the connection to your LLM provider.

Here is an overview of the available configuration options:
| Option | Description | Required | Default |
| --- | --- | --- | --- |
| `api-type` | The LLM provider: `open_ai`, `azure`, or `bedrock`. | Yes | - |
| `model` | The specific model name (e.g., `gpt-4o`). | Yes | - |
| `api-url` | The base URL for the API (required for Azure). | No | Provider default |
| `api-version` | The API version (required for Azure). | No | Provider default |
| `azure-deployment-name` | The deployment name on Azure. | No | - |
| `max-tokens` | The maximum number of tokens for a single chat completion request. | No | Provider default |
| `timeout` | The timeout for LLM API responses (e.g., `10m`, `30s`). | No | `10m` |
For a persistent configuration that applies to all your projects, you can set these options in your global user configuration file. See the Configuration page for more details.
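Put together, a minimal `llm` section might look like the following sketch. The values here are illustrative, not recommendations:

```yaml
llm:
  api-type: "open_ai" # or "azure" / "bedrock"
  model: "gpt-4o"
  max-tokens: 4096    # illustrative; omit to use the provider default
  timeout: "5m"       # overrides the 10m default
```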
## Provider Setup

Below are instructions for configuring each supported LLM provider.
### OpenAI

1. **Get an API Key:** Follow the OpenAI documentation to obtain an API key.

2. **Set the API Key:** Store your key in the `CIFUZZ_LLM_API_TOKEN` environment variable:

   ```bash
   export CIFUZZ_LLM_API_TOKEN="<your-openai-api-key>"
   ```

3. **Configure `cifuzz.yaml`:** Add the following to your `cifuzz.yaml` file:

   ```yaml
   llm:
     api-type: "open_ai"
     model: "gpt-4o"
   ```
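Before kicking off a long run, it can help to fail fast if the token was never exported. A minimal shell sketch; the helper name `require_llm_token` is ours, not a cifuzz command:

```bash
# Guard: refuse to continue if the API token is not set.
# The function name is illustrative, not part of cifuzz.
require_llm_token() {
  if [ -z "${CIFUZZ_LLM_API_TOKEN:-}" ]; then
    echo "CIFUZZ_LLM_API_TOKEN is not set" >&2
    return 1
  fi
}
```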
### OpenAI on Azure

1. **Deploy a Model:** Deploy a model like GPT-4o in Azure AI Studio. For details, see the Azure AI Services documentation.

2. **Get Credentials:** From your deployment in Azure AI Studio, you will need the endpoint URL, API key, and deployment name.

3. **Set the API Key:** Store your key in the `CIFUZZ_LLM_API_TOKEN` environment variable:

   ```bash
   export CIFUZZ_LLM_API_TOKEN="<your-azure-api-key>"
   ```

4. **Configure `cifuzz.yaml`:** Add the following to your `cifuzz.yaml`, filling in the details from your Azure deployment:

   ```yaml
   llm:
     api-type: "azure"
     model: "gpt-4o" # The original model name, not the deployment name
     api-url: "https://<your-resource>.openai.azure.com"
     api-version: "2024-02-15-preview" # Use the API version for your deployment
     azure-deployment-name: "<your-deployment-name>"
   ```

   :::note
   The `api-url` should be the base URL of your Azure resource, ending in `.openai.azure.com`.
   :::
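As a sanity check on these values, it helps to know how Azure OpenAI combines them: chat completion requests are routed under the deployment name, with the API version as a query parameter. A sketch with illustrative values (resource and deployment names are made up):

```bash
# Illustrative values; substitute your own resource and deployment.
api_url="https://my-resource.openai.azure.com"
deployment="my-gpt-4o-deployment"
api_version="2024-02-15-preview"

# Azure OpenAI's chat completions URL scheme:
# <api-url>/openai/deployments/<deployment>/chat/completions?api-version=<version>
request_url="${api_url}/openai/deployments/${deployment}/chat/completions?api-version=${api_version}"
echo "${request_url}"
```

If the assembled URL does not resolve to your resource, one of the three values above is wrong.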
### Anthropic on AWS Bedrock

1. **Enable Model Access:** Follow the AWS Bedrock documentation to get access to Anthropic's Claude models (e.g., Claude 3.5 Sonnet).

2. **Configure AWS Credentials:** Make sure your environment is authenticated with AWS. You can do this by running `aws configure` or by setting the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.

3. **Configure `cifuzz.yaml`:** Add the following to your `cifuzz.yaml` file:

   ```yaml
   llm:
     api-type: "bedrock"
     model: "anthropic.claude-3-5-sonnet-20240620-v1:0"
   ```
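A quick way to check which of the two credential sources from step 2 is in effect is to probe the environment and the default credentials file. The helper name is ours, not part of cifuzz or the AWS CLI:

```bash
# Report where AWS credentials would come from.
# Function name is illustrative, not a real AWS CLI command.
aws_cred_source() {
  if [ -n "${AWS_ACCESS_KEY_ID:-}" ] && [ -n "${AWS_SECRET_ACCESS_KEY:-}" ]; then
    echo "environment"
  elif [ -f "${HOME}/.aws/credentials" ]; then
    # Written by `aws configure`
    echo "credentials file"
  else
    echo "none found"
  fi
}
```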
## Advanced Configuration

For more granular control, you can use the following environment variables to fine-tune the LLM's behavior:

- `CIFUZZ_LLM_TEMPERATURE`: Temperature for chat completion (e.g., `0.5`).
- `CIFUZZ_LLM_MAX_ALTERNATIVES`: Maximum number of alternative responses to request. A lower number reduces token usage.
- `CIFUZZ_LLM_API_HEADER_...`: Adds custom headers to API requests. Replace `...` with the header name, using underscores for hyphens (e.g., `CIFUZZ_LLM_API_HEADER_X_My_Header`).

```bash
# Example:
export CIFUZZ_LLM_TEMPERATURE="0.2"
export CIFUZZ_LLM_MAX_ALTERNATIVES="5"
```
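The underscore-to-hyphen mapping for custom headers can be sketched as follows, using the example variable from above:

```bash
# Env-var suffix after the CIFUZZ_LLM_API_HEADER_ prefix...
suffix="X_My_Header"
# ...maps to the HTTP header name by replacing underscores with hyphens.
header_name=$(printf '%s' "$suffix" | tr '_' '-')
echo "$header_name"   # X-My-Header
```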