LLM Setup

The AI Test Agent leverages Large Language Models (LLMs) to automatically generate fuzz tests for your project. To enable this feature, you need to grant the agent access to a supported LLM provider.

Currently, the AI Test Agent supports OpenAI models (accessed directly or through Microsoft Azure) and Anthropic models on AWS Bedrock, including popular models like GPT-4o and Claude 3.5 Sonnet.

Configuration

You can configure the LLM integration by adding an llm section to your cifuzz.yaml file. This is the recommended way to set up the connection to your LLM provider.

Here is an overview of the available configuration options:

| Option | Description | Required | Default |
| --- | --- | --- | --- |
| api-type | The LLM provider: open_ai, azure, or bedrock. | Yes | - |
| model | The specific model name (e.g., gpt-4o). | Yes | - |
| api-url | The base URL for the API (required for Azure). | No | Provider default |
| api-version | The API version (required for Azure). | No | Provider default |
| azure-deployment-name | The deployment name on Azure. | No | - |
| max-tokens | The maximum number of tokens for a single chat completion request. | No | Provider default |
| timeout | The timeout for LLM API responses (e.g., 10m, 30s). | No | 10m |
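
For example, a complete llm section combining several of these options could look like the following; the max-tokens and timeout values here are illustrative, not recommendations:

    llm:
      api-type: "open_ai"
      model: "gpt-4o"
      max-tokens: 4096
      timeout: "10m"
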
Global Configuration

For a persistent configuration that applies to all your projects, you can set these options in your global user configuration file. See the Configuration page for more details.
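
For illustration, assuming the global user configuration file lives at ~/.config/cifuzz/cifuzz.yaml (the exact location depends on your operating system; the Configuration page lists it), the llm section looks the same there:

    # Global user configuration (path is an assumption; see the Configuration page)
    llm:
      api-type: "open_ai"
      model: "gpt-4o"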


Provider Setup

Below are instructions for configuring each supported LLM provider.

OpenAI

  1. Get an API Key: Follow the OpenAI documentation to obtain an API key.

  2. Set the API Key: Store your key in the CIFUZZ_LLM_API_TOKEN environment variable:

    export CIFUZZ_LLM_API_TOKEN="<your-openai-api-key>"

  3. Configure cifuzz.yaml: Add the following to your cifuzz.yaml file.

    llm:
      api-type: "open_ai"
      model: "gpt-4o"
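
Before running the agent, you can optionally confirm the key is valid with a direct request to the OpenAI API; this check is independent of cifuzz:

    # Optional: list available models to verify the API key works
    curl -s https://api.openai.com/v1/models \
      -H "Authorization: Bearer $CIFUZZ_LLM_API_TOKEN"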

OpenAI on Azure

  1. Deploy a Model: Deploy a model like GPT-4o in Azure AI Studio. For details, see the Azure AI Services documentation.

  2. Get Credentials: From your deployment in Azure AI Studio, you will need the endpoint URL, API key, and deployment name.

  3. Set the API Key: Store your key in the CIFUZZ_LLM_API_TOKEN environment variable:

    export CIFUZZ_LLM_API_TOKEN="<your-azure-api-key>"

  4. Configure cifuzz.yaml: Add the following to your cifuzz.yaml, filling in the details from your Azure deployment.

    llm:
      api-type: "azure"
      model: "gpt-4o" # The original model name, not the deployment name
      api-url: "https://<your-resource>.openai.azure.com"
      api-version: "2024-02-15-preview" # Use the API version for your deployment
      azure-deployment-name: "<your-deployment-name>"

    Note: The api-url should be the base URL of your Azure resource, ending in .openai.azure.com.
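
To confirm that the endpoint, key, and deployment name line up before involving cifuzz, you can optionally send a minimal request to the Azure OpenAI chat completions endpoint, substituting the same placeholder values as above:

    # Optional: a one-token test request against your Azure deployment
    curl -s "https://<your-resource>.openai.azure.com/openai/deployments/<your-deployment-name>/chat/completions?api-version=2024-02-15-preview" \
      -H "api-key: $CIFUZZ_LLM_API_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"messages":[{"role":"user","content":"ping"}],"max_tokens":1}'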

Anthropic on AWS Bedrock

  1. Enable Model Access: Follow the AWS Bedrock documentation to get access to Anthropic's Claude models (e.g., Claude 3.5 Sonnet).

  2. Configure AWS Credentials: Make sure your environment is authenticated with AWS, either by running aws configure or by setting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables (see the snippet after these steps).

  3. Configure cifuzz.yaml: Add the following to your cifuzz.yaml file.

    llm:
      api-type: "bedrock"
      model: "anthropic.claude-3-5-sonnet-20240620-v1:0"
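
If you use environment variables instead of aws configure (step 2 above), a minimal setup looks like this; the region value is an assumption and must be a region where you have enabled access to the model:

    export AWS_ACCESS_KEY_ID="<your-access-key-id>"
    export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
    export AWS_REGION="us-east-1" # assumption: use the region where Claude access was granted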

Advanced Configuration

For more granular control, you can use the following environment variables to fine-tune the LLM's behavior.

  • CIFUZZ_LLM_TEMPERATURE: Temperature for chat completion (e.g., 0.5).
  • CIFUZZ_LLM_MAX_ALTERNATIVES: Maximum number of alternative responses to request. A lower number reduces token usage.
  • CIFUZZ_LLM_API_HEADER_...: Adds custom headers to API requests. Replace ... with the header name, using underscores for hyphens (e.g., CIFUZZ_LLM_API_HEADER_X_My_Header).

    # Example:
    export CIFUZZ_LLM_TEMPERATURE="0.2"
    export CIFUZZ_LLM_MAX_ALTERNATIVES="5"
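
Custom headers follow the same pattern; assuming the variable's value becomes the header value and underscores map back to hyphens, the hypothetical header X-My-Header would be set like this:

    # Assumption: sends "X-My-Header: some-value" with each API request
    export CIFUZZ_LLM_API_HEADER_X_My_Header="some-value"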