LLM Support
Use various LLMs to power your agents and tasks.
Overview
The BeaconLabs framework lets you configure LLM support through a .env file or environment variables. Once you provide the necessary keys for the services you use, you can select the LLM for each agent with the model parameter.
The supported LLMs are:
OpenAI: openai/gpt-4o, openai/o3-mini
Azure: azure/gpt-4o
Anthropic: claude/claude-3-5-sonnet
AWS Bedrock: bedrock/claude-3-5-sonnet
DeepSeek: deepseek/deepseek-chat
Setting up .env
To use these LLMs, an example .env file is shown below. Fill in only the variables required for the LLM you want to use. The .env file must be located in your working directory.
# OpenAI
OPENAI_API_KEY="sk-***"
# Anthropic
ANTHROPIC_API_KEY="sk-***"
# DeepSeek
DEEPSEEK_API_KEY="sk-**"
# AWS Bedrock
AWS_ACCESS_KEY_ID="**"
AWS_SECRET_ACCESS_KEY="***"
AWS_REGION="**-**"
# Azure
AZURE_OPENAI_ENDPOINT="https://**.com/"
AZURE_OPENAI_API_VERSION="****-**-**"
AZURE_OPENAI_API_KEY="***"
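BeaconLabs reads these keys from the environment for you. To make the mechanism concrete, here is a minimal standard-library sketch of how KEY="value" lines in a .env file map to environment variables (illustration only; this helper is not part of the BeaconLabs API, and the key below is a placeholder):

```python
import os
import tempfile

def load_env_file(path):
    """Minimal .env loader: KEY="value" lines become environment variables.
    Illustration only -- BeaconLabs loads your .env for you."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"')

# Write a throwaway .env with a placeholder key and load it
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write('# OpenAI\nOPENAI_API_KEY="sk-example"\n')
    env_path = f.name
load_env_file(env_path)
print(os.environ["OPENAI_API_KEY"])  # sk-example
```

Setting the same variables directly in your shell environment works just as well; the .env file is simply a convenient place to keep them.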
Using a Specific Model in an Agent
Using the model parameter, you can select which LLM each agent will use. Here's an example:
from beaconlabs import Agent
product_manager_agent = Agent(
    "Product Manager",
    model="openai/gpt-4o"  # Specify the model
)
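Each model string combines a provider prefix and a model name separated by a slash. A hypothetical sketch of how such a string splits (parse_model is an illustrative helper, not part of the BeaconLabs API):

```python
def parse_model(model):
    """Split a 'provider/model-name' string such as "openai/gpt-4o".
    Hypothetical helper for illustration; not part of the BeaconLabs API."""
    provider, _, name = model.partition("/")
    return provider, name

print(parse_model("openai/gpt-4o"))              # ('openai', 'gpt-4o')
print(parse_model("bedrock/claude-3-5-sonnet"))  # ('bedrock', 'claude-3-5-sonnet')
```

The provider prefix determines which credentials from your .env file are used, so make sure the matching keys are set before selecting a model.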