Task-Centric Structure

Design your tasks in a crystal clear format.

The Beacon Labs framework uses Tasks as its core component. Tasks can be executed by agents or direct LLM calls and can be customized with various parameters, tools, and context. The framework automatically generates the necessary steps within tasks.

The task-centric approach offers several advantages:

  • It removes the restriction of binding agents to single tasks, allowing them to handle multiple tasks efficiently.

  • It improves programmability: task dependencies such as websites, company data, and competitor information are defined programmatically rather than embedded in individual agents. This removes the need for a separate agent for each repetitive operation (for example, competitor analysis), making the system more scalable and maintainable; see the sketch after this list.
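
As a quick sketch of what this looks like in practice (using the context parameter covered below), the dependencies live in the task rather than in a dedicated agent. The URL and notes here are illustrative placeholders, not part of the documented API.

from beaconlabs import Task

# Dependencies are expressed as task context instead of being baked into an agent.
# The website URL and notes below are illustrative placeholders.
competitor_analysis = Task(
    "Analyze the main competitor's current product lineup",
    context=[
        "https://competitor.example.com",                          # website to review
        "Internal notes: our pricing tiers and target segments",   # company data
    ],
)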

Creating a Task

Tasks can be imported into your project and created with custom identifiers like ‘task1’, ‘task2’ or any descriptive name you choose.

from beaconlabs import Task

task1 = Task("Do an in-depth analysis of US history")

Task Attributes

Tasks within the framework can range from basic to complex, depending on your specific requirements. The framework is designed with simplicity in mind, requiring only one mandatory parameter: the description. All other parameters are optional, providing flexibility in task configuration.

| Attribute | Parameter | Type | Description |
| --- | --- | --- | --- |
| Description | description | str | A clear and concise statement of what the task entails. |
| Response Format (Optional) | response_format | Optional[List[Union[BaseModel, ObjectResponse]]] | The structure of the response you expect. |
| Tools (Optional) | tools | Optional[List[Union[MCP, Function]]] | The tools needed to complete the task. |
| Context (Optional) | context | Optional[List[Union[Task, KnowledgeBase, str]]] | Context that helps accomplish the task. |
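
Putting these together, a single Task can set any combination of the optional parameters. The sketch below is illustrative only: it assumes ObjectResponse can be imported from beaconlabs and subclassed with typed fields in a pydantic-like style, and the class and field names are placeholders rather than part of the documented API. The response_format is passed as a list, matching the type shown in the table above.

from beaconlabs import Task, ObjectResponse

# Hypothetical response schema; assumes ObjectResponse subclasses declare
# typed fields in a pydantic-like style. Class and field names are placeholders.
class HistoryAnalysis(ObjectResponse):
    key_events: list[str]
    summary: str

task1 = Task(
    "Do an in-depth analysis of US history",   # description (required)
    response_format=[HistoryAnalysis],         # expected response structure
    context=["Focus on the 20th century"],     # plain-string context
)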

Adding Tools to a Task

Tools play a crucial role in agent functionality by bridging the gap between LLMs and real-world applications such as APIs, services, and search engines.

The framework supports two distinct types of tool implementation. The first option allows you to utilize Python functions directly as tools within Beacon Labs agents. The second approach leverages the Model Context Protocol (MCP), a standardized protocol for LLM tools that supports multiple platforms including Python, Node.js, and Docker. MCP tools are continuously developed and maintained by both companies and the community, with detailed information available in the “Tools” section.

Integrating a tool into your task is straightforward: simply create a list containing the desired tool’s class name and assign it to the “tools” parameter in the Task object.

1. Function Tools

Let’s define a class called MyTools that includes a function named is_page_available. This function will perform a simple URL validation check, returning True if the specified URL is accessible, making it useful for verifying web resources.

import requests

class MyTools:
  # Returns True only if the URL responds with HTTP 200 within the timeout.
  def is_page_available(url: str) -> bool:
      try:
          return requests.get(url, timeout=10).status_code == 200
      except requests.RequestException:
          return False

2. MCP Tools

This example demonstrates integration with the HackerNews MCP Server, which provides several functions including get_stories, get_story_info, search_stories, and get_user_info. The MCP framework simplifies the process of connecting our agents to external services, as illustrated by this HackerNews integration.

class HackerNewsMCP:
    command = "uvx"    # command used to launch the MCP server
    args = ["mcp-hn"]  # arguments: the HackerNews MCP package

3. Put to Task

Once you’ve configured your custom tools, they can be directly incorporated into the Task object. The agent will then automatically utilize these tools as needed during task execution.

task = Task(
  "Summarize the latest hackernews stories of today",
  tools=[MyTools, HackerNewsMCP]  # the tool classes defined above
)

Passing a Task to Another Task as Context

The framework supports the combination of multiple tasks to handle complex operations, particularly useful in scenarios requiring deep analysis followed by report generation. While individual tasks may be complex, the true power lies in their interconnection. By creating task chains and linking them through shared context, you can build sophisticated workflows that seamlessly pass information between tasks.

task1 = Task("Do an in-depth analysis of the history of chips")

task2 = Task(
  "Prepare a draft report on Europe's position",
  context=[task1] # Add task1 in a list as task context
)

The Beacon Labs framework presents the supplied context to the LLM on your behalf, so you don’t have to worry about how multiple context items are shared.
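
For example, a single context list can mix the types from the attribute table, such as an earlier task and a plain string. The guidance string below is illustrative.

task2 = Task(
  "Prepare a draft report on Europe's position",
  context=[
      task1,                                        # result of the earlier analysis task
      "Emphasize the semiconductor supply chain",   # plain-string guidance (illustrative)
  ],
)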

