
CrewAI as Your AI Agent

What is CrewAI?

  • CrewAI is an open-source framework for orchestrating multiple collaborating AI agents.
  • It allows you to create different AI agents, each with its own role and capabilities.
  • These agents can then work together to achieve complex tasks, similar to how a crew works on a ship.
  • This framework is often used for research and development in multi-agent AI systems.
  • You can find more information about the CrewAI framework on its GitHub page: https://github.com/joaomdmoura/crewAI

Key features

1. Role-Based Agent Design: This allows you to define specific roles, goals, and tools for each individual AI agent. Think of them as crew members with specialized skills contributing to a common goal.

2. Autonomous Inter-Agent Delegation: Agents can communicate, share information, and autonomously delegate tasks among themselves. This streamlines problem-solving and allows for dynamic task distribution.

3. Flexible Task Management: You can design tasks with custom tools and dynamically assign them to specific agents based on their expertise. This ensures the right tool for the job gets used.

4. Process-Driven: CrewAI currently supports sequential and hierarchical workflows, akin to following a set of instructions or a decision tree. More complex processes, such as consensual and autonomous ones, are under development.

5. Local Model Integration: You can run agents on local models served through tools like Ollama, which helps with specific tasks or data-privacy needs (see the sketch right after this list).

6. Open-Source: CrewAI is built with an open-source approach, allowing for community contributions and customization.
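For the local-model feature above, the usual route is LangChain's Ollama wrapper, which the full example further down also mentions in its comments. The snippet below is only a minimal sketch, assuming Ollama is running locally and the openhermes model has already been pulled; the agent fields mirror the ones used in the Getting Started code.

```python
# A minimal sketch of wiring a local model into CrewAI through Ollama.
# Assumes Ollama is running locally and `ollama pull openhermes` has been done.
from crewai import Agent
from langchain_community.llms import Ollama

ollama_llm = Ollama(model="openhermes")

local_researcher = Agent(
  role='Research Analyst',
  goal='Summarize recent AI developments',
  backstory='You distill dense research into short, readable notes.',
  llm=ollama_llm,  # route this agent's reasoning through the local model
  verbose=True
)
```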

Advantages of CrewAI:

  • Flexibility: Offers a balance between the ease of building conversational agents and the structure of process-driven approaches.
  • Adaptability: Dynamic processes allow for seamless integration into development and production workflows.
  • Collaboration: Enables teamwork among AI agents, enhancing efficiency and tackling complex challenges.

CrewAI vs AutoGen

| Feature | CrewAI | AutoGen |
| --- | --- | --- |
| Philosophy | Process-driven, structured workflows | Emergent behavior, collaborative |
| Agent Roles | Explicitly defined with specific capabilities | Dynamically assigned based on task requirements |
| Task Management | Manually designed and assigned | Agents negotiate and select tasks themselves |
| Control & Predictability | High, suitable for complex workflows | Lower, more adaptable |
| Learning & Adaptation | Less emphasis on on-the-fly learning | Agents can learn and adapt based on interactions |
| Ease of Use | Requires more upfront planning | Easier to start with, but may be complex for specific tasks |
| Open Source | Yes | Yes |
| Best for | Structured workflows, clear roles, predictable outcomes | Flexible, adaptable systems, collaborative problem-solving |

Getting Started

1. Installation

```bash
pip install crewai langchain_community langchain
```

The example below also uses DuckDuckGo Search. You can install it with pip as well:

```bash
pip install duckduckgo-search
```
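
Before wiring the tool into an agent, you can run a quick sanity check that the search tool imports and returns results. This is optional; the query string is only an illustration.

```python
# Optional sanity check: confirm the DuckDuckGo search tool works on its own.
from langchain_community.tools import DuckDuckGoSearchRun

search_tool = DuckDuckGoSearchRun()
print(search_tool.run("latest advancements in AI in 2024"))  # prints raw search results as text
```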

2. Setting Up Your Crew

```python
import os
from crewai import Agent, Task, Crew, Process

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

# You can choose to use a local model through Ollama for example. See ./docs/how-to/llm-connections.md for more information.
# from langchain_community.llms import Ollama
# ollama_llm = Ollama(model="openhermes")

# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search

from langchain_community.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()

# Define your agents with roles and goals
researcher = Agent(
  role='Senior Research Analyst',
  goal='Uncover cutting-edge developments in AI and data science',
  backstory="""You work at a leading tech think tank.
  Your expertise lies in identifying emerging trends.
  You have a knack for dissecting complex data and presenting actionable insights.""",
  verbose=True,
  allow_delegation=False,
  tools=[search_tool]
  # You can pass an optional llm attribute specifying which model you want to use.
  # It can be a local model through Ollama / LM Studio or a remote
  # model like OpenAI, Mistral, Anthropic or others (https://python.langchain.com/docs/integrations/llms/)
  #
  # Examples:
  #
  # from langchain_community.llms import Ollama
  # llm=ollama_llm # was defined above in the file
  #
  # from langchain_openai import ChatOpenAI
  # llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)
)
writer = Agent(
  role='Tech Content Strategist',
  goal='Craft compelling content on tech advancements',
  backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
  You transform complex concepts into compelling narratives.""",
  verbose=True,
  allow_delegation=True,
  # (optional) llm=ollama_llm
)

# Create tasks for your agents
task1 = Task(
  description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
  Identify key trends, breakthrough technologies, and potential industry impacts.
  Your final answer MUST be a full analysis report""",
  agent=researcher
)

task2 = Task(
  description="""Using the insights provided, develop an engaging blog
  post that highlights the most significant AI advancements.
  Your post should be informative yet accessible, catering to a tech-savvy audience.
  Make it sound cool, avoid complex words so it doesn't sound like AI.
  Your final answer MUST be the full blog post of at least 4 paragraphs.""",
  agent=writer
)

# Instantiate your crew with a sequential process
crew = Crew(
  agents=[researcher, writer],
  tasks=[task1, task2],
  process=Process.sequential,  # tasks are handed off in order, one after another
  verbose=2, # You can set it to 1 or 2 for different logging levels
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)
```
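
The crew above uses the sequential process. If you want the hierarchical workflow mentioned in the features list, CrewAI expects a manager LLM that plans, delegates, and reviews the agents' work. The sketch below reuses the researcher, writer, task1, and task2 objects defined above and assumes an OpenAI key is set; treat it as an outline of the idea rather than a drop-in replacement.

```python
# A sketch of the hierarchical process: a manager model coordinates the other agents.
from crewai import Crew, Process
from langchain_openai import ChatOpenAI

hierarchical_crew = Crew(
  agents=[researcher, writer],
  tasks=[task1, task2],
  process=Process.hierarchical,
  manager_llm=ChatOpenAI(model_name="gpt-4", temperature=0.7),  # plans and delegates tasks
  verbose=2
)

result = hierarchical_crew.kickoff()
print(result)
```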

Examples

You can test different real-life examples of AI crews in the crewAI-examples repo on GitHub: https://github.com/joaomdmoura/crewAI-examples

This post is licensed under CC BY 4.0 by the author.