Chain & Agent

Chain

  • Chains bring together different components of AI to create a cohesive system by setting up a sequence of calls between the components. These components can include models and memory.
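
  • A minimal sketch of the idea, assuming a hypothetical `call_llm` helper that wraps whatever model API is in use (this stub is reused in the later examples):

```python
# Minimal chain sketch. `call_llm` is a hypothetical stand-in for any
# chat/completion API; plug in a real client before running.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def translate_then_summarize(text: str) -> str:
    # A fixed sequence of calls where each component's output
    # becomes the next component's input.
    translated = call_llm(f"Translate to English:\n{text}")
    return call_llm(f"Summarize in two sentences:\n{translated}")
```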

Summarization Chain

  • A summarization chain consists of several components: a loader (Document or Web loader), HTML to Text, and a Text Splitter.

  • There are several strategies for implementing summarization with an LLM.

  • For large datasets, the model's token limit means the text must be split and processed in several steps.
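
  • As a rough illustration of the splitting step, here is a sketch that chunks a long text by an assumed word budget (a real pipeline would count tokens with the model's tokenizer):

```python
def split_into_chunks(text: str, max_words: int = 500) -> list[str]:
    # Approximate the model's token limit with a simple word budget.
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]
```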

Map Reduce

  • The map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step).
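
  • A minimal sketch of the pattern, reusing the hypothetical `call_llm` and `split_into_chunks` helpers from above (not any specific library's implementation):

```python
def map_reduce_summarize(text: str) -> str:
    chunks = split_into_chunks(text)
    # Map step: summarize each chunk independently.
    partial = [call_llm(f"Summarize:\n{chunk}") for chunk in chunks]
    # Reduce step: combine the partial summaries into a single output.
    return call_llm("Combine these summaries into one:\n" + "\n".join(partial))
```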

Refine

  • The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer.

  • Since the Refine chain only passes a single document to the LLM at a time, it is well-suited for tasks that require analyzing more documents than can fit in the model's context. The obvious tradeoff is that this chain will make far more LLM calls than, for example, the stuff documents chain.
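
  • A minimal sketch of the refine loop, using the same hypothetical helpers:

```python
def refine_summarize(text: str) -> str:
    chunks = split_into_chunks(text)
    # Start with an answer based on the first chunk only.
    answer = call_llm(f"Summarize:\n{chunks[0]}")
    # Iteratively update the answer, one additional document per call.
    for chunk in chunks[1:]:
        answer = call_llm(
            f"Current summary:\n{answer}\n\n"
            f"Refine it using this additional context:\n{chunk}"
        )
    return answer
```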

Stuff

  • The stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt and passes that prompt to an LLM.

  • It is suitable for small datasets that fit within the model's context window.
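
  • A minimal sketch, again with the hypothetical `call_llm` helper:

```python
def stuff_summarize(documents: list[str]) -> str:
    # Insert all documents into a single prompt; this only works when the
    # combined text fits inside the model's context window.
    stuffed = "\n\n".join(documents)
    return call_llm(f"Summarize the following documents:\n{stuffed}")
```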

Agent

  • Agents can interact with their environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals. Humans set the goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals.

  • They can determine the intent of a query and choose the best tools to use, while also continuing to learn and self-reflect.

  • An agent is mainly composed of a model, memory, and tools.

  • The model acts as the brain, memory acts as an external data source, and tools define the set of actions the agent can take.
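
  • A minimal sketch of how the three parts fit together, with a hypothetical tool registry and the `call_llm` stub from above; real frameworks add structured output parsing, retries, and error handling:

```python
tools = {
    # Stand-in tool; a real agent would call a search API here.
    "search": lambda query: f"(stub) results for {query!r}",
}
memory: list[str] = []  # external data the model can draw on between steps

def run_agent(goal: str, max_steps: int = 5) -> str:
    for _ in range(max_steps):
        # The model (the "brain") chooses the next action.
        decision = call_llm(
            f"Goal: {goal}\nHistory:\n" + "\n".join(memory) + "\n"
            f"Tools: {list(tools)}\n"
            "Reply as '<tool>|<input>' or 'finish|<answer>'."
        )
        name, arg = decision.split("|", 1)
        if name == "finish":
            return arg
        observation = tools[name](arg)
        memory.append(f"{name}({arg}) -> {observation}")
    return "stopped: step budget exhausted"
```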

ReAct

  • ReAct is inspired by the synergies between "acting" and "reasoning" which allow humans to learn new tasks and make decisions.

  • Because LLMs are capable of reasoning (understanding the meaning of a prompt) and acting (deciding the next step), ReAct prompts them to generate verbal reasoning traces and actions for a task. This allows the system to perform dynamic reasoning to create, maintain, and adjust plans for acting, while also enabling interaction with external environments.

  • ReAct helps to break a complex task into logical steps (reasoning) and perform operations, such as calling APIs or retrieving information (actions), to reach a solution.
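
  • A rough sketch of the Thought / Action / Observation loop that a ReAct prompt elicits; the output format and the `run_action` dispatcher are illustrative, not taken from a specific framework:

```python
def run_action(step: str) -> str:
    # Hypothetical tool dispatcher: parse the Action and call a real tool.
    return "(stub observation)"

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # The model emits a reasoning trace plus the next action.
        step = call_llm(
            transcript
            + "Write 'Thought: ...' then 'Action: search[query]' "
              "or 'Action: Finish[answer]'."
        )
        transcript += step + "\n"
        if "Finish[" in step:
            return step.split("Finish[", 1)[1].rstrip("]")
        # Execute the action and feed the observation back into the prompt.
        transcript += f"Observation: {run_action(step)}\n"
    return "no answer within the step budget"
```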

Tool Call

  • Tool calling allows a model to detect when one or more tools should be called and respond with the inputs that should be passed to those tools. In an API call, you can describe tools and have the model intelligently choose to output a structured object like JSON containing arguments to call these tools.
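
  • A sketch of what that looks like in practice, loosely following the OpenAI-style function/tool schema (exact field names vary by provider):

```python
# Tool description included in the API request.
tool_schemas = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# The model does not execute anything itself; it responds with a
# structured tool call such as:
tool_call = {"name": "get_weather", "arguments": '{"city": "Tokyo"}'}
# Your code runs the tool and sends the result back for a final answer.
```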

Multi-Agent Architecture

Sequential

  • The simplest flow: agents run one after another, each passing its output to the next.

Parallel

  • Suitable for breaking a complex task into independent subtasks that run concurrently, with their results combined afterwards.
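
  • A sketch of the fan-out / fan-in idea with a thread pool, reusing the hypothetical `call_llm` stub:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_agents(task: str, subtasks: list[str]) -> str:
    # Fan out: each subtask is handled by its own LLM call, concurrently.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(
            lambda sub: call_llm(f"Overall task: {task}\nSubtask: {sub}"),
            subtasks,
        ))
    # Fan in: merge the partial results into a single answer.
    return call_llm("Combine these partial results:\n" + "\n".join(results))
```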

Loop

  • An agent, or a cycle of agents, repeats a step until an exit condition is met (for example, until the output passes a check or a step limit is reached).

Hierarchical

  • A supervisor / manager LLM decides which task should be executed next.

  • After a worker finishes its task, control loops back to the manager LLM, which starts a new execution, until the manager decides the overall task is finished.
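
  • A sketch of the supervisor loop, with hypothetical worker agents keyed by name and the same `call_llm` stub:

```python
workers = {
    "researcher": lambda task: call_llm(f"Research: {task}"),
    "writer": lambda task: call_llm(f"Write: {task}"),
}

def supervise(goal: str, max_rounds: int = 5) -> str:
    notes = ""
    for _ in range(max_rounds):
        # The manager LLM picks the next worker and its task, or finishes.
        plan = call_llm(
            f"Goal: {goal}\nProgress so far:\n{notes}\n"
            f"Workers: {list(workers)}\n"
            "Reply as '<worker>|<task>' or 'finish|<answer>'."
        )
        name, payload = plan.split("|", 1)
        if name == "finish":
            return payload
        # Loop back: the worker's result is added to the manager's context.
        notes += f"{name}: {workers[name](payload)}\n"
    return notes
```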

Memory

  • Memory is a key part of AI chat services. The memory keeps a history of previous messages, allowing for an ongoing conversation with the AI, rather than every interaction starting fresh.

  • To add memory to your AI workflow, you can use either:

  • Window Buffer Memory: stores a customizable length of chat history for the current session. This is the easiest to get started with.

  • One of the memory services that n8n provides nodes for.
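
  • The underlying pattern is simple to sketch: keep only the last k messages of the session and prepend them to each new prompt (the n8n node handles this for you; the class below is just an illustration):

```python
from collections import deque

class WindowBufferMemory:
    """Keep only the most recent `k` messages of the current session."""

    def __init__(self, k: int = 5):
        self.buffer: deque[str] = deque(maxlen=k)

    def add(self, message: str) -> None:
        self.buffer.append(message)

    def context(self) -> str:
        return "\n".join(self.buffer)

memory = WindowBufferMemory(k=5)
memory.add("user: Hi, my name is Ada.")
memory.add("assistant: Hello Ada!")
prompt = memory.context() + "\nuser: What is my name?"  # history travels with the prompt
```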

Workflow vs Agent

  • Workflows are systems where LLMs and tools are orchestrated through predefined code paths.

  • Agents, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.

  • Think of it like cooking: A workflow is like following a recipe exactly, step by step. An agent is like a chef who decides on the spot how to prepare a dish based on the ingredients and what tastes best.

Agentic RAG

  • Agentic RAG identifies what’s needed to complete a task without waiting for explicit instructions. For instance, if it encounters an incomplete dataset or a question requiring additional context, it autonomously determines the missing elements and seeks them out.

  • Unlike traditional models that rely on static, pre-trained knowledge, agentic RAG dynamically accesses real-time data.
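
  • A rough sketch of the decision step that separates agentic RAG from a fixed retrieve-then-answer pipeline: the model itself decides whether, and what, to retrieve. `call_llm` and `search` are hypothetical stubs:

```python
def search(query: str) -> str:
    # Hypothetical retrieval step; replace with a vector-store or web lookup.
    return "(stub documents)"

def agentic_rag(question: str) -> str:
    # Let the model decide whether extra context is needed at all.
    decision = call_llm(
        f"Question: {question}\n"
        "If you can answer directly, reply 'answer'. "
        "Otherwise reply 'retrieve: <search query>'."
    )
    if decision.startswith("retrieve:"):
        query = decision.split(":", 1)[1].strip()
        context = search(query)
        return call_llm(f"Context:\n{context}\n\nQuestion: {question}")
    return call_llm(question)
```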
