Chapter 1: ChatModel and Message (Console)
Introduction to the Eino Framework
What is Eino?
Eino is a Go-based AI application development framework (Agent Development Kit) designed to help developers build AI applications that are extensible and maintainable.
What problems does Eino solve?
- Model abstraction: A unified interface over different LLM providers (OpenAI, Ark, Claude, etc.). Switching models does not require business-code changes.
- Capability composition: Replaceable, composable capability units via the Component interfaces (chat, tools, retrieval, etc.).
- Orchestration framework: Agent, Graph, and Chain abstractions for complex multi-step AI workflows.
- Runtime support: Streaming output, interrupt/resume, state management, and callback observability.
The main repositories:
- eino (this repo): core library, defines interfaces, orchestration abstractions, and ADK
- eino-ext: extensions, concrete implementations of components (OpenAI, Ark, Milvus, etc.)
- eino-examples: example code, including this quickstart series
ChatWithEino: An Assistant That Talks With Eino Docs
What is ChatWithEino?
ChatWithEino is an assistant built with the Eino framework to help developers learn Eino and write Eino code. It answers questions based on the Eino repositories’ source code, comments, and examples.
Core capabilities:
- Conversational interaction: Understand Eino-related questions and provide clear answers
- Code access: Read Eino source code, comments, and examples to answer based on real implementations
- Persistent sessions: Multi-turn conversations with remembered context, recoverable across processes
- Tool calling: Execute operations like file reads and code search
Architecture:
- ChatModel: Communicate with LLMs (OpenAI, Ark, Claude, etc.)
- Tool: Capability extensions such as filesystem access and code search
- Memory: Persist conversation history
- Agent: A unified execution framework coordinating all components
Quickstart Series: Build ChatWithEino From Scratch
This series starts from the simplest ChatModel call and incrementally builds a complete ChatWithEino Agent.
Learning path:
| Chapter | Topic | What you build | Capability |
| --- | --- | --- | --- |
| Chapter 1 | ChatModel and Message | Understand the Component abstraction, implement a single-turn chat | Basic chat |
| Chapter 2 | Agent and Runner | Introduce the execution abstraction, implement multi-turn chat | Session management |
| Chapter 3 | Memory and Session | Persist chat history and support session recovery | Persistence |
| Chapter 4 | Tool and filesystem | Add file-access capability and read source code | Tool calling |
| Chapter 5 | Middleware | Middleware mechanism to handle cross-cutting concerns | Extensibility |
| Chapter 6 | Callback | Callback mechanism to observe Agent execution | Observability |
| Chapter 7 | Interrupt and Resume | Interrupt/resume for long-running tasks | Reliability |
| Chapter 8 | Graph and Tool | Use Graph to orchestrate complex workflows | Advanced orchestration |
| Chapter 9 | A2UI | Integration from Agent to UI | Production readiness |
Why this design?
Each chapter adds one core capability on top of the previous one, so you can:
- Understand each component’s role: Introduce features gradually instead of all at once
- See how the architecture evolves: From simple to complex, and why each abstraction exists
- Build practical skills: Every chapter includes runnable code for hands-on practice
Goal of this chapter: understand Eino’s Component abstraction, call a ChatModel with minimal code (with streaming output), and learn the basic usage of schema.Message.
Code Location
- Entry code: cmd/ch01/main.go
Why a Component Interface?
Eino defines a set of Component interfaces (ChatModel, Tool, Retriever, Loader, etc.). Each interface describes a replaceable capability:
type BaseChatModel interface {
    Generate(ctx context.Context, input []*schema.Message, opts ...Option) (*schema.Message, error)
    Stream(ctx context.Context, input []*schema.Message, opts ...Option) (
        *schema.StreamReader[*schema.Message], error)
}
Benefits of interfaces:
- Replaceable implementations: eino-ext provides OpenAI, Ark, Claude, Ollama, and more. Business code depends only on the interface; switching models only changes construction.
- Composable orchestration: Agent/Graph/Chain depend only on Component interfaces, not implementations. Replace OpenAI with Ark without changing orchestration code.
- Mock-friendly testing: Interfaces are easy to mock; unit tests do not need real model calls.
This chapter focuses only on ChatModel. Later chapters will introduce Tool, Retriever, and more.
schema.Message: The Basic Unit of a Conversation
Message is the basic structure for conversation data in Eino:
type Message struct {
    Role      RoleType   // system / user / assistant / tool
    Content   string     // text content
    ToolCalls []ToolCall // only assistant messages may have this
    // ...
}
Common constructors:
schema.SystemMessage("You are a helpful assistant.")
schema.UserMessage("What is the weather today?")
schema.AssistantMessage("I don't know.", nil) // the second argument is ToolCalls
schema.ToolMessage("tool result", "call_id")
Role semantics:
- system: system instruction, usually first in the message list
- user: user input
- assistant: model response
- tool: tool execution result (used in later chapters)
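A typical single-turn exchange puts these roles in order. The sketch below uses local stand-ins for `schema.Message` and its constructors (defined here only for illustration; the real helpers live in eino's schema package) to show the resulting message list:

```go
package main

import "fmt"

// Minimal stand-in for schema.Message, for illustration only.
type Message struct {
	Role    string
	Content string
}

// Local equivalents of schema.SystemMessage / UserMessage / AssistantMessage.
func SystemMessage(s string) *Message    { return &Message{Role: "system", Content: s} }
func UserMessage(s string) *Message      { return &Message{Role: "user", Content: s} }
func AssistantMessage(s string) *Message { return &Message{Role: "assistant", Content: s} }

func main() {
	// system first, then the user's question, then the model's reply.
	msgs := []*Message{
		SystemMessage("You are a helpful assistant."),
		UserMessage("What is the weather today?"),
		AssistantMessage("I don't know."),
	}
	for _, m := range msgs {
		fmt.Printf("[%s] %s\n", m.Role, m.Content)
	}
}
```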
Prerequisites
Get the Code
git clone https://github.com/cloudwego/eino-examples.git
cd eino-examples/quickstart/chatwitheino
- Go version: Go 1.21+ (see go.mod)
- A callable ChatModel (default: OpenAI; Ark is also supported)
Option A: OpenAI (Default)
export OPENAI_API_KEY="..."
export OPENAI_MODEL="gpt-4.1-mini" # OpenAI model (2025), or gpt-4o / gpt-4o-mini, etc.
# Optional:
# OPENAI_BASE_URL (proxy or compatible service)
# OPENAI_BY_AZURE=true (use Azure OpenAI)
Option B: Ark
export MODEL_TYPE="ark"
export ARK_API_KEY="..."
export ARK_MODEL="..."
# Optional: ARK_BASE_URL
Run
From eino-examples/quickstart/chatwitheino:
go run ./cmd/ch01 -- "In one sentence, what problem does Eino’s Component design solve?"
Example output (streaming print):
[assistant] Eino’s Component design defines unified interfaces...
What the Entry Code Does
In execution order:
- Create the ChatModel: select OpenAI or Ark based on the MODEL_TYPE environment variable
- Build input messages: SystemMessage(instruction) + UserMessage(query)
- Call Stream: all ChatModel implementations must support Stream(), which returns StreamReader[*Message]
- Print output: iterate over the StreamReader and print assistant chunks
Key snippet (note: this is a simplified excerpt and not directly runnable; see the full code in cmd/ch01/main.go):
// Build input
messages := []*schema.Message{
    schema.SystemMessage(instruction),
    schema.UserMessage(query),
}

// Call Stream (all ChatModels must implement this)
stream, err := cm.Stream(ctx, messages)
if err != nil {
    log.Fatal(err)
}
defer stream.Close()

for {
    chunk, err := stream.Recv()
    if errors.Is(err, io.EOF) {
        break
    }
    if err != nil {
        log.Fatal(err)
    }
    fmt.Print(chunk.Content)
}
Summary
- Component interfaces: define replaceable, composable, and testable capability boundaries
- Message: the basic unit of conversation data, with semantics defined by role
- ChatModel: the fundamental Component, providing Generate and Stream
- Implementation choice: switch between OpenAI/Ark via env/config without changing business code