Integrating with MCP
The Model Context Protocol (MCP) allows Fastino’s Personalization API to plug directly into AI runtimes and agent frameworks — enabling models to retrieve, learn, and update personalized context in real time.
This integration gives any compliant agent the ability to:
- Retrieve relevant user memories (`retrieve_relevant_chunks`)
- Ask semantic questions about users (`search_or_ask`)
- Update contextual memory after an interaction (`update_memory`)
Overview
MCP is a lightweight protocol for connecting AI models to external tools and APIs.
By registering the Fastino Personalization API as an MCP toolset, your agents can dynamically recall user context, ground responses, and continuously improve from feedback.
Fastino exposes three MCP tools for user-level personalization:
| Tool | Description |
|---|---|
| `retrieve_relevant_chunks` | Retrieve the most relevant snippets of user memory for grounding |
| `search_or_ask` | Ask natural-language questions about a user's profile |
| `update_memory` | Send new observations or updates to the user's world model |
Prerequisites
Before integrating, make sure:
- You have a valid Fastino API key
- An MCP runtime is installed or accessible (e.g., OpenAI MCP, Anthropic's Tool Use runtime, or a custom agent shell)
- A Python or Node.js environment is configured for tool registration
- At least one user is registered via `/register`
Configuration
You can expose Fastino’s API to an MCP agent by registering it in your tool manifest or connector script.
Example: MCP Tool Manifest
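The exact manifest schema depends on your MCP runtime. The sketch below assumes a generic JSON tool manifest; the `endpoint` URL and input fields are illustrative placeholders, not documented values.

```json
{
  "name": "fastino_personalization",
  "description": "User-level personalization tools backed by Fastino",
  "endpoint": "https://api.fastino.example/v1",
  "tools": [
    {
      "name": "retrieve_relevant_chunks",
      "description": "Retrieve the most relevant snippets of user memory for grounding",
      "input_schema": {
        "type": "object",
        "properties": {
          "user_id": { "type": "string" },
          "query": { "type": "string" },
          "top_k": { "type": "integer" }
        },
        "required": ["user_id", "query"]
      }
    },
    {
      "name": "search_or_ask",
      "description": "Ask natural-language questions about a user's profile",
      "input_schema": {
        "type": "object",
        "properties": {
          "user_id": { "type": "string" },
          "question": { "type": "string" }
        },
        "required": ["user_id", "question"]
      }
    },
    {
      "name": "update_memory",
      "description": "Send new observations or updates to the user's world model",
      "input_schema": {
        "type": "object",
        "properties": {
          "user_id": { "type": "string" },
          "observation": { "type": "string" }
        },
        "required": ["user_id", "observation"]
      }
    }
  ]
}
```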
Example Integration (Python)
The following snippet demonstrates how an MCP agent can call Fastino tools directly during conversation.
Example Conversation Flow
1. The user asks: “When does Ash usually take meetings?”
2. The MCP runtime calls Fastino’s `search_or_ask` tool.
3. Fastino responds with the answer drawn from the user’s memory.
4. The agent incorporates this directly into its reasoning or response.
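As a concrete illustration, the `search_or_ask` exchange might look like the following; the field names and the sample answer are illustrative, not a documented response schema.

```json
{
  "tool": "search_or_ask",
  "input": {
    "user_id": "ash",
    "question": "When does Ash usually take meetings?"
  },
  "output": {
    "answer": "Ash usually takes meetings in the early afternoon, around 2 PM."
  }
}
```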
Common Patterns
| Pattern | Tool Used | Description |
|---|---|---|
| Grounding model prompts | `retrieve_relevant_chunks` | Retrieve memory snippets before generating responses |
| Personalized reasoning | `search_or_ask` | Ask clarifying questions about user goals or habits |
| Memory updates | `update_memory` | Log new insights or corrections automatically |
| Feedback loop | `update_memory` | Store feedback and confirm updated understanding |
Authentication
Each MCP tool must authenticate via the same bearer token scheme as standard Fastino API calls.
Example header:
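For example, each tool call carries the standard bearer header (placeholder key shown):

```
Authorization: Bearer YOUR_FASTINO_API_KEY
```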
We recommend using workspace-scoped keys for security and rotating them every 90 days.
Error Handling
Fastino MCP tools return standardized JSON errors consistent with the REST API.
MCP runtimes should catch and surface these gracefully within the conversation trace.
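One way to sketch that surfacing step, assuming an error payload shaped like `{"error": {"code": ..., "message": ...}}` — an assumption for illustration, not a documented contract:

```python
import json


def surface_tool_error(raw_response: str) -> str:
    """Turn a Fastino JSON error payload into a message an MCP runtime
    can surface in the conversation trace.

    The {"error": {"code": ..., "message": ...}} shape is an assumed
    example, not a documented schema.
    """
    try:
        payload = json.loads(raw_response)
    except json.JSONDecodeError:
        return "Fastino tool returned a non-JSON response."
    err = payload.get("error", {})
    code = err.get("code", "unknown")
    message = err.get("message", "no details")
    return f"Fastino tool error {code}: {message}"
```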
Best Practices
- Register the tools at runtime initialization, not per turn.
- Cache MCP tool responses for ephemeral session performance.
- Use a consistent `user_id` across all agents in a workspace.
- Implement the `update_memory` tool for long-running conversations to maintain continuity.
- Log and version memory updates for transparency.
- Restrict MCP tokens to read/write scopes depending on your agent type.
Example Use Case: Personalized Assistant
A chat agent can use Fastino’s MCP integration to remember preferences and style:
- `search_or_ask` → learns “User prefers concise async replies.”
- `update_memory` → saves new information like “Moved meetings to 2 PM.”
- `retrieve_relevant_chunks` → grounds the next message with past context.
Result: the agent behaves consistently across sessions, mirroring tone and timing preferences dynamically.
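The session flow above can be sketched with an in-memory stand-in: the real MCP tools would call Fastino's API, but here a plain list simulates the user's memory so the control flow is visible end to end. The naive keyword match stands in for Fastino's semantic retrieval.

```python
# In-memory stand-in for the user's world model (illustration only).
memory: list[str] = []


def update_memory(user_id: str, observation: str) -> None:
    """Save a new observation about the user."""
    memory.append(observation)


def retrieve_relevant_chunks(user_id: str, query: str) -> list[str]:
    """Naive keyword match in place of Fastino's semantic retrieval."""
    words = query.lower().split()
    return [m for m in memory if any(w in m.lower() for w in words)]


# Session 1: the agent learns two facts about the user.
update_memory("ash", "Moved meetings to 2 PM.")
update_memory("ash", "Prefers concise async replies.")

# Session 2: the agent grounds its next reply with past context.
chunks = retrieve_relevant_chunks("ash", "meetings")
```

The same three-call pattern (learn, save, retrieve) carries across sessions, which is what lets the agent mirror tone and timing preferences consistently.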
Summary
Integrating Fastino with MCP gives your agents true memory and adaptive reasoning — bridging static LLM responses with personalized, evolving context.
By registering Fastino’s personalization tools within MCP, your agents gain the ability to recall, learn, and refine their understanding of every user in real time.
Next, continue to Integrating with LlamaIndex to learn how to connect Fastino’s personalization layer to a local or enterprise retrieval pipeline.