Integrations

Integrating with ChatGPT

You can integrate Fastino’s Personalization API directly into ChatGPT as a Model Context Protocol (MCP) server. This allows ChatGPT to retrieve user context, answer personalized questions, and update memory in real time — all using your existing Fastino workspace and endpoints.


When connected, ChatGPT can:

  • Retrieve top-k user memories through /chunks

  • Query personalized information with /query

  • Update user memory via /ingest

  • Manage data securely and transparently with /delete

Overview

Fastino MCP Integration for ChatGPT gives ChatGPT the ability to “remember” users across sessions by syncing with Fastino’s world models.
Each user’s memory, preferences, and tone are dynamically fetched through the MCP connection — allowing ChatGPT to behave like a personalized, evolving assistant.

Architecture

Flow Overview

  1. ChatGPT loads your MCP manifest (fastino_mcp.json).

  2. Fastino MCP server exposes the personalization tools (retrieve_relevant_chunks, search_or_ask, update_memory).

  3. ChatGPT tools call Fastino API endpoints using your workspace’s API key.

  4. Responses are streamed back into ChatGPT’s context window.

Prerequisites

Before setting up the MCP integration, make sure you have:

  • A Fastino workspace and API key

  • Access to ChatGPT Team or Enterprise (MCP-enabled versions)

  • Node.js or Python runtime to host your MCP server

  • Your OpenAI ChatGPT MCP manifest and config file

MCP Manifest Example

Your MCP manifest describes the tools ChatGPT can use from Fastino:

fastino_mcp.json

{
  "schema_version": "v1",
  "name": "Fastino Personalization API",
  "description": "Provides personalized user memory and retrieval capabilities.",
  "tools": [
    {
      "name": "retrieve_relevant_chunks",
      "description": "Retrieve top-k relevant user context snippets.",
      "input_schema": {
        "type": "object",
        "properties": {
          "user_id": {"type": "string"},
          "conversation": {"type": "array"},
          "top_k": {"type": "integer", "default": 5}
        },
        "required": ["user_id"]
      }
    },
    {
      "name": "search_or_ask",
      "description": "Ask a natural-language question about the user's memory.",
      "input_schema": {
        "type": "object",
        "properties": {
          "user_id": {"type": "string"},
          "question": {"type": "string"}
        },
        "required": ["user_id", "question"]
      }
    },
    {
      "name": "update_memory",
      "description": "Update the user's profile or ingest new context.",
      "input_schema": {
        "type": "object",
        "properties": {
          "user_id": {"type": "string"},
          "content": {"type": "string"}
        },
        "required": ["user_id", "content"]
      }
    }
  ]
}

Place this file in your MCP server directory (e.g. /fastino_mcp/fastino_mcp.json).
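Before wiring the manifest into ChatGPT, it can help to sanity-check it. The sketch below embeds a trimmed copy of the manifest and verifies that every tool requires a user_id; in practice you would load the file from disk with json.load:

```python
import json

# Trimmed copy of fastino_mcp.json embedded for illustration;
# in practice, load the file from your MCP server directory.
manifest_text = """
{
  "schema_version": "v1",
  "name": "Fastino Personalization API",
  "tools": [
    {"name": "retrieve_relevant_chunks", "input_schema": {"required": ["user_id"]}},
    {"name": "search_or_ask", "input_schema": {"required": ["user_id", "question"]}},
    {"name": "update_memory", "input_schema": {"required": ["user_id", "content"]}}
  ]
}
"""

manifest = json.loads(manifest_text)

# Every Fastino tool takes a user_id, so this check should hold
# for any manifest built from the example above.
for tool in manifest["tools"]:
    assert "user_id" in tool["input_schema"]["required"]

print([t["name"] for t in manifest["tools"]])
# → ['retrieve_relevant_chunks', 'search_or_ask', 'update_memory']
```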

Example: MCP Server (Python)

Below is a minimal Python server exposing Fastino as an MCP endpoint for ChatGPT.

from fastapi import FastAPI, Request
import os
import requests

app = FastAPI()

FASTINO_API = "https://api.fastino.ai"
# Read the key from the environment; the fallback is a placeholder for local testing.
FASTINO_KEY = os.environ.get("FASTINO_API_KEY", "sk_live_123")
HEADERS = {"Authorization": f"x-api-key: {FASTINO_KEY}", "Content-Type": "application/json"}

@app.post("/retrieve_relevant_chunks")
async def retrieve_relevant_chunks(request: Request):
    # Forward the tool call to Fastino's /chunks endpoint.
    body = await request.json()
    r = requests.post(f"{FASTINO_API}/chunks", json=body, headers=HEADERS, timeout=30)
    return r.json()

@app.post("/search_or_ask")
async def search_or_ask(request: Request):
    # Forward the tool call to Fastino's /query endpoint.
    body = await request.json()
    r = requests.post(f"{FASTINO_API}/query", json=body, headers=HEADERS, timeout=30)
    return r.json()

@app.post("/update_memory")
async def update_memory(request: Request):
    # Wrap the new content as a document and ingest it via /ingest.
    body = await request.json()
    payload = {
        "user_id": body.get("user_id"),
        "source": "chatgpt_mcp",
        "documents": [
            {"doc_id": "doc_auto", "kind": "note", "title": "ChatGPT Memory Update", "content": body.get("content")}
        ]
    }
    r = requests.post(f"{FASTINO_API}/ingest", json=payload, headers=HEADERS, timeout=30)
    return r.json()
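If you want to unit-test the /ingest payload shape without calling the API, the construction inside update_memory can be factored into a pure helper. This is a sketch; make_ingest_payload is a name introduced here for illustration, not part of Fastino's SDK:

```python
def make_ingest_payload(user_id: str, content: str) -> dict:
    """Build the /ingest request body used by the update_memory tool."""
    return {
        "user_id": user_id,
        "source": "chatgpt_mcp",
        "documents": [
            {
                "doc_id": "doc_auto",
                "kind": "note",
                "title": "ChatGPT Memory Update",
                "content": content,
            }
        ],
    }

payload = make_ingest_payload("user_123", "Focus block moved to 1-4 PM.")
assert payload["documents"][0]["content"] == "Focus block moved to 1-4 PM."
```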

Launch the server (assuming the code above is saved as main.py):

uvicorn main:app --host 0.0.0.0 --port 8080

Example: ChatGPT MCP Config

To connect ChatGPT to your Fastino MCP server, create a configuration file:

fastino_mcp.config.json

{
  "servers": [
    {
      "name": "Fastino MCP Server",
      "url": "http://localhost:8080",
      "manifest": "http://localhost:8080/fastino_mcp.json"
    }
  ]
}

Once this file is loaded into ChatGPT’s MCP configuration panel, the Fastino tools will appear in ChatGPT’s toolbox, ready to use during chats.

Example: Conversation Flow

  1. User: “Remind me when I usually do focus work.”

  2. ChatGPT calls the search_or_ask tool with the user's ID and the question.

  3. Fastino returns the matching memory from the user's world model.
  4. ChatGPT displays: “You typically do your deep work from 9 to 12, so I’ll block that off.”
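The exchange in steps 2–3 might look like the following; the response fields are illustrative, and your workspace's /query output may differ.

Request to search_or_ask:

{
  "user_id": "user_123",
  "question": "When does this user usually do focus work?"
}

Example response:

{
  "answer": "The user typically does deep work from 9 AM to 12 PM."
}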

Example: Updating Memory from ChatGPT

If a user says “Move my focus block to 1–4 PM,” ChatGPT can call the update_memory tool.


This triggers the /update_memory tool, which ingests the new context via Fastino’s /ingest endpoint.
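As a sketch, the tool call body for this update might look like the following (the user_id and wording are illustrative):

{
  "user_id": "user_123",
  "content": "Focus block moved to 1-4 PM."
}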

Error Handling

Fastino returns consistent, structured errors for MCP integrations:
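For example, an error response might look like the following; the field names here are an assumption for illustration, so check the actual error responses returned by your workspace:

{
  "error": {
    "code": "unauthorized",
    "message": "Invalid or missing API key."
  }
}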


Your MCP server should return the same structure to ChatGPT for transparency.

Authentication

All MCP tool calls must include:

Authorization: x-api-key: <FASTINO_API_KEY>

We recommend storing the key as an environment variable instead of hardcoding it:

export FASTINO_API_KEY=sk_live_123

Read it in your server with os.environ["FASTINO_API_KEY"].

If your workspace supports multiple users, you can also issue scoped keys that limit API access by endpoint or tool type.

Security Notes

  • ChatGPT never directly stores your API key; it uses MCP routing for each request.

  • Fastino encrypts all data in transit (TLS 1.3) and at rest (AES-256).

  • You can disable or revoke individual MCP tools at any time from your Fastino dashboard.

  • For GDPR compliance, all interactions through MCP are logged and deletable via /delete.
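A deletion request via /delete might look like the following; the payload shown is an assumption for illustration, so consult your workspace's API reference for the exact schema:

{
  "user_id": "user_123"
}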

Use Cases

  • Personalized ChatGPT Assistants: ChatGPT retrieves context and tone from Fastino for each user.

  • Memory-Enhanced GPTs: Persistent profiles and preferences across GPT sessions.

  • Multi-Agent Context Sharing: Sync memory between ChatGPT, Claude, and Pioneer via the same MCP schema.

  • Workspace Knowledge Transfer: Share world-model context across multiple connected apps securely.

Best Practices

  • Use stable, deterministic summaries (/summary) for initialization.

  • Limit top-k results to 3–5 snippets to keep responses focused and latency low.

  • Refresh context from Fastino after each ChatGPT session so new memories carry forward.

  • Use scoped keys for each workspace or customer environment.

  • Keep the MCP manifest versioned and include descriptions for each tool.

  • Set up logging and request limits for observability.

Example: End-to-End Setup Summary

  1. Run Fastino MCP Server (Python or Node).

  2. Expose manifest at /fastino_mcp.json.

  3. Add configuration to ChatGPT’s MCP settings.

  4. ChatGPT connects and displays Fastino tools.

  5. User interacts, and ChatGPT retrieves or updates memory in real time.

  6. All changes sync with the user’s Fastino world model.

Summary

Integrating Fastino with ChatGPT via MCP transforms ChatGPT into a personalized, memory-aware assistant.
Your MCP server acts as the secure bridge between ChatGPT and Fastino’s world models — ensuring every interaction is grounded in user-specific data while remaining compliant, explainable, and adaptive.

Next, continue to Personalization Use Cases → Overview to see how these integrations enable proactive and context-rich AI assistants.
