Personalization Use Cases

Learning from Corrections and Outcomes

The Learning from Corrections and Outcomes pattern enables your AI agents to improve automatically from user feedback. By leveraging the Fastino Personalization API, agents can capture corrections, results, and preferences — turning them into structured signals that refine the user’s world model.

This creates a feedback loop where every interaction makes the agent smarter, safer, and more aligned with the user’s intent.

Overview

Every user correction or behavioral outcome provides valuable data about:

  • How the user prefers things to be done.

  • What decisions they consider good or poor.

  • Which communication or reasoning styles work best for them.

Fastino treats these signals as first-class personalization events, allowing your agents to adjust tone, reasoning, and decisions without retraining or fine-tuning.

Core Concepts

Concept                | Description
Correction             | Explicit user feedback that something was wrong, misleading, or undesired.
Outcome                | Implicit or explicit confirmation that an action was successful, satisfying, or preferred.
Adaptive Learning Loop | The process of ingesting these feedback signals and updating the user's model.

Example Flow

  1. Agent generates a suggestion or performs an action.

  2. User provides feedback (“That’s not right” or “Perfect”).

  3. Agent sends the correction or outcome back to Fastino via /ingest.

  4. Fastino updates the user’s memory and summaries to reflect this new preference.

  5. Future /query responses reflect the learned behavior.

Example: Capturing a Correction

When a user corrects an agent’s decision or output, log it as a structured feedback event.

POST /ingest
{
  "user_id": "usr_42af7c",
  "source": "assistant",
  "events": [
    {
      "event_id": "evt_feedback_001",
      "type": "correction",
      "timestamp": "2025-10-27T17:00:00Z",
      "metadata": {
        "context": "meeting_scheduling",
        "previous_action": "Scheduled team sync for 2 PM",
        "user_feedback": "Prefer 3 PM instead"
      },
      "content": "User prefers 3 PM meetings for afternoon sessions."
    }
  ]
}

Response

{
  "ingested": { "events": 1, "documents": 0 },
  "updated_at": "2025-10-27T17:00:01Z"
}

Example: Logging Positive Outcomes

Agents can also log successful actions or confirmations to reinforce desirable behavior.

POST /ingest
{
  "user_id": "usr_42af7c",
  "source": "scheduler_agent",
  "events": [
    {
      "event_id": "evt_feedback_002",
      "type": "outcome",
      "timestamp": "2025-10-27T18:00:00Z",
      "metadata": {
        "context": "meeting_reschedule",
        "status": "success"
      },
      "content": "User approved rescheduling to 3 PM — preferred new routine confirmed."
    }
  ]
}

This feedback helps reinforce positive decisions and refine scheduling predictions.
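
With the log_feedback helper defined in the Example Implementation (Python) section below, the same outcome can be logged in a single call. Note that the helper's source field ("agent_runtime") differs from the scheduler_agent value shown above and is purely illustrative:

log_feedback(
    "usr_42af7c",
    "outcome",
    "User approved rescheduling to 3 PM.",
    "meeting_reschedule",
)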

Using Feedback to Improve Future Queries

When queried later, Fastino incorporates both corrections and outcomes:

POST /query
{
  "user_id": "usr_42af7c",
  "question": "When does Ash prefer to hold team meetings?"
}

Response

{
  "answer": "Ash prefers meetings around 3 PM, as confirmed by recent user feedback."
}

Fastino merges explicit corrections with inferred behavioral outcomes to provide accurate, up-to-date context.
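
A minimal client-side sketch of this query step. The API-key header scheme is assumed from the key format used elsewhere on this page, and query_user is an illustrative helper name, not part of the API:

import requests

BASE_URL = "https://api.fastino.ai"
# Substitute your own API key; the header name is an assumption based on the key format.
headers = {"x-api-key": "sk_live_456", "Content-Type": "application/json"}

def query_user(user_id, question):
    """Ask Fastino a question answered from the user's learned preferences."""
    payload = {"user_id": user_id, "question": question}
    r = requests.post(f"{BASE_URL}/query", json=payload, headers=headers)
    r.raise_for_status()
    return r.json()["answer"]

print(query_user("usr_42af7c", "When does Ash prefer to hold team meetings?"))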

Continuous Learning Loop

Step | Agent Action                               | Fastino Role
1    | Perform an action or generate output       | -
2    | Receive user feedback (positive/negative)  | -
3    | Log feedback via /ingest                   | Updates memory and embeddings
4    | Retrieve updated summary or context        | Reflects learned preferences
5    | Adjust future behavior                     | Agent aligns with refined user model
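
The loop above can be wired into an agent runtime in a few lines. The sketch below is illustrative only: the naive keyword check for approval stands in for your own feedback classification, and it reuses query_user from the query sketch above and log_feedback from the Example Implementation (Python) section below:

def run_with_feedback(user_id, proposed_action, user_reply, context):
    """One pass of the loop: the agent has already acted (step 1) and the user replied (step 2)."""
    # 3. Classify the reply and log it via /ingest (naive keyword check, for illustration only).
    approved = user_reply.lower().startswith(("perfect", "great", "thanks", "looks good"))
    kind = "outcome" if approved else "correction"
    log_feedback(user_id, kind, f"{proposed_action} -> user said: {user_reply}", context)
    # 4. Retrieve the updated context so the next action reflects the learned preference.
    return query_user(user_id, f"What does the user prefer for {context}?")

# 5. The caller adjusts its next action based on the returned answer.
next_context = run_with_feedback(
    "usr_42af7c",
    "Scheduled team sync for 2 PM",
    "That's not right, prefer 3 PM",
    "meeting_scheduling",
)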

Example: Updating Summaries After Feedback

Fastino automatically updates user summaries to incorporate feedback signals.

GET /summary?user_id=usr_42af7c&purpose=work-style

Response

{
  "summary": "Ash prefers concise communication, meetings after 3 PM, and appreciates proactive rescheduling suggestions."
}

Corrections and outcomes are blended into these deterministic summaries, ensuring LLMs have consistent access to the latest user behavior.
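
An agent can pull this summary with a single GET request. A minimal sketch reusing the BASE_URL and headers from the query sketch above; get_summary is an illustrative helper name:

def get_summary(user_id, purpose):
    """Fetch the deterministic user summary for a given purpose."""
    r = requests.get(
        f"{BASE_URL}/summary",
        params={"user_id": user_id, "purpose": purpose},
        headers=headers,
    )
    r.raise_for_status()
    return r.json()["summary"]

work_style = get_summary("usr_42af7c", "work-style")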

Advanced Pattern: Weighted Feedback

You can optionally assign weights or confidence scores to feedback events for more nuanced learning.

{
  "user_id": "usr_42af7c",
  "events": [
    {
      "event_id": "evt_feedback_003",
      "type": "correction",
      "timestamp": "2025-10-27T18:15:00Z",
      "metadata": {
        "context": "document_style",
        "confidence": 0.9
      },
      "content": "User prefers short, bullet-point summaries over paragraphs."
    }
  ]
}

Fastino’s embedding layer uses these confidence values to weight updates, preventing over-correction or volatility in adaptive learning.
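
On the client side, the confidence score simply travels in the event metadata. Below is a hedged variant of the log_feedback helper from the Example Implementation section, reusing the BASE_URL and headers from the query sketch above:

import datetime

def log_weighted_feedback(user_id, feedback_type, content, context, confidence):
    """Like log_feedback, but carries a confidence score (0-1) in the event metadata."""
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        "user_id": user_id,
        "source": "agent_runtime",
        "events": [
            {
                "event_id": f"evt_{now.isoformat()}",
                "type": feedback_type,
                "timestamp": now.isoformat().replace("+00:00", "Z"),
                "metadata": {"context": context, "confidence": confidence},
                "content": content,
            }
        ],
    }
    r = requests.post(f"{BASE_URL}/ingest", json=payload, headers=headers)
    r.raise_for_status()
    return r.json()

log_weighted_feedback(
    "usr_42af7c",
    "correction",
    "User prefers short, bullet-point summaries over paragraphs.",
    "document_style",
    confidence=0.9,
)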

Integrating Feedback into Multi-Agent Systems

In multi-agent environments (e.g., email, scheduling, research), feedback signals should be shared across all connected agents through Fastino.

Example:

  • Calendar agent logs correction: “Prefers afternoon meetings.”

  • Research agent retrieves updated preference when preparing context.

  • All agents remain synchronized via the shared personalization layer.

This ensures cross-tool learning consistency.
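
Concretely, sharing works by having every agent read and write against the same user_id. A sketch using the illustrative helpers from this page (log_feedback from the Example Implementation below and query_user from the query sketch above):

USER_ID = "usr_42af7c"

# Calendar agent logs the correction it observed.
log_feedback(
    USER_ID,
    "correction",
    "Prefers afternoon meetings, ideally around 3 PM.",
    "meeting_scheduling",
)

# Research agent, running as a separate process or service, retrieves the same
# preference from the shared personalization layer when preparing its context.
scheduling_context = query_user(USER_ID, "What scheduling preferences should I account for?")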

Example Implementation (Python)

import datetime
import requests

BASE_URL = "https://api.fastino.ai"
# Authentication is assumed to use an API-key header; substitute your own key.
headers = {"x-api-key": "sk_live_456", "Content-Type": "application/json"}

def log_feedback(user_id, feedback_type, content, context):
    """Log a correction or outcome event to Fastino via /ingest."""
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        "user_id": user_id,
        "source": "agent_runtime",
        "events": [
            {
                "event_id": f"evt_{now.isoformat()}",  # timestamp-based ID for traceability
                "type": feedback_type,                  # "correction" or "outcome"
                "timestamp": now.isoformat().replace("+00:00", "Z"),
                "metadata": {"context": context},
                "content": content,
            }
        ],
    }
    r = requests.post(f"{BASE_URL}/ingest", json=payload, headers=headers)
    r.raise_for_status()
    return r.json()

# Example usage
log_feedback(
    "usr_42af7c",
    "correction",
    "User prefers task updates in Slack instead of email.",
    "communication_channel",
)

Use Cases

Use Case               | Description
Adaptive Scheduling    | Adjust meeting times based on repeated reschedules or confirmations.
Tone Mirroring         | Refine language style from feedback on generated emails or messages.
Decision Support       | Improve predictions based on past approval/rejection data.
Autonomous Agents      | Build reinforcement-like learning without retraining LLMs.
User Retention Metrics | Track satisfaction trends as implicit feedback signals.

Best Practices

  • Always include context metadata to improve clustering accuracy.

  • Separate positive outcomes (type=outcome) from negative corrections (type=correction).

  • Avoid ingesting trivial messages; use semantic deduplication (options.dedupe: true), as shown in the example after this list.

  • Regularly refresh summaries to surface learned insights.

  • Visualize feedback history for debugging and trust validation.

  • Combine corrections with transparent logs (see Action Boundaries & Transparency).
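
A hedged example of an /ingest request that opts into semantic deduplication. The placement of options as a top-level request field, along with the event values, is an assumption for illustration:

POST /ingest
{
  "user_id": "usr_42af7c",
  "source": "assistant",
  "options": { "dedupe": true },
  "events": [
    {
      "event_id": "evt_feedback_004",
      "type": "correction",
      "timestamp": "2025-10-27T19:30:00Z",
      "metadata": { "context": "meeting_scheduling" },
      "content": "User prefers 3 PM meetings for afternoon sessions."
    }
  ]
}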

Example: Deterministic Feedback Summary

Response

{
  "summary": "Recent feedback indicates Ash prefers meetings around 3 PM, concise responses, and context-aware scheduling. Negative feedback on verbose summaries has been incorporated."
}

This summary can be cached or embedded into prompts for downstream reasoning.
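
For example, an agent might fetch the summary once per session, cache it, and prepend it to its system prompt. A minimal sketch reusing get_summary from the summary sketch above; the purpose value and prompt wording are illustrative:

from functools import lru_cache

@lru_cache(maxsize=128)
def cached_summary(user_id, purpose="work-style"):
    """Fetch the deterministic summary once per (user, purpose) pair and reuse it across prompts."""
    return get_summary(user_id, purpose)

def build_system_prompt(user_id):
    """Prepend the cached summary to the system prompt for downstream reasoning."""
    return (
        "You are a scheduling assistant.\n"
        f"Known user preferences: {cached_summary(user_id)}\n"
        "Respect these preferences when proposing times and drafting replies."
    )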

Summary

The Learning from Corrections and Outcomes use case turns every user interaction into a learning opportunity.
By feeding feedback events and results into Fastino, agents continuously refine their understanding of each user’s preferences, tone, and decision logic — closing the loop between action, feedback, and adaptation.

Next, continue to Personalization Use Cases → Routine Prediction to explore how Fastino models and predicts user activity patterns over time.
