Learning from Corrections and Outcomes
The Learning from Corrections and Outcomes pattern enables your AI agents to improve automatically from user feedback. By leveraging the Fastino Personalization API, agents can capture corrections, results, and preferences — turning them into structured signals that refine the user’s world model.
This creates a feedback loop where every interaction makes the agent smarter, safer, and more aligned with the user’s intent.
Overview
Every user correction or behavioral outcome provides valuable data about:
- How the user prefers things to be done.
- What decisions they consider good or poor.
- Which communication or reasoning styles work best for them.
Fastino treats these signals as first-class personalization events, allowing your agents to adjust tone, reasoning, and decisions without retraining or fine-tuning.
Core Concepts
| Concept | Description |
|---|---|
| Correction | Explicit user feedback that something was wrong, misleading, or undesired. |
| Outcome | Implicit or explicit confirmation that an action was successful, satisfying, or preferred. |
| Adaptive Learning Loop | The process of ingesting these feedback signals and updating the user's model. |
Example Flow
1. Agent generates a suggestion or performs an action.
2. User provides feedback ("That's not right" or "Perfect").
3. Agent sends the correction or outcome back to Fastino via `/ingest`.
4. Fastino updates the user's memory and summaries to reflect this new preference.
5. Future `/query` responses reflect the learned behavior.
Example: Capturing a Correction
When a user corrects an agent’s decision or output, log it as a structured feedback event.
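A minimal Python sketch using `requests`; the base URL, auth header, and exact field names below are illustrative assumptions, with only the `/ingest` endpoint, the `type=correction` event type, and the `context` metadata taken from this page:

```python
import requests

FASTINO_API = "https://api.fastino.example/v1"      # placeholder base URL (assumption)
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}   # auth scheme is an assumption

# Structured feedback event for an explicit user correction.
correction_event = {
    "user_id": "user_123",
    "type": "correction",                            # explicit negative feedback
    "content": "That's not right. Don't book meetings before noon.",
    "context": {"agent": "calendar", "action": "schedule_meeting"},  # clustering metadata
}

resp = requests.post(f"{FASTINO_API}/ingest", json=correction_event, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```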
Response
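The exact response schema may differ; a hypothetical acknowledgement could look roughly like this:

```python
# Hypothetical acknowledgement; field names are illustrative, not the confirmed schema.
response = {
    "status": "stored",
    "event_id": "evt_01",
    "type": "correction",
    "user_id": "user_123",
}
```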
Example: Logging Positive Outcomes
Agents can also log successful actions or confirmations to reinforce desirable behavior.
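A matching sketch for a positive signal, again with illustrative field names; only the `/ingest` endpoint and the `type=outcome` event type come from this page:

```python
import requests

FASTINO_API = "https://api.fastino.example/v1"      # placeholder base URL (assumption)
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}   # auth scheme is an assumption

# Positive outcome: the user confirmed the agent's action worked.
outcome_event = {
    "user_id": "user_123",
    "type": "outcome",                               # success / confirmation signal
    "content": "Perfect. The 3pm slot worked well.",
    "context": {"agent": "calendar", "action": "schedule_meeting"},
}

requests.post(f"{FASTINO_API}/ingest", json=outcome_event, headers=HEADERS).raise_for_status()
```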
This feedback helps reinforce positive decisions and refine scheduling predictions.
Using Feedback to Improve Future Queries
When queried later, Fastino incorporates both corrections and outcomes:
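For example, before proposing a new meeting time an agent might ask for scheduling context. The query parameters below are assumptions; only the `/query` endpoint itself is named on this page:

```python
import requests

FASTINO_API = "https://api.fastino.example/v1"      # placeholder base URL (assumption)
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}   # auth scheme is an assumption

query = {
    "user_id": "user_123",
    "query": "When does this user prefer meetings to be scheduled?",
}

context = requests.post(f"{FASTINO_API}/query", json=query, headers=HEADERS).json()
print(context)
```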
Response
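A hypothetical response combining the earlier correction with the confirmed outcome (shape and field names are illustrative):

```python
# Hypothetical response shape; actual fields may differ.
response = {
    "user_id": "user_123",
    "context": "Prefers afternoon meetings; avoid booking before noon.",
    "signals": [
        {"type": "correction", "content": "Don't book meetings before noon."},
        {"type": "outcome", "content": "The 3pm slot worked well."},
    ],
}
```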
Fastino merges explicit corrections with inferred behavioral outcomes to provide accurate, up-to-date context.
Continuous Learning Loop
| Step | Agent Action | Fastino Role |
|---|---|---|
| 1 | Perform an action or generate output | — |
| 2 | Receive user feedback (positive/negative) | — |
| 3 | Log feedback via `/ingest` | Updates memory and embeddings |
| 4 | Retrieve updated summary or context | Reflects learned preferences |
| 5 | Adjust future behavior | Agent aligns with refined user model |
Example: Updating Summaries After Feedback
Fastino automatically updates user summaries to incorporate feedback signals.
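How the refreshed summary is fetched is not specified here; as a sketch, assume it can be requested through `/query` (the parameters are assumptions):

```python
import requests

FASTINO_API = "https://api.fastino.example/v1"      # placeholder base URL (assumption)
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}   # auth scheme is an assumption

# Hypothetical: retrieve the refreshed summary through /query.
summary = requests.post(
    f"{FASTINO_API}/query",
    json={"user_id": "user_123", "query": "Summarize this user's current preferences."},
    headers=HEADERS,
).json()
print(summary)
```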
Response
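A hypothetical updated summary (wording and fields are illustrative):

```python
# Hypothetical updated summary; fields and wording are illustrative.
response = {
    "user_id": "user_123",
    "summary": "Prefers afternoon meetings (3pm slot confirmed); "
               "avoid bookings before noon.",
    "updated_from": ["correction", "outcome"],
}
```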
Corrections and outcomes are blended into these deterministic summaries, ensuring LLMs have consistent access to the latest user behavior.
Advanced Pattern: Weighted Feedback
You can optionally assign weights or confidence scores to feedback events for more nuanced learning.
Fastino’s embedding layer uses these confidence values to weight updates, preventing over-correction or volatility in adaptive learning.
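As a sketch, a confidence value could ride along with the feedback event; the `confidence` field name is an assumption:

```python
# Hypothetical weighted feedback event; the "confidence" field name is an assumption.
weighted_event = {
    "user_id": "user_123",
    "type": "correction",
    "content": "Slightly too formal. Keep emails a bit more casual.",
    "confidence": 0.6,   # softer signal: adjust, but avoid over-correcting
    "context": {"agent": "email", "action": "draft_reply"},
}
```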
Integrating Feedback into Multi-Agent Systems
In multi-agent environments (e.g., email, scheduling, research), feedback signals should be shared across all connected agents through Fastino.
Example:
- Calendar agent logs correction: “Prefers afternoon meetings.”
- Research agent retrieves updated preference when preparing context.
- All agents remain synchronized via the shared personalization layer.
This ensures cross-tool learning consistency.
Example Implementation (Python)
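A minimal, self-contained sketch of the loop described above. The base URL, auth header, and field names are assumptions; only `/ingest`, `/query`, and the `correction`/`outcome` event types come from this page:

```python
import requests

FASTINO_API = "https://api.fastino.example/v1"      # placeholder base URL (assumption)
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}   # auth scheme is an assumption
USER_ID = "user_123"


def ingest_feedback(feedback_type: str, content: str, context: dict | None = None) -> dict:
    """Log a correction or outcome as a structured personalization event."""
    event = {
        "user_id": USER_ID,
        "type": feedback_type,            # "correction" or "outcome"
        "content": content,
        "context": context or {},
        "options": {"dedupe": True},      # skip semantically duplicate feedback
    }
    resp = requests.post(f"{FASTINO_API}/ingest", json=event, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()


def get_user_context(question: str) -> dict:
    """Retrieve up-to-date personalization context before the next action."""
    resp = requests.post(
        f"{FASTINO_API}/query",
        json={"user_id": USER_ID, "query": question},
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()


# Steps 1-3: the agent acts, the user reacts, and the feedback is logged.
ingest_feedback(
    "correction",
    "That's not right. Don't book meetings before noon.",
    context={"agent": "calendar", "action": "schedule_meeting"},
)
ingest_feedback(
    "outcome",
    "Perfect. The 3pm slot worked well.",
    context={"agent": "calendar", "action": "schedule_meeting"},
)

# Steps 4-5: retrieve the refined context and adjust future behavior.
print(get_user_context("When does this user prefer meetings to be scheduled?"))
```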
Use Cases
| Use Case | Description |
|---|---|
| Adaptive Scheduling | Adjust meeting times based on repeated reschedules or confirmations. |
| Tone Mirroring | Refine language style from feedback on generated emails or messages. |
| Decision Support | Improve predictions based on past approval/rejection data. |
| Autonomous Agents | Build reinforcement-like learning without retraining LLMs. |
| User Retention Metrics | Track satisfaction trends as implicit feedback signals. |
Best Practices
- Always include `context` metadata to improve clustering accuracy (a combined example follows this list).
- Separate positive outcomes (`type=outcome`) from negative corrections (`type=correction`).
- Avoid ingesting trivial messages; use semantic deduplication (`options.dedupe: true`).
- Regularly refresh summaries to surface learned insights.
- Visualize feedback history for debugging and trust validation.
- Combine corrections with transparent logs (see Action Boundaries & Transparency).
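Putting the field-level practices together, a well-formed feedback event might look like this (field names are illustrative assumptions, as above):

```python
# Hypothetical well-formed feedback event applying the practices above.
event = {
    "user_id": "user_123",
    "type": "outcome",                                # kept separate from "correction" events
    "content": "The rescheduled 2pm review worked well.",
    "context": {                                      # metadata that improves clustering
        "agent": "calendar",
        "action": "reschedule_meeting",
    },
    "options": {"dedupe": True},                      # drop semantically duplicate events
}
```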
Example: Deterministic Feedback Summary
Response
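A hypothetical deterministic summary payload, ready to cache or embed into a prompt (fields and wording are illustrative):

```python
# Hypothetical deterministic feedback summary; fields and wording are illustrative.
response = {
    "user_id": "user_123",
    "summary": (
        "Scheduling: prefers afternoon meetings; avoid slots before noon. "
        "Recent signals: 1 correction (morning booking), 1 confirmed outcome (3pm slot)."
    ),
}
```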
This summary can be cached or embedded into prompts for downstream reasoning.
Summary
The Learning from Corrections and Outcomes use case turns every user interaction into a learning opportunity.
By feeding feedback events and results into Fastino, agents continuously refine their understanding of each user’s preferences, tone, and decision logic — closing the loop between action, feedback, and adaptation.
Next, continue to Personalization Use Cases → Routine Prediction to explore how Fastino models and predicts user activity patterns over time.