Scouttlo
GitHub · B2B · productivity / coaching / mental health

A SaaS platform that combines proactive intelligent agents embedded in private channels (such as Discord) with human experts to deliver personalized support, session tracking, and real-time behavior management.

Detected 8 hours ago

6.8 / 10
Overall score


Score breakdown

Urgency: 8.0
Market size: 7.0
Feasibility: 7.0
Competition: 5.0

Pain

Users need continuous, proactive support between sessions with experts to avoid losing motivation and to improve productivity.

Who would pay for this

Professionals, coaches, therapists, and consultants looking to extend their reach and offer their clients continuous support through intelligent agents.

Origin signal

"Intentive v1 is meant to prove that a human expert plus an OpenClaw-powered agent can support users between expert sessions, especially when the user struggles to start important work or recover after derailment."

Original post

PRD: Phase 4 Productized Manual Pilot

Published: 8 hours ago

Repository: sruj75/v1-openclaw
Author: sruj75

## Problem Statement

Intentive v1 is meant to prove that a human expert plus an OpenClaw-powered agent can support users between expert sessions, especially when the user struggles to start important work or recover after derailment.

The destination from the original v1 PRD still stands: users should feel supported in ordinary life, experts should be able to extend their reach through the agent, and Braintrust should turn real expert judgment into reusable improvement evidence.

The implementation path changed across earlier phases. Phase 2 deliberately kept weekly session summaries manual instead of building a session upload, extraction, and approval product workflow. Phase 3 deliberately retired the custom Intentive relay and moved runtime execution to OpenClaw built-in Discord, Braintrust prompt management, and the Braintrust review UI. Those changes removed old mechanisms, not the product requirements. Phase 4 must reconcile those competing narratives and finish the smallest real product loop.

The product should run as a productized manual pilot: two users, one shared expert, OpenClaw-native Discord, proactive agent support, manual expert-approved session handoffs, Braintrust-managed shared runtime behavior, Braintrust review evidence, and founder/operator judgment over whether the loop is working in real life.

## Solution

Run a real two-user pilot week using the current OpenClaw-native architecture. Each pilot user has a private Discord channel, a dedicated OpenClaw agent/workspace, and the same shared expert present in the workflow. The shared runtime behavior is managed through Braintrust bundles and applied to registered OpenClaw workspaces.

The one-on-one session output is not materialized by a custom Intentive allocator or direct file-edit workflow. Instead, the manually prepared, expert-approved session summary is handed off to the correct OpenClaw agent as a runtime/admin interaction.
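The handoff-routing step described here (an expert-approved summary must reach the correct agent, never the wrong runtime context) could be sketched as a small operator-side check. Everything below is hypothetical: `Handoff`, `route_handoff`, and the label-to-workspace registry shape are invented for illustration, since the PRD deliberately does not fix an API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Phase 4 handoff routing described in the PRD.
# None of these names come from the actual repo; they only illustrate the
# workflow shape: an expert-approved summary is delivered to the correct
# agent as an admin interaction, never written into workspace files.

@dataclass(frozen=True)
class Handoff:
    user_label: str    # redacted label, e.g. "user-a", never a real name
    workspace_id: str  # the OpenClaw agent/workspace bound to that user
    summary: str       # expert-approved session summary text

def route_handoff(handoff: Handoff, registry: dict[str, str]) -> str:
    """Return the workspace that should receive this handoff.

    Raises if the binding cannot be verified, so a summary can never
    silently reach the other pilot user's runtime context.
    """
    workspace = registry.get(handoff.user_label)
    if workspace is None or workspace != handoff.workspace_id:
        raise ValueError(f"no verified workspace binding for {handoff.user_label}")
    return workspace

# Illustrative registry: two pilot users, two isolated workspaces.
registry = {"user-a": "ws-a", "user-b": "ws-b"}
```

A usage example: `route_handoff(Handoff("user-a", "ws-a", "..."), registry)` returns `"ws-a"`, while a label absent from the registry raises before anything is delivered. The fail-closed check mirrors the PRD's isolation requirement (no cross-user leakage) at the cheapest possible point.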
The agent is responsible for integrating that handoff into its durable understanding, memory, workspace, and future behavior.

The pilot week must include proactive behavior. Intentive is not only a reactive chatbot. OpenClaw heartbeat/proactive behavior should check in and support the user when the plan meets resistance. The exact proactive policy is controlled through Braintrust-managed runtime behavior and OpenClaw configuration, not hardcoded in this repository.

The expert participates in the human workflow. Phase 4 should not over-prescribe hidden routing rules or revive the relay-era assumption that expert messages must be filtered by an Intentive classifier. The founder/operator will observe whether the agent handles the user/expert/agent workflow naturally enough. Weaknesses in role-awareness, timing, interruption, silence, or support quality should become Braintrust improvement candidates.

Phase 4 produces a pilot-week runbook and a redacted evidence packet. The evidence should record labels, timestamps, bundle versions, trace links or IDs, review/dataset actions, handoff occurrence, proactive behavior evidence, and founder/operator notes. It must not commit private session content, Discord message content, therapist notes, user facts, tokens, credentials, Discord IDs, phone numbers, or screenshots of private messages.

## User Stories

1. As a pilot user, I want to interact naturally in my private Discord channel, so that Intentive feels like support in real life rather than another productivity app.
2. As a pilot user, I want the agent to remember the important context from my latest one-on-one session, so that support during the week reflects what actually came up with the expert.
3. As a pilot user, I want proactive check-ins at useful moments, so that I get help before one bad hour becomes a lost day.
4. As a pilot user, I want the agent to help me start work I avoid, so that the product addresses the core reason I am using Intentive.
5. As a pilot user, I want the agent to help me recover after derailment, so that the day can be repaired instead of silently collapsing.
6. As a pilot user, I want my private channel and agent context to stay isolated from the other pilot user, so that my support remains personal and private.
7. As a pilot user, I want the shared expert to be able to participate naturally, so that the human workflow does not feel like a rigid bot protocol.
8. As a pilot user, I want the agent to behave appropriately when the expert is present, so that the agent does not interrupt, confuse roles, or respond in ways that make the interaction feel robotic.
9. As a shared expert, I want one operational workflow across two pilot users, so that I can test whether the agent extends my reach instead of creating extra overhead.
10. As a shared expert, I want my approved session summary to be handed off to the correct user's agent, so that the agent can support that user between sessions.
11. As a shared expert, I want the agent to integrate the handoff itself, so that I do not have to decide which workspace file, memory, or runtime surface should receive each detail.
12. As a shared expert, I want to monitor the agent's support during the week, so that I can notice what helps, what hurts, and where the agent needs improvement.
13. As a shared expert, I want to intervene naturally when needed, so that the human remains part of the support loop.
14. As a shared expert, I want Braintrust review to capture useful expected behavior where appropriate, so that my judgment can improve future agent behavior.
15. As a shared expert, I want to avoid a large dashboard or formal clinical workflow in Phase 4, so that the pilot stays focused on real support rather than product ceremony.
16. As a founder, I want to run a two-user pilot week, so that I can judge whether Intentive is working as a real product rather than a wiring demo.
17. As a founder, I want the pilot to include one shared expert, so that I can test the expert-augmentation thesis directly.
18. As a founder, I want proactive behavior included, so that the pilot tests the product promise rather than only reactive chat.
19. As a founder, I want the success bar to remain founder-judged, so that Phase 4 does not replace product judgment with premature numeric criteria.
20. As a founder, I want redacted evidence from the pilot week, so that I can review what happened without exposing private user content in git or GitHub.
21. As a founder, I want no cross-user leakage evidence, so that the multi-tenant OpenClaw setup is tested before expanding the pilot.
22. As an operator, I want a clear runbook for setting up two users and one expert, so that the manual pilot can be repeated without improvising every step.
23. As an operator, I want to verify each Discord channel is bound to the correct OpenClaw agent/workspace, so that messages and handoffs do not reach the wrong runtime context.
24. As an operator, I want to apply the same Braintrust runtime bundle to both active workspaces, so that both users run the same shared behavior version.
25. As an operator, I want to record the resolved Braintrust bundle version, so that pilot behavior can be tied back to the runtime bundle that produced it.
26. As an operator, I want to deliver each expert-approved session summary as a runtime handoff to the correct agent, so that Phase 4 tests the intended future dashboard handoff model.
27. As an operator, I want the handoff to be purely operational through OpenClaw, so that this repo does not reintroduce a custom product backend for the pilot.
28. As an operator, I want to avoid direct workspace editing for session summaries, so that the agent owns integration of the human handoff.
29. As an operator, I want to verify proactive check-ins for both users, so that heartbeat behavior is part of the acceptance evidence.
30. As an operator, I want Braintrust traces for both users, so that observed behavior can be reviewed and improved.
31. As an operator, I want Braintrust Expected review and dataset seeds where useful, so that the learning loop starts without building custom eval tooling.
32. As an operator, I want to capture improvement candidates after the week, so that Phase 5 or the next bundle iteration can target real failures.
33. As a maintainer, I want Phase 4 to honor Phase 3's relay retirement, so that we do not accidentally rebuild custom Discord ingress or SQLite routing.
34. As a maintainer, I want Phase 4 to honor Phase 2's manual summary decision, so that we do not prematurely build a summary extraction and approval product.
35. As a maintainer, I want Braintrust to remain the prompt-management and review surface, so that the repo stays small and avoids local annotation/export sprawl.
36. As a maintainer, I want the runbook and evidence packet to avoid sensitive data, so that product learning does not create privacy or security debt.
37. As a future dashboard builder, I want the Phase 4 handoff to look like an agent handoff, so that future automation formalizes the same workflow rather than replacing it.
38. As a future eval builder, I want reviewed traces and dataset seeds from the pilot, so that automated evals can later be grounded in real product behavior.

## Implementation Decisions

- Phase 4 is a productized manual pilot, not a workflow automation phase.
- The acceptance unit is a real two-user pilot week, not a single smoke message.
- The pilot uses one shared expert across both users.
- The user-facing runtime remains OpenClaw built-in Discord.
- Each pilot user has a private Discord channel and dedicated OpenClaw agent/workspace.
- The active workspace registry remains the operator source for applying shared runtime bundle updates.
- Shared runtime behavior is owned by Braintrust prompt management and applied through the existing OpenClaw apply command.
- Proactive/heartbeat behavior is required for the pilot.
- Proactive behavior should be controlled by Braintrust-managed runtime behavior and allowlisted OpenClaw configuration.
- Expert-approved one-on-one session summaries are delivered as runtime/admin handoffs to the correct OpenClaw agent.
- The session handoff is a human-workflow handoff to the agent, not a direct workspace edit.
- No Intentive allocator, note-to-file schema, or materialization engine should be built for Phase 4.
- The agent decides how to internalize the handoff into memory, workspace, durable context, or future behavior.
- If the agent does not integrate handoffs well enough, improve the agent through Braintrust-managed behavior rather than building a separate allocator.
- The handoff is performed operationally, mostly by the operator, using an expert-approved session summary.
- Phase 4 does not create a session upload UI, extraction pipeline, approval dashboard, or expert dashboard.
- Phase 4 does not revive the custom relay, SQLite routing, local prompt attribution tables, local annotation persistence, JSONL exporters, or custom Braintrust dataset upload tooling.
- Expert/user/agent role handling in Discord should be observed as part of the pilot, but this PRD does not lock a low-level message-visibility or routing rule.
- Founder judgment owns the final success bar. The runbook captures evidence; it does not replace founder product judgment.
- A pilot-week runbook is required.
- A redacted evidence packet is required.
- The evidence packet should record operational facts and links/labels, not private content.
- Braintrust scope is traces, Expected review where useful, dataset seeds where useful, and improvement candidates.
- Full automated evals, scorer design, and dashboards are out of scope.
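The evidence-packet rule (record operational facts and labels, never private content) is the kind of boundary a small guard could enforce before anything is committed. This is only a sketch of that idea: the allowed field names and the leak-detection patterns below are illustrative assumptions, not anything defined by the actual repo.

```python
import re

# Hypothetical redaction guard for the Phase 4 evidence packet. Records may
# carry operational facts (labels, timestamps, bundle versions, trace IDs,
# operator notes) but must never carry private content. Field names and
# patterns are illustrative, not taken from the actual repo.

ALLOWED_FIELDS = {"label", "timestamp", "bundle_version", "trace_id", "note"}

# Patterns that suggest private content leaked into a record.
SUSPECT_PATTERNS = [
    re.compile(r"\b\d{15,20}\b"),     # Discord snowflake IDs
    re.compile(r"\+\d{7,15}\b"),      # phone numbers
    re.compile(r"(?i)token|secret"),  # credential-shaped strings
]

def check_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks safe."""
    problems = [f"unexpected field: {key}" for key in record if key not in ALLOWED_FIELDS]
    for value in record.values():
        for pattern in SUSPECT_PATTERNS:
            if pattern.search(str(value)):
                problems.append(f"suspect content matching {pattern.pattern!r}")
    return problems
```

Run against a clean record such as `{"label": "user-a", "bundle_version": "v3", "trace_id": "tr-abc"}` it returns an empty list; a record with an extra `"phone"` field or a long numeric ID gets flagged. A pattern-based guard like this can only catch obvious leaks, so it would complement, not replace, the manual redaction the PRD calls for.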
## Testing Decisions

- Phase 4 testing is mostly operational acceptance, because the productized manual pilot depends on OpenClaw runtime behavior, Discord setup, expert participation, Braintrust UI review, and founder judgment.
- Existing automated tests around bundle fetch, managed block application, config allowlisting, package surface, and workspace registry should remain green.
- Good automated tests for any new repo code should assert external behavior and safety boundaries, not implementation details.
- If the Phase 4 work changes operator tooling, tests should mirror existing OpenClaw apply tests: mocked Braintrust/OpenClaw boundaries, all-or-nothing validation, explicit failure messages, and no private content.
- If the Phase 4 work changes docs/runbook structure only, package-surface or doc-discovery tests may be enough to keep the expected Phase 4 runbook visible.
- Manual acceptance should verify both users are set up, both workspaces receive the shared Braintrust bundle version, session handoffs are delivered to the correct agents, proactive behavior occurs, Braintrust traces exist for both users, and no cross-user leakage is observed.
- Manual evidence should record founder/operator notes about agent behavior, proactive usefulness, role handling, handoff integration, expert monitoring load, and improvement candidates.

## Out of Scope

- Building a therapist, expert, or founder dashboard.
- Building session upload, transcript extraction, summary drafting, or approval automation.
- Building a custom note allocator or workspace materialization manager.
- Directly editing workspace files as the session-summary path.
- Reintroducing custom Intentive Discord gateway ingress.
- Reintroducing SQLite relay routing, relay persistence, or relay-owned session state.
- Building local annotation tables, Discord thread annotation exporters, JSONL/CSV export flows, or custom Braintrust import tooling.
- Building full automated evals, scorers, dashboards, or release gates.
- Expanding to WhatsApp unless it can be done through the same OpenClaw built-in channel model after this pilot.
- Billing, subscriptions, native mobile app auth, clinical compliance workflow, or broad multi-tenant SaaS platform work.
- Defining a rigid clinical taxonomy or productivity form workflow.
- Replacing founder judgment with premature success metrics.

## Further Notes

- Phase 4 reconciles the old destination and the new route. The original v1 requirements are still alive, but many old mechanisms are retired.
- The phrase "productized manual pilot" is intentional: the manual process should be clean, repeatable, evidenced, and real enough to learn from.
- The pilot should feel like an expert's reach is extended by an agent, not like an automated robot enforcing forms.
- The future dashboard should automate the same agent handoff pattern proven here.
- The strongest Phase 4 output is not only code or docs. It is a week of redacted evidence showing whether Intentive can help two real users between sessions without cross-user leakage and with Braintrust-backed learning.
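The testing decisions in this PRD call for mocked Braintrust/OpenClaw boundaries, all-or-nothing validation, and explicit failure messages. A sketch of what such a test could look like follows; `apply_bundle` and the `client` interface are invented stand-ins for whatever operator tooling Phase 4 actually touches, not code from the repo.

```python
import unittest
from unittest import mock

# Hypothetical sketch of the test style the PRD asks for: mock the
# Braintrust/OpenClaw boundary, assert all-or-nothing behavior, and
# check for an explicit failure message. `apply_bundle` is an invented
# stand-in, not the real operator tooling.

def apply_bundle(version: str, workspaces: list[str], client) -> None:
    """Apply one bundle version to every workspace, or to none at all."""
    # Validate everything first so a failure leaves no partial state.
    for ws in workspaces:
        if not client.workspace_exists(ws):
            raise RuntimeError(f"aborting: workspace {ws!r} not registered; no changes applied")
    for ws in workspaces:
        client.apply(version, ws)

class ApplyBundleTest(unittest.TestCase):
    def test_all_or_nothing(self):
        client = mock.Mock()
        client.workspace_exists.side_effect = lambda ws: ws == "ws-a"
        with self.assertRaisesRegex(RuntimeError, "no changes applied"):
            apply_bundle("v3", ["ws-a", "ws-b"], client)
        client.apply.assert_not_called()  # nothing was partially applied

    def test_applies_same_version_to_all_workspaces(self):
        client = mock.Mock()
        client.workspace_exists.return_value = True
        apply_bundle("v3", ["ws-a", "ws-b"], client)
        self.assertEqual(client.apply.call_count, 2)
```

Both tests stay at the external-behavior level the PRD prefers: they assert that either every registered workspace gets the shared bundle version or none does, without peering into implementation details or touching private content.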