All payloads captured 2026-04-03 from Graphonomous v0.3.1

Live MCP Demo Suite

Eighteen interactive A/B demos showing Graphonomous v0.3.1 in action, covering all 29 MCP tools. Every node ID, confidence score, similarity value, and κ routing decision below was captured from a real MCP session. Click any card to explore the full chat replay with payload inspector.

Demo 01
Codebase Skill Learning
Baseline retrieval (top similarity 0.062) vs learned procedural skills (top similarity 0.521). Store skills, learn from outcomes, link to goals, then re-query.
retrieve_context store_node learn_from_outcome manage_goal
8.4× similarity lift
+0.093 confidence Δ
→ Explore demo
Demo 02
κ-Routing & Deliberation
DAG region (κ=0, fast routing) vs 4-node business cycle (κ=1, deliberate routing). Topology analysis detects circular dependencies and assigns deliberation budgets.
topology_analyze deliberate
κ=0 vs κ=1
1 SCC detected
→ Explore demo
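The κ assignment this demo describes can be sketched in a few lines: find strongly connected components (Tarjan's algorithm here), then mark nodes inside a non-trivial SCC for deliberate routing. This is a toy illustration of the idea, not the shipped implementation; the SCC-based rule and the node names are assumptions.

```python
from collections import defaultdict

def kappa_route(edges):
    """Toy sketch: nodes inside a non-trivial SCC (a cycle) get kappa=1
    (deliberate routing); nodes in the DAG region get kappa=0 (fast)."""
    graph, nodes = defaultdict(list), set()
    for u, v in edges:
        graph[u].append(v)
        nodes.update((u, v))

    index, low, on_stack, stack = {}, {}, set(), []
    sccs, counter = [], [0]

    def strongconnect(v):
        # Standard Tarjan SCC: assign discovery indices, track low-links.
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w); comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in nodes:
        if v not in index:
            strongconnect(v)

    cyclic = {n for c in sccs if len(c) > 1 for n in c}
    kappa = {n: (1 if n in cyclic else 0) for n in nodes}
    return kappa, [c for c in sccs if len(c) > 1]
```

With a 3-node chain plus a 4-node cycle, the chain gets κ=0 everywhere and the cycle is detected as one SCC with κ=1, mirroring the demo's "1 SCC detected".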
Demo 03
Novelty Detection
Known concept (novelty 0.14, Graphonomous itself) vs completely novel concept (novelty 0.92, quantum error correction). The system knows what it knows — and what it doesn’t.
learn_detect_novelty
0.14 vs 0.92
0.855 max similarity
→ Explore demo
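A plausible reading of this card's numbers is novelty = 1 - max similarity (0.855 max similarity pairing with ~0.14 novelty). A minimal sketch under that assumption, using cosine similarity over embeddings:

```python
import math

def novelty_score(query_vec, stored_vecs):
    """Assumed scoring rule (inferred from the demo numbers):
    novelty = 1 - max cosine similarity against stored embeddings."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    max_sim = max((cos(query_vec, v) for v in stored_vecs), default=0.0)
    return 1.0 - max_sim, max_sim
```

A query identical to a stored vector scores novelty ≈ 0 (known concept); an orthogonal query scores novelty ≈ 1 (completely novel).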
Demo 04
Attention Engine
Proactive goal prioritization combining urgency, coverage gap, and κ-routing. 7 real goals ranked by attention score with topology-aware reasoning depth per goal.
attention_survey manage_goal
7 active goals
3/7 κ>0
→ Explore demo
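The demo does not publish the attention formula, but "combining urgency, coverage gap, and κ-routing" suggests a weighted blend. A toy ranking sketch (the weights and field names are assumptions):

```python
def rank_goals(goals, w=(0.5, 0.4, 0.1)):
    """Toy sketch: score = w0*urgency + w1*coverage_gap + w2*kappa,
    then rank goals by descending attention score. Weights are assumed."""
    def score(g):
        return w[0] * g["urgency"] + w[1] * g["coverage_gap"] + w[2] * g["kappa"]
    return [g["id"] for g in sorted(goals, key=score, reverse=True)]
```

An urgent goal with a large coverage gap floats to the top; the κ term nudges cyclic (deliberation-heavy) goals upward.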
Demo 05
Consolidation Pipeline
Before & after the 7-stage consolidation pipeline, including decay, pruning, co-activation strengthening, similarity merging, timescale promotion, and abstraction generation.
run_consolidation graph_stats
7 pipeline stages
0.02 decay rate
→ Explore demo
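Two of the pipeline's stages are easy to sketch: exponential activation decay (the card's 0.02 rate) followed by pruning. The prune threshold and the per-pass decay rule are assumptions, not captured behavior:

```python
def consolidate(activations, decay_rate=0.02, prune_below=0.05):
    """Toy sketch of two consolidation stages:
    1) decay every activation by a fixed rate per pass (0.02, per the demo),
    2) prune nodes whose activation falls below a threshold (assumed 0.05)."""
    decayed = {nid: act * (1.0 - decay_rate) for nid, act in activations.items()}
    return {nid: act for nid, act in decayed.items() if act >= prune_below}
```

A strong node (1.0) survives a pass at 0.98, while a near-threshold node (0.05) decays under the cutoff and is pruned.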
Demo 06
Coverage & Decision Gate
Epistemic coverage assessment with act/learn/escalate routing. Low coverage (0.486) triggers “learn” — the system knows it doesn’t know enough to act.
coverage_query review_goal
0.486 coverage
learn decision
→ Explore demo
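The act/learn/escalate gate can be sketched as two thresholds on the coverage score. The demo only shows that 0.486 routes to "learn"; the threshold values below are assumptions:

```python
def decision_gate(coverage, act_threshold=0.7, escalate_threshold=0.3):
    """Toy sketch of an epistemic decision gate (thresholds assumed):
    high coverage -> act, middling -> learn, very low -> escalate."""
    if coverage >= act_threshold:
        return "act"
    if coverage >= escalate_threshold:
        return "learn"
    return "escalate"
```

Under these assumed thresholds, the demo's 0.486 coverage lands in the "learn" band: the system knows it doesn't know enough to act, but not so little that it must escalate.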
Demo 07
Full Learning Pipeline
Automated interaction ingestion (novelty 0.844, 5 claims extracted, 8 edges created) vs explicit feedback correction that replaces content and adjusts confidence.
learn_from_interaction learn_from_feedback
6 nodes learned
+0.090 feedback Δ
→ Explore demo
Demo 08
Graph Inspection Suite
Graph health overview (29 nodes, 22 edges, 9 orphans) vs a deep BFS traversal from a single node, exploring a 12-node neighborhood across 2 hops of causal chains.
query_graph graph_stats graph_traverse
29 nodes
12 traversal reach
→ Explore demo
Demo 09
Typed Retrieval
Episodic memory (5 time-filtered session events) vs procedural skill search (only 1 match at 0.034 similarity). Shows rich event history but sparse skill coverage.
retrieve_episodic retrieve_procedural
5 episodes today
0.034 skill relevance
→ Explore demo
Demo 10
Graph Mutation
Building relationships (store_edge, weight 0.7) vs pruning stale knowledge (delete_node, manage_edge). Full graph lifecycle from creation to cleanup.
store_edge delete_node manage_edge
122 total edges
7 edge types
→ Explore demo
Demo 11
Attention Dispatch
Passive survey vs full attention cycle with autonomy override. Survey → triage → dispatch pipeline in advise mode: 3 goals surveyed, 0 dispatched, all deferred.
attention_run_cycle attention_survey
3 surveyed
1/3 κ>0
→ Explore demo
Demo 12
Belief Revision
AGM-rational belief maintenance: expand adds a new belief with automatic contradiction detection; revise supersedes old facts with confidence propagation through dependency chains.
belief_revise belief_contradictions
1 contradiction
+0.38 confidence Δ
→ Explore demo
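The revise operation can be sketched as: a new belief on the same key supersedes the old one, and a contradiction is flagged when the values disagree. This toy model (the key-value representation and field names are assumptions) omits the dependency-chain confidence propagation the demo also shows:

```python
def belief_revise(kb, key, value, confidence):
    """Toy AGM-flavored revise sketch (assumed model): the new belief
    supersedes any existing belief on the same key; returns True if the
    old belief contradicted the new one."""
    old = kb.get(key)
    kb[key] = {"value": value, "confidence": confidence, "supersedes": old}
    return old is not None and old["value"] != value
```

First assertion of a fact is an expansion (no contradiction); asserting a conflicting value later is a revision that flags the contradiction and keeps the superseded belief reachable for audit.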
Demo 13
Intentional Forgetting
Soft forget (reversible, structure preserved) and policy preview vs cascade delete and GDPR-compliant permanent erasure with audit trail. No competitor has this.
forget_node forget_by_policy gdpr_erase
3 modes
GDPR compliant
→ Explore demo
Demo 14
Epistemic Frontier
Meta-cognition: uncertainty map reveals nodes with widest Wilson score intervals. Investigate the top frontier node, report outcome, watch the frontier shrink. Self-directed uncertainty reduction.
epistemic_frontier learn_from_outcome
4→3 frontier
0.842 info gain
→ Explore demo
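The Wilson score interval the frontier ranks on is standard; its width shrinks as evidence accumulates, which is why investigating a frontier node makes the frontier contract. A self-contained sketch of the interval itself (the ranking-by-width wrapper is the demo's, not shown here):

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default).
    The frontier demo ranks nodes by interval width: widest = most uncertain."""
    if trials == 0:
        return 0.0, 1.0
    p = successes / trials
    denom = 1 + z * z / trials
    center = (p + z * z / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(
        p * (1 - p) / trials + z * z / (4 * trials * trials)
    )
    return center - half, center + half
```

With the same observed rate (3/5 vs 30/50), ten times the trials yields a far narrower interval, so the node drops off the frontier.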
Demo 15
Causal Chains
Success outcome boosts causal strength and confidence; failure decreases both and logs confounders. Full causal→belief revision→topology pipeline.
learn_from_outcome belief_revise
0.3→0.52 causal str.
2 confounders
→ Explore demo
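The success/failure update can be sketched as moving causal strength and confidence toward 1 on success and toward 0 on failure. The learning rate and the exact update rule are assumptions; the demo's captured 0.3→0.52 jump implies a rule of this shape but not these exact constants:

```python
def update_causal_edge(strength, confidence, success, lr=0.4):
    """Toy outcome-update sketch (learning rate and move-toward-target
    rule are assumptions): success pulls both values toward 1,
    failure pulls them toward 0."""
    target = 1.0 if success else 0.0
    new_strength = strength + lr * (target - strength)
    new_confidence = confidence + lr * (target - confidence)
    return new_strength, new_confidence
```

Confounder logging on failure (as in the demo) would sit alongside this update rather than inside it.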
Demo 16
Q-Value Retrieval
Same query, different ranking. Before outcomes: similarity only. After outcomes: Q-values shift the ranking, and a high-confidence but low-utility result drops below a higher-utility one. Self-correcting retrieval.
retrieve_context
2 rank changes
#1→#3 reranker drop
→ Explore demo
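The rerank can be sketched as a blend of similarity with a learned Q-value (utility) term. The blend weight and the linear form are assumptions; the point is only that a negative Q-value can demote a similarity winner:

```python
def rerank(candidates, beta=0.5):
    """Toy sketch: score = similarity + beta * q_value (beta assumed).
    candidates is a list of (node_id, similarity, q_value) tuples."""
    scored = [(sim + beta * q, nid) for nid, sim, q in candidates]
    return [nid for _, nid in sorted(scored, reverse=True)]
```

With beta=0 (no outcomes yet) the ranking is similarity-only; once outcomes assign a poor Q-value to the top hit, it drops from #1 to #3, matching the behavior the card describes.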
Demo 17
Multi-Agent Memory
Two agents, one graph: scoped storage (agent sees only its own nodes) vs cross-agent topology (shared edges bridge isolated scopes). Multi-agent knowledge interaction.
store_node store_edge topology_analyze
2 agents
2 cross-agent edges
→ Explore demo
Demo 18
Evidence Path Tracing
Weighted Dijkstra finds the lowest-cost evidence chain between two knowledge nodes. Yen's K-shortest reveals diverse alternate reasoning paths. Cost = -log(confidence) + recency + type penalty.
trace_evidence_path graph_traverse
0.591 path cost
2 diverse paths
→ Explore demo
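The stated cost model and the single-path search are easy to sketch: convert each edge's confidence into a cost via -log(confidence) plus recency and type penalties, then run plain Dijkstra. The recency weighting below is an assumption, and Yen's K-shortest (for the diverse alternates) is omitted:

```python
import heapq
import math

def edge_cost(confidence, age_days, type_penalty=0.0, recency_weight=0.01):
    """Cost model per the card: -log(confidence) + recency + type penalty.
    The recency_weight constant is an assumption."""
    return -math.log(confidence) + recency_weight * age_days + type_penalty

def cheapest_path(edges, src, dst):
    """Plain Dijkstra over precomputed per-edge costs.
    edges is a list of (u, v, cost) tuples."""
    graph = {}
    for u, v, c in edges:
        graph.setdefault(u, []).append((v, c))
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return dist[dst], path[::-1]
```

A two-hop chain of high-confidence edges beats a single low-confidence edge, which is exactly why the lowest-cost evidence chain is usually the most trustworthy one rather than the shortest.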