OTK → Totem Protocol Research

12 Years of Intuition, Validated by a 2002 EU Research Project

While researching earlier work in ontology-driven knowledge management, we rediscovered On-to-Knowledge — a 1.3M euro EU project that built, twenty-four years ago, almost exactly the architecture Totem Protocol arrived at independently.

The Long Road to Ontology

Jonny Dubowsky has been building toward Totem Protocol for over twelve years. The work spans brand marketing, interactive design, art, and music — NRDC campaigns, Download Music Festival, policy advocacy bridging corporate sponsors and communities. Each domain presented the same structural problem: how do groups manage shared resources, make collaborative investment decisions, and synthesize collective intelligence across different contexts?

The core insight, arrived at through practice rather than theory: ontology design is the key. Self-sovereign identity leads to personal knowledge systems. Personal knowledge systems enable sharing solutions through contextual templates. The question was always how to formalize this without losing the flexibility that real creative work demands.

While researching earlier approaches to this same problem, we rediscovered On-to-Knowledge (OTK) — an EU-funded research project (IST-1999-10132) that ran from 2000 to September 2002 under the Fifth Framework Programme. Led by Frank van Harmelen, Dieter Fensel, and Rudi Studer, it received 1.3 million euros to build a comprehensive ontology-based knowledge management architecture.

The architecture maps almost exactly to what Totem Protocol independently built.

The Four-Layer Parallel

OTK decomposed knowledge management into extraction, structuring, storage, inference, editing, sharing, search, and presentation. Totem Protocol performs every one of these functions — through different tools, with different technology, but following the same architectural logic.

OTK Layer | OTK Function | Totem Protocol Equivalent
OntoWrapper / OntoExtract | Web document metadata extraction + NLP entity extraction to RDF | Session transcripts → InfraNodus entity extraction + Claude LLM-based extraction
Sesame | RDF data repository with RDFS inferencing and SPARQL querying | InfraNodus knowledge graphs + Obsidian vault (YAML frontmatter)
OIL | Ontology Inference Layer — formal semantics for machine reasoning | Layered Ontology Architecture (CLAUDE.md schemas + frontmatter type system)
OntoShare / Spectacle / OntoEdit | Knowledge sharing, presentation portals, collaborative ontology editing | MCP servers + Cloudflare Pages editorial sites + Obsidian vault tools + skill system

Both architectures follow the same pipeline: Extract → Structure → Store → Query → Share → Present. The difference is that OTK built each function as a separate specialized tool, while Totem Protocol uses general-purpose AI agents that perform multiple functions through different modes of interaction.

Three Critical Upgrades

Totem Protocol is a 2026 re-implementation of the OTK architecture. But it is not a replica. Three fundamental shifts in technology transform the same architecture into a categorically more capable system.

Upgrade 01
LLM-Assisted Extraction
OTK relied on hand-crafted NLP rules and lexical pattern matching (OntoExtract). Modern LLMs achieve comparable or better extraction with zero-shot prompting. The entire extraction pipeline that required specialized tooling in 2002 now runs through a single Claude session with InfraNodus MCP tools.
Upgrade 02
Agent Orchestration via MCP
OTK integrated its tools through rigid, purpose-built APIs. Totem Protocol uses the Model Context Protocol to give AI agents access to any tool through a uniform interface. Claude Code with 43 skills and multiple MCP servers replaces a fixed pipeline with flexible, context-aware orchestration.
Upgrade 03
Personal Knowledge Management
OTK targeted enterprise intranets with centralized RDF repositories. Totem Protocol starts from personal knowledge (Obsidian vaults, daily notes, session records) and scales outward. The vault-first design means knowledge is sovereign to the individual before it is shared with any system.

The Core Insight

OTK was right about the architecture and wrong about the implementation technology. It tried to solve the knowledge management problem with specialized tools and rule-based NLP at a time when neither the tools nor the NLP were sufficient. Twenty-four years later, the architecture holds. Every individual tool has been superseded. And LLMs have solved the extraction and querying problems that limited the original system.

The gap that remains — formal semantic infrastructure (RDF, OWL, SPARQL, SHACL) — is real but addressable. Adding a lightweight semantic layer does not require abandoning the current stack. It means building a bridge between the AI-native tools Totem uses today and the formal semantic standards the OTK community developed twenty-four years ago. The bridge is Perceptagon.

Component Analysis

The Eight OTK Components

Tracing each tool from its 2002 origins through its modern successors, and mapping it onto Totem Protocol's current stack.

OTK built eight specialized components covering extraction, storage, inference, editing, sharing, search, and presentation. Two survive in evolved form. The rest are extinct — but their functions live on in modern tools and in Totem Protocol's architecture.

OntoWrapper
Defunct
01 / 08
2002 Role
Web document metadata extraction. Crawled structured and semi-structured web sources, extracted metadata, converted to RDF-compatible formats.
Built By
AIFB / University of Karlsruhe team
What Happened
No maintained codebase exists. The concept of "wrapping" web sources into RDF has been superseded by JSON-LD, RDFa, Microdata, and LLM-based extraction pipelines.
Successors
Diffbot (commercial API), Apache Any23 (RDFa/Microdata extraction), ODKE+ (LLM-based, production since May 2025)
Totem Equiv.
No direct equivalent. Claude Code WebFetch/WebSearch + InfraNodus fetch cover partial functionality.
Opportunity
OntoWeaver — a Perceptagon module for document-level ontology annotation and RDF output from web sources.
OntoExtract
Superseded
02 / 08
2002 Role
NLP-based extraction of ontological structures (concepts, relationships, hierarchies) from unstructured text. Core of the Text-to-Onto framework.
Built By
AIFB / University of Karlsruhe (Steffen Staab, Rudi Studer)
What Happened
Evolved: OntoExtract → Text-to-Onto (2002) → Text2Onto (2005, probabilistic) → LLM-based extraction (2023+). Each generation was more capable and less brittle.
Successors
OntoGPT/SPIRES (Monarch Initiative, zero-shot LLM extraction), iText2KG (incremental KG construction), KONDA (annotation + relation extraction)
Totem Equiv.
InfraNodus extractEntitiesOnly + analyze_text + Claude zero-shot extraction. Strong functional coverage.
Opportunity
Formalize extraction pipeline. Add OntoGPT for domain-specific ontological grounding (healthcare, financial services).
Sesame
Active
03 / 08
2002 Role
RDF data repository with RDFS inferencing and RQL querying (SPARQL did not yet exist in 2002; it arrived in Sesame's successors). The storage backbone of the entire OTK architecture.
Built By
Aduna B.V. (Netherlands), created specifically for the On-to-Knowledge project
What Happened
The clearest success story. Sesame → OpenRDF Sesame (open-sourced) → Eclipse RDF4J 2.0 (2016, Eclipse Foundation) → RDF4J 5.2.2 (current). Supports RDF 1.2 and SPARQL 1.2.
Successors
Eclipse RDF4J (direct), Apache Jena + Fuseki (open-source standard), GraphDB (enterprise), Oxigraph (modern Rust alternative), Blazegraph (powers Wikidata)
Totem Equiv.
InfraNodus graphs (network storage) + Obsidian vault (YAML frontmatter). No formal RDF triplestore.
Opportunity
Add Oxigraph as a lightweight local SPARQL endpoint. Export frontmatter to RDF triples for cross-document inference.
OIL (Ontology Inference Layer)
Evolved
04 / 08
2002 Role
Formal ontology language combining frame-based syntax with Description Logic foundations. OTK's core theoretical contribution.
Built By
Frank van Harmelen (VU Amsterdam), Ian Horrocks, Dieter Fensel, Deborah McGuinness, Peter Patel-Schneider
What Happened
Enormous impact on web standards. OIL → DAML+OIL (2001, EU/US merger) → OWL 1.0 (2004, W3C Recommendation) → OWL 2 (2009/2012). OIL itself is the direct ancestor of the W3C Web Ontology Language.
Successors
OWL 2 (W3C standard, direct descendant), SHACL (validation complement), SKOS (lightweight taxonomies), RDF-star / RDF 1.2 (emerging)
Totem Equiv.
Layered Ontology Architecture. YAML frontmatter schemas define structure but lack formal inference. No Description Logic reasoning.
Opportunity
Adopt OWL 2 EL formally. Express Totem ontology (agents, capabilities, skills, projects) as OWL. Add SHACL for deterministic schema validation.
OntoShare
Defunct
05 / 08
2002 Role
Ontology-based knowledge sharing for virtual communities of practice. Semi-automatic classification via vector cosine similarity. Distinctive feature: ontology evolution based on usage patterns.
Built By
John Davies and Alistair Duke at BT Labs (British Telecom)
What Happened
Defunct as standalone tool. Its innovations — community-driven ontology evolution, semantic content recommendation, semi-automatic classification — are distributed across modern systems.
Successors
Cognee (evolving knowledge graphs), Semantic Scholar, InfraNodus (gap detection + clustering), embedding-based semantic search
Totem Equiv.
Skill/template system (43 skills) + Slack integration + editorial sites + 3x3 Framework for matchmaking
Opportunity
Formalize as ontology-aware sharing. Incorporate OntoShare's usage-based evolution to discover new schema fields users implicitly need.
RDF Ferret
Defunct
06 / 08
2002 Role
Semantic search engine for discovering and retrieving RDF-annotated resources across the OTK knowledge base.
Built By
Part of the broader OTK toolset. Limited documentation survives.
What Happened
The semantic search problem was solved by SPARQL endpoints, full-text search engines with RDF awareness, and embedding-based vector search.
Successors
SPARQL full-text extensions, vector databases (Qdrant, Pinecone, Weaviate), Open Semantic Search, GraphDB + Lucene connectors
Totem Equiv.
DEVONthink (AI-augmented search) + InfraNodus search / retrieve_from_knowledge_base + Obsidian search. Strong coverage across multiple surfaces.
Opportunity
Add SPARQL federated queries across all knowledge stores for structured cross-domain search.
Spectacle
Defunct
07 / 08
2002 Role
Knowledge presentation platform / portal. The user-facing layer rendering ontology-structured information in browseable forms.
Built By
BT Labs / industrial deployment side. Very limited public documentation.
What Happened
The concept evolved into enterprise knowledge portals, semantic dashboards, and modern editorial/visualization deliverables.
Successors
Enterprise portals (Confluence, SharePoint), WebVOWL, GraphDB Workbench, Obsidian Bases dashboards
Totem Equiv.
Cloudflare Pages editorial sites (two-site pattern: editorial brief + viz hub). More sophisticated than Spectacle's original portals.
Opportunity
Already solved. The editorial site pattern is a direct, improved successor to what Spectacle attempted.
OntoEdit
Active
08 / 08
2002 Role
Collaborative ontology editor supporting methodology-guided development, inferencing, axiom construction, and plugin extensibility.
Built By
Institute AIFB, University of Karlsruhe (Alexander Maedche, York Sure, Rudi Studer) + Ontoprise GmbH (commercial spin-off)
What Happened
Most commercially successful OTK lineage. OntoEdit → OntoStudio (2003) → NeOn Toolkit (2007) → Ontoprise bankruptcy (2012) → semafora acquires assets → OntoStudio X (current). Clients include Alstom, Atos, Audi, GE.
Successors
OntoStudio X (semafora, direct descendant), Protégé (Stanford, open-source), TopBraid Composer (enterprise), Fluent Editor (controlled natural language)
Totem Equiv.
Obsidian (note/schema editing) + CLAUDE.md (schema definition) + Claude Code (schema enforcement). No visual ontology editor.
Opportunity
Consider OntoStudio X or Protégé integration for formal visual ontology editing when the Totem ontology is expressed in OWL 2.

Full Component Mapping

The eight OTK components map onto Totem Protocol with varying degrees of coverage. Strong coverage in extraction, search, and presentation. Weak coverage in formal inference and RDF storage — the gaps Perceptagon is designed to fill.

OTK Component | Function | Totem Equivalent | Coverage | Gap
OntoWrapper | Web metadata → RDF | InfraNodus fetch + Claude WebFetch | Partial | No RDF/OWL output
OntoExtract | NLP entity extraction | InfraNodus + Claude LLM extraction | Strong | No formal ontology output
Sesame | RDF repository + SPARQL | Obsidian vault + DEVONthink + InfraNodus | Partial | No RDF triplestore
OIL | Ontology inference | YAML schemas + InfraNodus graph structure | Weak | No DL reasoning
OntoShare | Knowledge sharing | Skills + Slack + templates + 3x3 Framework | Moderate | No formal semantic matching
RDF Ferret | Semantic search | DEVONthink + InfraNodus + Obsidian search | Strong | No SPARQL queries
Spectacle | Presentation portal | Cloudflare Pages editorial sites + Bases | Strong | None
OntoEdit | Ontology editor | Obsidian + CLAUDE.md + Claude Code | Moderate | No visual ontology editor

Independent Convergence

OntoRAG — The Modern Mirror

A 2025 research project arrived at an architecture nearly identical to Totem Protocol's, with no knowledge of either OTK or Totem. Three systems, twenty-four years apart, converging on the same design.

The OntoRAG Pipeline

OntoRAG (2025) builds ontology-augmented RAG systems using Schema Cards for governance, content-addressable Document Transfer Objects for provenance, and MCP servers for agent access. Its architecture independently mirrors both OTK's design and Totem Protocol's implementation.

OntoRAG Architecture Flow
Baseline Ontologies (OWL / TTL)
→ Ontology Catalog — register, browse, compose
→ Schema Card (initial or evolved)
→ Documents → DTOs (Document / Chunk)
→ Ontology Extraction (LLM → proposals)
→ Schema Card (deterministic merge, origin-tracked)
→ Instance Extraction (LLM → RDF with provenance)
→ Knowledge Graph (TTL / SPARQL)
→ MCP Servers (Knowledge + Ontology)

Three-System Comparison

The same functional requirements produce the same architectural decomposition across two decades and three independent teams. The table below maps each function across all three systems.

Function | OTK (2002) | OntoRAG (2025) | Totem Protocol (2026)
Extraction | OntoWrapper + OntoExtract (rule-based NLP) | DTO pipeline + LLM extraction proposals | Claude + InfraNodus (LLM zero-shot + graph analysis)
Repository | Sesame (RDF + RDFS inferencing) | rdflib / Blazegraph (SPARQL endpoint) | InfraNodus graphs + Obsidian vault (YAML frontmatter)
Inference | OIL (Description Logic + frame syntax) | Schema Card + deterministic merge | Layered Ontology Architecture (CLAUDE.md schemas)
Access | OntoShare, Spectacle, RDF Ferret | MCP servers (Knowledge + Ontology) | MCP servers + editorial sites + vault tools
Provenance | Manual tracking | SHA-256 content hashing per DTO | Git trailers + session hashes
Governance | None (beyond OTKM methodology) | Schema Card review checkpoint | Letta Witness Agent + "Schema is law" principle

If the system cannot explain what it knows, where it comes from, and why it changed, it is not a knowledge system. It is a data store with pretensions.

— Design principle shared by OntoRAG and Totem Protocol

What OntoRAG Adds

OntoRAG introduces four mechanisms that Totem Protocol should evaluate for adoption. Each addresses a specific gap in the current stack.

Content-Addressable DTOs
Prevent Re-Processing
OntoRAG hashes every Document Transfer Object with SHA-256. If a document has already been processed, the system skips it. Totem currently has no mechanism to detect duplicate processing of the same source material across sessions.
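The mechanism is simple to adopt. A toy sketch of the content-addressing idea, not OntoRAG's actual code: hash each incoming document and consult the hash set before doing any expensive extraction.

```python
import hashlib

class DTOStore:
    """Content-addressable ingest: skip material already processed.
    A toy sketch of the SHA-256 DTO idea, not OntoRAG's implementation."""

    def __init__(self) -> None:
        self._seen: set[str] = set()

    def ingest(self, text: str) -> tuple[str, bool]:
        """Return (content hash, True if this content is new)."""
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        is_new = digest not in self._seen
        self._seen.add(digest)
        return digest, is_new

store = DTOStore()
h1, fresh1 = store.ingest("Session transcript, 2026-01-15 ...")
h2, fresh2 = store.ingest("Session transcript, 2026-01-15 ...")  # duplicate: skipped
```

In a real deployment the hash set would persist across sessions (a file or a vault-side index) so that re-runs never re-extract the same source.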
Origin Tracking
Every Class Traces to Source
Every ontology class and property in OntoRAG tracks which document, which extraction pass, and which merge event created it. Totem tracks file provenance via git but does not trace individual schema elements to their source documents.
Schema Card Governance
Deterministic Merge Checkpoint
OntoRAG uses Schema Cards as governance checkpoints: LLM extraction proposes changes, deterministic merge logic applies them, and the Card records every evolution. Totem's "Schema is law" principle is the same idea, implemented through CLAUDE.md rules rather than programmatic merge logic.
SPARQL Endpoint
Structured Queries
OntoRAG exposes its knowledge graph via SPARQL, enabling structured queries that natural language search cannot reliably express. Totem relies on InfraNodus graph queries and Obsidian Bases, which handle most use cases but cannot perform transitive relationship traversal or set operations.
Forward Architecture

The Perceptagon Connection

The remaining gap in Totem Protocol is semantic middleware — the layer between vault files and knowledge graphs. OTK called it OntoWrapper + OntoExtract. We call it Perceptagon.

The Missing Layer

OTK's OntoWrapper and OntoExtract formed a two-stage pipeline: crawl/ingest documents, then apply NLP to extract entities, relationships, and concepts as RDF metadata linked to source documents. This pipeline — document-level ontology annotation — is precisely what Totem Protocol lacks.

InfraNodus extracts entities and relationships as network graphs (nodes and edges with weights). Claude performs zero-shot extraction through prompting. But neither produces formal ontology output. Neither performs incremental knowledge graph construction from text. Neither resolves the same entity appearing across multiple documents into a single canonical representation.

Perceptagon is designed to fill this gap. Its planned functions map directly onto the OTK extraction pipeline, upgraded with modern approaches drawn from iText2KG, OntoGPT, KONDA, and ODKE+.

Perceptagon's Planned Functions

Function 01
Document-Level Ontology Annotation
Process vault files, session transcripts, and web sources. Extract entities grounded to formal ontologies (OWL classes, SKOS concepts). Produce RDF metadata linked to source documents.
Function 02
Incremental Knowledge Graph Construction
Process one note at a time, extending the existing knowledge graph without full rebuilds. Handle LLM hallucinations through entity matching and re-prompting (iText2KG approach). Topic-independent, works across domains.
Function 03
Cross-Document Entity Resolution
When the same person, concept, or organization appears in multiple documents, recognize and merge them. Use embedding-based similarity for fuzzy matching, SPARQL for exact matching, InfraNodus for gap detection.
Function 04
Template-Based Knowledge Sharing
Formalize the skill/template system as ontology-aware sharing. When a new skill is created, its schema is registered in the ontology. When a template is applied, the ontology tracks which concepts were instantiated.
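Function 03's merge step can be illustrated with a greedy clustering sketch. The vectors below are hand-written stand-ins; in practice they would come from an embedding model, and SPARQL exact matching would run before any fuzzy pass:

```python
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def resolve(mentions: list[tuple[str, list[float]]], threshold: float = 0.9) -> dict[str, str]:
    """Greedy resolution: map each mention to the first canonical
    entity whose embedding similarity clears the threshold."""
    canonical: list[tuple[str, list[float]]] = []
    mapping: dict[str, str] = {}
    for name, vec in mentions:
        for canon_name, canon_vec in canonical:
            if cosine(vec, canon_vec) >= threshold:
                mapping[name] = canon_name
                break
        else:  # no cluster matched: this mention becomes canonical
            canonical.append((name, vec))
            mapping[name] = name
    return mapping

mentions = [
    ("Jonny Dubowsky", [0.90, 0.10, 0.00]),
    ("J. Dubowsky",    [0.88, 0.12, 0.01]),  # near-duplicate mention
    ("NRDC",           [0.00, 0.20, 0.95]),
]
mapping = resolve(mentions)
```

Greedy single-pass resolution is order-dependent; production systems (iText2KG among them) add re-prompting and pairwise verification on top of this basic shape.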

The Matchmaking Problem

OTK's OntoShare used vector cosine matching to connect knowledge supply (what people share) with knowledge demand (what the community needs). This is the same problem the 3x3 Framework addresses — matching Skills, Strategies, and Solutions across people's knowledge bases to find fits for unsolved problems.

3x3 Dimension | OntoShare Equivalent (2002) | Modern Implementation
Skills | User competency profiles | OWL-based skill ontology + embedding-based matching
Strategies | Ontological class hierarchies | Strategy templates with semantic annotations
Solutions | Shared information resources | Deliverable artifacts with frontmatter metadata

The modern approach combines three matching methods that did not exist together in 2002:

Method 01
Formal Ontology Matching (SPARQL)
Structured queries over formal ontologies. Precise, deterministic, handles transitive relationships. Limited to what has been formally modeled.
Method 02
Semantic Similarity (Embeddings)
Neural embedding-based cosine similarity. Handles fuzzy matching and conceptual proximity. The successor to OntoShare's TF-IDF vectors, dramatically more powerful.
Method 03
Gap Detection (InfraNodus)
Identify concepts across knowledge bases that fit requirements for unsolved problems. Find what is missing, not just what is present. The negative space methodology.

Combining these three methods produces a matchmaking system that neither OTK nor any single modern tool achieves alone: formal precision from SPARQL, semantic breadth from embeddings, and creative opportunity discovery from gap detection.
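One way the combination could work in practice is a blended ranking score. This is a sketch under stated assumptions: the weights are illustrative placeholders, not a tuned model, and each input signal would come from one of the three methods above.

```python
def match_score(formal_match: bool, embedding_sim: float, fills_gap: bool) -> float:
    """Blend the three matching signals into one ranking score.
    Weights (0.5 / 0.4 / 0.1) are illustrative, not tuned."""
    return (0.5 * (1.0 if formal_match else 0.0)   # SPARQL: exact ontology match
            + 0.4 * embedding_sim                   # embeddings: fuzzy similarity
            + 0.1 * (1.0 if fills_gap else 0.0))    # gap detection: negative space

# A formally matched candidate outranks a fuzzy-only one, but a fuzzy
# candidate that fills a detected gap closes some of the distance.
scores = {
    "formal": match_score(True, 0.60, False),     # ~0.74
    "fuzzy": match_score(False, 0.95, False),     # ~0.38
    "fuzzy_gap": match_score(False, 0.95, True),  # ~0.48
}
```

The design point is that the deterministic signal dominates when present, while the fuzzy and gap signals keep the ranking useful for everything not yet formally modeled.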

Integration Roadmap

Short-term

SHACL Validation + OntoGPT

Express CLAUDE.md frontmatter schemas as SHACL shapes. Run deterministic validation as a Claude Code skill. Add OntoGPT for domain-specific entity extraction grounded to formal ontologies.

Effort: Low-medium. pySHACL exists. OntoGPT is pip-installable. Schemas are already well-defined.

Medium-term

OntoRAG Pipeline Evaluation + Oxigraph

Evaluate OntoRAG's extraction pipeline for adoption. Add Oxigraph as a lightweight local SPARQL endpoint alongside Obsidian. Export frontmatter to RDF triples. Evaluate Cognee as complement to Letta for evolving knowledge graphs.

Effort: Medium. Multiple components to integrate. Oxigraph has Python bindings. The frontmatter-to-RDF mapping uses vocabulary that CLAUDE.md schemas already provide.

Long-term

Perceptagon as Semantic Middleware

Full extraction + annotation + resolution + matching pipeline. Express the Totem Agent Framework (9 agents, 23+ skills, 22 capabilities) as an OWL 2 ontology. Federated SPARQL across all knowledge stores. 3x3 Framework formalized as OWL with matching rules.

Effort: High. Multi-month project. But it gives Totem Protocol what OTK never achieved: a truly integrated semantic knowledge management system powered by modern AI.

Target Architecture

The Totem Protocol architecture with OTK insights integrated. Perceptagon serves the function that Sesame + OIL served in OTK: a unified semantic layer that all components read from and write to.

Totem Protocol — Integrated Semantic Architecture
Vault Files (Obsidian)
Perceptagon (extraction + annotation)
InfraNodus (viz + gaps)
RDF4J / Oxigraph (SPARQL reasoning)
Letta Witness (observation + governance)
MCP Servers (agent access)
Editorial Sites (presentation)
3x3 Framework (cross-knowledge-base matchmaking)

OTK was right about the architecture and wrong about the implementation technology. Twenty-four years later, the architecture holds. The individual tools have been superseded. And LLMs have solved the extraction and querying problems that limited the original system. The bridge between AI-native tools and formal semantic standards is Perceptagon.

Commercial Architecture

Palantir for the Rest of Us

The semantic web promised enterprise-grade knowledge intelligence. It delivered — but only for organizations that could spend millions on implementation. AI changed the equation. Totem Protocol is the result.

The Cost Barrier That Killed the Semantic Web

Between 1998 and 2008, the European Union invested billions of euros in research-meets-enterprise programs designed to commercialize semantic web technologies. Projects like OTK received targeted funding — 1.3 million euros in OTK's case — to build tools that would make knowledge management ontology-driven, machine-readable, and semantically interoperable.

The technology worked. The economics didn't.

Implementation required ontology designers, data scientists, RDF specialists, and integration engineers. Schema consensus alone could take months. Enterprise deployments ran six to seven figures. The OntoRAG team — building a comparable system in 2025 — spent over $1 million in development costs. Target customers would spend another $200,000 to $500,000 in agency fees, consulting, and internal time to deploy it.

The result: semantic technologies became the exclusive province of governments, defense contractors, and Fortune 50 companies. Everyone else got keyword search and spreadsheets.

The semantic web didn't fail because the architecture was wrong. It failed because the implementation cost exceeded the ROI for all but the largest organizations on earth.

Why It Stalled

Barrier 01
Schema Consensus
Getting stakeholders to agree on a shared taxonomy took months of workshops. Every domain had its own vocabulary. Every team had its own ontology. The governance overhead crushed adoption.
Barrier 02
Integration Friction
RDF triple stores, Protégé editors, SPARQL endpoints, custom ETL pipelines. Each component required specialized skills. The learning curve eliminated self-service.
Barrier 03
ROI Gap
Benefits were real but diffuse — better search, faster onboarding, cross-team knowledge reuse. Costs were immediate and concrete. CFOs couldn't justify the investment against near-term revenue.
Barrier 04
Talent Scarcity
Ontology engineering was a PhD-level discipline. Data scientists who understood both RDF and business domains were rare and expensive. The talent pool never scaled with demand.

Concepts leaked into the mainstream — Google adopted RDF and Linked Data, JSON-LD became a web standard, Wikipedia structured its data as Wikidata, machine learning models started using data cards. But the full promise of semantic intelligence — structured knowledge that machines can reason over, share, and compose — remained locked behind enterprise budgets.

The Inflection Point

Between 2022 and 2026, large language models broke the cost barrier. Not by making semantic web tools cheaper — by making them unnecessary as standalone products.

Every knowledge worker now has access to what OTK's target customers — special projects cohorts, enterprise ontology teams, consulting firms — used to spend hundreds of thousands of dollars to provision: an entity extraction engine, a reasoning system, a natural language query interface, a document synthesis pipeline, and a presentation layer. All running through general-purpose AI agents that require no RDF expertise, no SPARQL training, and no six-month taxonomy workshops.

The EU spent billions funding research into tools that would let organizations manage knowledge semantically. Twenty-four years later, every knowledge worker has their own personal collection of agencies and consultants — they just don't know it yet.

What Changed

Capability | OTK Era (2001) | Enterprise Era (2015) | AI-Native Era (2026)
Entity Extraction | Rule-based NLP (Text-to-Onto) | Trained ML models ($500K+) | Zero-shot LLM prompting (pennies per document)
Knowledge Structuring | Manual ontology editing (OntoEdit) | Enterprise taxonomy platforms ($200K/yr) | InfraNodus + Claude Code skills (included)
Semantic Storage | Sesame RDF repository (custom deploy) | Neo4j / Stardog ($50K–500K/yr) | Obsidian vault + Letta memory (free / $20/mo)
Querying & Reasoning | RQL + OIL inference rules | GraphQL + custom reasoning engines | Natural language via Claude / MCP servers
Knowledge Sharing | OntoShare (vector matching) | Confluence / SharePoint ($10–50/user/mo) | Skills + editorial sites + Slack pipelines
Presentation | Spectacle (custom portal) | Tableau / PowerBI ($15–70/user/mo) | Generated editorial sites (minutes, not weeks)
Total Implementation | $1.3M+ (research project) | $500K–$10M (Palantir-class) | $5K–$50K (Totem Protocol + ShurIQ engagement)

The Shared Pipeline

Both OTK and Totem Protocol follow the same six-stage pipeline. The architecture is identical. The implementation is completely different.

OTK built each stage as a separate specialized tool, each with its own interface, data format, and deployment requirements. Totem Protocol uses general-purpose AI agents that perform multiple functions through different modes of interaction.

Extract → Structure → Store → Query → Share → Present
Extract
OTK: OntoWrapper + Text-to-Onto
Totem: Claude Code + InfraNodus MCP
Structure
OTK: OntoEdit + OIL ontologies
Totem: InfraNodus graphs + Letta blocks + YAML schemas
Store
OTK: Sesame RDF repository
Totem: Obsidian vault + git + Qdrant vectors
Query
OTK: RQL + OIL inference
Totem: Natural language + MCP tool calls + gap detection
Share
OTK: OntoShare (vector matching)
Totem: Skills + Slack pipelines + Google Docs
Present
OTK: Spectacle (knowledge portal)
Totem: Editorial sites + viz hubs + intelligence briefs

The critical difference: OTK required a different specialist for each stage. An ontology engineer for Structure. A Sesame administrator for Store. A SPARQL developer for Query. A web developer for Present. Totem Protocol runs the entire pipeline through a single agent session, with Claude Code orchestrating specialized tools via MCP.

Palantir as Proxy

Palantir Technologies is the closest commercial analogue to what Totem Protocol delivers. Their pitch: enterprise ontology + data integration + custom deployment = actionable intelligence. Their price: $1 million to $10 million per engagement. Their customers: governments, intelligence agencies, Fortune 50.

Shur Creative Partners delivers the same architecture to the next tier down. Fortune 500 companies and nimble brands who need the same quality of insight but cannot justify Palantir's price tag.

Palantir
Gotham / Foundry / AIP
Proprietary ontology layer. Custom data integration. Forward-deployed engineers on-site. $5M+ annual contracts. Black-box deployment model — the client sees dashboards, not the reasoning.
ShurIQ
Totem Protocol + Creative Partners
Open ontology architecture. AI-native extraction and synthesis. Delivered as intelligence briefs, editorial sites, and viz hubs. $5K–$50K engagements. The client sees the reasoning — they get an operating system upgrade for how they understand their business.

The positioning is precise: clients don't just get a report. They get a new way of seeing their business, their market, and the attention flows around them. That is the experiential differentiator from Palantir's black-box model. Palantir tells you what happened. ShurIQ shows you why it matters and what the negative space reveals.

The Super-Concentrated Formula

Totem Protocol has reduced decades of semantic web research — billions of euros in EU funding, thousands of PhD dissertations, hundreds of commercial failures — into a substrate for encoding high-performing expertise-based work. Not by reimplementing the research. By recognizing that LLMs made the implementation layer disposable while the architectural layer remained timeless.

There are dozens, if not hundreds, of businesses, products, and services that can be derived from various combinations of Totem Protocol's capabilities. ShurIQ is the first. It won't be the last.

The formula works because the components are general-purpose:

Layer 01
Totem Protocol
The substrate. 9 agents, 43 skills, 22 capabilities, 5 memory systems, 6 ontology layers. Encodes how high-performing expertise-based work actually flows — from intake to intelligence to deliverable to feedback loop.
Layer 02
ShurIQ
The embodiment. Totem Protocol instantiated with AI agents for brand intelligence, competitive analysis, and strategic positioning. The first product built on the substrate.
Layer 03
Shur Creative Partners
The delivery model. High-performing creative agency with bespoke processes, methods, and systems. Totem Protocol was designed as the substrate to encode exactly this type of work.

This is the business case that OTK's consortium never cracked: how to take a research architecture and make it commercially viable without requiring enterprise-scale implementation budgets. The answer turned out to be — wait twenty-four years for AI to make the specialized tools unnecessary, then rebuild the architecture with general-purpose agents.

The EU Investment in Context

Between the late 1990s and mid-2000s, the European Commission funded hundreds of semantic web projects through the Fifth and Sixth Framework Programmes. The strategy was deliberate: direct graduate students toward commercializable tools, fund consortia that mixed academic research with industrial partners, and measure success by spin-off companies and self-sustaining products.

Roughly one project in ten reached self-sufficiency. OTK's components mostly didn't survive independently, but their DNA is everywhere. Sesame became RDF4J. OIL became OWL. OntoEdit lives on as semafora's OntoStudio X. The research ROI was real. Distribution was the bottleneck.

Era | Investment | Output | Bottleneck
FP5 (1998–2002) | ~€3.6B across IST | OTK, SWAP, KAON, Knowledge Web | Implementation cost per customer
FP6 (2002–2006) | ~€3.6B across IST | SEKT, KnowledgeWeb, SIMILE | Talent scarcity + schema governance
FP7 (2007–2013) | ~€9B across ICT | LOD2, LIDER, xlime | Enterprise adoption / ROI justification
Horizon 2020 (2014–2020) | ~€6B KET + data | Eurostars, KG-based innovation pilots | ML/deep learning pivot; KG deprioritized
AI-Native (2023–) | LLM training costs absorbed by labs | OntoRAG, OntoGPT, iText2KG, Totem Protocol | None — the architecture is now accessible

Totem Protocol sits at the end of this arc. Not as a single breakthrough, but as a convergence point — where twenty-five years of architectural research meets the AI capabilities that finally make the architecture cheap to implement. The semantic web's promise was always real. The cost was the only thing standing in the way. That barrier is gone.

The semantic web didn't die. It went underground, evolved through academic research and enterprise consulting, and resurfaced in 2026 as AI-native knowledge architecture. OTK planted the seed. Totem Protocol is the harvest.