The semantic web stack has five moving parts: an RDF store, a SPARQL engine, a SHACL processor, a serializer, and something to glue them together. Each has its own execution model, its own failure modes, and its own infrastructure requirements. For open-world reasoning and runtime discovery, you need all of them.

For a project plan with 40 tasks? You don’t.

The overbuilt case

A dependency graph where you know every resource and every edge at authoring time is a closed-world problem. There’s nothing to discover at runtime. There’s nothing to federate. The full graph exists before any query runs.

Running five coordinating services against a graph you already have is overbuilt. The machinery exists for a reason, but the reason doesn’t apply here.

Constraints do the same work

I use a constraint language where you declare resources with types and dependencies. The language evaluates constraints by computing intersections. If a field must be a string AND must match a pattern AND must be one of three values, the evaluator computes the intersection. If the intersection is empty, evaluation fails. At compile time.
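The intersection semantics can be sketched in a few lines of Python (a toy model for illustration, not the actual evaluator; the value sets and the pattern are hypothetical):

```python
# Toy model of constraint evaluation as set intersection, in the style
# the text describes: a field constrained to be one of an enumerated set
# AND to match a pattern is satisfiable only if the intersection of the
# two constraints is non-empty.
import re

def intersect(allowed: set[str], pattern: str) -> set[str]:
    """Keep only the enumerated values that also satisfy the pattern."""
    return {v for v in allowed if re.fullmatch(pattern, v)}

# Hypothetical constraints on a field:
ok = intersect({"T1", "T2", "T3"}, r"T[0-9]+")   # non-empty: satisfiable
bad = intersect({"todo", "done"}, r"T[0-9]+")    # empty: evaluation fails
```

In the constraint language itself, an empty intersection like `bad` is a compile-time error rather than a runtime validation failure.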

This is structurally the same thing SHACL does. A shape declares that a node must have certain properties with certain value constraints. If the data violates the shape, the report says sh:conforms: false. The difference is when.
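The structural parallel can be made concrete with a toy shape check (assumed shapes and data, not a real SHACL processor; only the vocabulary terms sh:ValidationReport, sh:conforms, sh:focusNode, and sh:resultPath are standard):

```python
# Toy SHACL-style validation: a shape maps each required property to a
# predicate on its value; the report mirrors the shape of a SHACL
# sh:ValidationReport with sh:conforms true or false.
def validate(node: dict, shape: dict) -> dict:
    results = [
        {"sh:focusNode": node.get("@id"), "sh:resultPath": prop}
        for prop, pred in shape.items()
        if prop not in node or not pred(node[prop])
    ]
    return {"@type": "sh:ValidationReport",
            "sh:conforms": not results,
            "sh:result": results}

# Hypothetical shape: both fields must be present and list-valued.
shape = {"@type": lambda v: isinstance(v, list),
         "depends_on": lambda v: isinstance(v, list)}
report = validate({"@id": "task:1", "@type": ["Task"], "depends_on": []}, shape)
```

Whether this runs as a post-hoc validator or as a compile-time constraint is exactly the "when" the text is pointing at.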

The prior art is SHACL, SPARQL, and the RDF stack. What’s new is not the output (it’s the same W3C artifacts) but the evaluation architecture. Instead of five runtime components coordinating, one evaluation step does the work of three: query, validate, serialize. The claim is narrow: compile-time constraints are sufficient for the closed-world case.

What falls out

Every resource declares @type (a set of types) and depends_on (a set of dependencies). Two fields. From that, comprehensions compute topology, depth, ancestors, critical paths, impact analysis. Each computation is deterministic and total.
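A minimal sketch of those computations, assuming only the two declared fields (the resource names are invented for illustration):

```python
# Each resource carries exactly two fields: a set of types and a set of
# dependency edges. Everything else is computed from the edges.
resources = {
    "design": {"@type": {"Task"}, "depends_on": set()},
    "build":  {"@type": {"Task"}, "depends_on": {"design"}},
    "test":   {"@type": {"Task"}, "depends_on": {"build"}},
    "docs":   {"@type": {"Task"}, "depends_on": {"design"}},
}

def depth(r: str) -> int:
    """Longest dependency chain below r; total on an acyclic graph."""
    return 1 + max((depth(d) for d in resources[r]["depends_on"]), default=-1)

def ancestors(r: str) -> set[str]:
    """Transitive closure of depends_on."""
    direct = resources[r]["depends_on"]
    return direct | {a for d in direct for a in ancestors(d)}

def critical_path(r: str) -> list[str]:
    """Longest chain ending at r, in dependency order."""
    deps = resources[r]["depends_on"]
    longest = max((critical_path(d) for d in deps), key=len, default=[])
    return longest + [r]
```

Because the graph is closed at authoring time, each function terminates and returns the same answer on every run; there is no query planner and no runtime state.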

The W3C outputs are projections of this graph. Each projection walks the resources and emits the target vocabulary:

  • SHACL: compare graph against declared constraints, emit sh:ValidationReport
  • SKOS: collect type sets, emit skos:ConceptScheme with members per type
  • OWL-Time: take the critical path schedule, emit time:Interval per resource
  • PROV-O: follow dependency edges, emit prov:Entity with prov:wasDerivedFrom
  • DCAT: catalog the resources, emit dcat:Dataset per resource type
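One projection, sketched end to end (the resource map is hypothetical; the PROV-O terms and namespace are standard):

```python
# PROV-O projection: walk the dependency edges and emit JSON-LD with
# prov:Entity nodes linked by prov:wasDerivedFrom.
import json

resources = {
    "design": {"@type": {"Task"}, "depends_on": set()},
    "build":  {"@type": {"Task"}, "depends_on": {"design"}},
}

doc = {
    "@context": {"prov": "http://www.w3.org/ns/prov#"},
    "@graph": [
        {"@id": rid,
         "@type": "prov:Entity",
         "prov:wasDerivedFrom": sorted(r["depends_on"])}
        for rid, r in sorted(resources.items())
    ],
}
print(json.dumps(doc, indent=2))
```

The output is plain JSON-LD: a triplestore can import it, but nothing in the pipeline that produced it is one.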

One graph, seventeen W3C vocabularies.

The claim and its limits

In patent prosecution, you learn to write claims that cover exactly what you’ve built, no more. The temptation is to claim broadly. The examiner’s job is to find prior art that narrows you.

So here’s the narrow claim: for closed-world graphs where every resource is declared and every edge is known at authoring time, a constraint language can produce conformant W3C linked data without a runtime stack. The output is standard JSON-LD. Any triplestore can import it. Any JSON-LD processor can read it.

Here’s what’s explicitly outside the claim: open-world reasoning, runtime discovery, federation across organizational boundaries where you don’t control the schema, graphs that change after authoring, probabilistic or uncertain assertions.

The interesting question is how many of the graphs people load into triplestores are actually closed-world problems wearing open-world infrastructure. Project plans, infrastructure topology, curricula, supply chains, research pipelines. These are all cases where you know the full graph at authoring time. The open-world assumption isn’t serving them. It’s just the only architecture available.

Enablement

Could a person having ordinary skill in the art reproduce this? The graph engine is one file. Each W3C projection is one file. The input is typed resources with dependency edges. The export is one command: cue export -e <expression> --out json.

The barrier isn’t complexity. It’s that constraint languages aren’t what people reach for when they hear “linked data.” They reach for triplestores, because that’s what the tutorials teach. The question is whether the output matters more than the architecture that produces it.