Everstar News | Nuclear AI, Licensing & Compliance

Your Plant Already Has a Knowledge Graph. It's a Guy Named Dave.

Author

Theresa Clark


Every nuclear plant I've ever walked through has a "Dave."


Dave knows which calculation justified the current RCS pressure limit. Dave knows why that surveillance test runs every 92 days instead of quarterly. Dave knows which procedure you need to read before you touch that valve, and he knows it because he was in the room when it was written.


Ask Dave a question and you get an answer in four minutes. Ask the document management system the same question and you get 847 search results and a headache.


Dave is retiring in eighteen months.


When he goes, that knowledge goes with him. Not because it isn't documented somewhere — it is, in seventeen different systems, across four decades of Word documents, scanned PDFs, and a shared drive that nobody has reorganized since 2009. The knowledge exists. It just isn't connected. And in nuclear, disconnected knowledge is almost the same as no knowledge at all.


That's the problem that the thing we talk about at Everstar, the "knowledge graph," actually solves. And it's a bigger deal than most people in this industry realize.


What it actually is

A knowledge graph is a way of storing information where the relationships between things are treated as seriously as the things themselves.


In a spreadsheet, a relationship is implied. You know that valve V-101 is governed by Technical Specification 3.6.6 because someone put them in the same row, or because Dave told you. In a knowledge graph, that connection is an explicit, queryable object. It has a type, a direction, and it can carry its own metadata. You can follow it.
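In code, an explicit relationship is just a small record. Here is a minimal sketch in Python; the valve and TS identifiers are the ones from the example above, and the metadata fields are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    """A relationship as a first-class object: it has a type,
    a direction (source -> target), and its own metadata."""
    source: str            # e.g. a component ID
    relation: str          # the edge type
    target: str            # e.g. a governing document
    metadata: dict = field(default_factory=dict)

# The implicit spreadsheet row becomes an explicit, queryable fact:
edge = Edge(
    source="V-101",
    relation="GOVERNED_BY",
    target="TS 3.6.6",
    metadata={"basis": "containment isolation", "revision": "Rev 12"},
)
```

The point is not the syntax: it's that the connection now exists as data you can filter, follow, and attach history to, instead of as a convention someone has to remember.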


You can ask: what else does this valve connect to? What surveillance procedures flow from this TS? What calculations support the valve's design basis? What changes have touched this system in the last ten years?


The answer comes back in seconds. Not because the system is magic, but because someone did the hard work of making the connections explicit rather than leaving them in Dave's head.
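Answering the first of those questions is a one-hop lookup once the edges are explicit. A toy sketch, with made-up document and procedure identifiers:

```python
from collections import defaultdict

# Edges as explicit (source, relation, target) triples;
# every identifier here is illustrative.
triples = [
    ("V-101", "GOVERNED_BY", "TS 3.6.6"),
    ("TS 3.6.6", "VERIFIED_BY", "SR 3.6.6.1"),
    ("V-101", "SIZED_BY", "CALC-MECH-0042"),
    ("V-101", "MODIFIED_BY", "EC-2019-114"),
]

index = defaultdict(list)
for src, rel, dst in triples:
    index[src].append((rel, dst))

def connections(node):
    """What else does this node connect to, and how?"""
    return index[node]

print(connections("V-101"))
# [('GOVERNED_BY', 'TS 3.6.6'), ('SIZED_BY', 'CALC-MECH-0042'),
#  ('MODIFIED_BY', 'EC-2019-114')]
```

A production graph store does the same thing at scale, with indexes in both directions; the speed comes from the structure, not from anything exotic.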


Why nuclear is the perfect use case

We are, at our core, a cross-referencing industry. Every SSC has a design basis. Every safety-related design basis links to a licensing document. Every licensing submittal traces to a regulation. Every change to any of those things needs to ripple through all the others.


We know this. We live this. We just haven't had a tool that treats it as the fundamental structure of the problem.


Think about what a 10 CFR 50.59 screening actually requires. You need to know what the change touches — not just the component, but the FSAR sections that describe it, the calculations that sized it, the specs that govern it, the surveillance tests that verify it, the analyses that assumed it. Today, that work takes hours, if not weeks, because the connections are implicit, scattered, and dependent on whoever happens to know the history.


A knowledge graph makes that traversal instant. Start at the proposed change. Follow the edges. Return the complete picture of what's affected. The screening still requires engineering judgment, but the archaeology is done.
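"Start at the change and follow the edges" is a plain breadth-first traversal. A sketch under invented identifiers (none of these documents are real):

```python
from collections import deque

# X -> Y means "Y depends on X": a change to X may affect Y.
affects = {
    "V-101": ["FSAR 6.2.4", "CALC-MECH-0042", "SR 3.6.6.1"],
    "CALC-MECH-0042": ["FSAR 6.2.4", "DBD-CNTMT-01"],
    "FSAR 6.2.4": [],
    "SR 3.6.6.1": ["STP-123"],
    "DBD-CNTMT-01": [],
    "STP-123": [],
}

def blast_radius(change):
    """Follow the edges outward from a proposed change and return
    everything reachable: the screening's 'what does this touch' list."""
    seen, queue = set(), deque([change])
    while queue:
        node = queue.popleft()
        for nxt in affects.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(blast_radius("V-101"))
# ['CALC-MECH-0042', 'DBD-CNTMT-01', 'FSAR 6.2.4', 'SR 3.6.6.1', 'STP-123']
```

Note that the transitive hits (the design basis document, the test procedure) fall out automatically; those are exactly the connections that get missed when the traversal lives in someone's memory.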


The same logic applies to license amendment request preparation, where that archaeology (across many precedent plants, often) is the bottleneck. To corrective action pattern recognition, where the signal is buried in tens of thousands of condition reports that nobody has connected to the components they affect, the safety functions those components serve, or the industry operating experience that might be telling you something. To inspection readiness, where an NRC inspector asks a question that should take five minutes and instead takes three days (or a months-long dispute).


An acute need for new nuclear

Here's the one that keeps advanced reactor developers up at night.


When you propose a change to a highly integrated modular design, the question isn't just "what does this touch." It's what does this touch, and what does that touch, and what does that touch. For a developer working through a design application with thousands of cross-references, a single design change can cascade through hundreds of sections, dozens of calculations, and potentially trigger a licensing submittal that costs millions and takes years to resolve. Figuring out the blast radius of a proposed change is itself a major project — before a single line of engineering work has started.


A knowledge graph changes that calculation entirely. You propose the change, you run the traversal, and you get back the full impact picture: which sections need revision, which calculations are invalidated, whether the change crosses a threshold that requires NRC approval, and roughly what the licensing timeline implication looks like. That's not a hypothetical future capability. That's a query against a well-built graph.
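One way to picture that query: traverse from the change, then bucket what it reaches by node type, flagging anything that crosses a licensing threshold. This is a toy illustration of the idea, not any particular product's implementation, and every identifier is invented:

```python
from collections import deque

# Each node carries a type; an edge means "a change here propagates there".
node_type = {
    "DC-7.3": "design_change", "SEC-4.2": "section", "SEC-9.1": "section",
    "CALC-TH-019": "calculation", "LIC-THRESHOLD-A": "licensing_threshold",
}
edges = {
    "DC-7.3": ["SEC-4.2", "CALC-TH-019"],
    "CALC-TH-019": ["SEC-9.1", "LIC-THRESHOLD-A"],
    "SEC-4.2": [], "SEC-9.1": [], "LIC-THRESHOLD-A": [],
}

def impact_picture(change):
    """Traverse from the proposed change and bucket everything it reaches."""
    seen, queue = set(), deque([change])
    while queue:
        for nxt in edges[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    picture = {"sections": [], "calculations": [], "needs_nrc_approval": False}
    for node in sorted(seen):
        kind = node_type[node]
        if kind == "section":
            picture["sections"].append(node)
        elif kind == "calculation":
            picture["calculations"].append(node)
        elif kind == "licensing_threshold":
            picture["needs_nrc_approval"] = True
    return picture
```

The licensing-timeline estimate is the part that still takes judgment; but the inventory of what the change touches, which is the expensive part today, comes straight off the graph.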


The downstream value is even more interesting. When your design team can see the cost of a change — in licensing terms, in procurement terms, in schedule terms — before they commit to it, they make different decisions. Better decisions. The graph doesn't do the engineering. It makes the engineering visible. And in a capital-intensive, schedule-sensitive, regulator-scrutinized industry, visibility is worth an enormous amount of money.


How to make it happen

I won't pretend this is easy to build. The technology is the least of it.


The hard part is the ontology — deciding, before you populate anything, what types of things exist in your world and how they relate to each other. Get that wrong and you're rebuilding. The hard part is extracting structure from forty years of unstructured documents. The hard part is entity resolution: "RCP-1A," "Reactor Coolant Pump 1A," and "RCP Train A" are the same thing, and someone has to tell the system that. The hard part is what happens when the graph surfaces a conflict between what the FSAR says and what the calculation says. (That's a feature, not a bug — but resolving it is real engineering work.)
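The entity-resolution step, at its simplest, is an alias table plus normalization; the hard part is that engineers, not the tool, have to supply the mappings. A minimal sketch using the names from the paragraph above:

```python
import re

# Known aliases mapped to a canonical ID. In practice this table is
# curated by people who know the plant; the code just applies it.
ALIASES = {
    "rcp-1a": "RCP-1A",
    "reactor coolant pump 1a": "RCP-1A",
    "rcp train a": "RCP-1A",
}

def resolve(mention):
    """Normalize a raw mention and map it to its canonical entity,
    or return None so a human can adjudicate the new alias."""
    key = re.sub(r"\s+", " ", mention.strip().lower())
    return ALIASES.get(key)

print(resolve("Reactor  Coolant Pump 1A"))  # "RCP-1A"
print(resolve("charging pump 2B"))          # None -> needs human review
```

Real systems layer fuzzy matching and review queues on top of this, but the principle holds: unresolved mentions go to a person, because a silently wrong merge in a nuclear knowledge base is far worse than an unanswered one.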


And the hardest part of all is maintaining it. A knowledge graph drifts from reality if it isn't updated when the design changes, when the procedure gets tweaked, when the amendment is approved. A drifted graph is worse than no graph, because it gives you false confidence.


This is not a spreadsheet upgrade. It is a commitment to treating your plant's knowledge as a living system rather than a static archive.


What it's worth

The plants and developers that get this right will run faster screenings, prepare LARs in a fraction of the time, catch precursors before they become events, walk into NRC inspections with complete traceable answers on demand, and know the licensing cost of a design change before they commit to it.


They'll stop re-deriving history that already exists. They'll stop losing institutional knowledge every time a senior engineer retires. They'll stop making expensive design decisions without understanding the full downstream implications.


More than that: they'll stop bothering Dave so much.


Dave deserves a real retirement. And your plant deserves a knowledge base that doesn't walk out the door with him.


If you want to talk through what this looks like for your organization — what it takes to build it right, and where to start — reach out to me directly at theresa@everstar.ai. I've been thinking about this problem for a long time, and I'm happy to think about it with you.



Theresa Clark is Chief Nuclear Officer at Everstar. She spent 21 years at the NRC across new reactors, operating reactors, radioactive materials, and rulemaking before joining Everstar to build something better. See her prior posts on AI and regulatory policy at everstar.ai/news.
