Structured Trust Assurance

Most people, when they think about “security,” imagine firewalls, audits, and clever custody setups. In the digital-asset world, we also have a habit of pointing to infrastructure and tools and saying that this is where the safety comes from. Those things do matter, of course, but there’s a much simpler and more direct point where things actually become real: the moment someone (or something) signs.

That signing step is where someone’s intention turns into an action that you can’t easily take back: money actually moves, people’s roles and permissions change, contracts are upgraded, and governance decisions are locked in. Up until that point, people are discussing, planning, and preparing; once the signing happens, you are dealing with the concrete results of those choices, whether they were good or bad. What I want to argue in this piece is that security has less to do with isolated controls scattered around the system and much more to do with how trust is organized around those signing moments. I’ll walk through how normal day‑to‑day business processes quietly evolve into a de‑facto trust model, how I’m building a tool that makes that structure visible, and why this matters in systems where a handful of actions can have a disproportionate impact.

The Problem: Trust Structures That Grow in the Dark

In practice, organizations rarely sit down and carefully design a complete trust model and then follow it consistently. Instead, the trust model grows in a slightly messy and improvised way over time.

You handle one exception to the process because something is urgent, and then that exception stops being exceptional. A bot that was meant to be temporary becomes part of the normal workflow, simply because removing it is more work than keeping it. A particular signer group is reused because people are busy and don’t feel like setting up a new configuration. One laptop ends up being used for more and more tasks, until it quietly becomes the laptop for a set of critical operations.

Bit by bit, those local decisions accumulate and solidify into a governance structure that no one formally wrote down, but that everyone depends on. If you look at the documentation, you might see:

  • tidy diagrams of multisig wallets and roles,

  • written policies describing who is supposed to approve which types of actions,

  • statements claiming that separation of duties is in place.

If you look at how things actually happen, you often find:

  • the same set of signers appearing in very different critical flows,

  • supposedly “independent” paths that share devices, providers, or vendors,

  • shortcuts that exist only in people’s heads or in private chat threads,

  • automation that nobody fully understands anymore, but nobody dares to touch either.

The distance between the formal description of trust and the way trust actually works in daily operations is where many of the interesting risks show up. My aim here is to close some of that distance by turning real operations into an explicit trust model that can be inspected and discussed, instead of relying on assumptions.

The Tool: Describing How Work Actually Happens

I’m building a tool that takes your real business processes - the way work is actually done when people are in a hurry or just following habit - and turns them into a structured picture of trust. The goal is not to impress anyone with fancy visuals; it’s to produce something that a reasonably technical person can read and argue about.

The starting point is fairly simple: you describe your processes as chains of objects interacting with each other. A very common pattern in this space might look like this:

a signer uses a hardware wallet → the hardware wallet plugs into a laptop → the laptop connects to a web interface → the web interface sends a transaction to an RPC node → the node broadcasts the transaction → the contract executes some change.

You can write it in that arrow notation or in plain sentences; either way, what we’re really capturing is: who interacted with what, through which path, and what came out at the end.
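As a rough sketch of what that capture step could look like (the function name and shapes are my own illustration, not the tool's actual API), the arrow notation splits naturally into ordered steps:

```python
# Hypothetical sketch: turning an arrow-notation process description into
# ordered (source, target) steps. Illustrative only, not the tool's real API.
def parse_chain(description: str) -> list[tuple[str, str]]:
    """Split "a → b → c" into consecutive (a, b), (b, c) steps."""
    stages = [s.strip() for s in description.split("→")]
    return list(zip(stages, stages[1:]))

steps = parse_chain(
    "signer → hardware wallet → laptop → web interface → RPC node → contract"
)
# steps[0] is ("signer", "hardware wallet"); five steps in total
```

Plain sentences would need a bit more parsing, but the result is the same: an ordered list of who touched what.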

On top of that, you add context that you probably already think about when you design or review processes:

  • how much value could be affected if something went wrong,
  • how often this particular process runs (once a year, once a day, dozens of times an hour),
  • how often it is treated as urgent,
  • whether parts of it are allowed to be automated or are expected to be manual,
  • and any hard constraints you are stuck with, such as jurisdictions, staffing limits, uptime requirements, or regulatory expectations.

You also include some basic information about the objects involved:

  • who owns or operates each one,
  • roughly where they are (region, environment, time zone),
  • whether they are humans, bots, backend services, or MPC parties,
  • and which dependencies they share, such as the same cloud provider, the same fleet of devices, the same team, or the same vendor.

The tool does not need a perfect, fully up‑to‑date inventory to be useful. It just needs a reasonably honest approximation of reality and a willingness to adjust the model as you learn more.
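To make the two ingredient lists above concrete, here is a minimal sketch of what such an input model could look like. The field names and types are my own simplification, not the tool's real schema:

```python
from dataclasses import dataclass

@dataclass
class ModelObject:
    name: str
    kind: str                  # "human", "bot", "service", "mpc_party", ...
    owner: str                 # who owns or operates it
    region: str                # rough location: region, environment, time zone
    shared_deps: frozenset[str] = frozenset()  # same cloud, vendor, team, ...

@dataclass
class Process:
    name: str
    steps: list                  # ordered (actor, action, target) triples
    value_at_risk: float         # how much value could be affected
    runs_per_day: float          # once a year ≈ 0.003, dozens/hour ≈ 500+
    urgent_fraction: float = 0.0 # how often it is treated as urgent
    manual_only: bool = False    # whether parts must stay manual
```

An honest approximation filled into a structure like this is enough to start asking questions; precision can come later.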

Under the Hood: Graphs, Paths, and Layers of Risk

Internally, the tool turns all of this into a graph representation:

  • nodes correspond to objects such as people, devices, applications, pieces of infrastructure, or on‑chain contracts,
  • edges represent actions or relationships, such as “uses,” “connects to,” “signs,” “sends transaction,” “broadcasts,” “executes,” “approves,” or “changes policy.”

A business process is then just a path that runs through a sequence of these nodes and edges.

Once you have that graph, you can start asking questions that are surprisingly awkward to answer if all you have are policy documents and scattered spreadsheets. For example:

  • Which concrete paths in the system can move funds from point A to point B?
  • Which paths can change who has long‑term control over a given pool of funds?
  • Which people, devices, vendors, or components sit in the middle of several important processes at once?
  • What is the effect if one object - say, a single laptop, one RPC provider, or a specific CI pipeline - fails or gets compromised?
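Two of those questions can be answered with surprisingly little machinery once the processes exist as edge lists. The process names and objects below are invented for illustration:

```python
from collections import defaultdict

# Assumed shapes, not the tool's real API: each process is a list of edges.
processes = {
    "payout":  [("signer-A", "laptop-1"), ("laptop-1", "rpc-X"), ("rpc-X", "treasury")],
    "upgrade": [("signer-A", "laptop-1"), ("laptop-1", "rpc-X"), ("rpc-X", "proxy-admin")],
}

# Which objects sit in the middle of several processes at once?
membership = defaultdict(set)
for name, edges in processes.items():
    for a, b in edges:
        membership[a].add(name)
        membership[b].add(name)
shared = {node for node, procs in membership.items() if len(procs) > 1}
# shared contains signer-A, laptop-1, and rpc-X: all three span both flows

def affected(node: str) -> set[str]:
    """Blast radius: which processes break if this one object is compromised?"""
    return membership.get(node, set())
```

Even this toy version makes the laptop's quiet criticality visible: losing `laptop-1` stalls both fund movement and upgrades.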

To avoid turning this into one vague “risk score,” the tool looks at risk in a few separate layers. At a high level, it considers:

  1. Who or what could cause harm? That might be external attackers, insiders, third‑party providers, or even just accidents.
  2. In what way would the harm actually happen? Through technical compromise, social engineering, physical access, a breakdown of a manual process, or misuse of a policy or contract.
  3. What factors make this easier than it should be? Things like having single points of failure, weak separation of duties, shared dependencies that were supposed to be independent, poor recovery planning, or missing monitoring.

The point is not to generate a long list of worst‑case horror stories. The goal is to attach realistic, concrete situations to the actual objects and paths you already rely on, and then look at how well (or how badly) the organization would absorb those situations.

A particularly important step is to look at where different processes intersect. This is often where comfortable assumptions give way to awkward realities. You can have processes that look well‑controlled on their own, but when you combine them, the overall structure turns out to be fragile because the same small group of signers or systems sits in positions where they can, in effect, override everything else.

For instance:

  • the same quorum of signers might be able to approve both contract upgrades and large treasury transfers,
  • an upgrade path might quietly grant new transfer capabilities that were not obvious in the documentation,
  • a single CI pipeline or admin key might indirectly control many systems that are supposed to be independent.

When patterns like this show up, you realize you may have created what is essentially a universal takeover path, not because someone wanted that, but simply because of how separate processes were composed over time. This is exactly the sort of thing the graph representation brings to the surface.
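The first bullet above, the shared quorum, reduces to a set intersection once signer groups are modeled explicitly. A hedged sketch with invented names:

```python
# Illustrative only: detecting when the same sub-quorum can approve both
# contract upgrades and large treasury transfers.
quorums = {
    "treasury_transfer": {"alice", "bob", "carol"},
    "contract_upgrade":  {"alice", "bob", "dave"},
}
threshold = 2  # both actions are 2-of-3

overlap = quorums["treasury_transfer"] & quorums["contract_upgrade"]
if len(overlap) >= threshold:
    # alice and bob together can both upgrade the contract and move funds:
    # a de-facto universal takeover path nobody designed on purpose.
    print(f"takeover quorum: {sorted(overlap)}")
```

Neither quorum looks alarming on its own; only the intersection reveals the composed risk.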

What the Tool Actually Produces

I know this sounds like a mixture of familiar threat-modeling approaches, and to some extent it is. The difference lies in automating and simplifying that work. The tool is built to produce a few concrete outputs that are meant to drive changes, not just to decorate slides.

1. A Minimal Signer Pool and Hierarchy

The first output is a proposal for a minimal signer pool and signer hierarchy across all the processes you have modeled. If you optimize signer setups for individual processes in isolation, it is quite easy to convince yourself that everything looks fine. The real picture, however, is about how all of those processes fit together.

So the tool tries to:

  • keep everyday operations practical (for example, avoid unrealistic approval requirements for simple, low‑risk tasks),
  • reduce dangerous overlaps between actions that can change a lot,
  • and create a clear distinction between control‑plane powers (things like upgrades, role administration, and configuration changes) and funds‑plane powers (things like transfers and withdrawals).

Signer reuse is not treated as automatically bad. The tool is simply there to highlight the cases where reuse leads to too much power concentrating in the same hands or systems.
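One way to picture the constraint behind the third bullet (my own simplification of the idea, not the tool's optimizer): given a proposed signer assignment per action, flag any control-plane/funds-plane pair whose shared signers reach a takeover-capable size.

```python
# Illustrative constraint check; names and shapes are invented for this sketch.
def risky_overlaps(assignments: dict, control: set, funds: set, threshold: int):
    """Return (control_action, funds_action, shared_signers) triples where
    the overlap alone could satisfy both approval requirements."""
    findings = []
    for c in control:
        for f in funds:
            shared = assignments[c] & assignments[f]
            if len(shared) >= threshold:
                findings.append((c, f, frozenset(shared)))
    return findings

assignments = {
    "upgrade":  {"alice", "bob", "carol"},   # control-plane action
    "transfer": {"alice", "bob", "erin"},    # funds-plane actions
    "payroll":  {"erin", "frank"},
}
bad = risky_overlaps(assignments, control={"upgrade"},
                     funds={"transfer", "payroll"}, threshold=2)
# flags upgrade/transfer via {alice, bob}; payroll's reuse of erin is fine
```

The proposal stage then tries assignments that keep such findings empty while staying practical for low-risk, everyday actions.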

2. A “To‑Be” Graph: Same Business, Different Shape

The second output is a “to‑be” graph, which is essentially a modified version of your system that keeps the same business outcomes but rearranges the trust structure.

This might involve suggestions such as:

  • replacing single signers with threshold schemes for certain actions,
  • splitting sensitive flows into stages like propose, approve, and execute,
  • inserting policy enforcement between the user interface and the RPC layer,
  • or separating devices, vendors, or teams across paths that should not be able to compromise each other.

You can compare the “as‑is” and “to‑be” graphs side by side and see exactly what changed and what problems those changes are meant to address.
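Because both graphs are just edge sets, "exactly what changed" can be computed rather than eyeballed. A minimal sketch, with an invented policy-engine insertion as the example change:

```python
# Diffing "as-is" vs "to-be" edge sets; objects and edges are illustrative.
as_is = {("signer", "laptop"), ("laptop", "rpc"), ("rpc", "contract")}
to_be = {("signer", "laptop"), ("laptop", "policy-engine"),
         ("policy-engine", "rpc"), ("rpc", "contract")}

added = to_be - as_is      # new policy-enforcement hop before the RPC layer
removed = as_is - to_be    # the direct laptop → rpc path goes away
```

Each added or removed edge can then be annotated with the finding it is meant to address, so the review is about specific paths rather than a whole redesign at once.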

3. Concrete Policy Rules and Guardrails

The third output is a set of policy rules that can be enforced in code or configuration, so that the system does not depend entirely on people always doing the right thing.

These rules might include things like:

  • transaction size limits and allowlists,
  • timelocks for changes that have large or long‑lasting impact,
  • extra approvals when certain risk thresholds are crossed (based on amount, asset type, or destination),
  • and constraints on which kinds of transactions are allowed to originate from particular process paths.

If you have ever seen a perfectly reasonable process break down because a single “special urgent case” was handled informally, the value of having these rules enforced by the system itself is probably clear.
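To give the flavor of rules "enforced in code," here is a toy evaluator. The limits, allowlist, and field names are invented for the sketch, not a real policy engine's configuration:

```python
from dataclasses import dataclass

@dataclass
class Tx:
    amount: float
    destination: str
    origin_path: str   # which modeled process path produced this transaction

ALLOWLIST = {"0xTreasurySafe", "0xPayrollSafe"}   # illustrative destinations
LIMITS = {"payout": 50_000.0}                     # per-path size limits
EXTRA_APPROVAL_ABOVE = 10_000.0                   # risk threshold

def evaluate(tx: Tx) -> list[str]:
    """Return guardrail violations and extra requirements for a transaction."""
    findings = []
    if tx.destination not in ALLOWLIST:
        findings.append("destination not on allowlist")
    limit = LIMITS.get(tx.origin_path, float("inf"))
    if tx.amount > limit:
        findings.append(f"exceeds {tx.origin_path} limit of {limit:,.0f}")
    if tx.amount > EXTRA_APPROVAL_ABOVE:
        findings.append("requires second approval")
    return findings
```

The "special urgent case" still exists, but it now surfaces as explicit findings that someone has to consciously override, rather than a shortcut taken in a chat thread.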

4. Organizational and Administrative Controls

The fourth output deals with organizational and administrative controls.

Some of the more important risks do not live in cryptographic details or smart‑contract code. They live in who reports to whom, where people are located, how they travel, and what kinds of incentives they have. Based on the graph, the tool might surface observations like:

  • certain signers probably should not be in a direct reporting relationship,
  • some groups would be safer if they were kept in different regions or time zones,
  • specific people should avoid all being in the same place at the same time during certain periods,
  • or a few roles should have clearer rules around conflicts of interest.

In those cases, the answer is not “add another key.” The answer is to change how people and responsibilities are arranged around the keys and systems you already have.

The Real Benefit: Making Trade‑Offs Explicit

The main benefit I’m aiming for is not that the tool will tell you to “add more security.” It is that it gives you a clearer picture of how things actually work, so that trade‑offs can be discussed in concrete terms instead of guesswork.

You end up with a single map that shows how authority and value move through your organization. You can see how decisions made for convenience - reusing the same signers, adding extra automation, standardizing on one vendor, skipping a step in the approval chain - gradually reshape the trust model, even if nobody updates the documentation.

You can also talk about trade‑offs more calmly. There will be situations where speed is genuinely important, where automation is necessary, or where a smaller signer set really is the practical option. The tool doesn’t try to shame those choices. Instead, it shows what they cost in terms of trust and suggests where you might introduce compensating controls: perhaps more separation in the control plane, stronger guardrails on high‑risk actions, or some straightforward organizational changes.

For audits, incident reviews, or internal governance debates, this gives you something to refer back to. Instead of arguing purely from intuition or fear, you can say:

  • these are the paths we actually use in production,
  • these are the overlaps and shared dependencies we rely on,
  • these are the ways one action or system can dominate another,
  • this is the redesigned hierarchy we are considering,
  • and this is the remaining risk we are choosing to accept.

It does not turn every discussion into a simple yes‑or‑no decision, but it anchors the conversation in structure rather than in vague feelings.

Why I’m Working on This Now

I’m doing this now because the ecosystem has reached a point where the old “a few smart people with hardware wallets” pattern is obviously not enough.

There is more value under management, more automation and bots involved in moving that value, more third‑party integrations, more regulation and scrutiny, and more capable and motivated attackers.

The informal trust models that worked when teams were small and systems were simple don’t hold up very well in this kind of environment.

Distributed ledger systems are also quite unforgiving. They do not care how confident someone sounded in a meeting or how sure a team felt about an exception. If the wrong thing gets signed, it will be executed. That harshness, in a way, makes it very clear that most of what we call “security” is actually about governance and about how day‑to‑day operations are designed.

If this tool ends up doing its job properly, the result I’m aiming for is not just nicer diagrams or slightly tidier multisig configurations. I want a reusable way to translate business decisions into explicit trust structures, and, over time, to distill a baseline set of patterns that still work whether you’re dealing with a stablecoin, a protocol treasury, a fintech approval workflow, or any other system where a relatively small number of actions can have very large effects.

I’m calling this STACKED - Structured Trust Assurance in Crypto Key Environment Deployments. It’s not meant to be a magic solution. It’s meant to be a practical way to see how your trust actually works today and to shape it deliberately, instead of letting it grow by accident.

Structured Trust Assurance - Roman Semko