Platform overview
FortisX provides an analytics and policy layer for organisations that allocate capital into validator-based networks. Rather than operating validators or custodying assets, it focuses on making validator, pool, and network behaviour observable, assessable, and governable through metrics, risk indicators, and explicit allocation policies.
At a high level, FortisX observes how validators, pools, and networks behave; turns these observations into time-series metrics and risk indicators; and uses them to drive policies that define where capital can and cannot be allocated. The same analytics that support internal policies are exposed through dashboards and APIs so that operators and risk teams can inspect both the data and the reasoning behind allocation decisions.
FortisX is initially focused on major validator-based networks and is designed to extend to additional networks that expose sufficient data to support this analytic and policy layer.
Role in the staking ecosystem
Validator-based networks require several distinct layers to function in practice:
protocols and clients that implement consensus and state transitions;
validators and infrastructure providers that operate nodes and participate in consensus;
custody and execution systems that hold keys and apply staking decisions;
monitoring, analytics, and policy tooling that make behaviour observable and controlled.
FortisX focuses on the last of these layers. It does not define consensus, run the networks, or custody assets. Instead, it provides an analytics and policy plane: a consistent environment where validator and network behaviour can be monitored, where risk can be assessed, and where allocation and rebalancing decisions can be expressed as explicit policies.
The platform is designed to work alongside different operational setups. A single analytic and policy framework can be used across multiple validator providers, custodians, or internal staking implementations, so that allocation logic is not fragmented between tools or teams.
Core capabilities
Validator and network analytics
FortisX maintains a data pipeline that continuously ingests information from validator-based networks and related infrastructure. For each supported network, the platform focuses on:
validator- and pool-level characteristics such as reliability, participation, penalties, configuration changes, and performance relative to peers;
network-level indicators such as staking participation, concentration of stake, distribution of roles across providers, and changes in overall load and activity;
observable patterns in delegator behaviour, including large changes in stake, shifts between pools, and emerging concentrations.
These observations are stored as time series and aggregates, making it possible to analyse both current conditions and how they evolve over time.
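As an illustration of how such per-validator observations might be held as an append-only time series with windowed aggregates, here is a minimal sketch. All names (`ValidatorObservation`, `MetricSeries`, the `uptime` and `penalties` fields) are hypothetical and stand in for whatever schema the pipeline actually uses.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ValidatorObservation:
    """One sampled data point for a validator (field names are illustrative)."""
    epoch: int
    uptime: float    # fraction of assigned duties performed, 0.0-1.0
    penalties: int   # penalty events observed during this epoch

class MetricSeries:
    """Append-only time series supporting simple windowed aggregation."""
    def __init__(self) -> None:
        self.points: list[ValidatorObservation] = []

    def append(self, obs: ValidatorObservation) -> None:
        self.points.append(obs)

    def mean_uptime(self, last_n: int) -> float:
        """Average uptime over the most recent last_n observations."""
        window = self.points[-last_n:]
        return mean(p.uptime for p in window)

series = MetricSeries()
for epoch, uptime in enumerate([0.99, 0.97, 1.0, 0.92]):
    series.append(ValidatorObservation(epoch=epoch, uptime=uptime, penalties=0))

# Current conditions (recent window) vs. full history can be compared
# because every raw observation is retained.
print(round(series.mean_uptime(last_n=2), 3))
```

Because raw points are kept rather than overwritten, both the current window and its evolution over time remain queryable, which is the property the paragraph above relies on.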
Risk modelling
On top of raw and aggregated metrics, FortisX applies a risk model. The model does not attempt to predict prices or protocol outcomes; instead, it organises information into a set of dimensions that are relevant for staking decisions, such as:
technical reliability of validators and pools;
concentration and decentralisation characteristics at the network and provider level;
operational behaviour over time, including incident history and configuration changes;
exposure to specific infrastructure or service providers.
Each dimension is expressed in terms of derived indicators and scores that can be inspected and revised as assumptions change. The purpose of these scores is to support transparent, repeatable decision-making about where capital may be allocated.
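One way such dimension scores might be combined into an inspectable composite is a weighted average whose weights are plain configuration rather than hidden constants. The dimension names and weights below are assumptions for illustration, not the platform's actual model.

```python
# Hypothetical risk dimensions and weights; in practice these would be
# configuration that risk teams can inspect and revise as assumptions change.
WEIGHTS = {
    "reliability": 0.4,
    "decentralisation": 0.3,
    "operational_history": 0.2,
    "provider_exposure": 0.1,
}

def composite_score(dimension_scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores in [0, 1]; higher means riskier."""
    assert set(dimension_scores) == set(WEIGHTS), "every dimension must be scored"
    return sum(WEIGHTS[d] * s for d, s in dimension_scores.items())

score = composite_score({
    "reliability": 0.1,
    "decentralisation": 0.5,
    "operational_history": 0.2,
    "provider_exposure": 0.8,
})
print(round(score, 2))
```

Keeping the weights and per-dimension inputs explicit is what makes the resulting score repeatable and open to revision, as described above.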
Policy engine and allocation proposals
FortisX includes a policy engine that turns analytics and risk signals into concrete allocation and rebalancing proposals. Policies are defined as explicit rules and constraints, for example:
upper bounds on exposure to a single validator, pool, or provider;
minimum decentralisation characteristics for a network to be eligible;
exclusion of validators or pools that fall into designated high-risk buckets;
limits on how quickly allocations may change in response to new data.
Given current allocations, observed metrics, and configured policies, the engine produces proposals for how capital could be distributed across validators and pools within each network. These proposals can be reviewed, approved, and executed through systems that hold and manage assets, while remaining traceable back to the data and rules that produced them.
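A policy of this kind can be sketched as explicit, checkable rules applied to a candidate allocation. The rule values, field names, and risk buckets below are hypothetical; the point is that each violation is traceable to a named rule and a named candidate.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """One proposed allocation line (names and fields are illustrative)."""
    validator: str
    provider: str
    weight: float      # proposed share of stake, 0.0-1.0
    risk_bucket: str   # e.g. "low", "medium", "high"

# Hypothetical policy: an exposure cap and a risk-bucket exclusion,
# expressed as explicit constants rather than embedded judgement.
MAX_VALIDATOR_WEIGHT = 0.25
EXCLUDED_BUCKETS = {"high"}

def check_policy(candidates: list[Candidate]) -> list[str]:
    """Return human-readable violations; an empty list means the proposal is compliant."""
    violations = []
    for c in candidates:
        if c.weight > MAX_VALIDATOR_WEIGHT:
            violations.append(f"{c.validator}: weight {c.weight} exceeds per-validator cap")
        if c.risk_bucket in EXCLUDED_BUCKETS:
            violations.append(f"{c.validator}: excluded risk bucket '{c.risk_bucket}'")
    return violations

proposal = [
    Candidate("val-a", "prov-1", 0.20, "low"),
    Candidate("val-b", "prov-1", 0.30, "medium"),  # over the per-validator cap
    Candidate("val-c", "prov-2", 0.10, "high"),    # falls in an excluded bucket
]
for v in check_policy(proposal):
    print(v)
```

Because every violation message names the rule and the candidate that triggered it, a reviewed proposal remains traceable back to the rules that shaped it.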
Integration and execution model
FortisX is designed as a non-custodial component. It does not assume control over private keys or direct authority to move assets. Instead, it integrates with:
validator and staking providers that implement the actual on-chain operations;
custody platforms that manage keys and enforce internal governance;
internal systems that record positions, limits, and approvals.
In a typical setup, FortisX produces analytics, alerts, and allocation proposals; external systems apply their own controls and approval processes; and, once actions are authorised, they can be executed in a way that is consistent with both FortisX policies and the organisation’s internal requirements.
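The hand-off described above can be modelled as an explicit proposal lifecycle in which only certain transitions are legal and every step is recorded. The state names and transition table here are assumptions chosen to illustrate the pattern, not the platform's actual workflow.

```python
# Hypothetical lifecycle: FortisX emits proposals; review, approval, and
# execution are owned by external systems. Allowed transitions are explicit.
TRANSITIONS = {
    "proposed": {"under_review"},
    "under_review": {"approved", "rejected"},
    "approved": {"executed"},
}

class Proposal:
    def __init__(self, proposal_id: str) -> None:
        self.proposal_id = proposal_id
        self.state = "proposed"
        self.history = ["proposed"]  # audit trail from proposal to execution

    def advance(self, new_state: str) -> None:
        """Move to new_state if the transition table permits it."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

p = Proposal("rebalance-0042")
p.advance("under_review")
p.advance("approved")
p.advance("executed")
print(" -> ".join(p.history))
```

Forbidding, say, `proposed -> executed` in the transition table is the mechanical counterpart of requiring that external approval processes run before any action is applied.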
External interfaces
The same data and models that drive internal analytics and policies are exposed through:
dashboards and views for validators, networks, and pools;
alert streams that highlight large stake movements or sudden changes in risk indicators;
an Analytics API and SDKs that allow external systems to query networks, validators, metrics, and risk assessments.
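To suggest the shape such an API query might take, here is a small helper that composes a metric query path. The path template, version prefix, and parameter names are entirely hypothetical; the real Analytics API defines its own paths and payloads.

```python
# Hypothetical endpoint shape; actual paths, parameters, and response
# formats are defined by the Analytics API specification.
def build_query(network: str, validator: str, metric: str, window: str) -> str:
    """Compose a query path for one validator metric over a time window."""
    return (
        f"/v1/networks/{network}/validators/{validator}"
        f"/metrics/{metric}?window={window}"
    )

print(build_query("ethereum", "0xabc", "uptime", "7d"))
```

An external system issuing such queries sees the same metrics and risk assessments that drive the platform's own policies, which is what keeps the allocation logic inspectable from outside.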
This ensures that FortisX does not act as an opaque “black box”. Operators and external systems can inspect the inputs and reasoning that sit behind allocation decisions, and can build additional tooling on top of the same analytic foundation.
Design principles
The platform is guided by a small set of design principles:
Data first – allocation and rebalancing decisions are derived from observable metrics and explicit policies, not from ad hoc judgement or opaque heuristics.
Separation of duties – analytics, risk modelling, policy specification, and execution can be operated and audited separately, matching how institutional control frameworks are structured.
Network-agnostic core – while details differ between Ethereum, Solana, Polkadot, Avalanche, Cosmos, and other networks, the analytic and policy framework is shared so that cross-network decisions can be made consistently.
Explainability and auditability – metrics, scores, and policies are designed to be inspectable and reproducible over time, so that risk committees and operators can understand how specific decisions were reached.
Incremental evolution – models, metrics, and network coverage are expected to evolve, but changes are made explicitly and tracked, rather than folded silently into the system.
These principles shape the more technical chapters that follow: the architecture and data flow, the data model and metrics, the risk model, the policy engine, and the operational and security practices around the platform.