# Risk modeling

Risk modeling in FortisX provides a structured way to interpret validator and network metrics. Instead of treating each metric in isolation, the platform organises them into a set of factors that are relevant for staking decisions and then aggregates those factors into risk profiles for validators, pools, providers, and networks.

This section describes the objectives of the risk model, how it is built from underlying data, how it is versioned and interpreted, and how it interacts with policies and allocations.

***

## Objectives

The risk model in FortisX is designed to answer questions such as:

* Which validators, pools, or providers exhibit patterns that warrant caution, closer monitoring, or explicit limits?
* How do network-level conditions (such as concentration or churn) influence the environment in which validators operate?
* How can quantitative signals be organised so that allocation policies are based on observable structure rather than informal judgement?

The model is not intended to predict prices or guarantee outcomes. Its purpose is to make relevant dimensions of risk explicit, measurable, and comparable over time.

***

## Inputs to the model

The risk model consumes a broad set of inputs from other layers of the platform:

* **Validator metrics** – participation, availability, penalties and incidents, configuration changes, fee parameters, stake share, and stability over time.
* **Network and decentralization metrics** – stake distribution, concentration indicators, churn, governance and protocol events, and measures of network activity and load.
* **Provider-level information** – aggregations of validator and pool metrics across a provider’s footprint, including cross-network concentration where applicable.
* **Events and alerts** – structured summaries of notable conditions (for example, large stake movements, recurring minor penalties, or sudden changes in participation).

These inputs are aligned in time and tagged with model and data provenance, so that risk profiles can be reconstructed under different assumptions if needed.
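As an illustration only, a time-aligned, provenance-tagged input could be represented along the following lines. The field names (`source`, `model_version`, `observed_at`) are hypothetical and do not reflect the actual FortisX schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of a provenance-tagged metric observation.
@dataclass(frozen=True)
class MetricObservation:
    subject_id: str        # validator / pool / provider / network identifier
    metric: str            # e.g. "participation_rate"
    value: float
    observed_at: datetime  # timestamp used for time alignment
    source: str            # originating data source, for provenance
    model_version: str     # version of any upstream model that produced the value

obs = MetricObservation(
    subject_id="validator-42",
    metric="participation_rate",
    value=0.997,
    observed_at=datetime(2024, 1, 15, tzinfo=timezone.utc),
    source="chain-indexer",
    model_version="metrics-v3",
)
```

Because every observation carries its own provenance and timestamp, a risk profile can later be recomputed from the same observations under different assumptions.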

***

## Risk factors

Rather than collapsing all information into a single score, FortisX organises risk into a set of **factors**, each representing one dimension of interest. Typical factors include:

* **Technical reliability**\
  Based on participation, availability, missed duties, penalty patterns, and the frequency and impact of incidents.
* **Operational stability**\
  Reflecting configuration and software changes, rollout discipline for updates, and the presence of recurring operational issues.
* **Concentration and exposure**\
  Capturing stake share, distribution across validators, pools, and providers, and the share of total stake controlled by a given entity or group.
* **Network environment**\
  Summarising aspects of the underlying network that affect all participants, such as churn, structural concentration levels, and the incidence of protocol-level events.
* **Infrastructure dependence**\
  Where observable, indicating how reliant a validator, pool, or provider is on particular pieces of infrastructure or service providers.

Not all factors are equally relevant or available for every subject (Validator, Pool, Provider, Network); the model accounts for this when constructing risk profiles.
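A minimal sketch of such an applicability mapping is shown below. The factor names mirror the list above, but the mapping itself is purely illustrative, not a FortisX specification:

```python
# Hypothetical mapping of which factors the model attempts to score
# for each subject type; illustrative only.
FACTORS_BY_SUBJECT = {
    "validator": {"technical_reliability", "operational_stability",
                  "concentration_exposure", "network_environment",
                  "infrastructure_dependence"},
    "pool":      {"technical_reliability", "operational_stability",
                  "concentration_exposure", "network_environment",
                  "infrastructure_dependence"},
    "provider":  {"operational_stability", "concentration_exposure",
                  "infrastructure_dependence"},
    "network":   {"concentration_exposure", "network_environment"},
}

def relevant_factors(subject_type: str) -> set[str]:
    """Return the factors the model would attempt to score for a subject."""
    return FACTORS_BY_SUBJECT.get(subject_type, set())
```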

***

## Normalisation and scaling

Underlying metrics often differ in units and ranges across networks. To make them combinable within a factor, FortisX applies a normalisation layer that:

* converts raw values into dimensionless quantities on a common scale (for example, between 0 and 1 or within a small integer range);
* incorporates network-specific context where necessary (for example, typical performance bounds in a given network);
* records the transformation used, so that the mapping from raw metrics to normalised values remains transparent.

Normalisation parameters and methods are part of the model definition and are versioned alongside it.
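The shape of such a normalisation rule can be sketched as follows. This is a minimal example assuming simple clamped min-max scaling; the bounds shown (a participation range of 0.90 to 1.00) are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical sketch of a versionable normalisation rule: the bounds are
# part of the model definition, so the raw-to-normalised mapping stays
# transparent and reproducible.
@dataclass(frozen=True)
class NormalisationRule:
    metric: str
    lower: float   # network-specific lower bound for the raw value
    upper: float   # network-specific upper bound for the raw value

    def apply(self, raw: float) -> float:
        """Clamp the raw value into the configured bounds, then rescale to [0, 1]."""
        if self.upper <= self.lower:
            raise ValueError("upper bound must exceed lower bound")
        clamped = min(max(raw, self.lower), self.upper)
        return (clamped - self.lower) / (self.upper - self.lower)

# e.g. participation rates on a given network typically falling in [0.90, 1.00]
rule = NormalisationRule(metric="participation_rate", lower=0.90, upper=1.00)
rule.apply(0.95)  # ≈ 0.5
```

Keeping the rule as data rather than ad-hoc code means the same transformation can be recorded, versioned, and replayed alongside the model definition.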

***

## Factor scores and aggregation

For each subject and factor, the model produces a **factor score**:

* factor scores are computed from one or more normalised metrics using simple, documented combinations (for example, weighted sums, thresholds, or bounded functions);
* where data is incomplete or degraded, the factor score may be marked as partial or unavailable rather than forced into a default value;
* intermediate values used to compute factor scores are recorded so that the contribution of each metric can be inspected.

Factor scores may then be aggregated into higher-level views, depending on use case:

* **Risk profiles** – structured sets of factor scores associated with a subject and a specific model version;
* **Buckets or categories** – coarse labels (for example, low / moderate / elevated risk along a factor) derived from ranges of factor scores for use in policies.

Aggregation rules and bucket boundaries are configured and versioned, not embedded implicitly in code.

***

## Model versions and evolution

Networks, staking practices, and available data sources evolve over time. The FortisX risk model is therefore explicitly **versioned**:

* each version defines:
  * which metrics and indicators are used;
  * how they are normalised;
  * how they are combined into factor scores and buckets;
* risk profiles are tagged with the model version that produced them;
* changes to the model (for example, altered weights, additional metrics, or new factors) yield new model versions rather than silently modifying the behaviour of existing ones.
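An immutable, versioned model definition along these lines illustrates the idea; the field layout and version identifiers are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch: an immutable model definition. A change to weights,
# metrics, or normalisation produces a new value with a new version id,
# rather than mutating an existing version in place.
@dataclass(frozen=True)
class RiskModelVersion:
    version: str
    metrics: tuple[str, ...]                        # which metrics are used
    normalisation: dict[str, tuple[float, float]]   # metric -> (lower, upper)
    weights: dict[str, float]                       # combination into a score

v1 = RiskModelVersion(
    version="risk-v1",
    metrics=("participation", "penalty_free"),
    normalisation={"participation": (0.90, 1.00), "penalty_free": (0.0, 1.0)},
    weights={"participation": 0.6, "penalty_free": 0.4},
)

# Altered weights yield a new version; v1 remains intact for reconstruction.
v2 = RiskModelVersion(
    version="risk-v2",
    metrics=v1.metrics,
    normalisation=v1.normalisation,
    weights={"participation": 0.5, "penalty_free": 0.5},
)
```

Tagging each risk profile with the `version` string that produced it is what makes historical views reconstructible.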

This approach allows FortisX to:

* evaluate the impact of prospective model changes on historical data before adopting them;
* reconstruct the view of risk that existed at a given time under a specific model version;
* provide clear context when explaining why an allocation decision was made.

***

## Interpretation and limitations

Risk profiles and factor scores are designed to support structured reasoning, not to replace it. In particular:

* a higher or lower score along a factor should be interpreted with reference to its definition, inputs, and model version;
* missing or degraded data is treated explicitly, and such cases may warrant further investigation rather than automatic inclusion or exclusion;
* different organisations may assign different weights or thresholds to the same factors in their internal policies.

The model does not assert that one validator, pool, provider, or network is “safe” or “unsafe” in an absolute sense. It provides a means to compare and track conditions under a given set of assumptions and to express policies that are consistent with those assumptions.

***

## Use in policies and allocations

The main consumer of risk modeling output inside FortisX is the **policy engine**. Policies may refer to:

* specific factor scores (for example, requiring a minimum level of technical reliability);
* combinations of factors (for example, allowing exposure only where both operational stability and network environment factors are within defined ranges);
* buckets or categories (for example, excluding subjects in designated high-risk categories, or limiting their aggregate share).

When the policy engine evaluates current allocations or constructs allocation proposals, it uses risk profiles to:

* identify entities that are in or out of policy based on their risk characteristics;
* compute maximum permissible exposure to particular validators, pools, providers, or networks;
* determine where reallocations could reduce concentration or align allocations more closely with defined risk tolerances.

Because risk profiles, policies, and data snapshots are all versioned and time-stamped, it is possible to explain how specific allocation proposals arose and how risk considerations were taken into account.

***

## Summary

The FortisX risk model organises a diverse set of validator and network metrics into a coherent structure of factors, scores, and profiles. By making each step of the transformation explicit and versioned, it provides:

* a repeatable way to interpret complex data about validators, pools, providers, and networks;
* a stable interface between analytics and policy logic;
* a basis for auditability and explanation of allocation and rebalancing decisions.

Subsequent sections describe how the policy engine uses these risk signals together with explicit rules and constraints to shape staking allocations and rebalancing behaviour across networks.
