# Validator metrics

Validator-level metrics are at the centre of FortisX. They describe how individual validators and pools behave over time and provide the raw material for reliability assessments, risk profiles, and allocation policies. This section outlines which aspects of validator behaviour FortisX measures, how those measurements are obtained, and how they are used within the platform.

The goal is not to exhaustively document every network-specific detail, but to define the main categories of metrics that are applied consistently across networks such as Ethereum, Solana, Polkadot, Avalanche, and Cosmos.

***

## Objectives of validator analytics

Validator metrics in FortisX are designed to answer a few practical questions:

* How reliably does a validator participate in consensus and perform expected duties?
* How does its behaviour compare to peers in the same network?
* How has its configuration and operational profile changed over time?
* Which validators or pools exhibit characteristics that may be relevant for concentration, operational, or governance risk?

The metrics described below are grouped into families that support these questions. They are stored as time series and aggregates, enabling both point-in-time evaluation and historical analysis.

***

## Participation and performance

Participation and performance metrics capture how a validator behaves in its primary role within a network.

Typical metrics include:

* **Participation rate** – the fraction of expected duties (for example, attestations, proposals, votes) that the validator has successfully performed over a given interval.
* **Missed duties** – counts or rates of missed attestations, blocks, proposals, or other network-specific responsibilities.
* **Relative performance** – comparisons of a validator’s participation or effectiveness to network averages or relevant peer groups.
* **Latency and inclusion patterns**, where observable – for example, how quickly a validator contributes to consensus relative to typical network timings.

These metrics are derived from on-chain events, consensus-layer data, and, where applicable, validator-specific telemetry. They provide a basic view of whether a validator is consistently online and behaving as expected for its role.
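As a minimal sketch of how a participation rate and missed-duty count might be computed from such data, the following uses a hypothetical `DutyWindow` record; the field names are illustrative and do not reflect the actual FortisX schema.

```python
from dataclasses import dataclass


@dataclass
class DutyWindow:
    """Expected vs. performed duties for one validator over one interval.

    A hypothetical record shape for illustration only.
    """
    expected: int   # duties the protocol expected (attestations, votes, ...)
    performed: int  # duties actually observed on-chain


def participation_rate(window: DutyWindow) -> float:
    """Fraction of expected duties performed; 1.0 when nothing was expected."""
    if window.expected == 0:
        return 1.0
    return window.performed / window.expected


def missed_duties(window: DutyWindow) -> int:
    """Count of expected duties that were not observed."""
    return max(window.expected - window.performed, 0)
```

The zero-expected case is treated as fully participating so that idle intervals do not drag down a validator's rate.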

***

## Availability and downtime

Availability metrics highlight periods where a validator is not participating as expected. Depending on the network, this may be expressed as:

* **Uptime percentage** over a rolling window or fixed period;
* **Count and duration of downtime episodes**, where the validator’s effective participation drops below a configurable threshold;
* **Recovery patterns** – how quickly the validator returns to normal behaviour after incidents or scheduled maintenance.

FortisX does not rely on single-point measurements. Instead, it tracks availability over multiple windows and resolutions, so that both short-term incidents and longer-term trends can be distinguished.
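The episode detection described above can be sketched as a scan over a per-interval participation series, grouping consecutive below-threshold intervals into episodes. The threshold default is illustrative, not a FortisX configuration value.

```python
def downtime_episodes(series, threshold=0.9):
    """Group consecutive below-threshold intervals into downtime episodes.

    `series` is a list of per-interval participation rates.
    Returns a list of (start_index, length) tuples, one per episode.
    """
    episodes = []
    start = None
    for i, value in enumerate(series):
        if value < threshold:
            if start is None:
                start = i  # episode begins
        elif start is not None:
            episodes.append((start, i - start))  # episode ends
            start = None
    if start is not None:
        episodes.append((start, len(series) - start))  # still down at end
    return episodes
```

Running the same series through several window resolutions then distinguishes short incidents from longer-term trends.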

***

## Penalties, slashing, and incidents

Penalties and incidents directly affect the risk profile of a validator or pool. FortisX maintains structured records and metrics for:

* **Protocol-level penalties and slashing events** – including type, severity, and affected balances where this is visible from the network;
* **Patterns of minor penalties** – for example, frequent small penalties that may signal recurring issues even if no major slashing has occurred;
* **Recorded incidents and operator-reported events**, where available – such as operational failures, misconfigurations, or external disruptions.

These signals are combined into time-series metrics and incident logs that feed into the risk model. The aim is not only to mark validators that have suffered major penalties, but also to detect recurring patterns that may indicate operational fragility.
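A pattern of frequent minor penalties can be detected with a sliding window over penalty timestamps, as in the sketch below. The window size and count threshold are illustrative defaults, not FortisX's actual parameters.

```python
def recurring_penalty_flag(penalty_times, window=86_400, min_count=3):
    """Flag a validator with >= min_count penalties inside any `window` seconds.

    Two-pointer scan over sorted event timestamps (seconds); catches recurring
    small penalties even when no single event is severe.
    """
    times = sorted(penalty_times)
    lo = 0
    for hi in range(len(times)):
        while times[hi] - times[lo] > window:
            lo += 1  # shrink window from the left
        if hi - lo + 1 >= min_count:
            return True
    return False
```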

***

## Configuration and software profile

Changes in configuration and software can have important implications for reliability and interoperability. FortisX tracks:

* **Client and version information**, where it can be inferred or reported in a reliable way;
* **Consensus and execution client combinations** in multi-client networks;
* **Changes in configuration** that may affect behaviour or risk (for example, changes in fee or commission parameters, or in operational flags);
* **Rollout patterns for updates**, such as whether a validator upgrades promptly or lags behind the broader network.

Metrics in this family say less about short-term performance and more about whether a validator is operated with the discipline that the network and risk-sensitive delegators expect.
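One rollout-pattern metric, upgrade lag, can be sketched as the number of days a validator ran an outdated client after a release. The function name and signature are hypothetical.

```python
from datetime import date
from typing import Optional


def upgrade_lag_days(release_date: date,
                     upgraded_date: Optional[date],
                     today: date) -> int:
    """Days between a client release and the validator's upgrade.

    If the validator has not upgraded yet, the lag is still accruing
    and is measured up to `today`.
    """
    end = upgraded_date if upgraded_date is not None else today
    return max((end - release_date).days, 0)
```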

***

## Economic and fee parameters

FortisX does not model asset prices or future returns, but it does track certain parameters that influence the economic positioning of validators and pools. Typical metrics include:

* **Fee or commission levels** charged by validators or pools, as defined by network-specific mechanisms;
* **Changes in fee parameters over time**, including direction and frequency;
* **Relative positioning** of a validator or pool’s fee levels compared to its peers in the same network.

These measurements are used to understand how validators and pools are positioned from an economic perspective, and how frequently they change terms, without attempting to predict or promote specific yield outcomes.
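Relative fee positioning can be expressed as a percentile rank within a peer group, as in the sketch below. How FortisX defines peer groups is network-specific and not reproduced here.

```python
def fee_percentile(own_fee: float, peer_fees: list) -> float:
    """Fraction of peers charging a fee strictly below `own_fee`.

    Returns a value in [0, 1]; 0.0 means the cheapest in its peer group
    (or an empty peer group).
    """
    if not peer_fees:
        return 0.0
    below = sum(1 for fee in peer_fees if fee < own_fee)
    return below / len(peer_fees)
```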

***

## Stake distribution and concentration

From both a risk and a policy perspective, it matters not only how a validator behaves, but also how much stake it attracts.

Validator-level metrics in this area include:

* **Effective stake controlled by the validator** within its network;
* **Share of stake held by the validator within relevant scopes** (for example, within a pool, provider, or the overall network);
* **Inflow and outflow patterns**, where delegation operations are visible, indicating how stake concentration evolves over time.

These metrics inform concentration and decentralisation analyses and are used by policies that limit exposure to individual validators, pools, or providers.
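Stake shares and a standard concentration measure, the Herfindahl-Hirschman index, can be computed as below. The HHI is one common concentration summary; the exact indicators FortisX uses are defined by its risk model, not by this sketch.

```python
def stake_shares(stakes: dict) -> dict:
    """Map validator -> share of total stake within the given scope."""
    total = sum(stakes.values())
    return {validator: stake / total for validator, stake in stakes.items()}


def herfindahl_index(stakes: dict) -> float:
    """Herfindahl-Hirschman index of stake concentration.

    Ranges from 1/n (perfectly even across n validators) to 1.0
    (all stake on a single validator).
    """
    return sum(share ** 2 for share in stake_shares(stakes).values())
```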

***

## Derived reliability indicators

Beyond raw metrics, FortisX computes derived indicators that summarise aspects of validator behaviour. Examples include:

* **Reliability scores** based on combinations of participation, availability, and incident patterns;
* **Stability indicators** reflecting how often and how strongly performance fluctuates;
* **Configuration stability indicators** capturing the frequency and type of changes to key parameters.

These indicators are not treated as opaque ratings. Their definitions and inputs are documented, and they are stored with references to the underlying metrics and model versions that produced them.
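The shape of such a documented, versioned indicator can be sketched as follows. The weights and the `model_version` label are illustrative assumptions, not FortisX's actual model parameters; the point is that the score carries its inputs and model version with it.

```python
def reliability_indicator(participation: float,
                          uptime: float,
                          incident_penalty: float,
                          model_version: str = "example-v1") -> dict:
    """Combine normalised metrics (each in [0, 1]) into a reliability score.

    The result records the inputs and model version that produced it,
    so the indicator stays auditable rather than opaque.
    """
    score = 0.5 * participation + 0.4 * uptime + 0.1 * (1.0 - incident_penalty)
    return {
        "score": round(score, 4),
        "model_version": model_version,
        "inputs": {
            "participation": participation,
            "uptime": uptime,
            "incident_penalty": incident_penalty,
        },
    }
```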

***

## Time scales and granularity

Validator metrics are collected and stored at multiple time scales:

* **Fine-grained measurements** at the level of blocks, slots, or epochs, depending on the network;
* **Aggregated views** over fixed intervals (for example, hourly, daily, weekly), used in dashboards and risk modelling;
* **Rolling windows** that smooth short-term noise while preserving sensitivity to meaningful shifts.

This multi-scale approach allows FortisX to:

* detect short-lived incidents without overreacting to isolated events;
* characterise long-term behaviour and trends;
* recalibrate indicators when network dynamics or model assumptions change.
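A rolling window of the kind described above can be sketched as a simple trailing mean; shorter prefixes use whatever history exists, so early intervals are still covered.

```python
def rolling_mean(series: list, window: int) -> list:
    """Trailing mean over the last `window` values at each position.

    Smooths short-term noise while remaining sensitive to sustained shifts;
    positions earlier than `window` average the available prefix.
    """
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

Computing the same series at several window sizes gives the multi-resolution view used to separate isolated events from trends.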

***

## Data sources and validation

Validator metrics are derived from a combination of:

* consensus-layer and execution-layer data obtained from nodes or indexers;
* publicly accessible RPC endpoints and specialised data services;
* provider- or operator-supplied information where it can be verified or cross-checked.

The platform applies consistency checks where multiple sources exist, and records which sources were used for each metric. When discrepancies are detected, FortisX may mark certain measurements as degraded or exclude them from derived indicators until the underlying issue is resolved.
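A cross-source consistency check of this kind can be sketched as below. The status labels and tolerance are illustrative, not FortisX's actual reconciliation rules.

```python
def reconcile(primary, secondary, tolerance=0.01):
    """Cross-check one metric value across two sources.

    Returns (value, status, sources_used): "ok" when both sources agree
    within `tolerance`, "degraded" when they disagree, and "unverified"
    when only one source is available.
    """
    if secondary is None:
        return primary, "unverified", ["primary"]
    if abs(primary - secondary) <= tolerance:
        return primary, "ok", ["primary", "secondary"]
    return primary, "degraded", ["primary", "secondary"]
```

Recording the `sources_used` list alongside each metric is what makes later exclusion or re-inclusion of degraded measurements traceable.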

***

## Use within FortisX

Validator metrics serve several roles inside the platform:

* **Analytics and visualisation** – describing how individual validators and pools behave, how they compare to peers, and how their behaviour changes over time.
* **Risk modelling inputs** – feeding into factor scores and risk profiles that represent dimensions such as technical reliability, operational stability, and concentration exposure.
* **Policy and allocation inputs** – providing the quantitative basis for policies that bound exposure to specific validators or classes of validators and for allocation proposals that reflect those policies.

Because metrics, indicators, and their relationships to policies are recorded explicitly, it is possible to reconstruct why a particular validator was included, excluded, or limited in an allocation proposal at a given point in time.
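The shape of such an explicit decision record can be sketched as a small dataclass; every field name here is hypothetical and stands in for whatever FortisX actually stores.

```python
from dataclasses import dataclass


@dataclass
class AllocationDecision:
    """Hypothetical provenance record for one validator in one proposal.

    Links the outcome to the policies, metrics, and model versions
    behind it, so the decision can be reconstructed later.
    """
    validator: str
    action: str          # "included" | "excluded" | "limited"
    policy_ids: list     # policies that constrained this validator
    metric_refs: list    # underlying metrics and indicators consulted
    model_versions: dict # model name -> version that produced the scores
    timestamp: str       # when the decision was taken
```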

Subsequent sections extend this view beyond individual validators and pools to network-level and decentralisation metrics, and then to the risk and policy layers built on top of this data.
