Setup Aligned Infrastructure Locally

Dependencies

Ensure you have the following installed: Go, Rust, jq, and yq.

To install Go, Rust, jq, and yq, go to the provided links and follow the instructions.

Install Go dependencies (zap-pretty, abigen, eigenlayer-cli):

make go_deps

Install Foundry:

make install_foundry
foundryup -v nightly-a428ba6ad8856611339a6319290aade3347d25d9
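
To check that the toolchain is in place (assuming the Foundry binaries are on your PATH), you can run:

# Verify that the Foundry binaries are installed and print their versions
forge --version
anvil --version
cast --version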

Install the necessary submodules and build all the FFIs for your OS:

make deps

If you want to rebuild the FFIs, you can use:

make build_all_ffi

Booting Devnet with Default configs

Before starting, you need to set up an S3 bucket. Other data storage options will be tested in the future.

You need to fill in the values in:

batcher/aligned-batcher/.env

And you can use this file as an example of how to fill it:

batcher/aligned-batcher/.env.example
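
The example file is the authoritative list of variables. Purely as an illustration, an S3-backed setup typically needs standard AWS credential variables along the lines of the sketch below; the exact names the batcher expects may differ, so check the example file:

# Illustrative sketch only — check batcher/aligned-batcher/.env.example for the exact variable names
AWS_ACCESS_KEY_ID=<your_access_key_id>
AWS_SECRET_ACCESS_KEY=<your_secret_access_key>
AWS_REGION=<your_bucket_region>
AWS_BUCKET_NAME=<your_bucket_name>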

After setting up the environment, run the following commands in separate terminals to boot Aligned locally:

Anvil

To start Anvil, a local Ethereum devnet with all the necessary contracts already deployed and ready to interact with, run:

make anvil_start_with_block_time
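
Once Anvil is up, you can sanity-check the devnet with cast (assuming Anvil's default RPC port 8545):

# The block number should keep increasing while the devnet is running
cast block-number --rpc-url http://localhost:8545
cast chain-id --rpc-url http://localhost:8545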
More information on deploying the smart contracts on anvil:

EigenLayer Contracts

If EigenLayer contracts change, the anvil state needs to be updated with:

make anvil_deploy_eigen_contracts

You will also need to redeploy the MockStrategy & MockERC20 contracts:

make anvil_deploy_mock_strategy

Aligned Contracts

When changing Aligned contracts, the anvil state needs to be updated with:

make anvil_deploy_aligned_contracts

To test the upgrade script for ServiceManager in the local devnet, run:

make anvil_upgrade_aligned_contracts

To test the upgrade script for RegistryCoordinator in the local devnet, run:

make anvil_upgrade_registry_coordinator

Note that when upgrading the contracts, you must also:

  1. Re-generate the Go smart contract bindings:

     make bindings

  2. Rebuild the Aggregator and Operator Go binaries:

     make build_binaries
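
Putting it together, a typical iteration after editing the Aligned contracts uses the targets described above in sequence:

# Redeploy the changed contracts to Anvil, then refresh the bindings and binaries
make anvil_deploy_aligned_contracts
make bindings
make build_binaries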

Aggregator

To start the Aggregator:

make aggregator_start

To start the aggregator with a custom configuration:

make aggregator_start CONFIG_FILE=<path_to_config_file>

Operator

To start an Operator (note that this also registers it):

make operator_register_and_start

If the operator is already registered and you need to start it again, you can use:

make operator_start

More information about Operator registration:

The Operator needs to register with both EigenLayer and Aligned before it can start verifying proofs.

Register into EigenLayer

To register an operator in EigenLayer Devnet with the default configuration, run:

make operator_register_with_eigen_layer

To register an operator in EigenLayer with a custom configuration, run:

make operator_register_with_eigen_layer CONFIG_FILE=<path_to_config_file>

Register into Aligned

To register an operator in Aligned with the default configuration, run:

make operator_register_with_aligned_layer

To register an operator in Aligned with a custom configuration, run:

make operator_register_with_aligned_layer CONFIG_FILE=<path_to_config_file>

Full Registration in Anvil with one command

To register an operator in EigenLayer and Aligned and deposit strategy tokens in EigenLayer with the default configuration, run:

make operator_full_registration

To register an operator in EigenLayer and Aligned and deposit strategy tokens in EigenLayer with a custom configuration, run:

make operator_full_registration CONFIG_FILE=<path_to_config_file>

Deposit Strategy Tokens in Anvil local devnet

There is an ERC20 token deployed in the Anvil chain to use as a strategy token with EigenLayer.

To deposit strategy tokens in the Anvil chain with the default configuration, run:

make operator_mint_mock_tokens
make operator_deposit_into_mock_strategy

To deposit strategy tokens in the Anvil chain with a custom configuration, run:

make operator_mint_mock_tokens CONFIG_FILE=<path_to_config_file>
make operator_deposit_into_mock_strategy CONFIG_FILE=<path_to_config_file>

Deposit Strategy tokens in Holesky/Mainnet

EigenLayer strategies are available in eigenlayer-strategies.

For Holesky, we are using WETH as the strategy token.

To get HolETH and swap it for different strategies, you can use the following guide.

Config

There is a default configuration for devnet purposes in config-files/config.yaml. Also, there are three different configurations for the operator in config-files/devnet/operator-1.yaml, config-files/devnet/operator-2.yaml and config-files/devnet/operator-3.yaml.

The configuration file has the following structure:

# Common variables for all the services
# 'production' only prints info and above. 'development' also prints debug
environment: <production/development>
aligned_layer_deployment_config_file_path: <path_to_aligned_layer_deployment_config_file>
eigen_layer_deployment_config_file_path: <path_to_eigen_layer_deployment_config_file>
eth_rpc_url: <http_rpc_url>
eth_ws_url: <ws_rpc_url>
eigen_metrics_ip_port_address: <ip:port>

## ECDSA Configurations
ecdsa:
  private_key_store_path: <path_to_ecdsa_private_key_store>
  private_key_store_password: <ecdsa_private_key_store_password>

## BLS Configurations
bls:
  private_key_store_path: <path_to_bls_private_key_store>
  private_key_store_password: <bls_private_key_store_password>

## Operator Configurations
operator:
  aggregator_rpc_server_ip_port_address: <ip:port> # This is the aggregator url
  address: <operator_address>
  earnings_receiver_address: <earnings_receiver_address> # This is the address where the operator will receive the earnings, it can be the same as the operator address
  delegation_approver_address: "0x0000000000000000000000000000000000000000"
  staker_opt_out_window_blocks: 0
  metadata_url: "https://yetanotherco.github.io/operator_metadata/metadata.json"
  enable_metrics: <true|false>
  metrics_ip_port_address: <ip:port>
  max_batch_size: <max_batch_size_in_bytes>
# Operator variables needed to register it in EigenLayer
el_delegation_manager_address: <el_delegation_manager_address> # This is the address of the EigenLayer delegationManager
private_key_store_path: <path_to_ecdsa_private_key_store>
bls_private_key_store_path: <path_to_bls_private_key_store>
signer_type: local_keystore
chain_id: <chain_id>

Changing operator keys:

Operator keys can be changed if needed.

When creating a new wallet keystore and private key, please use strong passwords for your own protection.

To create a keystore, run:

cast wallet new-mnemonic
cast wallet import <keystore-name> --private-key <private-key>

To create an ECDSA keystore, run:

eigenlayer operator keys import --key-type ecdsa <keystore-name> <private-key>

To create a BLS keystore, run:

eigenlayer operator keys import --key-type bls <keystore-name> <private-key>
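
As a usage sketch, a full key setup might look like the following (the keystore names here are hypothetical examples; substitute your own keys and use strong passwords):

# Generate a new mnemonic/private key, then import it as a local ECDSA keystore named "my-operator"
cast wallet new-mnemonic
cast wallet import my-operator --private-key <private-key>

# Import the ECDSA and BLS keys into the eigenlayer CLI keystores
eigenlayer operator keys import --key-type ecdsa my-operator <ecdsa-private-key>
eigenlayer operator keys import --key-type bls my-operator-bls <bls-private-key>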

Batcher

To start the Batcher:

make batcher_start

If you are testing locally, you can run this instead:

make batcher_start_local

More information about Batcher configuration:

To run the batcher, you will need to set environment variables in a .env file in the same directory as the batcher (batcher/aligned-batcher/).

You can find an example .env file, listing the necessary environment variables, in .env.example.

You can configure the batcher in config-files/config.yaml:

# Common variables for all the services
eth_rpc_url: <http_rpc_url>
eth_ws_url: <ws_rpc_url>
aligned_layer_deployment_config_file_path: <path_to_aligned_layer_deployment_config_file>

## Batcher Configurations
batcher:
  block_interval: <block_interval>
  batch_size_interval: <batch_size_interval>
  max_proof_size: <max_proof_size_in_bytes>
  max_batch_size: <max_batch_size_in_bytes>
  pre_verification_is_enabled: <true|false>

## ECDSA Configurations
ecdsa:
  private_key_store_path: <path_to_ecdsa_private_key_store>
  private_key_store_password: <ecdsa_private_key_store_password>
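
As a filled-in sketch for local testing only (the numbers below are illustrative placeholders, not recommended values; config-files/config.yaml holds the actual devnet defaults):

## Batcher Configurations — illustrative local-devnet values only
batcher:
  block_interval: 3
  batch_size_interval: 10
  max_proof_size: 67108864      # 64 MiB, illustrative
  max_batch_size: 268435456     # 256 MiB, illustrative
  pre_verification_is_enabled: false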

Run

make batcher_start

or

make batcher_start_local

The latter version sets up LocalStack to act as a replacement for S3, so you don't need to interact with (and give money to) AWS for your tests.
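
If you go the LocalStack route, you can check that the emulated S3 endpoint is reachable (assuming a recent LocalStack on its default port 4566):

# Should return a JSON health report listing the running services
curl -s http://localhost:4566/_localstack/health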


Send test proofs

Next, you can use some of the send-proof make targets. All these proofs are pre-generated for testing purposes; feel free to generate your own proofs to submit to Aligned.

SP1

Send an individual proof:

make batcher_send_sp1_task

Send a burst of 15 proofs:

make batcher_send_sp1_burst

Send proofs indefinitely:

make batcher_send_infinite_sp1

Risc0

Send an individual proof:

make batcher_send_risc0_task

Send a burst of 15 proofs:

make batcher_send_risc0_burst

Plonk

Send an individual BN254 proof:

make batcher_send_plonk_bn254_task

Send a burst of 15 BN254 proofs:

make batcher_send_plonk_bn254_burst

Send an individual BLS12-381 proof:

make batcher_send_plonk_bls12_381_task

Send a burst of 15 BLS12-381 proofs:

make batcher_send_plonk_bls12_381_burst

Groth16

Send an individual BN254 proof:

make batcher_send_groth16_bn254_task

Send BN254 proofs indefinitely:

make batcher_send_infinite_groth16

Send BN254 proof bursts indefinitely:

make batcher_send_burst_groth16

Send a specific proof:

To install the Aligned client to send a specific proof, run:

make install_aligned_compiling

The SP1 and Risc0 proofs need the proof file and the vm program file. The current SP1 version used in Aligned is v3.0.0 and the current Risc0 version used in Aligned is v1.1.2. The GnarkPlonkBn254, GnarkPlonkBls12_381 and Groth16Bn254 proofs need the proof file, the public input file and the verification key file.

aligned submit \
--proving_system <SP1|GnarkPlonkBn254|GnarkPlonkBls12_381|Groth16Bn254|Risc0> \
--proof <proof_file> \
--vm_program <vm_program_file> \
--pub_input <pub_input_file> \
--proof_generator_addr [proof_generator_addr] \
--batch_inclusion_data_directory_path [batch_inclusion_data_directory_path] \
--keystore_path [path_to_ecdsa_keystore] \
--batcher_url wss://batcher.alignedlayer.com \
--rpc_url https://ethereum-holesky-rpc.publicnode.com 
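
As a concrete sketch for a local devnet, submitting one of the pre-generated SP1 test proofs could look like the command below. The proof/ELF paths, the local batcher URL, and the local RPC URL are assumptions; adjust them to your checkout and setup:

# Assumed local-devnet values: test files under scripts/test_files/sp1/, batcher on ws://localhost:8080, Anvil on :8545
aligned submit \
--proving_system SP1 \
--proof scripts/test_files/sp1/sp1_fibonacci.proof \
--vm_program scripts/test_files/sp1/sp1_fibonacci.elf \
--keystore_path <path_to_ecdsa_keystore> \
--batcher_url ws://localhost:8080 \
--rpc_url http://localhost:8545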

Explorer

If you also want to start the explorer for the devnet, to clearly visualize your submitted and verified batches, see how to run it using the following documentation:

Minimum Requirements

DB Setup

To set up the explorer, an installation of the DB is necessary.

First, you'll need to install docker if you don't have it already. You can follow the instructions here.

The explorer uses a PostgreSQL database. To build and start the DB using docker, run:

make explorer_build_db

(Optional) The steps to manually set up the database are as follows:

  • Run the database container, opening port 5432:

    make explorer_run_db

  • Configure the database with Ecto by running ecto.create and ecto.migrate:

    make explorer_ecto_setup_db

  • Start the explorer:

    make run_explorer

[!NOTE] If you want to run the DB separately, without docker, you can set it up and start the explorer with the following command:

make run_explorer_without_docker_db

To clear the DB, you can run:

make explorer_clean_db

If you need to dump the data from the DB, you can run:

make explorer_dump_db

This will create a dump.$date.sql SQL script in the explorer directory with all the existing data.

Data can be recovered from a dump.$date.sql using the following command:

make explorer_recover_db

You'll then be prompted to enter the file name of the dump you want to recover, which must already be located in the /explorer directory.

This will update your database with the dumped database data.

Extra Explorer script to fetch past batches

If you want to fetch past batches that, for any reason, were not inserted into the DB, you will first need to make sure the ELIXIR_HOSTNAME .env variable is configured. You can get the hostname used by Elixir by running:

elixir -e 'IO.puts(:inet.gethostname() |> elem(1))'
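
For example, a quick way to persist that value into the explorer's .env (assuming the explorer reads ELIXIR_HOSTNAME from explorer/.env):

# Append the detected hostname to the explorer's .env file
echo "ELIXIR_HOSTNAME=$(elixir -e 'IO.puts(:inet.gethostname() |> elem(1))')" >> explorer/.env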

Then you can run:

make explorer_fetch_old_batches

You can modify which blocks are fetched by changing the parameters passed to explorer_fetch_old_batches.sh.
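
How the block range is passed depends on how the Makefile target forwards arguments to the script; the variable names below are assumptions for illustration, so confirm them against the explorer_fetch_old_batches target in the Makefile:

# Hypothetical invocation — FROM_BLOCK/TO_BLOCK are assumed variable names, check the Makefile target
make explorer_fetch_old_batches FROM_BLOCK=1600000 TO_BLOCK=1700000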

Running the Explorer

To run the explorer for the local devnet, you'll need to have the devnet running and the DB already set up.

Additionally, you'll need to have the .env file in the /explorer directory of the project. A base example of the .env file can be found in /explorer/.env.dev.

Use the following command to start the Explorer:

make run_explorer

Now you can visit localhost:4000 from your browser. You can access a batch's information by visiting localhost:4000/batches/:merkle_root.

There's an additional Explorer script to fetch past operators and restake

If you want to fetch past operators, strategies and restake, you will need to run:

make explorer_fetch_old_operators_strategies_restakes

This will run the script explorer_fetch_old_operators_strategies_restakes.sh, which fetches the operators, strategies, and restake and inserts them into the DB.

Run with custom env / other devnets

Create a .env file in the /explorer directory of the project. The .env file needs to contain the following variables:

Then you can run the explorer with this env file config by entering the following command:

make run_explorer

This will start the explorer with the configuration set in the .env file on port 4000. Visit localhost:4000 from your browser.

Metrics

Aggregator Metrics

Aggregator metrics are exposed on the /metrics endpoint.

If you are using the default config, you can access the metrics on http://localhost:9091/metrics.
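
A quick way to confirm the endpoint is serving data (assuming the default port above):

# Print the first few exposed metrics from the aggregator
curl -s http://localhost:9091/metrics | head -n 20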

To run Prometheus and Grafana, run:

make run_metrics

Then you can access Grafana on http://localhost:3000 with the default credentials admin:admin.

If you want to install Prometheus and Grafana manually, you can follow the instructions below.

To install Prometheus, you can follow the instructions on the official website.

To install Grafana, you can follow the instructions on the official website.

Notes on project creation

EigenLayer middleware was installed as a submodule with:

mkdir contracts
cd contracts
forge init . --no-commit
forge install Layr-Labs/eigenlayer-middleware@mainnet

Then, to solve the issue https://github.com/Layr-Labs/eigenlayer-middleware/issues/229, we changed it to:

forge install yetanotherco/eigenlayer-middleware@yac-mainnet --no-commit

As soon as it gets fixed in mainnet, we can revert it.

Base version of middleware used is 7229f2b.

The script to initialize the devnet can be found in contracts/scripts/anvil.

The addresses of the relevant contracts after running the anvil script are dumped in contracts/script/output/devnet.

The state is backed up in contracts/scripts/anvil/state.

EigenLayer contract deployment is almost the same as the EigenLayer contract deployment on mainnet. Changes are described in the file.

Running Fuzzers:

Fuzzing for the operator can be done by executing the following make commands from the root directory of the project.

macOS:

make operator_verification_data_fuzz_macos

Linux:

make operator_verification_data_fuzz_linux
