For Holesky, we are using WETH as the strategy token.
To get HolETH and swap it for different strategies, you can use the following guide.
Config
There is a default configuration for devnet purposes in config-files/config.yaml. Also, there are three different configurations for the operator in config-files/devnet/operator-1.yaml, config-files/devnet/operator-2.yaml and config-files/devnet/operator-3.yaml.
The configuration file has the following structure:
```yaml
# Common variables for all the services
# 'production' only prints info and above. 'development' also prints debug
environment: <production/development>
aligned_layer_deployment_config_file_path: <path_to_aligned_layer_deployment_config_file>
eigen_layer_deployment_config_file_path: <path_to_eigen_layer_deployment_config_file>
eth_rpc_url: <http_rpc_url>
eth_ws_url: <ws_rpc_url>
eigen_metrics_ip_port_address: <ip:port>

## ECDSA Configurations
ecdsa:
  private_key_store_path: <path_to_ecdsa_private_key_store>
  private_key_store_password: <ecdsa_private_key_store_password>

## BLS Configurations
bls:
  private_key_store_path: <path_to_bls_private_key_store>
  private_key_store_password: <bls_private_key_store_password>

## Operator Configurations
operator:
  aggregator_rpc_server_ip_port_address: <ip:port> # This is the aggregator url
  address: <operator_address>
  earnings_receiver_address: <earnings_receiver_address> # This is the address where the operator will receive the earnings, it can be the same as the operator address
  delegation_approver_address: "0x0000000000000000000000000000000000000000"
  staker_opt_out_window_blocks: 0
  metadata_url: "https://yetanotherco.github.io/operator_metadata/metadata.json"
  enable_metrics: <true|false>
  metrics_ip_port_address: <ip:port>
  max_batch_size: <max_batch_size_in_bytes>

# Operator variables needed to register it in EigenLayer
el_delegation_manager_address: <el_delegation_manager_address> # This is the address of the EigenLayer delegationManager
private_key_store_path: <path_to_bls_private_key_store>
bls_private_key_store_path: <bls_private_key_store_password>
signer_type: local_keystore
chain_id: <chain_id>
```
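Once a config file is filled in, you can point an operator at it by passing the file to the corresponding make target. The target name and `CONFIG_FILE` variable in the sketch below are assumptions based on the repository's Makefile conventions; check the Makefile for the exact names.

```bash
# Sketch: register and start an operator using one of the devnet config files.
# The target name and CONFIG_FILE variable are assumptions; verify them in the Makefile.
make operator_register_and_start CONFIG_FILE=config-files/devnet/operator-1.yaml
```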
Changing operator keys:
Operator keys can be changed if needed.
When creating a new wallet keystore and private key please use strong passwords for your own protection.
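If you need to generate fresh keystores, one option is the EigenLayer CLI (assuming it is installed; subcommands and flags vary between CLI versions, so confirm with its help output):

```bash
# Sketch, assuming the EigenLayer CLI is installed; subcommand/flag names may differ by version.
# Each command generates an encrypted keystore and prompts for a password -- use a strong one.
eigenlayer operator keys create --key-type ecdsa my-operator-ecdsa
eigenlayer operator keys create --key-type bls my-operator-bls
# Then point the ecdsa/bls private_key_store_path entries of your operator config
# at the resulting keystore files.
```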
You can configure the batcher in config-files/config.yaml:
```yaml
# Common variables for all the services
eth_rpc_url: <http_rpc_url>
eth_ws_url: <ws_rpc_url>
aligned_layer_deployment_config_file_path: <path_to_aligned_layer_deployment_config_file>

## Batcher Configurations
batcher:
  block_interval: <block_interval>
  batch_size_interval: <batch_size_interval>
  max_proof_size: <max_proof_size_in_bytes>
  max_batch_size: <max_batch_size_in_bytes>
  pre_verification_is_enabled: <true|false>

## ECDSA Configurations
ecdsa:
  private_key_store_path: <path_to_ecdsa_private_key_store>
  private_key_store_password: <ecdsa_private_key_store_password>
```
Run
make batcher_start
or
make batcher_start_local
The latter target also sets up a LocalStack instance to act as a replacement for S3, so you don't need to interact with (and give money to) AWS for your tests.
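If you want to confirm that batches are reaching the local S3 replacement, you can list its contents through LocalStack's edge endpoint. This sketch assumes the AWS CLI is installed, LocalStack is listening on its default port 4566, and the bucket name matches your batcher config:

```bash
# Sketch: inspect the LocalStack-backed S3 storage used by the local batcher.
# Assumes the AWS CLI is installed and LocalStack runs on its default port 4566.
export AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1
aws --endpoint-url=http://localhost:4566 s3 ls                      # list buckets
aws --endpoint-url=http://localhost:4566 s3 ls s3://<your_bucket>/  # list stored batches
```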
Send test proofs
Next, you can use some of the send-proofs make targets. All these proofs are pre-generated and intended for testing purposes; feel free to generate your own proofs to submit to Aligned.
SP1
Send an individual proof:
make batcher_send_sp1_task
Send a burst of 15 proofs:
make batcher_send_sp1_burst
Send proofs indefinitely:
make batcher_send_infinite_sp1
Risc0
Send an individual proof:
make batcher_send_risc0_task
Send a burst of 15 proofs:
make batcher_send_risc0_burst
Plonk
Send an individual BN254 proof:
make batcher_send_plonk_bn254_task
Send a burst of 15 BN254 proofs:
make batcher_send_plonk_bn254_burst
Send an individual BLS12-381 proof:
make batcher_send_plonk_bls12_381_task
Send a burst of 15 BLS12-381 proofs:
make batcher_send_plonk_bls12_381_burst
Groth16
Send an individual BN254 proof:
make batcher_send_groth16_bn254_task
Send BN254 proofs indefinitely:
make batcher_send_infinite_groth16
Send BN254 proof bursts indefinitely:
make batcher_send_burst_groth16
Send a specific proof:
To install the Aligned client to send a specific proof, run:
make install_aligned_compiling
The SP1 and Risc0 proofs need the proof file and the VM program file. The current SP1 version used in Aligned is v3.0.0, and the current Risc0 version used in Aligned is v1.1.2. The GnarkPlonkBn254, GnarkPlonkBls12_381 and Groth16Bn254 proofs need the proof file, the public input file and the verification key file.
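As an illustration, submitting a single SP1 proof with the installed client could look like the sketch below. The flag set depends on the client version, and the URLs and paths here are placeholders, so check `aligned submit --help` before relying on it:

```bash
# Sketch of a manual submission with the aligned client.
# Flag names may vary by version; run `aligned submit --help` to confirm.
aligned submit \
  --proving_system SP1 \
  --proof ./my_proof/proof.bin \
  --vm_program ./my_proof/program.elf \
  --rpc_url http://localhost:8545 \
  --batcher_url ws://localhost:8080
```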
If you also want to start the explorer for the devnet, to clearly visualize your submitted and verified batches, see how to run it using the following documentation:
To set up the explorer, an installation of the DB is necessary.
First, you'll need to install Docker if you don't have it already. You can follow the instructions here.
The explorer uses a PostgreSQL database. To build and start the DB using docker, run:
make explorer_build_db
(Optional) The steps to manually set up the database are as follows:
Run the database container, opening port 5432:
make explorer_run_db
Configure the database with Ecto by running ecto.create and ecto.migrate:
make explorer_ecto_setup_db
Start the explorer:
make run_explorer
[!NOTE] If you want to run the DB separately, without docker, you can set it up and start the explorer with the following command:
make run_explorer_without_docker_db
To clear the DB, you can run:
make explorer_clean_db
If you need to dump the data from the DB, you can run:
make explorer_dump_db
This will create a dump.$date.sql SQL script in the explorer directory with all the existing data.
Data can be recovered from a dump.$date.sql using the following command:
make explorer_recover_db
You'll then be prompted to enter the file name of the dump you want to recover; the file must already be placed in the /explorer directory.
This will update your database with the dumped database data.
Extra Explorer script to fetch past batches
If you want to fetch past batches that, for any reason, were not inserted into the DB, you first need to make sure the ELIXIR_HOSTNAME .env variable is configured. You can get the hostname of your Elixir instance by running:
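A one-liner that typically works for this (assuming elixir is on your PATH; this is a sketch, not necessarily the project's exact command) is:

```bash
# Sketch: print the hostname as Elixir/Erlang sees it (assumes elixir is installed).
elixir -e ':inet.gethostname() |> elem(1) |> IO.puts()'
```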
You can modify which blocks are fetched by changing the parameters passed to explorer_fetch_old_batches.sh, as shown in the sketch below.
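For instance, assuming the script is wired to a make target that accepts a block range (the target and variable names here are assumptions; confirm them in the Makefile and in explorer_fetch_old_batches.sh):

```bash
# Hypothetical invocation: fetch batches between two block heights.
# FROM_BLOCK/TO_BLOCK are assumed variable names; confirm them in explorer_fetch_old_batches.sh.
make explorer_fetch_old_batches FROM_BLOCK=1600000 TO_BLOCK=1610000
```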
Running the Explorer
To run the explorer for the local devnet, you'll need to have the devnet running and the DB already setup.
Additionally, you'll need to have the .env file in the /explorer directory of the project. A base example of the .env file can be found in /explorer/.env.dev.
Use the following command to start the Explorer:
make run_explorer
Now you can visit localhost:4000 from your browser. You can access a batch's information by visiting localhost:4000/batches/:merkle_root.
There's an additional Explorer script to fetch past operators and restake
If you want to fetch past operators, strategies and restake, you will need to run:
make explorer_fetch_old_operators_strategies_restakes
This will run the script explorer_fetch_old_operators_strategies_restakes.sh, which fetches the operators, strategies and restakes and then inserts them into the DB.
Run with custom env / other devnets
Create a .env file in the /explorer directory of the project. The .env file needs to contain the following variables:
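A minimal sketch of what such a file tends to contain is shown below; treat the variable names as assumptions and use the base example in /explorer/.env.dev as the authoritative reference:

```bash
# Illustrative .env sketch only -- variable names are assumptions; see /explorer/.env.dev.
RPC_URL=http://localhost:8545
ENVIRONMENT=devnet
PHX_HOST=localhost
DB_NAME=aligned_layer
DB_USER=user
DB_PASS=pass
DB_HOST=localhost
ELIXIR_HOSTNAME=<output of the hostname command above>
```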
Then you can run the explorer with this env file config by entering the following command:
make run_explorer
This will start the explorer with the configuration set in the .env file on port 4000. Visit localhost:4000 from your browser.
Metrics
Aggregator Metrics
Aggregator metrics are exposed on the /metrics endpoint.
If you are using the default config, you can access the metrics on http://localhost:9091/metrics.
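To quickly check that the endpoint is up, you can query it directly:

```bash
# Fetch the raw Prometheus metrics exposed by the aggregator (default devnet config).
curl http://localhost:9091/metrics
```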
To run Prometheus and Grafana, run:
make run_metrics
Then you can access Grafana on http://localhost:3000 with the default credentials admin:admin.
If you want to install Prometheus and Grafana manually, you can follow the instructions below.
To install Prometheus, you can follow the instructions on the official website.
To install Grafana, you can follow the instructions on the official website.
Notes on project creation
EigenLayer middleware was installed as a submodule with:
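A typical invocation of this kind (the destination path is an assumption; adjust it to the project's layout) looks like:

```bash
# Sketch: add the EigenLayer middleware repository as a git submodule.
# The destination path is an assumption; adjust it to the project's layout.
git submodule add https://github.com/Layr-Labs/eigenlayer-middleware.git contracts/lib/eigenlayer-middleware
```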