Normally LND nodes use the embedded bbolt database to store all important state. This mode of operation has proven to work well in a variety of environments, from mobile clients to large nodes serving hundreds of channels. At scale, however, it is desirable to be able to replicate LND's state so that nodes can be moved quickly and reliably, updated with minimal disruption, and made more resilient to datacenter failures.
It is now possible to store all essential state in a replicated etcd DB and to run multiple LND nodes on different machines, where only one of them (the leader) is able to read and mutate the database. In such a setup, if the leader node fails or is decommissioned, a follower node is elected as the new leader and quickly comes online to minimize downtime.
The leader election feature currently relies on etcd, both for the election itself and for the replicated data store.
To create a dev build of LND with leader election support use the following command:
```shell
$ make tags="kvdb_etcd"
```
To start your local etcd instance for testing run:
```shell
$ ./etcd
```

An increased `max-request-bytes` value is currently recommended but may not be required in the future.
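As a sketch, a fuller test invocation might look like the following; the TLS and client-URL flags are standard etcd options, and the exact `max-request-bytes` value shown is illustrative:

```shell
$ ./etcd \
    --auto-tls \
    --advertise-client-urls=https://127.0.0.1:2379 \
    --listen-client-urls=https://0.0.0.0:2379 \
    --max-request-bytes=104857600
```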
To run LND with etcd, additional configuration is needed, specified either through command line flags or in `lnd.conf`.
Sample command line:
```shell
$ ./lnd-debug
```
The `cluster.etcd-election-prefix` option sets the election's etcd key prefix. The `cluster.id` option is used to identify the individual nodes in the cluster and should be set to a different value for each node.
Optionally users can specify `db.etcd.pass` for db user authentication. If the database is shared, it is possible to separate our data from that of other users by setting `db.etcd.namespace` to an (already existing) etcd namespace. In order to test without TLS, we can set `db.etcd.disabletls`.
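Putting these options together, a full invocation might look like the following sketch; the host, election prefix, and `cluster.id` values are illustrative:

```shell
$ ./lnd-debug \
    --db.backend=etcd \
    --db.etcd.host=127.0.0.1:2379 \
    --db.etcd.disabletls \
    --cluster.enable-leader-election \
    --cluster.leader-elector=etcd \
    --cluster.etcd-election-prefix=cluster-leader \
    --cluster.id=lnd-1
```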
Once the node is up and running, we can start more nodes with the same command line, making sure that each one uses a distinct `cluster.id`.
The above setup is useful for testing but is not viable when running in a production environment. For users relying on containers and orchestration services, it is essential to know which node is the leader to be able to automatically route network traffic to the right instance. For example in Kubernetes, the load balancer will route traffic to all "ready" nodes. This readiness may be monitored by a readiness probe.
For readiness probing we can simply use LND's state RPC service, where a special state, `WAITING_TO_START`, indicates that the node is waiting to become the leader and has not yet started. To test this we can simply curl the REST endpoint of the state RPC:
"set -e; set -o pipefail; curl -s -k -o - https://localhost:8080/v1/state | jq .'State' | grep -E 'NON_EXISTING|LOCKED|UNLOCKED|RPC_ACTIVE'",
Beginning with LND 0.14.0, when using a remote database (etcd or PostgreSQL), all LND data is written to the replicated database, including the wallet data (which contains the key material and node identity), the graph, the channel state, the macaroon and watchtower client databases. This means that when using leader election there's no need to copy anything between instances of the LND cluster.
No, leader election is not supported by Postgres itself since it doesn't have a mechanism to reliably determine a leading node. It is, however, possible to use Postgres as the LND database backend while using an etcd cluster purely for the leader election functionality.
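As a sketch of such a mixed setup, assuming the election reuses the etcd connection options under `db.etcd` (the DSN, host, and cluster values are illustrative):

```shell
$ ./lnd-debug \
    --db.backend=postgres \
    --db.postgres.dsn=postgres://lnd:lnd@127.0.0.1:5432/lnd \
    --db.etcd.host=127.0.0.1:2379 \
    --cluster.enable-leader-election \
    --cluster.leader-elector=etcd \
    --cluster.etcd-election-prefix=cluster-leader \
    --cluster.id=lnd-1
```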