Long-awaited updates are finally ready to be released.
1. Heartbeat v2
With heartbeat-v1, each node sent a heartbeat message to the entire network once per minute. As of today we have 3200 validator nodes (adding the observers used for various services, the real number of nodes rises to ~5000, but for the sake of this calculation we stick to 3200), therefore we expect ~3200 heartbeat messages propagated to the entire network every minute. Considering a heartbeat message size of ~1KB and a propagation to ~3-4 peers, this translates to ~1.3Mbps of traffic per node just for the heartbeat messages.
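The back-of-the-envelope estimate above can be reproduced with a short sketch (the 3-peer fan-out used here is an assumption at the lower end of the quoted ~3-4 range):

```go
package main

import "fmt"

// heartbeatTrafficMbps estimates per-node traffic generated by heartbeat-v1:
// every node sends one message per minute, and each received message is
// forwarded to a handful of peers.
func heartbeatTrafficMbps(nodes, msgSizeBytes, peersForwarded int) float64 {
	bitsPerMinute := float64(nodes * msgSizeBytes * 8 * peersForwarded)
	return bitsPerMinute / 60 / 1e6
}

func main() {
	// 3200 validators, ~1KB messages, propagation to ~3 peers.
	fmt.Printf("~%.2f Mbps per node\n", heartbeatTrafficMbps(3200, 1024, 3))
}
```

With a fan-out of 4 instead of 3, the same formula yields ~1.75Mbps, so ~1.3Mbps is the conservative end of the estimate.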
Even though 1.3Mbps has been manageable up until now, it is not sustainable in the long run. In addition to the quantity of data that needs to be transferred, there is also a processing overhead for each message (libp2p ECDSA signature verification + BLS signature verification, which is quite expensive in CPU terms).
We considered many options that (in one way or another) would have brought some kind of improvement, but in the end we settled on the following solution.
Heartbeat messages used to have a dual purpose:
- authentication of nodes and mapping of nodes registered for staking to libp2p IDs - this is used by the protocol's network sharding mechanism, in order to optimize the propagation of messages to interested actors (observers/validators);
- node liveness monitoring - this is used by our explorer and by some external solutions.
With this information in mind, we decided to separate the implementation into two modules, one for liveness and one for authentication:
- a new inter-shard (global) topic for the authentication messages - these will be sent only by validators, every 2h (may be subject to change);
- the heartbeat topic becomes an intra-shard topic - heartbeat messages will be sent by the entire network (each peer to the shard it belongs to) at a 5-minute interval (may be subject to change).
To implement the above-mentioned separation, and in order to keep backwards compatibility between peers running the old version and peers running the new one, the transition was prepared in 2 steps:
- The first step introduces the new topics, PeerAuthentication and Heartbeat_[shardID], while keeping the current heartbeat implementation (both for monitor and sender). In addition, we add the new implementations for peerAuthentication resolvers and interceptors, an internal cache, and make the new implementations write into the same peerShardMapper. In this way, the peerShardMapper used in network sharding will be fed from 2 possible streams: the old heartbeat implementation and the new peerInfo topic. We also provide the same crypto.PeerSignatureHandler, so as to cache the BLS key signatures coming from the same 2 possible streams.
- After all nodes transition to the new implementation of the heartbeat functionality, in the second step, the old heartbeat version will eventually be disabled and the associated code will be removed.
2. Rosetta integration
The Node API (and its underlying components) has been updated and improved to allow us to implement the Rosetta API gateway (as a separate application). Rosetta is an open standard designed to simplify blockchain deployment and interaction. Rosetta's goal is to make blockchain integration simpler, faster, and more reliable than using a native integration.
3. Partial miniblock execution
In order to execute a transaction, we require gas, just as a car uses fuel or a computer consumes electricity. Gas cost, in general, is a combination of gas used by contract execution time (the cost to execute a System or User-Defined Smart Contract) and by value movement + data handling (minimum gas limit + the cost to write data into MultiversX's blockchain - cost per data byte).
The structure of our blocks is represented by a block header (block nonce, round, proposer, validators, timestamp, etc.) and a list of miniblocks for each shard, containing the actual transaction hashes. Miniblocks, in turn, contain a header (miniblock type, status, sender and destination shard, etc.) and, starting with this release, information about the range of transactions that have been executed as part of the respective miniblock. Each block has a gas limit (3.0bn gas units starting with the scheduled transactions feature activation), representing the amount of computing space available during each block interval (for ease of calculation, consider 1 second of execution time = an estimated 1.0bn gas units).
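As a quick sanity check on these figures, the 3.0bn gas block limit corresponds to roughly 3 seconds of estimated execution time per fully packed block:

```go
package main

import "fmt"

// Per-block capacity figures taken from the text: 3.0bn gas units per block,
// and an estimated 1.0bn gas units per second of execution time.
const (
	blockGasLimit uint64 = 3_000_000_000
	gasPerSecond  uint64 = 1_000_000_000
)

// computeSecondsPerBlock gives the estimated execution time that a fully
// packed block represents.
func computeSecondsPerBlock() uint64 {
	return blockGasLimit / gasPerSecond
}

func main() {
	fmt.Println("estimated execution seconds per full block:", computeSecondsPerBlock())
}
```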
Under the previous processing approach, the atomic unit of cross-shard execution was a whole miniblock: either all transactions of the miniblock were processed in a round or none of them, the miniblock execution being retried in the next round. As a consequence, some blocks could not be filled to their 3.0bn gas capacity, not for lack of transactions, but because the transactions were grouped into large miniblocks whose atomic execution required more gas than the block still had room for. For this reason, we made cross-shard transaction processing much more granular: instead of processing a whole miniblock in one round, at least one transaction from the respective miniblock is processed per round. Each time, the same miniblock hash is referenced, but with a different transaction range. The partial miniblock becomes finalized only when all of its transactions are processed (see figure below).
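The range-based processing described above can be sketched as follows; `processPartial` and its numbers are purely illustrative, not the node's actual implementation:

```go
package main

import "fmt"

// processPartial executes a miniblock's transactions in order until the
// block's remaining gas is exhausted, returning the last index processed in
// this round. At least one transaction is always processed, and the next
// round resumes from the following index under the same miniblock hash.
func processPartial(txGas []uint64, firstUnprocessed int, gasLeft uint64) (last int) {
	last = firstUnprocessed - 1
	for i := firstUnprocessed; i < len(txGas); i++ {
		if txGas[i] > gasLeft && i > firstUnprocessed {
			break // no room left; retry from index i next round
		}
		step := txGas[i]
		if step > gasLeft {
			step = gasLeft
		}
		gasLeft -= step
		last = i
	}
	return last
}

func main() {
	txGas := []uint64{900, 900, 900, 900} // gas per transaction (illustrative units)
	// Round 1: only 2000 gas left in the block -> transactions 0..1 fit.
	last := processPartial(txGas, 0, 2000)
	fmt.Println("round 1 processed up to index:", last)
	// Round 2: a fresh block has room for the rest -> 2..3 finalize the miniblock.
	last = processPartial(txGas, last+1, 3000)
	fmt.Println("round 2 processed up to index:", last)
}
```

Under the old all-or-nothing rule, round 1 would have done no useful work at all, since the whole 3600-gas miniblock did not fit into the remaining 2000 gas.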
This feature opens opportunities for future/further improvements.
4. Trie sync improvement
Release v1.3.42 brings a big improvement to the sync time. Most of the improvement comes from the adjustment of parameters for the trie sync request mechanism, which led to a decrease in the average number of requests per trie node from ~5-6 to slightly above 1. This translates to less data traffic and faster trie sync. When a trie is syncing, we save the synced data to both the current epoch storer and the previous epoch storer. This is done because data in the current storer is pruned during block processing, whereas the complete data for the epoch start will be present in the previous storer.
When an epoch changes, all trie data needed for the epoch start is copied from the previous epoch storer to a new storer for the current epoch. This process is called a state snapshot. We also added the possibility to save all data in a single storer, thus removing the need for snapshots. This will greatly help integrators and other service providers, who will no longer need to consume CPU time and disk IO bandwidth to create snapshots that are, from a service provider's perspective, irrelevant. Although this will help those integrators provide nodes with 100% uptime for serving API requests, it will reduce the number of nodes able to help other nodes bootstrap in a certain epoch, so this feature should be used carefully and responsibly. A node with 100% uptime for serving API responses is useless if the chain fails to produce blocks because nodes are unable to bootstrap in an epoch.
What got better?
- On API, we populated the fields tx.processingTypeOnSource, tx.processingTypeOnDestination;
- Adjusted GetBulkFromEpoch() to keep the order of requested items;
- Track "previous-to-final" block info in ChainHandler;
- Full refactor for accountsRepository;
- Integrated the data field parser for transactions and smart contract results in the indexer;
- Global settings role to burn for all;
- New fields on API miniblock: first / last processed TX;
- Remove lastSnapshotStarted if the snapshot finished successfully;
- Notifier client - add txhash for log events;
- Trie storage manager without snapshot;
- Fix import-db flags (NumActivePersisters);
- Switch heartbeat v2 to single data interceptor as multi data is not needed;
- Recreate trie from epoch;
- Added and integrated peerAuthenticationPayloadValidator component;
- And many other changes. See the Full Changelog for more improvements.
What did we fix?
- Fixed Test_getProcessedMiniBlocks, which was failing from time to time;
- Fixed the init call in transaction coordinator;
- Fixed isRelayedTransactionV2 which was causing ComputeTransactionType to return improper type;
- Trie fixes;
- Return an empty list if the sender is not found in the pool;
- Fixed indexer warning logs;
- Persistent metrics fixes;
- Print fixes;
- In receipts unit, save intrashard miniblocks with SCRs generated by cross-shard scheduled transactions;
- Added backwardsCompatibility for partialExecution on epochStartData;
- On API, fix GetAccountWithBlockInfo();
- Enable snapshot even if pruning is disabled;
- Complete refactor of the timeCacher implementation;
- Fix direct connections processor;
- Fix duplicated txs into tx pool api response;
- Optimized peer authentication messages management;
- And many other changes. See the Full Changelog for more fixes.
The 4 flags described below will be enabled at epoch 795, which is scheduled for October 3rd, 2022, ~15:15 UTC.
- ESDTMetadataContinuousCleanupEnableEpoch - due to an unintended behavior, metadata of tokens that no longer belong to any address in a shard remained in the trie, resulting in an unwanted and unnecessary state increase. This fix will proactively remove token metadata that is no longer used in a shard;
- HeartbeatDisableEpoch - disables the heartbeat v1 subsystem in order to reduce both network bandwidth usage and CPU consumption;
- MiniBlockPartialExecutionEnableEpoch - a new feature that allows the destination shard to partially execute miniblocks. The order is still kept but the atomic execution in one round is no longer required. This will add flexibility in the miniblocks and transactions execution while maintaining the existing processing flow;
- FixAsyncCallBackArgsListEnableEpoch - a minor fix for the asynchronous callback between shards, which will now add the correct transaction data on the smart contract result before the async callback call in the sender shard.
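For reference, activation epochs of this kind live in the node's enableEpochs.toml configuration file; a fragment matching the flags above might look like the sketch below (the exact file layout and surrounding entries may differ between releases, so consult the release's own config files for the authoritative values):

```toml
# Illustrative fragment of config/enableEpochs.toml
[enableEpochs]
    ESDTMetadataContinuousCleanupEnableEpoch = 795
    HeartbeatDisableEpoch = 795
    MiniBlockPartialExecutionEnableEpoch = 795
    FixAsyncCallBackArgsListEnableEpoch = 795
```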
Feel free to send us feedback or open a topic in our GitHub Discussions tab and share your thoughts so that the entire MultiversX community can hear you. If you have a great idea, share it with us and let's make it happen by implementing and integrating it into our ecosystem.
Stay Hungry! Stay Foolish!