Fix accounts trie remover config value
The config value AccountsTrieCleanOldEpochsData was not used. We therefore added a factory function that creates either a custom or a disabled DB remover, based on this config value.
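The factory pattern described above can be sketched as follows. This is a minimal illustration, not the node's actual code; the interface and type names (dbRemover, customRemover, disabledRemover, newDBRemover) are hypothetical.

```go
package main

import "fmt"

// dbRemover abstracts the component that deletes old epoch data from the
// accounts trie storage (hypothetical interface for illustration).
type dbRemover interface {
	Remove(epoch uint32) error
}

// customRemover actually deletes data for old epochs.
type customRemover struct{}

func (customRemover) Remove(epoch uint32) error {
	fmt.Printf("removing accounts trie data for epoch %d\n", epoch)
	return nil
}

// disabledRemover is a no-op, used when cleaning is turned off.
type disabledRemover struct{}

func (disabledRemover) Remove(uint32) error { return nil }

// newDBRemover is the factory: instead of silently ignoring the config
// value, it returns either a working remover or a no-op one.
func newDBRemover(cleanOldEpochsData bool) dbRemover {
	if cleanOldEpochsData {
		return customRemover{}
	}
	return disabledRemover{}
}

func main() {
	r := newDBRemover(true)
	_ = r.Remove(42)
}
```

The point of the factory is that the rest of the node always calls the same interface; only the construction site consults the config flag.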
Heartbeat-v2 improvements and fixes
- We detected an observer-to-observer connection issue that led to increased sync time. The root cause was that each node's unknown-peers list filled up very quickly, so new observers were ignored and could hardly synchronize. To fix this, each observer's shard is now broadcast across the entire network.
- We fixed the heartbeat-v2 monitor for validators using the redundancy feature. After the upgrade to v1.3.44, we observed that when some validators activated the redundancy feature, the numInstances value from the heartbeatStatus route was reported incorrectly.
- After some Mainnet nodes upgraded, the heartbeat status response randomly showed nodes in the network as active or inactive. This was caused by the fact that the node returned all stored messages (inactive or active) even when the public key was now behind another peer ID. To combat this, we've implemented a new algorithm for better filtering:
- (i) if all stored messages show inactive nodes, only keep the latest message, remove the older ones, and update the NumInstances field of the data.PubKeyHeartbeat structure accordingly;
- (ii) if all stored messages show only active nodes, sort the messages by peerId (so that the response is consistent across requests), take the first active heartbeat message and set the NumInstances field of the data.PubKeyHeartbeat structure to the number of active messages the node has;
- (iii) if some stored messages show inactive nodes and others show active nodes, remove the stored messages that show inactive nodes and, for the remaining messages showing active nodes, apply (ii).
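The filtering rules above can be sketched as a small pure function. This is an illustrative simplification, not the monitor's actual code; the heartbeatMessage struct and its fields are hypothetical stand-ins for the per-pubkey messages the monitor stores.

```go
package main

import (
	"fmt"
	"sort"
)

// heartbeatMessage is a simplified stand-in for a stored heartbeat
// message of one public key (field names are illustrative).
type heartbeatMessage struct {
	PeerID       string
	IsActive     bool
	Timestamp    int64
	NumInstances uint64
}

// filterHeartbeats applies the three filtering rules described above.
func filterHeartbeats(msgs []heartbeatMessage) []heartbeatMessage {
	if len(msgs) == 0 {
		return nil
	}

	var active, inactive []heartbeatMessage
	for _, m := range msgs {
		if m.IsActive {
			active = append(active, m)
		} else {
			inactive = append(inactive, m)
		}
	}

	// rule (i): all messages inactive -> keep only the most recent one
	if len(active) == 0 {
		latest := inactive[0]
		for _, m := range inactive[1:] {
			if m.Timestamp > latest.Timestamp {
				latest = m
			}
		}
		return []heartbeatMessage{latest}
	}

	// rules (ii) and (iii): drop inactive messages, sort the active ones
	// by peer ID so the response is deterministic, keep the first and
	// record how many active instances back the same public key
	sort.Slice(active, func(i, j int) bool { return active[i].PeerID < active[j].PeerID })
	first := active[0]
	first.NumInstances = uint64(len(active))
	return []heartbeatMessage{first}
}

func main() {
	msgs := []heartbeatMessage{
		{PeerID: "peerB", IsActive: true},
		{PeerID: "peerA", IsActive: true},
		{PeerID: "peerC", IsActive: false},
	}
	fmt.Printf("%+v\n", filterHeartbeats(msgs))
}
```

Note how rule (iii) reduces to rule (ii) once the inactive messages are discarded, which is why the function only needs the two branches.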
New direct connection processor
We’ve added a new directConnectionProcessor because the existing method of splitting the directly connected peers per shard had issues when deciding whether a peer was a cross-shard or an intra-shard one.
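The core decision the processor has to make reliably can be sketched as below. This is a simplified illustration, assuming a resolver that maps a peer ID to its shard; the function and type names (splitConnections, peerShardResolver) are hypothetical.

```go
package main

import "fmt"

// peerShardResolver maps a peer ID to its shard; in the node this
// information comes from the peers themselves (illustrative type).
type peerShardResolver func(peerID string) uint32

// splitConnections classifies directly connected peers as intra-shard
// (same shard as ours) or cross-shard.
func splitConnections(selfShard uint32, peers []string, resolve peerShardResolver) (intra, cross []string) {
	for _, p := range peers {
		if resolve(p) == selfShard {
			intra = append(intra, p)
		} else {
			cross = append(cross, p)
		}
	}
	return intra, cross
}

func main() {
	shards := map[string]uint32{"p1": 0, "p2": 1, "p3": 0}
	intra, cross := splitConnections(0, []string{"p1", "p2", "p3"},
		func(id string) uint32 { return shards[id] })
	fmt.Println("intra:", intra, "cross:", cross)
}
```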
Add epoch start data endpoint
We’ve added a new endpoint, /epoch-start/:epoch, which returns the epoch start data for a given epoch.
Get logs on a best-effort basis
We observed that right at the beginning of an epoch, the logs & events of a transaction can be mistakenly saved in the storage associated with the previous epoch. That’s why we now attempt to load them from both the requested epoch N and the previous epoch, N-1.
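The fallback described above can be sketched as follows. This is a simplified illustration, not the node's storage code; the epochStorer interface and the helper names are hypothetical.

```go
package main

import (
	"errors"
	"fmt"
)

// epochStorer looks up the serialized logs & events of a transaction in
// the storage unit of a single epoch (simplified for illustration).
type epochStorer interface {
	Get(epoch uint32, txHash []byte) ([]byte, error)
}

var errNotFound = errors.New("logs not found")

// getLogsBestEffort tries the requested epoch first and falls back to
// the previous epoch, because logs written right at an epoch change may
// have landed in the previous epoch's storer.
func getLogsBestEffort(s epochStorer, epoch uint32, txHash []byte) ([]byte, error) {
	data, err := s.Get(epoch, txHash)
	if err == nil {
		return data, nil
	}
	if epoch == 0 {
		return nil, err // no previous epoch to fall back to
	}
	return s.Get(epoch-1, txHash)
}

// mapStorer is a toy in-memory implementation for the usage example.
type mapStorer map[uint32]map[string][]byte

func (m mapStorer) Get(epoch uint32, txHash []byte) ([]byte, error) {
	if v, ok := m[epoch][string(txHash)]; ok {
		return v, nil
	}
	return nil, errNotFound
}

func main() {
	s := mapStorer{41: {"tx1": []byte("logs-for-tx1")}}
	// the requested epoch 42 misses; the fallback to 41 succeeds
	data, err := getLogsBestEffort(s, 42, []byte("tx1"))
	fmt.Println(string(data), err)
}
```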
This release is fully backwards compatible!
Configuration Release Notes: v220.127.116.11.
Full GitHub Changelog: v1.3.48.
Feel free to send us feedback or open a topic in our GitHub Discussions tab and share your thoughts so that the entire MultiversX community can hear you. If you have a great idea, share it with us and let’s make it happen by implementing and integrating it in our ecosystem.
Stay Hungry! Stay Foolish!