The Solana Foundation, the organization behind the SOL cryptocurrency, has shared details about the attack that took its network offline for more than 17 hours exactly one week ago, on September 14th.

This is the first official analysis published since the outage. In it, the foundation confirmed that a distributed denial-of-service (DDoS) style flood of transactions brought its ecosystem to a standstill.

In this kind of offensive, a network is flooded with far more requests or transactions than it can process and goes down as a result. Solana's flood, however, may have been more of an accident than a malicious action.

“The Grape Protocol launched its IDO on Raydium, and bots generated transactions that flooded the network. These transactions caused a memory overflow, which made many validators crash, forcing the network to slow down and eventually stop,” the foundation explained.

At the peak, bots were submitting more than 300,000 transactions per second, while Solana can process roughly 65,000 per second. The network went down when validators could no longer agree on the state of the blockchain, which prevented new blocks from being produced.

In its analysis, the foundation described the failure in more detail: “Transactions flooded a system known as the forwarding queue, causing the memory used by that queue to grow without bounds. The transactions encoded into blocks also required a lot of resources to process.”
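
The dynamic the note describes, a queue whose memory footprint scales with the flood rather than with a fixed cap, can be illustrated with a short Rust sketch. This is not Solana's actual forwarding-queue code; the `Tx` type, `QUEUE_CAPACITY`, and both functions are invented here purely to contrast unbounded growth with a bounded queue that sheds excess load:

```rust
// Illustrative sketch only -- not Solana's forwarding-queue implementation.
use std::collections::VecDeque;

const QUEUE_CAPACITY: usize = 10_000; // hypothetical cap for illustration

struct Tx([u8; 1232]); // roughly packet-sized payload, for illustration

fn enqueue_unbounded(queue: &mut VecDeque<Tx>, tx: Tx) {
    // Memory use grows with queue.len(): if producers outpace the consumer
    // indefinitely, the process eventually runs out of memory.
    queue.push_back(tx);
}

fn enqueue_bounded(queue: &mut VecDeque<Tx>, tx: Tx) -> Result<(), Tx> {
    // A bounded queue applies backpressure instead: once full, new
    // transactions are rejected and memory use stays capped.
    if queue.len() >= QUEUE_CAPACITY {
        return Err(tx); // caller can drop or retry later
    }
    queue.push_back(tx);
    Ok(())
}

fn main() {
    // Simulate a burst five times larger than the bounded queue is sized for.
    let mut unbounded: VecDeque<Tx> = VecDeque::new();
    let mut bounded: VecDeque<Tx> = VecDeque::new();
    for _ in 0..QUEUE_CAPACITY * 5 {
        enqueue_unbounded(&mut unbounded, Tx([0u8; 1232]));
        let _ = enqueue_bounded(&mut bounded, Tx([0u8; 1232]));
    }
    // The unbounded queue keeps every packet; the bounded one stops at its cap.
    println!(
        "unbounded: {} packets, bounded: {} packets",
        unbounded.len(),
        bounded.len()
    );
}
```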

Amid this runaway growth of the queue, validators began proposing a series of forks. The analysis notes that these parallel chains created a new problem, as “the block producers’ nodes started to run out of memory and crash, and upon restarting they were unable to process all the proposed forks in time to stay in consensus with the rest of the network”.

How Solana’s community solved the problem

In the early hours of the outage, Solana’s validators gathered on Discord to work out how to restore the network and proposed a hard fork from the last confirmed slot.

This restart required validators representing at least 80% of the network’s active stake to come back online and reach consensus, a process that took 14 hours.
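
To make the 80% figure concrete, here is a minimal Rust sketch of what such a stake-weighted readiness check could look like. It illustrates the rule described above and is not code from the Solana validator; the `Validator` struct, its fields, and `restart_quorum_reached` are hypothetical names:

```rust
// Hypothetical sketch of the stake-weighted restart threshold described above.
const RESTART_THRESHOLD: f64 = 0.80;

struct Validator {
    stake: u64, // stake delegated to this validator (illustrative units)
    ready: bool, // has this validator restarted on the agreed slot?
}

fn restart_quorum_reached(validators: &[Validator]) -> bool {
    let total_stake: u64 = validators.iter().map(|v| v.stake).sum();
    let ready_stake: u64 = validators
        .iter()
        .filter(|v| v.ready)
        .map(|v| v.stake)
        .sum();
    // The network only resumes producing blocks once validators holding
    // at least 80% of the total active stake are back online.
    total_stake > 0 && ready_stake as f64 / total_stake as f64 >= RESTART_THRESHOLD
}

fn main() {
    let validators = vec![
        Validator { stake: 500, ready: true },
        Validator { stake: 300, ready: true },
        Validator { stake: 200, ready: false },
    ];
    // 800 of 1,000 stake units are ready: exactly the 80% threshold.
    println!("quorum reached: {}", restart_quorum_reached(&validators));
}
```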

“Engineers from around the world worked together to write code to mitigate the problem and coordinate an upgrade and restart of the network among more than a thousand validators,” Solana’s note explains. The foundation notes that the entire recovery effort was led by the community itself, following the guidelines described in the protocol documentation.

According to the foundation behind SOL, the delay in solving the problem was the result of the project’s own decentralization, which requires the community to reach consensus before any major changes are made.

“If Amazon Web Services fails, users need to trust Amazon to bring it back to normal. The credit and obligation to restore network operations on any blockchain is in the hands of the community,” says the note.

Finally, the foundation thanked the validators for their help and said that a more detailed technical report should be released in the coming weeks.
