StarkNet Mainnet Alpha Upgraded to v0.10.2

StarkNet Mainnet alpha has been upgraded to v0.10.2.

Expect improved TPS in the upcoming StarkNet versions, driven by:

Sequencer parallelization - already deployed on Mainnet!

A new Rust implementation for the Cairo VM

Sequencer re-implementation in Rust

StarkNet Performance Roadmap

TL;DR

Validity rollups are not limited in throughput in the same manner as L1s. This gives rise to potentially much higher TPS on L2 validity rollups.

The StarkNet performance roadmap addresses a key element of the system: the sequencer.

We present here the roadmap for performance improvements:

— Sequencer parallelization

— A new Rust implementation for the Cairo VM

— Sequencer re-implementation in Rust

Provers, battle-tested as they are, are not the bottleneck and can handle much more than they do now!

Intro

StarkNet launched on Mainnet almost a year ago. We started building StarkNet by focusing on functionality. Now, we shift the focus to improving performance with a series of steps that will help enhance the StarkNet experience.

In this post, we explain why there is a wide range of optimizations that are only applicable to validity rollups, and we share our plan for implementing these steps on StarkNet. Some of these steps are already implemented in StarkNet Alpha 0.10.2, which was released on Testnet on Nov 16 and yesterday on Mainnet. But before we discuss the solutions, let’s review the limitations and their cause.

Block Limitations: Validity Rollups versus L1

A potential approach to increasing blockchain scalability and TPS would be to lift the block limitations (in terms of gas/size) while keeping the block time constant. This would require more effort from the block producers (validators on L1, sequencers on L2) and thus calls for a more efficient implementation of those components. To this end, we now shift the focus to StarkNet sequencer optimizations, which we describe in more detail in the following sections.

A natural question arises here. Why are sequencer optimizations limited to validity rollups, that is, why can’t we implement the same improvements on L1 and avoid the complexities of validity rollups entirely? In the next section, we claim that there is a fundamental difference between the two, allowing a wide range of optimizations on L2 that are not applicable to L1.

Why is L1 throughput limited?

Unfortunately, lifting the block limitations on L1 suffers from a major pitfall. By increasing the growth rate of the chain, we also increase the demands on full nodes, which attempt to keep up with the most recent state. Since L1 full nodes must re-execute the entire history, a large increase in block size (in terms of gas) puts significant strain on them, leading to weaker machines dropping out of the system and leaving the ability to run full nodes only to sufficiently large entities. As a result, users won’t be able to verify the state themselves and participate in the network trustlessly.

This leaves us with the understanding that L1 throughput should be limited, in order to maintain a truly decentralized and secure system.

Why don’t the same barriers affect Validity Rollups?

Only when considering the full node perspective do we see the true power offered by validity rollups. An L1 full node needs to re-execute the entire history to ensure the current state’s correctness. StarkNet nodes only need to verify STARK proofs, and this verification takes an exponentially lower amount of computational resources. In particular, syncing from scratch does not have to involve execution; a node may receive a dump of the current state from its peers and only verify via a STARK proof that this state is valid. This allows us to increase the throughput of the network without increasing the requirements from the full node.
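To make the contrast concrete, here is a minimal conceptual sketch in Rust (the language of the roadmap below). It is not an actual StarkNet node API; all types and helper functions in it are hypothetical placeholders. It only illustrates that an L1-style node pays for every block ever executed, while a validity-rollup node pays for a single proof verification over a state dump received from peers.

```rust
// Conceptual contrast only (not an actual StarkNet node API): the cost for a
// full node to trust the chain tip on L1 versus on a validity rollup.

struct Block;       // placeholders standing in for real chain data
struct StateDump;
struct StarkProof;

/// L1-style sync: trusting the tip requires re-executing every historical
/// block, so the cost grows with the total gas ever spent on the chain.
fn sync_l1(genesis: StateDump, history: &[Block]) -> StateDump {
    let mut state = genesis;
    for block in history {
        state = execute(block, state); // one full execution per block
    }
    state
}

/// Validity-rollup sync: receive a dump of the current state from peers and
/// verify a single STARK proof attesting that this state results from a valid
/// history; verification is exponentially cheaper than re-execution.
fn sync_validity_rollup(state: StateDump, proof: &StarkProof) -> Option<StateDump> {
    if verify_stark(proof, &state) {
        Some(state)
    } else {
        None
    }
}

// Hypothetical helpers assumed for the sketch.
fn execute(_block: &Block, state: StateDump) -> StateDump { state }
fn verify_stark(_proof: &StarkProof, _state: &StateDump) -> bool { true }

fn main() {
    let tip = sync_validity_rollup(StateDump, &StarkProof);
    assert!(tip.is_some());
    let _ = sync_l1(StateDump, &[Block, Block]);
}
```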

We therefore conclude that the L2 sequencer is subject to an entire spectrum of optimizations that are not possible on an L1.

Performance Roadmap Ahead

In the next sections, we discuss which of those are currently planned for the StarkNet sequencer.

Sequencer parallelization

The first step on our roadmap was to introduce parallelization into transaction execution. This landed in StarkNet alpha 0.10.2, which was released yesterday on Mainnet. We now dive into what parallelization is (this is a semi-technical section; to continue with the roadmap, jump to the next section).

So what does “transaction parallelization” mean? Naively, executing a block of transactions in parallel is impossible as different transactions may be dependent. This is illustrated in the following example. Consider a block with three transactions from the same user:

Transaction A: swap USDC for ETH

Transaction B: pay ETH for an NFT

Transaction C: swap USDT for BTC

Clearly, Tx A must happen before Tx B, but Tx C is entirely independent of both and can be executed in parallel. If each transaction requires 1 second to execute, then the block production time can be reduced from 3 seconds to 2 seconds by introducing parallelization.

The crux of the problem is that we do not know the transaction dependencies in advance. In practice, only when we execute transaction B from our example do we see that it relies on changes made by transaction A. More formally, the dependency follows from the fact that transaction B reads from storage cells that transaction A has written to. We can think of the transactions as forming a dependency graph, where there is an edge from transaction A to transaction B iff A writes to a storage cell that is read by B, and thus has to be executed before B. The following figure shows an example of such a dependency graph:

[Figure 1: an example dependency graph for transactions Tx1–Tx9, arranged in columns of mutually independent transactions]

In the above example, each column can be executed in parallel, and this is the optimal arrangement (while naively, we would have executed transactions 1–9 sequentially).
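As a rough illustration of the dependency graph and its column arrangement, here is a minimal sketch, assuming (unrealistically, as discussed above) that each transaction’s read and write sets are known up front. The Tx model, the storage-cell names, and the batching heuristic are all illustrative, not StarkNet’s actual scheduler.

```rust
use std::collections::HashSet;

// Hypothetical simplification: a transaction described only by the storage
// cells it reads and writes. In reality these sets are discovered only while
// the transaction executes.
struct Tx {
    reads: HashSet<&'static str>,
    writes: HashSet<&'static str>,
}

// Group transactions into batches that may run in parallel: transaction j is
// placed one level after the latest earlier transaction i that wrote a cell
// j reads or writes, i.e., after all of j's predecessors in the dependency graph.
fn parallel_batches(txs: &[Tx]) -> Vec<Vec<usize>> {
    let mut level = vec![0usize; txs.len()];
    let mut batches: Vec<Vec<usize>> = Vec::new();
    for j in 0..txs.len() {
        for i in 0..j {
            let conflict = txs[i]
                .writes
                .iter()
                .any(|c| txs[j].reads.contains(c) || txs[j].writes.contains(c));
            if conflict {
                level[j] = level[j].max(level[i] + 1);
            }
        }
        if batches.len() <= level[j] {
            batches.resize(level[j] + 1, Vec::new());
        }
        batches[level[j]].push(j);
    }
    batches
}

fn main() {
    // The A/B/C example from the text: B depends on A, C is independent.
    let txs = vec![
        Tx { reads: HashSet::from(["user.USDC"]), writes: HashSet::from(["user.ETH"]) }, // A
        Tx { reads: HashSet::from(["user.ETH"]),  writes: HashSet::from(["user.NFT"]) }, // B
        Tx { reads: HashSet::from(["user.USDT"]), writes: HashSet::from(["user.BTC"]) }, // C
    ];
    // Prints [[0, 2], [1]]: A and C run in the first batch, B in the second,
    // so the block takes two execution rounds instead of three.
    println!("{:?}", parallel_batches(&txs));
}
```

Each inner vector corresponds to one “column” of the figure: a set of transactions with no edges between them that can be executed simultaneously.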

To overcome the fact that the dependency graph is not known in advance, we introduce optimistic parallelization, in the spirit of BLOCK-STM developed by Aptos Labs, to the StarkNet sequencer. Under this paradigm, we optimistically attempt to run transactions in parallel and re-execute upon finding a collision. For example, we may execute transactions 1–4 from figure 1 in parallel, only to find out afterward that Tx4 depends on Tx1. Hence, its execution was useless (we ran it relative to the same state we ran Tx1 against, while we should have run it against the state resulting from applying Tx1). In that case, we will re-execute Tx4.

Note that we can add many optimizations on top of optimistic parallelization. For example, rather than naively waiting for each execution to end, we can abort an execution the moment we find a dependency that invalidates it.

Another example is optimizing the choice of which transactions to re-execute. Suppose that a block consisting of all the transactions from figure 1 is fed into a sequencer with five CPU cores. First, we try to execute transactions 1–5 in parallel. If the order of completion is Tx2, Tx3, Tx4, Tx1, and finally Tx5, then we discover the dependency Tx1→Tx4 only after Tx4 has already been executed, indicating that it should be re-executed. Naively, we might want to re-execute Tx5 as well, since it may behave differently given the new execution of Tx4. However, rather than re-executing all the transactions after the now-invalidated Tx4, we can traverse the dependency graph constructed from the transactions whose execution has already terminated and re-execute only those that depended on Tx4.
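The sketch below is a greatly simplified take on this optimistic approach (not StarkNet’s implementation, and far cruder than BLOCK-STM): run every transaction once against the same snapshot, then commit in block order and re-execute only the transactions whose recorded reads were touched by an earlier transaction’s committed writes. The state model, the Tx type, and the example transactions are hypothetical.

```rust
use std::collections::{HashMap, HashSet};

type Cell = &'static str;
type State = HashMap<Cell, i64>;

// Hypothetical model: executing a transaction against an observed state
// yields the set of cells it read and the values it wants to write.
type Tx = fn(&State) -> (HashSet<Cell>, HashMap<Cell, i64>);

// Simplified optimistic schedule: phase 1 runs every transaction against the
// same snapshot (conceptually in parallel); phase 2 commits in block order
// and re-executes a transaction only if an earlier transaction wrote a cell
// that it read, i.e., only the transactions downstream of the collision.
fn execute_block(txs: &[Tx], mut state: State) -> State {
    // Phase 1: optimistic execution against the initial snapshot.
    let snapshot = state.clone();
    let mut results: Vec<(HashSet<Cell>, HashMap<Cell, i64>)> =
        txs.iter().map(|tx| tx(&snapshot)).collect();

    // Phase 2: validate and commit in order.
    let mut dirty: HashSet<Cell> = HashSet::new(); // cells written by committed txs
    for (i, tx) in txs.iter().enumerate() {
        if results[i].0.iter().any(|cell| dirty.contains(cell)) {
            // Collision: the optimistic run used stale data; re-execute this
            // transaction against the committed state instead of the snapshot.
            results[i] = tx(&state);
        }
        for (cell, value) in &results[i].1 {
            state.insert(*cell, *value);
            dirty.insert(*cell);
        }
    }
    state
}

// Tx A credits 1 ETH; Tx B reads the ETH balance, so it collides with A and
// is re-executed against the state that already includes A's write.
fn tx_a(_s: &State) -> (HashSet<Cell>, HashMap<Cell, i64>) {
    (HashSet::new(), HashMap::from([("eth", 1)]))
}
fn tx_b(s: &State) -> (HashSet<Cell>, HashMap<Cell, i64>) {
    let eth = s.get("eth").copied().unwrap_or(0);
    (HashSet::from(["eth"]), HashMap::from([("nft", eth)]))
}

fn main() {
    // Final state: eth = 1, nft = 1 (B's optimistic run saw eth = 0 and was redone).
    println!("{:?}", execute_block(&[tx_a, tx_b], State::new()));
}
```

Because commits happen in block order, a re-executed transaction always sees the fully committed prefix, while transactions that touched no dirty cells keep their optimistic results untouched.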

A New Rust Implementation for the Cairo-VM

Smart contracts in StarkNet are written in Cairo and are executed inside the Cairo-VM, whose specification appears in the Cairo paper. Currently, the sequencer uses a Python implementation of the Cairo-VM. To improve the VM’s performance, we have launched an effort to rewrite the VM in Rust. Thanks to the great work of Lambdaclass, who are by now an invaluable team in the StarkNet ecosystem, this effort is soon coming to fruition.

The VM’s Rust implementation, cairo-rs, can now execute native Cairo code. The next step is handling smart-contract execution and integration with the Python sequencer. Once integrated with cairo-rs, the sequencer’s performance is expected to improve significantly.

Sequencer Re-Implementation in Rust

Our shift from Python to Rust to improve performance is not limited to the Cairo VM. Alongside the improvements mentioned above, we plan to rewrite the sequencer from scratch in Rust. In addition to Rust’s inherent advantages, this presents an opportunity for further sequencer optimizations. To name a couple: we can enjoy the benefits of cairo-rs without the overhead of Python-Rust communication, and we can completely redesign the way the state is stored and accessed (which today is based on the Patricia-Trie structure).

What About Provers?

Throughout this post, we haven’t mentioned perhaps the most famous element of validity rollups: the prover. One could imagine that, being arguably the most sophisticated component of the architecture, it would be the bottleneck and thus the focus of optimization. Interestingly, it is the more “standard” components that are now the bottleneck of StarkNet. Today, particularly with recursive proofs, we can fit many more transactions into a proof than the current traffic on Testnet/Mainnet. In fact, today, StarkNet blocks are proven alongside StarkEx transactions, where the latter can sometimes include several hundred thousand NFT mints.

Summary

Parallelization, Rust, and more — brace yourselves for an improved TPS in the upcoming StarkNet versions.
