Learned in Milan, part 3: Even small changes can have a big impact. The implementation of Schnorr Signatures and better Coin Selection can help to use the existing blockchain space more efficiently.
One of my major realizations at Scaling Bitcoin in Milan was that scaling Bitcoin is an effort taking place on many fronts. The block size – for a long time the dominant issue – is only one of these fronts, and an apparently growing number of developers believe that it is one on which little can be won but much can be lost. At least, that was the message the workshop sent.
I have already written a lot in the past about Core's official, non-negotiable scaling roadmap – combining SegWit for an immediate capacity increase with Lightning for long-term scaling. At the Scaling Bitcoin Workshop there was a kind of progress report on implementing SegWit (some “annoying little stuff to fix”) and on what has been learned while building SegWit (using separate testnets). Lightning was covered by three presentations, all about routing, and as a whole they were as impressive as they were sobering. Impressive, because they illustrated the large amount of work being done, and sobering, because they demonstrated that it is still a long journey to proper Lightning routing.
Perhaps more tangible were two presentations about the “little things”: improvements that get the most out of the existing space on the blockchain and help manage it more economically. You could call them the brown bread of scaling – no major breakthroughs, but small, gradual steps toward perfection. Both touched on a specific property of Bitcoin transaction building and its relationship to scaling.
Saving 20 percent by aggregating inputs with Schnorr
Let's start with Schnorr Signatures. The concept was presented by Pieter Wuille, who is, if you don't know him, one of the most important and most praised Core and Blockstream developers. He is the architect of SegWit and one of the major contributors to libsecp256k1, an impressive improvement of Bitcoin's signature verification. Whatever Wuille – a small man who looks like a bushy-bearded 19th-century professor – says is considered so well-founded that “it is enough to call his name to establish consensus”, as PieterWuilleFacts puts it.
Well. Pieter Wuille presented Schnorr Signatures. This signature scheme was introduced and patented by the Frankfurt-based mathematician Claus-Peter Schnorr. After ECDSA became a standard, Schnorr signatures were more or less forgotten. Like many other applications, Bitcoin uses the standardized ECDSA signatures, based on a specific elliptic curve.
In 2008, however, Schnorr's patent expired, and since 2011 there has been the idea of combining a specific family of elliptic curves with the Schnorr scheme. “Schnorr is more or less just a method,” explains Wuille, “there is still no standard to be used.”
Despite this, Bitcoin developers see many advantages in replacing ECDSA with Schnorr: Schnorr Signatures are secure, fast, non-malleable and natively support multisig. With them it may be possible to simplify Bitcoin's multisig procedure (which is, to be honest, quite unwieldy) and, maybe most importantly: with Schnorr's scheme you can aggregate signatures. Bitcoin transactions would no longer need one signature per input, but just one for all of them. This could, explains Wuille, save 20 percent of blockchain space. As a side effect it would incentivize the use of CoinJoin, as aggregating inputs in one transaction would significantly reduce fees.
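To make the aggregation idea concrete, here is a minimal sketch of Schnorr's scheme over a tiny multiplicative group – not the secp256k1 curve Bitcoin would actually use, and without the safeguards (e.g. against rogue-key attacks) a real proposal needs. Because the signature `s = k + e·x` is linear in the nonce and the private key, two partial signatures simply add up and verify against the combined public key:

```python
import hashlib
import random

# Toy Schnorr over a small multiplicative group -- NOT the secp256k1
# curve Bitcoin uses, and not safe for real use; it only shows the algebra.
p, q = 607, 101              # q divides p - 1 (606 = 6 * 101)
g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

def H(R, msg):
    data = f"{R}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)     # private key
    return x, pow(g, x, p)         # public key y = g^x

def sign(x, msg):
    k = random.randrange(1, q)     # fresh nonce per signature
    R = pow(g, k, p)
    s = (k + H(R, msg) * x) % q    # s is linear in k and x
    return R, s

def verify(y, msg, R, s):
    return pow(g, s, p) == (R * pow(y, H(R, msg), p)) % p

# Naive two-party aggregation: the partial signatures just add up.
# (Real proposals add extra steps to prevent rogue-key attacks.)
(x1, y1), (x2, y2) = keygen(), keygen()
k1, k2 = random.randrange(1, q), random.randrange(1, q)
R_agg = (pow(g, k1, p) * pow(g, k2, p)) % p   # combined nonce
e = H(R_agg, "tx")
s_agg = (k1 + e * x1 + k2 + e * x2) % q       # sum of partial signatures
y_agg = (y1 * y2) % p                         # combined public key
print(verify(y_agg, "tx", R_agg, s_agg))      # True
```

One combined `(R_agg, s_agg)` now stands in for two signatures, which is exactly the space saving the presentation described, scaled up to many inputs.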
As interesting as Schnorr Signatures are, Pieter Wuille's presentation also showed how much work remains to be done before they can become part of Bitcoin. But there are plans to implement them as a soft fork after SegWit.
Reducing UTXO with better Coin Selection
Closer to present-day Bitcoin was a presentation about Coin Selection and the UTXO set. Mark Erhardt from the Karlsruhe Institute of Technology (KIT) demonstrated the results of a simulation of different approaches to Coin Selection.
I'm afraid I have to go back a bit to bring these two terms together. Let's look at a Bitcoin transaction (a random example). You see inputs being transformed into outputs. Inputs are somewhat like physical coins, which can only be spent as a whole. Whatever is too much goes back as change (and here the physical-coin analogy falls apart).
The job of your wallet software is to manage the inputs and use them to compose transactions. That is Coin Selection. Every wallet has its own strategy for doing it. Imagine you have 10 inputs of 0.1 BTC, 2 inputs of 0.5 BTC and 1 input of 0.8 BTC. If you want to send 1 Bitcoin, how do you compose the transaction? You have several choices (change is excluded for simplicity):
- 10 x 0.1
- 5 x 0.1 + 1 x 0.5
- 2 x 0.5
- 1 x 0.8 + 2 x 0.1
Now – which inputs should be spent? Which variant is best? There are several reasons why this decision is anything but trivial:
- Fees: As you have to sign every input, a transaction with more inputs needs more signatures. More signatures need more space, and more space costs more fees. So economically the best decision is to use as few inputs as possible. In our example you should build the transaction with 2 x 0.5 BTC.
- UTXO: Each transaction takes old outputs in your wallet as inputs and transforms them into new outputs. This is called “spending outputs”. The set of unspent transaction outputs (UTXO set) consists of every unspent output of every wallet. Its sum is the monetary supply of Bitcoin. Roughly speaking, it can be called the minimum state of the system: you can prune the blockchain, but you can't make it smaller than the UTXO set. Since the UTXO set is constantly growing, it could become a serious obstacle for scaling. So if you want your wallet to select coins in a “blockchain-sustainable” way, it should consume as many of your unspent outputs as possible and consolidate them into a single output. Using 10 x 0.1 BTC would be perfect.
- Privacy: Thirdly – and for users most importantly – Coin Selection has serious implications for your privacy. But this was explicitly not the topic of Mark's presentation.
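The fee side of this trade-off is easy to sketch. Using a common rough size estimate for legacy transactions (about 148 bytes per input, 34 per output, 10 bytes of overhead) and an assumed fee rate of 50 satoshi per byte, the four variants cost noticeably different amounts:

```python
FEE_RATE = 50  # satoshi per byte, an assumed value for illustration

def estimate_fee(n_inputs, n_outputs, fee_rate=FEE_RATE):
    # Rough size estimate for a legacy (P2PKH) transaction:
    # ~148 bytes per input, ~34 bytes per output, ~10 bytes overhead.
    size = 10 + 148 * n_inputs + 34 * n_outputs
    return size * fee_rate

for label, n_inputs in [("2 x 0.5", 2), ("0.8 + 2 x 0.1", 3),
                        ("0.5 + 5 x 0.1", 6), ("10 x 0.1", 10)]:
    print(f"{label:>15}: {estimate_fee(n_inputs, 1):>6} satoshi")
```

Under these assumptions the ten-input, UTXO-friendly variant pays more than four times the fee of the two-input variant – which is why the two goals pull in opposite directions.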
The master's student, who is currently writing his thesis, simulated the impact of Coin Selection on fees and the UTXO set. For this he used a medium-sized set of transaction data and tested several algorithms used by wallets. One algorithm is FIFO (first in, first out), another determines priority (by value and age), and Core uses its own, more complex mechanism.
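FIFO is the simplest of these strategies and fits in a few lines. A sketch (my own, not Mark Erhardt's simulation code), assuming each unspent output carries an age rank:

```python
def select_fifo(utxos, target):
    """FIFO Coin Selection: spend the oldest outputs first.

    utxos: list of (age_rank, amount) pairs, lower age_rank = older;
    amounts in satoshis. Returns (selected amounts, change).
    """
    selected, total = [], 0
    for _, amount in sorted(utxos):          # oldest first
        selected.append(amount)
        total += amount
        if total >= target:
            return selected, total - target  # inputs and change
    raise ValueError("insufficient funds")

# 0.5 + 0.3 BTC cover a 0.6 BTC payment, leaving 0.2 BTC change.
wallet = [(1, 50_000_000), (2, 30_000_000), (3, 40_000_000)]
print(select_fifo(wallet, 60_000_000))
```

FIFO naturally sweeps up old outputs, which hints at why it kept the UTXO set small in the simulation.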
In principle, the goals of reducing fees and reducing the UTXO set are at odds: if you use more inputs, you shrink the UTXO set but increase the fees. But that doesn't mean Coin Selection is arbitrary. Mark Erhardt showed how different the impacts of the various Coin Selection strategies are. While FIFO and Core both kept the UTXO set relatively small, the priority-based algorithm inflated it by partitioning the big (higher-priority) outputs into smaller pieces. In detail there were further differences between FIFO and Core (and Luke-Jr's variation of Core).
Mark Erhardt has not finished his analysis yet. He says he needs more and better transaction data, wants to try out more models and integrate the factor of privacy. But even as it stands, his analysis already illustrates the importance of proper Coin Selection for scaling Bitcoin sustainably.