Schnorr Taproot Tapscript BIPs

Speakers: Anthony Towns

Date: December 27, 2019

Transcript By: Michael Folkson

Tags: Taproot, Schnorr signatures, Tapscript

Category: Podcast

Media: https://stephanlivera.com/episode/137/

Stephan Livera: AJ, welcome to the show.

AJ Towns: Howdy.

Stephan Livera: Thanks for joining me AJ. I know you’re doing a lot of really cool work with the Schnorr and Taproot proposal and the review club. But let’s start with a bit of your background and I know you’re working at Xapo as well.

AJ Towns: Yeah so I’m working at Xapo on Bitcoin Core dev, which means that I get to do open source work full time and get paid for it and it’s pretty freaking awesome. I got into Lightning from being interested in micropayments embarrassingly long ago at this point, like the mid 2000s. I’ve always wanted some sort of micropayment thing to be workable so that I can charge for emails, so that for all the 10,000 spams I get a month, I charge a cent for those and I make a tiny profit or nobody spams me anymore. It’s kind of win-win, so that’s my end goal with Bitcoin and Lightning. I’d followed Rusty since his Linux days and he started doing pettycoin, which was a Bitcoin clone, and I got into Lightning when it came out essentially. That’s the first realistic, technically practical thing for micropayments that has ever happened. Then I fell down the rabbit hole of: to get Lightning to work, you have to get Bitcoin to work. So does Bitcoin work? How do we make sure it keeps actually growing and making sense? I’m now working on Bitcoin so that Lightning works. Once we’ve got Bitcoin set up so that Lightning works, I’ll be trying to get Lightning to do micropayments stuff, and then I’ll get my email working, and then I’ll be done. That’s my theory.

Stephan Livera: So all of this is just to stop the spam, right?

AJ Towns: Yup. The original goal of Hashcash.

Stephan Livera: The real Hashcash. Lightning is the real Hashcash. But I’ve noticed on the Bitcoin-dev mailing list and also on the Lightning-dev mailing list, you’re quite active there as well, coming out with different ideas on constructions where, once we get Schnorr signatures and Taproot, that can help Lightning here, here and here. So that’ll be cool to get into as well. But maybe we could just start with a bit more of a high level understanding: what are Schnorr signatures and what’s their application to Bitcoin?

AJ Towns: So the way Bitcoin works is you have transactions, and the way you ensure that only you can spend your coins is you have a digital signature on them that only you can produce. The digital signature scheme that Satoshi chose, as far as I’m concerned, was elliptic curves because they’re short and efficient. It was ECDSA because that’s what OpenSSL implemented. If you just use what OpenSSL implements you don’t have to implement something yourself. It’s a pretty broadly accepted standard. That’s fine if you’re just doing signatures and you want them to work. But the benefit of Schnorr as far as I’m concerned is that you can do maths with it. With ECDSA, to avoid the Schnorr patent, they came up with this method that is kind of equivalent, but the maths is really complicated: you can do a signature with it but you can’t do anything else. With Schnorr you can essentially add signatures together. You can kind of multiply them by things. If you add two signatures together you don’t quite get a signature out, but you get something pretty close. With that ability we can have two partial signatures that you add together to get a real signature. That starts making things like multisig not require as much space. Instead of having two signatures to do multisig, 2-of-2, you just have the one signature that’s the addition of the two partial signatures. You only have to have the one signature on the blockchain. It’s half the size. Xapo does 3-of-5 multisig for its vault or 2-of-3 for regular stuff. So if we can get Schnorr implemented and get that working, that roughly halves the fees that Xapo will be paying. We can do more complicated stuff than that too. In theory, if we can make it all work, it’s pretty awesome.
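
To make the algebra concrete, here is a toy Python sketch of two partial signatures adding up to one valid signature. Plain integers modulo n stand in for curve points, so `pub`, `H`, `n` and `G` are illustrative stand-ins rather than a real library; the real scheme works over secp256k1, but the arithmetic is the same. Note that naively adding keys like this is exactly what the MuSig discussion later in the episode fixes:

```python
import hashlib, secrets

n = 2**255 - 19          # toy prime modulus
G = 9                    # toy "generator": pub(x) = x*G mod n

def pub(x):
    return x * G % n

def H(*xs):
    return int.from_bytes(hashlib.sha256(repr(xs).encode()).digest(), 'big') % n

x1, x2 = secrets.randbelow(n), secrets.randbelow(n)
P = (pub(x1) + pub(x2)) % n            # aggregate pubkey: the only key on chain

k1, k2 = secrets.randbelow(n), secrets.randbelow(n)
R = (pub(k1) + pub(k2)) % n            # aggregate nonce
e = H(R, P, "message")                 # both parties sign the same challenge

s1 = (k1 + e * x1) % n                 # partial signature from signer 1
s2 = (k2 + e * x2) % n                 # partial signature from signer 2
s = (s1 + s2) % n                      # one combined signature, half the space

assert s * G % n == (R + e * P) % n    # verifies like an ordinary signature
```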

Stephan Livera: The other component I’ve heard is that leaving the patent aside, there was this idea of having it be battle tested and out in the wild and so on. Do you put much stock into that or is that overrated?

AJ Towns: I wouldn’t put it quite that way. It’s more that because of the patent, Schnorr wasn’t really used in industry. So you have OpenSSL, which is a library that’s written and you can just plug it in and use it. There’s already a standard out there. Different libraries implement the standard. People know how to interoperate with it. Whereas with Schnorr, there are papers on it and that’s where it stops, because you had to pay money to go any further than that. Now the patent has expired and that’s fine, but ECDSA still works for everyone. So everyone keeps using it because it’s easy and no one’s written up a standard for Schnorr. So if you want to use Schnorr you’ve got to write up a standard for it. You’ve got to implement the libraries for it, then you’ve got to test the libraries. You’ve got to make sure you don’t have any weird edge cases. And that’s essentially what we’re going through now.

Stephan Livera: That also brings to mind another common saying I’ve heard, this idea of “don’t roll your own crypto”. That’s meant to be a really bad thing because there’s so many ways that you can make a mistake. What’s your reflection on that idea as it applies to what we’re doing now?

AJ Towns: That’s totally true. There are so many ways you can make a mistake. That’s why, first of all, you want to be an expert before you do it. But then even if you are an expert, you’ve got to realize there are so many ways that even as an expert you can make mistakes. Get as much review on it as you can, as much testing, leave yourself time to think about it, get academics, get industry, get everyone to look over it, and don’t rush into things. So one of the things that Blockstream came out with as part of the Schnorr work is a thing called MuSig, which is a secure way of adding two pubkeys together to get multisig in a single-pubkey, single-signature case. The original paper came out, and then six months, maybe a year later, some academic cryptographers who’d written the papers that the Blockstream paper was based off of came out with a new paper saying that the MuSig proof was broken and that it didn’t work as specified. The proof was wrong. It was exploitable theoretically, and they came out with a fix for it as well, which was adding an extra interactive round, making it a bit more complicated to set up. Since then I think we’ve come up with a fairly practical way of attacking the original design of it. These things are really complicated. There are lots of edge cases and you need to be careful. Slow and steady wins the race sort of thing.

Stephan Livera: On that idea part of the building the case for a change is to understand what are the current assumptions that Bitcoin relies on and under this proposed change, are there any changes to that? I’ve heard it said that currently Bitcoin relies on what’s known as the discrete logarithm problem. I’ve also heard the argument put forward that the Schnorr scheme requires less assumptions. Can you help explain that for us?

AJ Towns: I’m not a cryptographer so I can’t explain it really accurately but the simple explanation is that the Schnorr signature idea is what came first. It’s fairly simple. You do a hash of something, you add it with a random number. Going from a private key in Bitcoin to the public key is point multiplication. Going from the public key back to the private key is taking the discrete log. So that’s what the discrete log problem is. It says that you can’t get the private key just knowing the public key. The Schnorr signature is just upgrading that same difficulty to let you do a signature so that you can say “This public key definitely signed this thing but there’s no way of getting back to the private key from it.” There’s no way someone else with just the public key is able to produce a signature without having the private key. To get ECDSA, what we currently use in Bitcoin, they basically tweaked the formula for Schnorr and made it a whole bunch more complicated and a bit harder to analyze. There’s no proof that the ECDSA thing is secure as long as discrete log is secure. There might be some other assumption in there. There is a proof that uses some extra assumptions that ECDSA is secure but proving that Schnorr is secure uses fewer assumptions. It’s simpler to understand on the academic side and it should be at least as secure, possibly more secure.
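
As a rough sketch of what that means in practice, here is a simplified Schnorr sign/verify over secp256k1 in Python. It is not the full BIP 340 scheme (tagged hashes, even-y rules and the exact encodings are all skipped); it just shows the s = k + e·x relation being described:

```python
import hashlib, secrets

# secp256k1 parameters
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                                  # P + (-P) = infinity
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def point_mul(k, P):
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def challenge(R, P, m):
    data = R[0].to_bytes(32, 'big') + P[0].to_bytes(32, 'big') + m
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % n

def sign(x, m):
    P = point_mul(x, G)
    k = 1 + secrets.randbelow(n - 1)      # nonce: must never repeat or leak
    R = point_mul(k, G)
    e = challenge(R, P, m)
    return R, (k + e * x) % n             # s = k + e*x

def verify(P, m, sig):
    R, s = sig
    e = challenge(R, P, m)
    # s*G == R + e*P holds iff s = k + e*x; x itself stays hidden behind
    # the discrete log problem.
    return point_mul(s, G) == point_add(R, point_mul(e, P))

x = 1 + secrets.randbelow(n - 1)
P = point_mul(x, G)
assert verify(P, b"hello", sign(x, b"hello"))
```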

Stephan Livera: Awesome. Can you help us understand the relation between Taproot and Tapscript? What are they at a high level?

AJ Towns: Greg Maxwell came up with the idea of Taproot. You’ve got the existing address types at the moment. You’ve got pay-to-pubkey, where you just pay to a pubkey and then you can sign for that with your hardware wallet or whatever. Or you’ve got pay-to-script-hash. Then your script can be a complicated script that says I need this pubkey and this pubkey and I need this timelock and I need the reveal of this hash preimage. It can only happen under a full moon or whatever. Any arbitrarily complicated thing you can express in Script, which isn’t that complicated but could potentially be. Taproot takes the idea that if you do some of this fancy point addition stuff that Schnorr allows you to do, you can combine the two. You either just pay with a key, so you’ve got a single address that works as a pay-to-pubkey, or you’ve got a way of pulling a Script that was originally hidden in the pubkey out and then paying with a Script. So you can create an address that gives you a choice of either pay-to-pubkey or pay-to-script-hash, and you only have to decide which of those you use when you want to spend it.
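
A minimal sketch of that combination, again with a toy integers-mod-n group standing in for secp256k1 points (the real construction uses tagged hashes and handles point parity):

```python
import hashlib

n, G = 2**255 - 19, 9    # toy group: integers mod n stand in for curve points

def pub(x):
    return x * G % n

def H(*parts):
    return int.from_bytes(hashlib.sha256(b''.join(parts)).digest(), 'big') % n

x = 424242                           # internal private key
P = pub(x)                           # internal public key
script_root = hashlib.sha256(b"merkle root of the alternative scripts").digest()

t = H(P.to_bytes(32, 'big'), script_root)   # the tweak commits P to the scripts
Q = (P + pub(t)) % n                 # Q is the only thing that goes on chain

# Key path: spend by signing with the tweaked private key x + t.
assert pub((x + t) % n) == Q

# Script path: reveal P, one script and a Merkle proof; verifiers recompute t
# and check Q == P + t*G, proving that script was committed to all along.
```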

Stephan Livera: So maybe we could go back a little bit on the address types and the Script types. As I understand, in the early days, it literally was pay-to-pubkey. You were paying to a pubkey. Then the newer type was the pay-to-pubkey-hash, which is hiding the pubkey and you’re paying to the hash of the pubkey. Then there was also P2SH which is paying to the Script hash not the Script itself. So that’s a hash of the Script. As I understand, we would now have a pay-to-taproot output. Can you help me understand that part of it?

AJ Towns: Originally there was pay-to-script. Every Bitcoin UTXO in the blockchain would have a Script in it which gives the rules that allow the coin to be spent. It could be a multisig Script, or it could be a pubkey CHECKSIG Script, which is pay-to-pubkey: you’ve got the actual pubkey in there and you’ve got the CHECKSIG opcode in there. Pay-to-pubkey-hash is just a Script that says “Here’s this hash, I want you to give me a pubkey when you spend it and I’m going to hash that and check it matches the hash that I provided in the UTXO. Then I want a signature and you’ve got to check that the signature and the pubkey match the transaction that you’re using.” On top of that we added a thing called pay-to-script-hash which is a magic template. It’s not a Script that would previously have been verified as a Script. It’s just a hash and then a check. If you hadn’t done the soft fork upgrade for pay-to-script-hash, all you would have had to provide is something that matches the hash. You wouldn’t have had to provide any signatures, anything that actually proves you’re allowed to spend it, just the Script itself. The soft fork that implemented pay-to-script-hash also added the extra rules that say you’ve got to satisfy the Script that you provide. If the Script says CHECKMULTISIG then you’ve got to provide the pubkeys and the signatures. SegWit built on top of that along the same idea. It says that you can provide either a 20 byte pubkey hash or you can provide a 32 byte script hash according to the SegWit template. Then the appropriate rules for that will be executed, so that you have to provide the pubkey and the signature, or you have to provide the Script and whatever the Script requires. And so pay-to-taproot will be a similar thing. It goes from version zero with SegWit to version one with Taproot. It will require either a signature if it’s taking the key path, which is just pay-to-pubkey by Taproot, or it’ll take a point reveal, a Script, and whatever stuff the Script needs for the script path.
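
For reference, the output templates being walked through here, in assembly-style shorthand:

```text
P2PK:   <pubkey> OP_CHECKSIG
P2PKH:  OP_DUP OP_HASH160 <20-byte pubkey hash> OP_EQUALVERIFY OP_CHECKSIG
P2SH:   OP_HASH160 <20-byte script hash> OP_EQUAL
P2WPKH: OP_0 <20-byte pubkey hash>        (SegWit v0, key spend)
P2WSH:  OP_0 <32-byte script hash>        (SegWit v0, script spend)
P2TR:   OP_1 <32-byte tweaked pubkey>     (SegWit v1, Taproot: key or script path)
```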

Stephan Livera: What you were referring to there with the SegWit versions, those are the addresses. We’ve seen those written as P2WPKH or P2WSH? They’re the SegWit versions of pay-to-pubkey-hash and pay-to-script-hash. That was v0, right? In the programming world, some languages and things start with zero. That was v0 and the new version is going to be v1, and that’s what we’re talking about now with Taproot and Tapscript.

AJ Towns: Yes.

Stephan Livera: So let’s talk about some of the other ideas around Schnorr that I’ve heard of. Hopefully the listeners might have heard of these and maybe they want to understand what those are. So you mentioned MuSig earlier. That’s one particular way in which you can add signatures together if I understood you correctly?

AJ Towns: If you want to do Schnorr multisig, you have say three public keys, you add them together and then you do three separate signatures of the transaction, one with each key. Then you add the signatures together. The sum of the signatures is basically a signature for the sum of the public keys. But the problem with that is: what happens if you don’t entirely trust the guys with the other public keys, and one of them sets up their public key to be not their real public key but their public key plus the negation of the other two public keys? You go A plus B plus C minus A minus B and that just adds up to C. They could sign for the address by themselves. MuSig is a way of applying hash-based coefficients to each of those public keys in a particular way, so that you multiply each of those public keys by something and add them together, and there’s no way you can do that tricky math to exploit the other two guys.
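
A toy demonstration of that key-cancellation trick and the hash-coefficient defense (illustrative integers-mod-n group as before; the real MuSig coefficient hash is defined differently):

```python
import hashlib

n, G = 2**255 - 19, 9    # toy group: the linear algebra matches secp256k1

def pub(x):
    return x * G % n

a, b = 1234, 5678        # honest participants' keys
A, B = pub(a), pub(b)

c = 9999                 # attacker's real key
C_rogue = (pub(c) - A - B) % n        # announce C' = C - A - B instead

naive_aggregate = (A + B + C_rogue) % n
assert naive_aggregate == pub(c)      # attacker alone can sign for the "multisig"

# MuSig's fix, sketched: weight every key by a hash of the whole key set, so
# an attacker cannot pick a key that cancels the others out.
def coeff(keys, P):
    h = hashlib.sha256(repr((sorted(keys), P)).encode()).digest()
    return int.from_bytes(h, 'big') % n

keys = [A, B, C_rogue]
musig_aggregate = sum(coeff(keys, P) * P for P in keys) % n
# C_rogue's contribution is now scrambled by its coefficient, so the
# cancellation A + B + (C - A - B) no longer happens.
```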

Stephan Livera: It is a defense against a malicious party at the setup point.

AJ Towns: Yeah.

Stephan Livera: What’s the relation to Schnorr?

AJ Towns: The relationship is fundamental in that Schnorr is the thing that lets you do the math. It lets you add up the signatures usefully to get something. There’s some really complicated two-party ECDSA stuff that lets you kind of do the same thing, but it uses much more difficult cryptography. You’ve got zero knowledge stuff passing back and forth between you. You kind of get the same result but with a lot more complexity. The advantage of Schnorr is that if you can cope with doing elliptic curve mathematics, which is already a relatively high bar, then adding Schnorr signatures and so on is really easy.

Stephan Livera: I’ve seen this benefit called batch validation. So what is batch validation?

AJ Towns: So again, because you can do maths with it: if you’ve got a whole bunch of public keys and a whole bunch of signatures, you can do some hashing to make sure you can’t cheat easily. You add all the public keys up with some random factors and add all the signatures up with the same factors. The only way the two things add up to a single valid check at the end is if they were all valid signatures independently at the start. It’s still a fair bit of work. It’s not “I had a hundred signatures and I’m only going to do 1% of the work, so I’ve only got to verify one signature.” But you still get a three or four times increase in performance. So instead of taking 30 seconds, you take 10 seconds to do like 10,000 signatures. That’s a little bit of a benefit for relaying transactions in the mempool. But if you’re doing IBD, the initial block download to get your full node set up, you’ve got perhaps 3000 signatures in a block. Then you’re cutting down the signature verification time for each block by a factor of two or so.
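
A sketch of the batch equation with the same toy group (the real saving comes from doing one large multi-scalar multiplication over secp256k1; in this stand-in group everything is trivially cheap):

```python
import hashlib, secrets

n, G = 2**255 - 19, 9    # toy group as before

def pub(x):
    return x * G % n

def H(*xs):
    return int.from_bytes(hashlib.sha256(repr(xs).encode()).digest(), 'big') % n

def sign(x, m):
    k = secrets.randbelow(n)
    R = pub(k)
    return R, (k + H(R, pub(x), m) * x) % n

# A batch of independent (pubkey, message, signature) triples.
batch = []
for m in range(100):
    x = secrets.randbelow(n)
    batch.append((pub(x), m, sign(x, m)))

# Rather than checking s*G == R + e*P a hundred times, pick random weights
# a_i and check one combined equation; it only balances if every signature
# was valid on its own.
a = [secrets.randbelow(n) for _ in batch]
lhs = (sum(ai * s for ai, (P, m, (R, s)) in zip(a, batch)) % n) * G % n
rhs = sum(ai * (R + H(R, P, m) * P) for ai, (P, m, (R, s)) in zip(a, batch)) % n
assert lhs == rhs
```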

Stephan Livera: Some pretty good performance benefits there. I’ve heard also of this idea of key aggregation. What’s that?

AJ Towns: So pubkey aggregation is where you’re adding a bunch of public keys up together to get a single public key. That’s what goes on chain but it’s representing an n-of-n multisig.

Stephan Livera: On the interactivity that’s required for that could you help us understand that? Is that people having to pass information back and forward for a few rounds in some way? That could be Bluetooth or over an email or through some way that the software communicates with other parties. Can you talk us through what that means, the interactivity part of it?

AJ Towns: Current multisig is not interactive. You set up your address and you can do that just by knowing all the public keys that you need. When you want to spend from the address you figure out the transaction you want to spend, maybe do a PSBT or whatever to set this up, and you go to one hardware wallet and get a signature from that. That’s done. You go to another hardware wallet, you get a signature from that. That’s done. However many hardware wallets you’ve got to get signatures from, you go to each of them in turn, combine all the signatures at the end, publish it and you’re done. The challenge with Schnorr multisig is that you need to be interactive. The original protocol they had, this is the broken one, was: you go to one hardware wallet and get the nonce from it, which is the first half of the signature. Go to the next one and you get the nonce from that. Then the next one, get the nonce from it, collect all the nonces, and then you tell all the hardware wallets what all the nonces from the other hardware wallets were. Then they calculate the signatures and you combine the signatures. So you’ve got two passes through all the wallets. Then they discovered that was broken. Now, first of all, you need the hash of the nonce from each one. Then you need the nonce from each one, and then you can get the signature from each one. You’ve got this three-phase interactive thing. Xapo has these fancy vaults in places all over the world that our 3-of-5 multisigs are stored in. If we want to go to those vaults, we need to take the transaction down, go through all the security measures and then get a signature back. This is fine for the non-interactive stuff. But here we go down once, we get a hash of a nonce, then we take that back up and transmit it to all of them. Then we go down again, we pass in the hashes of the nonces from all the other vaults and we get a nonce back. Then we go down and do it again. That’s kind of painful. So that’s the drawback of interactivity and it’s kind of a pain for hardware wallets. But if you can plug them all into the same PC at the same time, then maybe that’s okay. If you’ve got deep cold storage, it’s pretty hard. That’s the disadvantage of MuSig. It’s much cheaper, it’s much more private because you can’t tell what n-of-n was being used. But you’ve got to work through this burden of interactivity.
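
A compact sketch of that three-round flow, toy group again and with the MuSig key coefficients omitted; the structure here is illustrative, not a wallet API:

```python
import hashlib, secrets

n, G = 2**255 - 19, 9

def pub(x):
    return x * G % n

def H(*xs):
    return int.from_bytes(hashlib.sha256(repr(xs).encode()).digest(), 'big') % n

wallets = [secrets.randbelow(n) for _ in range(3)]   # one secret key per signer
P = sum(map(pub, wallets)) % n

# Round 1: each wallet commits to its nonce with a hash.
nonces = [secrets.randbelow(n) for _ in wallets]
commitments = [H(pub(k)) for k in nonces]

# Round 2: only once every commitment is collected do the wallets reveal
# their nonces, and everyone checks the reveals against the commitments.
assert all(H(pub(k)) == c for k, c in zip(nonces, commitments))
R = sum(map(pub, nonces)) % n

# Round 3: each wallet returns its partial signature; they are summed.
e = H(R, P, "tx")
s = sum(k + e * x for k, x in zip(nonces, wallets)) % n
assert s * G % n == (R + e * P) % n
```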

Stephan Livera: Yup. That’s not just for the set up, but also for the signing.

AJ Towns: That’s for every signature. With the MuSig setup you still just need the pubkeys from everyone, so the MuSig setup is pretty easy. If you’re doing the threshold MuSig setup where you’re doing k-of-n instead of n-of-n, so 3-of-5 instead of 5-of-5, then you need interactivity at setup as well. You’ve also got to store a bunch more data. The threshold MuSig stuff is still kind of research, a work in progress. We’ve got something that works on paper, but going from that to a workable API that people can actually use is pretty tough.

Stephan Livera: It might be a good time now to bring up threshold signatures. What is a threshold signature?

AJ Towns: A threshold signature is just 3-of-5 instead of 5-of-5. So in the academic world the multisig is everybody has to sign it and then a threshold multisig is some number or more of them have to sign it.

Stephan Livera: With Schnorr and Taproot is there any implication there or is that basically the same part you were explaining earlier?

AJ Towns: So with MuSig you end up adding all the public keys and then adding the signatures. For thresholds you can do a thing called Lagrange interpolation, which you might have met in first year mathematics at university or maybe in high school. You’ve got a polynomial: for 3-of-5 you make a polynomial of degree two, hand out five points on it, and you can solve for it with any three of those points. The math is well known. It’s basically the same stuff behind Shamir’s Secret Sharing. The benefit with Schnorr is that you can construct the signature by adding the signatures of the individual three keys. You don’t ever have to reveal the private keys, so you can keep using them afterwards without having to trust anyone with them. But the disadvantage is it’s really complicated. Just writing out the math for it is pretty complicated. There’s extra storage and we don’t have a good API for it yet.
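
The share-recombination math being gestured at, sketched in Python (this is just the Shamir-style piece; a real threshold-Schnorr protocol wraps much more around it):

```python
n = 2**255 - 19          # toy prime modulus

# The shared secret x is the constant term of a random polynomial of degree
# k-1 for a k-of-n threshold, so degree 2 for 3-of-5.
x = 123456789
poly = [x, 111, 222]     # f(z) = x + 111*z + 222*z^2

def f(z):
    return sum(c * z**i for i, c in enumerate(poly)) % n

shares = {i: f(i) for i in range(1, 6)}     # five shares, one per signer

def lagrange_coeff(i, subset):
    # Weight chosen so the weighted shares sum to f(0) = x.
    num = den = 1
    for j in subset:
        if j != i:
            num = num * -j % n
            den = den * (i - j) % n
    return num * pow(den, -1, n) % n

subset = [1, 3, 5]                          # any three signers suffice
recovered = sum(lagrange_coeff(i, subset) * shares[i] for i in subset) % n
assert recovered == x

# In threshold Schnorr, each signer applies its coefficient to its partial
# signature instead of to its share, so the key is never reconstructed.
```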

Stephan Livera: What are adaptor signatures?

AJ Towns: Adaptor signatures are a form of what we call a scriptless script, which is Andy Poelstra’s term. Taking a step back, think of the Scripts that Lightning uses. Lightning’s design is that you want to send money to someone, but only if they give you the payment receipt. The payment receipt is the preimage of a SHA256 hash. You give some value that hashes to something you were given on the Lightning invoice. The script for that is more complicated than this, but it’s basically: give me something, I’ll SHA256 it, then I’ll check that it matches this thing that I was presented in the first place, and I’ll check a signature. That’s a Script. It’s got some opcodes in it. If you see one of these Scripts on the blockchain then you know that it was probably a Lightning channel. The Bitcoin Optech guys or BitMEX or someone has a webpage that monitors for every Lightning channel that was closed via these penalty transactions. You can track every single person on the Lightning Network that has closed channels in this way. That’s not very good for privacy. The idea of a scriptless script is that you get the functionality of a Script, at least in a limited way where you can say “I need this preimage revealed and I need a signature”, but you do it in a way that there’s only a signature on the blockchain. No one can actually tell what you were doing or get a copy of the preimage apart from the two people that were doing the initial negotiation. The technology we use for that is an adaptor signature. The signature is composed of two values, an R and an s. The R is called a nonce. You’ve been given a partial signature in the first place, and the only way to complete it into the real signature is by adding the payment secret to the s part of the signature. It works better if you can write it down on paper and explain the math, but that’s the idea. The only way the final signature can be constructed is by adding the secret value that you want revealed, so you subtract the s value on the blockchain from the s value you had as a partial thing, and the difference is the secret you needed. That’s the concept of adaptor signatures.
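
A toy sketch of that subtraction trick (stand-in integers-mod-n group as before; real adaptor signatures live on secp256k1):

```python
import hashlib, secrets

n, G = 2**255 - 19, 9

def pub(x):
    return x * G % n

def H(*xs):
    return int.from_bytes(hashlib.sha256(repr(xs).encode()).digest(), 'big') % n

x = secrets.randbelow(n)        # signer's private key
t = secrets.randbelow(n)        # the payment secret; T = t*G is its point
T = pub(t)

k = secrets.randbelow(n)
R = (pub(k) + T) % n            # the nonce commits to the secret point T
e = H(R, pub(x), "tx")

s_adaptor = (k + e * x) % n     # the adaptor: NOT yet a valid signature
s = (s_adaptor + t) % n         # adding t completes it; this goes on chain
assert s * G % n == (R + e * pub(x)) % n

# Whoever holds the adaptor learns the secret from the published signature:
assert (s - s_adaptor) % n == t
```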

Stephan Livera: While we’re on the topic of scripting, my understanding is there are different opcodes that go into the scripting. For example, it might be OP_CHECKMULTISIG, and the Script will be written out with an IF statement. The way it resolves is there is a stack and you have to resolve them sort of one at a time until you’ve resolved the whole thing. Now the coins are able to be spent. Could you just give us an overview of how that works? The idea of this stack and the idea of opcodes.

AJ Towns: It’s a Forth-like language which works pretty much as you described. You provide a Script and a witness stack when you’re spending SegWit outputs. The witness stack is like a stack of face-up cards. You can see the one on top generally, and you can take it off, or you can put something else on top, or you can cheat and look underneath one. You can pick up the top three and move the fourth one to the top. The Script is a set of opcodes. It is a set of instructions that say “Do this, do that, do something else.” An instruction says “Take the thing off the top; if it’s a zero then we’ll skip everything until we see an ELSE or an ENDIF. If it’s a one, we’ll pop the one and then we’ll do the IF part. Once we get to an ELSE, we’ll stop doing stuff. Once we get to an ENDIF, then it’s all good again.” There’s a bunch of opcodes which are like the Forth language. There are IFs, and there are pops and pushes to manipulate the stack. There are basic math operations, so you can add two numbers together and compare them. There are some crypto operations, so you can take a thing on the stack and hash it: take the SHA256 hash of it, or the RIPEMD160 hash of it, or the combination. What else can you do? There’s an altstack, so you can take something from your regular stack of cards and move it to a different stack of cards, which you can then build up and move back. That’s about it. It’s not a very exciting set of operations. These are just instructions that are defined in Script. So we can do a soft fork and add operations, or we can do a soft fork and disable old ones. But the main one that people use is CHECKSIG: here’s a public key, then I want you to take something off the stack and check that it’s a valid signature for the pubkey over this particular transaction, according to whatever restrictions we’ve got in the sighash. You might say ANYONECANPAY or you might say SINGLE, and that’ll restrict which parts of the transaction the signature has to cover. In current Bitcoin we’ve got a thing called CHECKMULTISIG which pops, say, five public keys off the stack and then three signatures, and then checks the three signatures are valid for three of the five public keys. That’s your CHECKMULTISIG opcode. Because of the batch validation stuff we’re changing that to separate opcodes, so that instead of one CHECKMULTISIG opcode you’ll have five CHECKSIGADD opcodes. You pop a public key and a signature off the stack. If the signature is null then we won’t do anything. If the signature is valid, we’ll add one to a counter. Then we check that the counter equals three afterwards to make sure we’ve got three of the five signatures valid. If we add opcodes to that we can do much more complicated stuff. The two most recent opcodes that were added were CHECKLOCKTIMEVERIFY and CHECKSEQUENCEVERIFY, which let you do the absolute and relative timelocks that were needed for Lightning.
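
A sketch of what a Tapscript 3-of-5 looks like with the new opcode, in assembly-style shorthand:

```text
<pubkey1> OP_CHECKSIG
<pubkey2> OP_CHECKSIGADD
<pubkey3> OP_CHECKSIGADD
<pubkey4> OP_CHECKSIGADD
<pubkey5> OP_CHECKSIGADD
OP_3 OP_NUMEQUAL
```

The witness supplies, for each key, either a signature or an empty element: a valid signature adds one to the running count, an empty one adds zero, and the final check requires exactly three. The old-style equivalent would be OP_3 <pk1>…<pk5> OP_5 OP_CHECKMULTISIG.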

Stephan Livera: In terms of opcode changes that are coming as a result of this proposal we’re going to take away CHECKMULTISIG. You’re just going to be doing OP_CHECKSIGADD for however many signatures you have in that multisig. And there is also OP_SUCCESS, what’s that?

AJ Towns: So the way that Script was originally designed, there were about 10 reserved opcodes called OP_NOPs, which if you had them in your Script wouldn’t do anything whatsoever; you’d just continue on fine. There’d be no action. So it was possible to soft fork behavior on top of those, like CHECKSEQUENCEVERIFY and CHECKLOCKTIMEVERIFY, as long as they didn’t modify the stack, and the only thing they could do was abort the Script if something wasn’t valid. So if you had a CHECKLOCKTIMEVERIFY and the locktime wasn’t what it was supposed to be, the Script would fail and the transaction wouldn’t be valid. But you couldn’t do more complicated stuff like push the locktime to the stack and then add it to the relative locktime to make sure the sum of the two was some value. That limits the upgrades you can do. Because we’re doing a completely new version of the witness stack, it lets us make arbitrary changes to Script. What we’ve decided to do is keep all the normal opcodes doing what they’re currently doing, apart from CHECKMULTISIG. But instead of just having OP_NOPs for upgrades in future, we’ve got all the unused opcodes as what we’re calling OP_SUCCESS. As soon as you see an OP_SUCCESS in the Script, that’s just an automatic success at the moment. What that means is that any behavior we define for those opcodes later can only make a Script fail where it would previously have been an automatic success, so it’s a soft fork. That means we can soft fork in pretty much any behavior we want for future opcodes. It’s a much more flexible approach.

Stephan Livera: I wanted to talk about the benefits of this proposal as well. Obviously we’ve spoken about what might be more difficult with interactivity requirements but maybe we could touch on some of the benefits. I understand one of the benefits is referred to as policy privacy. It’s this idea that you might have multiple possible spending paths and you’re not necessarily revealing what those are. Could you tell us about that and what some of the other benefits are?

AJ Towns: As I was saying at the start, you’ve got this alternative between just having pay-to-pubkey or having a Script hidden inside there, and you can choose which one of those when you’re spending. What I didn’t say was that we actually have a Merkle tree of Scripts. You can have a huge number of alternative Scripts, up to a completely ridiculous number that you can’t actually produce on real machines, any of which you could spend through. So you could have a script that says “If it has been five years and I haven’t spent this yet then I can use my old secret key. Otherwise I have to use a 3-of-5 multisig.” Or you can have something that’s really complicated. One of the things that people are considering for Lightning is that you might want to be able to do a child-pays-for-parent to get the Lightning transactions out faster. Maybe that means you need to have a zero satoshi output that either of the two parties in the channel can spend to do a child-pays-for-parent. If they don’t need to do a child-pays-for-parent then you want the option to have a miner claim the UTXO so it doesn’t pollute the UTXO set forever. With a Taproot setup you can have all those different options at essentially no expense if you don’t use them. If you’ve got all these options and you don’t ever actually use them, then no one can see them. There’s no reveal of them onchain if you don’t use them. That gives you a little bit of extra privacy too. It’s limited privacy in that you will spend it eventually, and however you do spend it, that path will be obvious. It doesn’t make the amounts or the addresses particularly secret. You can still do all the same chain analysis that you can do now. But it’s an incremental step to better privacy. I think it’s a pretty good step towards more efficient, complicated scripts. I know I’d like to be able to trust this hardware wallet most of the time, but I’d also like a backup so that I can use a different hardware wallet if this one breaks or the software stops working or whatever else. Taproot lets me have that without having this great long Script that I have to use all the time. That’s expensive and awkward and whatever else.
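
A sketch of the script tree idea (the real BIP 341 tree uses tagged hashes and sorts child nodes; the leaf scripts here are just placeholder strings):

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

leaves = [h(s) for s in (b"after 5 years: old secret key",
                         b"3-of-5 multisig",
                         b"zero-sat CPFP anchor, anyone can spend")]

def merkle_root(nodes):
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])        # duplicate the odd node out
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

root = merkle_root(list(leaves))
# Spending via one script reveals only that leaf plus log2(n) sibling hashes;
# the unused alternatives never appear on chain at all.
```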

Stephan Livera: There are big benefits that will come for Lightning as well if we get this. A very high level one is that when you open a Lightning channel right now it’s very obvious that you’re doing that, because it’s a 2-of-2 multisig straight on the blockchain. In a Schnorr Taproot world it starts to make them look indistinguishable: a standard Bitcoin onchain transaction versus a Lightning channel open?

AJ Towns: The big advantage for Lightning is that Lightning is always online anyway. You have two peers connected over the internet. Maybe it’s via Tor, maybe it’s a little bit slow, but there’s no problem with any of this interactivity. They’re always communicating with each other anyway. So sending these hashes of nonces back and forwards, they can always do that interactively: add up the signatures, do the MuSig stuff. All the transactions can look like “Here’s a key, here’s a signature.” Was this just a hardware wallet signing it? Was it a Lightning thing? We can’t tell, because it looks like a pay-to-pubkey. Because Lightning is a hot wallet and always online, that works great. And because of the adaptor signature stuff, it can also do a fair few of the complicated things. It gets a little bit more identifiable at that point because of the locktimes, but even if it’s not private, it’s much more efficient with the blockchain usage. If fees ever start rising again, it’s cheaper in fees and more efficient as well. For Lightning it should be super cool. The other thing about Schnorr for Lightning is that it enables the decorrelation stuff. With the hashes that we currently use for the payment secrets and the preimages, you can’t do any math on them, but you can do math with elliptic curve points. They kind of act the same as hashes thanks to the discrete log problem, where you can’t go from a public key to a private key but you can go from a private key to a public key. What that means is that we can convert from the SHA256 preimages we’ve got to curve points instead. We can use Schnorr signatures with the adaptor signatures to do the reveals of them, so that it’s still something we can actually express and put on the blockchain. Because we can do math with them, we can change the point in a predictable way at each step. Only the two nodes either side of a hop in the whole Lightning path can tell which particular points they’re using. If you’ve got two nodes seeing completely separate points in the same path, they can’t actually tell that they’re in the same path. That starts getting much better privacy for Lightning, and there are some other pretty cool things you can do with it too. The discrete log of a point is kind of the second half of a Schnorr signature. So instead of paying for some point or some hash preimage that doesn’t really mean very much, you can actually pay for a signature which signs a statement to say “Yes, I was paid for this.” Then you’ve got a signature back as your proof of payment, rather than some number that doesn’t necessarily mean anything. In theory, you could take that signature to a court and say “Yes, he signed that” if I’ve paid and he hasn’t delivered. You’ve got actual enforcement of Lightning contracts in the real world. That’s my theory.
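
One simplified way to see the decorrelation mechanics in code (toy group again; real payment points ride on secp256k1, and in practice the blinding values are carried to the parties in the onion):

```python
import secrets

n, G = 2**255 - 19, 9

def pub(x):
    return x * G % n

t = secrets.randbelow(n)                            # payee's payment secret
blinds = [secrets.randbelow(n) for _ in range(3)]   # sender picks one per hop

# Each hop is asked for the discrete log of a different-looking point:
hop_secrets = [(t + sum(blinds[:i + 1])) % n for i in range(3)]
hop_points = [pub(s) for s in hop_secrets]

# The payee (told the total blinding) claims the final hop, revealing its
# secret; each hop then peels off its own blind to claim from upstream.
revealed = hop_secrets[-1]
for i in (2, 1, 0):
    assert pub(revealed) == hop_points[i]
    revealed = (revealed - blinds[i]) % n
assert revealed == t
```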

Stephan Livera: Let me summarize again. Currently the Lightning Network uses HTLCs, hashed timelock contracts. In the Schnorr Taproot world we could move to PTLCs, point timelock contracts. Currently there’s this thing called payment correlation. What we want is payment decorrelation, meaning it adds a bit more privacy to the way we’re routing our payments across the Lightning Network. Correct?

AJ Towns: Yes.

Stephan Livera: I’m thinking back to my earlier interview with Rusty. We were talking about atomic multipath payment or multi-part payment. One of the sticking points or debating points there was around how can we retain proof of payment. As I’m understanding from you it’s more possible to retain proof of payment in a multi-part payment scheme using this?

AJ Towns: The multi-part payment stuff that they have at the moment, base AMP, retains proof of payment, but because it’s the same hash that goes through every step and every path of the multiple paths, every single one of those steps can know that it’s the same payment. If you block it at one point then maybe that blocks the entire payment, or maybe it risks you spending slightly more than you thought you were going to be spending, or something else. With the points and the decorrelation, you can have every step in every one of those paths get a different point that only comes back to revealing the same single discrete log, preimage, whatever you want to call it, once it gets to the person you’re paying. That still retains the atomicity of the payments. It’s still revealing a single secret at the end that collects all of the payments, and that propagates all the way back through. But it has the decorrelation, so I’ve got this point with this amount here and I’ve got this different point with this completely different amount here on a separate path. Are they the same payment going to the same person? Are they different payments going to different people? Are they different payments going to the same person? You can’t tell just from looking at it, whereas at the moment you can.

Stephan Livera: To summarize with this new proposal, if we get it in then it gives us both payment decorrelation and multi-part payment?

AJ Towns: Yes.

Stephan Livera: That’s the summarized way to put it whereas right now we don’t have that. So another cool thing I’ve seen is apparently Taproot helps with getting SIGHASH_NOINPUT which enables eltoo. Could you speak to that?

AJ Towns: So the original plan for NOINPUT was to use the next SegWit version, upgrade CHECKSIG and have an extra flag in the sighash that does the NOINPUT. That supports the features of eltoo. Part of the design priorities that we had for Taproot was to try to get this stuff in, because a whole bunch of people really want it and it’s been thought about for years. It kind of makes sense, but we kept getting into little questions: does it go this way, does it go that way, is this going to be a problem? We couldn’t quite get it to the point where we were as comfortable with it as we were with the rest of the proposal. What we’ve ended up with is a feature where, when the CHECKSIG operation pulls a public key from the stack, it says “If the public key is 32 bytes, like the regular public keys that we’re using for the rest of Taproot, then we’ll do a Taproot CHECKSIG and check that it’s valid and whatever else. If it’s a different size, say 33 bytes, then if we see a signature for it we’ll just assume it’s valid.” That means we can soft fork in changes later that support different rules, so it can support the NOINPUT stuff. That seems like it’ll be a really easy upgrade path. That’s kind of the reason we haven’t had NOINPUT so far: the upgrade path has been to do a new SegWit version, which means trying to get all the other features people might want to prioritize in at the same time and coordinating all of that. That’s why BIP 118 has just been sitting there not progressing. There’s been code for it, there’s a kind of plan for it. There are questions about it, which is why we haven’t finalized it yet, but it’s hard to make progress on something that requires lots of coordination. If we can get it so that the upgrade just requires this one feature to be defined, then it’s a lot easier to make progress. That’s how our unknown public key types are hopefully going to work for signature hash flags like this. We’ve got an update to the NOINPUT proposal that I’ve been working on with Christian. It is defined in terms of these features. It looks like it should work for eltoo as spec’d, but as of just the other day we’ve come up with another concern about whether we should actually be committing to the value or not. NOINPUT with no value committed will make eltoo work more like Lightning currently does, where you can have week-long channel close times with much shorter hashlock times. We’re still debating that on email and we’ll see how that goes. I think we’ll have a pretty solid plan for NOINPUT, ANYPREVOUT first quarter next year maybe. How long it takes to go from that to acceptable working code is another matter. Because eltoo makes backups of Lightning channels much more trustworthy, it brings down the risk of running Lightning channels.

Stephan Livera: Would you mind just giving us a high level of what NOINPUT is and how that helps with Lightning? The idea of forcing it to update on the latest channel state and not allowing people to broadcast an old channel state?

AJ Towns: So I’ll go the other way around. In Lightning you’ve got a bunch of channel states that you call commitments. You want to send a payment or route a payment, you update the commitment to say “As long as the hash is revealed then the payment goes this way, or as long as the timeout has passed the payment goes the other way.” You’ve got this history of commitments to the Lightning channel that goes back for however many payments you’ve made, however long you’ve had the channel open, which could at this point be years on mainnet. If you’ve had a lot of activity there’s a lot of updates, and you don’t want to have to keep all of these updates forever. The way that Lightning works at the moment is that if you publish any old update then you lose all the funds in the channel. That’s fine if your server is completely reliable. But say you do an update and then your node crashes and the update wasn’t written to disk. You reboot and you’ve got the previous update. You can’t reconnect to your channel partner and they’re not closing it. So you close it with the state you have, and then suddenly they reappear and they take all the money. It’s not a high probability risk and we’re all hashtag reckless. It doesn’t kill the whole concept, but it’s kind of annoying if you want to take it to mainstream production. The eltoo approach is that instead of saying that if you publish an old transaction you lose all the money, the idea is that if you publish any old commitment from the channel’s history, whether it be from five seconds ago or a year ago, the thing that the other guy can do is publish the newest state, or a more recent state, on top of that. They publish one from a month ago, then you publish one from two weeks ago on top of it, and then they publish one from one week ago. That ordering doesn’t really make sense though: you would always publish the most recent version you have. That’s the only thing that makes sense in a game theory way. The problem is that to do that you need to have a transaction with a signature that will be valid on top of any of these older transactions, and potentially any path through these older transactions to get there. All those old transactions, if there’s a different path to get to them, that means they’ll have a different transaction ID. At the moment every signature we do commits to the transaction ID. You’d have to have a different signature for every different state to get to the current state. That means you’ve got to have storage that just grows every time your state grows, which is completely unreasonable. With NOINPUT it doesn’t commit to the txid. It commits to whatever you want it to commit to in Script, but effectively it commits to “this was a previous state in the same channel.” As long as that’s the case then this signature will be valid, and other signatures will be valid on top of it. So the NOINPUT feature lets you do that, so that you only have to have one signature to move from any previous state to the current state. Because that’s just a signature, it’s not a secret in any particular way, it means you can give that signature out to other people. If you’re offline and they see an old state replayed, they can publish the transaction for you.

Stephan Livera: Watchtowers?

AJ Towns: Yes efficient watchtowers.

Stephan Livera: As I understand in the current model, watchtowers are not as efficient because they’ve got to save all of these states. Whereas in the eltoo world it would be cheaper computationally or hard drive space wise for them to do that.

AJ Towns: At the moment the watchtowers have got to have a reaction transaction for every previous state that’s been given out. This is good for privacy, because there’s no way for them to tell that this thing they’ve been given and that thing are for the same channel, so they can’t tell how frequently a channel is updating. But it’s terrible for disk usage and reliability. In an eltoo world they just have to keep a constant amount of stuff for the current state, which does let them see how often a channel is being updated, but that’s all they get to see.

Stephan Livera: Do you mind explaining the difference between SIGHASH_NOINPUT and ANYPREVOUT?

AJ Towns: They’re different names for the same concept. When we were working on the Taproot stuff it was suggested that NOINPUT isn’t quite accurate, because it really does sign some parts of the input. ANYPREVOUT rather than NOINPUT made sense, but they’re the same thing. I think the current NOINPUT BIP will be updated once we’ve resolved this stuff to be ANYPREVOUT, so that should resolve the slight weirdness there. It’s just a different name for the same thing.

Stephan Livera: There’s been this review club going on. Can you give us an overview of what happened with that and what was the objective of it?

AJ Towns: Anyone listening to this podcast has probably got an idea of how complicated all this stuff is. So myself and a few others were on IRC, on an Optech channel, saying “What are the next steps for this Taproot stuff? How do we make progress? What are we trying to do? Are we trying to do workshops with people or what?” We threw a few ideas around, and one of the things that John Newbery’s been doing is the Bitcoin Core pull request review club, which has been going really well. He was on the channel and we said “What about a review club sort of thing?” We put up a Google doc and wrote up what that might look like. It seemed like it made a lot of sense. Two weeks later we had 160 people signed up, which is kind of crazy, and we went from there. It was six weeks of content. We went through the Taproot BIP, the Tapscript BIP and the Schnorr BIP. Then we went back through some of the details for each of those things and did an in-depth review of all the concepts, with references to academic papers and links to the code. We broke everyone up into study groups so that in theory they had a meeting once or twice a week on IRC, Slack or on video chat. The idea was that we’d get a lot of people with a much better understanding of Taproot, so that we can have intelligent comments and criticisms. People would understand the proposal rather than just having an idea that some people are working on this, those people seem cool, they probably know what they’re doing, let’s just trust that they do and accept it. But don’t trust, verify, that’s the rule. The first couple of weeks were high level overview, mid-level overview. We didn’t go into all the horrible details. That went really well. We had huge numbers of questions, huge interest, huge attendance. It faded off a bit after that. I’m not sure if that was because it was getting close to Thanksgiving and everyone was starting to wrap up for the holidays, or if it was just too intense content. Maybe there were too many surveys and stuff getting sent round. It faded off to like 15 or 20 people who made it all the way through. I would have been happy if only 8 people had signed up in the first place, so that’s still pretty good. But 160 down to 20, that’s a lot of attrition.

Stephan Livera: It’s pretty complicated stuff. I’ll be honest, I was doing a lot of reading myself and I still struggled. So tell us a bit about how it went. Did you get much criticism? Was there much constructive feedback?

AJ Towns: There was very little criticism. One of the things we’re concerned about is that rather than having pay-to-pubkey-hash, we’re having a Taproot public key as part of the output. You’ve always got the public key there, and public keys on elliptic curves are vulnerable to quantum supremacy. That term doesn’t actually mean anything here, but if you have effective quantum computers, which are a long way away, then you have fast ways of going from a public key to the private key. If you have a hash in the way then that makes it more difficult, but not in an effective way. So we were pretty worried about that and thought about it a bunch. I know Pieter has made some Twitter posts about why it doesn’t make sense to think of the hash as protection. That’s a thing we continue to be a bit worried about as people follow the proposal, in case it becomes more of a concern than we think it should be. There are other things like that that I can’t think of off the top of my head. Mostly it has been “what you’ve decided seems to make sense, but we don’t understand this explanation, can you go into it in a bit more detail?” Or “we understand how you explained it on IRC, can you put that in the actual text rather than explaining it to everyone individually each time?” There have been a bunch more updates to the rationale to explain things better than actual changes to the spec. We had the one big change to the spec, which was going from 33 byte public keys to 32 byte public keys. Currently all your public keys start with a 02 or 03, and that doesn’t really add any useful security whatsoever. We’re dropping that and keeping the bits that do add the security. That was suggested by John Newbery mid-year sometime and required a bunch of updates. From the actual review club we found a problem with the bech32 bc1 address format. It turns out that if you’re unlucky and you have a p towards the end of your address, then you can add q’s before it and come up with different addresses that still pass the checksum. The checksum is what’s meant to catch that kind of typo: it should fail and you should get an alert about it. But if you make this particular typo you don’t get an alert, which means the address that goes onchain is different. It’s not a problem for SegWit as it stands, because the only two valid lengths are 20 bytes and 32 bytes, which differ by more than just one q. But the original idea with Taproot was to leave 33 byte witness programs unencumbered so that we can do future upgrades with them. Then if someone can convince you to accidentally type a q, miners can steal the money that goes to your address. That was raised as part of the review sessions and has been something we’ve been working towards a fix for. It’s probably going to result in an update to the bech32 specification, and some restrictions in the code to say that if it’s not a 20 byte or a 32 byte program then it’s not something you use bech32 addresses for anymore.

Stephan Livera: Pieter, as you said, commented on this idea of the supposed quantum safety of bitcoins that are stored in a UTXO that has not had its public key exposed. Pieter’s argument was that even in that world there are many Bitcoin UTXOs out there today that have had their public key exposed. I think the other argument was that even in the case where you have not yet exposed the public key for your UTXO, at the point you broadcast your transaction but before it’s been confirmed there is still the risk that somebody could theoretically try to take it and steal your bitcoin in that time.

AJ Towns: A quantum computer is already science fiction, so what makes you think the quantum computer can’t solve for a public key in the 10 minutes it takes to find the next block? They solve for the private key, then they create a new transaction that has double the fees, so it basically RBFs your transaction and they get all the money. Then their transaction is vulnerable to someone else with a quantum computer. So are they stealing the funds or are they just breaking Bitcoin? The other thing is that there are a lot more ways to reveal your public key than you might think. You don’t have to just spend from it or reuse an address. If you’re doing Lightning then you’re constantly revealing your public keys to your channel partner. If you’re using an xpub and sending that to your auditor for tax purposes, or using Electrum, which sends xpubs out to other nodes, then that reveals the public key.

Stephan Livera: In reality most people have in some way exposed a public key for many if not all of their UTXOs, unless they’ve been very careful in terms of software, setup, using their own Electrum server and so on. The argument was that it’s not really saving you, and we want the benefits of Taproot and Tapscript and so on. That was essentially the argument. Another question on the review club: in terms of participants, were there any who came from the wallet or hardware device world? Did they have any feedback on how it might impact their software or their hardware or their product?

AJ Towns: I don’t think we got super in-depth feedback like that. Everyone was pretty much at the stage of learning how this works, rather than actually getting to the point of adopting it into their own projects. I haven’t got the numbers, but we got something like 13 responses to “what was your final analysis of this?” They were all positive: that we should move to the next step and start getting code and patches out. As we move to code then there can be test cases. We’ve already got code that we’re pretty happy with. We’ve got test cases, but they’re not as up to scratch as we’d like them to be. I think the next phase is probably going to be doing lots more test cases, getting people to break the test cases or find ways of breaking the code that the test cases don’t catch. Setting up a testnet for people to integrate with their software will be more interesting, I think. We can work out cool multisig things to do and use cases for this, but only at the on-paper level, not at the level of writing the code and finding all the little problems there might be with that.

Stephan Livera: Let’s talk about an upgrade pathway. What would an upgrade pathway look like and what’s been proposed?

AJ Towns: So I said earlier about the OP_NOPs and the OP_SUCCESS opcodes. OP_SUCCESS lets us add a whole bunch of different opcodes that might be interesting in the future. One of the ones that’s getting a lot of attention at the moment is Jeremy Rubin’s CHECKTEMPLATEVERIFY. It was previously called OP_SECURETHEBAG. It’s the first really simple, obviously safe covenant construction that we’ve seen. The idea is that you say “This transaction is going to go to these five addresses in these five amounts, but I’m not going to do that yet. I’m going to just commit to doing it now, so that sometime later, maybe when fees are cheaper, I’ll send the full transaction that spends to all these people. For now I’ll just commit to it so that they’ll know they’ll get the money eventually, but I don’t have to construct this huge transaction for it because nobody’s in a rush to get the money.” That’s the first way in Bitcoin we’ve had of saying that a future spend is constrained in some way. Normally once you get the funds to your address, you can do whatever you want with them; you just have to be able to sign for them. But this one says that it doesn’t matter how you sign, you can only send the funds on to these people. At a technical, conceptual level it’s a pretty interesting thing. The use cases are in the proposal for it. It seems to make a fair bit of sense and it seems like it’ll enable some interesting use cases. That is something we could do with an OP_NOP; it’s not particularly enabled by Taproot. The interesting use case enabled by Taproot is for alternatives: if you want the CHECKTEMPLATEVERIFY under one condition to happen in one way and under another condition to happen in another way. If fees go down in the future then it’s easy to do it. But if fees don’t go down, maybe you want to reduce some of the change that you’d be getting from it to make sure it goes out eventually. Maybe you want to have a timelock so that it only happens that way if fees haven’t gone down over a week or something like that. That’s the biggest thing that I think people are interested in at the moment. There’s other longer term stuff like OP_CAT, which lets us do other kinds of smart contract stuff and is just re-enabling some of the opcodes that got disabled because there were basically exploits available for them.
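
A sketch of the commitment idea (the hash here is illustrative only; BIP 119’s actual template hash covers more transaction fields, like version, locktime and input sequences, and uses Bitcoin’s exact serialization):

```python
import hashlib

def template_hash(outputs):
    # Hash of the future transaction's outputs: (amount, scriptPubKey) pairs.
    ser = b''.join(amount.to_bytes(8, 'little') + spk for amount, spk in outputs)
    return hashlib.sha256(ser).digest()

# The UTXO's script commits to this hash up front:
committed = template_hash([(50_000, b'\x00\x14' + b'\xaa' * 20),
                           (25_000, b'\x00\x14' + b'\xbb' * 20)])

# Later, CHECKTEMPLATEVERIFY only lets a spending transaction through if its
# outputs hash to the committed value, no matter who signs it.
def ctv_ok(spending_outputs):
    return template_hash(spending_outputs) == committed
```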

Stephan Livera: What are the next steps? The BIP is there. In terms of signaling could you outline what are some of the possible ways that could happen?

AJ Towns: Do you mean for Taproot or for other things?

Stephan Livera: For Taproot.

AJ Towns: For Taproot the next step is to get numbers assigned for the BIPs. I think there are still a couple of open issues and pull requests, or at least there were a couple of days ago. We’re going to get those resolved and send a pull request to the BIPs repo to get numbers assigned, so that we can start calling them BIP whatever-number-gets-assigned rather than the draft Schnorr BIP, the draft Taproot BIP and the draft Tapscript BIP. That’s pretty pro forma, so that’ll happen. Then we’ll start working on the code, getting pull requests up for Bitcoin Core and having a testnet ready. I don’t expect that to result in major changes, but maybe it will; you’ve got to leave that possibility open. We’ll get the code worked out and hopefully people will do some trials: can we make a testnet version of Lightning work with these payment points, with this design of Taproot and Schnorr and whatever else? Once we’ve got all that done, that’ll result in a pull request to Bitcoin Core. So we’ll have the code in Bitcoin Core but it won’t be executed. It’ll be there so that it can be enabled via version bits or whatever in future. The idea is that once the code’s there, it’s no longer a major change to Bitcoin to activate it and release it. So we can have these discussions about UASFs and whatever else without that having a huge impact on the code and taking forever to get in and causing more delays. We’ve deliberately not discussed exactly how activation will work, because a) it’s something that the community has to decide and b) it’s something a lot of people in the community have very strong opinions on. There’s probably going to be a Twitter flame fest about it. In private discussions already there are people who strongly disagree with me, so maybe there’s strong disagreement amongst lots of people. I at least feel like I’m going to argue about it. I imagine that means lots of other people are going to argue about it too.

Stephan Livera: What’s your position then? What’s your thoughts on it?

AJ Towns: For my part, I'm in favor of UASF. I liked the BIP 148 approach and a bunch of people didn't like that. They thought it was too risky, and it was indeed very risky; whether it was too risky is a judgment call. I thought it was the better choice amongst the possible options. Other people don't agree, and so which approach we take is going to require some resolution of that potential disagreement. The other thing is I don't think there's any possibility of wrecking miners' setups in the way that SegWit wrecked ASIC Boost setups, if hypothetically anyone was actually using that, because nobody has admitted to it, I don't think. So I don't think there's going to be the same opposition that we ended up with for SegWit. In practice I think the whole UASF argument won't practically matter. It might be good to have as a reserve, or as a policy thing to make sure that problems don't happen again next time we do stuff: we've got a way that worked twice, we'll just do it a third time, that sort of approach. My suspicion is that we will effectively just have a BIP 9 upgrade and that'll go smoothly. There'll be huge Twitter arguments before we get to that point, maybe, I don't know.

Stephan Livera: Could you outline for people who don’t know what is a BIP 9 activation?

AJ Towns: Every block that comes out has a version number associated with it. The original approach to soft forks was that you'd go from version one to version two to version three, and each of those versions would be a soft fork that got enabled. There were two problems with that. One is that you use up a version every time, so you might run out of versions eventually, after four billion of them. Bitcoin is meant to last forever so that could actually happen. The other problem is: what happens if one of these soft forks fails? Say version five is going to be Taproot, so any version higher than five implies Taproot is enabled. But then we discover some huge problem with Taproot and we don't want to activate it. If we come up with something better that we want to upgrade to, we can't use any number greater than five because that would imply that Taproot is enabled. We're kind of stuck. At that point, as soon as you propose an upgrade you have to have it accepted, otherwise you can never do an upgrade again. It wasn't a well designed system. In 2016 we moved to a thing called version bits where, instead of having a version number assigned, we temporarily assign a particular bit in the 32 bit version field. CHECKSEQUENCEVERIFY was the first one that used this, on bit 0, and SegWit used bit 1. The idea is that while we're working out whether we're going to activate the deployment, miners will signal on a particular version bit to say they've upgraded and they'll be able to support mining these transactions and enforcing the rules. Once enough miners have done it, and hopefully everyone supports it, then at a particular threshold of signaling on those version bits (1916 of a 2016-block window on mainnet, i.e. 95 percent) the rules will start getting enforced forever. Then those version bits no longer have to be set and they can be reused for a future upgrade later.
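
Concretely, BIP 9 signaling boils down to checking bits of the block version and counting signaling blocks over a retarget window. A rough Python sketch, with made-up block versions but BIP 9's actual mainnet parameters:

```python
# Minimal sketch of BIP 9 version-bits signaling (mainnet parameters).
VERSIONBITS_TOP_MASK = 0xE0000000
VERSIONBITS_TOP_BITS = 0x20000000   # top three bits must be 001
WINDOW, THRESHOLD = 2016, 1916      # 95% of a retarget period

def signals(block_version: int, bit: int) -> bool:
    """Does this block signal readiness on the given version bit?"""
    top_ok = (block_version & VERSIONBITS_TOP_MASK) == VERSIONBITS_TOP_BITS
    return top_ok and bool((block_version >> bit) & 1)

def window_locks_in(versions: list, bit: int) -> bool:
    """The deployment locks in if enough blocks in one window signal the bit."""
    assert len(versions) == WINDOW
    return sum(signals(v, bit) for v in versions) >= THRESHOLD

# e.g. 1920 of 2016 blocks signal bit 1 (the bit SegWit's deployment used):
versions = [0x20000002] * 1920 + [0x20000000] * 96
print(window_locks_in(versions, 1))  # True
```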

Stephan Livera: That’s the BIP 9 approach. What would a UASF approach look like as opposed to that?

AJ Towns: I think we will always want to have miners signaling, just so we can see whether they've upgraded or not. But the approaches proposed for SegWit were BIP 148 and BIP 149. BIP 148 just said that as of the start of August 2017, every single miner has to signal for SegWit, otherwise their blocks are invalid. The idea there was that we'll definitely pass the threshold if everyone signals. If we just drop the blocks that don't signal, that's the miners' loss and their risk; they'll probably end up signaling, and that, as it turned out, is what happened. The other idea was the BIP 149 approach: say our deployment has failed because miners are opposed and we couldn't really get consensus. What we'll do is wait until that deployment properly finishes in November and then have a new deployment of SegWit which will automatically activate at the end of November 2018. If miners do come around and start signaling before then, that's great, we'll activate it sooner. If they don't, it'll just activate at the end. That was codified as BIP 8, I think. So BIP 9 is: miners signal and it goes in, or it doesn't. BIP 8 is: miners signal, and if they don't signal it definitely goes in by the timeout. The big question with deploying these things is that as soon as the deployment starts, everything's set in stone. You know it'll activate at some point, but there's always the possibility that the code is buggy or the plan is wrong. Someone will discover that and we'll just want to back it out and fix things. We want to make sure we've got an option for that. At the moment the option for that is we tell the miners "Hey, this is a really bad idea. You don't want to activate this. Please stop." Or, like you on your podcast, reach out to people, whatever. It's not developers make a decision and that's it; that's exactly what we don't want to happen. With BIP 8 as it's specified, as soon as you do a BIP 8 deployment, developers have made a decision: by the timeout, that's what's going to happen. That's not what we want. What we actually do want, that's where the flame war is going to come in, I reckon.
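
The difference is easiest to see at the timeout. Here's a heavily simplified sketch of the state transition, using a `lockinontimeout` flag purely as a stand-in for "BIP 8 behavior versus BIP 9 behavior" (real deployments track state per retarget period, with more states than shown here):

```python
def next_state(state: str, threshold_met: bool, timed_out: bool,
               lockinontimeout: bool) -> str:
    """One retarget-period transition for a started deployment (simplified)."""
    if state == "STARTED":
        if threshold_met:                      # enough miners signaled
            return "LOCKED_IN"
        if timed_out:
            # BIP 9: the deployment simply fails at the timeout.
            # BIP 8: the deployment locks in anyway, guaranteeing activation.
            return "LOCKED_IN" if lockinontimeout else "FAILED"
        return "STARTED"
    if state == "LOCKED_IN":
        return "ACTIVE"
    return state

# Miners never reached the threshold and the timeout passed:
print(next_state("STARTED", False, True, lockinontimeout=False))  # FAILED (BIP 9)
print(next_state("STARTED", False, True, lockinontimeout=True))   # LOCKED_IN (BIP 8)
```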

Stephan Livera: Is the risk that the network splits into two and it's unknown which one prevails? Is that the risk in your view, or what is the risk under the UASF scenario?

AJ Towns: If you have a legitimate UASF, where everyone has looked at the proposal or talked to experts they trust to look at it, everyone has discussed it and the vast majority of opinions are that it's good, then I am not concerned about the risks. Everyone is clear on what they want to do and where they want to end up, having looked at the ideas fairly thoroughly. There's always going to be some risk of an unknown unknown that's going to bite you in the butt, but that's the best you can do: lots of people with lots of different viewpoints have all decided that this makes sense. The risk comes if you don't have that. If you don't actually have everyone on the same page, you've just got a noisy majority on Twitter or something and you haven't actually looked at stuff in detail, that's mob rule and can go wrong in that way. If you've got developers making the decision then that's bureaucratic or elitist; it's not really taking everyone's opinions into account. That's a risk. Then there's the general risk: if the code's wrong we could be losing money, people could be stealing money, or you might not be able to spend your coins because some bug makes stuff not work right. A lot of the things in a change like this are things we could back out with more development effort. There are ways of mitigating all those risks, but it's better to take time, make sure as many people look at things in as much detail as they're willing to, and get as much consensus as we possibly can. It's a consensus protocol, so getting consensus makes sense.

Stephan Livera: Of course. Are there any takeaways that you want listeners to take away from this? Any key areas you would like them to look at after listening?

AJ Towns: Wallet developers looking into new stuff, whether it be for Lightning or a multisig wallet, those are the developers and companies to be supporting. The ones that are sitting on their hands aren't doing as great a job. As far as I'm concerned, the better people understand this, the better off everyone is going to be. Listening to a podcast, that's an awesome step. If you are up to reading the BIPs or reading other articles, that's great too. If you're up to writing code or reviewing code, that's even better. I know there are at least a few developers supported by Patreon-style sponsorship arrangements; supporting those guys is great. Being supportive on Twitter, saying "Good job" or asking technical questions, is more constructive than getting into flame wars, I find. I think this is pretty cool. I think it's going to enable a lot of cool stuff, and the smoother we can get this out, with less conflict and fewer flame wars, the more it's going to encourage more development like that. That's been one of the friction points for this: SegWit was very painful for a lot of people, making it happen, getting it out, and not getting the feedback that it was going to break things at the point when it could have been fixed, rather than at the point when it just had to be stopped. The more technical engagement we can have between everyone and the developers, the better off we all are, I think.

Stephan Livera: Awesome. I think we’ll leave it there because we’ve gone for an hour and a half and I’m not sure how much more the listeners can take on some of this stuff. Let’s make sure you let the listeners know where they can find you online or if they’d like to find any more material. Obviously I’ll put the links in the show notes as well.

AJ Towns: Yeah. So I’m ajtowns on social media so you can find me there.

Stephan Livera: Thanks very much for joining me AJ.

AJ Towns: Merry Christmas and Happy New Year.