
Socratic Seminar - Vaults and OP_CHECKTEMPLATEVERIFY

Date: May 19, 2020

Transcript By: Michael Folkson

Category: Meetup

Media: https://www.youtube.com/watch?v=34jMGiCAmQM

Name: Socratic Seminar

Location: London BitDevs (online)

Pastebin of the resources discussed: https://pastebin.com/3Q8MSwky

Twitter announcement: https://twitter.com/kanzure/status/1262821838654255104?s=20

The conversation has been anonymized by default to protect the identities of the participants. Those who would prefer their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Introductions

Michael Folkson (MF): This is a London BitDevs Socratic Seminar. We are live-streaming on YouTube so please be aware of that for the guys and girls on the call. We are using Jitsi which is open source, free and doesn’t collect your data so check out Jitsi if you are interested in doing similar conversations and Socratics in future. Today we are doing a Socratic Seminar. For those who haven’t previously attended one, Socratic Seminars originated at the BitDevs in New York. There are a number of people on the call who have previously attended them. The emphasis is on discussion and interaction, so feel free to ask questions and move the discussion onto whatever topics you are interested in. This isn’t a formal presentation, certainly not by me, and not even by the experts on the call. This was set up because we have Kevin Loaec and Antoine Poinsot presenting next week on Revault which is their vault design, so that will be live-streamed as well. That will be more of a formal presentation structure with Q&A rather than today which is more of a discussion. The topic is covenants and CHECKTEMPLATEVERIFY. Jeremy Rubin has just joined the call which is fantastic. And in the second half we will focus on vaults which is one of the use cases of CHECKTEMPLATEVERIFY. What we normally do is start off by doing intros. A very short intro: you can raise your hand if you want to do one. Introduce yourself and how much you know about covenants and vaults, and we’ll go through the people on the call who do want to introduce themselves. You don’t have to if you don’t want to. If you are speaking and you are happy to have the video turned on, make sure you turn the audio and the video on. It will be better for the video if people can see who is speaking. If you don’t want the video on obviously don’t turn the video on. If you don’t care either way, switch the video on when you are speaking.

Bryan Bishop (BB): I am Bryan Bishop, Avanti CTO, a Bitcoin developer, I’ve worked on Bitcoin vaults. I had a prototype release recently. I am also working with a few others on the call here on two manuscripts related to both covenants and vaults. I also did an implementation of my vault prototype with Jeremy Rubin’s BIP 119, OP_CHECKTEMPLATEVERIFY proposal.

Spencer Hommel (SH): My name is Spencer. I am currently a Bitcoin developer at the Fidelity Center for Applied Technology’s blockchain incubator. I have been working on vaults since the summer of 2018, more specifically working on a hardware solution using pre-signed transactions with deleted keys. I am also working with Bryan Bishop on those manuscripts as well.

Kevin Loaec (KL): I am Kevin Loaec. I’m probably the least technical on this call. I’ve been interested in covenants and vaults for a while. My interest was piqued by the first proposal that Bryan sent to the bitcoin-dev mailing list about 7 months ago. I have been working on this since last December when a client of ours had a specific need where they wanted a multi-party vault architecture for their hedge fund. That’s when I started digging into this and exploring different types of architecture. I am working on this project which we call Revault, a multiparty vault architecture that is a little bit different from the one the other guys here are working on. But it also has a lot of similarities so it is going to be a very interesting talk today.

Max Hillebrand (MH): I am Max. I’m mainly a user of Bitcoin technologies and I contribute some to open source projects too, mainly Wasabi wallet. I have always been interested in the different property rights definitions that Bitcoin Script can enable. Specifically multisignatures which was part of my Bachelor thesis that I wrote. I have been following vaults specifically on the mailing list, some transcripts by Bryan and the awesome utxos.org site that Jeremy put up. Just interested in the topic in general. Looking forward to the discussion, thanks for organizing it.

Jeremy Rubin (JR): Hey everyone, I’m Jeremy. Thanks for the intro before though. You know a little bit about me. I’ve been working on BIP 119 CHECKTEMPLATEVERIFY which is a new opcode for Bitcoin that is going to enable many new types of covenants and smart contracts. One of those use cases is vaults. I released some code that is linked on utxos.org. You can see there is a vault implementation that you can check out based on CHECKTEMPLATEVERIFY. I am currently working on implementing better tools for being able to use CHECKTEMPLATEVERIFY that will hopefully make it a lot easier to implement all kinds of vaults in the future.

Sam Abbassi (SA): My name is Sam. I am also working on vaults with Bryan and Spencer over at Fidelity. I probably have the least amount of experience with respect to vaults. This is part of me gaining more exposure but happy to be here.

openoms (O): I am openoms. I am working on mainly some new services in a full node script collection called the Raspiblitz. I am generally a Bitcoin and Lightning enthusiast. I am enthusiastic about privacy as well. I don’t know a lot about vaults but I am looking forward to hearing more and learning more.

Jacob Swambo (JS): Hello everyone. My name is Jacob. I am working with Bryan, Spencer and Bob on the vaults manuscript that was mentioned on the bitcoin-dev mailing list not that long ago. I am a PhD student at King’s College London and I have been working on this for about a year and a half. I am happy to be here talking about all this.

Adam Gibson (AG): I just wanted to mention I’m here. It is Adam Gibson here, waxwing on the internet. I don’t have any specific knowledge on this topic but I am very interested.

What are covenants?

MF: Basic questions. These are for the beginners and intermediates and then we move onto the discussion for the experts on the call. To begin with what is a covenant and what problem is a covenant trying to solve?

Bob McElrath (BM): I didn’t introduce myself. I’m Bob McElrath, also working with Bryan, Spencer and Sam on a draft that will appear very soon; hopefully in a week or so you will be able to read it. I gave a talk on this topic last summer in Munich. It is a whole talk about covenants going through the various mechanisms. A covenant is by definition a restriction on where a UTXO goes next. When I sign a UTXO I provide a signature which proves that I control it and it is mine. But I don’t have control over where it goes after that. A covenant by definition is some kind of restriction on the following transaction. With that you can create what is commonly called a vault. A vault basically says “The following transaction has to go to me one way or another.” I am going to separate my wallet into a couple of pieces, one of which is going to be more secure than the other. When I send to myself, between hot and cold or between an active wallet or something like that, this has to go to me. I am making a restriction on the transfer of the UTXO that says “If you get into my wallet you can’t directly steal this UTXO just by signing it” because the covenant enforces that it has to go to me next. From there I can send it on. I am introducing a couple of layers of complexity into my wallet to do that.

MF: It certainly does, a sophisticated answer. Any of the beginners, intermediates did you understand that answer? What was your initial understanding of covenants before this? Any questions for Bob?

AG: I think this has been an open question since the early days of these ideas. It is such an obscure name. It doesn’t immediately jump out at you what it is. I appreciate the explanation Bob, that is excellent.

BM: We can blame Emin Gun Sirer and his collaborators for that. They wrote the paper in 2016 and they named it covenants. It is all their fault.

JR: I know it is a fun game to blame Emin but the term covenants existed before that in a Bitcoin context. The historical reason is that covenants are something that you use when you transfer property. It restricts how it can be used. In the Bay Area where I live there is a dark history with covenants where they were used to prevent black people from owning property. “You can sell this house but not to a black person.” That was relatively common. When people talk about covenants they oftentimes have weird things in their deeds like “You can only ever use this property to house 25 artists.” You sell the property with the covenants and the person can’t ever remove these covenants from the property. There was some mention in the notes that people didn’t really like covenants early on. Covenants is inherently a loaded term. It was a term that was coined to cast some of this stuff in a negative light because some people don’t like covenants. Not in a Bitcoin or cryptocurrency context; in general people have a negative association with someone else controlling your own property. In a Bitcoin context, as Bob pointed out, and I liked his description, it is about you controlling your own property. One of the metaphors that I like to use for this is that right now each UTXO is a little bit like a treasure chest. You open it up and it is full of gold coins. Then you get to do whatever you want with the gold coins. Imagine one day you opened up your treasure chest and you found Jimi Hendrix’s guitar in it. Where do you store that? Can you take that and throw it in the back of your Subaru? No, this is a sacred thing, it needs to go in your guitar case. There is a restriction where you open up your treasure chest and it says “This is a guitar. You can only put this into another suitable guitar case.” That is what a covenant is doing. It is telling you which containers are safe for you to move your item into. It turns out for the most part we are talking about moving coins. It would be “These are coins but they are made out of uranium so you have to put them in a lead box.” They need to go in a lead box, that is the next safe step. That is one of the metaphors that works for covenants. It is about you being able to control the safety and movement of your own coins.

MH: I know the term covenants from the incumbent banking system where if you have a loan contract, for example between a bank and a company, the bank can make the requirement that if the company’s cashflow to equity ratio drops to a certain level then the debt has to be paid back. Or it has to be renegotiated. It is a limitation on the contract where the contract itself is terminated or changed when one of these conditions comes into play. Seeing it as a restriction also makes sense in a Bitcoin context. We have a Bitcoin contract that is “If you have the private keys to this address then you can spend it, but with the restriction that it has to go into this other address.”

MF: You dropped off there Max so we didn’t hear the second half of your answer. I think some of you have already seen this. I did create a Pastebin for some of the resources that we can talk through. That is on the Twitter and on the Meetup page. I got a bunch of these links from Jeremy’s interview on Chaincode Labs’ podcast which I thought was excellent. Jeremy talked about some of the history in terms of implementing covenants on Bitcoin. The first one is an early bitcointalk.org post on how covenants are a bad idea. Perhaps we can talk about why people thought covenants were a bad idea back in 2013 and what progress has been made since then that perhaps has changed their mind or perhaps they still think covenants are a bad idea.

Historical concern about Bitcoin covenants

BB: I’ll start off. As I recall that was a Greg Maxwell post. I have talked with him and, without corrupting what his opinion actually is too much, I think the major concern was mainly about recursive covenants. People using them and not really understanding how restrictive a recursive covenant really is. That was the main concern, not that covenants are actually awful. It unintentionally reads as “Do not use covenants under any circumstance” which is unfortunate but that was not the intention of the post.

JR: Greg has explicitly said in IRC somewhere something to the tune of “I don’t know why everybody thinks covenants are bad. Covenants are fine. I have never said anything about them being bad.” I said to him “Greg, everybody I talked to says that they think it is because you said that they are bad in this thread. If you don’t think they are bad you should make that clear.” You can’t make Greg write something. He has written that in IRC. He doesn’t have any hang ups or holdups about them. It is not even the recursion that is the problem, it is virality. He doesn’t want a covenant system where somebody else is doing something and then all of a sudden your coins get wrapped up into their covenant. It may be that recursion is a part of that but that is the major concern with the ones that Greg was looking at.

MF: There are two elements here. One is timing. Perhaps it was way too early to start thinking about implementing covenants back in 2013. Secondly, perhaps there was a view that the ideas on covenants were stupid or too complex back then and it was just a case of battening down the hatches and making sure some crazy covenant ideas didn’t get into Bitcoin. Any thoughts on that or any of the previous conversation?

BM: Any time you do a covenant… any agreement you make when you are sending funds with the receiver is a private agreement. You can make whatever terms you want. Greg’s post enumerates a particularly bad idea where one party can impose a restriction on another party against their will. I think most people would think that is a terrible idea. It is not that covenants themselves are a bad idea. If you agree to it and I agree to it, fine. I think for the most part covenants are most useful where it is not a two party arrangement, it is a one party arrangement. Once you get two parties involved everybody has to understand what is going on. By default it can’t be a regular wallet. The structure of the scripts has to change somehow. I have to know about those rules. As long as I agree to them I think that is completely fine.

JR: A hard disagree on that note. One of the major benefits of a covenant system comes into play with Lightning Network related stuff. It actually dramatically simplifies the protocol and increases the routability by improving the number of HTLCs that you can have and the smart contracts that can live underneath a Lightning channel feasibly with reasonable latency.

BM: I think we agree there. If you are using a Lightning wallet then you have agreed to those rules.

JR: I do agree that there is a huge amount of value when it is a single party system but multiparty things are actually really useful in covenants because the auditability of those protocols is just simpler. For a lot of these setups you are writing half the amount of code.

MF: The next couple of links I put up were a good intro for beginners to this topic which is Aaron van Wirdum’s Bitcoin Magazine article on SECURETHEBAG. This was after Jeremy’s presentation at Scaling Bitcoin 2019. Then there’s this paper by Malte Möser, Ittay Eyal and Emin Gün Sirer. Any thoughts on this paper? Anyone want to summarize the key findings from this paper?

The first paper on Bitcoin covenants

BM: I can give it a stab. This was the first paper in the space. They defined a new opcode that acts rather like a regular expression that says “I’m going to examine your redeem script and I am going to impose some restrictions which are essentially an arbitrary sort of regular expression on your script.” The second thing that they defined is recursive covenants, which is the thing Greg Maxwell doesn’t like, as well as a kind of protocol where if somebody manages to steal your funds you can get into a game where you keep replacing each other’s transactions until the entire value of the UTXO goes to fees. They claim this is somehow beneficial because the thief can’t actually steal anything. That aspect of the paper I don’t like very much. I don’t think anybody wants to get into a game where they lose their funds anyway, even if it prevents the attacker from gaining them by sending them to fees instead. Those are broadly the three things in that paper.

BB: I’ll disagree. I think it is valuable to have the lose-everything-to-fees option because it comes down to the following. Would you rather fund an adversary or lose your money? Unfortunately in that contrived scenario those are the only options.

BM: That is not true. You can definitely have neither. You don’t have to get yourself into this game where you are paying fees.

AG: What exactly was the name of the new opcode for it? This was probably why it didn’t go anywhere.

JR: They called it OP_COV. There were a few problems with it. It wasn’t just the technical capability that it introduced; I don’t think the proposal was that secure. There were a few gotchas in it that would make it hard to deploy. With BIP 119 I tried to answer the integration questions of, if you are really signing transactions, what can go wrong? It turns out with a design that is not restrictive enough there are a lot of weird edge cases you can run into. That is why that proposal didn’t go anywhere.

BB: The other thing I remember is that the 2016 manuscript was published around the time that BIP 68 and BIP 112, the relative timelocks, occurred. I think the paper itself said this is going to require a hard fork which strikes me as odd.

JR: I think they probably just didn’t know.

BM: It was published right before those BIPs. I had a post after that that used the deleted key thing and those opcodes because it was obvious to me that they had missed that. That paper does not talk about timelocked opcodes correctly.

MF: This is Jeremy’s presentation at the Stanford Blockchain Conference in 2017. This was the first presentation of yours on covenants that I saw, Jeremy. It had a bunch of different use cases like “The Naughty Banker” and your thinking at the time. So of all these use cases which ones are still of interest now and how has your thinking changed since that presentation? I enjoyed that presentation, I thought it was very informative.

Introducing the use cases of Bitcoin covenants

JR: Here are the slides. A lot of them are still useful. Congestion control is particularly of note. The example I gave was how to use congestion control for Lightning resolution where you want to lock in a resolution and you do the details later. There are things like optically isolated contracts, and there is some vaults stuff in here too. That stuff is obviously still interesting. In this presentation I define a bunch of different types of opcodes. Those could still be interesting. One of the things that I define here is a way of doing a Tapscript style thing at the transaction level. If you had transactions that you could mark as being required to be spent within the same block then you could have scripts that expand out over a series of transactions. In my opinion that is a slightly more interesting primitive to work with because then you can have scripts that expand out over a number of blocks but then they split how the funds are being distributed to different UTXOs. You can build out some different flows and primitives based on that expansion. I think those things could be interesting in the future. I don’t think there is anything that is irrelevant in this presentation at this point. It is like carving out the small bits that we know how to do safely in Bitcoin and making it work. There are a few that aren’t here that I would be excited to add. One thing that I have been thinking about as a next step for Bitcoin, after CHECKTEMPLATEVERIFY or an equivalent gets merged, is an opcode that allows you to check how much value is in an output as you are executing. A really simple use case you can imagine for this is you paste an address to somebody and if it is under 1 Bitcoin you have a single key because it is not that much. But if it is over a Bitcoin then you have multisig. You can use that as a safety mechanism in a number of different applications. I think that is an important thing going forward that wasn’t in this presentation. It is worth looking at if you are thinking about how to contract in the UTXO model, what sorts of things could be possible.
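To make the shape of that idea concrete, here is a purely hypothetical sketch. No value-checking opcode exists in Bitcoin today; OP_AMOUNT_LESSTHAN and the key placeholders are invented names for illustration only.

```python
# Hypothetical policy: under 1 BTC a single key suffices, over 1 BTC a
# 2-of-3 multisig is required. OP_AMOUNT_LESSTHAN is an invented opcode.
ONE_BTC = 100_000_000  # satoshis

value_gated_script = [
    ONE_BTC, "OP_AMOUNT_LESSTHAN",  # invented: is the spent value < 1 BTC?
    "OP_IF",
        "<hot_pubkey>", "OP_CHECKSIG",
    "OP_ELSE",
        2, "<key1>", "<key2>", "<key3>", 3, "OP_CHECKMULTISIG",
    "OP_ENDIF",
]
```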

The congestion control use case

MH: I am still somewhat lacking intuition on why this is an improvement for congestion and how it can save fees in times when there is a high fee level. If someone could explain that a bit more succinctly that would be nice.

JR: The idea of congestion control is mostly that there’s a fundamental amount of traffic that has to happen but there is a peak demand. It is kind of like the coronavirus, you want to flatten the curve. I was looking at all these diagrams of flattening the curve and I was like “Hey, this is what I’ve been working on for the last year.” Let’s say it is lunch hour and we have 10 megabytes of transaction data coming in every ten minutes but it is only for an hour. Over the rest of the day those transactions are going to be clearing. With the congestion control solution you can commit to all of them, confirm all of them and then only when they need to be redeemed do they have to pay fees. You have localized the confirmation window for all of them, confirmed all of them at one time and then you spread out the redemption window where somebody goes and gets an individual UTXO out. The reason why this ends up decreasing fees is that if you think about fees as a bidding market you are bidding for two different goods. You are bidding simultaneously for confirmation and you are bidding for redemption. That is an inefficient market because those are two separate quantities. By splitting out the quantities you bid one price for confirmation and that confirmation price can be shared among a number of actors. Then you bid a separate price for redemption. That has the effect of allowing you to have fewer people bidding in the confirmation market with CHECKTEMPLATEVERIFY and more people bidding in the redemption market. I think that is the shape of why it is going to be an improvement.
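A back-of-the-envelope sketch of the two markets, with made-up numbers (the transaction sizes and feerates below are illustrative, not measured):

```python
# 100 payments made at peak demand, versus one CTV commitment at peak
# plus 100 redemptions spread over off-peak blocks.
peak_feerate, offpeak_feerate = 100, 5   # sat/vbyte, illustrative
n_users = 100
commit_vbytes = 150        # one input, one CTV output
redeem_vbytes = 150        # per-user redemption, paid later
naive_vbytes = 150         # per-user payment made at the peak

naive_cost = n_users * naive_vbytes * peak_feerate
ctv_cost = (commit_vbytes * peak_feerate
            + n_users * redeem_vbytes * offpeak_feerate)
print(naive_cost, ctv_cost)  # 1,500,000 sats vs 90,000 sats
```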

MH: One follow up question. Let’s say I make a transaction that pays 100 users. I get that confirmed at a low fee. How does that work with the users redeeming their coins? Does it have to happen for all the 100 users at the same time or can 10 users do it fast and the other 90 do it slower?

JR: CHECKTEMPLATEVERIFY is a general purpose opcode. It doesn’t do one of these things specifically. The answer is: what do your users want? We could also talk about mining revenue because I think it is important when we are talking about something that looks like it is reducing fees; if it is improving revenue, and I think it does improve revenue, but that is a separate conversation. What you would do is bundle up all your 100 users into a transaction. You would have a single output for all of them. Then you would create that. You would probably end up paying a very high fee on that transaction because it is representing confirmation for 100 users. A high fee on a transaction with one output is a lot lower than a low fee on a hundred transactions or a hundred outputs. You are still saving money as the user but you are maybe paying a higher fee rate. What you give to your users, if they have an old wallet, looks like an unconfirmed spend. It would be “Here is a couple of unconfirmed spends” and you can structure the spends as any data structure that you want that is a tree of some sort. A linked list is a tree. You could have something where it is one person, then the next person, then the next person, but that is a little bit inefficient. It turns out that it is optimal for the users to do a tree of radix 4. You would have a tree that says “Pay out to these 4 groups” and each group of 4 pays out to 4 groups and each group of 4 pays out to 4 groups. Then the total amount of work that you have to do is log(n) to get a single redemption in transaction space and, amortized over all the users, it is only a constant amount of transaction overhead.
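A small sketch of those costs, under the simplifying assumption of a full radix-4 tree with one expansion transaction per internal node:

```python
import math

def tree_costs(n_users: int, radix: int = 4):
    # A single eager redeemer has to publish one expansion per level.
    depth = math.ceil(math.log(n_users, radix))
    # A full radix-r tree with n leaves has about (n - 1) / (r - 1)
    # internal nodes, so amortized over all users the expansion
    # overhead is a constant (~1/3 of a transaction each for radix 4).
    amortized = (n_users - 1) / (radix - 1) / n_users
    return depth, amortized

print(tree_costs(100))   # (4, ~0.33 expansion txs per user)
```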

BB: One interesting point here is in the tree structure the way this is set up is that in certain scenarios some users are paying a little bit more than others.

JR: Yes. One of the issues is it a balanced tree or not? It turns out that we’re already talking logarithmic so it is already going to be pretty small. We are talking maybe plus or minus one on the depth in the tree. You can pay a little bit more but the other side of it is that users who want to redeem earlier subsidize users who redeem later because it is amortized over all the users. Let’s say I have a branch that ultimately yields a group of 4 and one of those people decides that they really want their coins. Them getting their coins subsidizes the creation of everybody else who is one of their neighbors along the path. There is a thing where naturally there is a priority queue which is what you want to have I think in Bitcoin. You want the fee market to be a priority queue where people who have higher priority, higher requirement of getting their transaction through end up paying more. What this changes is the redemption to a lazy process where you do it whenever demand is low enough to justify your use case. You are not worried about confirmation. The alternative is that these transactions sit unconfirmed in the mempool. I think unconfirmed funds are far worse. The real benefit of that, this goes back to why I think this is really good for multiparty situations, these payouts can be Lightning channels. You are wondering how am I going to get liquidity, it turns out that you can immediately start routing it in the Lightning Network. That’s one of the benefits of this design is that it allows you to bootstrap Lightning channels much more easily because you are not time sensitive on the creation of the channel as long as it is confirmed.

AG: Can we dial back a little bit? I think we have jumped a few steps ahead. I want to make sure I understand the most basic concept. I think Max was asking something similar. Do I understand that the most basic concept of congestion control here is with this covenant mechanism you are able to effectively treat unconfirmed transactions or chains of unconfirmed transactions as if they are settled so to speak? This distinction about confirmation and redemption you were saying is that the receiver can treat the money as having been received even though it is not in a Bitcoin block because there is a covenant. Is that right?

JR: That is exactly correct. So we have a diagram here. I compare normal transactions where you have some inputs; blue is the payments and pink is your change UTXOs. This is normal transactions on the left. If you go to normal batching then you have a number of outputs and a single change and it is more efficient. With congestion control payments what you do is have a single output and then you have a tree of possible redemption paths underneath. Here I show a slightly more advanced demo. Ignore the part on the right. Just imagine you go down with this radix 4 tree, you go down to Option B. You expand out and then you have all these different transactions. What this diagram is showing you is that the different leaves or nodes of this transaction graph can be expanded at different times and in different blocks. If you look at the gray boxes, look at the size of them. Normal transactions are the worst. It is the biggest gray box. Then batched transactions is the next smallest. Congestion controlled transactions are even smaller. Your real time block demand is really low. Then sometime later these other transactions can be played but they are guaranteed to go down that route. The optionality that I’m showing answers the question that Max had earlier which is how do they actually redeem. You could redeem on different types of tree. The one on the right is Option A: let’s redeem as a single step and pay out to everyone. That is an immediately useful one and is maybe a little bit easier to understand and integrate into existing wallet infrastructure. It is a single unconfirmed parent for this transaction. If you want optimal efficiency on a per user basis then you would do a tree expansion. In Option A it is less fair because if you are the one person that wants to redeem their funds you’ve got to pay for everyone. In Option B you only have to pay for log(n) of everyone else which you can kind of ignore.

MF: There is a question in the YouTube chat. How is this different to child-pays-for-parent?

JR: The difference between this and child-pays-for-parent (CPFP) is that CPFP is a non-consensus rule around deciding which transactions you should mine. This is a consensus rule around being able to prove a transaction creates another transaction. In this world you do end up wanting to use CPFP where you can attach your spending transaction with a higher fee to pay for the stuff up the chain. In this example you would spend from one of these outputs and then you would attach a high fee to it. Then that would subsidize the chain of unconfirmeds. It is related to CPFP in that way but it is a distinct concept in that these pending transactions are fully committed by a confirmed ancestor. There is no requirement to do CPFP in order to get confirmation of the parent. That is the difference.

BB: I’ll point out another difference: CPFP doesn’t require a soft fork. It doesn’t accomplish the same thing either.

JR: The other thing I would add, if we are going to go tongue in cheek, is that I am probably going to end up removing CPFP or having to completely rearchitect it. The mempool is a big project right now. There is a lot of stuff that doesn’t work how people think it works. Transaction pinning is one of these issues that comes up. It is a result of our CPFP policy. There is a very complicated relationship between a lot of these fixes, features and problems we end up having.

MH: Can we still do CPFP for that commitment CTV transaction?

JR: Yes. You just spend from the child and then it is CPFP. That’s where the mempool issues come in. CPFP doesn’t actually work. This is the problem that people are running into with Lightning. People have a model of what CPFP means and the model that they have is perfect economic rationality. It turns out that perfect economic rationality is an NP-hard problem. We are never going to have a perfectly rational mempool. We are always going to be rejecting things that look good. It just turns out that the current CPFP policy we have is really deficient for most use cases. CPFP already only works in a handful of cases. It doesn’t work for the Lightning Network. Even with the recent carve-out it still doesn’t really work properly.

SH: Is there any consideration of how exactly you structure the tree with radix 4? Is there a certain algorithm or protocol to place certain outputs in certain positions of the tree or is it left to randomness or open to whatever the implementation chooses?

JR: I think it is open to implementation. The opcode is generic and you can do whatever you want. That said there are some really compelling ones that I have thought of that I think would be good. One would be if you had priority information on how likely people are to be in the same priority group. You can either choose a neutral priority arrangement where you try to pair high priority with low priority or you can do something which is a fair arrangement where high priority goes with other high priority so people are more likely to share fees. There are also fun layouts you can do where you estimate the probability of each output being redeemed quickly and then Huffman encode the tree based on that. The other one I really like, and this goes into the Lightning side of things which is a bit more advanced, is you can order things by the probability of people being able to cooperate. If you had some notion of who knows other people then you can do a recursive multiparty Lightning channel tree, and if you group people by the probability that they are able to work together in a group then you make an optimal updatable tree state. That one I am really excited about as a payment pool option. The last one would be if you are making payments and they might be to the same service. You can make a payment tree where keys that you suspect are owned by the same wallet exist in the same sub-branches. There is an opportunity for cutting out some of the redemption transactions by redeeming at that higher order node. There are a lot of options.
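A sketch of the Huffman-style layout idea: users estimated to redeem sooner end up nearer the root, so their redemption path is shorter. This is a binary version for brevity; a radix-4 variant would merge four nodes at a time.

```python
import heapq
import itertools

def huffman_layout(users):
    """users: list of (redeem_probability, user_id). Returns nested
    tuples describing the tree shape; likely redeemers sit shallower."""
    counter = itertools.count()  # tie-breaker so heap entries compare
    heap = [(p, next(counter), uid) for p, uid in users]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)   # two least likely subtrees
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(counter), (left, right)))
    return heap[0][2]

print(huffman_layout([(0.5, "a"), (0.3, "b"), (0.15, "c"), (0.05, "d")]))
# ('a', (('d', 'c'), 'b')) -- "a" sits one hop from the root
```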

SH: My next question was about the probabilistic payouts in Lightning so thank you for answering that.

BM: Could you talk a bit more about CPFP because I think one way to describe this is instead of the sender paying fees the receiver can pull and therefore the receiver has to pay fees which means the receiver is going to have to use CPFP to do that. Could you talk a bit more about the interplay between those two?

JR: CPFP is not the only way to pay fees. There’s a litany of ways to pay fees in one of these systems. CPFP is I would say the best way because it is a pure API where you want to declare which transactions you want to do and then the paying of fees for those should be abstracted away from the actual execution. CPFP is good for expressing these arbitrary relationships. It actually turns out that there are better APIs. One of the other soft forks I am looking at that maybe we could do is something that gives us a much more robust fee subsidizing methodology.

BM: If the receiver wants to pull a payment out of the tree and get it confirmed for whatever reason he may have to pay fees. The transaction at the end of the tree could pay fees. It may not be enough. The receiver may have to add fees to that and they may desire to which means they have to use some kind of replacement. Due to the structure of CTV replace-by-fee is not going to be viable.

BB: I don’t think you would replace the fees at the end of it, although I guess you could. I was expecting that you make a child transaction that pays the fees in addition to whatever you pull out of the tree.

JR: Replace-by-fee (RBF) works fine with CHECKTEMPLATEVERIFY (CTV). The only issue that comes up is if you want CHECKTEMPLATEVERIFY to be inherently Lightning compatible, then RBF is not a Lightning compatible idea. In general you have to worry about the state of HTLCs in subcontracts, so you can’t arbitrarily RBF because you may be bound to a specific UTXO. If you have things like ANYPREVOUT then that wouldn’t necessarily be true. You would be able to get around some of those constraints. The reason why I prefer CPFP is that it doesn’t disturb the txids in your parents and your own branch. I think txid stability, at least for the current Lightning Network designs that we have, is an important property. But you can use RBF, it just changes your txid. With CTV there are two forms of it. There is an unbounded form where you allow any output to be added that adds more money. There is also a bounded form that is possible through a quirk. I like that you can do it. Using a P2SH SegWit address you can specify which key is allowed to add a dynamic amount of fee. If you pick a key that is known to be of the parties in that subtree then it would only be through the coordination of those entities that the txid could be changed. If you are trying to do a Lightning thing and the RBF requires coordination of all the subowners it can work as well in a protected form that protects your state of HTLCs. I think that is a complicated thing to build out and CPFP is conceptually a lot simpler. RBF does not work well for a lot of services. This was one of the debates about RBF in the first place. People didn’t like it because people wanted to issue one txid, they wanted to be an exchange and say “Here is your txid” and then not worry about having to reissue the txid because it looks like a double spend and wallets get upset. It is not awful that the code supports it but it is an awful thing to use in practice because it has bad externalities. CPFP is just more robust, that is the reason why I’ve been advocating it.

The design of CHECKTEMPLATEVERIFY (CTV)

MF: We’ve jumped straight into use cases. I’m wary of that. Jeremy, could you take a step back and explain what CTV is in comparison to some of the other covenant designs?

JR: With the presentation I gave in 2017, at that time I was like “Covenants are really cool. Let me think about the whole covenant space.” The Emin Gun Sirer paper only covers one type of covenant which is how an output has to be spent but it doesn’t cover covenants around which inputs it has to be spent with; there are a lot of things. I thought about it and I tried to get people excited, people got excited. At the implementation point people were like “This stuff is scary to do. We are not really sure what is possible to do safely in Bitcoin. We have all these properties we want to preserve around how transactions behave in re-orgs.” I was like “Let’s do a long study of how this stuff should work.” I was doing that and working on other things, figuring out what made sense. A lot of the proposals for covenants have flaws in either how much computation they are expecting a validator to do or what abstractions and boundaries they violate in terms of transaction validation context. Observing things that you are not supposed to observe. As I went by I started building vaults in 2016. I was talking to some people about building them. I had a design that ended up being somewhat similar to what Revault looks like. I was using lots of niche features like special sighash flags for making some of this stuff work. At the end of the day it really was not working that well. I went back to the drawing board, looking at how you can do big ECDSA multisignatures to emulate having big pre-signed chains. I tried to get people excited about this at one of the Core Dev meetings. People said “This stuff is not what we are interested in.” No one would review it. I stepped back and said “I am trying to accomplish this specific goal. What is the most conservative, minimal opcode I could introduce to do that without having any major security impact change to Bitcoin?” I came up with CTV; it had a couple of precursors. The design is basically the same. It was actually more conservative originally, I have made it more flexible in this iteration. I presented that to the San Francisco BitDevs. The usual suspects were there. The response was very positive. People were like “This seems like a covenant proposal that does not have that much complexity we were expecting from validation. And it does not have that much potential for a negative recursive or viral use case that would add a large problem.” It used to be called SECURETHEBAG, it also used to be called CHECKOUTPUTSHASHVERIFY. It was a back and forth. I originally called it CHECKOUTPUTSHASHVERIFY because I was like “Let me call it the most boring thing that is exactly what it does” and then everybody at the meetup was like “That name sucks. You have got to name it something more fun.” I renamed it SECURETHEBAG and the other half of people were like “Bitcoin is serious business, no funny names.” I renamed it to CHECKTEMPLATEVERIFY which is conceptually there but is not as boring as CHECKOUTPUTSHASHVERIFY. It really gets to the heart of what the idea is, what type of covenant you are writing. Essentially all you are doing is saying “Here is a specific transaction template.” A template is everything except for the specific outpoints or coins that you are spending. That covers sequences, version, locktime, scriptSigs and outputs of course, and whatever other fields that I may have missed that are in the txid commitment. People are writing in the chat that they want to bring back SECURETHEBAG.
If you want to bring it back I have no business with that, I can’t be responsible. It just checks that the hash of the transaction matches those details. That’s basically it. That’s why it’s a template. Here is the specific transaction that I want to do. If you want to do more than one transaction, what if I want Option A or Option B? Simple. Wrap it in an IF ELSE. If you pass in one then do Transaction 1, if you pass in zero do Transaction 2.
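As a rough sketch of what the template commits to, here is the shape of the hash, loosely adapted from the reference logic in BIP 119 (illustrative only, not consensus code; consult the BIP for the exact serialization rules):

```python
import hashlib
import struct

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def compact_size(n: int) -> bytes:
    # Minimal varint encoder, enough for the short scripts in this sketch.
    assert n < 0xfd
    return bytes([n])

def ctv_template_hash(version: int, locktime: int, scriptsigs: list,
                      sequences: list, serialized_outputs: list,
                      input_index: int) -> bytes:
    """Everything except the outpoints: version, locktime, scriptSigs
    (only committed if any are non-empty), input count, sequences,
    outputs and the index of the input being checked."""
    r = struct.pack("<i", version)
    r += struct.pack("<I", locktime)
    if any(scriptsigs):
        r += sha256(b"".join(compact_size(len(s)) + s for s in scriptsigs))
    r += struct.pack("<I", len(sequences))
    r += sha256(b"".join(struct.pack("<I", s) for s in sequences))
    r += struct.pack("<I", len(serialized_outputs))
    r += sha256(b"".join(serialized_outputs))
    r += struct.pack("<I", input_index)
    return sha256(r)

# Example: one input with empty scriptSig, two zero-value outputs with
# empty scriptPubKeys (8-byte value + 1-byte script length each).
h = ctv_template_hash(2, 0, [b""], [0xffffffff], [b"\x00" * 9, b"\x00" * 9], 0)
```

The Option A / Option B spend described above is then just two of these hashes behind a branch: OP_IF <H(tx1)> OP_CHECKTEMPLATEVERIFY OP_ELSE <H(tx2)> OP_CHECKTEMPLATEVERIFY OP_ENDIF.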

BB: When I was working on my Bitcoin vaults prototype and I was doing the CHECKTEMPLATEVERIFY implementation version I was originally doing secure key deletion and then I was like “I should try BIP 119.” I asked Jeremy, this IF ELSE thing sucks if you have a lot of branching. Jeremy suggested a very simple script that was significantly more concise. That was interesting.

JR: I have become a bit of a Script virtuoso. There are a lot of funny script paradigms that you can do with this stuff to make it really easy to implement. The IF ELSE thing always bothered me. Do you do a big chain of IF ELSEs or do you do the balanced tree branch conditionals and pass that in? It turns out there is a script that Bryan is referencing where you just have to pass in the number of the branch that you want to take. It is that simple. Bryan, maybe I will send it to you for review. I posted on StackExchange somewhere a script which emulates a switch statement where you pass in a number and it takes whatever branch of code you want to execute underneath. It is a little bit more verbose but it is very easy for a compiler writer to target.
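For flavor, here is one way such a switch could be laid out (an illustrative reconstruction, not necessarily the exact script Jeremy posted): the spender pushes a branch number and the script dispatches to the matching template.

```python
def switch_script(template_hashes):
    """Build nested IF/ELSE branches so that pushing n as the witness
    selects the n-th CTV template. Opcodes as strings for readability."""
    script = []
    for i, h in enumerate(template_hashes[:-1]):
        script += ["OP_DUP", i, "OP_EQUAL",          # does n match branch i?
                   "OP_IF", "OP_DROP", h, "OP_CHECKTEMPLATEVERIFY",
                   "OP_ELSE"]
    script += ["OP_DROP", template_hashes[-1], "OP_CHECKTEMPLATEVERIFY"]
    script += ["OP_ENDIF"] * (len(template_hashes) - 1)
    return script

print(switch_script(["<H0>", "<H1>", "<H2>"]))
```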

AG: You said CHECKTEMPLATEVERIFY is essentially looking at what the txid encompasses, in other words the template transaction. But then you said it includes the scriptSig and it doesn’t include the outpoints. Surely it is the other way round?

JR: One critique that comes up that sometimes people say is that I have designed CTV for a very specific use case and there is a more general thing out there that maybe could be better. That is a little bit true. The very specific use case that I have in mind is where you have a single input. There is a reason for that. That is why I was talking about the malleability before. If you have a single input there is no malleability you can have with the transaction outpoint if you know one parent’s outpoint. You know that one parent’s outpoint and then you can compile down the tree. You can fill in all the details as they go. It is all deterministic. CTV is not specifically designed for that use case but it is designed so that use case works really well. When you look at the scriptSigs that is a little bit weird. It basically means that you mostly cannot use bare script for CTV because you are committing to signatures there if you have signatures. If you have a bare CTV where it is just a CTV you can use a bare script because you don’t put anything in your scriptSig. As soon as you have signatures and other things you end up having a hash cycle. The way you end up getting around that is you use a SegWit address. In a SegWit address the witness data is not committed to in the txid so your signatures and stuff are all safe. Unless it is P2SH and then you commit to the program. You can use SegWit P2SH as a cool hack where you can commit to which other key has to be spending. That’s the reason why you are committing to the scriptSigs but not the outpoints. The scriptSigs affect the txid but, given a known chain of CHECKTEMPLATEVERIFYs with a single known parent outpoint, the outpoint does not affect the txids.

I’ll give you a concrete example. One of the big benefits of CTV is you have all these non-interactive protocols where I define “here’s an address” and then if enough coins move into this address then I have started the Lightning channel without having to do any back and forth with my counterparty. In order to update that channel state I still need to know the txid of the channel that eventually gets created. If I spend to that address and it has a single input then I know who spent to it and I know the outpoint. I can fill in all of the txids below. Those txids won’t change. Any terminal state that I am updating with an HTLC is guaranteed to be stable. If I had malleability of the txid, either by having RBF or by having multiple inputs or by not committing to the set of data I commit to, then you would run into the issue that I am mentioning where things can get disrupted. It is a little bit abstract but if you read the BIP there is a lot of language explaining why it is set up that way.
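A sketch of that determinism, with hypothetical helpers (`with_outpoint`, `serialize` and `children` are invented stand-ins for real transaction plumbing): once the funding outpoint is known, every txid in the tree follows.

```python
import hashlib

def txid(serialized_tx: bytes) -> bytes:
    # A txid is the double-SHA256 of the serialized transaction.
    return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).digest()

def fill_in_txids(parent_txid: bytes, parent_vout: int, template):
    """A CTV template fixes everything but the outpoint. Splicing in the
    parent outpoint pins down the transaction, hence its txid, and so on
    recursively for each child template in the tree."""
    tx = template.with_outpoint(parent_txid, parent_vout)  # hypothetical
    this_txid = txid(tx.serialize())                       # hypothetical
    ids = [this_txid]
    for vout, child in enumerate(template.children):       # hypothetical
        ids += fill_in_txids(this_txid, vout, child)
    return ids
```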

SH: I think you touched on this during your CTV workshop back in February. Can you elaborate on how, if at all, Tapscript affects some of the scripts that you and Bryan mentioned just a few minutes ago, or CTV scripts in general?

JR: Tapscript makes a lot of this stuff much easier. In Tapscript you would never use an OP_IF. There are some use cases where you have a combinatorial blowup in script complexity; you would maybe use it for those purposes. You wouldn’t need to use it in most use cases. Tapscript makes a lot of these things easier to do. You could have an opcode which is “This is an intermediate output and it has to be spent by the end of this block or this transaction can’t be included.” This would give you the same functionality as CTV. It is about being able to have some branch that has to execute and you don’t need to pass in all these bytes to signify which branch you want to execute. It is painful to do that.

BM: Can you elaborate on some of the arguments and counterarguments for or against the implementation of CTV? In particular there is a balance between making a super restrictive opcode and something more flexible; you started with something more restrictive and then you moved to something less restrictive. One of the things that I have been fooling around with is the new Simplicity language which, if we got that soft forked into Bitcoin, has bare access to essentially all of the transaction data. You could compose anything you wanted as far as a covenant goes. It is perhaps the polar opposite in terms of flexibility. I have been thinking about implementing CTV just for the fun of it in Simplicity to understand how it works. Can you elaborate on the spectrum here, what is too restrictive, what is not restrictive enough and why?

JR: Simplicity is really cool first off. I don’t think it does what you think it does, in the sense that you can write a valid contract in Simplicity for whatever covenant you want but it is not necessarily executable onchain. As you write more complicated scripts in Simplicity the runtime goes up and you have certain runtime limits or fee limits on how much work a transaction can require. Unless you get a soft fork for the specific jet that you want to add you can’t do it. The way I think about Simplicity is: what if we had the optimal language for our sighash flags? What would that look like? Simplicity lets you define whatever you want, and it makes soft fork compatibility easy: if you need old clients to be able to understand a new specification, Simplicity lets you do that. Simplicity lets you express these things, it doesn’t necessarily let you make transactions based on them. One point that I would also make about the compactness, and this is something I have spoken to Bram Cohen about and you can ask him for his actual opinion if I misstate it, is that even if you have a really sophisticated covenant system, general covenants are runtime compiled, where you are interpreting live in the script. CTV is ahead-of-time compiled. You only have to put onchain the data for the branches that you are actually executing. You could write that in Simplicity as well. I think what you would end up doing is implementing CTV in Simplicity. I don’t think that right now, given the complexity of Simplicity as a long term upgrade, we should ignore doing something that works today for that type of use case. It is basically just saying, if you want to map it, “We are doing a jet today for this CTV type script” and that will be available in Simplicity one day. Having this is both good for privacy in that you don’t reveal your whole contract and good in terms of compactness in that you only reveal the parts of your contract that need to execute. There are a lot of benefits rather than having the complete program expressed in Simplicity, at least as far as I can tell.

On the question of why have something restrictive versus something general: it is really easy to audit what happens with CTV. There are a few things that you can do, a few different code paths. It is a hundred lines of code to add it to Core. It is pretty easy. Within an individual transaction context there is no major validation overhead. It is just simple to get going. It makes it easy to write tools around it. Writing tools around a Simplicity script is probably going to be relatively complicated because you are dealing with arbitrary binaries. You are probably going to be using a few well tested primitives in that use case. With CTV it is a basic primitive. The tooling ends up being pretty easy to implement as well. I think Bryan can speak to that. With respect to it originally starting more restrictive: the restrictions I had originally were basically around whether, if you added other features to Bitcoin, CTV would allow you to do more complicated scripts. I removed those features. People said “We want these things to be enabled.” I didn’t want CTV to occupy the space such that we added CTV and now we can’t add this other thing that we want without enabling these very complicated contracts. I said “Let me make this as restrictive as possible.” People said “No, if we add those things the chances are that we do really want these more complicated contracts.” This is like OP_CAT for example. I said “Ok sure. I will remove these restrictions, make it a little bit more flexible.” Now if you were to get OP_CAT or OP_SHA256STREAM in Core then you would actually be able to start doing much more sophisticated CTV scripts. This gets to a separate question that I will pose in a second. One thing you can do for example is write a contract that says “This template must pay out to all of these outputs and any output of your choosing.” This can be useful if you want to add an additional output. You can’t remove any outputs that are already specified but you could add another output. It gives you some more flexibility if you had OP_CAT. But because we don’t have it you can’t really do that today. That gets to the point of why not just do ANYPREVOUT which also gives you an analog for CTV. There would be no upgrade path for ANYPREVOUT, short of Simplicity, that would allow ANYPREVOUT to ever gain higher order templating facilities. CTV has a nice upgrade path for more flexibility in the future if we want it.

nothingmuch: What about recursion?

JR: So basically all the recursion happens at compile time. You can recurse as much as you want. This is sort of under wraps right now but I am happy to describe it. I have been building a compiler for CTV. I hope to release it sometime soon. The compiler ends up being Turing complete in that you can compile any contract you want that expresses itself in Bitcoin transactions. But the compiler produces a finite list of Bitcoin transactions at the end of the day. There is no recursion within those. Those are just a fixed set of transactions that can be produced. If you want any recursion you can in principle do that at compile time but not at the actual runtime. I don’t know what “bounded input size” means but I think that is a sufficient answer. We can follow up offline about “bounded input size.”
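A toy illustration of recursion at compile time: the generator below calls itself freely, but its output is a finite list of templates fixed before any coins move; nothing recurses at spend time.

```python
def unroll_vault(depth: int):
    """Toy compile-time recursion: each level commits to the hash of the
    next level's template, bottoming out in a plain key spend."""
    if depth == 0:
        return ["pay to cold key"]            # base case, no covenant
    rest = unroll_vault(depth - 1)
    step = f"CTV committing to: {rest[0]}"    # parent pins its child
    return [step] + rest

print(unroll_vault(3))   # four templates, a fixed finite set
```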

MF: There are a couple of things I would like to cover before we transition to vaults. One is in that 2017 presentation you talked about some of the grave concerns. Were you able to address all these concerns? Fungibility, privacy, combinatorial explosion etc

JR: In terms of computational explosion I think we are completely fine. Like I mentioned, compile time can be Turing complete but that is equivalent to saying “You on your own computer can run any software you want and emit whatever list of transactions you want.” At runtime it has to be a finite set of transactions. There is no infiniteness about it. Then in terms of fungibility and privacy, I think it is relatively ok. If you want privacy there are ways of getting it in a different trust model. For example, if you want privacy and you are willing to have a multisig signing server then you can use Taproot. You get a trust model where the signing server could steal your funds if it had all the parties working together, but it can’t go offline and strand your funds because you have an alternative redemption path. In terms of fungibility, the issue is less around whether or not people can tag your coins because that is the privacy issue. The fungibility issue is whether your coins can be spent with other coins. Because this is a program that is guaranteed to terminate and it has to terminate in a coin that is unencumbered by any contract, those coins can be spent with any other coin. There is no ongoing recursive segregation of coins. The fungibility I think is addressed. For privacy what I would say is that these are really good onchain contracts in terms of you only show the part you are executing, not the whole program. You don’t learn other branches of the program that might have been there. But you are seeing that you are executing a CTV program so there is maybe a little bit of privacy harm there. The way that I like to think of this, and why this is a huge win for privacy, is that this is going to enable a lot better Layer 2 protocols and things like payjoin and mixers. It is going to make a lot of those things more efficient. Our ability to add better privacy tools to Bitcoin is going to improve because we’re able to bootstrap these protocols more efficiently. It is going to be a big win for privacy overall. There is some new information revealed, I wouldn’t say there is nothing new revealed.

Other use cases of CTV

MF: Let’s go on to use cases. We’ve already discussed congestion control. Perhaps Jeremy you could put up the utxos.org site and go to the use cases tab. So one of them is congestion control, one of them is vaults. You have a bunch of other use cases there as well. Before we move on specifically to vaults perhaps you could talk about some of those different use cases and which ones are promising and which ones you are focusing on?

JR: Like I mentioned I’ve been working on a compiler. The use cases that I now have are probably triple what is here. There is a lot of stuff you can do. Every protocol I have looked at, things like discreet log contracts, becomes simpler to implement in this framework. The use cases are pretty dramatic. I am really excited about non-interactive channels. I think that is going to be huge. It gets rid of 25-50 percent of the codebase for implementing a Lightning channel, because a lot of it is the initial handshaking, and makes it possible to do certain things that are hard to do right now. The other stuff is all related to scaling and trustless, coordination-free mining pools where you can pay people out. I sent Bob at some point some graphs around this. You can set up a mining pool where every block pays out to every single miner that participated in the mining pool over the last thousand blocks. Then you can do this on a running basis. You can have something where there is no central operator. You only get to participate if you provably participated in paying out to the people as specified over the last 1000 block run. Then you can use the non-interactive channels to balance out so that the actual number of redemptions per miner ends up being 1 for every given window that they exist in. You could minimize the amount of onchain load while being completely trustless for the miners in receiving those redemptions. There is a lot of stuff that is really exciting for making Bitcoin work as a really good base layer for Layer 2. That I think is going to be the major other use case. Another thing I am excited about with vaults is that vaults exist not just as something for an institution but they are really important for people who are thinking about their last will and testament, inheritance schemes. This is where the non-interactivity becomes really important. You can set up an auditable vault system that pays out a trust fund to all your inheritors without interaction and without having to inform them a priori of what the layout is. It can be proved to an auditor which is important for tax considerations. Anytime you are like “I gave 10 million dollars of Bitcoin to my heirs”, you have to prove when they got access to those funds. That is difficult to do in the current regime. Using CTV you can actually prove that there is only one timed path to redeem those funds. You can set up things where there are opportunities to reclaim your money if you were ever to come back from the dead. If you were lost on a desert island you could come back and there would still be funds remaining in the timed payouts. I am really excited about all the new types of things people are going to be able to do. Vaults I think are a really important use case. Vaults are important not just for individual businesses where you are like “How are we securing our hot wallet stuff?” I think vaults are most impactful for end users where you don’t have the resources to employ people to be managing this for you. You want to set something up where, let’s say, you’ve got an offline wallet that you can send money to and then funds automatically come back online to your phone. But if you ever lose your phone you can stop the flow of funds. I think that is really exciting for CTV in particular, the ability to send funds to a vault address and for that vault address to automatically move funds to your hot wallet without requiring any signatures or anything. The management overhead for a user is very low.
Your cold wallets can just be keys that are only sent to in the event of a disaster. Let’s say your keys are in 7 different bank vaults around the world. You have your vault that you send to and then you don’t have any requirement to actually have those recovery keys unless you have to recover. That is the big difference with CTV vaults: you remove keys from the hot path completely. There is no need for signing, there is just a need to send the funds to the correct place. This vault diagram is not accurate by the way. This is a type of vault. The ones that I implemented that are in the repo are more similar I think to the form that Bryan put out.
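
To make the flow concrete, here is a minimal Python sketch of the vault pattern described above, assuming a simplified template hash rather than BIP 119’s exact serialization; the scriptPubKey placeholders and field layout are illustrative only. The vault output commits to exactly two continuations, a timelocked move to the hot wallet and an immediate push to cold storage, neither of which needs a signature.

```python
import hashlib
from dataclasses import dataclass
from typing import List

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

@dataclass
class Template:
    name: str
    outputs: List[bytes]      # scriptPubKeys this template pays to
    sequence: int = 0         # relative timelock, committed in the hash

    def hash(self) -> bytes:
        # Toy commitment over the outputs and the committed timelock.
        return h(b"".join(self.outputs) + self.sequence.to_bytes(4, "little"))

HOT_SPK = b"<hot wallet scriptPubKey>"      # placeholders, not real scripts
COLD_SPK = b"<cold storage scriptPubKey>"

to_hot = Template("unvault-to-hot", [HOT_SPK], sequence=144)   # ~1 day delay
to_cold = Template("push-to-cold", [COLD_SPK], sequence=0)     # no delay

# The vault address commits to both continuations. No keys are needed on
# the happy path, only the choice of which pre-committed tx to broadcast.
vault_commitment = h(to_hot.hash() + to_cold.hash())
print("vault commits to:", vault_commitment.hex())
```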

MF: I am assuming you are going to have to focus on one or two use cases. To get consensus on this you need to convince people that there is at least one real use case that they are going to get value out of. The flip side is making sure that it is not adding anything to the protocol that we don’t want to add. The upsides and the downsides.

JR: It has been really difficult to be a singular advocate for this because you have to make a lot of conflicting arguments that ultimately work together. If you only told one side people would say “How do they work together?” An example of this is that Bob gives me a bit of grief. “If you had to design an opcode that was specifically the best thing for vaults, would it be CTV?” My opinion is yes. The next question is “It does all this other stuff too. Is that really accurate?” The people on the other side of the fence say “CTV, you really only have a single use case that you care about. Can you show that you have hundreds of use cases? Because we want to have things that are really flexible and general.” I am like “Yes it is very general.” What I am hoping to show with the language I am building is that it is really flexible and it is really good for these use cases. Hopefully that will be available relatively soon. The other argument that is difficult with this is talking about fees with scaling. I am telling everybody this is going to dramatically reduce fees for users but it is also going to increase mining revenue. How can they both be true? You are making better settlement layer use of Bitcoin so the transactions happening are going to be higher fee and you are going to have more users. It is called Jevons paradox if anyone is curious. As the system becomes more efficient usage goes up. You don’t end up saving.

MH: To build on what you just said Jeremy, to combine these different use cases. Could someone speak a bit more about having these batched withdrawals where you use CTV to commit to the withdrawal transaction for users, which then directly opens non-interactive channels? Would that work?

JR: That works out of the box. This is why I am very adamant that replace-by-fee is bad: what I really want to see is a world where I go to an exchange with an address and they have no idea what that address is for, they just pay to it. They pay to it in one of these trees that has one txid or a known set of possible txids for the eventual payout. That lets me immediately start using a channel. What is nice about this is the integration between those two components is zero. I don’t need to tell the exchange that I am opening a Lightning channel. I just tell them “This is my address, pay this much Bitcoin to it.” There is no co-operation. You can imagine a world where you go to Coinbase and you give them a non-interactive channel address. It creates a channel for you. You give them a vault address, it creates a vault for you. You give them an annuity, it gives you an annuity. You can set it up so that there is zero…. If you paste in the address and you send the amount of funds you get the right outcome. I think there is definitely some tooling to support this. I mentioned earlier that having an opcode that lets you check how much money was sent to an address would be really nice. That’s an example that would make this integration a little bit easier in case the exchange sends the wrong amount of money. Most exchanges I know send exact amounts. Some don’t but I think that is a relatively easy upgrade. It also could be a new address type that specifies how much money is supposed to go there so the smart contracting side integrates really easily. Other than that they don’t need to know what your underlying contract is. I think it opens up a world in Bitcoin that works a lot more seamlessly. This is another big scaling benefit. Right now if you want to open a Lightning channel and you have funds on Coinbase you are doing at least one or two intermediate transactions to get the funds into your Lightning wallet and opening a channel with somebody. In this case you get rid of all those intermediate transactions. If you are talking about how Bitcoin is going to scale with Lightning, that is a way to do it without having to convince exchanges to adopt a lot of new infrastructure for opening channels for users.
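
As a rough illustration of the tree structure being described, here is a Python sketch of a congestion-control payout batch. It is simplified: a real CTV commitment hashes a full transaction template per BIP 119, whereas this just recursively hashes a binary split of the payouts. One on-chain output commits to the root, and the tree can be expanded later, when fees are low, in log-depth steps.

```python
import hashlib
from typing import List, Tuple

Payout = Tuple[bytes, int]   # (scriptPubKey, amount in satoshis)

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def tree_commitment(payouts: List[Payout]) -> bytes:
    if len(payouts) == 1:
        spk, amount = payouts[0]
        return h(spk + amount.to_bytes(8, "little"))
    mid = len(payouts) // 2
    # An inner node is a template paying two outputs, each encumbered
    # with the commitment to its half of the batch.
    return h(tree_commitment(payouts[:mid]) + tree_commitment(payouts[mid:]))

batch = [(bytes([i]) * 20, 50_000) for i in range(8)]   # 8 dummy payouts
print("one on-chain output commits to:", tree_commitment(batch).hex())
```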

BM: This is basically one of the major benefits of CTV over deleted keys. A year or so ago I started making a prototype by essentially making a pre-signed transaction and deleting a key, which is a mechanism to do a covenant. But one of the major problems with it is that I have to send from my wallet. I can’t give somebody an address which sends directly to a covenant. As Jeremy has described, with CTV you can because you can put the script right there. It reduces the number of total transactions because, as he mentioned, if you want to open a Lightning channel first you have to send to your Lightning wallet, then you have to open the Lightning channel. It is at least two transactions. The CTV route is more efficient and perhaps more interesting in that someone can send directly to your vault. You cannot do that generally with a deleted key type of covenant.
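
A minimal sketch of the delete-the-key mechanism just described, with an obviously fake placeholder signer so it stays dependency-free; the point is the lifecycle, not the cryptography:

```python
import hashlib
import secrets

ephemeral_key = secrets.token_bytes(32)        # one-time signing key

def sign(key: bytes, msg: bytes) -> bytes:
    # Placeholder, NOT a real signature; stands in for ECDSA/Schnorr.
    return hashlib.sha256(key + msg).digest()

# The fixed spending transaction the covenant should enforce.
unvault_tx = b"<serialized tx paying to the timelocked path>"
presigned_bundle = (unvault_tx, sign(ephemeral_key, unvault_tx))

# Persist the bundle -- this is the "vaulted object" -- then delete the
# key. The covenant only holds if deletion really happened and no copy
# survives, which is exactly the trust assumption CTV removes.
ephemeral_key = None
```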

JR: What I would add is even less so for the vault use case than a scaling benefit, it is a big security benefit for a user. If you did have an exchange that you set up to understand vault protocols you could say “Only allow me to withdraw to vault contracts.” It would have to receive the vault description. You don’t have to have this intermediate wallet that you move funds on that maybe gets hacked with all the money. I think it adds a lot of user security having that story.

KL: I didn’t really catch up on the San Francisco workshop so sorry if it is a question that has been asked a lot of times. How do you compare or differentiate against SIGHASH_NOINPUT in terms of vaults specifically?

JR: With SIGHASH_NOINPUT you can perfectly emulate CTV. It is not the obvious way of using SIGHASH_NOINPUT but it is one way you can use it. It emulates something that is very similar. There are a few drawbacks to that methodology. The first drawback is that it is less efficient to validate and your fees are going to be more expensive. The other drawback is that it is a little bit harder to compile, which is annoying if you are talking about making contracts where you have log(n) possible resolutions but a very wide number of different possible cases. Your compiler is going to be way slower. This imposes a limitation, if you are using these contracts inside of Lightning channels, on how many resolutions you can have. It makes them less useful in a Layer 2 context, negligibly so I guess. Signatures are 100,000 times slower than hashes. It is a lot slower if you are doing a signature based one. You are adding more functionality. SIGHASH_NOINPUT / ANYPREVOUT is less likely to get into Bitcoin. This is what I talked about when I said you have to preserve these really critical invariants in Bitcoin. It is pretty easy to show CTV doesn’t break these, but with the broader set of functionalities that you have around SIGHASH_NOINPUT you do have issues of burning keys permanently because you signed with them. We have all these design constraints around SIGHASH_NOINPUT that have come out around tagging keys and having different specifiers in order to prevent these weird use cases. CTV doesn’t really have the same issues because it is using a hash that is only used for CTV; it is not using keys that are used for general purposes and making them into some sort of toxic waste. I think that is one of the other benefits in terms of security. There are a few other reasons why you would prefer CTV. Future flexibility: if you add OP_CAT later you don’t get new features with SIGHASH_NOINPUT I think, whereas with CTV you get a bunch of new types of custom template contracts that you can write. It has a better upgrading path in the future as well. With CTV the hashes are versioned, so if you add a new version to the hashes you can add a new sighash flag field basically. So there is more flexibility down the road than with SIGHASH_NOINPUT functionality. Strictly speaking I would be very happy if SIGHASH_NOINPUT, ANYPREVOUT or ANYSCRIPT were to get merged because that would let me do it today, but I think it is less likely.
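
For reference, the emulation mentioned at the top of that answer works roughly like this sketch (schematic Python; the opcode name is real, everything else is a stand-in): a NOINPUT signature over the desired template is baked into the output script itself, so the script can only be satisfied by a transaction matching that template.

```python
# Schematic: an output script that enforces a fixed template via a
# pre-made SIGHASH_NOINPUT signature embedded in the script. The script
# only evaluates true for the transaction the signature commits to.
template_sig = b"<noinput signature over the fixed template>"
throwaway_pubkey = b"<pubkey whose private key can be discarded>"
script_pub_key = [template_sig, throwaway_pubkey, "OP_CHECKSIG"]

# The cost asymmetry from the discussion: each spend verifies a
# signature here, versus a single hash comparison for CTV.
print(script_pub_key)
```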

Vault designs

MF: We’ll move onto vaults. Obviously for those who don’t know, some vault designs need CTV, other vault designs don’t. I think Kevin (Loaec), who will speak about his design next week, has found a way around needing CTV. He did say in the interview with Aaron van Wirdum that ideally he would have used CTV if it was available. In terms of resources that we have on this Pastebin, one of the early ones is a post from Bob McElrath on reimagining cold storage with timelocks. Bob, do you want to talk through this post?

BM: I can give a brief description. This was published shortly after Gün Sirer et al published their paper on covenants. It was around the time that the timelocks came out. At the time, before Jeremy had the idea of doing CTV, there was no covenant mechanism. There have been about five different covenant mechanisms that have been proposed, none of which are active today on Bitcoin. These are all covered in the talk that I gave. There are probably more. The only thing that was actually available when I started our project over a year ago was deleting keys. That is what that post was about. There is historical precedent for this. Back in the olden days, like in the old West, people would create a bank vault with a physical timelock on it. In other words the bank operator goes home at 6pm or whatever and locks the vault such that a robber can’t get into the vault at night whilst he is away. This is a physical example of a timelock. At the time timelocks had just come out and enabled some of these use cases. The picture for the vault use case is that there are two spending branches, one of which is timelocked and one of which is not. The timelocked branch is your normal operation. This is exactly opposite to the way Lightning works. You want to enforce a timelock on your funds such that you yourself can’t spend them until, let’s say, 24 hours passes. There is an unvault operation. You have to take your funds and you have to unvault them. In the case of CTV or something like that you are broadcasting the redemption transaction. Or if you have deleted keys you are broadcasting a pre-signed transaction. In the case of Revault they don’t use deleted keys but they do make a big multisig and they pre-sign this transaction. The whole point of that blog post was that once you’ve done that, the signed transaction or the CTV script is a vaulted object. I can then figure out what I do with that vaulted object. How do I move it around? Where do I store it securely? When do I broadcast it? This signed transaction is of somewhat lower risk than bare private keys. As I mentioned, there are two spending paths. One is timelocked and the second is not timelocked but has a different set of keys in it. That is your emergency back out condition. The point of the whole vault construction is that if somebody gets into your wallet and they get the bare keys they will presumably get the ones that are timelocked. If you see them unvault one of your transactions you know a thief has gotten in. You can go get the second branch of keys out of emergency cold storage and use those to reclaim the funds before the thief can get the funds.
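
The two-branch structure Bob describes can be written down as a miniscript-style policy; this is a sketch (the key names are placeholders, and 144 blocks stands in for the ~24 hour delay), embedded in Python for readability:

```python
# Two spending branches: the timelocked normal path, and the
# non-timelocked emergency path with a different key set.
vault_policy = (
    "or("
    "and(pk(hot_key),older(144)),"   # normal spend: hot key after ~1 day
    "pk(emergency_cold_key)"         # back-out: cold key, no delay
    ")"
)
print(vault_policy)
```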

BB: We should not be recommending that.

BM: I am describing what the blog post says. We will discuss reasons why we shouldn’t do that.

BB: You should pre-sign a push transaction instead of having to go to your cold storage to get keys out. That is the obvious answer. You are saying that you go to cold storage when there is a problem and you use the keys to fix the problem. But really you should have a pre-signed push transaction pushing to the cold storage keys.

BM: Yes. This starts to get into a lot of design as to how you organize these transactions. What Bryan is discussing is what we call a push-to-recovery-wallet transaction. The thief has gotten in. I have to do something and I am going to push this to another wallet. Now I have three sets of keys. I have the spending keys that I want to use, I have my emergency back out keys, and then if I have to use those emergency back out keys I have to have somewhere to send those funds that the thief wouldn’t have access to. These vault designs end up getting rather complicated rather fast. I am now talking about three different wallets, each of which in principle should be multisig. If I do 2-of-3 I am now talking about 3 devices. In addition, when this happens, when a thief gets in and tries to steal funds, I want to push this transaction. Who does that and how? This implies a set of watchtowers, similar to Lightning watchtowers, that look for this event and are tasked with broadcasting a transaction which will send it to my super, super backup wallet.

BB: One idea that I will throw out is that in my email to the bitcoin-dev mailing list last year I pointed out that what you want to do is split up your coins into a bunch of UTXOs and slowly transfer them over to your destination wallet one at a time. If you see at the destination that something gets stolen then you stop broadcasting to that wallet and you send to cold storage instead. The other important rule is that you only allow one UTXO to be available in that hot wallet at a time, by for example enforcing a watchtower rule. If the thief steals one UTXO and you’ve split your coins into 100, by definition they have stolen one percent. Then you know and you stop sending to the thief. Bob calls it a policy recommendation.
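
A small Python sketch of that trickle-one-shard-at-a-time rule, with placeholder functions standing in for broadcasting and for the watchtower’s detection logic; the invariant is at most one shard in flight, so a theft loses at most one shard (1% when the coins are split into 100):

```python
def broadcast_unvault(shard):
    print("unvaulting shard", shard)          # placeholder for tx broadcast

def wait_for_spend_or_timeout(shard):
    print("shard", shard, "settled")          # placeholder for watchtower wait

def trickle(shards, compromised):
    remaining = list(shards)
    while remaining:
        if compromised():
            # Theft detected: sweep everything still vaulted to cold storage.
            return ("sweep-to-cold", remaining)
        shard = remaining.pop(0)
        broadcast_unvault(shard)              # invariant: one shard in flight
        wait_for_spend_or_timeout(shard)
    return ("done", [])

print(trickle(range(5), lambda: False))
```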

MF: There are different designs here. I am trying to logically get it in my mind. Are there certain frameworks that we can hang the different designs on? We will get onto your mailing list post in a minute Bryan. Kevin has got a different design, Bob seemed to be talking about an earlier design. How do I structure this inside of my head in terms of the different options? Are they all going to be personalized for specific situations?

JR: The way I have been thinking about it is in terms of what the base layer components are. I think that in a vault essentially what you are looking at is an annuity. You are setting up a fixed contract that has some timing condition on every next payment. At the base layer this is what you are working with in most good vault designs, though in some you would do something a little bit different. The value flows along that path. At any point you can cancel the annuity. If you cancel it the amount remaining goes back somewhere else. Or you can spend the amount that has been redeemed so far. Everything else exists either as key policies on who can take those pathways or as policies on which actions you should take if you observe certain conditions. Those exist at a higher order of logic. Does that help a little bit? That backbone exists in all of these proposals. It is just whether or not you are using multisigs or you are using push-to-recover or you are using signing paths for the cold storage cancel path.

Bitcoin upgrade path to enable better vault designs

KL: Another important thing is what is doable today and what is not. The kind of vault that for example Revault describes is practical today. You don’t need to change Bitcoin to make it work. Of course we are very far from having a properly blockchain enforced covenant. We have to use some tricks around it, either deleting private keys or, in our case, co-signing servers, which are very far from being perfect. At least we can somewhat emulate the fact that the output is pre-defined to follow certain rules. The early papers on covenants usually required a new opcode and that is a big problem. Who is going to work on creating an implementation of that if you don’t know if the opcode is going to be added to Bitcoin? That is what Jeremy is facing right now. He has been working hard for a few years on his opcode but you still have this uncertainty. When is it going to be added? Is it going to be 6 months? Is it going to be 2 years? Is it going to be 5? It is really hard when you are trying to push an idea like vaults, especially for businesses, because it is a security product and you are required to work a lot on the implementation itself. You cannot rely on assumptions like whether this specific opcode is going to be added, or whether there are going to be some major changes to the BIP that break your implementation. So for me there is also this separation between what can be done today practically, even if it is not perfect, versus what would be the perfect way of doing it where everything is enforced by Bitcoin itself.

JR: I think that is a super useful distinction to be drawing because it is looking at the trust models. At the same time, and I think Bryan has publicly confirmed this idea, the way CTV, multisig, pre-signed transactions and key deletion interact is that they are all basically perfectly interoperable. You could have a system with minimal changes between A and B where you are using one or the other security model. Feel free to disagree, but if CTV were available this conversation wouldn’t be happening. We would all agree that CTV was the easiest thing to work with for this goal. There may be some questions around fees but I think this is the design space. The question between Revault and “The Team” is really around whether you prefer pre-signed deleted keys or a multisig server. That is ultimately a user preference. That should be a check box. If you have to choose between pre-signed and a multisig server which one do you prefer?

BB: Another interesting way to distinguish that even further is that for secure key deletion, that works really well when a user is the primary beneficiary. It works less well when it is a group of mutually distrusting parties.

BM: You run into serious problems for instance with audits. How do I prove that the vault exists? You basically can’t. I think everyone on the call would agree that CTV is the best solution here. I know Jeremy has been very frustrated and has been working on this for a long time. Everyone is basically hedging their bets. As Kevin just said, “Maybe we will get this BIP, maybe we won’t. Maybe we will go in a different direction because we are not sure.” It would be terribly fruitful if everybody on this call could get behind these ideas. There has been very little response to Jeremy’s work. There are no responses to Bryan’s post on the mailing list. We have all got to get together and decide: do we want this or not?

BB: So about that, I know this has been an issue for a lot of us including Jeremy. Getting public feedback and review and interest. I would say from my email there has been a lot of private feedback and conversations like this. They just don’t show up on the mailing list because none of us can be bothered to write the same things twice, which is a bit of an issue. This is a pet peeve of mine. It would be wonderful if there was only a single place where you had to check to see the latest update on things.

BM: It creates a perception that no one cares because there are no responses. I think this is definitely not the case. We need to rally round something, one way or another.

MF: Maybe it is just a site like Jeremy’s, utxos.org but for vaults. Then there is at least a centralized place where you keep going for updates. The big thing hanging over all of this conversation is that I don’t think there are too many people who want to discuss future soft forks until Taproot is done. People are so busy on getting Schnorr and Taproot in and nothing else is going to be getting into that soft fork. That is still a long way off. That is the big question mark, whether people have the time.

JR: I think this is a pretty big mistake. Maybe this is something as a community we can work on. With this group of people we could really make an impact on this. Taproot is still having a lot of changes. It is still not a very stable proposal. Taproot is great and I really want to see it. But that is the reality. There are changes that happened a week or two ago for the signature algorithm that are being proposed. There are changes to the point selection, whether it is even or square. The horizon is perpetually looking a year out on it being a locked down document. I know that CTV has not had the level of review that Taproot has had but it is substantially simpler. I don’t think it requires any changes at this point. I do think we could have a concerted push towards getting it reviewed and slotted for roll out; there is no formal schedule for when changes have to be delivered in Bitcoin. They can be rolled out via soft fork when they are ready. I think the question is, if we have a room of five people that are all saying that our lives would be made easier if we had CTV, then let’s get it done.

MF: The argument against that though is it would be rushed to try to get it in before Taproot. All efforts towards Taproot and then future soft forks once Taproot is in.

JR: Why though?

BB: One conceptualization of Bitcoin Core review capacity is that it is a single mind’s eye. We can only very slowly move together and carefully examine things all at once.

JR: I think there is value to that. We only have so much focused capacity. I would make the suggestion that if that is the case then we did Taproot in the completely wrong way. We really should have done Schnorr and MAST as two separate things so that we could checkmark progress along the way, rather than all at once in a way that is going to take years to roll out. There is other stuff that is important work to get done. This is my question for vault implementers generally. I think vaults are one of the most compelling new use cases for Bitcoin to dramatically improve user security. My question is: does Taproot or CTV do more for the practicality of you being able to deliver vaults to your users?

MF: In the small case of vaults maybe. Obviously there are lots of other use cases.

JR: With CTV there are many other use cases too. That’s the general question, more rooted in the CTV side. The question just focused on vaults, because we are in the vault section, is: what is the wishlist of things you need to make vaults better in Bitcoin, and how do we as the Bitcoin community deliver on these features to make this viable? If it is not CTV I don’t care. If we need something else, what are the actual things that we need to be focused on in order to make vaults really work for people?

KL: For us for example Schnorr and Taproot would be a really good improvement already. Maybe if you have a proper covenant you can prevent a theft. Although at some point you need to be able to move your funds to wherever you want. Having a vault is more of a deterrent against an attack than a way to completely be in your own bubble where nobody can move funds outside. At some point you will need to be able to move funds outside. For us Schnorr and Taproot is a really important thing because it would completely hide the fact that we might be using vaults. It also hides some of the defense mechanisms, especially around the emergency transactions that Bob also uses. One of the things that I wanted to cover is that multisig today is cool but what we can do with Schnorr and Taproot is much more powerful. That would be extremely useful for vaults in my opinion.

BM: When you use Taproot your timelock script has to be revealed. A timelock is an opcode and you have to reveal it. One of the major benefits of Schnorr is that you can just sign a multisig. This is great for protocols like Lightning. Because the timelock in vaults works in the opposite way all of your spends have to reveal the Tapscript that contains the timelock. You lose a lot of privacy doing that.

JR: I don’t think that that is completely accurate. If you are signing and you are using pre-signed or multisig you sign the nLockTime and nSequence field without necessarily requiring a CheckLockTimeVerify enforcement. Does that make sense?

BM: Yes. There are a couple of ways to do timelocks there.

JR: That’s the thing with CTV that is interesting. You have to commit to all the timelock fields so you don’t need CheckLockTimeVerify or CheckSequenceVerify. It is just automatically committed to. It is the same if you are doing pre-signed transactions. You don’t need those opcodes unless you want to restrict that a certain key can’t be used until a certain time.
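
A small sketch of the point being made here, assuming a simplified preimage rather than BIP 119’s exact field list: because the template hash commits to nLockTime and the input sequences, changing a timelock changes the hash, so no CheckLockTimeVerify or CheckSequenceVerify opcode is needed in the script.

```python
import hashlib
from typing import List

def template_hash(outputs: bytes, n_lock_time: int, sequences: List[int]) -> bytes:
    # Simplified commitment over the outputs and the timelock fields.
    preimage = (
        outputs
        + n_lock_time.to_bytes(4, "little")
        + b"".join(s.to_bytes(4, "little") for s in sequences)
    )
    return hashlib.sha256(preimage).digest()

# Any change to a committed sequence yields a different template hash.
assert template_hash(b"outs", 0, [144]) != template_hash(b"outs", 0, [143])
```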

MH: Taproot is awesome, CTV is awesome. Why not both? Could we get CTV as part of Tapscript so the new opcodes that are being introduced with Taproot, could CTV be one of them?

JR: I’m not aware of any new opcodes currently being proposed with Tapscript. There might be some with slightly different semantics, like the signature stuff, that I’m not sure of. It is not like we are adding new things there. The reason why: in my original proposal for CHECKOUTPUTSHASHVERIFY last year I was like “It looks like Taproot is happening soon so let me propose it as using these new extensions.” Then months went by and Taproot didn’t seem to be making strong headway so I said “Let me do this as a normal OP_NOP upgrade because it is really not a dependent feature.” I think that is better for Bitcoin because saying that you will only get CTV if you accept Taproot is worse for the network’s ability to independently consider changes. That is one reason to not layer them. The larger question you are asking on why not both? Yes, let’s get both in. I think it is a question of what is feasible to get done on our engineering timelines. One of the reasons why I think we want to get CTV very soon is it does help with congestion. It will take a year or two for people to employ those congestion control mechanisms and we are already seeing a major increase in fees right now. We have to be really focused on improving our fee situation. Taproot helps a little bit with fees but the reality is that most users are using very simple keys. Hopefully we can change that, hopefully we will add more user security. Right now the majority of transactions are not materially going to be made more efficient with Schnorr and Taproot. The majority are simple single signatures. Maybe validation will be faster but we are not increasing capacity or decreasing fees. I think we need to be doing things in that category. That’s why I think there is some urgency to do this. I think this is doable in a month. I’m not trying to advocate that timeline but I think that is the amount of review that this idea would take for people to seriously get comfortable with it. Taproot, I have reviewed it many times and I am still not completely comfortable with it. It is inherently going to take a long time. To Bryan’s point about whether we should have the eye of review on a single topic: as a community we need to only put our eye of review on things that we can process more quickly. If it is things that are very slow to process we are not going to be nimble enough as a project to actually deal with issues that come up, if we are stuck on things that are like a three year roadmap.

MF: I asked Christian Decker about SIGHASH_NOINPUT. He was very much of the opinion that we don’t want to change the proposal now. Any thought of adding new stuff in is potentially going to open up Pandora’s box or a can of worms where it starts up all this discussion and disagreement that we want to avoid.

AG: It was interesting to hear that discussion of motivations. I am hearing both security improvements and this congestion control focus as practical reasons for trying to get this out fairly quickly. I’m curious though. Let’s say we got Taproot quickly; it would take a long time for it to have any impact on wallets and it probably wouldn’t address fees immediately in any realistic sense. I can certainly see those arguments. I am a bit worried about what this looks like. Suppose CTV was deployed today, what are wallets going to have to do to make best use of this congestion control feature? You might argue nothing but I have a feeling in practice it is going to be a lot of infrastructure work.

BM: It is a lot. They have to understand the tree structure.

JR: That is only marginally accurate. It depends on what wallet you are asking about. There are two classes of wallet that we’ll look at. Let’s look at infrastructural wallets like Coinbase or Kraken and then user wallets. They both have different requirements. If you look at infrastructural wallets they have a really hard time changing what their internal keys look like. BitMEX for example still uses uncompressed public keys. Why? They made a really great custody engine and is it worth it for them to change the code there? It has worked, they haven’t had a hack of it that I recall. If they were to change that then there is risk and risk costs a lot of money. For them maybe one day they will get SegWit. But they are probably not going to adopt even these better multisig things for a decade. For them changing their own internal key type is really hard. Changing their output type is actually a little bit less challenging because they are just changing what address they are paying into. They have been able to do things like batching sooner. Batching is much more complicated than SegWit, keep in mind. Batching has a lot of very weird edge cases that you have to deal with in order for batching to not result in loss of funds. But they have been able to add things like batching. For CTV, all it is doing at the layer where they decide which transaction they are going to spend to is adding a single new address they are going to be paying to cover their liabilities. On the receiving end, for CTV users who have existing wallets, those wallets just need to be able to understand an unconfirmed transaction in order for this to work today. I think that most wallets already understand an unconfirmed transaction. So it should work reasonably ok. At the exchange, infrastructural wallet layer they can also guarantee some of the execution of those expansions. I think Optech has a good write up of this. If you are willing to wait up to a day of blocks to get final confirmation you can save 99 percent on fees. They can take that advantage to get the confirmation at the time of request. They as the infrastructural wallet make the full redemption whenever fees are low enough later that day. I think you will see a benefit that is pretty easy to migrate to without too many changes: user wallets can understand unconfirmed transactions, and the processing to get fully confirmed can be handled by the exchange. I think it is easy to deploy in the relatively near term. But the more sophisticated use cases are absolutely going to take a longer amount of time.

AG: For myself I find it a little bit unclear. My main feeling about it is that it is going to be a struggle to convince the hoi polloi of exactly what is going on here. As you say wallets already understand unconfirmed. If that is all we are talking about then people will just say “Why aren’t you sending me my transactions quickly enough?” Most ordinary users just think unconfirmed is nothing and that is why they are generally willing to spend more in fees than they should be willing to spend. They don’t really get it. I don’t think they are going to get this either.

JR: I think it depends. Ultimately with this the only change that the wallets would need to have is to tag things that are observable as a CTV tree as being confirmed and treat them as confirmed. That is very minimal. I have made that change for Bitcoin Core’s wallet. It is like a 30 minute change. It is not that hard. It is just a question of whether they have updated software or not and whether it shows up as fully confirmed or unconfirmed. It is hard to get wallets to upgrade but it is not the largest change around. There is this weird curve where the wallets that are worse just always spend from unconfirmed, so it is not a problem for them. The wallets that are better separate them out, but also people who are using those wallets are more likely to receive an upgrade. I don’t think the roll out would be awful for this type of stuff. It would be we go to the exchange and we ask them “Why isn’t this confirmed?” and they say “No it is. Upgrade your wallet and you will see it.” For users who aren’t sophisticated that is a sufficient story.

BM: Does it require an additional communication channel between the receiver’s wallet and the sender’s wallet? The sender has to send the tree?

JR: Just the mempool.

BM: You have the whole tree in the mempool?

JR: Yes. Congestion is really for block space. Unless you have a privacy reason for not showing what the total flow is you can always just broadcast a transaction and then it will live at the bottom of the mempool. That is how people learn of transactions right now.

BM: Another interesting thing to think about here is whether wallets in the past have upgraded to new features in general. As mentioned the vast majority of transactions out there are pay-to-pubkey-hash (P2PKH). Why is that? Most wallets don’t even use pay-to-script-hash (P2SH) or multisig. Why? The answer is because everybody who is making wallets is also making s***coin wallets. In order to have a uniform experience and uniform key management for let’s say Ethereum and Bitcoin, what they’ve done is go toward using a single key for everything. And adding things on the back end like multiparty ECDSA so that it is actually multisig on the back end. Unfortunately I don’t think this dynamic is going to go away anytime soon. In my experience very few vendors have implemented some of the more advanced features on Bitcoin.

JR: I think that is a great point. One of the things I am worried about for Taproot for example is the actual roll out in wallets is going to be ridiculously slow. It is going to be a new signing thing and wallets are already existing with a single seed format. They are not going to want to rederive a new thing. I think it is going to take a very long time for that adoption to pick up for user wallets. That is one thing that is nice with the CTV rollout. All they have to do is what they are already doing. Most of these wallets already show an unconfirmed balance, especially the s***coin ones. They show unconfirmed balance because they are zero confirmation wallets.

BM: The benefit of using the tree is that you don’t have to put it all in the mempool. If I am going to put everything into the mempool anyway I might as well have not done the tree?

JR: That is not quite true. You can always broadcast anything that will show up in the mempool somewhere, but what is important to keep decongested is the top of the mempool. As for the actual mempool itself, it is fine to put these things in and then they get propagated around. If the mempool backlog grows they get evicted. That is fine, that is an ok outcome. You don’t want to be in the situation where you have so much stuff in the mempool that high value transactions that are completely unconfirmed get kicked out of the mempool; while they are unconfirmed their inputs could be double spent, so eviction is much more problematic for those users. When you are a user and something goes in the mempool, you see it, you observe it. If it applies to you, you store it in your wallet even if it goes in and out of the mempool. It just has to go into some mempool. Most of these wallets are not using a mempool on their own wallet. They are using a mempool on the server of whoever is providing the wallet. Those can be configured to be watching for the users’ keys or whatever. Or you are filtering and they can be configured to be storing…. The mempool can be terabytes big. It doesn’t need to be a small thing. It only needs to be small if you are a miner. If it is too big and you are trying to mine, a big mempool is more problematic.

Other vault related ideas

MF: Before we go onto Bryan’s mailing list posts there are a couple of other pieces of work that I added to that Pastebin. One was Peter Todd’s work on single use seals. Another was Christopher Allen’s work on smart custody. Are any of these of interest to people? Any thoughts on these guys’ work?

AG: I just want to mention how incredibly easy to understand single use seals are. It was an extremely sarcastic comment.

JR: I have always found them and their description completely inscrutable. If somebody feels like they can take a shot at that.

BB: Apparently RGB is using this in their code.

AG: If we just read some C++ code it would be easier to understand than the mailing list post.

BB: Code is the universal language.

BM: I think I can describe it pretty simply but I don’t know how it is related to vaults. A single use seal is something that you can use once, like the tamper-evident tag on a shipping crate.

BB: We understand what it is. It is just Peter Todd’s description of it and how it applies to Bitcoin.

BM: Peter Todd’s description is inscrutable, I agree.

JR: Is it just spending a UTXO? That is all we are talking about?

BM: Spending a UTXO is an example of a single use seal. You can only spend a UTXO once.

AG: My sarcasm is one thing. I do think there is something interesting there. His idea of client side filtering but it is pretty abstract. It is perhaps not a topic for today, I’m not sure.

BM: I don’t know how it relates to the vault topic.

MF: Apologies for that if I am introducing noise. The smart custody stuff that Christopher Allen worked on is relevant.

BB: I was a co-author with Christopher Allen on some of the Smart Custody book along with Shannon Appelcline and a few others. It was the idea of let’s put together a worksheet on how to do custody for individuals, how to safely store your Bitcoin using hardware wallets. The sort of planning you might go through to be very thorough and make sure you have checklists and understand the sorts of problems you are trying to defend against. The plan was, and I think it is still the plan, to do a second version of this smartcustody.com work for multisig which was not covered in the original booklet.

JR: I would love that. I have some code that I can donate for that. Generating on your computer a codex which is a shuffled and then zipped BIP 39 wordlist. Then you take the wordlist and you use it as a cipher to write your seed in a different set of words. You give one party your cipher and you give the other party the encrypted word list. What is the point of that? You have a seed that is on paper somewhere and now you want to encrypt it and give a copy to two of your friends so that you have a recovery option. I feel like having little tools to allow people to do that kind of stuff would be pretty nice. Being able to generate completely offline backup keys and shard out keys to people.
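
A sketch of that codex idea in Python, assuming a placeholder wordlist standing in for the real 2048-word BIP 39 list: shuffling the list yields a substitution cipher over seed words, one party holds the shuffled list and the other holds the enciphered seed, and both parts are needed to recover.

```python
import random

WORDLIST = [f"word{i:04d}" for i in range(2048)]   # placeholder wordlist

rng = random.SystemRandom()
cipher = WORDLIST[:]
rng.shuffle(cipher)                                # party A keeps this list
index = {w: i for i, w in enumerate(WORDLIST)}

def encipher(seed_words):                          # party B keeps the output
    return [cipher[index[w]] for w in seed_words]

def decipher(enc_words):                           # both parts recover the seed
    back = {w: i for i, w in enumerate(cipher)}
    return [WORDLIST[back[w]] for w in enc_words]

seed = [WORDLIST[3], WORDLIST[1999]]
assert decipher(encipher(seed)) == seed
```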

BB: Definitely send that to Christopher Allen. He also has an air gapped signing Bitcoin wallet based off of a stripped down iPod touch with a camera and a screen for QR codes. That is on Blockchain Commons GitHub.

MF: Let’s go onto to your mailing list posts Bryan. There are two. Bitcoin vaults with anti-theft recovery/clawback mechanisms and On-chain vaults prototype.

BB: There is good news actually. There were three. There were two on the first day. While the first one was somewhat interesting it is actually wrong and you should focus on the second one that occurred on that same day, which was the one that quickly said “Aaron van Wirdum pointed out that this is insecure and the adversary can just wait for you to broadcast an unlocking transaction and then steal your funds.” I was like “Yes that’s true.” The solution is the sharding which I talked about earlier today. Basically the idea is that if someone is going to steal your money you want them to steal less than 100 percent of your money. You can achieve something like that with vaults.

KL: Something else really interesting in it is that Bryan also takes the path of deterring an attack. I think, Bryan, in the last step you always burn the funds, although maybe it is not always?

BB: It is only if you are in an adversarial situation but yes.

KL: I think it is really cool because the whole point of this type of approach is not to be hard on the user but to make the funds hard to steal. To deter the attack in the first place.

BB: My vault implementation is not the same as the version being implemented at Fidelity. It is a totally different implementation. There is some similarity. I admit this is very confusing. There are like five different vault implementations flying around at the moment. Jacob on the call here has his own little implementation which is number 5. Jeremy has his which is 6. There is mine and the one at Fidelity based off of secure key deletion and also some hardware wallet prototypes. Then there’s Kevin’s Revault. I’m sure I’m forgetting one at this point.

MF: There is a lot going on. I’m assuming some of them are going to be very specific to a certain use case. I don’t know what is going on at Fidelity. Perhaps they have special requirements that other custodians wouldn’t have.

SH: I can speak to what we’re doing at Fidelity. This is what I am working on. As you may know, in January 2019 we released FDAS, which is Fidelity Digital Asset Services. It is custodianship for institutional clients. The deleted key vault that we are working on is open sourced. We do have a work in progress implementation on our public facing GitHub page. The interesting part is the Vault-mbed repo which is currently under refactoring. We are not looking at extra functionality at the moment. I’d be happy to answer any questions that someone may have.

MF: The next resource is Bryan’s Python vaults repo which is one of those particular implementations. It is in Python so I am assuming this is a toy, proof of concept type thing.

BB: Yes. This is definitely proof of concept. Don’t use it in production. The purpose was to demonstrate that this could all work, to get some sample transactions and use them against Bitcoin regtest mode. It works and I am definitely open to feedback and review about it. One of the interesting things in there is that there is both a default version that uses secure key deletion, a pre-signed transaction where you delete the key, and also an implementation using BIP 119 (OP_CHECKTEMPLATEVERIFY). An interesting note, Jeremy has been polite enough to not bring it up, Jeremy’s version in his branch of Core is substantially more concise and I am a little confused about that. I’m not sure why mine isn’t as concise as yours.

JR: I think I benefit a lot from having implemented it in Core. You have access to all the different bits and bobs of Core functionality, wallet signing and stuff like that. It is an interesting point. Let me find a link so I can send out this implementation to people. I was trying to think about how I write this as a template meta program so that I have all of these recursion things handled in terms of “Here is a class that attaches to another class and there are subclasses.” I think that is a nice model. I also spent some time trying to make a template meta programming language for C++ that allows you to write all different types of smart contracts. I really hit a wall with that. What I have built now, setting up for the big punchline, is this smart contracting language that I have been trying to hype a little bit. It is called Sapio. It isn’t released yet but hopefully soon I will get it out there. If you think the implementation I have in Core is concise wait until you see this one. The whole thing is like 20 lines of code, and then thousands of lines of compiler. There is a trade-off there. I am hoping the compiler that I am working on will be general purpose and I think this is something that I’d love to follow up later with everyone who is working on vaults, because I think everybody’s vaults can probably be expressed as programs in this language. You will probably save a lot of lines of code. Maybe we can put communal effort on the things that are the same across all implementations. Things like how do you add signatures? How do you template those out? How do you write the PSBT finalizers? All that kind of stuff is general logic.

Q - Can you describe this language briefly? How does it compare to say Ivy or Miniscript?

JR: It is a CTV based language. You can swap out the CTV with emulated single party pre-signing or a multisignature scheme, depending on the security model you are willing to accept and whether you have the feature available or not. Ivy and Miniscript are key description languages. They operate at the level, in a metaphor, of “What is my house key? What is my car key? What is my bus pass? What is my office key?” This language operates at the level of commutes. You say “I leave my house, I lock my door, I go to my car, I unlock my car, I start my car, I drive to my office. Then I unlock my office.” Or I walk to the train station, I take the train, I walk to my office and then I unlock the office. It is describing the value flow, not just a single instance of an unlocking. Ivy and Miniscript describe single instances of unlocking. This language is a Turing complete meta programming language that emits lists of Bitcoin transactions rather than emitting a script. That’s the succinct version.
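
Sapio was unreleased at the time, so as a purely illustrative sketch of the “emit lists of transactions” idea (all names here are hypothetical), a contract can be a small program whose output is a tree of transaction templates rather than a script:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Tx:
    label: str
    children: List["Tx"]   # transactions this template commits to next

def vault(depth: int) -> Tx:
    if depth == 0:
        return Tx("final-payout", [])
    # Each step commits to one timelocked payout plus the rest of the annuity.
    return Tx(f"step-{depth}", [Tx(f"payout-{depth}", []), vault(depth - 1)])

def flatten(tx: Tx) -> List[str]:
    return [tx.label] + [label for c in tx.children for label in flatten(c)]

print(flatten(vault(3)))   # the "list of transactions" the contract emits
```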

Revault design

MF: Just to bridge to next week’s presentation with Kevin and Antoine, I thought we could discuss Revault. Aaron tried to do this in the interview with Kevin and Antoine which was tease out the differences between Revault and some of the other vault designs such as Bryan’s and Fidelity’s.

KL: With Revault the main thing is that when we started it we started with a threat model and a situation which is different. It was a multiparty situation where we had a set of participants being the stakeholders and a subset of participants, for this specific client at least, that were doing the day-to-day movement of funds. To explain more clearly, they are a hedge fund; there are different participants in the hedge fund but only a subset of them, the traders, move funds day to day. The threat model includes internal threats, like the two traders trying to steal the money of the fund. This is something quite new, or at least not well covered in other proposals until now. Hopefully we will see more soon. There is the external threat model that a multisig covers. Then you have the internal threat model. The two signatures of the two traders are not enough because you want to include the others as some kind of reviewers of transactions, or whatever you want to call them. Of course there is the main threat for most companies and people today in Bitcoin which is the physical threat. Somebody coming to you, the five dollar wrench attack. We are quite lucky to not have too many of those, but when you look at the security of for example exchanges today you would be really surprised to see 500 million or even a billion USD of Bitcoin being secured behind a single key. If you find the right people to threaten then you might be able to steal the funds, which is super scary. We are trying to address that kind of stuff. Another difference is that because it is for business operations we are trying to reduce the hassle of using it. Most security measures are defeated if they are complex to use; people are going to bypass them. This is also very problematic, so you want the traders to be able to do their job. If every time they are doing a transaction they have to ask their boss if it is ok to move the funds and check the destination address, amounts etc then it is never going to be used. They are going to bypass it somehow. We are trying to move the validation from being some kind of verification after the transaction is created to the opposite. When the funds enter, when somebody deposits money to this hedge fund or whoever uses Revault, the funds are locked. The funds are locked by default. Then you would need all the stakeholders, in my example it is four, to pre-sign the set of transactions to be able to move them. By default they are locked, and then you enable spending by having everybody pre-sign the transactions that move them and revault them. In case of an attack you need to have a pre-signed transaction to put them back in the vault. After that the spenders, the two traders, are able to craft a transaction. That will be timelocked; you will have a delay before it is mined. Different conditions can be triggered to revault the funds if something is wrong. This could be enforced by the stakeholders, or by third parties like watchtowers and other things like that. Of course it is more complex than that because we are really trying to emulate something like CTV with the tools that we have today. It is not really simple. Personally I am not fond of deleting private keys. Secure key erasure is not something I really like and I am trying to avoid it in the design. At the end of the day it creates other problems. We are having to use co-signing servers today, which is not cool. I don’t know how we will implement that properly or if we can remove it.
Antoine, who is not on the call today, is actively working on trying to remove this co-signing server but that might mean that we are moving towards secure key deletion as well. I think it is a trade-off and you have to see what risk you want to take. Secure key deletion creates other burdens around backups because you need to back up every pre-signed transaction. I hope that is a good primer. I don’t know if you have any questions. It would take a long time to dig into it I think.

MF: The few main differences that you talked about in that podcast with Aaron van Wirdum are a multiparty architecture rather than single party, and pre-signing the derivation tree depending on the amount. In Bryan’s implementation you need to know the amount in advance.

KL: In the original architecture from Bryan last year the funds are not in the vault before you know exactly how many Bitcoin you want to secure. You have to pre-sign your whole tree and then you move the funds in. From that time the funds are protected. Of course that is not usable in normal business operation. At least you would have to consider a step before that where your deposit address is not part of the vault system. It is doable, you can do a system like that, it is not a problem. But it is not part of the vault before pre-signing the transaction because of the deleted private keys and things like that. We don’t have this problem. Our funds are secure as soon as they enter. Different trade-offs.

Mempool transaction pinning problems

MF: Before we wrap I do want to touch on Antoine’s mailing list post on mempool transaction pinning problems. Is this a weakness of your design Kevin? Does Bob have any thoughts on this?

BM: I think Jeremy is probably the best to respond to that as he is actively working on this.

JR: All I can say is we are all f***ed. It would be great if there was a story where one of these designs is really good in the mempool. It turns out that the mempool is really messy. We need to employ 3-5 people who are just working on making the mempool work. There is not the engineering budget for that. The mempool needs a complete rearchitecting and it doesn’t work in the way that anybody expects it to work. You think that the mempool is supposed to be doing one layer of functionality but the reality is the mempool touches everything. The mempool is there in validation, it is there in transaction relay and propagation and it is there in mining. It needs to function in all those different contexts. It needs to function quickly and performantly. What you end up getting is situations where you get pinned. What pinning means is that the mempool has all these denial of service protections built into it so that it won’t look at or consider certain transactions. Because the mempool is part of consensus it is not what it sounds like. It is not a dumb list of memory that just stores things, it is actually a consensus critical data structure that has to store transactions that could be in a valid block. It has to be able to produce a list of transactions that could go into a valid block. Because of that you really tightly guard how complicated the chains of transactions that can get into the mempool are. People have issues where they will do something that is slightly outside of those bounds and they are relying on the mempool being able to accept a transaction for a Lightning channel let’s say. Because the mempool is quite big you have things that are low fee at the bottom that can preclude a new high fee transaction coming in, which prevents you from redeeming a Lightning channel, which means you are going to lose money. Especially for Lightning, especially for cross-chain atomic swaps. What is annoying about this is that because of the way UTXOs are structured this can be somebody who is completely unrelated to you spending from some change address of a change address of some other long chain. With any of these designs, if you have pending transactions you are going to have a really hard time with this stuff. I would like to say we have a plan for completely addressing the situation. Bcash did deal with this: they got rid of child-pays-for-parent and they have unlimited block size. It turns out that if you do those two things you stop having a lot of these denial of service issues. I don’t think that is viable for Bitcoin at this point but it is not even the worst option among options that could possibly be a thing. I think we just need to invest a lot more engineering effort to see what we can elevate the mempool into. It is the type of issue where people look for carve outs, little things that can make their niche application work. Lightning did this one time with the Lightning carve out that prevents pinning in a certain use case. Six months later they found out that it doesn’t solve all the problems. I don’t think it is going to be a carve out thing that fixes some of these pinning issues. It is going to be a complete rearchitecting of the mempool to be able to handle a much broader set of use cases and applications. I am a little bit negative on the prospects of the mempool working really well. I think that is the reality. I am working on it, I am not just complaining. I have like 30 outstanding PRs worth of work but nobody has reviewed the second PR for two months. It is not going to happen if people aren’t putting the engineering resources on it. That’s the reality.
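
A toy illustration of one of the pinning vectors described here, using Bitcoin Core’s default descendant-count policy limit (25 at the time of this discussion) as the example; the numbers are relay policy, not consensus:

```python
# A new child is rejected once an unconfirmed ancestor already has too
# many in-mempool descendants, regardless of how high its fee is. That
# is how an unrelated spender chaining off a shared ancestor can "pin"
# your high fee transaction out of the mempool.
MAX_DESCENDANTS = 25

def accepts_child(existing_descendants: int, feerate_sat_vb: float) -> bool:
    # Feerate is deliberately ignored: the DoS limit binds first.
    return existing_descendants + 1 <= MAX_DESCENDANTS

print(accepts_child(10, 1.0))     # True: room under the limit
print(accepts_child(25, 500.0))   # False: pinned despite the huge fee
```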

The role of watchtowers

Q - What are your thoughts on the watchtower requirement here? I see a path of either bundling watchtowers for many services versus separate towers for separate services. Which way of handling do you think is best long term?

BB: I will probably hand this over to Bob or Jeremy about watchtowers. It is a huge problem. The prototype I put together did not include a watchtower even though it is absolutely necessary. It is really interesting. One comment I made to Bob is that vaults have revealed things that we should be doing with normal Bitcoin wallets that we just don’t do. Everyone should be watching their coins onchain at all times but most people don’t do that. In vaults it becomes absolutely necessary, but is that a property of vaults or is that actually a normal everyday property of how to use Bitcoin that we have mostly been ignoring? I don’t know.

BM: There are many uses of watchtowers. As time goes on we are going to see more. Another use for watchtowers that has come up recently is the statechain discussion. Tom Trevethan posted an ECDSA based statechain that I think is pretty interesting. It also has the requirement for watchtowers. It is a method to transfer UTXOs. What you want to know is: did a previous holder of the UTXO broadcast his redemption transaction, and how can you deal with that? I think there is a path here to combine all of these ideas but there is so much uncertainty around it we currently wouldn’t know how to do it. There are multiple state update mechanisms in Lightning and that is still in flux. Once you start to add in vaults, and then statechains with different ways to update their state, there is going to be a wide variety of watchtower needs. Then you get to things like: now I want to pay a watchtower. Is the watchtower a service I pay for? Can it be decentralized? Can I open a Lightning channel and pay a little bit over time to make sure this guy is still watching from his watchtower? How do I get guarantees that he is still watching my transactions for me? There is a lot of design space there which is largely unexplored. It is a terribly interesting thing to do if anybody is interested.

JR: I think part of the issue is that we are trying to solve too many problems at once. The reality is we don’t even have a good watchtower that I am operating myself and I fully trust. That should be the first step. We don’t even have the code to run your own server for these things. That has to be where you start. I agree longer term outsourcing makes sense but for sophisticated contracts we need to have at least something that does this functionality that you can run yourself. Then we can figure out these higher order constraints. I think we are putting the cart before the horse on completely functional watchtowers that are bonded. That stuff can come later.
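
As a sketch of the minimal self-run watchtower Jeremy is describing, here is the core loop in Python; `node.get_spending_tx` is a hypothetical helper (a real implementation would scan blocks and the mempool via a Bitcoin Core node), while `sendrawtransaction` is the standard Core RPC:

```python
import time

def watch(outpoint, authorized_txids, recovery_tx_hex, node):
    """Watch one vault outpoint; react if it is spent unexpectedly."""
    while True:
        spend = node.get_spending_tx(outpoint)   # hypothetical helper
        if spend is None:
            time.sleep(30)                       # poll until the outpoint moves
            continue
        if spend["txid"] in authorized_txids:
            return "ok: authorized unvault observed"
        # Unexpected spend: race the thief with the pre-signed recovery
        # (push-to-cold / "revault") transaction.
        node.sendrawtransaction(recovery_tx_hex)
        return "alert: recovery transaction broadcast"
```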

BM: I think the first order thing to solve is how to do the state update mechanism. We are still not decided on whether we are going to get eltoo and SIGHASH_NOINPUT which implies a different update mechanism and a different set of transactions that need to be watched. That conversation doesn’t seem to be settling down anytime soon. It seems like we are not going to get SIGHASH_NOINPUT anytime soon. I don’t know. There is a lot of uncertainty.

KL: For the watchtowers I am not as skeptical as you guys, I think for multiple reasons. One of them is that anybody could be working on watchtowers today, or even have watchtowers in production, and we would not know about it. This is one of the cool things about watchtowers. It behaves as if it has a private key but it doesn’t have one. It has a pre-signed transaction instead. Another thing is the question of hosting it yourself versus giving it to other people or having a third party watchtower. I think it is a good thing that you should have a lot of these. Of course you should have one or multiple watchtowers yourself but you should also deal with third parties. You might have to pay them of course. The fact that you should have a lot of watchtowers, and that no one knows how many you have, is really important in terms of security. At least they don’t know if there is a single point of failure. They don’t know if it is a vector for DDoS. They don’t know who to attack to prevent the trigger of a prevention mechanism. I am really bullish on watchtowers and I know a few people working on them. I am really looking forward to seeing them in production.

MF: I’ll wrap up. In terms of transcripts I am doing a lot more transcripts than Bryan these days because obviously Bryan is very busy with his CTO role. If you want to follow new transcripts follow @btctranscripts on Twitter. Eventually there will be a site up, I’m working on it. Bryan has released a transcript of today for the first half. We’ll get that cleaned up and add any content that is missing from the second half. Apart from that the only thing to say is thank you for everybody attending. If you want to hear more about vaults, Kevin and Antoine are presenting next week. Thank you very much everyone.