SIGHASH_NOINPUT, ANYPREVOUT (2019-06-06)
Transcript By: Bryan Bishop
Category: Core dev tech
SIGHASH_NOINPUT, ANYPREVOUT, OP_CHECKSIGFROMSTACK, OP_CHECKOUTPUTSHASHVERIFY, and OP_SECURETHEBAG
There’s apparently some political messaging around OP_SECURETHEBAG and “secure the bag” might be an Andrew Yang thing.
A bunch of us are familiar with NOINPUT. Does anyone need an explainer? What’s the difference from the original NOINPUT and the new one? NOINPUT is kind of scary to at least some people. If we just do NOINPUT, does that start causing problems in bitcoin? Does it mean exchanges need to start looking at history of transactions and blacklist NOINPUT in the recent history until it’s deeply buried? If we send a NOINPUT to someone, are we responsible for them losing money or anything like that? Is there other scary weird behavior that it might cause?
NOINPUT is different from how sighashes have worked, because it allows signatures to be replayed against different UTXOs with the same script but different values. cdecker’s proposal from a year ago for NOINPUT restricted it so that the signature commits to the value of the UTXO being spent, which restricts it a fair bit. Although, if your exchange is sending you 200 BTC in two lots of 100 BTC, then that’s perhaps not enough protection. When you spend it, wouldn’t it be spent to the same outputs? A change address gets paid twice? Maybe you want to send 5 BTC to someone and oops, I guess I accidentally paid twice, so 10 BTC.
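The replay property described above can be illustrated with a toy digest computation. This is a sketch only: the field layout is invented for illustration and is not the actual BIP sighash serialization.

```python
import hashlib
from typing import Optional

def toy_sighash(script_pubkey: bytes, outputs: bytes,
                prevout: Optional[bytes]) -> bytes:
    """Toy sighash: NOINPUT-style if prevout is None (prevout not committed)."""
    h = hashlib.sha256()
    if prevout is not None:     # SIGHASH_ALL-style: commit to the UTXO being spent
        h.update(prevout)
    h.update(script_pubkey)     # NOINPUT still commits to the script being satisfied
    h.update(outputs)           # ...and to where the money goes
    return h.digest()

# Two distinct 100 BTC UTXOs paying the same script:
utxo_a, utxo_b = b"txid_a:0", b"txid_b:0"
spk, outs = b"same_script", b"pay_5_btc_to_merchant"

# With NOINPUT the digests are identical, so one signature replays on both UTXOs:
assert toy_sighash(spk, outs, None) == toy_sighash(spk, outs, None)
# With SIGHASH_ALL the digests differ per UTXO, so no replay:
assert toy_sighash(spk, outs, utxo_a) != toy_sighash(spk, outs, utxo_b)
```

This is exactly the "exchange sends you two lots of 100 BTC" problem: both UTXOs satisfy the same digest, so whoever saw the first signature can replay it on the second.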
An initial concern is that because they are indistinguishable as an output type, we can’t tell, is this an address that… sorry if I’m skipping ahead, but the way to think about it is, these sort of NOINPUT signatures are only things that exist within some application or within some protocol that gets negotiated between participants, but they don’t cross independent domains, where you see a wallet or a protocol as a kind of domain. You can’t tell the difference: is this an address I can give to someone else or not? It’s all scripts, no real addresses. There are types of outputs that are completely insecure unconditionally; there are things that are protected and that I can give to anyone, where you don’t want to reuse them but there’s no security issue from doing so. This is an additional class that is perfectly secure, but only when used in the right way. It’s basically us opting in to additional functionality, and you should be aware of the tradeoffs when you do so. When luke-jr said he wanted to write a wallet that only used SIGHASH_NOINPUT, that was cause for concern. Some people might want to use SIGHASH_NOINPUT as a way to cheapen or reduce the complexity of making a wallet implementation. SIGHASH_NOINPUT is, from a purely procedural point of view, easier than doing a SIGHASH_ALL, that’s all I’m saying. You’re hashing less. It’s way faster. That concern has been brought to my attention and it’s something I can see. Do we want to avoid people being stupid and shooting themselves and their customers in the foot? Or do we treat this as a special case where we mark that we’re aware of how it should be used and we just try to get that awareness out?
What dangers are there as a recipient of NOINPUT? If I get an incoming payment with a SIGHASH_NOINPUT signature, then I could replay that and get paid twice– well, that’s not a risk, that’s a benefit. If I receive a payment from a UTXO, and that UTXO was authorized with a NOINPUT signature, then if there’s a reorg or if that was in the mempool or something, then that signature would still be valid if there was malleability in the past, but my payment wouldn’t be. But this requires a long reorg or craziness, so that might not be plausible. Maybe there’s some other attack like that?
There are tagged outputs, or chaperone signatures, as ways to reduce the scope of the replays. Those are the only two ideas we have these days. The chaperone signature is: any time you have an ANYPREVOUT/NOINPUT signature on a transaction, it is not valid unless you also see a SIGHASH_ALL signature on the same input, a second signature. Everything becomes multisig, 2-of-2. It binds that particular address at spend time. The problem with the chaperone is that we can’t do signature aggregation anymore with those, because they now have different sighashes. It needs to be two different keys, otherwise you would use the regular sighash on the single signature.
The fact that you’re able to produce the chaperone signature implies you don’t have a need for NOINPUT with the same key, because those people are going to be online at signing time. There’s no problem with it, but it’s pointless. And now you have double the signature size for everything; yeah, it’s bigger, but no worse than existing multisig. So it’s only for non-cooperative scenarios in the first place.
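The chaperone rule described above can be sketched as a toy validity check. The data layout (dicts with `key` and `hash_type` fields) is invented for illustration; the point is only the policy: an ANYPREVOUT signature counts only when paired with a SIGHASH_ALL signature from a different key.

```python
def chaperone_spend_valid(sigs) -> bool:
    """Toy policy: every ANYPREVOUT signature must be accompanied by a
    SIGHASH_ALL 'chaperone' signature from a different key on the same input."""
    anyprevout = [s for s in sigs if s["hash_type"] == "ANYPREVOUT"]
    chaperones = [s for s in sigs if s["hash_type"] == "ALL"]
    if not anyprevout:
        return True  # ordinary spends are unaffected
    # for each replayable signature there must be a chaperone under another key
    return all(any(c["key"] != a["key"] for c in chaperones) for a in anyprevout)

# The replayable update signature alone is not enough:
assert not chaperone_spend_valid([{"key": "A", "hash_type": "ANYPREVOUT"}])
# Adding a SIGHASH_ALL chaperone under a second key makes the spend valid:
assert chaperone_spend_valid([
    {"key": "A", "hash_type": "ANYPREVOUT"},
    {"key": "B", "hash_type": "ALL"},
])
```

The aggregation complaint in the text falls out of this shape: the two signatures necessarily cover different digests, so they can’t be combined into one.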
What’s the argument against chaperone signatures? Well, you could just make it so that anyone can sign for it, so what’s the point. It’s cheaper to circumvent it with half of G. That’s only true if you have OP_CAT and you’re doing magic, and we don’t have OP_CAT yet.
What’s the biggest argument against tagged outputs? The tagged outputs idea is that we don’t have NOINPUT/ANYPREVOUT supported for taproot v1 outputs; instead we have a segwit version 16 (v16) output type that supports taproot. The reason for v16 is that we redefine bech32 to not cover v16. There are no addresses for this type of output. If you’re an exchange and receive such a bech32 address, you declare it invalid. You make it less user-friendly here; and there shouldn’t be an address anyway. You might want to see it on a block explorer, but you don’t want to pass it around to anyone.
It seems like if we’re doing tagged outputs, then chaperone signatures don’t add anything. It’s either-or. Chaperones don’t add anything at all; it’s pointless. There are reasonable arguments to not do it that way. The simplest way to satisfy them is to just bypass them. If you’re committing to the pubkey for the chaperone signatures ahead of time… if everyone who wanted to use NOINPUT was convinced there was a problem, then they would pick the right thing, but clearly people aren’t. It’s not a foot-gun defense mechanism because it’s easily bypassed, and it’s easier to bypass it than to use it. Whereas for tagged outputs, it’s that if you want any NOINPUT then you must tag.
Where is the tag? The downside of the tagged version is that it’s suddenly not fungible, and you can distinguish it from taproot. I don’t think the fungibility argument is very strong because, at least in the imagined use cases now, these outputs would only be created in an uncooperative setting. It would also be very temporary. So they would be swept away immediately. If you look at the blockchain, you will know well that was an uncooperative close. We also tend to gossip about this stuff. You don’t even need to go as far as redefining bech32 to drop these v16 addresses… the imagined fear, I think, is that something bad happens, a NOINPUT signature somehow gets used, like a MtGox version 2 or something, and big players go and say wow, this new taproot thing is apparently scary, we’re not going to support sending to it, or not support receiving to it or whatever. That would be far more damaging, I think, than the fungibility hit of having a separate version.
If there’s a separate version then I would not be able to use taproot and NOINPUT together? No, they can be used together. It could even be a byte longer saying v16 is just non-addressable outputs, and first byte 0 that follows means that it’s taproot, but the non-addressable version or something. So even if you don’t go as far as not making a new address, the risk is that these addresses which are recognizable are scary, versus all of taproot is.
Putting aside that argument for a moment, you said you could already tell about these uncooperative closes and that’s what’s going on. Is there a thought about, do some people have ways to make uncooperative closes more indistinguishable? With adaptor signatures and sequence numbers, yeah. You can tell by the sequence number and the locktime that it is eltoo. I don’t think the argument is that it’s already recognizable, but that these outputs are already only created in the uncooperative setting. Well, you know it was created in the NOINPUT setting and that makes it distinguishable. Well no, we care about fungibility for the cooperative case, not the uncooperative case. In the uncooperative case in taproot, it breaks the fungibility too, because you reveal the scripts, so you’re already going to tell the world what you’re doing. If there was a use for a NOINPUT-like signature in a cooperative case, I just haven’t seen it yet. There could be some. You could use a Schnorr adaptor signature… for the taproot cooperative case you need to be online to do the musig signing ceremony and keep the fungibility.
Do the tagged outputs completely address the risk of replay? They make it more apparent that it can be replayed, rather than finding out later. You know which signing path is replayable. The act of using a NOINPUT signature retroactively…
What about committing to all the previous times you used NOINPUT so that you’re sure they want it? Well, that’s bad for validation. In consensus it could work. Well, you could use nonces like ethereum does… the other replay has to increment a sequence number each time. That’s more baggage for verification as well. It’s equally the case that bypassing it is easier than using it.
What about feedback from companies in the space about tagged outputs for NOINPUT? Not yet really.
Other than eltoo, are there any other protocols that expect to use NOINPUT? There’s some wallet stuff where you have simple covenants with SIGHASH_NOINPUT. Unfortunately Bob isn’t here. You could put the signature in the output of the transaction that is spending it, because it doesn’t depend on the txid. So you have two transactions, one spending the other, and you can already compute the sighash of the later transaction before the earlier one is created, because it doesn’t spend anything that could change anymore. You create an output that has a signature that would cover the next transaction for where the funds would go. The signature is in the witness script. Yes. You can’t do much with it, it’s pretty weak. Russell would find something. Once we add OP_CAT… if you don’t commit to the script, then you could make signatures with provably unknown private keys. That’s fun. Here’s a signature, and I can show that I know the preimage of the public key.
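The ordering trick described above (the commitment over the child can exist before the parent does, precisely because a NOINPUT-style digest doesn’t include the parent txid) can be sketched like this; all field layouts and helper names here are invented for illustration:

```python
import hashlib

def txid(tx: bytes) -> str:
    return hashlib.sha256(tx).hexdigest()

def noinput_digest(script: bytes, outputs: bytes) -> bytes:
    # Toy NOINPUT-style digest: commits to the script and outputs,
    # NOT to the txid of the transaction being spent.
    return hashlib.sha256(script + outputs).digest()

# Step 1: fix where the funds must go next (the child's outputs).
child_outputs = b"pay_to_cold_storage"
script = b"covenant_script"

# Step 2: the commitment over the child can be computed before the parent
# exists, because the digest does not depend on the parent's txid.
child_digest = noinput_digest(script, child_outputs)

# Step 3: build the parent, embedding the pre-computed commitment in its
# output. The parent's txid is only known now, but the child is already covered.
parent = b"parent_tx|" + child_digest
parent_id = txid(parent)

# The embedded commitment still matches the child that spends parent_id:0.
assert noinput_digest(script, child_outputs) == child_digest
```

With SIGHASH_ALL this ordering is impossible: the child’s signature would have to commit to `parent_id`, which in turn depends on everything placed in the parent.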
Does anyone just think NOINPUT is a really bad idea? gmaxwell seems to… He’s very concerned about replays, like an exchange failure or a bunch of money is lost. That’s his main fear I think. Like a mtgox v2. There’s three camps– who here thinks that it would be simplest, YOLO your fault you’re a big boy for using a new sighash. What about the tagged output camp? Are we voting on a soft-fork? Or you have to include the entire blockchain in the signature? Okay, nobody. It looks 50-50 between YOLO and tagged output. It’s a “big boy” sighash flag. Greg said we had to meditate on it in 2015; and we have for like six years now.
There seems to not be a huge opposition to the idea of NOINPUT, it’s just a question of how constrained it should be. “What we thought was missing from bitcoin was replay attacks”… Actually, someone from Bitcoin Cash wanted to use NOINPUT as a malleability fix, but it actually makes it worse. That’s when I started timestamping my emails, because I wanted proof that I told them about it. Well, I got a reply quoting me, so I timestamped that. Did they PGP sign it? Come on, what do they know about that signature stuff? Ah, NOINPUT for PGP, invented for CSW.
With tagged outputs, you could do ANYPREVOUT via the key path as well. This means you couldn’t use NOINPUT inside p2sh because in p2sh you don’t reveal the witness version in the output which would break the protection. So we need to do native segwit? I don’t think anyone who wants to use NOINPUT cares about this. Oh, nested p2sh yeah. People are still going to look up things on block explorers.
What ever happened to webbtc? Yeah it broke a few years ago in like 2014.
It’s a way to do covenants. You could do covenants with something like CHECKOUTPUTVERIFY where you pattern-match on the outputs themselves. Another way, from jl2012, is you push elements onto the stack, and you can push on things like the output size or the number of outputs or the value. A third one is you instead have a DAG of transaction execution where you do a checkoutputhash with a particular hash, and you say the output spending it must have a particular hash itself. So as you serialize all the outputs, it has the value and the script. You can say something like, the first thing spending this must have an output size of like 5 or something.
The underlying assumption is that currently you can immediately spend after a certain number of confirmations. But here you can receive confirmation for the funds, but you can delay when you need to do the payment. Payment and settlement can be done at different times. Me receiving the coins can be separated from when I need to spend it. So you commit to some DAG of future transactions, and this is what validates the confirmation. Once they get confirmed, then I can spend them.
When you combine this with taproot, you can have multiple branching paths depending on what happens. Jeremy presented this for congestion control for exchange withdrawals. It’s “freedom dividends”. The exchange can unroll them and pay all of them in the same output, or pay people one person at a time. When I saw that idea, I was skeptical about its use. In that case, you’re using more block space than you would otherwise use in the end. Also, the fees are super screwed up, right? If you have to pay for all of them ahead of time? There’s a way to attach fees later on; you commit to the entire structure created from this. You can’t add fees after the fact because of the commitments. But you could change the proposal to allow adding multiple inputs. You could have an initial payout that is not timelocked at all but with low fees that might not ever confirm; then you have another one that pays higher fees and is timelocked to a week later, and it just reduces your change. So you can do stuff like that too.
It’s the most restrictive covenant you can think of; the hash must match. Let’s say you’re giving bitconner 5 BTC. By the time he spends the 5 BTC, he must pay me 1% of that value or something like that. That’s a weird use case. It’s an interesting case. You can also define vaults with this; when you spend this, say there’s a CSV for this particular amount. It’s like pre-planned execution rather than doing anything at runtime. It can be a bunch of paths augmented with taproot. You get finite depth too. You have to make the hash, so you can’t have an unknown script or value; all of it must be premeditated.
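The “hash must match” covenant described above can be sketched as a toy template check. The serialization here is invented for illustration and is not OP_SECURETHEBAG’s actual digest; the point is only that the spending transaction’s outputs are fully premeditated.

```python
import hashlib

def bag_hash(outputs) -> bytes:
    """Commit to the exact serialized outputs (value + script) a spend must create."""
    h = hashlib.sha256()
    for value, script in outputs:          # value in satoshis, script as raw bytes
        h.update(value.to_bytes(8, "little"))
        h.update(script)
    return h.digest()

def spend_allowed(committed: bytes, spending_tx_outputs) -> bool:
    # The most restrictive covenant: the spending tx's outputs must
    # hash exactly to the commitment baked into the output being spent.
    return bag_hash(spending_tx_outputs) == committed

# bitconner gets 5 BTC, but spending it must also pay me 1%:
planned = [(500_000_000, b"pay_bitconner"), (5_000_000, b"pay_me_1_percent")]
commitment = bag_hash(planned)

assert spend_allowed(commitment, planned)                         # exact match: ok
assert not spend_allowed(commitment, [(500_000_000, b"pay_bitconner")])  # dropped my 1%: rejected
```

Because the commitment is a plain hash, every script and value must be known when it is made, which is exactly the finite-depth, no-unknown-script property mentioned above.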
Everyone you’re sending money to needs to see the full script; or their subtree. Well, there might be another path that doesn’t result in them being paid. They don’t need to know things below them on the other side. It might be a signature-less witness as well, one that shows the current taproot path and continues on from there.
It’s covenants, but very simple, and you can still do cool stuff. You can control how coins get spent, without having a viral infection thing. It can’t infect all the coins in the world indirectly or something. So it’s finite. You have to compute ahead of time what is going to happen. Maybe you could use a secure hash function and therefore not make cycles.
Vaults is a pretty big deal for security. So that would be really cool. There was vaults, congestion control, and also you could have a payout from an exchange where, if you’re a whale, you don’t have to reveal what outputs the funds are going to at the time of withdrawal; you could reveal it later in some discrete amounts, and you don’t have to post that upfront. It’s a commit-reveal or UTXO hiding. This is generally useful for any kind of… covenants just really blow up the amount of things you can do with bitcoin script. You can have an assurance of certain things.
You can have it fully enumerated and you say, I’m these 20 endpoints and I can now create new signatures on those endpoints and anticipate where they are going to create new trees.
What about removing the revocation sequence with OP_CHECKSIGFROMSTACK? They are somewhat related. If we’re going to do covenants, then why not a more powerful version? Generic covenants and the goal that OP_SECURETHEBAG accomplishes are distinct goals. If the goal is generic covenants, then you don’t want to do it with OP_CAT and OP_CHECKSIGFROMSTACK; you actually want proper opcodes for inspecting parts of your transactions. The fact that this can be done with this massive hash hackery, I mean, having certain features may encourage people to start thinking about what kinds of things are possible, but rarely is the way it’s done there how you actually want it to happen in production. Personally, I think the things put into script should be production-usable, and we can think about what would be possible without them being there. I prefer the use case where we know how to do it efficiently and how you would want to do it. Hashing is pretty simple, too. The question should be, are there use cases for OP_CHECKSIGDATAFROMSTACK that are distinct from generic covenants?
CHECKSIGFROMSTACK is an opcode. Right now in bitcoin you have signature verification on the transaction. CHECKSIGFROMSTACK says you are going to push stuff onto the stack, hash it, and then you can verify it against arbitrary pubkeys. You can do this for delegation. You can check a public key and a regular checksig on the transaction itself. Ask for a pubkey, and you can delegate to someone else; or any kind of outside oracle data. Say we’re betting on price, and we have Bitstamp’s key hardcoded in the script. There are also other constructions for probabilistic payments, where I sign a hash and do comparisons on that to pay out to particular individuals. You can also do this for revocations in lightning. It’s like a 2-of-2 multisig; we sign the sequence number of that state itself. We say, present to me signed sequence data greater than whatever, and then we can go forward and move on. This allows you to get revocations in a simpler way with signed sequence numbers that move forward. It might be Schnorr-only. The CHECKSIGFROMSTACK can be m-of-n multisig. I can have outside data, or delegation, or different ways of having some type of ordering in the script based on signed values from the outside world. You could do a threshold oracle thing with CHECKSIGFROMSTACKADD?
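The revocation idea above (present a signed sequence number greater than the published state) can be sketched as follows. To keep the sketch self-contained, an HMAC stands in for the real CHECKSIGFROMSTACK signature check; in the actual construction this would be a Schnorr signature verified against the channel’s 2-of-2 key, and all names here are invented.

```python
import hashlib
import hmac

SECRET = b"shared_2of2_key"   # toy stand-in for the channel's joint signing key

def sign_state(n: int) -> bytes:
    # Toy MAC standing in for a CHECKSIGFROMSTACK-style signature
    # over the bare state number pushed on the stack.
    return hmac.new(SECRET, n.to_bytes(8, "big"), hashlib.sha256).digest()

def csfs_verify(n: int, sig: bytes) -> bool:
    return hmac.compare_digest(sig, sign_state(n))

def can_revoke(published_state: int, presented_state: int,
               presented_sig: bytes) -> bool:
    """Script rule: a validly signed sequence number strictly greater than
    the published state proves that the published state was revoked."""
    return presented_state > published_state and csfs_verify(presented_state,
                                                             presented_sig)

# Alice publishes old state 7; Bob presents signed state 9 to revoke it:
assert can_revoke(7, 9, sign_state(9))
assert not can_revoke(7, 6, sign_state(6))   # an older state cannot revoke
assert not can_revoke(7, 9, b"\x00" * 32)    # a bad signature is rejected
```

The "move forward" property is the `>` comparison: any later signed state invalidates every earlier one, without a per-state revocation secret.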
All those constructions; a lot of work by people like Tadge and Poelstra to talk about ways that we can do things that are complex off-chain with adaptor signatures or something. But this sounds like the opposite thing. But CHECKSIGFROMSTACK lets you do more complex things off-chain as well.
For eltoo, you can use CHECKSIGFROMSTACK instead of the locktime. The transactions no longer have CLTV and they can be timelocked. It’s another signature, but now the locktime is free for other stuff possibly. We’re no longer limited to one billion revocations, and now we can do a few more, we can do four billion right?
OP_CAT and OP_CHECKSIGFROMSTACK also provides some covenant possibilities, I think that’s really cool. But any time you add these things, there’s this emergent property of things you hadn’t seen beforehand.
For signed sequence numbers, you could actually do that better with the annex. How could you add a locktime without a hard-fork? You could use the annex; you could add a relative or absolute locktime via the annex. Then the annex would carry the locktime value. You need to define the semantics for the encoding of the data inside the annex. You can have a one-byte prefix that says this is the type of the annex. Locktimes per input would be a lot more flexible and useful for lightning. Yeah, the annex, I didn’t think about that. That’s interesting. So now I have a solution to something, cool, thanks. Cool, avoided a future hard-fork. I was thinking you would have to change the transaction format, no other way, but I guess that’s not true. Or add additional things under the signature; that’s the annex.
The fact that this annex is possible is an accident; it’s because with p2wsh and p2wpkh we can reason about the first byte of the last stack element, and if we had picked another kind of encoding this would not have been possible. The first byte of the witness cannot be an invalid opcode, or it has to be a public key, and it starts with 2, 3, 6 or 7. So there are a couple of bytes available. That’s why the leaf version is picked from one of those prefixes. It’s less necessary for the leaf version to have this property than for the annex.
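The one-byte type prefix idea from the preceding discussion might look like this as a sketch. The 0x50 annex marker is from the taproot proposal; the locktime type byte and the payload layout are hypothetical, made up here for illustration.

```python
ANNEX_TAG = 0x50       # taproot annex marker byte (first byte of the annex)
TYPE_LOCKTIME = 0x01   # hypothetical type prefix: absolute per-input locktime

def encode_locktime_annex(locktime: int) -> bytes:
    # annex = marker byte || one-byte type || 4-byte little-endian payload
    return bytes([ANNEX_TAG, TYPE_LOCKTIME]) + locktime.to_bytes(4, "little")

def decode_annex(annex: bytes):
    if not annex or annex[0] != ANNEX_TAG:
        raise ValueError("not an annex")
    if annex[1] == TYPE_LOCKTIME:
        return ("locktime", int.from_bytes(annex[2:6], "little"))
    return ("unknown", annex[1:])  # future types stay parseable

kind, value = decode_annex(encode_locktime_annex(500_000))
assert (kind, value) == ("locktime", 500_000)
```

Because the annex is committed to by the signature but otherwise unstructured today, a soft-fork could later assign consensus meaning to such a type byte, which is the “avoided a future hard-fork” point above.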
I never heard of a use for an absolute per-input locktime. You could do better aggregation probably. You can commit to the timelock and you can’t aggregate them with SIGHASH_SINGLE, but you would be able to satisfy them, you would be able to aggregate them. Right now we have HTLCs that have a locktime and we can’t combine them, but if it was just on the input then we could aggregate them and save space on chain. We don’t have to commit to them in the HTLC script, either. That’s one example use case. Typically, you want to aggregate different inputs, and you have pre-signed signatures, and then you want SINGLE|ANYONECANPAY.
What about input version 2 while we’re at it? Well we have that with tapscript and all that other stuff. Any other questions about CHECKSIGFROMSTACK?
Jeremy gave a talk in 2016 about more powerful covenants; but it was the general ideas for covenants. I think at the time he had an idea and meditated on it for a while. The feedback was “too powerful, too scary”. This is the first one where we say, yeah, with that I don’t see how you can shoot yourselves in the foot. Also with tapscript you have a leaf of possible paths. Ideally you have a language where you can generate what you want it to be, compile your transactions, sign them, and move them on-chain. Tanglescript? Because it’s a DAG.
Previously you could pre-sign transactions; with SECURETHEBAG the signers don’t need to be online. It’s in the chain, it’s going to happen. And people can have particular branches they are ready for. You might want people online to verify that their branches are correct.
Was there a more complete tagged output proposal? Yes, it was discussed on the mailing list in January or December. Around there.