Merkleized Abstract Syntax Trees
Date: September 7, 2017
Transcript By: Bryan Bishop
Tags: Mast
https://twitter.com/kanzure/status/907075529534328832
I am going to talk about the scheme I posted to the mailing list yesterday, which is to implement MAST (merkleized abstract syntax trees) in bitcoin in as minimally invasive a way as possible. It’s broken into two major consensus features that together give us MAST. I’ll start with the last BIP.
This is tail-call evaluation. Can we generalize P2SH to give us more general capabilities than just a single redeem script? So instead of committing to a single script, what about committing to multiple scripts? The semantics are super simple. All it does is that when a script finishes executing, if the stack is not clean, meaning there are at least two items, then it recurses into the top-most item, treating it as a script. This is effectively the same thing P2SH already does… so we recurse into scripts. There are two levels of script execution, just like P2SH. To keep this safe, we limit recursion to one tail-call evaluation.
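As a rough sketch of the rule being described (assuming a hypothetical eval_script(script, stack) interpreter passed in by the caller; it is not any real implementation's API), the tail-call step looks something like this:

def run_with_tail_call(eval_script, script, stack):
    # Normal execution of the script via the caller-supplied interpreter.
    stack = eval_script(script, stack)
    # Tail-call rule: if the stack is not clean (two or more items remain),
    # pop the top item, treat it as a script, and execute it exactly once.
    if len(stack) >= 2:
        next_script = stack.pop()
        stack = eval_script(next_script, stack)
    return stack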
To get the rest of MAST, we have to introduce a new opcode which I am calling OP_MERKLEBRANCHVERIFY. It takes 3 arguments: the root, the leaf hash, and a proof. This is the same kind of extension opcode as OP_CSV and OP_CLTV; MBV will be NOP4. It will fail if there aren’t 3 items on the stack. The leaf and the root are both 32-byte hashes, and the other item has to be a serialized proof, which is specified in the first BIP. It checks the types, to see that the items can be deserialized. Then it checks whether the leaf and proof hash up to the root, and either fails or continues.
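A simplified Python sketch of the check being described: this uses plain SHA256 for inner nodes and a proof given as (sibling_hash, sibling_is_on_left) pairs, which is not the actual BIP98 fast-merkle-tree hash function or its proof serialization.

import hashlib

def verify_merkle_branch(root: bytes, leaf_hash: bytes, proof) -> bool:
    # Walk from the leaf up to the root, combining with each sibling hash in
    # the order the proof indicates, then compare against the claimed root.
    node = leaf_hash
    for sibling, sibling_is_left in proof:
        pair = sibling + node if sibling_is_left else node + sibling
        node = hashlib.sha256(pair).digest()
    return node == root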
Together, these two capabilities give us generalized MAST, because we can construct something that looks like this. We push the arguments, then the script, then the merkle proof, so the witness looks like arg1..argN and then [script] and then [proof].
Witness: arg1 … argN [script] proof
For the redeem script, we copy the script: we grab the second element from the top and hash it, then we push the root, then MBV. This gives you generalized MAST capabilities with these two features. After executing this, we will have…. oh, also we have to drop the MBV parameters, to make it soft-fork compatible.
Redeem script: OVER HASH256 root MERKLEBRANCHVERIFY 2DROP DROP
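To make the stack manipulation concrete, here is a rough Python rendering of what this redeem script does; it is only a sketch, the list is written bottom-to-top (the last element is the top of the stack), and MERKLEBRANCHVERIFY is modeled as verify-only, leaving its arguments in place as the NOP-upgrade pattern requires.

stack = ["arg1", "...", "argN", "script", "proof"]   # witness as pushed, top of stack last
stack.append(stack[-2])    # OVER: copy [script] to the top
stack[-1] = "H(script)"    # HASH256: hash the copy
stack.append("root")       # push the 32-byte merkle root
# MERKLEBRANCHVERIFY: checks H(script) is a leaf of root under the proof; stack unchanged
del stack[-2:]             # 2DROP: remove the root and H(script)
del stack[-1]              # DROP: remove the proof
assert stack == ["arg1", "...", "argN", "script"]    # the tail call now runs [script] on arg1..argN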
We will have proven that this script was committed to in the original script. We drop everything we did here, including the proof, leaving just the script on the stack with the arguments to pass into it, and we recurse exactly once into this script and do whatever it wants to do. We can generate a complex contract and it has multiple execution pathways. You have a tool that generates each possible execution path you’re interested in. At redeem time, you specify the path you want to use, show the proof for that path, and then execute it.
You can blind certain code pathways by only showing the one that you actually use. Or you can do something like sipa’s key tree signatures, where you specify a merkle tree of the keys you might use and just pull out the ones you want to use. Merklebranchverify reuses most of its logic from code that already exists, although this would make that code consensus-critical. And the tail-call recursion is very simple because it’s limited to being just tail calls: you reach the end of the script, you reset some of the variables, set your script to the new script you’re recursing into, and then perform a GOTO.
When you get to the end of your execution, if there is something still on your stack, if there are two or more items, you take the top item and execute it as a script. This is safe because, first, there are no scripts out there, at least on bitcoin mainnet, that terminate with an unclean stack; such spends are not relayed at the moment, but that’s policy, not consensus. “It’s a consensus rule for witness.” This would not work in SegWit, right? When you terminate execution right now with two or more items on the stack, that’s allowed except in SegWit; cleanstack is a consensus rule in SegWit, not just standardness. This is only for v0 of course. Part of the benefit of this is that it reduces the need for script versioning; well, that might interact there. At least outside SegWit, this is a soft-fork, because any interesting script you push on the stack (anything except an empty script or a series of zero pushes) would evaluate to true today.
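The soft-fork argument hinges on how a leftover top-of-stack item is judged today. Roughly rendered in Python, the cast-to-bool rule treats an item as true unless every byte is zero (with an allowance for a trailing negative-zero 0x80 byte), so any non-trivial serialized script left on the stack passes under current, non-SegWit rules:

def cast_to_bool(item: bytes) -> bool:
    # False only if the item is empty or all zero bytes, optionally ending in
    # 0x80 (negative zero); any other byte string counts as true.
    for i, b in enumerate(item):
        if b != 0:
            return not (i == len(item) - 1 and b == 0x80)
    return False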
The redeem script is part of the witness. How do you avoid being limited to … levels? One issue is that, for the merkle branch, without CAT or something you can’t get around it. But what you can do is that this merkle tree structure was designed for chaining calls: you can pull out a middle hash and verify it’s part of the next one. And because you can make balanced trees this way, and because the unary case is handled as a pass-through, you can have just one pass through. What about tail recursing log N times? Well, there’s only one level of recursion depth on the final execution. But general recursion will probably not reach bitcoin, because it is hard to bound its computation. But the size of the script will be linear on the… no, with OP_DUP you can generate data: you can make a script that generates itself and calls DUP or whatever, and it keeps going. Write a script that pushes a copy of itself onto the stack, and then you run that and it runs forever. There’s no hash in the execution in this proposal.
Tail recursion as a model of handling general recursion in stack-based languages makes a lot of sense, but it would require cost modeling, and there would probably be run-forever scripts and bloat-memory-forever scripts. We get away with a lot in bitcoin script because we’re limited to no recursion and no looping, and we can almost ignore the cost of memory usage and everything else; there’s no pattern of operations that becomes a problem. But with unbounded recursion, that’s not the case. One level of recursion is the only thing you need for getting something out of a hash tree. It’s hard to come up with a use-case where you want the generalized recursion. Delegability? If you have OP_CHECKSIGFROMSTACK, then that would solve that. If we had a CHECKSIGFROMSTACK, that makes the merkle branch verify stuff even more powerful, where you can run a script that has been signed for, whatever it is.
What are the issues with OP_CHECKSIGFROMSTACK? … You could do covenants, well with OP_CAT. Here’s a coin that can only be spent by a certain transaction, so you could lock a coin into a chain.
If you don’t mind the exponential blow-up, then all things reduce to “long list of ways to unlock this, pick one”. The exponential blow-up in a merkle tree turns into a linear blow-up in ways to unlock things, but still exponential work to construct it.
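A back-of-the-envelope illustration (the numbers are made up for the example): combining 20 independent yes/no conditions means about a million distinct leaf scripts if enumerated flat, but revealing any one of them from a balanced merkle tree costs a proof that only grows linearly with the number of conditions.

n_conditions = 20
n_leaves = 2 ** n_conditions        # 1,048,576 distinct leaf scripts if fully enumerated
proof_bytes = n_conditions * 32     # one 32-byte hash per level of a balanced tree
print(n_leaves, proof_bytes)        # 1048576 leaves, but only 640 bytes of proof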
One of the advantages of this over jl2012’s original bip114 ((but see also)) is that, besides being decomposed into two simpler components (go fetch pubkeys out of a tree, then have key tree signatures), it also helps you deal with the exponential blow-up: when you start hitting those limits, you can put more logic into the top-level script. How hard is it to make this … pull out multiple things from the tree at once, because there’s sharing to take advantage of? Yes, that was some of the feedback; it’s doable, but you have to work out the proof structure, because it’s meant for single outputs. In the root you might have n, which is the number of items to pull out. So it might be 3 root leaf leaf leaf proof. But without knowing what that n was, you basically have to use it as a constant in your script; the root is a constant anyway. I think it would be interesting to have a fixed n here.
The outer version is the hash type; it explains what the hashes are. The history behind this is that at some point we were thinking about these … recoverable hashes, which I don’t think anyone is seriously considering at this point, but historically that was the reason for extending the size … My idea at the time this witness versioning came up was that we only need 16 versions there, because we only need the version number to say what hashing scheme to use. You don’t want to put the hashing version inside the witness, which is constrained by that hash itself, because then someone finds a bug, say a preimage attack on ripemd160, and someone can take a witness program, claim it’s a ripemd160 hash, and spend it that way. So at the very least the hashing scheme itself should be specified outside the witness. But pretty much everything else can be inside, and I don’t know what structure that should have, like maybe a bitfield of features (I am not serious about this), but there could be a bit field saying whether the last two items are hashed into the program.
One of the advantages of the script versioning that we did is that it’s actually hard to be super confident that an implementation of a new flag is actually a soft-fork; with versioning it’s easier to be sure that these things are soft-forks. Mark is saying that as a principle we don’t merge multiple unrelated things, except the most reasonable things: CHECKSEQUENCEVERIFY and CHECKLOCKTIMEVERIFY, sure, and segwit had bip147 nulldummy. But generally, combining multiple things, there’s no end to that, and you get this exponential penalty on review where you’re saying okay, there’s this boatload of unrelated features… good luck.
In your scheme I couldn’t do a tree, I couldn’t do a 19-of-20 multisig… a 600-byte serialization can’t go on the stack as any single item. There’s a push and the push is… This tree plus tail recursion ends up with the next script on the stack, so it’s subject to the 520-byte push limit. You can’t push something larger than that; it could be fixed in v1 or something. Even in SegWit, the SegWit script is not pushed onto the stack, it’s just separate; segwit encodes the stack, it’s not a script that runs anything. It’s not the proof that’s the problem, it’s the script itself: the script can’t contain all the pubkeys. A tree of 19-of-20 checkmultisig: you can’t put that kind of redeem script in there, and this could be fixed in v1 or something. The 520-byte limit for P2SH has been a pain; you can only get up to 17-of-17 or something. There are reasons not to use it… not accountability, but also you have to do setup in advance. So imagine some script that pulls keys from a bunch of different trees and says go do a multisig with those. It’s been a limit, but it would be much more of a limit, limited in other ways. That limit becomes more favorable as you soft-fork other stuff into script.
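A rough size estimate shows why: with compressed public keys, a bare 19-of-20 CHECKMULTISIG script is already well over the 520-byte stack element limit, so it could never be left on the stack as the next script.

keys = 20
per_key = 1 + 33                            # one push-length byte plus a 33-byte compressed pubkey
script_size = 2 + keys * per_key + 2 + 1    # pushes for "19" and "20", the keys, and OP_CHECKMULTISIG
print(script_size)                          # 685 bytes, versus the 520-byte single-element limit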
The way people will interact with this, like some application writer trying to write a script, is much more complex than the original MAST thing, where you just have a data structure. People working with this are going to have a harder time. Someone should go figure out a proper proposal of what the alternative looks like.
Merklebranchverify is more generally useful than MAST. This is a slight generalization of P2SH, and it’s what P2SH should have been originally. P2SH was designed explicitly to (not?) be a nop eval… code dealing with what code will be executed. This is P2SH with user-specified hashing: you do the hashing, and it handles the execution, but only one level of execution, just like P2SH. The user provides some outer level of code, and then provides the next level. In P2SH you are confined to only running a template, and this eliminates the template. With merklebranchverify you can make use of this. It’s 3 layers for technical reasons, because of P2SH and compatibility. If we had meditated more on P2SH we might have ended up on this tail-recurse design too. It’s almost indistinguishable from P2SH, in the sense that if the P2SH template had been slightly different, you could say it always did this.
The delta from P2SH is that the script would have a DUP; that’s the difference. If P2SH had a DUP in front of the hash-equalverify and then you ran these consensus rules, it would just work, because P2SH consumes the script. Or if it had a special opcode that doesn’t actually remove the thing being hashed, then it would be compatible with this. This would be a reinterpretation of what P2SH is doing, and it would make P2SH make sense. … A script and OP_HASH160 and then spend to it… I put OP_NOP and then it worked? People complaining about the magicness of P2SH sort of suggests that people might be confused like this. There’s push-only magic, right.
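For comparison, here is the standard P2SH template next to the hypothetical DUP variant being described, written as opcode lists in Python; the DUP variant is not a real template, and <h> stands for the 20-byte script hash.

p2sh_template = ["OP_HASH160", "<h>", "OP_EQUAL"]                  # consumes the serialized script when it matches
dup_variant = ["OP_DUP", "OP_HASH160", "<h>", "OP_EQUALVERIFY"]    # hypothetical: leaves the script on the stack for a tail call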
Is there a clean way to do multiple outputs for merklebranchverify? Pull 3 out of a tree of a thousand, pretty cool. You could repeat the proof and do it again, or provide 3 separate proofs or something. The concern with multi-merklebranchverify is the verification scripts… but maybe not an issue.
Compact serialization could optimize that away… only doing it at the top level. Doesn’t that prevent you from doing the gather operation and pulling 3 items out, and… No, that’s not true. You can still have the merklebranchverify as a nopcode for doing key tree signatures where you don’t need to do recursion. You can do both.
If you use a scheme like bip114, then it would be less, because in the output itself… and it mainly solves the push size issue.
Once you check the top of it, you can grab part of the tree, and then you can merklebranchverify, though this is more like merklebranchget, because you already did the verification. You do an encoding of this where, from a partial merkle tree, you compute a hash, and then do GETs on it to fetch items from the tree. You have an opcode that verifies that a pruned merkle tree data structure is what you think it is; it only does verification against the hash you expected, and then you get the things you wanted out of the tree. So you can share the branching down. You take a proof, an opcode that takes it, then you do get operations to get a branch and attach another branch. This is close to what bip114 does. It was not clear why he was pulling multiple things out, but this makes sense. If you are doing multiple gets out of the merkle tree then you definitely want to share levels: if you’re pulling two things out, then half the time they are going to be on the same left branch, and so on. And in practice, if you intentionally order your tree so that they are, you can make your common cases do this anyway.

Notice how again a system that just rewrites that script input as a merkle tree might be easier to implement. There are edge cases at the end where you can check whether any branches were provided unpruned but never accessed; if yes, fail the script, and that’s your ((what?)) rule. I’m not quite getting that. With a serialized merkle tree where you have pruned and unpruned branches, malleability is someone taking a pruned branch and making it unpruned by adding in the extra data, which they might be able to do in some cases. So you want to get rid of the malleability by something like the cleanstack rule: if you didn’t pull it out, … is it the minimum possible serialization for what you did? The actual implementation of the tree can say how many values can be… and just keep track of that. The minimum serialization size is different between… in these kinds of trees, something either is or isn’t pruned, so there is only one possible serialization. That’s why cleanstack got …, and the motivation is to prevent malleability, so a miner can’t add garbage to your witness. With a different type of serialization coding for transactions, you could argue that malleability doesn’t matter because the wallet can just throw that away. We do have residual witness malleability and I’d like to have code in the relaying network that auto-strips witnesses. Arguably this might be a point in favor of malleability being an expansion point. Unfortunately it’s both: malleability is software expansion space… The middle ground is to make things policy-prohibited rather than….
Can you serialize the size of the… in a checksig? Yes. In v1, we would probably do something where the size of your witness gets serialized into what you sign, so you can’t malleate it larger and still validate the signature. This idea has come up a couple times in the past: sign your signature size, essentially. Malleability in both cases isn’t a validity issue, it reduces the efficiency of relay. The one thing that you could do is take a valid transaction that is being relayed, and make it larger to the point that it is … resulting in policy rejects of that transaction. Segwit kills tail-call recursion because of cleanstack; in v1 maybe we can change that. Well, what if we want general recursion later? For the specific attack here, you can sign the size, or provide the maximum: the witness will be no longer than this value. You can serialize the maximum that you used. The maximum that you committed to when you signed was 200 bytes, your actual witness might be less, and the verifier needs to know what number to use in the hash. This whole thing with witness malleability is like different from…
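A minimal sketch of the sign-your-maximum-witness-size idea, assuming a hypothetical signature hash that simply appends the signer-chosen bound to whatever digest preimage is already being signed; this is not any deployed sighash scheme.

import hashlib

def sighash_with_witness_bound(tx_digest_preimage: bytes, max_witness_size: int) -> bytes:
    # The signer commits to an upper bound on the witness size; the verifier
    # recomputes the hash with the same bound and separately checks that the
    # actual witness is no larger than the committed maximum.
    return hashlib.sha256(tx_digest_preimage + max_witness_size.to_bytes(4, "little")).digest()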
The review process for turning NOPs into soft-forks is painful.
What exactly is going into bitcoin script v1 proposals? ((lots of mumbling/groaning))
We could have an OP_SHA256VERIFY instead of OP_SHA256. For a lot of things, it’s conceptually… … If every operation always verifies, with no non-verify variants, then you could validate scripts perfectly in parallel: you take every operation in a script and validate it on its own. You don’t need sequential verification. It’s bad for serialization; you can’t do that. But if we didn’t care about bandwidth and storage, then it would be a better design.
If you had a hash256verify here in the merklebranchverify… some of these could turn into OP_DROP. The nopdrop ((laughter)). OP_RETURNTRUE, not just OP_RETURN: all the nopcodes should be turned into OP_RETURNTRUEs. If you hit this opcode, then you’re done for good. This is a generalization of versioning. The downside is that you don’t know whether you’ve hit one of these until you’re in the script. This basically lets you have a merkle tree where some scripts have different versions in them. It’s a way to do witness versioning by saying my first opcode is OP_RETURN15, and 15 isn’t defined yet, so it’s true, and after defining it, now it has a meaning. Some of them might turn into NOPs… Let’s not call it OP_RETURN, it should be OP_EVIL or OP_VER. Right now a script that doesn’t deserialize is never valid, because you can’t get to the end. Run to the end, but have a saturating counter that says whatever happens, I’m already done. In DEX, everything is deserialization, so you have your pointers in a stream. The upgrade path for that would be the data structure you just deserialize… What parts do you need to deserialize if you haven’t? OP_RETURN in a scriptsig returns whatever would be on top of the stack, which would be bad.
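A toy sketch of the OP_RETURNTRUE idea (a hypothetical opcode, not part of current consensus), again assuming a caller-supplied apply_op for ordinary opcode handling: the moment the interpreter hits the opcode, evaluation stops and the script counts as valid, which is what lets a later soft fork give it real semantics.

def eval_with_returntrue(ops, apply_op, stack):
    for op in ops:
        if op == "OP_RETURNTRUE":
            # "You're done for good": succeed unconditionally, ignore the rest of the script.
            return True
        stack = apply_op(op, stack)
    # Usual success condition: a non-empty stack whose top item casts to true.
    return len(stack) > 0 and bool(stack[-1])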
It’s not good to mix code and data. Can you hash what is on the stack and not duplicate it? Then the overhead would go away, and you wouldn’t need the large push for merklebranchverify.
The version of merklebranchverify in the link is built on top of Core, and that’s a hard-fork because of the witness details. But there was also one in elements.git where there’s a merklebranch RPC and others; it takes a list and turns it into a tree, but we want it to handle arbitrary JSON objects.
So maybe merklebranchverify should be deployed with the non-SegWit version… but maybe that would send a conflicting message to the users of bitcoin. Segwit’s cleanstack rule prevents us from doing this immediately, though only in v0.
We need some candidate soft-forks that are highly desirable to users. Maybe signature aggregation, maybe luke-jr’s suggested anti-replay OP_CHECKBLOCKATHEIGHT proposal. It needs to be highly desirable to users to qualify for this particular case, though. It needs to be a small change, so maybe not signature aggregation, but then again maybe signature aggregation, since it’s still highly desirable.
They can break a CHECKSIGFROMSTACK… in a hard-fork. CHECKBLOCKHASH has other implications: if you refer to the immediately prior block, you can’t reinsert the transaction after a reorg, so it’s not reorg-safe. It should be restricted to at least 100 blocks back. [a]I approve this chatham house rule violation
https://www.reddit.com/r/Bitcoin/comments/7p61xq/the_first_mast_pull_requests_just_hit_the_bitcoin/
bip98 “Fast Merkle Trees” https://github.com/bitcoin/bips/blob/master/bip-0098.mediawiki
bip116 “MERKLEBRANCHVERIFY” https://github.com/bitcoin/bips/blob/master/bip-0116.mediawiki
bip117 “Tail Call Execution Semantics” https://github.com/bitcoin/bips/blob/master/bip-0117.mediawiki
implementation of bip98 and bip116 https://github.com/bitcoin/bitcoin/pull/12131
implementation of bip117 https://github.com/bitcoin/bitcoin/pull/12132
https://bitcointechtalk.com/what-is-a-bitcoin-merklized-abstract-syntax-tree-mast-33fdf2da5e2f