
Breaking The Chain

Speakers: Bob McElrath, Peter Todd

Transcript By: Bryan Bishop

Tags: Mining, Proof systems

https://twitter.com/kanzure/status/785144266683195392

http://dpaste.com/1ZYW028

http://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/client-side-validation/

http://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/braiding-the-blockchain/

http://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/jute-braiding/

Inclusivity

So it seems like we have two big topics here: one is braiding and one is proof-of-publication. I gave a talk in Hong Kong about braiding. The major difference is the inclusive nature: the notion that every block gets a block reward, as a way to solve selfish mining. The competition between the blocks that get a reward and the orphans that don’t is what causes the loss of security. What’s the difference between jute and Bob’s braids? The property that we both agree on is that we need inclusivity. We need to reward all blocks. Equal pay for equal proof-of-work. It’s bound to be controversial. Bitcoin conflates the source of value with the consensus value. There’s no value in hashcash; there’s a source of cost. Bitcoin conflates the source of scarcity and the source of value; well, everything does that. Scarcity keeps us from having to talk about value. Another difference is delayed transaction fee allocation: you cannot allocate transaction fees until you see the global state of the network. Perhaps we should split the fees. But if you don’t split the fees and you have competition, and you see there’s a chain with a high fee, you could try to do reorgs until you get that fee. I think this is still an open problem.
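A minimal sketch of the "equal pay for equal proof-of-work" idea, assuming a hypothetical split_subsidy helper and illustrative difficulty numbers; the actual braid reward schedule remains an open design question, as noted above.

```python
# Sketch only: instead of a winner-takes-all coinbase, every bead in a round
# shares the subsidy in proportion to the work it proves.

def split_subsidy(bead_difficulties, subsidy):
    """Return each bead's share of the subsidy, proportional to its work."""
    total_work = sum(bead_difficulties.values())
    return {b: subsidy * d / total_work for b, d in bead_difficulties.items()}

# usage: three beads found in parallel, one at half the difficulty of the others
print(split_subsidy({'a': 1.0, 'b': 1.0, 'c': 0.5}, subsidy=12.5))
```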

Delayed transaction fee allocation

I almost did this slide anyway… but, basically, if you do any sort of fee splitting, pooling, or allocation, you’re devaluing the expected value of the fee in that transaction, which wallets have incentives to circumvent. It’s easy to create a dead output like an anyone-can-spend output which any miner will pick up. Or you can pay miners directly. Moving fees around is going to cause the smart wallets to circumvent this fee moving around, so basically it’s just a useless mechanism. You’re going to have to give up on that. Fees are going to have to be all or nothing. If you have a smart wallet paying a specific miner, that’s a centralization pressure… and the small solo miners, especially if we have solo miners and … with the block time of like.. you’re not going to have access to that out-of-band fee scheme. We want the transaction fee to be the smartest way to get your transaction into the blockchain.
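As a rough illustration of the circumvention described above, here is a hypothetical sketch (the helper name and amounts are made up) of a wallet routing around a fee-splitting rule by adding an anyone-can-spend output, i.e. an output whose scriptPubKey is just OP_TRUE, which whichever miner includes the transaction can sweep in their own block:

```python
# Sketch, not a real wallet: replace an explicit fee with an anyone-can-spend
# output so the "fee" goes all-or-nothing to the miner who includes the tx.

OP_TRUE = bytes([0x51])  # scriptPubKey satisfied by any spender

def add_out_of_band_fee(outputs, fee_sats):
    """Append an anyone-can-spend output carrying the intended fee (illustrative)."""
    return outputs + [(fee_sats, OP_TRUE)]

# usage: a 10,000 sat "fee" the next miner can claim, alongside a normal output
outputs = add_out_of_band_fee([(50_000, b'\x00\x14' + bytes(20))], 10_000)
print(outputs)
```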

Fee sniping is when you have a chain that looks like two diamonds. In Hong Kong, I described an arrangement of beads called a cohort, where everything on one side is an ancestor of everything on the other side. In diamonds, these are parallel to each other. Let’s say you have a high fee. Then it’s going to broadcast to the network and tie up here. So this is something that can be allowed. Depending on how you organize transaction selection, this guy might be trying to snipe a fee from a large graph. In my cohorts, someone broadcasts this guy and turns the whole thing into a big cohort…. A cohort is a demarcation in time where everything that comes on one side of it is an ancestor of everything that comes after it. Anything between the two outer lines has to be treated as part of one big cohort.
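A minimal sketch of cohort detection under that definition, assuming beads are hashable identifiers and `parents` maps each bead to the beads it directly references; this is illustrative, not the algorithm used in any particular braid implementation:

```python
from functools import lru_cache

def cohorts(parents):
    @lru_cache(maxsize=None)
    def ancestors(b):
        out = set()
        for p in parents[b]:
            out.add(p)
            out |= ancestors(p)
        return frozenset(out)

    # topological order: a bead always has more ancestors than any of its parents
    order = sorted(parents, key=lambda b: len(ancestors(b)))
    seen, cohort, result = set(), [], []
    for b in order:
        seen.add(b)
        cohort.append(b)
        rest = [x for x in order if x not in seen]
        # close the cohort once everything seen so far is an ancestor of everything remaining
        if all(seen <= ancestors(x) for x in rest):
            result.append(cohort)
            cohort = []
    return result

# usage: genesis, a diamond of two parallel beads, and the bead that merges them
print(cohorts({0: set(), 1: {0}, 2: {0}, 3: {1, 2}}))
# -> [[0], [1, 2], [3]]
```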

You could either share fees, have adversarial fees, have no p2p layer, or have a deep mempool. Anyone is free to join mining, you just might not be very good at it. Some people won’t make the cut. That’s a point of confusion: can anyone join? Yes. But how many people can actually survive for more than a day? Your hardware setup might not be right, but unless you make it easy, without social engineering, to go get transactions, then you have serious problems.

With no p2p layer: the reason why this is a solution is that p2p layers exist because transactions propagate through the network faster than blocks. I don’t think this is why you need a p2p layer…

So if you’re.. if there are enough fees to justify that extra burn, then that’s good. Then you have probably saturated and you have a deep mempool. Otherwise, you … this is kind of odd; bitcoin currently can’t handle all the transactions thrown at it. You want a transaction processing system to process everything, not have a limbo mempool. The solution is only to make sure that transactions get spread out evenly rather than having some full blocks and some empty blocks, because otherwise you have an adversarial fee mess. You have to smooth it out.

If you have higher block intervals, these problems go away. Like a higher block time? Yes. Well, yeah. If you had a higher block interval, and you also had a braid, then why would you have selfish mining? The problem with having both, say a 3 minute block time, which just wipes away all these issues, and also a braid, is that double spends are free, because if you try a double spend you get 5 out of 6, say… So you want to get enough confirmations with this braid. You need 50 confirmations. I want that to happen in 5 or 10 minutes instead of in 6 hours. I don’t understand how the double spends are free. You use the miner, try to double spend, right, and in bitcoin, if you’re trying to double spend 60 and you get 5 out of 6 blocks, you fail to get the sixth. All five of those blocks which you spent a lot of energy creating – you’re basically out $50k or whatever. In an inclusive DAG construction, you just submit those blocks late but you still get paid for them.
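A back-of-the-envelope sketch of that argument, with purely illustrative numbers (the $50k figure above roughly corresponds to five blocks of subsidy at the time of the talk):

```python
# Sketch: expected loss of a failed 5-of-6 double-spend attempt when orphaned
# blocks forfeit their reward (Bitcoin) versus when late blocks are still paid
# (an inclusive DAG). Numbers are illustrative, not from the talk.

reward_per_block = 12.5          # BTC subsidy per block at the time of the talk
blocks_mined = 5                 # attacker found 5 of the 6 blocks needed

loss_bitcoin = blocks_mined * reward_per_block   # all 5 orphaned: 62.5 BTC of work wasted
loss_inclusive = 0               # late blocks still collect their rewards

print(loss_bitcoin, loss_inclusive)
```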

With these systems there are two ways of getting paid: one is your inflation reward, and the other is transaction fees. I don’t think you could argue that every block could get transaction fees. So you do want to get it first. Inflation is not going to be here forever. Well, we can make it a clean-sheet redesign; the inflation can be whatever we say it is. This system is not– it’s a scale improvement, but it’s not a scalability improvement. You get a constant factor in scalability, it’s like 5x or 10x, but it’s not going to get you to 7 billion/sec. Well, my goal was to solve selfish mining and go to sharding. That’s where the real scalability comes from.

If I’m a miner, how do I know that other blocks are valid? You either have a system where the definition of valid is the first blocks; and if you don’t, then you go down the path of client-side validation. Maybe not completely, but you’re starting down that path.

A deep mempool is necessary in bitcoin because in a low-subsidy environment you get into weird grinding situations. If there are $0.30 of transaction fees in the current block and $4k of transaction fees in the previous block, it’s more profitable for you to try to steal those fees. So bitcoin today… the code today, after 2 or 3 more halvings, is going to be dependent on a deep mempool. Also, if you are a miner, or a sender, you’re not going to send a $4k fee because you know the entire network is just going to grind endlessly and not actually send your money or add more confirmations. There is always demand at a lower price.
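A rough sketch of the grinding incentive, using the numbers from above and an assumed success probability for the reorg attempt:

```python
# Sketch: compare the expected value of extending the honest tip against
# attempting to reorg the previous high-fee block. The reorg probability is
# an assumption for illustration; subsidy is ignored for simplicity.

p_win_reorg = 0.45           # assumed chance the one-block reorg attempt succeeds
fees_current_block = 0.30    # $ of fees available on the honest tip
fees_previous_block = 4000   # $ of fees in the block you could try to steal

ev_honest = fees_current_block
ev_snipe = p_win_reorg * fees_previous_block

print(ev_honest, ev_snipe)   # sniping dominates when the fee gap is this large
```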

How many people have a copy of the data you’re talking about?

Proof-of-publication stuff

With merge mining, I can decide whether a block is valid by whether the hash output, for a given nonce, is less than the target. Well, the target gets twice as hard to find as you go up. So you can make it so that with one attempt at a solution you can just pick how many of these blocks were actually valid. So for instance I might simultaneously try to mine all three of these, and some percent of the time I’ll find it at this level, and a fourth of the time we’ll find it at that 2^3 level. In my data structure I can build a merkle tree of those different levels, and for the levels where I fail, I just throw away that block attempt. When I succeed, I link successive chains together such that I create a timestamp relationship between one level and another. The interesting question is what do you do about reorgs? If you don’t handle reorgs, you’re in a situation where I could have a coin that existed say here, and… you don’t know if it’s a mistake; I spend some coin to a different chain, then I reorg around to screw it up. If I make the rule that reorgs up here invalidate whatever has been linked underneath, that’s a very easy way of ordering things. If I make it exist here, then it’s only confirmed once you link back up to the very top and then down again such that…. You don’t know if you’re linked to the top? The top’s a chain that everyone has consensus over. Because of merge-mined links. If I find a block at the top, that block committed to a block underneath it, and it then confirmed a further one back at that level. The ones at the top level didn’t validate anything– they don’t know what’s there, but they do know that if something changes it invalidates things that are linked to it… but they don’t know what it was. They can’t answer a true-false question. If I give you the data that I have, I can show you that the data still exists. There was a double spend or not; and then in another chain, I can show you that there was also not a double spend in that period of history. The definition of spend here commits to a new chain, and that starts a new set of history. To prove an output is valid, it comes with a ton of chain history for that output. If your block times all the way down at the bottom are low, so that… high, rather, so that you have a block every 20 minutes and they are 1 kb, your proofs are not that scary. If each block in a particular chain is on the order of 1 kilobyte, including the additional data that you need to prove its existence, which might be little… 1 kb is a reasonable number. It’s easy to imagine the size of those proofs; in my simulations they got to on the order of 100 megs. I can have my block commit to a hash of some hash preimage that would normally get revealed as part of the spend, so in most cases that thing can immediately rule out that block as a potential spend. When I go to prove it, I don’t need to provide you that bit. This is similar to SPV proofs: if you know the block is valid then I don’t need to give everything in the block to you, because we already know the validity of the block.
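A minimal sketch of how a single hashing attempt can simultaneously qualify at several merge-mined levels, assuming a hypothetical bottom-level target and that each level above it is twice as hard; the surrounding data structure, including the merkle tree over levels, is not shown here:

```python
import hashlib

BASE_TARGET = 2**252          # hypothetical easiest (bottom-level) target

def qualifying_levels(header: bytes, nonce: int, num_levels: int):
    """Return the levels (0 = bottom) whose target this one attempt meets."""
    h = int.from_bytes(
        hashlib.sha256(header + nonce.to_bytes(8, 'big')).digest(), 'big')
    # each level's target is half the one below it, i.e. twice as hard to hit
    return [lvl for lvl in range(num_levels) if h < BASE_TARGET >> lvl]

# usage: roughly half of bottom-level solutions also solve level 1,
# a quarter solve level 2, and so on
print(qualifying_levels(b'example-header', 12345, num_levels=4))
```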

Why would miners include any messages at all? You could do this not at the consensus layer, but rather at the convention layer. If I am a miner and I give you a signed transaction that is only valid if it’s in a mined block, and if it is valid then you can add an output… without your thing, I need to know the entire history of the coin. I think it’s feasible to provide you the history. So it’s out-of-band. It could be peer-to-peer. I can pay you coins from the same shard, but the transaction itself might involve different shards. So I can send you short proofs for the coins on the shard that the miner exists on. So the input I give the miner is valid. It’s valid only if some other data happens to be published. The other data might be valid or invalid, but you don’t care; the part of the transaction you care about…. But what if miners want to maximize their revenue? As a miner, you can probably say that 10% or 5% of people want to use a different coin– perhaps we don’t care about that, and we demand transaction fees in the coin that everyone uses.

Your input commits to a merkle sum tree of inputs. Imagine that’s one coin. That merkle sum tree has this output that you wanted, and all this other stuff you don’t care about. It’s hidden in the merkle tree. But these inputs in turn have other merkle trees leading to them, paying to them as well. So the value here is declared to be, say, 0.01 coins. The other output might be 100 or something. That contribution, plus say the 999 coins going from this output, perhaps that’s enough. That means that if these inputs are fake, these outputs are invalid but those outputs are still valid. Because I’ve got two– for every output, you have one or more merkle trees feeding funds into it from the inputs of the same transaction. When you have an output that you spend, you’re committing to a distribution. Transactions specify the flow from each input to each output within that transaction. That’s different from bitcoin. Yes.
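A minimal sketch of a merkle sum tree along the lines described above, assuming sha256 and made-up tag bytes; the point is that each node commits to both hashes and summed values, so one branch can be proven without revealing its siblings:

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def leaf(value: int, data: bytes):
    # a leaf commits to a value and some opaque payload
    return (value, H(b'leaf' + value.to_bytes(8, 'big') + data))

def node(left, right):
    # an inner node commits to its children's hashes plus the sum of their values
    value = left[0] + right[0]
    return (value, H(b'node' + value.to_bytes(8, 'big') + left[1] + right[1]))

# usage: an output funded by two inputs, one of which the verifier never sees in full
a = leaf(1_000_000, b'input-a')   # the input flow you care about
b = leaf(500, b'input-b')         # hidden behind its hash in the proof
root = node(a, b)
print(root[0], root[1].hex())     # total committed value and the commitment
```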

The interesting thing is that you wanted to solve a problem where miners work as a big team to combat orphaning or stale rates or whatever… they are working as a team to make history replay differently. For them to participate at all, they need all the data. I want miners to be able to participate in the system without having everything. Well, the point I would make is that they want to maximize transaction fee revenue. They still need to know about validity. The output that pays them must be valid. They should only care about their fees. A transaction’s validity can be on a … basis. You as a miner know this is real. I’ve given you a proof for the fees. You don’t see any of this other merkle sum tree data. You just see half the merkle tree or something. But as a miner, would I like to know if there’s a transaction fee down there? All you know is that there is this transaction fee. If your option is you don’t know anything, then you get no money. You get some money. I give them some fee. That’s the standard. For this fee to be valid, you have to put all this….. As a user, it’s sufficient for me to make one output that pays the miners. There’s no reason for more than one output. Yes, you can encrypt the other half of the transaction. You never need to release the encryption key. One approach is to give the miner the linearized transaction history. An improvement on this is to give the miner coins on their own subchain or shard as the fee.

Imagine embedding this in bitcoin as-is, where miners are getting paid to include some data in OP_RETURN, but the data might be some… yes, this is similar to …. it’s different from merge mining. You have provable publication. Anything you publish has the exact same security as the chain it’s in. In merge mining, you’re linking to something but that something can be attacked. It’s more like extension blocks. You could implement this in bitcoin, yes. I have been asked to do this for colored coins. You can implement this in a way where in bitcoin it’s undetectable, because all these merkle links and such can be kept off-chain. The only thing that needs to be on chain is the thing that commits to the data. So for colored coins, for instance, the definition of a colored coin would be this transaction input, and then when you spend it you commit to a different data structure that in turn commits to another data structure, and then nearly all that data is off the chain.

Back to braids

Cohort time vs target difficulty. There’s an outstanding problem of merging blocks with different difficulty. This might overlap with Mark’s thing about continuously adjusting difficulty. The network is fundamentally asynchronous. Beads appear in parallel. If I want to allow the difficulty adjustment to also be asynchronous, that means there’s a range of asynchronicity that I can convert into the reward by taking the derivative of the reward function. What about taking the difficulty by taking the mer… of the last hash.. based on the difficulty? You want to get a measurement of the global hashrate, rather than the luckiest hashrate. You have a lot of data points in the beads. You can make a more complex function. Why choose the luckiest ones? You need a low-pass filter of the data points or something. Er, high-pass? Low-pass. So you want to make it… if you have a braid, then how do you pick the range? You are going to have uncertainty values. For consensus reasons, all I have to do is choose the derivative of the range to be larger than the asynchronicity of the network. Miners that are actively mining at the wrong difficulty… so to counter that, the wider you make this range, the more unlikely you make this. But how do I merge blocks at different difficulty? It’s a reasonable thing to do if they are close enough. But if someone mines the genesis block only and wants 100 coins, do I mine that or not? David used a 200-block look-back essentially for that, but that basically says I won’t allow network forks longer than that. I don’t want to proscribe the behavior of the network beyond that. Let’s say that someone gets cut off from the internet, there might be a war or something; I want there to be a way to merge all of this back in. If you define your merging algorithm, then we can get it back in. We need it to hold up under adversarial conditions. Bitcoin would use git merge, and Ethereum would use git rebase. It’s probably more important to pick a diff algorithm, not a merging algorithm.
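As a sketch of the low-pass filter idea (purely illustrative; the 200-bead window echoes the look-back mentioned above but is an assumption here), averaging work over many recent beads estimates the global hashrate rather than rewarding the luckiest single observation:

```python
def estimate_hashrate(beads, window=200):
    """beads: list of (timestamp_seconds, difficulty) in arrival order."""
    recent = beads[-window:]
    total_work = sum(d for _, d in recent)     # expected hashes ~ sum of difficulties
    elapsed = recent[-1][0] - recent[0][0] or 1
    return total_work / elapsed                # hashes per second, smoothed over the window

# usage with made-up numbers: 200 beads spread over ~1000 seconds
beads = [(i * 5, 1e12) for i in range(200)]
print(estimate_hashrate(beads))
```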