
Sydney Socratic Seminar

Speakers: Lloyd Fournier, Anthony Towns

Date: February 23, 2021

Transcript By: Michael Folkson

Tags: Research, Dual funding, Privacy problems, Lightning, Ptlc, Taproot, Soft fork activation

Category: Meetup

Topic: Agenda in Google Doc below

Video: No video posted online

Google Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

PoDLEs revisited (Lloyd Fournier)

https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html

We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal. I will go quickly on it because there are a lot of details in that post and they are not super relevant, because my conclusion wipes it all away. If you are a long time Sydney Socratic attendee we’ve discussed this before; this topic came up in maybe the first or second meeting, when the dual funding proposal was first made by Lisa from Blockstream. This is a proposal to allow Lightning channels to be funded by two parties. The reason for doing that is that both parties have capacity on both sides; both sides can make a payment through that channel right at the beginning of the channel. The difficulty is that it creates an opportunity for the person requesting to open the channel to say “I am going to use this UTXO”, wait for the other guy to say “I will dual fund this with you with my UTXO” and then just leave. Once the attacker has learnt what UTXO you were going to use he knows a UTXO from your wallet, and he just aborts the protocol. You can imagine that if you have a bunch of these nodes on the network offering dual funding, the attacker goes to all of them at once, gets a bunch of information about which node owns which UTXO on the blockchain, leaves, and does it again in an hour or something. We want a way to prevent this attack, to prevent leaking the UTXOs of every Lightning node that offers dual funding. We can guess that with dual funding your node at home is probably not offering that; maybe it is, but you would have to enable it and you would have to carefully think about what that meant. But certainly it is a profitable thing to do, because one of the businesses in Lightning is services like Bitrefill where you pay for capacity. If anyone at home with their money could offer capacity in some way to dual fund, it might become a popular thing and it may offer a big attack surface.

One very intuitive proposal you might think of is that as soon as this happens to you, you broadcast the UTXO that the attacker proposed and you tell everyone “This guy has a bad UTXO, you shouldn’t open channels with him because he is just going to learn your UTXO and abort.” Maybe that isn’t such a great idea, because what if it was just an accident? Now you’ve sent this guy’s UTXO around to everyone saying “He is about to open a Lightning channel”. Maybe not the end of the world, but the proposal from Lisa is to do a bit better using a trick from Joinmarket, which is this proof of discrete logarithm equality or, as we’ve called it, PoDLE. What this does is create an image of your public key and UTXO against a different base point. It is fully determined by your public key, by your secret key, but it cannot be linked to your public key. It is like another public key that is determined by your public key but cannot be linked to it unless you have a proof that links the two. What you do is, instead of broadcasting the UTXO, you broadcast this unlinked image. No one can link it to the onchain UTXO, but if the attacker connects to other nodes with that same UTXO they’ll be able to link it.
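To make the shape of that concrete, here is a rough toy sketch of a PoDLE-style proof of discrete log equality (a Chaum-Pedersen proof) in Python. A multiplicative group modulo a small prime stands in for secp256k1, and the second base point, hash encoding and variable names are illustrative only, not the actual Joinmarket or Lightning construction.

```python
# Toy sketch of a PoDLE (proof of discrete log equality), as used in Joinmarket.
# A multiplicative group mod a prime stands in for secp256k1; illustrative only.
import hashlib
import secrets

P = 2**127 - 1          # toy prime modulus (NOT cryptographically sized)
Q = P - 1               # toy exponent modulus
G = 3                   # "base point" G
H = 5                   # independent second base point (a NUMS point in the real scheme)

def hash_to_int(*items):
    data = b"|".join(str(i).encode() for i in items)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def podle_commit(x):
    """x is the UTXO's private key. P1 = x*G is the public key already on chain,
    P2 = x*H is the unlinkable 'image' that can be shared instead of the UTXO."""
    P1, P2 = pow(G, x, P), pow(H, x, P)
    k = secrets.randbelow(Q)
    R1, R2 = pow(G, k, P), pow(H, k, P)
    e = hash_to_int(R1, R2, P1, P2)
    s = (k + e * x) % Q
    return P1, P2, (e, s)

def podle_verify(P1, P2, proof):
    """Check that P1 and P2 share the same discrete log without learning it."""
    e, s = proof
    R1 = pow(G, s, P) * pow(P1, -e, P) % P   # g^s / P1^e
    R2 = pow(H, s, P) * pow(P2, -e, P) % P   # h^s / P2^e
    return e == hash_to_int(R1, R2, P1, P2)

x = secrets.randbelow(Q)
P1, P2, proof = podle_commit(x)
assert podle_verify(P1, P2, proof)
```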

This is suboptimal. The drawbacks are pretty clear. You have this new broadcast message, and you don’t want to add broadcast messages to the Lightning Network without a lot of careful thought. This is the main drawback. I made a post trying to avoid this because I happened to be studying this kind of proof, I came across it and thought “Let’s think about this.” I came up with some refinements of other proposals and thought “Maybe we can avoid this by doing this”. In the end it turns out none of them are good ideas. The reason none of them is a good idea is that you can broadcast to the network after something bad happens, but then the attacker can just do it in parallel. They can do it all at the same time to all the nodes and get the UTXOs anyway. That is the main thing, you don’t protect against parallel attacks with this. I talked with Rusty a bit and we figured out that the only proposal that can be modified to protect against parallel attacks is the PoDLE one. With the PoDLE one what you can do is broadcast it immediately, rather than waiting until something bad happens. You can imagine that is going to create a lot of gossip on the Lightning Network. Every Lightning channel that gets opened will get this broadcast. This makes the proposal even more sketchy. But it would actually work, we would actually protect against parallel attacks.

Having said all that, the main conclusion I had from all of this is that we shouldn’t do any of these proposals because of what we talked about in the last Socratic. In the last Socratic we had the authors of the paper “Cross-Layer Deanonymization Methods in the Lightning Protocol” and it showed that the UTXO privacy of Lightning is really not so great. You can use the onchain heuristics that you normally use from chain analysis and combine them with Lightning heuristics like the gossip messages about channel openings and channel IDs. You can combine those two and you can figure out who opened the channel and therefore who owns the change outputs of the channels. You can figure out essentially what UTXOs a node had, because they used that UTXO to open the channel and now they’ve got this change UTXO, so again there is a known UTXO in their wallet. These heuristics help you identify which nodes own which UTXOs. Although this attack helps you figure out which node owns a UTXO before they’ve used it, with these heuristics, if you wait long enough and you watch the chain, you’ll be able to figure out that information most of the time anyway. That’s the main conclusion. All these complicated proposals are trying to solve this protocol problem where you send the guy information and they leave the protocol. This narrow thinking led to these different ideas, but if you take a step back and realize what information already leaks out in the Lightning Network, the heuristics you can already use and that the chain analysis companies will actually use, it feeds into what they already do: they will be able to figure out that information already, without having to mount these special active UTXO probing attacks.

Let’s say I’m a chain surveillance company. I could run the parallel version of that attack to try to figure out what everyone’s UTXOs are and then combine that with my knowledge of the transaction graph and my knowledge of gossip data, so that I could form a more complete picture of who is paying whom and where the coins are going and that kind of thing. It would make it more difficult for them if you did have this. The other point I would bring up is that I recall from that paper, the “Cross-Layer Deanonymization Methods in the Lightning Protocol” paper, that they weren’t able to deanonymize all Lightning payments. It could be that just having dual funded channels helps in some sense break the heuristics that they are already relying on.

Dual funding may help the situation, that is a really good point. The question is whether leaving this attack surface open is advantageous to them actually. I think the jury is out. It is not all the time they get the heuristic right but my main conjecture is when they do get it right it is usually against these nodes that are churning. They are closing channels, they are opening new channels. I think that the nodes offering to open dual funding channels will be exactly these nodes. They will be these nodes that are online, you can connect to them, you can get some funds to lock in a channel and they will charge a bit of money from you. Once that channel is over they’ll quickly put it into another channel without doing any mixing or any tricks and these are the nodes where the heuristic just works all the time. The heuristic is not perfect but I think the heuristic really gets these nodes that will be doing this dual funding. The fact that we can scam these dual funding nodes and get their UTXOs from them, it is probably not worth it, these nodes probably don’t care that much because they are always creating channels with them and broadcasting that to the network. Public organizations, people active on the Lightning Network, they are not super concerned about that. I think it is not worth adding the complexity of these cryptographic solutions at this time. We can just get this dual funding specification done, Rusty agrees that it is pretty easy to add whatever trick we have in our bag that is best at the time later on.

WIP: Dual Funding (v2 Channel Establishment protocol): https://github.com/lightningnetwork/lightning-rfc/pull/524

BOLT 02: opt-in dual funding PR: https://github.com/lightningnetwork/lightning-rfc/pull/184

Is there an overlapping protocol here where there is some gain by still doing the PoDLE proposal or is there a bigger problem that engulfs the smaller problem such that this is completely pointless?

It is not completely pointless. The heuristics chain analysis companies could use, combined with Lightning information, give you a pretty good idea, if you watch the chain and you listen on the Lightning Network, of what UTXOs these active nodes own. They are always recycled from previous channel openings and closings. With PoDLE, they can still do the attack, it is just they can only do it to one guy once and then they have to move the UTXO to a different thing. The PoDLE thing is not perfect either but it would provide some guarantees. The question is whether it is worth having a guarantee of this kind of privacy to a small level when with other heuristics you are losing the privacy. It is not completely useless but we can’t see that it provides enough benefit for the complexity you have to add to actually realize this cryptographic proof with Lightning. It would be the most complicated cryptography you would have on Lightning so far and it would also add another gossip message. You would then have to rate limit people if they are spamming this message. You can’t really tell which message you should keep and which you shouldn’t. It is a can of worms that I think would best be avoided unless we know that this is actually really useful against chain analysis people. That is my conclusion.

A limited gain but to get that limited gain the complexity is not worth it.

Yeah, I think it is likely you may never ever see this attack. It is possible you will but it is likely you may never see it done systemically and routinely by these major companies.

It sounds like it is only useful when you are deploying new funds on a Lightning node. When you are recycling funds they are already known because they were part of public channels that have been advertised.

Correct. You would learn about new funds before they are used but they are eventually going to use them in a Lightning channel and you’re eventually going to figure that out. Then you know the funds in the future, you just have to wait. You don’t get the information as early as you would but you will get that information eventually.

Lightning dice (AJ Towns)

https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002937.html

Slides: https://www.dropbox.com/s/xborgrl1cofyads/AJ%20Towns-%20Lightning%20Dice.pdf

StackExchange question: https://bitcoin.stackexchange.com/questions/4609/how-can-a-wager-with-satoshidice-be-proven-to-be-fair

Satoshi Dice

A similar idea to Satoshi Dice, anyone not heard of Satoshi Dice? Satoshi Dice was way back in the day. Erik Voorhees made a whole bunch of money off it. The idea was you send some coins to an address and if Satoshi Dice thinks you won then you get more coins back, and if it thinks you didn’t win then you don’t get any coins back or you get far less. It was really simple. The big idea about it was that it was a trust but verify thing. You’d have to trust it for a day but after the day you could verify that it wasn’t cheating you. I think if you do a web search you will find some comments on Reddit that at least once upon a time it went down the next day and didn’t pay out. People figured that was one way of it cheating. Those transactions, you win money and you just have to send something. If there were no fees then why not? There were millions and millions of transactions. It got to the point at times where you couldn’t get real transactions through for all the Satoshi Dice spam. The 50 percent double-your-money-or-lose-your-money address has 3 million transactions on the blockchain at the moment.

Do this with Lightning?

An idea I’ve kept thinking about is how we could do this with L2 because L2 lets you do, in theory, similar stuff and it takes it all offchain so you don’t have to worry about fees or spam. The thing I’d really like to be able to do one day is have something like if you have shares as an Ethereum smart contract or something, ideally you’d like to be able to do share trading over Layer 2 instead of Layer 1. One of my things is I’d like to figure out a way of doing that sanely. That’s where my goal is for this eventually.

PTLCs

The background for it is that instead of HTLCs, which Lightning does at the moment, we will use PTLCs, because we can do math on PTLCs. With hash timelocked contracts you reveal a hash preimage and, because of the way hashes work, you stop there. With point timelocked contracts you can do ECC math, elliptic curve cryptography, and at least get addition, maybe a little bit of multiplication, so that you can actually do slightly more complicated things with it. One of the things you can do with it is make a partial Schnorr signature. You create your Schnorr signature and you reveal half of it to the other guy, which doesn’t actually tell them anything useful. Then once you’ve revealed a point preimage they can take that partial signature and it becomes a valid signature at that point.
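As a rough illustration of that partial signature trick, here is a toy adaptor-signature sketch in Python. The same caveat applies as before: a multiplicative group modulo a small prime stands in for the curve, and the parameters, message and hash are illustrative, not the real PTLC construction.

```python
# Toy sketch of the PTLC idea: a partial (adaptor) Schnorr signature that only
# becomes valid once the point preimage is revealed. Illustrative only.
import hashlib
import secrets

P, Q, G = 2**127 - 1, 2**127 - 2, 3   # toy group: prime modulus, exponent modulus, generator

def h(*items):
    return int.from_bytes(hashlib.sha256("|".join(map(str, items)).encode()).digest(), "big") % Q

x = secrets.randbelow(Q); X = pow(G, x, P)      # signer's key pair
t = secrets.randbelow(Q); T = pow(G, t, P)      # point preimage t, "point" T = t*G
r = secrets.randbelow(Q); R = pow(G, r, P)      # signing nonce
m = "Carol agrees to the bet"                   # hypothetical message

s = (r + h(R, X, m) * x) % Q        # full Schnorr signature (R, s)
s_partial = (s - t) % Q             # the "half" handed over up front

# Anyone can check the partial signature is exactly one preimage away from valid:
assert pow(G, s_partial, P) == R * pow(X, h(R, X, m), P) * pow(T, -1, P) % P
# Learning t (e.g. by paying the PTLC) completes it into a real signature:
assert pow(G, (s_partial + t) % Q, P) == R * pow(X, h(R, X, m), P) % P
```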

Trust

This is a “trust but verify” model, it is not a “don’t trust, verify” model. You do have to trust for a little while. Your funds could get instantly stolen, but at least you will know after the fact that they have definitely been stolen and that you don’t need to deal with this guy again. For something like Satoshi Dice that might be reasonable because you are only gambling 5 cents and trying to win a couple of dollars. For other things it might not be reasonable. That’s the security model. By giving up trustlessness there we can say that we don’t need to lock up funds. We are going to trust them anyway, so locking up funds for 5 minutes while we resolve the protocol is no big deal. Not locking up funds means you can’t get your funds locked away from you when the protocol fails or whatever. That’s the trade-off: whether the risk of getting funds immediately stolen is worth the benefit of not having to have funds locked up for maybe 30 days until a timelock runs out.

The wager

To be a bit more specific about the wager here. The idea is you are having a bet on something completely random that has got no skill involved whatsoever. It has just got odds of winning and you agree on a payout if you win or not. In particular, pick some numbers, add them together; if the result is less than some agreed condition then you win, if it is greater then you lose. I’m using 256 bit numbers here rather than just numbers between 1 and 1000 because that way when you create a point from them it is random. If you just had the numbers 1 to 1000 then someone trying to hack you could try every single one of those and figure out from the point exactly which number you picked.
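A minimal sketch of that win condition, assuming the odds are simply encoded as a threshold over the 256 bit range (the exact encoding in a real implementation would be something the two parties agree on):

```python
# Toy sketch of the wager condition: both sides pick 256-bit numbers, and the
# bettor wins if the sum (mod 2^256) falls under a threshold set by the odds.
import secrets

def wins(b: int, c: int, win_probability: float) -> bool:
    threshold = int(win_probability * 2**256)   # a real protocol would fix this exactly
    return (b + c) % 2**256 < threshold

b = secrets.randbits(256)   # bettor's secret number
c = secrets.randbits(256)   # casino's secret number
print(wins(b, c, 0.48))     # e.g. 48 percent odds on a double-or-nothing bet
```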

Protocol design

The goal is to take all of those ideas and then get a protocol where we are not doing anything onchain so we can communicate over the web like HTTPS or something and we can send PTLC payments. We don’t want to use any more primitives than that and we want to make sure that it is a safe protocol. If someone stops playing, stops participating in the protocol that doesn’t let them cheat.

Steps

That’s the specifics of it. The key concepts are: you connect to the website, you decide that you want to make a bet of some sort, you pick the random number (b) and you calculate the corresponding point (Pb=b*G), and that lets you get started. You send the public parts of those over to the casino. The casino decides to accept the bet. If they don’t want to accept the bet, because they’ve run out of funds or they are under regulatory pressure or something, you just stop there and no one is any worse off. Carol picks her number (c), calculates the corresponding point (Pc=c*G) and she signs the message “Carol (C) agrees to the bet with Bob (B) with conditions (Pb)/(Pc)”, giving a signature (R1,s1). A signature on this message is later going to be what you use to say that the casino cheated you out of your money. Carol is going to send a partial signature (R1,s1-c) and her point (Pc) to the person who made the bet. The way the math works is that if you take those two numbers and add c to the second one you get back the original signature. If you don’t get c you can’t get anything out of it, because as far as you know c could be any number between 0 and 2^256.

Q - What if there is a payment failure, or the channel is exhausted at the precise moment the person is trying to claim their payment? It cannot go through unless there is capacity?

A - If a payment doesn’t go through for whatever reason that’s the same as they didn’t pay you. If you are at a shop and the bank declines your card then you didn’t pay for the goods. For the purposes of this I am assuming that Lightning works really well. Payments go through instantly, they get cancelled instantly, there isn’t much of an incentive for anyone to hold up payments in this protocol. You don’t want to have the payments get stuck halfway or something as per normal.

Q - You wouldn’t lose money. Your protocol is secure even in the case of stuff like that happening.

A - Yeah.

At this point no money has changed hands at all. You’ve just gone on the website and exchanged some information. The next thing is that Bob needs to check that this (R1,s1-c) is actually the correct partial signature which is doing some elliptic curve math.

(s1-c)*G = R1+H(R1,C,m)*C - Pc

Assuming that is alright then Bob makes the bet. He pays this wager to get this value c over Lightning as a PTLC. He bets his 5 cents in return for Carol’s number.

Q - What does Bob verify here? Does he verify that Carol has chosen a random number and it was Carol that has chosen that number?

A - Carol is sending Pc and (R1,s1-c) across. The Pc is the point committing to the random number and the signature is the signature but adjusted by the random number. What Bob is verifying here is if he gets the random number that corresponds to the point and then applies that to the signature he’ll actually get the signature for what he thinks he is going to get back. At the moment this signature on its own doesn’t mean anything because he doesn’t know what c is. Because he does know what Pc is he can bring these numbers up to the elliptic curve thing and do all the math because all the elliptic curve points are public.

Once he’s got the preimage for Pc he knows Carol’s number and he can calculate the signature (R1,s1-c+c). At that point he can figure out whether he has won or lost. If he’s won he can provide the original signed message along with the numbers b and c and prove to everyone that Carol owes him money.

Like I said he works out if he won or not. If he did win he is going to sign a message saying “Carol (C) has paid me (B) my winnings for our bet with conditions (Pb/Pc)” so that he can eventually give this signature to Carol when she’s paid him and everyone can be square. In the future if Bob tries claiming that he’s been cheated Carol will say “I’ve got this signature from Bob and he says I paid the money so we’re definitely square.” He gets the signature (R2,s2). He sends the signature nonce (R2) to Carol and he sends his number b. At this point Carol can evaluate b and c and tell that Bob has actually won. He sends a Lightning invoice for his winnings, and payment of that Lightning invoice will reveal s2, which gives Carol her signature. Carol needs to check that the signature stuff is right, that Bob did actually win, and then pay the invoice so everyone is square.

(Carol checks that b*G = Pb, and that R2 together with the PTLC payment will yield a valid signature)
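Putting those steps together, here is a toy walkthrough of the flow just described, in the same stand-in group as the earlier sketches. It is illustrative only: the real protocol would use Schnorr over secp256k1, with the c preimage delivered by a Lightning PTLC payment rather than handed over directly.

```python
# Toy walkthrough of the bet flow: Carol's partial signature, Bob's check of
# (s1-c)*G = R1 + H(R1,C,m)*C - Pc (translated into the toy group), and the
# completion once c is revealed by the PTLC payment.
import hashlib
import secrets

P, Q, G = 2**127 - 1, 2**127 - 2, 3

def h(*items):
    return int.from_bytes(hashlib.sha256("|".join(map(str, items)).encode()).digest(), "big") % Q

# Bob picks b; Carol picks c and holds key pair (x_c, C).
b = secrets.randbelow(Q); Pb = pow(G, b, P)
c = secrets.randbelow(Q); Pc = pow(G, c, P)
x_c = secrets.randbelow(Q); C = pow(G, x_c, P)

# Carol signs the bet message and strips c out of the signature.
m1 = f"Carol ({C}) agrees to the bet with Bob ({Pb}) with conditions {Pb}/{Pc}"
r1 = secrets.randbelow(Q); R1 = pow(G, r1, P)
s1 = (r1 + h(R1, C, m1) * x_c) % Q
partial = (s1 - c) % Q                       # Carol sends (R1, partial) and Pc

# Bob's check that the partial signature is one c away from a valid signature.
assert pow(G, partial, P) == R1 * pow(C, h(R1, C, m1), P) * pow(Pc, -1, P) % P

# Bob pays the PTLC, learns c, and now holds a full signature from Carol.
s1_recovered = (partial + c) % Q
assert pow(G, s1_recovered, P) == R1 * pow(C, h(R1, C, m1), P) % P

# If Bob won, Carol later checks b*G == Pb before paying out the winnings.
assert pow(G, b, P) == Pb
```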

Does it work?

A couple of caveats with that. There’s the possibility that Bob has chosen a winning number, he gets Carol’s c and then he just stops. At this point he has a signature that says Carol owes him money, but he has never told Carol anything, so Carol doesn’t know that she owes him any money. If you are going to be publishing public proofs of “Carol has cheated, she owes me money, she hasn’t paid, and she can’t produce a signature from me saying she’s paid because I never gave her anything, so she’s a cheater”, you need some way to deal with that. If you are dealing with a court where people go in person then at that point the mediator can sort that out. Otherwise the mailing list post about this has some ideas for how you could use Lightning to verify a public proof of things, which then has further complications. The other thing is what if Carol does an exit scam. She has got a bunch of bets, she owes half of them winnings, and then she doesn’t bother paying and doesn’t do anything more ever again. She doesn’t care if her reputation is ruined.

Does this generalize?

That’s the bad side. The good side is that absolutely none of this is onchain. It is all purely Lightning transactions, it is not even complicated ones. The stuff you do before you call the Lightning client is a bit complicated but the Lightning stuff itself is just PTLCs. The other nice thing is that all the complicated conditions on the bets are not even getting passed to Lightning. They are not onchain, the Lightning client doesn’t have to know anything about them. You can generalize that just by changing a Satoshi Dice website, Javascript, Android app, whatever. You could make that both more complicated, instead of just a dice roll you could have different levels of pay-offs, like a scratch it or something, or you could have payouts dependent on some oracle, sports betting etc. In general I think you can have pay outs of any sort of nature as long as they only end up being payouts in Bitcoin and not payouts in some other asset like cash or stocks or something because you can’t send those over Lightning in the first place. I think in theory because no funds are locked at any point as long as you trust that the casino is not going to do an exit scam on you you can extend this for bets that go over a longer period of time as well.

Implementable

It is pretty close to implementable. The Suredbits guys have a PTLC proof of concept that works on testnet I think and should work on mainnet. If you are doing PTLCs with Taproot then at least theoretically you can do them on Signet already. That’s my theory.

Q - At the beginning you talked about how this may be related to other applications other than shares. On the mailing list you talked about how it might be used for prediction markets as well. I don’t get that bit, how does that relate?

A - A prediction market says you win 5 dollars if something happens and you lose 2 dollars if it doesn’t happen. The “it happens” is some condition that, at least with an oracle, you can verify programmatically in some scripting language, and you can define whatever scripting language you want for this. For a prediction market you want the predictions to last over a moderate period of time. You’ve got the exit scam risk, but otherwise I think that all works as a trusted but verifiable prediction market. The prediction market is holding all the money until the prediction gets resolved, so you’ve still got that risk. But you are still able to do everything over Lightning and you’ve got this level of proof that they’re behaving properly.

Q - There was already this idea of paying for signatures but this is taking that a bit further. You pay for a signature and you can claim that against someone under conditions that are in the signature. The clever thing is you’ve got cryptographic commitments to the lottery numbers you’ve chosen in there. That’s the breakthrough. You can take that claim against the guy and if they don’t fulfill that then you publish it. That is really what the breakthrough is.

A - Shares are pretty hard because we can’t do Lightning over anything but Bitcoin. Or at least there are serious problems with doing Lightning over anything but Bitcoin at the moment. You could maybe do “if the share price is such and such then you get paid such and such in Bitcoin”, which you could then use to buy the shares afterwards, that sort of option. That devolves everything to a prediction market. The biggest problem with trading shares and options is locking them in. If you want to trade shares over Lightning then every hop in your Lightning Network has to own some shares that they can move from one channel to the other. This is probably completely impossible to make work. I think this might be a step forward to making that a little bit more reasonable because it gives you some of the benefits of decentralizing stuff in the casino model. I’m not sure how that works, I’m just hopeful that maybe one day it does.

Q - With the shares, the problem you just described as to finding a way to trade shares over Lightning, couldn’t you do an atomic swap? You are transacting Bitcoin like you said but then it automatically uses an atomic swap that swaps it into an asset in another blockchain.

A - In theory that sounds good. I think it has the same problem as multi-asset Lightning, which is that because of the timelock… If you are trying to pay 100 US dollars to someone and you’ve got Australian dollars, you send 130 Australian dollars, that gets forwarded as 129 Australian dollars, 128 Australian dollars, then gets converted. The problem is that your Lightning stuff has a timelock of a few days. So if the price is changing over a few days that’s giving the guy who is doing the conversion a free option: cancel your transaction, or send the 100 dollars and take 5 dollars profit because the Australian dollar has improved. I don’t see how you reduce the timelock enough to make that work. If you could do that then yeah, you’re probably right. If you have your Lightning channel with your shares against a broker, and the broker has got a lot of shares that get distributed amongst their clients, and they want to transfer that across to some other broker, they are just moving money to settle the difference, the end broker reconverts that back into shares, and you solve this multi-asset problem, then I think that would be pretty cool.

Q - That is a problem a lot of people right now in the normal market are betting on as well, on just having the little window of time to play with. I don’t know if it is an actual problem.

A - The whole two day window of shares getting settled was why Robinhood had to cancel the GameStop things.

Q - Did you ever use Satoshi Dice in the early days? What were people doing?

A - I wasn’t around in the early days.

Q - Were people literally making really tiny bets just on random numbers? The equivalent of a few cents?

A - It was on Reddit or Bitcointalk. People were like “I made all this money on Satoshi Dice”. That is what I saw people do.

A - The news articles were reckoning that 400 million dollars in Bitcoin had gone through it back in the 2012, 2013 days.

A - As far as I remember more than a million Bitcoin went through the most popular address on Satoshi Dice, a million Bitcoin. It was filling up the blocks.

Q - It was purely betting on random numbers? Is this a possible thing to kick off use of Lightning? Could we all make tiny bets of a few satoshis on random numbers? What is to stop that from happening?

A - Let’s do it, let’s figure it out. That’s the coolest thing about AJ’s thing, it lets us figure out the answer to the question without more complicated stuff. We can actually build this now.

Q - How do you build in a house edge? You have a certain probability and that’s how the house gets their edge?

A - The idea is that the casino makes an offering. The 48 percent odds tell you which set of numbers win, and the payout is part of the message, which says “I made this bet with Carol, I am owed 50 satoshis because I bet 25 satoshis on these conditions (Pb/Pc)”, and then I will tell you what b and c ended up being so you can see that I actually did win with the 48 percent odds.
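As a rough worked example with those numbers, the edge simply falls out of the gap between the odds and the payout:

```python
# Rough house-edge arithmetic using the numbers above: a 25 satoshi bet that
# pays out 50 satoshis with 48 percent odds.
stake, payout, p_win = 25, 50, 0.48
expected_return = p_win * payout          # 24 sats back on average
house_edge = 1 - expected_return / stake  # 4 percent edge for the casino
print(expected_return, house_edge)
```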

Q - Another thing I recall from those days is they would keep doubling up and doubling up and doubling up.

A - That’s martingale bets.

Q - Yeah. What if at the moment you were trying to do the payment there was literally not enough capacity in the route, even using MPP or whatever. There just wasn’t enough to route to that person. Is that person left in the lurch? They have technically won but they aren’t able to claim their payment. Would they need to try to get more inbound capacity and then claim it? How would that work?

A - If they don’t have inbound capacity then they are a bit screwed. If you are making a bet that is big enough that it is worth doing onchain then you could do it onchain. But if you don’t have any inbound capacity and you don’t have dual funded channels, well, nobody has any inbound capacity when they start out with Lightning, so this is all screwed in that case. If you are making a bet where you are winning 1 percent of the time and you get 95 times what you bet, then maybe you send the money on Lightning and you get paid on the blockchain.

Q - One of the reasons Satoshi Dice was successful was because fees were basically zero. What’s the latest state in terms of fees on Lightning? For historical purposes it is great that everyone doesn’t have to store these transactions because it is on Lightning but you are still potentially having to store lots of tiny HTLCs until you close the channel. Maybe the really tiny bets don’t make sense on Lightning.

A - All of these PTLCs can get resolved immediately. You send your wager over and you are trying to get c back, there’s no need to keep that open for any length of time because Carol already knows the c that she decided. She has to look that up in a database, send it through, collect the funds, funds go in the channel and it is done. The other thing about this is if you are doing everything on Lightning then apart from the casino’s edge all the money is going in from a dozen people and then getting paid back out to six people so the casino’s channels should stay relatively balanced.

Q - Transaction fees are currently really low on Lightning. I read an article a few weeks ago; if you look at what transaction fees are currently charged on Lightning then the only conclusion is that nobody is making money on transaction fees. It is not economically viable at the moment. It is just a bunch of hackers keeping up the Lightning Network, nobody is making money off it.

A - Alex Bosworth sounds like he’s making money on it if you read his Twitter stream.

Q - I think he is but he is probably the outlier. That said, a Bitfinex guy, he said they have done 12,000 Lightning payments in and out of Bitfinex. They are one of the leading Bitcoin exchanges who supported Lightning very early on compared to others. Potentially as this bull market heats up you’ll see more and more people getting on Lightning. We are seeing promising signs.

A - The longer you keep channels open the higher chance you are making some profit off them. Obviously you need to rationalize your fees. Regarding a casino like this if you were a gambler you would open a direct channel and play for free.

Q - Have you seen lightningspin.com?

A - That’s been around for a while?

A - I think it was Rui Gomes from OpenNode who created it. I think he sold it off to somebody else when he went to go work on OpenNode.

Q - My hypothesis is fees are going to go crazy, if we are in a bull market, towards the end of the bull market fees are going to go crazy onchain and then people are going to be forced to use Lightning. Then I expect the fees will go up on Lightning and then there may be money to be made for Alex Bosworth and the expert routers.

A - If fees go up on Lightning at least you can always make more capacity on Lightning.

A - With a routing node that has a channel pointing towards a use case like this, which is a balanced use case, you can afford to lower your fees because the traffic will be much higher. Whereas if it is a trading peer, for example on Bitfinex everyone is just depositing, they are not withdrawing on Lightning, so the traffic is unidirectional and you need to increase your fees a lot to make back the cost of rebalancing or opening a new channel. The same is true for the Loop node, a submarine swap service, where everyone is just swapping their money out to onchain rather than the other way. I think it is more affordable to have low fees towards this kind of service.

Taproot activation

Taproot activation meeting 2: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html

Let’s talk about Taproot activation so we can actually have this thing working on mainnet.

So after 2017 and the whole UASF thing there is kind of a split community. There are people who think 2017 played out such that UASF meant SegWit was activated and UASF was the crowning glory, and then you’ve got the other half of the community that thinks UASF was totally reckless, it should never have been attempted, and the only reason Bitcoin didn’t die was because a bunch of smart, conservative people had discussions with miners and made sure it didn’t split the network and kill Bitcoin etc. I think people are in those camps and it is informing how strongly they feel about this lockinontimeout discussion. We have a revised BIP 8 and within that there is a parameter called lockinontimeout. Some people think that should be set to true as the default, which would basically mean that Taproot definitely activates after a year: if miners fail to activate, it will activate at the end of the year regardless. lockinontimeout being false would mean that if miners fail to activate then it wouldn’t activate after a year. It only comes into play if miners fail to activate. Given that miners have said that they support it, and there doesn’t seem to be any controversy, you would expect it to activate before that lockinontimeout parameter is even relevant. Yet people have very strong views on whether it should be set to true or false.

I think a lot of those arguments are kind of symmetrical. Some will say “You are totally reckless proposing some people run lockinontimeout=true, because Core should release lockinontimeout=false and if you release something with true then you can split the network.” But the same argument applies the other way: if Core released true, anybody running software with false is potentially risking a chain split on the network. It is hard because you have strong views on both sides. For a one off soft fork I lean towards doing true just because I think it is cleaner; you don’t need any unorganized UASFs because you definitely know it is going to activate after a year. But Greg Maxwell and David Harding had this argument that developers shouldn’t be releasing something that is definitely going to activate. That puts too much power into their hands, sets a bad precedent etc. They could in future push a soft fork that isn’t widely popular amongst the community and it would end up activating because developers want it to activate. I don’t know why people are getting so passionate about it and so rigid in their views. My preference would be true but I’m also happy with false. Luke Dashjr is very strong on true being set and Matt Corallo is very strong on false being set. I think AJ wavers between the two.

I don’t know how it is going to resolve. My expectation is that Core will release false if it releases anything, and then there will be a community version that releases true. I think that is what will happen because a number of Core contributors are against setting lockinontimeout (LOT) to true in Core. So either Core releases nothing and we try to activate this with non-Core releases, community releases, or Core releases default lot=false and then there’s potentially a community release with lot=true that some people can choose to run. That is how I think it will end up. I am just trying to make sure that people don’t start shouting and swearing at each other. I think that has been my primary job so far.

I think that was a fair summary at least from what I understood.

How familiar are people with what BIP 8 actually does? Has anyone looked at BIP 8?

There is a picture. The idea is that we define a new deployment, we say “We want to deploy Taproot.” The green section (ACTIVE) is when the Taproot rules apply. We start off in the DEFINED state and we eventually reach start_height, whatever that parameter is set to; we seem to be looking at late July for that. It goes into STARTED and at that point miners, or rather blocks, can signal to enable activation or not. If a threshold of blocks in a retarget period, 2 weeks, 2016 blocks, signal it, and the threshold we’re thinking about is 90 percent, then we go into LOCKED_IN. We spend another 2 weeks sitting in LOCKED_IN so people can go “Oh my g*d, this is coming in two weeks” and actually upgrade and not just talk about it. At the end of the two weeks it is ACTIVE. If we’ve got this lockinontimeout thing set and we get almost up to the timeoutheight then we reach a MUST_SIGNAL phase. During that MUST_SIGNAL phase there has to be 90 percent of blocks signaling. At the end of that we do the same LOCKED_IN, ACTIVE thing. If we don’t have lockinontimeout set and we get to the timeout then it just fails and we don’t have Taproot active. The whole having to do things, rather than just making sure everyone is happy with everything, kind of sucks. People get upset about that either way. The ideal path is this: we go through STARTED, we skip MUST_SIGNAL, we get enough signaling to go into LOCKED_IN naturally and we go to ACTIVE. Odds on, touch wood, whatever, that’s what will happen. The MUST_SIGNAL is there as a safety valve so that we don’t have to do all the BIP 148 stuff at the very last minute like we did last time.
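A simplified sketch of those state transitions, following the description above rather than the exact height bookkeeping in BIP 8 (the heights in the example call are made up):

```python
# Simplified sketch of the BIP 8 state machine as described above, evaluated
# once per 2016-block retarget period. Real BIP 8 has more detail.
THRESHOLD = 0.90  # share of signaling blocks needed in a retarget period

def next_state(state, height, signal_share, start_height, timeout_height, lockinontimeout):
    if state == "DEFINED":
        return "STARTED" if height >= start_height else "DEFINED"
    if state == "STARTED":
        if signal_share >= THRESHOLD:
            return "LOCKED_IN"
        if lockinontimeout and height + 2016 >= timeout_height:
            return "MUST_SIGNAL"      # lot=true: the final period before timeout must signal
        if height >= timeout_height:
            return "FAILED"           # lot=false and miners never activated
        return "STARTED"
    if state == "MUST_SIGNAL":
        # Blocks that fail to signal in this period are rejected, so the
        # threshold is met by construction and we lock in.
        return "LOCKED_IN"
    if state == "LOCKED_IN":
        return "ACTIVE"               # one more retarget period, then the rules apply
    return state                      # ACTIVE and FAILED are terminal

# Happy path: enough signaling during STARTED goes straight to LOCKED_IN.
print(next_state("STARTED", 700000, 0.93, 695000, 750000, False))  # -> LOCKED_IN
```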

Just to clarify you don’t get into that MUST_SIGNAL phase if you’ve set lot to false. That MUST_SIGNAL phase is only accessible if lockinontimeout has been set to true in your software.

Importantly it is upgradeable. If the majority of people have lockinontimeout set to true it drags everyone else in, because by forcing all the miners to signal, anyone who was just going “It hasn’t timed out yet” sees that they’re all signaling. It has this property that you could start with lockinontimeout being false and decide later “No, that’s it. We are going to go hard, we’re going to turn it to true.” Those who set it to false will also just follow along, assuming a majority has set true.

If the chain with the most proof of work on the network has enough signaling then that will satisfy both lockinontimeout=true and lockinontimeout=false nodes. They will all end up in the same state. They will all be enforcing Taproot rules. The only difference is if somehow there’s only a minority of people with lockinontimeout=true and they don’t get the most work chain. They’ve got maybe a 20 percent hash power fork, so it is a much shorter chain, it is much slower, it activates Taproot and the longer 80 percent chain doesn’t, and then you’ve got a network split like the BCH thing. Taproot is on the wrong side of it for some reason and everyone is unhappy.

There is a valid chain split risk if say half the network is running lot=true and half the network is running lot=false because some people will be enforcing that MUST_SIGNAL phase and other people won’t. You could potentially have a chain split if we get to that scenario. We only get to that scenario if miners have failed to activate for a year.

Even in that scenario miners have to also mine both chains. In that case the miners would also have to be split. If the miners were unanimous then you would get one chain stopped and the other chain still going. You’d need everyone to split. If one large group agrees and has a supermajority it is fairly straightforward what happens. If there is a huge amount of contention it is all bad, but I think that statement is generally true in Bitcoin anyway.

If the upgrade works like you said what’s the argument against doing that? It is a soft window. Give them a chance and after a little while, how is that not the best option?

That’s a good question. This is a topic of some debate. For me, you can’t back out. Once you’ve set lockinontimeout to true in a deployment that’s it, you are committed. You can’t later on go “That was a mistake, wait, hold up everyone.” At the moment with the old style lockinontimeout=false, if we found a bug or it turned out to be a terrible idea and we should not do this, we could kill the activation and tell the miners “Please do not activate”, nobody sets lockinontimeout=true, we let this one die in a year’s time and we do the real one later. That is an unlikely scenario. But in a meta sense I think it is really important, because this way everyone gets their say.

The developers go “We have proposed it, we’ve created it, we’ve set up the conditions, but we are not going to be the ones who dictate that it happens.” If the network rejects it, the network rejects it. Then the miners get their shot, they get a chance to activate it. And then the users get their shot, they get a chance to say “F*** you, the miners are being obstructive. We really believe this should be activated so we are going to set lockinontimeout to true.” If the economic majority of Bitcoiners do that then the miners can mine the other chain if they want to but it won’t be worth anything. The thing I like about lockinontimeout=false is it gives everyone a chance to have their say. If things go well and we all get consensus then we know that everyone has bought into this.

You can never truly know you have consensus in Bitcoin. We believe that miners are enforcing SegWit today, we believe that nodes are enforcing SegWit, but the only way to find out is to actually mine a block that violates some of the rules. It is an expensive test and even then it is only a statistical test. For all you know everyone stopped enforcing it last week and you just haven’t been told. You can never truly know you are in consensus, so we have to have this mechanism that is slow and gradual. Everyone signals and says “Yes” so we are all fairly convinced that we are in this together. That applies to old soft forks, it applies to new soft forks. I think this one is going to be pretty uncontroversial and go through whatever we do, but I like the idea that in future we have this situation where everyone gets their chance to have their say, so that we are all confident that we are all in agreement. We’ve not only got to all be in agreement, we’ve all got to know we’re in agreement, because you can never really know what the network is doing or what software people are running. I do like the idea of roughly three groups, the nodes, the developers and the miners, all getting an opportunity to sign off on this.

At the end of the day the economic users of the network can impose whatever rules they want. They can override everyone. If they set rules and the miners don’t like it, the miners just mine something that isn’t Bitcoin. They always eventually have the override, but as far as possible it is good if we have this broad consensus. And I don’t want a perception that any one group is in control; I don’t want a perception that developers can define what Bitcoin is and when we can upgrade. Developers don’t want this either, because if they are perceived to be a point of control on the network then they are going to have the s*** lobbied out of them over every single issue and upgrade that anyone could ever want.

If it is decided that actually the devs set lockinontimeout=true and it is going to activate in a year come hell or high water, then we’ve removed an important check and balance on future development decisions. I don’t think developers want that, because the next thing that happens is that you start lobbying developers for every possible change you might want. As we see Bitcoin become more serious that is exactly the kind of gamesmanship we expect to see. By making it very clear that the devs do not activate things, they get to step away from this decision making process. They always have a huge influence of course, and there are day to day decisions they make that do have a huge influence, but on something as key as this I think it is really important that it be very clear that developers are not in control, and that bribing or threatening the developers does not give you control of the Bitcoin network. For their own sake they do not want to be in the driver’s seat on these things.

Was there any discussion around this idea of first releasing code that has lot=false and then changing it at a later date? Or is it seen that it should be a community driven thing, where somebody in the community would release a version of Bitcoin that has lot=true?

We are headed to that by default because Luke (Dashjr) has already said he is going to do lockinontimeout=true. There are a number of people who have suggested a hidden option, an undocumented option, in the initial release to say “I want lockinontimeout=true”. That makes it easier if we decide to go that path. Users don’t have to upgrade or go for some dodgy patch from somewhere; they have already got it, they just need to change one config option. It makes it as simple as possible to put power in their hands. That’s my personal approach. I think that option should be there, default off, and users should be able to turn it on without having other consequences. Compare that with doing it by upgrading, for example: what else are you going to get with the upgrade? Changing a single option is an isolated decision that I think users can make. A third party release is pretty bad because I’ve already decided I know how to validate the Bitcoin Core releases and I trust those developers enough to do that. Now I’ve got this other release. That’s a whole rigamarole. Who released it? Do I trust them? Who has vetted the patches? All those kinds of things. I just think it is cleaner for the developers to make it an explicit user choice.

Let me try to do a good job of summarizing Luke’s arguments because I think he has some very good arguments in response to Rusty’s arguments. One is that miners are only signaling, they are not voting for the change. Everything should be preloaded before the activation mechanism. We should have all the discussions, all the stakeholders should discuss whether they want Taproot and if there is any substantial opposition and people don’t want Taproot then we shouldn’t even attempt the activation. The only reason we are even discussing activation is because there is broad consensus amongst the community across all the different constituents, users, miners, developers etc that this is a change that we all want. That would be one challenge I’m sure Luke would say if he was here.

Note that by that measure SegWit would never have activated. Miners had huge opposition to it, but they did not express it. You could say “We have this broad consensus”, and the developers believe they have it, and I think they’ve done a great job of getting there, but let’s verify that, not trust it. The devs can individually be convinced but they can’t convince me, for example; there is no transferable proof. So you get as many groups as possible to buy in and signal that. Signaling of course is different from actually doing it, everyone can lie, but at least it is better than nothing. In the past we have seen that that argument is wrong.

Why didn’t the miners express their dissatisfaction with SegWit? What was in their minds?

Because they were secretly violating a patent and they weren’t going to tell anyone.

ASIC Boost? That is a funny situation, I guess that’s a special case.

It is not that they were violating the patent at that point, though they are now because the patent has actually been granted as of August. What they were doing was a competitive advantage, and they didn’t want to tell anyone else it was a competitive advantage because then they’d do it too and it wouldn’t be a competitive advantage. You can’t explain it without losing the point of doing it in the first place.

As we understand it the mining suppliers were strong-arming the miners. So we have a concrete example where they didn’t express anything and then suddenly, boom. And the things they did express weren’t genuine and were dismissed. Consensus is hard, I think it failed in that case.

With the New York Agreement, SegWit2x, they wanted a block size increase. There were very few people who were unhappy with SegWit, they just wanted an additional block size increase on top of SegWit.

That is not clear. That seems to have been a fig leaf for Bitmain. They were very unhappy with SegWit.

For ASIC Boost, yeah.

It was a huge threat to their margins.

I suppose there were different players wanting different things.

That’s right. SegWit2x seems to have been a negotiation to try to get more players onboard with their thing, or a delaying tactic. It is hard to say. At the end of the day, agreements and blog posts are weak signaling. I do like having signaling that is closer to Bitcoin itself.

Another Luke argument would be yes we don’t want developers making all the decisions and perhaps there is even that argument where we should do things differently if there is even a perception that developers have the power to do that…

I don’t think Luke would argue that. I think Luke’s view is that the developers are right.

Without Luke being here we can only guess what he would say. I would guess he would say the defense against running something that the community doesn’t want or the user doesn’t want is the community or user not running the software that Core releases. If Core released a contentious change there would be a massive community uprising and there would be a different release to run, released by a community group of developers. That Core release would never get activated. Similarly miners don’t have to run Core software. This “We need miners to pull the final switch” is kind of passing the buck. Even though we’ve got this community consensus, even though we’ve included miners in this process to get consensus on the change, we still want miners to pull that final switch because we are worried about this perception that developers have too much power. That is potentially just passing the buck. We’ve already come to consensus, so developers should just push the superior change that is cleaner, that doesn’t need any uncoordinated UASFs on top, and we can all be done within a year. Everyone has got clarity, everyone knows that it will activate within a year, and the ultimate defense is just don’t run what Core releases if you are that angry about it.

It is not that easy though. You might step forward and then the debate explodes and then you decide you have to upgrade twice, you’ve got to downgrade. Those who don’t are now screwed because they are locked into something. There is a huge amount of inertia, it is a blurry line. In this case, personally I feel really happy with Taproot. I feel happy with the review, I feel happy that everything has gone through and all these things. But I do feel that we are using this as, hopefully, something that we will use for future activations. Not all of them may be quite as clear. I think this idea that devs have done all this due diligence so you don’t need to is a little bit awkward.

The reason we let the miners signal is because they are easy to count, not because they are particularly special. Everything else is very much a qualitative view. They are just so easy to count in a distributed manner. But they are not the true power in the network, nor are the devs; as you say, the users are, the users can choose what to run. But the devs have a huge amount of influence, defaults have a massive amount of influence. Unless you feel so strongly that you are prepared to pay someone to write something else or that you can do it yourself, there is a significant bar to running anything but Core. Not a bar for Luke, who is a developer himself, but for everyone else it is a big deal.

If there is a significant amount of discontent and you don’t give people a way to express it, you seem to have consensus until you really need it and then you find out you didn’t. I feel it is better to give people the option to signal and express things, because then I feel more confident that we really did have a vast majority of people feeling this way. We gave them an opportunity to say no and they didn’t take it. Not like “All they had to do is patch their software and install this dodgy patch from some crazy Bitcoin Cash dude who decided to create a patch for them. They could have resisted, they could have opposed it. Since they didn’t do that clearly we have consensus.” No. We should make it as easy as possible for people to express themselves. I think that’s important. I think in this case it won’t be necessary, but I’ll feel much better knowing we have that option, because at the end of the day users do have that option, they just have to be driven pretty hard before it would happen.

It is miner signaling rather than user signaling. The way you are talking it is almost as if users are pulling that final switch. But it is miners pulling that final switch and users don’t necessarily want to trust miners any more than they potentially want to trust developers.

No. On the surface it seems like miners control the signaling, in the same way it seems like miners control Bitcoin. But miners are more exposed to forks. If there are multiple options miners have to decide at every moment what they are mining. There is opportunity cost in making a mistake. You or I sitting here as HODLers can sit out a fork for a lot longer than a miner can. That’s why the economic weight at the end of the day is controlled by the actual users, the people who run nodes, the people who actually use their Bitcoin, doing economic activity with Bitcoin. They have huge leverage over the miners because the miners literally have costs and they are burning them all the time. They have to decide. They are a lot easier to pressure than you might think. They are terrified of the idea of being on the wrong side of a fork in a way that I don’t care. If I’m on the wrong side of a fork I can just upgrade my software and I haven’t moved my coins, it is all good.

So why is it important that we get miners to do that final switch pulling or that final readiness signaling if they can be pressured?

I thought it was more about the devs not putting in a change unilaterally. I thought that was the argument.

The reason we care about miners is not just that they are easy to count, it is also that if we’ve got a huge amount of hash power supporting something and actually validating, not building blocks on something that doesn’t validate, then the likelihood of getting a chain of 2 or 3 or 6 blocks that are invalid is extremely low. You don’t get the situation where 6 blocks are confirmed, oops, there is a double spend in them, it all gets re-orged out and someone has lost money. That is the reason why we’ve got a 90 percent threshold, and why our preferred way of getting things done is having miners actually upgrade, actually signal that they’ve upgraded, and reach this threshold with everyone being happy and not fake signaling or whatever else.

That is an issue too. There is a trust issue because people can signal and it is almost completely detached from what they are actually doing. But also of course if 99 percent of the nodes are upgraded then nobody gives a f*** what the miners do because you won't even see it. You only have a problem in the case where you haven't got enough nodes upgraded and you haven't got enough miners upgraded; that's where it kind of falls apart. If all the miners are upgraded you're good, or if all the nodes are upgraded. Technically if 51 percent of the mining power is upgraded you are good, whereas the vast majority of nodes have to be upgraded so that anyone who isn't doesn't get partitioned off. It is much harder to count that. Ideally they are both upgraded and it is all good, but it does come back to the miners being easier to count at that point.

Yeah, and even with 51 percent of the hash power upgraded you can still have really long re-orgs if the 49 percent get lucky, of course.

Absolutely. The miners are really on the sharp end of this. The miners do not want to be on the wrong side of that, more than anybody else. If your node is on the wrong side of that then transactions that you make during that time may be vulnerable, but for miners they are definitely vulnerable if they are on the wrong side. Whereas a node that is on the wrong side, you just upgrade it. You upgrade it to whichever fork wins and you haven't been hurt in any way. As long as Bitcoin is not used for compulsory economic activity, I think that will still be true. And it will always be true for the miners, the miners will always be subject to this, whereas economic nodes at this stage I believe are much more resistant to chain forks because they can just stop in a way that miners can't really.

Once people are running nodes with lockinontimeout set to true or false, is there a way to query that? Will we be able to query someone's node to see what they've got it set to?

I personally believe that it should also change the default user agent for exactly that reason. Otherwise the only way to find out would be to actually do the fork. People are presumably doing it in order to signal "Yes I want lockinontimeout=true", so I do believe that the default user agent string should change in that case so we can run some stats and have some signaling happening. But I haven't seen any discussion on that.

That would be solved, assuming I'm right and the likely scenario plays out where Core releases false and a community version releases true: you would be able to see the version that people are running when you connect to them, when you do that peer handshake.

I like to think it would come out as a patch to Core anyway. Hopefully it would just do both: change the default user agent when you turn it on.
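For reference, a peer's advertised user agent is already visible over the P2P handshake, and Bitcoin Core exposes it through the getpeerinfo RPC as the subver field. The following is a minimal editorial sketch of tallying what your connected peers advertise, assuming a local bitcoind with bitcoin-cli on the PATH; the substring being matched is entirely hypothetical, since no lot=true user agent string has actually been decided.

```python
import json
import subprocess

# List the user agent ("subver") advertised by each connected peer using
# Bitcoin Core's getpeerinfo RPC. Assumes a local bitcoind with bitcoin-cli
# on PATH and RPC credentials configured.
peers = json.loads(subprocess.check_output(["bitcoin-cli", "getpeerinfo"]))
for peer in peers:
    print(peer["addr"], peer["subver"])

# Hypothetical: whatever marker a lot=true build chose to put in its default
# user agent could be tallied like this ("UASF" is only a placeholder).
lot_true = [p for p in peers if "UASF" in p["subver"]]
print(f"{len(lot_true)} of {len(peers)} peers advertise the placeholder marker")
```

This only covers peers you happen to be connected to, of course; network-wide stats would need a crawler doing the same version-message handshake.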

In a scenario where miners don't activate in, say, the first 6 months, do you think Core will release a new version with lot=true even though they released lot=false 6 months ago?

I'd like a hidden option in Core which is always there from day one, lot=true or taproot-bip8-lot-true or something, that would change lockinontimeout to true and also change the default user agent string so you can tell people are running it. That's good because by the time the 6 months is up almost everyone has upgraded anyway; they've just got to change their config. They don't have to upgrade again or do anything else. They don't have to find a new supplier. They can just use Bitcoin Core the way they always have and turn it on. But yes, I know Luke's node, Knots, will almost certainly run lot=true.

I was surprised to see him say a couple of days ago that he hadn’t made a decision on what Knots would do but that might have changed by now.

https://luke.dashjr.org/programs/bitcoin/files/charts/historical-pretaproot2.html

Those are Luke's numbers on adoption after the release of various versions of Bitcoin Core. That top line that is super sharp, with the number going up, is the latest release with the Taproot code included but not the activation code. That it is going up really fast is a pretty good sign.

It is generally perceived that what happened with SegWit was bad but I don't fully understand why it was so bad. When I think back, we tried to do the soft fork, it didn't work out, and I presume that during this time people figured out it was because of this patent thing. That was not seen as a reasonable reason to oppose SegWit and it just had to be done by users in the end. You had this conflict on the technical level where we had a very good technical solution but it changed the game for miners in a way that meant they weren't going to support it. It is a difficult one, but it is one that had to be done, and there was that period where you didn't have an automatic lock-in: you waited for the signaling, gave time for this information to come out and then made the final doomsday decision to do the UASF. It seems like although it was messy maybe it wasn't so bad.

Messy in Bitcoin is bad. We won at chicken, they conceded, and that's great. But that doesn't mean anyone wants to go through it again, because if we had actually had to find out how the UASF and the various miner interests would have played out, it would at the very least have been a huge stutter for the use of Bitcoin. With a chain split and uncertainty about what was going on, every exchange would have had to shut and everything would have had to wait it out while the battle raged. Whichever way it went it would have been a war. Even if you won, it was much better that it went the way it did. Nobody wants to trigger that again. Nobody wants to step that close to the edge, I think. Even though it worked out, there are two things. One is that everyone wants to make it easier to do a UASF next time, which is really what BIP 8 is all about if you do it with lockinontimeout=false by default. It makes it very easy for users to do that. I am arguing it should be even easier.

With SegWit, it was enforced by a UASF after it floundered, which is similar to what we are doing here. Isn’t it the same thing? What’s the difference?

A whole heap of people upgraded, threatened and signaled, and there was huge uncertainty. Miners, remember, are terrified of forks; they do not want to be on the losing side of this. A hundred percent of them started signaling. We never found out how many people were running that code. I know personally I was running it, but we never found out. We never had a test because literally 100 percent of miners signaled. There were no blocks that did not signal, which is unheard of, and it means there is probably too much miner concentration in the world: if you can get 100 percent of miners to agree on anything, something is wrong. It was ridiculous. They literally all caved and signaled it because they did not want to be mining invalid blocks. But we never knew what the percentages were. And on the economic nodes, was it just Luke and three friends or was it actually major exchanges who were going to go "SegWit or bust"?

I don't think we had major exchanges saying that, but I think we had a couple of companies, maybe Bitrefill or someone like that, on the SegWit support side running BIP 148. Whether they actually did or just said they would…

I don’t think anyone wants to go through that again.

The timeline for SegWit was: signaling started in mid November 2016. We started the whole UASF discussion in, I think, February or March, and that was about the same time that the ASIC Boost stuff came out. I don't have the date that BIP 148 got finalized but I think it was April, with the August deadline, 3 months or whatever. Then BIP 91, where the miners coordinated to actually do the 100 percent signaling, was over two weeks beforehand, in the second half of July, and that BIP didn't come about in the first place until the start of June. It was all very rushed, especially compared to all the time we've spent on Taproot. Avoiding the rush alone would be a huge improvement.

The argument of course for doing the UASF with SegWit is that without it we potentially would have never had SegWit. We wouldn’t even be in a position to be discussing Taproot because SegWit would have never activated.

We did BIP 148 in order to try to get the SegWit BIP that was already progressing to activate. That would have timed out at the end of November 2017. But there was another proposal, BIP 149, which was "let's redeploy SegWit with a UASF from day 0 as soon as the current deployment fails". It would have started in December or January 2017-2018. A lot of people who don't like lockinontimeout=true from day 1 did like that proposal and did want to do something similar. It is just more delay basically.

I can only speak for myself, but I was supportive of the idea of "We will force the miners. They have not shown good faith. They do not have a valid reason for blocking consensus on this." I think that was a pretty common feeling: we will just do something. But there is a big difference between doing it when we have to, which absolutely we have to be ready to do, and doing it all the time by default, which I think is a step too far.