Transaction Relay Policy for L2 Developers

Speakers: Gloria Zhao

Date: November 16, 2021

Transcript By: Michael Folkson

Tags: Transaction relay policy

Media: https://www.youtube.com/watch?v=fbWSQvJjKFs

Slides: https://github.com/glozow/bitcoin-notes/blob/master/Tx%20Relay%20Policy%20for%20L2%20Devs%20-%20Adopting%20Bitcoin%202021.pdf

Intro

Hi, I’m Gloria, I work on Bitcoin Core at Brink. Today I am going to talk a bit about mempool policy, why you cannot always expect your transactions to propagate or your fee bumps to work, and what we are doing to try to resolve issues like that. This talk is for those of you who find the mempool a bit murky, unstable or unpredictable, and who are not really sure if it is an interface you are comfortable building an application on top of. I would guess that this is most of you. Hopefully today we can make the mempool a bit clearer. My goal is to be as empathetic as possible when it comes to transaction propagation limitations. I am going to try to bring the perspective of a Bitcoin protocol developer: how we reason about mempools and transaction relay. I am going to define what policy is, how we think about it and why we have certain policies, and define pinning. I will go over some known limitations that we have and maybe clear up some misconceptions about what attack vectors are possible. I’ll mention a specific Lightning attack. My goal is for us to walk out of this today as friends so we can start a conversation about how to build a Layer 1 and Layer 2 with a seamless bridge between them.

Anyone should be able to send a Bitcoin payment

I want to start off from the ground up to make sure we are on the same page about how we think about Bitcoin. We want Bitcoin to be a system where anyone in the world can send money to anybody. That is the whole point. Abstractly, people often use terms like permissionless and decentralized and trustless. Perhaps a slightly more concrete framework would be something like this. The first point is we want it to be extremely accessible to run a node. If, for example, bitcoind doesn’t build on macOS, or it requires 32GB of RAM, or it costs 2000 dollars a month to operate a full node, that is unacceptable to us. We want it to run on your basic Raspberry Pi, easy hardware. The other part is censorship resistance, because we are trying to build a system that works for those whom the traditional financial system has failed. If we have built a payment system where it is very easy for some government or wealthy bank or large company to arbitrarily censor transactions from users they don’t agree with then we have failed. The other thing is we want it to be secure, obviously. This is an arena where we cannot rely on some government and say “This node, which corresponds to this real life identity, did something bad so now we go and prosecute them”. We don’t have that option. We want anyone to be able to run a node; that is part of the permissionless and decentralized portion of Bitcoin’s design goals. Therefore security is not optional, it is first and foremost when we are designing transaction relay. Fourthly, also relating to permissionlessness and decentralization, we do not force software updates in Bitcoin Core. We cannot force anyone to run a new patch of a Bitcoin node. So if we were to, say, ship an upgrade that improves user privacy but costs way more to run, or requires miners to be willing to lose 50 percent of their fee revenue, we cannot reasonably expect anyone to upgrade their node to that software. Incentive compatibility is always in mind when we are talking about transaction relay. This is the baseline framework for how we are thinking about things.

P2P Transaction Relay Network

Hopefully this does not come as a surprise to you, but the way that we deal with this in Bitcoin is we have a distributed peer-to-peer network where hopefully anyone can run a Bitcoin node and connect to the Bitcoin network. That comes in many forms. It could be a background task of bitcoind running on your laptop, it could be a Raspberry Pi that has Umbrel or RaspiBlitz installed on it, it could be a large company server that is servicing thousands of smartphone wallets. All of them might be a Bitcoin node. They look very different but on the peer-to-peer network they all look like a Bitcoin node. When you are trying to pay somebody, all you do is connect to your node, submit your transaction and broadcast it to your peers; hopefully they will broadcast it to their peers, and then it will propagate and eventually reach a miner, who will include it in a block.

Mempools

One really key part of transaction relay is that everyone who participates keeps a mempool. I’ll define what a mempool is for those of you who want it defined. It is a cache of unconfirmed transactions, and it is highly optimized to help pick the set of transactions that is most incentive compatible, i.e. highest fee rate. This is helpful for both miners and non-miners. It helps us re-org gracefully, and it allows us to design transaction relay in a more private way because we are doing something smarter than just accept-and-forward. It is useful. I have written extensively on why mempools are useful if anyone wants to read more.
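
To make the definition concrete, here is a minimal sketch in Python (illustrative only, not Bitcoin Core’s actual data structures) of a mempool as a feerate-ordered cache of unconfirmed transactions; as discussed later in the talk, real block building scores ancestor packages rather than individual transactions.

```python
# Minimal sketch, not Bitcoin Core's implementation: the mempool as a
# cache of unconfirmed transactions that can be ordered by fee rate
# when building a block template.
from dataclasses import dataclass

@dataclass
class MempoolEntry:
    txid: str
    fee: int    # satoshis
    vsize: int  # virtual bytes

    @property
    def feerate(self) -> float:
        return self.fee / self.vsize  # sat/vB

mempool: dict[str, MempoolEntry] = {}

def select_for_block(max_vsize: int = 1_000_000) -> list[MempoolEntry]:
    """Greedily pick the highest-feerate entries that fit in a block.
    (Real block building scores ancestor packages, covered later.)"""
    template, used = [], 0
    for entry in sorted(mempool.values(), key=lambda e: e.feerate, reverse=True):
        if used + entry.vsize <= max_vsize:
            template.append(entry)
            used += entry.vsize
    return template
```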

Mempool Policy

You may have heard the phrase “There’s no such thing as the mempool” and that is true. Each node keeps its own mempool and each node can run different software. They are allowed to configure it however they want, because Bitcoin is freedom, right? Different mempools may look different: your Raspberry Pi 1 may not have much memory available to allocate for the mempool, so you might be more restrictive in your policies, whereas a miner may keep a very large mempool because they want a long backlog of transactions to include in their blocks if the mempool empties out. We refer to mempool policy as a node’s set of validation rules, in addition to consensus, that unconfirmed transactions must pass in order to be accepted into that node’s mempool. This applies to unconfirmed transactions; it has nothing to do with transactions within blocks. It is their prerogative what they restrict coming into their mempool.
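
To make “configure it however they want” concrete, these are some of the actual bitcoind options that shape a node’s mempool policy, shown with the Bitcoin Core defaults around the time of this talk:

```
# bitcoin.conf: some mempool policy knobs (Bitcoin Core defaults shown)
maxmempool=300           # mempool memory cap, in MB
mempoolexpiry=336        # evict unconfirmed transactions after this many hours
minrelaytxfee=0.00001    # minimum feerate (BTC/kvB) for relay and acceptance
limitancestorcount=25    # max in-mempool ancestors for any transaction
limitdescendantcount=25  # max in-mempool descendants for any transaction
limitdescendantsize=101  # max descendant package size, in kvB
```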

From a protocol development perspective we focus on protecting the bitcoind user

I want to really hammer home the point that from a Bitcoin protocol development standpoint we are trying to protect the user who is running the Bitcoin node. I get into a lot of fights with application developers who think we should be protecting the user who is broadcasting the transaction, and in some cases those two are not aligned. We have an extremely restrictive set of resources available to us in mempool validation. Typically it is just 300 MB of memory, and we don’t want to be spending, for example, even half a second validating a single transaction. That is too long. Once again, even though we are trying to support the honest users, the whistleblowers, the activists, the traditionally censored people, we also need to be aware that, by design, when we are connecting to a peer we need to expect that this peer can be an attacker. The most common thing that we think about is CPU exhaustion. Again, half a second is too long to validate a transaction because we will have hundreds of transactions coming in per second. Our code is open source, so if there is a bug in our mempool code such that you can cause an out-of-memory error with a transaction containing a specific type of script, for example, we need to expect that it is going to be exploited. Another thing to note is that in the competitive world of mining it might sometimes be advantageous for a miner to try to fill mempools with garbage, render other miners’ mempools useless, or try to censor high fee transactions from their mempools. Or maybe they launch an attack where they put things in mempools such that when they broadcast a new block everyone has to spend half a second, which is too long, updating their mempool. Whatever it is, they stall the network for half a second and that gives them a head start on mining the next block, which is significant. Or they are able to launch some other type of attack.

There are other concerns that I will talk about less today. I will be talking a lot about Lightning counterparties: we are aware of the fact that Lightning counterparties with conflicting transactions might have a back and forth with respect to fee bumping races or pinning attacks. We want to make it such that as long as the honest user is broadcasting something incentive compatible they are always going to be able to win. Most of the time, let’s say. And of course transaction relay is an area where privacy is a huge concern. If you are able to find the origin node that a transaction came from and you are able to link an IP address with a wallet address, it doesn’t matter how much you hide onchain, you have already de-anonymized this transaction. The other example of this is that transactions are probably the cheapest way to try to analyze network topology. We are always trying to make sure we are not shooting ourselves in the foot with respect to privacy. I am not going to talk about privacy as much today but it is also a concern.

The Policy Spectrum

We can think about policy on a spectrum. On one extreme end we have the “ideal” perfect mempool: we have infinite resources, we are able to freeze time every time we receive a transaction or need to assemble a block, and we have an infinite amount of time to validate or assemble. Perhaps in this case we accept all consensus-valid transactions. On the other end of the spectrum we have the perfectly defensive, conservative mempool policy where we don’t validate anything. We don’t want to spend any validation resources on anything that comes from someone we don’t know. Unfortunately, if we want to make that sustainable at a network level we are going to need a central registry of identities that correspond to nodes or wallet addresses. We are just a bank at that point. Neither end of the spectrum follows the design goals that we want. We want to be somewhere in the middle.

DoS protection isn’t the full story

But it would be too easy if this was the only concern. DoS protection is actually not the only thing that we think about when it comes to mempool policy; that is a common misconception. Let’s go back to thinking about these two ends and see why this linear approach is not the full picture. Even if we had infinite resources, would we want to accept every single consensus-valid transaction? The answer is actually no. The best example I can offer is soft forks. While we have this space of all consensus-valid transactions today, that space can shrink. With the recent Taproot activation, for example, we noticed that F2Pool, who mined the activation block, had not actually upgraded their nodes to have Taproot validation logic. The question is what would have happened if somebody sent them an invalid Taproot spend, an invalid v1 witness spending transaction. If they had accepted that into their mempool and mined it into a block, then F2Pool, AntPool and all the users with nodes that weren’t upgraded would have forked. We would have had a network split where some of the network is not upgraded and some of it is actually enforcing Taproot. This would be disastrous. As long as 51 percent of the hash rate is enforcing Taproot we are ok, but it is a very bad situation that we would want to avoid. Fortunately, all miners were presumably running at least version 0.13.1, which included SegWit. SegWit defined semantics for v0 witnesses and discouraged the use of all witness versions 1 and above. That is why F2Pool did not enforce Taproot rules but also did not accept any v1 witness transactions into their mempool or include them in their blocks. This is one reason why this kind of linear view is imperfect.

On the other side of the spectrum we have the perfectly defensive one. The question is: are there policies where, in trying to protect the node from DoS, we harm users? This goes back to all those conversations I’ve had with application devs where policy is harming their users. The example I’m going to give is descendant limits. In Bitcoin Core’s standardness rules, the default descendant limit means you won’t allow a transaction and all of its descendants in the mempool to exceed 101 kilo virtual bytes. This is a sensible rule, because you can imagine a miner broadcasting transactions such that everyone’s mempool is filled by one transaction and its descendants. Then they publish a block that conflicts with that transaction and they’ve just flushed out everyone’s mempools. Everyone spends a few seconds updating all those entries and now has an empty mempool and nothing to put in their next block, and this miner just got a head start on the next block. So it is a sensible rule for protecting our limited mempool resources. However, for the Lightning devs out there, perhaps you are aware of this: when anchor outputs were proposed in Lightning, the idea was that each counterparty would have an output in the commitment transaction that could be immediately spent, so that each of them could attach a high fee child to fee bump the commitment transaction. That was the background behind anchor outputs. It was a great idea. However, they were thinking “What if one of the counterparties dominates the descendant limit, broadcasting a 100 kilo virtual byte child so that the other counterparty is not able to fee bump?” This is what is known as a type of pinning attack. A pinning attack is a type of censorship attack in which the attacker takes advantage of mempool policies to either censor a transaction from mempools or prevent a transaction that is in the mempool from being mined. In this case they are preventing the commitment transaction from being mined. We’ve defined pinning.
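
Here is a hedged sketch of the descendant limit check just described, and of how it becomes a pinning vector; the constants are the Bitcoin Core defaults, but the function is illustrative rather than Bitcoin Core’s actual code.

```python
# Illustrative check, not Bitcoin Core's code. Defaults: a transaction plus
# its in-mempool descendants may not exceed 25 transactions or 101,000 vB.
DESCENDANT_COUNT_LIMIT = 25
DESCENDANT_SIZE_LIMIT = 101_000  # virtual bytes

def child_would_be_rejected(descendant_count: int, descendant_vsize: int,
                            child_vsize: int) -> bool:
    """Would adding this child push an ancestor past its descendant limits?"""
    return (descendant_count + 1 > DESCENDANT_COUNT_LIMIT
            or descendant_vsize + child_vsize > DESCENDANT_SIZE_LIMIT)

# Pinning: the attacker attaches ~100,000 vB of children to the shared
# commitment transaction, so the honest party's small fee-bump child fails.
print(child_would_be_rejected(descendant_count=2, descendant_vsize=100_500,
                              child_vsize=1_000))  # True
```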

It is now worth mentioning the CPFP carve out exemption, which was shoehorned into mempool policy to address this exact attack on Lightning users. It is a bit convoluted: if the extra descendant has a maximum of two ancestors and is within 10 kilo virtual bytes, etc. etc., then it gets an exemption. It is the cause of a lot of testing bugs and it is an ugly hack. I’m not here to complain, but what I want to note is that the average Bitcoin node user, who maybe doesn’t care about Lightning for whatever reason, doesn’t really have a reason to have this in their mempool policy. It might not be entirely incentive compatible is what I’m saying. I’ll come back to this later.
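
In rough terms, the exemption described above amounts to something like this; a hedged sketch, where the names and the exact condition are simplifications of Bitcoin Core’s carve-out logic.

```python
CARVE_OUT_MAX_VSIZE = 10_000  # virtual bytes

def carve_out_applies(tx_vsize: int, ancestor_count: int) -> bool:
    """One extra descendant is tolerated past the normal descendant limits
    if it is small and has at most two ancestors (itself plus one
    unconfirmed parent), e.g. a fee-bump child spending an anchor output."""
    return tx_vsize <= CARVE_OUT_MAX_VSIZE and ancestor_count <= 2
```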

Policy designed for incentive compatibility enables fee bumping

Speaking of fee bumping, I do want to mention some examples of policies that people might like. These fee bumping primitives are all brought to you by mempool policy. The first one is RBF, and this comes from a behavior in our mempool where, if we see a transaction that conflicts with something in our mempool, we’re not just going to say “We don’t want that”. We will actually consider that new transaction, and if it pays significantly higher fees we might replace the one in our mempool. This is good because it is incentive compatible and it allows users to fee bump their transactions. The other one is that the mempool is aware of ancestor and descendant packages, so when we are including transactions in a block we go by ancestor fee rate. This allows a child to pay for a parent, or CPFP. And when we are evicting transactions from the mempool we are not going to evict something that has a low fee rate if it has a high fee child. Again, this is incentive compatible but it also helps users.
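
A quick worked example, with made-up numbers, of the ancestor fee rate score that makes CPFP work:

```python
# A 1 sat/vB parent would normally be mined last, if at all. Scored together
# with a high-fee child, the pair's ancestor feerate is what counts.
parent_fee, parent_vsize = 200, 200     # 1.0 sat/vB alone
child_fee, child_vsize = 5_000, 150     # ~33.3 sat/vB alone

ancestor_feerate = (parent_fee + child_fee) / (parent_vsize + child_vsize)
print(f"{ancestor_feerate:.1f} sat/vB")  # ~14.9 sat/vB for the package
```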

I am going to shill my work really quickly. Package RBF is the combination of RBF and CPFP such that you can have a child paying for both of its parents and RBFing the conflicts of its parents. It is pretty cool, you should check it out.

Mempool Policy

Just to summarize what we’ve covered so far. We’ve defined mempool policy as a node’s set of validation rules, in addition to consensus, that they apply to unconfirmed transactions in order for those to be accepted into their mempool. We have covered a few different reasons for why we have mempool policy. One of the biggest ones is DoS protection. We are also trying to design transaction acceptance logic that is incentive compatible, and policy allows us to upgrade the network’s consensus rules in a safe way. There is a fourth category that I’ll mention but we don’t have time for today: network best practices or standardness, such as the dust limit. That’s the category that is more miscellaneous, let’s say.

I did title this talk “Transaction Relay Policy”. That is because we are aware of the fact that the vast majority of nodes on the Bitcoin network run Bitcoin Core, and the vast majority of people who respond to polls on Twitter have stated that they use the default settings when it comes to mempool policy. This is kind of scary when you are opening a PR for mempool policy changes, but it is a good thing in that it allows us to have a set of expectations for whether or not our transactions are going to propagate across the network. That is why it might be called transaction relay policy.

Policies can seem arbitrary/opaque and make transaction relay unpredictable

Now I am going to get into known issues with our default mempool policy in Bitcoin Core. I have been told that a lot of policy seems arbitrary, the dust limit for example. I want to empathize, and that is why I made this meme to show my sympathy. Essentially, as an application developer you don’t have a stable or predictable interface for broadcasting your transactions. Especially in Lightning or offchain contracts, where you are relying on being able to propagate and confirm a transaction in time to redeem the funds before your counterparty gets away with stealing from you, this can be a huge headache. I am aware of some issues within the standardness of a transaction itself; the biggest one is the dust limit, along with the myriad of convoluted, opaque script rules that transactions need to follow. Another category is transactions being evaluated within the context of a mempool. I already mentioned very niche exceptions in ancestor and descendant limit counting. I am also aware of BIP 125 RBF giving a lot of people grief because it can be expensive or sometimes even impossible to fee bump your transactions using RBF. And of course one of the biggest ones is that in periods of high transaction volume, and you never know when that is going to happen, you are not always able to fee bump your transactions. I put at the bottom the worst one, which is that every mempool can have different policies. I would say another one is that mempool policy is not well understood by application developers. That is why I’m here.

Commitment Transactions cannot replace one another

I am going to now describe a specific Lightning channel attack. It all stems from the fact that commitment transactions cannot replace each other. In Lightning you have this combination of conditions where you negotiate the fee rate ahead of time, way before you are ever going to broadcast this transaction, because you weren’t planning on broadcasting it. You are also not able to use RBF: if your counterparty is trying to cheat you, or you are doing a unilateral close because they are not responding to you, they are obviously not available to sign a new commitment transaction with you. But also, because mempools only look at replacement transactions one at a time, and your commitment transactions are going to be the same fee rate (because why wouldn’t they be), commitment transactions cannot replace each other. Even if Tx2A has a huge fee rate, Tx1A cannot replace Tx1B. A small note: I am aware that these do not fully encapsulate all of the script logic in Lightning transactions. Please bear with me, I didn’t include the revocation keys and so on. Just imagine these as simplified Lightning transactions.
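
To see why, here is a hedged sketch of the two BIP 125 replacement checks most relevant here (not the full rule set); because replacements are evaluated one transaction at a time, the high-fee child never enters the comparison.

```python
INCREMENTAL_RELAY_FEERATE = 1  # sat/vB, the Bitcoin Core default

def replacement_allowed(new_fee: int, new_vsize: int,
                        old_fee: int, old_vsize: int) -> bool:
    """Two of the BIP 125 conditions: the replacement must have a higher
    feerate, and enough additional absolute fee to pay for its own relay."""
    higher_feerate = new_fee / new_vsize > old_fee / old_vsize
    pays_for_relay = new_fee - old_fee >= INCREMENTAL_RELAY_FEERATE * new_vsize
    return higher_feerate and pays_for_relay

# Commitment transactions negotiated together have identical fees and sizes,
# so neither can replace the other:
print(replacement_allowed(1_000, 250, 1_000, 250))  # False
```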

Lightning Attacks

We are going to describe an attack between Alice, Bob and Mallory, where Alice and Bob have a Lightning channel and Bob and Mallory have a Lightning channel. Alice is going to pay Mallory through Bob, because Lightning. If Lightning is working properly, either both Bob and Mallory get paid or neither of them gets paid. That is how it is supposed to work. When the HTLC is offered from Alice to Bob and from Bob to Mallory, each of them is going to have a transaction in each channel: there is going to be a pair of commitment transactions per channel. With LN-Penalty we have these asymmetric commitments so we are able to attribute blame for who broadcast the transaction. These are conflicting transactions, just to be clear: Tx1A and Tx1B conflict, Tx2B and Tx2M conflict. Only one of each pair can be broadcast and confirmed onchain. We have constructed this such that Alice is paying Bob and Bob is paying Mallory. Mallory can reveal the preimage to redeem the funds, or Bob can get his refund at t5. On the other side, Bob can reveal the preimage that he got from Mallory and redeem the funds, or Alice can get her refund at t6. Bob has that tiny little buffer between the HTLC with Mallory timing out and Alice redeeming. But spoiler alert, the outcome of this attack is going to be that Bob pays Mallory but Alice doesn’t pay Bob. These would be the corresponding transactions: Mallory broadcasts and gets the redemption with the preimage, and Alice broadcasts and gets the refund from Bob. It is going to happen such that at t6 Alice gets her refund but Bob was not able to get his refund.

What happens is Mallory broadcasts her commitment transaction with a huge pinning transaction attached to it at t4. These two transactions are in everyone’s mempools. There are two scenarios: either Bob has a mempool and is watching, so he knows Mallory broadcast these transactions, or he doesn’t. Let’s go over the first scenario first. We require CPFP carve out, but like I said before, I don’t really know why every mempool would have CPFP carve out implemented as one of its mempool policies. It allows Bob to get his money back in this case: he is able to fee bump the commitment transaction and then he gets his refund. So in this case CPFP carve out is critical to Lightning security. The second case is Bob doesn’t have a mempool. I think this is a more reasonable security assumption for Lightning nodes because it doesn’t make sense to me that Lightning nodes need to have a Bitcoin mempool or to be watching Bitcoin mempools in general. Bob was not watching a mempool, and so at t5 he says “Ok, Mallory hasn’t revealed the preimage to me and she’s not responding to me, so now I am going to unilaterally close. I am going to broadcast my commitment transaction plus a fee bump and I am going to get my refund after the to_self_delay.” The problem is that Mallory’s transactions are already in the mempool. Bob broadcasts his and wonders “Why isn’t it confirming onchain?” He’s watching the blockchain, he’s not seeing his transactions confirm and he’s very confused. It is because Mallory has already put her transaction in the mempool and he is not able to replace it. The solution to this is package relay. In this case package relay is an important part of Lightning security.
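
A small illustration, with made-up numbers, of what evaluating the pair together changes: alone, a low-feerate commitment transaction may not even meet a full mempool’s minimum feerate, but as a package with its fee-bump child it does. (Actually replacing Mallory’s pinned transactions additionally requires package RBF, mentioned earlier.)

```python
mempool_min_feerate = 10.0  # sat/vB, illustrative floor of a full mempool

commitment = {"fee": 250, "vsize": 250}    # 1 sat/vB: rejected on its own
fee_bump   = {"fee": 9_000, "vsize": 150}  # child spending Bob's anchor output

alone = commitment["fee"] / commitment["vsize"]
package = ((commitment["fee"] + fee_bump["fee"])
           / (commitment["vsize"] + fee_bump["vsize"]))

print(alone >= mempool_min_feerate)    # False: dropped without package relay
print(package >= mempool_min_feerate)  # True (~23.1 sat/vB) as a package
```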

In both of these cases t5 has elapsed and Bob has not been able to successfully redeem his refund from Mallory. At t6 Alice gets her refund but Bob didn’t get his. Let’s pretend Alice and Mallory are actually the same person: they have managed to steal the HTLC amount from Bob. Hopefully everyone followed that. As a disclaimer, this is easily avoided if the commitment transactions are negotiated with a fee rate high enough to confirm immediately in the next block at t4 or t5. This brings me to my recommendations for L2 and application developers.

Let’s try to be friends?

My recommendation is, for the time being, lean towards overestimating fees rather than underestimating. These pinning attacks only work if you are floating at the bottom of the mempool and you are not able to fee bump. Another one is: never rely on zero-confirmation transactions that you don’t control. What you have in your mempool does not necessarily match what miners have in their mempools; even the entire rest of the network might have a different transaction in their mempools. So if you don’t have full control over that transaction, don’t rely on it if it has no confirmations. Another one is that there is a very nice RPC called testmempoolaccept available with Bitcoin Core. That will test both consensus rules and standard Bitcoin Core default policy. Again, because that represents the vast majority of policies throughout the network, it should be pretty good for testing. Especially if you are relying on fee bumping working, I recommend having different tests with various mempool contents, maybe conflicts, maybe some set of nodes to test your assumptions. Try to do that on every release, maybe on master every once in a while. I don’t want to put too many expectations on you, but these are my recommendations for how to make sure that you are not relying on unsound assumptions when it comes to transaction propagation. I would like to invite you to communicate your grievances, whether it is with our standard mempool policy or with the testing utilities not being sufficient for your use cases. Please complain and say something. Hopefully this talk has convinced you that while we do have restrictions for a reason, we want to support Lightning applications. Please give feedback on proposals when they are sent to the bitcoin-dev mailing list. On the other side, I think we all agree that Lightning transactions are Bitcoin transactions, so at least in the Bitcoin Core world one big effort is to document our current transaction relay policy and provide a stable testing interface to make sure transactions are passing it. We are also working on various improvements and simplifications of our current mempool policy. Recently we have agreed not to restrict policy without notifying bitcoin-dev first. But of course that doesn’t work unless you guys read bitcoin-dev and give us your feedback if something is going to harm your application. Together we can get privacy and scalability and all those wonderful usability things that come with L2 applications, without sacrificing security.
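
As a starting point for the kind of testing just described, here is a minimal sketch of calling testmempoolaccept over Bitcoin Core’s JSON-RPC interface; the URL, credentials and raw transaction hex are placeholders for your own setup, and the third-party `requests` package is assumed.

```python
import json
import requests  # assumed third-party dependency

RPC_URL = "http://127.0.0.1:18443"     # regtest default port; 8332 on mainnet
RPC_AUTH = ("rpcuser", "rpcpassword")  # placeholders: match your bitcoin.conf

def test_mempool_accept(raw_tx_hexes: list[str]) -> list[dict]:
    """Ask the node whether these raw transactions would be accepted by its
    consensus and policy checks, without actually submitting them."""
    payload = {"jsonrpc": "1.0", "id": "policy-test",
               "method": "testmempoolaccept", "params": [raw_tx_hexes]}
    resp = requests.post(RPC_URL, auth=RPC_AUTH, data=json.dumps(payload))
    resp.raise_for_status()
    return resp.json()["result"]

# Each result reports whether the transaction would be accepted by this
# node's policy, and a reject-reason if not.
for result in test_mempool_accept(["<your-raw-tx-hex>"]):
    print(result["txid"], result["allowed"], result.get("reject-reason"))
```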

Q&A

Q - With testmempoolaccept you said it was hardcoded to the default policy or is it based on the policy that I’m running on my node?

A - If you are calling testmempoolaccept on your node it will be your node’s policy. But if you, say, spin up some regtest nodes or freshly compile bitcoind, it will be whatever the defaults are.

Q - So it is not the defaults, it is whatever my node is running. If I want to test other assumptions I could change the policy on my node and that would be reflected in testmempoolaccept?

A - Yes

Q - If package relay makes it in, would that allow us to safely remove the CPFP carve out in the future without harming people with existing Lightning channels and stuff like that?

A - That is a very good question. I think so but I haven’t completely thought it through. I think so.

Q - One question regarding the Lightning nodes monitoring the mempool or not. You said it was not a proper assumption that a Lightning node would be monitoring the mempool, because why should it. But if you are forwarding HTLCs and someone drops the commitment transaction onchain, you may see the secret there, and you have accepted HTLCs with the same secret. Why would you wait for that transaction to get confirmed instead of taking the secret from the mempool? You shouldn’t assume that you are going to see it in your mempool, because you may not, but if you do see it in the mempool you may be able to act quicker.

A - This is how I’m understanding your question: in a lot of these types of attacks, if the honest party is watching the mempool then they are able to mitigate the attack and respond to it. I think there is a difference between “having a mempool helps” and “mempools are required for Lightning nodes to be able to secure their transactions”. On one hand I do agree, having a mempool helps. But I don’t think anybody wants a mempool to be a requirement for Lightning security.

Q - Even then, it would require monitoring the entire network’s mempools, which is not possible.

A - Right.

Q - What’s the story on you and Lisa with the death to the mempool thing?

A - I am happy to talk about this offline. Lisa and I had a discussion about it where we disagreed with each other. She tagged me in her post to the bitcoin-dev mailing list. I think people were under the impression that our discussion was us being in agreement when actually we didn’t agree. I think that is where most of the confusion comes from.

Q - Her stance is that mempools are unreliable so we should get rid of them altogether?

A - Mempools as they stand today are not perfect. It is not an extremely stable interface for application devs. I have heard multiple Lightning devs say “For us a much better interface would just be if our company partnered with a miner and we were always able to submit our transactions to them”. For example, all lnd nodes or all rust-lightning nodes would just always submit to F2Pool or something. That would resolve the headaches around making sure your transactions are standard and are going to meet policy. I understand why that’s more comfortable for Lightning devs, but I feel like in that case it is not “we need a mempool in order for Lightning to be secure”, it is “we need a relationship with a specific miner in order for Lightning to be secure”. I don’t think that is censorship resistant, I don’t think that is more private, and I don’t think that is the direction that we want Bitcoin and Lightning to go in. That is my own personal opinion. That’s why I came today to talk about this: I understand there are grievances but I think we can work together to make an interface that works.

darosior: At the end of the day it is pretty similar to fee bumping: you have shortcuts, and very often they increase centralization pressure. Once we lose censorship resistance we will realize that it was precious; we should think before we lose it.

Q - …

A - I can’t offer a specific roadmap but I can tell you what is implemented and what is yet to be implemented. All of the mempool validation code is written, though it is not all merged. The next part is defining the P2P protocol, which we get closer to finalizing every day: writing out the spec, putting out the BIP, implementing it and then getting it reviewed and merged. That is what is implemented and what is not. You can make your own estimate of how long that will take.

Q - While you’ve been working on package relay what have you learned that surprised you? Something that is interesting that was unexpected, something more difficult or more easy than you thought?

A - I am surprised by how fun it is. It is a privilege every day to wake up and get to think about graph theory and incentive compatibility and DoS protection. It is very fun. I didn’t expect it to be this fun. The next layer of surprise is that I am very surprised not everyone is interested in mempool policy.

Q - Do you have some examples of DDoS attack vectors that you would introduce, that are not super obvious, when you have a mempool package policy?

A - For example, our UTXO set is several gigabytes. Some of it is stored on disk and we have layered caches. We want to be very protective over how peers on the peer-to-peer network can influence how we are caching things, because that can impact block validation performance. For example, based on how we limit the size of one transaction, we can expect a certain amount of churn in our UTXO caching system. We don’t want that churn to be way larger depending on whether we are validating 25 transactions at a time, or 2, or n transactions at a time. That would be an example of a DoS attack. Same thing with signatures: if exponentially more signatures can be present in a package compared to an individual transaction, we don’t want to be expending exponentially more resources validating a package compared to a transaction.

Q - The Lightning case is a very specific case where you have a package of two transactions. Is it just trying to get that one right first and then think about the general case later?

A - Yes. There is single parent, single child versus multi parent, single child. I think most of it was a trade-off between implementation complexity and how useful it would be for both L1 and L2 protocols. Given that we know there is a lot of batched fee bumping that happens even at the onchain Bitcoin level, multi parent, single child has a very high usefulness score. And when we compared single parent, single child against multi parent, single child, the complexity is not that much worse. Weighing those two things against each other, multi parent, single child was better on both counts.

darosior: It seems weird to tailor the mempool rules to a specific use case, namely Lightning. That is the criticism with carve out, which I kind of agree with. Multi parent, single child CPFP fee bumping could be useful for other applications as well, and is useful for other applications.

Q - I agree with you that mempool is a super interesting research area. Throughout the entire blockchain space we have seen things blow up because of mempool policy, not necessarily in Bitcoin but in other blockchain ecosystems. Do you spend much time trying to look through what other blockchains have tried and either failed at or even succeeded at that we could bring over to the Bitcoin mempool policy?

A - No, I haven’t spent any time on it. Not because I don’t think there are good ideas out there; I’m not the only person who has ever thought about a mempool, and Bitcoin is not the only place that has one. I have noted that a lot of the problems that we have in the mempool are specific to the fact that we are a UTXO based model. It is nice because we don’t have the problem of being unable to enforce ordering of transactions; it is trivial for us because parents need to come before children. But it does mean that in our mempool 75 percent of the dynamic memory usage is allocated for the metadata and not the transactions themselves. We need to be tracking packages of transactions as a whole rather than treating transactions as atomic units.

Q - That is in a package relay world or that is in Bitcoin Core?

A - In the Bitcoin world.

Q - Today?

A - Yes, 75 percent. It is always surprising to people. I think it is mostly because of the fact that we have a UTXO based model. I don’t think it is a bad thing; it is something that we have to deal with that other people don’t. Like I said before, there are advantages of this model that others don’t have.

Q - Where can I find out more about package relay?

A - Brink has recently released a podcast called The Bitcoin Development Podcast. We have recorded the first 5 episodes and it is all about mempool policy, pinning attacks, package relay and mining improvements. I would recommend going to brink.dev/podcast to learn more.

Q - Could you talk a little bit about the dust limit and mempool acceptance? Lowering the dust limit for mempool policy, is that on the roadmap? Should we have it? Should we not have it? Are there different clients that have different thoughts about what is the current dust limit? I think I had some channel closings because there were some disagreements between Lightning nodes.

A - For those who are not aware, dust refers to uneconomical outputs. If you have an output with, say, 20 satoshis in it, even at a network feerate of 1 sat per vbyte you cannot spend this output economically, because the input providing the script to spend that output is going to cost more than 20 satoshis in fees. In Bitcoin Core mempool policy we reject any transaction that creates dust. This is not just gatekeeping; the idea is that when you have dust in the global UTXO set, probably nobody is ever going to spend it, but everyone on the network has to store it. Because scalability is essentially limited by the UTXO set size, it is a deliberate choice by the Bitcoin Core developers to say we don’t allow dust to propagate through Bitcoin Core nodes. It is an example of the best practices category that I talked about earlier in the slides: it is consensus valid and it doesn’t fit neatly into the other categories, but in a form it is DoS protection, because you are asking the entire network to store dust in their UTXO set. It is also an example of the inherent tension between application devs and protocol devs when it comes to mempool policy. For example, in Lightning, if one of your outputs, such as an in-flight HTLC or a channel balance, is dust, the transaction is not going to propagate. So typically in Lightning, if you have an output that is going to be dust, you drop it to fees; that is considered best practice. But this is an inconvenient thing that they have to deal with when they are implementing Lightning. Again, I am trying to be as empathetic as possible, I understand why this sucks. But there are reasons to have that dust limit. There is a place of tension there. I am not saying it is never going to change, I’m just saying those are the arguments.
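
For intuition, here is a simplified worked version of that calculation: Bitcoin Core’s default dust relay feerate is 3 sat/vB, and an output counts as dust if its value is below the cost of creating and later spending an output of that type at that feerate. The sizes below are the standard approximations for P2PKH and P2WPKH.

```python
DUST_RELAY_FEERATE = 3  # sat/vB, Bitcoin Core's default for the dust check

def dust_threshold(output_vsize: int, input_vsize: int) -> int:
    """Smallest output value that is still economical to create and spend."""
    return DUST_RELAY_FEERATE * (output_vsize + input_vsize)

print(dust_threshold(output_vsize=34, input_vsize=148))  # P2PKH: 546 sats
print(dust_threshold(output_vsize=31, input_vsize=67))   # P2WPKH: 294 sats
```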