Sergei Tikhomirov - Lightning Privacy (2020-05-09)
Transcript By: Michael Folkson
Name: Sergei Tikhomirov
Topic: Quantitative Analysis of Lightning Network Privacy
Location: Lightning Hacksprint (Fulmo)
Date: May 9th 2020
Hello everyone. I am Sergei, I am a PhD student at the University of Luxembourg. I am going to present our joint work with Pedro Moreno-Sanchez and Matteo Maffei from TU Wien on certain aspects of the privacy and scalability of the Lightning Network. The full paper can be found here.
LN intro (for completeness)
Let me give some context on what the Lightning Network is. The Lightning Network is a second layer scaling solution for Bitcoin. The general idea is that we leverage the security guarantees of Bitcoin but perform most of the transactions off the chain. Therefore we improve latency and enable payments of smaller value than would be possible on Bitcoin. A payment channel is a protocol between two parties. It is built on a 2-of-2 multisignature Bitcoin address and enables the two parties to exchange transactions and adjust the balance in the channel however they want with offchain updates. The key feature of Lightning is of course the revocation mechanism, the game theoretic economic security mechanism which ensures that if Alice tries to cheat, Bob can retaliate and take all the funds from the channel before a certain timeout, and vice versa. We trade off the requirement of being online at least once in a while, watching the blockchain and tracking potential breaches of trust, for a large transaction throughput and a very low latency compared to Layer 1, which is Bitcoin. Of course we don’t have to open channels to all the parties in the network. We can leverage multihop payments, where each user only opens a small number of channels to selected nodes and payments are routed through a sequence of channels that are rebalanced atomically. The atomicity here is important, and it is enabled by the fact that all the channel updates are linked by the same secret value. Here is an illustrative payment from User 1 through Users 2, 3 and 4 to User 5. The first step of a typical Lightning payment is User 5 generating a random value r locally, calculating the hash of this value H(r) and transferring this hash to User 1 in an invoice message.
User 1 offers an HTLC, a hash timelocked contract, to User 2 which essentially says: “This is a contract between User 1 and User 2 with a hash value of y (= H(r)). 1.3 coins will go to User 2 if User 2 provides a value that hashes to y. Otherwise, if User 2 does not provide such a value before time 4, the money will be unlocked and User 1 will be able to move it wherever they please.” This is one step of a multihop payment, the HTLCs being created. Then User 2 creates a similar HTLC to User 3, offering User 3 1.2 coins; the difference is the fee. 0.1 coins is the fee that User 2 takes, and all the subsequent users take 0.1 coins as well. User 2 says: “I am going to give you, User 3, 1.2 coins if the preimage of this hash is provided before time 3, and take this money back if it is not.” Then, once the whole sequence of HTLCs is created, User 5 reveals the secret value r and takes the money from User 4. Then User 4 does the same with User 3 and so on, and the payment finalizes. This is the typical flow of a payment in the Lightning Network.
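This flow can be sketched in a few lines of Python. This is only an illustration of the logic just described, not real Lightning code; the user numbering, amounts, fees and timeouts are the ones from the example.

```python
import hashlib
import os

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Step 1: the receiver (User 5) picks a random preimage r and sends
# y = H(r) to the sender (User 1) in an invoice.
r = os.urandom(32)
y = h(r)

# Step 2: HTLCs are set up hop by hop. Each intermediary keeps a
# 0.1-coin fee, and the timeout decreases toward the receiver.
route = [1, 2, 3, 4, 5]
amount, fee = 1.3, 0.1
htlcs = []
for i, (a, b) in enumerate(zip(route, route[1:])):
    htlcs.append({"from": a, "to": b, "hash": y,
                  "amount": round(amount - i * fee, 1),
                  "timeout": 4 - i})

# Step 3: the receiver reveals r and the HTLCs settle backwards;
# every hop can verify the preimage against the same hash y.
for htlc in reversed(htlcs):
    assert h(r) == htlc["hash"]

print([(x["from"], x["to"], x["amount"], x["timeout"]) for x in htlcs])
```

Note how all four contracts carry the identical hash y; this is exactly the linkability that the relationship anonymity attack in part 1 exploits.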
Here is the outline of this talk. The talk consists of two parts, as the paper consists of two parts which are more or less disjoint. The first part evaluates the probabilities of various attacks on the Lightning Network. The second part evaluates the effect that a certain aspect of the Lightning protocol has on the applicability of micropayments and their viability in real world circumstances. It may turn out that the Lightning Network, despite the vision described in the original white paper, is not actually that well suited for various small payments.
Part 1: Attacks on LN
Starting with Part 1, let me first describe the three attacks that we considered. I would say the first two of them are privacy related and the third is more of a denial of service attack. They are value privacy, relationship anonymity and the wormhole attack. I will describe in more detail in the next slides what exactly those attacks are and what the definitions and criteria for them are. In general we are trying to answer the question: how likely are these attacks and what does their likelihood depend on? For different network parameters and different capabilities of the attacker, the probability of the attacks is of course different.
Attack 1: value privacy
The first attack that we consider is on value privacy. This is probably the simplest attack. The goal for the attacker is to learn the amount that is being transferred. In the Lightning Network the amounts are not hidden; they are not encrypted, they are transferred in plaintext. If User 3 wants to know how much money is being transferred through their node, they can look it up in the HTLCs and see that they received an HTLC of value 1.2 and were asked to offer an HTLC of value 1.1, which means that the true value of the payment is approximately 1. Of course we cannot say for sure, because different nodes may have different fee policies. But assuming that the fee is relatively small compared to the amount being transferred, each node along the route can determine with high precision how much money is moving. It is enough for the attacker to control just one of the intermediate nodes, here node number 3.
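The observation here is just arithmetic on the two HTLC amounts an intermediary sees in plaintext. A minimal sketch, with the numbers from the example route (variable names are ours):

```python
# What a compromised intermediary (User 3) sees in plaintext:
incoming_htlc = 1.2  # HTLC offered to User 3 by User 2
outgoing_htlc = 1.1  # HTLC offered by User 3 to User 4

# The node's own fee is the difference of the two amounts...
my_fee = round(incoming_htlc - outgoing_htlc, 1)

# ...and, assuming fees are small relative to the payment, the outgoing
# amount is already a tight upper bound on the value being transferred.
estimated_payment_upper_bound = outgoing_htlc
print(my_fee, estimated_payment_upper_bound)
```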
Attack 2: relationship anonymity
The second attack is a little bit more complex. Here the attacker wants to learn who pays whom. We formulate this problem in the setting that cryptographers always use when they talk about cryptographic problems and algorithms: two things happen simultaneously, and the question is whether an attacker can distinguish between them with a probability significantly better than 50 percent, that is, better than randomly flipping a coin. Here the two events are two payments. The first payment flows from User 1 through Users 2, 3 and 4 to User 5. The second payment flows from User 1' through the same intermediary nodes 2, 3 and 4 to User 5'. The goal of the attacker, who controls two nodes along the common part of the route, namely User 2 and User 4, is to determine that it is User 1' who pays User 5' and not User 5, and that User 1 pays User 5 and not User 5'. It is harder to formulate this problem than to solve it, because in the Lightning Network, as I described earlier, the atomicity of payments is enabled by the fact that they share the same payment hash and the same payment preimage. This means that all the channel updates, all the HTLCs of the first payment, carry the same value y, and all the HTLCs of the second payment carry the same value y'. Therefore these two nodes can tell who is actually paying whom.
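Because each payment reuses one hash end to end, linking the two observations is a simple join on that hash. A toy sketch of what the two compromised intermediaries can do (the record format is hypothetical; in real Lightning these records would come from the nodes' forwarding state):

```python
# HTLCs observed at the two compromised nodes.
seen_by_user2 = [{"hash": "y",  "came_from": "User 1"},
                 {"hash": "y'", "came_from": "User 1'"}]
seen_by_user4 = [{"hash": "y",  "went_to": "User 5"},
                 {"hash": "y'", "went_to": "User 5'"}]

# Join on the shared payment hash to recover sender-receiver pairs.
senders = {rec["hash"]: rec["came_from"] for rec in seen_by_user2}
links = [(senders[rec["hash"]], rec["went_to"]) for rec in seen_by_user4]
print(links)  # [('User 1', 'User 5'), ("User 1'", "User 5'")]
```

The anonymous multihop locks proposal mentioned at the end of the talk aims to remove exactly this shared value.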
Attack 3: wormhole attack
In the third attack we consider, the attacker is trying to cheat and take the fees from honest participants. Imagine the attacker controlling User 2 and User 4. The first part of the payment goes as usual: the sequence of HTLCs is created from User 1 through Users 2, 3 and 4 to User 5. Then User 5 starts redeeming the payment by revealing the secret value r to User 4. But instead of sharing this value with User 3 and letting User 3 redeem, User 4 forwards the secret value r outside of the Lightning protocol directly to User 2, which allows User 2 to redeem the payment from User 1. As a result, Users 1 and 5 may not even notice anything, because from their point of view everything is fine and the payment is finalized. But User 3 is being cheated, because the fees that were intended for Users 2 and 3 are collected by User 2 alone. Not only does User 3 lose the fees, User 3 also has capital locked up, because their HTLCs are not redeemed and will have to expire. Only after the timeout would User 3 be able to withdraw the funds, allocate them elsewhere, or use them to forward other payments and earn fees. This possibility is taken away from this user while the attack is happening.
What parameters influence the attack success rate?
Now let’s talk about what parameters influence the attack success rate. First of all, it is important how many nodes are compromised. Of course if the whole network is under adversarial control it is basically game over. If the attacker has only one node somewhere on the periphery of the network it is probably not scary at all. We are trying to quantify what happens in the middle: how many nodes an attacker should compromise to achieve a non-negligible success rate. Which nodes are compromised is also very important, because nodes have different degrees of authority, so to speak, in the Lightning Network. From the protocol point of view everyone can join; it is an open network and you don’t need to ask for permission. Just deploy your bitcoin by opening channels and you are good. But for economic reasons there are nodes with larger liquidity and more channels, and payments flow through them more actively than through nodes with fewer channels or less capital committed. Intuitively it makes more sense to attack these large payment hubs, but we don’t know yet how much more efficient that is compared to other approaches; we quantify this later in the paper. Everything also depends on the payment amount, because small payments have more options. A payment can only flow through a channel if the payment is smaller than the capacity of the channel or, to be even more precise, smaller than the balance on the sender side of the channel. In any case, small payments have more options because more channels can accommodate them. It may be the case that the attacker controls some large nodes in the middle of the network but small payments can still flow around them through smaller channels, while larger payments cannot. We quantify that as well.
How long the routes can be is also important, because on longer routes it is more likely to stumble upon a compromised node. For our experiments we limit the path length to four channels, which means three intermediary nodes. We show in a graph in the paper that for the most common payment amounts, that is, for small and medium sized payments, this is enough for at least some paths to exist from a random sender to a random receiver. Since real Lightning implementations optimize, among other things, for shorter routes, we think that our results are applicable in the real world as well.
Snapshot based simulation
This is our approach from a more practical standpoint. We didn’t mess with real Lightning nodes or channels; we conducted our experiments in a simulated environment using a public Lightning Network snapshot. First of all we created a graph model from a Lightning Network snapshot. Here I should say a huge thank you to fiatjaf for his website, which contains historical information about all public Lightning Network channels: the timestamps when they were opened and closed, and the amounts. This proved hugely valuable for our research; we based it on this data set. For a given combination of parameters we create 100 random pairs of a sender and a receiver. For each pair we generate all paths that are theoretically suitable for the given payment amount, that is, containing only channels with capacity greater than or equal to the payment amount. For each such path we check whether it is prone to one of our attacks. We define each of the three attacks as a template, a pattern on a path, and check each path against this pattern. For value privacy the pattern is very simple: just one compromised node anywhere in the path. For relationship anonymity the attacker has to compromise two distinct nodes along the path. For the wormhole attack the pattern is a bit more complex: the attacker has to compromise two nodes along the path, but there must also be an honest node in the middle, because this node is the victim of the attack. Then we calculate the share of paths that are prone to an attack as opposed to those that are not. By averaging across all random sender and receiver pairs we estimate the probability of a successful attack for a given parameter combination.
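The three path patterns just described can be sketched as simple predicates over a path. This is our own sketch, not the paper's code: a path is a list of node ids with the sender first and the receiver last, and `compromised` is the attacker's node set.

```python
def value_privacy_vulnerable(path, compromised):
    # One compromised intermediary anywhere on the path suffices.
    return any(n in compromised for n in path[1:-1])

def relationship_anonymity_vulnerable(path, compromised):
    # Two distinct compromised intermediaries are needed.
    return sum(n in compromised for n in path[1:-1]) >= 2

def wormhole_vulnerable(path, compromised):
    # Two compromised intermediaries with at least one honest
    # node (the victim) strictly between them.
    inner = path[1:-1]
    idx = [i for i, n in enumerate(inner) if n in compromised]
    return len(idx) >= 2 and idx[-1] - idx[0] > 1

path = ["u1", "u2", "u3", "u4", "u5"]  # 4 channels, 3 intermediaries
print(value_privacy_vulnerable(path, {"u3"}))          # True
print(relationship_anonymity_vulnerable(path, {"u2"})) # False
print(wormhole_vulnerable(path, {"u2", "u4"}))         # True: u3 is honest
```

Averaging these booleans over all suitable paths for many random sender-receiver pairs yields the vulnerability shares plotted in the next slides.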
These are our parameters and the values that we consider. Which nodes are being compromised? We consider two heuristics for important nodes, for important hubs. We consider the nodes with the highest amount of channels adjacent to them. And we consider nodes with highest total capacity in the channels. Of course these subsets intersect to a large extent because nodes which open lots of channels also tend to be well capitalized but these are still distinct subsets of the Lightning Network. As a control group we use a random selection of nodes. Another parameter, another axis is how many nodes are being compromised. We consider these values from 0 to 100. For the reference at the time of our experiments in February 2020 the Lightning Network contained 5862 nodes and 35196 channels. Only talking of course about the public part of the network. The payment amount is also important for our experiments. We considered four payment amounts, the powers of 10 from 100 to 100,000 satoshis which assuming this exchange rate of 10,000 dollars per Bitcoin corresponds to 10 US cents, 1 dollar, 10 dollars and 100 dollars.
This slide represents our main results. Let me spend some time explaining it because it may not be obvious. We have 9 subgraphs. In each subgraph the x axis represents the payment amount in satoshis, from 100 to 100,000, and the y axis shows the share of paths between a random sender and receiver pair that are vulnerable to an attack, from zero to 100 percent. The three columns correspond to the three attacks: value privacy, the relationship anonymity attack and the wormhole attack. The three rows correspond to the type of nodes that we consider compromised: the top row is the highest degree nodes compromised, the middle row is the highest capacity nodes, and the bottom row is random nodes compromised. On each of the subgraphs the lines represent the share of vulnerable paths for a given number of compromised nodes: zero, 5, 10, 20, 50 and 100. Take one graph as an example. If we compromise zero nodes, then for every payment amount from 100 to 100,000 satoshis we have zero percent vulnerable paths, which is absolutely logical. If we compromise the 5 highest capacity nodes, how many paths are vulnerable to the value privacy attack? We have about 30 percent, 25 percent, 20 percent and 30 percent for the different amounts. If we compromise more nodes, the 10 highest capacity nodes, about 40 percent of paths are vulnerable. This is how you should read these graphs. Now to the big picture. First of all, our control experiment on the random group shows that it doesn’t make any sense for the attacker to attack random nodes, because it doesn’t help at all. Only if the attacker compromises 100 random nodes does the probability of success for the attack on value privacy rise a little above zero.
In all other cases everything is zero; random compromise doesn’t work. But if the attacker compromises the nodes with the highest capacity, then we see a common pattern which replicates in the other graphs as well: the more nodes the attacker compromises, the higher the share of vulnerable paths. This is also logical. In this particular case it is enough to compromise the 20 highest capacity nodes to have a probability of success for the attack of more than 50 percent. If 100 nodes are compromised, then we have a nearly 100 percent probability of a successful attack. How does this compare to other combinations of parameters? If we compare across this row, how do the different attacks behave when the highest capacity nodes are compromised? We see that the effect is the same but weaker: if we go from zero to 5 to 10 to 20 compromised nodes, the share of vulnerable paths rises only a little. For the wormhole attack, though this may partly be a statistical artifact, it is harder, given a fixed number of compromised nodes, than for a relationship anonymity attack or a value privacy attack. I would explain this by the fact that the required pattern on the path is simpler in the other cases. It is easier for the attacker to compromise one node anywhere in the path, as in the value privacy case, than to compromise two nodes in the path, as in the relationship anonymity case. It is even harder to compromise two nodes with the additional constraint that one honest node must be in the middle between them. Finally, if we compare these graphs across columns, the question is: what is better for the attacker, to focus on the highest capacity nodes or the highest degree nodes? As our graphs show, it is preferable to focus on the highest degree nodes, because the probability of the attack rises more quickly than with the highest capacity nodes.
Just by compromising the 5 highest degree nodes, the attacker gains a 50 percent chance of a successful attack, compared to around 25-30 percent in the highest capacity case. This pattern shows itself in the other graphs as well. From this we conclude that of these strategies, the most profitable and preferable for the attacker is to focus on the highest degree nodes. Maybe this is explained by the fact that most payments in the Lightning Network are relatively small, and for small payments it doesn’t matter that much whether the channels are very large. What matters most is whether the intermediary nodes are well connected. Regular payments that are not very large in terms of satoshis will flow through intermediary nodes with the highest connectivity rather than the highest total capacity.
Takeaway from part 1
The takeaway from part 1 is that it is enough to compromise relatively few nodes; even 100 nodes is only around 2 percent of the whole network. The attacker should probably focus on the highest degree nodes rather than the highest capacity nodes, and compromising random nodes is more or less useless, as we have shown with the data. A fun fact is that the entity known as LNBIG at some point controlled about 40 percent of Lightning Network capacity, which means the scenario where many important nodes are under the control of one centralized entity is not that far fetched. It is actually rather close to reality, and it is important to understand what that means for the privacy and security of the Lightning Network. There is an efficiency versus privacy trade-off, as often happens. You have to pay for privacy: if you want to protect your payments, you probably shouldn’t optimize too much for fees. You should understand that going through large popular intermediary nodes makes you more vulnerable, because these nodes are more likely to be the target of an attack and a potential compromise that would enable tracking users’ payments.
Part 2: Concurrent HTLCs
Moving on to part 2 of the paper, we study the issue of concurrent HTLCs. The question here is: how does the Lightning Network handle concurrent payments? There is a little known fact in the Lightning specification, and indeed in the implementations as well: the number of in flight HTLCs, in other words concurrent payments, that a Lightning channel can handle is limited, because the Bitcoin transaction size is limited and we have to be able to close the channel in one transaction. Therefore a channel can keep only up to 966 unsettled HTLCs. We call these HTLC slots, and we argue that for small payments these HTLC slots get depleted faster than the capacity does. First of all, for transactions under the dust limit, which is 546 satoshis, no HTLCs are created at all, so the problem we discuss in this part of the paper does not apply to these ultra small payments; they are not protected by the Bitcoin security guarantees anyway. But then we have the small payments above the dust limit, where the HTLCs are created but the HTLC slots may get depleted faster than the capacity. This means the channel will be underutilized: it can be in a state where it has more capacity and could theoretically handle more payments, but it cannot, because it has 966 unsettled HTLCs waiting for the preimage to be revealed or for the timeout to expire. We tried to measure this effect and its evolution over time.
Effective update rate
Our most important metric, which we define in the paper, is the effective update rate of a channel. Here is an illustrative example to better understand what this means. Consider a channel with a capacity of 10,000 satoshis. Theoretically it could handle 1,000 in flight transactions of value 10 each. But in fact, due to the specification and the implementations, it won’t be allowed to handle more than 966 such transactions, which means that some capacity will go underutilized.
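A minimal sketch of this metric, in our own formulation; 966 is the slot limit cited above, and `borderline_amount` anticipates the inflection point discussed on the next slide:

```python
HTLC_SLOTS = 966  # max unsettled HTLCs per channel, as cited in the talk

def effective_update_rate(capacity_sat: int, amount_sat: int) -> float:
    """Share of the theoretically possible concurrent payments of a
    given size that the channel is actually allowed to carry."""
    theoretical = capacity_sat // amount_sat   # limited by capacity
    actual = min(theoretical, HTLC_SLOTS)      # also limited by slots
    return actual / theoretical

def borderline_amount(capacity_sat: int) -> float:
    # Above this amount, capacity rather than slots is the bottleneck.
    return capacity_sat / HTLC_SLOTS

# The 10,000-satoshi channel from the example, with payments of 10 sat:
print(effective_update_rate(10_000, 10))  # 0.966: only 966 of 1000 fit
```

For larger payments the slot limit stops binding: with 100-satoshi payments the same channel can use its full capacity, and the rate is 1.0.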
Effective update rate / transaction amount
This is how this percentage changes depending on the transaction amount. If the transaction amount is just above the dust limit, so that an HTLC is actually created, we calculate that the effective update rate is around 20 percent: the channels will only be 20 percent utilized. This utilization ratio, the number of concurrent updates that a channel can actually handle as opposed to the number it could theoretically handle, rises linearly until it reaches an inflection point, which we call the borderline amount, at around 2700 satoshis. Above this limit the capacity is the limiting factor; the HTLC limit doesn’t play a role, because the capacity is what limits the throughput of the channel. This interval, from a bit more than 500 satoshis to a bit under 3000 satoshis, is the micropayment interval that we want to investigate a little more.
Affected channels / transaction amount
We also calculate the share of affected channels. We see that for an average transaction amount very close to the dust limit, nearly 50 percent of the channels are affected. This means that half of the Lightning Network channels are less effective than they could have been if not for this limit. The percentage of affected channels then drops to around 10 percent: for an average transaction amount of around 10,000 satoshis, still around 10 percent of channels can be underutilized due to this issue.
Share of channels affected over time
We also generated a series of snapshots, one on the first of every month from March 1st 2018, just after the launch of the Lightning Network on mainnet, and calculated how the share of affected channels has changed over time. We observe that it had been rising for all the transaction amounts considered; the blue line is the dust limit, and the others are 1K, 10K and 100K satoshis. All these lines rose fairly steadily until around mid 2019 and have been steady since then, for about a year. This means that the effect we have just described is neither getting worse nor getting better: it has affected the Lightning Network in more or less the same way throughout the past year.
Borderline amount over time
Finally, we were interested in how the borderline transaction amount has changed. It rose dramatically in late 2018, then stayed relatively stable, and for the past months it has been somewhere around 2500 satoshis. For payment values below 2500 satoshis this issue is actually relevant.
Takeaway from part 2
The takeaway is that the Lightning Network may not be as well suited for micropayments as the original Lightning Network paper envisioned. The Lightning paper of 2016 by Poon and Dryja contained paragraphs describing use cases such as users paying for each megabyte of internet traffic they consume or every second of video they watch. These use cases for very small and very frequent payments may actually not be possible, at least with the current technology. Below the dust limit of 546 satoshis, payments are not secured by HTLCs at all; they don’t inherit the security from Layer 1 Bitcoin as larger Lightning payments do. In the range from 546 to 2500 satoshis, payments are less efficient than would be possible in theory, and the closer we are to the lower end of this range the more profound this effect is. We observe that this effect has been stable in the Lightning Network since mid 2019. By the way, the HTLC limit is also the basis for a denial of service attack, which we discuss briefly in the paper, with some back of the envelope calculations of how much value the attacker would have to commit. There have been attacks described in the literature where an attacker blocks the capacity along a route by initiating a payment, so that a series of HTLCs is offered, but then not redeeming it at the receiver, which means the capacity in all the channels is locked for the duration of the timeout, which can be pretty long. With the HTLC limit we can achieve the same thing with a lower capital requirement. The attacker would be able to block the capacity in the channels just by sending around 1000 micropayments and not finalizing them, which requires a very small amount of money compared to other ways to achieve the same effect.
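As a rough illustration of why the capital requirement is so low (our own back of the envelope numbers, not the paper's exact calculation): exhausting all of a channel's HTLC slots with just-above-dust payments costs the attacker only:

```python
DUST_LIMIT_SAT = 546  # below this, no HTLC is created at all
HTLC_SLOTS = 966      # unsettled-HTLC limit per channel

# Smallest payment that still consumes an HTLC slot:
payment_sat = DUST_LIMIT_SAT + 1

# Capital the attacker must lock up to exhaust one channel's slots:
attack_cost_sat = payment_sat * HTLC_SLOTS
print(attack_cost_sat)  # 528402 sat, roughly 0.0053 BTC
```

That is far less than the capacity of many public channels, which is what makes this variant of the congestion attack cheaper than blocking the same channels by locking up their full capacity.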
Let me suggest three papers that we cite in our paper and that seem relevant to what I have described. Malavolta and co-authors described anonymous multihop locks for blockchain scalability and interoperability. This work suggests a cryptographic mechanism that would let the Lightning Network move away from the architecture where every channel update along a route is linked to the others in the same payment by the same payment hash, which lets intermediary nodes link the parts of the same payment. This is bad for relationship anonymity, and this is a proposal to fix it. The second paper appeared concurrently with our research and goes into more detail regarding the denial of service vector I described in the previous slide: how congestion attacks in payment channel networks can work and how an attacker with relatively small capital requirements can block large chunks of the Lightning Network. Finally, the work “Lockdown: Balance availability attack against Lightning Network channels” also proposes a denial of service vector against the Lightning Network, which you can read and compare to our approach. The Lightning Network is an exciting field of study for privacy and security researchers, with a new privacy model, a new security model and new security assumptions compared to Bitcoin. This makes it a very exciting field of study, and I anticipate lots of interesting papers to come. Let’s hope they not only lead to publications in prestigious conferences and journals but also to improvements in the actual Lightning protocols and implementations, so that we can all benefit from fast, reliable, secure and private Bitcoin payments over Layer 1, Layer 2 and possibly a future Layer 3 or what have you.
Let me give you a link to the paper one more time. Huge thanks to Pedro and Matteo. Huge thanks to TU Wien for hosting me in the summer of 2019 for a research visit. You can follow me on Twitter at this Twitter handle.