Rene Pickhardt - Path Finding in the Lightning Network
Transcript By: Caralie Chrisco
Name: Rene Pickhardt
Topic: Path Finding in the Lightning Network
Location: Chaincode Residency 2019
Today I'm going to talk a little bit about path finding on the Lightning Network, and some part of it will be about the gossip protocol, which is a little bit of a review of what the gossip messages look like. So it's a little bit basic again. But other than that, I've been told that you're looking more for opinionated talks and recent ideas instead of how the BOLTs work, because you already know all of that. So I'm trying to go a little bit more in that direction. Yet I'm still an educator, so we're going to do a little bit of education.
So the outline is: I will start with a quick recap of the 2018 Lightning Residency as I was a resident at that time, so I thought I’d share a little bit of my experience with you. Then I’m going to compare IP routing with routing on the Lightning Network. Christian already mentioned a couple of things on that. Then I will go through the motivation of why pathfinding is actually a difficult problem on the Lightning Network. And then I’m going to talk about actual path finding strategies.
[inaudible] in the middle but I was shifting topics so yeah [inaudible?]
This was the picture of last year's Lightning Residency, so the quick recap goes like this. It started with Christian explaining to me that routing is a hard problem, then there was Elaine who actually told me that routing in practice always fails, and then we had Christian again who explained to me that routing can be difficult and relies on channels, capacity, and peers being online. Then there was Jack Mallers who, if you read it correctly, was talking about autopilots and that the channels won't be relevant, so he also touched on this being difficult. And this is what I got out of the residency: that routing is a problem.
As we discussed yesterday, this is plain wrong. Routing has been solved on the Lightning Network, it’s actually path finding that is the hard problem to solve. I want to be a little bit more precise on the terminology that I’m trying to use in this talk.
So the good thing is: I want you. I want you because these are open problems where you can make a huge difference. I would currently say that we don't have a working solution for path finding on the Lightning Network. There are many proposals out; we already saw some yesterday, and I will go a little bit through them again today. Especially for those of you staying for the summer and wanting to work on Lightning, I think this is a great opportunity, because there is the chance to make a huge difference. I want to motivate you to think about these problems.
IP Routing vs Onion Routing
As I promised, before really going into path finding, let's compare IP routing with onion routing in order to understand the differences. So let's do a quick recap of how IP routing works; I think most people are familiar with this. A node processes the IP header of the IP packet that it receives and extracts the sender and the recipient of this packet. Actually it would not necessarily need the sender for forwarding, only for responses. Then it dissects the IP address of the recipient into the network part, which is usually the first couple of bits.
It's a hierarchical address scheme, which the network uses for making routing decisions, and then there is the host part. A router looks up the network part in its routing table, and if the network is known it sends the packet to the next link via the link layer protocol, for example Ethernet or DSL or whatever link layer protocol is implemented; otherwise it will send it to its default router, which is something like, "hey, you probably know better how to handle this." This is pretty much how IP routing works, and the routing tables themselves are dynamically updated, for example via the border gateway protocol.
There are several variations of those, but there is basically some communication among Internet hosts and routers that talk to each other all the time and share information about the topology of the network, about where networks are located, and then they can make decisions about where to forward a packet. The concept, as I always call it, is called best effort routing. Christian used the word distance-vector routing; I think that's the more precise term in computer science, but as you can see from my shirt I'm a mathematician, so please forgive me. I would say this is an amazingly strong concept. I mean, if you think about it, the internet is just running all the time, and if you talk to the people who came up with TCP/IP, I think maybe they are just humble, but what they usually say is: we didn't expect this to scale that much. They were doing just a research project for a university, and then it took off. People are just implementing and building on top of it, and yes, we have this address space dilemma, but to some degree it scaled to infinity. It's one of the strongest, most stable computer systems that we currently have. I just want to make it explicit one last time how it really works. There is the address of the IP packet that states: I want to go to Chaincode Labs. What the router does is it says, oh yeah, that's in New York, because this is what it reads from the network part, and then it says: I send you along this route because this gets you closer to New York; ask again when you get to the next crossing or the next router, and then you can repeat the process. Basically this is how information is sent over IP, and this is actually path finding: the network finds a path.
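The routing-table lookup described above, dissecting the destination address and matching its network part, can be sketched roughly like this. The networks and next hops below are made-up illustrations, not real routing data:

```python
import ipaddress

# Toy routing table: (network, next hop). Real routers hold many thousands
# of entries and update them dynamically, e.g. via BGP.
ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-router"),  # default route
]

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    # Pick the matching entry with the longest (most specific) prefix.
    matches = [(net, hop) for net, hop in ROUTING_TABLE if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

The default route (`0.0.0.0/0`) matches everything, which is exactly the "hey, you probably know better how to handle this" behavior Rene mentions.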
I want to compare IP routing with onion routing, at least with onion routing as we use it in the Lightning Network. The first difference, as we already mentioned: in IP routing we have this method called best-effort routing, because every router basically makes its best effort to say, "the best thing I can do is send you along this route."
Whereas in onion routing, as we discussed several times, we have source-based routing: the sender has to make all the decisions about which path to use. Then the data format is very different, because in IP we have open headers. Everybody along the path can read the header information, whereas in Lightning we basically have encrypted headers. There's only a small part of the header that tells you what to do at your hop, but the rest of the information is encrypted end to end and hidden away from you. In IP the sender and the recipient are known to the routing node, so it can use this to help make routing decisions, whereas in onion routing, and I think it was Fabrice yesterday who mentioned this, for me it was also eye-opening: it's really funny that the routing nodes don't actually need the gossip protocol and the gossip store to some degree, because they are just going to forward the packet to their peers anyway. They don't even have to be aware of the topology of the network.
Then in IP routing we have an address space that is a logical and hierarchical overlay network, and to some degree I would argue it reflects some geographical properties. The IP networks that you usually talk to are usually located in some direction; it doesn't have to be like this, but usually when you make requests you route geographically within your country or your region. You don't necessarily go over China or, let's say, somewhere like that. Whereas the Lightning Network is really a fully decentralized peer-to-peer network; the addresses are just public keys, curve points, and they don't give you any information like that. We don't have this address space in that sense. Then when you talk about the edge weights of the network: in IP I would argue they're mostly static. You have the bandwidth of the links, and that is basically the information that you need for routing. A packet arrives and then you say, oh, I can actually forward it on this link; this is what you could use to make routing decisions. If you look at the routing table protocols, they might use this and say, hey, this is fewer hops, but the hops are really slow, so I'd rather take a longer route where I can actually push through traffic.
Whereas in Lightning it gets really strange with the edge weights. First of all it's not even clear what to put on the edges, but I would argue it's highly dynamic. The fees can change, and another edge weight that you should use for routing is actually the balance: how much money is in this channel, and not just in the channel but on your side of the channel, because that's a necessary condition to be able to forward a payment. This thing is changing all the time; with every routing process it gets updated.
If you think about topology and information propagation, the network in IP can react to changes of the topology pretty well. We have this border gateway protocol, and if some router fails or we just switch some cables, routing tables are updated. It takes some time, but since the network is fault tolerant we might find other routes; it's still working pretty well. If we look again at Lightning, a fully decentralized peer-to-peer network, we have a gossip protocol which is rather slow and noisy for propagating all relevant information. For example, think about the balance: let's say we ignore privacy for a moment and just want to propagate the balance of all channels at all times. This would be a mess, because it's changing with every payment. Technically it's probably not even possible to achieve this. But then again, you want to have privacy, so we might not even want to share this information.
So this entire topology information about what the network looks like, in Lightning we don't have it. Then think about denial of service attacks. In TCP/IP you can do spoofing: I can just send out an IP packet and say it comes from a certain sender, and the nodes will reply to it. But since everything is open, since we have open headers, the ISPs can actually monitor behavior like this, interfere with it, and mitigate these attacks. Whereas in Lightning anyone can just send and delay onions, and this is pretty much impossible for us to detect or mitigate.
If you are running a Lightning node and I want to be nasty, I'm just going to send some onions your way. I don't put a lot of payload, in the sense of satoshis, into the HTLCs, but I can fill up your payment channel. As you know, you can only have 483 HTLCs concurrently in flight, so I make a circular onion to myself and just don't reveal the preimage, and for the next 24 hours your payment channel is completely locked.
Currently, I would say, we don't have any way to prevent this in Lightning. We can just hope nobody does it, because there's an economy attached to it: you need to have some capacity or liquidity in order to do this, but you can do it rather cheaply. Maybe a market emerges where people say: I don't accept HTLCs that are too small. But that is all stuff we don't know yet. Then if you really look at path finding itself, given all these properties, this is basically just a summary: in IP, routing is collaboratively done by the network, and this is to me, if I think about it, a really amazing achievement that people came up with. I'm not even sure if they were aware of how strong this property was when they built it. It's really this distributed network that solves a complicated pathfinding problem in a really beautiful manner. Whereas in Lightning it's achieved by the sender, or maybe a third party path finding service. It might happen; I would argue that this is going to be a business model, for reasons we can discuss offline, but we don't know yet.
Seeing this table, you probably already have a pretty good feeling for why pathfinding is hard on the Lightning Network; I just want to elaborate on this a little bit more. The key ingredient for routing is to know a path, and in an ideal world path finding is really easy. I mean, we create a graph, or we know the graph from the gossip store; the nodes are Lightning Network nodes and the edges are the payment channels, as we expect.
Then the weights might be the routing fees, and there is a label on each edge called the balance, because that has to be respected to some degree. Then finding a path is just as simple as running Dijkstra's algorithm on the network. Every computer science student learns this, I think, everywhere in the world; I hope so. You can basically take this information from the gossip protocol, compute the cheapest path, create an onion, and go out and send it. Then there is one constraint, which is why I call it the constrained Dijkstra problem: the labels have to be respected. If there's not enough balance, you cannot use the edge. After computing the Dijkstra algorithm, either you find a path, because the network is fully connected and you can actually use it, or you know that no path with enough balance exists, because the labels didn't work out, and then you know this too. But the reality is a little bit different.
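The idealized "constrained Dijkstra" described above can be sketched like this. The graph structure and numbers are made-up illustrations, and in reality the balance labels are private, so a real implementation can only probe rather than filter on them:

```python
import heapq

def cheapest_path(graph, source, target, amount):
    """Dijkstra over a channel graph where each edge carries a fee (the
    weight) and a balance (the label/constraint).

    graph: dict mapping node -> list of (neighbor, fee, balance).
    Returns (path, total_fee) or (None, inf) if no path has enough balance.
    """
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == target:
            # Reconstruct the path by walking predecessors backwards.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path)), cost
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, fee, balance in graph.get(node, []):
            if balance < amount:
                continue  # respect the balance "label" (private in reality!)
            new_cost = cost + fee
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                prev[neighbor] = node
                heapq.heappush(heap, (new_cost, neighbor))
    return None, float("inf")  # no path with enough balance found
```

Note this sketch also simplifies the fees: it treats them as fixed per edge, whereas, as the talk explains next, the real weight depends on the amount forwarded.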
First of all, are the weights really the routing fees? I would say they change with the payment amount and over time. Say I really build the graph as a data structure in my computer and have it in memory. Depending on the payment size, the fees and the weights that I use for the Dijkstra computation are actually changing, because I have this fee rate, which makes it a little bit complicated; and also people can announce new routing fees. Then there is no label on the edges. Technically yes, when you select a path and forward the onion, the onion might come back. You remember the telescope metaphor from Christian: you start setting up all the HTLCs and at some point there's not enough balance and everything rolls back to you. So the balance is private and highly dynamic, and this is really an issue.
We can't really respect these labels when we are selecting a path; we can only try it out and probe. And of course, even after we compute the Dijkstra algorithm and try all the paths and the paths didn't work, we don't know that no path with enough balance existed, because what could have happened is: I try a couple of paths, they don't work, I try other paths, and meanwhile the network has changed the balances in the channels, and one of the first paths might have worked after all.
So actually, when we don't find a path, we don't know that we can stop; technically we should try again and again and again. That's a little bit theoretical. What we do is probe for paths until we find one, and currently we have maybe ten thousand nodes on the Lightning Network. I didn't look at the number, but it's a very small graph, so we can actually afford to do that. But if we want to use Lightning as a scaling solution for Bitcoin and the Lightning Network really grows, we run into problems when we probe for paths all the time. It's a highly dynamic problem with a lot of uncertainty. That is what I want you to remember: this is really a problem that needs more brain power to be solved.
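The "probe until one works" strategy can be sketched as follows; `candidate_paths` and `try_payment` are hypothetical placeholders for a path generator and a payment attempt, not real APIs:

```python
def pay_by_probing(candidate_paths, try_payment):
    """Attempt candidate paths one by one, cheapest first, until one settles.

    candidate_paths: iterable of paths, assumed ordered by cost.
    try_payment: callable that attempts the payment and returns True/False.
    """
    for path in candidate_paths:
        if try_payment(path):
            return path  # this path had enough balance and the payment settled
    return None  # no probed path worked, though one might work later on
```

The `return None` case is exactly the uncertainty Rene points out: failing to find a path now does not prove that no path exists, because balances keep shifting while you probe.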
Now I want to talk a little bit about the gossip protocol, because this is really the part of the Lightning Network that helps us make routing decisions. I'll try to make it quick because it's basically just what's written in the BOLTs. The purpose of the gossip protocol is obviously routing: as the source in source-based routing I need to be aware of the topology of the Lightning Network. Which nodes exist, how can I connect to them, how do I open a channel with them? I need some information in order to actually operate, and then: what are the channel policies? What are your routing fees, and which HTLCs are you accepting? Is there a minimum payment or a maximum payment? What can I do if I want to use your channel?
Audience Member speaks.
Then of course you want to share this information with your peers. So because it’s a peer-to-peer protocol you peer with somebody and then you share the information you know about other channels with others and you need some mechanisms to do this.
There are 4 announcement messages on the gossip protocol and they’re all Lightning messages so they’re all encrypted. We have a noise XK protocol as we saw before. There’s the announcement signature messages and they are needed to negotiate if the channel shall become public or not. As we discussed yesterday both parties have to agree when a channel is open that the channel is supposed to be public.
If I don't announce my signature for this channel and you open the channel with me, you cannot make this channel public. You can announce it, but nobody on the gossip network will forward or accept this, because my signature is missing. Then there is the actual channel announcement; this really makes the channel publicly known to the network. As soon as you have these signatures exchanged you are connected to the channel, and it kind of proves ownership of the channel, because you have signatures attached to it. As a mechanism to fight denial-of-service attacks, you cannot just go out on the gossip protocol and say, hey, by the way, this is a channel; your peers would not forward this. You need some proof of ownership, that you and your channel partner are actually the people who maintain the channel.
Then there are node announcements. These share the metadata of the node, for example its IP address. That's a tricky one; I think tomorrow we're going to have a discussion section about privacy on the gossip protocol, and I already want you to take this into consideration, because this is information that's also easy to double check: you can just peer with this node and check if there's really a Lightning node running on this IP address, and now you have a mechanism to connect Bitcoin addresses to IP addresses. This is in my opinion a huge intrusion into your privacy.
The interesting thing about node announcements on the Lightning Network is that they are only possible if at least one channel was announced publicly. This is again a measure to fight denial-of-service attacks, because otherwise I could just go out and say: I'm a Lightning node, and send this message a million times, and then the protocol would have to do a lot of work. So nodes will only forward these messages if at least one public channel was announced and verified for this node. That's why we need signatures for these public channels, and the existence of the funding transaction is part of the announcement message, so you need a blockchain transaction. This really takes away some spam. Then there are the channel update messages, and what they do is update the channel policies. Once the channel is announced we only know there is a channel of a certain capacity, all this hard data, but everything that is flexible, like the fees, the amount of HTLCs we accept, or the minimum HTLC amount, is part of the channel update message.
I would skip the details of how these messages look because I think you can just read them up in the BOLTs.
Audience Member speaks.
Rene: I'll talk through this slide pretty quickly. There's actually no message on the gossip protocol for deleting channels from the network. So the way you delete channels (and with them nodes) from your gossip store is you basically see that the funding transaction was spent, and then the blockchain tells you this channel doesn't exist anymore.
Rene: Well, we don't have splicing yet. But when we have splicing we'll need to think about these things. And of course you can also think about privacy with gossip and related stuff: you can for example see the channel lifetime. Maybe, if the node opens other channels, you are able to connect channels or addresses again. Maybe somebody was using the same wallet to run several Lightning nodes, so you can even merge Lightning nodes; weird stuff can happen. You can do fingerprinting of IP addresses.
Then there's another class of messages, and these are the queries for the gossip protocol. One part was just announcing and forwarding information, but nodes can also retrieve old gossip messages from their peers. Question to you: why would this be useful? Why do we need this?
Rene: For example, if a node goes offline it misses some stuff, so it wants to catch up. There's at least one other, even more basic reason why we need this: bootstrapping. The node comes online for the first time and wants to make a payment; with source-based routing it needs to know what the network looks like. It needs some way of asking its peers: "hey, what does the network actually look like?" There are query short channel IDs and reply short channel IDs messages, and what they do is basically ask for the channel announcement and channel update messages for specific channels. Of course this is not really of use yet if you don't know the short channel IDs, and that's why you have these queries with the query channel range and reply channel range messages.
One is to ask for channels in a certain range, and the range is in block height, so you say: from block 500,000 to block 600,000, please give me all the short channel IDs that you know. Then come reply messages basically telling you: these are the short channel IDs. Then you can do the other query and ask for the channel announcement and channel update messages. You can pull this information from your peers. And then there is the gossip timestamp filter: now you only get channel updates that have a certain timestamp, so you don't receive all the old channel updates, only the ones that are most relevant.
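Querying channels by block range works because, per BOLT 7, a short channel ID is just three on-chain coordinates packed into 64 bits: the block height of the funding transaction, its index within the block, and the output index. A sketch of the packing:

```python
def decode_short_channel_id(scid: int):
    """Split a 64-bit short channel ID into its BOLT 7 components:
    3 bytes block height, 3 bytes transaction index, 2 bytes output index."""
    block_height = scid >> 40
    tx_index = (scid >> 16) & 0xFFFFFF
    output_index = scid & 0xFFFF
    return block_height, tx_index, output_index

def encode_short_channel_id(block_height: int, tx_index: int, output_index: int) -> int:
    """Pack the three coordinates back into a single 64-bit integer."""
    return (block_height << 40) | (tx_index << 16) | output_index
```

Since the block height sits in the top bytes, asking for "all channels between block 500,000 and 600,000" is just a numeric range over these IDs.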
What do I get from all these messages? I have a quick summary of the information that can be used for routing and pathfinding. There's the CLTV expiry delta, which basically says: if you want to route over me, this is for how many blocks I want the HTLCs to be locked in order to make sure everything works. We have the minimum and maximum HTLC amounts that you are accepting, the short channel ID, of course, because the onion needs to state which short channel you're using, and then there are the fee rate and the base fee.
Remember, every time you route over a node there is a certain fixed amount of fee that they want to be paid, and the fee rate basically looks at how much money you are forwarding; a fraction of that is added as well. I also included the channel flags, which are the direction and availability of the channel. There are some flags in the messages where you can turn off your channel, basically making a channel update announcement saying: "hey, now we're offline, don't use this channel anymore."
This also takes some time as the information spreads, but it's useful information to take away. And then of course there are, for example, feature flags. For example there's this option_channel_htlc_max, which relates to the HTLCs I accept on this channel. If there's a channel that only accepts 20 concurrent HTLCs, you might not want to use it for routing. This is all the information that we have, and taking into consideration the first part of this talk, I would argue this is not really a lot of information. You can maybe get some other information from elsewhere, but it's not a whole lot to help me, as a source, make qualified routing decisions.
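The base fee plus fee rate scheme from the channel update can be written out concretely. Per BOLT 7, the forwarding fee is a fixed part plus a proportional part expressed in parts-per-million of the forwarded amount; the sample numbers in the test are made up:

```python
def routing_fee_msat(amount_msat: int, base_fee_msat: int, fee_rate_ppm: int) -> int:
    """Fee a node charges for forwarding amount_msat over its channel:
    a fixed base fee plus fee_rate_ppm millionths of the forwarded amount."""
    return base_fee_msat + (amount_msat * fee_rate_ppm) // 1_000_000
```

This is the reason the edge weights depend on the payment size: the same channel is cheap for a small payment and expensive for a large one.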
Alright, so after doing this short course on the gossip protocol, which apparently I don't know much about, let's go back to path finding. What are the strategies for pathfinding that we currently know? Well, one strategy I already mentioned: we probe the cheapest paths. Maybe the cheapest path doesn't work, so we try the second cheapest one. Then Alex talked a little bit about atomic multi-path routing: you split payments into smaller pieces, and maybe that helps you find some way through the network instead of making one large payment. I came up with this concept of just-in-time routing, and I will talk a little bit about it, since AMP we already saw yesterday. It's my idea of bringing the best-effort concept back to Lightning while keeping the privacy properties. We have already seen trampoline payments; when I created the slides I said I hoped to learn something from this talk, and I already did yesterday, thank you very much. Trampoline payments also try to bring this best-effort method back to Lightning, but they come with a privacy issue, and I should be honest about it: JIT routing, in the way I would want to do it, also comes with lower privacy but a higher success rate. I think there's this general concept, and I see this already from IP networks: the less private you are, the easier the pathfinding gets.
The last strategy, which I will not elaborate a lot on but I think is very viable, is that you can use previous knowledge; I think Alex already talked about it. You can look at previous routing attempts: which nodes were good before, which kind of stuff worked. Or you can use the uptime of nodes, information you have from other sources. You can use all of that.
I want to mention some considerations for each of these methods, and for JIT routing I will use two or three slides to explain how it works. What are the considerations for probing paths? One has to compute the shortest path from the destination to the sender. Is it actually clear to everybody why I'm not computing the shortest path starting from myself?
So I'm the source in source-based routing. Why do I have to compute the shortest path starting from the recipient towards me, and not from me to the recipient?
Rene: Exactly, the fees. If the fees were fixed it wouldn't matter, but since we have the fee rate, the later hops change the amount that is being forwarded, and with it the fees of the earlier hops, because the fee rate charges satoshis depending on how much they forward. So later fees actually have an impact on the routing. I wrote this down here: we need an adapted, dynamic version of the Dijkstra algorithm. Dynamic, because the actual weights change with every hop as the amount in the onion changes; you don't have a fixed graph, since with every step, depending on the fee rates, things change. And adapted, because we actually need to probe k paths if the first k minus one paths fail. If you look up k shortest path routing on Wikipedia, you find Yen's algorithm, for example, and also some others, and the runtime is roughly O(k·n·(m + n·log n)). If you look at the main components you have the number of nodes and the number of edges, and then times k: for every path you do this, you have nodes times edges.
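Why the computation has to run from the recipient back to the sender can be seen by accumulating the fees backwards: each hop's fee depends on the amount it forwards, and that amount depends on all later hops. A sketch with made-up hop parameters:

```python
def amount_to_send(amount_msat: int, hops) -> int:
    """Total amount the sender must put into the first channel.

    hops: list of (base_fee_msat, fee_rate_ppm) for each intermediate hop,
    given in forward order. We walk the route backwards: each hop adds its
    fee on top of the amount it must forward onward.
    """
    amt = amount_msat
    for base_fee, fee_rate_ppm in reversed(hops):
        amt += base_fee + (amt * fee_rate_ppm) // 1_000_000
    return amt
```

Because the fee of an early hop is computed on an amount that already includes all later fees, you cannot know an edge's weight until the tail of the path is fixed, which is exactly what running Dijkstra from the recipient gives you.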
Audience Member: Does that actually help you? If all the k shortest paths go through one channel and that's the channel that is failing, you can throw away your entire result.
Rene: I agree, but at some point there will be paths that are more expensive going through another channel. Of course you can use a different algorithm, or an adapted version using the information you get back. So I'm not saying we should use this algorithm; I'm saying: if you probe for paths, you start with the cheapest path, because we want to save on fees. The notion of short is also tricky. You could also measure it in the number of hops that you're doing, and then just do breadth-first search.
Audience Member: I’m just wondering if computing K shortest paths is not more expensive than just doing the single shortest one adaptively over and over again?
Rene: Probably it's better to do the single shortest path adaptively, but still, it's an expensive algorithm, and you try to do it over and over again.
Audience Member: So I agree that the k shortest paths is a really good solution if you have some really expensive service that you're talking to, and the latency to talk to it is probably expensive as well, but if it's located right next to your actual node, then it might be cheaper to do the adaptive version.
Rene: We just discussed this yesterday. If you're running Lightning on your mobile and your mobile is supposed to do this for the whole graph, let's plug in some numbers: say a million nodes and a reasonable number of channels that come with them. You multiply these two numbers and you get a lot of computational steps. It gets ugly pretty quickly; it's pretty inefficient to compute on a large Lightning Network, and in particular on mobile devices. This is maybe one of the reasons why I'm suggesting that pathfinding might be a service that is outsourced, where you just ask a pathfinding service to give you a path. Or we could outsource this by having the network find a path for you, which would mitigate the trust issue a little bit, and preserve your privacy to some degree, because if you ask a third party service, that service at least knows that you want to pay a certain person on the network. If all nodes behave well, a single path should be quick, maybe one second, independently of the network size.
That is, if everything is nice in path finding and routing. The problem is if nodes misbehave: probing can take a pretty long time, because as we learned yesterday, we should not try to pay a second time if a payment doesn't work. While probing a path we could end up in a scenario where something maybe hit the chain, or maybe there's just some timeout; we don't know what the node is doing. HTLCs are being set up, but you don't get anything back and you don't know what's going to happen, so you're going to wait really, really long for one probing attempt. It's not a computationally heavy problem on your side.
Now some considerations with AMP, and it's a little bit like I'm trash-talking it right now; it's not that I'm not a fan of AMP, but I want to point out the problems. There are advantages; I refer to Alex for them, and he already mentioned some. The idea, as we know, is based on the known fact that larger payments have a higher likelihood to fail. You can simulate this very easily, you can try it out, you can use your own data; you can do whatever you want, but you will see it. It's basically common sense: at some point some channels are just less likely to have enough balance for you. So what you can do is split larger payments into several smaller payments and send them along various routes.
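The claim that larger payments fail more often, and that splitting raises the per-part success chance, can be illustrated with a toy model. The uniform-balance assumption below is purely for illustration, not measured data, and it optimistically treats the parts as independent single hops:

```python
def hop_success_probability(amount: float, capacity: float) -> float:
    """P(spendable balance >= amount), assuming the balance on our side of
    the channel is uniformly distributed on [0, capacity]."""
    return max(0.0, 1.0 - amount / capacity)

def amp_success_probability(amount: float, capacity: float, parts: int) -> float:
    """Probability that an AMP payment split into `parts` equal pieces
    succeeds, if each piece crosses an independent channel of the same
    capacity. ALL parts must succeed for the payment to settle."""
    return hop_success_probability(amount / parts, capacity) ** parts
```

Under this toy model, splitting a payment of half a channel's capacity into two parts raises the success probability from 0.50 to about 0.56, but the exponent also shows the cost: every extra part is one more thing that must not fail, which is exactly the latency problem discussed next.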
So what are the disadvantages of this strategy, in my opinion? Well, one thing is that the routing time increases drastically, because in atomic multipath routing we still need to probe every one of these smaller paths, and we already saw that probing might take a really long time. So now let's -
Audience Member [unintelligible]
Rene: No, it's the maximum time over all the paths, because the preimage, at the end of the day, is only going to be revealed when there is enough incoming capacity over all the HTLCs. So now comes the problem: if one of the paths I'm probing is just misbehaving, I don't know what to do; I'm just waiting. And it gets even more messed up, because different paths that I'm probing might have different CLTV expiry times, so stuff gets really ugly.
Audience Member: I should probably mention that what you’re referring to probing is what I call a payment attempt and I use probing differently.
Audience Member: Finding the route is more work, but actually trying it is the worst case.
Rene: The thing is, in atomic multipath routing I can actually try those routes concurrently, right? That is not the issue here. The issue really is that if one of these routes takes a really long time, all the other routes basically have to wait, because they don't settle independently of each other.
Audience Member: Larger payments… would more likely cause a problem.
Rene: That would also be a rule of thumb. In a sense, with AMP we are decreasing the amount to pay per path, but we are increasing the number of nodes that are involved. And since, as we just discussed, probing is probably still necessary for every path, the base fee is also paid several times. So AMP is also more expensive. Side note: have a lunch discussion with me about the fee model in Lightning, because I was arguing that we should actually get rid of the fee rate, since a lot of the data science makes a lot more sense when we only have base fees and small fees for small payments. But that's a different discussion I don't want to force here. Yes, James?
Audience Member: What if one channel along the route closes… [inaudible]
Rene: It's no big deal; that's a really cool thing about the design of Lightning. Remember, yesterday we already had this – I was surprised that this was a frequent misconception. I think it was not really in this room, but we discussed it: if on a route a channel closes, the other HTLCs are still settled off chain. So also in AMP, if one of these channels breaks and closes, it has a long time lock. So if the other stuff works out and we release the preimage–
But again, these paths have to wait all that time anyway. What AMP might do – this is all speculation anyway, because we haven't built it yet, so we don't know – what I'm saying is that the danger of AMP is that payments now have a really good likelihood to always take 24 hours or whatever. It's not Lightning fast anymore, because it's sufficient that one single path is taking up your time.
Audience Member speaks
Rene: One thing you can do in Lightning already is overpay a little bit, to obfuscate some stuff. You could say: well, I'm willing to overpay a little bit. I have fees anyway, so I make a couple of paths more, maybe a percent or something. In this case, at some point the recipient on one path says: well, I have enough payment now. I release the preimage, and whatever happens with the rest, happens.
Audience Member speaks.
Rene: Payment hash? In the current suggestions, yes, but I think when we go for payment decorrelation and such we could also come up with ideas. But as I said before, remember this picture: it's very open. We can be very creative here, if you currently think there might be a way of using different preimages. You can simulate AMP already. You can always say: hey, I want to pay a merchant and I really trust this merchant. I don't trust the payment provider, but I trust the merchant, because the merchant sits in Australia.
Audience Member speaks
Rene: What I'm saying is, of course, you could simulate AMP already and tell the merchant: hey, please give me one-euro or one-dollar invoices, and you pay one invoice after another. If one of those got stuck – you could technically do that, and then you have different preimages.
Audience Member speaks.
Rene: There are many AMP proposals floating around, as you know. I think one of the reasons why so many proposals are floating around is that nobody came up and said "hey, by the way, this is a really good solution for path finding" with everybody else saying "oh yeah, we agree, it's obviously a good solution, let's go for it." We're currently figuring this out, and as I said before, that's a great opportunity for you, having learned about all this stuff, because you could actually work on these issues. On the next slide I will show a small table to argue that it's not even clear whether the likelihood of a successful payment is higher when you split payments. Keep this in mind for a second.
Another disadvantage of AMP is that you have more HTLCs in flight. I think we discussed this yesterday in the room a little bit. When you split payments, those payments tend to take longer because you have to wait for the longest path. More HTLCs and more payment channels are in use, which could result in several things. One is – I think I have these points here – yes: statistically we waste more block space if channels break, because statistically speaking the chance is higher that there are currently more HTLCs on a breaking channel. These HTLCs are usually also smaller, the fees might eat up a lot, and the channels might become overloaded.
Remember, a channel can only have 483 HTLCs in flight. Let's assume we make really, really small payments; then we're going to eat up a lot of HTLC slots. We have to think about whether we want to do that. On ideas for improving AMP, I kindly refer to the talk that Alex already gave and the discussions we outlined. I don't want to say AMP is a bad idea, but it has some problems and we should be aware of these problems.
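Since all parts of a split payment converge on the recipient, the HTLC slot limit bites hardest there. A back-of-the-envelope sketch (my own illustration, not from the slides; 483 is the per-channel limit mentioned above):

```python
# Rough illustration: a channel can carry at most 483 concurrent HTLCs.
# If the recipient has a single channel and every payment is split into
# n parts, all n HTLCs of a payment converge on that channel, dividing
# the number of payments it can carry in flight.
MAX_HTLCS = 483  # per-channel in-flight HTLC limit

for parts in (1, 5, 20, 100):
    concurrent = MAX_HTLCS // parts
    print(f"split into {parts:3d} parts -> at most {concurrent} concurrent payments")
```

The point is just that aggressive splitting trades payment-level success probability against channel-level congestion.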
I promised to talk about the question whether AMP actually has a higher chance of routing a payment, and I have to say these are really completely arbitrary, not cherry-picked, probabilities that I used. What I did is I said: well, let's assume the probability of success for a large payment is only 20% and the probability for a small payment is 80%, and I don't even know what small and large actually mean. But I thought the 80/20 rule is always a good thing to start with – again, it's completely arbitrary. So the AMP success probability is: you take the probability of a small payment succeeding, and this has to happen for every small payment, so you basically compute this exponential. Then you ask the question: is this really bigger than the probability of the large payment succeeding? Obviously you'd better do a simulation or whatever; this is just a really rough thing.
Sorry for the transition here. I came up with this table. Here this probability is always fixed, and this is the number of paths that I would use in atomic multipath routing. Obviously, if I only use one path it's the same. For two paths I didn't go directly to 80%, because each part is still a fairly large payment, so I went to 0.4, then 0.6, and from there on I said it's always 0.8. These are just rough numbers, but what you can see is that there's only a small window in which you actually get a higher probability with atomic multipath routing in comparison to just using the standard large payment. And even in this small window – don't take this too seriously now, because all the numbers are completely arbitrarily chosen – it's not even clear. There are obviously counterarguments to what I'm presenting, because this formula is not really true: if one path fails, well, with multipath routing you just probe another one, so eventually you might get it, because you already have a large fraction through. But there might be real constraints in the global view of the network that hinder you from finding a path even with atomic multipath routing, and it's not even clear when this fails.
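The table can be reproduced with a few lines. This is a sketch under the talk's simplifications (independent parts, arbitrary per-part probabilities that saturate at 0.8):

```python
# Compare one large payment against an AMP split into n parts.
# An AMP payment succeeds only if every one of its n partial payments
# succeeds (assuming independence, which is a simplification).

def amp_success(p_small, n):
    """Probability that all n partial payments succeed."""
    return p_small ** n

p_large = 0.2  # arbitrary: one big payment succeeds 20% of the time

# per-part success probability for a given number of parts: still fairly
# low when split in two or three, then assumed to saturate at 0.8
per_part = {1: 0.2, 2: 0.4, 3: 0.6, 4: 0.8, 5: 0.8, 6: 0.8, 8: 0.8, 10: 0.8}

for n, p_small in per_part.items():
    p_amp = amp_success(p_small, n)
    verdict = "better" if p_amp > p_large else "not better"
    print(f"{n:2d} paths: AMP success = {p_amp:.3f} ({verdict} than one payment)")
```

With these numbers the split only wins for roughly three to six paths, which is exactly the "small window" claimed above.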
JIT routing is just-in-time routing. It's the following idea: let's say S wants to make a payment of 90 satoshi to R, and this is how the network currently looks. S has a channel with B of capacity 110 satoshi (numbers much smaller than realistic on Lightning, but fine for the example), with a balance of 100 satoshi on S's side. B has a channel with R of capacity 200, but B only has 80 satoshi on this channel. B also has a channel with T, also of capacity 200, where the balance for B is 80. And there's another channel between T and R which is evenly balanced: each side has 100 satoshi. You can already see that when you probe this network, you don't find a path, because B cannot forward 90. Very obviously you could do it with AMP: you can just do a certain split and then you find a solution. But what I'm trying to do with just-in-time routing is this picture. S selects the path over B to R – that's the path it is really probing. When B receives this onion, it says: I cannot forward this onion. In a regular payment it would now send back an error saying "I can't forward this onion," all the HTLCs would go all the way back to S, and nothing happens. But B could now do something instead, and I think I wrote this down here: B uses one of the coolest tricks that we use in Lightning. What is the coolest trick ever used in Lightning in general? Rebalancing is the idea, but what is the underlying concept of what B does? Okay, I'll just give you the answer: B is deferring the routing process to the future. And remember, Lightning itself is deferring the publishing of a Bitcoin transaction to the future, so deferring to the future was a good concept. We do the same here: instead of sending back an error, B rebalances.
Now B does a circular onion payment of 50 satoshi: the channel with T now only has 30 satoshi on B's side, and that capacity moved onto the channel with R, so there are now 130 satoshi on that channel. And as you can see, the payment of 90 goes through. Great.
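B's decision can be sketched in a few lines. This is my own illustration of the idea, not protocol code; the balances and the 50-satoshi shift follow the toy example above:

```python
# Sketch of B's just-in-time forwarding decision (S -> B -> R, with a
# circular rebalance B -> T -> R -> B when the outgoing channel is short).

def jit_forward(out_balance, rebalance_source_balance, amount):
    """Return B's new (balance toward R, balance toward T) after
    forwarding `amount`, rebalancing just in time if needed.
    Returns None if even a rebalance cannot provide enough liquidity."""
    if out_balance < amount:
        needed = amount - out_balance
        if rebalance_source_balance < needed:
            return None  # no local liquidity anywhere: send back an error
        # circular payment shifts funds to the outgoing channel; the talk's
        # example moves 50 sat, a bit more than the strictly needed 10
        shift = 50 if rebalance_source_balance >= 50 else needed
        out_balance += shift
        rebalance_source_balance -= shift
    return out_balance - amount, rebalance_source_balance

# B has 80 sat toward R and 80 sat toward T; S asks B to forward 90 sat.
print(jit_forward(80, 80, 90))  # -> (40, 30): 130 - 90 toward R, 80 - 50 toward T
```

Without the rebalance branch this is just a normal forwarding check; the whole trick is the extra circular payment before giving up.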
Rene: Well, that can't happen, because the onion is already constructed. I mean, that's a question. I have different suggestions to mitigate this problem. I call it just-in-time because you get the liquidity that you need just in time, like in supply-chain concepts.
Audience Member: What happens if I cancel the receiving of the payment?
Rene: If you cancel it – wait, who are you in this graph? You're getting the payment and I do the just-in-time routing, and when the payment comes to you, you just reject it? Well, that's no big problem, because – if you read the mailing list – a person suggested that you can do the rebalancing with the same payment hash, and in that case the rebalancing only takes place if you accept the payment. So you only pay fees for the rebalancing if the payment goes through.
Audience Member: In that case you still don't have enough capacity to go through.
Rene: In that particular case, no, because if you bind the rebalancing to the same preimage, then the HTLCs will not get resolved and you can't create a new HTLC for the 90. But here, since you have two HTLCs in two directions, the peers could have a protocol for mitigating this. You could – I'm not saying we should do it, I'm saying you could. And I even have a different suggestion for this. In general, my suggestion is that the peers might want to lift the fees on the rebalancing operation. If you look at graph theory, there is a certain set of networks you should always look at when studying a graph. One is called your ego network: if you are B, your ego network consists of S, T and R – everybody who is around you. That's the network that you usually know very well in Lightning, because you have your node and you know your channels and your peers. But what I would suggest is to look at your friend-of-a-friend network.
If I am S, my friend-of-a-friend network actually goes all the way to R, because all the friends of my friends are in there. And if you do the math, you will realize that all circles of length five are in your friend-of-a-friend network. So what if – and that's a question – we introduced another gossip query within your friend-of-a-friend network, a query saying: hey, to all my peers, give me the channel balances of all your channels. Then I have a very local view where I actually know the exact topology, and I can suggest a rebalancing route. I could even do it publicly: I could extend the BOLT protocol, and everybody involved would say, hey, actually for each one of us it's better to now shift these funds, so you could do the rebalancing for free. I'm not saying we should do it in this way; this is one way of achieving it. And if you compare it with the privacy model of trampoline, it might actually be a better privacy model. You also don't have to do it as drastically as I'm suggesting right now: you could have a query saying "tell me channels on which you want inbound capacity" or "tell me channels on which you want outbound capacity." I'll just continue.
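To make the friend-of-a-friend view concrete, here is a tiny sketch (my own illustration, not a proposed gossip query) that builds a node's two-hop neighborhood from the toy topology of the JIT example:

```python
# Build a node's friend-of-a-friend view (its 2-hop neighborhood) from an
# undirected adjacency list, using the S/B/T/R topology from the example.
from itertools import chain

graph = {
    "S": {"B"},
    "B": {"S", "T", "R"},
    "T": {"B", "R"},
    "R": {"B", "T"},
}

def foaf(node):
    """Return `node`'s friends plus friends-of-friends (excluding itself)."""
    friends = graph[node]
    second = set(chain.from_iterable(graph[f] for f in friends))
    return (friends | second) - {node}

# S's friend-of-a-friend network reaches R via B, as claimed in the talk
print(sorted(foaf("S")))  # -> ['B', 'R', 'T']
```

If each node additionally learned the balances inside this neighborhood, it could propose rebalancing cycles locally without any global knowledge.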
Some considerations with JIT routing. I think the idea got clear. The disadvantage is that you need additional probing for the rebalancing operation. It may be costly for routing nodes, especially when you do it on the current system, because you just have to pay fees – and maybe for the rebalancing you even use a different preimage, because currently that's easier. The remaining CLTV deltas might not be sufficient to support additional rerouting operations, because they are very tight on time, and you might not find a route that actually works with the smaller CLTV deltas. It also increases the payment time, because again there is time needed for the rebalancing.
As I mentioned, one way of improving this is by creating this friend-of-a-friend overlay network and maybe extending the gossip protocol to exchange a little bit more information, similar to a BGP setting. We could use some aggregated information about the channels to do this, and we could allow cost-free rebalancing routes in Lightning, especially if they are not encrypted: since you know who is involved, you know they can't be used for denial-of-service attacks or something.
Multihop trampoline payments. The high-level idea, I think, we already saw: to get a best-effort component in, you define a rough route. You do multi-trampoline payments where you say "hey, it should go through these trampolines" and then let the nodes along the route figure out the actual paths between the trampolines. Again, you shift the path finding problem to the people who are along your route. I think yesterday the idea came up, though not very clearly, that it might make sense in Lightning to extend the gossip protocol so that the trampoline nodes have their own network of saying "I'm pretty confident that I'm able to forward to Christian," even though we're not direct peers, or something like this.
So then on this trampoline network I could make a route, where somebody in the invoice could say "hey, use this trampoline as an exit trampoline," that kind of thing. Going this way, the disadvantage is that we need a protocol change: the onion format needs to be adapted, which is always a little bit of work. I think it's coming, but we need it. A more severe problem in my opinion is that a clique of trampolines might emerge and build a cartel, especially if we even enforce something like this on the protocol level. As the question from the room already suggested, centralization or a hub-and-spoke topology might emerge, and the privacy model decreases a lot, because the trampoline really knows that I'm doing a payment to somebody else, or that somebody is doing a payment to somebody else.
Whereas with JIT routing you don't have this problem: the person initiating the rebalancing just knows the source and destination in the local view, but not in the global view. And I think yesterday we kind of figured out that it wouldn't work – at least right now – for spontaneous payment protocols. For improvements I refer you to talking to Christian and probably also to Bastien.
To sum up some general thoughts on path finding: I think we should acknowledge, as I already did in the other talk, that path finding is an open and difficult problem for the Lightning Network. We should not ignore this, and I personally think we should not wait to tackle this problem. You could argue that currently the Lightning Network is small and our current mechanism works, but if we need protocol changes for some of these ideas, or even other ideas, then of course it might be too late once the Lightning Network is already widely spread.
I think this is really a problem. We should quickly work on various proposals and ideas. Some of them require protocol changes, and different strategies could be combined – I want to emphasize this. You could combine trampoline payments with JIT routing with AMP; you can combine AMP with JIT; you can do all of these. It's not like one algorithm wins – maybe one does in the end – and there seems to be a trade-off between these strategies, for example between the privacy of individual nodes and the ability of the network to solve the path finding problem. It's a question of where we want to stand on this.
Lightning might want to copy some ideas from the autonomous-system space in particular. I think it would be really useful if we had a hierarchical address space or, as Christian suggested, a core open routing protocol at the base that always works, with an encrypted layer created on top of it, because this would just increase the success rate, and not everybody is desperate for privacy anyway. My personal opinion is that path finding, as I mentioned before, will emerge as a business model and opportunity, because if we keep the current setup, people with third-party services will definitely win: once you offer such a service, you easily might have more information than other nodes on the network, and then you can offer better paths.
Why do they have more information? Well, I think it's similar to the Google setting: everybody can set out and write a web crawler and have a view of the web graph, but Google has all the log files, and log files are a crucial piece of information that helps to make better search suggestions. I conclude my talk by saying that path finding has many challenges, so think about them, join us, and take this problem seriously.