Home < Boltathon < Major Limitations of Lightning

Major Limitations of Lightning

Speakers: Alex Bosworth

Date: April 6, 2019

Transcript By: Michael Folkson

Tags: Lightning

Media: https://www.youtube.com/watch?v=ctx-oAIhhSY

Slides: https://drive.google.com/open?id=1UIHfYdnAWGMvsLhljTgcneCxQAHcWjxg

https://twitter.com/kanzure/status/1116824776952229890

Intro

So I had a Twitter poll to decide what to talk about. People seem pretty interested in the limitations. Where is the line between what Lightning can do and what Lightning can’t do? That’s what I want to talk about: to make it clearer what can be done and what can’t be done.

Every Transaction A Lightning Transaction

When I first started really focusing on Lightning I thought: what is the goal of Lightning, what is the goal of the Lightning Network? Something that really clicked for me is: what if every transaction in the entire world was a Lightning transaction? Of course it doesn’t really make sense because there are so many different transactions in the world, but I like it as a point off in the distance, a goal we could shoot for. How can we make this network able to support, and be compelling for, every transaction there ever is? The thing to really think about with Lightning is the definition of Lightning. It is based on these channels, and the channels have balances that are fixed. I have some money on my side and you have some money on your side. Fundamentally all that we’re really doing when we’re using the Lightning Network is saying “I’m going to move some on my side over to your side” or “You’re going to move some from your side over to my side”. We use HTLCs to multiply this effect so that we can say “I’ll move some over to you and you move some to somebody else.” That’s still the fundamental boundary of the Lightning Network: we have these fixed capacity channels. The problem there is that if you just want to randomly send somebody a huge amount you’ve never sent to them before, these channels are not really well set up for that, because they have this fixed size and this fixed set of relationships. You can change them around but that’s not really what the design of the channel is for. That’s probably the major problem with using Lightning. There are also some other problems that are interesting to touch on. One is that Lightning is a totally new ecosystem, a totally new platform, and we need people to learn how to use it. We need people to develop the protocol out, we need people to develop the software, we need people to build these routing nodes and commit their capital. This is an ecosystem, this is building a marketplace. Without that, without these people, the Lightning Network is nothing really. That’s also another major limit of Lightning. Finally, the big thing that really makes Lightning work or not work is the Layer 1. We need to be able to have this assurance that these offchain transactions could eventually go onchain. We can do tonnes of stuff offchain but we always need to make sure we have this grounding in Layer 1, and we need a decentralized Layer 1. At the moment the thing that seems the most promising is that Bitcoin can be that Layer 1 that we need to power this second layer.

High Value?

Even from a currency perspective we need to make sure that Bitcoin is a currency that people want to spend. If I was onboarding people to Lightning, one of the biggest obstacles to getting people to spend on the Lightning Network is: why should I even use this weird currency? What’s in it for me, why is it cool? That’s a limit to the Lightning Network’s growth that is a bit out of our control as Lightning protocol devs but is super important to the growth of the network. I just said that if you want to do a very rare, large value, random type of send you’re going to be constrained. Lightning is not going to be well suited for that. A lot of people ask the question “What is a high value send?”. Is a high value $5, $50, $500? That’s the wrong way to ask it. This is a marketplace and an evolving ecosystem. You can think about it like the internet. On the internet you had this interior network of universities all linked up to each other and big companies linked up to each other, and they could send to each other really quickly because they had very good bandwidth to each other. When users would log on to the internet and they wanted to watch a video or something, they would have the last mile problem. They would be thinking that this server can serve out high quality video, but then they would use their dial-up modem and they wouldn’t be able to watch that high-def video. Or it could even be the opposite side of the problem: somebody wants to upload a video, and I have great bandwidth but they don’t have great bandwidth. That’s what I mean by high value. If you have a network where everybody is linked up to each other with $1000 and I’m a rich guy who wants to put in a few million dollars, I’m still going to be limited by those links, I’m still going to be limited by the overall ecosystem’s money. If I try to send to some guy who only has $1000 dedicated to him, even though I have plenty of money I’m not going to be able to reach that guy.

The Blockchain vs Lightning

That’s how the blockchain is oriented relative to Lightning. Why would you use the blockchain? The blockchain is good if you don’t have a tonne of repetition; channels really benefit from tonnes of repetition. The blockchain is good if you are using high values, because there’s less chance of a liquidity problem along the path somewhere. The blockchain is pretty simple. If you want to keep your funds in the safest possible place, I would say use a super cold wallet. Another thing I’m going to touch on later is that Lightning really puts a lot more responsibility on you, the user. The blockchain actually takes some of the responsibility off you and shifts it over to the social network of the blockchain.

Custodial vs Lightning

Another competitor to Lightning is custodial settlement networks like Liquid or even exchanges. Even systems that still use Lightning, like tipping.me or one I made called htlc.me. Custodial works really well at small values because Lightning has problems settling very tiny values to the blockchain; it’s just not worth it to go to the blockchain. The blockchain has a certain cost, and if the amount you are trading is below that cost then there’s a problem. There are even problems at the very large size. You could send maybe $10 million on the blockchain, but if you’re sending $1 billion on the blockchain you start to worry about the proof of work. At that point you may want to think about custodial services where you have contracts in place or some other kinds of systems. Otherwise you’re going to have to wait a long time for the proof of work to become valid. Lightning is not great in those types of situations where you want to do boundless things. In that case, if you can find a trusted custodian, you don’t mind taking that tradeoff and you don’t want to take responsibility for your own funds yourself, then custodial starts to look more attractive than Lightning as the solution.

The Block Size Limit

I kind of covered at a high level what the limits of Lightning as a concept are. If you really want to zoom in on what the scalability limits are, commonly people talk about the block size limit. Everybody is obsessed with this number: a certain number of transactions can fit into a block. I thought it would be interesting to look at the transactions and see visually how to fit them into the block. I made a little chart here of a channel, which is just a regular transaction, so you can look at what bytes are required for a channel. This is the absolute minimal channel, and the bytes are mapped into virtual bytes, so you can see that the signature is collapsed because it gets a witness discount. If you actually counted up all these boxes it would be around 112 virtual bytes, just as an estimate. That is what is required for a channel at the absolute minimum and that’s our starting point. That’s like our atomic unit of how many channels we can fit in the blockchain. If we can get channels down to around 100 vbytes then there is plenty of space, because we have a million vbytes every block. But currently it doesn’t actually work that way. There are more complicated things that we would need to do to get to this ideal atomic unit.
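To make the arithmetic above concrete, here is a quick back-of-the-envelope calculation. The 112 vbyte figure and the million vbytes per block are the rough estimates from the talk, not exact consensus numbers.

```python
# Rough numbers from the talk (estimates, not exact consensus figures):
# a minimal channel transaction is ~112 vbytes and one block holds roughly
# one million vbytes.
MIN_CHANNEL_VBYTES = 112      # the "atomic unit": minimal channel transaction
BLOCK_VBYTES = 1_000_000      # approximate vbyte budget of one block

channels_per_block = BLOCK_VBYTES // MIN_CHANNEL_VBYTES
print(channels_per_block)     # 8928, so roughly 9000 minimal channels per block
```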

VByte Saving Tricks

What are some ways that we can reduce the size of channels to become the smallest possible unit? One way: if you look at that transaction, it only has one output. That saves a lot of space. But when you close a channel you maybe have multiple outputs, because at the very least you’re splitting up the money between the two people in the channel. One thing you can do is say “Before we close this channel, to reduce the onchain size, we can rebalance the channel”. So we’re going to push out one of the outputs somewhere else and we’re going to be able to settle onchain with only one output. That will save us some chain space, some fees. Another way that we can reduce the onchain space to get to that ideal is we can say that when we’re opening a channel, at the same time we are closing a channel. Or the other way around. Because that output can be the input to another channel. Then we’re getting two in one. That’s a trick. Another really powerful trick, which we don’t support right now but can make strides towards in the short term, is to collapse multiple signatures into one signature; I already put that into my illustration. That’s so powerful because there are no significant limitations to that tool. We could merge tonnes of signatures into one signature using signature aggregation. It would be nice if we had Schnorr and MuSig to do that, but we can even do that with ECDSA. Another thing to think about is that right now a lot of times people have this estimate of chain fee space and they’re using the uncooperative close scenario to say “What happens if one guy goes offline and we have to broadcast this onto the chain?”. In the example that I showed, it didn’t have that complicated script that you need when you uncooperatively close. We can think of uncooperative closing as a failure condition. We can try to economically incentivize people through timelocks and higher fees so that they always really try to cooperatively close with other people. If they can’t do that then they’ll pay higher fees. Another thing that we already do to save vbytes is if there is an output that is too small on the transaction… let’s say it is a dusty output. It is sending like half a penny, which is possible on Lightning. But you can run into a problem when you try to settle it on the chain, because the chain will resist, economically and also at a peer-to-peer level, you spamming the chain with half-penny outputs. Instead the software will move those outputs into the miner fee bucket, where it will say “We’re just going to bleed that over into fees, so that if we actually have to close uncooperatively we’re just going to forget about that and just say the transaction confirms a bit faster.”
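The dusty-output trick at the end can be sketched in a few lines. The threshold and amounts below are made-up illustrative values, not lnd’s actual parameters; this is just the shape of the idea.

```python
# Sketch of the dusty-output trick: outputs below a dust threshold are dropped
# from the settlement transaction and their value bleeds into the miner fee.
# The threshold and amounts here are illustrative, not real implementation values.
DUST_LIMIT_SATS = 546

def settle_outputs(outputs_sats, base_fee_sats):
    # Keep outputs worth settling on chain; fold dusty ones into the fee.
    kept = [o for o in outputs_sats if o >= DUST_LIMIT_SATS]
    fee = base_fee_sats + sum(o for o in outputs_sats if o < DUST_LIMIT_SATS)
    return kept, fee

kept, fee = settle_outputs([100_000, 300, 50_000, 12], base_fee_sats=1_000)
print(kept)  # [100000, 50000]
print(fee)   # 1312: the two tiny outputs became extra miner fee
```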

More Members Fewer VBytes

Another tool we can use towards that ideal is to make channels that don’t just have two people but actually have many, many people. There are drawbacks to that and it’s complicated, and that’s why that’s a limit today. The limit today is you can have two people on a channel. Having more members in a channel is actually a super good way to extend the utilization of those vbytes, because you can add tonnes of members to the channel. The bounding condition for adding members to a channel is how many outputs you can have. You want to be able to say we can take this channel transaction and settle it to the chain if we wanted to. We’re not going to do that, but we could if we wanted to. The limit is that everybody is going to need their own output. Let’s say at the end of the day we close this channel: I want my money. Each output is going to take around 30 vbytes. That’s something that you’re not actually paying until you hit a close scenario where you actually have a split of the difference and you couldn’t cooperatively fold it up into one output. One important thing to remember when you’re talking about these multiple member channels is that only one person actually needs to fund the channel. You only actually need one input to go into that transaction. I can fund a channel and I can say “Let’s have ten people in this channel, but there’s only one input being fed into this channel, which is my input, and then we’re going to push over more of my input funds into other people’s outputs. Those outputs won’t be represented on the chain until there’s an uncooperative close where we couldn’t rebalance.” This number is actually really high. We could in theory get to thousands of members in a channel because, like I said before with signature aggregation, you can fold hundreds of signatures into one signature. The only thing that is necessary when you set up this input is that you are setting up people’s public keys, and people’s public keys are all hashed together so there’s no extra space needed. That’s a constant size. This could mean that we could actually have thousands and thousands of channels per block. We could have thousands of members potentially; that’s probably not realistic, but we could certainly have tens or hundreds of members in a channel. If you do those calculations that’s getting us to billions of people in even a week. There’s a lot of room to explore the amount of efficiency that we can pull out of a channel by adding more people to the channels. That doesn’t really require changing the nature of the blockchain.
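The “billions of people in even a week” claim can be checked with rough arithmetic. All of these numbers are illustrative assumptions: the ~112 vbyte minimal channel from earlier, roughly 144 blocks a day, and an assumed 100 members per channel.

```python
# Rough arithmetic behind "billions of people in even a week". All numbers
# are illustrative assumptions from the talk, not protocol constants.
channels_per_block = 1_000_000 // 112   # ~112-vbyte minimal channels per block
blocks_per_week = 144 * 7               # ~10-minute blocks, 144 per day
members_per_channel = 100               # assumed multi-member channel size

people_per_week = channels_per_block * blocks_per_week * members_per_channel
print(people_per_week)  # 899942400, approaching a billion onboarded per week
```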

Responsibilities

Going back to more of the design of the system, I want to talk about what the scalability design of the Lightning Network is and how we achieve scale. We have these problems in the system, these responsibilities. People in the system have to answer these problems. Whose money is whose? How is it controlled? Is it in a channel? How is it set up? Who’s validating these things? Who’s making sure all the rules are being followed? Also, how are we making sure all the people are getting the right data? These are the high level problems for scalability in the blockchain and in the Lightning Network.

Shard Responsibility

The Lightning Network is going down this design path where we’re going to share in that responsibility. We’re all going to individually control as much as we can ourselves. That way we can add more people and it’s not creating much more burden for the overall system. One principle that we’re going to use in Lightning is we’re going to say “You’re going to keep the database of your own funds. Nobody else is going to be responsible for knowing how much money you’ve got. You’re going to have to keep track of all the metadata around your transactions: when you sent it, who you sent it to. That’s going to be your responsibility instead of a shared responsibility.” When you go to prove it to everybody else, when you do global settlement work, you’re only going to present the absolute minimal proof that what you’re doing is acceptable to everybody else. These things are going to make it so that with every new user we add to the system, we’re not increasing the burden on the overall system, which would not be a scalable way to grow. We’re also going to make it so that you are only validating, only checking the signatures and checking on things of people in the network that you are personally interacting with. Just a small fixed size set of people that you are going to be talking to. It’s not like every new person who’s coming into the Lightning Network you have to be checking out. That also goes for sharing the data about the network. We’re going to try to make it so that the only data you need to care about is data that is relevant to you directly or maybe part of a trade. A fixed set of people. That way we can keep adding people to the Lightning Network without creating a situation where we’re increasing the marginal cost of joining the network.

Responsibility Proxies

This creates a problem for the onboarding of people to the network. On the blockchain you actually have a lot of the work taken care of for you. You’re not responsible for the database of your funds; the blockchain keeps that. You can take your seed, download the blockchain and scan all the blocks to see where your funds are. We’re moving more in that direction. There are a lot of proposals where you would keep more responsibility so that we could scale Layer 1, but that’s not currently how it works. Currently you can forget everything apart from your seed and that’s fine. On the blockchain, the reason why it is difficult to scale is that the more people we add to the system, the more people you have to validate. That means if we add a thousand people, everybody on the network has to check out these thousand people; it doesn’t scale very well. In terms of sharing data with people, the more people who join, the more data there is, so it creates a mountain of costs that keeps growing.

It’s nice though that other people take care of stuff for you on the blockchain, and it is not nice in Lightning where you have to be responsible for your own stuff. What happens if you lose your database? Because we sacrificed the communal responsibility for the database, now that’s your job. You’re out of luck if you lose that database. One thing we can do to address that problem is we can say “Let’s have proxy responsibility.” It’s not just going to be all on you; there’s going to be some shared responsibility, but you’re in control of how much you want to share it out. We have this thing called the data loss protection protocol, which lnd now requires. Your peers are going to have to keep a minimal set of data for you, so that if you want to close a channel and you’ve lost some data, they have the appropriate data to help you close it. We just added in 0.6 this ability to create a static channel backup. It’s a simple set of data that you can copy around: you can copy it to a peer-to-peer service or you could just copy it to Google Drive, wherever you want. It is an encrypted file and it is pretty small. We’re working on the watchtower project, which is that if you don’t want to be monitoring all the time, or have full responsibility for monitoring your channels, you can outsource that to a watchtower. In the blockchain equivalent, if somebody tried to do a double spend or something invalid, the overall social network would crush that. They’d be like “You can’t do that, that doesn’t make sense.” If you relate that back to Lightning you could just say “Ok, I’m going to choose these people, they’re going to be my watchers. I could do it all myself if I wanted to but that would require me to be online.” Also, since a blockchain is a shared ledger, it is sharing all this data, it is keeping it around for you. On Lightning you have this problem where you can’t really receive if you’re not around to receive it.
We can proxy responsibility for that too where we can say “Another node can act as a mailbox for you.” He can delay the delivery of that final HTLC and then he can maybe send you an email or you could come back online once a day to check if anything is around in your mailbox. Then you can take that final HTLC step. The key thing to think about in all of these different ways to proxy is yes you do rely on somebody else but you’re also relying on them in a way where they can’t take just your money. We can create moderated solutions for delegating responsibility.

Routing Limits

Another super popular limitation that’s talked about for Lightning is: how are we going to do routing? It is difficult. Routing is a difficult problem on the Lightning Network. The theory of the Lightning Network is that you have this six degrees of separation. You have all these people on the network, maybe billions of people on the network. It seems like a very difficult task. How do I know how to get from one in a billion to another in a billion? Once you start thinking that people have relationships, where I know somebody who knows somebody who knows somebody, it is actually not that difficult to theoretically reach them. The limitation on making that actually work though is that among all of these billions of people you don’t actually know who is online at the time, so how do you know who can even help you get to that person? It is not enough to know that they have some connection, you also need to know that they can respond to routing requests. You also don’t know how much money people have in their channels, so you need to figure out some path which is balanced, where the balances on the channels are set up in the right way that you can actually send to whoever you want to send to. Even the idea that a mobile phone would know about billions of people in the world is a challenging idea, because that’s a lot of data to keep up to date. It is probably infeasible that we would have complete knowledge of the graph. We’re going to need something where you only know some subset of the graph.

Dead Zones

There is another problem with routing. This is a major problem. What happens if one of these nodes doesn’t forward along your HTLC? They just sit on it, or they cancel back to the chain. All of the smart contracts that power the Lightning Network are based on these block times. The block times can take a long time; theoretically it can be days. lnd will now allow you to set what you want as your acceptable time, but it also needs to be acceptable to the routing nodes who are forwarding for you. Another tricky thing is that because the Lightning Network is an onion routed network, you don’t know exactly who held it up. You sent it off on this route and somewhere along the line somebody is holding it up. Somebody is delaying things, but you don’t know exactly for how long it will be sitting around or who is doing it. You’re not going to lose money, but this is time and this is locked up capital, which is certainly annoying. You just want to make your payment. That’s also a routing problem.
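As a rough sense of scale for how those block times add up, here is a small illustrative calculation. The per-hop CLTV deltas are assumed values, not spec constants; the point is that the worst-case lock-up is roughly the sum of the deltas along the route.

```python
# Illustrative only: how per-hop CLTV deltas can add up to days of lock-up.
# The deltas below are assumed values, not specification constants.
BLOCKS_PER_DAY = 144                  # ~10-minute blocks
cltv_deltas = [40, 40, 144, 40, 40]   # assumed deltas on a five-hop route

worst_case_blocks = sum(cltv_deltas)
print(worst_case_blocks)                    # 304 blocks
print(worst_case_blocks / BLOCKS_PER_DAY)   # ~2.1 days of locked-up capital
```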

A Reliable Routing Network

One solution that is in the works to solve these problems is to separate the nodes into different responsibility roles. The thing about the network is that you can think of everybody as potentially equal, in the sense that they could serve all the same roles if they wanted to. But some nodes are going to want to serve different roles than other nodes. An easy way to think about that is that a mobile node probably doesn’t want to keep his phone online connected to the internet, have millions of peers, and be doing all this work. He just wants to be a spending node. He could be a routing node if he wanted to, but that’s not what he’s looking for. We’ll see specialization in terms of routing nodes. We’ll have a certain set of nodes that do want to be forwarding payments and earning those fees. We’ll have a stratification of the network into these different types of nodes. We want to set up this network so that we can identify who the people are that want to be routing, and identify that they behave in a certain way. A routing node is somebody we’re choosing: I’m going to choose to send my traffic through you because it looks like you’re online a lot, I don’t have a lot of failures through you, things happen quickly. If we can make this kind of network where we have people at the edges, these mobile wallets or people who are not particularly interested in routing or maybe not good at it, and in the center we have these people who want to run nodes that are great at routing and always online and doing an amazing job, then we have a much better picture of how to map from A to B on the network. It kind of models the internet, in how you could run a web server on your laptop but you don’t have to. You can run it on a server somewhere. It is up to you how you set it up, but you may want to choose a server if you want to be routing on the internet. This also deals with the problem that we have, which is this capital liquidity problem.
If you think of routing nodes as the broadband equivalent, with very high capital links to all of the other routing nodes, you no longer have to think about the liquidity problem to get to your destination. You can rely on routing nodes being very well connected to each other. Then your only problem is: can you get to a routing node? There’s an equal problem on the other side, which is: can the receiver get from the routing node to where you want to send? That makes the problem a lot simpler, because instead of having to worry about six hops all having proper liquidity you only have to worry about the first hop, which is you, and then the final hop, which is the receiver. In order to get to that place we need to build a market. We need to make it so that people are rewarding these routing nodes for doing their job, and that routing nodes are easy and cheap to run so they can charge limited fees. That’s kind of a market process and also a software process.

Common Reference Points

Once we have this network of great routing nodes, it also solves the problem of graph computation. I’m one in a billion people, so I need to get from my node through the network of a billion people to somebody else. How am I going to do that? I may not even have the whole graph. This was thought of ahead of time. In the specification for the payment request you can include these reference point nodes: how to get from a reference point node to my own node. You can think of all these great routing nodes as reference points. When I send a payment I can see the reference point nodes, because there’s a limited number of them and they are all pretty well linked together. I don’t know exactly how to get to you, but in your payment request, when you sent it to me, you included the final part of the path that gets to you. That makes routing way easier. We can even just outsource routing, period. Even the computation itself; some people were talking recently on the mailing list about how to do that in a private way. If you didn’t want to do it in a private way, you can just pay somebody who you are able to pay to, and they can have the harder job of synchronizing the billion people on the network. They can do all that work of figuring out the cheapest route for getting there. You can outsource that without trusting them because you can… I think also there’s a tradeoff with outsourcing, there’s a tradeoff with the whole network, which is that you probably won’t get the perfect route. There probably will be something that is a little bit lower fee, a little more private. As for graph algorithms currently, lnd will just focus pretty much on getting you the lowest fee, and then it will try the next lowest fee and the next lowest fee. Instead, in the future, we recognize we have that limit, that there’s no way to get the perfect route, so we’re just going to get an acceptable route. That’ll be good enough.
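The reference point idea corresponds to route hints in the BOLT 11 payment request: the invoice carries the last leg of the path, from a well-known routing node to the receiver. A sketch of the shape of such a hint, with illustrative field names and made-up values (the real wire encoding differs):

```python
# Sketch of a BOLT 11 route hint: the invoice carries the final hop, from a
# well-known routing node to the receiver. Field names and values here are
# illustrative, not the actual wire format.
from dataclasses import dataclass

@dataclass
class RouteHint:
    node_pubkey: str        # the reference-point routing node
    short_channel_id: str   # the channel leading toward the receiver
    fee_base_msat: int
    fee_ppm: int
    cltv_delta: int

# The receiver puts hints in the invoice; the sender only needs any path to
# the hint node, then follows the hint for the final hop.
hints = [RouteHint("02aa...", "600000x1x0", 1000, 1, 40)]
print(hints[0].node_pubkey)
```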

Capital Limits

Another super limit on the Lightning Network that I talked about before is that we have this capital limitation. You have these small channels, these small outputs. You can send around a hundred satoshis, but that can’t really settle to the chain. If you have a bit more money, like $5, whatever is related to the current fees and the current usage of the chain, you can settle that on the chain. At large values you start to run into this liquidity problem. This is the fundamental structural limitation of Lightning.

Multiple Payment Paths

One solution that doesn’t totally solve it but really helps a lot is this idea that you can take multiple paths to your goal. There are lots of ways we can implement that. One way we can do right away, which has been tested on the real network and isn’t too difficult, is this idea of Base AMP. What that is: I negotiate “I want to get $100 and I’m going to wait for HTLCs that equal $100.” You’re going to send HTLCs through a bunch of different paths that add up to $100. I’m going to wait before accepting them all. I’m going to have them arrive at my node but I’m not going to reveal the pre-image. Once I get to $100 I’m going to reveal the pre-image and take all of them. That’s a secure way of doing AMP that is very low tech, and it is something that you can try out right now, though the clients don’t have great support for it. Another way to deal with sending more money, avoiding this maximum limitation of a channel, is that if you have multiple channels in the same direction you can do that same trick where you wait for multiple HTLCs, but you just do it between two different peers. That’s something that the sender and the receiver don’t even need to know about; it can just happen on the path. In the future we can go to this idea where we take the pre-image and shard it into a bunch of different secrets, so that the receiver cannot take a partial payment, whereas in Base AMP they are only economically incentivized to wait around. Using Schnorr, using more advanced constructions, we can say that even if they wanted to take it they wouldn’t be able to.
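The Base AMP flow described above can be sketched as a small state machine on the receiver’s side; class and method names here are hypothetical, purely for illustration. The receiver holds partial HTLCs and only reveals the pre-image once the held total reaches the invoice amount:

```python
# Minimal sketch of Base AMP from the receiver's side: hold incoming partial
# HTLCs and reveal the pre-image only once they sum to the invoice amount.
# Class and method names are hypothetical, for illustration only.
import hashlib
import os

class BaseAmpReceiver:
    def __init__(self, invoice_msat):
        self.invoice_msat = invoice_msat
        self.preimage = os.urandom(32)
        self.payment_hash = hashlib.sha256(self.preimage).hexdigest()
        self.held_msat = 0

    def on_htlc(self, amount_msat):
        # Hold the partial HTLC; settling any one early would forfeit the rest.
        self.held_msat += amount_msat
        if self.held_msat >= self.invoice_msat:
            return self.preimage   # revealing it settles every held HTLC
        return None                # keep holding

r = BaseAmpReceiver(invoice_msat=100_000)
print(r.on_htlc(40_000))              # None, still waiting
print(r.on_htlc(35_000))              # None, still waiting
print(r.on_htlc(25_000) is not None)  # True, the full amount has arrived
```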

Multiple System Support

Another thing that we can do in the Lightning Network is just accept that it has these limitations, and come up with all these tricks for how to deal with situations that go to the edge cases. With the small size we can say let’s just bump these tiny outputs into fees. We can do crazy stuff like probabilistic payments, so we can say “let’s just bump this into a 1% chance that you get $100” or something like that. We can also use the chain, or we can use other chains. We could use the Liquid sidechain. That could come in handy if we’re closing the channel and we need to move the flow around so we can absolutely minimize the size of the channel on the chain. We can move those other outputs using submarine swaps, using these other chains, using all sorts of different methods. We can also fall back to the chain. We could say “Let’s integrate chain sends and Lightning sends into the same wallets, so that if the amount being sent is too much, we just accept defeat, go onto the chain, and these will work together.” Something also that people are experimenting with now is this idea of turbo modes, where in certain situations it can make sense to do zero confirmation channels. That means you can get the best of both worlds: you can still stay on the Lightning system but you can use the chain’s ability to move capital around in a random, high value way. Another way that I think will be cool, and I don’t know if anybody is working on this: at the small scale we could use blind signatures to deal with these little commitments that we have between nodes. We can use e-cash so that you can privately trade around these tiny commitments that don’t really make sense on the chain. Finally, another thing that we can do to lift the capital limits is to grow the sophistication of this capital market.
That’s another thing that people are working on like Bitrefill is working on this turbo mode but they are also working on this Thor project that would allow you to buy inbound liquidity and that’s something that I’ve been working on at Lightning Labs which is the Lightning Loop project. The more tools that we have, the more sophisticated buyers and sellers that we have in this market, the more we can grow the ability for you to escape the limitations of that capital.
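The probabilistic payment trick mentioned above (“a 1% chance that you get $100”) works because the expected value matches the tiny amount. A hypothetical sketch, which ignores the hard part of making the coin flip fair and verifiable in practice:

```python
# Sketch of probabilistic payments: amounts too small to settle are replaced
# by a small chance of a settleable amount with the same expected value.
# This ignores how the coin flip would be made fair and verifiable.
import random

def probabilistic_payment(amount_sats, min_settleable_sats, rng=random.random):
    if amount_sats >= min_settleable_sats:
        return amount_sats                 # large enough to pay directly
    p = amount_sats / min_settleable_sats  # e.g. 1 sat / 100 sats = 1% chance
    return min_settleable_sats if rng() < p else 0

# Over many payments the average converges to the tiny amount:
mean = sum(probabilistic_payment(1, 100) for _ in range(100_000)) / 100_000
print(mean)  # close to 1 sat on average
```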

1.0 Protocol Limits

I put a lot of stuff into this presentation. One thing that I want to talk about is that the 1.0 protocol actually has these annoying limitations. I just want to highlight them. It is also important to think about Lightning as a protocol that is not fixed in stone for all time. It is something that we can adapt and grow. That’s a great thing. The number one problem in 1.0 that I think needs to be fixed is that when you have a force close of a channel, the force close output that goes back to you uses a randomized key. If you just have your seed you won’t be able to go to the chain, scan all the blocks and find those funds, because it is using a randomized key and you don’t know that random number. That’s a problem in 1.0. It increases the amount that you have to rely on keeping good backups for yourself. Every time, you have to keep that random number around. There are other problems. When your transaction closes to the blockchain it can be locked with this CSV output, which means that it can’t be respent until a certain amount of time has passed: a certain number of blocks have to pass before it can be respent. That creates a problem where you can’t bump the fees with child-pays-for-parent, because you can’t spend the parent until you wait for a bunch of blocks. Child-pays-for-parent requires you to get the parent and the child into the same block to bundle up that benefit to the miner. Another problem that we’ve had, and knew about, is that the channel capacity amounts are fixed and we also have a limit on the channel capacity which is pretty low, like 0.16. That creates lots of problems in terms of wanting to build this out to exchanges. It is difficult to do that. Another thing that is housekeeping is that if you have funding transactions that don’t have a good fee during a fee spike, it is difficult to negotiate with your peer on how to bump up that fee.
Also, Lightning has a limitation right now because the protocol doesn’t support it: you can’t push a payment to somebody, because they don’t know how to receive it.

Upgrade to 1.1

In 1.1 we’ll add all sorts of new features to solve all these problems, hopefully, if it all gets done. We’re going to make remote addresses static so that if there is a force close you’ll be able to scan the chain. You won’t have to keep that random number around anymore. On the CSV constrained transactions we’re going to add these other hooks. I’m out of time. I’m going to stop and do quick questions.
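The difference between the 1.0 randomized close keys and the 1.1 static keys can be sketched roughly like this (hypothetical hash-based derivations for illustration only, not the actual BOLT key scheme):

```python
import hashlib

# Toy sketch: why 1.0 force-close outputs are hard to recover from
# the seed alone, and what making the remote key static fixes.
# These derivations are invented for illustration.


def static_close_key(seed: bytes) -> bytes:
    """1.1-style: a fixed derivation from the seed, so a wallet
    restored from seed alone can rescan the chain and find it."""
    return hashlib.sha256(seed + b"/static-remote-key").digest()


def randomized_close_key(seed: bytes, per_channel_nonce: bytes) -> bytes:
    """1.0-style: the key depends on a per-channel random number
    that must be backed up separately; the seed is not enough."""
    return hashlib.sha256(seed + per_channel_nonce).digest()
```

A wallet restored from seed can always recompute `static_close_key(seed)`, but without the per-channel nonce it has no way to regenerate the randomized key, which is why that extra backup data is required in 1.0.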

Q & A

Q - Do AMPs have the same problem as the current system where one node may not forward the HTLC and that will lock up liquidity? How will that be dealt with?

A - For sure. Number one, I think you have to think about the routing node network. You’re not sending through totally random nodes; you’re sending through a network of nodes who are specialized, and the fact that they are specialized gives them this big incentive. You still have this problem where you don’t know exactly which node does something weird. There are other ways to deal with that. Once we have these push payments, or once we have more interactivity with the receiver, we could actually pre-figure out the route by trying something out first and seeing if that works. We’re not using real money in that situation; we’re using a fake hash which they don’t know. If we can see that it went through, we can send the real payment through again the same way. We can make this more and more of an edge case scenario.
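The fake-hash probe described above can be sketched like this (the `send_htlc` callback and the string failure codes are hypothetical stand-ins for the real onion error messages):

```python
import hashlib
import os

# Toy sketch of pre-flight route probing with a fake payment hash.
# The transport hook and failure codes are invented for illustration.


def probe_route(route, send_htlc) -> bool:
    """Probe a route using a payment hash whose preimage nobody knows.
    The destination cannot settle it, so no real money moves; the
    failure code tells us whether every hop could forward the HTLC."""
    fake_preimage = os.urandom(32)            # thrown away, never revealed
    fake_hash = hashlib.sha256(fake_preimage).digest()
    result = send_htlc(route, fake_hash)      # hypothetical transport hook
    # If the HTLC reached the destination, it fails there with an
    # "unknown payment hash" error -- meaning every hop could route it.
    return result == "unknown_payment_hash"


# Usage with stub transports:
good = lambda route, h: "unknown_payment_hash"       # reached the end
bad = lambda route, h: "temporary_channel_failure"   # a hop rejected it
print(probe_route(["alice", "bob", "carol"], good))  # True
print(probe_route(["alice", "bob", "carol"], bad))   # False
```

Once a probe succeeds, the real payment can be sent along the same route with high confidence, turning stuck-HTLC liquidity lockups into an edge case.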

Q - How to solve the issues of network wide channel balancing?

A - I do think that we need more sophistication in the idea of capital as a good. Liquidity is a product. I see people mention in chat “Can you throw some inbound liquidity my way?”. I think the way that that will mature is “You want inbound liquidity? Go to the inbound liquidity market and look for whatever kind of inbound liquidity you want.” Once we’ve grown the sophistication there we can say there’s easier ways to handle your specific liquidity problem.

Q - Is it possible to change the minimum HTLC that can be forwarded on a channel after it has been opened?

A - Yeah, I think that’s adjustable. It is just part of the channel policy. Of course it is technically possible. You just see an HTLC come in and you’re like “That’s too small. No, I reject it.”
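That policy check amounts to a single comparison against whatever minimum the node currently advertises (a minimal sketch; the function name is made up, and in the real protocol the minimum is the `htlc_minimum_msat` field a node advertises and can re-advertise):

```python
# Toy sketch of a forwarding node's minimum-HTLC policy check.


def accept_htlc(amount_msat: int, min_htlc_msat: int) -> bool:
    """Reject any incoming HTLC below the node's advertised minimum.
    The minimum is local policy, not part of the funding transaction,
    so it can be changed after the channel is open."""
    return amount_msat >= min_htlc_msat


print(accept_htlc(1_000, 1_000))  # True: at the minimum
print(accept_htlc(999, 1_000))    # False: too small, rejected
```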

Q - Are we going to increase the capacity of a channel in 1.1?

A - Yeah it will effectively be unbounded. The channel capacity will be unlimited. But we will still have that problem which is number one, we probably will want to have some default safety limits just to say let’s negotiate something reasonable. Number two, even if one node in the path has huge capacity it doesn’t mean that all of the nodes on the path will have that capacity so it is probably better if we grow it slowly and together as an ecosystem.

Q - I want to move my lnd node to .onion network to hide my IP but I also need to have access to lnd REST API for my e-commerce store. How would you suggest that I solve this problem? Use VPN instead of Tor?

A - No. Tor is just applicable to the Lightning node’s own networking. I do this with my own node on Yalls.org, which is behind Tor. I suggest people set up their nodes behind Tor, it is pretty cool. It still leaves the normal network interfaces open, so you can still reach the REST API over your regular IP address and use it that way.

Q - What first end use cases or services will 1.1 open up in your opinion?

A - I don’t know exactly what will be in 1.1, it is still being worked out. The push payments is the thing that I’m most excited about because it allows for back and forth interactivity totally on the network. You could have APIs that only exist on the Lightning Network, that are nowhere else. You don’t need to have an HTTP REST API, you have the Lightning API. That seems like the coolest thing to me.

Q - What resources do you recommend for a dev who is trying to better understand the Lightning Network?

A - I definitely like the BOLT RFCs, that’s where I started. I just went over all the different RFCs. They are changing around, but you can look at the chain transactions and stuff. Lightning Labs has an API development site, and I have my own library that I built, ln-service, that tries to make things easy to get up and running. That’s what I used when I built Y’alls and HTLC.me.