
Lightning Network Panel (2022-03-01)

Transcript By: Michael Folkson

Tags: Lightning

Category: Meetup

Name: Christian Decker, Bastien Teinturier, Oliver Gugger, Michael Folkson

Topic: The Lightning Network in 2022

Location: London Bitcoin Devs

Date: March 1st 2022

Video: https://www.youtube.com/watch?v=fqhioVxqmig

Introductions

Ali Taylor-Cipolla (ATC): Ali coming in from Coinbase Recruiting. I’d like to introduce you to my friend Tiago who is on the Europe side.

Tiago Schmidt (TS): Hello everyone. Quick introduction: I sit in the recruiting team here in London and lead the expansion for engineering hiring across the UK and Ireland. Coinbase is in hyper-growth at the moment and we will be looking to hire loads of people. We have over 200 engineers across all levels here in the UK and Ireland, and further down the line in more European countries as we continue to expand. We are going to be at the conference on Thursday and Friday so if you have any questions we’ll be there to answer them.

Q - Are you hiring Lightning engineers?

ATC: Sure are. I have one more friend.

Trent Fuenmayor (TF): I’m Trent from Coinbase Giving. We have developer grants available for people in the community doing Bitcoin Core development, blockchain agnostic. We have four themes; you can read more about them by grabbing one of these flyers over here. If you are looking for money to do some Core development please apply to us, if you are looking for a full time gig please apply to us, and if you just want to enjoy the drinks and food, enjoy them.

Greenlight demo (Christian Decker)

Blog post: https://blog.blockstream.com/en-greenlight-by-blockstream-lightning-made-easy/

Michael Folkson (MF): So we’ll start with the demo and then we’ll crack on with the panel. Christian is going to demo Greenlight, as far as I know as an exclusive.

Christian Decker (CD): This is the first time anybody has seen it. It is kind of ad hoc so I can’t guarantee it is working. It is also not going to look very nice.

Audience: Classic Blockstream

CD: I suck at interfaces. I was hoping to skip the demo completely but Michael is forcing me to do it. Let’s see how this goes. For those who have never heard about Greenlight, it is a service that we hope to be offering very soon to end users who might not yet know what operating a Lightning node looks like but who would like to dip their toes into it and get a feel for what Lightning can do for them before investing the time to learn and upgrade their own experience. This is not intended for routing nodes, this is not intended for you if you know how to run a Lightning node. This is for you if you want to see what it can do for you but are not prepared to make the investment just yet. The way it works is we run the infrastructure on your behalf but we don’t have access to your keys, the keys that are necessary to sign off on anything that would move funds. What we call it is a hosted non-custodial Lightning as a service, service. Did I mention this is ad hoc? I will walk you through quickly how you can get a node up and running and interact with it, do a payment with it. It is going to be from a command line which I know all of you will appreciate. Hopefully we will have a UI for this.

As you can see this is an empty directory. I want to register a node for mainnet. What this does is goes out, talks to our service, allocates a node for you and it will give you back the access credentials to talk to it. What we have here is the seed phrase that was generated locally, we will never share this with anybody, it will never leave this device at all. And we have the credentials here. What we do next is we ask it “Where is it running?” The node with this ID is currently not running anywhere. Let’s change that. What I do now is I tell it to spin up this node, I want to use it and talk to it. I talk to the scheduler and after a 2 second break it will tell me my node is running here. What I can do is I can talk to it. When I call getinfo it will talk to this node and say “Hey I am node VIOLENTCHASER with this ID running this version and I am synchronized at this block height.” We can tail the logs and we can interact with it. We have a whole bunch of useful command line tools that we can use. We can open channels, we can disconnect, we can stop it, we can generate a new address, stuff like that. That is kind of cute but it is not helping us much. So what I did before as with every cooking show you’ve ever seen I prepared something. What we have here is a node I setup a couple of days ago, it is called T2, very imaginative. When we say “I’d like to talk to this guy” this is going out to the scheduler, starting it and talking to the node, getting the getinfo. This is VIOLENTSEAGULL. If we check where it is connected to using listpeers we see it has a connection but it is disconnected. What we’ve just done is we have connected the signer to the node, we have connected the front end to the node and now this node is fully functional. It has reconnected to its peer and it could perform payments. We can do listpeers again and we see that it is connected, it is talking to a node and it is up to date.

Now I go over to my favorite demo shop (Starblocks) by our friends at eclair. This is the best way to demonstrate anything on testnet and buying coffee has always been my go-to example. I create a Lightning invoice, I copy this invoice and I pay it. And it works. Now I can breathe a sigh of relief, this is always scary. What just happened here? This machine acted as a signer, it held onto the private keys, and it acted as a front end. The node itself did not have any control over what happened. The front end was sending authenticated commands to the node, the node was computing what the change should be in the channels, and to effect those changes we had to reach out to the signer to verify, sign off on changes and send back the signatures. You might have seen this scroll by very quickly: the signer received an incoming signature request, signed it and sent it back. This allows you to spin up a node in less than a second. When you need it you open the app, it will spin up the node again, it will connect, it will do everything automatically. You can just use it as if it was any app you would normally use. What does this mean for app developers? You don’t have to learn how to operate Lightning nodes anymore because we do it for you. What does this mean for end users? You don’t have to learn how to operate a Lightning node before seeing what you could do with one. You first see what the upside is and then we give you the tools to learn and upgrade your experience. Eventually we want you to take your node and take self custody of it; then you are a fully self sovereign node operator in the Lightning Network. This is our attempt to onboard, educate and then offboard. That is pretty much it. I hope this is something that you find interesting. We will have many more updates coming soon and will have some sexier interfaces as well. Thank you.
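The division of labour Christian describes can be sketched in a few lines. This is a toy model, not the Greenlight protocol: the class names are invented and an HMAC stands in for the real commitment signatures, but it shows the shape of the flow, with the hosted node computing state changes while the signer, which holds the only copy of the seed, signs off on them locally.

```python
import hmac
import hashlib

class LocalSigner:
    """Runs on the user's device; the seed never leaves it (toy stand-in)."""
    def __init__(self, seed: bytes):
        self._seed = seed  # generated and kept locally

    def sign(self, request: bytes) -> bytes:
        # A real signer would verify the request against its own view of the
        # channel state and produce an ECDSA/Schnorr signature; an HMAC keeps
        # this sketch self-contained and deterministic.
        return hmac.new(self._seed, request, hashlib.sha256).digest()

class HostedNode:
    """Runs on remote infrastructure; proposes changes but cannot sign them."""
    def __init__(self):
        self.pending = []

    def propose_payment(self, invoice: str) -> bytes:
        update = f"commitment-update:{invoice}".encode()
        self.pending.append(update)
        return update  # shipped over the wire to the signer

signer = LocalSigner(seed=b"local-seed-never-leaves-device")
node = HostedNode()
request = node.propose_payment("lntb1...")  # placeholder invoice
signature = signer.sign(request)            # round trip back to the node
assert len(signature) == 32
```

The key property is that the `HostedNode` object holds no secret at all; compromising the infrastructure yields signature requests but never the ability to move funds.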

MF: Cool, the way this is going to work today is we’ll have topics and we’ll open it up to questions on each topic. The topic now is Greenlight. Does anyone have any questions or comments on Greenlight?

Q - Is there a reason to not use this long term?

A - The way we achieve efficiency with this is that if you are not using the node we will spin it down; without your signer no changes can happen anyway, so we free those resources and use them for other users. If you are online continuously, make use of it and keep it alive, we currently impose a 1 hour limit but we could lift that for you as a business. Ultimately if you are a business you are probably better off running your own node. You probably don’t want to run this anyway; you want more control over your funds. This is mainly for onboarding new users, showing them what things can look like before they have to make any investment. But we don’t prevent you from using it if you want to build a business on it. One of the use cases that I can imagine is you are at a hackathon and you need a quick Lightning backend. It takes less than a second to spin this up and have an experimental backend for the weekend. Once you see that it is working we will allow you to export the node and make it a fully fledged Lightning node in your own infrastructure that you fully control.

Q - Is Greenlight itself open source so others can run this service?

A - We plan to open source many parts of Greenlight, among which is the networking interface. I know this is an old friend of yours, complaining about the lack of a network RPC in c-lightning. We will open source the components that allow you to run the nodes themselves on your own infrastructure. What we are still considering is whether we want to open source the control plane: how you register a node and how you schedule a node. That might also be an option because we want as many of these offerings as possible to help as many people onboard to the Lightning Network as possible.

Q - Will Blockstream be integrating this into its own front ends like Green wallet and things like that? If so what libraries would that use?

A - The way we interact with the nodes is just a gRPC interface. Any language that can talk gRPC can interact with the nodes running on our service. The goal is to eventually integrate it into Green. We didn’t want to force the Green team to spend time on this as of yet because they tend to be quite busy. We are working with a couple of companies in a closed beta right now to explore how to get Lightning integrations into as many applications as possible. The name “Greenlight” gives away that it is planned for Green I guess.

Q - How are you managing the liquidity per user? What does that look like in an export scenario?

A - The liquidity is handled by us coordinating with external liquidity service providers to make sure that you as the end user have the cheapest and best possible offer you could have to open channels to the wider network. We plan to have a lower bound offering by ourselves as well to make sure that every user that wants liquidity can get it. For the export this is running open source c-lightning. What we do with the export is we give you a copy of the database, mark the node as exported in our own database so we don’t start it again and you can import the database into your own node. This means that you don’t have any downtime, you don’t have to close channels, you don’t have to reopen channels, your node will be exactly as is at the moment that you export it. We also plan to offer a couple of integration helpers such as a reverse proxy, giving each node its own URL. If you have any wallet attached to your node it will still be able to talk to the node even if you offboard it into your own self hosted infrastructure. Making it a zero config change for front ends.

Q - In the demo, the HSM secret there, do you connect it to a hardware module or is it just a soft signer?

A - That name has always been a bit aspirational to be honest. We will take the learnings from this to inform also the way that the open source c-lightning project will be designed going forward. We will use information that we gather from this to make c-lightning more efficient in future. Part of that is the way that we independently verify in the signer whether it is an authenticated command or not. That will eventually inform how we can build hardware security modules including hardware wallets for Lightning nodes themselves. This is very much a learning experience for us to eventually get a Ledger or Trezor that can be used to run a Lightning node.

Q - You will expand the HWI library perhaps?

A - The HWI might not be sufficient in this case for us. It would require quite a bit more state management on the hardware itself to get this working in a reliable way and make sure that we can verify the full context of whatever we are signing.

Contrasting the different Lightning implementations

MF: So we’ll start the panel. I can see a lot of familiar faces. For anyone who is new to the Lightning Network I’ll just give the basics. The Lightning Network is a network built on top of the Bitcoin network. It allows for a lot of transaction throughput that doesn’t touch the chain. There are various implementations that try to stay compatible using the spec process. Today we have representatives of three implementations, and a representative of a fourth implementation at the back, so for any LDK questions we’ll try to get a mic to the back. We have Christian representing c-lightning, Bastien representing eclair and Oliver representing LND. I thought we would start with something that might be quite fun: a short sales pitch for your implementation and also an anti sales pitch, what you think your implementation is currently bad at doing.

CD: So c-lightning, written in C so very efficient. It adheres to the UNIX philosophy of doing one thing and one thing very well. We don’t force decisions on you, c-lightning is very modular and customizable so you can make it very much your own however you like, however you need it to be. The anti sales pitch, it is very bare bones, it is missing a network RPC and you have to do work to get it working. It is probably a corollary to the sales pitch.

Bastien Teinturier (BT): So eclair, its goals are stability, reliability, scalability and security. We are not aiming for the maximum number of features and we are a bit lacking in a developer community. Our goal is to have something stable that does payments right. You don’t have to care about your node, it just runs and never crashes, there are never any issues. The anti pitch is that it is in Scala; no one knows Scala but it is actually really great.

Oliver Gugger (OG): LND tries to be developer first. We want developers to be able to pick it up easily, integrate it into their product, build apps on top of it and distribute it as a wallet or a self hosted node. Bringing it to the plebs. We focus mainly on having a great developer interface. We do gRPC and REST, try to build in everything, and also make it secure and scalable. If you run a very large node then database size is currently an issue. We are working very hard to make that less of an issue; we are working on external database support, lots to do.

MF: Cool. So Christian went through Greenlight just then. Perhaps Bastien you can talk about one of the biggest if not the biggest Lightning node on the network?

BT: It depends on what you count but yeah probably the biggest.

MF: I know that you don’t intensely manage that node but can you give any insights into running such a big node, I think there were scaling challenges that you could maybe talk about briefly.

BT: So the main challenge is sleeping at night knowing that you have that many bitcoin in a hot wallet. We didn’t have any issues scaling because there is not that much volume on Lightning today; it is really easy to support that volume. We built eclair to be easily horizontally scalable. One thing that we did a year ago is run our node on multiple machines. There is one main machine that does all the channel and important logic, but there are also two front end machines that do all of the connection management and routing gossip management, the things that are bandwidth intensive and can easily be scaled across many different machines since each connection is independent of the others. To be honest we don’t need it; it could easily run on a single machine even at that scale. It is a proof of concept that we can do it, and if we need to scale it is easy to scale these front end machines independently. It was a bit simpler than we expected in a way. The main scaling issue is not related to the implementation; it is more about onboarding users, getting many people to have channels and to run in non-custodial settings. That is independent of implementation and more about Lightning as a whole, allowing people to be self-sovereign and able to use Lightning without any pain points.

MF: Is it such a big node because you have the Phoenix mobile wallet and lots of users connecting to this massive node?

BT: We had the biggest node on the network even before we launched Phoenix. It helps but it is not the only reason.

Audience: You did have eclair, the first proper mobile wallet in my opinion.

BT: But eclair could connect to anything, you could open channels to anyone.

MF: With these different models that are emerging for running a Lightning node, perhaps Oliver you can talk about Lightning Node Connect. This is offering a very different service to Greenlight and what Christian was talking about earlier. This is not getting a company to do all the hard stuff, this is allowing you to set up infrastructure between say two servers and splitting up the functionality of that node and wallet.

OG: Lightning Node Connect is a way to connect to your node from, let’s say, a website or a different device. It helps you punch through your home router and establish a connection. Before, what we had was something called LND Connect: it was a QR code, you had to do port forwarding, there was a certificate in there and a macaroon. It was hard to set up. What Lightning Node Connect does is bridge that gap to your node through an external proxy, so you can connect from any device even if you are running behind a firewall and Tor only. It gives you a 10 word pairing phrase that you can use to connect to your node. The idea is that this is implementation agnostic. Currently it only runs on LND but it is very similar to the Noise protocol. It is a secure protocol to connect to a node behind a firewall. We want to see this being adopted by other implementations as well; it could be cool for c-lightning and eclair to use this. We have an early version released, it needs some work, but if that sounds interesting please take a look.
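One way to think about the 10 word phrase is as a mapping from shared session secrets to words a human can read out and compare. The sketch below is purely illustrative; the real protocol derives and uses the phrase differently (as part of a PAKE handshake), and the tiny wordlist here is made up.

```python
import hashlib

# Tiny stand-in wordlist; a real scheme uses a standard mnemonic wordlist.
WORDLIST = ["able", "acid", "arch", "bird", "blue", "coin", "dark", "echo",
            "fern", "gold", "hill", "iron", "jade", "kite", "lamp", "mint"]

def pairing_phrase(entropy: bytes, words: int = 10) -> str:
    """Deterministically map session entropy to a human-comparable phrase."""
    digest = hashlib.sha256(entropy).digest()
    chosen = [WORDLIST[b % len(WORDLIST)] for b in digest[:words]]
    return " ".join(chosen)

# Both ends derive the phrase from the same secret, so a user typing the
# phrase shown on the node into the web client proves they share that secret.
phrase = pairing_phrase(b"session-entropy-from-the-proxy")
assert len(phrase.split()) == 10
```

The proxy in the middle never learns the secret itself, which is what makes it reasonable to route the connection through a third party.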

MF: This is separate to the remote signing. I was combining the two. These are two separate projects. One is addressing NAT traversal, the other one is addressing private key management.

OG: Remote signing is different. It is just separating out the private key part of the LND node. Currently it is just split out: you need to run a secondary LND, with a gRPC interface in between. Now that we have the separation we can add more logic to implement policies on when to sign and how often. It is just a first step.
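The policy layer Oliver mentions could sit in front of the signer and gate every signature request. A toy sketch of that idea follows, with invented names and thresholds; nothing here is LND's actual API.

```python
import time

class SigningPolicy:
    """Toy policy gate a detached signer might consult before signing
    (hypothetical: amount cap plus a per-hour rate limit)."""
    def __init__(self, max_amount_sat: int, max_per_hour: int):
        self.max_amount_sat = max_amount_sat
        self.max_per_hour = max_per_hour
        self.history = []  # timestamps of approved signatures

    def approve(self, amount_sat: int, now=None) -> bool:
        now = time.time() if now is None else now
        # Keep only approvals from the last hour.
        self.history = [t for t in self.history if now - t < 3600]
        if amount_sat > self.max_amount_sat:
            return False  # over the amount cap
        if len(self.history) >= self.max_per_hour:
            return False  # too many signatures this hour
        self.history.append(now)
        return True

policy = SigningPolicy(max_amount_sat=100_000, max_per_hour=2)
assert policy.approve(50_000, now=0.0)
assert not policy.approve(500_000, now=1.0)   # rejected: amount cap
assert policy.approve(10_000, now=2.0)
assert not policy.approve(10_000, now=3.0)    # rejected: rate limit
```

Because the check runs on the signer's side, a compromised node can ask for signatures but cannot exceed whatever budget the key holder configured.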

MF: So the topic is comparison of implementations or any particular questions on a specific implementation. We do have Antoine in the back if anyone has any LDK questions, contrasting LDK with these approaches.

Q - Does LND have plans for being able to create a similar environment to Greenlight, like you were mentioning fully remote signing where keys are totally segregated?

OG: If you mean an environment like a service provider then no, we won’t be offering such a service comparable to Greenlight. We have partners like Voltage that already do that. If you are asking technology wise, we want to have complete separation of private keys and you being able to apply policies on how often you can sign, how much and on what keys, whatever. That is the plan. But us hosting a remote signing service, I don’t think so, no.

Q - I’m coming from the app user experience. The app can just hold the keys for the user. It could be from the wallet provider, it doesn’t have to be from LND’s service. Just being able to have that user experience.

OG: That is something we are definitely thinking about, how to achieve that and how it could be done. Maybe not all the keys; if you have to wake the signer up for gossip stuff it might not be very efficient.

Q - Since not too many people seem to be building on eclair, do you have plans for offering similar services or getting other applications to use your implementation? Or is your primary goal just to make your implementation functional for your wallet?

BT: eclair implements all of the spec so what exactly are you referring to?

Q - A lot of people are looking at mostly using LND for applications, I don’t know if it is just because Scala isn’t popular and they don’t want to mess with the node at all. Now that Blockstream has Greenlight, it is at least a solution to be able to put the keys in users’ hands. Granted you have to connect it to a Blockstream service instead of your own service but they’ve said they’ll open source it.

BT: I think eclair is really meant for server nodes. We have a lot of ways to build on top of eclair. We have a plugin system where you can interact with eclair: you can send messages to all of the components of eclair and implement almost anything that you like in any JVM language. The JVM is just not popular, so we don’t have many people running plugins, but in theory you could. What I think is more interesting for you is that we not only have the eclair codebase for a node implementation, we also have an implementation of Lightning that is targeted only at wallets. At first our wallets were completely based on eclair, on the Scala implementation, on a specific branch that forked off of eclair and removed some of the stuff that was unnecessary for wallets. That could only run on Android, on the JVM, and not on iOS. When we wanted to ship Phoenix on iOS we considered many things. We decided to rewrite the Lightning implementation and optimize it for wallets, leaving many things out. Since there was a Kotlin stack that works on both Android and iOS we started based on that. It was really easy to port Scala code to Kotlin because they are two very similar languages. It took Fabrice many months to bootstrap this. We started from eclair’s architecture and simplified it with what we had learned about what could be removed for wallets. That is something an application developer could have a look at as well. It is quite a simple library to use. It is quite minimal, it doesn’t do everything that a node needs to do. It is only for wallets so there are things left out. I think it can be an interesting thing to build upon.

Q - LND has Neutrino and generally the user experience is not that great. It takes a pretty long time to do a full sync, you have to re-sync if you haven’t had your wallet online for a while. Does LND have plans for a node that is performant and more appropriate to mobile? Same question for c-lightning.

CD: I personally never understood why Neutrino should be a good fit for Lightning nodes. Neutrino does away with the downloading of blocks that you are not interested in but since most blocks nowadays contain channel opens you are almost always interested in all blocks so you are not actually saving anything unless you are not verifying that the channels are being opened. That caveat aside. c-lightning has a very powerful plugin infrastructure. We have a very modular architecture that allows you to swap out individual parts including the Bitcoin backend. By default it talks to a bitcoind full node and it will fully verify the entire blockchain. But you can very easily swap it out for something that talks Neutrino or talks to a block explorer if you are so inclined to trust a block explorer. In the case of Greenlight we do have a central bitcoind that serves stripped blocks to our c-lightning nodes. It is much quicker to actually catch up to the blockchain. This is the customizability that I was talking about before. If you put in the time to make c-lightning work for your environment you are going to be very happy with it. But we do ship with defaults that are sane in that sense. There are many ways of running a Bitcoin backend that could be more efficient than just processing entire blocks, with Greenlight we are using stripped blocks. You could talk to a central server that serves you this kind of information. It very much depends on your security needs, how much you trust whatever the source of this ground truth data is. By default we do use the least amount of assumptions but if your environment allows it we can speed it up. That includes server nodes, you name it.
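Christian's point, that a Lightning node verifying channel opens still ends up fetching almost every block, can be illustrated with a toy filter check. Script types are simplified to strings and the block data is invented; a real Neutrino client matches BIP158 compact filters rather than inspecting scripts directly.

```python
# Toy model: a light client skips a block only if nothing it cares about is
# in it. A node verifying channel opens cares about P2WSH funding outputs,
# which most blocks contain, so few blocks can actually be skipped.
blocks = [
    ["p2wpkh", "p2wsh"],          # block 1: possible channel open
    ["p2wpkh", "p2wpkh"],         # block 2: nothing of interest
    ["p2wsh"],                    # block 3: possible channel open
    ["p2tr", "p2wsh", "p2wpkh"],  # block 4: possible channel open
]

def must_download(block) -> bool:
    # A filter match on a P2WSH output forces a full block download.
    return any(script == "p2wsh" for script in block)

needed = [must_download(b) for b in blocks]
assert needed == [True, False, True, True]  # 3 of 4 blocks still fetched
```

Stripped blocks, by contrast, reduce the cost per block rather than the number of blocks, which is why Greenlight takes that route instead.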

OG: As far as I know there are no plans to support another kind of chain backend than the ones we currently have: btcd, bitcoind and Neutrino. There are still a few performance optimizations that can be done on Neutrino; it needs a bit more love. We could do some pre-loading of block headers and an optimization with the database, because it is the same database technology that we use in LND and it is reaching its limits. I feel like Neutrino is still a viable model but maybe we need to invest a little more time. What is being worked on is an alternative to ZMQ with bitcoind, so you don’t need an actual ZMQ connection. One contributor is working on allowing you to use a remote bitcoind at home, if that is an interesting option for you. Apart from that I am not aware of any other plans.

MF: It seems to me there are specific use cases that are gravitating towards the separate implementations. Perhaps merchants getting set up for the first time would use Greenlight and c-lightning, mobile users would use eclair and developers using an API, there are some exciting gaming use cases of Lightning using LND. Do you think these use cases are sticking to your particular implementation or do you think your implementation can potentially do everything?

CD: We definitely have a persona that we’re aiming for. We do have a certain type of user that we try to optimize for and we are looking into supporting as much as we can. Of course this persona is probably not the same for all of us but there is a bit of overlap. I like the fact that users get to choose what implementation they want to run. Depending on what their use case is one might be slightly better than the other. I think all 3 implementations, all 4 implementations, sorry Antoine…

MF: Given Antoine is so far in the back perhaps Christian you can talk about the use case that LDK is centering on.

CD: Correct me if I’m wrong but LDK is very much as the name suggests a development kit that aims to enable app developers to integrate Lightning into their applications without forcing them to adhere to a certain set of rules. They have a very large toolset of components that you can pick and choose from and you can use to adapt to whatever your situation might be. Whether that is a mobile app or a server app, if I’m not mistaken the Square Crypto app now uses LDK. There is a wide variety of things that you can use it for. It is not a self contained node but more aimed at developers that want a tight integration with the Lightning node itself. Whereas the three implementations here are a software package that can be run out of the box. Some more opinionated and feature complete, some that give you the opportunity of customizing it and are less opinionated. I think the main differentiating factor is that LDK is a development kit that gives you the tools and doesn’t tell you how to actually do it. The documentation is actually quite good.

MF: Another use case that Matt (Corallo) talked about, I don’t know how much progress you’ve made on this was an existing Bitcoin wallet integrating Lightning functionality. A Bitcoin onchain wallet coming and using the LDK and bringing Lightning functionality into that wallet. That sounds really cool but also hard. I don’t know if any progress has been made on that.

Audience: There is a talk at the conference on that.

BT: With ACINQ it seems to be confusing for people. We have two very different personas and only one sticks: people think ACINQ is the mobile persona, whereas our first persona is actually big, reliable routing and merchant nodes. That has been our first focus. But we have also embraced a different persona by working on mobile. A mistake was probably to name our first wallet the same thing as our node, so people thought we were only doing the wallet part. The reason we did that is we really wanted to understand the whole experience of using Lightning. We thought that bringing Lightning to everyone would not go through everyone running their own node. We wanted to understand what pain points people on a mobile phone would have when using Lightning. You don’t have the same environment at all as when you are running a node in a data center or even at home. That’s why we embraced this second persona of doing mobile wallets: to see what we needed to fix at the spec level and at the implementation level for routing nodes, for Lightning to work end-to-end from a user on a mobile phone to someone being paid. We have these two personas and we are trying to separate them a bit more by having our wallet implementation be different from our server node implementation. Don’t think we are only doing mobile.

CD: So what you are saying is Amazon would choose eclair to accept payments?

BT: Yeah probably.

OG: I’m not sure what persona I would ascribe to LND other than the developers themselves; we have a batteries included experience for the developer so they can choose the persona. We have a lot of features, everything is in a single binary, so we are also trying to unbundle some things so developers can have a more configurable experience. We have something for everyone almost. We also have some quite big nodes running LND, but I wouldn’t say the largest nodes are the main goal, though we definitely want to get there as well. We are limited by this database, we don’t have replication built in just yet, but we want to go to SQL so we can also cover this persona. I guess we want to serve everyone and anyone.

CD: While everyone is trying to carve out their niche there is friendly competition in trying to give users options here. We aren’t going to limit ourselves to just one persona or another; you will be the ones profiting from it.

Q - I’ve been in the Lightning space for a while. I am very invested in users using mobile wallets. This is a question for eclair. In my opinion the eclair wallet on Android is one of the best mobile wallets if not the best, especially the UX. My question is, it is 2022, I recently moved over to an iPhone and there are very few non-custodial Lightning clients available. What is the biggest thing in the way right now of greater mobile wallet adoption and creation?

CD: Apple.

BT: I think you mean a mobile wallet for someone who understands the technical detail and wants to manage the technical detail, someone who wants to see the channels. The approach we have taken with Phoenix is different from the approach we’ve taken with eclair-mobile. Our approach with eclair-mobile was to make a wallet that anyone could use but we failed. Lightning is complicated when you have to manage your channels and you don’t even understand why we cannot tell you exactly how much you are able to send. We started again from scratch with Phoenix. Our goal was anyone, anyone who doesn’t care that it is not onchain Bitcoin, just wants something that sends payments and it just works. If you want that we already have it with Phoenix and Breez and other wallets are doing the same kind of things. If you want something that is more expert that gives you control over channel management maybe what you should look for is more of a remote control over your node at home. If you are at that level of technicality maybe you want to have your node at home control everything. Other than that I think the libraries are getting mature enough so that these wallets will emerge if there is demand. I’m not sure if there is such a big demand for people who don’t run a node at home but want a wallet that gives them full control over the channels. I don’t know. If the demand is there the tools and libraries are starting to emerge for people to build those wallets.

CD: “Channels, it’s complicated” should be Lightning’s slogan actually.

Priorities in the coming year for each implementation

MF: Before we move onto the network as a whole and spec stuff, anything to add on priorities for your implementation in the coming year? I suppose it kind of links to the anti sales pitch, anything in addition that you want to be focusing on this year on your implementation?

CD: We’ve certainly identified a couple of gaps that we’ve had in the past including for example giving users more tools to make c-lightning work for them, be more opinionated, give them something to start with. The first step to this end is we are building a gRPC interface with mutual TLS authentication to allow people to talk to their nodes. That has been a big gap in the past, we were hoping that users would come and tell us how they expect to work with c-lightning and gRPC is definitely the winner there. We are also working on a couple of long requested features that I might not want to reveal just now. You are not going to be limited to one channel per peer for much longer. We are going to work much more with the specification effort to bring some of the features that we have implemented, standardize them and make them more widely available. That includes our dual funding approach, that includes our splicing proposals, that includes the liquidity ads that we have. Hopefully we will be able to standardize them, make them much more widely accessible by removing the blockers that have been there so far on the specification. And hopefully Greenlight will work out, we’ll take the learnings from Greenlight and apply them back into the open source c-lightning project, make c-lightning more accessible and easier to work with.

BT: On the eclair side I would say that our first focus is going to be continuing to work on security, reliability and making payments work better, improving payments. The only thing that Lightning needs to do and needs to do well, your payments must work, be fast and be reliable. There are still a lot of things to do. There are big spec changes that have helped Lightning get better but they need a lot of implementation work. They introduce a lot of details that make implementation hard and that can still be improved. Also there are a lot of spec proposals that will really help Lightning get better as a whole. The three that Christian mentioned, we are actively trying to implement them and we want to ship them this year. There are also other proposals, some of which we pushed forward and we hope to see other implementations add, like trampoline (routing) and route blinding. Route blinding is already in c-lightning because it is a dependency for offers and onion messages. I think it is really good for privacy. Better payments, better security, better privacy and better reliability. All of these spec proposals help in a way to get to that end goal.

OG: Our main focus is of course stability, scalability and reliability. Payment failures are never fun. These are the biggest things to look at. If we want to get Lightning in as many hands as possible then we will experience these scaling issues. The next step will be Taproot on the wallet level and of course on the spec level. We want to push for everything that is needed to get Lightning upgraded with Taproot but also in our products and services. With Loop and Pool we can take advantage of some of the privacy and scalability things. I personally think we should take a close look at some of the privacy features such as route blinding and what is proposed in offers. I think we should do more on that side.

Security of the network

MF: That was the implementations. We’ll move onto the network as a whole now. I thought we’d start with security. You can give me your view on whether this is a good split but I kind of see three security vectors here. There is the DoS vector, the interaction with the P2P layer of Bitcoin itself and then there are bugs that crop up. Perhaps we’ll start with Christian on the history so far on the Lightning Network of bugs that have cropped up. The most serious one that I remember is the one where you would be tricked into entering into a channel without actually committing any funds onchain. That seemed like a scary bug. Christian can’t remember that one. Let’s start with the bugs that people can remember and then we’ll go onto those other two categories.

BT: I think there is only this one to remember, it was quite bad. The good thing is that it was easy to check if you’ve been attacked. I think no one lost funds in this because it was fixed quickly. We were lucky in that it was early enough that people did the upgrade. There are so many things that are getting better in every new version of an implementation. It helps us ship bug fixes quickly. The process to find these kinds of issues, we need to have more people continually testing these things, we need to have more people with adversarial thinking trying to probe the implementations, trying to run attacks on the existing nodes on regtest or something else. That needs to happen more. We are doing it all the time but it would be good if other people outside of the main teams were doing it as well. It would be valuable and would bring new ideas. Researchers sometimes try to probe the network at a theoretical level right now, doing it more practical and hands on would help a lot. I would like to see more of that.

CD: This gets us a bit into the open source and security dilemma. We work in the open, you can see every change that we do. That puts us in a situation where we sometimes have to hide a bug fix from you that might otherwise be exploited by somebody that knows about this issue. It is very hard for us to fix issues and tell you right away about it because that could expose others in the network to the risk. When we tell you to update please do so. When we ask you twice do so twice.

OG: That bug you mentioned was one of the biggest that affected all of us. LND has had a few bugs as well that the other implementations weren’t affected by. We had one with the low-s signature rule: someone could produce a signature that LND would think was ok but the network wouldn’t. That was a while ago. Then of course bugs that are affecting users, maybe a crash or incompatibility with c-lightning, stuff like that. We’ve since put more focus on stability.
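The low-s rule Oliver mentions is the standardness requirement from BIP 62: Bitcoin nodes only relay ECDSA signatures whose s value lies in the lower half of the secp256k1 group order, and since (r, s) and (r, n − s) are both valid signatures for the same message, a signer can always normalize. A minimal sketch (the curve order is the real secp256k1 order; the sample values are purely illustrative):

```python
# Sketch of the BIP 62 "low-S" rule: relay policy only accepts ECDSA
# signatures with s <= n/2. A verifier that skips this check (as in the
# LND bug described above) would accept signatures the network refuses
# to relay. SECP256K1_N is the real group order; sample s values are
# illustrative.

SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def is_low_s(s: int) -> bool:
    """A signature (r, s) is standard only if s is in [1, n/2]."""
    return 1 <= s <= SECP256K1_N // 2

def normalize_s(s: int) -> int:
    """(r, s) and (r, n - s) verify against the same message, so a
    high-S value can always be flipped into the low-S range."""
    return s if is_low_s(s) else SECP256K1_N - s

# A high-S value: cryptographically valid, but non-standard on the network.
high_s = SECP256K1_N - 5
assert not is_low_s(high_s)
assert is_low_s(normalize_s(high_s))
```

This is why a mismatch between what one implementation verifies and what the network relays can strand a transaction even though the signature is mathematically valid.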

MF: The next category, we’ll probably go to Bastien for this one, the mempool, replaceable transactions. Perhaps you can talk about that very long PR you did implementing replaceable transactions in eclair and some of the challenges of interacting with mempool policy in Core.

BT: I don’t know how well you know the technicalities of Lightning. There was a big change we made recently, almost a year ago, it took a long time to get into production, called anchor outputs. We made a small but fundamental change in how we used a channel. Before that you had to choose the fee rate of your channel transactions beforehand and sign for that. You couldn’t change it. That means you had to predict the future fee rate for when that channel would close. If you guessed wrong then you are screwed. That was bad. Now with anchor outputs you can set the fee rate when the channel closes, when you actually need it. You don’t have this issue anymore but you run into other issues. Yes you can set the fee rate of your transactions when you broadcast them but there are still quirks in how the Bitcoin P2P layer relays transactions and lets you bump their fees, which means you aren’t guaranteed to get these transactions to a miner and get them confirmed. If you don’t get them confirmed before a specific timelock then you are exposed to attacks by a malicious actor. We have definitely improved one thing but we have shifted complexity to another layer and we have to deal with that complexity now. We have to make it better and this is not an easy task. This is a very hard, subtle issue that touches many aspects of Bitcoin and the Lightning Network. This isn’t easy to fix but something that we really need to do and make it much better security wise. It is getting better, we are working towards it. I think we will be in a good state in the end but there is still a lot of work to do.
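The mechanism behind “set the fee rate when the channel closes” is child-pays-for-parent: the pre-signed commitment transaction carries a low fee, and at close time a child transaction spending the anchor output adds whatever fee is needed to bring the whole package up to the current market rate. A toy sketch of the arithmetic, with all sizes and fee rates invented for illustration:

```python
# Illustrative child-pays-for-parent (CPFP) arithmetic behind anchor
# outputs: the commitment tx is pre-signed with a low fee, and a child
# spending the anchor output raises the *package* fee rate that miners
# evaluate. All sizes and rates here are made-up example numbers.

def cpfp_child_fee(parent_fee: int, parent_vsize: int,
                   child_vsize: int, target_feerate: int) -> int:
    """Fee in sats the child must pay so the parent+child package reaches
    target_feerate (sats/vbyte):
    (parent_fee + child_fee) / (parent_vsize + child_vsize) >= target."""
    package_vsize = parent_vsize + child_vsize
    return max(0, target_feerate * package_vsize - parent_fee)

# Commitment tx pre-signed at ~1 sat/vB, mempool now demands 10 sat/vB:
fee = cpfp_child_fee(parent_fee=200, parent_vsize=200,
                     child_vsize=150, target_feerate=10)
print(fee)  # -> 3300; package rate = (200 + 3300) / 350 = 10 sat/vB
```

The quirks Bastien refers to are exactly about when the P2P layer will or won’t relay and evaluate such a package, which is why this shifted complexity onto Bitcoin Core’s mempool policy.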

MF: I guess the concern is with the bugs, the first category, you can solve those problems. You identify and squash those bugs. This one seems like a long term one where we can kind of incrementally make progress and make it more secure but there is no hallelujah moment where this bug is fixed.

BT: Lightning starts from a statement that may or may not be true. You are able to get transactions confirmed in a specific time period. If you cannot guarantee that you cannot guarantee fund safety. That is not always that easy to guarantee. In very high fee environments where the mempool is very congested it may cost you a lot to be able to guarantee that security. We want to find the right trade-off where it doesn’t cost you too much but you are still completely secure. It is a hard trade-off to find.

CD: It is as much a learning experience for us as it is a learning experience for Bitcoin Core, the Bitcoin peer-to-peer layer. That is what makes this exciting. This is very much a research in progress kind of thing. What excites me and gets me up in the morning.

MF: I looked up the stats on Clark Moody. There’s 3,500 Bitcoin on the Lightning Network that we know publicly. That’s about 130 million US dollars. Does that scare you? Is that about right? If it was a billion would you be scared? 10 billion? Any thoughts?

CD: I would be lying if I was saying this is not a scary amount of money. There is a lot of trust that people put into our code. We do our best to rise up to that trust. That being said we are also learning at the same time as many of you are while operating a Lightning node. The more you can tell us about your experiences while operating Lightning the better we can help you make it more secure, easier to use and more private to use as well. We depend very much on your feedback, just as much as you depend on our code.

Q - What part of the implementation is most prone to breaking interoperability?

CD: We are not pointing fingers. When breaking interoperability you also have two sides. You have one side that has performed some changes, the other side is no longer compatible with it. It is sometimes really hard to figure out which one is the wrong one. The one that has changed might have just addressed a bug that the other one hasn’t addressed yet. It is very much up to the spec effort to say “This is the right behavior, this one isn’t.” That sometimes is an after the fact issue. The spec sometimes gives a bit of leeway that allows you to interpret some parts of the specification in a certain way. Without clarifying comments of which one is the intended behavior you can’t really assign blame to either of them. It might sometimes be the specification that is underspecified causing these kinds of issues. I remember recently roasbeef reached out to ask whether the way that we interpreted one sentence in the specification was the same way he interpreted that one sentence in the specification. It turns out to be different. Is it LND that interpreted it wrong or was it us who interpreted it wrong? There is no right or wrong in this case. It is up to us to come back to the table and clarify what the right interpretation is. That is the vast majority of these kinds of incompatibilities.

Q - How much longer can we have BOLTs without version numbers associated with them? If we want to say we are BOLT whatever compliant it is kind of amorphous. We are changing it, modifying it. It seems really prudent for us to start versioning BOLTs to say “eclair, LND, c-lightning release is BOLT 2.5 compatible” or whatever. What benefits do you see to that and what potential downsides?

BT: It would be really convenient but it is really hard to achieve. This would require central planning of “This feature will make it into the 2.0 release.” All of the features that we are adding to the spec are things that require months of work for each implementation. To have everyone commit to say “Yes I will implement this thing before this other one and before this other one” which is already a one year span with all the unknowns that can happen in a year. It is really too hard to get because this is decentralised development. That is something that we would really like to get but there are so many different things that we want to work on. People on each implementation assign different priorities to it and I think that part is healthy. It is really hard to say “This is going to be version 1.0, 1.1, 1.2”. I used to think that we really needed to do that. I don’t think that anymore.

CD: We actually tagged version 1.0 at some point. It was sort of the lowest common denominator among all the implementations. This was to signal that we have all achieved a certain amount of maturity. But very early on we decided on having a specification be a living document. That also means that it is evolving over time. Maybe we will end up with a version 1.1 at some point that declares a level playing field among all of the implementations with some implementations adding optional features and some of them deciding that it is not the way they want to go. It is very much in the nature of the Lightning specification to always have stuff in flight, always have multiple things in flight and there not being as much comparability as there could be maybe if we were to have a more RFC like process. There you say “I’ve implemented RFC1, I’ve implemented RFC2.” You build up the specification by picking and choosing which part of the specification you build. That was very much a choice early on to have this be a living document that evolves over time and has as much flexibility as possible built into it.

BT: One thing I would like to add to that, the main issue is right now we’ve only been adding stuff. We are at the point where we’ve added a lot of stuff but we are starting to remove the old stuff. That is when it gets better. We are able to remove the old stuff. When there are two things, one modern thing and one legacy thing and everyone knows about the modern thing, we start removing the old things from the spec, it helps the spec get better, get smaller, get easier. Rusty has started doing that by deprecating one thing a week ago. I am hoping that we will be able to deprecate some other things and remove the old things from the spec.

Q - I know it is technically legal but probing feels like buggy behavior. What are we going to do about probing? There is a lot of probing going on.

CD: I don’t like that categorization of probing being dodgy or anything because I like to do it. Just for context, probing is sending out a payment that is already destined to fail but along the way you learn a lot about how the liquidity is distributed in the network. The fear is that it might be abused to learn where payments are going in the network by very precisely figuring out how much capacity is in those channels and on what side. When there is a change you can see there is 13 satoshis removed there and 13 satoshis removed there so those two changes might be related. It is very hard to get to that level of precision and we are adding randomness to prevent you from doing that. The main attack vector is pretty much mitigated at this point in time. It also takes quite a bit of time to get to that precision even if you weren’t randomizing. You always have a very fuzzy view even if you are very good at probing, so much for probing as an attack. When it comes to the upsides of probing, probing does tell you a lot about the liquidity in the network. If you probe your surroundings before you start a payment you can be relatively certain that your view of the network is up to date. You can skip a lot of failed payment attempts that you’d otherwise have to do in order to learn that information. It also provides cover traffic for other payments. When you perform payments you are causing traffic in the network. A passive observer in the network would have a very hard time to say “This is a real payment going through and this is just a probe going through”. In my opinion probing does add value, it doesn’t remove as much privacy as people fear it does. On the other hand it adds to the chances of you succeeding your payments and to you providing cover traffic for people that might need it. I pretty much like probing to be honest.
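The way a probe "learns how liquidity is distributed" is essentially a binary search: each probe uses an unknown payment hash so it can never succeed, but the error you get back tells you whether the amount fit through a channel. A toy simulation under that simplified model (real probing is much noisier, which is exactly why implementations add randomness):

```python
# Toy simulation of channel balance probing. Each probe is a payment
# with an unknown payment hash, so it is destined to fail, but the
# failure code leaks whether the channel could forward the amount.
# The probe model and all numbers are purely illustrative.

def make_prober(true_balance_sat: int):
    def probe(amount_sat: int) -> bool:
        # True  -> probe reached the far side (unknown payment hash),
        #          so the channel had enough balance to forward it.
        # False -> temporary_channel_failure: not enough balance.
        return amount_sat <= true_balance_sat
    return probe

def binary_search_balance(probe, capacity_sat: int) -> int:
    # Invariant: probe(lo) succeeds (0 always fits), probe(hi) fails.
    lo, hi = 0, capacity_sat + 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if probe(mid):
            lo = mid
        else:
            hi = mid
    return lo  # tightest lower bound on the spendable balance

probe = make_prober(true_balance_sat=6_500_000)
print(binary_search_balance(probe, capacity_sat=10_000_000))  # -> 6500000
```

Roughly log2(capacity) probes pin down one channel's balance exactly in this idealized model, which is why the randomization Christian mentions matters for keeping the attacker's view fuzzy.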

Q - I was less worried about the privacy side of it and more the 95 percent of payments going through my node appear to be probes or failed payments at least. I guess the argument is cover traffic works for that.

CD: The one downside I can see with a huge amount of probes going through a node is that it may end up bloating your database. Every time we add or remove a HTLC to a channel we have to flush that to disk, otherwise we might incur losses there. There is work you have to do even for probes. There are ways we could replace those probes with more lightweight probes that do not have this property. You could have lightweight probes that don’t add to your database but still give you some information about how the liquidity situation is in parts of the network and provide that kind of cover traffic. It is not exactly free because yes you are storing a couple of bytes for each failed probe. Over time that might accumulate but I think in many cases the upsides outweigh the downsides. Apparently I’m the only one probing so sorry for those extra bytes. I’ll buy you a USB stick.

Tensions in the BOLT spec process

MF: So we’ve left the slight controversy until the end. The spec process and the BOLTs. Alex Bosworth put the cat amongst the pigeons with a few comments in an email that was shared on Twitter. I’ll read out a couple of the quotes. Obviously he’s not a fan of BOLT 12 but there were some specific comments on the BOLT process itself. To clarify, Alex is speaking for himself and not speaking for Lightning Labs. I think roasbeef and other people clarified that, it is just his personal opinion. “The way the BOLTs are standardized is arbitrary” and “if your side produces Lightning software that only 1 or 2 percent of the network uses, you still get to set the standard for the remaining 98 or 99 percent”. I guess with any spec process or any standardization process there are always going to be tensions. I haven’t followed any others so this is quite new to me. Obviously with different priorities and different business models and different wishes for the direction in which the spec goes it is almost inevitable there are going to be these tensions. Firstly thoughts on Alex’s comments, thoughts on how the spec process has evolved? Is there anything we can improve or is this just an inevitable side effect of having a standardization process with lots of different players, lots of different competing interests.

CD: I think those are very strong statements from someone who has never participated in a single spec meeting. As you rightly pointed out there is a bit of contention in the spec process but that is by design. If one implementation were able to dictate what the entire network looks like we would end up with a very myopic view of what the network could be and we wouldn’t be able to serve all of the different use cases that we are serving. And so yes, sometimes the spec process is frustrating, I totally agree with that. We certainly have different views on what the network should look like. But by this thesis, antithesis and synthesis process we come up with a system that is much more able to serve our users than if one implementation were to go it alone.

BT: I’ll address the frustration about the spec. Yes it is frustrating. Personally I opened my first big spec PR in 2019, it was trampoline, I was hoping that this would get accepted in 6 months. It takes a long time so 6 months should be great. In 2021 I closed it and opened the 2021 edition of trampoline. This time, everyone says they will implement it so in 3 months we will have it and it is still there. But it is ok. If in the end we get something better, between the 2019 edition and the 2021 edition I have already improved some stuff. Some stuff has changed based on feedback, based on things that we learnt running this in one of our wallets. If this is what it takes for the end result to be good I am ok with it. We are not in a rush here. We are in this for the long haul. I think that is the mindset you should have when you approach the spec process. You should not aim for results, you should not aim for merges, you should aim for a learning path. Getting in the end to something that many people contributed to. That in the end is really good for your users and good for the network. If you are ready to wait I think this is a good process. But it is frustrating and it is hard sometimes but I think it is a good thing.

OG: I personally don’t work on the spec so I don’t feel qualified to give an answer. I just wanted to add that I don’t necessarily agree with all the points that Alex mentioned. I definitely would have said it in a different way as well. I think lack of resources to work on the spec sometimes is interpreted as us blocking stuff which is not the intention and not our goal of course. We want to put in more work on the spec so I hope we will improve there. It is an interesting thing to observe, how that frustration sometimes comes to the surface. Thank you for all the work you do on the spec. I need to pick up as well so I’ll do my best.

BT: Two things I want to mention. Of course the spec takes a long time, this is time you could spend doing support for your users, time you could spend doing code for your users, time you could spend doing bug fixes. It is really hard to arbitrate. We all have small teams and a big userbase so it is really hard to find the time to do those things and choose how to allocate your time. It is hard to blame someone because they don’t have the time but I think it is still an important thing to do and so you should try to find some time to allocate to it. I really hope that people don’t think the spec is an ivory tower kind of thing. “These guys are doing the spec and it is complicated. I cannot participate.” Not at all. You should come, you should learn, you should listen, you should ask questions and you should propose things. It is ok to fail in public. That is a good thing. It is ok to embarrass yourself because you proposed something and it was completely wrong. I’m doing it recently with all these RBF things on Bitcoin Core. I am thinking “Why can’t we do that? Wouldn’t that work?” and it is completely dumb because there are a lot of things I don’t know. That’s how you learn. It is ok, you just need to not have too much of an ego. No one will judge you for trying something and learning that there were things you didn’t know. This is a really complex project so don’t be afraid to come in and say things, maybe you will be wrong but that is how you learn. There’s no ivory tower and we are welcoming of all contributions.

Q - What is the actual problem with implementations leading the spec and treating the Lightning Network like competition? You don’t need unanimous consensus like you do with the base layer. In the end you can’t even actually achieve it in the Lightning Network because you already right now don’t have completely compatible specs with the different implementations. What’s the actual problem with one implementation taking a risk, implementing something not in the spec and seeing if it floats, seeing if it sticks. It brings new users and new use cases to the Lightning Network, letting the other implementations agree by seeing it demonstrated and all agreeing to upping the lowest common denominator to that spec. What is the problem with that?

CD: Like Bastien said there is no problem in implementations trying out experimental things, it is very much welcome. Back in 2016 we came from 3 different directions and decided to join all of the things that we learned during this initial experimentation phase into a single specification so that we could collaborate and interoperate. This experimental phase must always be followed up by a proposal that is introspectable by everybody else and can be implemented by everybody else. Sometimes that formal proposal is missing and that prevents the other implementations giving their own review on that feature. This review is very important to make sure it works for everybody and that it is the best we can make it.

Q - Is that where the tension is coming from? I know there are some arguments on offers, AMP, LNURL. There are a lot of different payment methods in Lightning. Where is the drama at all? There seems to be some form of drama emerging. Is it just people that are trying to lead by implementation are not going back and making a spec.

MF: Just to be clear there has to be two implementations that implement it.

Q - That’s arbitrary.

MF: But if you don’t do that and one implementation implements it they are attempting to set the standard. Someone comes 6 months later and goes “You’ve made lots of mistakes all over the place. This is a bad design decision, this is a bad design decision.” But because it is out there in the network you’ve got to follow it.

Q - It is only bad if it fails. It is a subjective thing to say it is bad if no one does it. If it succeeds and if it is effective?

CD: Very concretely, one of the cases that comes to mind is a pending formal proposal that was done by one team that has been discussed for several months before. Suddenly out of the blue comes a counter proposal that does not have a formal write up, that is only implemented in one codebase and that is being used to hold up the process on a formal proposal without additional explanation why this should be preferred over the formal proposal or why there isn’t a formal proposal. There is a bit of tension there that progress was held up by that argument.

Q - Incompatibility is inevitable though in some features. I believe PTLC channels aren’t compatible with HTLC channels.

CD: One thing is incompatibility, the other one is holding up the spec. Holding up everybody else to catch up to the full feature set that you are building into your own implementation.

MF: And of course every implementation if they wanted to could have their own network of that implementation. They are free to do that but the whole point of the spec process is to try to get all the implementations when they implement something to be compatible on that feature. But they are free to set up a new network and be completely independent of the BOLT compliant network.

Q - I don’t make an implementation so you don’t have to worry about me. I just feel like it is a bit idealistic in that the competition could result in even more iteration and faster evolution of the network than the spec process.

CD: That is a fair argument, that’s not what I am arguing against. Like the name Lightning Network suggests it very much profits from the network effects we get by being compatible, by being able to interoperate and enabling all implementations to play on a level playing field.

Q - The last part sounds like fairness which is not really a competitive aspect. If one implementation led the whole spec, just incidentally not necessarily by tyranny, “We know the way” and they were right and brought a million users to Lightning the other specs would have to go inline but we would have a million users instead of x thousand.

CD: That’s why we still have Internet Explorer?

Q - I don’t think it is the same thing. You don’t need total compatibility, you just need a minimum amount. If you have some basic features that work in all implementations you are ok. If there is some new spec thing that isn’t in all of them that brings in new people it would become evident.

CD: That assumes that the new features are backwards compatible and do not benefit from a wider part of the network implementing it.

Q - 10,000 users compared to 100,000 is a different story. If it is useful people will use it.

CD: But then you shouldn’t be pretending that you are part of an open source community that is collaborating on a specification.

MF: So there are conflicting opinions on BOLT 12, I think a lot of people support BOLT 12, there are a couple of oppositions but it is quite a new proposal. Let’s talk about a couple of proposals that have been stuck for at least a year or two. You mentioned your trampoline proposal, there is also dual funding and liquidity ads that I think Lisa is frustrated about with lack of progress on. Perhaps we can talk about what’s holding up these. Is it business models? Is it proprietary services not wanting decentralized versions of that proprietary service? Is it not wanting too much to go over the Lightning Network so that it becomes spam? What are the considerations here that are holding up progress on some of these proposals?

BT: I think the main consideration is developer time. We have done all the simple things which was Lightning 1.0. Now all that remains is the harder things that take more time to build and take more time to review and that involve trade-offs that aren’t perfect solutions. That takes time to review, that takes time for people to agree that this is the right trade-off that they want to implement in their implementation. That takes time for people to actually implement it, test compatibility. I think we have a lot of proposals right now that are floating in the spec and most of us agree that this is a good thing, this is something we want to build, we just don’t have the time to do everything at once. Everyone has to prioritize what they do. But I don’t think any of those are really stuck, they are making slow progress. All of these in my opinion are progressing.

CD: All 3 or 4 proposals that you mentioned…. Trampoline isn’t that big but dual funding is quite big, splicing is quite big, liquidity ads is quite big, offers is quite big. It is only natural that it takes time to review them, hammer out all the fine details and that requires setting aside resources to do so. We are all small teams. It comes down to the priorities of the individual team, how much you want to dedicate to the specification effort. c-lightning has always made an effort to have all of its developers on the specification process as have other teams as well. But it is a small community, we sometimes get frustrated if our own pet project doesn’t make it through the process as quickly as we’d like it to.

The far future of the Lightning Network

MF: So we’ll end with when you dream about what the Lightning Network looks like in 5 years what does that look like? I know Christian has previously talked about channel factories. As the months go on everything seems further away. The more work you do, the more progress you make the further you are away from it almost. Certainly with Taproot, perhaps Oliver can talk about some of the stuff he’d like to get with Taproot’d Lightning. What do you hope to get and in which direction do you want to go in in the next few years?

CD: Five years ago I would not have expected to be here. This network has grown much much quicker than I thought it would. It has been much more successful than I thought it would. It has surpassed my wildest expectations which probably should give you an idea of how bad I am at estimating the future. You shouldn’t ask me about predictions. What the Lightning Network will look like in 5 years doesn’t really depend on what we think is the right way to go. There are applications out there we Lightning spec developers cannot imagine. We are simply way too deep in the process to take those long shots. I am always amazed by what applications the users are building on top of Lightning. It is really you guys who are going to build the big moonshots. We are just building the bases that you can start off on. That’s really the message here. You are the guys who are doing the innovation here so thank you for that.

BT: I don’t have anything to add, that’s perfect.

OG: I’m just going to be bold and say I’d like to see a billion users, a billion people actually using Lightning in one way or another. Hopefully as non-custodial as possible, but number of users go up.

Q&A

MF: So final audience questions. Rene (Pickhardt) had one on Twitter. How should we proceed with the zero base fee situation? Should LN devs do nothing? Should we have a default zero recommendation in the spec and/or implementations? Should we deprecate the base fee? Something else? Any thoughts on zero base fee?

CD: Since Rene and I have been talking a lot about why zero base fee is sensible or why it wouldn’t be, I’ll add a quick summary. Zero base fee is a proposal by Rene Pickhardt to remove the fixed fee and make the fee that every node charges for forwarding payments purely proportional to the amount being forwarded. It is not removing the fees, it is just removing that initial offset. Why is that important? It turns out that the computation we do to find an optimal route inside the Lightning Network might be much, much harder if we don’t remove it, according to his model. If we were to remove the base fee then that would allow us to compute optimal routes inside the Lightning Network with a maximum chance of succeeding in the shortest amount of time possible. That’s a huge upside. The counter-argument is that it is a change in the protocol, and maybe we can get away with an approximation that might not be as good as the optimal solution but is still pretty good. As to whether we as the spec process should be dictating those rules, I don’t think it is up to us to dictate anything here. We have given Lightning node operators the ability to set fees however they want. It is up to the operators to decide whether that huge performance increase for their users is worth setting the base fee to zero, or whether they are ok with charging a bit more for the fixed work they have to do and having slightly worse routing performance. As spec developers it is always difficult to set defaults because defaults are sticky. It would be us deciding on behalf of Lightning node operators what the path forward should be, whereas it should be much more us listening to what you want from us and taking decisions based on that.
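To make the fee model concrete, here is a minimal sketch of the standard two-part Lightning channel fee (a fixed base plus a proportional part in parts per million) and why a nonzero base fee breaks proportionality. The function names and numbers are illustrative, not taken from any implementation, and the route fee is simplified: a real route forwards the downstream amount plus fees at each hop.

```python
# A Lightning channel conventionally charges:
#   fee(amt) = base_fee_msat + amt * fee_ppm / 1_000_000
# With base_fee_msat > 0 the route cost is not proportional to the amount,
# which breaks the linear cost structure that fast min-cost-flow style
# routing (as in Pickhardt's proposal) relies on. With base_fee_msat = 0
# the cost is exactly linear in the amount.

def channel_fee_msat(amount_msat: int, base_fee_msat: int, fee_ppm: int) -> int:
    """Fixed base fee plus proportional fee in parts per million."""
    return base_fee_msat + (amount_msat * fee_ppm) // 1_000_000

def route_fee_msat(amount_msat: int, hops: list[tuple[int, int]]) -> int:
    """Total fee over all hops (simplified: ignores fee-on-fee compounding)."""
    return sum(channel_fee_msat(amount_msat, base, ppm) for base, ppm in hops)

hops_with_base = [(1000, 100), (1000, 100)]  # 1 sat base, 100 ppm per hop
hops_zero_base = [(0, 100), (0, 100)]

# With a base fee, doubling the amount does not double the fee:
print(route_fee_msat(100_000, hops_with_base))  # 2020 msat
print(route_fee_msat(200_000, hops_with_base))  # 2040 msat, not 4040

# With zero base fee, the fee scales exactly with the amount:
print(route_fee_msat(100_000, hops_zero_base))  # 20 msat
print(route_fee_msat(200_000, hops_zero_base))  # 40 msat
```

The proportional (zero-base) case is what makes the pathfinding problem amenable to optimal-flow solvers; the fixed offset is what node operators would give up.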

OG: I met with Rene recently and we discussed this proposal again. We kind of have a plan for how we could integrate some kind of proof of concept into LND that can work around some of the issues with the base fee. Just to get a sense of how this computes, how fast it is compared to normal pathfinding and how much better it could be. So we can actually run some numbers, do some tests, get some real world comparisons, real world numbers, which I thought was a bit lacking before; have actual results being shown. My goal is to give it a shot, implement something very naive and very stupid based on his paper and see where it goes. I am curious to see what comes out of that.

CD: As you can see many of our answers come down to us being very few people working on the specification. I think all 3 or 4 teams are currently looking for help there. If you or someone you know is interested in joining the Lightning efforts I think we are all hiring.

OG: Definitely.

CD: Raise your hand, join the ranks and we can definitely use your help.

Q - The base chain is always quite incentive compatible, something that is good for you is good for network and vice versa. In Lightning there are some deviations from that. For example if you lock up liquidity for a long time you don’t pay for that. I think it is fine for now as we are bootstrapping but going forward when we get more adoption do you see that as something we need to move away from? Are there any technical obstacles, something like stuckless payments? What are your views on paying for liquidity over time and not just for a one-off?

BT: You say this is not an issue because we don’t have a lot of volume, but I think it is an issue even if we don’t have a lot of volume. It is just that we don’t have a perfect solution to it, and I don’t think there is a perfect solution. We have done a lot of research, there have been a lot of proposals trying different things to fix it, but all of these proposals either don’t completely work, or work but require quite a lot of implementation changes and a network wide upgrade that takes time. We are actively trying to fix this but it is still very much in the research phase. If more people want to look at it from a research angle that would be very much appreciated, because I think it is an interesting problem to solve and there is still some design space that hasn’t been evaluated. But we haven’t focused on implementing anything because we are not yet convinced by the solutions we’ve found. This is very much something we are trying to fix in the short to mid term.

CD: I think saying that the base chain is always incentive compatible is probably also a bit of a stretch. We still disagree on whether RBF rules are incentive compatible or not. We are still fighting over that. That being said I do agree that the incentive incompatibilities on the base chain are much fewer because the base chain is much simpler. There is much less stuff that can go wrong on Bitcoin mainnet than can go wrong on Lightning. The surface where stuff can go wrong on Lightning is much bigger. Bitcoin itself has had quite a lot more time to mature. I’ve been with Bitcoin since 2009 and trust me I’ve seen some s*** go down. So I think we shouldn’t set up the same requirements when it comes to being perfectly incentive compatible or perfectly secure or perfectly private either. We should be able to take the time to address these issues in a reasonable way, take those learnings and address them as they surface. That being said there are many proposals flying around, we are evaluating quite a few of them including paying for time of funds locked up, you mentioned stuckless payments which might become possible with PTLCs which might become possible with Taproot. There are ways we can improve and only the future will tell if we address them completely or whether we have to go over the books again and find a better solution.

Q - Do any of you have any general advice for aspiring Lightning developers? Pitfalls to avoid or things specifically to focus on?

CD: As an aspiring Lightning developer you probably want to go from your own experience, from what you’ve seen while running a Lightning node yourself, whether that is for yourself or somebody else. Or try to explain to somebody else where your own disagreements are with what Lightning is nowadays. Try to come up with a solution; you can propose something completely theoretical at first, or it can be something that you built on top of Lightning. All of that is experience that you can bring to the table and is very valuable for us as implementers of the various implementations or as spec developers. Those additional views that you bring can inform how we go forward. My personal tip is to try out an implementation, try to build something on top of it and gain experience with it. Then come to us and say “This is not how I expected it to work. Why is it? How can we do it better? Wouldn’t this be better?” Things like that. That is a very gentle approach to this whole topic, and usually the way you can make a lasting change in this system as well.

BT: My biggest feedback and advice on the Lightning learning experience is to not try to take it all in at once. This is a multi-year learning process, there is a lot you have to learn. This is a very complex subject. You have to master Bitcoin, then Lightning. It is a huge beast with a lot of subtleties. Accept that there are many things that will not make sense for a very long time. You have to be ok with that and just take piece by piece learning some stuff starting with some part of Lightning and then moving onto something else. This is going to be a long journey but a really good one.

OG: Very good points. Just to add onto everything already said, the main constraint in resources isn’t that we don’t have enough people creating pull requests but we don’t have enough people reviewing, testing, running stuff, making sure that new PRs are at the quality that is required to be merged. As a new developer if your first goal is to create a PR it might be frustrating because it might lie there for a while if no one has time to look at it. Start by testing out a PR, giving feedback, “I ran this. It works or it doesn’t work”. You learn a lot and also help way more than just adding a new feature that might not be the most important thing at the moment.

CD: I think Bastien said it about half an hour ago, and I think it is worth repeating. Don’t hesitate to ask questions. There are no dumb questions. Even the simplest of questions allows us to shine a light on something from a different perspective. It is something that I personally enjoy a lot, to walk people through the basics or more advanced features. Or maybe it is an opportunity for me to learn a new perspective as well. We are kind of ok, we don’t bite. Sometimes we have cookies.

MF: Thank you to our panelists and thank you to our sponsors, Coinbase Recruiting.