Laolu Osuntokun - Stephan Livera

Speakers: Olaoluwa Osuntokun

Date: December 11, 2018

Transcript By: Michael Folkson

Tags: Lightning

Category: Podcast

Stephan Livera podcast with Laolu Osuntokun - December 11th 2018

https://twitter.com/kanzure/status/1090640881873387525

Podcast: https://stephanlivera.com/episode/39

Stephan: Hi Laolu. Thank you very much for coming on the show.

roasbeef: It’s good to be here.

Stephan: It’s been a while since the Lightning Summit and now you’re back home. How are things back home?

roasbeef: Not bad, just the usual back to work, just the regular stuff. Australia was great. I had never been. It’s spring here, this is different.

Stephan: Fantastic. I know you told me on the day but I thought it was an entertaining story. Could you tell the listeners the origin story of the roasbeef name?

roasbeef: I feel as if I’ve told this a few times but I guess it’s never been written down. Roasbeef came about when I was in 9th or 10th grade, when I was like 15 or 16. I was playing World of Warcraft at the time and at that point the loser kicked everyone off the private server I was on. To go onto the public servers I needed a character name. I remember I’d just got back from football practice and my mom got me some Subway so I was sitting there thinking of my name on the game itself. I couldn’t get anything going and I was like I’m eating a roast beef sandwich right now, I had a Subway Club. I put in roastbeef with a t, that was taken, I was like damn, what am I going to do now? I take off the t and then it wasn’t taken and from there on, the past ten years, I’ve been roasbeef on everything. I’m roasbeef on everything other than Instagram. There’s this Norwegian kid that has it, I’ve messaged him a few times. I guess he’s not going to give it up.

Stephan: He doesn’t even know who you are? Come on.

roasbeef: The best part about roasbeef is that it’s funny. We’re doing all this serious cryptocurrency Bitcoin work and it’s like oh roasbeef. It makes me laugh which is why I like it.

Stephan: I love it. You just came back roughly a month ago; in November 2018 we had the Australian Lightning Summit in Adelaide. Can you give an overview of what it is and what happened?

roasbeef: Yeah, so the Lightning Summit was in Adelaide this time. Some people might not remember, originally we had a summit at Scaling Bitcoin, this was 2016 in Milan. We were at the office of BHB. At that point we were all working on Lightning implementations ourselves. We decided to synchronize and make sure things were interoperable. That was a two day thing and we decided we were going to do it again. The point of this one was not really the super far off Lightning stuff but the things that we all know we really need to get in. Things like dual funding, splicing, AMP and things like that. It was a two day thing. Mostly, it was deciding what people were interested in getting in. We left the nitty gritty details of the specifications to the mailing list, and there’s a lot of traffic on the mailing list right now on the details of things we delayed until next time. I thought it was great, Adelaide was a great town, I’d never been to Australia. We had some time to visit some other cities as well. We actually got a lot more done than I thought because initially we had this really long agenda, I was like “damn are we going to get to everything realistically in two days?”. At the end we even had time to get some beer.

Stephan: Fascinating. What was the most exciting thing in terms of the technology coming out of the Lightning Summit?

roasbeef: Some cool stuff. We had this cool routing protocol. That seemed pretty cool because it could add some increased privacy into the system itself. Another cool thing is there were some measures added for light clients. Rather than watching the chain you could use an SPV proof that the channel was closed. The biggest thing I thought was important was fixing some issues with the way we handle fees in the protocol. Before, we had to guess ahead of time what the fees were going to be. That’s something we didn’t really anticipate would be a huge issue. Once we went to mainnet, it was a big issue because peers were disagreeing on what fees we actually needed to have. When there’s disagreement on fees you have a lot of issues. So instead of agreeing fees ahead of time you can basically add on fees after the fact. This can be via a child-pays-for-parent type thing where you have an anchor output and you can anchor the transaction down after the fact. I think it’s one of the biggest things we’re going to get out of here and I’m pretty excited about that. Fees in general can be pretty hairy and this sidesteps it all until the last moment which I think is going to be a big difference.

Stephan: In terms of that child-pays-for-parent concept, where will it help smooth the experience? Will it basically be helping you close a channel?

roasbeef: Yeah it’s helping you close a channel. Right now, because we have to agree on fees ahead of time, you basically need to guess ahead of time what the fee will be when you want to go into the chain. In the future, the fee rate could spike. The worst case is basically I have a fee rate, let’s say 10 satoshis per byte. All of a sudden the fee rate spikes to maybe 70 satoshis per byte. All of a sudden I can’t get in and I can’t re-sign my transaction if the other party is offline. If they were online, I could issue a new update_fee in the protocol and we could raise the fee. If they’re offline, all of a sudden I’m stuck with this low fee transaction. The other thing is, because the outputs are CSV-locked towards me, I can’t actually broadcast a spend of them to anchor the transaction down myself. The CSV delay only starts once the transaction hits the chain. We can be in a weird situation where it is very difficult to guess fees ahead of time, but now there are hooks in the protocol that allow you to correct it when you overshoot or undershoot fees. At this point, after I broadcast, I can regulate my fees as much as I want to.
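
*To make the child-pays-for-parent arithmetic concrete, here is a minimal sketch in Go of the fee calculation involved: the child spending the anchor output must pay enough that the parent and child together reach the target fee rate. The function and numbers are illustrative only, not lnd’s API.*

```go
package main

import "fmt"

// childFeeForCPFP returns the fee (in satoshis) a child transaction
// spending an anchor output must pay so that the parent+child package
// reaches the target fee rate. All sizes are in vbytes.
func childFeeForCPFP(parentFee, parentSize, childSize, targetSatPerVByte int64) int64 {
	required := targetSatPerVByte*(parentSize+childSize) - parentFee
	if required < 0 {
		return 0 // the parent already pays enough on its own
	}
	return required
}

func main() {
	// A commitment transaction signed at 10 sat/vbyte that must confirm
	// after the prevailing rate has spiked to 70 sat/vbyte.
	parentSize, childSize := int64(300), int64(150)
	parentFee := int64(10) * parentSize // 3000 sats
	fmt.Println(childFeeForCPFP(parentFee, parentSize, childSize, 70))
	// Prints 28500: the child covers the whole package at 70 sat/vbyte.
}
```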

Stephan: Fantastic. Could you give a background on AMP and what changed in terms of AMP from the Summit?

roasbeef: AMP is atomic multi-path payments. It is a method to allow you to… payments to the network. Right now with Lightning, let’s say you have five channels. You can only fully send and receive with one of those channels which is pretty limiting. So let’s say I have five $10 channels. All of a sudden I can’t actually send $50. I can only send $10 at once. What AMP lets you do is combine the liquidity of all of those channels and push or receive over them all at once. This is a big benefit to the network as a whole because now you can more effectively utilize the liquidity within the channels. Also, it’s pretty good for routing nodes because now larger channels are less important. They’re less important because before, I would only be able to send a payment if I could route it entirely through one channel, but now I can spread my commitments amongst a bunch of smaller channels. We had the original version of AMP which was based on this secret sharing mechanism that would allow the receiver to pull the payment only once all the pieces got there. Then some people started working on a different version on the mailing list, called base AMP or MPP or something like that, which is a little bit simpler. It reuses the same payment hash. The idea there is that you can still maintain the invoice protocol we have going on here with base AMP too. I think this is going to be one of the biggest things as far as quality of routing on the network because now I can utilize the full bandwidth of the network at any given moment rather than being restricted to a single path. This is pretty good for privacy as well because now all the payments are more sharded and you can’t as easily correlate a larger payment, because now I’ll see ten or so payments come by and they may all be parts of different payments or they could be parts of the same payment.
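
*As a toy illustration of the liquidity-combining idea, this sketch greedily shards one payment across several channels’ outbound balances. Real AMP additionally derives per-shard secrets so the receiver can only settle once every piece arrives; that part is omitted and all names here are invented.*

```go
package main

import (
	"errors"
	"fmt"
)

// shardPayment splits amt across channels' outbound balances, returning
// the amount to send through each channel. It fails if the total
// outbound liquidity is insufficient.
func shardPayment(amt int64, outbound []int64) ([]int64, error) {
	shards := make([]int64, len(outbound))
	remaining := amt
	for i, bal := range outbound {
		if remaining == 0 {
			break
		}
		shard := bal
		if shard > remaining {
			shard = remaining
		}
		shards[i] = shard
		remaining -= shard
	}
	if remaining > 0 {
		return nil, errors.New("insufficient outbound liquidity")
	}
	return shards, nil
}

func main() {
	// Five $10 channels can jointly carry a $50 payment.
	shards, err := shardPayment(50, []int64{10, 10, 10, 10, 10})
	fmt.Println(shards, err) // [10 10 10 10 10] <nil>
}
```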

Stephan: Obviously my understanding of this is not as deep as yours, but with the concept of splicing, which I understand as resizing the channels that you already have, is AMP like an alternative to that? Because you don’t need to resize the channels, you just use multiple ones?

roasbeef: Not necessarily. AMP is good because it reduces the pressure on the network to have really, really large channels. Before, let’s say I wanted to send a $1000 payment. I would need a much larger channel to send that. Now I can send $1000 over a series of $100 channels which is pretty good. Splicing and AMP are complementary. Splicing lets you do some cool things. One of the biggest things it will let people do is on the UIs. Right now, most of the UIs have two balances, an offchain and an onchain component, because your coins are in two different areas. With splicing you can combine that into one single balance, because now I can send a payment outside of my channel without having to close my channel. It is also the case that I might want to have most of my hot wallet coins in a channel because they’re more liquid there, I can move them more easily. Another cool thing that splicing allows is I can effectively give you an address and you can deposit into my channel. AMP is more about routing throughout the network itself. Splicing is pretty important for wallets in general because it really improves the UX of wallets. All of a sudden now I can instantaneously move in and out of my channels at will. The line between layer 1 and layer 2 blurs a lot more, to where it’s just: this is my wallet. My wallet can move small amounts quickly and large amounts a little bit slower depending on fees. At that point, once we have that in, users can trade fees for time. Maybe you pay lower fees and it’s going to be a little bit slower, or you pay more fees and it’ll be a little bit faster. That is a ratio you can modify depending on what you want to do with your payments.

Stephan: Again, my understanding is not as great as yours, so help me on this. An AMP payment is all on Lightning, but splicing requires an onchain transaction?

roasbeef: Yeah that’s true. What splicing does more or less is close a channel and open a channel in a single transaction. I broadcast a transaction that spends the old multisig and then creates a new multisig, and that multisig is the 2-of-2 for the brand new channel. The cool thing is that in doing that transaction we can add or remove funds from the channel itself. Let’s say I want to pay my friend and also deposit to an exchange, I can do that all in a single transaction. Every single Lightning implementation over time will have a pretty cool batching engine. It is something we’re working on in lnd where you can feed in all these transactions you’d do otherwise and coalesce them into a single transaction anytime you’re going to splice in or out. It’s going to make things more efficient as well because now we have this spot that we can all synchronize on, and we can send or receive payments via a single transaction onchain. It’s a pretty cool feature because batching will be a native thing in most implementations, and in the future once we get signature aggregation, things will be a lot cheaper and smaller from there on out.
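
*At the transaction level, a splice is just a spend of the old 2-of-2 funding output that creates the new 2-of-2 plus whatever is spliced in or out. The sketch below assembles that shape with btcsuite’s wire package; the scripts and txid are placeholders, and a real splice also needs both parties’ signatures and handling of in-flight HTLCs.*

```go
package main

import (
	"github.com/btcsuite/btcd/chaincfg/chainhash"
	"github.com/btcsuite/btcd/wire"
)

// buildSpliceTx spends the old funding output and creates the new one,
// optionally paying out spliced-out funds in the same transaction.
func buildSpliceTx(oldFunding wire.OutPoint, newFundingScript []byte,
	newCapacity int64, spliceOuts []*wire.TxOut) *wire.MsgTx {

	tx := wire.NewMsgTx(2)

	// Input: the old 2-of-2 multisig funding output. The witness
	// (both parties' signatures) is attached at signing time.
	tx.AddTxIn(wire.NewTxIn(&oldFunding, nil, nil))

	// Output: the new 2-of-2 funding output for the resized channel.
	tx.AddTxOut(wire.NewTxOut(newCapacity, newFundingScript))

	// Any extra outputs: paying a friend, depositing to an exchange,
	// etc., all batched into the same splice transaction.
	for _, out := range spliceOuts {
		tx.AddTxOut(out)
	}
	return tx
}

func main() {
	var fundingTxid chainhash.Hash // placeholder txid
	old := *wire.NewOutPoint(&fundingTxid, 0)
	payFriend := wire.NewTxOut(50000, []byte{ /* friend's pkScript */ })
	_ = buildSpliceTx(old, []byte{ /* new 2-of-2 script */ }, 1000000,
		[]*wire.TxOut{payFriend})
}
```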

Stephan: Excellent. Could you outline a little bit for me how batching will work within lnd?

roasbeef: It is something we’ve been working on for a bit. One of our engineers, Joost, had this idea when he joined to make a new form of sweeping. One thing Lightning implementations need to do is ensure they are able to sweep outputs from CSV or CLTV transactions. It is a very basic thing to do. We have something called the UTXO Nursery which incubates outputs until they mature because of a CSV or whatever else. We realized we could generalize that a little more into the sweeper in lnd. This is a system where, every single block, it is going to ask everything in the codebase whether it wants to sweep an output or not. Now we can use this as a primary synchronization point. Let’s say we’re doing a splicing transaction and I have Channel A and I want to add more funds to it, but at the same time we have a request to make a Channel B. Rather than doing one transaction for the splice and another for opening the new channel, we can combine those into a single transaction. Even further, let’s say I want to send a payment out at the same time. I can combine that into the single transaction too. The sweeper in lnd will become this really cool batching engine where ideally lnd has one transaction per block. That one transaction is doing things like opening channels, closing channels, sweeping HTLCs, sweeping the CSV outputs, sending payments, doing everything in that single transaction. This is pretty good for us because we’re going to save on fees, obviously, because we have fewer transactions onchain, and the system as a whole is more efficient because every single transaction is more batched. You can save more money when you batch transactions.
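
*A minimal sketch of the batching idea described here: subsystems register outputs they want swept, and once per block everything pending is coalesced into a single logical transaction. This is only the shape of the design as described in the conversation; lnd’s actual sweeper interfaces differ.*

```go
package main

import "fmt"

// SweepRequest is one output some subsystem wants swept: a matured CSV
// output, an HTLC, a channel open, an outgoing payment, and so on.
type SweepRequest struct {
	Source string
	Amount int64
}

// Sweeper collects requests and flushes them once per block.
type Sweeper struct{ pending []SweepRequest }

func (s *Sweeper) Register(r SweepRequest) { s.pending = append(s.pending, r) }

// OnNewBlock coalesces all pending requests into one batch, ideally
// yielding a single onchain transaction per block.
func (s *Sweeper) OnNewBlock(height int32) {
	if len(s.pending) == 0 {
		return
	}
	fmt.Printf("block %d: batching %d requests into one tx\n",
		height, len(s.pending))
	s.pending = nil
}

func main() {
	var s Sweeper
	s.Register(SweepRequest{"splice-in channel A", 500000})
	s.Register(SweepRequest{"open channel B", 200000})
	s.Register(SweepRequest{"send payment", 75000})
	s.OnNewBlock(600000) // all three share one transaction's fees
}
```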

Stephan: You mentioned a UTXO Nursery. Could you outline a little bit what that concept is?

roasbeef: This is something I made a while back. Whenever you force close a channel on Lightning you have a timelocked output as the person who… the transaction. This timelocked output has a CSV value, the CSV meaning there is a delay until you can sweep it. Back in the day, 2015 or 2016, I was like let’s make a UTXO Nursery, and the idea is anytime you close a channel you give the output to the Nursery, and the Nursery watches over the output until it is mature. By mature, I mean it can be swept. You can do the same thing for CLTV transactions. Before we can broadcast we give it to the Nursery and the Nursery handles figuring out the maturity itself. It was a cool abstraction that worked at that point but we’re working to generalize it a little more. I think the naming made sense. I have a kid output, it is in the crib and eventually it graduates to kindergarten. Kindergarten is when it gets swept into the wallet.
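
*The nursery metaphor maps onto a tiny data structure: record each output with the height at which it matures and graduate it once the chain reaches that height. A sketch with invented names:*

```go
package main

import "fmt"

// nurseryOutput is an output waiting out its CSV delay.
type nurseryOutput struct {
	name        string
	confirmedAt int32 // height the commitment tx confirmed
	csvDelay    int32 // relative locktime in blocks
}

// matureHeight is the first height at which the output can be swept:
// the CSV clock only starts once the transaction is in the chain.
func (o nurseryOutput) matureHeight() int32 { return o.confirmedAt + o.csvDelay }

func main() {
	crib := []nurseryOutput{{"force-close balance", 600000, 144}}
	tip := int32(600144)
	for _, o := range crib {
		if tip >= o.matureHeight() {
			fmt.Println(o.name, "graduates: sweep to wallet")
		} else {
			fmt.Println(o.name, "still incubating until", o.matureHeight())
		}
	}
}
```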

Stephan: I suppose from a privacy point of view that’s an improvement as well?

roasbeef: Yeah. We can do some really cool things with that in the future, whenever we move to having channels be either 2-of-2 ECDSA or 2-of-2 Schnorr. That’s really cool because at that point every single channel, assuming you’re doing cooperative closes where you’re just signing with the multisig, looks like any other regular payment. Right now, channels in Lightning are pretty identifiable because you have a 2-of-2 multisig and we’re the only people using 2-of-2 multisig in witness script hash with all the new SegWit stuff. In the future that’s going to look like a regular witness key hash, using ECDSA, or a new witness program type using Schnorr. The cool part about this is that now anytime you open or close a channel it looks like a regular payment. Let’s say I’m doing a splicing operation and maybe I have four inputs and I have some outputs and I also have my new multisig. That looks more like a send-to-many or a coinjoin. Now we’re increasing the anonymity set of Lightning users and regular users for Bitcoin payments as well. It may be the case that smart contracts and regular payments are indistinguishable, which is really good for privacy because the anonymity set is everyone using either Bitcoin or Lightning. In the future if it gets more popular it’s a lot harder to tell things apart.

Stephan: Fantastic. And then probing. Could you tell us a little bit about this concept? You’ve got different channels across the whole network. You as one node don’t necessarily know what’s going on in some other area of the network. How does probing help?

roasbeef: Probing can help… I’d say it helps the most in weeding out poor nodes or poor channels. This is something that Alex Bosworth has been working on for us a little bit. We have this system where we’re probing the network slowly to figure out where the bottlenecks are and also which… are viable. If you can know ahead of time that Bob always drops payments or he’s never really online, you can avoid him altogether. You may be able to weed out different nodes, to force them to close their channels if the channels aren’t being managed effectively. If we can give people some of this data they can have better pathfinding attempts because they know which paths are viable and which ones are not. There’s something in Tor where they have a directory authority. What they do is go around to the nodes, measure bandwidth and latency, stop people doing DoS, things like that. This is similar to that. Other people in the network can probe peers slowly to see if a channel is good or not. It’s basically improving the network by weeding out poor candidates. If you can ignore those candidates when you’re doing pathfinding, your route will be a lot more successful and you’ll be able to do things a lot more easily.
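
*One well-known way to probe, sketched below with invented names, is to send a payment with a random payment hash that no invoice matches: an “unknown payment hash” error comes from the destination itself, showing every hop was online and had liquidity, while any other failure pinpoints a bad node or channel.*

```go
package main

import (
	"crypto/rand"
	"errors"
	"fmt"
)

var errUnknownHash = errors.New("unknown payment hash")

// sendProbe attempts a payment along route with a hash that no invoice
// matches, so it can never actually settle. (Stubbed for the sketch.)
func sendProbe(route []string, hash [32]byte) error {
	return errUnknownHash // pretend every hop forwarded successfully
}

// probeRoute reports whether a route could carry a real payment.
func probeRoute(route []string) bool {
	var hash [32]byte
	rand.Read(hash[:]) // random hash: unsettleable on purpose

	err := sendProbe(route, hash)
	// "Unknown payment hash" comes from the destination itself, so the
	// probe traversed every hop: the route is viable. Any other error
	// identifies a failing node or an unbalanced channel along the way.
	return errors.Is(err, errUnknownHash)
}

func main() {
	fmt.Println(probeRoute([]string{"alice", "bob", "carol"}))
}
```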

Stephan: So overall it improves the strength and quality of your experience when you interact with the Lightning Network?

roasbeef: Exactly

Stephan: My understanding from listening to Noded with Rusty and Christian is that they were mentioning this concept that you want to preserve privacy where you can in the Lightning Network, and only when there start to be many routing failures do you introduce things that slightly weaken privacy so that you can improve the experience. Can you chat a little bit on that?

roasbeef: I’m not sure exactly what they’re talking about there. Via probing you can weed out bad nodes. I think they are talking about a thing where you can identify your probing attempt as a probing attempt, therefore people won’t really worry about a timelock. The other cool thing about probing is that it adds cover traffic for the network as a whole. If every single node is randomly sending payments to another node that they know are going to fail, that’s extra cover traffic for regular payments. Even with onion routing you still have a bunch of vectors as far as timing and packet analysis. If there’s a steady state of volume on the network it’s a little bit harder to tell if someone sent a payment or not. Let’s say there were no payments going on the network at all and only one person sent a payment. That may be more identifiable to someone that is monitoring a bunch of links, versus a bunch of noise going on where everyone is sending payments back and forth and you send your payment underneath there. This is what mixnets do to have privacy preserving communication on the internet, where they basically have a bunch of cover traffic going back and forth. They do things like random delays, they may even add dummy traffic. I may randomly route payments towards myself in a circle to ensure they can’t tell when I’m receiving a payment myself. They don’t know who the destination or origin is. It is kind of a supplemental thing on top of the things we have already, as far as onion routing and picking peers to be distributed.

Stephan: Another big topic I wanted you to touch on was wumbology, so can you give us some background on what wumbology is?

roasbeef: Wumbology is the study of wumbo and wumbo is the opposite of mini. It’s basically from this episode of SpongeBob where he has Mermaid Man’s belt and he’s like “Hey you’re doing it wrong. We haven’t accepted mini… set W for wumbo.” He’s like “What’s wumbo?”. “Wumbo is like, I wumbo, you wumbo, the study of wumbology.” It basically just means big, the opposite of mini. It is a joke we developed in terms of making channel sizes a little bit larger. Some people may not know, right now on the network we have two limits as far as payments. The first one is a limit on channel size which is 0.16 BTC. The other one is a limit on the largest payment which is 0.04 BTC. These were set initially back in the day as training wheels because things are still very new and we’re also figuring things out for ourselves. We didn’t want people to throw a bunch of money onto the network before things were robust. I think maybe we’re getting more comfortable with our implementations. In the future we may have this special feature bit. On the network, whenever two nodes connect they exchange information telling each other what they support. Let’s say I connect to you and I see that you have the wumbo bit set, which means that you allow big channels, and I have the wumbo bit set as well. At that point we may be able to make a 1 BTC channel which is far larger than the current limit on the network right now. Because it is double opt-in, only once two implementations have agreed they’re ready to support it will we roll it out. Because we have this feature bit system it’s pretty easy to introduce new features in the future, because all of a sudden bit 25 means this and then we can go on from there.
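
*Feature-bit negotiation is easy to sketch: each peer advertises a bit vector in its init message and an optional feature applies only when both sides set the bit. The bit number below is taken from the aside in the conversation purely for illustration; it is not a real assignment.*

```go
package main

import "fmt"

// FeatureVector holds a peer's advertised feature bits from its init message.
type FeatureVector map[uint16]bool

const wumboBit uint16 = 25 // illustrative bit number from the conversation

// maxChannelSize returns the largest channel (in satoshis) we will open
// with a peer. The ~0.16 BTC cap applies unless BOTH sides opt in to wumbo.
func maxChannelSize(ours, theirs FeatureVector) int64 {
	if ours[wumboBit] && theirs[wumboBit] {
		return 100000000 // e.g. 1 BTC once both sides opt in
	}
	return 16777216 // 2^24 sats, the current channel-size limit
}

func main() {
	us := FeatureVector{wumboBit: true}
	them := FeatureVector{wumboBit: true}
	fmt.Println(maxChannelSize(us, them)) // 100000000
}
```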

Stephan: Fascinating. The network has grown quite a lot over the last few months. Have you guys done much in terms of benchmarking how the network operates?

roasbeef: We’re doing two things. One of the things we’re doing is poking different nodes around the network to see how readily they are able to accept payments and also forward successfully themselves. It’s something that is pretty cool because now we can develop a framework to analyze different nodes on the network in order to suggest channels to different users. That’s more of a quality of service thing. It may be the case that even though you have a lot of BTC on the network, you may not be effectively managing your routing node. If you’re not effectively managing your routing node in terms of liquidity and balancing your channels you may not be a good candidate. That’s one of the things people have trouble with at times. Just because a node has more channels and more BTC on the network, it doesn’t necessarily mean it’s going to be a good candidate for routing. It’s a thing of information asymmetry and the skill of the operator, how aware they are of running routing nodes themselves. Another point in terms of performance. Back in the day, there were a bunch of estimates on lnd, actual benchmarks. The protocol ran a bit differently then. It basically had more of a window for pipelining throughput. I got 2 or 3K transactions per second on lnd. These days, because we have a lot more persistence and there are a lot more checks, maybe it’s around 800-900 transactions per second for a channel. That’s on a single channel, and the way lnd is designed is that it can really scale across all cores pretty easily. Right now, the main bottleneck as far as forwarding throughput is disk I/O, meaning writing to disk a lot. That’s not really optimized at this point. I think those are pretty good numbers because things aren’t optimized at all. I think it will be the case that only once we see ourselves reaching those numbers on a regular basis will we optimize that. Beyond that there are a lot of safety things that we’re working on and quality of service, reliability of lnd itself. Once all that’s done, if people’s nodes are falling over because there’s some new streaming video or Candy Crush game or something like that, then we’ll tweak things a bit more and try to optimize it.

Stephan: With that 900 transactions per second, was that just using lnd in a test environment with another lnd?

roasbeef: Yeah it was two lnds at that point with pretty manageable latency but doing real disk I/O. If you put it on the network, it gets similar to that. There are two things to point out. Right now there are two caps in the protocol. One of which is a cap on the number of outstanding HTLCs, which is the number of pending HTLCs you can have that aren’t yet settled. If this number were like 10, that would really reduce our throughput, but right now it’s around 900 cumulatively which makes it a little better. That number can be bumped in the future. Another thing from the past, before we wrote the specifications: you were able to have multiple remote commitment transactions which would let you pipeline these updates a lot more quickly, similar to the way a TCP sliding window works. That was removed for simplicity but it may come back at some point in the future.

Stephan: What about benchmarking routing success or failure? What percentage of payments successfully route? I assume you guys have done some benchmarking on that?

roasbeef: Do you mean nodes that are actually on the network that people are operating?

Stephan: Sometimes one of the angles people say is “oh look I tried to send this payment but I just get all these routing failures”.

roasbeef: I feel that is more of an issue, again, of the skill of a routing operator. It’s not really at the state yet where you can dump money in, set and forget. You have to be actively managing your channels. One thing we’re doing on the network right now is trying to identify good nodes, as in they’re able to route payments pretty effectively. That’s something Alex Bosworth has been working on in the past few weeks or so. He probably has better results than I do right now. He’d be a better person to ask about the way things are going right now. Typically most of those issues are nodes being down or people not balancing their channels effectively. In the future we’ll have a lot more tools coming out to make these things a lot easier to use. Right now there are a lot of routing operators that are really excited about this stuff but they maybe don’t really understand the implications, what to do or all the settings. I think with time, as we get more tools out there and also more guides and resources, all of that will improve a good bit.

Stephan: Is the idea that autopilot will also help do some of that channel management as well, in terms of rebalancing, splicing, that kind of thing?

roasbeef: Yeah definitely. The autopilot system we have right now is one that I wrote a bit back, it is pretty basic. All it does is assume a scale-free network, which means there’ll be a low degree; the actual path length for the shortest path should be pretty low. Something we’re doing right now within Lightning Labs is working on the next generation of that system. One of our engineers, Johan, and also Alex and Joseph are working on it. Right now, the system has a single heuristic. We’re working on modifying that to make it multi-heuristic. Rather than looking at one feature, one attribute, it’ll go on a series of attributes. There will also be the ability to add your own external coordinate into the system itself, which will let you add supplementary information that may not be completely identifiable from the graph itself. In the future it’ll grow to do things like actively managing channels. Let’s say you have Channel A and Channel A isn’t getting that much volume but I have a lot of money in there. Channel B is doing a lot but Channel B isn’t able to catch as much volume because it doesn’t have enough liquidity in the channel. What the system can do is identify that, take away money from Channel A and put it towards Channel B, in the hope that I can get more money out of Channel B versus Channel A. It can be a resource management engine for both your channels and also your payments. The cool thing is that you can share data between these two systems, as far as making channels and sending payments, because there may be some overlap in there.
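
*The multi-heuristic idea can be sketched as a weighted combination of per-node scores, each heuristic scoring candidates in [0, 1]. The heuristics and names here are illustrative, not the actual Lightning Labs design.*

```go
package main

import "fmt"

// Heuristic scores a candidate node for a new channel in [0, 1].
type Heuristic struct {
	Name   string
	Weight float64
	Score  func(node string) float64
}

// scoreCandidate combines all heuristics into one weighted score.
func scoreCandidate(node string, hs []Heuristic) float64 {
	var total, weights float64
	for _, h := range hs {
		total += h.Weight * h.Score(node)
		weights += h.Weight
	}
	return total / weights
}

func main() {
	hs := []Heuristic{
		// Degree in the graph (roughly the original single heuristic).
		{"connectivity", 1.0, func(n string) float64 { return 0.8 }},
		// Measured uptime, an example of external supplementary data.
		{"uptime", 2.0, func(n string) float64 { return 0.95 }},
	}
	fmt.Printf("%.2f\n", scoreCandidate("bob", hs)) // 0.90
}
```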

Stephan: Also related to channel management, you were mentioning there transferring the balance across channels. Another concept, which I’m sure you’ve spoken about, is the dual funding aspect of it. Is the idea that once dual funding comes in, it would be the new default for channel opening?

roasbeef: I don’t think so necessarily. I feel that dual funding is one of those things that people think is going to solve all the issues. In the initial version of lnd, the first thing I coded was dual funded channels. lnd today supports dual funded channels, we even have tests for it. It is just not yet exposed on the network or on the RPC. The thing about dual funded channels is that they require more upfront negotiation. Even if I think you’re a good person, I may not want to put money into a dual funded channel. At that point, if you go down, my coins are locked there too. The difference is that with a single funded channel I can instantly open a channel with anyone else without much thought, because they’re going to accept the channel. With a dual funded channel, because you both have money up, it’s a bit more of an involved relationship. You may not want to accept a dual funded channel from just anybody but you may be happy to accept a single funded channel. The other thing is that there really aren’t good tools yet for rebalancing channels and managing liquidity. In the past few months we’ve exposed some tools in lnd, some RPCs, we’ve seen some people writing open source tools, and we’ve got some coming out ourselves. I think dual funded channels are important, but I think they will probably be used more heavily amongst businesses or exchanges rather than in the wild on the network, just because they do require a more thoughtful relationship between the two parties compared to a single funded channel.

Stephan: I see. One of the arguments I’ve seen is from this computer science professor, I’m not sure if I’ll pronounce his name correctly, Jorge Stolfi. He made an argument that routing won’t be feasible. He was trying to say finding a viable payment path in the LN requires knowing the channel states, which are unknown by definition. There is not enough data to solve the pathfinding problem no matter how many developers you put to work on it. Greg Maxwell did weigh in with his thoughts on this which, as I understand it, is that Lightning is not dependent on absolutely perfect routing, therefore you don’t need to know all that global information and you don’t need it to be completely current. Do you have similar thoughts or would you articulate a different answer to that challenge?

roasbeef: I think he’s totally right. Unlike Bitcoin, every single routing node does not need to have the same view of the network as a whole. If you’ve ever seen any of these explorers and you’ve seen some have different numbers to others, that’s because you don’t need a global view of the network. You also don’t need a synchronized view amongst every single node in order to do routing successfully. I think he’s looking at this really idealized model, but it is one of those things where, in practice, that stuff doesn’t really matter. There are a number of impossibility theorems in CS, or things that are deemed very hard problems. Typically, in practice, if you add a little bit of randomness then it’s fine. You may not be able to find the theoretically optimal solution, but you have an ok solution which will work in practice. I think it’s funny where, when you’re working on a problem that someone tells you is impossible, you’re like, I’ll work a little bit harder on this. I think it is the case that you don’t need more information because you can make adjustments. Because you can iterate, make adjustments and feel things out a bit more on the network, you can more effectively route. Right now we have something in lnd called missionControl. It learns from your past attempts which routes are more likely to be effective in the future. Once again, it is a pretty basic version that we have right now, but some of our engineers, Johan and Joseph, are working on a more advanced system which will factor in the probabilities that certain routes work well based on past experience. It is one of those things where you don’t need global information, and if you remember which routes worked before, you can bias yourself to go to Bob again because he did things more successfully in the past. Then you have a much higher success rate when you do routing.
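
*A toy version of that learning loop: remember per-edge successes and failures and prefer routes whose edges have worked before. lnd’s missionControl is more involved; these names and the smoothing rule are invented for the sketch.*

```go
package main

import "fmt"

type edge struct{ from, to string }

// history tracks past payment attempts per channel edge.
type history map[edge]struct{ ok, fail int }

// edgeScore estimates success probability with a simple smoothed ratio,
// so unseen edges default to 0.5 rather than 0 or 1.
func (h history) edgeScore(e edge) float64 {
	s := h[e]
	return (float64(s.ok) + 1) / (float64(s.ok+s.fail) + 2)
}

// routeScore multiplies edge probabilities: bias toward routes whose
// hops (like Bob) succeeded in the past.
func (h history) routeScore(route []string) float64 {
	p := 1.0
	for i := 0; i+1 < len(route); i++ {
		p *= h.edgeScore(edge{route[i], route[i+1]})
	}
	return p
}

func main() {
	h := history{}
	h[edge{"me", "bob"}] = struct{ ok, fail int }{ok: 9, fail: 1}
	h[edge{"me", "carol"}] = struct{ ok, fail int }{ok: 1, fail: 9}
	fmt.Printf("%.2f vs %.2f\n",
		h.routeScore([]string{"me", "bob", "dest"}),
		h.routeScore([]string{"me", "carol", "dest"}))
}
```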

Stephan: You mentioned there’s also the potential of retrying. You could try the payment, it might fail, but then you can retry and it will find another route.

roasbeef: Exactly. Just like on the internet, whenever I’m sending a packet somewhere, it may have failed three or four different times due to timeouts or something like that but to the end user that was never surfaced. It is the same thing in Lightning. Even though you maybe had five attempts and the last one worked, that’s ok as long as the latency is suitable for the end user. Whenever you’re doing anything, the idea is always that the lower levels hide a lot of the details and give you a much more abstracted presentation of what’s going on.

Stephan: I think that pretty much answers that question. How about Neutrino? That’s BIP 157 and BIP 158. Before we get into this can you maybe give an overview on what Neutrino is and what problem it’s solving for the non-technical people?

roasbeef: Neutrino is basically a new light client or SPV mode for Bitcoin. Typically the version people are using right now on their phones came out in maybe 2013 or so. That’s BIP 37. The way BIP 37 works is it puts all your addresses into a bloom filter, which is this probabilistic data structure, meaning that sometimes it’s going to tell you something is in there but it may not actually be in there. What the BIP 37 nodes do is send this bloom filter off to the full node, and then for every single block the full node can check against the bloom filter to see if an address is yours and then send you the block. This had some issues because you’re giving all your addresses to the full node and they can do things like collect all these bloom filters and intersect them to figure out what your addresses are. The other thing that was an issue with BIP 37 was that it caused a lot of strain on full nodes. There are a bunch of nuisance DoS attacks that you can do. You can have a bloom filter that matches everything and causes the full node to always send every single block. The full node had a bunch of state for each client, meaning that every time a client connected to it, it had to hold more and more state. More generally, it is very difficult to manage the filters themselves. The idea was that the clients would themselves manage the false positive rate of the filters and tune them depending on what was going on. In practice, they didn’t do that effectively and they had some issues with that. So what Neutrino is, it is based on this old post, I guess it came out in 2016, by an anonymous poster on the Bitcoin mailing list called bfd. It was also developed on IRC in #bitcoin-wizards. The idea is to flip it. Rather than you giving the full node a filter, the full node gives you a filter. The filter contains information on what addresses or outputs may be in that block. That is what Neutrino is. Every single block, we can download a filter, we can check that filter locally to see if any of our addresses are in that filter, and then we download the block itself. The cool thing about this is that I can download the block and the filter from two distinct nodes, or I could even download the block using some fancy cryptography in a way that allows me to fetch the block indistinguishably. This is really cool because all of a sudden you can use this for a lot of other things. For example, rescans on different full nodes. Typically they have to read the entire block to scan the thing for addresses. Instead they can read these filters and check them, which makes things a lot faster. I think it is a much better way to build applications on top of Bitcoin because typically for most applications, as far as contracts on Bitcoin, you need to know if something happened in a block. With the filter, I can find out whether something happened in a block or not because you download the filter and you can check it there. That’s an overview of Neutrino.
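
*The client-side flow is short enough to sketch end to end: for each block, fetch the compact filter, match your scripts against it locally, and download only blocks that (possibly falsely) match. Filter matching is stubbed here; real BIP 158 filters are Golomb-coded sets (e.g. btcsuite’s gcs package).*

```go
package main

import "fmt"

// Filter is a per-block compact filter (BIP 158) fetched from a peer.
type Filter interface {
	// MatchAny reports whether any of the scripts may be in the block.
	// False positives are possible; false negatives are not.
	MatchAny(scripts [][]byte) bool
}

// stubFilter stands in for a real Golomb-coded set for this sketch.
type stubFilter struct{ contains bool }

func (f stubFilter) MatchAny(_ [][]byte) bool { return f.contains }

// scanChain downloads only the blocks whose filters match our wallet
// scripts. Filters and blocks can even come from different peers, and
// the peer never learns which scripts we were looking for.
func scanChain(heights []int32, getFilter func(int32) Filter,
	getBlock func(int32), walletScripts [][]byte) {

	for _, h := range heights {
		if getFilter(h).MatchAny(walletScripts) {
			getBlock(h) // fetch and scan the full block locally
		}
	}
}

func main() {
	getFilter := func(h int32) Filter { return stubFilter{h%100 == 0} }
	getBlock := func(h int32) { fmt.Println("downloading block", h) }
	scanChain([]int32{99, 100, 101}, getFilter, getBlock, nil)
}
```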

Stephan: My understanding is that Neutrino is now in lnd and Lightning app alpha for testnet. Has the use of it been successful in testnet?

roasbeef: Yeah. I think people like it on testnet because all of a sudden you don’t need a full node, where otherwise you’d need to wait several hours, six or seven plus hours if you have a really good computer, to download the entire blockchain. Obviously you have space implications too. One thing I forgot to mention in favor of Neutrino is that BIP 37 has a bug where the filter can lie by omission, meaning that an event happened but it won’t tell you about it. If you’re dealing with a regular wallet that’s not so bad because you have some money there. In the case of Lightning, when you need to act on an event in the blockchain in a timely manner… a notification may cause you to miss an event. Right now it is on testnet, people are using it on testnet using lnd. We also use btcd to implement the P2P protocol. You can sync pretty quickly and get up to speed on the network in a timely manner. I think this is what people are going to be using as far as end users on their phones and their laptops. Right now it is mostly advanced users because they need to be able to set up and run a full node, but once we get Neutrino out there on mainnet and beyond, it is going to be as easy as downloading an application on your desktop or on your mobile phone. At that point the fun really begins because people can ship applications more easily, because all of a sudden they have a wider user base and there’s less maintenance on their end as far as full nodes.

Stephan: Do you foresee Neutrino style wallets becoming the norm for mobile wallets or are there any other negative tradeoffs associated with it?

roasbeef: I think they may. In my opinion it is much simpler than coding up BIP 37. With BIP 37 you have to do active management of your filter on the client side in order to ensure you’re managing your false positive rate effectively. Also on the full node there’s work to do as far as tracking filters for every single client, updating the filter and things like that. Codewise it is much simpler. There are some drawbacks of course, everything has drawbacks. One of the drawbacks is you’ll have higher bandwidth usage than normal, because typically with BIP 37 you’re only downloading a Merkle proof of the transaction from the full node, which can be pretty compact. With Neutrino, you’re actually going to get the entire block. It’s not as bad because with SegWit you can get a block without the witness data, meaning you can get a block without all the signatures, which can be a large proportion of the block. This makes the blocks a lot lighter, maybe you download 60-70% of the block versus the entire block itself. Another of the drawbacks of Neutrino is that we don’t get unconfirmed transactions. Because we’re always getting filters from actual blocks, we don’t know a transaction is in the mempool. As a light client you aren’t able to validate that an unconfirmed transaction is valid anyway because you don’t have the UTXO set. It’s more of a user experience thing. People are typically used to, whenever they get sent a payment on their phone, they see ok boom, there it is. It is one of those things where we can add this onto Neutrino as well, either by fetching every transaction or via this idea discussed on the mailing list where you get a stream of the address and the amount from a full node for every single thing in their mempool. You’d be able to effectively filter this and have a UI indicator of ok boom, here’s a payment you’ve been receiving.

Stephan: If I understand you correctly, Neutrino style mobile wallets will definitely show you all your confirmed transactions that have been sent to you but as they currently exist, if it is an unconfirmed transaction that has been broadcast but not necessarily confirmed in a block, that will not show in your Neutrino style wallet. As I understand it, you’re saying there are ways to mitigate that also.

roasbeef: There’s ways of doing it on top of it to get that. For our use case we don’t really care about things in the mempool because the mempool doesn’t exist and you only care about blocks. It is very difficult for you to try to spend things in the mempool because it could be double spent, there could be other things like that. Once you’re in a block and you have some confirmations it’s a lot easier. To me, unconfirmed transactions in a wallet are really just more of a UX thing. People want an instant experience where you onboard someone to Bitcoin and they say “wow that arrived right there on my phone”. We can emulate that via other mechanisms but we also have Lightning for these instant payments in the future.

Stephan: Are there any reasons that Neutrino technology would be difficult to implement into all the other mobile wallets or do you think it is going to come over time?

roasbeef: I think it may come over time. We’ll see how BIP 157 and 158 emerge. Some people on the network right now disable BIP 37 altogether. It was seen in the past that some agencies were using it to probe your node for your mempool or for deanonymization attempts. Some have a flag in place to disable BIP 37. I think people will switch over if they’re comfortable with the tradeoffs of Neutrino. Like I was saying, from a code perspective it’s a good bit simpler because there’s less active management. Right now, we’re in the first stage of deployment of Neutrino where you basically get the filter and you get a committed history from every single one of your peers about the validity of that filter. In the future, that may be committed into a Bitcoin block, similar to the way we have a commitment for all the SegWit witness data. It could be the case that rather than using this thing on the side to detect invalid filters, you get it in the block, which gives you a high level of security.

Stephan: This is quite difficult for me to follow along but definitely very interesting stuff. What would you say is at the top of your wishlist for changes that you would like merged into Bitcoin Core?

roasbeef: I’ve got a lot! One of the things which is lowest hanging fruit, and that does a lot for offchain protocols in general, is SIGHASH_NOINPUT, which is a way to allow signatures to be a little more liberal. The way signatures work typically is people use something called SIGHASH_ALL, which says I am signing all of the inputs and all of the outputs and their values and everything else. The idea of that is if someone takes your transaction and modifies it in any way, your signature becomes invalid. This is a pretty big thing because otherwise someone could take your transaction, modify the output value or even stick in their own address, and all of a sudden you send your money to them. That’s why SIGHASH_ALL is the most conservative one and what people use generally, but there are other SIGHASH flags that can be used for other things. For example, the use case most people know about for SIGHASH values is from back in 2014, 2015, when Mike Hearn made this thing called Lighthouse, which is where you can do a crowdsourced, crowdfunding type thing on Bitcoin. That used SIGHASH_ANYONECANPAY, which lets people collaboratively make a much larger transaction. NOINPUT is taking everything back and saying I only want to give you a signature that is valid as long as the witness is satisfied, which means I can pull money out. This is cool because when you’re making contracts on Bitcoin you’re doing nested transactions, and those nested transactions depend on another transaction before them. If that other transaction is broken in any way then you can’t pull them in effectively. SIGHASH_NOINPUT opens things up for you to do a lot of really cool things. The things that I’m most excited for with NOINPUT are better ways of doing fees, things like eltoo which needs it, and also a number of things to do with multiparty channels, which I talked about at Scaling Bitcoin. They are only really possible once you have NOINPUT. It is one of those things where it is a very small change to the system, but the things you can do really blow up once you have that change in there, which I’m really looking forward to. Another thing I’m looking forward to is something called CHECKSIG_ONSTACK, which is the ability for you to check an arbitrary signature in Bitcoin script itself. All of a sudden I can make a special address that I can spend with my key, but if you give me a key of yours and I sign that key, I can basically delegate that amount to you. You can do a number of cool things with CHECKSIG_ONSTACK: micropayments, probabilistic payments, oracles, delegation. Beyond that, there are a number of other things, but I feel these are the things that people generally agree are good ideas and they are also pretty small changes to the system itself.
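
*To make the delegation idea concrete, here is a toy Go sketch (using btcsuite’s btcec, outside of script) of the two checks a CHECKSIG_ONSTACK-style script could enforce: the owner signs the delegate’s key, and the delegate signs the actual spend. Nothing here is a real opcode or proposed encoding.*

```go
package main

import (
	"crypto/sha256"
	"fmt"

	"github.com/btcsuite/btcd/btcec"
)

// validateDelegatedSpend mimics what a CHECKSIG_ONSTACK-style script
// could enforce: the owner signs the delegate's key (a signature over
// arbitrary data, not a transaction), and the delegate signs the spend.
func validateDelegatedSpend(owner, delegate *btcec.PublicKey,
	delegationSig, spendSig *btcec.Signature, txDigest []byte) bool {

	// 1. Owner authorized this delegate key.
	keyHash := sha256.Sum256(delegate.SerializeCompressed())
	if !delegationSig.Verify(keyHash[:], owner) {
		return false
	}
	// 2. Delegate authorized this particular spend.
	return spendSig.Verify(txDigest, delegate)
}

func main() {
	ownerPriv, _ := btcec.NewPrivateKey(btcec.S256())
	delegatePriv, _ := btcec.NewPrivateKey(btcec.S256())

	// Owner delegates by signing the delegate's compressed public key.
	keyHash := sha256.Sum256(delegatePriv.PubKey().SerializeCompressed())
	delegationSig, _ := ownerPriv.Sign(keyHash[:])

	// Delegate signs the (stand-in) transaction digest.
	txDigest := sha256.Sum256([]byte("some spending transaction"))
	spendSig, _ := delegatePriv.Sign(txDigest[:])

	fmt.Println(validateDelegatedSpend(ownerPriv.PubKey(),
		delegatePriv.PubKey(), delegationSig, spendSig, txDigest[:]))
	// true: the owner delegated to the key that signed the spend
}
```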

Stephan: What service or product would you like to see built on top of Lightning Network or using lnd?

roasbeef: There are a lot. I think one of the most impressive things during the past year or so is all the development that’s been going on on top of lnd. People have been building a lot of cool things, there are different hackathons, the Chaincode Residency, the Lightning Hack Days going on right now. One thing that I thought about a while back, which someone has built now and which I’m really excited about, is a Chrome extension that uses lnd and macaroons to allow different websites to have a very seamless user experience. Rather than you clicking OK in your wallet, you basically just click the upvote and the payment happens in the background. I think that’s a really cool thing as far as UX in the system, as it really lowers the cognitive burden of doing micropayments, because rather than approving every single payment that’s happening, which can be pretty taxing, you can maybe set a budget or give a particular node some capabilities. Another thing I’m pretty excited for building eventually, which I gave a talk on in 2016, is something I call HTLC-DASH. Whenever you stream video on YouTube or some other site, they have this adaptive streaming protocol, meaning that anytime your quality dips low, that’s maybe because your bandwidth was getting choked up or… The idea is to layer Lightning payments on top of that, where maybe I would pay you less for a 40 versus 10 video, and every single chunk that you give me is an HTLC within the system itself. I’m paying for the next thirty seconds at a time and I’m paying for those thirty seconds within Lightning itself. It fixes the fair exchange problem. You can effectively have streaming videos, podcasts, movies or whatever else paid for over Lightning by the minute or by the second, which is a pretty cool thing. Once people get in the realm of doing really cool streaming activities, maybe I’m getting my paychecks streamed to me by the minute or maybe I’m streaming video, I think it is a pretty cool use case that I’m excited for. That combines with other UX enhancing things like the Joule Chrome extension. Another thing that I think is cool is gaming in the context of Lightning. You can either accept payments for your game or you can sell… for your game or in-game payments. I can maybe have a goal that I buy for Lightning, or when I beat the boss it gives me some payment or something like that. I think that’s a pretty cool use case that I’m looking forward to as well.

Stephan: It looks like there are potentially a lot of different business models that can be enabled with Lightning. As you mentioned, streaming on-demand, gaming, all sorts of new services, we’ll see but it may help reinvent the internet as we know it and reduce some of the reliance on advertising and clickbait.

roasbeef: Yeah and also removing a lot of these intermediaries like Stripe or whoever else. I think one other thing that people underestimate is that when we have Lightning it is easier than ever for you to put something up on the internet and accept or send payments. Before this it was more difficult because you had to go to some other third party provider, give your credit card information and do all this KYC and everything else which is pretty limiting. At any given time they can shut down your account. With Lightning, I just have my program, I put it up on the internet. I have my node, I say hey, let me get some connections. All of a sudden I can accept payments with very little cost. The setup costs are very low because we have these great open source tools that are being put out by different developers. This hasn’t been possible before, now the barriers to entry are much, much lower. There were things people wanted to do but they couldn’t do in the past because of the legacy system. Now that Lightning is there, it’s a lot more open and transparent, there’s open source implementations. I think now we’re really starting to see people put these things together and have these really cool models that couldn’t exist before but are able to exist now due to Lightning.

Stephan: Fantastic stuff. One other topic that we touched on briefly on that day when you came to Sydney. We were touching on this concept that the Lightning spec might be a little more well defined, such that you can have multiple implementations of Lightning. If you compare to Bitcoin, people are much more reliant on Bitcoin Core as being the standard bearer. What are your thoughts there in terms of the comparison between how Lightning works versus Bitcoin’s consensus?

roasbeef: Lightning itself is an overlay network, it depends on layer 1, on Bitcoin. I have opinions about layers and things like that. It depends on Bitcoin to be the robust core to adjudicate any sort of disagreements on the base network itself. I think the main point is that on Bitcoin, if you have a consensus fork you may lose money. They may double spend you or whatever else, the risk is much, much higher. In Lightning, if two nodes disagree on what the current commitment state is, that’s fine, they can go to the blockchain and close the channel. I feel that the risk of disagreement at the protocol level for Bitcoin versus Lightning is on a different level. On Bitcoin it’s a lot more dire because you can have all these double spend attacks and whatever else. We’re seeing all this stuff play out with all these altcoins. I feel that Lightning is a lot more flexible. There are maybe three or four different kinds of update. One of the updates is an end-to-end update. For example, AMP is an end-to-end update. With AMP, only the sender and the receiver need to upgrade to use the protocol. All of a sudden this is pretty flexible because developers can be very creative. They don’t really need every single node on the network to understand this new protocol. There’s also a link level update. Let’s say we have a new channel with some new channel type, the fixed channel integration or whatever else. They can start to use that channel on the network today, as long as they have the same end-to-end HTLC that works on the network right now. That brings us to the final upgrade type, in my opinion, which is a network level upgrade. Let’s say we were to move to a different hash function within Lightning. Right now we’re using SHA2 and RIPEMD. If we were to move to a different hash function, we would fragment the network because now you can’t route along a single path across old and new nodes. This is something we may do in the future because we’ll be moving to a Schnorr/ECDSA based HTLC versus the regular hash based HTLC. It is a lot more flexible because you have these different upgrades. The main thing is that you don’t need global consensus on every single change or global consensus on what the current state is. That’s the big thing that makes implementations difficult on Bitcoin. That’s why people are pretty squeamish when we talk about writing a new implementation of Bitcoin, because they are cognizant of the risk at that level.

Stephan: Fantastic insights. How about things at Lightning Labs? Do you have any updates on what’s going on with Lightning Labs, what to expect coming up?

roasbeef: There’s a tonne of cool things. We don’t talk about it much because we’re really heads down. We try to ignore the noise out there and whatever is going on. We really try to continue to execute. The highlights that are coming up are Neutrino on mainnet, that’s coming pretty soon. That’s going to be in a few different flavors. One of the flavors is going to be our desktop application, which right now is on testnet and is being worked on by two of our engineers, Val and Tankred. We also have mobile coming out pretty soon as well, which will be initially on iOS and then also on Android. Another thing we’re doing is a lot of work on the backend, in terms of improving the backbone of the network. These are things like what Alex Bosworth is working on, as far as different ways of doing probing, in order to ensure we have nodes to connect our users to. We’re doing a lot of things in terms of revamping autopilot, which is pretty basic right now, taking a lot more different heuristics, making it a lot more intelligent. There are a number of other things we’re doing as well, for example Conner recently got an end-to-end test of watchtowers working, I think last night. It was something people were pretty excited about. All of a sudden now you can have more assurances as far as safety on the network. We’ll have that rolling out pretty soon here too. On top of that we have a number of cool things we haven’t talked about yet coming out, as far as added things on either lnd or the application side of things. I think there are a lot of really cool things coming up right now. Some of them will be unveiled in the next few weeks or month, which we’re pretty excited about.

Stephan: In terms of hiring, are you still looking for new developers or any other talent?

roasbeef: Yeah, I guess we’re always hiring. Right now, we’ve been really heads down to ensure we can ship these things in the next year or so. Early next year we’ll be hiring across front end; we always need protocol developers, people that have been working on the core protocol, that know Bitcoin and Lightning pretty well. Also we’ll be looking in the future for more SRE or DevOps types once we start to build out these systems that we’re working on. We’re definitely continually hiring, you can check out our website lightning.engineering and then the careers page and the team page. One of the most advantageous things about writing an open source project as a company is that many of our hires, maybe like four of them, were sourced from contributions to lnd or our application. This is pretty cool because if someone is putting up a really good PR, it demonstrates a lot of things as far as their ability to communicate effectively, receive feedback, use Git well, what their code style looks like, testing and things like that. We can save a lot of time on recruiting and interviews because we have lnd and our open source projects. The thing is that someone can’t fake a good PR, it is like proof-of-work. You put up a good PR and it’s like, wow, they’re serious about this, they’re committed. Many of our hires were former contributors to lnd. Contributing to lnd is a great way to be noticed. We’ve had a lot of cool people coming up. I think lnd has continued to grow more and more. Right now we have like 3.3K stars on GitHub. I think we’ve added 2K stars this year alone. I think it is a really cool project. It’s probably one of the fastest growing open source projects in the Bitcoin space right now. We have tonnes of contributors and every single release we have new contributors, which is pretty great.

Stephan: The Lightning Network more broadly, what’s the outlook for Lightning Network over say the next year?

roasbeef: I think over the next year we’ll be rolling out a lot of these changes in terms of the 1.1 version of the specification, which includes anticipated features like dual funding, splicing, AMP, and a bunch of cool things on the protocol as far as better error support and things like that. I think over the next year, the main thing will be improving the backbone of the network. Things I was talking about like ensuring we have reliable nodes and getting more education out there. It is in a state right now where there are a few node operators who really know what they’re doing. Others are just spinning up nodes and sitting there. We’ll be getting more guides out there to ensure people know what is going on and what the different knobs are. I think as time goes on, we’ll have a better idea of what it takes to be a routing node operator and to do things effectively. I think over the next year we’ll see a bunch of work on the UX front of things. Once splicing is out, the UX becomes a lot easier. Now we can show a single balance in the wallet rather than maybe two or three balances. As we go on, the network will become more and more reliable because people will be able to more effectively manage their capital. I think we’ll also see a number of cool things coming out as far as people using the network in cool ways. The other day I saw this cool demo, it was like P2P chess where you can bet on a game and have that be disbursed. All that becomes a lot easier once we have really good client implementations out there on light clients and desktop, and once the infrastructure side is a lot better.

Stephan: I’m so excited, there’s really so many cool things going on.

roasbeef: There’s so much stuff. I have to keep up with this stuff and also company stuff and even Bitcoin stuff. There’s a lot going on; every day on Twitter I see new things, on IRC and on Slack. Everyone is really excited. One thing Lightning did for Bitcoin is that people got reinvigorated by application development on Bitcoin. Before, it was pretty difficult to do development on Bitcoin on your own because it was very low level, there wasn’t good documentation, you had to write in a certain language. With lnd, because we have this system which lets you script lnd in effectively any language, all of a sudden it’s a lot easier for you to jump in and make a contribution. I think the developer network has grown a lot. I would say Lightning itself has contributed to the growth of the Bitcoin developer base this year more than anything else. You’re seeing a lot of people popping up to do training like Chaincode, people doing Hack Days. People are self-organizing, which is great. We’ve put out an implementation, there are some docs, there’s IRC. We’re really passionate about moving things forward.

Stephan: It’s all so exciting. We’re pretty much getting to the end of the time. If you’ve got any closing comments or just want to tell people where to find you, where to follow you that would be great.

roasbeef: You guys know I’m roasbeef on Twitter, GitHub, just about everything else. You can find me on the #lnd IRC channel. We also have a developer Slack which is really high quality, there are maybe 3-4000 developers. There’s no trolling, there’s no BS, people just come there wanting to write code and experiment with the applications themselves. Also keep on the lookout for the drop of the desktop application and the different services we’ll be coming out with on Lightning in the next few months or so, which people will be really excited about.