c-lightning developer call (2021-08-23)
Transcript By: Michael Folkson
Name: c-lightning developer call
Topic: Various topics
Location: Jitsi online
Date: August 23rd 2021
Video: No video posted online
The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.
I’ve been working on connectd; we got that merged. It was not a big change. I have complained in the past that c-lightning nodes and other implementations tend to have a slight preference for just going over Tor, bypassing your local firewall, which is both good and bad. It is less complicated, but it turns out those Tor connections aren’t that fast. I was thinking about how to improve that. There is a small change so that connectd starts by looking up the IPv4 and IPv6 addresses first, and if it doesn’t get a connection there it tries the Tor connection last. At least when nodes are configured with both addresses we will always take the IP and not the Tor version. That’s better I guess, and it eliminated a small bug. Currently I am working on the gossmap implementation that Rusty brought up. I am committing on your branch there because I can’t commit directly into the main repo. I am fiddling a bit with the interface and messing around with it so it gets more usable. Looking forward to using it for rebalancing when it is done.
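The ordering described here, clearnet first with Tor as a fallback, can be sketched in a few lines (a hypothetical illustration, not the actual connectd code):

```python
# Hypothetical sketch of the attempt ordering described above:
# try IPv4/IPv6 addresses first and fall back to Tor (.onion) last.
def order_addresses(addrs):
    """Sort addresses so .onion entries come after clearnet ones."""
    # False (clearnet) sorts before True (Tor); Python's sort is stable,
    # so relative order within each group is preserved.
    return sorted(addrs, key=lambda a: a.endswith(".onion"))

print(order_addresses(["abcdef.onion", "203.0.113.7", "2001:db8::1"]))
# ['203.0.113.7', '2001:db8::1', 'abcdef.onion']
```

Connection attempts would then walk this list in order and stop at the first success.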
That would be good. You brought up Tor; somebody pointed out that Tor connections tend to die over long periods of time, the middle node vanishes. The idea that we should actively ping over Tor came up on that PR.
Maybe I’ll do it, not sure. I have looked into it.
Making Tor more reliable, I know at least one person said that they just drop all Tor connections.
I already mentioned, I think last meeting, that I would like to add a change request to the spec repo so that we will be able to announce DNS hostnames. If someone could announce a DNS hostname, a URL with an automatically set up link to get through the firewall, that would be better than using Tor. Actually I am doing it, but can’t really, because I have to announce an IP address obviously. What I end up doing is both: when I connect my wallet I tell it this URL and it resolves the IP address, and since I restart the daemons nightly, when my daemon gets a new IP it announces a correct IP.
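The workaround, resolving the hostname yourself and announcing the resulting IP, might look roughly like this (a hypothetical helper using the standard Lightning port 9735; not c-lightning code):

```python
import socket

def resolve_announce_addr(hostname: str) -> str:
    """Resolve a DNS name to the IP address the node should announce,
    preferring IPv4 over IPv6 when both are available."""
    infos = socket.getaddrinfo(hostname, 9735, proto=socket.IPPROTO_TCP)
    for family, _type, _proto, _canonname, sockaddr in infos:
        if family == socket.AF_INET:
            return sockaddr[0]
    return infos[0][4][0]  # fall back to the first result (e.g. IPv6)
```

A nightly restart would then re-run this lookup and announce whatever address the DNS currently points at.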
There is no reason not to add an address type for DNS. It is one of those things that’s driven by usage. Other Tor like networks are definitely something we would want to see when people ask for it. I look forward to your spec proposal, it should be easy.
Since I am release captain for the next release I will be tackling mostly the backlog on GitHub. There are a couple of PRs that I am already reviewing. One is Michael’s PR for the gossmap changes; I promised I’d get to that as soon as possible. Speaking of Python, I have debugged the problem with the pyln BOLT packages that are published on PyPI. That turns out to be a relative link that points to nowhere if you download the source distribution. I am basically just materializing that. Since I was working on that I also added a GitHub Actions workflow that will publish all of the pyln packages to the test PyPI every time we have a push on master, and a push that is tagged as a release should eventually be auto pushed to the public PyPI. At least we get that automated from now on. I am still unsure how to bump the versions since we currently just bump them manually. We are not allowed to upload multiple times for the test one, which is a nightly build, so I will end up with a version number like .post followed by the number of commits since the last tag. That seems to be the versioning scheme that they prefer. This leaves us with a major version; I’m looking into how we can extract that from git describe by referencing the previous tag that we are based on. Speaking of which, I created a new PyPI account just for GitHub Actions. If Rusty could invite pyln-bot as a maintainer to the BOLT packages we can automate that as well.
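The scheme being described, base version from the previous tag plus a .post segment for the commit count, could be derived from `git describe` output along these lines (a sketch of the idea, not the actual workflow code):

```python
import re

def pep440_from_describe(describe: str) -> str:
    """Turn `git describe` output such as 'v0.10.1-23-gdeadbee' into a
    PEP 440 post-release version like '0.10.1.post23'."""
    m = re.match(r"v?(\d+(?:\.\d+)*)-(\d+)-g[0-9a-f]+$", describe)
    if m is None:
        return describe.lstrip("v")  # we are exactly on a tag
    base, commits = m.group(1), int(m.group(2))
    return f"{base}.post{commits}" if commits else base

print(pep440_from_describe("v0.10.1-23-gdeadbee"))  # 0.10.1.post23
print(pep440_from_describe("v0.10.1"))              # 0.10.1
```

Because the commit count increases with every push to master, each nightly upload gets a distinct version, which sidesteps the no-reupload rule on the test index.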
What could go wrong? Sure.
I created a new bot exactly because I didn’t want my other non-Lightning-related things to be touched by GitHub; that one is Lightning specific. That way at least we have that sort of compartmentalized. For stuff to land on PyPI it actually has to be committed on master. That gives us at least a gatekeeping capability that we might want; if somebody gets to push to master we are in trouble anyway. Other than that I have re-set up the Bitcoin bot which has been acting up lately. That was due to a change in either certificates or the API on the GitHub side, which then caused me to redeploy, which didn’t work on an old Ubuntu version, so I just created a new VM and it should be running again. If there is anything just ping me. It should work. For reference, that is the bot that counts acknowledgements and makes sure that some other housekeeping rules are maintained, such as not having fixup commits in a pull request. I am looking into how to extend that in future but for now that’s what it has been good for, housekeeping mainly.
Other than that, mostly Greenlight stuff. I got a couple of incoming requests for the publication of our hsmd code. This is the part that is the… counterparty to lightningd; it receives incoming requests and then forwards them to the client device. These are mostly inquiries from people that are building more advanced signer implementations such as hardware wallets. Apparently someone is building something on NFC. Don’t ask me how that is supposed to work. It might be interesting. Cleaning that code up and making it public should be quite a bit of fun. It could get us some really interesting use cases as well. I’ll be doing that soon as well.
You didn’t mention that it is in Rust.
It is in Rust.
We are going to have to rebrand aren’t we?
It is rightning, r-lightning.
There’s some Greenlight code that is not currently open source but you’re happy for it to become open source and people have requested that you make it open source. Have I understood that right?
Yes. The goal eventually is to have 90 percent of Greenlight be open source: anything that touches c-lightning that could be reused in other use cases will be open source. That includes the subdaemons, that includes the plugins, and that includes some of the things that might be interesting. For example, besides the hsmd we also have a custom implementation of the routing server, which allows us, instead of spinning up the node and waiting for our peers to tell us about the gossip, to stream that from a central location. Very similar to what we are also doing with the LNsync prototype that we built a couple of months ago. That takes a couple of Lightning nodes that we have running in the wild, aggregates all of the gossip information, serializes it in such a way that it makes sense for nodes to catch up on, and then we can just give you all of the information. You don’t have to go through the lengthy gossip sync process. That might be interesting. The hsmd of course is interesting because it allows you to have different signer implementations, and the plugin is basically just a JSON-RPC to gRPC bridge at this moment. With all of these pieces we want to allow users to take their node hosted on Greenlight, export it into their own infrastructure and make it their own. By providing you with all the pieces you need to recreate Greenlight, you can do that transparently in the background and you don’t have to change anything. You can continue using your usual apps and whatever you have built on top of Greenlight.
Some of it is embarrassing at the moment because Christian was kind of learning Rust while he did it. I can understand his desire to tidy it up.
It is really the 1+1 of Rust at the moment. All the different kinds of error handling styles that I tried along the way. It needs a bit of cleanup but I’ll do my best to make it accessible.
Just the plugins are in Rust? You are using the hsmd in C that is currently in c-lightning? No? Or you’ve rewritten the whole of hsmd?
The reason why it got refactored into libhsmd: the construction we had before was c-lightning talking to a hsm proxy which forwarded everything to a plugin, and then the client would attach and stream the requests out. Then on the client we would have the streaming engine and it would talk to a full hsmd. That requires multiple processes on the client side, because we need one to pull in the requests and another, the actual hsmd, to reply to those requests. By extracting everything into libhsmd we can now go through the FFI interface and do away with the second process basically.
The hsmd core is still C at the moment. But it is not that much code really. We haven’t committed to a stable API so it would be painful to have two implementations for that reason but it would be like a weekend project to write a Rust version I would think. It is the simplest of the daemons. It takes stuff, decides what to do and spits back an answer.
That’s the exact reason why it is the first one you documented right?
I think so. I should revisit that because I think our documentation has rotted a little bit. Excellent, I look forward to seeing your Rust code. I have actually avoided looking at most of it.
It is kind of embarrassing. My focus for the next two months is definitely c-lightning and getting the release shipped, making it the best we can.
Individual updates (cont.)
Today I finally worked on lnprototest. I removed the Postgres dependency, because we are using pyln-testing only for the simple Bitcoin proxy. It was a couple of lines of code, copy and paste inside the class, and it removes the dependency. I also used this time to add integration testing inside the repository. I created a Docker image, called Dockerfile.clightning; maybe in the future we can have Dockerfile.lnd or something else, and we can spin up the testing with lnprototest on GitHub Actions. I also put together the pieces of my Java environment to create a Docker Compose file so you can run c-lightning in pruning mode with Bitcoin Core with one command. You have your node that is storing wallet data locally. This is all that I’ve done in the last week.
Raspiblitz release and CLBOSS
We are preparing this Raspiblitz release; it is a lot of work. One question has come up which would be nice to discuss: CLBOSS. We have this autopilot function in lnd which is quite simple because it only opens channels, and even that is not very sophisticated. CLBOSS opens them and balances them as well; it can cost money. I have spoken to at least one person who is using this and has had good experiences with it. After depositing some sats, like 0.1 BTC as recommended, his payments always went through and he was able to receive; channels are fairly balanced. What do you think about the idea of exposing this to complete beginners, as an automatic node management system with all the warnings?
I gave Z-man the keys to my node and said “Just install your s*** on my node and I don’t want to know.” CLBOSS has been running on my node for a long time. It is its test ground. It has got a stupid number of sats in it, more than I would have put in these days. The Bitcoin price went up and it has still got sats. It is from my old tip jar. It has got about 0.1 Bitcoin or something in it. It has got a reasonable amount to play with. It does some weird things sometimes. What is it doing? It is looping out for some reason. In a low fee environment it can’t make too many mistakes. If it opens a channel it doesn’t need or decides to do something strange it is not all that expensive right now so I guess I’m ok leaving CLBOSS in charge of my node. It is better than me because I’m absent. I don’t have time to pimp my node and balance channels and all that stuff. I just leave it. It has kind of worked, it hasn’t broken anything. I can say that for it. But to be honest I haven’t been paying enough attention to know whether it is doing smart things or dumb things. As I said it hasn’t broken anything but would it have broken if I hadn’t had anything running? I know that isn’t a ringing endorsement but it hasn’t been a negative and it has done stuff. Spark saw a huge outflow at one stage and that was pushing funds out through a channel to get them onchain so that it could open a new channel. But Spark doesn’t show onchain activity, it only shows Lightning activity. So all I saw is him stealing 9 million sats or something. I pinged Z-man and went “Did you just spend a lot of money for my node?” and he tracked it, it was actually a Loop Out. It seems surprisingly solid.
It is a valuable experience you shared. In the releases there are a couple of experimental releases on top of the one which is tagged as stable. It extended to things which would even allow closing channels: you can activate an option where it can close a channel depending on whether the peer was offline a lot of the time or there was no traffic at all. It does a fair bit of probing. The rebalancing payments do fail a couple of times; hopefully it limits the fee to be paid. I run it on testnet, where it doesn’t do much. It does collect a fair amount of data and recommends channels to open and things like that.
Send me your SSH key, I’ll give you access to my node and you can poke around if you want. This is how I lose sats. In my mind those sats are already gone, it is just a matter of time before someone steals them from my node. If you want to look at what it is doing on a decent sized node then give me your SSH key. It definitely spams my logs because I have my logs at debug level. It logs a lot of stuff. It is doing a lot of stuff in the background and it logs everything. It intercepts every single command.
Would you give him the rune or the SSH key?
If you have the SSH key you’ve got my node.
Would a rune not be better?
Maybe I wouldn’t want the full access, but just knowing the node ID would help; I could probe it, and the onchain activity is visible on the Lightning explorers. Has it been running CLBOSS from the beginning?
No, the node has been running for years, but I gave him CLBOSS access at least 12 months ago. It has been running quite a few versions.
Just with the node ID I would have a look. We will probably give an option to people. I always recommend people don’t use autopilot but people do use it anyway. With the proper warnings I think it would be a good experiment.
I always wanted something like a co-pilot rather than an autopilot. I always wanted something that would give me hints “Have you thought about…?” Even like a Dear Diary, I really wanted to write a Dear Diary plugin. Every day it would send me an email saying “Hi this is your node. This week the following interesting things happened. You should probably think about rebalancing this channel.” That is the kind of level that I want rather than something that will go “Move over Rusty. I am taking over your node and I am doing all these things.” CLBOSS is a little bit too bossy but that’s perhaps what people want. I would prefer something like an assistant rather than a commander.
Good luck with the ongoing release, always a stressful time.
Individual updates (cont.)
I have implemented the check functions for invoice and invoice request and I have connected to the Bootstrap API. I will probably publish the latest version to NPM and put out a tweet; maybe we will get some feedback. In parallel I am working on implementing the Noise XK protocol. I am looking over BOLT 8; that would help me to communicate with the node through a WebSocket.
Mastering Lightning book
We have submitted Mastering Lightning to O’Reilly for final tech review. We have one chapter where we spin up several Lightning nodes via Docker images. There are several complaints currently in our issue tracker that the c-lightning build doesn’t work, in particular with the latest version. Something is not in the PPAs; the Docker image is not there. I could share the pull requests and issues. The question is with whom?
Christian, he is also timezone compatible so that makes it easier to go back and forth.
The book will go into print soonish. There is still some time left, probably a month or so. I think c-lightning shouldn’t fall out of it.
I do hope you are not printing the Docker files.
Totally. And the source code and everything (joke)
How many revisions are you planning on doing? One for each commit or one for each release?
You should number, name and date them. Two a year updating the book as the software changes. This is always the way with emerging technologies. The early versions of the book become these collectors’ items. Quaint things of “Remember when we had to do that?”.
We have been writing for two years; there has already been quite some change over that time. My feeling is that it is becoming more and more stable. Of course when you look back in two years: “We should have written about this. We should have put this in.” I think it is fairly ok. Of course there is Taproot coming, a whole different story. The other thing is there was a discussion going on the c-lightning mailing list about prioritizing larger channels. At that time I didn’t enter the discussion. I was wondering if somebody is still working on this or willing to do some experiments, because I think I have some answers to give there on what we could do. I can also put a post on the mailing list with respect to that. The main idea is very simple: just put one over capacity in your fee function. If you do a linearization of the probabilities, the linear term of the Taylor series is just 1 over the capacity of the channel. This would probably be a pretty good way for pathfinding to prioritize large channels; they are just less costly. You have a multiplier which you have to experiment with. Obviously this is not optimal, but I think it should give some good results. It would be interesting to experiment with that. I do have a Lightning node with a lot of inbound capacity for tests, so if anybody wants to reach out to me and try to pay me several times, I have Christian’s test plugin running. You get an invoice with a fake payment hash; I am not claiming the money, I am just holding the HTLCs and at some point I time out the payment. It is safe to do that. You don’t even pay routing fees for these experiments.
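To make the Taylor-series remark concrete: under a uniform-balance assumption, the probability that a channel of capacity c can forward amount a is roughly (c − a)/c, and −log(1 − a/c) ≈ a/c for a ≪ c, which is where the 1/capacity linear term comes from. A sketch of the resulting cost function (the multiplier mu is an invented tuning constant, not a recommended value):

```python
def channel_cost(amount, fee, capacity, mu=1000.0):
    """Effective routing cost: the fee plus a capacity penalty
    proportional to amount/capacity (the linearized -log of the
    success probability). mu is the tunable multiplier mentioned above."""
    return fee + mu * amount / capacity

# Same fee on both channels, but the larger one is preferred:
print(channel_cost(1_000, 10, 10_000_000) < channel_cost(1_000, 10, 1_000_000))
# True
```

Pathfinding would then minimize the sum of these costs along a route, nudging it toward large channels without ignoring fees entirely.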
I prefer to pay routing personally.
I have to pay you back, find the routes, it is getting tricky.
But you’re the expert on finding routes so that’s ok. You can always buy me beers.
For example, when we did one of the experiments we computed everything without fees. We had one channel between OKCoin and Bitrefill; that channel was supposed to be used. Then we constructed ourselves an onion, we used lnd in that case, but lnd used the parallel channel between them. The parallel channel however had a base fee of 10 milliBitcoin. We were quite happy that we didn’t pay the fees. I don’t know why lnd chose the other channel; it didn’t make any sense. In the API you could only give node ID to node ID to node ID.
As far as I know lnd always takes the higher routing fee into account within the parallel channels.
That’s their code.
Maybe they sort it in that order and then they take the first or last or something.
BOLT 12 vendor field and lightningaddress.com
I think the most interesting thing I’ve done in the last couple of weeks: if you’ve seen lightningaddress.com, there is this idea that you can have an email-looking address, which would probably be an email address, username at domain name, mapped to a LNURL. I thought that is close to what we want for vendors. Shesek, who does the Spark wallet, has implemented offers and really dislikes the vendor field. He dislikes it because it is a huge vector for… it is an unauthenticated field that says “This really is Blockstream.com, you should pay me”. He was wrestling with how to present this in the UX in a way that doesn’t endorse it in any way. In fact he produced this really nasty looking vendor field that made it look like there was a tick and a lock and an “Approved by”. You can do some pretty horrible things with an arbitrary string it turns out, emojis and things like that. Validating these vendor fields is actually kind of important. I always had this idea that if the vendor field contained a domain name, anything before the space kind of thing, we should be able to use the existing internet PKI, the web PKI, to authenticate that at least that website did have a signature on some level saying “Yes indeed this was their node ID or one of their nodes”, so you should be able to close the loop and validate it. Then lightningaddress.com came out, which was doing a weaker version of the same idea but for email addresses. I thought “That is kind of cool, so I should implement the authentication for vendors. I might as well extend it to do this email address style authentication.” The thing that I really want… They just do a GET from a well known… the fact that you’ve fetched it over HTTPS means that it was signed and deliberately served by the web server, so you know what they are giving you is authenticated. But what I want is an authenticated blob that is self-signed so that you don’t have to serve it over the web.
You can initially grab it from the web page, that is fine, but I want all the signatures in there so I can give it to a third party and they can also validate that this was signed by blockstream.com. That’s important because it is nice for a bootstrap to be able to go “I wonder if Rusty at blockstream.com has a thing” and be able to reach out and go “Yes he does. There is Rusty’s node ID”. Or “There is a blinded path to Rusty’s node” at least, so I can send him payments. That is really nice because you are already trusting Rusty at blockstream.com. Maybe you are sending me emails anyway. You are trusting that that interaction is valid and you are already trusting the middleman, the Blockstream.com website, so you might as well trust them to give you a bootstrap address. But I don’t want you to have to trust them every time you make a payment to me. I want a bootstrap thing: you get the information and then you can go and use it, you can contact my node directly, you are not reaching out to the website all the time to go “I’m making a payment to Rusty”. It is none of their business to some extent. The other thing is it is a bootstrap PKI. You have that and then you can get that message in any other way. Your wallet could even come with a pre-canned set of vendor proofs. All the common vendors come with your wallet so you can validate the proofs, the certificates, yourself. So when you get an invoice from Blockstream.com it looks up in this pre-canned set: “Yes, there is a proof there”. Or of course it could be served over the Lightning Network itself; you would serve one of these proofs. I have a draft spec. It turns out that actually getting the web PKI to do anything other than TLS is crawling over broken glass. It is quite difficult to do. There is an OpenSSL command to get a certificate to sign a blob; it uses RSA.
I have managed to extract a signature; I have not managed to figure out what parameters it uses to create that signature, so I can’t figure out how to validate it in any way, it is painful. The command line tool has no obvious way of allowing me to actually verify it without the secret key, which would defeat the whole point of asymmetric encryption. It is very primitive tooling; it is like stepping back 15 years in crypto, before Bitcoin, and dealing with this stuff is uniquely painful. But I’ve almost got to the point where I have a Python version. I originally wanted to do this in shell but that proved a step too far. A Python version that can create one of these things and validate them. What it is, it is a new BOLT 12 type called lnap, an address proof. It has the things you would expect. The vendor, obviously, either an email address or a domain name. The difference is it optionally has a certificate, a certificate chain to go to the root CA, and it has a signature, an RSA signature that uses the certificate to prove that this is indeed Blockstream.com. It has a list of node IDs. These are the Blockstream.com node IDs that you can validate you are making a payment to. You fetch it and you can validate it, but then you can also hand it around to other people so that they can validate it as well. I think that’s a better vision for where we should go with Lightning than relying on the web all the time. It was going to be a weekend hack; it has taken longer than I would hope. I had to read more RFCs than I ever wanted to on web standards and X.509 and PKI, but it is almost there. It is kind of cool because the README.md has the spec format, and I process the README.md to generate the code using the BOLT 12 stuff that we did in the BOLT 12 repository to create the Python code that creates the class and everything else. It is kind of cute.
I am hoping to release that and I will then start handing out BOLT12.org email addresses, and people can test it, validating and making sure they get their offers through. Then ideally there will be integrations with things like Spark. It can actually validate the domain names in offers, which is definitely a nice to have. Vendors make a lot more sense when you have some kind of assurance that it really was… At least that they own the website; that’s as good as we are going to get in the current day and age.
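As a rough illustration, an address-proof record of the kind described might carry fields like these (names and layout are guesses from the discussion, not the actual draft spec):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AddressProof:
    vendor: str                         # domain name or email-style address
    node_ids: List[bytes]               # node IDs the proof vouches for
    cert_chain: Optional[bytes] = None  # DER certificate chain to a root CA
    signature: bytes = b""              # RSA signature by the leaf certificate

    def covers(self, node_id: bytes) -> bool:
        """Does this proof vouch for the given node ID?
        (Signature and chain verification are omitted in this sketch.)"""
        return node_id in self.node_ids

proof = AddressProof(vendor="blockstream.com", node_ids=[b"\x02" * 33])
print(proof.covers(b"\x02" * 33))  # True
```

Because everything needed for verification travels inside the record, it can be fetched once and then passed around or pre-canned in a wallet, as described above.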
You are not saying that anybody with a Blockstream.com email address can send an invoice, because that is dealt with by permissions, macaroons, runes. This is an additional safeguard where you can’t pretend to be Blockstream.com unless you’ve got a Blockstream.com email address?
There are two things here. One is vendor proofs. If you create an offer that says Blockstream.com, I will reach out to Blockstream.com, or in some other way get the Blockstream.com web-signed, authorized list of node IDs, and check if you are on it. Immediately I can validate whether this is even from Blockstream.com. That is one huge thing. The extension to allow an email address in the vendor field is then fairly trivial. Something that offers email addresses would also allow users like rusty at Blockstream.com to provide information about their node IDs. Then you go “Where would I send funds? I want to send Rusty a beer.” You would try to fetch rusty at Blockstream.com from the Blockstream.com server. It would go “Yes, he’s got one of these.” It would return you a nice signed blob that tells you my node IDs. It does not have to tell you my node IDs because of blinded paths, the same way as we use them in offers. It could say “Here is a blinded path that can get to Rusty”, which is a lot saner than giving out my node IDs natively in the long term anyway. Lightningaddress.com is really aimed at this idea of spontaneous payments where I am sending you an email anyway, we are chatting, and rather than have you send me an invoice or something I want to just send you some sats. It is a bridge from an email address to a node ID. As Lightningaddress.com stands it is all on LNURL, so what it is is a bridge to a LNURL endpoint. That means you always go through Blockstream.com to get a new LNURL and pay me. I just want to use it for bootstrap. I want you to go “Rusty does have rusty at Blockstream.com”. Any email provider is already in a trusted position; you are already relying on them for your communications in practice. It is not ideal but it is very practical. Asking them “How do I pay Rusty?” is a logical question to add. It is kind of a TOFU model where they will give you the information once.
Presumably then you will send some sats, it will work and you will assume that this is now valid. It is a very practical way of doing things I think.
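The email-style bridge discussed above rests on a well-known URL convention (the LNURL LUD-16 scheme that lightningaddress.com uses); mapping an address to its lookup endpoint is simple string manipulation:

```python
def lnurlp_endpoint(address: str) -> str:
    """Map a 'user@domain' Lightning address to the HTTPS endpoint a
    wallet fetches to start the LNURL-pay flow."""
    user, domain = address.split("@", 1)
    return f"https://{domain}/.well-known/lnurlp/{user}"

print(lnurlp_endpoint("rusty@blockstream.com"))
# https://blockstream.com/.well-known/lnurlp/rusty
```

A BOLT 12 equivalent would fetch a signed proof from a similar well-known location once, rather than hitting the server for every payment.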
I guess the key difference is that it is only being used for initial lookup and not directly in the payment path every time that you want to send something to Rusty. That way you don’t leak that kind of information to whoever is your provider.
That’s right. I may trust them with my email but that doesn’t mean I trust them with payments. At least in an ongoing sense. And the privacy issue, it is bad enough as it is. If you think about Gmail doing this, I don’t really want Google handling all my payments.
When I was in Berlin with the Bitcoin Citadel in Germany I was contacted by the folks from Bitcoin Beach. They have this problem in El Salvador now that everybody has to be able to accept Bitcoin payments, and basically Lightning payments. There are a lot of people running into the market with custodial services. What they want to do is achieve some interoperability. They want to have something like Rusty at Blockstream, or somebody else at BitcoinBeach, or someone else at Strike. They want to be able, like the Cash App tag, to send money back and forth through a custodial service. The question is can we reuse this and can we make a recommendation? I think the main issue here is we don’t need the best solution but we need something fast. They have this thing going live, everybody comes with something, but there is certainly an urge for people to have a standard on this. They are seeking this. The idea I was thinking of was to provide every user with their own node ID and allow users to issue their own invoices with some fake payment channel as a routing hint; the channel doesn’t exist and the node can resolve this. If everybody implements this and every node knows how to speak it, this would be a way. I was going to post this to the mailing list but then Lightningaddress.com came up. Now I’m hesitant to post it. Can we find something that is good and that those people can pick up? If we miss this then they are going to come up with something else, and in the worst case they are not compatible with each other.
I think Lightningaddress.com is the immediate, that you can deploy today. Everyone understands LNURL, that is the fastest path. I wanted to produce something for BOLT 12 that was equivalent that had these properties that I like. If we step back a bit there is a meta thing here, there is a reason why we didn’t build Lightning over HTTPS. When you think about that really carefully and you understand why then you’ll understand why I’m reluctant to keep going “We’ll just build stuff on the web to tie Lightning together”. The web is in a lot of ways awkward, layered, complicated but importantly it is almost naturally centralized. The web assumption is if I am connecting to your website, if I am talking to you I know who you are. I have a certificate that has been signed by this hierarchy of people who have validated that you are indeed foobar.com or whatever. That is not my vision for Lightning. There will be some of that, there will be a lot of that but fundamentally I don’t want to nail in the protocol this requirement that one side be known. I think P2P payments is something different and the P2P web is an awkward and sickly thing. Everything is aimed towards this vendor model, this known server model. That has failed us in many ways. The internet is no longer a P2P thing, it is very much clients and servers. I am somewhat responsible for that model having written a lot of the NAT code that gets run on the internet which is very much of that genre. For past sins I am definitely going to try to avoid that in the Lightning Network. It should be very much P2P. It is very easy to build on the web, it is trivial, there is a lot of support, libraries and everything else. You can build something like Lightningaddress.com without really worrying too much about certificates, how do we get it signed. It just works and that is great for fast. But I think it is in the long term bad for the Lightning Network to hitch its wagons too closely to the web train. 
Even though that seems like an obvious no-brainer to everyone else. Fundamentally I guess I have a bigger vision for what Lightning can be than what the web is today. It is harder, it is a lot harder to build than just putting a web layer over the top. I understand people’s reluctance. That’s one of the reasons why I’ve got this whole BOLT 12 address thing. But in the short term I think people will use LNURL and be pretty happy with it. But I am hoping to buck the trend in the longer term.
I think there is also an argument to be made that the upgrade path is there. Especially for Lightningaddress.com, upgrading it to become fully BOLT 12 compatible, the lnap proposal, basically boils down to them publishing one more blob instead of just the address. They can be run together. If there is demand for this then we will slowly migrate towards a lnap version of this. I do share the desire to get it right and then upgrade instead of rushing it through. I think Lightningaddress.com might be a good interim solution until we get what we want.
The two are not mutually exclusive and that is a very good point. They are different well known endpoints. We’ll fetch this one and this one and we’ll use whichever one we understand. It is not like the existence of one destroys the existence of the other.
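The "fetch both well-known endpoints" idea could look something like this. The LNURL-pay path follows the convention published for lightning addresses (`/.well-known/lnurlp/<user>`); the BOLT 12 endpoint name is a hypothetical assumption, since no such well-known path had been standardized:

```python
def endpoints_for(address: str) -> dict:
    """Given a user@domain style address, build the candidate well-known
    URLs a wallet could probe, using whichever protocol it understands."""
    user, domain = address.split("@", 1)
    return {
        # LNURL-pay endpoint, as used by lightning addresses today
        "lnurlp": f"https://{domain}/.well-known/lnurlp/{user}",
        # hypothetical BOLT 12 offer endpoint; the path is an assumption
        "bolt12": f"https://{domain}/.well-known/bolt12/{user}",
    }

candidates = endpoints_for("alice@example.com")
```

A wallet would try the endpoints it speaks in preference order; since the two live at different paths, neither scheme interferes with the other.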
If you were to use email addresses then that wouldn’t be using the web layer would it? The Cash App model is like @rustyrussell. There is no domain. Cash App owns everyone’s identities.
There is an implicit domain: Cash App. They haven’t even tried. But if you squint at it the right way it is a centralized… at their domain. This is one step better because it could be at anyone’s domain. Let’s not have the perfect be the enemy of the good. It is definitely better but I want to go one step further, that’s always my problem. I am more ambitious for what Lightning can be than is obvious I think.
Shall we go one level deeper and replace DNS as well?
How about replacing the agency that is assigning all the IP address blocks?
I think that should be replaced.
Everyone choose your own addresses. Isn’t that what IPv6 was for?
It is doable. There were those guys using Namecoin back in 2013, which is a good idea in and of itself even though nobody uses it. It proved that it could be made decentralized and we could do the same thing with managing address space. What we could do is set up a fund, buy a large enough block of IPv6 address space and then say “That’s the Lightning implementation of IANA” and do our own distribution. You could rent and trade those addresses. If someone shuts down IANA you still know this block is managed by this decentralized system. We don’t rely on that political thing. That would be cool.
Not your keys, not your address?
We already have IPv7.5. It has 256 bit addresses. They are called public keys.
We just need to get this to the router. They need to run it.
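The "public keys as addresses" quip gestures at a real construction: deriving a routable-looking address deterministically from a node's public key, so ownership of the key is ownership of the address. A minimal sketch, where the choice of the fd00::/8 ULA prefix and the single-SHA256 mapping are illustrative assumptions, not any deployed standard:

```python
import hashlib
import ipaddress

def pubkey_to_ipv6(pubkey_hex: str) -> ipaddress.IPv6Address:
    """Map a 33-byte compressed public key to an address inside a private
    IPv6 prefix. Whoever holds the key can prove they "own" the address,
    with no registry handing out blocks. (Illustrative scheme only.)
    """
    digest = hashlib.sha256(bytes.fromhex(pubkey_hex)).digest()
    # Force the first byte to 0xfd so the result lands in fd00::/8 (ULA),
    # then fill the remaining 15 bytes from the key hash.
    addr_bytes = bytes([0xFD]) + digest[:15]
    return ipaddress.IPv6Address(addr_bytes)

addr = pubkey_to_ipv6("02" + "ab" * 32)
```

Because the mapping is a hash, collisions are astronomically unlikely, but note that 15 bytes of hash is far short of the 256-bit key itself, which is the point of the "IPv7.5" joke: real key-sized addresses do not fit in IPv6.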
I do like your ambition here. I always thought one of the problems with Namecoin was abandonware. You have to have an inbuilt mechanism. Unlike Bitcoin, where things are fungible and if you lose coins there are always more fish in the sea, it becomes complicated because you need a method by which abandoned domains get recycled. This means that you need authority of some kind, you need a challenge period; there is a whole area of design around this. I can nominate some people, they could say “Cool, this domain is abandoned. We are going to recycle it.” Then I can pop up and say “No it is not” and keep refreshing it, assuming that I’ve got censorship resistance in order to do that. There is a lot of design that goes into a serious Namecoin that was never done as far as I can tell. I really like the idea of somebody doing it again but it is a whole other…
An idea I had, I think it is pretty much doable with DNS. What we will see in the future, when there are attacks on the internet, is that those root DNS servers are obvious targets for whomever. I think the same could be done to an agency like IANA. We don’t need this, that’s just stupid. Obviously nobody will be using it before that happens and nobody will implement it until then but in theory this works in my head.
If this is the hill you want to die on I encourage you to go and implement it. I feel that you’ll spend the next 20 years battling this. I have one crazy project and I’m working on it right now and that’s Lightning. I’m having trouble expanding my horizons to other things. But you’re younger than me so you should go and do that.
It is this out of the box thinking that got us Blockstream satellite.