Current state of P2P research in Bitcoin / Erlay
Speakers: Gleb Naumenko
Date: November 13, 2019
Transcript By: Michael Folkson
Media: https://www.youtube.com/watch?v=ZUWs00Anpaw
https://twitter.com/kanzure/status/1201861687097352198
Introduction
My name is Gleb. I’ve worked on Bitcoin stuff since 2015, building wallets and exchanges in Ukraine. Then I realized that I can do more fun stuff. I wanted to do more protocol research, improving Bitcoin itself, not just using it. I decided I would go to Canada to do a Masters. I started working on this topic, on research stuff more precisely, and I realized that not many people work on the peer-to-peer (P2P) side of Bitcoin. P2P is essentially the way nodes communicate. We have at least 60,000 deployed nodes which validate transactions and relay transactions and blocks across the network. The way they communicate is essentially what we call peer-to-peer. I started contributing to that. I have several PRs. Most of my PRs are usually a couple of lines of code but a lot of explanation and experiments. In this case there is not much code. The wallet in Bitcoin Core has a huge codebase but I’m not touching that. The P2P code is pretty dense and every line means something, so sometimes just changing one parameter is a lot. My work is usually like that. Sometimes it is not even a PR, sometimes I’m just commenting on somebody’s solution saying “I did this experiment and what you’re suggesting is not helping at all, so we shouldn’t do this or we should change something.” That happened last week. That’s sort of what I’m doing on a daily basis. I am in New York with Chaincode. Chaincode is mostly engineers working on the Bitcoin Core software, trying to optimize it in various ways, usually just coding and sometimes trying to solve research problems. I’m helping with that part. I’m trying to help them understand papers which academics come up with and to ground their ideas in experiments I can run. I also try to help bring more academics into the Bitcoin space because I have some experience; I understand which conferences they go to. I’m visiting here for a conference where I’m presenting my work. There was another talk from a guy who works on something completely different. Their group found that there are a lot of routers in the world publicly exposed on the internet with trivial passwords. Attackers intruded into those routers. What they actually did is that, once the attacker is in the router, they can add anything to an HTTP webpage, so they added scripts which mine Bitcoin. Sometimes websites do this themselves. If you know The Pirate Bay: governments go after Pirate Bay so there are many versions of it now. You can run your own Pirate Bay at home. If you Google it, every week there is a new link. Some of them have mining scripts inside. That is put there by the person who runs it, but doing this on a router is much cooler. It is something completely different. They made a lot of money; there was a talk about it. What I suggested to that guy is that we should see how these miners could attack Bitcoin. If one miner has 20% of the hash rate, which is certainly possible under some conditions, and they want to increase their revenue, and they have a clear competitor who has another 20%, they can use those compromised routers deployed on the internet to somehow redirect their competitor’s traffic so that the competitor’s blocks go nowhere and the attacker’s blocks are relayed to the network. That’s very bad for security because we don’t want those games. The system should be fair for everybody essentially. That doesn’t fall within the Bitcoin white paper security model; we’re trying to prevent stuff like that.
You need to study that. You need to show that some attacks are possible but would cost millions of dollars to run, and if the revenues are orders of magnitude less then it is probably not reasonable. It is still important to show stuff like that. That’s just one example of the conversations, one out of five I had today. I’m trying to bring more academics into this space and somehow bring their expertise into this domain because in the future we want to understand what is going on here.
This talk is for those who
This talk is for those who understand Bitcoin at a high level and want to contribute to peer-to-peer research directly, or want to contribute to interdisciplinary research. This talk is from MIT where we were discussing both blockchain research and economics. I was trying to show people who, for example, do game theory how their work can be combined with peer-to-peer stuff. With the router problem I just explained to you, it is certainly possible to mess with it, but will they do it, or will they lose more if they try to exploit it? That’s essentially a primitive version of the game theory ideas I wanted to expand on. I don’t think this slide is very fair; we will cover a lot of different things that are not related to these two.
Bitcoin exists to exchange transactions
Bitcoin exists to exchange transactions. I think this is fundamental and it is interesting to understand this. In the Bitcoin peer-to-peer network we relay three objects: blocks, transactions and IP addresses of other nodes. Bitcoin is an electronic cash system, so transactions are the most important part; that’s the biggest issue. Then you have blocks to help synchronize transactions. What blocks help to solve is how to order transactions in a proper way. If you have two transactions which are incompatible, spending the same coins, ordering them is a problem distributed systems always try to solve: which one is real and which is not. Mining attempts to solve that problem, because if you mine a block with two competing transactions in it, the block is invalid and it is not going to be accepted by the network. But whoever mines the first valid block will choose which transaction goes in it. Blocks just help us to relay transactions. Then we relay addresses in the network just to find other nodes. That’s also a helpful tool, but transactions are the most important part.
Bitcoin’s p2p network consists of ~60,000 nodes
The network consists of 60,000 nodes. This is based on a closed source script somebody wrote. It is closed source because if it were open it would be easy to trick. Bitcoin is an open system. There are attempts to estimate how many nodes we have. It is not trivial because some of the nodes are behind firewalls, and some of the nodes are just spies who start a node, try to read something from other nodes and then shut it down. We are trying to not include those. This is an estimate. It is available on the internet, Luke Dashjr runs one of the versions of it and I can send a link. It is not trivial to find, I guess for some reason. That’s our estimate, but we don’t know. Maybe someone managed to trick that script, or maybe that script for some reason ignores some parts. It is unclear, but most researchers use 60,000 right now, except one group who says “You guys are wrong, it is actually a bit less.” In my work I also use this number when I’m trying to reason about the network. This is one open question: how to measure this more precisely? Everything is based on this number, at least in peer-to-peer land. If the network were ten thousand nodes the protocols would look different. We would reason about transaction relay differently. I measured in my experiments that transactions are relayed across all nodes in three seconds, but if the network were smaller it would be two seconds. Then spying stuff changes. It might be easier to spy on the network, and all the research we did on spying would have to be adjusted, including adjusting the parameters in the code. When I say spying I mean somebody sees a transaction in the network and tries to see which node it came from. You have public networks, so you know the nodes, you know their IP addresses. If you see a 100 Bitcoin transaction and you can see which node it is coming from, you can probably go after that person and, I don’t know, murder them and steal their coins. Bitcoin doesn’t guarantee privacy.
Q - Is the three second figure coming from experimentation on the real network?
A - I tried to replicate the protocol as much as possible. It is actually not that difficult. There are a couple of tricks we do.
Q - It seems like you wouldn’t need to use simulation for that? Couldn’t that just be measured on the real network? You could try with real world experiments rather than simulations?
A - It wasn’t that simple. I receive a transaction, I validate it, and what Bitcoin did in the early days was just send it right away. Then it became clear that doing that makes it trivial for an attacker to understand who the origin is, because there is a simple assumption: an attacker runs a lot of spy nodes, connects to everybody, and then whoever sends them the transaction first is probably the origin, because they knew the transaction earlier than the others, assuming that all the links have the same speed and the speed is good enough. That is fair to assume in most cases. It is called the first-spy estimator because we just look at whoever first sent us the transaction; that’s probably the origin. To prevent that we have diffusion. It is the part where we randomly wait before sending the transaction. Over one link I will maybe wait 0.5 seconds, over another link I will wait 2 seconds. Then if the spy is the second link but the transaction somehow reaches them indirectly, they will think the transaction is coming from somewhere else. That works to some degree. I also have numbers on the conditions under which it works. It is fairly good now. Going beyond that requires fundamental changes to the protocol. I will talk about them too. I don’t remember where I was going with this.
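For intuition, here is a minimal sketch of that diffusion idea in Python. This is my illustration, not Bitcoin Core’s code; Core batches announcements on Poisson-style timers, and the delay parameter here is made up for the example.

```python
import heapq
import random

def schedule_announcements(peers, now, mean_delay=2.0):
    """Instead of relaying a transaction immediately, queue an announcement
    to each peer after an independent, memoryless random delay."""
    events = []
    for peer in peers:
        delay = random.expovariate(1.0 / mean_delay)  # exponential delay
        heapq.heappush(events, (now + delay, peer))
    return events

# Pop events in time order to send the announcements; a first-spy attacker
# can no longer assume the earliest announcer is the transaction's origin.
events = schedule_announcements(["peer_a", "peer_b", "peer_c"], now=0.0)
while events:
    t, peer = heapq.heappop(events)
    print(f"announce at t={t:.2f}s to {peer}")
```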
Most of the papers assume 60,000
This is the script. You will see the links in the video; you can parse them below.
Q - That’s the script you use for your research?
A - Usually I just open the website and see what it says. It was interesting. I once asked Luke how it actually works. I asked Greg Maxwell whether it is trustworthy and he said “It is as good as we can do. Making it more precise is very nontrivial.” I trusted Greg in that sense. Then I asked other people and they seemed to agree. I actually wanted to know how it works. I asked Luke and Luke said “Come back in a week.” I don’t understand. Was he busy or was he thinking about some conspiracy?
Q - When other people have asked him, I think we discussed this last week, he says he won’t reveal it because as you said that would reveal the methodology and that might allow people to bias it.
A - I guess he wasn’t aware that I’m doing something interesting.
Q - It would be good to know if others have looked over the script. If Greg has seen it then at least that is a group of people rather than just trusting Luke.
A - Yeah
This is a closed source crawler. There is a paper by these guys (Liang Wang and Ivan Pustogarov). The public nodes are the backbone of the network; they are reachable from almost everywhere on the internet. They help other nodes to bootstrap. If you run a node at home, by default it will be protected from the outside internet by your router, which has a firewall enabled. If you’ve played video games with friends, you’ve gone to the firewall settings on the router and tried to open it. It is the same if you want to run a public node and help other nodes to find you: you should forward ports, but by default that is disabled. It is fine for most people, and those private nodes are the 50,000; 10,000 are public. That is how it is right now. It is fair to assume that 10,000 public is a precise number because there are websites that are checking them all the time. They may be spies, but at least they follow the protocol. They are public nodes because they relay transactions and blocks. With private nodes it is less clear.
Properties
Properties of the peer-to-peer network include security, privacy, latency and resource consumption. Security is how expensive it is to attack the network. For example, how expensive would it be to shut down the network so that transactions are not relayed for one hour? In the early days of Bitcoin I believe it would have been totally possible to spend $100 an hour to disable the network because there were trivial bugs. For example, there was no minimum transaction fee, I think. You could just broadcast empty transactions paying to yourself with a 1 satoshi fee and then flood the network with those. Then the memory of the nodes would get exhausted by all those transactions and they would basically die. That was super cheap. We’re constantly improving it to make it more and more expensive to attack the network. If it was $100 at that time, right now this would probably cost $100,000 and we’d patch it very quickly. Those are security improvements. Privacy is essentially the same question: how much does it cost to deanonymize transactions, for example to link a transaction to a particular IP address? Before the diffusion with random delays that I explained, it was pretty trivial. It was enough to run one node and connect to everybody; the connection should be fast enough so there is no networking latency involved, because that would add noise. But if you have a good connection you will just trivially see whoever is the source. So we made a lot of progress there, but we need to do more. Latency: for transactions it doesn’t really matter, because as I said it takes three seconds to relay across all the network. It doesn’t matter because confirmation time is ten minutes. It is probably fine if you wait five seconds instead of three to see a transaction in your wallet on your phone, because it is not going to get confirmed for ten minutes anyway on average. Going deeper technically, there is a usability problem with latency, which I think doesn’t matter if you add a couple of seconds, but another problem is this. Matt Corallo invented compact block relay a couple of years ago. Compact block relay helps to relay blocks in the network very fast. It is very important to relay blocks very fast because you want miners to work on the freshest block. Say miner A found a block and started working on a block on top of it, but miner B didn’t receive the first block for ten seconds and is still working on the old stuff. Once miner B receives the freshest block, the time they worked on the previous one is wasted because it doesn’t matter anymore. It is essentially wasted security, because you can measure the security of the network using the effective hash rate. It is trivial to attack altcoins which are based on the same algorithm as Bitcoin, because you can just redirect 10% of the Bitcoin hash rate into those and start mining empty blocks or something. Ideally miners don’t waste energy on outdated blocks; that’s why we want to relay blocks as fast as possible. With the compact block protocol I explained, blocks are relayed very fast, but it works best when everybody agrees on the transactions, when everybody is in sync. In the compact block relay protocol, when I have a block, instead of sending you the whole megabyte of transactions I will just send you a block header with shortened transaction IDs. I take like 32 bits or something, I don’t remember the exact number, and send those to you; it is a smaller packet so it is relayed faster.
On your side you will try to map transactions in your mempool to those short IDs. Once you map them, if you have all the transactions you just validate the block, see that everything is correct, and relay it forward. If you are missing one transaction you will have to ask for it from me and I will send it to you. You will ask me for that short ID you cannot map and I will have to respond with a full transaction. That extra communication means latency. If everybody was in sync, or at least most of the nodes in the network, it wouldn’t happen. If transactions are relayed slowly it will happen more and more often, and there will be more cases of miners working on incompatible blocks. This relates to what recent research calls “chain quality.” The worst case is when miner B actually mines something and now there are two blocks at the same height, but you have to throw away one of them. That would mean poor chain quality according to this new definition somebody introduced. Resource consumption is also related to security, because I think the goal of Bitcoin is to enable its users to be as secure as possible, and the belief is that running a node at home is as secure as possible. If you are using a mobile wallet which is provided by some company, it is very likely to be connected to their node which they run in their office. This means that, if it is not using Tor on their side, they can simply see the transaction and see which phone it is coming from by IP address. Then if somebody comes after them they will have to, maybe legally in some countries, tell whoever comes to them “This transaction originated from that node.” If you are running a node at home there are still ways to figure out where a transaction is coming from, but it is much more difficult than just going to the company. That’s one example of why it is better to run your own node. If you don’t care, sometimes I send Bitcoin to my friends, I can use that kind of wallet for like a $10 transaction, it is fine. It is important to enable running a node for everybody who wants to be as secure as possible. That’s why it is important that Bitcoin works on cheap laptops, that Bitcoin works on Raspberry Pis, tiny computers you can buy for like $50, and that Bitcoin doesn’t require a lot of bandwidth so you’re not going to get a huge internet bill. In some places internet is still limited, I don’t know about the UK. Even here, if you live on a farm would you get full super fast internet?
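As a rough picture of that short ID matching, here is a hedged sketch. BIP 152 actually derives a per-block SipHash-2-4 key and truncates IDs to 6 bytes; truncated SHA256 stands in here because it is in Python’s standard library, and the function names are mine.

```python
import hashlib

def short_id(txid: bytes, block_salt: bytes) -> bytes:
    # Stand-in for BIP 152's per-block keyed SipHash, truncated to 6 bytes.
    return hashlib.sha256(block_salt + txid).digest()[:6]

def match_compact_block(short_ids, mempool_txids, block_salt):
    """Map the announced short IDs onto our mempool. Any ID we cannot map
    costs an extra round trip to fetch the full transaction."""
    table = {short_id(txid, block_salt): txid for txid in mempool_txids}
    matched = [table[s] for s in short_ids if s in table]
    missing = [s for s in short_ids if s not in table]
    return matched, missing
```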
A - If you cross the country on the train there is no mobile internet coverage for example.
If some farmer accepts Bitcoin then they probably want to run a full node with all the features, but if it is too expensive in terms of bandwidth they wouldn’t be able to. There is a project, the Blockstream satellite. They rent a part of the bandwidth of a satellite and relay blocks from it, announcing blocks to the entire world. It obviously involves trusting Blockstream to provide the right block. But if somebody runs a node in Alaska, and I think somebody actually does, and there is no internet where the person lives, they can get blocks from space. If blocks were bigger, if blocks were 4 megabytes, Blockstream would have to pay I think ten times more for the satellite capacity. Right now a block is 1MB, with SegWit a bit bigger. It depends on the dish: if you have a $100 dish it will take several minutes to download a block because the link is very slow. If a block were 4 megabytes it would take more than ten minutes and you would always be outdated. That’s another example of why Bitcoin tries to be conservative about resources.
Major attack vectors
Let’s talk about the research I’ve been doing. There are two major attack vectors we can see here. A targeted eclipse: there is a merchant running a node and you know their IP address, as it is possible they don’t run Tor. IP addresses on the internet usually link to the country the person lives in, sometimes to the city. Sometimes you can figure out where the merchant lives. You are going to buy a car from them, so you’re going to pay $100,000 in Bitcoin. What are you going to do? You’re going to make them connect only to you. At the same time you will try to create a parallel chain to the existing chain, so you fork at some point, assuming you have some hash rate or you can bribe somebody. You fork and you start feeding them this second chain which the network doesn’t agree on. We have the longest chain rule, which is that every node in the network, if they have two chains, takes the longest. Actually it is not the longest, it is the chain that has more work, where miners spent more effort. It can be the same length but have more work. It is actually trivial to produce a super long chain with little work from the beginning. That’s why we have a checkpoint in the codebase, from 2012 or something, to prevent those attacks. We could remove it, but then the network would be easier to DoS. It would be possible for an attacker to make you download a very long chain, and at the end, after you download it, you realize it doesn’t have any real work. We have checkpoints to prevent that. There are ideas that would help remove checkpoints, but then the codebase becomes very complicated. We decided that a checkpoint from 2012 is a fine denial of service prevention measure. Again, you can run the software without it, but it would be easier for an attacker to make you spend a lot of bandwidth. Let’s get back. An attacker has this fork which has less work but is a valid chain, and the attacker makes you connect to them exclusively. That’s called an eclipse. A node doesn’t connect to every other node; when it starts, it makes 8 connections to random peers in the network. If all of them are malicious and it is the same attacker, you are eclipsed because you don’t have access to honest nodes. An attacker can start feeding you a chain they produced instead of the real chain, and you won’t know about the other chain, so you will have to trust them on that. If you are connected to the Blockstream satellite or can go online you can figure something out, but that’s not always possible. Maybe Blockstream will collude with the attacker. So what happens then? An attacker spends their $100,000 on their fake chain and they show you: “I’ve sent you the transaction, there are like 20 blocks. Give me the car.” You give them the car and then they walk away, but your transaction was never on the real chain. It was only on this fake fork they showed you. That is why an eclipse is very dangerous.
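To make the “more work, not more blocks” point concrete, here is a minimal sketch of comparing cumulative chainwork, assuming headers carry Bitcoin-style compact difficulty targets (nBits); the helper names are mine, and the work formula roughly follows what Bitcoin Core uses.

```python
def target_from_nbits(nbits: int) -> int:
    # Decode Bitcoin's compact difficulty encoding into a 256-bit target.
    exponent = nbits >> 24
    mantissa = nbits & 0x007FFFFF
    return mantissa << (8 * (exponent - 3))

def block_work(nbits: int) -> int:
    # Expected number of hashes needed to find a block at this target.
    return (1 << 256) // (target_from_nbits(nbits) + 1)

def best_chain(chains):
    """Pick the chain with the most cumulative work; a longer chain of
    easy, low-work blocks loses to a shorter chain of hard blocks."""
    return max(chains, key=lambda nbits_list: sum(map(block_work, nbits_list)))
```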
Q - Wouldn’t that example be a bit impractical though because they’d have to produce 20 blocks within current difficulty parameters for $100,000 worth of gains?
A - It depends on how much money you’re transferring.
Q - We were talking about this last week and what came up was that there is a big difference between 1 or 2 blocks and the example you gave of 20; that is a pretty deep reorg?
A - We have the difficulty adjustment. Difficulty will remain the same for, a week or something? Two weeks. If there were no mechanism like that, it would be trivial for an attacker to mine a lot of confirmations, which would make this attack much easier. But there are other problems. If you are eclipsed you can still be on the valid chain, they can feed you the valid chain, but they can clearly see which transactions you are sending; it is much easier to spy. It is also possible to censor your transactions so that they are not relayed. Or if you are a miner you will just get new blocks more slowly, and then you’re not competitive anymore because you’re mining on outdated stuff much more often. It is one of the biggest threats we’re considering, though it is getting more and more impractical. It is usually practical when combined with some bug we find in the codebase, which usually relates to address management. Every node has a database of other addresses, and if some node goes down we are going to pick a new peer from that database and establish a new connection so we always have 8. There was an attack where it was trivial to flood the database with fake addresses, so that if some node goes down you connect to an attacker, and if another goes down you connect to an attacker again. Another exploit used in the same research makes the node restart. They flood the database, then they make the node restart, and when it comes up again it connects to the attacker’s nodes exclusively because it doesn’t know about anything else. The database is limited in size for scalability reasons, so when you add a new record an old record is evicted. It was possible for an attacker to flood the database with their IP addresses, and then when the node restarts it connects to them, with all the problems that follow. It usually works in combination. You can also run a lot of public nodes. Right now we have 10,000. If an attacker runs 50,000 public nodes on the network, there is a decent probability that some people will accidentally choose 8 out of 8 bad nodes. That is getting less practical now too. Usually combined attacks are the most interesting stuff.
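To illustrate why a bounded address database is floodable, here is a toy model. It is deliberately naive; Bitcoin Core’s actual addrman uses randomized buckets and eviction tests precisely to make this failure mode harder.

```python
from collections import OrderedDict

class NaiveAddrDB:
    """Fixed-capacity address table with oldest-first eviction."""
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.table = OrderedDict()

    def add(self, addr):
        self.table[addr] = True
        self.table.move_to_end(addr)
        if len(self.table) > self.capacity:
            self.table.popitem(last=False)  # evict the oldest entry

db = NaiveAddrDB()
for a in [f"honest_{i}" for i in range(10)]:
    db.add(a)
for a in [f"attacker_{i}" for i in range(10)]:
    db.add(a)
print(list(db.table))  # only attacker addresses survive the flood
```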
A network split is problematic. Let’s say you have Chinese miners and Canadian miners and you cut the link which goes through the ocean, so there is no internet going through. I’m not sure about recent developments, but that’s certainly how it used to be like ten years ago, I believe: there was just one internet link through the ocean. If you cut that, those two groups of miners, each with roughly the same hash rate, produce parallel chains super fast. That’s also very problematic; you cut the chain quality in half essentially. When they reconnect, one fork will be gone and completely useless, and all the payments that happened on it will be useless. That’s an extreme example, but network splits are bad for a lot of reasons, usually related to producing blocks.
Q - Are there examples of eclipse attacks in the wild on Bitcoin or on altcoins? Of the P2P vulnerabilities that have been fixed on Bitcoin but still exist on altcoins have you seen any of them be exploited on altcoin networks or do you not really look at altcoins?
A - Most of the alternative systems don’t really care about this research, I don’t know why. I’m always trying to engage with them, at least with Ethereum. Ethan (Heilman) showed that attack I told you about, flooding the database. He also showed that on Ethereum it is much easier to do. In Bitcoin we have a random graph; you choose peers randomly when you start a node. Ethereum tried to be a bit fancier. They thought that if they used a cool peer discovery protocol they could optimize block relay, for example. Miners would receive blocks faster if you create a topology in a fancier, more efficient shape. The protocol they used is called Kademlia; it is a classic computer science protocol for content search. It is a DHT. If you are using BitTorrent to download a movie, you’re probably going to find your peers based on something similar to that. Let’s say there is a list of 50 peers; according to that protocol you will connect to 5 of them. They used the same stuff in Ethereum, and it turned out it was super cheap for an attacker, by exploiting the protocol, which is not suitable for this, to make everybody connect to them. Essentially then everybody is eclipsed, because the attacker captures everyone’s connections. Ethan suggested how to fix it in Ethereum. A trivial fix would be to remove it completely. Ethan suggested fixes and they implemented a few of them, two out of six or something. As far as I know it is still exploitable; they just don’t care, I guess. I feel they are mostly interested in proof of stake and zero knowledge. Those are the two directions they are working hard on. On other stuff, when I try to engage with them… We have this libp2p library which to me looks like a Frankenstein’s monster where you keep adding this and that. Bitcoin’s peer-to-peer protocol is very precise; it does exactly what it needs to be doing and nothing else.
Q - That libp2p library, that’s not Ethereum exclusively? That came from somewhere else?
A - IPFS. That’s their project I think, at least now they’re maintaining it.
Q - They are separate projects. Filecoin has a coin, the P2P libraries don’t have a coin.
A - Whatever.
In altcoins it is certainly much more possible still. I consider miners generally as adversaries we have to live with. Usually when you’re building a protocol that’s what you need: you consider miners as just people who want as much revenue as they can get. If a miner starts an attack and people see it, they will probably sell Bitcoin because they don’t like it, and then the miner will lose a lot. That’s a part that is hard to analyze, this social aspect, miners being fair. When building a robust thing you try to assume that miners can behave as badly as possible. The famous 51% assumption says that an attack is possible, but it would be much more reasonable for a miner to be honest, for a lot of reasons. That’s part of how Bitcoin operates. When we’re building peer-to-peer protocols we try to set that part aside. Even if miners have 50% for some period of time, or are approaching that, we still try to build something robust where it is possible. I don’t remember how I got here.
Q - I asked about altcoins. Ethereum has a bigger problem because nobody is running a full node?
A - It is a bit different; they have a different definition of a full node. I tried to learn how it works. The fundamental difference in Ethereum is that every block header has an explicit commitment to their state. The state in Bitcoin would be the current UTXO set. It is not explicitly committed in a block. You have transactions, and by replaying all blocks you can get that set. If you had such a commitment, imagine that nobody stores the history, because why would you need it if you just have the recent block, or just the headers that are valid? Somebody stores a snapshot of the state, you see the header with the state commitment, you just download the state and now you can work from that. Nobody would need to download the whole Bitcoin history from the very beginning if you committed to that. The problem is that if nobody stores the history, miners can agree to print their s***coin on a smart contract and adjust the state, building new s***coins out of nowhere, for example not paying gas or doing some weird DeFi, I don’t know how to pronounce it. Now you’re adding this extra trust in miners; you’re allowing them to exploit the state without anyone replaying history. If nobody replays the history, nobody will find out that they did something. If you did replay it, you’d be the only person with the capability to do it, because it is super expensive. How can you tell everybody that there is something wrong there? Maybe if you tell them, people will say “Miners did this only once. It is fine, it is accepted.” We never wanted to go in that direction. This was discussed in the early days of Bitcoin: should we commit the state? It would certainly help with a lot of things.
Q - There were long discussions about UTXO…
A - Sometimes things are called different names. I feel as if there was something like that but I’ve forgotten. It was discussed. Proof of stake was discussed in 2012 or something on the Bitcoin IRC channel. The conclusion was that it was very hard to build something simple enough and secure, something that you can reason about. We are still learning things about proof of work. There was a talk about proof of work today where they combined selfish mining, alternative mining strategies where you behave slightly differently, slightly maliciously, and then you can get more revenue. It is fundamental to proof of work, but people still build on top of that, combining ideas. I think proof of work is simple and yet we don’t understand it in full. It is going to be much more fun when we’re getting out of subsidy. There are two parts of the block reward: subsidy and fees. Subsidy slowly goes down. Right now it is 12.5 Bitcoin per block. It will be 6.25 next year and then 3.125 and so on. Fees will then be the majority. Then there are new weird miner strategies. Say you see a new block and it has fat transactions with high fees. It might be reasonable for you in the future, instead of building on top of it, to try to compete with it and steal all those fees. Maybe you would otherwise mine a block that doesn’t have transactions, or the fees are really low. You don’t get much, you get like $200 from your block even though you did a lot of work. But if you fork and steal transactions from the other block you can make more. That is a dangerous future. We don’t know what is going to happen. There is some research on it, but I think there should be more research there.
Q - So effectively the incentives for withholding a transaction with a high fee go up as the proportion of that transaction’s fee to the block reward goes up?
A - Just forking. Just refusing to work on the real block you see?
Q - You refuse to mine on a block that has that transaction with a high fee that was mined by somebody else because you want that transaction in a block you’ve mined?
A - Right now it is not reasonable because most of the reward is subsidy, so you don’t really care which transactions are bigger. But we don’t know what is going to happen with the fee market; this is certainly possible. I’m just saying that with proof of stake it is much more complicated. I have no idea how they are going to reason about it; it seems very handwavy. The protocol doesn’t even exist yet, I think. How did I start this?
Q - I keep bringing you onto altcoins. Just on the eclipse attack before you move on. You’re relying on a victim of an eclipse attack to report it right? You don’t have a view of the network?
A - Who is “you”?
Q - I suppose you as a researcher monitoring whether an eclipse attack is happening.
A - I’m trying to make it much more expensive. Right now you have 8 connections. If you do 16 instead of 8 it will be more difficult to do an eclipse in certain ways. Certain ways will still be simple. For example, if you go to the internet provider of the person, bribe them, and tell them “Disconnect them from all the nodes except mine”, that will still be possible no matter how many connections you have. But if that is not possible and you are considering other ways to attack, for example just launching a huge number of nodes, then going from 8 to 16 very significantly affects the cost. It will be much more expensive to attack in that case. But going to 16 requires more bandwidth or more memory. As a researcher I am trying to optimize those things so that we can enable this higher connectivity. That’s what I’m thinking about. Nobody really detects eclipses; the way we detect them is we see it on Twitter. It is somebody in Russia complaining they cannot receive blocks or something. Then we try to figure out what actually happened. I don’t have any extra capabilities. There is no admin page in the Bitcoin project where I can go and see who a person is connected to. If they want to cooperate I can ask them “What is your IP address? What were your connections? Can you try this command in the command line?” You can do this as well. I have no authority.
Q - The only idea I would have would be periodically changing your connections so that you somehow detect that there is a separate network where an attacker is feeding the wrong blocks.
A - We are discussing that. It is usually called peer rotation. Right now you connect to 8 peers and you will be connected to them forever. If some of them go down you will connect to somebody else, but otherwise you’ll stay connected to them forever. Because of that, the topology is static. Over time somebody can learn the topology, maybe they will spend a year, but they will know who you are connected to. Then to eclipse you they can bribe those 8 people, for example. That’s probably not realistic, but sometimes to explain an attack you go to the most trivial example. If you had rotation, and it takes time for them to learn the topology, it would change faster than they can learn it. They would always be chasing us and not finding us. The problem with that: imagine you change every peer every minute. Then it is possible that you will accidentally connect to an attacker through all your connections, and then super fast they will flood your database of addresses. Again, this database is limited and is vulnerable to certain things because of how it is built. If you are connected to an attacker exclusively, they can mess up your local state. There is an assumption which I don’t like, but so far I think we are going to stick with it until I propose something more precise. There is a general intuition that older nodes are more trustworthy. People who were running Bitcoin in the early days are probably not spies. It is much more probable that spies are running nodes deployed more recently. If we have 10,000 public nodes today and tomorrow 5,000 nodes are deployed at once, probably those are just spy nodes or some suspicious activity. There is this rough intuition, which I don’t like, to be more positive towards older peers.
Q - Are Tor peers any more or less at risk? Obviously there are fewer of them. Is it harder to eclipse them?
A - There are different ways to eclipse them. In some cases there will be correlation. It is more complicated. We are discussing this topic. Rotation was proposed at least three times and I was the last one, last summer. I sent an email to the mailing list and I’m like “Why is nobody responding?” and somebody told me “You’re like the third.” What happened before: somebody suggested “OK, let’s rotate every two minutes” and Bitcoin developers asked “Why two minutes, why not four? Can you prove to us that we’re not making things worse?” You can imagine that spy nodes are more reliable. They can spend more resources; their nodes are always alive, while regular nodes at home sometimes have an internet outage or the person reboots their machine. Because of that, rotating peers may be even more dangerous. The way I’m trying to push peer rotation, the first step is to separate the functionality of links. There is transaction relay, block relay and address relay. Transaction relay seems to be very hard to make not leak information. There are trivial ways to find out that you are connected to somebody through transactions. How do you do this? You send tx A to one node and then you send tx B, which double spends A, to almost every node in the network but one. Then by looking at that one remaining node you can see: if it has tx A then it is connected to the first victim directly; if it has tx B it is very likely to be connected to the rest of the network and not connected to the first node. I’m not sure these companies are even aware of this stuff because they are not very technically advanced. Sometimes they tell me “It is so hard to deanonymize stuff”. No it is not, you are just not doing it properly. We are trying to improve it. There are trivial changes discussed this week. There is a Bitcoin review club, it is super cool. It is a discussion online every week for an hour about a particular PR. It is open to everybody; you can join and see how this process goes, how we’re discussing this in realtime. It helps to get a sense of what Bitcoin Core development is.
Q - Bitcoin Core PR Reviews on Freenode?
A - Yes. There is a website.
This week they discussed a PR: should it be possible to query a node for a transaction it never announced to us. If we allowed this behavior, and I believe it is this, I might be a bit wrong, it is essentially trivial to see how transactions are relayed across the network. You start relaying tx A and start asking everybody in the network for tx A every second. At first it is only the first node that responds; a second later it will be the second node that responds. That is an easy way to learn who is connected to whom, by seeing these waves of transaction relay. It is like fluff essentially. By doing this several times you can map the topology. The solution was to not respond to a request for a transaction unless you explicitly announced it to that node. Since we have delays for announcements it is much harder. The PR is mostly some code changes. It is not much code, it is like 50 lines. This one is pretty chill, it is peer-to-peer so pretty high level; you might enjoy it. Maybe it already happened today, but they keep a record of existing discussions so you can go through it.
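The mitigation can be summarized in a few lines of Python. This is my simplification of the idea discussed in the review club, not the actual Bitcoin Core patch, and the names are invented.

```python
class PeerState:
    def __init__(self):
        self.announced = set()  # txids we have already announced to this peer

def handle_getdata(peer: PeerState, txid, mempool):
    """Only serve a transaction we explicitly announced to this peer, so a
    probing node can no longer read out when a tx entered our mempool."""
    if txid in peer.announced and txid in mempool:
        return mempool[txid]
    return None  # behave as if we do not have it
```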
Stale block rate
Stale block rate, that’s what I recently started calling chain quality, following the work of other people. It is wasted work in a proof of work sense. With this attack it is more possible…
Q - Did you say you have to leave at 7 30? Because we’ve only got 15 minutes. Are you going to talk about Erlay because I was hoping to hear about Erlay? Can you stay longer Gleb? If you can stay longer?
Erlay
Are you guys familiar with this? So how it worked before Erlay: you receive a transaction and you validate it. You have 8 connections; let’s say your node is behind a firewall. You receive a transaction from your wallet, your mobile SPV wallet, or you sign a transaction on the desktop and feed it to your node. Your node will validate it. If it is valid your node will announce the transaction to all your peers. Announcing means sending the hash. Your peers look up the transaction in their local database by hash. If they don’t have it, and they don’t have it because you are the first, they will request the full transaction from you, by hash again, by sending a getdata message. Then you will send the full transaction. What happens then is that they all do the same thing. This is your node. Let’s say you have these two guys and somebody else. They start to make announcements to somewhere else in the network, but if these two guys are connected, this one will probably receive it a bit faster, for example, and they will announce it here again. This node will receive the same announcement twice. It actually adds up to a total… Total bandwidth per month right now is 30GB; I believe this is gigabytes per month.
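In code form, the pre-Erlay flow being described looks roughly like this. The message names follow the Bitcoin P2P protocol; the surrounding classes are my simplification for the example.

```python
class Peer:
    def __init__(self, name):
        self.name = name

    def send(self, msg_type, payload):
        print(f"-> {self.name}: {msg_type} {payload}")

class Node:
    def __init__(self, peers):
        self.peers = peers
        self.mempool = {}        # txid -> tx
        self.requested = set()

    def validate(self, tx):
        return True              # real consensus checks elided

def on_inv(node, sender, txid):
    # A peer announced a hash: request the body only if it is new to us.
    if txid not in node.mempool and txid not in node.requested:
        node.requested.add(txid)
        sender.send("getdata", txid)

def on_tx(node, sender, txid, tx):
    # Full transaction arrived: validate, store, re-announce the hash.
    if node.validate(tx):
        node.mempool[txid] = tx
        for peer in node.peers:
            if peer is not sender:
                peer.send("inv", txid)  # each peer re-announces, so the same
                                        # 32-byte hash arrives many times
```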
Q - At the moment they just send the full transaction to everyone and they will accept it? Is that how it works in relay? Or is it the hash that’s happening now?
A - We are already sending a hash. If we were sending full transactions this would be like 300 instead. I don’t remember if it was ever that bad; probably it was.
Right now we have 30GB before Erlay. About 14GB is transaction bodies: the stuff you really need, like signatures, amounts, who is sending to whom. Another 14GB is just announcements, sending those hashes more than you need to. An announcement is 32 bytes and a transaction is something like 250 bytes, but a node may receive the same announcement 8 times, once per connection, so you can multiply 32 by 8 to make 256. You are wasting more on these useless announcements than on the actual transaction body. We want to save this: we don’t want 8 x 32, we want just 1 x 32. If it were possible to send no 32-byte announcements at all and just send bodies, that would be even cooler, but that is unfortunately only possible with, for example, a star topology where there is a central hub and the hub knows what everybody has. It receives a transaction from a peer and knows the others need just one message. That is how we could eliminate this part.
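Written out, the redundancy argument is just this arithmetic, using the round numbers from the talk:

```latex
\underbrace{8 \times 32\,\text{bytes}}_{\text{announcements per node}} = 256\,\text{bytes}
\;>\;
\underbrace{\approx 250\,\text{bytes}}_{\text{one transaction body}},
\qquad
\text{target: } 1 \times 32\,\text{bytes per node.}
```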
Q - You couldn’t do that here because if I just send it to one of my peers only I can’t just trust that it will automatically get out to the network?
A - Sending to everybody is important because it is redundancy in the distributed system. You are doing more than you need to eliminate trust in a single point. If this guy is corrupted, these guys will still help you to relay your transaction.
Ideally we want just one 32-byte announcement per transaction, so that we keep this 14GB of bodies and the remaining 4GB for stuff like block headers and addresses, helping other nodes learn the IP addresses of all the nodes in the network. That spends a bit of bandwidth too, but we don’t care much about that before we fix this. Instead of 14GB of announcements, Erlay does like 1GB or a bit less. How does it do this? Saying we send to 8 is not super precise. I will try to show it here. Let’s say this is the public network, the reachable nodes that help other nodes. They all create 8 connections to somebody else, but they also accept connections from private nodes. Some of them accept like 100 connections from private nodes. If they issue a transaction from here, they will not just send it to 8, they will send it to all the links they have. Since this node has 8 connections, that is very likely to be useless because someone has probably sent the announcement anyway. Erlay is a slightly different scheme. At the first stage, public nodes announce the transaction amongst themselves, trying not to reach everybody but to be one hop away from everybody. If most of this network knows a transaction, then these guys are just one hop away and they can learn it through a super efficient set reconciliation protocol. It is a bit slower for them. Every two seconds this private node takes one of its 8 peers, in some order, and tries to learn the new transactions from that peer in a very efficient way. What usually happens is this private node will have a list…. Actually, I have slides on this I was presenting today.
Q - The public nodes are listening nodes are they?
A - Yeah we are all confused about that, there is no agreement.
Q - Reachable?
A - Yeah reachable
Q - Just not inbound, outbound?
A - Yes. It is a bit of a simplification. For example, I was running a node at the university and it is not reachable from most of the internet, but other universities see it. It is behind a firewall but the firewall is not exclusive. In practice that does not happen that often. That’s why the model simplifies it to a degree.
This was for people who don’t know Bitcoin much; I explained what Bitcoin is.
Current protocol (BTCFlood)
This is a bit more precise. We have 30GB per month total and we spend 18GB on just flooding transactions to other nodes, just announcing them. This is the existing protocol where you send a hash and you get a request if they are missing it. This is what I showed you; this is how bad it is. The part above is the really necessary stuff you need in a protocol with trust minimization. Everything below is what you can optimize, and it is a substantial part, so we are trying to optimize that. We want to increase connectivity to prevent eclipsing, as I just showed you guys. If we increase the connectivity, the part we want to optimize grows much faster; it becomes a huge part of the bandwidth, several times more than we actually need to spend. These are the parts I was mostly focused on.
Prior work
This has nothing to do with block relay. Block relay has very different goals. With blocks it is important to be as fast as possible. Blocks are relayed in milliseconds; every node in the network knows a block in milliseconds, unless you’re running on Tor and there is a slow Tor link, or your internet is slow. There are these cool ideas. Graphene and XThin are in Bitcoin Cash. Topology based routing: the simplest example is the star topology where there is a central hub. It is trivial to optimize but it is vulnerable. A feedback based approach: imagine that to every transaction you attach some information on which nodes have already seen it. If you know that somebody saw it before, you don’t relay it to them, but it is very easy to map the topology if that is attached. You just see a transaction and you see how it was propagated. We cannot do that. We had to invent something new.
Erlay: Efficient transaction relay
Erlay has two parts: rapid low-fanout flooding and set reconciliation. It is low-fanout because you are not relaying to everybody; you relay just to 8 outbound peers. Even these guys, even though they have a lot of inbound connections, are not going to announce it there. What happens in practice is that the public part of the network gets the transaction relayed in a fast way. We reduce the delays. The delays were 2-5 seconds; you remember, not sending a transaction right away but waiting a bit to make it harder to deanonymize. We had 2-5 seconds; now we have half a second to a second. This is fine because what matters is the ratio, I don’t want to go through it. It is sort of safe to reduce the latency. These nodes get it super fast and then the private nodes usually learn it through reconciliation. This is important. So in this example everybody is connected to everybody. Node A receives a transaction from somewhere in the network. Node A relays it to B through the regular protocol: announcing, getting the request and sending the full transaction, in a flooding way. But you should notice that node A doesn’t forward it to C and D, because it is limited flooding; we are not sending to everybody, just a subset. B forwards the transaction to somewhere else in the network. B is connected to C and D but doesn’t choose them for relay. But node B, like every node, has this local state where it tracks which transactions it would have sent to each of its peers. It wouldn’t send it to A because it came from A; A already knows it, it doesn’t need it. But C and D, we don’t know, maybe they are missing it, so we are going to add the transaction to those sets. The same thing happens with D. D receives transaction 2 from somewhere, relays it forward, we don’t know where, and adds it to the set for peer B in its local list. It doesn’t know if B has it, so it is going to add it to the list and consider it later. Now there is a timer at B; it fires every two seconds. Every two seconds B takes a peer from the queue, and in this case it takes D. It says “It is time to reconcile with D, it is time to exchange transactions.” Then there is a super efficient protocol, which I’m going to talk about, to help each other exchange these sets. In this protocol B will help D learn tx 1 and D will help B learn tx 2. It is important to clear the sets: after reconciliation happens, both of them have both transactions and they clear the corresponding set. Then B starts reconciling with C. B still has two transactions for C, and they are going to do the same protocol and sync efficiently. What really makes it efficient is that the timing and the low-fanout relay directions are arranged in a certain way, so that transactions are relayed in a uniform way. Usually what happens in practice is something like this.
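Here is a rough model of that per-peer bookkeeping, with invented names and the actual sketch exchange elided (the real protocol is specified in the Erlay paper; this only shows the state being described).

```python
import random

class Peer:
    def __init__(self, name):
        self.name = name

    def send(self, msg_type, payload):
        print(f"-> {self.name}: {msg_type} {payload}")

class ErlayishNode:
    def __init__(self, peers, fanout=8):
        self.peers = peers
        self.fanout = fanout
        self.recon_sets = {p: set() for p in peers}   # txids owed to each peer

    def on_new_tx(self, txid, from_peer=None):
        flood_to = set(random.sample(self.peers,
                                     min(self.fanout, len(self.peers))))
        for peer in self.peers:
            if peer is from_peer:
                continue                         # the sender already has it
            if peer in flood_to:
                peer.send("inv", txid)           # rapid low-fanout flooding
            else:
                self.recon_sets[peer].add(txid)  # defer to reconciliation

    def reconcile_with(self, peer):
        # ...every two seconds: exchange sketches, compute the set difference,
        # fetch whatever is missing on each side (next section)...
        self.recon_sets[peer].clear()            # clear the set afterwards
```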
Finding set difference with BCH codes
Most of the transactions are shared; there is a little difference. In the regular protocol what Alice would do here is announce all her transactions to Bob. Bob would receive them, compare them to what he has, find that Alice is missing tx 15, send tx 15 over and request tx 10. The most efficient way would be for Alice to somehow guess that Bob is missing tx 10, and for Bob to guess that Alice is missing tx 15, and to just send those over, or announce them. There is math which allows us to do this without guessing. That is called something like “approaching the information theoretic limit”. It is not possible to be more efficient than that. If I want to tell you that London is the capital of Great Britain, I should say at least six letters, at least “London”. I cannot deliver this with less. I could cut the word in half, but then it might overlap with some other city. There is math that allows us to approach that bound and achieve the same efficiency as if we were super lucky. How exactly does it work? There is a tradeoff we take here: Alice has to estimate the size of the set difference. In this case the difference size is 2: 2 transactions that differ in total. If Alice underestimates, if Alice thinks that the difference is 1, the protocol fails. We have some ideas, which we are going to implement, on how to handle that. The default way is to fall back to the existing protocol. Then we have just spent a bit of extra bandwidth compared to the existing one. In my simulations this happens in 5% of cases: we underestimate and we have to run the existing protocol. This is still good enough because we save a lot of bandwidth. If Alice estimates 2, everything is super efficient. If Alice estimates 3, she essentially spends 10% extra overhead, sending one extra item that wasn’t needed. It is still much more efficient than the regular protocol. In the regular protocol you would have to send 7 items in one direction and 1 back. If Alice overestimates and assumes the difference is 3, she will have sent the equivalent of 3 items while the ideal is 2. So Alice computes a sketch, a shortened representation of her local set. She sends the sketch to Bob. Bob computes his own sketch, I will show how. Then Bob combines the sketches with an algebraic XOR-like operation, and from the product of that Bob can see what the actual difference was. Bob can find tx 10 and tx 15 as a product of that. So let’s say every identifier is 32 bytes; we were discussing this before. In the regular protocol Alice would have to send 32 times 7, and Bob would have to send 32 to announce tx 15 back. If Alice has estimated precisely and found the difference to be 2, the sketch will be of size 32 x 2. The size of the sketch depends only on the difference. Then Bob sends his one item anyway.
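Plugging in the numbers from this example (32-byte identifiers, Alice with 7 transactions, true difference d = 2):

```latex
\text{regular protocol: } 7 \times 32 + 1 \times 32 = 256 \text{ bytes},
\qquad
\text{sketch with estimate } d = 2: \; 2 \times 32 + 1 \times 32 = 96 \text{ bytes}.
```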
Q - In the paper you say that the elements of the set are 64 bits so they are shortened identifiers. The sketch doesn’t consist of items that are 32 bytes, they are just 8 bytes, a quarter of the size. Is that right?
A - Yes. One trivial optimization to the protocol would be to send shorter hashes. The full hash is 32 bytes. What if we cut it by 4? Because Bitcoin doesn’t have that many transactions, the short hashes will still uniquely map to the full transactions most of the time. We didn’t do this before because the cost of relaying one transaction across all the nodes is the number of nodes x the number of connections x a constant, which is usually referred to as O(n). What a cool protocol would do, like mine, is remove the connections factor, because it is irrelevant. The cost should not depend on that; it should be the same no matter how connected the graph is. Every node should receive the announcement once. Cutting the hash by 4 optimizes the constant, making it smaller. You sort of solve the problem a bit, but if you want to increase the connectivity, your cost will still depend on it. There is some engineering effort with testing and deploying this feature. We decided that we wouldn’t do it unless we could solve the connectivity part. Then we can do both. We actually minimize the constant as well. Here I was just trying to give an intuition.
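The cost intuition in symbols, for n nodes, c connections per node and announcement size s (my notation, not the paper’s):

```latex
\text{flooding: } \;\text{cost} \approx n \cdot c \cdot s
\qquad\longrightarrow\qquad
\text{reconciliation: } \;\text{cost} \approx n \cdot s
```

That is, each announcement is heard roughly once per node rather than once per link, while shrinking s (shorter hashes) only shrinks the constant.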
Computing a BCH sketch
So how do you compute the sketch? Let’s say you have elements from 1 to n. Say Alice has the set 1,2,3,4 and Bob has the set 1,2,3, and the difference is 1. In this case Alice just adds all the numbers (1+2+3+4) and sends the sum to Bob. Bob takes that sum minus his own (1+2+3), and then 4 is the answer: 4 is the one-element difference we had. Alice just sends one number, Bob computes his local number, Bob subtracts and finds the missing transaction. This works with any numbers. This is how some checksum protocols work. For example, you have 16 digits on your credit card. The last two digits are a checksum, so if you f*** up some random digit the last 2 digits will help you find that you f***ed up somewhere. The last digits are not randomized, they are precomputed: they look at the first 14 digits and deduce the last two from them. This idea comes from the same thing. This is a trivial checksum correcting one error. You can do this with your credit card too; they probably do something fancier, I don’t remember. It is a bit more difficult when you have a difference of two. Let’s say Bob also has 5, so now the difference is 4 and 5. It is not enough to do the sums: Bob will get minus 1, and from minus 1 alone you cannot tell much. On top of this, Alice will have to send the sum of the squares of all her numbers (1^2 + 2^2 + 3^2 + 4^2). Bob does the same locally. Bob still computes minus 1, the first syndrome it is called, and Bob also gets a second number, minus 9, from the squares. They compute these locally and subtract. This becomes a system of equations (x - y = -1 and x^2 - y^2 = -9). It is a known fact that from such a system you can find both x and y. That will be the difference: 4 and 5 is what gets deduced from here. It is a bit more complicated in the real protocol, but it is still understandable. You take these two numbers, plug them into the system of equations, then you solve it. The answer will be the two transactions you were missing; that’s how it works. If you have a difference of 3, then there will be 3 equations in x, y, z and 3 numbers that you plug in there.
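Here is that two-element example worked end to end in Python, over plain integers for readability. The real minisketch works over GF(2^m) and handles arbitrary difference sizes; this toy assumes exactly one element unique to each side and a nonzero first syndrome.

```python
def power_sums(s):
    return sum(s), sum(x * x for x in s)

def reconcile_two(alice, bob):
    """Recover (a, b), where a is only in Alice's set and b only in Bob's,
    from two 'syndromes': the differences of sums and of squared sums."""
    s1 = power_sums(alice)[0] - power_sums(bob)[0]  # a - b
    s2 = power_sums(alice)[1] - power_sums(bob)[1]  # a^2 - b^2
    total = s2 // s1          # (a - b)(a + b) / (a - b) = a + b
    a = (s1 + total) // 2     # solve the two equations for a
    return a, total - a

print(reconcile_two({1, 2, 3, 4}, {1, 2, 3, 5}))  # -> (4, 5)
```

With Alice = {1,2,3,4} and Bob = {1,2,3,5} the syndromes are s1 = 10 - 11 = -1 and s2 = 30 - 39 = -9, giving a + b = 9 and hence a = 4, b = 5.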
Q - It looks like it is related to the fact that for example if the a’s here were roots of a polynomial you’ve got the sum of the roots and the sum of the square of the roots. Given that set of data I think it is called Vieta’s formulas? You can invert that and find the polynomial coefficients. There is a relationship between those two?
A - Yes, it works with polynomials and we do invert. Inverting a polynomial, I believe, is not efficient, so we do something else instead, or we do it in a non-trivial way. We have a library, it is called minisketch; we published it last winter.
Q - I remember there are some things about the set of equations that are very specific. It is not a random set of equations which means you can do it in a really efficient way right? The coefficients are repeated…
A - Pieter did a lot of crazy work precomputing things like square roots for everything up to 64 bits. He probably has these huge precomputed values in some dictionary in the library so that every time you need one you just look it up: you go to the array and read it from memory instead of computing it every time.
Q - The same philosophy is in libsecp256k1 where there is some context flag. There is pre-computation, enormous quantities of different multiples so that when you finally have to do an operation like find the pubkey from the private key or whatever it happens in a microsecond instead of a millisecond. That’s why everything is so fast, it throws it all in memory in advance.
A - I believe there it also helps with constant time. It is important in cryptography to do everything in constant time. Because of how computers work, if you multiply two numbers like 2 x 3 it is computed much faster than 17 x 13, even though for the computer it shouldn’t matter: they are the same data structures, usually integers. It is faster just because of how the transistors work. By looking at the electricity consumption of your mobile phone, for example, if you have access to the battery, you can guess what the numbers were. If that’s a private key then maybe not on the first attempt, but by looking at the energy your phone spends you can guess what the private key was, bit by bit. Because of that there are different countermeasures. One way is to remember both of these products: precompute them, and every time you see the two numbers you go to some cell of a big memorized table and look it up. Both of these operations then take the same electricity, which obfuscates the fact that the numbers were different. That is a trivial introduction to constant time. It is good practice to do this if you do cryptography.
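As a toy illustration of the lookup-table idea (Python cannot actually give timing guarantees; real constant-time code is written much more carefully, typically in C or assembly), consider a hypothetical precomputed product table:

```python
# Hypothetical illustration: precompute every product once so that
# multiplying small numbers and large numbers is the same single
# memory read, rather than a data-dependent computation.
N = 32
PRODUCTS = [[a * b for b in range(N)] for a in range(N)]  # built up front

def mul(a, b):
    return PRODUCTS[a][b]  # one table lookup regardless of the operands

print(mul(2, 3), mul(17, 13))  # 6 221
```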
Minisketch (PinSketch implementation) benchmark
Minisketch is better than the existing alternatives; the costs are milliseconds. The most expensive operation is, once you have found the syndromes, going from them to the actual transactions. Solving the equations and going from syndromes to transactions is expensive; computing the square roots is not. I measured that operation, on a cheap machine, because we want it to be fast. We are usually in the 5-20 difference range. We reconcile every two seconds and Bitcoin has something like 7 transactions per second, so we are going to be somewhere around 100 microseconds, which is pretty good.
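The back-of-the-envelope version of that last claim (my own arithmetic, ignoring how the difference splits across peers, so only a rough sanity check):

$$7~\text{tx/s} \times 2~\text{s} = 14~\text{transactions per round},$$

which sits inside the 5-20 difference range measured above.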
Erlay bandwidth benchmark simulation
That’s the result. We save a lot of bandwidth and the bandwidth is constant, so it doesn’t scale with the connectivity. Those are the two biggest contributions. It takes a bit longer to relay transactions, but I told you at the beginning of our discussion why that is fine. We actually realized that we slightly improved chain quality, in that miners are better in sync. It happens because in this public network the relay is faster: we reduce the latency, so nodes relay faster. Even if a miner runs a private node, it sends a block to the public network where everyone is in sync, so it travels very fast to the other miner. In the regular protocol some nodes won’t have the transactions and there will be extra latency. We actually improve this even though the overall end-to-end latency of a transaction across the network might be a bit slower.
Q - I think I need to dig more into the math, because if there are lots of transactions and in your basic case one party’s sum is like 100 and the second party’s sum is like 105, that doesn’t necessarily mean that the second party is missing 5. They could have completely different transaction sets that sum up to something similar.
Q - A crucial part of the protocol is that they have to estimate what they think the approximate difference is going to be. You said if it is 5; but it could be 5 different transactions in the symmetric difference of the two sets or it could be 20. According to the paper they have to estimate, be quite conservative and slightly overestimate. If you underestimate you fail?
A - If you underestimate you fall back to the existing protocol. Because we had the initial reconciliation exchange with the node, it means we’re spending more than we would spend in the original protocol, and we don’t want that. I did a lot of experiments and we found how to make the estimate reasonable.
Q - If it was just integers then a lot of the time you would have no idea whether the second party was just missing transaction 5 or whether they had completely different transaction sets that summed up to totals that were different by 5. But because they are square roots they’ve got crazy decimal points?
Q - We are talking about squares not square roots. You were saying square roots.
A - Sorry, it is squares; it is correct on the slide. There is one cool trick I cannot explain here because it is a bit complicated: it helps you detect problems. If you underestimated, then by looking at the two numbers you ended up with you can tell that something went wrong, that we underestimated and should do something else. I cannot explain it right now, but say you estimate a difference of 2: you have these two numbers and you probe them, can I recover a solution from this system? Systems of equations sometimes don’t have a solution, and in that case you know that you’ve failed.
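Continuing the toy Python sketch from earlier (this is not minisketch’s actual failure-detection trick, which works on the decoded polynomial, and in the integer toy some underestimates could decode to wrong elements): if the true difference exceeds the estimate, the equations can simply have no valid solution, and the node notices that and falls back.

```python
# Reuses syndromes()/recover_diff() from the earlier sketch: probe whether
# a difference-of-2 decode succeeds, and fall back to plain announcements
# (the pre-Erlay protocol) when it does not.
def reconcile(alice, bob, estimated_d=2):
    try:
        return recover_diff(alice, bob, estimated_d)
    except ValueError:
        return None  # decode failed: we underestimated, fall back to flooding

# Real difference is 3 but we estimated 2: no solution, so we fail cleanly.
print(reconcile({1, 2, 3, 4, 5, 6}, {1, 2, 3}))   # None
```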
Q - It is almost like a binary pass/fail?
A - Block relay protocols have ideas that often succeed like 99.999% of the time. The same math didn’t work here because those protocols have a constant overhead. Our cost corresponds closely to the difference; there is no flat cost you need to pay. In those protocols used for blocks there is usually a flat overhead of around 1 kilobyte, and that would eliminate the whole usefulness here: 1 kilobyte is more than the entire reconciliation, which for blocks is fine.
Various configurations of the Erlay-like protocols
There was a lot of trying out configurations. Should I relay to 2 outbound peers and some inbound, or something else? We found that the most bandwidth-conservative configuration is pretty fast. We don’t need to optimize away half a second of latency, but it is possible. This is the conclusion. The last two things are just side effects: I was trying to measure whether we were making something worse and it turned out to be a bit better. This was the result.
Q&A
Q - What drawback would this added latency cause? Obviously it is not a problem for mining because it is not about block relay?
A - There are fundamentally two dangerous things about increasing transaction relay latency. The first is usability: if it takes 15 seconds for a user to see a transaction in the wallet then it probably becomes bad, but if you go from 3 seconds to 5 as I do, we think it is fine. The second problem is that if miners are not in sync and nodes on the network are less synced in terms of transactions, blocks are relayed slower and chain quality goes down; there are a lot of wasted blocks. I just tried to show that this is not the case. While the overall latency goes up a bit, the latency which matters gets better.
Q - But they sync in the same way as the blocks? It would just be transactions that would be different?
A - Transactions affect block relay, that is the important part. The compact blocks protocol Matt designed works as efficiently as possible only when everybody agrees on the transactions, the transactions in the block they are trying to relay. I was talking about how nodes send short IDs and then the receiving node tries to reconstruct the block, the full transactions, from the short IDs locally. If it is missing at least one, there will be an extra round of communication. Relaying transactions slowly might potentially harm this, but somebody asked “does it?” and then I measured. Because the protocol is arranged in this specific way, that problem doesn’t exist. I’m not sure I want to go any deeper.
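A rough sketch of what that reconstruction step looks like, with a made-up `short_id` stand-in (BIP 152 actually uses salted SipHash-2-4 short IDs and `cmpctblock`/`getblocktxn` messages; this is only the shape of the logic):

```python
import hashlib

def short_id(tx: bytes) -> bytes:
    # Stand-in only: BIP 152 uses per-block-salted SipHash-2-4,
    # not truncated SHA-256.
    return hashlib.sha256(tx).digest()[:6]

def reconstruct_block(short_ids, mempool):
    """Map short IDs back to mempool transactions; report what is missing."""
    by_id = {short_id(tx): tx for tx in mempool}
    missing = [sid for sid in short_ids if sid not in by_id]
    if missing:
        return None, missing  # would trigger an extra round trip for these
    return [by_id[sid] for sid in short_ids], []
```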
Q - Can Erlay work with Dandelion?
A - Dandelion is another transaction relay protocol that focuses more on privacy: how to prevent spies from learning the origin of a transaction. Instead of relaying to all the peers you have, you choose one peer and relay only to them; that one relays to one more, and so on. Then, on average after ten hops (it is randomized), a node starts relaying to everybody. To a spy it looks like that node was the source. It is very unlikely that there will be an attacker on one particular link during the stem phase; it is more likely the attacker sees the transaction in the fluff phase, where it is sent to everybody. That’s the Dandelion idea. There are difficulties with implementing Dandelion in Bitcoin because of the implementation we have; it is very hard to build Dandelion in a robust way. It is either exploitable or it requires memory for a separate mempool. You have maybe 100 megabytes for the mempool; to build a secure Dandelion you would have to allocate another 100 megabytes just for Dandelion transactions. If you don’t, it is either trivial to attack the system or trivial to make Dandelion useless by attacking the protocol itself. We were trying to see if somebody could implement Dandelion without allocating another 100 megabytes and it seems like nobody can. Some researchers are working on one-hop Dandelion. It is a bit less private but it is more practical: there are no problems with it, you don’t need another mempool. We are looking forward to that; somebody else is working on it, and if that works then we’ll probably integrate it. The question was whether this is compatible, and yes: we first have a low fanout phase and you just replace that with Dandelion. Instead of a low fanout to 8 peers you have the stem, this one link, and after that you do the low fanout. It is totally compatible with Dandelion. I hope Dandelion gets implemented sometime; it is a cool idea but it is hard for practical reasons.
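A toy rendering of the stem/fluff decision (illustrative only, not the actual BIP 156 logic; `peers` and its `announce`/`forward` methods are hypothetical): each hop flips a biased coin, so the stem length is geometric with a mean of about ten hops for a 10% fluff probability.

```python
import random

FLUFF_PROBABILITY = 0.1  # mean stem length ~ 1 / 0.1 = 10 hops

def dandelion_relay(tx, peers):
    """Stem: forward to one random peer. Fluff: announce to everybody."""
    if random.random() < FLUFF_PROBABILITY:
        for peer in peers:
            peer.announce(tx)             # fluff phase: normal flooding
    else:
        random.choice(peers).forward(tx)  # stem phase: one link only
```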
Q - …
A - It will be just 64 bits. You will have a few collisions, but because there are not that many transactions it won’t happen frequently, so it is probably fine.
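A quick birthday-bound sanity check on that (my own arithmetic, not a number from the talk): with n transactions in flight, the chance that two random 64-bit IDs collide is roughly n(n-1)/2^65.

```python
def collision_prob(n: int, bits: int = 64) -> float:
    """Birthday-bound estimate for n random `bits`-bit short IDs."""
    return n * (n - 1) / 2 ** (bits + 1)

print(collision_prob(10_000))   # ~2.7e-12: collisions are rare events
```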
Q - Does this work have anything to do with anycast networking?
A - No. Anycast is this whole computer science concept where instead of doing a regular send and receive of messages you subscribe to some channel and you’re listening to it, basically. It was never practically used in systems like this; in some systems it was, but it was always more research stuff. I don’t have a good answer as to why, but no, it has nothing to do with this.
Q - On TFTC you spoke about botnets spamming Tor nodes on the Tor network to bring it down. Can you apply that to Bitcoin?
A - Two months ago somebody showed that it is super cheap. Tor is the anonymous browser and the network associated with it. Somebody showed that it costs only $10,000, either a day or a month, I don’t remember, which is cheap enough for the American government, to make the Tor network very slow and almost unusable. Every time a user tries to use it, it is very slow, so nobody will use it. It is a very low cost attack. What the attacker does is hire a botnet. It is called a stresser, as in stress-testing my service, checking whether it is robust enough for high load. It is usually a lot of hacked devices on the internet, smart fridges and things like this; sometimes it is just computers. A hacker usually has central control over those devices and tells them to send traffic to a victim. If you choose Tor relay nodes as the victims, they think it is regular internet traffic and try to anonymize it; they can’t tell what is honest and what is malicious. That is how the attack works. I wanted to show, and I still want to, what you can do to Bitcoin with this attack. Can miners hire botnets to attack other miners, for example? If you use this to attack the node of another miner they become uncompetitive: they can’t relay their blocks because they don’t have good internet while you are attacking them. I’m still looking forward to that work. I did a write-up, but I’m waiting for somebody with botnet expertise to come along; I’m trying to find them. It is hard to estimate, for example, whether those botnets and stressers in practice have enough capacity to attack the Bitcoin network. The Tor network is smaller, there are fewer computers, so it is not clear. I’m looking for some PhD student who wants to research how much capacity they actually provide and how much it would cost in total. Amazon’s cloud has some machine learning to filter botnet traffic, so nodes running on Amazon would not be attacked this way, but that is only about 5-10% of our nodes in total, and we want to make that share smaller. That should be taken into account. As for the understanding-botnets part, I don’t have enough time; it is probably like 100 hours to go through it. With somebody with the right expertise, whom I might find at a conference like this, it would be much faster. I’ve switched to a different direction for now, so that work is frozen.
Q - As a researcher or a guardian of the network, do you have to employ different strategies from the ones you would ship in the Core software to normal users? In the previous discussion around potentially detecting eclipse attacks, you’d potentially want to set up certain strategies on your node as a researcher that would be very different from the software you’d push to normal users. Is that true? Maybe as a guardian of the network you’d want to try to detect eclipse attacks in a way that a normal user wouldn’t bother with. Maybe you’d have a few stable connections but then rotate your other connections, which would be a totally different strategy from what a normal user would do?
A - I don’t think it would help much. Firstly, I don’t see a good way to detect it. There are ways to detect spies. For example, Greg Maxwell from time to time asks people on IRC to send him their list of peers and then he intersects the sets. There are a lot of nodes on the network, so if Greg and another person have a common peer that is a very unlikely event; it means the peer is probably a spy. If three people have the same peer it is almost certainly a spy and we should ban it. I guess that is sort of what you are talking about. One of my research ideas is to do this without telling Greg the actual set: there are multiparty computation protocols that let you do it privately. I’ve never had time, and I don’t know how well researched this is; maybe it is a weekend hacking project. It could run on a centralized server for some applications. I would totally send my peer list in a private way to Greg’s server; it doesn’t matter where it runs if we can find the spies this way. It could also be done on the peer-to-peer layer, but that is probably going to be more complicated and I’m not sure we need that complication in the protocol itself. Also, grinding new IP addresses is not that hard: once we do this, the attacker can, sometimes trivially, just use more IP addresses and it doesn’t work anymore.
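The intersection heuristic itself is simple enough to sketch (a toy version with made-up addresses; the hard part, as noted above, is doing this without revealing the peer lists, e.g. via multiparty computation):

```python
from collections import Counter

def suspected_spies(peer_lists, threshold=3):
    """Addresses appearing in `threshold`+ independent peer lists are suspect:
    with tens of thousands of nodes, honest overlap is very unlikely."""
    counts = Counter(addr for peers in peer_lists for addr in set(peers))
    return {addr for addr, n in counts.items() if n >= threshold}

lists = [{"203.0.113.5", "198.51.100.7"},
         {"203.0.113.5", "192.0.2.9"},
         {"203.0.113.5", "198.51.100.44"}]
print(suspected_spies(lists))   # {'203.0.113.5'}
```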
Q - I suppose an academic researcher looks very much like a spy so it is very difficult to determine whether it is a spy?
A - They do. They complain to us that sometimes we disconnect them for no reason.
Q - Have you looked at the P2P challenges on the Lightning Network or are there enough challenges on Bitcoin base layer?
A - I’m trying to contribute to Lightning too, but it is changing too fast. I’m thinking about something and then the next month it completely changes. Somebody published an attack on Lightning in the last month or so where the attacker’s channels attract too much traffic and then they fail all the payments: they provide a lot of capacity so a lot of payments go through them, but they fail all of them, which makes Lightning much slower. You don’t spend any money; you only spend the onchain fees to open those channels, you don’t pay transaction fees. Then they published it and the Lightning guys said “we accidentally figured this out last week”, so now it doesn’t work in exactly the same way. It is probably still cool, but I’m trying to find a more creative, fundamental angle. There are applications for Erlay in Lightning, but it is also sometimes hard to coordinate with Rusty Russell etc.