
Bitcoin Scaling Tradeoffs

Speakers: Adam Back

Date: April 5, 2016

Transcript By: Bryan Bishop

Category: Conference

Media: https://www.youtube.com/watch?v=HEZAlNBJjA0

Bitcoin scaling tradeoffs with Adam Back (adam3us) at Paralelni Polis

Institute of Cryptoanarchy http://www.paralelnipolis.cz/

slides: http://www.slideshare.net/paralelnipolis/bitcoin-scaling-tradeoffs-with-adam-back

description: “In this talk, Adam Back overviews Bitcoin scaling and discusses his personal view about its tradeoffs and future development.”

Intro fluff

And now I would like to introduce you to a great guest from the U.K. or Malta, Adam Back. Adam is a long-time professional cryptologist and also a Bitcoin expert. He’s the inventor of hashcash, which is used, for example, by Bitcoin. I heard that Satoshi Nakamoto was inspired by hashcash.

Adam: I think I got the first email anyone received from Satoshi; he asked about hashcash.

The second thing that slush0 told me was that hashcash is used by the bitcoin mining protocol. This is a really big thing. Also, I know hashcash is used by many anti-spam solutions. If you have implemented some anti-spam solutions, quite likely you are using hashcash.

I also read on your Wikipedia page that you developed a system based on David Chaum’s ideas. David Chaum is a cryptologist. I invited him to a cryptoanarchist conference last year. Unfortunately he probably didn’t read his emails, so maybe next year.

And the last thing is that Adam is also President of the company Blockstream. That’s a very brief introduction. Now Adam, it’s your turn.

Talk intro actual

Adam: Thank you. Hi. So. I am going to talk for a relatively short period of time. People should ask questions. Sorry. Okay. Yes. So, people should interrupt and ask questions. The slides and discussion are mostly to give some structure to the conversation, so feel free to ask questions as we go, or at the end, as you prefer.

Many hats

Like many people in Bitcoin, I started as an enthusiast. I saw bitcoin and found it exciting and interesting. Like many people, I tried to do something to move bitcoin forward by starting a company to improve on the technology and so forth. As other people can probably attest, once you start a company you wear multiple hats, so I have to preface what I’m saying by explaining which hat I am speaking with. I am speaking as an individual who very much likes bitcoin and wants it to succeed. As was said in the introduction, I have been playing around with and doing applied research in electronic cash systems for a long time. I also worked at a company doing Tor-like things before Tor existed. And ecash systems relate to this as well. I am also not a spokesperson for Bitcoin Core.

Bitcoin Core is a decentralized group of people that works by consensus, perhaps similar to the way the IETF runs discussion forums. I am speaking as an individual. These sentences are the only Blockstream part: Blockstream as a company is reliant on Bitcoin and needs it to scale and succeed, same as any other company. Everyone at Blockstream owns bitcoin themselves and is excited about bitcoin.

Requirements for scaling bitcoin

With that out of the way, I thought I would talk about it in terms of requirements. If you have been through software engineering in startups or various projects, you know that you often interface with a customer by stating requirements. They are not always technical requirements; they are about the effect that you want to achieve. There has to be a constructive conversation with a client who is buying something, or maybe they want to buy an open-source system. It’s useful because then the conversation is at a level where hopefully everyone understands, either the customer understands or the user understands, and the technical people can make sure they have the same idea of what success is or what they want. In Bitcoin, it’s a little bit ambiguous who the customer is. It’s an open-source project, it’s a currency, it’s a network, it’s a peer-to-peer currency, so it’s also a user currency, so we should care most about what users want and why they value bitcoin. This can be difficult to determine because there are many users with different viewpoints.

Bitcoin also depends on companies to succeed. There are many companies popularizing access to Bitcoin, like exchanges and hosted wallets, to make it simpler to use Bitcoin, and miners who are an important part of the ecosystem to secure it, including mining pools. If we’re going to do requirements engineering around bitcoin, we have to balance the interests of users and companies; different parts of the ecosystem, like miners, payment processors and exchanges, may have different and slightly conflicting requirements.

If we were to optimize solely for the benefit of miners, we might find the outcome to be one thing, but optimizing for the sole benefit of payment processors might point in a different direction. Each thing you do tends to have an impact on someone else, so it’s sort of a zero-sum system in that sense. The system needs to be in balance, and we don’t want one sector to have an overly strong influence over the others. One part of the ecosystem gaining an advantage over the others is not something to be desired. The companies are, in some respects, trying to improve the value for users: users buy their services because they find them convenient, and companies succeed by delivering value to users and getting more users to start using bitcoin and build the network effect.

Let’s talk about a what-if scenario. We say that we want bitcoin to scale. But how much? Let’s put some numbers on it. Hypothetically, say we want the scale to double every year for the next three years, and let’s draw a rough outline of how that’s going to happen. As a requirement, that’s something that companies can think about; hopefully we can do better, but it’s something they can plan around, they can look at user growth numbers. Some people on the business side of the ecosystem have said that they think it has been scaling at around this rate over the last year or two. So it’s not an arbitrary number; it has been scaling at maybe 2x or 2.5x, something like that. And then the more recent interesting technology is Lightning and other layer 2 protocols, where we get much more exciting increases in scale. It depends on the usage pattern, it depends on recirculation, but you hear numbers like 100x to 10,000x transaction throughput using the same base technology. We’ll talk in a bit about how that works, roughly.
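
As a rough sketch of that arithmetic (the ~3 transactions per second on-chain baseline is an illustrative assumption, not a figure from the talk):

```python
# Rough arithmetic for the scaling targets above. The ~3 tx/s on-chain
# baseline is an illustrative assumption, not a measured figure.
base_tps = 3.0

# Doubling every year for three years: 2^3 = 8x on-chain.
for year in range(1, 4):
    print(f"year {year}: ~{base_tps * 2**year:.0f} tx/s on-chain")

# Layer 2 multipliers quoted in the talk: 100x to 10,000x on top of that.
for mult in (100, 10_000):
    print(f"layer 2 at {mult}x: ~{base_tps * 8 * mult:,.0f} tx/s equivalent")
```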

If we can achieve these targets taken together, I hope that the companies could be happy with them. It’s a conversation topic: they could come back and say I need more, or I need it sooner, but at least you’re having a conversation that the people designing protocol upgrades can work with and consider.

What is bitcoin?

There are other system requirements that are sort of invariants or hard requirements, which are around bitcoin’s properties. Bitcoin is, very importantly, permissionless. The internet opened up permissionless innovation, and that permissionless nature is significantly credited with driving the fast pace of innovation. Many people think that bitcoin will bring that rate of innovation to financial payment networks, which have so far been relatively closed, more like the closed telephone networks of the pre-internet era. There’s some analogy there.

There are some other interesting bitcoin properties that we don’t want to lose, such as fungibility, which is important for a payment mechanism. Fungibility is the concept that, like cash, one bank note or one coin is the same as any other. This concept arose because sometimes you can distinguish between bank notes by their serial numbers. In 17th century Scotland, there was an old case where a businessman sent some high-value bank notes to someone else, they got stolen in the mail, and they got deposited in a bank. He tried to sue to get them returned as his property, there was a court case, and the courts decided he should not get the notes back, because otherwise people would lose confidence in their money, and reliable money is very important for an economy. That started the legal concept of fungibility, and I guess other countries arrived at similar concepts for similar reasons. It’s important for bitcoin to have fungibility in a practical sense. You have seen attempts to trace bitcoin, or to find bitcoin that has been used at Silk Road or is connected by two or three hops to a transaction used there, but if we have this activity then people will end up with coins they won’t be able to spend because of the hops and taint. The way this fails is shown by the Scottish bank note case: if the ruling had gone the other way, such that the merchant got his notes back, then nobody would want to accept bank notes without rushing to the bank and depositing them. This would cause the currency to fail at its purpose. It’s important that bitcoin is fungible and that it has privacy, because too much transparency causes taint.

We do have some functional requirements; I put these in rough priority order. These are obvious things that we need the system to do to be effective. It has to be secure. We have to scale it, so that everyone can use it. We want it to be reliable and predictable, with a good user experience. You want the system to be cheap so that many people can use it, such as in third world countries where the average spend is much lower, or for different types of spending.

Since I was talking about requirements, I think it’s interesting to ask, what is bitcoin? It’s actually an interesting conversation to have with anyone you meet who is interested in bitcoin: to ask them, what’s most important about bitcoin, to them? Another way to ask this would be, which feature of bitcoin, if it were lost, would make you stop using it? Just to run through the top ones quickly: it’s a better bearer ecash, cash-like, irreversible with no way to take back payments, unseizable because it’s bearer, and, very importantly, there’s no third party or central point of trust, no bank. That’s important to consider as a requirement for Bitcoin. We could obviously scale bitcoin by running a central server that holds all the bitcoin, but that would lose the important differentiator that there’s no third party that has to be trusted. We talked about permissionless already. It also has to be borderless and network neutral; there should be no central party at the base layer that says they don’t like some transactions or whatever. In some countries, well, I guess Wikileaks is an example: they had their payments blocked. It was not done through legal means, there was no court order deciding it. Some politicians decided to make some phone calls and ask some favors of some large companies, and their payments were blocked. In bitcoin there’s nobody to call up to achieve that kind of effect, and thus you have seen some adoption of bitcoin by parties that have that unfortunate vulnerability. Fungibility we discussed, privacy we discussed; it’s a virtual commodity, gold-like, virtual mining etcetera. Specifically in terms of its economic properties, it’s not a political currency like fiat currency.

It’s purely free market, there’s no central party that can adjust the rate of new coin production, nobody can do quantitative easing or buy the money back. There’s no party that can adjust inflation, there’s no central party with special authority to set an interest rate. It’s purely free market.

If we look at the standard three main properties of money, store of value, medium of exchange and unit of account, it depends on your viewpoint, but I think from my perspective store of value has probably been the most strongly achieved of the three. Medium of exchange somewhat; people are doing bitcoin transactions, sure, but you could argue that perhaps more of the value comes from an investment perspective for now. They are related. Maybe one is important for a period of time, and another one becomes important later. I have heard people argue that it’s important for bitcoin to have a stable exchange rate and stable robust value, because this would make it easier to use as a unit of account. This area is subject to debate because it’s about economic opinion. Unit of account, I guess, you know, the coffee shop downstairs and everything you buy in this building is a test case for unit of account, but because of the volatility, most people have been thinking of bitcoin by pricing it in dollars or other currencies. Maybe we will get there in the future though.

Upgrade methods

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=17m50s

Next I wanted to talk about upgrade methods. There are a number of different technical means by which we can upgrade the bitcoin network, to add new features to bitcoin. They have different tradeoffs. I think possibly the current scaling discussion should have had a conversation around upgrade methods at an earlier stage, for example at the Scaling Bitcoin conferences or something. There needs to be a more timely discussion.

https://github.com/bitcoin/bips/blob/master/bip-0099.mediawiki

But let’s have that discussion now anyway. People are probably familiar with soft-forks and hard-forks. There are some different bitcoin upgrade method variants, and some have had more recent analysis worth talking about. People are still learning new things about bitcoin. There are people who understand the code, but new realizations and understandings about the implications of the protocol, and the ways you can extend and upgrade it, are still being found. That’s a topic of ongoing learning, even amongst Bitcoin developers and so on.

An example of that is a “firm fork” or “soft hard-fork”, which has been talked about recently. It was quite obscure, not widely understood or known before. Let’s talk about the tradeoffs. Backwards compatibility means: will the existing bitcoin wallets in use today still be able to send transactions after the upgrade? One of the marks on the slide is wrong; it should be a tick, not a cross. A soft-fork is backwards compatible because existing wallets can pay to new wallets and new wallets can pay to old wallets. Full hard-forks are not backwards compatible; by design they change the formats to improve something fundamental. All clients, even smartphone clients, even full node clients, have to upgrade after a full hard-fork. A simple hard-fork is a restricted hard-fork that smartphone clients don’t have to upgrade with. The firm fork is a kind of hybrid between the two, which we will talk about in a moment.

One of the factors for an upgrade is, how quickly can you do it? How safe is it? A soft-fork is quite safe to use and has been the method of all previous planned upgrades to bitcoin, of which there have been quite a few, including some last year and some ongoing this year. It’s quite well understood how to do them and what the security properties are, and it doesn’t require tremendous coordination. Mostly miners have to focus on the upgrade, and people who are running merchant software or full nodes, they have to do an upgrade reasonably quickly but they have some flexibility in that, they are protected by miners if they take a little bit longer to do the upgrade.

A firm fork is also fast. With a simple hard-fork it depends: if you do it very quickly it probably introduces risk, but if you want to be conservative, then it takes longer than a soft-fork. That’s a potential topic for discussion when someone is looking at these tradeoffs. They might say, I want to take the risk because I want to see it quickly, but others might want to take longer and be slower to be more secure; perhaps they would prefer a soft-fork done first and then a hard-fork done later, because the soft-fork could happen more quickly. And a full hard-fork is similar in terms of speed.

Then there is what I was saying about how we have experience doing soft-forks. You don’t need to coordinate with literally everyone to achieve a soft-fork upgrade. The soft-fork and the firm fork have similarity to the historic upgrades in bitcoin so far. The hard-forks require everybody in the network to upgrade. That requires much closer coordination than has been attempted before in bitcoin. It’s something that could be done, but there are new coordination risks. In the network, you can look at the software currently running and find some quite old versions still running. We don’t know if they have value depending on them, but it’s clear that people have not been keeping up to date with versions, so that might be something to be aware of, because if we were trying to do a coordinated upgrade we would have to somehow contact them, and work out whether they have a timeframe for upgrading, or some exercise like that.

Another topic is whether a fork is opt-in or not. Do the people who run the software all have to upgrade? Do they make the decision? With hard-forks, everyone has to upgrade; everybody has to get together and agree to upgrade for the system to upgrade, so the users have a direct choice: if a proposal was made for a hard-fork, users could veto it by just not upgrading. Conversely, with a soft-fork it’s somewhat more automatic; the miners are making the decision. It’s indirect in the sense that we expect miners would want to make an upgrade only if users wanted it, because miners depend on users who want bitcoin and like bitcoin. If users don’t want it, it seems unlikely that miners would want to do the upgrade. But still, there’s a more direct decision with a hard-fork. This comes with the cost that the upgrade is more complicated to do, because you have to coordinate with all those people.

The next line in the slide is about whether SPV or smartphone wallets need an upgrade. With a simple hard-fork, that is generally not needed: due to some limitations in the security model that most smartphone wallets work with, it turns out you can increase the block size and the SPV wallets won’t notice; the software will continue to operate. With a soft-fork, users don’t have to upgrade smartphone wallets because they continue to function, and the transactions are backwards and forwards compatible, but users get an advantage by upgrading.

Another aspect of software is technical debt: built-up long-running bugs or design defects which you generally want to fix, because if you don’t, you tend to run into problems in software. We will talk about this a little bit more. The soft-fork proposal in bip141 segregated witness includes a number of technical debt fixes, some technical design fixes which will help many companies and use cases and generally help the state of the software. It’s hard to know what fixes other forks would include because it depends on what you choose to implement in them. For a firm fork, the tradeoff would be that the more fixes you implement, the longer it would take to do the design, implementation and testing. One of the criteria is, how quickly can you do the upgrade? Typically that means the simple hard-fork has included the minimal possible features: no technical debt fixes, just minimal workarounds to avoid immediate problems. So that’s why I say, if it’s done quickly, then there are only minimal fixes, which has the side effect that the problems caused by those bugs will continue to persist for 6 or 12 months more, and the features that rely on those fixes, like Lightning relying on malleability fixes, might also get delayed, therefore delaying the higher-scale opportunity that we talked about in the requirements for layer 2. In the case of a full hard-fork, the technical debt fixes are actually the motivation: typically you wouldn’t do a full hard-fork unless you really wanted to do some overhaul or data structure reorganization in order to fix bugs or defects, so it would tend to have fixes in it as an assumption.

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=28m40s

So this is an interesting question. You could look at a hard-fork as, we were talking about it requiring everyone to opt in, so it’s a form of referendum in a way, where you need almost unanimous support and agreement for it to happen safely. Conversely, with soft-forks, you know, we hope that miners will take into account that they should only make an upgrade if users want the upgrade; but if the feature is not controversial, then it’s cheaper to do a soft-fork, because we have lots of experience doing them, they can be done quickly, and they don’t require as much coordination.

If we look at some different examples of controversial and uncontroversial things: most of the bitcoin differentiators that we talked about in the previous slide are things that users would be very upset about if they were removed from the system. A major reduction in privacy, an imposition that you require permission to use bitcoin at all, things of that nature; another one would be increasing the total number of coins. These are things that no user and no ecosystem company that uses bitcoin would want to contemplate, so we never want to do those. On the other end of the spectrum are uncontroversial things, things which preserve all of the interesting features of bitcoin and are just better: they fix limitations, faster transactions, more scale, cheaper transactions, bug fixes, and so forth; in principle they should be uncontroversial. Where it gets interesting is that there are, unfortunately, tradeoffs, and we get situations where we would like both sides of the tradeoff to go in our collective favor, but this can be difficult to achieve in the short term. One example is the scale versus security and permissionlessness tradeoff. I’ll talk about why that tradeoff arises in a bit. Centralization can be a problem because at the extremes it tends to put the properties that make bitcoin interesting at risk of erosion. To make a simple and extreme example, if all bitcoin mining ended up being controlled by a single pool or single miner, or a very small number of them, and there were companies controlling them, perhaps in the United States or another country, there would be a risk that the company would be asked by law enforcement to make policy changes that users would not like. And so we can see that decentralization is the main technical mechanism providing many of the interesting bitcoin differentiators.

So here is a question, and I’m not sure what the answer is. I think if you’d asked people last year or the year before whether they preferred a hard-fork for every change, or were okay with soft-forks, they might have said hard-forks are generally better because they give users the choice to opt in. But the ongoing discussion about scalability has shown that referendums are expensive and have their own controversy. In the same way that if a country has a referendum, it encourages people to think about the decision and campaign about it and think about whether it’s good or bad for them and try to pick an outcome that is advantageous for them. So if we are actually making a change that is uncontroversial, like a small increase in scale for example, maybe you could argue in hindsight that a referendum is more expensive than the benefit you get. If it’s not a question that users really care about, like nobody really disagreeing about a small increase in scale or something, then it might be quicker to just do it by soft-fork, if we use that as the bar. That’s a question that people can have different views on.

Decentralization

So regarding decentralization, this manifests itself with miners and pools. There’s a problem called the orphan rate. Because it’s a distributed system, and miners are running a lottery winning blocks every 10 minutes on average, there’s a certain amount of time it takes to transmit a block through the network. There’s a chance that two miners will create a block close to the same time; one of them will win, one of them will lose, and the one who loses will lose mining revenue. Miners keep a close eye on their orphan rate; they monitor it, they try to optimize it away. People say that this is due to bandwidth, miners are sometimes in remote locations with bad bandwidth availability, but it turns out the limit is latency and not bandwidth, because the actual block is transmitted at the end, in the last 3 seconds or so. So 1 megabyte is not much in bandwidth terms; it’s actually quite technically difficult to achieve reliable and fair broadcast in 3 seconds, meaning that small miners and large miners tend to receive the block in similar timeframes, and this is not happening. In many ways, you are already seeing side effects of the broadcast latency issue. What tends to happen is that people use workarounds. One thing they do is use a pool instead of mining on their own; so you have slush0 and slush’s pool. If you were solo mining and had slow bandwidth, you would use a pool, which would solve the problem for you. Another one is the relay network, which is a custom optimized block transfer that can do much better than the p2p network, both because of the routes it has selected and because of block compression. This was introduced to help medium-sized miners, and maybe small miners, keep up with large miners in terms of block transfer time. Large miners have been more able to negotiate peering arrangements with other miners, so they can reduce orphan risk in that way.
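
A rough way to see why those few seconds matter: block discovery is approximately a Poisson process with a 10-minute mean, so the chance a competing block appears while yours propagates grows with the delay. A minimal sketch of that standard approximation:

```python
import math

def orphan_rate(propagation_seconds: float, block_interval: float = 600.0) -> float:
    # Block discovery is roughly Poisson with rate 1/interval, so the chance
    # that a competing block appears within t seconds is 1 - exp(-t/interval).
    return 1.0 - math.exp(-propagation_seconds / block_interval)

print(f"{orphan_rate(3):.2%}")   # ~0.50% at the ~3 second propagation above
print(f"{orphan_rate(30):.2%}")  # ~4.88% if propagation is ten times slower
```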

Validationless mining

Another phenomenon has been so-called “validationless mining”, where one pool or miner fetches the block proposal from another pool without checking the block themselves ((thus the block can trivially include transactions or other properties that violate the bitcoin rules)). They just accept the block on trust that the other person has checked it. This can cascade problems: if a miner has done something inconsistent or confusing with network rules, then there can be a sequence of blocks found by other miners that have also not been checked. This happened with a soft-fork in 2015 because people did not realize how widespread validationless mining was; before anyone intervened, some invalid blocks were mined. Some miners lost bitcoins because of this. The reason people do validationless mining is that it’s faster and reduces the orphan rate. As far as I know, people are still doing it, because it’s still worthwhile overall relative to the loss they experienced when it went wrong. It reduces security for SPV wallets and smartphone wallets, because the proof-of-work is sort of an assertion of transaction validity, but it turns out that PoW can be calculated without ensuring transaction validity. So there can be multiple blocks on an invalid chain due to validationless mining.
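
The reason PoW can be produced and checked without transaction validity is that only the 80-byte block header is hashed, and SPV wallets check just that. A minimal illustrative sketch of the header check (not Bitcoin Core’s actual code):

```python
import hashlib

def header_pow_ok(header80: bytes, target: int) -> bool:
    # Bitcoin's block hash is double SHA-256 over the 80-byte header,
    # interpreted as a little-endian integer and compared to the target.
    # Transactions are committed via the merkle root inside the header,
    # but nothing here checks that they are valid -- which is exactly
    # what validationless mining (and SPV) relies on.
    h = hashlib.sha256(hashlib.sha256(header80).digest()).digest()
    return int.from_bytes(h, "little") <= target
```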

If we increase the block size and the orphan rate goes up, is something disadvantageous going to happen? From the workarounds we have seen lately, we think miners are pragmatic; they are in business to make a profit, so they are going to use whatever techniques work around any increased orphan rate. They are going to use more validationless mining, they are going to use larger pools, more centralization. This cuts into decentralization; the side effect of these things is more centralization, increasing the policy risk that we talked about earlier.

Bitcoin companies should participate in bitcoin mining

We don’t always hear so much about decentralization in terms of doing something about it; it’s treated more as a passive problem. But actually it’s an ecosystem problem. I would argue that if the ecosystem put its mind to it, there are things that could be done to improve decentralization. First of all, what exactly do we mean by decentralization? There are pools and economic nodes, which are topical. Some people are using smartphones; some people are running a full node, which is the implementation of bitcoin. People who are running merchant services, exchanges, high-value vaults: you should run economic fully-validating nodes. It’s not miners that enforce the protocol consensus rules, it’s the economic fully-validating nodes. The economic nodes are receiving transactions and making decisions based on what the node says. If a user receives some coins by running a shop and tries to deposit them into an exchange, and the exchange is running a full node, and the exchange says the coins are invalid because its node says so, then the coins are invalid. It’s more important to think about a reasonable decentralization of nodes run by power users and small, medium and large sized companies in many different countries, so that a large proportion of bitcoin transactions by value are tested against economic nodes soon, like after not too many peer-to-peer hops of payments; then that collective action enforces the network rules. The number of economic nodes is lower at present. An analogy for economic nodes would be counterfeit testing equipment. Some shops have equipment to pass notes through to determine whether they are forgeries. This provides integrity for the money supply: even though it’s run by the shops and merchants for their own security, it actually provides currency integrity for everybody. The number of economic nodes is unfortunately decreasing, for a few reasons, mostly, I think, because there are services that started outsourcing full nodes: they will run a full node for you. Perhaps this is good because they are specialized and know how to run them securely, but it’s also bad because there are fewer companies running full nodes. If the number of full nodes falls too far, then users might have policy decisions imposed on them that are undesirable from a user perspective. If we let too many undesirable side effects creep into bitcoin, then users will become disenfranchised and go use something other than bitcoin. It’s in the ecosystem’s interest to retain the differentiating properties of bitcoin, because that’s what attracts users to bitcoin at all. We should all want to protect those properties.

Because mining is becoming increasingly centralized, and economic nodes are somewhat centralized, it makes it difficult to do large increases in block size for example, for the reasons just mentioned above. In terms of the side effects, BitFury did some analysis to see what percentage of nodes would disconnect if block size reached different levels ((page 4)). If the decentralization metrics were measurable and were quite strong, such as in 2013, and we increased the block size quite a bit perhaps, and 20% or 30% of the nodes dropped off, it wouldn’t be much of a problem because there were quite a lot of nodes still around and the properties could still be protected.

Let’s say there are two metrics of decentralization. If one was strong and one was weak, we could probably work with that. If we had very decentralized mining but not many full nodes, that might be OK, because miners would be patching over the decentralization problem, or vice versa. I think it’s not always articulated clearly that we could, you know, be more relaxed about changing the block size if decentralization was fixed. This has been discussed in a passive sense, but we should attack it in an active sense; the ecosystem should coordinate this. We have talked about the ecosystem coordinating a hard-fork if that became necessary, but we should consider proactively coordinating decentralization improvements. We could all buy some miners, even a few terahashes each; if that’s spread around, with power users running a bigger percentage of the network, that helps. Ecosystem companies that are not professional miners, perhaps vault services or wallet services, could also buy a small percentage of mining to improve decentralization. Their reason to do this would be that their business depends on scale, scale indirectly depends on decentralization, and their business depends on bitcoin retaining the properties that users want and buy services based on.

Another thing you could do is mine on smaller pools first. For whatever reason, it’s moved around over time: one pool has been big for a while, then it shrinks and another one takes over. This is somewhat of a user interface problem. It’s an arbitrary decision; you might tend to pick the pool that is biggest, with the assumption that since it’s big, others have probably validated it already, so it might be a good choice, right? Well, this actually hurts the decentralization of bitcoin.

Bitcoin mining ASICs

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=47m55s

Another problem is access to ASICs. There are only three or four companies directly selling ASICs. There were more companies a few years ago; some of them have consolidated or gone out of business, and most of them failed to deliver on time, because it’s very time sensitive. Companies could sell ASICs to power users and smaller users. At the moment ASICs are bulk discounted and sold in bulk to mining firms, and not really available to small miners at all.

There are economies of scale. People running professional mining farms get cheaper power because they can choose their location; residential power is often more expensive. They can also get ASICs cheaper with bulk discounts, and perhaps the manufacturers won’t even go to the trouble of selling small amounts.

If you are a company selling one or two miners per customer, that’s a cost to you: you need to handle support calls and deal with questions from people learning to mine for the first time, which is a cost that a manufacturer might want to avoid. We could encourage the manufacturers to sell them anyway; this is self-interested. If miners let centralization build up, and it erodes the interesting properties of bitcoin, then bitcoin will become less valuable, users will lose, and then their ASICs won’t be as sellable. ASIC economics are actually sensitive to price fluctuations: if the price goes down significantly, the value of an ASIC’s output can be the difference between profit and loss on mining.

If you don’t have an ASIC, or you are paying above average for electricity, something else that could hypothetically be done is an open-source ASIC that could provide baseline availability for users, so that everyone can get a reasonably cost-effective device; perhaps the organization would be a non-profit or something. This is an important problem for bitcoin’s success: it’s one of the big drivers behind the current scale limitations and behind retaining bitcoin’s differentiating properties.

How does scale via soft-fork work

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=51m

This is a brief technical detour into how soft-forks work and how you could increase scale using soft-forks. I think there was a time when people thought you couldn’t increase the block size with a soft-fork; there’s a mechanism where soft-forks can only restrict rules, not relax them, so it was counter-intuitive that it would be possible to increase the block size. It turns out that it’s possible, and the segregated witness proposal uses this property. The way it works is that, for the average bitcoin transaction, about 60% of the size is signatures and the other 40% is transaction information. The segwit soft-fork stores the signatures separately from the basic transaction, in a witness area. The 1 megabyte limit is then applied only to the other 40%. In principle this would allow up to a 2.5x increase in block size by soft-fork. For various technical reasons, the soft-fork ends up providing 1.8 to 2 MB depending on the types of signatures: multisigs are bigger, single sigs are smaller, and so forth.
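
As a sketch of the arithmetic behind those numbers (the exact witness share varies by transaction type, so the mixes below are illustrative):

```python
LIMIT = 1_000_000      # the existing 1 MB limit, applied to base data only
base_share = 0.40      # non-witness share of an average transaction (~40%)

print(f"upper bound: {LIMIT / base_share / 1e6:.1f} MB")  # 2.5 MB

# With realistic transaction mixes the base share is higher, so the
# effective size is lower -- the 1.8 to 2 MB range quoted above:
for share in (0.50, 0.55):  # illustrative mixes of single-sig and multisig
    print(f"base share {share:.0%}: {LIMIT / share / 1e6:.2f} MB")
```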

The interesting observation here is that you can do many more things with soft-forks than people had been assuming a couple of years ago. This is an example of a new thing being discovered about the bitcoin protocol. There is a discussion to be had about whether it would be good or not, but you could in principle increase the block size beyond segregated witness. Segwit is sort of a one-off mechanism, we can’t repeat it to get more scale, but another block size increase by soft-fork is possible in principle, and might in fact not be too inelegant.

We talked earlier about software engineering and technical debt. For people who have done software and programming, it’s hard-earned experience that if you do not write down technical debt, that is, fix bugs and design defects so that each software version tries to improve the design, fix previous bugs, or improve things you discovered organically through use, problems start to happen. What tends to happen is that, over time, it creates complexity. People put in workarounds that have limitations and arbitrary behaviors. If you do it again another time, then you have a workaround on top of another workaround. It’s a common problem in software engineering, because sometimes, let’s say, the management of a software project might not themselves be technical, or they might feel commercial pressures, and they demand that the bugs be deferred until the next release. If they aren’t showstoppers, they say wait to fix them until later. The danger is that these bugs persist forever. This tends to be counterproductive after a while: the software becomes more complex and slower to develop, it ends up costing more, and it makes the software less reliable. There are a number of technical debt items in bitcoin. On the bitcoin wiki there is a page called the hard-fork wishlist, which has a large number of known issues, some of which are simple, but bitcoin has some very strict backwards-compatibility requirements: it should in principle be easy to implement these, but it takes a while to deploy anything like this. The segregated witness implementation comes with quite a wide set of technical debt fixes, which many companies are excited to see: the primary one, which is how the feature arose, and then some other technical debt writedowns.

Some technical debt writedowns provided by the segregated witness soft-fork proposal

The first one is malleability, a long-standing design defect which needs to be fixed for Lightning to work, and for payment processors and so on. Another fix is having the signature cover the amount or value of the bitcoin transaction. The lack of this was a small design defect that created complexity for hardware wallets; there was a lot of work related to the fact that this wasn’t fixed in the early bitcoin protocol. Maybe you could have had a different interface or a lower-powered CPU or something on the trezor if this had been fixed earlier. Most people will be pretty happy about these fixes.

Some of these problems are related to scaling. As we’re talking about scaling, we should want to improve scaling and write down technical debt that frustrates scaling. One of them is the O(n^2) hashing problem. Another problem is change buildup. This is analogous to how some people handle physical cash: they withdraw some notes from an ATM, they end up with a pocket full of change, they throw it in a jar, and they keep doing it, and then they end up with a full jar of change. Bitcoin wallets have a design defect that makes that the optimal thing to do, because it’s cheaper to split a note than it is to combine change, at least in bitcoin. So even if a wallet has change to use up, it will usually choose to split a new coin. This is a scaling problem because the ledger gets bigger, each coin needs to be individually tracked, the ledger has to be indexed, and you need more storage, more memory, and probably more CPU. It increases the minimum footprint or specification of a full node that can exist on the network, because the UTXO set is larger than it needs to be. People who have been following the discussions may have heard about a discount for UTXOs in segwit: it makes spending change and splitting coins into change approximately the same cost, so that wallets won’t have that perverse incentive anymore.
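
The discount works by counting witness bytes less than base bytes. A small sketch of the segwit weight rule (the example byte counts are invented for illustration):

```python
MAX_BLOCK_WEIGHT = 4_000_000  # segwit consensus limit, in weight units

def tx_weight(base_size: int, witness_size: int) -> int:
    # Segwit weight: each base byte counts 4, each witness byte counts 1,
    # i.e. weight = base_size * 3 + total_size.
    return base_size * 3 + (base_size + witness_size)

# Spending an input adds mostly witness (signature) bytes, which the
# discount makes cheap, so consuming change costs about the same as
# creating it -- removing the incentive to let change pile up.
print(tx_weight(base_size=200, witness_size=300))  # 1100 weight units
```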

There’s also a fix related to fraud proofs. In the Satoshi whitepaper there was a concept of fraud proofs discussed, but there were some limits which made fraud proofs impractical. Segwit has been set up to help make fraud proofs more closely realizable and possible.

Another thing is a more extensible script system, which allows for example Schnorr signatures. When the developers look to make a change to Bitcoin, they have to provide high assurance that no security defects have slipped in. This kind of script extension mechanism is much simpler to assure the correctness of than the existing system.

Future scale sketch (my opinion)

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=1h

This is coming full circle back to the requirements at the beginning: to double the transactions per second each year for three years in a row or something, and in parallel have Lightning scalability as well. This is a sketch of a sequence of upgrades which should be able to comfortably achieve that throughput. This is my opinion. Things can be done in a different sequence, or different developers might think that IBLT should happen before Schnorr, or in parallel, or afterwards, or something; these details can get worked out. This is my sketch of what I think is reasonably realistic, using the scalability roadmap (FAQ) as an outline.

If we start with the segregated witness soft-fork, we can get approximately 2 MB as wallets and companies opt-in, and that’s in current late-stage testing. The last testnet before production is running right now, I think segnet4. That should be relatively soon if the ecosystem wants to activate it and opt-in and start adopting it to achieve scale and the other fixes it comes with.

Another thing we could do after segwit adoption is use the script extension from the previous slide to get interesting scale by making the transactions smaller. From the same block size, we can get more transactions if we use a different type of signature: we could get between 1.5x and 2x more transactions per block. The actual physical block size on the network could still be 2 MB, but it could achieve the equivalent throughput of a 3 or 4 megabyte block using the old signature type. The Schnorr signature mechanism is already implemented in libsecp256k1, the signature library that Bitcoin uses, and the mechanism to deploy it is included in segregated witness. This is relatively close technology, there are not many unknowns here; this could deliver ahead-of-schedule scale later this year, assuming people adopt it.
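
As a rough sketch of where that saving comes from (byte sizes are approximate, and per-block gains depend on the transaction mix):

```python
ECDSA_SIG = 72    # approximate DER-encoded ECDSA signature size, bytes
SCHNORR_SIG = 64  # one aggregated Schnorr signature, regardless of signers

# An n-of-n multisig carries n ECDSA signatures today, but could carry a
# single aggregated Schnorr signature instead:
for n in (1, 2, 3):
    before, after = n * ECDSA_SIG, SCHNORR_SIG
    print(f"{n}-of-{n}: {before} -> {after} sig bytes ({before / after:.1f}x)")
```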

Something to say about the adoption and opt-in of segregated witness is that it provides scale to those users that opt in. If I am running a payment processor and I upgrade the library I am using, and move to the new type of address, which is backwards compatible, then I get cheaper transactions and access to more scale. It’s interesting also that people who do not upgrade get access to scale too: people who upgrade leave empty space in the base block, which can be used by people who haven’t upgraded. So this supports quite well an incremental scaling that builds up over time, as people opt in and the new space left by people opting in is used by new users coming in, or by existing users who haven’t upgraded their wallets creating new transactions. Hopefully new users will of course be using segregated witness compatible wallets, though.

We were talking about the orphan problem and how that is significant for mining. There is an interesting technical solution to this: convert a latency bottleneck into a bandwidth bottleneck. The physical network, the transport mechanisms between miners and pools, has excess bandwidth. Full nodes that are not mining are not as sensitive to how quickly they receive blocks; they don’t need to receive a block in 3 seconds, 10 seconds would probably be fine. The idea of weak blocks is that we could push the network harder by using up the excess bandwidth that currently goes unused. Assuming this happens next, and weak blocks and IBLT go live, then we would be in a position to make use of the excess bandwidth without worrying about the current orphan rate problems. We could increase the use of the extra bandwidth, perhaps with a hard-fork planned ahead; it could potentially be done with a soft-fork, but I think a hard-fork would be more likely.
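
A minimal sketch of that idea: peers already have most transactions from normal relay, so a block announcement only needs short references plus whatever is missing. The 6-byte IDs and the structure here are illustrative; real schemes like IBLT or the relay network handle collisions and reconciliation more carefully:

```python
import hashlib

def short_id(txid: bytes, n: int = 6) -> bytes:
    # Truncated hash as a compact reference to an already-relayed tx.
    return hashlib.sha256(txid).digest()[:n]

def announce_block(block_txids, peer_mempool):
    """Send ~6 bytes per known tx instead of the full tx; full data only
    for transactions the peer has not already seen via normal relay."""
    known = {short_id(t) for t in peer_mempool}
    compact = [short_id(t) for t in block_txids]
    missing = [t for t in block_txids if short_id(t) not in known]
    return compact, missing

# Example: the peer already has 2 of 3 transactions from normal relay.
txs = [b"tx-a", b"tx-b", b"tx-c"]
compact, missing = announce_block(txs, peer_mempool=txs[:2])
print(len(compact), missing)  # 3 short ids sent, 1 full tx still to fetch
```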

Another potential upgrade would be a kind of flexible size, a block size that could grow over time automatically, maybe reacting to demand in some way, and that’s what the flexcap outline proposal kind of does. It’s possible that this would happen next or a simpler block size change would happen next. This should deliver another 2x scale increase. We can see with these three changes we get to the scale that was talked about in the requirements section of this presentation at the beginning.

Future scale sketch (Lightning)

Then we can talk about layer 2, or Lightning; that can happen in parallel, it’s not waiting or deferring, and there are different people on different teams developing Lightning today. I think there are 4 or 5 companies working on this. Most of it is open-source, where you can contribute as well, with mailing lists and source code posted. The one requirement is that it needs some of the technical fixes in segregated witness, or those fixes deployed in other ways: it needs the malleability fix, it needs bip65 checklocktime, which has already been deployed previously, and it ideally needs CSV (bip112 CHECKSEQUENCEVERIFY), which is in the process of being deployed now, separately from segregated witness. So the existing segregated witness testnet is now being used by people working on Lightning, because it provides everything they need; once it goes live, there will be no network features missing that would prevent Lightning. That’s exciting progress. In terms of Lightning scale, the estimates vary and depend on how the usage pattern works out, but they are maybe 100 to 10,000 times more transactions than on-chain transactions. It’s important to point out that Lightning transactions are real native Bitcoin transactions ((zero-conf (zero confirmations) but with an initial transaction that gets committed into the blockchain)). They could be posted on-chain, but there’s a caching mechanism that collapses them so that they don’t all need to be sent to the chain. It’s like a write-coalescing disk cache or something.
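
A toy sketch of that caching idea, with balances renegotiated off-chain and only the open and close reaching the chain (this omits the commitment and penalty machinery that makes real Lightning channels trustless):

```python
class Channel:
    """Toy payment channel: many off-chain updates, two on-chain txs."""

    def __init__(self, alice: int, bob: int):
        self.balance = {"alice": alice, "bob": bob}  # satoshis
        self.updates = 0

    def pay(self, frm: str, to: str, amount: int) -> None:
        assert self.balance[frm] >= amount, "insufficient channel balance"
        self.balance[frm] -= amount
        self.balance[to] += amount
        self.updates += 1  # a new mutually signed state, not a chain tx

ch = Channel(alice=100_000, bob=100_000)
for _ in range(1000):
    ch.pay("alice", "bob", 10)

# 1000 payments collapse to one final state: open + close = 2 on-chain txs.
print(ch.balance, ch.updates)  # {'alice': 90000, 'bob': 110000} 1000
```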

What could we do with this huge amount of scale? We might see new types of use cases, like micropayments or low-value payments, bringing in new users and use cases. For example, sending a small amount of bitcoin with an email. Some people have talked about using this to pay a website a small amount of money per page view; that would maybe provide them a better source of revenue, with less frustration than ad blocking, and something like Lightning might be able to provide this.

Another interesting property of Lightning is that it provides instant and secure final confirmation of transactions. One of the problems that people have with Bitcoin payments is that technically you should wait until the first or second confirmation, which is about 10 minutes for the first confirmation. This is far too slow for retail payments. There’s a chance of “accepting” a payment and then not getting paid. Lightning can provide a secure and instant confirmation which is great for that retail problem as well.

Questions

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=1h10m30s

Time for questions.

Thank you for your comprehensive and interesting presentation about Bitcoin scaling issues. I see in the audience a lot of important people from the Czech and Slovak bitcoin community. I think these people have questions for you. Any questions? Don’t hesitate.

Q: I have a question, of course. Thank you Adam for coming. Your presentation was quite technical. My question is completely non-technical, from the social side. You were involved in bitcoin from the very beginning, maybe one of the first people to interact with Satoshi Nakamoto. But I heard that you only really started to think about bitcoin in 2013, when you bought your first bitcoin for real. How could this happen?

A: I guess I am happy to think about the technical protocols. Other people are more practical and are eager to try software out. At the time, Hal Finney was one of the first people to try out Bitcoin and write a report about how it works. I was content to read the report and think “that’s very cool”. Also, for some reason it seemed to me it was uncertain whether this would bootstrap. I was kind of taking a wait-and-see approach. Different people saw the potential earlier or later, some tried it out and kept some coins; yeah, that’s how that happened.

Q: This is contrary to how people think, that the people who were in at the beginning saw that bitcoin would prevail and rise. For example, for me, I switched to bitcoin really fast, but this is only because I didn’t see the past where a lot of trials failed.

Q: Hello everybody. My name is Maria. I am relatively new to this topic. I wanted to ask two questions. I read before that bitcoins can be stolen. What if someone sends you a virus over the internet? Is that even possible? The second question is, let’s say with money, with paper, with this system that we have, people find illegal ways to make money, like money laundering and so on and so on. Is there a legal way to make bitcoins?

A: Can you repeat the first question?

Q: I read that bitcoins can be stolen.

A: Right, the theft problem. Bitcoin is interesting, but irreversible transactions mean that it’s relatively unforgiving. It stresses computer security. For an average Windows machine, it could be dangerous to store private keys. Maybe you only want to put in $10 or $20, or something you would feel OK losing. At least if you lost it you would know that you had a virus and that you should reinstall the machine or something. For higher-security applications, people should be using hardware wallets like the trezor, or smartphones that they don’t install much software on, though even smartphones can have security vulnerabilities. You can also potentially use trustless vaults; there’s a multisignature mechanism where you can work with a provider that helps you retain security. Some services can prevent you from spending more than $100/day but still leave you in control of your bitcoins; this can help protect you from theft because you can set rules about spending money. Your second question was regarding illegal uses of bitcoin. It has some privacy, but it’s not great privacy. All of the transactions can be followed on the network; you can follow the trail. The entire ledger is public data. You can see, for example, in the second Silk Road trial, where two of the FBI and DEA law enforcement agents got greedy and stole some of the Silk Road bitcoins: there was a presentation recently by one of the internal investigation team members who was investigating the corrupt law enforcement agents, and they were able to trace the transactions, figure out how much BTC the agents took, and determine it was indeed them. In many ways, bitcoin is more traceable than other forms of payment. Physical paper cash is more attractive these days for illegal behavior. There’s far more volume in physical paper cash too; way more crime and greymarket transactions go on in the world than the entire market value of bitcoin, by a big magnitude. If people want to focus on reducing crime, there are other areas that are much more productive to focus on.

Q: I have a question about hashcash. Have you thought about this idea for combatting asymmetric problems like DDoS attacks? It’s very easy to send traffic to a website but difficult to consume it. I think the hashcash approach is very nice; I implemented it in an anti-DDoS proxy. But it seems like the idea is frozen. I am wondering if you have any new thoughts on this or new developments.

A: There were some attempts to use hashcash for DDoS. There was somebody using it to deter click fraud, where people receive money per advertising click: they would have clients mine hashcash on the CPU and only count the click if the work was actually done. There’s a wordpress plugin that does something similar to deter abusive blog spam, which tries to artificially increase search engine rankings by pasting links everywhere. Another idea was more dynamic: I think there was an internet draft by some people at Cisco some years ago where they proposed that you would connect to a web server, and if the web server was under load, it would request some work, and if it was under more load, it would request more work. If the web server would have crashed anyway, some people could get through this way, so people would be able to get some level of service rather than nobody; though only the people with the plugin, or the person with the most powerful hardware, would get to use the service. There is a really old IETF document about hashcash covering this; I forgot the name of the primary author of the proposal. I don’t know if it was ever used. The other thing it was used for was anti-spam: spamassassin actually respects hashcash postage stamps. The microsoft mail suite, like outlook and exchange and hotmail, has its own hashcash with a different format; it’s not compatible with hashcash. They implemented it as an anti-spam mechanism in that ecosystem. I think it’s called postmark. They released it as an open specification so that anyone can implement it in theory, although I think Microsoft was the only one to implement it. The other problem with hashcash is that people make ASICs. I figured that if this was massively successful, spammers would make hardware to overcome the limitation. I was thinking that if hashcash became widely adopted, individuals should have ASICs too, to keep the playing field level.
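
For reference, a minimal hashcash-style mint and verify loop; the stamp format here is simplified (real version-1 stamps also carry a date and a random salt):

```python
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()  # zeros at the top of this byte
            break
    return bits

def mint(resource: str, bits: int = 20) -> str:
    # Grind the counter until SHA-1(stamp) has `bits` leading zero bits.
    # Simplified; real v1 stamps look like 1:20:<date>:<resource>::<rand>:<counter>.
    for counter in count():
        stamp = f"1:{bits}:{resource}::{counter}"
        if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits:
            return stamp

def verify(stamp: str) -> bool:
    claimed_bits = int(stamp.split(":")[1])
    return leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= claimed_bits

print(verify(mint("example.com", bits=16)))  # True after ~2^16 hashes of work
```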

Q: …

A: Yeah, that could help. That’s a related topic for bitcoin. Some people wonder whether, in the very long term, we should consider, with a lot of notice like 3 years, changing the hash function in some way. I think there are some coins or proposals that change the hash function every 6 months. Maybe that would prevent or deter ASICs, but I think it wouldn’t ultimately solve the problem, because it’s a universal rule of software that hardware wins. People would look at the catalog of hash functions, look at the common properties, and make things that accelerate them, or make optimized FPGAs, or make GPUs that are optimized for that purpose without the graphics IO and such. Ultimately, specialized hardware always wins, even if the problem is dynamic, because the space of techniques and functions has to be specified upfront. The other problem is that complexity makes it harder to make ASICs. It took a long time before there was finally a scrypt ASIC, because scrypt was an ugly complex thing with memory, and it was inconvenient to put memory into ASIC technology. So in many ways it’s best to have a very simple hash function, so that many parties can make the ASIC rather than only one person having the ASIC tech. Also, it’s part of the social contract that changing the PoW hash function is controversial.

Q: It’s possible to not just change the hash function, but the problem itself. If you have a web server giving you a javascript PoW problem, if you solve it, and if an attacker can create an ASIC that can solve all the problems, it’s profit anyway.

A: I think the general fast problem solver for proof-of-work is something like a GPU, or there’s a company making CPUs with a very high core count, very simple RISC cores, like 1000 cores in a chip. Maybe their single-threaded performance is weak, but they have a much higher execution throughput than a conventional CPU. And mining is fault-tolerant: if you make a general-purpose CPU you need some huge number of 9s of reliability, but for a mining ASIC, even one 9 of reliability would be OK. You can get those kinds of CPUs, push them to the limit, and then use them for general-purpose execution with a JIT compiler for whatever the web server is sending you. You can still get a pretty decent advantage over a conventional user. It doesn’t take a strong advantage to break things. ASICs are 1,000x or 10,000x faster than GPUs, which are several times faster than CPUs. Even if the advantage from a somewhat customized hardware solution is only 50%, it’s probably already economically broken; it doesn’t take much for a miner to win most of the mining output, or all of it. It is challenging to make a fair algorithm, and it pushes the hardware design in a different direction which is more complicated.

Q: Regarding IBLT and weak blocks: we operate Slush Pool, with a node in China. For us it's both latency and bandwidth.

A: Interesting.

Q: Because of the Great Firewall of China. I could imagine a scenario where you build a block on the other end, but you are still missing the transactions stuck in China. What are your thoughts on this? Perhaps the solution is to get rid of China so that they don't have the majority, but that's hard to do.

A: So you have a node in China? That's the right thing to do. Two nodes? Because the interesting observation is that, to the extent that there is lower bandwidth or higher latency in China while China has more than 50% of the hashrate, that's a problem for people outside of China, not China's problem. People sometimes misunderstand this. But you said you were concerned about bandwidth? Well, there are some separate things to do about bandwidth. There are a number of proposals to compress blocks. The IBLT idea compresses by asymptotically a factor of 2: today every transaction crosses the network once when it is relayed and then again inside a block, but with IBLT you basically send the transactions only once, and then send a compact description of which transactions are in the block, like "these are the transactions I am using", which could fit in a handful of TCP packets. This is what you see with the relay network: it conveys how to construct the block in a single TCP packet most of the time. That's a big bandwidth saving, and there's not much more saving available beyond it; you must receive the transactions themselves, and there's a compression limit beyond which you cannot compress them further.

Another thing that could be done for people trying to run nodes in bandwidth-constrained situations is to turn off relaying. It turns out that relaying uses the majority of the bandwidth; something like 80% of the bandwidth used is actually relaying transactions to other nodes on the p2p network. You could be a leech on the p2p network, where you just receive transactions and don't relay them. That's not giving back to the network, but perhaps it's better for people with high bandwidth to provide the relay function instead. I think it's important that blocks be constructed locally. Sometimes when people talk about bandwidth constraints in China, they say, well, okay, miners can just rent a server in Singapore or something with high bandwidth and low cost, relatively close by, and that will solve the problem. But that's another form of centralization. What gives bitcoin its properties, like policy neutrality and permissionlessness, is that there are too many jurisdictions involved to impose policy. Because of the many jurisdictions, one thing might be blocked in Singapore but not blocked in China. If miners are not constructing their own local blocks, we lose that diversity of policy. It's interesting to know that you have gone to the step of running a node in China. I don't think many people have done that.
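As a rough sketch of the send-transactions-only-once idea (illustrative only; the relay network and later compact-block designs use different short-ID constructions and recovery mechanisms): the sender announces a block as a list of short transaction IDs, and the receiver rebuilds the block from its own mempool, fetching only the transactions it is missing.

```python
import hashlib

def short_id(tx: bytes) -> bytes:
    # Illustrative: the first 6 bytes of the tx hash stand in for a short ID.
    return hashlib.sha256(tx).digest()[:6]

def compact_announce(block_txs: list[bytes]) -> list[bytes]:
    """Sender: announce a block as a list of short IDs, a few bytes each."""
    return [short_id(tx) for tx in block_txs]

def reconstruct(short_ids: list[bytes], mempool: list[bytes]):
    """Receiver: rebuild the block from the mempool; report what is missing."""
    by_id = {short_id(tx): tx for tx in mempool}
    block, missing = [], []
    for sid in short_ids:
        if sid in by_id:
            block.append(by_id[sid])
        else:
            missing.append(sid)  # would be fetched with a follow-up request
    return block, missing
```

With short IDs of a few bytes versus a few hundred bytes per transaction, the block announcement shrinks by roughly two orders of magnitude, which is how a block's construction can be conveyed in a handful of packets.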

Q: It’s much faster to.. than it is to…

A: The relay network is doing some really odd things with routing. I guess this is why you have nodes there; you are probably doing the same thing. You would think you should just route over the public internet and it will take the shortest route. But for the relay network, BlueMatt has rented VPSes in very strange places that achieve a shorter route, faster than you could get over the public internet, because the public routes are otherwise too ridiculous.

Q: I have a question about the hard-fork from the beginning of the year. I am wondering about your take on this. What do you think about this hard-fork by the Bitcoin Classic team? Do you think it will happen again? How do you prevent this? Is it preventable? It brings up another, more philosophical question: is decentralization attainable, or is it utopian? If you take something like the Linux kernel, it's open-source software and everyone can contribute new features, but you still have one guy managing Linux. Is it a mistake to not have one person deciding?

A: I’ll talk about the second question first. The counterargument has been that if you’re applying it to yourself and imagine that you have decisionmaking power about what features go into Bitcoin, I would feel scared to be in that position because over time powerful forces such as governments that would like to change the properties. You can see a preview of this in the Ripple company, where the government asked them to make changes to the protocol which were not popular with the users. They were in central control so the government asked them to make those changes. We have to achieve neutrality and keep primary the features that users value in Bitcoin. Having developers in different countries working for different companies and having strong independence, is maybe a more robust way to keep the system independent and retain the properties that Bitcoin users prefer. We’re looking at a snapshot in time wondering what will happen in the future as companies and governments might want to influence Bitcoin protocol design. I think that if too many properties are lost, Bitcoin will lose its value. Bitcoin companies at the moment should find it in their own interest to retain the existing Bitcoin features even if governments maybe bully them. Your other question about hard-forks, I think it’s a question about tradeoffs. There are different ways to do upgrades. They have advantages and disadvantages. It’s possible for different ecosystem companies to have different views because maybe they specialize in a different area of business. A miner might prefer one feature, a payment processor might like another, and a user might like neither. What you are seeing is that some people have different preferences. In some sense it ties back to “what are bitcoin’s differentiating properties?”. Not necessarily everyone agrees on Bitcoin’s desirability. If you make different assumptions about what’s important, you can end up with different conclusions. I think that one of the debates is how fast can you do a hard-fork. On my slide, I put the simple hard-fork which is like bip109 which I think you are referring to, and in my view, and I think quite a lot of the developers believe that there is a tradeoff where the faster you do it the more risky it would be. If you do it tomorrow it would be a disaster, in a month it might be a rush to see everyone upgrade and there are risks for people who haven’t upgraded, and it requires a lot of coordination which the bitcoin network and ecosystem hasn’t tried to do before, like how do I call up this person, how do I reach this node, how do I know who is running a node, there are even bitcoin services running that are economically active but the nodes are running with old software. There was a story about a mining pool still running that had a few petahashes and was mining invalid blocks for a few months. There were miners pointing at the pool but the miners weren’t checking if they were receiving shares. There had also been cases about pools htat were defunct, nobody was maintaining them, they had reasonable hashrate but no payouts, but they kept getting listed on mining pool comparison sites as having a 100% fee because none of the payouts were working. It’s hard to do strict planning. In a top-down managed network, there are reporting responsibilities and who to contact and the people they contact will cooperate and collaborate to achieve that. But in bitcoin, it’s a peer-to-peer network so it’s difficult sometimes to reach or identify people. 
In many ways this is a feature of bitcoin: some participation is possible without identity. It's good that miners can be permissionless and not necessarily identified, because this makes it harder to apply policy requests to a miner. At the same time it makes it difficult to contact a miner if they are mining invalid blocks or if we want them to upgrade. It's a tradeoff, it's a grey area; it depends on how optimistic you are, and on how important immediate higher scale is to you. Some other users may not want to take the same risks you would prefer to take. I think it's a question of different users and types of ecosystem companies having somewhat conflicting views and preferences. We'll see how it plays out. Ultimately, for the network to upgrade and scale, it needs people to work together; it needs backwards compatibility, and upgrade mechanisms will not work if people don't work together. It's up to the ecosystem and the users, really. At the end of the day, developers are writing software, and if nobody runs their software, that shows the decision is the users' to make; they can choose what software to run. As I mentioned briefly, the economic nodes control the consensus rules, and the miners have to follow the economic views of the software running on the network. There's no hard and fast answer, but I am hopeful that Bitcoin will scale. I also gave my sketch of what I think will happen. We'll see; it really depends on the users, which software they choose to run, and whether miners choose to activate one method or another.

Q: Some people who proposed hard-forks and bigger block sizes say that soft-forks are inherently insecure, because in segwit, for example, the new version is lying to the old version about what's in the blocks. It's saying something is not a transaction when it is a transaction. Old versions cannot verify it correctly.

A: This is an argument that has been made. It's not a good argument, in the sense that this is not a new observation: all bitcoin protocol upgrades so far have been soft-forks, they all have exactly this property, and nobody was complaining for the last however-many protocol upgrades. The risk depends on what kind of node you are running. If you are running an SPV wallet on a smartphone, you're not checking a lot of stuff anyway, and a soft-fork doesn't suddenly change the trust model you are already operating under. If you are running a fully-validating full node with some economic value on it, the fact that it is a soft-fork does not mean you can relax; you should definitely upgrade, but if you are on holiday and don't get back for a few days, miners are going to protect you in the meantime. It means people can upgrade more flexibly and in a less coordinated way. For the period where you haven't upgraded, you have a reduced security model, in the same way the smartphone is trusting miners, but the intention is that it's temporary, and your wallet isn't using the upgraded coin features anyway, so it should not be a concern. There are non-trivial parts of the network running the 0.8, 0.9 and 0.11 versions of Bitcoin Core, for example, meaning all of those nodes are not upgraded; it might be like 30% of the network. We don't know if those are economic nodes; we don't know if they are being used by an exchange or not used by anyone. This is a problem because SPV clients could connect to them and get bad information. SPV clients tend to connect to multiple nodes and cross-check, so they would notice a discrepancy, we hope. There's so much old software on the network. It's another example where anonymity or privacy is beneficial but has a negative side-effect: we can't tell which nodes are economic nodes. That's good, because otherwise you could make a geolocated map and go steal the bitcoin held by those nodes; but the economic ones are important to upgrade, and someone should try to contact them and suggest that they upgrade for their own safety.

From that perspective, the segwit soft-fork is the same as any other soft-fork. I have heard some people try to make the case that it is somehow different, but I don't agree: a soft-fork is a soft-fork. There exists a transaction you can create that looks valid to an old node but isn't actually valid. Someone with hashrate can abuse that to potentially defraud someone using an SPV client. It doesn't matter which feature that transaction is using or what the effect of the soft-fork is; it creates the same problem either way. If someone has a remote root exploit for Linux and gains root, you don't care how it got root; it's already game over. I think any soft-fork means that economic nodes should try to upgrade quickly, but they are better protected than during a hard-fork. With a hard-fork, a different kind of failure occurs: the old nodes are ignoring the current network. They are on a low-hashrate network, and there is some risk that they would accept invalid transactions from someone with a moderate amount of hashrate connecting to that network. This also splits the currency.
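As a toy illustration of that SPV cross-checking (a sketch, not any particular wallet's implementation): query several peers for their chain tip and refuse to trust the answer when they disagree.

```python
from collections import Counter

def cross_check_tips(peer_tips: dict[str, str]) -> str | None:
    """Compare the chain-tip hash reported by each peer.

    Returns the agreed tip, or None (caller should warn the user)
    when peers disagree, since any single answer may be bad information.
    """
    counts = Counter(peer_tips.values())
    tip, votes = counts.most_common(1)[0]
    if votes < len(peer_tips):  # any disagreement is suspicious
        print("warning: peers disagree on the chain tip:", dict(counts))
        return None
    return tip

# Example: two peers agree, one serves a stale or invalid chain.
print(cross_check_tips({"a": "000...9f", "b": "000...9f", "c": "000...2c"}))
```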
You can make wallets, both full-node wallets and SPV wallets, stop and warn you if they see blocks on the longest chain with a more recent version than they understand. At that point the user should be given a choice: upgrade, or continue with the weakened security model. It's the same in both the soft-fork and hard-fork cases, and there have been some proposals to do this in future releases of Bitcoin Core. Perhaps the default should be stopping, rather than continuing at risk without having made a decision to do so.
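A minimal sketch of that idea, assuming a simple majority-signalling rule rather than Bitcoin Core's actual logic: the wallet tracks the highest block version whose rules it knows, and halts with a warning when recent blocks on the best chain predominantly signal something newer.

```python
MAX_UNDERSTOOD_VERSION = 4  # highest block version this wallet knows the rules for (assumed)

def check_chain_tip(recent_block_versions: list[int],
                    window: int = 100, threshold: int = 51) -> None:
    """Stop with a warning if a majority of recent blocks use an unknown version.

    The user must then explicitly choose: upgrade, or knowingly continue
    under a weaker security model.
    """
    unknown = sum(1 for v in recent_block_versions[-window:]
                  if v > MAX_UNDERSTOOD_VERSION)
    if unknown >= threshold:
        raise SystemExit(
            "Unknown consensus rules appear active on the best chain; "
            "upgrade this wallet or explicitly accept reduced security."
        )
```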

Unfortunately we are out of time. Thanks a lot, Adam, for your presentation, and thanks to everyone for attending.