# Dan Boneh

Speakers: Dan Boneh

*Date: August 1, 2016*

*Transcript By: Bryan Bishop*

Category: Meetup

A conversation with Dan Boneh

Part of the challenge was that the Google Maps directions in the email were going down some roads that are under construction. Some have been here 10 minutes, and some are arriving in another 10 minutes. They are building a new biology building. It's a large campus: in whatever direction you walk, there are buildings and more buildings. In two years there will be a new biology building, and two years later there will be a new computer science building.

How many students at Stanford? 5000 grad students. Really small school, considering the surface area. It's mostly a research university.

Actually yes there is biocomputing, it’s across the street. It already exists, yes.

So what would you guys like to talk about? Well the theme of the weekend has been unplanned discussion.

We can talk about the tech stuff when more folks come in. As Greg knows, we have been doing research on blockchain and bitcoin. We have also been running a class on bitcoin and cryptocurrency in general. Last fall we ran the first version of the class; you guys actually spoke at the class. We had 100 students. 21 came in and we had a lab; they provided the miners. This was the 21 computer. So in the lab everyone was generating coins on a fake blockchain, and building applications to use those coins. The students liked it. It was fun. We are doing this again in fall, with another 100 students. Maybe if we do well we will have more than 100 students. They will be well-versed in bitcoin.

I spoke to some of the students in the class. Their understanding of the technology was very high. Much better than random technical people in the Bitcoin space. So high praise. But yes I did say random technical people, not well-selected technical people. Your classes did a good job of giving people a broad exposure to the tech. They were aware of many corner cases.

Last time, the final project was an ethereum project. We realized there was more to say about ethereum than would fit in just 2 weeks, so next time we might give it more space. It's kind of a big deal; I know you guys are in the bitcoin space, but it's another big deal. There was a paper that came out of this on ethereum smart contracts. Elaine Shi did something similar. They were using the python version of the scripting language; ours used the javascript version.

It tells people about the design and risks of the tech. The only other way to learn about this is through events like TheDAO hack. It is much better for student projects to suffer through the complexity and write a paper than to wait for TheDAO-style problems to show up in other situations.

If you look at the contracts, a contract is encapsulating a state machine, and writing a state machine in javascript is not the best way to encode a state machine. TheDAO was a slightly different example. People forget to implement specific state transitions; that's where the bugs come from. I was hoping we would do it, and it hasn't happened yet, but maybe someone else will: it would be nice to have a frontend to ethereum that lets you write a contract in a state machine language. The important thing is that you wouldn't be able to compile without accounting for all states and all state transitions. I know they wanted javascript because everyone knows JS, but it might not be the best language.
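As a toy illustration of that idea (not any real Ethereum tooling), a contract can be written as an explicit transition table plus a check that refuses to "compile" unless every state/event pair is handled. The states, events, and escrow-like transitions below are made up for the example:

```python
# Toy sketch: a contract modeled as an explicit state-transition table,
# with an exhaustiveness check run before the "contract" is accepted.

STATES = {"OPEN", "FUNDED", "CLOSED"}
EVENTS = {"deposit", "withdraw", "close"}

# Hypothetical escrow-like table: (state, event) -> next state.
TRANSITIONS = {
    ("OPEN", "deposit"): "FUNDED",
    ("OPEN", "withdraw"): "OPEN",
    ("OPEN", "close"): "CLOSED",
    ("FUNDED", "deposit"): "FUNDED",
    ("FUNDED", "withdraw"): "OPEN",
    ("FUNDED", "close"): "CLOSED",
    ("CLOSED", "deposit"): "CLOSED",
    ("CLOSED", "withdraw"): "CLOSED",
    ("CLOSED", "close"): "CLOSED",
}

def check_exhaustive(states, events, table):
    """Refuse to 'compile' unless every state/event pair is handled."""
    missing = [(s, e) for s in states for e in events if (s, e) not in table]
    if missing:
        raise ValueError(f"unhandled transitions: {missing}")
    return True

def step(state, event):
    return TRANSITIONS[(state, event)]

check_exhaustive(STATES, EVENTS, TRANSITIONS)
assert step("OPEN", "deposit") == "FUNDED"
```

A real frontend would do this at compile time; the point is simply that forgetting a transition becomes a hard error rather than a latent bug.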

There has been a long-standing dispute between technical people in the bitcoin space and the ethereum space about models for smart contracts and blockchains. The argument we make is that you can accommodate the use cases with a state transition model that maps well onto the bitcoin UTXO model. People want to do procedural programming language stuff, and we argue that it's unsafe. But it's also inefficient: there is this massively replicated set of machines that run the contracts. If I already know a result is valid, I don't need everyone to recompute it. It's much neater to just prove it's valid, without everyone needing to recompute everything. The typical example: say I have a program that needs to divide two numbers. Division is a complex operation, but instead I can provide an extra input where I give the result of the division, and the program does a multiplication to verify it. If you take that to an extreme, you get a different language approach, a more functional language where you describe a condition for spending.
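The division example can be sketched in a few lines of Python: the prover does the division once and publishes the quotient and remainder as a witness, and every validator only re-checks a multiplication:

```python
# Toy illustration of the "witness" idea: instead of making every
# validator divide, the prover supplies quotient and remainder, and
# validators only check a multiplication.

def prove_division(a, b):
    # Prover does the expensive operation once.
    return a // b, a % b          # (quotient, remainder) witness

def verify_division(a, b, q, r):
    # Validators re-check cheaply: one multiply, one add, one comparison.
    return 0 <= r < b and q * b + r == a

q, r = prove_division(1_000_003, 97)
assert verify_division(1_000_003, 97, q, r)
assert not verify_division(1_000_003, 97, q + 1, r)  # wrong witness rejected
```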

So even without getting into things like zero knowledge, you can do things like encode state transition tables as a merkleized data structure and then only reveal the parts that you use.
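A minimal sketch of that idea, using an illustrative SHA-256 Merkle tree (the hashing and padding choices here are arbitrary, not any deployed format): commit to the whole transition table, then reveal only one used entry with a log-sized authentication path:

```python
import hashlib

# Sketch: commit to a state-transition table as a Merkle tree and reveal
# only the entry actually used, plus a log-sized path.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    level = [h(l) for l in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))  # (sibling, our side)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_leaf(root, leaf, path):
    node = h(leaf)
    for sib, is_right in path:
        node = h(sib + node) if is_right else h(node + sib)
    return node == root

# Four transitions; the spender reveals only the one they use.
table = [b"OPEN,deposit->FUNDED", b"OPEN,close->CLOSED",
         b"FUNDED,withdraw->OPEN", b"FUNDED,close->CLOSED"]
root = merkle_root(table)
proof = merkle_proof(table, 2)
assert verify_leaf(root, table[2], proof)
```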

I think our group is complete. You guys know each other, but let’s do introductions.

I am Dan Boneh. We already talked a little bit, but we do some research on bitcoin. We taught a class on bitcoin last year. We are doing it again this fall. It’s a fun topic to teach. It’s a great way to combine money and crypto. Love this space, look forward to talking to you.

- short signature schemes
- BLS signature schemes
- pairing crypto
- group signatures
- signature aggregation
- high-level overview of recent areas of innovation in cryptography
- which groups of academic cryptographers and mathematicians would be most likely to get interested in bitcoin problems?
- educational efforts expansion
- academic efforts and expansion of that
- bitcoin tech side things

One of the big challenges for our industry is educating more people about bitcoin so that we can grow it, bringing more knowledgeable people into it, and facilitating academic research into it so that we have help with the hard problems.

At a high level, something we looked at a while ago was the issue of proving solvency, building on Greg's work; the merkle tree solvency approach was Greg's. Then we can talk about signatures.

We also have been doing a new signature scheme for Bitcoin and happy to talk about how that is going.

The exchange solvency problem is this: you have an exchange, and you keep coins for your customers. The customers would like to know that the exchange's obligations, namely the money the exchange owes to its customers, are less than or equal to the amount of bitcoin that the exchange owns, so that it would always be able to pay its debts. The simplest way is to have the exchange publish all account balances and all of its assets, and then everyone can check that the obligations are less than the assets. Unfortunately this is too privacy-invasive, so nobody does it. The next step is Greg's approach, where the exchange commits to its assets, and then there's a way for customers to check that they can actually withdraw their funds and that the exchange has enough bitcoin to pay for the withdrawals. Unfortunately this forces the exchange to reveal all of its assets. So the next step is a solvency proof where everything is private. The exchange wouldn't reveal which bitcoins it owns; you would not learn its UTXOs or what the customers have, yet the exchange can still convince everyone that its assets, without revealing which UTXOs in particular, are greater than or equal to its outstanding obligations. We do this with commitments. The exchange commits to the bitcoins that it owns, without revealing which UTXOs it has or how many assets, and then commits to its obligations. Any customer can check that something was actually committed to, and that their balance was included. Then a zero-knowledge mechanism allows you to prove and check that the committed obligations are less than or equal to the total committed assets, without revealing details about the obligations.

The nice thing about this being automated is that the exchange can run it every day and publish a daily solvency proof. With this approach, the exchange failures we have seen in the past could not go undetected. We have talked with some exchanges about deploying it. Their long-term keys are actually stored away in hardware, so it's hard for them to have those keys used in a daily proof of solvency; you can't do the proof if the keys are offline. So we use valet keys: the main keys live in cold storage, but you can publish a key derived out of storage that is of no value for spending the coins, yet is sufficient for participation in the actual proof of solvency. So without touching cold storage, you can still do a proof-of-solvency, which takes away that objection to deploying this. Proving you own at least nothing is pretty easy, so yes, you need actual assets to make use of this. Well, you could have debt….

Anyway, we have a library ready and the code is available; if anyone wants to play with this, we're interested in having real experiments. If you encounter people interested in using it, it's on github, and we would be happy to help with integration stumbling blocks. Coinbase should be doing it right now. It's a chicken-and-egg problem: customers aren't demanding it, so someone needs to be the first to step up and do it. Daily audits are not something you can do in the real world; with this, you would be able to publish a proof-of-solvency on a daily basis, which is better than auditors once a year. The catch with doing it daily is that the proof itself is not small: it's about a gigabyte. For an exchange like Coinbase with a few million users, you would get about a gigabyte that you have to publish. You don't have to keep old proofs around, you only need a new file each day, so you would publish 1 GB per day.

Many exchanges don't want others to know how much they have. No, you would not reveal the total number of coins to others. Literally what happens is that we use the entire UTXO set as your mask. All you do is provide a commitment to a zero-or-one value that says "I own this UTXO" or "I don't own it". To the outside world, this commitment is hiding: anyone who looks from the outside can't figure out whether you committed to a zero or a one for a particular UTXO, so they don't know whether you own it. Nobody learns anything about the actual UTXOs that you own. All they get to see is a commitment to these zeroes and ones, and then a commitment to the sum of your assets. They don't see the assets in the clear, just crypto commitments. You put a number in an envelope; looking at the envelope, you have no idea what number is inside. You don't reveal trade volume, account activity, or the total number of customers. It's completely private.
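The commitment mechanics can be sketched with toy Pedersen-style commitments. The parameters below are tiny and insecure, and the two generators are simply assumed independent; a real deployment such as Provisions uses a proper elliptic-curve group and additionally proves in zero knowledge that each committed flag really is 0 or 1:

```python
import random

# Toy Pedersen-style commitments illustrating the two properties used
# above: hiding (randomness masks the committed bit) and additive
# homomorphism (per-UTXO commitments multiply into a commitment to the
# total). Insecure toy parameters.

p = 2**127 - 1                 # toy prime modulus (Mersenne prime)
q = p - 1                      # exponents are reduced mod the group order
g, h_ = 5, 7                   # generators, assumed independent here

def commit(value, r):
    return pow(g, value, p) * pow(h_, r, p) % p

random.seed(1)
flags = [1, 0, 1, 1, 0]        # "I own this UTXO" bits, one per UTXO
rands = [random.randrange(q) for _ in flags]
coms = [commit(b, r) for b, r in zip(flags, rands)]

# Anyone can multiply the published commitments...
product = 1
for c in coms:
    product = product * c % p

# ...and the exchange can open the product as a commitment to the sum.
total, total_r = sum(flags), sum(rands) % q
assert product == commit(total, total_r)
```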

How does the size of the proof scale with the number of UTXOs? It scales linearly; it's effectively a ring signature over the UTXO set. That's where the gigabyte comes from.

If you want absolute privacy for the exchange, then you would use the entire UTXO set. If you want to optimize, you could choose a subset. For example, you could exclude dust, anything small, because it won't affect your solvency. In the future, the full UTXO set might not be something you can get, in which case you would use a subset. What's happening here is that you have the set of UTXOs that you own, and then a larger set of UTXOs that you use in the proof. The reason to use the larger set is so that you don't reveal which UTXOs are yours; it's purely privacy for the exchange. If you don't care about privacy, you could use just your own UTXOs. The more you care about privacy, the larger the UTXO set you use. If you can't get all the UTXOs from the network, fine, just use a large enough set to provide the privacy you want.

One of the main limitations of this scheme's security is that participants could borrow money to appear solvent. I could use some BTC from someone else by asking him to sign my solvency proof. There's no way to avoid that, unfortunately. But if multiple exchanges do this, they could show that the UTXO sets they use are disjoint, so that they can't both have asked you to lend the same coin. If multiple exchanges publish proofs, you could show this is not happening. This would still be a big step forward for the kind of security provided by exchanges. It can't prove fiat balances. And maybe there's some altcoin with a strange design where the ring signature method wouldn't work; it might only work on bitcoin. There are limits, but they are reasonable ones.

The paper is on my webpage. The library is on github. The paper is called “Provisions”.

Let's talk crypto. One of the things we have been doing: one of my students, Alex, has been talking with Greg and others. The question is, we would like to suggest (we're just doing research, who knows if this catches on) a more appropriate signature for bitcoin. ECDSA was a poor choice. If you move to Schnorr sigs, which you should, there are other signatures with other interesting properties. I would like to tell you about signature aggregation and such. Is that interesting and appropriate? Let me explain how this works.

BLS signatures for bitcoin. That's me, Lynn, and Shacham. The point of these sigs, and why we were originally interested in them, is that they are short, but they have other properties that are interesting for bitcoin. Let me explain how they work and how the properties operate.

The setup is that there is a… I'm not sure what I can assume in background knowledge. I can stop and explain if necessary.

There is a group G of order p with a generator g in G. Normally you would use the multiplicative group of integers mod a prime: DSA uses that kind of group, and ECDSA uses an elliptic curve group. Here we are also going to use elliptic curve groups, but not the ones that NIST suggests: we are using pairing-friendly curves. They have been around for 17 years, and there are commercial systems using them, deployed in lots of places. At (mumble), anything you buy is signed with these BLS signatures. Home Depot uses them. They are widely deployed. I can talk about why they are using them, but not right now.

I want to talk about the properties of these signatures and see how they apply. The magical property of this group is that it has what is called a pairing. I am going to write it like this. Given an integer x in the range 0 to p-1, I can obviously compute the generator g to the power of x, an element of the group G. I can do exponentiation in this group. This is an elliptic curve group, but not a regular one: it's a pairing group.

The pairing e takes two group elements and maps them to some other group, call it G'. If you want to be concrete, every element here is a point on an elliptic curve, and we can take two points and map them to an element of some other group. The key property is that this pairing is what's called bilinear. Let's write out the property and show how it is useful: if I compute the pairing e(g^x, g^y), what I get is e(g, g)^(xy). This is the fundamental relation that makes pairings interesting.

I give you g^x and g^y, and all of a sudden the x and y come out of the parentheses. Is G' of the same order p? It's the same order p, exactly.

Just using this one magical property, we can already build amazing things. Let me show you the simplest one: the BLS signature scheme, which works as follows. Key generation is really simple. All you do is choose a random number; I will use alpha to denote secret keys. You choose a random number between 1 and p-1, and this is your secret key. The public key is just g to the alpha. This is the same as ECDSA: you choose a base and raise it to the exponent.

Signing here is a lot simpler. If I want to use my secret key to sign a message m, all I have to do is compute the signature, which I denote by sigma, as the hash of the message raised to the power of alpha. The hash maps any message into the group G, and this can be done efficiently. That's the whole signature. It's much simpler than Schnorr and ECDSA: you take a message, map it into the group, and raise it to the power of alpha. In elliptic curve language, you multiply it by alpha; we are more used to additive notation. It may be confusing, but every time I say raise to the power of alpha, in elliptic curve notation it's multiply by alpha.

How do we verify the signatures? Signing is fast; how about verification? We have the public key, which is g^alpha, and a signature, which is H(m)^alpha. The way we verify: we compute the pairing e of the public key and g, and ask whether that is equal to the pairing e of the hash of the message and sigma. That's the whole verification condition. It's much simpler than ECDSA and Schnorr. In Schnorr it's a flurry of equations; ECDSA is definitely worse.

Here it is simple. If the sig is valid, this is the pairing of g^alpha and g… oh wait, I got that wrong. You swapped them a little bit.

e(PK, H(m)) =? e(g, sigma).

That’s the verification question.

On the left side: e(g^alpha, H(m)) = e(g, H(m))^alpha; the alpha came out of the parentheses.

On the right side: e(g, H(m)^alpha) = e(g, H(m))^alpha. On the left the alpha came out of the first slot, and here it comes out of the second slot, so we have equality. If the sig is valid, verification succeeds because of this chain of equalities. So it's a very simple signature.
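To check the algebra end to end, here is a deliberately insecure toy model in Python: group elements are represented directly by their discrete logs, so the pairing e(g^x, g^y) = e(g, g)^(xy) becomes plain multiplication mod q. Discrete log is trivial by construction, so this only exercises the equations, nothing more:

```python
import hashlib

# Toy model of BLS over Z_q: elements stand for their exponents, so the
# pairing is multiplication mod q. Utterly insecure; algebra-check only.

q = 2**61 - 1                          # toy prime group order
G = 1                                  # the generator g, i.e. exponent 1

def pairing(x, y):                     # e(g^x, g^y) = e(g, g)^(x*y)
    return x * y % q

def hash_to_group(m: bytes):           # H: messages -> group (toy)
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

def keygen(seed: int):
    alpha = seed % q                   # secret key
    return alpha, alpha * G % q        # (sk, pk = g^alpha)

def sign(alpha, m: bytes):
    return alpha * hash_to_group(m) % q   # sigma = H(m)^alpha

def verify(pk, m: bytes, sigma):
    return pairing(pk, hash_to_group(m)) == pairing(G, sigma)

sk, pk = keygen(123456789)
sig = sign(sk, b"pay 1 BTC to bob")
assert verify(pk, b"pay 1 BTC to bob", sig)
assert not verify(pk, b"pay 2 BTC to bob", sig)
```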

You can prove security under the standard computational Diffie-Hellman assumption, assuming the hash function is a random oracle; Schnorr signatures also assume a random oracle. So it's the same assumptions as Schnorr sigs, except that Schnorr works with general elliptic curves and here we have to use pairing-friendly curves.

The signature is only one element of G, whereas Schnorr is two elements, so this can be half the size of a Schnorr signature. It's shorter, meaning you don't have to spend as much space on signatures in blocks. Only a factor of 2, though.

That can save a lot of space. The blockchain is storing lots of extra data. With segwit, different story, yeah.

Why is this relevant to bitcoin? Well there’s another property, the aggregation property. Aggregate signatures. This might seem like a magical crypto primitive.

We have a bunch of public keys and signed messages: PK1, m1, sig1; PK2, m2, sig2; and so on, up to a million public keys, whatever. Sure.

You should think about this as one block. A block has what, a few thousand transactions. About 2000 transactions in a block. About 2000 messages signed by different people producing different signatures. You have to store all these things in the block. Ignoring segwit, you have to store all the signatures in the block.

The interesting thing about BLS is that you can take all these signatures and aggregate them together, so that you get one signature, a single group element, 32 bytes, for the entire block. You don't have to store signatures at all. No signatures in the block; well, one signature for the entire block. Exactly, right. There are some issues with making this work. Let's talk about the math.

How does this work? Aggregation is simple: sigma is just the product of all the individual signatures. Each signature is a group element; in the language of elliptic curves, every signature is a point on the curve, so you take all the signatures and add them together.

By the way, there is a technicality in what needs to be hashed. If you do it this way, you can't just hash the message; you should also hash the public key as you're signing. This is good practice in general. It's unfortunate that bitcoin doesn't do it; well, they do it indirectly by hashing the txid, and segwit fixes this. Few schemes commit to the pubkey; commercially it's not typically done, so this is not just bitcoin's fault. When you build signatures in the real world, it's important that you prepend the public key to the message you're signing; otherwise you get duplicate-signature key selection (DSKS) attacks. When you do sig aggregation, it's crucial that you stick the pubkey into the hash or else things can go wrong.

How do we verify that single signature for an entire block? What do you need for verification? All the public keys and all the messages: PK1 and m1, PK2 and m2, etc., and then the single signature. We are not saving on messages or public keys, only on signature size.

How do we verify it? We compute e(PK1, H(m1)), multiply that by e(PK2, H(m2)), and so on up to e(PKn, H(mn)), and test whether that whole product equals e(g, sigma). That is the check.

Why does this work? e(PK1, H(m1)) is just e(g^alpha1, H(m1)) = e(g, H(m1))^alpha1; the property of the pairing is that the alphas come out. And just as they come out, they can go back in: e(g, H(m1)^alpha1), times all the others like it.

It turns out this is equal to a single pairing: the product over i from 1 to n of e(g, H(mi)^alphai) equals e(g, product of the H(mi)^alphai). This is just a property of pairings: if I have a product of pairings where the left-hand slot is always the same g and the right-hand slot pairs with different things, the product moves inside the pairing. And the product of all the H(mi)^alphai is exactly how we computed sigma. So we have equality. That's it.
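The aggregation step can be exercised in the same kind of insecure toy model (elements stand for their discrete logs, the pairing is multiplication mod q; as the transcript notes, the hash should also commit to the signer's public key). All names and numbers here are illustrative:

```python
import hashlib

# Toy aggregation: aggregate = group operation on signatures, i.e.
# addition in this exponent representation. Insecure; algebra-check only.

q = 2**61 - 1

def pairing(x, y):
    return x * y % q

def hash_to_group(pk: int, m: bytes):
    # The hash commits to the public key as well as the message.
    data = pk.to_bytes(16, "big") + m
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

signers = [(alpha, alpha % q) for alpha in (11, 22, 33)]  # (sk, pk=g^sk)
msgs = [b"tx-1", b"tx-2", b"tx-3"]
sigs = [a * hash_to_group(pk, m) % q for (a, pk), m in zip(signers, msgs)]

agg = sum(sigs) % q                    # "product" of the group elements

def verify_aggregate(pubkeys, messages, aggregate):
    # "Product of pairings" on the left, e(g, sigma) on the right.
    lhs = sum(pairing(pk, hash_to_group(pk, m))
              for pk, m in zip(pubkeys, messages)) % q
    return lhs == pairing(1, aggregate)

pks = [pk for _, pk in signers]
assert verify_aggregate(pks, msgs, agg)
assert not verify_aggregate(pks, [b"tx-1", b"tx-2", b"evil"], agg)
```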

What I want you to remember is the remarkable thing here: we can take signatures that were generated by different public keys on different messages and aggregate all of them. Schnorr has some aggregation properties, but they are weak ones: if 1 million people all sign the same message, then you can aggregate; these are called multi-signatures, and it's interactive. So you can aggregate in Schnorr if everyone signs the same message. With BLS sigs, you can do aggregation offline, non-interactively, over different messages.

The reason why we came up with BLS in 2001 was the original application was certificate chains. If you think about cert chains, you have 4 or 5, Google sends you 4 certs every time you go to Google. Each one has its own signature on it. With signature aggregation, you can take those and compress it into one, and anyone would be able to do the aggregation. It doesn’t have to be Google. They can look at all the signatures that the CA gave, and you can aggregate it and give it out to all the browsers. It’s just adding points together. Aggregation is really simple.

Pairing is a bit harder. Let's talk about timing; it's the one issue that should be discussed. To verify an aggregate signature, you have to compute a product of pairings. Say there are 2000 transactions and 4000 signatures per block; you have to verify 4000 signatures, so you will be doing 4000 pairing operations. There are many pairing libraries out there, like 5 or 6, and they are available, but most of them don't support products of pairings directly; they would do it the dumb way, computing each pairing separately and then multiplying. You can compute a product of pairings faster than that. Whenever you compute a pairing, one of the expensive operations at the end is a big exponentiation, a cleanup step; every time you compute a pairing, more than half the time is spent on this final exponentiation. So if you need to compute a product of pairings, it would be idiotic to do the final exponentiation each time. Instead, take the product of the uncleaned pairings, and then do one exponentiation on the whole thing at the end. This is an example of an optimization for computing products of pairings.
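A toy numeric model of that optimization (the `miller_loop` and final-exponentiation functions below are stand-ins, not a real pairing): because the final exponentiation is multiplicative, a product of n pairings needs only one cleanup step instead of n:

```python
# Stand-in model: a pairing = a cheap per-pair "Miller loop" followed by
# an expensive final exponentiation, which is multiplicative. So the
# product of n pairings can share a single final exponentiation.

p = 2**89 - 1                # toy modulus for the target group
FINAL_EXP = (p - 1) // 3     # stand-in final-exponentiation exponent

def miller_loop(x, y):       # cheap-ish per-pairing part (stand-in)
    return pow(x, y, p)

def final_exp(t):            # expensive cleanup step
    return pow(t, FINAL_EXP, p)

def pairing(x, y):
    return final_exp(miller_loop(x, y))

pairs = [(3, 5), (7, 11), (13, 17), (19, 23)]

# Naive: n full pairings, i.e. n final exponentiations.
naive = 1
for x, y in pairs:
    naive = naive * pairing(x, y) % p

# Optimized: multiply the "uncleaned" Miller values, one final exp.
acc = 1
for x, y in pairs:
    acc = acc * miller_loop(x, y) % p
fast = final_exp(acc)

assert naive == fast
```

The equality holds because final_exp(a·b) = (a·b)^k = a^k · b^k mod p; a real pairing library exploits the same structure.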

Alex is working on optimizing that. He built all this. The basic library, built from an off-the-shelf pairing library, is already available; BLS is now easy enough to use. Now the poor guy is digging into the implementation and optimizing the multiplication. We will have a fast multiply there.

The challenge here is that a single ordinary pairing operation by itself, even with a fast library, takes like 0.5 milliseconds. That is much slower than an ECDSA sig validation, about 7x slower than ECDSA. This is before optimization.

Is addition of points pretty much the same? Yes, the same as any elliptic curve. The curve is different, but it's basically like P-256. Actually, many of these are j-invariant-zero curves that admit the same kinds of optimizations as bitcoin's secp256k1 curve. I can use those words here?

So that's the benefit. What I'm hoping is that Alex finishes the optimizations, and then we write a paper to explain the performance benefits. It's not all rosy. There are downsides: if the aggregate signature doesn't verify, you can't tell which signature caused it to fail. So if there is a failure, you can't tell who's to blame. This is not an issue in block verification, where we only care whether the whole thing is valid or not.

This part could be moved to FPGAs and ASICs. Yes, dramatic speedup. There is a group in Japan that did an FPGA implementation; they got from 500 microseconds down to a few microseconds, and it will be hundreds of nanoseconds. The greater implementation challenge is that, implemented as you describe, this interacts poorly with caching. In bitcoin block propagation, we validate blocks quickly because the signatures have already been validated and cached. So figuring out how to make this work with caching is a challenge. There could be ways to do caching, but they break the product-of-pairings optimization. So Pieter and I worked with Alex to figure out the engineering implications for Bitcoin and some of the tradeoffs and contours, and gave him some advice on how to measure this and justify it and argue for it.

In terms of how this could actually end up in bitcoin, we're far away from that, but there are incremental steps we can take toward it. Something I hope to propose in a while is that we switch to Schnorr signatures as a first step, then do aggregation, but only at the transaction level, not at the whole-block level. Where you could, assuming you have many inputs, like coinjoin. No, just normal transactions. A coinjoin is usually like 40 sigs or so; ordinary transactions often have 2-3 inputs. We actually computed against the current blockchain: if all transactions were to aggregate just the signatures on their own inputs, it's like a 30% savings on block size. That's good, but not anywhere near the benefit of block-level aggregation. To deploy this in bitcoin, if we were to deploy it, it would be a soft-fork: a soft-fork for BLS and transaction-level signature aggregation, which is a good step toward block-level aggregation. So it's a good first step that gets us there.

How validation currently works is strictly a per-input thing, and changes are needed to have part of the validation go beyond that. It's not a problem, but it needs work. You don't need to aggregate all the signatures in a block; perhaps only those that you care about. You could aggregate in sets of 100 if you want; you don't have to aggregate everything together. If you abandon the product-of-pairings optimization, you could merkleize: generate the intermediate products of pairs of signatures, merkleize them, and send partial proofs from this. But this is going to be inefficient. The groups could be quite large; they don't have to be just pairs. If you have 4000 signatures… sure. The point is that if you take all of these signatures and pairwise multiply them together into a tree, then if you were interested in all the signatures in any dyadic segment of it, I just show you the hash tree down to it, and I show you some signatures, and here's the product of the other side of the tree that I have not shown you, but have at least committed to. You have a product at the top that everyone commits to and agrees on, and this lets you show any sig with a log-size proof. But it requires sending g^t for the products, which is usually big, so it's a cute trick that might not be worth doing. You don't have to aggregate everything; you can choose the number of signatures you're aggregating. The merkleizing trick is a good possibility.
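A sketch of that product tree over stand-in group elements (small integers mod q here; a real instantiation would use curve points and, as noted, additional commitments to the partial products, which is what makes it costly):

```python
# Toy product tree: leaves are per-signature group elements, internal
# nodes are products of their children. Committing to the root lets you
# exhibit any one leaf with a log-sized list of sibling products.

q = 2**61 - 1

def build_product_tree(leaves):
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [1]                  # pad with the identity
        levels.append([cur[i] * cur[i + 1] % q
                       for i in range(0, len(cur), 2)])
    return levels                            # levels[-1][0] is the root

def open_leaf(levels, index):
    path = []
    for level in levels[:-1]:
        level = level + [1] if len(level) % 2 else level
        path.append(level[index ^ 1])        # sibling product
        index //= 2
    return path

def check_leaf(root, leaf, path):
    node = leaf
    for sib in path:                         # multiply up the tree
        node = node * sib % q
    return node == root

sigs = [3, 5, 7, 11, 13, 17, 19, 23]         # stand-in group elements
levels = build_product_tree(sigs)
root = levels[-1][0]
proof = open_leaf(levels, 5)                 # log-sized: 3 siblings for 8 leaves
assert check_leaf(root, sigs[5], proof)
```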

I don't know if bitcoin will move in this direction. Quantum computing destroys a lot of things. It's a wonderful problem to look at. We have signatures based on… but there are problems with… Post-quantum crypto doesn't do aggregation? No, and that's a problem I am interested in.

These are supersingular curves. There are multiple families; one of them is supersingular. You could go that way. You can shave off a few… the aggregate signature with supersingular curves will be bigger; there are other families of curves that will shrink it.

I wanted you to be aware of the mechanism so that when you design the next system, you know that it exists. Yesterday I mentioned the amount of new tech that we could deploy in bitcoin to make it more scalable and expand it to more use cases; I was talking about some of the things discussed here today. There are many interesting pieces of technology, sometimes invented a decade ago, that didn't find much of a home, but are very useful for us.

These are not NIST curves. You are using a different type of curve, but it's a regular elliptic curve like in ECDSA and Schnorr: different types of curves, same mathematics. These curves have been around for 17 years. The math behind pairings is beautiful; it's addictive. Most problems are easy to solve once you have a pairing, and a lot of people have looked at pairings. So far there are no attacks on these curves that don't also apply to general elliptic curves. There is some bias in the crypto community that arises because the pairing operation was originally used as an attack to break discrete log security in some poorly-chosen elliptic curve groups. But what Dan's work does is take that attack and, using careful group parameter choices, turn it into something productive. It's a feature, not a bug: it turns the attack into a feature. For many people, particularly more CS people, the thinking is "oh, there was an attack" and they're less comfortable with this. It has been standardized; IEEE P1363.3 standardized pairing crypto. The bitcoin curve is not a NIST curve either.

So commercially, there's a product that secures a connection where it was convenient to use identity-based encryption, another application of pairings. The PoS terminal just needs to know the name of the credit card processor, not an actual key. The post office has thousands of PoS terminals, and traditionally it's quite difficult for large merchants to install a key in each point-of-sale terminal. So that was the ease-of-use argument. There are also financial applications.

Could you do an ECDSA operation using this other type of curve? Say you want to do Schnorr signatures on the same curve: does that work? Yes. Discrete log is hard on these curves, so you can use them for Schnorr. There is another problem, the decisional Diffie-Hellman problem, which is easy on these curves. So you can have schemes that are secure on secp256k1 but fail on pairing curves. For example, with some kinds of HD wallet schemes, you could tell whether two separate pubkeys are linked if you were using a pairing-friendly group. That's a good example. But generally the discrete log and computational Diffie-Hellman problems, which Schnorr relies on, are computationally hard here, so yes, you could use these curves for that as well.

Non-interactive three-way Diffie-Hellman, but not more than three-way. Do you have applications for that? I just think it's cool. No. It's very easy to come up with cool things you can do with pairings. There are many interesting problems I have looked at in bitcoin where I say I can't solve it; well, then I look at pairings and figure out how to do it, and then I try to get rid of the pairing and see if the solution still works. That line of thinking has been helpful.

So let's see. Another thing I was hoping to talk about was password hashing. This would not be applicable to proof-of-work. There have been many developments in password security. This would not be directly for bitcoin. This would be wallet-side. Can I tell you about password hashing? scrypt? Scrypt is used in mining. The mining equipment takes advantage of time-memory tradeoffs.

History of password hashing, then the new stuff. Right, so. Traditionally, PBKDF2-type hashing is vulnerable of course to hardware attacks. Scrypt came up with this interesting idea: to evaluate the hash function you need a lot of space. There are some time-space tradeoffs that apply to scrypt. The whole point of scrypt was that if an attacker needs to run through a dictionary of passwords, the attacker would need to spend a lot on hardware to do that attack. So you need 1 billion scrypt engines on a chip or something, but each one needs to be large because of all the memory. Running scrypt 1 billion times, ignoring time-space tradeoffs, requires 1 billion times the hardware of running scrypt once.
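To make the hardware-cost contrast concrete, here is a small Python sketch comparing PBKDF2 (compute-hard only) with scrypt (memory-hard), using the standard library's `hashlib`. The cost parameters are illustrative choices for this sketch, not recommendations:

```python
import hashlib

password = b"correct horse battery staple"
salt = b"per-user random salt"  # in practice, use os.urandom(16)

# PBKDF2: iterations raise CPU cost, but the working state is tiny,
# so an attacker can pack enormous numbers of engines onto one chip.
pb = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# scrypt: the n and r parameters force the evaluator to hold roughly
# 128 * r * n bytes (here 16 MiB) in memory, so each attack engine
# must carry that much RAM with it.
sc = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                    maxmem=64 * 1024 * 1024)
```

Note that `hashlib.scrypt` requires a Python built against OpenSSL 1.1 or later.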

Turns out there’s a problem with scrypt, beyond time-space tradeoffs, the memory-access pattern of scrypt depends on the password being hashed. This is a problem. In the world of password hashing, this is an issue. Side channel stuff.

It's quite easy to demonstrate cache timing attacks on scrypt. Given the time it takes to compute scrypt, it's actually possible to disqualify many password candidates without using memory at all. The whole point of scrypt was that each time you evaluate it, you need to spend a lot of memory. But if I have the memory access pattern from evaluating scrypt on the real password, I can cross out password candidates without using any memory at all. I can do a dictionary attack without spending on memory. This is a problem with the design of scrypt. It can't do what it's supposed to do if the attacker has access to side channel information.

So the next challenge is… if you are on a shared VPS or something like that, and you hash your password on it, say you hash a password from your customer with scrypt, and someone measures how long it took to hash, then they gain a little bit of knowledge, and that lets them filter the candidate passwords without memory. Normally with scrypt you spend a lot of memory, but if you have access to a side channel, then that guarantee goes away.

Can we build a hash function that is memory-hard, but where the memory access pattern is independent of the password being hashed? It can depend on a salt, but not the password. The answer is yes. Argon2 is one of the candidates that provides this. Unfortunately Argon2 doesn't have a security proof, and we were able to show a time-space tradeoff, in the version that has a constant… there was a recent paper that does give a time-space tradeoff on Argon2, and then it shows that tradeoff is optimal.

We wanted to achieve the best of all worlds. We want memory hardness. We want memory access independent of the password. We want a fast scheme: you should be able to evaluate 1000 passwords/sec. And we want a security proof that achieves all these goals, to show that it's memory-hard with no time-space tradeoff. This is what balloon functions provide. The implementation is 10 lines of python. So it's really simple to implement. I can show you how they work.

This would be more for wallet security. Client has to provide a password to the wallet, to open up and expose secret keys for spending money. Like in a VPS. Or website login. Or an exchange. Login to an exchange for an example. Prevent sidechannel attack.

Imagine you're an exchange. Customers log in to the exchange, they have to provide a password, and the exchange hashes the password and compares it to the hash in the database. You would like the hash function to be fast, because lots of customers are logging in, and you would like the memory access pattern to be independent of the password while you are computing the hash, so that a side channel attack will not expose the password. I can show you how the password scheme works.
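That login flow can be sketched with Python's standard library. PBKDF2 stands in for the KDF here, and the function names are made up for illustration:

```python
import hashlib
import hmac
import os

def hash_password(password: bytes) -> tuple[bytes, bytes]:
    """Hash a new password under a fresh random salt; store both."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    return salt, digest

def check_password(password: bytes, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash at login and compare against the database copy."""
    candidate = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    # compare_digest runs in constant time, so the comparison itself
    # does not leak how many bytes matched.
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password(b"hunter2")
```

The constant-time comparison closes off one small timing channel; the point of the discussion is that the KDF itself should close off the rest.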

You take your password, and the first thing you do is use a hash function to fill up a large buffer. So basically, you hash the password with 1, with 2, with 3, with 4, and you fill up this large buffer. Now what you do is the following. You have a large buffer derived from the password, potentially also from the salt. How large is large? That's a security parameter; that's the memory hardness. So maybe something that fills up your L2 cache. The attacker then has to at least replicate your L2 cache. You don't want to go beyond that, because then a large website would get too slow in practice.

So now what you do is you have a pointer that runs sequentially through the array, and at each step it does the following operation. Say the pointer is over here, this is the pointer i; it just runs through the array. At each step, it looks at the current value in the bucket and the value in the previous bucket, and then using a hash function it generates 4 more random points (other buckets). So we have 6 points total. It concatenates all the points together, hashes the result, and sticks that in as the new value at this point. Are the 4 other locations determined by the data in the bucket? No, heaven forbid. They are only determined by the index and the salt. Salt-based but not password-based. These 4 random entries are determined by a hash function: you give i to the hash function to get the 4 other pointers. You concatenate them all together, then you hash them, and that's the new value here. Then you do the same thing on the next entry: you generate 6 points, you hash the values, and you continue on around. That's one round of balloon. You iterate this k times, where k would be like 3, and it turns out that the security is proportional to this k. The more times you iterate, the stronger the security proof becomes. There's some graph property here; it's a pebbling-type argument. The point is, if you try to evaluate this hash function with 1/4 the space, your runtime will blow up exponentially in k. If you want to be paranoid, set k to 10. If you try to evaluate it with a quarter of the space, then your runtime blows up a lot.
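A toy version of those steps (fill the buffer, then mix it for k rounds) might look like the following in Python. This is a sketch to show the structure, not the reference Balloon implementation; the parameter names `space`, `rounds`, and `delta` are mine:

```python
import hashlib

def balloon(password: bytes, salt: bytes,
            space: int = 1024, rounds: int = 3, delta: int = 4) -> bytes:
    """Toy sketch of Balloon hashing; not the reference implementation."""
    def H(*parts: bytes) -> bytes:
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return h.digest()

    def ib(i: int) -> bytes:  # fixed-width integer encoding
        return i.to_bytes(8, "big")

    # Step 1: expand -- fill the buffer from the password and salt.
    buf = [H(ib(0), password, salt)]
    for i in range(1, space):
        buf.append(H(ib(i), buf[i - 1]))

    # Step 2: mix -- `rounds` passes; each cell absorbs its predecessor
    # (wrapping around at the start) plus `delta` pseudorandom cells whose
    # indices depend only on the round, position, and salt -- never the password.
    for r in range(rounds):
        for i in range(space):
            parts = [buf[i - 1], buf[i]]
            for j in range(delta):
                idx = int.from_bytes(H(ib(r), ib(i), ib(j), salt), "big") % space
                parts.append(buf[idx])
            buf[i] = H(ib(i), *parts)

    # Step 3: extract -- the last cell is the output.
    return buf[-1]
```

Because the probe indices depend only on the salt and loop counters, the memory access pattern is identical for every candidate password, which is the side-channel property the scheme is after.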

This is called balloon hashing. Again, we have a website for this. Sample code is available. This is a perfectly good hash function. In the password hashing competition, there were often people dismissing side-channel attacks. It seemed to be missing the point.

The fact is that we can get side channel resistance, so we might as well use functions that provide it. We shouldn't even bother debating the problem; just use this. People might misuse the tech anyway, so we have to be as secure as possible. In your scheme, in this scheme, it is important that there is a salt in it, because otherwise the memory access patterns are fixed and this leads to hardware issues. So you must use a salt. This is important when talking with bitcoin people (perhaps not us): one application is that people use hashing for brain wallets, where there is no place to store… they wouldn't need a brain wallet if they had a place to store the data, so this might not be as useful for that application, which is really not advised anyway. There have been papers about attacking brain wallets. This would also make it unsuitable for proof-of-work, because you need to enforce a random salt. I did not bring this up as an alternative to proof-of-work. That's a great point. For a KDF, you want something that doesn't lend itself to PoW, because if someone makes a cryptocurrency out of it, they start trying to optimize your KDF for you… (laughter).

I have to say, like, we have also been talking with companies about this. PBKDF2, Apple is using that everywhere. It's disappointing. It's 20 year old tech. It's not the right thing to use. We have developed better things since then. The standards haven't really evolved, though. So people who want to be standards-compliant, well, they are screwed, because NIST has only approved PBKDF2. If we can bring leverage to NIST, we should update the standards. Argon2 to balloon is the upgrade path.

Ignoring speed, is there any disadvantage to using more memory than L2 cache? Would it be less secure? No, it's better if you use more memory. But the moment you go off L2 cache, the time to hash goes up.

It would be slow going into main memory, because it's making random probes. But if it was staying in L2, and making less frequent calls to main memory, well, this could introduce a time-space tradeoff. But could you show it would still be better than just having the cache alone? Say I am time-bounded in execution, which I usually am, but perhaps I have more memory. I could use the L2 cache and be free of tradeoff, but would I be better off using main memory in some limited way, and still get an improvement? Perhaps bias the hash function to more likely hit within a certain window? Well, the assumption is that the hash function is a random oracle.

You could use commodity memory, but when you do that you lose a lot of time, because of refresh: during the refresh cycle it doesn't perform the function. When you move the same function into an ASIC, it will be approximately 1000 times faster. Also, because users pick really weak passwords, from a word book or a bad generator, you can generate rainbow tables… not for this, with a salt. The salt… without the salt, the hardware optimizations would be really straightforward.

One argument against memory-hard functions is that, in a model where you are up against a motivated adversary, perhaps you would be better off… well, let me expand this argument. Basically, if you have a pbkdf2-kind of function, your cost is joules of energy for computing. Just assume that the silicon is going to be optimized as much as possible, and then we can compute the cost of cracking it. I like this assumption that buying the hardware is amortized. If we make a different function where much of the cost has been moved into buying the hardware, and the hardware is going to be heavily used, and if it does not take as much energy to attack, perhaps the cost of attacking goes down. The argument assumes the cost is proportional to area on the chip, and that the area on the chip to evaluate this is large. But this is not the case for dynamic RAM; it uses much less energy per unit gate area. You can use types of memory that do not consume energy when idle. Energy consumption comes from running lots of cycles, so it wi… In the password hashing competition entries, if you ignore memory hardness, the amount of actual computation being done is still similar: if the algorithm was pipelined enough, you keep your multipliers multiplying, make the ALU work hard. If it is working as hard as it possibly can, then the memory hardness has not cost you any joules of energy. If memory access is free, it's not any less work than if you had used a typical hash function. So it's not any weaker in this model where we assume the hardware is free or amortized. We agree. We have been working under the assumption about area. It's not a good assumption for everything. If that's an issue, then it's an issue with the whole approach of memory hardness. It requires some careful analysis to figure out what the ratio is, so that as long as memory is not free beyond that point, or not more than this much cheaper… perhaps a tenth of the cost of a transistor. Just assume it doesn't use any energy when it's not being accessed.

You may end up against the fact that someone can go make an optimized ASIC; they could use something different that scales differently. The reason power usage is proportional to area is because of optimizations in current processes to be fast. Your power is essentially the charge in the transistors leaking. You could imagine architectures that are very different, maybe supercooled in nitrogen, who knows what kind of crazy things… perhaps for the attacker it would be better to compute slower in more space, and take less energy. You might be able to spend $100 billion and crack passwords, and it's very hard to know. Greg's approach of showing it's at least certainly no worse is interesting.

Well, it's funny, we've been giving my students a hard time about this question. The problem is, we deal with things we can prove. We're never going to publish an ad hoc construction. But we could prove something under the assumption that something is free. Yes, for sure. We think about this. In memory hardness we have a model, we can prove it. For computational hardness, we can't prove complexity lower bounds. It's a little bit harder. It's a fantastic question. There are some underlying hash operations; you assign each complexity 1, assume it's a random oracle, and then bound the number of oracle queries. When you get into the engineering of these things, mining ASICs make some remarkable tradeoffs, like running at very high error rates, which is unusual logic. It's a terrific question. I am going to go back to the students and push them on this. On the PHC mailing list, there was an earlier post I made where I presented the question and it didn't get much traction. The Argon team replied; Dmitry had a hand-wavy answer, I guess. Maybe there is a way to argue security like that.

It would be interesting to look at reversible computing models and that kind of theoretical work. You could do operations with no power usage until the very end, leaving a mountain of destroyed information… in reversible computation, the only energy cost is for destroying information. You could make elegant academic arguments about these kinds of functions, because you either have to store lots of garbage, or you need lots and lots of energy burned as you go. The proof would be: it takes this much energy in a reversible computing model. Yes, people have done this on a small scale, like 1-bit stuff.

I used to work on quantum-ish stuff. They argued that quantum computing would never exist, because the cost of computing would scale exponentially with the number of bits. (laughter) I think we're running out of time, but I did want to bring up the issue of quantum computing and the future of bitcoin. There is some criticism about the lack of preparation regarding what bitcoin has been doing about post-quantum security. There should be a quantum-resistant system.

One thing you need to ask is how real the threat is. An IBM group and a Google group are working on building a quantum computer. More interesting, there's a startup called Rigetti Computing. Once it hits the startup level and people want to make money off this, I think it's getting to the point where we will be seeing a Moore's law. The architecture they have works. They will scale. It will take effort and lots of engineering, but there's no fundamental reason why what they are doing won't work. Maybe you are skeptical, but I could explain the physics. In general, the answer I have in the bitcoin space is that hash-based signatures are quite boring… Yes, keep it boring. That's the right way to go. The problem is that hash-based signatures will blow up the block size. Lamport aggregation or something.

We kind of have a hash-based signature aggregation scheme. It's not really aggregation. Imagine a hash-based signature; you could use the block hash as a cut-and-choose selection for a subset of a larger signature, because for a hash-based signature you can drop security by cutting off parts of the signature. You commit to the whole of the signature, and you use the block hash to select the subset of the signature that you reveal. You can reveal less and less of the signatures as they get buried further back… I see what you are saying. The argument would be that the computation to reproduce all the blocks, to rig the random selection and get a favorable subset of signatures, is a hardness assumption for the signatures. You could kind of compress. Tailor it to the particular blockchain application. By using the fact that we have this computationally hard random beacon in bitcoin, we could take a bunch of signatures and then throw out parts of them, which is an interesting combination. This is the best I have come up with.

Yes, that's a great question; we would love to work on that. I would have deployed hash-based signatures for bitcoin years ago, if it were not for the efficiency problems. Would core developers consider moving to hash-based signatures? We need to make it an option. We have people with thousands of bitcoin in a cold wallet that they very rarely move. Even if it takes 40 kb, we should secure those with a hash-based signature.

One of the problems with hash-based signatures is the reuse problem. Not reusing. Maybe inside MAST or something. You can make a tree of keys; there is ultimate convergence of these schemes. We call these reuse-resistant hash-based signatures. You can use hash-based signatures that are only supposed to be used once, but if you reuse them once more, it's still okay. There's a limit. There's the SPHINCS scheme. 40 kilobytes per signature. That's why I said 40 kilobytes per signature, yeah. Yeah, we are interested in somewhat-reusable schemes, with strong claims like 2 or 3 times is okay.

So it's interesting. The structure we have is that you're supposed to use them only once; if you happen to use them twice, then the attacker would be able to forge some messages, but very few. If you use them three times, then the set of messages the attacker can forge grows, and if you use them more, then they can forge whatever they want. This is also true for Lamport signatures. We have a slightly different structure. We should think about applying that to bitcoin.
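For reference, a Lamport one-time signature is short enough to write down. A minimal Python sketch: signing reveals one preimage per message bit, which is exactly why a second signature on a different message starts leaking forgeable material, as described above:

```python
import hashlib
import secrets

def keygen():
    """256 pairs of random secrets; the pubkey is their hashes."""
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[hashlib.sha256(x).digest() for x in pair] for pair in sk]
    return sk, pk

def _bits(msg: bytes):
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    """Reveal one secret per bit of the message digest."""
    return [sk[i][b] for i, b in enumerate(_bits(msg))]

def verify(msg: bytes, sig, pk) -> bool:
    """Check each revealed secret against the committed hash."""
    return all(hashlib.sha256(sig[i]).digest() == pk[i][b]
               for i, b in enumerate(_bits(msg)))
```

Each signature is 256 x 32 bytes = 8 KB plus the pubkey, which is where the "blows up the block size" concern comes from; schemes like SPHINCS trade structure for statelessness and land around 40 KB.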

The other quantum thing is that you could say sha256 is still okay, but we… no, sha256 is not okay. You should move to sha384. For PoW… it's fine. So what do you… the transition is a mess? Overnight… There have been some recent papers. Quantum ASIC overnight. Quantum resistance… is not quite what it has been assumed to be.

Under a quantum attack… on a classical computer you can break an n-bit hash function's collision resistance in time 2^(n/2); on a quantum computer it's time 2^(n/3). So if you want a collision-resistant hash function, you need sha384. The criticism of this is that the classic result is a birthday attack, and if you naively implement a birthday attack it takes 2^(n/2) space, which is impractical. But there are constant-space birthday attacks. There was an argument on the bitcoin-dev mailing list where someone was making a protocol that used a 64-bit hash and they argued it was infeasible to find a collision. My response included a 64-bit collision using their message. Preimage attack. Okay, nice.
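The constant-space birthday attack mentioned here is usually done with cycle finding (the rho method). A sketch on a toy 32-bit hash, so it finishes in roughly 2^16 steps instead of 2^32:

```python
import hashlib

N = 4  # truncate to 32 bits so the demo runs in ~2^16 steps

def h(x: bytes) -> bytes:
    """A toy 32-bit hash: truncated SHA-256."""
    return hashlib.sha256(x).digest()[:N]

def find_collision(seed: bytes = b"seed"):
    """Find x != y with h(x) == h(y) via Floyd cycle finding, O(1) memory."""
    # Phase 1: detect a cycle in the sequence seed, h(seed), h(h(seed)), ...
    tortoise, hare = h(seed), h(h(seed))
    while tortoise != hare:
        tortoise = h(tortoise)
        hare = h(h(hare))
    # Phase 2: walk from the start and from the meeting point in lockstep;
    # the step just before the two walks merge is a collision
    # (this fails only in the rare case that the seed lies on the cycle).
    tortoise = seed
    while h(tortoise) != h(hare):
        tortoise = h(tortoise)
        hare = h(hare)
    return tortoise, hare
```

The memory use is two hash values, no table; the 2^(n/2) time cost is unchanged, which is why a 64-bit hash offers no meaningful collision resistance.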

But how many blocks is it? Say Bitmain has a quantum ASIC, and for sha256 they are a few billion times faster. You can find PoW solutions faster if you have a quantum computer. It's 2^70 right now? You could do it in 2^35 with a quantum computer. In bitcoin's economic model, having a difference of double is already a big problem. 2^35 would kill bitcoin. Having multiple orders of magnitude would be a huge disaster; it would kill bitcoin overnight.

No, what would happen is that the difficulty would just go up overnight. It would probably kill the system. One person gets a couple million bitcoin overnight. Having a 2x improvement is already kind of dubious. If overnight a new miner showed up that had a 2x efficiency, they would probably rapidly push everyone out of business, and it would be bad for the system, even though the system would continue to work technically.

Since you're doing search in a 70-bit space, the quantum computer can do this in 2^35 time. Yes, this is a problem. I understand what you're saying. Well, that's assuming the quantum computer's base operation rate isn't some 2^30 factor slower. Which is a big condition.

Well, the first quantum computers hopefully will have massive overhead. If they are not sold on open market, then we’re screwed. Well the problem is that any PoW has this issue. You just make them big enough that hopefully nobody can build a big enough quantum computer… Any sequential PoW has this problem. Let’s talk with Bram Cohen.

The blockchain itself is a sequential PoW. Yeah, there's nothing inherent; the current miners would be in trouble. If everyone had quantum ASICs with fair distribution, and if we get a diverse group of miners buying them, then bitcoin will survive. But otherwise we're fucked. It's a huge problem. It's true of any improvement, yeah. If ASICs were not available on the open market, then bitcoin would be dead by now.

Financial Crypto. Journal of Cryptology. Eurocrypt. Who reads journals anymore?

Symmetric encryption and hashing can be okay in the quantum world if you double the key size. So generally, for someone who is not familiar with these speed-ups: any search over a space of size m that takes m steps on a classical computer takes square root of m steps on a quantum computer. Bitcoin is doing a search over a space of size 2^70. On a quantum computer it's the square root, 2^35. So it's a quadratic speed-up. That's the only speed-up you are guaranteed across all functions and all searches. That's the Grover bound. For normal cryptosystems, that's not too threatening, because you just double your key sizes and then you're just as secure as in the classical world. But for bitcoin, countering it would require an exponential increase in difficulty, because you're doubling the number of bits. So we have to wait for the quantum ASICs or whatever.

The quantum computers today require working near absolute zero. There are really big fridges. They are not going to appear in your iphone any time soon. The very first quantum computers are not going to be something you can buy. The first ones won't be so fast. They will get the 2^35 step count, but they will work at 100 operations per second. There will not be as much of a speedup as you might guess. This gives some breathing time. If we get ones that operate reasonably fast, we are screwed. Perhaps we will see that the tech gets regulated. Maybe you won't be able to buy one.
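The back-of-envelope arithmetic behind that "breathing time" claim, taking the 2^70 search space and the assumed 100 operations per second at face value:

```python
from math import isqrt

space = 2 ** 70              # rough size of today's PoW search, per the discussion
grover_steps = isqrt(space)  # Grover's algorithm: square root of the search space

# At the assumed 100 operations per second for an early quantum machine:
seconds = grover_steps / 100
years = seconds / (365 * 24 * 3600)  # roughly a decade per solution at this rate
```

So even with the full quadratic speed-up, a 100 op/s machine takes on the order of a decade per PoW solution, which is why the raw speed-up is not an immediate mining threat.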

Let me give you some intuition into how quantum computers operate. The measure is qubits. You need thousands of qubits for doing something to bitcoin. How many have been demonstrated? IBM just put out 5 qubits, and 10 or fewer have been demonstrated. Then there's decoherence: the thing starts to interact with the environment. The measure is not just the number of qubits, but the decoherence time. How long does it stay in coherence? If you look at the time in coherence versus the time per operation, yeah, today that's measured in the thousands, which means the only thing you can do, and it's interesting to program this way, is arbitrary parallel computation, and after 1000 steps the thing dies. That's with error correction, and without error correction you barely go anywhere at all. It's very parallel computation, but you have to stay shallow. Grover's algorithm, we're probably nowhere near running Grover's algorithm at any point in my lifetime. You have to reduce it to a shallow set of gates. You need the oracle that Grover's algorithm calls, which has to implement your function as a circuit that is perfectly reversible and returns one bit. So a while back, I tried to figure out what sha256 looked like as a reversible circuit that returns one bit. It's the depth that matters, and the algorithm itself is kind of sequential. You don't have to worry about this; it's a good exercise to go through. You should be more worried about the signatures. If we see it coming, we can deploy hash-based signatures. There's some scaling ouch to that, and segwit might be able to fix that. The scaling ouch is in signatures, so segwit helps there. There's a phasing-in mechanism where everyone starts by generating EC keys and hash keys, hashes them, and uses a merklized abstract syntax tree to say everyone is using 1-of-2.
You make your pubkey say: use the post-quantum sig or an ordinary pubkey. You make pubkeys that allow either one, and then later there's a network rule that says this is no longer an OR, you must use the quantum-safe scheme. So the other one becomes invalid. We can have users widely deploying quantum-safe keys years before they are signing anything with quantum-safe keys, and then later the network blocks the other one. The migration needs to start very early. They need to store the two keys next to each other. It's a burden on the wallets early on, and then there's barely any burden on the network until there's an actual threat. This is going to take many years to deploy. You should start designing and doing this now. Yeah, we have been, but we haven't been publishing enough about this. The danger is that the people who build the hardware for cold storage probably won't even do that until it's necessary. If you want to store something for 20 years, perhaps only use the quantum scheme. But if it's cold storage… many transactions have more than 5 kb of signatures because of multiple inputs, so someone's cold storage key requiring a 5 kb sig, well, that's fine, they pay a higher fee. So they should look for cold storage that has quantum resistance, and people should not buy cold storage devices that don't have post-quantum resistance built in.