# Blockchain is Just One Chapter in the Larger Story Being Written by Cryptography Right Now

---

*TL;DR: A piece about the history of cryptography, how it works, zero knowledge proofs, and their future potential impact. Having a basic knowledge of blockchain would be helpful; however, this is intended to be relatable (and enjoyable) to a wide audience.*

# A Warm Up…

In Agatha Christie’s famous series, it is said of Miss Marple, the elderly woman famous for solving all sorts of crimes in her hometown, that…

“…there she sits, an elderly spinster, sweet, placid, or so you’d think, yet her mind has plumbed the depths of human iniquity and taken it all in the day’s work. She has lived all her life in the little rural village of St. Mary Mead and it’s extraordinary. She knows the world only through the prism of that village and its daily life, but by knowing the village so thoroughly she seems to know the world.”

And to quote Tolstoy,

“If you want to be universal, speak of your own village.”

What the writers above point to is the tremendous ability to deduce the most profound truths and knowledge from a narrower perspective and the smaller considerations of this world. Not only is this a great encouragement to the soul that feels trapped, burdened and bored in a *“mundane”* life, but it is absolutely freeing to the soul that feels unrelenting pressure to understand anything and everything before wholly stepping out to take risks and explore. Both states are resignations, by the way: the former to the idea that everything is understood, hence boring, and the latter to the idea that nothing could possibly be understood, hence unattainable. The speed of innovation and change often leaves us drifting from one polarizing side to the other.

However, there exists another option: the pursuit of understanding. A pursuit implies a plan, a starting point. If you’re *“pursuing”* a career as a doctor, you didn’t wake up this morning and decide, “I’d like to perform surgery today.” No, you’ve likely gone to school, taken classes, read, applied for medical school or are in a residency program. Think about how we ask children, “What do you want to be when you grow up?”, but we ask adults, “What are you studying?” or “What are you working on?” There is a marked difference in our expectations for an answer: wanting to be something is one thing, but pursuing something is completely different. And if we really want to understand something, we have to pursue it from where we start, which is in understanding the smaller details and history as they relate to our world.

What follows is a pursuit of understanding cryptography and its applications. What began as an investigation into its immediate applications and popularized terms as they relate to blockchain ended in the realization that this is really where the whole story began. The chief aim here is to bring an understanding of cryptography, its past and present history as well as its future applications and potential, to a far larger audience that likely has no background in mathematics, computer science or cryptography to start. Edgar Allan Poe inspired William Friedman with his short story, *The Gold-Bug*, and its references to cryptography, so who knows what mind one more good story may attract to this space…

# The Beginning…

As a whole, cryptography aims to enable secure communication or sharing of information between two or more parties through protocols that prevent potentially malicious third parties from reading private messages or obtaining private data. Tangentially, cryptography covers various encryption models that protect data in storage from being revealed if “stolen” by third parties.

## History

The history of cryptography can be separated into two time frames: classical and modern. In the classical world, messages were ‘encrypted’ by a map of ‘keys’, or corresponding letters or numbers, and ‘decrypted’ by the same corresponding set of keys. A simple example is the Caesar cipher, in which letters were simply shifted in alphabetical order to encode or decode messages. Note that discovery of the secret key in this example leads to the unraveling of all previously encrypted messages. Overall, even through World War II, the methodology of encrypting did advance, but not beyond masking messages with a series of shifts and configurations that were eventually broken either by hand or with the aid of computers.
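A minimal sketch of the Caesar cipher in Python, where the shift amount plays the role of the secret key:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping around the alphabet."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return "".join(result)

ciphertext = caesar("ATTACK AT DAWN", 3)   # encrypt with key 3
plaintext = caesar(ciphertext, -3)         # decrypt by shifting back
print(ciphertext)  # DWWDFN DW GDZQ
print(plaintext)   # ATTACK AT DAWN
```

Anyone who learns the key (here, 3) can decrypt every message ever sent with it, which is exactly the weakness noted above.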

With the advent of information theory, proposed in 1948 by Claude Shannon, who had himself worked in cryptography at Bell Labs, it was established that the best encryption methods should reveal no information about the plaintext being encrypted. Keep in mind that information theory was about quantifying information in order to share it. Information is defined as *entropy*, or the amount of uncertainty involved in a variable. For example, imagine you were recording the results of flipping a fair coin: a 50% chance the coin lands on heads, recorded as the number 1, and a 50% chance it lands on tails, recorded as the number 0. Your series of 1’s and 0’s could not be compressed into a shorter string — the chances of a 1 or a 0 are equal, so how could we shorten the results? We simply could not. However, if there were an 80% chance of heads and a 20% chance of tails, you would have far more 1’s in your string than 0’s, so we could compress the string to *represent* the real, larger sequence of 1’s and 0’s accordingly. This representation of some probability is *information* and is how compression schemes actually work.
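The coin-flip intuition can be checked directly with Shannon’s entropy formula; the function below is an illustrative sketch, not a compression scheme:

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

fair = entropy([0.5, 0.5])    # 1.0 bit per flip: incompressible
biased = entropy([0.8, 0.2])  # ~0.72 bits per flip: compressible
print(fair, biased)
```

The fair coin yields a full bit of uncertainty per flip, while the biased coin yields less, which is precisely the slack a compression scheme exploits.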

Shannon knew that in order to conceal information, a good encryption method should produce randomness such that the original message cannot be derived from it. For example, take COLOR and COLOUR: we know that these words are similar. However, if we encrypt them using the same encryption scheme, then in order to be perfectly secret the results have to be totally and completely different. This means that even a small change in the original message should translate to a completely different encrypted message, bearing no resemblance to the encryption of the original. (It is interesting to note that no encryption technique has yet been developed in which a change in one bit would affect *all* of the encrypted message. Cryptography is still chasing perfect secrecy.)
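To make this concrete, here is a small demonstration using SHA-256 (a hash function standing in for an encryption scheme, purely for illustration): the similar inputs COLOR and COLOUR produce outputs that differ in roughly half of their bits.

```python
import hashlib

def digest_bits(text: str) -> str:
    """SHA-256 digest of `text` rendered as a 256-character bit string."""
    digest = hashlib.sha256(text.encode()).digest()
    return "".join(f"{byte:08b}" for byte in digest)

a, b = digest_bits("COLOR"), digest_bits("COLOUR")
differing = sum(x != y for x, y in zip(a, b))
print(f"{differing} of 256 bits differ")  # roughly half, despite similar inputs
```

This “avalanche” behavior is what makes the outputs bear no resemblance to one another, even though not *every* bit flips.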

Next, with the arrival of computers, the modern era of cryptography emerged in the 1970’s, taking advantage of complexity theory to develop encryption methods in which users are able to easily encrypt, decrypt or verify messages, but the computational power required to “break” such methods without knowledge of a secret key proves to be *significantly* harder: quantum-computer difficult. Thus, unlike classical cryptography, in which encryption methods had to be kept secret, modern cryptographic methods and algorithms can be shared and battle-tested, as knowledge of them offers virtually no advantage in “breaking” them without the secret keys.

Two milestones launched the world into the modern cryptography era:

1. The Data Encryption Standard (DES)

2. Public key cryptography (e.g. RSA and Diffie-Hellman)

The Data Encryption Standard (DES) standardized the encryption of electronic data, and this drove a much broader study of cryptography. (Off topic: US government intervention in the development of DES fostered distrust of government involvement in encryption techniques, backdoors and the like. The debate over the benefits and drawbacks of enabling backdoor techniques continues to this day.) Back to the point: DES was replaced by the Advanced Encryption Standard (AES) in 2002.

As to public key cryptography, it works in the following way:

1. Alice generates a (1) private key and (2) a public key.

To define the term: a “key” is a piece of information that determines the output of an algorithm. For a very simplified example, imagine Alice has an algorithm *F(x, k)* in which she wants to “mask” a number *x* using key *k* to send to another party, Bob, as in

`F(x, k) = x * k * 7`

The *x* input would vary based on the data, or number, Alice wanted to share. Alice would then multiply *x* by her key *k* in order to “mask” it. Suppose Alice’s key was 10, and she wanted to send Bob the number 3. She would compute 3 * 10 * 7 = 210 to “encrypt” the number 3 and send Bob that number, 210. If Bob knew the key and the algorithm F, he could “decrypt” the message by dividing 210 by 10 and by 7 to recover the secret number, 3. However, in this example the encryption key and the decryption key are the same (i.e. the same key is used to encrypt and decrypt the number 3): this is symmetric cryptography.
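Alice and Bob’s toy scheme above might look like this in code (an illustrative sketch of symmetric masking, not real encryption):

```python
def encrypt(x: int, k: int) -> int:
    """Alice 'masks' x with the shared secret key k: F(x, k) = x * k * 7."""
    return x * k * 7

def decrypt(ciphertext: int, k: int) -> int:
    """Bob recovers x by undoing the multiplications with the same key k."""
    return ciphertext // (k * 7)

masked = encrypt(3, 10)
print(masked)              # 210
print(decrypt(masked, 10)) # 3 — the same key works both ways: symmetric
```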

In asymmetric cryptography, the public key (encryption) and the private key (decryption) are two different numbers in a far more complex algorithm than the above.

In generation, the public key is derived from the private key; however, it is “computationally infeasible” to find the private key from the public key. In proper terminology, this is referred to as a *trapdoor function* — a function that is easy to compute in one direction but extremely challenging to invert. So, it is easy to generate a public key from a private key, but it is very challenging to compute the private key from the public key. The larger this spread, the more secure the methodology is thought to be. Fundamentally, in computing, this leans on the fact that multiplying numbers is extremely fast while inverse operations such as factoring are slow.

Continuing on…

2. Alice sends her public key to Bob.

3. Bob uses Alice’s public key to encrypt a message to send to Alice.

4. Bob sends Alice an encrypted message.

5. Alice uses her private key to decrypt the message and read what Bob has sent her.
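The five steps above can be sketched with textbook RSA using deliberately tiny primes (an illustrative assumption: real keys use 2048-bit moduli, and unpadded textbook RSA is insecure):

```python
# Textbook RSA with toy primes (real deployments use 2048-bit moduli).
p, q = 61, 53
n = p * q                # 3233: the public modulus, a semiprime
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: e*d ≡ 1 (mod phi); Python 3.8+

public_key, private_key = (n, e), (n, d)

message = 65
ciphertext = pow(message, e, n)   # step 3: Bob encrypts with Alice's public key
recovered = pow(ciphertext, d, n) # step 5: Alice decrypts with her private key
print(ciphertext, recovered)      # recovered == 65
```

Breaking this toy system means factoring 3233 back into 61 and 53, which is trivial here but infeasible at real key sizes — the trapdoor described above.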

In RSA, to keep it simple, private and public keys are generated from two large prime numbers multiplied together to form a semiprime. Recall that factorization (division) is computationally much more difficult than multiplication. However, RSA is fading as a method of cryptographic integrity. According to the “global security” measure, which quantifies the security of a cryptographic system by translating the computational power required to break it into the energy required to boil a given amount of water, 288-bit RSA encryption can be broken with less energy than is required to boil 1 teaspoon of water. Most RSA cryptography uses a 2048-bit key these days.

However, a newer form of private/public key cryptography, Elliptic Curve Cryptography (ECC), would require enough energy to boil all the water on earth to break a 288-bit system. Consequently, this method has rapidly progressed in replacing RSA and forms the basis of the cryptographic systems used in blockchains and zero knowledge proofs. There are great comprehensive overviews of ECC in comparison to RSA for those who want to go deeper.

*Before moving on, it is worth noting just how important the usage of cryptography has been in history. From Caesar to the present day, the value to a nation or a people of being able to communicate securely is incalculable in both human lives and economics. Even as far back as the Babylonian captivity of Israel in the book of Jeremiah, Babylon is referred to as Sheshach (Jeremiah 25:26), or Babylon in the **Atbash cipher**, likely meant to protect the prophet against retribution. Even Thomas Jefferson participated in this field, creating the **Jefferson disk** used by the US army well into the 1900s. Later, Alan Turing’s work in breaking the German naval **Enigma** is credited with shortening the war by years. Without a doubt, cryptography has shaped history.*

# A Need for Zero Knowledge

In the private/public key examples of cryptography above, note that Alice should never risk exposing her private key, as any malicious party that obtained it would be able to decrypt each and every encrypted message they intercepted.

Consider another case: regular passwords are stored as a *hash* in most databases rather than as plain text. A hash is a function that takes an input and converts it into another unique string of data, thereby masking or hiding the original data. In a hash function, it is practically “impossible” to determine the original data from the unique data string created by the hash function. For instance, a system could use the keccak256 hashing algorithm to hash a password “3nY82$pwt4” to 0xc24ea779490258728751c1789aa30fa007261f5c052e22914599b46ae13ccc5a. Seeing that combination of letters and numbers, one would not be able to derive the original password “3nY82$pwt4,” even knowing the hashing algorithm and using significant computational power. Importantly, hashing functions are deterministic by definition, meaning that with the same input they will always give the same output. Therefore, if a website stores your password as 0xc24ea779490258728751c1789aa30fa007261f5c052e22914599b46ae13ccc5a, then when you enter “3nY82$pwt4”, that website can check that you are entering your correct password by hashing it and comparing it with the hash stored in their database.

In the above example, note that although the website would not *store* your plaintext password, you would still need to *share* the password with the website through a secure channel in order to prove that you knew your correct password.
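The store-and-compare flow described above can be sketched as follows. Since Python’s standard library does not ship keccak256, SHA-256 stands in here as an assumption; real systems would also add a per-user salt and a deliberately slow hash such as bcrypt or Argon2:

```python
import hashlib

def hash_password(password: str) -> str:
    # Stand-in for the article's keccak256 example; real systems also salt
    # the input and use a slow, memory-hard hash.
    return hashlib.sha256(password.encode()).hexdigest()

stored = hash_password("3nY82$pwt4")  # what the site keeps in its database

def login(attempt: str) -> bool:
    """Deterministic hashing lets the site compare digests, never plaintext."""
    return hash_password(attempt) == stored

print(login("3nY82$pwt4"))  # True
print(login("wrong-pass"))  # False
```

Note that even here, the plaintext attempt still travels to the server for hashing, which is exactly the gap the next paragraph raises.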

But what if you could prove to the website that you knew your correct password without sharing or revealing that password to them? Or better, prove that you were who you say you are?

Overall, this is how information is validated throughout most industries today: information needs to be provided in order to verify it, and computation needs to be re-performed in order to verify that it was performed correctly and with integrity. For example, if a bank wants to approve a wire transfer from your account to another, the bank has to verify that you have enough money in your account prior to the transfer by checking your account to be assured that you are not trying to spend money that you do not have. Likewise, if you want your identity verified, you have to provide your social security number or other government issued identification.

There are other instances in which knowledge of the details is not required to check an outcome. For example, was Supplier A’s bid higher than Supplier B’s? Supplier B should not be able to see Supplier A’s pricing, and likely both do not want to disclose their pricing to any third party other than the customer. However, zero knowledge proofs could prove to a regulatory or auditing authority that Supplier A’s pricing was lower than Supplier B’s.

This is what is offered by **zero knowledge proofs** — the ability for one party (the prover) to prove that they have a certain piece of information to another party (the verifier), without revealing what that information is.

# Zero Knowledge Proof Systems

ZKP systems were first proposed in 1989 by Shafi Goldwasser, Silvio Micali, and Charles Rackoff in *“The Knowledge Complexity of Interactive Proof Systems”*. (It is interesting to note that this paper was rejected for several years before it was finally published.) These first ZKPs were *interactive*, meaning that completing the mathematical proof required the interaction of both the prover and verifier parties. This meant that the verifier could not operate independently and that the prover needed to be present or available in order to complete the proof. Now, we have progressed to *non-interactive* proof systems in which a prover can publish a proof and leave it to the verifier to check.

Emphasized in the verification methods of zero knowledge proofs are *soundness* and *completeness*. Soundness refers to the principle that no prover can convince a verifier of a wrong statement — in reality, this rests on the fact that the probability a prover is able to produce a false proof is very, very, very low, which is no different from how the cryptographic schemes we have been trusting for decades work. Completeness refers to the principle that a prover *can* convince a verifier of a correct statement.

It is obvious that a chief feature of zero knowledge proofs is their ability to prove knowledge of information while maintaining privacy, but a far more interesting point in zero knowledge proof systems, and one that is often overlooked, is their increasing *succinctness*. ZKPs are able to prove information in a far more succinct way than the alternatives. The time to verify a proof is exponentially smaller than the time required to replay a computation to verify its correctness — and the latter is the most common way of verifying computations currently.

Replaying computations is expensive, requiring time and resources. (Note that this is different from verifying algorithm or program correctness — verifying computational integrity is another category altogether.)

More importantly, this means that blockchains, inefficient at performing computations themselves, should be used for the verification of computational proofs, not for general computation.

Fast forward to today’s environment, and we have several different implementations of zero knowledge proofs.

Here, we will briefly discuss:

- ZK-SNARKs
- ZK-STARKs
- Bulletproofs

# ZK-SNARKs

ZK-SNARKs are *zero knowledge succinct non-interactive arguments of knowledge*. This is the methodology employed by Zcash (whose development company is now known as Electric Coin Company) to anonymize cryptocurrency payments. (Note: The Zcash blockchain is, in fact, a fork of the Bitcoin blockchain.)

In the Zcash blockchain, miners do not need to know…

- Who is sending Zcash (and consequently, how much Zcash they own).
- Who is receiving Zcash.
- The amount of Zcash to be transferred.

Yet, miners are still able to validate transactions.

Using ZK-SNARKs, miners validate that no sender transfers or creates more Zcash than they currently own and that the receiver only receives the amount that was intended to be sent from the sender. In this way, Zcash operates as a truly anonymous system. With Bitcoin, and most other public blockchains including Ethereum, all transaction information is public — meaning the sending address, receiving address and amounts are known. Also, the values held in each individual account are known.

(It should be noted that in reality Zcash exists in two different address formats. T-addresses are public, and information sent and received from these addresses can be viewed just as on the Bitcoin blockchain. Z-addresses are private — if a z-address sends Zcash to another z-address, then the information is kept completely private. However, if a z-address sends Zcash to a t-address, then the information becomes public. It is estimated that only about 1% of the transactions on the Zcash blockchain are completely private. Until the Sapling upgrade at the end of 2018, private transactions on the Zcash blockchain could only be performed from a laptop due to size and memory requirements. Now, private transaction times are on the order of 2–3 seconds and can theoretically be performed from mobile devices, which is pretty amazing recalling the *“Great Privacy Debate”* that happened at the Scarbrough 2017 Thanksgiving less than two years ago between myself and my cybersecurity brother, when neither of us imagined transacting so quickly in zero knowledge. And I lost the debate, but to be fair, I was outnumbered — still, I have a good feeling about this year.)

## How do ZK-SNARKs actually work?

The math behind ZK-SNARKs is elaborate and intensive but can be (slightly) condensed with the right principles and theorems explained. What follows is a compressed version of Christian Reitwiessner’s *zkSNARKs in a Nutshell*.

First, the problem is encoded and condensed into a set of polynomial equalities as a Quadratic Arithmetic Program.

`t(x)h(x) = w(x)v(x)`

Using these equations, the goal of the prover is to convince the verifier that the equality holds.

These polynomials can be many terms long, and it is not efficient to check equality at a large set of points. In order to introduce succinctness, ZK-SNARKs lean on the Schwartz-Zippel lemma: different polynomials evaluate differently at most points, so it is practical to check only a small number of points to verify that the prover is using the correct polynomial. In this way, the evaluation occurs only at a subset of points to prove equality — these evaluation points are random and secret. The randomness and secret points are often referred to as the *toxic waste* of the *trusted setup* of ZK-SNARKs. The setup phase generates a *common reference string* (CRS) that provides a random point *s* at which to evaluate the polynomials and a secret number α to “shift” the values of the polynomial to maintain secrecy. *s* and α should be immediately destroyed after the setup phase so that malicious actors cannot obtain them and construct false proofs on their basis.
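A toy illustration of the Schwartz-Zippel idea, assuming made-up polynomials over a toy prime field: two distinct low-degree polynomials can agree at only a few points, so a single random evaluation point almost surely exposes a cheating prover.

```python
import random

P = 2**31 - 1  # a large prime serving as our toy field size

def eval_poly(coeffs, x):
    """Evaluate a polynomial (coefficients listed low to high) at x, mod P."""
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

honest = [5, 0, 3]  # represents 5 + 3x^2
cheat  = [5, 1, 3]  # a different polynomial: 5 + x + 3x^2

# The two polynomials differ by x, which vanishes only at x = 0,
# so any nonzero random point exposes the difference.
s = random.randrange(1, P)  # the verifier's random, secret evaluation point

print(eval_poly(honest, s) == eval_poly(honest, s))  # True: correct poly passes
print(eval_poly(honest, s) == eval_poly(cheat, s))   # False: cheating is caught
```

One evaluation replaces a term-by-term comparison, which is where the succinctness comes from.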

The verifier now checks that the polynomials hold equality at the random point *s*, as in

`t(s)h(s) = w(s)v(s)`

Next, to mask the randomness and the secret evaluation points, and to allow the prover to put together a proof, a form of homomorphic encryption is used. In homomorphic encryption, values are encrypted in such a way that mathematical operations may be performed on those values and later decrypted to reveal a value as though the original numbers were used in the evaluation. In other words, it allows you to hide numbers, perform an evaluation, and unhide them as if you did the operation on the original, unhidden numbers (not all mathematical operations in this case, but some).

The prover only knows *E(s)* but is able to compute *E(t(s))*, *E(h(s))*, *E(w(s))* and *E(v(s))*.
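One way to obtain such an E, shown here as an illustrative sketch with toy parameters (real systems use elliptic curve points), is modular exponentiation: E(x) = g^x mod p hides x yet lets anyone combine hidden values, since E(a) · E(b) = E(a + b).

```python
p = 1_000_003  # toy prime modulus (an assumption; real systems use curves)
g = 5          # fixed public base

def E(x: int) -> int:
    """Homomorphic hiding: reveals nothing practical about x, yet
    supports addition on the hidden values."""
    return pow(g, x, p)

a, b = 123, 456
# Adding hidden values: E(a) * E(b) = g^a * g^b = g^(a+b) = E(a + b)
lhs = (E(a) * E(b)) % p
rhs = E(a + b)
print(lhs == rhs)  # True: operations on hidden values match the plain result
```

This is the property that lets the prover evaluate the polynomials at *s* while only ever seeing hidden encodings of *s*.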

By multiplying by another secret value *k* to obfuscate the homomorphic encrypted values, the prover is able to hide their original information as well.

Essentially, the verifier is checking an equality of the form,

`t(s)h(s)k = w(s)v(s)k`

## How are ZK-SNARKs set up?

In the above, there was some hand-waving as to how the “polynomial equality” was generated and how the random setup is performed.

As to the hand-waving for polynomial equality, the shortest version I can give is that the original statements to be proven (e.g. Is A > B? or Does A + B = C?) are condensed into a circuit whose constraints are used to create these polynomials.

Also, how do you choose the random numbers?

In the first release of Zcash, the original founding members used an elaborate method to create this randomly generated *toxic waste* through what they called a “ceremony” — you can listen to the full story here. The “ceremony” is ultimately a multiparty computation (MPC) that produces a randomized result. In other words, each member of the ceremony (6 parties total) produced their own unique randomized key, and these were then combined to make a further randomized key. More recently, in Zcash’s latest release, Sapling, they employed a new methodology for the MPC in which over 80 participants helped to generate the randomized private key for the ZK-SNARKs. In this new method, *only one party has to remain honest* for the private keys to stay unreproducible — said a different way, all parties who participated in the ceremony would have to defect in order to subvert the system.

# ZK-STARKs

By comparison, STARKs, or *scalable transparent arguments of knowledge*, are heralded for their transparency and lean cryptography. Unlike SNARKs, STARKs do not require a trusted setup and therefore do not have any of the toxic waste issues seen in SNARKs — hence the transparency. STARKs eliminate the need for a trusted setup by making use of the Arthur-Merlin protocol. In this protocol, Arthur, the verifier, generates the randomness for each problem while Merlin, the prover, solves problems to provide the proof.

STARKs also employ leaner cryptography by making use of minimal cryptographic assumptions and leveraging secure, collision-resistant hash functions. This offers potential post-quantum security. Minimal cryptographic assumptions hold well for interactive STARKs, but non-interactive STARKs require the Fiat-Shamir heuristic. StarkWare is working on an incredible project with 0x to use ZK-STARKs in both decentralized and centralized token exchanges, and their writing on the subject is profoundly clear if you are interested in learning more. Both STARK prover and verifier times are faster than those of SNARKs and Bulletproofs, but the first STARK projects and developer tools are just beginning to emerge for this category.

# Bulletproofs

Bulletproofs are another form of zero knowledge proof system that does not require a trusted setup, though they do require a longer proving time than SNARKs and STARKs. These proofs are currently implemented in Monero — the implementation occurred at an incredibly fast pace, only 6 months or so after the academic paper was published. Bulletproofs build on existing methods of range proofs, allowing multiple range proofs to be collected into one at a smaller size than previous methods. Interestingly, bulletproofs allow for proof aggregation — meaning that you could collect and verify multiple proofs from different parties at the same time through the use of multiparty computations. In the recently published Zether protocol, bulletproofs are employed for smart contract privacy, and more recently, JP Morgan added these features to their private, permissioned blockchain, Quorum.

# Zero Knowledge Proof System Challenges

There are a few main challenges in wider adoption of zero knowledge proofs.

## Proof Set Up Time (Developer Tool/Workforce Readiness)

For each computation or scenario, a set of mathematical proofs must be generated for implementation in code. To date, several developer tools are emerging; however, this remains a challenging and specialized skill to obtain. In this way, the skills gap in zero knowledge (and to a large extent cryptography in general) is similar to that of quantum computing, in that more developers must be trained in how to put together an application before widespread adoption can occur.

## Proof Generation and Verification Time/Size Requirement

Zero knowledge requires that the proving party generate a proof for the verifying party to verify. Both of these activities take time — time that has been reduced significantly in recent years (and months, really), but it is still a consideration for scalability.

## Proof Standardization

There exist different methodologies for generating proofs, as explained here, but there are similar starting points to each. Standardization is needed to bring zero knowledge proofs from addressing specific problems on an ad hoc basis to addressing larger sets of related problems and scenarios. The ZKProof standards organization is working on this.

## ZK-SNARK Trusted Set Up Requirement

Another consideration for ZK-SNARKs is the requirement of a trusted setup, or what has become known as *toxic waste* in the cryptography world. Recall that in a trusted setup, a randomly generated “private key” is produced that is meant to be kept secret for the system to produce zero knowledge proofs as required — it is part of the foundation of the setup. However, if the private key/toxic waste were to be leaked, then whoever possessed it could produce faulty proofs — meaning they could produce proofs saying they know a piece of information when in fact they do not, and therein lies the vulnerability.

Remember, ZK-STARKs and Bulletproofs do not have a trusted setup. But just published this year, Sonic, a methodology for a universal and updateable reference string for the trusted setup of ZK-SNARKs, proposes a solution to simplify the trusted setup for larger systems of proofs.

## Perception

In private industry, we have barely begun to evaluate and understand where zero knowledge proof systems could best be used and which type of proof would be best suited for which scenario. To a large extent, much of industry is still figuring out cyber security to begin with, much less how to incorporate zero knowledge into that strategy.

In the public arena, zero knowledge proofs are assuredly impressing an audience with their rapid advancements in speed and scale, but in cryptocurrency, the larger audience has not placed much value on these privacy solutions to date — as evidenced by lower participation in privacy preserving blockchains. Although, to be fair, privacy solutions for smart contracts (more so than simple transactions) are still very much in the beginning stages, and the expectation is that this will accelerate the adoption of public blockchains that employ privacy preserving methods.

# Conclusion

The zero knowledge proof systems detailed above only touch a portion of what is happening in the cryptography space — there are other forms of zero knowledge proofs, such as ZK-SHARKs and Mimblewimble. There are interesting developments in other areas of cryptography too, such as fully homomorphic encryption and quantum cryptography.

The privacy and secrecy offered by zero knowledge proofs and cryptography is in somewhat of a superposition — its role in society depends on how you measure it. To an individual guarding personal data or a corporation guarding trade secrets, it is a right and imputes a responsibility on those that have it to do no harm, much like those who enjoy freedom of speech are implored not to use it to harm others. To a government (ideally), it is an accountability, as most rights of individual citizens are — in which citizens hope and trust that government will simultaneously achieve a way to allow us to keep our own privacy while protecting us from the misuse of privacy by malicious actors; however, in reality, we know this scenario cannot exist today. Still, the aim of cryptography is to exist in both realms so that a society can be truly free — free to keep information private without the fear of privacy being misused to inflict harm. Given the track record of cryptography over the centuries, it does not seem so crazy that one day, with the right mathematics and science yet to be understood, cryptography may just help the world achieve something like this.