
Quantum Computing

Updated: Jan 16, 2020

I've been reading a large number of scientific articles about Quantum Computing and its implications for solving extremely difficult problems. There are some problems so difficult, so incredibly vast, that even if every supercomputer in the world worked on the problem, it would still take longer than the lifetime of the universe to solve.

As a fundamentally different way of computing, quantum computing could potentially transform our businesses and societies. Quantum computers hold the promise of solving some of our planet's biggest challenges - in environment, agriculture, health, energy, climate, materials science, and problems we’ve not yet even imagined. The impact of quantum computers will be far-reaching and as great as that of the transistor, created in 1947, which paved the way for today’s omnipresent digital economy.

We experience the benefits of classical computing every day. However, there are challenges that today’s systems will never be able to solve. For problems above a certain size and complexity, we don’t have enough computational power on Earth to tackle them.

To stand a chance at solving some of these problems, we need a new kind of computing. Universal quantum computers leverage the quantum mechanical phenomena of superposition and entanglement to create states that scale exponentially with number of qubits, or quantum bits.

Quantum physics is counterintuitive because everyday phenomena are governed by classical physics, not quantum mechanics, which takes over at the atomic level.

Before we learn how a quantum computer works, we need to understand how modern digital computers work: the binary number system of 1s and 0s, transistors and semiconductors, the history of the computers we now rely on daily, and how quantum mechanics has led us to the quantum computer.

Full report I've written on Quantum Computing:

Quantum Computing



Binary Number System in Computers

From simple mechanics to sophisticated quantum modeling, our world has evolved greatly over time. The one thing that hasn’t changed is our will to count. For most of recorded history, the primary system humans have used for calculation is the Decimal Number System.

It’s called decimal because it has ten symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9 (the prefix comes from the Latin word for ten).

That’s it. We combine these ten symbols in the decimal system to create very large numbers and to communicate information like money or quantities of objects.

But computers and other technological advances fueled the need for a number system better suited to machines. This is what prompted the birth of the Binary Number System. Here, we look at this number system’s history, applications and advantages!

The decimal number system served its purpose for a long while, but the development of machines, and the decimal system’s awkwardness for implementing complex functions in hardware, forced mathematicians to develop a number system that could cater to those needs. A manifestation of Boolean logic, the binary number system exists in only two states: true or false, represented by 1 and 0. Different combinations of these two states define all other states.

First introduced in the mid-1800s by George Boole, an English mathematician, logician and educator, Boolean logic was a notable breakthrough in the world of electronics and computers. Ever since then, the binary number system has been used for a wide range of applications, including image processing, recording of high-end audio and HD movies, storing millions of data entries, and countless digital signal processing applications. A tool that can help with these applications is a binary converter. Before we discuss the applications and advantages of the binary number system further, let’s take a brief look at the logic behind it.

A Quick Look at Boolean Logic

Now that we have a good understanding of the binary system, it is time to look at the logic driving it and how the system’s symbols interact with each other. Comparison between two values is the primary principle behind Boolean logic. There are three primary logics: AND, OR and NOT. Here is what each represents.

AND: if both compared values are true (1), the outcome is true (1)

OR: if either of the compared values is true (1), the outcome is true (1)

NOT: simply inverts a given value: a true value becomes false, and a false value becomes true
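The three primary logics above can be sketched as plain Python functions (a toy illustration of the truth tables, not how gates are physically built):

```python
# The three primary Boolean logics as plain Python functions.
# Values are 1 (true) and 0 (false), matching the binary number system.

def AND(a, b):
    # True only when both inputs are true
    return 1 if (a == 1 and b == 1) else 0

def OR(a, b):
    # True when at least one input is true
    return 1 if (a == 1 or b == 1) else 0

def NOT(a):
    # Inverts the given value
    return 1 - a

# Print the truth tables for the two-input logics
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} = {AND(a, b)},  {a} OR {b} = {OR(a, b)}")
print(f"NOT 0 = {NOT(0)}, NOT 1 = {NOT(1)}")
```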

Of the three logics above, AND and OR require a minimum of two variables; only NOT can function with a single variable. There are other logics too, but these are only combinations of the three primary ones. With Boolean logic covered, it’s time to move on to the applications of the binary number system.


Computer technology is where the most common application of this number system can be seen. After all, all computer languages and programming are based on the two-digit number system used in digital encoding. Digital encoding is the process of taking data and representing it with discrete bits of information: the binary system’s 0s and 1s. An example of this is the images on your computer screen. To encode these images, a binary line is used for each pixel.

If a screen uses a sixteen-bit code, each pixel is given instructions about what colour to show based on which bits are 0s and which are 1s. The outcome is 2^16, or more than sixty-five thousand, representable colours. In addition, a branch of mathematics known as Boolean algebra, which is concerned with logic and truth values, is another place the binary number system is applied: statements are assigned a 1 or 0 based on whether they are true or false.
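The jump from "sixteen bits" to "more than sixty-five thousand colours" is just powers of two, which is easy to check directly (a quick Python sanity check, not tied to any particular display hardware):

```python
# Each extra bit doubles the number of distinct patterns a pixel can take.
for n in (1, 2, 8, 16):
    print(f"{n} bit(s) -> {2 ** n} possible values")

# A 16-bit pixel can therefore name one of 65,536 colours.
print(2 ** 16)  # 65536
```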

The Advantage of the Binary Number System

The binary number system is useful for a number of things. For example, to add numbers, a computer flips switches; adding binary numbers simulates computer adding. There are two main reasons to use this number system for computers. First, it provides a safety range for reliability. Second, and most importantly, it minimises the required circuitry, which lowers the space required, the energy consumed and the cost.

Bits and binary

Computers use binary - the digits 0 and 1 - to store data. A binary digit, or bit, is the smallest unit of data in computing. It is represented by a 0 or a 1. Binary numbers are made up of binary digits (bits), eg the binary number 1001.

The circuits in a computer's processor are made up of billions of transistors. A transistor is a tiny switch that is activated by the electronic signals it receives. The digits 1 and 0 used in binary reflect the on and off states of a transistor.

Computer programs are sets of instructions. Each instruction is translated into machine code - simple binary codes that activate the CPU. Programmers write computer code and this is converted by a translator into binary instructions that the processor can execute.

All software, music, documents, and any other information that is processed by a computer, is also stored using binary.


Everything on a computer is represented as streams of binary numbers. Audio, images and characters all look like binary numbers in machine code. These numbers are encoded in different data formats to give them meaning, eg the 8-bit pattern 01000001 could be the number 65, the character 'A', or a colour in an image.
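The 65-versus-'A' example can be reproduced in a couple of lines; the same 8-bit pattern changes meaning depending on how we ask Python to interpret it:

```python
bits = "01000001"        # the 8-bit pattern from the text

number = int(bits, 2)    # interpreted as an unsigned binary number
print(number)            # 65

print(chr(number))       # 'A' when interpreted through the ASCII character set
```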

Encoding formats have been standardised to help compatibility across different platforms. For example:

· audio is encoded as audio file formats, eg mp3, WAV, AAC

· video is encoded as video file formats, eg MPEG4, H264

· text is encoded in character sets, eg ASCII, Unicode

· images are encoded as file formats, eg BMP, JPEG, PNG

The more bits used in a pattern, the more combinations of values become available. This larger number of combinations can be used to represent many more things, eg a greater number of different symbols, or more colours in a picture. In the early days of computing, the only way to enter data into a computer was by flicking switches or by feeding in punched cards or punched paper tape.

Since computers work using binary, with data represented as 1s and 0s, both switches and punched holes were easily able to reflect these two states - 'on' to represent 1 and 'off' to represent 0; a hole to represent 1 and no hole to represent 0.

Charles Babbage's Analytical Engine (designed in 1837) and the Colossus (used during the Second World War) were operated using punched cards and tapes. Modern computers still read data in binary form, but it is much faster and more convenient to read it from microchips or from magnetic or optical disks.

Bits and bytes

Bits can be grouped together to make them easier to work with. A group of 8 bits is called a byte.

Other groupings include:

· Nibble - 4 bits (half a byte)

· Byte - 8 bits

· Kilobyte (KB) - 1000 bytes

· Megabyte (MB) - 1000 kilobytes

· Gigabyte (GB) - 1000 megabytes

· Terabyte (TB) - 1000 gigabytes

Most computers can process millions of bits every second. A hard drive's storage capacity is measured in gigabytes or terabytes. RAM is often measured in megabytes or gigabytes.
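The groupings above use decimal (power-of-1000) multipliers, as in this sketch; note that operating systems sometimes report sizes using 1024-based units (KiB, MiB) instead:

```python
# Storage units with the decimal multipliers used in the list above.
BYTE = 1
KB = 1000 * BYTE      # kilobyte
MB = 1000 * KB        # megabyte
GB = 1000 * MB        # gigabyte
TB = 1000 * GB        # terabyte

print(f"1 GB = {GB:,} bytes")
print(f"1 TB = {TB:,} bytes")
print(f"A nibble is {8 // 2} bits (half a byte)")
```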

Amount of storage space required

Different types of data require different amounts of storage space. Some examples of this follow:



· One extended-ASCII character in a text file (eg 'A') - 1 byte

· The word 'Monday' in a document - 6 bytes

· A plain-text email - 2 KB

· 64 pixel x 64 pixel GIF - 12 KB

· Hi-res 2000 x 2000 pixel RAW photo - 11.4 MB

· Three minute MP3 audio file - 3 MB

· One minute uncompressed WAV audio file - 15 MB

· One hour film compressed as MPEG4 - 4 GB

What is a Microprocessor?

A microprocessor is an integrated circuit (IC) which incorporates the core functions of a computer’s central processing unit (CPU). It is a programmable, multipurpose, clock-driven, register-based silicon chip that accepts binary data as input and produces output after processing it according to the instructions stored in memory.

How does a Microprocessor work?

A processor is the brain of a computer. It basically consists of an Arithmetic Logic Unit (ALU), a control unit and a register array. As the name indicates, the ALU performs all arithmetic and logical operations on the data received from input devices or memory. The register array consists of a series of registers, such as the accumulator (A), B, C, D etc., which act as temporary fast-access memory locations for data being processed. The control unit controls the flow of instructions and data throughout the system.

So, basically, a microprocessor takes input from input devices, processes it as per the instructions in memory, and produces output.


A bus is a set of conductors intended to transmit data, address or control information to different elements in a microprocessor. Usually a microprocessor will have three types of buses: a data bus, a control bus and an address bus. An 8-bit processor uses an 8-bit-wide data bus.

Instruction Set

The instruction set is the group of commands that a microprocessor can understand, so it forms the interface between hardware and software (the program). An instruction commands the processor to switch the relevant transistors to do some processing on data. For example, ADD A, B adds the two numbers stored in registers A and B.

Word Length

Word length is the number of bits in the internal data bus of a processor, or the number of bits a processor can process at a time. For example, an 8-bit processor has an 8-bit data bus and 8-bit registers, and does 8-bit processing at a time. For wider (16-bit, 32-bit) operations, it splits them into a series of 8-bit operations.
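To make that "series of 8-bit operations" concrete, here is a sketch (in Python, purely for illustration) of how a 16-bit addition can be carried out using nothing wider than 8-bit adds, by propagating the carry from the low byte into the high byte:

```python
def add16_with_8bit_alu(x, y):
    """Add two 16-bit numbers using only 8-bit additions (illustrative sketch)."""
    lo = (x & 0xFF) + (y & 0xFF)                          # 8-bit add of the low bytes
    carry = lo >> 8                                       # carry out of the low byte
    hi = ((x >> 8) & 0xFF) + ((y >> 8) & 0xFF) + carry    # 8-bit add of the high bytes
    return ((hi & 0xFF) << 8) | (lo & 0xFF)               # result, truncated to 16 bits

print(hex(add16_with_8bit_alu(0x12FF, 0x0001)))  # 0x1300
print(add16_with_8bit_alu(500, 700))             # 1200
```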

Cache Memory

Cache memory is random access memory integrated into the processor, so the processor can access data in cache more quickly than in regular RAM. It is also known as CPU memory. Cache memory stores data and instructions that are frequently referenced by software during operation, which increases the overall speed of operation.

Clock Speed

Microprocessors use a clock signal to control the rate at which instructions are executed, to synchronise internal components and to control the data transfer between them. Clock speed refers to the speed at which a microprocessor executes instructions. It is measured in hertz and usually expressed in megahertz (MHz) or gigahertz (GHz).

If it’s Static Random Access Memory (SRAM), there are 6 transistors per unit cell ("bit"); with 8 bits in a byte, a 1-GigaByte (GB) memory has 6 × 8 billion, or 48 billion, transistors.

Each memory location contains 8, 16, 32 or 64 bits, so 0101 would be stored in an 8-bit machine as 00000101.

Here is the binary output for a file that says "Hello World" (one ASCII code per character):

H : 1001000
e : 1100101
l : 1101100
l : 1101100
o : 1101111
  : 100000
W : 1010111
o : 1101111
r : 1110010
l : 1101100
d : 1100100

Hello: 10010001100101110110011011001101111
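The listing above can be reproduced in Python; format(ord(ch), "b") gives each character's ASCII code in binary with leading zeros stripped, which is why the space shows only six digits:

```python
# Print each character of "Hello World" as its ASCII code in binary.
text = "Hello World"
for ch in text:
    print(ch, ":", format(ord(ch), "b"))

# Concatenating the codes for "Hello" reproduces the long bit string above.
hello_bits = "".join(format(ord(ch), "b") for ch in "Hello")
print("Hello:", hello_bits)  # 10010001100101110110011011001101111
```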

We write programs all the time using high-level programming languages like Java, C etc. With these programs we can instruct the computer to do something useful. But have you ever wondered how exactly the computer is able to execute these instructions? The idea of this post is not to teach you how to program in a low-level language, or in binary, but to show you what happens under the hood.

As you might already know, computers work with bits. A bit is something that just represents two states: on or off, 1 or 0, true or false, plus or minus etc. We can use any two things to represent these two states, but in general we use 1 and 0 to say the bit is on and off respectively. We know that computers can store words, pictures, sounds and so on, but in reality all these are nothing but bits that we humans have grouped and given some meaning (a code). For example, let’s take a byte, which is in fact 8 bits. How many patterns can we make with 8 bits? With 1 bit there are only two possibilities, 1 or 0. With 2 bits, 4 patterns; likewise, with 8 bits we have 256 possible ways of arranging bits. So what did people do? They got together, held some meetings, agreed on some codes and made them standards. Take ASCII for example: the bit pattern ‘01000001’ represents the letter ‘A’. We just gave some meaning to a bit pattern. But keep in mind that how these patterns are interpreted depends on the context in which they are being used.

In a high-level sense, what basically happens when you run a program is that your program gets loaded into RAM first and then the processor starts to execute the instructions in that program. An instruction is simply a sequence of one or more bytes, and different processors follow different Instruction Set Architectures (ISAs) in their instruction encoding. (On Linux, you can use the following command to find out which ISA your machine uses. In my case it is x86_64.)

uname -m

Let’s take a simple program written in C to print a “Hello World!” and see what exactly happens under the hood when we run it.

Sample Program

First we write our program in a particular high-level language (I have written it in C) with ASCII characters and save it in a file. Next we use a special program called a compiler to translate this text file into assembly language statements. Assembly language is the symbolic version of the machine instructions the hardware can understand; the binary version is called machine language.


Transistors & Modern Classical Computers

Although its name is not known to many, the transistor has revolutionized modern electronics and computers.

A transistor is an electrical device which can amplify an electrical current. Transistors can also function as switches, turning different electrical currents on and off. These tiny switches form the bedrock of modern computing: billions of them route electrical signals around inside a smartphone, for instance.

Walter Brattain, John Bardeen and William Shockley, the inventors of the transistor, earned the Nobel Prize in Physics in 1956. The three discovered the so-called “transistor effect” while experimenting with passing current through layered semiconductor materials.

It is probable that the three physicists were not yet sure of the potential of the discovery, which at the time was mainly used in FM radios, but in the following years it radically opened the doors to microelectronics. The spread of this innovation has been so disruptive that, from 1950 to today, we have manufactured about a sextillion (a one followed by twenty-one zeros) transistors.

From the radio to the smartphone: in 1965, less than two decades after the invention of the transistor, Gordon Moore, the American engineer and entrepreneur, hypothesized that the number of transistors on a chip would double roughly every two years (often quoted as every eighteen months). This observation passed into history as Moore’s law and proved to be correct: from the famous radio to the first integrated circuits, passing through logic gates (which implement the operations of the binary system), the evolution was very fast.

Anatomy of a Transistor

Before we go into the physical limitations of transistors, it helps to know what a transistor is made of and what it actually does. Basically, a transistor is a switch made out of a special kind of matter. One way you can classify matter is by looking at how well it can conduct electricity. That divides matter into three categories: conductors, insulators and semiconductors. A conductor is any type of material made of atoms with free spaces for electrons. An electric current can pass through conductive material -- metals tend to be good conductors. An insulator is matter composed of atoms that don't have any electron spaces available. As a result, electricity can't flow through these materials. Ceramic or glass are good examples of insulators.

Semiconductors are a bit different. They are composed of matter with atoms that have some space for electrons, but not enough to conduct electricity the way metals do. Silicon is such a material. Under some circumstances, silicon can act as a conductor. Under others, it acts as an insulator. By tweaking these circumstances, it's possible to control the flow of electrons. This simple concept is the foundation for the most advanced electronic devices in the world.

Engineers discovered that by doping -- introducing certain kinds of material -- into silicon, they could control its conductivity. They'd start with a base called a substrate and dope it with either negatively-charged or positively-charged material. Negatively-charged material has an excess of electrons while positively charged material has an excess of holes -- places where electrons could fit. In our example, we'll consider an n-type transistor, which has a positively-charged substrate.

­On this foundation are three terminals: a source, a drain and a gate. The gate sits between the source and the drain. It acts as a door through which voltage can pass into the silicon, but not back out. The gate has a thin layer of insulator called an oxide layer that prevents electrons from passing back through the terminal. In our example, the insulator is between the gate and the positively-charged substrate.

The source and drain in our example are negatively-charged terminals. When you apply a positive voltage to the gate, it attracts the few free electrons in the positively-charged substrate to the gate's oxide layer. This creates an electron channel between the source and drain terminals. If you then apply a positive voltage to the drain, electrons will flow from the source through the electron channel to the drain. If you remove the voltage from the gate, the electrons in the substrate are no longer attracted to the gate and the channel is broken. That means when you've got a charge to the gate, the transistor is switched to "on." When the voltage is gone, the transistor is "off."

Electronics interpret this switching as information in the form of bits and bytes. That's how your computer and other electronic devices process data. But because electronics depend on the movement of electrons to process information, they're subject to some special laws of physics. We'll take a closer look into them in the next section.

Principles of CPU architecture – logic gates, MOSFETS and voltage

The use of transistors for the construction of logic gates depends upon their utility as fast switches. A transistor is an electric current switch with three terminals.

Logic gates are the basic building blocks of any digital system. A logic gate is an electronic circuit with one or more inputs and only one output, where the relationship between the inputs and the output is based on a certain logic. Based on this, logic gates are named AND gate, OR gate, NOT gate and so on.

In a simple implementation, AND and OR gates each take 3 transistors, while NAND and NOR gates take only 2.

The underlying principles of all computer processors are the same. Fundamentally, they all take signals in the form of 0s and 1s (thus binary signals), manipulate them according to a set of instructions, and produce output in the form of 0s and 1s. The voltage on the line at the time a signal is sent determines whether the signal is a 0 or a 1. On a 3.3-volt system, an application of 3.3 volts means that it’s a 1, while an application of 0 volts means it’s a 0.

Processors work by reacting to an input of 0s and 1s in specific ways and then returning an output based on the decision. The decision itself happens in a circuit called a logic gate, each of which requires at least one transistor, with the inputs and outputs arranged differently by different operations. The fact that today’s processors contain millions of transistors offers a clue as to how complex the logic system is. The processor’s logic gates work together to make decisions using Boolean logic, which is based on the algebraic system established by mathematician George Boole.

The main Boolean operators are AND, OR, NOT, and NAND (not AND); many combinations of these are possible as well. An AND gate outputs a 1 only if both its inputs were 1s. An OR gate outputs a 1 if at least one of the inputs was a 1. And a NOT gate takes a single input and reverses it, outputting 1 if the input was 0 and vice versa. NAND gates are very popular, because they use only two transistors instead of the three in an AND gate yet provide just as much functionality. In addition, the processor uses gates in combination to perform arithmetic functions; it can also use them to trigger the storage of data in memory.

Logic gates operate via hardware known as a switch – in particular, a digital switch. In the days of room-size computers, the switches were actually physical switches, but today nothing moves except the current itself. The most common type of switch in today’s computers is a transistor known as a MOSFET (metal-oxide semiconductor field-effect transistor). This kind of transistor performs a simple but crucial function: When voltage is applied to it, it reacts by turning the circuit either on or off. In a CPU, the voltage at which the MOSFETs react determines the voltage requirements of the processor. So, in a 2V processor, logical circuits are built with MOSFETS that react at 2V, hence an incoming current at or near the high end of the voltage range, 2V, switches the circuit on, while an incoming current at or near 0V switches the circuit off.

Millions of MOSFETs act together, according to the instructions from a program, to control the flow of electricity through the logic gates to produce the required result. Again, each logic gate contains one or more transistors, and each transistor must control the current so that the circuit itself will switch from off to on, switch from on to off, or stay in its current state.

A quick look at the simple AND and OR logic-gate circuits shows how the circuitry works. Each of these gates acts on two incoming signals to produce one outgoing signal. Logical AND means that both inputs must be 1 in order for the output to be 1; logical OR means that either input can be 1 to get a result of 1. In the AND gate, both incoming signals must be high-voltage (or a logical 1) for the gate to pass current through itself.
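The remark about NAND's popularity goes further than transistor count: NAND is functionally complete, meaning the other gates can all be wired out of NAND gates alone. A small Python simulation (gates as functions, not physical circuits) checks the identities:

```python
def NAND(a, b):
    # Outputs 0 only when both inputs are 1
    return 0 if (a == 1 and b == 1) else 1

# Building the other gates from NAND alone:
def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

# Verify against the expected truth tables
for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (1 if (a and b) else 0)
        assert OR(a, b) == (1 if (a or b) else 0)
print("NAND reproduces NOT, AND and OR")
```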

The flow of electricity through each gate is controlled by that gate’s transistor. However, these transistors aren’t individual and discrete units. Instead, large numbers of them are manufactured from a single piece of silicon (or other semiconductor material) and linked together without wires or other external materials. These units are called integrated circuits (ICs), and their development basically made the complexity of the microprocessor possible. The integration of circuits didn’t stop with the first ICs. Just as the first ICs connected multiple transistors, multiple ICs became similarly linked, in a process known as large-scale integration (LSI); eventually such sets of ICs were connected, in a process called very large-scale integration (VLSI).

Modern-day microprocessors contain tens of millions of microscopic transistors. Used in combination with resistors, capacitors and diodes, these make up logic gates; logic gates make up integrated circuits, and ICs make up electronic systems. Intel’s first claim to fame lay in its high-level integration of all the processor’s logic gates into a single complex processor chip – the Intel 4004 – released in late 1971. This was a 4-bit microprocessor, intended for use in a calculator. It processed data in 4 bits, but its instructions were 8 bits long. Program and data memory were separate: 4KB of program memory and 640 bytes of data memory. There were also sixteen 4-bit (or eight 8-bit) general purpose registers. The 4004 had 46 instructions, used only 2,300 transistors in a 16-pin DIP, and ran at a clock rate of 740kHz (eight clock cycles per CPU cycle of 10.8 microseconds).

Intel no longer lists the transistor count for their chips, but the values can be estimated using the known transistor density metric (MTr/mm2) for 6th gen Skylake chips.

A dual-core mobile variant of the Intel Core i3/i5/i7 has around 1.75 Billion transistors for a die size of 101.83 mm² according to WikiChip. This works out at a density of 17.185 million transistors per square millimetre. Assuming the same transistor density, a quad-core i5 or i7 with its die size of 122.3 mm² would have roughly 2.1 Billion transistors.

For the 7th generation (Kaby Lake), the top end quad-core i7 (eg: i7-7700K) had its die size bumped to 126 mm² for an estimated transistor count of 2.16 Billion transistors. The die size for the quad-core parts remained the same for 8th gen chips (Coffee Lake) but Intel introduced a 6-core count i7 variant (eg: i7-8700K) which has a die size of 149.6 mm² so this gives us around 2.57 Billion transistors.

The 9th gen dubbed Coffee Lake Refresh brought an 8-core i7 variant (eg: i7-9900K) with a die size of 174 mm² which yields just under 3 Billion transistors. On the HEDT side, an 18-core Skylake-X chip clocks in at 485 mm² for around 8.33 Billion transistors.
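The estimates above all follow a single rule of thumb: transistors ≈ density × die area. A short script (numbers taken from the figures quoted above) reproduces them:

```python
# Estimate transistor counts from die size, assuming the density implied by
# the dual-core figure quoted above (1.75 billion transistors on 101.83 mm^2).
density = 1.75e9 / 101.83          # ~17.19 million transistors per mm^2

dies_mm2 = {
    "quad-core Kaby Lake i7": 126.0,
    "6-core i7-8700K": 149.6,
    "8-core i7-9900K": 174.0,
    "18-core Skylake-X": 485.0,
}

for name, area in dies_mm2.items():
    print(f"{name}: ~{density * area / 1e9:.2f} billion transistors")
```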

A single bit (1 or 0) can be stored in Static random access memory (SRAM), which uses 6 transistors per bit. With 8 bits in a byte, 1GB of SRAM contains 6 × 8 × 10^9, or 48 billion, transistors.

Dynamic random access memory (DRAM) uses only one transistor and a capacitor per bit. The capacitor can be charged or discharged which represents 1 or 0 respectively. So, in 1 GB there are slightly more than 8 billion transistors.

64-bit processors

A 64-bit register can hold any of 2^64 (over 18 quintillion, or 1.8×10^19) different values. The range of integer values that can be stored in 64 bits depends on the integer representation used. With the two most common representations, the range is 0 through 18,446,744,073,709,551,615 (2^64 − 1) for representation as an (unsigned) binary number, and −9,223,372,036,854,775,808 (−2^63) through 9,223,372,036,854,775,807 (2^63 − 1) for representation as two's complement. Hence, a processor with 64-bit memory addresses can directly access 2^64 bytes (= 16 exabytes) of byte-addressable memory.
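These ranges are quick to verify with Python's arbitrary-precision integers:

```python
# The three ranges quoted above, computed directly.
unsigned_max = 2 ** 64 - 1
signed_min = -(2 ** 63)
signed_max = 2 ** 63 - 1

print(unsigned_max)        # 18446744073709551615
print(signed_min)          # -9223372036854775808
print(signed_max)          # 9223372036854775807

# 2**64 bytes is 16 "binary" exabytes (exbibytes, 2**60 bytes each):
print(2 ** 64 // 2 ** 60)  # 16
```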


In June 2018, the US Department of Energy and IBM unveiled Summit, America's latest supercomputer, which is expected to bring the title of the world's most powerful computer back to America from China, which currently holds the mantle with its Sunway TaihuLight supercomputer.

With a peak performance of 200 petaflops, or 200,000 trillion calculations per second, Summit more than doubles the top speed of TaihuLight, which can reach 93 petaflops. Summit is also capable of over 3 billion billion mixed-precision calculations per second, or 3.3 exaops, and has more than 10 petabytes of memory, which has allowed researchers to run the world's first exascale scientific calculation.

On 7 May 2019, The U.S. Department of Energy announced a contract with Cray Inc. to build the "Frontier" supercomputer at Oak Ridge National Laboratory. Frontier is anticipated to be operational in 2021 and, with a performance of greater than 1.5 exaflops, should then be the world's most powerful computer.

How to measure computer performance in FLOPS?

The performance capabilities of supercomputers (for example, Indiana University's research computing systems) are expressed using a standard rate for indicating the number of floating-point arithmetic calculations systems can perform on a per-second basis. The rate, floating-point operations per second, is abbreviated as FLOPS.

Measure storage capacity in bytes

Computer storage and memory capacities are expressed in units called bits and bytes. A bit is the smallest unit of measurement for digital information in computing. A byte is the number of bits a particular computing architecture needs to store a single text character. Consequently, the number of bits in a byte can differ between computing platforms. However, due to the overwhelming popularity of certain major computing platforms, the 8-bit byte has become the international standard, as defined by the International Electrotechnical Commission (IEC ).

An uppercase "B" is used for abbreviating "byte(s)"; a lowercase "b" is used for abbreviating "bit(s)". This difference can cause confusion. For example, file sizes are commonly represented in bytes, but download speeds for electronic data are commonly represented in bits per second. With a download speed of 10 megabits per second (Mbps), you might mistakenly assume a 100 MB file will download in only 10 seconds. However, 10 Mbps is equivalent to only 1.25 MB per second, meaning a 100 MB file would take at least 80 seconds to download.
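
The download-time arithmetic is worth spelling out, since the bits-versus-bytes confusion is so common. A quick sketch:

```python
# Why a 100 MB file doesn't arrive in 10 s on a "10 meg" link:
# network speeds are megaBITS per second, file sizes are megaBYTES.
file_size_MB = 100
link_Mbps = 10

MB_per_second = link_Mbps / 8          # 10 Mbps = 1.25 MB/s
seconds = file_size_MB / MB_per_second

print(f"{file_size_MB} MB at {link_Mbps} Mbps takes {seconds:.0f} seconds")  # 80
```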


Order of magnitude (as a factor of 10) | Computer performance | Storage capacity
10⁹  | gigaFLOPS (GFLOPS)  | gigabyte (GB)
10¹² | teraFLOPS (TFLOPS)  | terabyte (TB)
10¹⁵ | petaFLOPS (PFLOPS)  | petabyte (PB)
10¹⁸ | exaFLOPS (EFLOPS)   | exabyte (EB)
10²¹ | zettaFLOPS (ZFLOPS) | zettabyte (ZB)
10²⁴ | yottaFLOPS (YFLOPS) | yottabyte (YB)

Understand orders of magnitude in computer performance


A 1 gigaFLOPS (GFLOPS) computer system is capable of performing one billion (10⁹) floating-point operations per second. To match what a 1 GFLOPS computer system can do in just one second, you'd have to perform one calculation every second for 31.69 years.

A 1 teraFLOPS (TFLOPS) computer system is capable of performing one trillion (10¹²) floating-point operations per second. The rate 1 TFLOPS is equivalent to 1,000 GFLOPS. To match what a 1 TFLOPS computer system can do in just one second, you'd have to perform one calculation every second for 31,688.77 years.

A 1 petaFLOPS (PFLOPS) computer system is capable of performing one quadrillion (10¹⁵) floating-point operations per second. The rate 1 PFLOPS is equivalent to 1,000 TFLOPS. To match what a 1 PFLOPS computer system can do in just one second, you'd have to perform one calculation every second for 31,688,765 years.

A 1 exaFLOPS (EFLOPS) computer system is capable of performing one quintillion (10¹⁸) floating-point operations per second. The rate 1 EFLOPS is equivalent to 1,000 PFLOPS. To match what a 1 EFLOPS computer system can do in just one second, you'd have to perform one calculation every second for 31,688,765,000 years.
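
All of these "years of hand calculation" figures come from the same division: the FLOPS rate over the number of seconds in a year. A short sketch (using a 365.25-day year, so the results differ very slightly from the figures above):

```python
# Each "years" figure is just FLOPS divided by seconds per year,
# assuming one hand calculation per second (365.25-day year).
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25

for name, rate in [("GFLOPS", 1e9), ("TFLOPS", 1e12),
                   ("PFLOPS", 1e15), ("EFLOPS", 1e18)]:
    years = rate / SECONDS_PER_YEAR
    print(f"1 {name}-second of work = {years:,.2f} years by hand")
```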


Quantum Mechanics & Physics

Quantum computers use the behavior of subatomic particles, governed by quantum mechanics, to compute faster than anything seen before.

Let’s talk about that behavior.

Classical physics, the description of physics existing before the formulation of quantum mechanics, describes nature at ordinary (macroscopic) scale, and in general, things we can see with our own eyes.

An example of classical physics at macroscopic scale is Sir Isaac Newton's three laws of motion:

1. Every object (a planet, a person, a car, a tennis ball, sand) in a state of uniform motion will remain in that state of motion unless an external force acts on it. This is the law of inertia.

2. Force equals mass times acceleration (F = ma).

3. For every action there is an equal and opposite reaction.

Gravity, motion, momentum. Scientists saw these things with their own eyes. This is a very condensed example of classical physics and what it dealt with. Then came quantum mechanics in 1900.

Quantum mechanics, (also known as quantum physics) is a fundamental theory in physics which describes nature at the smallest – including atomic and subatomic – scales.

In the physical sciences, a particle is a small localized object to which can be ascribed several physical or chemical properties, such as volume, density or mass.[1][2] Particles vary greatly in size, from subatomic particles such as the electron up to macroscopic particles such as grains of sand.

Quantum mechanics differs from classical physics in that:

1. energy, momentum, angular momentum, and other quantities of a bound system are restricted to discrete values (quantization),

2. objects have characteristics of both particles and waves (wave-particle duality), and

3. there are limits to the precision with which quantities can be measured (the uncertainty principle)

The word quantum derives from the Latin, meaning "how great" or "how much". In quantum mechanics, it refers to a discrete unit assigned to certain physical quantities such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and subatomic systems which is today called quantum mechanics.

Quantum mechanics is essential to understanding the behavior of systems at atomic length scales and smaller. If the physical nature of an atom were solely described by classical mechanics, electrons would not orbit the nucleus, since orbiting electrons emit radiation (due to circular motion) and would quickly collide with the nucleus due to this loss of energy. This framework was unable to explain the stability of atoms. Instead, electrons remain in an uncertain, non-deterministic, smeared, probabilistic wave–particle orbital about the nucleus, defying the traditional assumptions of classical mechanics and electromagnetism.[23]

Quantum mechanics was initially developed to provide a better explanation and description of the atom, especially the differences in the spectra of light emitted by different isotopes of the same chemical element, as well as subatomic particles. In short, the quantum-mechanical atomic model has succeeded spectacularly in the realm where classical mechanics and electromagnetism falter.

Quantum mechanics gradually arose from theories to explain observations which could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and from the correspondence between energy and frequency in Albert Einstein's 1905 paper which explained the photoelectric effect. Early quantum theory was profoundly re-conceived in the mid-1920s by Erwin Schrödinger, Werner Heisenberg, Max Born and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle.

Broadly speaking, quantum mechanics incorporates four classes of phenomena for which classical physics cannot account:


1. In physics, quantization is the process of transition from a classical understanding of physical phenomena to a newer understanding known as quantum mechanics. Also related is field quantization, as in the "quantization of the electromagnetic field", referring to photons as field "quanta" (for instance as light quanta). Einstein entertained the possibility that there might be actual physical quanta of light—what we now call photons.

2. Quantum entanglement is a label for the observed physical phenomenon that occurs when pairs or groups of particles are generated, interact, or share spatial proximity in ways such that the quantum state of each particle cannot be described independently of the state of the others, even when the particles are separated by a large distance. Because it appears to violate locality, quantum entanglement is at the heart of the ongoing disparity between classical and quantum physics.

a. A big mind-fuck is teleportation. Quantum teleportation is a process in which quantum information (e.g. the exact state of an atom or photon) can be transmitted (exactly, in principle) from one location to another, with the help of classical communication and previously shared quantum entanglement between the sending and receiving locations. Quantum teleportation is limited to the transfer of information rather than matter itself.

b. What's amazing is that the entanglement correlations appear instantaneously, no matter the distance. Even so, teleportation is not faster-than-light communication: completing the protocol requires a classical message, which can travel no faster than light. The fact that the original state is destroyed during the entanglement-enabled teleportation ensures that there is never a duplicate.

3. In quantum mechanics, the Heisenberg uncertainty principle declares that, at any given instant, the position and momentum of an electron or other subatomic particle cannot both be exactly determined: a state in which one of them has a definite value corresponds to a superposition of many states of the other. In this sense, the superposition and uncertainty principles are logically dependent.

a. A big mind-fuck is the many-worlds interpretation (MWI), an interpretation of quantum mechanics which asserts that the universal wavefunction is objectively real and that there is no wavefunction collapse.[2] This implies that all possible outcomes of quantum measurements are physically realized in some "world" or universe.

4. Wave–particle duality is the concept in quantum mechanics that every particle or quantum entity may be described as either a particle or a wave.

Wavefunctions of the electron in a hydrogen atom at different energy levels. Quantum mechanics cannot predict the exact location of a particle in space, only the probability of finding it at different locations.[1] The brighter areas represent a higher probability of finding the electron.

What is the difference between a Photon and an atom?

1. A photon is a packet of energy, while an atom is made up of protons, neutrons and electrons.

2. A photon's energy is set by its frequency; atoms can have different numbers of protons, neutrons and electrons.

3. Photons always travel at the speed of light, while atoms can vibrate about their equilibrium positions.

4. Photons can be polarized, while atoms are electrically neutral.

5. Photons can travel through material without any interaction or loss of energy, which is not possible for atoms.

Photons are the smallest measure of light, and no, they don't have mass. So that's easy, right? Light is composed of photons, which have no mass, so therefore light has no mass and can't weigh anything.

So photons are particles of light and other electromagnetic waves; they transmit electric and magnetic interactions. Electrons have mass, photons do not; electrons have electric charge, photons do not. And the electron's spin (1/2) is half the spin of the photon (1).

Quantum computers use qubits, which are typically implemented with subatomic particles such as electrons or photons. So quantum computers can use either photons or electrons.

Not so fast. Because photons have energy -- and, as Einstein taught us, energy is equal to the mass of a body, multiplied by the speed of light squared. How can photons have energy if they have no mass? (One imagines Einstein thinking about photons re: mass and shrugging, hoping that nobody noticed the discrepancy.)

Actually, what Einstein was proving is that energy and mass could be the same thing -- all energy has some form of mass. Light may not have rest (or invariant) mass -- the weight that describes the heft of an object. But because of Einstein's theory (and the fact that light behaves like it has mass, in that it's subject to gravity), we can say that mass and energy exist together. In that case, we'd call it relativistic mass -- mass when an object is in motion, as opposed to at rest.

Photon is the smallest unit of electromagnetic fields whereas an atom is the building block of all the matter around us.

Though the more specific difference would be that an atom has a definite rest mass, while the rest mass of a photon is always zero. That doesn't mean a photon can't impart momentum like an atom; it can, when it moves.

An atom is the Greek definition.

The smallest way an Element can be divided without it no longer being an Element.

The Greek word atom means indivisible, which is why you see the word also in computer science, referring to the smallest division of a program instruction.


The Greek "proof" of the atom actually describes molecules rather than atoms, but that is because the Greeks did not yet understand elements.

Elements are formed from a combination of atomic particles.

Since the science of this is complex, I will just describe the simplest Hydrogen atom, where the Hydrogen molecule is two of them joined together.

The Hydrogen atom is one electron and one proton, where the electron has most of the non-inertial energy and the proton has the inertial energy (also known as rest mass, because mass is equivalent to energy).

In other words the electron can be said to be moving and the proton can be said to be the lowest possible energy state of that electron.

Note already how I am confusing you because I am saying that the “position" of the “particle" with the greatest inertial mass (rest mass) - the proton — is equivalent to an energy “state".

But you did ask the question, and I know that you don't want either the mathematical explanation of the Physics professor or the dumbed down explanation of the Physics teacher.

Now for how the photon is part of this.

The electrons in atoms have energy states.

You can call this energy “quantum energy" because the value of energy is an integer multiple of a fixed quantity of energy.

You can call this energy “relative" because all non-inertial energy is relative to something else, other than the energy of a photon (or of the other particles equivalent to photons in that they TRANSFER energy; although if you know the math, the better word is mediate or transit).

You can call this energy “kinetic" because the electrons are “moving" and because it is convenient to treat electrons as moving with kinetic energy, when we say that they have quantum energy.

You can call this energy “wave energy" because it is convenient to treat the “orbits" of these “moving electrons" as standing waves, and because there are two types of math used to describe the energy of the electron and the energy of the photon. Wave and Particle.

But the (by now) very simple way to explain the photon follows on from this.

A photon is just the smallest possible way to divide the energy of that electron.

And this energy is what is TRANSITED or MEDIATED between two electrons, even when they are in the same atom.

Yes, you heard that right.

A photon transits between two electrons in different atoms, separated by a relative distance of about 300,000,000 metres, and the relative time for the TRANSIT is about one second.

That is because the photon represents a loss of energy of the first electron (and hopefully by now you can see that all energy, other than inertial, is relative to something else) and a gain of energy of the second electron.

Note to PhD in Physics getting very upset with me for not mentioning how there is a lot more to this, or anything else I have left out.

Yes I know. I think he gets the general idea.

So in summary the atom is where the photon usually comes from, or usually goes to, when energy is transferred from one atom to another.

But photons involve the acceleration of electrons. By acceleration I mean either going faster or slower, and hopefully by now you can see that faster or slower don't mean what you used to think they meant.

If we accelerate electrons in a vacuum, the photons are x-rays. Very high frequency “waves"

If we accelerate electrons in the metal skin of a radio transmitter, the photons are radio waves. Low to very low frequency “waves".

If we accelerate the electrons in the atom by the atom having a temperature higher than absolute zero, the photons are infrared radiation.

If we accelerate the electrons so that the radiation of the “wave" or “particle" is visible, we call this light.

To see photons up close and personal at a wide range of frequencies, rub your feet on the carpet and then touch the metal doorknob.

The spark will generate photons that can be seen, that can be detected by the very small circuits in your smartphone (which is why they have to be designed to ignore them), and that can be felt — because photons transfer energy from the knob to your hand.

The main thing to understand, in my very difficult to understand answer to your question (which is a better question than many people who have just “heard of" photons or atoms would realize), is that photons, just like the “waves" we say that light, radio, and x-rays are, do not really exist as everyday objects.

Electromagnetic spectrum

The electromagnetic spectrum is the range of frequencies of electromagnetic radiation and their respective wavelengths and photon energies.

What are electromagnetic fields?

Definitions and sources

Electric fields are created by differences in voltage: the higher the voltage, the stronger will be the resultant field. Magnetic fields are created when electric current flows: the greater the current, the stronger the magnetic field. An electric field will exist even when there is no current flowing. If current does flow, the strength of the magnetic field will vary with power consumption but the electric field strength will be constant.

Natural sources of electromagnetic fields

Electromagnetic fields are present everywhere in our environment but are invisible to the human eye. Electric fields are produced by the local build-up of electric charges in the atmosphere associated with thunderstorms. The earth's magnetic field causes a compass needle to orient in a North-South direction and is used by birds and fish for navigation.

Human-made sources of electromagnetic fields

Besides natural sources the electromagnetic spectrum also includes fields generated by human-made sources: X-rays are employed to diagnose a broken limb after a sport accident. The electricity that comes out of every power socket has associated low frequency electromagnetic fields. And various kinds of higher frequency radiowaves are used to transmit information – whether via TV antennas, radio stations or mobile phone base stations.

Key points:

1. The electromagnetic spectrum encompasses both natural and human-made sources of electromagnetic fields.

2. Frequency and wavelength characterise an electromagnetic field. In an electromagnetic wave, these two characteristics are directly related to each other: the higher the frequency the shorter the wavelength.

3. Ionizing radiation such as X-rays and gamma-rays consists of photons which carry sufficient energy to break molecular bonds. Photons of electromagnetic waves at power and radio frequencies have much lower energy and do not have this ability.

4. Electric fields exist whenever charge is present and are measured in volts per metre (V/m). Magnetic fields arise from current flow. Their flux densities are measured in microtesla (µT) or millitesla (mT).

5. At radio and microwave frequencies, electric and magnetic fields are considered together as the two components of an electromagnetic wave. Power density, measured in watts per square metre (W/m²), describes the intensity of these fields.

6. Low frequency and high frequency electromagnetic waves affect the human body in different ways.

7. Electrical power supplies and appliances are the most common sources of low frequency electric and magnetic fields in our living environment. Everyday sources of radiofrequency electromagnetic fields are telecommunications, broadcasting antennas and microwave ovens.
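
Key points 2 and 3 boil down to two formulas: wavelength = c / frequency, and photon energy E = h × f. Here's a quick sketch with a few example frequencies (the specific frequencies are my own illustrative choices, not from the list above):

```python
# Wavelength = c / f, photon energy E = h * f (key points 2 and 3).
# The example frequencies below are illustrative choices.
c = 2.998e8       # speed of light, m/s
h = 6.626e-34     # Planck constant, J*s
eV = 1.602e-19    # joules per electronvolt

for name, f in [("50 Hz mains power", 50.0),
                ("100 MHz FM radio", 100e6),
                ("2.45 GHz microwave oven", 2.45e9),
                ("~10^18 Hz X-ray", 1e18)]:
    wavelength = c / f    # metres: the higher the frequency, the shorter the wavelength
    energy = h * f / eV   # electronvolts
    print(f"{name}: wavelength {wavelength:.3g} m, photon energy {energy:.3g} eV")
```

Molecular bonds take a few eV to break, which is why the keV-scale X-ray photon is ionizing while the mains-frequency photon, roughly sixteen orders of magnitude weaker, is not.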

How is electricity made?

It all starts with atoms.

Atoms are small particles and put simply, they are the basic building blocks of everything around us, whether it is our chairs, desks or even our own body. Atoms are made up of even smaller elements, called protons, electrons and neutrons.

When electrical and magnetic forces move electrons from one atom to another, an electrical current is formed.


Firstly, to generate electricity, you’ll require a fuel source, such as coal, gas, hydropower or wind.

In Australia, most of our electricity supply is generated from traditional fuels, such as coal and natural gas, with around 14 percent coming from renewable energy sources.

Regardless of the chosen fuel, most generators operate on the same proven principle: turn a turbine so that it spins magnets surrounded by copper wire, to get the flow of electrons across atoms, which in turn generates electricity.

Coal and gas work in similar ways; they are both burned to heat water, which creates steam and turns the turbine.

Renewable energy sources such as hydropower and wind operate slightly differently, with either the water or the wind being used to turn the turbine, and generate the electricity.

Solar photovoltaic panels take a different approach again: they generate electrical power by converting solar radiation into electricity using semi-conductors.

Think of a qubit as an electron in a magnetic field.

The most common metals used for permanent magnets are iron, nickel, cobalt and some alloys of rare earth metals.

Magnets attract iron due to the influence of their magnetic field upon the iron. When exposed to the magnetic field, the atoms begin to align their electrons with the flow of the magnetic field, which makes the iron magnetized as well. This, in turn, creates an attraction between the two magnetized objects.

Copper wire coils and magnetic fields

An electromagnetic coil is an electrical conductor such as a wire in the shape of a coil, spiral or helix.[1][2] Electromagnetic coils are used in electrical engineering, in applications where electric currents interact with magnetic fields, in devices such as electric motors, generators, inductors, electromagnets, transformers, and sensor coils. Either an electric current is passed through the wire of the coil to generate a magnetic field, or conversely an external time-varying magnetic field through the interior of the coil generates an EMF (voltage) in the conductor.

Photons' dual wave-particle nature

Photons, like all quantum objects, exhibit wave-like and particle-like properties. Their dual wave–particle nature can be difficult to visualize. The photon displays clearly wave-like phenomena such as diffraction and interference on the length scale of its wavelength. For example, a single photon passing through a double-slit experiment exhibits interference phenomena, but only if no measurement was made at the slit. A single photon passing through a double-slit experiment lands on the screen with a probability distribution given by its interference pattern determined by Maxwell's equations.[61] However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; it does not spread out as it propagates, nor does it divide when it encounters a beam splitter.[62] Rather, the photon seems to be a point-like particle, since it is absorbed or emitted as a whole by arbitrarily small systems, systems much smaller than its wavelength, such as an atomic nucleus (≈10⁻¹⁵ m across) or even the point-like electron. Nevertheless, the photon is not a point-like particle whose trajectory is shaped probabilistically by the electromagnetic field, as conceived by Einstein and others; that hypothesis was refuted by photon-correlation experiments. According to our present understanding, the electromagnetic field itself is produced by photons, which in turn result from a local gauge symmetry and the laws of quantum field theory.

Heisenberg's thought experiment for locating an electron (shown in blue) with a high-resolution gamma-ray microscope. The incoming gamma ray (shown in green) is scattered by the electron up into the microscope's aperture angle θ. The scattered gamma ray is shown in red. Classical optics shows that the electron position can be resolved only up to an uncertainty Δx that depends on θ and the wavelength λ of the incoming light.

Hendrik A. Lorentz was chairman of the first Solvay Conference, held in Brussels from October 30th to November 3rd, 1911.[2] The subject was Radiation and the Quanta. This conference looked at the problems of having two approaches, namely classical physics and quantum theory. Albert Einstein was the second-youngest physicist present, and the famous Polish-French physicist Marie Curie was the only woman at the conference.

Quantum Computers

Quantum computers store information using subatomic particles, which behave according to very different rules than the ones that govern our macro world. For example, quantum particles can exist in a "superposition" of two different states at the same time, and particles can be separated by light-years yet still be "entangled," affecting each others' properties.

Entanglement in quantum physics is really different from everyday correlation. If particles get entangled, they lose their individual identities. Before they are entangled, you can assign each of them certain properties; they can have definite values. But once they are entangled, they have an identity only together, as a whole. The weird thing is that this entanglement bond stays there even if you pull the particles apart. Even when you place those particles on opposite sides of the galaxy, they will behave as if they are actually one particle.

Superposition can be thought of like this. You have a coin with heads and tails, just like 0 and 1. It can only be either heads or tails, right? Wrong! Toss the coin: while it is in the air, is it heads or tails? It's both. That's the idea of superposition: both 0 and 1 at the same time.

Going deeper, what actually represents the abstract idea of 0 and 1 simultaneously in the behavior of a subatomic particle? How do you read and write information on a qubit?

To do that, we need a qubit. We could use an electron in a phosphorus atom as the qubit. We could use a photon of light as the qubit. Or we could use the nucleus of the phosphorus atom as the qubit.

How to build a qubit:

Let’s use a phosphorus atom.

1. The tiny atom is embedded in a silicon crystal right next to a transistor (remember transistors are the gates for the operations of the 1s and 0s). Silicon also serves as a photovoltaic, light-absorbing material, and semiconductor.

2. The electron of the atom has a magnetic dipole called spin, which points up or down, like the classic 1 or 0. To separate the energies of the spin-up and spin-down states, you need to apply a strong magnetic field. To do that, we use a superconducting magnet: a coil of superconducting wire whose field interacts with the atom.

3. So now the electron will line up with its spin pointing down. That is its lowest energy state, and it would take some energy to put it into the spin-up state. But not that much energy: at room temperature, the electron would have so much thermal energy that it would bounce back and forth between spin up and spin down.

4. You need to cool the whole apparatus down to a few hundredths of a degree above absolute zero. From top to bottom, the system gradually cools from four kelvin -- liquid-helium temperature -- to 800 millikelvin, 100 millikelvin and, finally, 10 millikelvin. Inside the canister, that's 10 thousandths of a degree above absolute zero. The wires, meanwhile, carry RF-frequency signals down to the chip.

5. Isolate the qubit from unwanted "noise." This includes electrical, magnetic and thermal noise -- even ordinary room temperature renders the whole machine useless.

6. This way you know the electron is spin down and stays there, stable. There's not enough thermal energy in the surroundings to flip it before we flip it ourselves, on purpose, as part of operating the qubit.

7. If you want to write information to the qubit, you can put the electron into the spin-up state by hitting it with a pulse of microwaves. That pulse has to have a very specific frequency, and that frequency depends on the magnetic field the electron is sitting in.

8. The electron is like a radio that can tune in to only one station. When that station is broadcasting, the electron gets excited and flips into the spin-up state.

9. You can stop the pulse at any point. If you do, what you have created is a special quantum superposition of spin up and spin down, with a specific phase between the two components.

10. How do you read the information? You use the transistor that the phosphorus atom is embedded next to.

a. The spin down has the lower energy, and the spin up has the higher energy.

b. The transistor has a puddle of electrons, filled up to a certain energy level. If the qubit electron is pointing up, it has more energy than all the others and CAN JUMP OVER INTO THE TRANSISTOR! Detecting that jump is how you read the qubit.

c. This leaves behind the bare nuclear charge of the phosphorus.

11. Because the qubit is so small and isolated from the rest of the world, it can hold its state for a very long time.

12. You need to eliminate all nuclear spin from the silicon crystal, so that stray magnetic interactions don't disturb the qubit.

13. Use a crystal of silicon-28, which has no nuclear spin.
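
Step 7 said the write frequency depends on the magnetic field. For a free electron, that relation is the electron spin resonance condition f = g·μB·B/h. The sketch below uses textbook constants and illustrative field strengths of my choosing; a real device's frequency also shifts with the hyperfine coupling to the phosphorus nucleus, which I'm ignoring here:

```python
# Electron spin resonance frequency f = g * mu_B * B / h.
# Illustrative numbers only; the real device frequency also depends
# on the hyperfine interaction with the phosphorus nuclear spin.
g = 2.0023          # electron g-factor
mu_B = 9.274e-24    # Bohr magneton, J/T
h = 6.626e-34       # Planck constant, J*s

for B in (0.5, 1.0, 1.5):   # magnetic field strength in tesla
    f_GHz = g * mu_B * B / h / 1e9
    print(f"B = {B} T -> microwave frequency ~{f_GHz:.1f} GHz")
```

At around 1 tesla this lands in the tens of gigahertz, which is why the control pulses are microwaves.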

Superposition is the ability of a quantum system to be in multiple states at the same time until it is measured.

Superposition reduces the number of steps to do a calculation or computation.

How does a quantum computer work:

a. A qubit can be in both states at once. When you measure the spin, it will be up or down, but before you measure it the electron can exist in what's called a quantum superposition -- the ability of a quantum system to be in multiple states at the same time until it is measured -- with two coefficients giving, for example, 0.8 (80%) and 0.2 (20%) probabilities of finding the electron in one state or the other.

b. You need TWO interacting quantum bits. Now there are four possible states: [up, up]; [up, down]; [down, up]; [down, down].

c. Superposition lets you create a superposition of all four of those states at once.
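
The counting in (b) generalizes: n qubits span 2ⁿ basis states, and a general superposition carries one amplitude for each of them. A small enumeration sketch:

```python
from itertools import product

# n interacting qubits span 2**n basis states; a general superposition
# assigns one complex amplitude to every one of those states.
for n in (1, 2, 3, 10):
    states = [''.join(bits) for bits in product('01', repeat=n)]
    assert len(states) == 2 ** n
    preview = states if n <= 2 else states[:2] + ['...']
    print(f"{n} qubit(s): {2 ** n} basis states: {preview}")
```

This exponential growth in the state space is exactly why simulating even a few dozen qubits overwhelms classical machines.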

Quantum computing harnesses the unique behavior of quantum physics to provide a new and powerful model of computing. The theory of quantum physics posits that matter, at a quantum level, can be in a superposition of multiple classical states. And those many states interfere with each other like waves in a tide pool. The state of matter after a measurement "collapses" into one of the classical states.

Thereafter, repeating the same measurement will produce the same classical result. Quantum entanglement occurs when particles interact in ways such that the quantum state of each cannot be described independently of the others, even if the particles are physically far apart.

Quantum computing stores information in quantum states of matter and uses its quantum nature of superposition and entanglement to realize quantum operations that compute on that information, thereby harnessing and learning to program quantum interference.

Quantum computing might sound daunting, but with the right resources you can start building quantum applications today.

Numbers beside it oscillate between 1.0000 and 0.0000. This is one of the strengths of qubits: they do not have to be the all-or-nothing 1 or 0 of binary bits but can occupy states in between. This quality of “superposition” allows each qubit to perform more than one calculation at a time, speeding up computation in a manner that seems almost miraculous. Although the final readout from a qubit is a 1 or 0, the existence of all of those intermediary steps means it can be difficult or impossible for a classical computer to do the same calculation.

Squiggly lines display waveforms that correspond to the functions being performed on the qubits. Next to that section is a box about the size of a desktop printer, which sends those waveforms as electrical pulses through wires and into the silver cylinder. If the cylinder were open, one would see a series of six chambers, arranged in layers like a wire-festooned upside-down wedding cake. Each chamber is chilled to a temperature significantly colder than the one above it; the bottommost layer is a frigid 15 millikelvins, nearly 200 times as cold as the depths of outer space. Wires passing through the successive stages relay control signals from the warm outside world and pass back results from the chamber.

The qubit

Quantum computing defines computing concepts that reflect the quantum behavior. Quantum computing begins with the notion of a qubit. In quantum computing, a quantum bit - qubit - is a unit of quantum information, like a classical bit. Where classical bits hold a single binary value such as a 0 or 1, the state of a qubit can be in a superposition of 0 and 1 simultaneously.

The act of measuring a qubit changes a qubit state. With measurement, the qubit goes from being in superposition to one of the classical states.

Multiple qubits can also be entangled. When we make a measurement of one entangled qubit, our knowledge of the state of the other(s) is updated as well.


That chamber is in vacuum, shielded from the light and heat that would otherwise disrupt the delicate qubits, which sit on a chip at the end of all the wires, isolated in the dark and cold. Each qubit is about 0.2 millimeter across, big enough to be visible through an ordinary microscope. But chilled and hidden away from external influences, each becomes a superconductor that lets electrons flow freely, acting as if it were a single atom so that the laws of quantum mechanics scale up to dictate its behavior.

Gentle pulses of microwaves cause the qubits to vibrate. And when two neighboring qubits reach the same resonant frequency, they become entangled—another quantum-mechanical property meaning that measuring the state of one tells you the state of the other. Electromagnetic pulses at a different frequency cause the bit flips. The quantum computer is rather like a box containing a bunch of pendulums, says Craig Gidney, a quantum software engineer at Google. I and others outside the chamber sending the signals into it are pulling on the strings of the pendulums, changing their swings to perform different logical operations.

All this chilling and vibrating, Google’s quantum team says, has allowed it to achieve quantum supremacy, the point at which a quantum computer can do something that an ordinary classical computer cannot. In a paper published this week in Nature—but inadvertently leaked last month on a NASA Web site—Google engineers describe a benchmark experiment they used to demonstrate supremacy. Their program, run on more than 50 qubits, checks the output of a quantum random-number generator. Some critics have complained this is a contrived problem with a limited real-world application, says Hartmut Neven, manager of Google’s Quantum Artificial Intelligence Lab. “Sputnik didn’t do much either,” Neven said during a press event at the Goleta facility. “It circled the Earth. Yet it was the start of the space age.”

David Awschalom, a condensed matter physicist specializing in quantum-information engineering at the University of Chicago, who was not part of the research, agrees that the program solved a very particular problem and adds that Google cannot claim it has a universal quantum computer. Such an achievement would require perhaps a million qubits, he says, and lies many years in the future. But he believes the company’s team has reached an important milestone that offers other scientists real results to build on. “I’m very excited about this,” Awschalom says. “This type of result offers a very meaningful data point.”

Google’s quantum computing chip, dubbed Sycamore, achieved its results using exactly 53 qubits. A 54th one on the chip failed. Sycamore’s aim was to randomly produce strings of 1’s and 0’s, one digit for each qubit, producing 2^53 possible bit strings (that is, some 9 quadrillion bit strings). Because of the way the qubits interact with one another, some strings are more likely to emerge than others. Sycamore ran the number generator a million times, then sampled the results to come up with the probability that any given string would appear. The Google team also ran a simpler version of the test on Summit, a supercomputer at Oak Ridge National Laboratory, then extrapolated from those results to verify Sycamore’s output. The new chip performed the task in 200 seconds. The same chore, the researchers estimated, would have taken Summit 10,000 years.

Yet a group of researchers at IBM, which is also working to develop quantum computing, posted a preprint paper earlier this week arguing that, under ideal conditions and using extra memory storage, Summit could accomplish the task in two and a half days. “Because the original meaning of the term ‘quantum supremacy,’ as proposed by [California Institute of Technology theoretical physicist] John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met,” the scientists wrote in a post on the IBM Research Blog. Perhaps, then, Google’s achievement might be better labeled “quantum advantage.”

But Scott Aaronson, a theoretical computer scientist at the University of Texas at Austin, who sometimes collaborates with the Google researchers, says it is not really correct to say quantum supremacy has not been achieved—even if it is not an unambiguous “man on the moon” sort of result. After all, Sycamore was still far faster at the task than Summit. And as the number of qubits in Google’s setup grows, its computing power will expand exponentially. Moving from 53 to 60 qubits would give the company’s quantum computer the equivalent computational heft of 33 Summit supercomputers. At 70 qubits, a Summit-like classical supercomputer would have to be the size of a city to possess the same processing power.

Aaronson also suspects that what Google achieved might already have some unintended practical value. Its system could be used to produce numbers verifiably guaranteed by the laws of quantum physics to be random. That application might, for instance, produce far stronger passwords than humans or classical computers are able to come up with.

“I’m not sure the right thing is to argue whether it’s ‘supremacy’ or not,” Awschalom says. The quantum computing community has yet to agree on the best ways to compare different quantum computers, he says, especially those built on different technologies. Whereas both IBM and Google are using superconductors to create their qubits, another approach relies on trapped ions—charged atoms suspended in a vacuum and manipulated by laser beams. IBM has proposed a metric called “quantum volume,” which includes factors such as how fast qubits perform their calculations and how well they avoid or correct errors.

Error correction is, in fact, what quantum computer scientists must master to make truly useful devices—ones containing thousands of qubits. At that point, researchers say, the machines could run detailed simulations of chemical reactions that might lead to new drugs or better solar cells. And they could also quickly crack the cryptographic codes most commonly used to protect data on the Internet.

To reach that kind of performance, however, a quantum computer must self-correct, finding and fixing errors in its operations. Errors can arise when a qubit flips from 1 to 0 spontaneously or when its quantum superposition decays because of interference from the outside world. Google’s qubits currently last about 10 microseconds before decaying. “They have a finite lifetime,” says Marissa Giustina, one of the project’s researchers. “They’re very fragile. They interact with their surroundings, and we just lose the quantum information.”

Classical computers tackle error correction with redundancy, deciding whether a digital bit is on or off by measuring not a single electron in a capacitor but tens of thousands. Conversely, qubits are, by nature, probabilistic, so trying to clump them together to perform one bulk measurement will not work. Google is developing a statistical method to correct errors, and John Martinis, a physicist at the University of California, Santa Barbara, who teamed with the company to develop Sycamore, says the tentative results so far have revealed no fundamental aspect, no showstopper, that would prevent error correction from getting better and better. The show, it seems, will go on.

Meanwhile Google’s engineers will be working to improve their qubits to produce fewer errors—potentially allowing many more qubits to be interlinked. They also hope to shrink down their large, desktop-printer-sized control boxes—each can handle 20 qubits and associated circuitry, so three are needed to run Sycamore’s 53 qubits. And if their system grows to reach about 1,000 qubits, its cooling needs will exceed the capacity of those large silver cylinders.

Julian Kelly, who works on quantum hardware and architecture at Google, says the company’s announcement is an engineering achievement above all else, but it is one that could open up unexplored terrain. “We’ve demonstrated that the quantum hardware can do something that is extremely difficult,” he says. “We’re operating in a space where no one has been able to experiment before.” What the outcome of that progress will be, he says, is something “we don’t know yet, because we’ve just got here.”

More qubit

A qubit is a quantum bit, the counterpart in quantum computing to the binary digit or bit of classical computing. Just as a bit is the basic unit of information in a classical computer, a qubit is the basic unit of information in a quantum computer.

In a quantum computer, a number of elementary particles such as electrons or photons can be used (in practice, success has also been achieved with ions), with either their charge or polarization acting as a representation of 0 and/or 1. Each of these particles is known as a qubit; the nature and behavior of these particles (as expressed in quantum theory) form the basis of quantum computing. The two most relevant aspects of quantum physics are the principles of superposition and entanglement.


Think of a qubit as an electron in a magnetic field. The electron's spin may be either in alignment with the field, which is known as a spin-up state, or opposite to the field, which is known as a spin-down state. Changing the electron's spin from one state to another is achieved by using a pulse of energy, such as from a laser - let's say that we use 1 unit of laser energy. But what if we only use half a unit of laser energy and completely isolate the particle from all external influences? According to quantum law, the particle then enters a superposition of states, in which it behaves as if it were in both states simultaneously. Each qubit utilized could take a superposition of both 0 and 1. Thus, the number of computations that a quantum computer could undertake is 2^n, where n is the number of qubits used. A quantum computer comprising 500 qubits would have the potential to do 2^500 calculations in a single step. This is an awesome number - 2^500 is vastly more than the number of atoms in the known universe (this is true parallel processing - classical computers today, even so-called parallel processors, still only truly do one thing at a time: there are just two or more of them doing it). But how will these particles interact with each other? They would do so via quantum entanglement.
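Python's arbitrary-precision integers make it easy to get a feel for how big 2^500 actually is; 10^80 is a commonly cited estimate for the number of atoms in the observable universe.

```python
# Just how big is 2^500? Python integers have no size limit, so we can
# look at the number directly: it has 151 decimal digits, dwarfing the
# estimated ~10^80 atoms in the observable universe.
states = 2 ** 500
print(len(str(states)))   # -> 151 digits
print(states > 10 ** 80)  # -> True
```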


Particles that have interacted at some point retain a type of connection and can be entangled with each other in pairs, in a process known as correlation. Knowing the spin state of one entangled particle - up or down - allows one to know that the spin of its mate is in the opposite direction. Even more amazing is the knowledge that, due to the phenomenon of superposition, the measured particle has no single spin direction before being measured, but is simultaneously in both a spin-up and spin-down state. The spin state of the particle being measured is decided at the time of measurement and communicated to the correlated particle, which simultaneously assumes the opposite spin direction to that of the measured particle. This is a real phenomenon (Einstein called it "spooky action at a distance"), the mechanism of which cannot, as yet, be explained by any theory - it simply must be taken as given. Quantum entanglement allows qubits that are separated by incredible distances to interact with each other instantaneously (not limited to the speed of light). No matter how great the distance between the correlated particles, they will remain entangled as long as they are isolated.

Taken together, quantum superposition and entanglement create an enormously enhanced computing power. Where a 2-bit register in an ordinary computer can store only one of four binary configurations (00, 01, 10, or 11) at any given time, a 2-qubit register in a quantum computer can store all four numbers simultaneously, because each qubit represents two values. If more qubits are added, the increased capacity is expanded exponentially.

Real qubit example

What does a qubit have to do with computers?

Let’s go back to those bits in the classical binary number system. Either a 1 or a 0 is 1 bit.

bit = 1 or bit = 0

the bit in a state of 1 (an electrical pulse in the computer circuit is ON)

the bit in a state of 0 (no electrical pulse, so it’s OFF)

Computers use electrical power to create these two states.

You can also see a binary code as signal thanks to a series of electrical pulses that represent numbers, characters, and operations to be performed. A device called a clock sends out regular pulses, and components such as transistors switch on (1) or off (0) to pass or block the pulses.

4 bits would have 16 possible combinations of a binary number, because 16=2^4

1 or 0, 2 options

1 or 0, 2 options

1 or 0, 2 options

1 or 0, 2 options

2 options, repeated four times. 2 x 2 x 2 x 2 = 16

So all possible binary 4-bit numbers would be this:

How can you write all binary numbers in order in your head? (Think of it as counting from smallest to largest, as if it were a decimal number):

0000 (smallest)

0001 (1 in decimal if this was a decimal system)

0010 (10 in decimal, although it doesn’t mean 10 in binary)

0011 (11, the next number in the sequence)

0100 (100 if you see it from the point of view of a decimal number)

0101 (101, but remember it’s not decimal and doesn’t mean the same in binary)

0110 (110, and so on)

0111 (111 is the next bigger number than 110 if you still look at it from the decimal point of view)

1000 (1000, ok I think you got the technique to write fast in binary)

1001 (1001)

1010 (1010)

1011 (1011)

1100 (1100)

1101 (1101)

1110 (1110)

1111 (1111)






Each binary numeral is a sum of powers of two, one power per digit position:

0 = 0 (2^0)

1 = 1 (2^0)

10 = 1 (2^1) + 0 (2^0)

11 = 1 (2^1) + 1 (2^0)

100 = 1 (2^2) + 0 (2^1) + 0 (2^0)

101 = 1 (2^2) + 0 (2^1) + 1 (2^0)

110 = 1 (2^2) + 1 (2^1) + 0 (2^0)

111 = 1 (2^2) + 1 (2^1) + 1 (2^0)

1000 = 1 (2^3) + 0 (2^2) + 0 (2^1) + 0 (2^0)

1001 = 1 (2^3) + 0 (2^2) + 0 (2^1) + 1 (2^0)

1010 = 1 (2^3) + 0 (2^2) + 1 (2^1) + 0 (2^0)

Decimal numerals represented by binary digits
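The counting technique above is easy to verify in a few lines of Python, which print each 4-bit pattern with its decimal value and its place-value sum:

```python
# Enumerate all sixteen 4-bit patterns alongside their decimal values,
# mirroring the hand-written list above.
for n in range(16):
    bits = format(n, "04b")          # zero-padded binary, e.g. '0011'
    # Reconstruct the place-value sum: each 1 contributes 2^position.
    terms = [2 ** (3 - i) for i, b in enumerate(bits) if b == "1"]
    print(bits, "=", n, "=", " + ".join(map(str, terms)) or "0")
```

For example, the line for 5 reads `0101 = 5 = 4 + 1`, matching the place-value expansion 1 (2^2) + 1 (2^0).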

Let’s say now that the number you want to crack is one of those 16 binary numbers. You don’t know which one, though. So you create a machine to try all 16 binary numbers. A normal computer takes one 4-bit number, and if it’s wrong, goes on to the next individual number, until it finds the right answer.

If we use a quantum computer, we use 4 quantum bits, and remember, due to superposition (where an electron is, for a moment, in both the spin-up and spin-down states at the same time), a qubit is both 1 and 0.

So, 4 qubits are 1 and 0, 1 and 0, 1 and 0, 1 and 0. Each is both of them at once.

The machine will say that you are both right and wrong, because it detects that you are sending both the correct password and a bunch of incorrect passwords at the same time.

You still want to know which one is the correct password. We know we sent the correct password, but not which one it was.

You use something called the “Grover operator,” which sweeps away all of the wrong passwords and leaves you with only the correct one.

So that’s the beauty of quantum computing. Instead of trying every option one at a time, you try ALL of them at the same time, then use the Grover operator to sweep away the wrong answers. What would take years on a classical computer, a quantum computer can do in seconds.
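Here is a rough classical simulation of that idea: a sketch of Grover's search over the 16 candidate "passwords" (4 qubits), with the state held as a list of 16 amplitudes. The target index 11 is an arbitrary choice for illustration. Each sweep applies the oracle (flip the sign of the marked answer) and then the "inversion about the mean" diffusion step that boosts it.

```python
import math

# Grover's search over N = 16 items, simulated with a classical
# amplitude list. On real hardware this would be 4 qubits.
N = 16
target = 11                       # the secret password, index 0..15 (illustrative)
state = [1 / math.sqrt(N)] * N    # uniform superposition over all 16

iterations = int(math.pi / 4 * math.sqrt(N))   # ~3 sweeps for N = 16
for _ in range(iterations):
    state[target] = -state[target]             # oracle: mark the answer
    mean = sum(state) / N
    state = [2 * mean - a for a in state]      # invert about the mean

probs = [a * a for a in state]
print(max(range(N), key=lambda i: probs[i]))   # -> 11, the marked item wins
print(round(probs[target], 2))                 # -> 0.96 after 3 sweeps
```

Note the quadratic flavor of the speed-up: about sqrt(N) sweeps suffice, versus the N/2 guesses a classical brute-force search needs on average.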

An application of this would be, for example, Google Maps: look at all the possible paths between two points and pick the optimal route. With a quantum computer, you can try ALL paths at once - imagine a master quantum system optimizing every vehicle, self-driving car, and traffic light.

In AI: give a quantum computer the ability to reprogram itself. A traditional computer would do it by trial and error, taking thousands of years. A quantum computer can try ALL possible programs to find the best one. In an instant, it could figure out how to reprogram itself and become a better AI.

Example of what classical computers can do:

Classical computers have given us the Internet, space flight, transatlantic flight, and modern computing.

What they can’t do:

Optimization: finding the best solution to a problem out of many possible solutions.


A round table of 10 CEOs at a conference, with 10 chairs. How many different ways are there to configure 10 CEOs around the table?

The answer is 10 factorial: 10! = 10x9x8x7x6x5x4x3x2x1 = 3,628,800 different ways to seat them.

Is there an optimal arrangement? We make some estimate of best way to seat them.

If we add 1 more CEO, for 11 CEOs total, our problem has ballooned to 11 factorial: 11! = 11x10x9x8x7x6x5x4x3x2x1 = 39,916,800 different ways to seat the CEOs. What’s the single best way to seat all of them? Is there one best way?
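Both factorials are quick to check with Python's standard library:

```python
import math

# Seating arrangements grow factorially with the number of CEOs,
# which is why brute-force search blows up so quickly.
print(math.factorial(10))   # -> 3628800 ways for 10 CEOs
print(math.factorial(11))   # -> 39916800 ways after adding just one more
```

Adding a single person multiplied the search space by 11; by 20 CEOs there are already over 2.4 quintillion arrangements.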


Chemistry to cure disease, make new fertilizers for food - a more sustainable world. Chemists need to computer-simulate chemicals: molecules, atoms, electrons - gazillions of different things happening simultaneously and interacting with each other. Chemists can’t use classical computers to simulate them.

Take a food enzyme.

A nitrogenase enzyme involved in the nitrogen-fixation (N2-to-NH3) reaction.

There are different reaction stages.

Take 3 molecules in the enzyme (a molecule is two or more atoms bonded together): iron-sulfur clusters of different sizes. The smallest has 4 iron atoms and 4 sulfur atoms. It’s tiny, yet it’s the ONLY molecule, in only one enzyme, in ONLY one stage of the chemical reaction, that a classical computer can simulate. Why?

Because to actually simulate what’s going on in that molecule, you need to account for every electron-electron repulsion and every attraction of each electron to the nucleus. Every single electron exerts an electric force on every other electron, so when you add another one you must recalculate all the electron energies. The other two molecules CANNOT be simulated.

So many other problems in the world cannot be solved with classical computers.


Why is Quantum Different?

N qubits

2^N states, a full quantum state

1 qubit can be in a superposition of 2 states, or 2^1

2 qubits can be in a superposition of 4 states, or 2x2

3 qubits can be in a superposition of 8 states, or 2x2x2

Where computational complexity is exponential, that’s what quantum computers are good at.

Example of its power:

Each additional qubit doubles the processing power. Three qubits give you 2^3, which is eight states at the same time; four qubits give you 2^4, which is 16. And 64 qubits? They give you 2^64, which is 18,446,744,073,709,551,616 possibilities! That’s about one million terabytes’ worth.

While 64 regular bits can also represent this huge number (2^64) of states, they can only represent one at a time. To cycle through all these combinations at two billion per second (a typical speed for a modern PC) would take roughly 300 years, versus an instant for a quantum computer.
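The arithmetic behind that estimate is easy to redo; at exactly two billion states per second it comes out to roughly 292 years, the same order of magnitude:

```python
# Rough arithmetic behind the "centuries to cycle through 2^64 states"
# claim: at two billion states per second, how long would it take?
states = 2 ** 64
per_second = 2_000_000_000
seconds = states / per_second
years = seconds / (60 * 60 * 24 * 365)
print(round(years))   # -> 292 years at this rate
```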

What makes a qubit?

To make a qubit, you need an object that can attain a state of quantum superposition between two states.

An atomic nucleus is one kind of qubit. The direction of its magnetic moment (its “spin”) can point in different directions, say up or down with respect to a magnetic field.

The challenge is in placing and then addressing that single atom.

An Australian team led by Michelle Simmons at the University of New South Wales has made atomic qubits by placing a single phosphorus atom at a known position inside a silicon crystal.

Another idea is to strip an electron off the atom and turn it into an ion. Then you can use electromagnetic fields to suspend the ion in free space, firing lasers at it to change its state. This makes for a “trapped ion” quantum computer like one being developed at MIT.

A current in a loop of superconducting metal can also be in a superposition (between clockwise and anticlockwise), a bit like a little treadmill running forwards and backwards at the same time. The Canadian company D-Wave bases its quantum computing technology on these so-called flux qubits. Its customers include Lockheed Martin, NASA and Google.

A photon of light can be in superposition in the direction it’s waving. Some groups, such as at the University of Bristol in the UK, have been assembling quantum circuits by sending photons around a maze of optical fibres and mirrors.

How do you create the superposition?

Have you ever tried to balance a coin exactly on its edge? That’s what programming a qubit is like. It involves doing something to a qubit so that, in a sense, it ends up “balanced” between states.

In the case of the atomic nucleus, this might be done by zapping it with an electric or magnetic field, leaving it with an equal probability of spinning one way or the other.

So how do you read information from the qubits?

There’s an aura of the mystical about what goes on during a quantum computation. The more way-out physicists describe the qubits as engaging in a sort-of quantum séance with parallel worlds to divine the answer.

But it’s not magic, it’s just quantum mechanics.

Say you’ve got your new 64-qubit quantum computer up and running for its first computation. You place all 64 qubits in superposition, just like 64 coins all balanced on edge. Together, they hold 2^64 possible states in limbo. You know one of these states represents the right answer. But which one?

The problem is, reading the qubits causes the superposition to collapse – like banging your fist on the table with all those balanced coins.

Here’s where a quantum algorithm like Grover’s comes in handy. It loads the qubits to make them more likely to fall on the correct side, and give us the right answer.

Because bits are so small, you rarely work with information one bit at a time. Bits are usually assembled into a group of eight to form a byte. A byte contains enough information to store a single ASCII character, like "h".

A kilobyte (KB) is 1,024 bytes, not one thousand bytes as might be expected, because computers use binary (base two) math, instead of a decimal (base ten) system.

Computer storage and memory is often measured in megabytes (MB) and gigabytes (GB). A medium-sized novel contains about 1 MB of information. 1 MB is 1,024 kilobytes, or 1,048,576 (1024x1024) bytes, not one million bytes.

Similarly, one 1 GB is 1,024 MB, or 1,073,741,824 (1024x1024x1024) bytes. A terabyte (TB) is 1,024 GB; 1 TB is about the same amount of information as all of the books in a large library, or roughly 1,610 CDs worth of data. A petabyte (PB) is 1,024 TB. 1 PB of data, if written on DVDs, would create roughly 223,100 DVDs, i.e., a stack about 878 feet tall, or a stack of CDs a mile high. Indiana University is now building storage systems capable of holding petabytes of data. An exabyte (EB) is 1,024 PB. A zettabyte (ZB) is 1,024 EB. Finally, a yottabyte (YB) is 1,024 ZB.

Many hard drive manufacturers use a decimal number system to define amounts of storage space. As a result, 1 MB is defined as one million bytes, 1 GB is defined as one billion bytes, and so on. Since your computer uses a binary system as mentioned above, you may notice a discrepancy between your hard drive's published capacity and the capacity acknowledged by your computer. For example, a hard drive that is said to contain 10 GB of storage space using a decimal system is actually capable of storing 10,000,000,000 bytes. However, in a binary system, 10 GB is 10,737,418,240 bytes. As a result, instead of acknowledging 10 GB, your computer will acknowledge 9.31 GB. This is not a malfunction but a matter of different definitions.
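The 10 GB example above can be reproduced in a couple of lines of Python:

```python
# Why a "10 GB" drive shows up as about 9.31 GB: the manufacturer counts
# in powers of ten, the operating system in powers of two.
decimal_bytes = 10 * 10 ** 9          # 10 GB as sold: 10,000,000,000 bytes
binary_gb = decimal_bytes / 2 ** 30   # the OS divides by 1,073,741,824
print(round(binary_gb, 2))            # -> 9.31
```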

We count in base 10 by powers of 10:

10^1 = 10

10^2 = 10*10 = 100

10^3 = 10*10*10 = 1,000

10^6 = 1,000,000

Computers count by base 2:

2^1 = 2

2^2 = 2*2 = 4

2^3 = 2*2*2 = 8

2^10 = 1,024

2^20 = 1,048,576

So in computer jargon, the following units are used:



1 kilobyte (KB) = 1,024 bytes

1 megabyte (MB) = 1,048,576 bytes

1 gigabyte (GB) = 1,073,741,824 bytes

1 terabyte (TB) = 1,099,511,627,776 bytes

1 petabyte (PB) = 1,125,899,906,842,624 bytes

As each qubit has two possible values, the 18 qubits can generate a total of 2^18 (or 262,144) combinations of output states. Since quantum information can be encoded in these states, the results have potential applications anywhere quantum information processing is used.

Quantum algorithms

Quantum algorithms are designed to take advantage of quantum nature and behavior to speed up classical algorithms, or to provide entirely new ways of modeling physical systems. These algorithms exploit the way qubits encode information and the parallel nature of operating on multiple entangled qubits in superposition.

Classical computers encode information in bits; each bit encoding two possible values, 0 or 1. One qubit encodes two values simultaneously, 0 and 1. Two classical bits encode one of 4 possible values, (00, 01, 10, 11) whereas two qubits encode any superposition of the 4 states simultaneously, although we can obtain only one of those values when measuring. Four qubits encode any superposition of 16 values simultaneously, and so on, exponentially. 100 qubits can encode more information than is available in the largest computer systems today.

Furthermore, when multiple entangled qubits act coherently, they can process multiple options simultaneously. Entangled qubits can process information in a fraction of the time it would take even the fastest non-quantum systems.

Harnessing these quantum attributes has been the pursuit of multiple decades of quantum algorithm research, and there are many innovative techniques that have been found that solve problems in a fraction of the time it takes to solve classically.

One of the most famous quantum algorithms is Shor's algorithm for factorization, which makes the classically intractable problem of factorization of a large number into two prime numbers fast enough to challenge traditional cryptography.
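Shor's algorithm itself requires quantum hardware, but a toy classical baseline shows what it is up against. This hypothetical `factor` helper uses trial division, whose loop runs up to sqrt(n), so the cost grows exponentially in the number of digits of n: fine for the tiny numbers below, hopeless for RSA-sized ones.

```python
# Naive classical factorization by trial division. The running time
# scales with sqrt(n), i.e. exponentially in the digit count of n --
# the hardness assumption that Shor's algorithm would break.
def factor(n: int) -> tuple:
    d = 2
    while d * d <= n:
        if n % d == 0:
            return (d, n // d)
        d += 1
    return (n, 1)  # n itself is prime

print(factor(15))       # -> (3, 5)
print(factor(3 * 433))  # -> (3, 433)
```

A 2048-bit RSA modulus has over 600 decimal digits; this loop would need around 10^300 steps, while Shor's algorithm would factor it in polynomial time on a sufficiently large fault-tolerant quantum computer.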

On the more constructive side, algorithms for secure cryptographic key distribution are made possible by superposition, quantum entanglement, and the no cloning property of qubits, meaning the inability for qubits to be copied without detection.

Grover's algorithm highlights a quantum algorithm technique that provides a quadratic speed-up for searching unstructured data.

Drug discovery is a promising area of application that will find a number of uses for these new machines. As a prominent example, quantum simulation will enable faster and more accurate characterizations of molecular systems than existing quantum chemistry methods. Furthermore, algorithmic developments in quantum machine learning offer interesting alternatives to classical machine learning techniques, which may also be useful for the biochemical efforts involved in early phases of drug discovery.

Quantum hardware

In classical computers, bits correspond to voltage levels in silicon circuits. Quantum computing hardware can be implemented by many different physical realizations of qubits: trapped ions, superconducting, neutral atoms, electron spin, light polarization, topological qubits. Quantum hardware is an emergent technology. Qubits are fragile by nature and become less coherent as they interact with their environment. Balancing fidelity of the system with scalability is needed. The larger the scale (that is, number of qubits), the higher the error rate.

Sep 18th, 2019

The news: IBM’s new computer, due to launch next month, will boast 53 quantum bits, or qubits, the elements that are the secret to quantum machines’ power (see our explainer for a description of qubits and the phenomena that make quantum computers so powerful). Google has a 72-qubit device, but it hasn’t let outsiders run programs on it; IBM’s machine, on the other hand, will be accessible via the cloud.

Cloud power: IBM has been promoting quantum computing via the cloud since 2016. To boost those efforts, the firm is opening a new center in New York state to house even more machines. Other companies developing quantum computers, like Rigetti Computing and Canada’s D-Wave, have also launched cloud services. Behind the scenes, there’s a race on to demonstrate quantum supremacy.

Despite rapid and impressive experimental progress, most researchers believe that "fault-tolerant quantum computing [is] still a rather distant dream". As of September 2019, no scalable quantum computing hardware has been demonstrated.

On “Quantum Supremacy”

Recent advances in quantum computing have resulted in two 53-qubit processors: one from our group in IBM and a device described by Google in a paper published in the journal Nature. In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity.

Moving from 53 to 60 qubits would give Google’s quantum computer the equivalent computational heft of 33 Summit supercomputers.

The battle for top-dog status in the emerging field of quantum computing took a strange turn last week when rivals IBM and Google both made important and—in Google’s case—mysterious claims about where they are in a quest that most experts believe is still at least a decade away from the finish line.

IBM announced that it will add its 14th quantum computer to its fleet in October. This will be a new 53-qubit model which it says is the single largest universal quantum system made available for external access in the industry to date. IBM also announced the opening of the first IBM Quantum Computation Center in Poughkeepsie, NY, bringing the number of quantum computing systems available online via its IBM Q Experience platform to 10, with an additional four systems scheduled to come online in the next month.

Meanwhile, Google scientists posted, and then quickly took down, a research paper on a NASA website claiming that they had achieved a major milestone called “quantum supremacy”: solving problems that even the most powerful conventional supercomputers cannot.

According to a report in the Financial Times, the paper claimed that Google’s 72-qubit quantum chip Bristlecone, introduced in March 2018, performed a calculation in just over 3 minutes that would take 10,000 years on IBM’s Summit, the world’s most powerful supercomputer. The paper reportedly said:

To our knowledge, this experiment marks the first computation that can only be performed on a quantum processor.

If true, this would be a very big step in the advance toward quantum computing, but it appears the researchers may have gotten a little too far out over their skis: the post was quickly taken down. Since then, Google PR and marketing have refused to discuss the topic, and the paper has disappeared.

IBM takes the communal approach

Big Blue has taken an open, communal approach to the development of quantum computing, and it seems to be paying off. In 2016, it built the prototype 5-qubit IBM Q Experience machine, put it in the cloud, and made it available for the world to learn from, use, and explore.

I’ve used it myself and created an account.

Quantum Computers' Huge Hardware

Look inside a quantum computer

In order to work with qubits for extended periods of time, they must be kept very cold. Any heat in the system can introduce error, which is why quantum computers are designed to create and operate at temperatures near absolute zero.

Here’s a look at how a quantum computer’s dilution refrigerator, made from more than 2,000 components, exploits the mixing properties of two helium isotopes (this IBM computer doesn’t use photons) to create such an environment for the qubits inside.

An isotope is a form of a chemical element whose atomic nucleus contains a specific number of neutrons, in addition to the number of protons that uniquely defines the element.

Steps are:

1 Qubit Signal Amplifier

One of two amplifying stages is cooled to a temperature of 4 Kelvin.

2 Input Microwave Lines

Attenuation is applied at each stage in the refrigerator in order to protect qubits from thermal noise during the process of sending control and readout signals to the processor.

3 Superconducting Coaxial Lines

In order to minimize energy loss, the coaxial lines that direct signals between the first and second amplifying stages are made out of superconductors.

4 Cryogenic Isolators

Cryogenic isolators enable qubit signals to go forward while preventing noise from compromising qubit quality.

5 Quantum Amplifiers

Quantum amplifiers inside of a magnetic shield capture and amplify processor readout signals while minimizing noise.

6 Cryoperm Shield

The quantum processor sits inside a shield that protects it from electromagnetic radiation in order to preserve its quality.

7 Mixing Chamber

The mixing chamber at the lowest part of the refrigerator provides the necessary cooling power to bring the processor and associated components down to a temperature of 15 mK — colder than outer space.
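To see why the mixing chamber must get this cold, here is a rough stdlib-Python estimate (my own sketch, assuming a typical ~5 GHz superconducting qubit frequency) of the mean number of thermal photons at the qubit frequency at each temperature stage, using the Bose–Einstein occupation 1/(exp(hf/kT) − 1):

```python
# Back-of-the-envelope: thermal photon occupation at a qubit's frequency.
# At 15 mK the environment is essentially dark; at 4 K or room temperature
# thermal photons would constantly disturb the qubit.
import math

H = 6.626e-34   # Planck constant, J*s
KB = 1.381e-23  # Boltzmann constant, J/K

def thermal_photons(freq_hz: float, temp_k: float) -> float:
    """Mean thermal photon number (Bose-Einstein occupation) at freq/temp."""
    return 1.0 / math.expm1(H * freq_hz / (KB * temp_k))

f = 5e9  # assumed ~5 GHz qubit transition frequency
for temp in (300.0, 4.0, 0.015):  # room temp, 4 K stage, mixing chamber
    print(f"{temp} K -> {thermal_photons(f, temp):.3g} thermal photons")
```

The many orders of magnitude between the 4 K amplifier stage and the 15 mK mixing chamber are the point of all the attenuation and shielding stages above.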

Quantum processors

This list contains quantum processors, also known as quantum processing units (QPUs). Please note that some devices listed below have only been announced at press conferences so far, with no actual demonstrations or scientific publications characterizing the performance.

These QPUs are based on the quantum circuit and quantum logic gate-based model of computing.
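To make the circuit model concrete, here is a tiny pure-Python state-vector sketch (my own illustration, not any vendor's API): a Hadamard followed by a CNOT turns |00⟩ into the entangled Bell state, giving equal probabilities of measuring 00 and 11.

```python
# Toy state-vector simulator: a state of 2**n complex amplitudes,
# with gates applied by rewriting pairs of amplitudes. Qubit 0 is the
# least-significant bit of the basis index.
import math

def apply_h(state, qubit):
    """Apply a Hadamard gate to `qubit` of a state vector."""
    s = 1 / math.sqrt(2)
    out = [0j] * len(state)
    for i, amp in enumerate(state):
        j = i ^ (1 << qubit)          # basis index with `qubit` flipped
        if (i >> qubit) & 1 == 0:
            out[i] += s * amp
            out[j] += s * amp
        else:
            out[j] += s * amp
            out[i] -= s * amp
    return out

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is 1 (swap paired amplitudes)."""
    out = list(state)
    for i in range(len(state)):
        if (i >> control) & 1 and not (i >> target) & 1:
            j = i ^ (1 << target)
            out[i], out[j] = state[j], state[i]
    return out

# Two qubits start in |00>; H then CNOT yields (|00> + |11>)/sqrt(2).
state = [1 + 0j, 0j, 0j, 0j]
state = apply_h(state, 0)
state = apply_cnot(state, 0, 1)
probs = [abs(a) ** 2 for a in state]  # measurement probabilities
print(probs)                          # ~[0.5, 0, 0, 0.5]
```

Real QPUs like those in the list below implement exactly this abstraction in hardware, with the error rates discussed earlier.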








Name | Layout | Fidelity | Qubits | Release date
– | 7×7 lattice | 99.7% [1] | 49 qb [2] | Q4 2017 (planned)
IBM Q 53 | – | – | 53 qb | October 2019
– | Nonlinear superconducting resonator | – | 54 transmon qb (53 qb effective) | –
– | – | 99.5% [1] | 20 qb | –
IBM Q 5 Tenerife | Bow tie | 99.897% (average gate), 98.64% (readout) | 5 qb | 2016 [1]

Design for a qubit

Researchers have proposed a new design for a qubit – the smallest unit of quantum information – that could help get around some of the difficulties of manufacturing quantum computers at an atomic scale.

At the moment, making quantum systems using silicon is difficult because the qubits have to be very close to each other, about 10 to 20 nanometres apart, in order to communicate. This leaves little room to place the electronics needed to make a quantum computer work.

But by combining an electron and nucleus into one qubit, Morello and his team think they’ve found a way to let qubits communicate over distances of up to 500 nanometres. “This would allow you to cram other things between qubits,” says Morello.

Making the leap

Until now, most silicon-based qubits have been made from the electron or the nucleus of a single phosphorus atom. The team’s design uses both the nucleus and the electron of a phosphorus atom to create a single qubit inside a layer of silicon.

Qubits in silicon systems interact through electric fields, and Morello’s team shows that it’s possible to extend the reach of those electric fields by pulling the electron further away from the nucleus of each atom.

This overcomes a couple of the major hurdles that held back silicon-based quantum systems, says Simon Devitt at Macquarie University in Sydney, and could eventually make it possible to create quantum computers with millions of qubits that can simulate simple chemical reactions.

Silicon-based qubits aren’t the only candidates for quantum computers. Google is making superconductor-based quantum chips, and claims it is on track to build the first quantum computer capable of surpassing some of the abilities of ordinary computers later this year.

Open race

“Silicon is a bit further behind the pack,” says Devitt. But since the computer industry is already used to building chips out of silicon, it is well placed to catch up with or even surpass the performance of other quantum systems. Quantum computers made using silicon qubits might be less error-prone than other systems when it comes to building computers with thousands or millions of qubits, says Devitt.

However, silicon and superconducting quantum systems both work only at temperatures close to absolute zero, says Michele Reilly at Turing, a quantum start-up in California. She says diamond-based systems could be easier to scale up because they use similar types of qubits to the silicon systems but don’t need to be cooled to such extreme temperatures.

“The path is pretty open,” says Barbara Terhal at RWTH Aachen University in Germany. She says it’s still too early to know which system will end up powering the quantum computers of the future.

We’ll have to wait and see whether this new way of defining a qubit really does unleash the potential of silicon-based quantum computers, says Devitt. “This could be a potential solution that is kind of staring us in the face,” he says. “But they’re going to have to go into the lab and make this work.”

Quantum computing – a full hardware and software stack


Running our first program on a quantum computer. IBM offers access to the most advanced quantum computers for you to do real work.

Learn, develop, and run quantum programs on our systems with the IBM Q Experience quantum cloud platform.


Introducing the first commercial trapped ion quantum computer. By manipulating individual atoms, it has the potential to one day solve problems beyond the capabilities of even the largest supercomputers.

Microsoft's quantum program is unique in that we focus on scaling each and every component of the system to deliver real quantum impact. This comprehensive approach involves:

· building a quantum computer using reliable, scalable, and fault-tolerant topological qubits,

· engineering a unique cryogenic control plane with low power and heat dissipation,

· developing a complete software stack to enable programming the quantum computer and controlling the system at scale.

The open source Quantum Development Kit (QDK) has been introduced to make quantum programming and algorithm development more accessible. Our high-level programming language, Q#, addresses the challenges of quantum programming. We designed Q# as a high-level quantum-focused programming language focused on algorithm and application development. The Q# compiler is integrated in a software stack that enables a quantum algorithm to be compiled down to the primitive operations of a quantum computer. Up to a certain scale (number of qubits), quantum computing can be simulated on a classical computer. Using simulation, you can start to write quantum programs today for running on quantum hardware tomorrow. We’ve also paired Q# with samples, libraries, and learning exercises to make it easy to begin quantum programming today.

Top Scientists working on Quantum Computing

Prof. Andrea Morello, University of New South Wales

Dr. Talia Gershon, IBM

The explosion of the Internet, rapid advances in computing power, cloud computing, and our ability to store more data than was even considered possible only two decades ago have helped fuel the Big Data revolution of the 21st century, but the rate of data collection is growing faster than our ability to process and analyze it.

In fact, 90 percent of all data produced in human history was produced within the last two years.

As scientific instruments continue to advance and even more data is accumulated, classical computing will be unable to process researchers' growing backlog of data.

Fortunately, scientists at MIT partnered with Google to mathematically demonstrate ways in which quantum computers, when paired with supervised machine learning, could achieve exponential increases in the speed of data categorization.

While only a theory now, once quantum computers scale sufficiently to process these data sets, this algorithm alone could process an unprecedented amount of data in record time.

Quantum Computing is not always preferred

Most experts believe the first quantum computer that can do the miraculous things its advocates promise is still a decade off but that hasn’t stopped IBM, Microsoft, Google, AT&T, and other heavyweights from pressing ahead in a race that represents the next Mt. Everest of computing challenges.

Real world projects

I, Arturo Devesa, am working on this:

An experiment with a classification problem using quantum-enhanced support vector machines

Classifying medical and non-medical internet articles on a quantum computer for faster classification, using a machine learning algorithm called the support vector machine (SVM).
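Since the quantum version depends on hardware access, here is a purely classical toy of the underlying quantum-kernel idea (all feature values and numbers below are invented): angle-encode each article's features into single-qubit states and use the squared state overlap as the SVM-style kernel.

```python
# Classical sketch of a "quantum kernel": features become rotation angles,
# and similarity is the squared overlap of the encoded product states.
# Classification here is a simple mean-similarity vote, not a full SVM.
import math

def encode(x):
    """Angle-encode features: each feature -> a single-qubit state (cos, sin)."""
    return [(math.cos(v / 2), math.sin(v / 2)) for v in x]

def kernel(x, y):
    """Squared overlap |<phi(x)|phi(y)>|^2 of the encoded product states."""
    overlap = 1.0
    for (cx, sx), (cy, sy) in zip(encode(x), encode(y)):
        overlap *= cx * cy + sx * sy
    return overlap ** 2

def classify(sample, medical, non_medical):
    """Assign the class whose training points are most similar on average."""
    med = sum(kernel(sample, m) for m in medical) / len(medical)
    non = sum(kernel(sample, n) for n in non_medical) / len(non_medical)
    return "medical" if med > non else "non-medical"

# Toy 2-feature "articles" (made-up numbers):
medical = [[0.1, 0.2], [0.2, 0.1]]
non_medical = [[2.8, 3.0], [3.0, 2.9]]
print(classify([0.15, 0.15], medical, non_medical))  # near the medical cluster
```

The hoped-for quantum advantage comes from feature maps whose kernels are hard to evaluate classically; this sketch only shows the shape of the computation.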

Other ML projects for qubits

Machine learning and optimization

In general, quantum computers aren’t challenged by the amount of computation needed. Instead, the challenge is that they can return only a limited number of answers and must restrict the size of their inputs. Because of this, machine learning problems often don’t make for a perfect fit, given their large amounts of input data. However, optimization problems are a type of machine learning problem that can be a good fit for a quantum computer.

Imagine you have a large factory and the goal is to maximize output. To do so, each individual process would need to be optimized on its own, as well as compared against the whole. Here the possible configurations of all the processes that need to be considered are exponentially larger than the size of the input data. With a search space exponentially bigger than the input data, optimization problems are feasible for a quantum computer.
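A toy brute-force version of that factory example (all gains and penalties below are made up) shows the blow-up: the input is just n on/off process settings, but the configurations to score number 2 to the power n. This is the regime quantum optimization heuristics target.

```python
# Brute-force a tiny factory-configuration problem: each process is on (1)
# or off (0); processes contribute a gain but some pairs interfere.
# With n settings there are 2**n configurations to evaluate.
from itertools import product

def output(cfg, gains, penalties):
    """Toy objective: per-process gains minus penalties for clashing pairs."""
    score = sum(g * x for g, x in zip(gains, cfg))
    score -= sum(p for (i, j), p in penalties.items() if cfg[i] and cfg[j])
    return score

gains = [3, 2, 4]         # made-up per-process gains
penalties = {(0, 2): 5}   # processes 0 and 2 interfere when both run
best = max(product((0, 1), repeat=len(gains)),
           key=lambda c: output(c, gains, penalties))
print(best, output(best, gains, penalties))
```

Here 3 settings give only 8 configurations, but 40 settings would already give over a trillion, which is why the search space, not the input size, is the bottleneck.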

Additionally, due to the unique requirements of quantum programming, one of the unexpected benefits of developing quantum algorithms is identifying new methods to solve problems. In many cases, these new methods can be brought back to classical computing, yielding significant improvements. Implementing these new techniques in the cloud is what we refer to as quantum-inspired algorithms.


Modelling molecules is a perfect application for quantum computing. In Richard Feynman’s own words, “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.”

While we have an accurate understanding of organic molecules—those with S and P orbitals—molecules whose orbitals interact with each other are currently beyond our ability to model accurately. Many of the answers we need to address significant issues, such as world hunger and global warming, come by way of understanding these more difficult molecules. Current technology doesn’t allow us to analyze some of the more complex molecules; however, this is an excellent problem for a quantum computer because the input and output are small. There’s a unique approach in quantum computing where, instead of loading the input data, you’re able to encode it into the quantum circuit itself. Modelling molecules is an example of this: the initial positions of the electrons would be the input—also referred to as ‘preparation’—and the final positions of the electrons would be the output.

Computational cost represents one of the major challenges for the future of ML. In particular, polynomial scaling in the number of data points might not be adequate in the age of large-scale ML. The quantum algorithms presented here allow the complexity of some currently used regularization methods to be reduced. We classified the quantum approaches into four main categories: linear algebra, neural networks, sampling, and optimization. The QML algorithms based on linear-algebra subroutines are those that promise the greatest computational advantages (i.e. exponential). However, it is not clear whether fundamental limitations related to how quickly these algorithms need to access memory might compromise their ability to speed up the analysis of classical data. Quantum methods for training neural networks, for sampling, and for optimization provide so far mostly quadratic advantages, and some of these might be implementable on first-generation quantum computers. Unfortunately, the theoretical framework on which they are based is not yet well established (e.g. the quantum Boltzmann machines described in §9), and only practical experiments will determine their true performance.

These results nonetheless point to the possibility of a quantum speed-up for some ML problems.
