Jerry Chow
Editor’s note: This
article is by Jerry Chow, manager of Experimental Quantum Computing at IBM
Research.
Quantum computers promise to open up new capabilities in fields such as decryption and simulation that are beyond the reach of today’s computers. When made a reality, the performance improvement will come from their fundamental unit of information: the qubit. Qubits are two-level systems that obey the laws of quantum mechanics. Imagine emulating a quantum computer using today’s approaches to building the largest computer systems in the world. Because the memory needed to track a quantum state doubles with every added qubit, a quantum computer with just 50 qubits could not be successfully emulated by any combination of today’s Top500 supercomputers. The holy grail of quantum applications is to perform tasks like large-number factorization and simulation of complex quantum systems, problems that are intractable on today’s supercomputers.
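To see why emulation breaks down so quickly, here is a back-of-the-envelope sketch (my own illustration, assuming a dense state-vector simulator that stores one 16-byte complex amplitude per basis state):

```python
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed to hold the full 2**n complex state vector."""
    return 2 ** n_qubits * bytes_per_amplitude

for n in (30, 40, 50):
    gib = state_vector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {gib:>12,.0f} GiB")

# 30 qubits:           16 GiB  (one large server)
# 40 qubits:       16,384 GiB  (a sizable cluster)
# 50 qubits:   16,777,216 GiB  (~16 PiB, beyond any single machine)
```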
But like today’s machines, quantum computers suffer from errors,
and, worse, these errors seem to be fundamental, since quantum information is
so fragile. Our team at the Thomas J. Watson Research Center published results in Nature Communications, “Implementing a strand of a scalable fault-tolerant quantum computing fabric” (doi: 10.1038/ncomms5015) (1), about recent experimental steps toward a “surface code” that shows promise for correcting these errors, bringing fault-tolerant quantum computers a step closer to reality.
Understanding a qubit’s peculiar properties
3-qubit, 5-resonator device
The classical equivalent of a qubit is the digital bit, the “1”
and “0” ubiquitous in all modern computers. Qubits, though, can exist in some
combination of 0 and 1 simultaneously, a different state altogether that is
called a “superposition.” When qubits interact with each other, they can form a
special kind of superposition that is called entanglement. Entangled states
exhibit perfect correlation no matter how far the qubits are separated in
space, and this may be one of the phenomena that grant quantum computing its
power.
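As a concrete illustration (a minimal sketch using textbook amplitudes, not code from our experiments), the two-qubit Bell state (|00> + |11>)/√2 is an entangled superposition whose measurement outcomes are perfectly correlated:

```python
import numpy as np

# Bell state (|00> + |11>) / sqrt(2): equal superposition of |00> and |11>.
bell = np.zeros(4)
bell[0b00] = 1 / np.sqrt(2)
bell[0b11] = 1 / np.sqrt(2)

# Measurement probabilities are the squared amplitudes.
for basis, amp in zip(("00", "01", "10", "11"), bell):
    print(f"P({basis}) = {amp ** 2:.2f}")

# Only 00 and 11 ever occur (P = 0.50 each): measuring one qubit tells you
# the other's value with certainty, however far apart the qubits are.
```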
Entanglement is necessary for quantum computing, but can also lead
to errors when it occurs between the quantum computer and the environment (i.e.
anything that is not the computer itself). Quantum effects disappear when the
system entangles too strongly to the external world, which makes quantum states
very fragile. Yet, there is a kind of tension, since the quantum computer must
be coupled to the external world so the user can run programs on it and read
the output from those programs.
This need to couple the quantum computer to its environment sets a
limit to how well the system can maintain its quantum behavior. And as it
interacts more strongly with the world, errors are introduced in the
computation. How long a qubit retains its quantum properties is referred to as
the coherence time and is a common metric to benchmark the quality of a qubit.
The art therefore lies in building quantum systems with reduced errors and long
coherence times.
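As a toy illustration of what this metric captures (the 100-microsecond figure below is a hypothetical value, and real decoherence involves several distinct decay channels), the probability that a qubit still holds its state falls off roughly exponentially with time:

```python
import numpy as np

T_coherence = 100e-6  # hypothetical coherence time of 100 microseconds

for t in (1e-6, 10e-6, 100e-6, 1e-3):
    survival = np.exp(-t / T_coherence)  # toy exponential-decay model
    print(f"after {t * 1e6:>6.0f} us: P(state intact) ~ {survival:.3f}")
```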
Quantum error correction theories
In order to build a fully functional, large-scale, universal, fault-tolerant quantum computer, we will need both long coherence times and a way to deal with the errors that arise from manipulating the quantum computer. The path forward is quantum error correction (QEC), a
robust theory which has been developed from the ideas of classical error
correction in order to deal with errors in qubits. In classical error
correction, a bit (taking values 0 or 1) is encoded into multiple physical
bits. For example, three physical bits, 000, can encode the logical bit value
of 0. If any one of the physical bits happens to flip its state because an
error has occurred (001), the original logical value (0) can still be recovered
by “majority voting” (the two 0s outvote the single 1).
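In code form, the classical scheme is tiny (a minimal sketch of the majority-vote idea just described):

```python
from collections import Counter

def encode(bit: int) -> list[int]:
    """Repetition code: 0 -> [0, 0, 0], 1 -> [1, 1, 1]."""
    return [bit] * 3

def decode(bits: list[int]) -> int:
    """Majority vote recovers the logical bit despite one flipped copy."""
    return Counter(bits).most_common(1)[0][0]

codeword = encode(0)      # [0, 0, 0]
codeword[2] ^= 1          # a single bit-flip error: [0, 0, 1]
print(decode(codeword))   # prints 0, so the logical value survives the error
```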
Encoding qubits is substantially more challenging than encoding bits. For one, qubits cannot be cloned, so we cannot simply copy a qubit into an analogous “000” state. We also can’t “see” the quantum
information in the same way because looking at or measuring the qubit, which
could be in any superposition of “0” or “1”, forces the state to choose either
“0” or “1”. But it turns out that all of these problems can be overcome by the
clever use of entanglement and superposition.
A magnified look at a single quantum bit
QEC protocols rely on parity measurements. An example of a QEC protocol that protects a logical qubit from a single bit-flip error is the three-qubit bit-flip code, a building block of Shor’s nine-qubit code. In that code, via superposition and entanglement, pairs of qubits in a three-qubit register can be made to interact in a way that yields parity information (i.e., whether a pair is 00 or 11, having even parity, or 01 or 10, having odd parity). By accumulating this parity information from the register, it is then possible to detect and locate a single error in the qubit register.
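To see how two pairwise parities pinpoint a single error, here is the classical analogue (my own sketch; the quantum protocol extracts the same parities through entangling operations without directly measuring the code qubits):

```python
def parities(q: list[int]) -> tuple[int, int]:
    """Pairwise parities of a three-bit register: (q0 xor q1, q1 xor q2)."""
    return q[0] ^ q[1], q[1] ^ q[2]

# Each parity pattern points to the single position that flipped, if any.
locate = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

register = [0, 0, 0]
register[1] ^= 1                    # a single bit-flip on position 1
print(locate[parities(register)])   # prints 1: the error has been located
```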
But to make a fully-fledged quantum computer work, we need codes
that protect against a continuum of errors on multiple qubits. Our team is
focused on the surface code, which has a high error threshold and requires only nearest-neighbor parity checks. This means that error rates
do not need to be excessively low to see the benefits of coding, and each
operation we need to do only involves a few physically adjacent qubits. This
makes the surface code an attractive option for an experimental demonstration
with superconducting qubits.
Surface code with superconducting qubits
We have been exploring superconducting qubits to build a universal
quantum computer based on the surface code architecture for quantum
error correction. Because their properties can be engineered and the devices manufactured using standard silicon fabrication techniques, we anticipate that once a
handful of superconducting qubits can be manufactured reliably and repeatedly,
and controlled with low error rates, there will be no fundamental obstacle to
scaling up to thousands of qubits and beyond.
Coherence times for superconducting qubits have been increasing
steadily for the past 10-15 years, and in 2010, those values, together with the
ability to couple and control multiple qubits with low error rates, reached a
point where we could start to consider potentially scalable architectures. In
our newest paper, we combined a number of state-of-the-art advances within
superconducting qubits in order to demonstrate a crucial stepping stone towards
the surface code quantum error correction architecture. Using a network of three superconducting qubits, we successfully detected the parity of two
“code” qubits via the measurement of a third “syndrome” qubit (the “error
detection” qubit). A larger surface code system would involve similar parity
checks as we have demonstrated in this reduced system.
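Schematically (a minimal state-vector sketch of such a parity check, my own illustration rather than the superconducting-circuit implementation in the paper), two controlled-NOT gates copy the joint parity of the code qubits onto the syndrome qubit, which is then measured:

```python
import numpy as np

def cnot(state: np.ndarray, control: int, target: int, n: int = 3) -> np.ndarray:
    """Apply a CNOT to an n-qubit state vector; qubit 0 is the leftmost bit."""
    out = np.zeros_like(state)
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:         # control bit set: flip target
            out[i ^ (1 << (n - 1 - target))] += state[i]
        else:                                    # control bit clear: leave alone
            out[i] += state[i]
    return out

# Code qubits 0 and 1 start in |01>; syndrome qubit 2 starts in |0>.
state = np.zeros(8)
state[0b010] = 1.0

state = cnot(state, control=0, target=2)  # XOR qubit 0's value into the syndrome
state = cnot(state, control=1, target=2)  # XOR qubit 1's value into the syndrome

# The syndrome qubit reads 1 exactly when the code qubits have odd parity.
p_odd = sum(abs(a) ** 2 for i, a in enumerate(state) if i & 1)
print(f"P(syndrome = 1) = {p_odd:.0f}")   # 1: code qubits 0 and 1 disagree
```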
Our result, together with recent findings on high-accuracy controls from UC Santa Barbara, shows the promise of superconducting qubits. The architectural and engineering challenges that lie ahead can now be addressed on the path toward a fault-tolerant quantum computer.
(1) Implementing a strand of a scalable fault-tolerant quantum computing fabric, Nature Communications (doi: 10.1038/ncomms5015).
IBM Thomas J. Watson Research Center: Jerry M. Chow, Jay M. Gambetta, Easwar Magesan, David W. Abraham, Andrew W. Cross, Nicholas A. Masluk, John A. Smolin, Srikanth J. Srinivasan, M. Steffen.
Raytheon BBN Technologies: B. R. Johnson, Colm A. Ryan.
Labels: quantum computing, quantum error correction, qubit, UC Santa Barbara