Quantum physics is the branch of physics that deals with subatomic particles and the universe inside a tiny atom. The laws of physics within an atom don’t work the same way as they do everywhere else in the universe at large.
On the atomic scale, the rules shift, and the “classical” laws of physics that we take for granted in our daily world no longer necessarily apply. Read on as we get quantum computing explained.
As Richard Feynman, one of the greatest physicists of the twentieth century, once put it: “Things on a very small scale behave like nothing you have any direct experience about … or like anything you’ve ever seen.”
If you’ve studied light, you may already know a little quantum theory: sometimes a beam of light acts as though it were made up of particles, and sometimes as if it were waves of energy rippling through space.
That’s called wave-particle duality, and it’s one of the core principles of quantum theory. The idea that something can be a particle and a wave at once is hard to grasp because nothing in our daily experience works that way: a car is not a bicycle and a bus at the same time.
However, in quantum theory, that’s just the sort of weird thing that can happen. The most striking case is the famous thought experiment known as Schrödinger’s cat: briefly, quantum theory lets us imagine a scenario in which something like a cat could be alive and dead at the same time!
What is quantum computing?
What do computers have to do with all this? Suppose we keep pushing Moore’s law, shrinking transistors until they become so small that they obey the bizarre laws of quantum mechanics rather than the ordinary physics governing old-style transistors.
Could computers built this way do things our traditional computers cannot? And if we can show mathematically that they could, can we make them work that way in practice?
For many decades, people have been asking those questions. IBM research physicists Rolf Landauer and Charles H. Bennett were amongst the first.
In the 1960s, Landauer opened the door to quantum computing when he suggested that data was a physical entity that could be processed under the rules of physics.
One significant effect of this is that computers waste energy manipulating the bits within them (which is partially why computers use so much energy and get so hot, even though they don’t seem to do very much at all).
Building on Landauer’s work, Bennett showed in the 1970s how a computer could sidestep this problem by operating in a “reversible” way, so that, in principle, it could carry out enormously complex computations without dissipating huge amounts of energy.
In 1981, Argonne National Laboratory physicist Paul Benioff sketched out a simple machine that would work much like an ordinary computer but according to the principles of quantum physics.
The next year, Richard Feynman sketched roughly how simple computations could be carried out by a computer using quantum principles.
A few years later, David Deutsch (one of the leading lights of quantum computing), from Oxford University, explained the theoretical basis of a quantum computer in more detail.
How do quantum computers work? (quantum computing explained)
Quantum computers use qubits instead of bits. Instead of simply being on or off, a qubit can also be in what’s called “superposition”, where it is both on and off at once, or somewhere on a spectrum between the two.
If you flip a coin, it lands as either heads or tails. But while it is spinning, it has a chance of landing on heads and a chance of landing on tails; it only becomes one or the other when you stop it and look. Superposition is like the spinning coin, and it is one of the things that makes quantum computers so powerful.
Ask a standard computer to find its way out of a maze and it will try every branch in turn, ruling each one out until it finds the right route. A quantum computer can, in effect, explore every route of the maze at once, holding all the possibilities open.
It’s a bit like keeping your finger on the page of the adventure book you are reading. If your favorite character dies, you can immediately choose a different path instead of returning to the book’s start.
Classical computers carry out logical operations using the definite position of a physical state. These are typically binary, meaning the operations are based on one of two definite states.
In quantum computing, operations instead use an object’s quantum state to create what is known as a qubit. (A single definite state, such as on or off, up or down, 1 or 0, is called a bit.)
These quantum states are properties of an object, such as the spin of an electron or the polarization of a photon, that remain undefined until they are observed.
Rather than holding one definite value, unmeasured quantum states exist in a mixed “superposition”, not unlike a coin spinning through the air before it lands in your hand.
These superpositions can be entangled with those of other objects, meaning their final outcomes are mathematically related even before we know what they will be.
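The ideas above can be sketched numerically. A qubit’s state is just a pair of “amplitudes” whose squared sizes give the probabilities of measuring 0 or 1. Here is a minimal NumPy sketch; the names are illustrative, not taken from any real quantum library:

```python
import numpy as np

# A qubit state is a length-2 vector (alpha, beta);
# |alpha|^2 and |beta|^2 are the probabilities of measuring 0 and 1.
zero = np.array([1.0, 0.0])  # the definite "off" state

# The Hadamard gate turns a definite state into an equal superposition,
# like setting the coin spinning.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
spinning = H @ zero

probs = np.abs(spinning) ** 2
print(probs)  # [0.5 0.5] -- equal chance of 0 or 1 when measured

# "Measuring" collapses the superposition to one definite outcome.
outcome = np.random.choice([0, 1], p=probs)
```

This is only a classical simulation, of course: a real qubit stores those amplitudes physically, which is exactly what makes it hard to build.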
What can quantum computers do that ordinary computers can’t?
Quantum computers are not just about doing things faster or more efficiently. They promise to let us do things we couldn’t even conceive of without them, things that even the strongest supercomputer simply cannot do.
They could dramatically accelerate the development of artificial intelligence; Google is already using them to work on self-driving vehicle software. They will also be vital for modeling chemical reactions.
Right now, supercomputers can analyze only the simplest molecules. But quantum computers operate using the same quantum features as the molecules they are trying to simulate, so even the most challenging reactions should give them no trouble.
That could mean more efficient products of all kinds, from new battery technologies for electric cars to better, cheaper drugs and vastly improved solar panels. Scientists also hope quantum simulations could help find a cure for Alzheimer’s disease.
Although people sometimes assume that quantum computers must necessarily be better than traditional ones, that’s by no means certain.
So far, the only thing we know for sure that a quantum computer might do better than a regular one is factorization: finding two unknown prime numbers that give a third, known number when multiplied together.
Mathematician Peter Shor, working at Bell Laboratories in 1994, devised an algorithm that a quantum computer could follow to find a number’s prime factors dramatically faster than a conventional computer can.
Shor’s algorithm sparked huge interest in quantum computing because virtually every modern computer (and every secure online shopping and banking website) uses public-key encryption based on the practical impossibility of quickly finding prime factors; in other words, it is essentially an “intractable” computing problem.
If quantum computers could easily factor huge numbers, today’s online security could be rendered obsolete at a stroke.
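To get a feel for why factoring is believed to be hard classically, here is a minimal trial-division sketch. It is the simplest possible method; real attacks use much cleverer algorithms, but every known classical one still scales badly as the numbers grow:

```python
def prime_factors(n):
    """Factor n by trial division. The work grows roughly with sqrt(n),
    which is exponential in the number of digits of n."""
    factors = []
    f = 2
    while f * f <= n:
        while n % f == 0:
            factors.append(f)
            n //= f
        f += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

# Easy for small numbers...
print(prime_factors(15))    # [3, 5]
print(prime_factors(3233))  # [53, 61] -- a product of two primes
# ...but each extra digit multiplies the work, which is exactly
# what public-key encryption relies on.
```

A 500-digit number is utterly out of reach for this approach, and even the best classical algorithms only soften, rather than remove, that exponential wall.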
But what goes around comes around, and some researchers believe quantum technology will also lead to far stronger forms of encryption.
(In 2017, Chinese researchers showed for the first time how quantum encryption could be used to make a very secure video call from Beijing to Vienna.) Does that mean quantum computers are better than traditional ones? Not exactly.
Apart from Shor’s algorithm and Grover’s search algorithm, hardly any other algorithms have been found that a quantum computer would run better.
Given enough time and computing power, conventional computers should still be able to solve any problem that quantum computers could solve.
In other words, it remains to be proven that quantum computers, especially given the difficulty of actually building them, are generally superior to conventional ones.
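Grover’s search can actually be simulated classically for a handful of qubits. This sketch, in plain NumPy with illustrative names (a simulation, not a real quantum device), searches among 8 items and, after about π/4·√8 ≈ 2 iterations, concentrates almost all the probability on the marked item:

```python
import numpy as np

N = 8        # search space: 3 qubits give 2**3 = 8 items
marked = 5   # index of the item we are searching for

# Start in a uniform superposition over all N items.
state = np.full(N, 1 / np.sqrt(N))

for _ in range(int(np.pi / 4 * np.sqrt(N))):  # ~2 Grover iterations
    state[marked] *= -1               # oracle: flip the marked amplitude
    state = 2 * state.mean() - state  # diffusion: reflect about the mean

probs = state ** 2
print(probs.argmax())            # 5 -- the marked item dominates
print(round(probs[marked], 3))   # 0.945
```

A classical search needs about N/2 looks on average; Grover’s method needs only about √N, which is a real but modest speed-up compared with the exponential one Shor’s algorithm promises for factoring.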
Who knows how traditional computers might advance in the next 50 years, possibly rendering the whole concept of quantum computers obsolete or impractical. We hope this has gone some way toward getting quantum computing explained.
Building a quantum computer
With quantum computing explained in outline, it’s tempting to ask what building a practical quantum computer actually involves.
It requires holding an object in a superposition state long enough to carry out various processes on it. Unfortunately, as soon as a superposition meets materials that are part of a measurement system, it loses its in-between state and becomes a boring old classical bit, a process known as decoherence.
Devices must protect quantum states from decoherence while still keeping them easy to read. Different approaches attack this problem from different angles, whether by using more robust quantum processes or by finding better ways to check for errors.
When are we getting a working quantum computer?
Three decades after they were first proposed, quantum computers remain mostly theoretical. Even so, there has been some promising progress toward realizing one.
In 2000, there were two impressive breakthroughs. First, Isaac Chuang (now a professor at MIT, but then at IBM’s Almaden Research Center) used five fluorine atoms to build a primitive quantum computer of five qubits.
The same year, Los Alamos National Laboratory researchers found out how to use a drop of liquid to create a seven-qubit computer.
Researchers at the University of Innsbruck added an extra qubit five years later and created the first quantum machine capable of manipulating a qubyte (eight qubits).
These were tentative but necessary first steps. Over the following years, researchers announced more ambitious experiments, gradually increasing the number of qubits.
By 2011, D-Wave Systems, a pioneering Canadian company, announced in Nature that it had developed a 128-qubit machine. The announcement proved highly controversial, with considerable debate over whether the company’s machines were really displaying quantum behavior.
Three years later, Google announced that it was hiring a team of academics (including the physicist John Martinis from the University of California at Santa Barbara) to develop its own quantum computers based on the D-Wave approach.
In March 2015, the Google team announced it was “a step closer to quantum computation,” having created a new way for qubits to detect and protect against errors.
In 2016, Isaac Chuang of MIT and scientists from the University of Innsbruck unveiled a five-qubit, ion-trap quantum computer that could calculate the factors of 15; one day, a scaled-up version of this system might grow into the long-promised, fully fledged encryption buster.
There is no question that these are enormously significant developments. And the signs that quantum technology will finally bring a computing revolution are gradually becoming more promising.
In December 2017, Microsoft released a complete quantum development kit, including a new programming language, Q#, designed specifically for quantum applications.
In early 2018, D-Wave announced plans to begin rolling out quantum power to a cloud computing platform.
A few weeks later, Google unveiled Bristlecone, a 72-qubit array-based quantum processor that could, one day, form the foundation of a quantum computer that could solve real-world issues.
In October 2019, Google declared it had achieved another milestone: “quantum supremacy” (the point at which a quantum computer can beat a conventional machine at a typical computing task). Not everyone was convinced, and IBM, for example, challenged the claim. One thing, though, is beyond dispute: quantum computing is fascinating!
Even so, it’s early days for the entire field, and most experts believe we’re unlikely to see functional quantum computers for several years, and more likely not for decades.
What will change with quantum computing?
Now that we’ve got quantum computing explained, at least in part, what changes will it bring? RSA encryption relies on the practical impossibility of finding the prime factors of a large integer, say one 500 digits long, on a classical machine.
Using our most powerful algorithms on a classical device, this could take hundreds of years, and thousands of years if you simply tried to brute-force your way to an answer.
We know this problem is solvable with a quantum computer because Peter Shor solved it, on paper, back in 1994.
His solution, now known as Shor’s algorithm, needs a sufficiently powerful quantum computer to run on before it can crack RSA encryption, and no such machine exists yet.
When one does, the way we currently secure data will be about as effective as protecting your house with a hook-and-loop latch on the front door. As far as we know, we will have to develop an entirely new way of protecting all our existing information.
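To make the stakes concrete, here is a toy RSA round trip with tiny textbook numbers (p = 61, q = 53; real keys use primes hundreds of digits long). It shows that anyone who can factor the public modulus can rebuild the private key, which is exactly what Shor’s algorithm would make feasible at real key sizes:

```python
# Toy RSA -- illustrative only; real RSA adds padding and huge primes.
p, q = 61, 53            # the secret primes
n = p * q                # public modulus: 3233
e = 17                   # public exponent
phi = (p - 1) * (q - 1)  # only computable if you know p and q
d = pow(e, -1, phi)      # private exponent (Python 3.8+ modular inverse)

message = 65
cipher = pow(message, e, n)          # anyone can encrypt with (n, e)
assert pow(cipher, d, n) == message  # only the key holder can decrypt

# An attacker who factors n (trivial here, infeasible for real key
# sizes on classical hardware) recovers the private key:
def factor(n):
    for f in range(2, int(n**0.5) + 1):
        if n % f == 0:
            return f, n // f

fp, fq = factor(n)
d_attacker = pow(e, -1, (fp - 1) * (fq - 1))
print(pow(cipher, d_attacker, n))  # the attacker reads the message: 65
```

Everything secret about RSA lives in the factors of n, so a machine that factors quickly breaks the whole scheme, not just one message.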
As for the things we expect quantum computing to improve, the first candidate is the way we organize macro-level structures such as telecommunications networks and roads.
Working out how to balance such systems optimally between cost and usefulness is one example of the kind of problem quantum computers might solve via the quantum superposition of qubits.