Seminar Report On QUANTUM COMPUTERS



A quantum computer is a device that can arbitrarily manipulate the quantum state of a part of itself. The field of quantum computation is largely a body of theoretical promises for some impressively fast algorithms which could be executed on quantum computers. However, since the first significant algorithm was proposed in 1994, experimental progress has been rapid, with several schemes yielding two- and three-qubit manipulations.
                               
Quantum computers were first discussed by Benioff in the context of simulating classical Turing machines (very elementary conventional computers) with quantum unitary evolution. Feynman considered the converse question of how well classical computers can simulate quantum systems. He concluded that classical computers invariably suffer from an exponential slow-down in trying to simulate quantum systems, but that quantum systems could, in principle, simulate each other without this slowdown. It was Deutsch, however, who first suggested that quantum superposition might allow quantum evolution to perform many classical computations in parallel.

A schematic model of a quantum computer is described, as well as some of the subtleties in its programming. Shor's algorithm for efficiently factoring numbers on a quantum computer is presented in two parts: the quantum procedure within the algorithm and the classical algorithm that calls the quantum procedure. The mathematical structure within the factoring problem is discussed, making it clear what contribution the quantum computer makes to the calculation. The complexity of Shor's algorithm is compared to that of factoring on conventional machines, and its relevance to public key cryptography is noted. In addition, we discuss the experimental status of the field and also quantum error correction, which may in the long run help solve some of the most pressing difficulties. We conclude with an outlook on the feasibility and prospects for quantum computation in the coming years.

  
2. DEFINITION OF QUANTUM COMPUTERS

A quantum computer is a machine that performs calculations based on the laws of quantum mechanics, which describe the behavior of particles at the sub-atomic level.
The technology of quantum computers is also very different. For its operation, a quantum computer uses quantum bits (qubits). The laws of quantum mechanics are completely different from the laws of classical physics. A qubit can exist not only in the states corresponding to the logical values 0 or 1, as in the case of a classical bit, but also in a superposition state, so it is not restricted to two discrete values.
A qubit is a bit of information that can be both zero and one simultaneously (a superposition state). Thus, a computer working on qubits rather than standard bits can make calculations using both values simultaneously. A qubyte is made up of eight qubits and can hold all values from zero to 255 simultaneously. "Multi-qubyte systems have a power beyond anything possible with classical computers." (Quantum Computers & Moore's Law, p.1).
Forty qubits could have the same power as modern supercomputers. According to Chuang, a supercomputer needs about a month to find a phone number in a database consisting of the world's phone books, whereas a quantum computer is able to solve this task in 27 minutes.

The Major Difference between Quantum and Classical Computers is
Memory: the memory of a classical computer is a string of 0s and 1s, and it can perform calculations on only one set of numbers at a time. The memory of a quantum computer is a quantum state that can be a superposition of different numbers. A quantum computer can perform an arbitrary reversible classical computation on all the numbers simultaneously. Performing a computation on many different numbers at the same time and then interfering all the results to get a single answer makes a quantum computer much more powerful than a classical one.
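
To make this concrete, here is a minimal sketch in Python (our illustration, not from the report's sources): the state of a 3-qubit register is a vector of 8 amplitudes, a reversible classical operation is a permutation of the basis states, and applying it once acts on all 8 stored numbers together. The function increment is a hypothetical example operation.

    import numpy as np

    n = 3                          # qubits in the register
    N = 2 ** n                     # 8 basis states |0> .. |7>

    # Uniform superposition: every 3-bit number held with amplitude 1/sqrt(8)
    state = np.full(N, 1 / np.sqrt(N))

    # A reversible classical operation: increment modulo 8 (a permutation)
    def increment(x):
        return (x + 1) % N

    # One application permutes all 8 amplitudes at once
    permutation = [increment(x) for x in range(N)]
    new_state = np.zeros(N)
    new_state[permutation] = state   # amplitude of |x> moves to |x+1 mod 8>
    print(new_state)                 # still uniform: every value was updated in one step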

3. Quantum Mechanics
Quantum mechanics is generally about the novel behavior of very small things. At this scale matter becomes quantized, which means that it can be subdivided no more. Quantum mechanics has never been wrong; it explains why the stars shine, how matter is structured, the periodic table, and countless other phenomena. One day scientists hope to use quantum mechanics to explain everything, but at present the theory remains incomplete, as it has not been successfully combined with classical theories of gravity. Some strange effects happen at the quantum scale. The following effects are important for quantum computing:

1. Superposition and interference

2. Uncertainty

3. Entanglement.

This chapter is broken into two parts. In the first part we’ll look briefly at the History of quantum mechanics. Then, in the second part we will examine some important concepts (like the ones above) of quantum mechanics and how they relate to quantum computing.
The main references used for this chapter are Introducing Quantum Theory by J.P. McEvoy and Oscar Zarate, and Quantum Physics: Illusion or Reality? by Alastair Rae. Both of these are very accessible introductory books.

3.1 History

3.1.1 Classical Physics
Classical physics roughly means pre-20th century physics, or pre-quantum physics. Two of the most important classical theories are electromagnetism, developed by James Clerk Maxwell, 1831 - 1879, and mechanics, developed by Isaac Newton, 1642 - 1727, who is often regarded as the most important scientist of all time due to the large body of work he produced that is still relevant today. Prior to this, Nicolas Copernicus, 1473 - 1543 and Galileo Galilei, 1564 - 1642 created the modern scientific method (we might also include Leonardo da Vinci, 1452 - 1519) by testing theories with observation and experimentation. Classical physics has a number of fundamental assumptions; they are:

1       The universe is a giant machine.
2       Cause and effect, i.e. all non-uniform motion and action is caused by something (Uniform motion doesn’t need a cause; this is Galileo’s principle of Inertia).
3       Determinism: if a state of motion is known now then, because the universe is predictable, we can say exactly what it has been and what it will be at any time.
4       Light is a wave that is completely described by Maxwell’s wave equations. These are four equations that describe all electric and magnetic phenomena.
5       Particles and waves exist, but they are distinct.
6       We can measure any system to an arbitrary accuracy and correct for any errors caused by the measuring tool.

3.1.2 Important Concepts
In the lead up to quantum mechanics there are some important concepts from Classical physics that we should look at. These are the concepts of atoms, thermodynamics, and statistical analysis.

Atoms
Atoms are defined as indivisible parts of matter, first postulated by Democritus, 460 - 370 BC. The idea was dismissed as a waste of time by Aristotle, 384 - 322 BC, but two thousand years later the idea started gaining acceptance. The first major breakthrough came in 1806 when John Dalton predicted properties of elements and compounds using the atomic concept.

Thermodynamics:
Thermodynamics is the theory of heat energy. Heat is understood to be disordered energy; e.g. the heat energy in a gas is the kinetic energy of all the molecules. The temperature is a measure of how fast the molecules are traveling (if a gas or liquid; if a solid, how fast they are vibrating about their fixed positions).
Thermodynamics is made up of two laws:

i.       The first law of thermodynamics
In a closed system, whenever a certain amount of energy disappears in one place an equivalent amount must appear elsewhere in the system in some form. This law of conservation of energy was originally stated by Hermann von Helmholtz, 1821 - 1894.

ii.     The second law of thermodynamics
The total entropy of a system increases when heat flows from a hot body to a cold one. Heat always flows from hot to cold [Rae, A. 1996, p.18].
Rudolf Clausius, 1822 - 1888 called the previous law the first of the two laws. He introduced a new concept, entropy, which in terms of heat transfer is the heat transferred divided by the temperature at which the transfer takes place. This implies that an isolated system's entropy is always increasing until the system reaches thermal equilibrium (i.e. all parts of the system are at the same temperature).

3.1.3 Statistical Mechanics
In 1859, J.C. Maxwell, using the atomic model, came up with a way of statistically averaging the velocities of randomly chosen molecules of a gas in a closed system like a box (because it was impossible to track each one). The resulting distribution is shown in figure 4.1; remember that hotter molecules tend to go faster. The graph's n axis denotes the number of molecules (this particular graph shows CO2) and the v axis denotes velocity. The letters a, b, and c represent molecules at 100 K, 400 K, and 1600 K respectively.

In the 1870s Ludwig Boltzmann, 1844 - 1906 generalized the theory to any collection of entities that interact randomly, are independent, and are free to move. He rewrote the second law of thermodynamics to say: as the energy in a system degrades, the system's atoms become more disordered and there is an increase in entropy. To measure this disorder we consider the number of configurations or states that the collection of atoms can be in.

If this number is W then the entropy S is defined as:

S = k logW

Where k is Boltzmann's constant, k = 1.38 × 10^-23 J/K.

So the behavior of "large things" could now be predicted by the average statistical behavior of their smaller parts, which is important for quantum mechanics. There also remains the probability that a fluctuation can occur, a statistical improbability that may seem nonsensical, but nonetheless the theory must cater for it. For example, if we have a box containing a gas, a fluctuation could be all particles of the gas randomly clumping together in one corner of the box.
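
As a toy numerical illustration of Boltzmann's formula (ours; the value of W is an arbitrary example):

    import math

    k = 1.38e-23               # Boltzmann's constant in J/K
    W = 10 ** 20               # hypothetical number of configurations

    S = k * math.log(W)        # S = k log W
    print(f"S = {S:.3e} J/K")  # ~6.4e-22 J/K for this W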

3.1.4 Important Experiments
There are two major periods in the development of quantum theory, the first culminating in 1913 with the Niels Bohr, 1885 - 1962 model of the atom and ending in about 1924. This is called old quantum theory. The new quantum theory began in 1925. The old quantum theory was developed in some part to explain the results of three experiments which could not be explained by classical physics. They are:
1       Black body radiation and the ultraviolet catastrophe.
2       The Photoelectric effect.
3       Bright Line Spectra.

These experiments and their subsequent explanations are described in the next three sections.

3.1.5    Black Body Radiation
A black body absorbs all electromagnetic radiation (light) that falls on it and would appear black to an observer because it reflects no light. To determine the temperature of a black body we have to observe the radiation emitted from it.

Associates of Max Planck, 1858 - 1947, measured the distribution of radiation and energy over frequency in a cavity, a kind of oven with a little hole for a small amount of heat (light, radiation) to escape for observation. Because the radiation is confined in the cavity it settles down to an equilibrium distribution, like that of the molecules in a gas. They found the frequency distributions to be similar to Maxwell's velocity distributions. The color of the light emitted is dependent on the temperature, e.g. the element of your electric stove goes from red hot to white hot as the temperature increases.

It didn't take long for physicists to apply a Maxwell-style statistical analysis to the electromagnetic energy of the waves present in the cavity. The difference is that classical physics saw waves as continuous, which means that more and more waves could be packed into a "box" as the wavelengths get smaller, i.e. as the frequency gets higher. This means that as the temperature was raised the radiation should keep getting stronger and stronger indefinitely. This was called the ultraviolet catastrophe. If nature did indeed behave in this way you would get singed sitting in front of a fire by all the ultraviolet light coming out of it. Fortunately this doesn't occur, so the catastrophe is not in nature but in classical physics, which predicted something that doesn't happen. The results of several experiments had given the correct frequency distributions, and it was Max Planck who found a formula that matched the results. He couldn't find a classical solution, so grudgingly he used Boltzmann's version of the second law of thermodynamics. Planck imagined that the waves emitted from the black body were produced by a finite number of tiny oscillators (a kind of precursor to modern atoms). Eventually he had to divide the energy into finite chunks of a certain size to fit his own radiation formula. This finally gives us the first important formula for quantum mechanics:

E = hf……………………………………………………….. (4.2)

Where E is energy, f is frequency and h is Planck’s constant which is:

h = 6.626 × 10^-34 joule seconds ………………………….. (4.3)
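
As a quick worked example (the frequency is our arbitrary choice, not from the report), Planck's formula gives the energy of a single quantum of red light:

    h = 6.626e-34            # Planck's constant in J s
    f = 4.5e14               # frequency of red light in Hz (example value)

    E = h * f                # Planck's formula: E = hf
    print(f"E = {E:.3e} J")  # ~3.0e-19 J per quantum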

3.1.6 The Photoelectric Effect
If light is shone onto certain kinds of material (e.g. some metals or semiconductors) then electrons are released. When this effect was examined it was found that the results of the experiments did not agree with classical electromagnetic theory, which predicted that the energy of the released electron should depend on the intensity of the incident light wave. However, it was found that the energy released was dependent not on intensity (an electron would come out no matter how low the intensity was) but on the frequency of the light, in accordance with E = hf.

Albert Einstein, 1879 - 1955 (figure 4.7) showed that if we look at the light as a collection of particles carrying energy proportional to the frequency (as given by Planck's law E = hf), and if those particles can transfer energy to electrons in a target metal, then the experimental results can be explained. Put simply, a light particle hits the metal's surface and its energy is transferred to an electron and becomes kinetic energy, so the electron is ejected from the metal. With different kinds of metals it can be easier or harder for electrons to escape.

3.1.7 Bright Line Spectra
When a solid is heated and emits white light then, if that light is concentrated and broken up into its separate colors by a prism, we get a rainbow-like spectrum (a continuous spectrum) like the following:
If we do the same thing with a hot gas emitting light then the spectrum consists of a number of bright lines that have the colors of the rainbow above, with dark regions in between. The spectrum for this, which is called an emission spectrum, is shown below.
If a cold gas is placed between a hot solid emitting white light and the prism we get the inverse of the above. This is called an absorption spectrum, shown below.
The hot gas is emitting light at certain frequencies, and the third example shows us that the cold gas is absorbing light at the same frequencies.
These lines are different for each element, and they allow us to determine the composition of a gas, even at astronomical distances, by observing its spectrum. In 1885 Johann Jacob Balmer, 1825 - 1898 derived a formula for the spectral lines of hydrogen, which in frequency form is:

f = R (1/nf^2 - 1/ni^2) ………………………………………………. (4.4)

Where R is the Rydberg constant of 3.29163 × 10^15 cycles/second and nf and ni are whole numbers. The trouble was that no one knew how to explain the formula. The explanation came in 1913 with Niels Bohr's atomic model.
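
To illustrate (a sketch of ours; the choice of transition is our own), the formula reproduces the red Balmer line of hydrogen, the jump from ni = 3 to nf = 2:

    R = 3.29163e15                    # Rydberg constant in cycles/second
    c = 3.0e8                         # speed of light in m/s

    nf, ni = 2, 3                     # the red (H-alpha) Balmer transition
    f = R * (1 / nf**2 - 1 / ni**2)   # frequency of the emitted line

    print(f"f = {f:.3e} Hz")                     # ~4.57e14 Hz
    print(f"wavelength = {c / f * 1e9:.0f} nm")  # ~656 nm, the familiar red line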

3.1.8 Proto Quantum Mechanics
During the last part of the 19th century it was discovered that a number of "rays" were actually particles. One of these particles was the electron, discovered by Joseph J. Thomson, 1856 - 1940. In a study of cathode ray tubes Thomson showed that electrically charged particles (electrons) are emitted when a wire is heated. Thomson went on to help develop the first model of the atom, which had his (negatively charged) electrons contained within a positively charged sphere (figure 4.8).
This first atomic model was called the Christmas pudding model. Then, in 1911, Ernest Rutherford, 1871 - 1937 (figure 4.9), developed a new model, which was found by firing alpha particles (helium ions) at gold foil and observing that, very occasionally, one would bounce back. This model had a tiny but massive nucleus surrounded by electrons (figure 4.9).

The new model was like a mini solar system with electrons orbiting the nucleus, but the atomic model was thought to still follow the rules of classical physics. However, according to classical electromagnetic theory an orbiting electron, subject to centripetal acceleration (the electron is attracted by the positively charged nucleus) would radiate energy and so rapidly spiral in towards the nucleus.
But this did not happen: atoms were stable, and all the atoms of an element emitted the same line spectrum. To explain this, Bohr assumed that the atom could exist in only certain stationary states - stationary because, even if the electron was orbiting in such a state (and, later, this was questioned by Heisenberg), it would not radiate, despite what electromagnetic theory said. However, if the electron jumped from a stationary state to one of lower energy, then the transition was accompanied by the emission of a photon; vice versa, there was absorption of light in going from a lower to a higher energy. In this scheme there was a lowest stationary state, called the ground state, below which the electron could not jump; so this restored stability to the atom.
The frequency of the light emitted in a jump was given by Einstein's formula:

f =E/h……………………………………………..(4.5)

Where E is the difference in the energies of the stationary states involved. These energies of the stationary states could be calculated from classical physics if one additional assumption was introduced: that the orbital angular momentum is an integer multiple of Planck's constant divided by 2π. Then the calculated frequencies were found to agree with those observed.
So Bohr developed a model based on stable orbital shells which only gave a certain number of shells to each atom. Bohr quantized electron orbits in units of Planck's constant. He gave us the first of several quantum numbers which are useful in quantum computing: the shell number, n (see figure 4.10). Of particular interest are the ground state at n = 1 and the excited states at n > 1 of an atom. Bohr developed a formula for the radius of the electron orbits in a hydrogen atom, which in Gaussian units reads:

r = n^2 h^2 / (4π^2 m q^2) ………………………………………………. (4.6)

Where r is the radius of the orbital, h is Planck's constant, and m and q are the mass and charge of the electron. In real terms the value of r is 5.3 × 10^-11 meters (0.053 nanometers) for n = 1.
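
A quick numerical check (our sketch, using the equivalent SI form r = ε0 n^2 h^2 / (π m e^2) and standard constant values):

    import math

    h = 6.626e-34      # Planck's constant, J s
    m = 9.109e-31      # electron mass, kg
    e = 1.602e-19      # electron charge, C
    eps0 = 8.854e-12   # vacuum permittivity, F/m

    n = 1              # ground state
    r = eps0 * n**2 * h**2 / (math.pi * m * e**2)
    print(f"r = {r:.3e} m")   # ~5.3e-11 m, i.e. 0.053 nanometers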
Bohr went on with this model to derive Balmer's formula for hydrogen from two postulates:

1.     Quantum angular momentum: the orbital angular momentum is quantized,

L = nh/2π ………………………………………………. (4.7)

2.     A jump between orbitals will emit or absorb radiation by:

hf = Ei - Ef ………………………………………………. (4.8)

Where Ei is the initial energy of the electron and Ef is the final energy of the electron. Although very close, this didn't quite match up to the spectral line data. Arnold Sommerfeld, 1868 - 1951 then proposed a new model with elliptical orbits, and a new quantum number, k, was added to deal with the shape of the orbit. Bohr then introduced the quantum number m to explain the Zeeman effect, which produced extra spectral lines when a magnetic field was applied to the atom (i.e. m encodes the direction the field was pointing).
It was soon discovered that m could not account for all the spectral lines produced by magnetic fields. Wolfgang Pauli, 1900 - 1958 hypothesized another quantum number to account for this. It was suggested that the electron was "spinning around"; Pauli did not accept this literal picture, and it turns out he was right, but the name stuck, so we still use spin up and spin down to describe this property of an electron. Pauli then described why electrons fill the higher energy levels rather than all occupying the ground state, which we now call the Pauli exclusion principle. Niels Bohr went on to explain the periodic table in terms of orbital shells, with the outermost shell being the valence shell that allows binding and the formation of molecules.

3.1.9 The New Theory of Quantum Mechanics
In 1909, a few years after demonstrating the photoelectric effect, Einstein used his photon hypothesis to obtain a simple derivation of Planck's black body distribution. Planck himself had not gone as far as Einstein: he had indeed assumed that the transfer of energy between matter (the oscillators in the walls of the cavity) and radiation was quantized (i.e. the energy transferred to/from an oscillator occurred in "grains" of h times the frequency of the oscillator). But Planck had assumed the energy in the electromagnetic field, in the cavity, was continuously distributed, as in classical theory. By contrast, it was Einstein's hypothesis that the energy in the field itself was quantized: that for certain purposes, the field behaved like an ideal gas, not of molecules, but of photons, each with energy h times frequency, with the number of photons being proportional to the intensity. The clue to this was Einstein's observation that the high frequency part of Planck's distribution for black body radiation (described by Wien's law) could be derived by assuming a gas of photons and applying statistical mechanics to it. This was in contrast to the low frequency part (described by the Rayleigh-Jeans law), which could be successfully obtained using classical electromagnetic theory, i.e. assuming waves. So you had both particles and waves playing a part. Furthermore, Einstein looked at fluctuations of the energy about its average value, and observed that the formula obtained had two forms, one which you would get if light was made up of waves and the other if it was made up of particles. Hence we have wave-particle duality.
In 1924, Louis de Broglie, 1892 - 1987 extended the wave-particle duality for light to all matter. He stated: the motion of a particle of any sort is associated with the propagation of a wave. This is the idea of a pilot wave which guides a free particle's motion. De Broglie then suggested the idea of electron waves be extended to bound particles in atoms, meaning electrons move around the nucleus guided by pilot waves. So, again a duality: de Broglie waves and Bohr's particles. De Broglie was able to show that Bohr's orbital radii could be obtained by fitting a whole number of waves around the nucleus. This gave an explanation of Bohr's angular momentum quantum condition (see above).

The new quantum theory was developed between June 1925 and June 1926. Werner Heisenberg, 1901 - 1976, using a totally different and simpler atomic model (one that did not use orbits), worked out a code to connect quantum numbers and spectra. He also discovered that quantum mechanics does not follow the commutative law of multiplication, i.e. pq ≠ qp. When Max Born, 1882 - 1970 saw this he suggested that Heisenberg use matrices. This became matrix mechanics, and eventually all the spectral lines and quantum numbers were deduced for hydrogen. The first complete version of quantum mechanics was born. It's interesting to note that it was not observation or visualization that was used to deduce the theory - but pure mathematics.
Later we will see matrices cropping up in quantum computing.

At around the same time Erwin Schrödinger, 1887 - 1961 built on de Broglie's work on matter waves. He developed a wave equation (for which ψ is the solution) for the behavior of bound electrons, as in the hydrogen atom. It turns out that the results derived from this equation agree with the Bohr model. He then showed that Heisenberg's matrix mechanics and his wave mechanics were equivalent. Max Born proposed that ψ, the solution to Schrödinger's equation, can be interpreted as a probability amplitude, not a real, physical value. The probability amplitude is a function of the electron's position (x, y, z) and, when squared, gives the probability of finding the electron in a unit volume at the point (x, y, z). This gives us a new, probabilistic atomic model, in which there is a high probability that the electron will be found in a particular orbital shell. A representation of the ground state of hydrogen is shown in figure 4.16; at the places where the density of points is high there is a high probability of finding the particle. The linear nature of the wave equation means that if ψ1 and ψ2 are two solutions then so is ψ1 + ψ2, a superposition state (we'll look at superposition soon). This probabilistic interpretation of quantum mechanics implies the system is in both states until measured.

Now back to Niels Bohr. In 1927 Bohr described the concept of complementarity: whether a system behaves like a particle or a wave depends on what type of measurement operations you are using to look at it. He then put together various aspects of the work by Heisenberg, Schrödinger, and Born and concluded that the properties of a system (such as position and momentum) are undefined, having only potential values with certain probabilities of being measured. This became known as the Copenhagen interpretation of quantum mechanics. Einstein did not like the Copenhagen interpretation and, for a good deal of time, kept trying to refute it by thought experiment, but Bohr always had an answer. However, in 1935 Einstein raised an issue that was to later have profound implications for quantum computation and lead to the phenomenon we now call entanglement, a concept we'll look at in a few pages.

3.2 Important Principles for Quantum Computing
The main parts of quantum mechanics that are important for quantum computing are:
1       Linear algebra
2       Superposition.
3       Dirac notation.
4       Representing information.
5       Uncertainty.
6       Entanglement.
7       The 4 postulates of quantum mechanics.

3.2.1 Linear Algebra
Quantum mechanics leans heavily on linear algebra. Some of the concepts of quantum mechanics come from the mathematical formalism rather than from thought experiments, and that is what can give rise to counterintuitive conclusions.

3.2.2 Superposition
Superposition means a system can be in two or more of its states simultaneously. For example a single particle can be traveling along two different paths at once. This implies that the particle has wave-like properties, which can mean that the waves from the different paths can interfere with each other. Interference can cause the particle to act in ways that are impossible to explain without these wave-like properties.
The ability of the particle to be in a superposition is where we get the parallel nature of quantum computing: if each of the states corresponds to a different value then, if we have a superposition of such states and act on the system, we effectively act on all the states simultaneously.
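
A minimal sketch of interference (ours, not from the report): applying the Hadamard transform twice to a single qubit. The first application opens two "paths" in superposition; the second makes the paths interfere, and the amplitude for the value 1 cancels exactly.

    import numpy as np

    # Hadamard transform: sends |0> to an equal superposition of |0> and |1>
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)

    state = np.array([1.0, 0.0])     # start in |0>
    after_one = H @ state            # equal superposition: both paths open
    after_two = H @ after_one        # the two paths interfere

    print(after_one)                 # [0.707, 0.707]
    print(np.round(after_two, 10))   # [1, 0]: the |1> amplitude destructively cancels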

3.2.3 Dirac Notation
As described in the previous chapter, Dirac notation is used for quantum computing. We can represent the states of a quantum system as kets. For example, an electron's spin can be represented as |0> = spin up and |1> = spin down. The electron can be thought of as a little magnet, the effect of a charged particle spinning on its axis. When we pass a horizontally traveling electron through an inhomogeneous magnetic field, in say, the vertical direction, the electron either goes up or down. If we then repeat this with the up electron it goes up, and with the down electron it goes down. We say the up electron after the first measurement is in the state |0> and the down electron is in the state |1>. But, if we take the up electron and pass it through a horizontal field it comes out on one side 50% of the time and on the other side 50% of the time. If we represent these two horizontal states as |+> and |-> we can say that the up-spin electron was in a superposition of the two states |+> and |->:

|0> = (1/√2)|+> + (1/√2)|->

such that, when we make a measurement with the field horizontal, we project the electron into one or the other of the two states, with equal probabilities 1/2 given by the square of the amplitudes.
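
The probabilities can be checked numerically (a sketch of ours; the |+> and |-> labels simply name the two horizontal outcomes):

    import numpy as np

    # |0> written in the horizontal basis: amplitudes 1/sqrt(2) on |+> and |->
    amplitudes = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)])

    probabilities = amplitudes ** 2   # probability = square of the amplitude
    print(probabilities)              # [0.5, 0.5]: each side appears 50% of the time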

3.2.4 Representing Information
Quantum mechanical information can be physically realized in many ways. To have something analogous to a classical bit we need a quantum mechanical system with two states only, when measured. We have just seen two examples: electron spin and photon direction. Two more methods for representing binary information in a way that is capable of exhibiting quantum effects (e.g. entanglement and superposition) are: polarization of photons and nuclear spins.

3.2.5 Uncertainty
The quantum world is irreducibly small, so it's impossible to measure a quantum system without having an effect on that system, as our measurement device is also quantum mechanical. As a result there is no way of accurately predicting all of the properties of a particle. There is a trade-off: the properties occur in complementary pairs (like position and momentum, or vertical spin and horizontal spin), and if we know one property with a high degree of certainty then we must know almost nothing about the other property. That unknown property's behavior is essentially random. An example of this is a particle's position and velocity: if we know exactly where it is then we know nothing about how fast it is going. It has been postulated (and is currently accepted) that particles in fact DO NOT have defined values for unknown properties until they are measured. This is like saying that something does not exist until it is looked at.

3.2.6 Entanglement
In 1935 Einstein (along with colleagues Podolsky and Rosen) described a paradox (named EPR after them) in an attempt to refute the undefined nature of quantum systems. The results of their thought experiment seemed to show that quantum systems were defined, having local state BEFORE measurement. Although the original hypothesis was later proven wrong (i.e. it was proven that quantum systems do not have local state before measurement), the effect they demonstrated was still important, and later became known as entanglement.

Entanglement is the ability of pairs of particles to show correlated behavior over any distance, seemingly instantaneously. The particles don't exactly communicate, but there is a statistical correlation between the results of measurements on each particle that is hard to understand using classical physics. To become entangled, two particles are allowed to interact; they then separate and, on measuring say, the velocity of one of them (regardless of the distance between them), we can be sure of the value of velocity of the other one (before it is measured). The reason we say that they communicate instantaneously is because they store no local state [Rae, A. 1996] and only have well defined state once they are measured. Because of this limitation particles can't be used to transmit classical messages faster than the speed of light, as we only know the states upon measurement. Entanglement has applications in a wide variety of quantum algorithms and machinery, some of which we'll look at later.
As stated before, it has been proven that entangled particles have no local state.
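
A minimal classical simulation of these measurement statistics (ours; it mimics the entangled Bell state (|00> + |11>)/√2 when both particles are measured in the same basis, without modeling how the state is produced):

    import random

    def measure_bell_pair():
        """Sample one joint measurement of (|00> + |11>)/sqrt(2).

        Outcomes 00 and 11 each occur with probability 1/2; outcomes
        01 and 10 have zero amplitude and never occur.
        """
        outcome = random.choice([0, 1])   # pick the 00 branch or the 11 branch
        return outcome, outcome           # the two particles always agree

    results = [measure_bell_pair() for _ in range(10)]
    print(results)   # e.g. [(0, 0), (1, 1), ...]: individually random, perfectly correlated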

3.2.7 The Four Postulates of Quantum Mechanics
The theory of quantum mechanics has four main postulates. These are introduced here as simple sentences. Later, in section 5.4, they will be explained in more detail in terms of quantum computing.

1. In a closed quantum system we need a way of describing the state of all the particles within it. The first postulate gives us a way to do this by using a single state vector to represent the entire system. The state is a vector in C^n; this would be C^2 for a single spin system.

2. The evolution of a closed system is a unitary transform. Say that, while the system is evolving under its own steam - no measurement - the state |ψ'> at some stage is related to the state |ψ> at some previous stage (or time) by a unitary transform U, i.e. |ψ'> = U|ψ>. This means that we can totally describe the behavior of a system by using unitary matrices (see the sketch after this list).

3. The third postulate relates to making measurements on a closed quantum system, and the effect those measurements have on that system.

4. Postulate four relates to combining or separating different closed quantum systems using tensor products.
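
As a sketch of postulate 2 (ours, with an arbitrary rotation as the example unitary), unitary evolution preserves the total probability:

    import numpy as np

    theta = np.pi / 8                     # arbitrary example angle
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # a rotation is unitary

    psi = np.array([1.0, 0.0])            # initial state |0>
    psi_next = U @ psi                    # evolution: |psi'> = U|psi>

    print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary
    print(np.linalg.norm(psi_next))                # 1.0: probabilities still sum to 1
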
4. Quantum Computing

4.1 Elements of Quantum Computing

4.1.1 Introduction
Generally we'll think of a quantum computer as a classical computer with a quantum circuit attached to it, with some kind of interface between conventional and quantum logic. Since there are only a few things a quantum computer does better than a classical computer, it makes sense to do the bulk of the processing on the classical machine.

4.1.2 History
In 1982 Richard Feynman theorized that classical computation could be dramatically improved by quantum effects; building on this, David Deutsch developed the basis for quantum computing between 1984 and 1985. The next major breakthrough came in 1994 when Peter Shor described a method to factor large numbers in quantum poly-time (which would break RSA encryption). This became known as Shor's algorithm. At around the same time the quantum complexity classes were developed and the quantum Turing machine was described.

Then in 1996 Lov Grover developed a fast database search algorithm (known as Grover's algorithm). The first prototypes of quantum computers were also built in 1996. In 1997 quantum error correction techniques were developed at Bell Labs and IBM. Physical implementations of quantum computers improved with a three-qubit machine in 1999 and a seven-qubit machine in 2000.

4.1.3 Bits and Qubits
This section is about the "nuts and bolts" of quantum computing. It describes qubits, gates, and circuits.

Quantum computers perform operations on qubits, which are analogous to conventional bits (see below) but have an additional property: they can be in a superposition. A quantum register with 3 qubits can store 8 numbers in superposition simultaneously, and a 250-qubit register holds more numbers (superposed) than there are atoms in the universe!

The amount of information stored during the "computational phase" is essentially infinite - it's just that we can't get at it. The inaccessibility of the information is related to quantum measurement: when we attempt to read out a superposition state holding many values, the state collapses and we get only one value (the rest are lost). This is tantalizing but, in some cases, can be made to work to our computational advantage.
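
A sketch of that readout behavior (ours): a 3-qubit register holding all 8 values in equal superposition yields only one value per measurement.

    import numpy as np

    n = 3
    N = 2 ** n                                # 8 basis states

    amplitudes = np.full(N, 1 / np.sqrt(N))   # all 8 numbers held at once
    probabilities = np.abs(amplitudes) ** 2   # each outcome has probability 1/8

    outcome = np.random.choice(N, p=probabilities)   # measurement collapses the state
    print(f"measured {outcome}; the other {N - 1} superposed values are lost")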

4.1.4 Representation of Data - Qubits
A bit of data is represented by a single atom that is in one of two states, denoted by |0> and |1>. A single bit of this form is known as a qubit.
A physical implementation of a qubit could use the two energy levels of an atom, an excited state representing |1> and a ground state representing |0>.

5. Single Qubits:
Classical computers use two discrete states (e.g. the states of charge of a capacitor) to represent a unit of information; this state is called a binary digit (or bit for short). A bit has the following two values:

0 and 1.

There is no intermediate state between them, i.e. the value of the bit cannot be in a superposition.
Quantum bits, or qubits, can on the other hand be in a state "between" 0 and 1, but only during the computational phase of a quantum operation. When measured, a qubit can become either:

|0> or |1>

i.e. we read out 0 or 1. This is the same as saying a spin particle can be in a superposition state but, when measured, it shows only one value. The |> symbolic notation is part of the Dirac notation (see chapters 3 and 4). In terms of the above it essentially means the same thing as 0 and 1, just like a classical bit. Generally, a qubit's state |ψ> during the computational phase is represented by a linear combination of states, otherwise called a superposition state:

|ψ> = α|0> + β|1>

Here α and β are the probability amplitudes. They can be used to calculate the probabilities of the system jumping into |0> or |1> following a measurement or readout operation. There may be, say, a 25% chance a 0 is measured and a 75% chance a 1 is measured. The percentages must add to 100%. In terms of their representation, qubits must satisfy:

|α|^2 + |β|^2 = 1

This is the same thing as saying the probabilities add to 100%. Once the qubit is measured it will remain in that state if the same measurement is repeated, provided the system remains closed between measurements. The probability that the qubit's state, when in a superposition, will collapse to |0> or |1> is |α|^2 or |β|^2 respectively.
|0> and |1> are actually vectors; they are called the computational basis states, and they form an orthonormal basis for the vector space C^2.
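
Checking the 25%/75% example numerically (our sketch; the amplitude values are chosen to match those probabilities):

    import numpy as np

    alpha = 0.5               # amplitude for |0>: probability 0.25
    beta = np.sqrt(0.75)      # amplitude for |1>: probability 0.75

    print(abs(alpha) ** 2 + abs(beta) ** 2)   # 1.0: normalization holds

    # Simulate many measurements of identically prepared qubits
    samples = np.random.choice([0, 1], size=10000,
                               p=[abs(alpha) ** 2, abs(beta) ** 2])
    print(samples.mean())     # ~0.75: fraction of outcomes that read 1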

5.1 Operations on Qubits - Reversible Logic:

Due to the nature of quantum physics, the destruction of information in a gate causes heat to be evolved, which can destroy the superposition of qubits. Quantum gates must therefore be reversible: no information about the input may be lost.
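
A sketch of a reversible gate (ours): the controlled-NOT (CNOT) gate flips the target bit exactly when the control bit is 1. It permutes the basis states and is its own inverse, so no information is destroyed.

    # CNOT on a pair of classical bit values (control, target)
    def cnot(control, target):
        return control, target ^ control

    # Truth table: distinct inputs map to distinct outputs (a permutation)
    for c in (0, 1):
        for t in (0, 1):
            print((c, t), "->", cnot(c, t))

    print(cnot(*cnot(1, 0)))   # (1, 0): applying CNOT twice undoes it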

7. Shor’s Algorithm
Shor's algorithm is used to factor numbers into their components (which can potentially be prime). Since factoring is believed to be exponentially hard for classical computers, it forms the basis of most modern cryptosystems. Hence, being able to factor in polynomial time on a quantum computer has attracted significant interest.
In this section we are going to begin by constructing a set of tools needed to actually implement Shor’s algorithm.

The algorithm is dependent on:

• Modular Arithmetic
• Quantum Parallelism
• Quantum Fourier Transform

7.1 Shor’s Algorithm – Periodicity:

An important result from Number Theory:

f(a) = x^a mod N is a periodic function in a
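
A quick classical illustration (ours, using the same x = 7 and N = 15 as the walkthrough below): the powers of x modulo N repeat with a fixed period.

    x, N = 7, 15

    print([pow(x, a, N) for a in range(8)])   # [1, 7, 4, 13, 1, 7, 4, 13]

    # The period r is the smallest positive a with x^a mod N == 1
    r = next(a for a in range(1, N) if pow(x, a, N) == 1)
    print(r)                                  # 4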

7.2 Shor’s Algorithm - In Depth Analysis:

To factor an odd integer N (let's choose 15):

1.     Choose an integer q such that N^2 ≤ q < 2N^2; let's pick 256.


2.     Choose a random integer x such that GCD(x, N) = 1; let's pick 7.


3.     Create two quantum registers (these registers must also be entangled so that the collapse of the input register corresponds to the collapse of the output register):
•   Input register: must contain enough qubits to represent numbers as large as q - 1, i.e. up to 255, so we need 8 qubits.
•   Output register: must contain enough qubits to represent numbers as large as N - 1, i.e. up to 14, so we need 4 qubits.

7.3 Preparing Data:

4.   Load the input register with an equally weighted superposition of all integers from 0 to q - 1, i.e. 0 to 255.

5.   Load the output register with all zeros.

7.4 Modular Arithmetic:

6.     Apply the transformation x^a mod N to each number a in the input register, storing the result of each computation in the output register.

Note that we are using decimal numbers here only for simplicity.

7.5 Superposition Collapse:

7.     Now take a measurement on the output register. This will collapse the superposition to represent just one of the results of the transformation; let's call this value c.

7.6 Entanglement:

8.   Since the two registers are entangled, measuring the output register will have the effect of partially collapsing the input register into an equal superposition of each state between 0 and q - 1 that yielded c (the value of the collapsed output register).

7.7 QFT:
We now apply the quantum Fourier transform to the partially collapsed input register. The Fourier transform has the effect of taking a state |a> and transforming it into a state given by:

|a> -> (1/√q) Σ (over c = 0 .. q-1) e^(2πiac/q) |c>

The surviving states interfere so that amplitude piles up at multiples of q/r, where r is the period. So we no longer have an equal superposition of states; the probability amplitudes of those states are now higher than those of the other states in our register. We measure the register, and it will collapse with high probability to one of these multiples of 64; let's call this value p.

With our knowledge of q and p, there are methods of calculating the period r (one method is the continued fraction expansion of the ratio between p and q).
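
A sketch of that classical post-processing step (ours, using Python's fractions module): the measured p lies near a multiple of q/r, so p/q is close to a fraction with a small denominator, and that denominator reveals the period.

    from fractions import Fraction

    q = 256
    p = 192                     # example measurement: a multiple of q/r = 64

    # Continued fraction expansion of p/q, with denominator capped at N = 15
    approx = Fraction(p, q).limit_denominator(15)
    print(approx)               # 3/4
    print(approx.denominator)   # 4: the period r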

7.8 The Factors:

10. Now that we have the period r, the factors of N can be determined by taking the greatest common divisor of N with x^(r/2) + 1 and with x^(r/2) - 1. The idea here is that this computation will be done on a classical computer.
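
Finishing the worked example classically (ours): with x = 7, N = 15, and period r = 4, the gcd step recovers the factors 3 and 5.

    from math import gcd

    x, N, r = 7, 15, 4

    y = pow(x, r // 2)       # x^(r/2) = 49
    print(gcd(y - 1, N))     # gcd(48, 15) = 3
    print(gcd(y + 1, N))     # gcd(50, 15) = 5  ->  15 = 3 * 5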

7.9 Shor's Algorithm - Problems:

1       The QFT may come up short and reveal the wrong period. This probability is dependent on your choice of q: the larger the q, the higher the probability of finding the correct period.

2       The period of the series may end up being odd, in which case the algorithm must be rerun with a different x.

8. CONCLUSION
It is important to note that practical quantum computing is still far in the future. The programming style for a quantum computer will also be quite different.
Development of a quantum computer needs a lot of money. Even the best scientists can't answer a lot of questions about quantum physics. The quantum computer is based on theoretical physics, and some experiments have already been made. Building a practical quantum computer is just a matter of time.
Quantum computers could solve problems that can't be done with the help of today's computers. This will be one of the biggest steps in science and will undoubtedly revolutionize the practical computing world.
