Quantum Structural Complexity

Quantum computing generalizes and extends conventional computation by directly exploiting quantum mechanical phenomena such as entanglement and superposition to perform operations on data encoded in physical systems. With the discovery of Shor's factorization algorithm and Grover's search algorithm, the field has attracted significant interest. Although practical quantum computing is still in its infancy, both experimental and theoretical work continue. It has become an attractive interdisciplinary research area spanning physics, mathematics, and computer science, with profound implications for all three. Quantum effects like interference and entanglement play no direct role in conventional information processing, but they can, in principle now and probably eventually in practice, be harnessed to break codes, create unbreakable codes, and speed up otherwise intractable computations. Following a sequence of results suggesting that quantum computers are more powerful than classical probabilistic computers, a great deal of attention has focused on quantum computing. Several outstanding problems in theoretical computer science can be tackled from this new angle, and several important results in quantum computational complexity have the potential to shake the foundations of the field. Here I give a brief introduction to quantum computing and trace the developments in quantum computational complexity along with their implications for computer science.
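To make superposition and entanglement concrete, here is a minimal state-vector sketch in Python (assuming NumPy is available). The gates and states are standard quantum mechanics, but the code is only an illustration of the mathematics, not of how a physical quantum computer operates.

```python
# Minimal sketch: superposition and entanglement as linear algebra.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # controlled-NOT gate

zero = np.array([1, 0])                         # |0>

# Superposition: H|0> = (|0> + |1>)/sqrt(2)
plus = H @ zero

# Entanglement: CNOT applied to (H|0>) tensor |0> yields a Bell state,
# (|00> + |11>)/sqrt(2), which cannot be factored into two single-qubit states.
state = CNOT @ np.kron(plus, zero)
print(state)   # amplitudes [0.707..., 0, 0, 0.707...]
```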

Quantum Computation: Historical Development 

A nice historical perspective on the evolution of the computational model as a physical system is given in the literature. The first person to examine the interaction between computation and quantum mechanics appears to have been Benioff. Although he did not ask whether quantum mechanics confers extra computational power, he showed that reversible unitary evolution is sufficient to realize the computational power of a Turing machine, thus showing that quantum mechanics is at least as powerful computationally as a classical computer. This work was fundamental in making later investigation of quantum computers possible.

Feynman seems to have been the first to suggest that quantum mechanics might be more powerful computationally than a Turing machine. He gave arguments as to why quantum mechanics might be intrinsically expensive computationally to simulate on a classical computer. He also raised the possibility of using a computer based on quantum mechanical principles to avoid this problem, thus implicitly asking the converse question: by using quantum mechanics in a computer can you compute more efficiently than on a classical computer? Deutsch was the first to ask this question explicitly. In order to study this question, he defined both quantum Turing machines and quantum circuits and investigated some of their properties.

The question of whether using quantum mechanics in a computer allows one to obtain more computational power was addressed by Deutsch and Jozsa, and by Berthiaume and Brassard. These papers showed that there are problems which quantum computers can quickly solve exactly, but that classical computers can only solve quickly with high probability and the aid of a random number generator. However, these papers did not show how to solve any problem in quantum polynomial time that was not already known to be solvable in polynomial time with the aid of a random number generator, allowing a small probability of error; this is the characterization of the complexity class BPP (defined later), which is widely viewed as the class of efficiently solvable problems.
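Deutsch's original one-bit problem illustrates the exact-versus-probabilistic separation described above: decide whether f : {0,1} → {0,1} is constant or balanced. Classically this requires evaluating f twice; a quantum computer needs a single oracle call. Below is a small state-vector simulation in Python with NumPy; the helper names are mine and any such simulation is only a sketch of the circuit.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant or balanced
    with a single oracle use (state-vector simulation)."""
    # Oracle U_f acts on basis state |x>|y> as |x>|y XOR f(x)>.
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    # Start in |0>|1>, apply H to both qubits, the oracle, then H to qubit 0.
    state = np.kron(H @ np.array([1, 0]), H @ np.array([0, 1]))
    state = U @ state
    state = np.kron(H, np.eye(2)) @ state
    # The probability of measuring qubit 0 as |0> is 1 iff f is constant.
    p0 = state[0] ** 2 + state[1] ** 2
    return "constant" if p0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))   # constant
print(deutsch(lambda x: x))   # balanced
```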
Further work on this problem was stimulated by Bernstein and Vazirani. One of the results contained in their paper was an oracle problem (that is, a problem involving a "black box" subroutine that the computer is allowed to perform, but for which no code is accessible) which can be solved in polynomial time on a quantum Turing machine but requires super-polynomial time on a classical computer. This result was improved by Simon, who gave a much simpler construction of an oracle problem which takes polynomial time on a quantum computer but requires exponential time on a classical computer. Indeed, while Bernstein and Vazirani's problem appears contrived, Simon's problem looks quite natural, and Simon's algorithm inspired Shor's subsequent work. Two number-theoretic problems which have been studied extensively but for which no polynomial-time classical algorithms have yet been discovered are finding discrete logarithms and factoring integers. Shor showed that both problems can be solved in polynomial time on a quantum computer with a small probability of error.

Currently, nobody knows how to build a quantum computer, although it seems as though it might be possible within the laws of quantum mechanics. Some suggestions have been made as to possible designs for such computers, but there will be substantial difficulty in building any of these. The most difficult obstacles appear to involve the decoherence of quantum superpositions through the interaction of the computer with the environment, and the implementation of quantum state transformations with enough precision to give accurate results after many computation steps. Both of these obstacles become more severe as the size of the computer grows, so it may turn out to be possible to build small quantum computers, while scaling up to machines large enough to do interesting computations may present fundamental difficulties.
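The quantum heart of Shor's algorithm is period finding; the rest is classical number theory. The sketch below shows that classical reduction: given the multiplicative order r of a modulo N (which the quantum computer finds in polynomial time, and which is brute-forced here purely for illustration), a nontrivial factor of N falls out of a gcd computation. The function names are mine.

```python
# Sketch of the classical reduction underlying Shor's algorithm:
# factoring N reduces to finding the order r of a random a mod N.
from math import gcd

def order(a, N):
    """Brute-force multiplicative order of a mod N (the step a
    quantum computer performs in polynomial time)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_order(N, a):
    g = gcd(a, N)
    if g != 1:
        return g                     # lucky: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 1:
        return None                  # bad choice of a; retry with another
    f = gcd(pow(a, r // 2) - 1, N)
    return f if 1 < f < N else None

print(factor_from_order(15, 7))      # 3, since 7 has order 4 mod 15
```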

Even if no useful quantum computer is ever built, this research does illuminate the problem of simulating quantum mechanics on a classical computer. Any method of doing this for an arbitrary Hamiltonian would necessarily be able to simulate a quantum computer. Thus, any general method for simulating quantum mechanics with at most a polynomial slowdown would lead to a polynomial-time algorithm for factoring.
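The obstacle to such classical simulation is easy to quantify: an n-qubit state vector has 2^n complex amplitudes, so merely storing the state, let alone applying a general unitary to it, grows exponentially with n. A back-of-the-envelope illustration in Python:

```python
# Why brute-force simulation of quantum mechanics is expensive:
# an n-qubit state needs 2**n complex amplitudes.
for n in (10, 20, 30):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9   # complex128 = 16 bytes each
    print(f"{n} qubits -> {amplitudes:,} amplitudes (~{gigabytes:.3g} GB)")
```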

Complexity Theory

Complexity theory is concerned with the inherent cost required to solve information processing problems, where the cost is measured in terms of various well-defined resources. In this context, a problem can usually be thought of as a function whose input is a problem instance and whose corresponding output is the solution to it. Sometimes the solution is not unique, in which case the problem can be thought of as a relation rather than a function. Resources are usually measured in terms of some designated elementary operations, memory usage, or communication. We consider three specific complexity scenarios, listed below, which illustrate different advantages of working with quantum information.
                       
3.1 Complexity Models
• Computational complexity
• Query complexity
• Communication complexity

Despite the differences between these models, there are intimate relationships among them. The usefulness of many currently known quantum algorithms is ultimately best expressed in the computational complexity model; however, virtually all of these algorithms evolved from algorithms in the query complexity model. The query complexity model is a natural setting for discovering interesting quantum algorithms, which frequently have interesting counterparts in the computational complexity model. Quantum algorithms in the query complexity model can also be transformed into protocols in the communication complexity model that use quantum information (and sometimes these are more efficient than any classical protocol can be). This latter relationship, taken in its contrapositive form, can also be used to prove that some problems are inherently difficult in the query complexity model.
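In the query model, the input is accessible only through an oracle, and the cost of an algorithm is the number of oracle calls it makes. A minimal Python sketch of the classical side (names are mine): finding the unique marked index among n items costs about n/2 classical queries on average, whereas Grover's quantum algorithm needs only O(√n) queries (not simulated here).

```python
# Query-complexity sketch: cost = number of oracle calls, nothing else.

def classical_search(oracle, n):
    """Exhaustive classical search for the unique marked index."""
    queries = 0
    for i in range(n):
        queries += 1
        if oracle(i):
            return i, queries
    return None, queries

n, marked = 1024, 777
found, cost = classical_search(lambda i: i == marked, n)
print(found, cost)   # 777 found after 778 queries;
                     # Grover would need ~ (pi/4) * sqrt(1024) = 25 queries
```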

Quantum Complexity Theory

4.1 Definition & Importance
The inherent difficulty, or hardness, of computational problems is a fundamental concept in computational complexity theory. Hardness is typically formalized in terms of the resources required by different models of computation to solve a given problem, such as the number of steps of a deterministic Turing machine. A variety of models and resources are often considered, including deterministic, nondeterministic and probabilistic models; time and space constraints; and interactions among models of differing abilities. Many interesting relationships among these different models and resource constraints are known.
One common feature of the most commonly studied computational models and resource constraints is that they are physically motivated. This is quite natural, given that computers are physical devices, and to a significant extent it is their study that motivates and directs research on computational complexity. The predominant example is the class of polynomial-time computable functions, which ultimately derives its relevance from physical considerations; for it is a mathematical abstraction of the class of functions that can be efficiently computed without error by physical computing devices.
In light of its close connection to the physical world, it seems only natural that modern physical theories should be considered in the context of computational complexity. In particular, quantum mechanics is a clear candidate for a physical theory with the potential for implications, if not for computational complexity then at least for computation more generally. Given the steady decrease in the size of computing components, it is inevitable that quantum mechanics will become increasingly relevant to the construction of computers, for quantum mechanics provides a remarkably accurate description of extremely small physical systems (on the scale of atoms) where classical physical theories fail completely. Indeed, an extrapolation of Moore's Law predicts subatomic computing components within the next two decades, a possibility inconsistent with quantum mechanics as it is currently understood.

That quantum mechanics should have implications for computational complexity theory, however, is much less clear. It is only through the remarkable discoveries and ideas of several researchers, including Richard Feynman, David Deutsch, Ethan Bernstein and Umesh Vazirani, and Peter Shor, that this potential has become evident. In particular, Shor's polynomial-time quantum factoring and discrete-logarithm algorithms give strong support to the conjecture that quantum and classical computers yield differing notions of computational hardness. Other quantum complexity-theoretic concepts, such as the efficient verification of quantum proofs, suggest a wider extent to which quantum mechanics influences computational complexity.
It may be said that the principal aim of quantum computational complexity theory is to understand the implications of quantum physics for computational complexity theory. To this end, it considers the hardness of computational problems with respect to models of quantum computation, classifications of problems based on these models, and their relationships to classical models and complexity classes.
The notion of promise problems is central to quantum computational complexity. These are decision problems for which the input is assumed to be drawn from some subset of all possible input strings. More formally, a promise problem is a pair A = (Ayes, Ano), where Ayes, Ano ⊆ Σ* are sets of strings satisfying Ayes ∩ Ano = ∅. The strings contained in the sets Ayes and Ano are called the yes-instances and no-instances of the problem, and have answers yes and no, respectively. Languages may be viewed as promise problems that obey the additional constraint Ayes ∪ Ano = Σ*. Although complexity theory has traditionally focused on languages rather than promise problems, little is lost and much is gained in shifting one's focus to promise problems. Karp reductions (also called polynomial-time many-one reductions) and the notion of completeness are defined for promise problems in the same way as for languages.
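As an illustration, a promise problem can be encoded as a pair of disjoint predicates; strings satisfying neither lie outside the promise, and an algorithm may answer arbitrarily on them. The example predicates below are mine, chosen purely for exposition.

```python
# Illustrative encoding of a promise problem A = (Ayes, Ano) over
# binary strings. Ayes and Ano must be disjoint; together they need
# not cover all strings (that is the "promise").

def a_yes(s):   # example yes-instances: strings ending in "0" containing a "1"
    return bool(s) and s[-1] == "0" and "1" in s

def a_no(s):    # example no-instances: strings ending in "1"
    return bool(s) and s[-1] == "1"

def classify(s):
    if a_yes(s):
        return "yes"
    if a_no(s):
        return "no"
    return "outside the promise"   # answers here are unconstrained

for s in ("110", "101", "000"):
    print(s, "->", classify(s))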

Quantum complexity theory is a part of computational complexity theory in theoretical computer science. It studies complexity classes defined using quantum computers and quantum information, which are computational models based on quantum mechanics. It studies the hardness of problems in relation to these complexity classes, and the relationships between quantum complexity classes and classical (i.e., non-quantum) complexity classes.

A complexity class is a collection of problems which can be solved by some computational model under resource constraints. For instance, the complexity class P is defined to be the set of problems solvable by a Turing machine in polynomial time. Similarly, one may define a quantum complexity class using a quantum model of computation, such as a standard quantum computer or a quantum Turing machine. Thus, the complexity class BQP is defined to be the set of problems solvable by a quantum computer in polynomial time with bounded error.
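The "bounded error" in the definition of BQP means the algorithm may err with probability at most 1/3 on every input; the particular constant is arbitrary, because majority voting over repeated runs drives the error down exponentially (a standard Chernoff-bound argument). A small Monte Carlo sketch in Python, with a stand-in for the bounded-error algorithm:

```python
# Error amplification for bounded-error classes (BPP, BQP):
# repeat the algorithm and take a majority vote.
import random

def noisy_decide(truth, err=1/3):
    """Stand-in for one run of a bounded-error algorithm."""
    return truth if random.random() > err else (not truth)

def amplified(truth, runs=101):
    votes = sum(noisy_decide(truth) for _ in range(runs))
    return votes > runs // 2

trials = 10_000
wrong = sum(amplified(True) is False for _ in range(trials))
print(f"error rate after majority vote: {wrong / trials:.4f}")
# single run errs with probability 1/3; the vote errs far below 1%
```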

Two important quantum complexity classes are BQP and QMA, which are the bounded-error quantum analogues of P and NP respectively. One of the main aims of quantum complexity theory is to determine where these classes lie with respect to classical complexity classes such as P, NP, PP and PSPACE.
