Computers have enhanced human life to a great extent. The drive to improve computer speed has led to the development of Very Large Scale Integration (VLSI) technology, with smaller device dimensions and greater complexity.
VLSI technology has revolutionized the electronics industry; at the same time, our daily lives demand solutions to increasingly sophisticated and complex problems, which in turn require faster and better-performing computers.
Unfortunately, VLSI technology is approaching its fundamental limits in the sub-micron miniaturization process. It is now possible to fit up to 300 million transistors on a single silicon chip, and according to Moore's law the number of transistor switches that can be put onto a chip doubles roughly every 18 months. Further miniaturization of lithography introduces several problems, such as dielectric breakdown, hot carriers, and short-channel effects, all of which combine to seriously degrade device reliability. Even if developing technology temporarily overcomes these physical problems, they will return as long as the demand for higher integration continues. A dramatic solution is therefore needed: unless we gear our thoughts toward a totally different pathway, we will not be able to improve computer performance further.
Optical interconnections and optical integrated circuits provide a way out of these limitations on the computational speed and complexity inherent in conventional electronics. Optical computers will use photons travelling in optical fibers or thin films, instead of electrons, to perform the appropriate functions. In the optical computer of the future, electronic circuits and wires will be replaced by a few optical fibers and films, making the systems more efficient, free of interference, more cost-effective, lighter, and more compact. Optical components would not need the insulators required between electronic components, because they do not suffer from cross talk. Indeed, multiple frequencies (or different colors) of light can travel through optical components without interfering with each other, allowing photonic devices to process multiple streams of data simultaneously.
1.1 Why Use Optics for Computing?
Optical interconnections and optical integrated circuits have several advantages over their electronic counterparts. They are immune to electromagnetic interference and free from electrical short circuits. They have low-loss transmission and provide large bandwidth, i.e. multiplexing capability, and can carry several channels in parallel without interference. They can propagate signals within the same or adjacent fibers with essentially no interference or cross-talk. They are compact, lightweight, and inexpensive to manufacture, and they handle stored information more readily than magnetic materials.
Most of the components that are currently in high demand are electro-optical (EO). Such hybrid components are limited by the speed of their electronic parts. All-optical components would have a speed advantage over EO components, but unfortunately there is a lack of efficient nonlinear optical materials that can respond at low power levels: most all-optical components require a high level of laser power to function as required.
Optics also has a higher bandwidth capacity than electronics, enabling more information to be carried and more data to be processed. This advantage arises because electronic communication along wires requires charging a capacitance that depends on the wire length. In contrast, optical signals in optical fibers, optical integrated circuits, and free space do not have to charge a capacitor and are therefore faster.
Another advantage of optical methods over electronic ones for computing is that optical data processing can be performed in parallel much more easily and cheaply than electronic processing. Parallelism is the capability of a system to execute more than one operation simultaneously. Electronic computer architecture is, in general, sequential: instructions are executed in sequence, which makes parallelism difficult to construct with electronics. Using a simple optical design, an array of pixels can be transferred simultaneously, in parallel, from one point to another. To appreciate the difference between optical and electronic parallelism, consider an imaging system with as many as 1000x1000 independent points per mm2 in the object plane, connected optically by a lens to a corresponding 1000x1000 points per mm2 in the image plane. To accomplish this electrically, a million nonintersecting and properly isolated conduction channels per mm2 would be required. Parallelism, therefore, when combined with fast switching speeds, results in staggering computational speeds.
Assume, for example, there are only 100 million gates on a chip (optical integration is still in its infancy compared to electronics). Further, conservatively assume that each gate operates with a switching time of only 1 nanosecond (organic optical switches can switch at sub-picosecond rates, compared with picosecond switching times at best for electronic switches). Such a system could perform more than 10^17 bit operations per second. Compare this to the gigabit (10^9) or terabit (10^12) per second rates which electronics are either currently limited to or hoping to achieve. In other words, a computation that might require one hundred thousand hours (more than 11 years) on a conventional computer could require less than one hour on an optical one.
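As a rough check on these figures, the arithmetic can be laid out as a short calculation; the gate count, switching time, and electronic comparison rate are simply the assumptions quoted above.

```python
# Throughput estimate from the figures above (assumptions: 1e8 gates,
# each switching once per nanosecond, all operating in parallel).
gates = 100e6              # gates on the chip
switch_time_s = 1e-9       # time per switching event (1 ns)

ops_per_second = gates / switch_time_s        # 1e17 bit operations per second
electronic_rate = 1e12                        # optimistic terabit-per-second electronic rate

speedup = ops_per_second / electronic_rate    # ~1e5
hours_electronic = 100_000                    # a job taking 100,000 hours electronically
hours_optical = hours_electronic / speedup    # ~1 hour optically

print(f"{ops_per_second:.0e} ops/s, speedup ~{speedup:.0e}, "
      f"{hours_electronic} h -> {hours_optical:.1f} h")
```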
Another advantage of light is that photons are uncharged and do not interact with one another as readily as electrons do. Consequently, light beams may pass through one another, in full-duplex operation for example, without distorting the information they carry. In electronics, wire loops generate noise voltage spikes whenever the electromagnetic field through the loop changes, and high-frequency or fast-switching pulses cause interference in neighboring wires. Signals in adjacent fibers or optical integrated channels neither affect one another nor pick up noise from loops. Finally, optical materials offer superior storage density and accessibility compared with magnetic materials.
Obviously, the field of optical computing is progressing rapidly and offers many dramatic opportunities for overcoming the limitations of current electronic computers described earlier. The process is already under way: optical devices have been incorporated into many computing systems. Laser diodes, as sources of coherent light, have dropped rapidly in price due to mass production, and optical CD-ROM discs have long been commonplace in home and office computers.
2.0 Optical Computer
An optical computer (also called a photonic computer) is a device that uses the photons in visible light or infrared (IR) beams, rather than electric current, to perform digital computations. Optical computing could produce computers tens of thousands of times faster than today's computers, because light can travel that much faster than electric current. Given all the advantages of optical circuits over their electronic counterparts, the need for optical computing is one of the main requirements of this century. Visible-light and IR beams, unlike electric currents, pass through each other without interacting. Several (or many) laser beams can be shone so that their paths intersect, yet there is no interference among the beams, even when they are confined essentially to two dimensions; this also allows optical computers to be made smaller. Figures 1 and 2 show the differences between optical circuits and electronic circuits.
The optical computer is not at all a new idea; it dates from the early 1980s. Although a complete optical computer has not yet been built, research toward constructing one is under way. The basic building blocks of an optical computer are optical transistors and optical gates, and there are various methods for constructing optical gates. Recent research on optical transistors has reached the point where an optical transistor can be made from a single molecule. Another required hardware component is data storage. CDs and DVDs are optical data storage media, and holographic memories provide optical data storage with enormous capacity and very low space requirements. Various optical input-output devices are already available: optical keyboards, mice, scanners, printers, and so on hold an important place even at this stage of optical computer development. Networking in optical computers uses optical fibers for very high-speed data transfer with very low power requirements.
3.0 Optical Hardware Components
The main optical computer hardware components are:
1. Optical Transistor
2. Optical Gate and Switch
3. Holographic Memory
4. Input-Output Devices
5. Optical Networking
6. Optical Processor
3.1 Optical Transistor
Ever since the 1930s, the advantages of light for carrying information have been recognized within the newly emerging computer science. The problem was that, back then, the tools needed to make light compute were lacking. As a result, the task fell to electrons, and the electronic computer age was born. Since then, three major developments have laid the groundwork for the present effort to produce fully photonic (optical) digital computers. The first was the invention of the laser. The second was the computer-generated hologram. The third is the photonic transistor.
The transistor is one of the most influential inventions of modern times and is ubiquitous in present-day technologies. To replace electronic components with optical ones, an equivalent "optical transistor" was required. The optical transistor is one of the most basic components of an optical computer. It does the same thing its electronic counterpart does, but with photons instead of electrons. The photonic transistor is vacuum compatible, meaning that it can be operated in air or even in a vacuum, where light moves at the universal speed limit. The first photonic transistor was invented in 1989 by the Rocky Mountain Research Center and then tested in the laboratories of the University of Montana and Montana State University, USA. The photonic transistor can be used to build up the various gates and switches that its electronic counterpart provides; it can therefore be called the basic building block of an optical computer.
Various methods of creating optical transistors have been explored. The interferometric method was one of the earliest to appear: a Fabry-Perot interferometer (etalon) can be used to create an optical transistor based on the interference patterns it produces. Another approach uses laser light to create junctions analogous to those in the electronic counterpart. Other methods use two lasers and a gold plate to create plasmons, while yet another uses a Y-coupler to act as a transistor and build the required logic gates.
Various types of Optical Transistors
1. Interferometric Transistors
2. Laser Transistors (Single Molecule Laser Transistor)
3. Based on Plasmon Creation
These are the types of transistors that are primarily used. Of them, the interferometric transistor was the first to evolve, followed by the plasmon transistor; the Y-coupler transistor can perform the various logic operations. The laser transistor made from a single molecule is the most recent, announced to the public in July 2009. The various transistors and their working are described below.
3.1.1 Interferometric Transistors
A "photonic" computer should use photons. Photons are the basic unit of electromagnetic energy just as electrons are the basic unit of electricity. Unlike the nonlinear optical materials that require a large supply of photons to bias them up to some switching level, Photonic Transistors need only signal levels of photons to work. Photonic Transistors do not use electricity in any way shape or form. The fundamental physical control and manipulation processes used do not slow down the light. The only retardation occurs during the very short time that the energy must pass through a dense medium such as a thin hologram.
Back in 1801 Thomas Young performed an experiment that proved that light has a wavelike nature. He did this by setting up an experiment in which two beams of light from a common source were superimposed upon each other (see Figure 1). The resulting light pattern is called interference, which can be measured in a manner similar to ocean waves. Later, individual photons were also shown to possess this ability.
Optical interference is a process of energy rearrangement that occurs when two laser beams pass through the same point at the same time. The energy pattern forms an interference image that depends upon the wavefront pattern, input energy levels, and phase of each of the two input beams, along with the geometry of the encounter. Interference has another very important property: if all of the input parameters are accurately known, the output interference image can be calculated by a process called the "linear addition of amplitudes" or the "vector addition of amplitudes".
Digital computers operate at discrete energy levels (two levels in the case of binary logic). Each two-input photonic logic gate therefore has four possible combinations of its inputs being high or low, on or off. As a result, four different images need to be calculated, one for each input combination. During high-speed computing, the interference image switches continually among this set of images; taken together, they form a "dynamic image". At any given location within the dynamic image, the amplitude, and thus the energy level (which is proportional to the square of the amplitude), changes among four different static states determined by the optical arrangement of the transistor. If we place an image-component separator, such as a mask with a hole in it, at any location within the dynamic image, then any energy that appears at the hole passes through the mask into the output; any energy that does not appear at the hole is blocked by the mask and does not contribute to the output of the transistor.
All-Photonic Amplifiers
Only two basic Boolean devices are required to produce all of computing. However, to make up for the energy lost from one device to the next, an amplifier is needed. If one beam is kept on all the time as a sort of photonic power supply and the second beam is switched on, the output through the maxima-positioned hole jumps from the single-beam level to four times that level. The information-carrying portion of the output thus has three times as much energy as the original modulated input, so the device is also a light-speed amplifier. If two such amplifiers are interconnected, just as in electronics, the result is a flip-flop: a light-speed binary information storage device.
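A minimal numerical sketch, assuming two equal-amplitude, in-phase beams and ignoring losses, shows where the factor of four, and hence the factor-of-three gain, comes from:

```python
# "Dynamic image" intensities for a two-input photonic gate: the output at a
# constructive-maximum location is proportional to the square of the summed
# amplitudes. The unit amplitudes are illustrative assumptions.
def output_intensity(in_a: float, in_b: float) -> float:
    """Relative intensity at a maxima-positioned hole for two input amplitudes."""
    return (in_a + in_b) ** 2

for a in (0, 1):
    for b in (0, 1):
        print(f"inputs ({a}, {b}) -> relative output intensity {output_intensity(a, b)}")

# With beam A kept on as a photonic power supply (a = 1), switching beam B from
# 0 to 1 raises the output from 1 to 4 units; the modulated part of the output
# therefore carries 3 units, three times the single-beam level.
```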
Beyond the First Transistors
There are certain limits to the operation of some of the devices. The first is the existence of phase and amplitude fluctuations in the output of all but the NOT device. As a result, a number of means and methods have been devised so as to either accomplish the same job a different way, or to be able to compensate the output so that the logic information produced can still be used without causing problems in succeeding devices.
Interference is not something that is easily accomplished on the macro scale. But then, we are usually not interested in making big transistors. Little ones are what we want. So how small can they be made?
The 3M company has demonstrated its ability to produce 20,000 independent holographic-like lenses on a single square centimeter of material. While there is no reason to believe that this is the limit for making small-scale devices, it is certainly a fine start. Unlike the economics of the electronics world, which depend upon the ever-increasing cost of silicon real estate, the inexpensive glass, plastic, and aluminum that will be used to make photonic computers permit one to use as much material as is needed. Certainly there is no reason why photonic computers cannot eventually be made even smaller than today's laptops, and when they are, they will have a lot more horsepower.
Speed Trials of Working Transistors
Light really moves: in one second, electromagnetic energy can circle the earth seven and a half times. In one nanosecond (one billionth of a second), light travels 30 cm, or about 11 3/4 inches. By measuring the dimensions of the smallest working model of the photonic transistor, we can calculate the time it takes light to pass through the device while accomplishing the photonic logic and amplification functions described above. In a working photonic computer, these transit times are the switching times that determine how fast a photonic supercomputer can go.
If the transit time through an electronic transistor is one nanosecond, the input must remain either completely on or completely off for that full nanosecond. Otherwise considerable noise will be introduced into the system. The Photonic transistor, however, is able to operate using pipelined pulses.
That is, a continuous stream of very short pulses can be introduced into a single transistor, pulses that are much shorter than the transit time of the device, and they will all be processed independently without any noise buildup.
Just as information pipelining is an important part of the architecture of the Pentium and many supercomputers, so too, pipelining information into the various light beams that make up a photonic computer can greatly increase its throughput.
The theoretical limit for the shortest pipelined pulse would be equal to the period of oscillation of a one-wavelength-long pulse. If it can be reached, the switching time for red laser light (wavelength around 630 nm) would be 2.1 femtoseconds. A femtosecond is one millionth of a nanosecond. If a shorter wavelength is used, the pulse time is shorter still: with 300 nm ultraviolet light, the switching time is 1 femtosecond.
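These figures follow directly from the period of one optical cycle, T = wavelength / c. A short calculation reproduces them; the 630 nm value taken for "red" light is an assumption consistent with the 2.1 fs figure quoted above.

```python
# Period of one optical cycle, taken in the text as the floor on pipelined
# pulse length, plus the distance light covers in one nanosecond.
c = 3.0e8                         # speed of light in vacuum, m/s

for label, wavelength_m in (("red ~630 nm", 630e-9), ("UV 300 nm", 300e-9)):
    period_fs = wavelength_m / c * 1e15
    print(f"{label}: one-cycle period ~{period_fs:.1f} fs")

print(f"light travels {c * 1e-9 * 100:.0f} cm in one nanosecond")
```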
However, such switching-time comparisons with today's electronic computers do not take into account light's capacity for massively parallel computing: by doing millions of things at the same time, far more work can be accomplished. Channelizing the visible part of the spectrum provides over 4 billion separate channels, and photonic transistors are capable of operating on all of them individually or all together. They can be manipulated simply by forming the right kind of dynamic images and separating the appropriate energy patterns from them.
3.1.2 Laser Transistor (Single Molecule Laser Transistor)
Conventional computers are based on transistors, which allow one electrode to control the current moving through the device and are combined to form logic gates and processors. Among the possible choices of signal carriers, photons are particularly attractive because of their robustness against decoherence, but their control at the nanometre scale poses a significant challenge, as conventional nonlinear materials become ineffective. To remedy this shortcoming, resonances in optical emitters can be exploited, and atomic ensembles have been successfully used to mediate weak light beams. However, single-emitter manipulation of photonic signals has remained elusive and has only been studied in high-finesse microcavities or waveguides.
Amplification in a conventional laser is achieved by an enormous number of molecules. By focusing a laser beam on only a single tiny molecule, the ETH Zurich scientists were able to generate stimulated emission using just one molecule. They were helped by the fact that, at low temperatures, molecules seem to increase their apparent surface area for interaction with light; in this case, the enlarged surface area corresponded approximately to the diameter of the focused laser beam.
To create an optical transistor with a single molecule, the key fact used is that a molecule's energy is quantized: when laser light strikes a molecule that is in its ground state, the light is absorbed and the laser beam is quenched. Conversely, the absorbed energy can be released again in a targeted way with a second light beam, because that beam changes the molecule's quantum state and is amplified as a result. This is stimulated emission, the basic working principle of the laser. By using one laser beam to prepare the quantum state of a single molecule in a controlled fashion, the scientists could significantly attenuate or amplify a second laser beam. This mode of operation is identical to that of a conventional transistor, in which an electrical potential is used to modulate a second signal.
For the single-molecule laser transistor, a green laser beam is used to control the power of an orange laser beam passing through the device. A hydrocarbon dye was suspended in tetradecane, an organic liquid, and the suspension was frozen to -272 °C using liquid helium, creating a crystalline matrix in which individual dye molecules could be targeted with lasers. When a finely tuned orange laser beam is trained on a dye molecule, the molecule efficiently soaks up most of it, leaving a much weaker "output" beam to continue beyond the dye. But when the molecule is also targeted with a green laser beam, it starts to produce strong orange light of its own and so boosts the power of the orange output beam. This effect is due to the hydrocarbon molecule absorbing the green light and releasing the equivalent energy as orange light. Using the green beam to switch the orange output beam from weak to strong is analogous to the way a transistor's control electrode switches a current between "on" and "off" voltages, and hence the 0s and 1s of digital data.
In such a device, a single dye molecule operates as an optical transistor and coherently attenuates or amplifies a tightly focused laser beam, depending on the power of a second 'gating' beam that controls the degree of population inversion. Such a quantum optical transistor also has the potential to manipulate non-classical light fields down to the single-photon level.
The central phenomenon behind the operation of a transistor is nonlinearity. A simple two-level atom is known to undergo nonlinear interaction with light, but it is usually not considered as a sufficiently strong medium for manipulating laser beams in free space. Recent studies showed that in the weak excitation regime, an atom can block a propagating light beam fully if it is in a directional dipolar mode, and by up to 85% if it is a tightly focused plane wave. In these cases, photons are confined to an area comparable with the scattering cross-section of the atom, and their electric fields become large enough to achieve atomic excitation with unity or near unity probability. This strong coupling between an emitter and light also makes it possible to observe stimulated emission from a single molecule, and paves the way for the realization of various nonlinear phenomena at the single-emitter level.
The emitters of choice are dye molecules embedded in organic crystalline matrices. These are highly suitable for quantum optical investigations because under cryogenic conditions, the zero-phonon lines (ZPLs) connecting the vibronic ground states of their electronic ground (|1⟩) and excited (|2⟩) states become lifetime limited (Fig. 4a). If the doping concentration is low enough, single dye molecules can be selectively addressed spectrally. The most common way of achieving this is through fluorescence excitation spectroscopy, where the frequency of a narrow-band laser is scanned across the inhomogeneous distribution of the ZPLs, and the Stokes-shifted fluorescence to the vibronic excited states (|4⟩ in Fig. 4a) of the electronic ground state is recorded. In addition, it has been demonstrated that single molecules can be detected resonantly by the interference of a laser beam with its coherent scattering.
3.1.3 Plasmon Transistor
All-optical circuit components are light-based analogues of electrical transistors and other devices. All-optical devices built in the past have been far too large and power hungry to be practical. Physicists at the Queen’s University in Belfast appear to have solved the problems with a prototype optical amplifier that is both small and low power. The key to the device is a layer of gold film pierced by an array of holes 0.2 millionths of a meter in diameter and coated in a layer of polymer. The researchers shine two beams of light on the structure: a signal beam and a control beam. When the beams strike the patterned film they produce plasmons, which are essentially blobs of electron gas near the surface of a metal. Varying the intensity and the color of the control beam causes the plasmons to interact in ways that enhance or decrease the transmission of the signal beam through the film. That is, the film acts as an all-optical transistor, with the potential to serve as a building block in optical circuits and optical versions of microelectronic devices.
Photons rarely interact, which makes it challenging to build all-optical devices in which one light signal controls another. Even in nonlinear optical media, in which two beams can interact because of their influence on the medium's refractive index, this interaction is weak at low light levels. One proposed approach to realizing strong nonlinear interactions at the single-photon level exploits the strong coupling between individual optical emitters and propagating surface plasmons confined to a conducting nanowire. Such a system can act as a nonlinear two-photon switch for incident photons propagating along the nanowire, which can be coherently controlled using conventional quantum-optical techniques. Furthermore, the interaction can be tailored to create a single-photon transistor, where the presence (or absence) of a single incident photon in a 'gate' field is sufficient to allow (or prevent) the propagation of subsequent 'signal' photons along the wire. Practical realization is challenging because the requisite single-photon nonlinearities are generally very weak.
A new method to achieve strong coupling between light and matter was therefore proposed. It makes use of the tight concentration of optical fields associated with guided surface plasmons on conducting nanowires to achieve strong interaction with individual optical emitters. The tight localization of these fields causes the nanowire to act as a very efficient lens that directs the majority of spontaneously emitted light into the surface-plasmon modes, resulting in efficient generation of single surface plasmons (that is, single photons). Such a system enables remarkable nonlinear optical phenomena in which individual photons strongly interact with each other; for example, this nonlinearity may be exploited to implement a single-photon transistor. Ideas for developing plasmonic analogues of electronic devices by combining surface plasmons with electronics are already being explored.
Nanowire Surface Plasmons: Interaction With Matter
Surface plasmons are propagating electromagnetic modes confined to the surface of a conductor-dielectric interface. Their unique properties make it possible to confine them to subwavelength dimensions, which has led to fascinating new approaches to waveguiding below the diffraction limit, enhanced transmission through subwavelength apertures, subwavelength imaging, and enhanced fluorescence. Recently, signatures of strong coupling between molecules and surface plasmons have also been observed via a splitting of the surface-plasmon mode dispersion. It is important to emphasize that these observations can be described in terms of classical, linear optical effects. Here, however, we consider how the confinement of surface plasmons on a conducting nanowire and their coupling to an individual, proximal optical emitter can also give rise to controllable nonlinear interactions between single photons.
3.2 Optical Logic Gates
Optical interconnections and optical integrated circuits are strongly believed to be the most feasible technology that can overcome the extreme limitations imposed on the speed and complexity of present-day computation by conventional electronics. Logic gates are the building blocks of any digital system. An optical logic gate is a switch that controls one light beam with another: it is "ON" when the device transmits light and "OFF" when it blocks the light. The various logical operations such as NOT, AND, OR, NAND, NOR, and XOR can be realized with optical logic gates, just as with electronic circuits. Fast optical switches, such as those using electro-optic or magneto-optic effects, may be used to perform logic operations; this category also includes semiconductor optical amplifiers, optoelectronic devices that can be used as optical switches and integrated with discrete or integrated microelectronic circuits.
There are several types of optical logic gate implementations, and more than 8000 patents exist for switches that can be used as logic gates. The earliest type of implementation uses an interferometer, just like the interferometric transistor. Another important type of optical logic gate uses SEEDs (self-electro-optic effect devices). The Mach-Zehnder interferometer is yet another type, which can implement logic functions using the electro-optic effect of materials or coupled ring resonators. A gate based on the laser transistor is also a near-future possibility.
Optical logic gates currently have switching times ranging from nanoseconds down to picoseconds. This is much faster than their electronic counterparts, and such switching speeds could raise the throughput of an optical computer to terabits, or even petabits or exabits, per second.
The type of logic gate considered here is the interferometer-based logic gate.
3.2.1 Interferometer-Based Logic Gate
The interferometer-based logic gate is the earliest type of logic gate implemented. It has a coherent laser beam as its light source. The light from the laser is fed through slits and gratings, which causes constructive and destructive interference. This interference pattern is focused onto a photodetector material using a convex lens. The photodetector, measuring the intensity of the light falling on it, converts the light into corresponding current values, which differ for the destructive and constructive interference of light through the slits. During constructive interference the light input to the detector represents logic 1, and during destructive interference it represents logic 0. By using slits of different sizes, the inputs can be varied to obtain the desired output.
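A minimal sketch of the detection step follows, assuming illustrative values for the photodetector responsivity and the decision threshold; it simply maps detected optical power to a binary level, as described above.

```python
# The photodetector converts incident optical power to a photocurrent; a
# threshold then maps constructive interference to logic 1 and destructive
# interference to logic 0. Responsivity and threshold are assumed values.
RESPONSIVITY_A_PER_W = 0.5      # photodetector responsivity (assumed)
THRESHOLD_A = 0.2e-3            # decision threshold on the photocurrent (assumed)

def detected_logic_level(optical_power_w: float) -> int:
    """Map detected optical power to a binary logic level."""
    photocurrent_a = RESPONSIVITY_A_PER_W * optical_power_w
    return 1 if photocurrent_a > THRESHOLD_A else 0

print(detected_logic_level(1.0e-3))    # bright (constructive) fringe, ~1 mW -> 1
print(detected_logic_level(0.05e-3))   # dark (destructive) fringe, ~50 uW  -> 0
```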
Figure 10 shows the working of the interferometer-based logic gate, with the coherent laser input and a photodetector connected to the input of a milliammeter; this is the experimental setup of the interferometric logic gate.
This type of logic gate has a fast switching speed, which can be brought into the nanosecond range. However, the power consumed by this type of switch is comparatively high, and the interferometer-based gate also requires coherent light sources to produce the interference patterns.
3.3 Holographic Memory
Memory storage in an optical computer became all-optical with the invention of holographic memory. Without a comparable optical type of memory, the slower data rates of electronic memory would introduce a large delay relative to optical data rates, making the optical computer a far less worthwhile device.
Holographic data storage is a potential replacement technology in the area of high-capacity data storage, currently dominated by magnetic and conventional optical data storage. It is essentially a 3-D memory storage device. Magnetic and optical data storage devices rely on individual bits being stored as distinct magnetic or optical changes on the surface of the recording medium. Holographic data storage overcomes this limitation by recording information throughout the volume of the medium, and it is capable of recording multiple images in the same area by utilizing light at different angles. Additionally, whereas magnetic and optical data storage records information a bit at a time in a linear fashion, holographic storage is capable of recording and reading millions of bits in parallel, enabling data transfer rates greater than those attained by conventional optical storage. Holographic memory offers the possibility of storing 1 terabyte (TB) of data in a sugar-cube-sized crystal.
Basic components & working:
Here are the basic components that are needed to construct an HDSS:
- Blue-green argon laser
- Beam splitters to split the laser beam
- Mirrors to direct the laser beams
- LCD panel (spatial light modulator)
- Lenses to focus the laser beams
- Lithium-niobate crystal or photopolymer
- Charge-coupled device (CCD) camera
Recording data:
When the blue-green argon laser is fired, a beam splitter creates two beams. One beam, called the object or signal beam, goes straight, bounces off one mirror, and travels through a spatial light modulator (SLM). An SLM is a device that imposes some form of spatially varying modulation on a beam of light. Usually an SLM modulates the intensity of the light beam; however, it is also possible to produce devices that modulate the phase of the beam, or both the intensity and the phase simultaneously. SLMs are used extensively in holographic data storage setups to encode information into a laser beam, in exactly the same way as a transparency does for an overhead projector. The SLM used here is a liquid crystal display (LCD) that shows pages of raw binary data as clear and dark boxes. The information from the page of binary code is carried by the signal beam to the light-sensitive lithium-niobate crystal; some systems use a photopolymer in place of the crystal. A second beam, called the reference beam, shoots out of the side of the beam splitter and takes a separate path to the crystal. When the two beams meet, the interference pattern that is created stores the data carried by the signal beam in a specific area of the crystal: the data is stored as a hologram. The interference pattern results from the crossing of the beams' paths, creating a chemical and/or physical change in the photosensitive medium; the resulting data is represented as an optical pattern of dark and light pixels. By adjusting the reference beam angle, wavelength, or media position, a multitude of holograms (theoretically, several thousand) can be stored in a single volume. Figure 11 shows a diagram of information storage in a holographic memory.
To retrieve and reconstruct the holographic page of data stored in the crystal, the reference beam is shone into the crystal at exactly the same angle at which it entered to store that page of data. Each page of data is stored in a different area of the crystal, based on the angle at which the reference beam strikes it. During reconstruction, the beam is diffracted by the crystal to recreate the original page that was stored. This reconstructed page is then projected onto the charge-coupled device (CCD) camera, which interprets and forwards the digital information to a computer. The CCD is often integrated with a sensor, such as a photoelectric device, to produce the charge that is being read, making the CCD a major technology wherever images must be converted into a digital signal. The detector is capable of reading the data in parallel, over one million bits at once, resulting in very fast data transfer rates; files on the holographic drive can be accessed in less than 200 milliseconds.
The critical parameter in any holographic data storage system is the angle at which the second reference beam is fired at the crystal to retrieve a page of data: it must match the original reference beam angle exactly. A difference of just a thousandth of a millimeter will result in failure to retrieve that page of data.
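A toy model of this angle multiplexing may help; the recording angles and the tolerance below are illustrative assumptions, not measured values.

```python
# Pages are keyed by the reference-beam angle used to record them, and readout
# succeeds only if the readout angle matches within a tight tolerance.
RECORDED_PAGES = {10.000: b"page-0", 10.050: b"page-1", 10.100: b"page-2"}  # degrees
ANGLE_TOLERANCE_DEG = 0.001

def read_page(readout_angle_deg: float):
    """Return the stored page whose recording angle matches the readout angle."""
    for recorded_angle, page in RECORDED_PAGES.items():
        if abs(recorded_angle - readout_angle_deg) <= ANGLE_TOLERANCE_DEG:
            return page
    return None   # mismatched angle: the page cannot be reconstructed

print(read_page(10.050))   # exact match  -> b'page-1'
print(read_page(10.052))   # slightly off -> None
```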
Holographic data storage can provide companies with a method to preserve and archive information. The write-once, read-many (WORM) approach to data storage would ensure content security, preventing the information from being overwritten or modified. Manufacturers believe this technology can provide safe storage for content without degradation for more than 50 years, far exceeding current data storage options. Counterpoints to this claim note that data-reader technology changes roughly every ten years, so being able to store data for 50-100 years would not matter if it could not be read or accessed. However, a storage method that works very well could remain in use longer before needing a replacement; and with the replacement, the possibility of backwards compatibility exists, much as Blu-ray technology is backwards-compatible with DVD technology, which in turn was backwards-compatible with CD technology.
Holograms can theoretically store one bit per cubic block with sides equal to the wavelength of the writing light (see the rough estimate after the list below). In practice, the data density is much lower, for at least four reasons:
- The need to add error-correction
- The need to accommodate imperfections or limitations in the optical system
- Economic payoff (higher densities may cost disproportionately more to achieve)
- Design technique limitations--a problem currently faced in magnetic Hard Drives wherein magnetic domain configuration prevents manufacture of disks that fully utilize the theoretical limits of the technology.
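A rough order-of-magnitude estimate shows where the "1 TB in a sugar cube" figure comes from; the 1 cm cube and 500 nm writing wavelength below are assumptions chosen for illustration.

```python
# One bit per cube with sides equal to the writing wavelength.
wavelength_m = 500e-9          # assumed writing wavelength
cube_side_m = 0.01             # ~1 cm "sugar cube" of recording medium

bits = (cube_side_m / wavelength_m) ** 3       # number of wavelength-sized voxels
terabytes = bits / 8 / 1e12

print(f"~{bits:.0e} bits, i.e. ~{terabytes:.0f} TB theoretical capacity")
# Practical densities are far lower, for the reasons listed above.
```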
Unlike current storage technologies that record and read one data bit at a time, holographic memory writes and reads data in parallel in a single flash of light.
3.4 Input Output Devices
The optical computer hardware requires optical input and optical output devices. Many of the input-output devices of an optical computer are identical or similar to those in use today. Devices that are currently electronic or opto-electronic can be made optical and can then perform parallel processing much faster than their present counterparts.
3.4.1 Input devices
Input devices are those which accept input and perform the corresponding input functions. They include the optical mouse, the optical (or virtual) keyboard, the optical pen, the optical microphone, the webcam, and the optical scanner. Some of them are described below.
Optical Mouse: An optical mouse uses a light-emitting diode and photodiodes to detect movement relative to the underlying surface. Modern surface-independent optical mice work by using an optoelectronic sensor to take successive pictures of the surface on which the mouse operates. In an optical mouse, this optoelectronic sensor and circuitry could be replaced with all-optical circuitry. Inside each optical mouse is a small camera that takes more than a thousand snapshot pictures every second. A small LED (light-emitting diode) provides light underneath the mouse, helping to highlight slight differences in the surface beneath it. Those differences are reflected back into the camera, where digital processing is used to compare the pictures and determine the speed and direction of movement. The laser mouse uses an infrared laser diode instead of an LED to illuminate the surface beneath its sensor.
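A hedged sketch of that frame-comparison step is shown below: successive surface snapshots are compared, and the shift that best aligns them gives the displacement. Real sensors do this in dedicated hardware; the brute-force correlation over tiny frames here is only illustrative.

```python
import numpy as np

def estimate_shift(prev: np.ndarray, curr: np.ndarray, max_shift: int = 3):
    """Return the (dy, dx) integer shift of curr relative to prev."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Undo the candidate shift and measure how well the frames overlap.
            shifted = np.roll(np.roll(curr, -dy, axis=0), -dx, axis=1)
            score = np.sum(prev * shifted)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

rng = np.random.default_rng(0)
frame = rng.random((16, 16))
moved = np.roll(np.roll(frame, 2, axis=0), 1, axis=1)   # surface moved by (2, 1)
print(estimate_shift(frame, moved))                      # -> (2, 1)
```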
Optical Keyboard: An optical keyboard is otherwise known as a virtual keyboard. The virtual keyboard is a virtual display of a keyboard produced using lasers; it can be projected onto, and touched on, any surface. The keyboard watches finger movements and translates them into keystrokes on the device. A laser or beamer projects a visible virtual keyboard onto a level surface, and a sensor or camera in the projector picks up finger movements. The camera associated with the detector determines the coordinates and the actions or characters to be generated. A detection technology based on an optical recognition mechanism enables the user to tap on the projected key images while producing real tapping sounds.
All mechanical input units can be replaced by such virtual devices, optimized for the current application and for the user's physiology, while maintaining the speed, simplicity, and unambiguity of manual data input. In this virtual keyboard too, the mechanical parts have become purely optical, but the electronic parts remain the same or have become opto-electronic circuits; in the future, this electronic circuitry will be replaced by optical circuitry.
Optical Microphone: The main principle of the optical microphone is to detect the vibration of a membrane using light. The optical microphone transfers the oscillation of its diaphragm to a beam of light, a process that does not involve any electrical signal. The emitted light from an LED is sent through an optical fiber onto the membrane which is furnished with a reflecting spot. The reflected light is coupled into the receiving fiber. When sound waves agitate the membrane it starts to vibrate, resulting in a toggling of the light spot on the receiving fiber. Consequentially a different intensity can be detected at the photo diode and is transformed into an electrical signal. It is only later in the conversion process that a photodetector transforms the light into an electrical current. One of the special advantages of this novel type of microphone is that the actual microphone head and the photodetector (plus the light source) can be placed several hundred meters apart - thanks to low-loss transmission via glass fibres. Glass fibres of this type are widely used in high quality data and phone networks, and experience only very minor losses in light transmission. This makes the optical microphone an ideal choice for use in strong magnetic fields or in locations which are difficult to reach.
In the medical field, for example, the optical microphone is ideally suited for use in magnetic resonance imaging (MRI) in order to maintain contact with the patient during MRI scans or to provide active noise cancellation. Due to its metal-free and current-free design, the microphone head does not interfere with the imaging process and is itself not influenced by the strong magnetic fields inside magnetic resonance imaging equipment.
The optical microphone also benefits from its metal-free and current-free design when used in measuring applications, as it does not influence the magnetic field. In EMI/EMC laboratories, for example, it functions like an “artificial ear” on a mobile phone without distorting the measurements.
A special version of the optical microphone is available for use in potentially explosive atmospheres and for outdoor applications. For example, it can be employed for the acoustic monitoring of gas dehydration plants in natural gas production. In this case, the microphone can “hear” slow leaks that are otherwise too small to cause a pressure loss or to trigger an alarm message in other monitoring systems.
3.4.2 Output Devices
Output devices produce outputs according to the given set of instructions. Optical output devices include laser printers, optical projectors, and various optical displays based on different technologies; some memories, such as CDs and DVDs, also act as optical output media. The development of optical computing can make these output devices purely optical, although the CD and DVD are already purely optical.
3.5 Optical Networking
A photonic (or optical) network is a communication network in which information is transmitted entirely in the form of visible or infrared (IR) signals. In a true photonic network, every switch and every repeater works with IR or visible-light energy; conversion to and from electrical impulses is done only at the source and destination (origin and end point). A recent development in this field is the erbium-doped fiber amplifier.
Optical or IR data transmission has several advantages over electrical transmission. Perhaps the most important is the greatly increased bandwidth provided by photonic signals. Because the frequency of visible or IR energy is so high (on the order of hundreds of terahertz), thousands or millions of signals can be impressed onto a single beam by means of frequency-division multiplexing (FDM). In addition, a single strand of fiber can carry IR and/or visible light at several different wavelengths, each beam having its own set of modulating signals; this is known as wavelength-division multiplexing (WDM).
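As a rough illustration of WDM capacity, assuming the commonly used optical C-band (about 1530-1565 nm) and a 50 GHz channel grid, neither of which is specified in the source:

```python
# How many WDM channels fit in the assumed band at the assumed grid spacing.
c = 3.0e8                                   # speed of light, m/s
band_low_hz = c / 1565e-9                   # optical frequency at 1565 nm
band_high_hz = c / 1530e-9                  # optical frequency at 1530 nm
band_width_hz = band_high_hz - band_low_hz  # ~4.4e12 Hz

channel_spacing_hz = 50e9                   # assumed DWDM grid spacing
channels = int(band_width_hz // channel_spacing_hz)

print(f"band width ~{band_width_hz / 1e12:.1f} THz -> "
      f"~{channels} channels at 50 GHz spacing")
```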
A subtle, but potentially far-reaching, advantage of photonic systems over electronic media results from the fact that visible and IR energy actually moves several times faster than electricity. Electric current propagates at about 10 percent of the speed of light (18,000 to 19,000 miles or 30,000 kilometers per second), but the energy in fiber optic systems travels at the speed of light in the glass or plastic medium, which is a sizable fraction of the speed of light in free space (186,000 miles or 300,000 kilometers per second). This results in shorter data-transmission delay times between the end points of a network. This advantage is especially significant in systems where the individual computers or terminals continuously share data. It affects performance at all physical scales, whether components are separated by miles or by microns. It is even significant within microchips, a phenomenon of interest to research-and-development engineers in optical computer technology.
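Using the propagation speeds quoted above (30,000 km/s for electrical signals, and an assumed two-thirds of the free-space speed for light guided in glass), the delay difference over a kilometre works out as follows:

```python
# Propagation-delay comparison over 1 km.
distance_km = 1.0
v_electrical_km_s = 30_000               # figure quoted in the text
v_fiber_km_s = 0.67 * 300_000            # ~200,000 km/s in glass (assumption)

delay_electrical_us = distance_km / v_electrical_km_s * 1e6
delay_fiber_us = distance_km / v_fiber_km_s * 1e6

print(f"1 km delay: electrical ~{delay_electrical_us:.1f} us, "
      f"fiber ~{delay_fiber_us:.1f} us")
```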
With traditional computers, all the components must be fairly close together, since electrical signals fade the farther they travel. Photons, however, can travel vast distances before they begin to fade. This would allow the different components of a computer to be separated by much greater distances than in a traditional computer, even up to the span of a city or more. It could potentially lead to the ability to rent hard-drive space, RAM, or even processing power on another computer or server within the same city (assuming a fiber-optic internet connection, which could be commonplace by the time photonic computers are released), with seamless transfer speeds the whole time. This could be a viable alternative to upgrading a system for some people and could potentially be cheaper than purchasing a whole new system. With the possible exception of some computer games, depending on how complex they become, photonic computers would be able to handle almost all applications with little effort because of light's vast speed. Users would therefore need to upgrade their systems far less frequently than with current computers.
3.5.1 Optical Fiber
Photonic networking was made possible by the invention of fiber-optic cables. An optical fiber (or fibre) is a glass or plastic fiber that carries light along its length. Optical fibers are widely used in fiber-optic communications, which permit transmission over longer distances and at higher bandwidths (data rates) than other forms of communication. Fibers are used instead of metal wires because signals travel along them with less loss, and they are also immune to electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so they can carry images, allowing viewing in tight spaces. Specially designed fibers are used for a variety of other applications, including sensors and fiber lasers.
Construction and Types
An optical fiber consists of a core, cladding, and a buffer (a protective outer coating); the cladding guides the light along the core by total internal reflection. The core and the cladding (which has a lower refractive index) are usually made of high-quality silica glass, although they can both be made of plastic as well. Connecting two optical fibers is done by fusion splicing or mechanical splicing and requires special skills and interconnection technology because of the microscopic precision needed to align the fiber cores. Figure 15 shows the constructional details of optical fibers.
Two main types of optical fiber used in fiber optic communications include multi-mode optical fibers and single-mode optical fibers. A multi-mode optical fiber has a larger core (≥ 50 micrometres), allowing less precise, cheaper transmitters and receivers to connect to it as well as cheaper connectors. However, a multi-mode fiber introduces multimode distortion, which often limits the bandwidth and length of the link. Furthermore, because of its higher dopant content, multimode fibers are usually expensive and exhibit higher attenuation. The core of a single-mode fiber is smaller (<10 micrometres) and requires more expensive components and interconnection methods, but allows much longer, higher-performance links.
Although fibers can be made out of transparent plastic, glass, or a combination of the two, the fibers used in long-distance telecommunications applications are always glass, because of the lower optical attenuation. Both multi-mode and single-mode fibers are used in communications, with multi-mode fiber used mostly for short distances, up to 550 m (600 yards), and single-mode fiber used for longer distance links. Because of the tighter tolerances required to couple light into and between single-mode fibers (core diameter about 10 micrometers), single-mode transmitters, receivers, amplifiers and other components are generally more expensive than multi-mode components.
Working and Advantages
Light is kept in the core of the optical fiber by total internal reflection, which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers (MMF), while those that support only a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a larger core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Multimode fibers can be classified into graded-index and step-index, depending on the refractive-index profile between the core and the cladding: in graded-index fiber there is a gradual change between the core and the cladding, while in step-index fiber this change is abrupt, hence the name. Step-index fibers can transmit data at up to about 50 Mbps, while graded-index fibers can transmit data at up to about 1 Gbps. Single-mode fibers are used for most communication links longer than 550 metres (1,800 ft). Joining lengths of optical fiber is more complex than joining electrical wire or cable: the ends of the fibers must be carefully cleaved and then spliced together, either mechanically or by fusing them with an electric arc, and special connectors are used to make removable connections.
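A quick illustration of the total-internal-reflection condition, using assumed typical refractive indices for a silica fiber (core n1 = 1.48, cladding n2 = 1.46); these values are chosen for illustration rather than taken from the source.

```python
import math

n1, n2 = 1.48, 1.46   # assumed core and cladding refractive indices

# Rays striking the core-cladding boundary at more than this angle from the
# normal are totally internally reflected and stay guided in the core.
critical_angle_deg = math.degrees(math.asin(n2 / n1))

# Numerical aperture: how steeply light may enter the fiber end and still be guided.
numerical_aperture = math.sqrt(n1**2 - n2**2)
acceptance_half_angle_deg = math.degrees(math.asin(numerical_aperture))

print(f"critical angle ~{critical_angle_deg:.1f} deg, NA ~{numerical_aperture:.2f}, "
      f"acceptance half-angle ~{acceptance_half_angle_deg:.1f} deg")
```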
Optical fiber can be used as a medium for telecommunication and networking because it is flexible and can be bundled as cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters. Additionally, the per-channel light signals propagating in the fiber can be modulated at rates as high as 111 gigabits per second, although 10 or 40 Gb/s is typical in deployed systems.
Each fiber can carry many independent channels, each using a different wavelength of light (wavelength-division multiplexing (WDM)). The net data rate (data rate without overhead bytes) per fiber is the per-channel data rate reduced by the FEC overhead, multiplied by the number of channels. Over short distances, such as networking within a building, fiber saves space in cable ducts because a single fiber can carry much more data than a single electrical cable. Fiber is also immune to electrical interference; there is no cross-talk between signals in different cables and no pickup of environmental noise. Non-armored fiber cables do not conduct electricity, which makes fiber a good solution for protecting communications equipment located in high voltage environments such as power generation facilities, or metal communication structures prone to lightning strikes. They can also be used in environments where explosive fumes are present, without danger of ignition. Wiretapping is more difficult compared to electrical connections, and there are concentric dual core fibers that are said to be tap-proof.
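As a worked example of that net-rate calculation, with assumed figures of 80 WDM channels, 10 Gb/s per channel, and 7% FEC overhead (none of which are given in the source):

```python
# Net fiber data-rate arithmetic described above.
channels = 80
per_channel_gbps = 10.0
fec_overhead = 0.07                      # fraction of each channel used by FEC

net_rate_gbps = channels * per_channel_gbps * (1 - fec_overhead)
print(f"net fiber throughput ~{net_rate_gbps:.0f} Gb/s")   # ~744 Gb/s
```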
To package fiber into a commercially viable product, it is typically given a protective coating of ultraviolet (UV) light-cured acrylate polymers, then terminated with optical fiber connectors and finally assembled into a cable. After that, it can be laid in the ground, run through the walls of a building, or deployed aerially in a manner similar to copper cables. Once deployed, these fibers require less maintenance than common copper cables.
Transmitters: The most commonly-used optical transmitters are semiconductor devices such as light-emitting diodes (LEDs) and laser diodes. The difference between LEDs and laser diodes is that LEDs produce incoherent light, while laser diodes produce coherent light. For use in optical communications, semiconductor optical transmitters must be designed to be compact, efficient, and reliable, while operating in an optimal wavelength range, and directly modulated at high frequencies.
In its simplest form, an LED is a forward-biased p-n junction, emitting light through spontaneous emission, a phenomenon referred to as electroluminescence. The emitted light is incoherent with a relatively wide spectral width of 30-60 nm. LED light transmission is also inefficient, with only about 1 % of input power, or about 100 microwatts, eventually converted into launched power which has been coupled into the optical fiber. However, due to their relatively simple design, LEDs are very useful for low-cost applications.
Communications LEDs are most commonly made from indium gallium arsenide phosphide (InGaAsP) or gallium arsenide (GaAs). Because InGaAsP LEDs operate at a longer wavelength than GaAs LEDs (1.3 micrometers vs. 0.81-0.87 micrometers), their output spectrum is wider by a factor of about 1.7. The large spectral width of LEDs causes higher fiber dispersion, considerably limiting their bit rate-distance product (a common measure of usefulness). LEDs are suitable primarily for local-area-network applications with bit rates of 10-100 Mbit/s and transmission distances of a few kilometers. LEDs have also been developed that use several quantum wells to emit light at different wavelengths over a broad spectrum, and are currently in use for local-area WDM networks.
A semiconductor laser emits light through stimulated emission rather than spontaneous emission, which results in high output power (~100 mW) as well as other benefits related to the nature of coherent light. The output of a laser is relatively directional, allowing high coupling efficiency (~50 %) into single-mode fiber. The narrow spectral width also allows for high bit rates since it reduces the effect of chromatic dispersion. Furthermore, semiconductor lasers can be modulated directly at high frequencies because of short recombination time.
Laser diodes are often directly modulated, that is the light output is controlled by a current applied directly to the device. For very high data rates or very long distance links, a laser source may be operated continuous wave, and the light modulated by an external device such as an electroabsorption modulator or Mach-Zehnder interferometer. External modulation increases the achievable link distance by eliminating laser chirp, which broadens the linewidth of directly-modulated lasers, increasing the chromatic dispersion in the fiber.
Receivers: The main component of an optical receiver is a photodetector, which converts light into electricity using the photoelectric effect. The photodetector is typically a semiconductor-based photodiode; common types include p-n, p-i-n, and avalanche photodiodes. Metal-semiconductor-metal (MSM) photodetectors are also used due to their suitability for circuit integration in regenerators and wavelength-division multiplexers.
The optical-electrical converters are typically coupled with a transimpedance amplifier and a limiting amplifier to produce a digital signal in the electrical domain from the incoming optical signal, which may be attenuated and distorted while passing through the channel. Further signal processing such as clock recovery from data (CDR) performed by a phase-locked loop may also be applied before the data is passed on.
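A minimal Python sketch of the first stages of such a receiver is given below; the responsivity and transimpedance gain are typical illustrative assumptions rather than values from a specific part:

    # Received optical power -> photocurrent -> voltage (illustrative values).
    responsivity_a_per_w = 0.8       # assumed photodiode responsivity (A/W)
    transimpedance_ohms = 10_000.0   # assumed transimpedance-amplifier gain (ohms)

    received_power_dbm = -20.0                        # e.g. 10 microwatts at the receiver
    received_power_w = 1e-3 * 10 ** (received_power_dbm / 10.0)
    photocurrent_a = responsivity_a_per_w * received_power_w
    output_voltage_v = photocurrent_a * transimpedance_ohms

    print(f"Photocurrent: {photocurrent_a * 1e6:.1f} uA")   # 8.0 uA
    print(f"TIA output:   {output_voltage_v * 1e3:.1f} mV") # 80.0 mV

The limiting amplifier and clock-recovery stages then turn this small, possibly distorted analog voltage back into clean digital levels.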
Applications
Optical fiber is used by many telecommunications companies to transmit telephone signals, Internet communication, and cable television signals. Due to much lower attenuation and interference, optical fiber has large advantages over existing copper wire in long-distance and high-demand applications. However, infrastructure development within cities was relatively difficult and time-consuming, and fiber-optic systems were complex and expensive to install and operate. Due to these difficulties, fiber-optic communication systems were primarily installed in long-distance applications, where they can be used to their full transmission capacity, offsetting the increased cost. Since 2000, the prices for fiber-optic communications have dropped considerably, and rolling out fiber to the home has now become more cost-effective than rolling out a copper-based network.
Since 1990, when optical-amplification systems became commercially available, the telecommunications industry has laid a vast network of intercity and transoceanic fiber communication lines. By 2002, an intercontinental network of 250,000 km of submarine communications cable with a capacity of 2.56 Tb/s was completed, and although specific network capacities are privileged information, telecommunications investment reports indicate that network capacity has increased dramatically since 2002.
3.6 Optical Processor
The processor is the brain of a computer. Designing an efficient and reliable processor for an optical computer requires the development of optical transistors and logic-gate circuits at sub-micron dimensions. Optical transistors and logic gates are still under development. Although no all-optical processors are commercially available yet, opto-electronic hybrid optical processors do exist.
Intel has already designed and built a photonic processor, an opto-electronic hybrid. It is an entirely solid-state photonic assembly - a chip that processes data as light waves without the need for microscopic moving parts. Its key material is indium phosphide, which produces a single, monochromatic wavelength of laser light when electricity is applied to it and can be produced as a wafer that bonds to a silicon substrate. That development eliminated the need for movable gratings to select a single wavelength from a multiple-wavelength light source.
A single-wavelength light source is critical, because modulations to that beam of infra-red light will be interpreted as data, so it needs to be as simple and regular as possible. Indium phosphide was chosen because it emits light predictably at regular wavelengths when voltage is applied to it. It is obviously neither silicon nor a silicate, so if silicon is to be used to guide the light produced by an indium phosphide laser, there must be some way to transfer the light from the laser onto the waveguide. In previous prototypes this was done using moving parts, which cannot be expected to work in a production environment.
The indium phosphide layer and the silicon waveguide layer are joined by a novel bonding process: the surfaces of both layers are exposed to an oxygen plasma, which oxidizes them and forms what is called a "glass glue." When the oxidized surfaces are joined under 300 degrees Celsius of heat (about half as hot as other bonding processes), they create a transparent seal about 25 atoms thick, through which light from the laser is evanescently coupled into the silicon waveguide. This removes the need for active coupling devices, which would otherwise use microscopic mirrors to accomplish the same feat.
There are essentially six building blocks that such a chip needs: a light source to generate the photons (the indium phosphide laser described above); a way to guide the light - to route it, split it, couple it, and get it in and out of the chip efficiently; a way to modulate the data, that is, to encode optical bits; a way to photodetect the light and eventually convert the photons back to electrons; a low-cost, high-volume assembly technology; and, lastly, intelligence - the electronics that drive the circuits and the photonics and perform the computation.
This Intel processor will first be introduced in fiber-optic networking, though it could conceivably be integrated into general computing platforms in subsequent years, yielding systems that are faster, smaller, and less expensive to produce, all at the same time.
Another company, Lenslet Ltd, a leader in optical digital signal processing, has created the world's first commercial optical digital signal processor. The processor is specified to run at a speed of 8 tera (8,000 giga) operations per second, one thousand times faster than any known DSP. It is a general-purpose, fixed-point ODSP with an embedded optical core. It consists of three elements: a vector matrix multiplier (VMM) that performs the ultra-fast vector-matrix operations; a vector processor unit (VPU) that handles 128 giga operations per second; and a standard DSP for control and scalar processing.
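The vector-matrix multiplier at the heart of such a processor can be pictured as a row of light sources (the input vector), a two-dimensional attenuation mask (the matrix), and a row of detectors that each sum one column of light. The NumPy sketch below is only a conceptual model of that operation, not Lenslet's actual implementation:

    import numpy as np

    def optical_vmm(input_vector, weight_matrix):
        """Conceptual optical vector-matrix multiply.

        Source i illuminates row i of the mask; element (i, j) attenuates that
        light by weight_matrix[i, j]; detector j sums the light in column j.
        """
        spread = input_vector[:, np.newaxis] * weight_matrix  # light hitting the mask
        return spread.sum(axis=0)                             # what each detector sees

    x = np.array([1.0, 0.5, 0.25])                # input light intensities
    W = np.random.default_rng(0).random((3, 4))   # attenuation mask (weights)
    print(optical_vmm(x, W))
    print(x @ W)                                  # same result, computed electronically

Because every element of the matrix interacts with the light at once, the whole multiply happens in a single optical "pass," which is where the claimed speed advantage comes from.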
Software for this processor, the EnLight, is developed using three main tools: a MATLAB APL bit-exact simulator, the APL Studio bit-exact and cycle-exact simulator, and the APL Studio Emulator. These tools provide a smooth development path from the floating-point algorithm to running and debugging code.
It targets computationally intensive applications such as video compression, video encoders, security (baggage scanning and multi-sensor threat analysis), and defense and communication systems. It can be applied either as a system-embedded accelerator or as a standalone processor. Potential benefits of the optical processor include enhanced communications in noisy channels, multi-channel interference cancellation, and replacement of existing multi-DSP boards.
Another recently developed photonic integrated circuit, made of indium phosphide and based on quantum-optical principles, is a cost-effective device that can efficiently perform wavelength-division multiplexing (WDM), sending or receiving 1.6 terabits of information per second.
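The aggregate WDM capacity is simply the per-channel data rate times the number of wavelength channels. The split below (40 channels at 40 Gbit/s) is one plausible combination assumed purely for illustration, since the source does not give the actual channel plan:

    channels = 40                 # assumed number of WDM wavelengths
    per_channel_gbps = 40         # assumed data rate per wavelength (Gbit/s)
    aggregate_tbps = channels * per_channel_gbps / 1000.0
    print(f"Aggregate capacity: {aggregate_tbps:.1f} Tbit/s")  # 1.6 Tbit/s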
The future of optical computers is not so far away, given the recent development of a laser transistor made from a single molecule. Research is ongoing in the fields of optical processors and optical transistors. We are currently at a stage similar to the dawn of the electronic transistor, and the development of all-optical processors could change the world drastically.
4.0 Optical Computer Software
The software side of the optical computer deserves special mention. Optical networks and optical circuitry have the immense advantage of parallelism. Heavy use of parallel circuits and parallel processes demands special attention on the programming side: programming optical computers requires special skills, and new algorithms have to be developed for them.
Due to the immense parallelism in the design of the optical computer, the amount of data to be processed at a time grows to rates in the tera- to exaflop range. This huge volume of data cannot be handled effectively with current algorithms, so a change in programming algorithms is required. Hardware can then be designed, according to those algorithms, to handle the maximum amount of data. To exploit the large data bandwidth, new programming models have to be developed.
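As a small illustration of the data-parallel style such models call for, the Python sketch below splits one large element-wise operation across worker processes instead of looping over it serially. It uses only the standard multiprocessing module and has nothing optics-specific about it:

    from multiprocessing import Pool

    def transform(chunk):
        # Stand-in for a heavy per-element computation.
        return [x * x + 1 for x in chunk]

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]            # split the work four ways
        with Pool(processes=4) as pool:
            partial_results = pool.map(transform, chunks)  # chunks processed in parallel
        total = sum(sum(part) for part in partial_results)
        print(total)

The same decomposition idea, applied at far finer granularity, is what an optical computer's programming model would have to express.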
In current programming techniques, the serial processing model creates several bottlenecks that make data transfer much slower. These bottlenecks would be eliminated by optical computer designs. Also, counting flops is not required for parallel programming algorithms.
Parallelism can be employed in many ways, and selecting the right approach for a given requirement takes good design skill. Several parallel control models exist; choosing the model that matches the requirements is essential.
The choice of programming language for parallel processing is another consideration. A new programming language that handles parallelism naturally is a requirement for the optical computer. Existing languages could also be adapted for use on an optical computer, but different concepts, such as matrix factorization, would have to be implemented.
Some iterative algorithms, such as Krylov subspace methods, are already well suited to parallel computing and minimize network latency costs on a parallel machine. A matrix-oriented approach would therefore be a satisfactory foundation for a programming model built from the ground up.
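For concreteness, the conjugate-gradient iteration below is one of the simplest Krylov subspace methods. Its cost is dominated by the matrix-vector product A @ p, which is exactly the kind of operation a parallel machine, or an optical vector-matrix multiplier, handles well; this is a generic textbook sketch, not an optics-specific algorithm:

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for symmetric positive-definite A (a Krylov method)."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p                    # the dominant, highly parallel step
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))       # close to np.linalg.solve(A, b)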
5.0 Advantages of Optical Computer
There are several advantages to optical computers. They are:
1. Immune to electromagnetic interference.
The optical computer exploits the properties of light to gain advantages over its electronic counterpart. One of the important advantages of optical computing is immunity to electromagnetic interference. Electromagnetic interference is a disturbance that affects an electrical circuit due to electromagnetic conduction or electromagnetic radiation emitted from an external source. The disturbance may interrupt, obstruct, or otherwise degrade or limit the effective performance of the circuit.
The information carriers in optical computers are photons, which are uncharged and do not interact with one another as readily as electrons. Consequently, light beams may pass through one another, in full-duplex operation for example, without distorting the information carried. In electronics, loops usually generate noise voltage spikes whenever the electromagnetic field through the loop changes, and high-frequency or fast-switching pulses cause interference in neighboring wires. Signals in adjacent fibers or optical integrated channels do not affect one another, nor do they pick up noise from loops. Since no electric circuits are used in an optical computer, there is no electromagnetic interference.
2. Free from electrical short-circuits.
Electronic circuits require electrical wires and connectors. Electrical wires suffer significant losses and can cause short-circuits, leading to severe damage to the computer and associated devices. Since an optical computer uses light waves, it has a safety advantage over its electronic counterpart, and optical circuits require no insulators.
3. Low Loss Transmission.
The metal wires used in electronic circuits incur losses when transmitting electric currents, due to factors such as temperature and wire resistance. Optical fiber instead carries laser light, which reduces these losses immensely compared with metal wires. Even so, small losses from attenuation in the fiber still affect long-distance communication.
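The loss of a fiber link follows the simple relation P_out = P_in * 10^(-alpha*L/10). With a typical silica-fiber attenuation of about 0.2 dB/km near 1550 nm, even a long span keeps a usable fraction of the power; the link length and launch power in this Python sketch are assumed examples:

    attenuation_db_per_km = 0.2     # typical silica fiber loss near 1550 nm
    length_km = 50.0                # assumed link length
    launch_power_mw = 1.0           # assumed launch power

    loss_db = attenuation_db_per_km * length_km
    received_mw = launch_power_mw * 10 ** (-loss_db / 10.0)
    print(f"Total loss: {loss_db:.1f} dB, received power: {received_mw:.3f} mW")  # 10 dB, 0.1 mW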
4. Large bandwidth (multiplexing capabilities).
Multiple frequencies (or colors) of light can travel through optical components without interfering with each other, allowing photonic devices to process multiple streams of data simultaneously. This creates a larger data-transfer bandwidth, which is another important advantage of the optical computer.
5. Ease of usage.
The optical computer requires few metal wires, which allows it to be made much smaller. Devices built from optical circuitry can be highly compact and lightweight, and this user-friendliness is combined with tremendous data-processing speed.
6. Parallelism.
Since light waves do not interfere, they have multiplexing capabilities that allow several channels to be communicated in parallel without interference, and signals can propagate within the same or adjacent fibers with essentially no cross-talk. Parallelism is the capability of a system to execute more than one operation simultaneously. Electronic computer architecture is, in general, sequential: instructions are executed in sequence, which makes parallelism with electronics difficult to construct, because as more processors are used, more time is lost in communication. On the other hand, using a simple optical design, an array of pixels can be transferred simultaneously, in parallel, from one point to another. To appreciate the difference between optical and electronic parallelism, consider an imaging system with as many as 1000 x 1000 independent points per mm² in the object plane, connected optically by a lens to a corresponding 1000 x 1000 points per mm² in the image plane. For this to be accomplished electrically, a million non-intersecting and properly isolated conduction channels per mm² would be required.
Parallelism, therefore, when associated with fast switching speeds, would result in staggering computational speeds. Assume, for example, there are only 100 million gates on a chip. Further, conservatively assume that each gate operates with a switching time of only 1 nanosecond (organic optical switches can switch at sub-picosecond rates, compared with maximum picosecond switching times for electronic switching). Such a system could perform more than 10¹⁷ bit operations per second. Compare this to the gigabit (10⁹) or terabit (10¹²) per second rates which electronics are either currently limited to, or hoping to achieve. In other words, a computation that might require one hundred thousand hours (more than 11 years) on a conventional computer could require less than one hour on an optical one.
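The arithmetic behind these figures is easy to reproduce; the Python sketch below uses the gate count and switching time assumed above, together with a terabit-class electronic rate for the comparison:

    gates_per_chip = 100e6      # 100 million gates, as assumed above
    switch_rate_hz = 1e9        # one switching event per nanosecond per gate

    optical_ops_per_s = gates_per_chip * switch_rate_hz
    print(f"{optical_ops_per_s:.0e} bit operations per second")     # 1e+17

    electronic_ops_per_s = 1e12     # terabit-class electronic rate, for comparison
    conventional_hours = 100_000
    hours_optical = conventional_hours * electronic_ops_per_s / optical_ops_per_s
    print(f"{hours_optical:.1f} hours on the optical system")       # roughly one hour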
7. Storage devices.
The optical computer uses holographic memory for data storage. Holographic memory offers immense storage density, on the order of a terabyte in a sugar-cube-sized crystal, and allows parallel access to data, giving higher data-transfer rates. It also has the advantage of a long lifetime of 50-100 years. Other optical storage devices, such as CDs, DVDs, and Blu-ray discs, are already common today.
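To put "a terabyte in a sugar-cube-sized crystal" in perspective, the short Python calculation below assumes a sugar cube occupies roughly 1 cm³, which is an illustrative approximation:

    capacity_bytes = 1e12            # one terabyte
    volume_cm3 = 1.0                 # assumed sugar-cube volume, ~1 cm^3
    bits_per_cm3 = capacity_bytes * 8 / volume_cm3
    print(f"{bits_per_cm3:.0e} bits per cubic centimetre")   # 8e+12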
6.0 Limiting Factors of Optical Computers
There are some limiting factors that keep optical computers out of the current commercial computing scene. They are:
1. Developing technology
The optical computer has been a dream for the past 15-20 years. Research has been carried out on the various devices required for optical computers, beginning with the optical transistor, the basic building block of the optical computer. Several transistor models have been proposed, but many did not meet the required size limits, and many were costly. Development and research on optical transistors continue, and transistor sizes have recently reached the scale of a single molecule. Further work is needed to connect these transistors together; an all-optical processor built from optical transistors is required for the optical computer to reach its full performance.
Storage devices are also still in their development stages. Holographic memory remains expensive, and reducing its cost is the next task for researchers. Optical fibers, logic-gate implementations, and optical switches are likewise active areas of research. A full-fledged optical computer therefore cannot be built today, and it may take decades before one is commercially available.
2. Fabrication Technology.
Mature fabrication technologies are not yet available for all-optical devices. Optoelectronic devices have some established fabrication processes, but all-optical device fabrication remains much more expensive, and no cheaper solutions are currently available.
3. Cost of devices and components.
The cost of various components is a limiting factor in meeting optical computer requirements. Data storage such as holographic memory costs more than magnetic memory, because the precision required for holographic data storage and retrieval is higher, which raises the price of the device. The cost of optical fiber is also troublesome: the implementation cost of an optical fiber cable is much higher. Other optical devices, such as precision lasers, are similarly expensive, which keeps the optical computer out of commercial stores.
4. Software requirements
The software and the parallel algorithms are harder to develop and implement. There is as yet no typical programming model or programming language for optical computer software, and the parallelism involved will push programming for optical computers to a new level of requirements.
7.0 Conclusion
An optical computer is a device that uses photons in visible light or infrared (IR) beams, rather than electric current, to perform digital computations. Optical interconnections and optical integrated circuits are immune to electromagnetic interference and free from electrical short circuits. Optics offers a higher bandwidth capacity than electronics, allowing more information to be carried and processed, because electronic communication along wires requires charging a capacitance that depends on wire length, whereas optical signals in optical fibers, optical integrated circuits, and free space do not, and are therefore faster. Optical data processing can also be done in parallel much more easily and cheaply than in electronics. Another advantage of light arises because photons are uncharged and do not interact with one another as readily as electrons, so light beams may pass through one another, in full-duplex operation for example, without distorting the information carried. In electronics, loops usually generate noise voltage spikes whenever the electromagnetic field through the loop changes, and high-frequency or fast-switching pulses cause interference in neighboring wires, whereas signals in adjacent fibers or optical integrated channels do not affect one another, nor do they pick up noise from loops. Finally, optical materials possess superior storage density and accessibility compared with magnetic materials. Optical fiber is already used by many telecommunications companies to transmit telephone signals, Internet communication, and cable television signals. The optical computer requires few metal wires, allowing it to be made much smaller; devices built from optical circuitry can be highly compact and lightweight, and this user-friendliness comes with tremendous data-processing speed.
8.0 Expected Future of Optical Computers
The field of optics research is developing day by day, even hour by hour. A huge number of patents have been granted for various device developments in optics, and many of these research results feed into the development of optical computer components.
The recent development of a laser transistor from a single molecule is a stepping stone toward the dream of optical computing. Research is under way on coupling such transistors together so that they can form integrated circuits. Various optical components already exist in developed form, including holographic memory, various optical gates, optical fiber technology, and various optoelectronic devices, all of which can be useful in the development of optical computers.
Future expectations for optical computers include the development of photonic laser transistors at molecular scales, which could be much faster than today's electronic transistors. The development of photonic transistors, and their fabrication at low cost, would make the optical computer readily available in the future. The progress and research in these fields show that this future is not so far away.
The dream of data rates of terabytes to exabytes per second can be achieved through the development of optical computers. This is a future that everyone is looking toward.
Optical computers can provide larger memory in less space, and that memory offers immense parallel data-processing capability. This is not even futuristic; it is current technology. But it reaches its full potential only when used in parallel, and that is achieved through optical computing.