Parallel Computing




I INTRODUCTION

Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem. In traditional serial computing, by contrast, a program is run on a single computer having a single Central Processing Unit (CPU): the problem is broken into a discrete series of instructions, the instructions are executed one after another, and only one instruction may execute at any moment in time.
Flynn's Classical Taxonomy: One of the more widely used classifications, in use since 1966, is Flynn's Taxonomy. It distinguishes multi-processor computer architectures according to how they can be classified along the two independent dimensions of Instruction and Data. Each of these dimensions can have only one of two possible states, Single or Multiple, giving the four classes below.
 

SISD: Single Instruction, Single Data
SIMD: Single Instruction, Multiple Data
MISD: Multiple Instruction, Single Data
MIMD: Multiple Instruction, Multiple Data

II USE OF PARALLEL COMPUTING
Parallel computing can be used to save time and money: dividing a task among more resources shortens its time to completion, with potential cost savings. Many problems are so large and complex that it is impractical or impossible to solve them on a single computer, especially when computer memory is limited; for example, web search engines and databases process millions of transactions of data per second. Parallel computing also provides concurrency: a single compute resource can only do one thing at a time, whereas multiple computing resources can be doing many things simultaneously. A small sketch of this idea follows.
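As an illustration only (the code below is not part of the original article), here is a minimal Python sketch of breaking one large task into chunks that worker processes handle concurrently. The sum_of_squares function, the chunk boundaries, and the choice of four workers are arbitrary placeholders.

# Minimal sketch: splitting one large task into chunks that run concurrently.
# The work function and chunk sizes here are illustrative placeholders.
from multiprocessing import Pool

def sum_of_squares(chunk):
    """Work done independently on one chunk of the problem."""
    start, stop = chunk
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    # Break the problem (summing squares up to 4,000,000) into 4 chunks.
    chunks = [(i, i + 1_000_000) for i in range(0, 4_000_000, 1_000_000)]
    with Pool(processes=4) as pool:
        partial_results = pool.map(sum_of_squares, chunks)  # chunks run in parallel
    print(sum(partial_results))  # combine the partial results

The same pattern (decompose, compute parts concurrently, combine) underlies most of the applications described later in this article.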
   
III CONCEPT AND TERMINOLOGY
John von Neumann first authored the general requirements for an electronic computer in his 1945 papers. Since then, virtually all computers have followed this basic design, which differs from earlier computers that were programmed through "hard wiring". The von Neumann architecture mainly consists of four components: 1) Memory, 2) Control Unit, 3) Arithmetic Logic Unit, 4) Input/Output.
Memory: Read/write, random access memory is used to store both program instructions and data.
Control unit: Control unit fetches instructions/data from memory, decodes the instructions and then sequentially coordinates operations to accomplish the programmed task.
Arithmetic Logic Unit: The arithmetic logic unit performs basic arithmetic and logical operations.
Input/output: Input/output is the interface to the human operator.
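To make the interplay of these components concrete, here is a toy Python sketch of the fetch-decode-execute cycle. The three-instruction "machine language" (LOAD, ADD, HALT) is invented purely for illustration and does not correspond to any real instruction set.

# Toy illustration of the von Neumann fetch-decode-execute cycle.
# The three-instruction "machine language" below is invented for this sketch.
memory = [
    ("LOAD", 5),    # put the constant 5 into the accumulator
    ("ADD", 7),     # add the constant 7
    ("HALT", None), # stop execution
]

accumulator = 0       # a one-register stand-in for the arithmetic logic unit
program_counter = 0   # the control unit's pointer into memory

while True:
    opcode, operand = memory[program_counter]  # fetch
    program_counter += 1
    if opcode == "LOAD":                       # decode and execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)  # prints 12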

Single Instruction, Single Data (SISD):

Single Instruction: Only one instruction stream is being acted on by the CPU during any one clock cycle.
Single Data: Only one data stream is being used as input during any one clock cycle. Execution is deterministic. This is the oldest type of computer and, historically, the most common. Examples: older generation mainframes, minicomputers and workstations, and single-processor PCs.

Single Instruction, Multiple Data (SIMD):

Single Instruction: All processing units execute the same instruction at any given clock cycle.
Multiple Data: Each processing unit can operate on a different data element. SIMD is best suited for specialized problems characterized by a high degree of regularity, such as graphics/image processing. Execution is synchronous (lockstep) and deterministic. It comes in two varieties: processor arrays and vector pipelines.
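As a rough analogy (not hardware SIMD itself), the Python/NumPy sketch below applies one operation to a whole array at once, in contrast with an element-by-element serial loop; the pixel values and the 1.5 brightness factor are arbitrary example numbers.

# Analogy for SIMD: NumPy applies the same operation ("instruction")
# to every element of an array ("multiple data") in one vectorized step.
import numpy as np

pixels = np.array([10, 20, 30, 40], dtype=np.float64)

# SISD-style: one element is processed per step of the loop.
brightened_serial = [p * 1.5 for p in pixels]

# SIMD-style: one expression operates on all elements at once.
brightened_vector = pixels * 1.5

print(brightened_serial)  # [15.0, 30.0, 45.0, 60.0]
print(brightened_vector)  # [15. 30. 45. 60.]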

Multiple Instructions, Single Data (MISD):

Multiple Instructions: Each processing unit operates on the data independently via separate instruction streams.
Single Data: A single data stream is fed into multiple processing units.

Multiple Instructions, Multiple Data (MIMD):

Multiple Instructions: Every processor may be executing a different instruction stream.
Multiple Data: Every processor may be working with a different data stream. Execution can be synchronous or asynchronous, deterministic or non-deterministic. A minimal sketch of this style follows.
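The Python sketch below (not from the original article) shows the MIMD style at the level of operating-system processes: two processes execute different instruction streams on different data at the same time. The two tasks themselves are illustrative only.

# Minimal MIMD-style sketch: two processes run different instruction streams
# on different data concurrently. The tasks are illustrative placeholders.
from multiprocessing import Process

def sum_range(n):
    print("sum:", sum(range(n)))

def count_vowels(text):
    print("vowels:", sum(ch in "aeiou" for ch in text))

if __name__ == "__main__":
    workers = [
        Process(target=sum_range, args=(1_000_000,)),                 # one stream of instructions and data
        Process(target=count_vowels, args=("parallel computing",)),   # a different stream of each
    ]
    for w in workers:
        w.start()   # both run at the same time
    for w in workers:
        w.join()    # wait for both to finish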
IV WHY USE PARALLEL COMPUTING

Save time and/or money: In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings. Parallel computers can be built from cheap, commodity components.
Solve larger problems: Many problems are so large and/or complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory.
Provide concurrency: A single compute resource can only do one thing at a time, whereas multiple computing resources can be doing many things simultaneously. For example, the Access Grid provides a global collaboration network where people from around the world can meet and conduct work "virtually".
Use of non-local resources: Using compute resources on a wide area network, or even the Internet, when local compute resources are scarce.
Limits to serial computing: Both physical and practical reasons pose significant constraints to simply building ever faster serial computers.
Transmission speeds: the speed of a serial computer is directly dependent upon how fast data can move through hardware. Absolute limits are the speed of light (about 30 cm/nanosecond) and the transmission limit of copper wire (about 9 cm/nanosecond). Increasing speeds therefore necessitate increasing proximity of processing elements (see the short calculation after this list).
Limits to miniaturization: processor technology is allowing an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.
Economic limitations: it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.
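To make the transmission-speed figures above concrete, here is a quick back-of-the-envelope calculation in Python. The 3 GHz clock rate is an assumed example value, not a figure from the article.

# Back-of-the-envelope check of the transmission-speed limit.
# Uses the figures quoted above: light ~30 cm/ns, copper ~9 cm/ns.
# The 3 GHz clock frequency is just an example value.
clock_hz = 3e9                  # assumed 3 GHz processor
cycle_ns = 1e9 / clock_hz       # nanoseconds per clock cycle (~0.33 ns)

light_cm_per_ns = 30.0
copper_cm_per_ns = 9.0

print(f"One cycle lasts {cycle_ns:.2f} ns")
print(f"Light travels {light_cm_per_ns * cycle_ns:.1f} cm per cycle")
print(f"A signal in copper travels {copper_cm_per_ns * cycle_ns:.1f} cm per cycle")
# At 3 GHz a signal in copper covers only about 3 cm per cycle,
# which is why processing elements must be placed ever closer together.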
V PRACTICAL APPLICATIONS OF PARALLEL COMPUTING

Parallel computing has been used to model difficult problems in many areas of science and engineering, such as:
1) Atmosphere, Earth, and Environment.
2) Physics: applied, nuclear, particle, condensed matter, high pressure, fusion, photonics.
3) Bioscience, Biotechnology, Genetics.
4) Chemistry, Molecular Sciences, Geology, Seismology.
5) Mechanical Engineering, from prosthetics to spacecraft.
6) Electrical Engineering, Circuit Design, Microelectronics.
7) Computer Science, Mathematics.
VI FUTURE OF PARALLEL COMPUTING

Parallel computing is expected to lead to other major changes in the industry. A great challenge is writing software that divides a program's work into chunks that can be spread across many processors; this may require new programming languages and could affect every piece of software written. Parallel computing may change the way computers work in the future, and how we use them for work and play.

VII CONCLUSION

Parallel computing is the technique of solving computational tasks by using multiple resources of different types simultaneously. It is considered to be “the high end of computing” and has been used to model difficult scientific, computational and engineering problems. It breaks a large problem down into smaller ones, which are solved concurrently.
