Itanium Processor

The Itanium brand extends Intel’s reach into the highest levels of computing, enabling powerful servers and high-performance workstations to address the increasing demands that the Internet economy places on e-business. The Itanium architecture is a unique combination of innovative features, such as explicit parallelism, predication, speculation and much more.

In addition to providing much more memory than today’s 32-bit designs, the 64-bit architecture changes the way the processor hardware interacts with code. The Itanium is geared toward increasingly power-hungry applications like e-commerce security, computer-aided design and scientific modeling.

Intel said the Itanium provides a 12-fold performance improvement over today’s 32-bit designs. Its “Explicitly Parallel Instruction Computing” (EPIC) technology enables it to handle parallel processing differently than previous architectures, most of which were designed 10 to 20 years ago. The technology reduces hardware complexity to better enable processor speed upgrades. Itanium processors contain “massive chip execution resources” that allow “breakthrough capabilities in processing terabytes of data”.

Here a sincere attempt is made to explore the architectural features and performance characteristics of the Itanium processor. A brief look at the system environment and its computing applications is also undertaken.

The Itanium processor family came about for several reasons, but the primary one was that processor architecture advances in RISC were no longer coming at the rate seen in the 1980s and 1990s. Yet customers continued to demand greater application performance.


The Itanium architecture achieves a more difficult goal than that of a processor designed with “price as no object”. Rather, it delivers near-peerless speed at a price that is sustainable for the mainstream corporate market.

The main challenges in today’s architecture are the following:

·        Sequential semantics of the ISA

·        Low instruction-level parallelism (ILP)

·        Unpredictable branches and memory dependencies

·        Ever-increasing memory latency

·        Limited resources (registers, memory addresses)

·        Procedure-call and loop-pipelining overhead

      A program is a sequence of instructions with an implied order of execution, so there are potential dependences from instruction to instruction. But high performance needs parallel execution, which in turn needs independent instructions. So independent instructions must be rediscovered by the hardware.

Consider the code:
Dependent                Independent
add r1=r2,r3             add r1=r2,r3
sub r4=r1,r2             sub r4=r11,r2
shl r5=r4,r8             shl r5=r14,r8
Here, though the compiler understands the parallelism within the instructions, it is unable to convey it to the hardware. So the hardware needs to rediscover the parallelism in the instruction stream.

          In present-day programs branches are frequent, so code blocks are small and the parallelism within a block is limited. Wider machines need more parallel instructions, so ILP across branches needs to be exploited. But when this is done, some instructions can fault because of wrong prediction. In short, branches are a barrier to code motion.
         Branch predictions are not perfect, and when wrong they lead to a performance penalty. The penalty is larger when the mis-predicted path contains memory operations (loads and stores) or floating-point operations. Also, an exception raised by a speculatively executed operation must be deferred, which demands more book-keeping hardware.
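Itanium’s control speculation, one of the features listed later, lets the compiler hoist a load above the branch that guards it while deferring any exception until the value is actually needed. A minimal sketch in IA-64 assembly (register names and labels here are illustrative, not from a real program):

```asm
        ld8.s   r6 = [r8]           // speculative load, hoisted above the branch;
                                    // a fault is deferred, not raised
        // ... other useful work overlaps the load latency ...
        cmp.eq  p1, p2 = r7, r0 ;;
 (p2)   br.cond skip                // the original guarding branch
        chk.s   r6, recover         // if the load had faulted, branch to recovery code
        add     r5 = r6, r9         // now safe to use the loaded value
skip:
```

The `chk.s` check is cheap on the common path; only a genuinely faulting load pays the cost of the recovery code.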

         Usually load instructions sit at the top of a chain of instructions, and ILP requires moving these loads early. Store instructions are also a barrier. Dynamic disambiguation has its limitations, for it requires additional hardware, and it adds to code size if done in software.
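This store barrier is what Itanium’s data speculation (listed later among the features) attacks: an advanced load executes before a possibly-aliasing store and is recorded in the ALAT, and a later check re-executes it only if a conflict occurred. A sketch with illustrative registers:

```asm
        ld8.a   r6 = [r8]           // advanced load, recorded in the ALAT
        // ... independent work scheduled here ...
        st8     [r9] = r7           // store that might alias [r8]
        chk.a   r6, recover         // if the store overlapped, redo the load
        add     r5 = r6, r4         // use the (now correct) value
```

If `[r8]` and `[r9]` never actually overlap, the check falls through and the load latency has been fully hidden.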

            Though the speed of the ALU, decoders and other execution units has increased with time, advances in memory technology have not kept pace. So even if decoding and execution of an instruction are fast, the memory fetch that precedes them takes time and slows program execution. The cache hierarchy, which reduces memory latency, has its limitations: it is managed asynchronously by the hardware and helps only if there is locality. It also consumes precious silicon area.
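Itanium’s memory hierarchy management support gives the compiler some explicit control over this hardware-managed hierarchy, through prefetches and locality hints on loads and stores. A sketch (registers illustrative):

```asm
        lfetch  [r10]               // prefetch the cache line at [r10] ahead of use
        ld8.nt1 r6 = [r8]           // load hinted as non-temporal at level 1,
                                    // so it does not displace useful L1 data
        st8.nta [r9] = r7           // store hinted non-temporal at all levels
```

These are hints only; a processor is free to ignore them, but a compiler that knows the access pattern can keep streaming data from polluting the caches.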

                Small register space creates false dependencies. Shared resources like condition flags and condition registers force dependencies between otherwise independent instructions. Floating-point resources are limited and inflexible.

             As modular programming is increasingly used, programs tend to be call-intensive. Register space is shared by caller and callee, so every call/return requires register save/restore.
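Itanium’s register stack, listed later among the features, removes most of this save/restore code: each procedure allocates its own frame of stacked registers with `alloc`, and the Register Stack Engine spills and fills them in the background. A sketch of a leaf procedure (register choices illustrative; `r8` is the conventional return-value register):

```asm
callee: alloc   r34 = ar.pfs, 2, 1, 0, 0    // frame: 2 inputs, 1 local, 0 outputs
        // the caller's outputs arrive as r32, r33 without any explicit copying,
        // and r34 is a fresh local holding the previous function state
        add     r8 = r32, r33               // compute the return value
        mov     ar.pfs = r34                // restore the caller's frame marker
        br.ret.sptk b0                      // return
```

No general registers are saved or restored explicitly; the renamed stacked frame does the work that push/pop sequences do on conventional architectures.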

           Though loops are common sources of good ILP, unrolling or pipelining is needed to exploit it, and the prologue/epilogue code this creates causes code expansion. So the applicability of these techniques is limited.
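Itanium’s software pipelining support answers this with rotating registers and stage predicates, so a pipelined loop needs no separate prologue or epilogue code. A two-stage copy loop, sketched with illustrative registers:

```asm
        mov     ar.lc = 99          // loop count: 100 iterations
        mov     ar.ec = 2           // epilog count: 2 pipeline stages to drain
        mov     pr.rot = 1 << 16 ;; // enable the first stage predicate, p16
loop:
 (p16)  ld4     r32 = [r8], 4      // stage 1: load, post-increment the pointer
 (p17)  st4     [r9] = r33, 4      // stage 2: store the value loaded last iteration
        br.ctop.sptk loop ;;        // rotate registers, shift stage predicates
```

Each `br.ctop` rotates the registers, so the value loaded into `r32` reappears as `r33` one iteration later; the predicates fill and drain the pipeline automatically, with no code expansion.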


The Itanium architecture answers these challenges with the following features:

·        Explicitly Parallel Instruction Semantics

·        Predication

·        Control/Data Speculation

·        Massive Resources (registers, memory)

·        Register Stack and its Engine

·        Memory Hierarchy Management Support

·        Software Pipelining Support

Here a program is a collection of parallel instruction groups. The groups have an implied order, but there is no dependence between instructions within a group. So high performance is obtained, as independent instructions are explicitly marked for parallel execution.

Dependent                Independent
add r1=r2,r3  ;;         add r1=r2,r3
sub r4=r1,r2  ;;         sub r4=r11,r2
shl r5=r4,r8  ;;         shl r5=r14,r8

Consider the above code. Here the compiler distinguishes dependent instructions from independent ones with the help of Itanium’s unique instruction set architecture: the stop (;;) marks the end of an instruction group, and its absence conveys independence. So there is no need for the hardware to “rediscover” the available parallelism, and the hardware can easily exploit it.

With predication, unpredictable branches are removed, and so mis-prediction penalties are eliminated. The compiler has a larger scope in which to find ILP, and basic-block size increases, as both the “then” and “else” paths are executed in parallel. Thus predication results in increased speed of execution.
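A small if-conversion example shows how predication removes the branch entirely (register names illustrative):

```asm
        // if (r4 == r5) r6 = r7 + r8; else r6 = r7 - r8;
        cmp.eq  p1, p2 = r4, r5 ;;  // set complementary predicates p1, p2
 (p1)   add     r6 = r7, r8         // "then" path, executes only if p1 is true
 (p2)   sub     r6 = r7, r8         // "else" path, executes only if p2 is true
```

Both paths issue in the same instruction group; the instruction whose predicate is false simply behaves as a no-op, so there is no branch to predict and no mis-prediction to pay for.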


                   IA-64’s design is fluid: its operational characteristics are entirely controlled by the compiler or the assembly programmer. It does not engage in any of the automatic speed-up mechanisms present in Intel’s IA-32 architecture. It is an obedient servant of the programmer, a machine capable of levels of true greatness directly commensurate with the programmer’s abilities. Because the EPIC architecture is new for both Intel and HP, software written for Intel’s Pentium and HP’s PA-RISC machines needs to execute on Itanium platforms. To this end, Intel included a small engine in the new design to execute programs written for Pentium platforms, and HP used software similar to the code-morphing method invented by Transmeta.

                   Itanium’s 64-bit architecture is crucial to Intel’s invasion of high-end workstations and servers. A 64-bit data path guarantees a vastly larger addressable memory space: the 32-bit architecture of Intel’s Pentium can directly access only up to 4 GB of memory, while a 64-bit architecture can directly address about 16 exabytes (roughly 1.8 × 10^19 bytes).
