Wikipedia

Instruction-level parallelism

Instruction-level parallelism (ILP) is the parallel or simultaneous execution of a sequence of instructions in a computer program. More specifically ILP refers to the average number of instructions run per step of this parallel execution.[2]: 5 

Atanasoff–Berry computer, the first computer with parallel processing[1]

Discussion

ILP must not be confused with concurrency. In ILP there is a single specific thread of execution of a process. Concurrency, by contrast, involves assigning multiple threads to a CPU core in strict alternation, or running them in true parallelism if there are enough CPU cores, ideally one core for each runnable thread.

There are two approaches to instruction-level parallelism: hardware and software.

The hardware approach works on dynamic parallelism, whereas the software approach works on static parallelism. Dynamic parallelism means the processor decides at run time which instructions to execute in parallel, whereas static parallelism means the compiler decides at compile time which instructions to execute in parallel.[3] For example, the Pentium processor exploits dynamic parallelism in hardware, while the Itanium processor relies on static parallelism determined by the compiler.

Consider the following program:

e = a + b
f = c + d
m = e * f

Operation 3 depends on the results of operations 1 and 2, so it cannot be calculated until both of them are completed. However, operations 1 and 2 do not depend on any other operation, so they can be calculated simultaneously. If we assume that each operation can be completed in one unit of time then these three instructions can be completed in a total of two units of time, giving an ILP of 3/2.
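The dependence reasoning above can be sketched as a small program that assigns each instruction its earliest possible step (an ASAP schedule) and divides the instruction count by the number of steps. This is an illustrative sketch, not from the article; the instruction strings and the `asap_steps` helper are invented for the example.

```python
# ASAP scheduling of a small dependence graph to estimate ILP.
# Each instruction maps to the set of instructions whose results it needs.
deps = {
    "e = a + b": set(),
    "f = c + d": set(),
    "m = e * f": {"e = a + b", "f = c + d"},
}

def asap_steps(deps):
    """Assign each instruction the earliest step at which its inputs are ready.

    Assumes instructions are listed in a valid (topological) order, so every
    dependency appears in the dict before its user.
    """
    step = {}
    for instr in deps:
        step[instr] = 1 + max((step[d] for d in deps[instr]), default=0)
    return step

steps = asap_steps(deps)
n_steps = max(steps.values())       # 2 time steps
ilp = len(deps) / n_steps           # 3 instructions / 2 steps = 1.5
print(steps)   # {'e = a + b': 1, 'f = c + d': 1, 'm = e * f': 2}
print(ilp)     # 1.5
```

The two independent additions land in step 1 and the multiply in step 2, reproducing the ILP of 3/2 computed above.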

A goal of compiler and processor designers is to identify and take advantage of as much ILP as possible. Ordinary programs are typically written under a sequential execution model where instructions execute one after the other and in the order specified by the programmer. ILP allows the compiler and the processor to overlap the execution of multiple instructions or even to change the order in which instructions are executed.

How much ILP exists in programs is very application specific. In certain fields, such as graphics and scientific computing, the amount can be very large. However, workloads such as cryptography may exhibit much less parallelism.

Micro-architectural techniques that are used to exploit ILP include:

  • Instruction pipelining, where the execution of multiple instructions can be partially overlapped.
  • Superscalar execution, VLIW, and the closely related explicitly parallel instruction computing concepts, in which multiple execution units are used to execute multiple instructions in parallel.
  • Out-of-order execution where instructions execute in any order that does not violate data dependencies. Note that this technique is independent of both pipelining and superscalar execution. Current implementations of out-of-order execution dynamically (i.e., while the program is executing and without any help from the compiler) extract ILP from ordinary programs. An alternative is to extract this parallelism at compile time and somehow convey this information to the hardware. Due to the complexity of scaling the out-of-order execution technique, the industry has re-examined instruction sets which explicitly encode multiple independent operations per instruction.
  • Register renaming, a technique used to avoid unnecessary serialization of program operations imposed by the reuse of registers by those operations, often used to enable out-of-order execution.
  • Speculative execution which allows the execution of complete instructions or parts of instructions before being certain whether this execution should take place. A commonly used form of speculative execution is control flow speculation where instructions past a control flow instruction (e.g., a branch) are executed before the target of the control flow instruction is determined. Several other forms of speculative execution have been proposed and are in use including speculative execution driven by value prediction, memory dependence prediction and cache latency prediction.
  • Branch prediction, which is used to avoid stalling while control dependencies are resolved. Branch prediction is used in conjunction with speculative execution.
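Of the techniques above, register renaming is easy to illustrate in a few lines: reusing an architectural register creates false (write-after-write or write-after-read) dependencies, and mapping each destination to a fresh physical register leaves only true read-after-write dependencies. The following is a minimal sketch with invented names (`rename`, the `pN` physical registers), not a description of any real processor's rename logic.

```python
# Map each architectural destination register to a fresh physical register,
# so that only true (read-after-write) dependencies remain.
from itertools import count

def rename(instructions):
    """instructions: list of (dest, src1, src2) architectural-register tuples."""
    fresh = (f"p{i}" for i in count())
    mapping = {}                          # architectural -> current physical
    renamed = []
    for dest, *srcs in instructions:
        # Sources read whichever physical register currently holds the value.
        phys_srcs = [mapping.get(s, s) for s in srcs]
        # The destination gets a brand-new physical register.
        mapping[dest] = next(fresh)
        renamed.append((mapping[dest], *phys_srcs))
    return renamed

prog = [("r1", "r2", "r3"),   # r1 = r2 + r3
        ("r4", "r1", "r5"),   # r4 = r1 + r5   (true dependency on r1)
        ("r1", "r6", "r7")]   # r1 = r6 + r7   (reuses r1: a false dependency)
print(rename(prog))
```

After renaming, the third instruction writes `p2` instead of overwriting `r1`, so it no longer conflicts with the first two and can issue in parallel with them.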

ILP is exploited by both the compiler and the hardware, but the compiler also exposes a program's inherent, implicit ILP to the hardware through compile-time optimizations. Optimization techniques for extracting available ILP include instruction scheduling, register allocation/renaming, and memory access optimization.

Dataflow architectures are another class of architectures where ILP is explicitly specified; for a recent example, see the TRIPS architecture.

In recent years, ILP techniques have been used to provide performance improvements in spite of the growing disparity between processor operating frequencies and memory access times (early ILP designs such as the IBM System/360 Model 91 used ILP techniques to overcome the limitations imposed by a relatively small register file). Presently, a cache miss penalty to main memory costs several hundred CPU cycles. While in principle it is possible to use ILP to tolerate even such memory latencies, the associated resource and power dissipation costs are disproportionate. Moreover, the complexity and often the latency of the underlying hardware structures result in reduced operating frequency, further reducing any benefits. Hence, the aforementioned techniques prove inadequate to keep the CPU from stalling for off-chip data. Instead, the industry is heading towards higher levels of parallelism that can be exploited through techniques such as multiprocessing and multithreading.[4]

See also

  • Data dependency
  • Memory-level parallelism (MLP)

References

  1. ^ "The History of Computing". mason.gmu.edu. Retrieved 2019-03-24.
  2. ^ Goossens, Bernard; Langlois, Philippe; Parello, David; Petit, Eric (2012). "PerPI: A Tool to Measure Instruction Level Parallelism". Applied Parallel and Scientific Computing. Lecture Notes in Computer Science. 7133: 270–281. doi:10.1007/978-3-642-28151-8_27. ISBN 978-3-642-28150-1. S2CID 26665479.
  3. ^ Hennessy, John L.; Patterson, David A. Computer Architecture: A Quantitative Approach.
  4. ^ "Reflections of the Memory Wall".

Further reading

  • Aiken, Alex; Banerjee, Utpal; Kejariwal, Arun; Nicolau, Alexandru (2016-11-30). Instruction Level Parallelism. Professional Computing (1 ed.). Springer. ISBN 978-1-4899-7795-3. (276 pages)

External links

  • "Approaches to Addressing the Memory Wall" (http://www.hpl.hp.com/techreports/92/HPL-92-132.pdf)
  • Instruction Level Parallelism (https://www.scribd.com/doc/33700101/Instruction-Level-Parallelism) at Scribd
