Implementation of high-level programming languages
A high-level programming language defines a programming
abstraction: the programmer expresses an algorithm using the language, and the
compiler must translate that program to the target language. Generally,
higher-level programming languages are easier to program in, but are less
efficient; that is, the target programs run more slowly. Programmers using a
low-level language have more control over a computation and can, in principle,
produce more efficient code. Unfortunately, lower-level programs are harder to
write and — worse still — less portable, more prone to errors, and harder to
maintain. Optimizing compilers include techniques to improve the performance of
generated code, thus offsetting the inefficiency introduced by high-level
abstractions.
Optimizations for computer architectures
The rapid evolution of computer architectures has also led to an
insatiable demand for new compiler technology. Almost all high-performance
systems take advantage of the same two basic techniques: parallelism and memory hierarchies. Parallelism can be found at
several levels: at the instruction level, where multiple operations
are executed simultaneously, and at the processor level, where different threads of
the same application are run on different processors. Memory hierarchies
are a response to the basic limitation that we can build very fast storage or
very large storage, but not storage that is both fast and large.
Parallelism. All modern microprocessors exploit instruction-level
parallelism. However, this parallelism can be hidden from the programmer.
Programs are written as if all instructions were executed in sequence; the
hardware dynamically checks for dependencies in the sequential instruction
stream and issues them in parallel when possible. In some cases, the machine
includes a hardware scheduler that can change the instruction ordering to
increase the parallelism in the program. Whether the hardware reorders the
instructions or not, compilers can rearrange the instructions to make
instruction-level parallelism more effective.
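To make the rearrangement concrete, here is a minimal sketch of how a compiler might pack independent instructions into groups that could issue in parallel. The instruction format (a destination register plus a set of source registers) and the register names are illustrative assumptions, not any particular machine's encoding.

```python
# A sketch of grouping independent instructions for parallel issue.
# Each instruction is (dest, sources); the format is hypothetical.

def schedule(instructions):
    """Greedily pack instructions into issue groups such that no
    instruction in a group depends on (or conflicts with) another
    instruction already placed in that group."""
    groups = []
    for dest, sources in instructions:
        placed = False
        if groups:
            group = groups[-1]
            # Independent only if there is no read-after-write,
            # write-after-write, or write-after-read conflict.
            independent = all(
                dest != d and dest not in s and d not in sources
                for d, s in group
            )
            if independent:
                group.append((dest, sources))
                placed = True
        if not placed:
            groups.append([(dest, sources)])
    return groups

# Two independent adds can share an issue group; the multiply that
# consumes their results must wait for the next group.
prog = [
    ("r1", {"a", "b"}),    # r1 = a + b
    ("r2", {"c", "d"}),    # r2 = c + d
    ("r3", {"r1", "r2"}),  # r3 = r1 * r2 (depends on both adds)
]
for i, g in enumerate(schedule(prog)):
    print("group", i, [d for d, _ in g])
```

Real instruction schedulers also model issue-width limits and operation latencies, but the dependence test above is the core of why a compiler may legally reorder the sequential stream.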
Memory Hierarchies. A memory hierarchy consists of several levels of
storage with different speeds and sizes, with the level closest to the
processor being the fastest but smallest. The average memory-access time of a
program is reduced if most of its accesses are satisfied by the faster levels
of the hierarchy. Both parallelism and the existence of a memory hierarchy
improve the potential performance of a machine, but they must be harnessed
effectively by the compiler to deliver real performance on an application.
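One classic way a compiler harnesses the memory hierarchy is loop interchange: reordering nested loops so array accesses walk memory in layout order. The sketch below shows the transformation on a row-by-row matrix; both orders compute the same sum, but on a real machine the interchanged version touches adjacent memory locations and so is served by the faster cache levels more often.

```python
# A sketch of loop interchange, a locality-improving transformation.
# The matrix is stored row by row, so the row-major loop visits
# elements contiguously in memory.

def sum_column_major(matrix):
    """Strided access: consecutive iterations touch different rows."""
    rows, cols = len(matrix), len(matrix[0])
    total = 0
    for j in range(cols):
        for i in range(rows):
            total += matrix[i][j]
    return total

def sum_row_major(matrix):
    """After interchange: consecutive iterations touch adjacent
    elements of the same row."""
    rows, cols = len(matrix), len(matrix[0])
    total = 0
    for i in range(rows):
        for j in range(cols):
            total += matrix[i][j]
    return total

m = [[1, 2, 3], [4, 5, 6]]
assert sum_column_major(m) == sum_row_major(m) == 21
```

The interchange is legal here because the loop iterations are independent; proving such independence is exactly the kind of analysis an optimizing compiler performs before transforming the code.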
Design of new computer architectures
In the early days of computer architecture design, compilers were
developed after the machines were built. That has changed. Since programming in
high-level languages is the norm, the performance of a computer system is
determined not only by its raw speed but also by how well compilers can exploit
its features. Thus, in modern computer architecture development, compilers are
developed in the processor-design stage, and compiled code, running on
simulators, is used to evaluate the proposed architectural features.
Program translations
While we normally think of compiling as a translation from a
high-level language to the machine level, the same technology can be applied to
translate between different kinds of languages.
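As a toy illustration of language-to-language translation, the sketch below re-emits an infix arithmetic expression in postfix (stack-machine) form. The mini-language is hypothetical and deliberately tiny: four operators, no parentheses.

```python
# A toy source-to-source translator: infix tokens to postfix tokens.
# Handles only +, -, *, / with the usual precedence; no parentheses.

def infix_to_postfix(tokens):
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    out, ops = [], []
    for tok in tokens:
        if tok in prec:
            # Emit pending operators of higher or equal precedence
            # before pushing the new one.
            while ops and prec[ops[-1]] >= prec[tok]:
                out.append(ops.pop())
            ops.append(tok)
        else:
            out.append(tok)  # operand
    while ops:
        out.append(ops.pop())
    return out

print(infix_to_postfix(["a", "+", "b", "*", "c"]))
# ['a', 'b', 'c', '*', '+']
```

The same front-end machinery (tokenizing, respecting precedence, re-emitting in a target notation) scales up to real translators between programming languages.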
Software productivity tools
Programs are arguably the most complicated engineering artifacts
ever produced; they consist of many details, every one of which must be correct
before the program will work completely. As a result, errors are rampant in
programs; errors may crash a system, produce wrong results, render a system
vulnerable to security attacks, or even lead to catastrophic failures in
critical systems. Testing is the primary technique for locating errors in
programs. An interesting and promising complementary approach is to use
data-flow analysis to locate errors statically (that is, before the program is
run). Data-flow analysis can find errors along all the possible execution paths,
and not just those exercised by the input data sets, as in the case of program
testing. Many of the data-flow-analysis techniques, originally developed for
compiler optimizations, can be used to create tools that assist programmers in
their software engineering tasks.
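To sketch how such a tool works, the example below runs a definite-assignment analysis over a small hand-built control-flow graph (the block and edge format is an assumption made for the illustration). The analysis flags a use of `x` that is unassigned along one path, even if testing never exercises that path.

```python
# A sketch of data-flow analysis for static error detection:
# "definite assignment" over a hypothetical control-flow graph.
# Each block lists the variables it assigns and the variables it uses.

blocks = {
    "entry": {"assign": set(),  "use": set()},
    "then":  {"assign": {"x"},  "use": set()},
    "else":  {"assign": set(),  "use": set()},   # forgets to set x
    "join":  {"assign": set(),  "use": {"x"}},   # uses x
}
edges = {"entry": ["then", "else"], "then": ["join"],
         "else": ["join"], "join": []}

def definite_assignment(blocks, edges, entry="entry"):
    """Forward analysis: a variable is definitely assigned at a block
    only if it is assigned on *every* path from the entry block, so
    joins take the intersection of their predecessors' facts."""
    all_vars = set().union(*(b["assign"] for b in blocks.values()))
    in_sets = {name: set(all_vars) for name in blocks}  # optimistic init
    in_sets[entry] = set()
    changed = True
    while changed:  # iterate to a fixed point
        changed = False
        for name in blocks:
            preds = [p for p, succs in edges.items() if name in succs]
            if not preds:
                continue
            new_in = set(all_vars)
            for p in preds:
                new_in &= in_sets[p] | blocks[p]["assign"]
            if new_in != in_sets[name]:
                in_sets[name] = new_in
                changed = True
    # Report uses of variables not definitely assigned on entry.
    return {name: blocks[name]["use"] - in_sets[name] for name in blocks}

errors = definite_assignment(blocks, edges)
print(errors["join"])  # → {'x'}: x may be unassigned on the else path
```

Because the analysis intersects facts at the join, it reports the error without ever executing the program; this is the same fixed-point machinery compilers use for optimizations such as constant propagation and dead-code elimination.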