The summary of useful weblinks is stored on the summary page


For this last week, apart from the two project presentations, I will revisit some of the points already covered on computer architecture and parallel computing, and give a brief taste of GPU computing.

In the eighties and nineties, vector computing was all the vogue: the big machines had processors that dealt efficiently with whole arrays of numbers, but they were hard to program, expensive, and hard to cool. Most computers, however, were and still are serial machines, meaning they have only one or a few cores.
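To make the distinction concrete, here is a toy sketch (my own illustration, not from the lecture) of the two programming styles: a serial loop that touches one element at a time, versus a "vector style" in which one operation is expressed over the whole array, the way a vector processor (or NumPy today) would consume it.

```python
# Two equivalent elementwise sums. A vector machine would execute the
# second form as one hardware array operation; in pure Python the
# comprehension only mimics the programming style, not the speed.
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]

# Serial style: an explicit loop over indices.
serial = []
for i in range(len(a)):
    serial.append(a[i] + b[i])

# Vector style: one whole-array expression, no visible index loop.
vector = [x + y for x, y in zip(a, b)]

print(serial == vector)  # True: both give [11.0, 22.0, 33.0, 44.0]
```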

The enemies of efficient high-performance computing include:

  • The lack of efficient algorithms - quite well under control by 2016 for serial computing.
  • High hardware costs for large-memory machines - drastically reduced with commodity parallel clusters. (Commodity means off-the-shelf mass-produced computers, either in boxes or cards in a special cabinet.) But!!! these led to new algorithmic challenges.
  • Parallel clusters lowered only hardware costs; if anything, they increased electrical power and cooling demands.

    GPU clusters can lower power/cooling costs, but further increase algorithmic demands.

  • Some of the important distinctions are:

    1. Shared versus distributed memory: before commodity clusters won the race, large-memory computers (SMP - shared-memory multiprocessors) with a limited number of processors became common. They needed software to coordinate the sharing, but it was relatively easy to write.
    2. Commodity clusters need parallel algorithms and software, as well as fast switches to communicate between the nodes. The communication language that won the fight is called MPI, the Message Passing Interface. There are example codes and links to help on MPI in my HPC lectures.
    3. Each node of the cluster can have several processors, so TAMNUN, for example, is really a combination of shared- and distributed-memory architectures. Communication within each node is faster than between nodes.
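The essence of the MPI style described above is explicit send/receive messaging between ranks. As a hedged analogy (real MPI code uses C/Fortran bindings or mpi4py and runs across nodes), here two threads and queues stand in for two ranks and the interconnect, just to show the send/receive pattern:

```python
# A sketch of MPI-style message passing: rank 0 ships work to rank 1
# and waits for the answer. Threads and queues are only an analogy
# for MPI processes and the network between nodes.
import threading
import queue

to_rank1 = queue.Queue()  # channel: rank 0 -> rank 1
to_rank0 = queue.Queue()  # channel: rank 1 -> rank 0
result = {}

def rank0():
    to_rank1.put([1, 2, 3, 4])      # analogous to MPI_Send
    result["sum"] = to_rank0.get()  # analogous to MPI_Recv (blocks)

def rank1():
    data = to_rank1.get()           # receive the work
    to_rank0.put(sum(data))         # send the partial result back

t0 = threading.Thread(target=rank0)
t1 = threading.Thread(target=rank1)
t0.start(); t1.start()
t0.join(); t1.join()
print(result["sum"])  # 10
```

Note that neither side can proceed past a receive until the matching send has happened - the same blocking semantics that make real MPI programs both correct and easy to deadlock.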

    But it gets more complicated. Each processor is like a PC. Of course, on a large parallel machine screens are attached to only a few nodes, called the master node(s); these are usually for the system administrator, and everyone else connects remotely. So why does TAMNUN have GPUs?

    The following is extracted from the NVIDIA homepage:

    The CPU (central processing unit) has often been called the brains of the PC. But increasingly, that brain is being enhanced by another part of the PC, viz. the GPU (graphics processing unit), which is its soul.

    All PCs have chips that render display images to monitors, but not all these chips are created equal. Intel's integrated graphics controller provides basic graphics that can handle only productivity applications like Microsoft PowerPoint, low-resolution video, and basic games.

    The GPU is in a class by itself: it goes far beyond basic graphics controller functions, and is a programmable and powerful computational device in its own right.

    The GPU's advanced capabilities were originally used primarily for 3D game rendering. But now those capabilities are being harnessed more broadly to accelerate computational workloads in areas such as financial modeling, cutting-edge scientific research and oil and gas exploration.

    In a recent BusinessWeek article, Insight64 principal analyst Nathan Brookwood described the unique capabilities of the GPU this way: "GPUs are optimized for taking huge batches of data and performing the same operation over and over very quickly, unlike PC microprocessors, which tend to skip all over the place."

    Architecturally, the CPU is composed of only a few cores with lots of cache memory that can handle a few software threads at a time. In contrast, a GPU is composed of hundreds of cores that can handle thousands of threads simultaneously. The ability of a GPU with 100+ cores to process thousands of threads can accelerate some software by 100x over a CPU alone. What's more, the GPU achieves this acceleration while being more power- and cost-efficient than a CPU.

    GPU-accelerated computing has now grown into a mainstream movement supported by the latest operating systems from Apple (with OpenCL) and Microsoft (using DirectCompute). The reason for this wide and mainstream acceptance is that the GPU is a computational powerhouse, and its capabilities are growing faster than those of the x86 CPU.
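The "same operation over huge batches" pattern in Brookwood's quote is what computer scientists call data parallelism. A toy sketch (my own assumed example, not NVIDIA's) of the idea: one identical function is applied to every element of a batch, so the work splits perfectly across workers - loosely what thousands of GPU threads do in hardware.

```python
# Data parallelism in miniature: the same operation on every element,
# spread across a pool of workers. Python threads only illustrate the
# pattern; a GPU would run thousands of such threads in hardware.
from concurrent.futures import ThreadPoolExecutor

def scale(x):
    # The single operation applied uniformly to every data element.
    return 2.0 * x

batch = list(range(8))

with ThreadPoolExecutor(max_workers=4) as pool:
    doubled = list(pool.map(scale, batch))

print(doubled)  # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
```

Because no element depends on any other, there is no communication between workers - exactly the property that lets a GPU keep hundreds of cores busy.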

    In today's PC, the GPU can now take on many multimedia tasks, such as accelerating Adobe Flash video, transcoding (translating) video between different formats, image recognition, and virus pattern matching. More and more, the really hard problems to solve are those with an inherently parallel nature, such as video processing, image analysis, and signal processing.

    The combination of a CPU with a GPU can deliver the best value of system performance, price, and power.

    end of extract

    Of course, GPUs are hard to program; in fact, they are just as hard as parallel machines, and for the same reasons. But let us look at some GPU examples.
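Before looking at real examples, the CUDA programming model itself can be sketched in a few lines. This is not real CUDA (which would be C with `__global__` kernels and a `<<<blocks, threads>>>` launch): it is a pure-Python emulation, with assumed function names, of the core idea that a kernel runs once per thread and each thread uses its index to pick the single element it works on.

```python
# A pure-Python sketch of the CUDA kernel/thread-index model.
# In real CUDA the thread id comes from blockIdx and threadIdx;
# here we simply pass it in, and a serial loop emulates the launch
# (the semantics, not the parallel speed).

def saxpy_kernel(tid, a, x, y, out):
    # Each "thread" computes exactly one element: out = a*x + y.
    out[tid] = a * x[tid] + y[tid]

def launch(kernel, n_threads, *args):
    # A GPU would start all n_threads at once; we loop serially.
    for tid in range(n_threads):
        kernel(tid, *args)

n = 5
x = [1.0] * n
y = [2.0] * n
out = [0.0] * n
launch(saxpy_kernel, n, 3.0, x, y, out)
print(out)  # [5.0, 5.0, 5.0, 5.0, 5.0]
```

The key design point carries over to real CUDA: the kernel contains no loop over the data; the loop is replaced by many threads, each identified only by its index.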

    Back to the index page

    For more information about the Computational Physics Group at the Technion - Israel Institute of Technology, see the Computational Physics Home Page.

    December, 2012