
Chapter 4 – Basic Communication Operations

Introduction to
Parallel Computing

George Karypis
Basic Communication Operations

Outline
Importance of Collective Communication Operations
One-to-All Broadcast
All-to-One Reduction
All-to-All Broadcast & Reduction
All-Reduce & Prefix-Sum
Scatter and Gather
All-to-All Personalized

Collective Communication Operations

They represent regular communication patterns that are performed by parallel algorithms.

Collective: they involve groups of processors.
Used extensively in most data-parallel algorithms.
The parallel efficiency of these algorithms depends on the efficient implementation of these operations.
They are equally applicable to distributed- and shared-address-space architectures.
Most parallel libraries provide functions to perform them.
They are extremely useful for “getting started” in parallel processing!

MPI Names

One-to-All Broadcast & All-to-One Reduction

Broadcast on a Ring Algorithm
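The ring algorithm can be sketched as a sequential simulation (an illustration, not the slide's pseudocode; the function and parameter names are my own). The source's message travels in both directions around the ring, so the informed set grows by one hop per direction per step. This sketch assumes a node may send to both neighbours in the same step; the single-port model of the textbook costs about one extra step for odd p.

```python
def ring_broadcast(p, source=0):
    """Simulate one-to-all broadcast on a p-node ring.

    The message spreads one hop in each direction per step, so after
    s steps every node within ring distance s of the source has it.
    Returns the number of communication steps until all p nodes do.
    """
    has_msg = [False] * p
    has_msg[source] = True
    steps = 0
    while not all(has_msg):
        holders = [i for i in range(p) if has_msg[i]]
        for i in holders:
            # Each holder forwards to both neighbours; this covers the
            # same nodes per step as two opposite-moving copies.
            has_msg[(i + 1) % p] = True
            has_msg[(i - 1) % p] = True
        steps += 1
    return steps
```

For even p the broadcast completes in p/2 steps, e.g. 4 steps on an 8-node ring.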

Reduction on a Ring Algorithm
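Reduction is the dual of the broadcast: partial results flow from both halves of the ring toward the destination, each hop folding in the local value. A minimal sketch for a sum reduction (names are illustrative):

```python
def ring_reduce_sum(values, dest=0):
    """Simulate all-to-one sum reduction on a ring: the mirror image
    of the ring broadcast, with partial sums travelling toward `dest`
    along the two arcs of the ring."""
    p = len(values)
    half = p // 2
    # The nodes clockwise and counter-clockwise of dest form two chains.
    right_arc = [(dest + k) % p for k in range(1, half + 1)]
    left_arc = [(dest - k) % p for k in range(1, p - half)]
    total = values[dest]
    for arc in (right_arc, left_arc):
        partial = 0
        for i in reversed(arc):   # the farthest node starts its chain
            partial += values[i]  # each hop adds the local value
        total += partial
    return total
```

The destination combines its own value with the two incoming partial sums; the step count is the length of the longer arc, roughly p/2.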

Broadcast on a Mesh

Broadcast on a Hypercube
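On a hypercube the broadcast doubles the informed set each step by sending across one dimension at a time, finishing in log p steps. A simulation sketch (illustrative names, not the slide's code):

```python
def hypercube_broadcast(d, source=0):
    """Simulate one-to-all broadcast on a 2^d-node hypercube: in step
    j every node that already has the message sends it across
    dimension j, so the informed set doubles each step and all nodes
    are reached in d = log2(p) steps."""
    p = 1 << d
    has_msg = [False] * p
    has_msg[source] = True
    for j in range(d):
        holders = [i for i in range(p) if has_msg[i]]
        for i in holders:
            has_msg[i ^ (1 << j)] = True  # neighbour across dimension j
    return has_msg
```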

Code for the Broadcast (Source: Root)
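The slide's code block did not survive the text conversion. The usual masked procedure for a hypercube broadcast with the root at node 0 can be rendered in Python, simulating every node sequentially (a sketch, not the slide's exact listing):

```python
def one_to_all_bc(d, data):
    """Hypercube broadcast with root 0, following the masked scheme:
    in step i only nodes whose bits below i are all zero are active;
    an active node with bit i clear sends to its partner across
    dimension i, which receives."""
    p = 1 << d
    msg = [None] * p
    msg[0] = data
    mask = p - 1
    for i in range(d - 1, -1, -1):
        mask ^= (1 << i)                  # clear bit i of the mask
        for my_id in range(p):
            if my_id & mask == 0:         # node is active in this step
                partner = my_id ^ (1 << i)
                if my_id & (1 << i) == 0 and msg[my_id] is not None:
                    msg[partner] = msg[my_id]   # "send" to partner
    return msg
```

For an arbitrary source, the same procedure is reused after relabelling: each node XORs its id with the root's id, which maps the root to node 0.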

Code for Broadcast (Arbitrary Source)

All-to-All Broadcast & Reduction

All-to-All Broadcast for Ring
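In the ring all-to-all broadcast, every node's piece circulates the whole ring: in each of p−1 steps a node forwards the piece it received in the previous step. A simulation sketch (illustrative names):

```python
def ring_all_to_all_bc(values):
    """Simulate all-to-all broadcast on a ring: in each of p-1 steps
    every node forwards the piece it received in the previous step to
    its right neighbour, so all p pieces visit all p nodes."""
    p = len(values)
    collected = [[v] for v in values]   # each node starts with its own piece
    in_flight = list(values)            # the piece each node forwards next
    for _ in range(p - 1):
        received = [None] * p
        for i in range(p):
            received[(i + 1) % p] = in_flight[i]
        for i in range(p):
            collected[i].append(received[i])
        in_flight = received
    return collected
```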

All-to-All Broadcast on a Mesh
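On a mesh the operation runs in two phases: an all-to-all broadcast within each row, then one within each column using the collected row data. A compressed sketch of the outcome of both phases (illustrative names; the per-step forwarding within a row or column works as in the ring version):

```python
def mesh_all_to_all_bc(grid):
    """Simulate all-to-all broadcast on an n x n mesh in two phases:
    a row-wise all-to-all broadcast, then a column-wise one over the
    collected row data; afterwards every node holds all p = n*n pieces."""
    n = len(grid)                       # grid[r][c] is node (r,c)'s piece
    # Phase 1: every node in row r collects all n pieces of that row.
    row_data = [list(row) for row in grid]
    # Phase 2: every node in column c collects the row collections of
    # all rows, i.e. all p pieces.
    result = [[None] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            pieces = []
            for rr in range(n):
                pieces.extend(row_data[rr])
            result[r][c] = pieces
    return result
```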

All-to-All Broadcast on a HCube

All-Reduce & Prefix-Sum
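Both operations run on a hypercube in log p pairwise-exchange steps. In step j each node swaps a running "message" sum with its partner across dimension j; the incoming sum always joins the message, but joins the local result only when the partner has a lower id, which yields the prefix sums. A simulation sketch (illustrative names):

```python
def hypercube_scan(values):
    """Simulate prefix-sum (scan) and all-reduce on a hypercube with
    p = len(values) nodes (p a power of two).  After log p exchange
    steps, `result` holds the inclusive prefix sums and every node's
    message holds the all-reduce total."""
    p = len(values)
    d = p.bit_length() - 1
    result = list(values)   # becomes the inclusive prefix sums
    msg = list(values)      # running sums exchanged between partners
    for j in range(d):
        incoming = [msg[i ^ (1 << j)] for i in range(p)]
        for i in range(p):
            msg[i] += incoming[i]          # always fold into the message
            if (i ^ (1 << j)) < i:         # partner is lower-numbered
                result[i] += incoming[i]   # fold into the local result
    return result, msg[0]                  # prefix sums, global total
```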

Scatter & Gather

Scatter Operation on HCube
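Scatter (one-to-all personalized communication) on a hypercube halves the root's buffer each step, pushing the half destined for the other sub-cube across the highest remaining dimension. A simulation sketch (illustrative names; it assumes the root is node 0 and node i receives piece i):

```python
def hypercube_scatter(d, data):
    """Simulate scatter on a 2^d-node hypercube: in each step the
    nodes holding data split their buffer in half and send the half
    destined for the other sub-cube across the current dimension,
    highest dimension first."""
    p = 1 << d
    buf = [None] * p
    buf[0] = list(data)                   # root holds all p pieces
    for j in range(d - 1, -1, -1):
        for i in range(p):
            if buf[i] is not None and i & (1 << j) == 0:
                half = len(buf[i]) // 2
                partner = i ^ (1 << j)
                buf[partner] = buf[i][half:]   # upper half crosses dim j
                buf[i] = buf[i][:half]
    return buf
```

Gather is the same pattern in reverse, with buffers doubling toward the root instead of halving away from it.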

All-to-All Personalized (Transpose)

All-to-all Personalized on a Ring

All-to-all Personalized on a Mesh

All-to-all Personalized on a HCube

All-to-all Personalized on a HCube: Improved Algorithm

Perform p − 1 point-to-point communication steps.

Processor i communicates with processor i XOR j during the jth communication step.
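Because the p − 1 values of j pair every node with every other node exactly once, each step moves exactly the one piece destined for that partner. A simulation sketch (illustrative names):

```python
def all_to_all_personalized(send):
    """Simulate the improved hypercube all-to-all personalized
    exchange: in step j (j = 1 .. p-1) node i exchanges a single piece
    with node i XOR j -- the piece destined for that partner.
    send[i][k] is the piece node i holds for node k; the result recv
    satisfies recv[i][k] == send[k][i]."""
    p = len(send)
    recv = [[None] * p for _ in range(p)]
    for i in range(p):
        recv[i][i] = send[i][i]           # a node's own piece stays local
    for j in range(1, p):
        for i in range(p):
            partner = i ^ j               # XOR pairing is symmetric
            recv[i][partner] = send[partner][i]
    return recv
```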

Complexities
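The slide's table did not survive the conversion. For reference, the standard hypercube run times (startup time $t_s$, per-word transfer time $t_w$, message size $m$ words, $p$ processors), as given in the accompanying textbook, are:

```latex
\begin{align*}
\text{One-to-all broadcast / all-to-one reduction:} &\quad T = (t_s + t_w m)\log p \\
\text{All-to-all broadcast / reduction:}             &\quad T = t_s \log p + t_w m (p-1) \\
\text{All-reduce, prefix-sum:}                       &\quad T = (t_s + t_w m)\log p \\
\text{Scatter, gather:}                              &\quad T = t_s \log p + t_w m (p-1) \\
\text{All-to-all personalized (improved):}           &\quad T = (t_s + t_w m)(p-1)
\end{align*}
```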
