Optimizing performance of parallel computations from a data communications perspective

Date of Completion

January 1997

Keywords

Computer Science

Degree

Ph.D.

Abstract

The performance of a high-performance parallel or distributed computation depends heavily on minimizing communication between the cooperating processes. Communication is the overhead cost of parallel computing: when concurrent processes, possibly residing on different processors, require access to the same data, delay is introduced because some processes must access that data remotely. In general, programmers of parallel computations have few tools to help them optimize performance, yet many of our results can be applied and automated within program development and run-time control. First, our research focuses on minimizing interprocess communication; we develop several algorithms to aid in the assignment of data to processes. Second, we define conditions for data migration between processes at run time; the algorithm presented can be used as part of a run-time optimization system. Third, we explore the Fork-Join construct with respect to performance as a function of data communication. Fourth, we develop a general framework for performance modeling of data-parallel cascading Fork-Joins. Lastly, we apply our data assignment and communication optimization techniques to parallel LU decomposition. We validate the accuracy of our performance model and show how it outperforms conventional approaches.
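To make the Fork-Join construct referred to above concrete, the sketch below (not taken from the dissertation) shows a minimal data-parallel fork-join in C with POSIX threads: a parent forks workers that each operate on a private partition of an array, then joins them and combines their partial results. The worker count, data size, and block partitioning are illustrative assumptions; the point is that the data assignment fixes how much is computed locally versus communicated and synchronized at the join.

    /* Minimal data-parallel Fork-Join sketch (illustrative, pthreads).
       Compile with: cc -pthread forkjoin.c */
    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4          /* assumed worker count */
    #define N 1000000           /* assumed problem size */

    static double data[N];

    struct task {
        int lo, hi;             /* half-open range [lo, hi) assigned to this worker */
        double partial;         /* partial result produced by this worker */
    };

    static void *worker(void *arg)
    {
        struct task *t = arg;
        double s = 0.0;
        for (int i = t->lo; i < t->hi; i++)
            s += data[i];
        t->partial = s;         /* written only by this worker; read after the join */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NWORKERS];
        struct task tasks[NWORKERS];
        double total = 0.0;

        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        /* Fork: assign each worker a contiguous block of the data. */
        for (int w = 0; w < NWORKERS; w++) {
            tasks[w].lo = w * (N / NWORKERS);
            tasks[w].hi = (w == NWORKERS - 1) ? N : (w + 1) * (N / NWORKERS);
            pthread_create(&tid[w], NULL, worker, &tasks[w]);
        }

        /* Join: the parent blocks until every worker finishes, then combines. */
        for (int w = 0; w < NWORKERS; w++) {
            pthread_join(tid[w], NULL);
            total += tasks[w].partial;
        }

        printf("total = %f\n", total);
        return 0;
    }

In this toy version the only communication is the per-worker partial result collected at the join; the dissertation's cascading Fork-Join model concerns how such communication and synchronization costs accumulate across repeated fork-join stages.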
