Message-Optimal and Latency-Optimal Termination Detection Algorithms for Arbitrary Topologies



An important problem in distributed systems is to detect the termination of a distributed computation. A computation is said to have terminated when all processes have become passive and all channels have become empty. We present two termination detection algorithms for non-diffusing distributed computations on arbitrary topologies; both are optimal in message complexity and detection latency. The first algorithm must be initiated along with the underlying computation. The second can be initiated at any time after the underlying computation has started, but it requires all channels to be first-in-first-out (FIFO). Our algorithms impose minimal execution overhead on the system while detecting termination as soon as possible.
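The termination condition stated above (all processes passive, all channels empty) can be illustrated with a toy snapshot check based on message counting: if every process is passive and the total number of messages sent equals the total received, no message is in transit, so the channels are empty. This is only a sketch of the condition itself, not the paper's detection algorithms; the `Process` record and its fields are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Process:
    passive: bool   # has the process become passive?
    sent: int       # messages sent so far by this process
    received: int   # messages received so far by this process

def terminated(processes):
    """Toy check of the termination condition: every process is
    passive and no messages are in transit (total sent equals
    total received, i.e. all channels are empty)."""
    all_passive = all(p.passive for p in processes)
    in_transit = sum(p.sent for p in processes) - sum(p.received for p in processes)
    return all_passive and in_transit == 0
```

Note that a real detection algorithm cannot observe this global state atomically; the challenge the paper addresses is obtaining a consistent verdict with optimal message complexity and latency while the computation runs.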