Distributed database systems implement a transaction commit protocol to ensure transaction atomicity. A commit protocol guarantees the uniform commitment of distributed transaction execution; that is, it ensures that all the participating sites agree on the final transaction outcome (commit or abort). Most importantly, this guarantee holds even in the presence of site or network failures. Over the last two decades, a variety of commit protocols have been proposed by database researchers. These include the classical two-phase commit (2PC) protocol, its variations such as Presumed Commit and Presumed Abort, nested 2PC, broadcast 2PC, and three-phase commit.

To achieve their functionality, these commit protocols typically require the exchange of multiple messages, in multiple phases, between the participating sites at which the distributed transaction executed. In addition, several log records are generated, some of which have to be "forced", that is, flushed to disk immediately. Due to these costs, commit processing can significantly increase transaction execution times, and the choice of commit protocol therefore becomes an important decision in the design of a distributed database system. Surprisingly, however, no systematic studies are available on the relative performance of these protocols with respect to their quantitative impact on transaction processing performance, making it difficult for designers to make an informed choice.

In this thesis, we address this lacuna for two kinds of distributed database systems: (1) Distributed OnLine Transaction Processing (OLTP) systems, and (2) Distributed Real-Time Database Systems (RTDB). A special feature of our study is that we consider both blocking commit protocols, wherein site or network failures can lead ..
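To make the costs described above concrete, the following is a minimal in-memory sketch of the classical 2PC protocol: a voting phase in which each participant force-writes a prepare record before voting, and a decision phase in which the coordinator's global decision is propagated and acknowledged. The `Coordinator` and `Participant` classes and the `force_log` call are illustrative assumptions, not the API of any real system; real implementations also handle timeouts and recovery, which are omitted here.

```python
class Participant:
    """One site at which part of the distributed transaction executed."""

    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit
        self.log = []  # stable log; each append models a forced (flushed) write

    def force_log(self, record):
        # In a real system this record is flushed to disk before replying,
        # which is one of the main costs of commit processing.
        self.log.append(record)

    def prepare(self):
        # Phase 1: force-write a prepare record, then vote.
        if self.will_commit:
            self.force_log("prepared")
            return "YES"
        self.force_log("abort")
        return "NO"

    def decide(self, decision):
        # Phase 2: apply the coordinator's global decision and acknowledge.
        self.force_log(decision)
        return "ACK"


class Coordinator:
    """Drives the two phases of 2PC across all participants."""

    def __init__(self, participants):
        self.participants = participants
        self.log = []

    def commit_transaction(self):
        # Phase 1: collect votes; commit only if every site votes YES.
        votes = [p.prepare() for p in self.participants]
        decision = "commit" if all(v == "YES" for v in votes) else "abort"
        self.log.append(decision)  # forced decision record
        # Phase 2: propagate the decision and collect acknowledgements.
        acks = [p.decide(decision) for p in self.participants]
        assert all(a == "ACK" for a in acks)
        return decision


# Usage: one dissenting vote forces a global abort at every site.
sites = [Participant("A"), Participant("B", will_commit=False)]
print(Coordinator(sites).commit_transaction())  # prints "abort"
```

Note how even this failure-free sketch incurs two message rounds (prepare/vote and decide/ack) and at least one forced log write per site, which is the overhead the performance comparison in this thesis quantifies.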