The field of computational geometry is concerned with the design and analysis of geometric algorithms. For such algorithms, correctness and efficiency proofs are constructed, or problems are proven to be hard when no correct and efficient algorithms exist. To do this, several assumptions are made about the input data for geometric algorithms. One of them is that this data is correct, with absolute certainty and infinite precision. In practical applications this is often not the case, and as a result the value of these theoretical guarantees may be questionable.

If we want to supply geometric algorithms with theoretical guarantees that are actually observed in practice, we have to loosen our assumptions about the input data to a more realistic level. Depending on the application, we may be confident that each data point, for example, is not more than some value epsilon away from its given position. We can then construct algorithms that are guaranteed to be correct and efficient as long as the input satisfies this weaker assumption. Furthermore, we can analyse how the imprecision in the input influences the accuracy of the output.

In this thesis, we present new algorithms for classical geometric problems that explicitly take data imprecision into account. These algorithms cannot produce the real answers to those problems, but instead produce information about the possible values that the answers can have. In several cases, this can be done without adding any extra cost to the asymptotic running times of the classical solutions. In some cases, though, computing this information is significantly more costly than using classical algorithms, and in some cases we prove that indeed no efficient algorithms exist.
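To make the idea of "possible values" concrete, here is a minimal sketch (not taken from the thesis) of the simplest such computation: if each of two points is known only to lie within distance epsilon of its recorded position, then their true distance is only known up to an interval, whose tight bounds are easy to derive. The helper `distance_bounds` is a hypothetical name introduced for illustration.

```python
import math

def distance_bounds(c1, c2, eps):
    """Tight bounds on the true distance between two imprecise points.

    Each point is assumed to lie somewhere in a disk of radius eps
    around its recorded position c1 or c2 (the epsilon model sketched
    above). The distance is minimised when both points move toward
    each other and maximised when both move apart, each by eps.
    """
    d = math.dist(c1, c2)          # distance between recorded positions
    lower = max(0.0, d - 2 * eps)  # disks may overlap, so clamp at 0
    upper = d + 2 * eps
    return lower, upper

# Example: recorded positions 5 apart, imprecision radius 1
# gives a true distance somewhere in [3, 7].
lo, hi = distance_bounds((0.0, 0.0), (5.0, 0.0), 1.0)
```

An imprecision-aware algorithm in this spirit returns such an interval (or, more generally, the set of attainable answers) instead of a single number; the thesis studies when this can be done within the asymptotic running time of the classical, precise-input solution.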