3 research outputs found

    New algorithmic developments in maximum consensus robust fitting

    In many computer vision applications, robustly estimating the parameters of a geometric model is a fundamental problem. Despite longstanding research efforts on robust model fitting, there remains significant scope for investigation. For a large number of geometric estimation tasks in computer vision, maximum consensus is the most popular robust fitting criterion. This thesis makes several contributions to algorithms for consensus maximization.

    Randomized hypothesize-and-verify algorithms are arguably the most widely used class of techniques for robust estimation, thanks to their simplicity. Though efficient, these randomized heuristic methods do not guarantee finding good maximum consensus estimates. To improve the randomized algorithms, guided sampling approaches have been developed. These methods take advantage of additional domain information, such as descriptor matching scores, to guide the sampling process; subsets of the data that are more likely to result in good estimates are prioritized for consideration. However, guided sampling is ineffective when good domain information is not available. This thesis tackles this shortcoming by proposing a new guided sampling algorithm based on the class of LP-type problems and Monte Carlo Tree Search (MCTS). The proposed algorithm relies on a fundamental geometric arrangement of the data to guide the sampling process. Specifically, we take advantage of the underlying tree structure of the maximum consensus problem and apply MCTS to efficiently search the tree. Empirical results show that the new guided sampling strategy outperforms traditional randomized methods.

    Consensus maximization also plays a key role in robust point set registration. A special case is the registration of deformable shapes. If the surfaces have the same intrinsic shape, their deformations can be described accurately by a conformal model. The uniformization theorem allows the shapes to be conformally mapped onto a canonical domain, wherein the shapes can be aligned using a Möbius transformation. The problem of correspondence-free Möbius alignment of two noisy and partially overlapping point sets can be tackled as a maximum consensus problem. Solving for the Möbius transformation can be approached by randomized voting-type methods, which offer no guarantee of optimality. Local methods such as Iterative Closest Point can be applied, but they assume a good initialization is given; otherwise they may converge to a bad local minimum. When a globally optimal solution is required, the literature has so far considered only brute-force search. This thesis contributes a new branch-and-bound algorithm that solves for the globally optimal Möbius transformation much more efficiently.

    So far, consensus maximization problems have been approached mainly by randomized algorithms, which are efficient but offer no analytical convergence guarantee. On the other hand, there exist exact algorithms that can solve the problem to global optimality. The global methods, however, are intractable in general due to the NP-hardness of consensus maximization. To fill the gap between the two extremes, this thesis contributes two novel deterministic algorithms that approximately optimize the maximum consensus criterion. The first method is based on non-smooth penalization supported by a Frank-Wolfe-style optimization scheme, and the second is based on the Alternating Direction Method of Multipliers (ADMM). Both of the proposed methods are capable of handling the non-linear geometric residuals commonly used in computer vision. As will be demonstrated, our proposed methods consistently outperform other heuristics and approximate methods.

    Thesis (Ph.D.) (Research by Publication) -- University of Adelaide, School of Computer Science, 201
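    To make the maximum consensus criterion concrete, the sketch below shows the baseline randomized hypothesize-and-verify loop the abstract contrasts against (RANSAC-style), illustrated on 2D line fitting. It is not the thesis's MCTS-guided or deterministic algorithms; the threshold and iteration count are illustrative assumptions.

```python
# Baseline hypothesize-and-verify consensus maximization (RANSAC-style) for a
# 2D line y = a*x + b. The inlier threshold and iteration budget are assumed
# values for illustration only.
import numpy as np

def consensus_size(params, points, threshold):
    """Count points whose residual to the line y = a*x + b is within threshold."""
    a, b = params
    residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
    return int(np.sum(residuals <= threshold))

def hypothesize_and_verify(points, threshold=0.1, iterations=1000, seed=None):
    """Randomly sample minimal subsets (2 points), fit a line, keep the largest consensus."""
    rng = np.random.default_rng(seed)
    best_params, best_score = None, -1
    for _ in range(iterations):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):          # degenerate minimal sample; skip it
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        score = consensus_size((a, b), points, threshold)
        if score > best_score:
            best_params, best_score = (a, b), score
    return best_params, best_score
```

    Because the sampling is purely random, a good model is found only with some probability; the guided sampling and deterministic methods described above aim to do better than this loop.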

    Optics and virtualization as data center network infrastructure

    Emerging cloud services have motivated a fresh look at the design of data center network infrastructure in multiple layers. To transfer the huge amounts of data generated by many data-intensive applications, the data center network has to be fast, scalable, and power-efficient. To support flexible and efficient sharing in cloud services, service providers deploy a virtualization layer as part of the data center infrastructure. This thesis explores the design and performance analysis of data center network infrastructure in both the physical network and the virtualization layer. On the physical network design front, we present a hybrid packet/circuit-switched network architecture that uses circuit-switched optics to augment traditional packet-switched Ethernet in modern data centers. We show that this technique has substantial potential to improve bisection bandwidth and application performance in a cost-effective manner. To push the adoption of optical circuits in real cloud data centers, we further explore and address the circuit control issues in shared data center environments. On the virtualization layer, we present an analytical study of the network performance of virtualized data centers. Using Amazon EC2 as an experimental platform, we quantify the impact of virtualization on network performance in a commercial cloud. Our findings provide valuable insights to cloud users moving legacy applications into the cloud and to service providers improving the virtualization infrastructure to support better cloud services.
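    One way such a hybrid network can be controlled, sketched below under assumptions not taken from the abstract, is to treat measured rack-to-rack traffic demand as edge weights and assign the limited optical circuits to the heaviest rack pairs via a maximum-weight matching, since each rack's optical port can carry only one circuit at a time; all remaining traffic stays on the packet-switched Ethernet. The demand figures and the matching formulation are illustrative.

```python
# Illustrative circuit-assignment sketch for a hybrid packet/circuit network:
# pick the set of rack pairs with the largest pending traffic such that each
# rack is used by at most one optical circuit (a maximum-weight matching).
# The topology and demand numbers below are assumptions for illustration.
import networkx as nx

def choose_circuits(demand):
    """demand: dict mapping (rack_a, rack_b) -> bytes of pending traffic."""
    g = nx.Graph()
    for (a, b), load in demand.items():
        g.add_edge(a, b, weight=load)
    # Each rack appears in at most one chosen circuit.
    return nx.max_weight_matching(g, weight="weight")

demand = {
    ("r1", "r2"): 9_000_000,
    ("r1", "r3"): 1_200_000,
    ("r2", "r4"): 800_000,
    ("r3", "r4"): 7_500_000,
}
print(choose_circuits(demand))   # e.g. {('r1', 'r2'), ('r3', 'r4')}
```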

    Developing the Fringe Routing Protocol

    An ISP-style network often has a particular traffic pattern, not typically seen in other networks, that is a direct result of the ISP’s purpose: to connect internal clients with a high-speed external link. Such a network is likely to consist of a backbone with the clients on one ‘side’ and one or more external links on the other. Most traffic on the network moves between an internal client and the external world via the backbone. But what about traffic between two clients of the ISP? Typical routing protocols will find the ‘best’ path between the two gateway routers at the edge of the client stub networks. As these routers connect the stubs to the ISP core, this route should be entirely within the ISP network. Ideally, from the ISP’s point of view, this traffic goes up to the backbone and back down again, but it may instead find another route along a redundant backup path. Don Stokes of Knossos Networks has developed a protocol to sit on the client fringes of this ISP style of network. It is based on the distance vector algorithm and is intended to be subordinate to the existing interior gateway protocol running on the ISP’s backbone. It manipulates the route cost calculation so that paths towards the backbone become very cheap and paths away from the backbone become expensive. This forces traffic in the preferred direction unless the backup-path ‘shortcut’ is very attractive or the backbone link has disappeared. The analysis and development of the Fringe Routing Protocol forms the content of this ME thesis.
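    The cost manipulation described above can be sketched as a Bellman-Ford style distance-vector computation in which a hop is cheap if it heads towards the backbone and expensive if it heads away from it. The multipliers and the toy topology below are assumptions for illustration, not the actual Fringe Routing Protocol specification.

```python
# Distance-vector sketch with direction-dependent link costs. The multipliers
# (1 and 100) and the toy fringe topology are illustrative assumptions.
TOWARDS_BACKBONE = 1      # assumed cheap weight for hops heading to the backbone
AWAY_FROM_BACKBONE = 100  # assumed penalty for hops heading away from it

def effective_cost(link_cost, hop_towards_backbone):
    return link_cost * (TOWARDS_BACKBONE if hop_towards_backbone else AWAY_FROM_BACKBONE)

def distance_vector(links, source, rounds=16):
    """links: list of directed entries (u, v, cost, hop_moves_towards_backbone)."""
    nodes = {u for u, *_ in links} | {v for _, v, *_ in links}
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    for _ in range(rounds):  # iterate the relaxation until it settles
        for u, v, cost, towards in links:
            candidate = dist[u] + effective_cost(cost, towards)
            if candidate < dist[v]:
                dist[v] = candidate
    return dist

# Toy fringe: client c1 can reach c2 via the backbone b or over a backup shortcut.
links = [
    ("c1", "b", 1, True), ("b", "c2", 1, False),   # up to the backbone, then down
    ("c1", "c2", 5, False),                        # redundant backup shortcut
]
print(distance_vector(links, "c1"))  # backbone path costs 101, shortcut 500
```

    With these weights the up-and-down path through the backbone wins unless the shortcut is extremely attractive or the backbone link disappears, mirroring the behaviour the abstract describes.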