21 research outputs found

    Interactions entre les Cliques et les Stables dans un Graphe

    Get PDF
    This thesis is concerned with different types of interactions between cliques and stable sets, two very important objects in graph theory, as well as with the connections between these interactions. First, we study the classical problem of graph coloring, which can be stated in terms of partitioning the vertices of the graph into stable sets. We present a coloring result for graphs with no triangle and no induced cycle of even length at least six. Secondly, we study the Erdős-Hajnal property, which asserts that the maximum size of a clique or a stable set is polynomial (instead of logarithmic, as in random graphs). We prove that the property holds, for every k, for graphs with no induced path on k vertices and no induced complement of such a path. Then, we study the Clique-Stable Set Separation, a less well-known problem. The question concerns the order of magnitude of the number of cuts needed to separate all the cliques from all the stable sets. This notion was introduced by Yannakakis when he studied extended formulations of the stable set polytope in perfect graphs. He proved that a quasi-polynomial number of cuts always suffices, and he asked whether a polynomial number of cuts could suffice. Göös has recently given a negative answer, but the question remains open for restricted classes of graphs, in particular for perfect graphs. We prove that a polynomial number of cuts is enough for random graphs and for several hereditary classes; to this end, some tools developed in the study of the Erdős-Hajnal property turn out to be very helpful. We also establish the equivalence between the Clique-Stable Set Separation problem and two other statements: the generalized Alon-Saks-Seymour conjecture and the Stubborn Problem, a Constraint Satisfaction Problem.
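
    To make the Clique-Stable Set Separation notion concrete, the following minimal sketch (not taken from the thesis) brute-forces whether a given family of cuts separates every clique from every disjoint stable set of a tiny graph; the example graph, the all-subsets cut family, and the function names are illustrative assumptions.

```python
from itertools import combinations

def is_clique(adj, s):
    return all(v in adj[u] for u, v in combinations(s, 2))

def is_stable(adj, s):
    return all(v not in adj[u] for u, v in combinations(s, 2))

def all_sets(adj, predicate):
    """Enumerate every vertex set satisfying predicate (brute force, tiny graphs only)."""
    vertices = list(adj)
    return [set(s) for r in range(1, len(vertices) + 1)
            for s in combinations(vertices, r) if predicate(adj, s)]

def is_cs_separator(adj, cuts):
    """A cut is a vertex subset B; it separates clique C from stable set S
    if C lies inside B and S avoids B.  The family `cuts` is a CS-separator
    if every disjoint clique/stable-set pair is separated by some cut."""
    cliques = all_sets(adj, is_clique)
    stables = all_sets(adj, is_stable)
    for c in cliques:
        for s in stables:
            if c & s:
                continue  # they intersect, so nothing needs separating
            if not any(c <= b and not (s & b) for b in cuts):
                return False
    return True

# Toy example: the 4-cycle 0-1-2-3-0 with the family of all 2^4 cuts.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
cuts = [frozenset(b) for r in range(5) for b in combinations(range(4), r)]
print(is_cs_separator(adj, cuts))  # True: the full cut family trivially separates
```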

    Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.

    Full text link
    This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which, in addition to navigation, provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline, which produces the first-ever 3D photomosaics using a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed from a Doppler sensor and visual features detected in optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments. We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and, for some applications, an imaging sonar.
    PhD. Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
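
    The abstract describes representing sparse Doppler point clouds as planar features; purely as an illustration of the underlying geometric primitive (not the author's factor-graph pipeline), the sketch below fits a plane to a small synthetic 3D point cloud by least squares via SVD. The synthetic data and function names are assumptions.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal).
    The normal is the right singular vector of the centered points
    associated with the smallest singular value."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Synthetic sparse returns: noisy samples of the plane z = 0.1x - 0.2y + 3.
rng = np.random.default_rng(0)
xy = rng.uniform(-5, 5, size=(200, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 3 + rng.normal(0, 0.02, 200)
cloud = np.column_stack([xy, z])

centroid, normal = fit_plane(cloud)
print("centroid:", centroid.round(3))
print("normal:", normal.round(3))  # up to sign, proportional to (0.1, -0.2, -1)
```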

    36th International Symposium on Theoretical Aspects of Computer Science: STACS 2019, March 13-16, 2019, Berlin, Germany

    Get PDF

    Software integration in mobile robotics, a science to scale up machine intelligence

    Get PDF
    The present work tackles integration in mobile robotics. Integration is often considered to be a mere technique, unworthy of scientific investigation. On the contrary, we show that integrating capabilities in a mobile robot raises new questions that the parts alone do not feature. These questions reflect the structure of the application and the physics of the world. We also show that a successful integration process transforms the parts themselves and makes it possible to scale up mobile-robot intelligence in real-world applications. In Chapter 2 we present the hardware. In Chapter 3, we show that building a low-level control architecture that accounts for the mechanical and electronic reality of the robot improves performance and makes it possible to integrate a large number of sensors and actuators. In Chapter 4, we show that globally optimising mechatronic parameters, considering the robot as a whole, makes it possible to implement SLAM using an inexpensive sensor with a low processor load. In Chapter 5, we show that, based on the output of the SLAM algorithm, we can combine infrared proximity sensors and vision to detect objects and build a semantic map of the environment. We show how to find free paths for the robot and how to create a dual geometric-symbolic representation of the world. In Chapter 6, we show that the nature of the scenarios influences the implementation of a task-planning algorithm and changes its execution properties. All these chapters contribute results that together prove that integration is a science. In Chapter 7, we show that combining these results improves the state of the art in a difficult application: autonomous construction in unknown environments with scarce resources. This application is interesting because it is challenging at multiple levels. For low-level control, manipulating objects in the real world to build structures is difficult. At the level of perception, the fusion of multiple heterogeneous inexpensive sensors is not trivial, because these sensors are noisy and the noise is non-Gaussian. At the level of cognition, reasoning about elements of an unknown world in real time on a miniature robot is demanding. Building this application upon our other results proves that integration makes it possible to scale up machine intelligence, because this application exhibits intelligence beyond the state of the art while combining only basic components that are individually slightly behind the state of the art.
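
    Chapter 5 mentions finding free paths from the robot's map; as a generic illustration of that step (not the thesis software), here is a breadth-first search for a collision-free path on a small occupancy grid. The grid and function names are made up.

```python
from collections import deque

def free_path(grid, start, goal):
    """BFS over a 2D occupancy grid (0 = free, 1 = occupied).
    Returns a list of cells from start to goal, or None if no free path exists."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    parent = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk back through parents
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(free_path(grid, (0, 0), (3, 3)))
```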

    Graph-Based Approaches to Protein Structure Comparison - From Local to Global Similarity

    Get PDF
    The comparative analysis of protein structure data is a central aspect of structural bioinformatics. Drawing upon structural information allows the inference of function for unknown proteins even in cases where no apparent homology can be found on the sequence level. Regarding the function of an enzyme, the overall fold topology might be less important than the specific structural conformation of the catalytic site or the surface region of a protein, where the interaction with other molecules, such as binding partners, substrates and ligands, occurs. Thus, a comparison of these regions is especially interesting for functional inference, since structural constraints imposed by the demands of the catalyzed biochemical function make them more likely to exhibit structural similarity. Moreover, the comparative analysis of protein binding sites is of special interest in pharmaceutical chemistry, in order to predict cross-reactivities and gain a deeper understanding of the catalysis mechanism. From an algorithmic point of view, the comparison of structured data, or, more generally, complex objects, can be attempted based on different methodological principles. Global methods aim at comparing structures as a whole, while local methods transfer the problem to multiple comparisons of local substructures. In the context of protein structure analysis, it is not a priori clear which strategy is more suitable. In this thesis, several conceptually different algorithmic approaches, based on local, global and semi-global strategies, have been developed for the task of comparing protein structure data, more specifically protein binding pockets. The use of graphs for the modeling of protein structure data has a long-standing tradition in structural bioinformatics. Recently, graphs have been used to model the geometric constraints of protein binding sites. The algorithms developed in this thesis are based on this modeling concept; hence, from a computer scientist's point of view, they can also be regarded as global, local and semi-global approaches to graph comparison. The developed algorithms were mainly designed with the aim of allowing a more approximate comparison of protein binding sites, in order to account for the molecular flexibility of the protein structures. A main motivation was to allow for the detection of more remote similarities, which are not apparent when using more rigid methods. Subsequently, the developed approaches were applied to different problems typically encountered in the field of structural bioinformatics in order to assess and compare their performance and suitability for different problems. Each of the approaches developed during this work was capable of improving upon the performance of existing methods in the field. Another major aspect of the experiments was the question of which methodological concept (local, global, or a combination of both) offers the most benefits for the specific task of protein binding site comparison, a question that is addressed throughout this thesis.
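
    One standard, generic way to compare two labelled graphs such as binding-pocket models is to search for a maximum common subgraph via a maximum clique in their modular product. The brute-force sketch below illustrates that idea on toy pockets; it is not one of the algorithms developed in this thesis, and all labels and names are assumptions.

```python
from itertools import combinations

def modular_product(g1, g2):
    """Nodes are label-compatible vertex pairs; two pairs are adjacent iff
    the underlying vertices are both adjacent or both non-adjacent."""
    nodes = [(u, v) for u in g1["labels"] for v in g2["labels"]
             if g1["labels"][u] == g2["labels"][v]]
    adj = {n: set() for n in nodes}
    for (u1, v1), (u2, v2) in combinations(nodes, 2):
        if u1 == u2 or v1 == v2:
            continue
        e1 = frozenset((u1, u2)) in g1["edges"]
        e2 = frozenset((v1, v2)) in g2["edges"]
        if e1 == e2:
            adj[(u1, v1)].add((u2, v2))
            adj[(u2, v2)].add((u1, v1))
    return adj

def max_clique(adj):
    """Brute-force maximum clique (fine for tiny product graphs)."""
    nodes = list(adj)
    for r in range(len(nodes), 0, -1):
        for cand in combinations(nodes, r):
            if all(b in adj[a] for a, b in combinations(cand, 2)):
                return set(cand)
    return set()

# Two toy "pockets": vertices carry pharmacophore-like labels.
pocket_a = {"labels": {1: "donor", 2: "acceptor", 3: "hydrophobic"},
            "edges": {frozenset((1, 2)), frozenset((2, 3))}}
pocket_b = {"labels": {"x": "donor", "y": "acceptor", "z": "hydrophobic"},
            "edges": {frozenset(("x", "y")), frozenset(("y", "z"))}}

matching = max_clique(modular_product(pocket_a, pocket_b))
print(matching)  # a correspondence covering all three labelled vertices
```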

    Privacy and spectral analysis of social network randomization

    Get PDF
    Social networks are of significant importance in various application domains. Understanding the general properties of real social networks has gained much attention due to the proliferation of networked data. Many applications of networks such as anonymous web browsing and data publishing require relationship anonymity due to the sensitive, stigmatizing, or confidential nature of the relationship. One general approach for this problem is to randomize the edges in true networks, and only release the randomized networks for data analysis. Our research focuses on the development of randomization techniques such that the released networks can preserve data utility while preserving data privacy. Data privacy refers to the sensitive information in the network data. The released network data after a simple randomization could incur various disclosures including identity disclosure, link disclosure and attribute disclosure. Data utility refers to the information, features, and patterns contained in the network data. Many important features may not be preserved in the released network data after a simple randomization. In this dissertation, we develop advanced randomization techniques to better preserve data utility of the network data while still preserving data privacy. Specifically we develop two advanced randomization strategies that can preserve the spectral properties of the network or can preserve the real features (e.g., modularity) of the network. We quantify to what extent various randomization techniques can protect data privacy when attackers use different attacks or have different background knowledge. To measure the data utility, we also develop a consistent spectral framework to measure the non-randomness (importance) of the edges, nodes, and the overall graph. Exploiting the spectral space of network topology, we further develop fraud detection techniques for various collaborative attacks in social networks. Extensive theoretical analysis and empirical evaluations are conducted to demonstrate the efficacy of our developed techniques.
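
    As a simple, generic baseline for edge randomization (not the dissertation's spectrum-preserving strategy), the sketch below applies degree-preserving double edge swaps and then compares the leading adjacency eigenvalues before and after, showing how much a naive randomization lets the spectrum drift. The graph and parameters are illustrative assumptions.

```python
import random
import numpy as np

def double_edge_swaps(edges, n_swaps, seed=0):
    """Degree-preserving randomization: repeatedly pick two edges (a,b), (c,d)
    and rewire them to (a,d), (c,b) when that creates no loop or duplicate."""
    rng = random.Random(seed)
    edge_set = {frozenset(e) for e in edges}
    for _ in range(n_swaps):
        (a, b), (c, d) = [tuple(e) for e in rng.sample(list(edge_set), 2)]
        if len({a, b, c, d}) < 4:
            continue
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if new1 in edge_set or new2 in edge_set:
            continue
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {new1, new2}
    return [tuple(e) for e in edge_set]

def top_eigenvalues(edges, n, k=3):
    """Largest k eigenvalues of the (symmetric) adjacency matrix."""
    adj = np.zeros((n, n))
    for u, v in edges:
        adj[u, v] = adj[v, u] = 1
    return np.sort(np.linalg.eigvalsh(adj))[-k:]

# Small random graph on 30 nodes.
rng = random.Random(1)
n = 30
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < 0.15]
randomized = double_edge_swaps(edges, n_swaps=200)

print("original:  ", top_eigenvalues(edges, n).round(3))
print("randomized:", top_eigenvalues(randomized, n).round(3))
```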

    Multiscale imaging of endothelial cell guidance

    Full text link
    Angiogenesis, the process through which new blood vessels are formed, relies on coordinated endothelial cell behaviours, regulated by key signalling pathways such as the vascular endothelial growth factor (VEGF) and integrin pathways. In vitro study of endothelial cell guidance at multiple scales is vital to understand how different environmental cues are integrated at the subcellular level. The aim of this thesis was to develop high-content imaging methods to capture temporal information at the cellular, subcellular and molecular scale in controlled microenvironments. To examine the hypothesis that persistent endothelial cell migration is directed by soluble and surface-bound gradients, a method was developed to create steady soluble or immobilised gradients on radiofrequency-plasma-polymerised surfaces that support endothelial cell attachment and migration. A novel method of imaging single-molecule protein adsorption by Total Internal Reflection Fluorescence Microscopy (TIRF-M) was developed. This new single-molecule counting method may have future applications, particularly in the study of how proteins and cells interact with surfaces at the molecular scale. Endothelial cells should not be considered homogeneous cell populations, as they take on different roles during new vessel growth. Single-cell live-cell imaging was developed to capture the heterogeneous nature of endothelial cell migration directed by a soluble gradient. Statistical methods for analysing directed cell motion were evaluated. Both circular statistics and Hotelling's T² provided robust tests for evaluating the effect of a chemical gradient on endothelial cell migration. It was surprising to discover that a soluble VEGF165 gradient on its own was not sufficient to direct endothelial cell migration, requiring synergy with a sphingosine-1-phosphate gradient to elicit optimal responses. The Rho GTPases are thought to coordinate cell surface signalling with remodelling of the cytoarchitecture for directed endothelial cell motion. A Förster resonance energy transfer (FRET) probe (Raichu) was used to visualise Rho GTPase activity in live endothelial cells at subcellular scales in real time using fluorescence lifetime imaging and intensity-based measurements. A fluorescent protein toolbox was developed to quantify spectral bleed-through. A new quantification method has extended the resolution of live-cell FRET measurements by statistical modelling of spectral bleed-through and biological noise.
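
    The abstract cites Hotelling's T² as a statistic for directed migration; the sketch below is a generic one-sample Hotelling's T² test on simulated 2D per-cell displacements (not the thesis data or code), testing whether the mean displacement differs from zero.

```python
import numpy as np
from scipy.stats import f

def hotelling_t2_one_sample(x, mu0=None):
    """One-sample Hotelling T^2 test: is the mean of x (n x p) equal to mu0?
    T^2 = n (xbar - mu0)' S^{-1} (xbar - mu0); under H0,
    (n - p) / (p (n - 1)) * T^2 follows an F(p, n - p) distribution."""
    n, p = x.shape
    mu0 = np.zeros(p) if mu0 is None else mu0
    diff = x.mean(axis=0) - mu0
    cov = np.cov(x, rowvar=False)           # sample covariance (n - 1 denominator)
    t2 = n * diff @ np.linalg.solve(cov, diff)
    f_stat = (n - p) / (p * (n - 1)) * t2
    p_value = f.sf(f_stat, p, n - p)
    return t2, f_stat, p_value

# Simulated per-cell net displacements (arbitrary units): a weak +x drift.
rng = np.random.default_rng(2)
displacements = rng.normal(loc=[3.0, 0.0], scale=10.0, size=(40, 2))

t2, f_stat, p_value = hotelling_t2_one_sample(displacements)
print(f"T^2 = {t2:.2f}, F = {f_stat:.2f}, p = {p_value:.4f}")
```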

    Dynamic Trees: A Hierarchical Probabilistic Approach to Image Modelling

    Get PDF
    Institute for Adaptive and Neural Computation
    This work introduces a new class of image model which we call dynamic trees, or DTs. A dynamic tree model specifies a prior over structures of trees, each of which is a forest of one or more tree-structured belief networks (TSBNs). In the literature, standard tree-structured belief network models were found to produce “blocky” segmentations when naturally occurring boundaries within an image did not coincide with those of the subtrees in the rigid fixed structure of the network. Dynamic trees have a flexible architecture which allows the structure to vary to accommodate configurations where the subtree and image boundaries align, and experimentation with the model showed significant improvements. They are also hierarchical in nature, allowing a multi-scale representation, and are constructed within a well-founded Bayesian framework. For large models the number of tree configurations quickly becomes intractable to enumerate over, presenting a problem for exact inference. Techniques such as Gibbs sampling over trees are considered, and search using simulated annealing finds high posterior probability trees on synthetic 2-d images generated from the model. However, simulated annealing and sampling techniques are rather slow. Variational methods are applied to the model in an attempt to approximate the posterior by a simpler tractable distribution, and the simplest of these techniques, mean field, found solutions comparable to simulated annealing on the order of 100 times faster. This increase in speed goes a long way towards making real-time inference in the dynamic tree viable. Variational methods have the further advantage that, by attempting to model the full posterior distribution, it is possible to gain an indication as to the quality of the solutions found. An EM-style update based upon mean field inference is derived, and the learned conditional probability tables (CPTs, describing state transitions between a node and its parent) are compared with exact EM on small tractable fixed-architecture models. The mean field approximation, by virtue of its form, is biased towards fully factorised solutions, which tends to create degenerate CPTs, but despite this mean field learning still produces solutions whose log likelihood rivals exact EM. Development of algorithms for learning the probabilities of the prior over tree structures completes the dynamic tree picture. After discussion of the relative merits of certain representations for the disconnection probabilities and initial investigation on small model structures, the full dynamic tree model is applied to a database of images of outdoor scenes where all of its parameters are learned. DTs are seen to offer significant improvement in performance over the fixed-architecture TSBN, and in a coding comparison the DT achieves 0.294 bits per pixel (bpp) compression compared to 0.378 bpp for lossless JPEG on images of 7 colours.
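
    Mean field approximates the posterior by a fully factorised distribution updated by coordinate ascent; the sketch below shows that update on a generic Ising-like pairwise grid model rather than the dynamic tree model itself, purely to illustrate the form of the approximation reported above to run roughly 100 times faster than simulated annealing. The model, data and parameters are assumptions.

```python
import numpy as np

def mean_field_ising(h, coupling, n_iters=100):
    """Coordinate-ascent mean field for a binary (+/-1) pairwise grid model:
    p(x) ~ exp(sum_i h_i x_i + J * sum_{neighbours} x_i x_j).
    The factorised posterior q(x) = prod_i q_i(x_i) is summarised by the means
    m_i = E_q[x_i], updated as m_i = tanh(h_i + J * sum of neighbour means)."""
    rows, cols = h.shape
    m = np.zeros_like(h)
    for _ in range(n_iters):
        for r in range(rows):
            for c in range(cols):
                neigh = 0.0
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if 0 <= r + dr < rows and 0 <= c + dc < cols:
                        neigh += m[r + dr, c + dc]
                m[r, c] = np.tanh(h[r, c] + coupling * neigh)
    return m

# Noisy "image": a block of +1 in the top-left corner of a 6x6 grid of -1s.
rng = np.random.default_rng(3)
truth = -np.ones((6, 6))
truth[:3, :3] = 1
h = 0.8 * truth + rng.normal(0, 0.5, truth.shape)   # noisy local evidence
m = mean_field_ising(h, coupling=0.6)
print(np.sign(m).astype(int))                        # smoothed segmentation
```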