
    An extensive English language bibliography on graph theory and its applications

    Bibliography on graph theory and its application

    Kinematic Synthesis of Deployable-Foldable Truss Structures Using Graph Theory

    A graph theoretic approach is applied to the conceptual design of deployable truss structures. The characteristics that relate to the inter-connectivity of the elements of a deployable truss structure can be captured in a schematic representation called a graph. A procedure is presented that enables the exhaustive generation of these graphs for structures with any given number of nodes and links that are foldable onto a plane or onto a line. A special type of truss structure, called the truss module, is presented. Graphs of this class of structures form a subset of the graphs of truss structures. Two procedures are presented for recognizing these graphs among the graphs of truss structures. The procedures also generate information on the relative lengths of the links in a truss module by examining the graph that represents it. This enables the generation of numerous novel (deployable) truss modules as well as those that have been reported in the literature. A procedure is presented for the generation of all possible folded configurations of deployable truss structures. By applying this procedure to deployable truss modules, truss modules are identified that exhibit special geometrical properties which allow the module to fold using fewer joints than dictated in the initial phase of the conceptual design process. Using an alternate definition of graphs, procedures are presented for the specification of the joint types and joint inter-connectivity that accommodate the folding and/or deployment of a deployable truss structure. These procedures are applied to generate all possible joint assignments for deployable truss modules. Procedures for the conceptual design of deployable truss structures result in the generation of innumerable design concepts. An expert system is developed to aid the designer of deployable truss structures in the evaluation of such designs. Incorporated in this expert system are selection criteria developed to assist a designer in selecting the best candidates for any given application. Employing this approach, many promising novel designs, as well as those that have been reported in the literature, are identified.
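The exhaustive-generation step described above can be illustrated with a small sketch (in Python, not the thesis's own tooling): it enumerates all non-isomorphic connected graphs with a given number of nodes and links, using brute-force canonicalization that is practical only for small graphs. All function names here are illustrative assumptions, not taken from the thesis.

```python
from itertools import combinations, permutations

def connected(nodes, edges):
    """Check connectivity with a simple DFS over the edge list."""
    if not nodes:
        return True
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == set(nodes)

def canonical(n, edges):
    """Smallest relabeled edge set over all vertex permutations (small n only)."""
    best = None
    for perm in permutations(range(n)):
        relab = tuple(sorted(tuple(sorted((perm[a], perm[b]))) for a, b in edges))
        if best is None or relab < best:
            best = relab
    return best

def generate_graphs(n, m):
    """Yield one representative of each non-isomorphic connected graph
    on n nodes with m links."""
    nodes = set(range(n))
    seen = set()
    for edges in combinations(combinations(range(n), 2), m):
        if connected(nodes, edges):
            c = canonical(n, edges)
            if c not in seen:
                seen.add(c)
                yield edges

# On 4 nodes with 3 links there are two connected graphs: the path and the star.
print(len(list(generate_graphs(4, 3))))  # -> 2
```

A real implementation would replace the factorial-time canonicalization with a proper canonical labeling algorithm before scaling past a handful of nodes.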

    A strategy for the visual recognition of objects in an industrial environment

    This thesis is concerned with the problem of recognizing industrial objects rapidly and flexibly. The system design is based on a general strategy that consists of a generalized local feature detector, an extended learning algorithm and the use of unique structure of the objects. Thus, the system is not designed to be limited to the industrial environment. The generalized local feature detector uses the gradient image of the scene to provide a feature description that is insensitive to a range of imaging conditions such as object position, and overall light intensity. The feature detector is based on a representative point algorithm which is able to reduce the data content of the image without restricting the allowed object geometry. Thus, a major advantage of the local feature detector is its ability to describe and represent complex object structure. The reliance on local features also allows the system to recognize partially visible objects. The task of the learning algorithm is to observe the feature description generated by the feature detector in order to select features that are reliable over the range of imaging conditions of interest. Once a set of reliable features is found for each object, the system finds unique relational structure which is later used to recognize the objects. Unique structure is a set of descriptions of unique subparts of the objects of interest. The present implementation is limited to the use of unique local structure. The recognition routine uses these unique descriptions to recognize objects in new images. An important feature of this strategy is the transference of a large amount of processing required for graph matching from the recognition stage to the learning stage, which allows the recognition routine to execute rapidly. 
The test results show that the system functions with a significant level of insensitivity to operating conditions. The system is insensitive to violations of its three main assumptions (constant scale, constant lighting, and 2D images), displaying a degree of graceful degradation as the operating conditions degrade. For example, for one set of test objects, the recognition threshold was reached when the absolute light level was reduced by 70%-80%, or the object scale was reduced by 30%-40%, or the object was tilted away from the learned 2D plane by 30°-40°. This demonstrates a very important feature of the learning strategy: the generalizations made by the system are not only valid within the domain of the sampled set of images, but extend outside this domain. The test results also show that the recognition routine executes rapidly, requiring 10ms-500ms (on a PDP11/24 minicomputer) in the special case when ideal operating conditions are guaranteed. (Note: this does not include pre-processing time.) This thesis describes the strategy, the architecture and the implementation of the vision system in detail, and gives detailed test results. A proposal for extending the system to scale-independent 3D object recognition is also given.
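A minimal sketch of the gradient-based local feature idea, assuming a grayscale image stored as a list of rows. The thesis's representative-point algorithm is more elaborate; this only illustrates how gradient magnitude reduces the image to a sparse set of feature points, and the names are illustrative.

```python
def gradient_features(img, thresh):
    """Return (row, col, magnitude) for pixels whose central-difference
    gradient magnitude exceeds thresh -- a crude stand-in for the
    representative-point step that reduces image data content."""
    h, w = len(img), len(img[0])
    feats = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = img[r][c + 1] - img[r][c - 1]
            gy = img[r + 1][c] - img[r - 1][c]
            mag = (gx * gx + gy * gy) ** 0.5
            if mag > thresh:
                feats.append((r, c, mag))
    return feats

# A dark/bright vertical edge: features concentrate at the boundary columns.
img = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
pts = gradient_features(img, thresh=1.0)
print(sorted({c for _, c, _ in pts}))  # -> [2, 3]
```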

    The Effect of Code Obfuscation on Authorship Attribution of Binary Computer Files

    In many forensic investigations, questions linger regarding the identity of the authors of a software specimen. Research has identified methods for the attribution of binary files that have not been obfuscated, but a significant percentage of malicious software has been obfuscated in an effort to hide both the details of its origin and its true intent. Little research has been done on analyzing obfuscated code for attribution. In part, the reason for this gap in the research is that deobfuscation of an unknown program is a challenging task. Further, the additional transformation of the executable file introduced by the obfuscator modifies or removes features from the original executable that would have been used in the author attribution process. Existing research has demonstrated good success in attributing the authorship of an executable file of unknown provenance using methods based on static analysis of the specimen file. With the addition of file obfuscation, static analysis of files becomes difficult, time-consuming, and in some cases may lead to inaccurate findings. This paper presents a novel process for authorship attribution using dynamic analysis methods. A software-emulated system was fully instrumented to become a test harness for a specimen of unknown provenance, allowing for supervised control, monitoring, and trace data collection during execution. This trace data was used as input into a supervised machine learning algorithm trained to identify stylometric differences in the specimen under test and provide predictions on who wrote the specimen. The specimen files were also analyzed for authorship using static analysis methods to compare prediction accuracies with prediction accuracies gathered from this new, dynamic analysis based method. Experiments indicate that this new method can provide better accuracy of author attribution for files of unknown provenance, especially in the case where the specimen file has been obfuscated.
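The dynamic-analysis pipeline can be caricatured as follows, assuming hypothetical instruction traces as input. The thesis trains a supervised learner on richer trace features; this sketch substitutes a simple nearest-centroid classifier over instruction-bigram frequencies, and all names and traces below are invented for illustration.

```python
from collections import Counter

def bigram_profile(trace):
    """Normalized frequency of consecutive instruction pairs in a trace --
    one simple stylometric feature derivable from dynamic analysis."""
    pairs = Counter(zip(trace, trace[1:]))
    total = sum(pairs.values()) or 1
    return {p: n / total for p, n in pairs.items()}

def distance(p, q):
    """Euclidean distance between two sparse frequency profiles."""
    keys = set(p) | set(q)
    return sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 for k in keys) ** 0.5

def attribute(trace, profiles):
    """Nearest-centroid attribution: the closest known author profile wins."""
    probe = bigram_profile(trace)
    return min(profiles, key=lambda a: distance(probe, profiles[a]))

# Hypothetical execution traces for two known authors.
profiles = {
    "alice": bigram_profile(["mov", "add", "mov", "add", "jmp"] * 10),
    "bob":   bigram_profile(["push", "call", "pop", "ret"] * 10),
}
print(attribute(["mov", "add", "mov", "jmp", "add"], profiles))  # -> alice
```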

    A general computational tool for structure synthesis

    Synthesis of structures is a very difficult task even with only a small number of components forming a system; yet it is the catalyst of innovation. Molecular structures and nanostructures typically have a large number of similar components but different connections, which makes their synthesis even more challenging. This thesis presents a novel method, with its related algorithms and computer programs, for the synthesis of structures. The method is based on several concepts: (1) the structure is represented by a graph and further by its adjacency matrix; and (2) instead of exploiting only the eigenvalues of the adjacency matrix, both the eigenvalues and the eigenvectors are exploited; specifically, the components of the eigenvectors have been found very useful in algorithm development. This novel method is called the Eigensystem method. The complexity of the Eigensystem method is comparable to that of the well-known program Nauty in the combinatorial world. However, the Eigensystem method works for weighted graphs, both directed and undirected, while the Nauty program works only for non-weighted graphs, directed or undirected. The cause of this is the different philosophies underlying the two methods. The Nauty program is based on a recursive component decomposition strategy, which could involve some unmanageable complexities when dealing with weighted graphs, although no such attempt has been reported in the literature. It is noted that in practical applications of structure synthesis, weighted graphs are more useful than non-weighted graphs for representing physical systems. Pivoted on the Eigensystem method, this thesis presents algorithms and computer programs for the three fundamental problems in structure synthesis, namely isomorphism/automorphism, unique labeling, and the enumeration of structures or graphs.
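A minimal illustration of the eigenvalue side of the Eigensystem idea (the thesis additionally exploits eigenvector components, which this sketch omits): equal adjacency spectra are a necessary, though not sufficient, condition for two graphs to be isomorphic.

```python
import numpy as np

def spectrum(adj):
    """Sorted eigenvalues of a symmetric adjacency matrix -- an
    isomorphism invariant (relabeling vertices never changes it)."""
    return np.sort(np.linalg.eigvalsh(np.asarray(adj, dtype=float)))

def maybe_isomorphic(a, b, tol=1e-9):
    """Spectral screen: False means definitely not isomorphic;
    True means the graphs are cospectral (possibly isomorphic)."""
    sa, sb = spectrum(a), spectrum(b)
    return sa.shape == sb.shape and np.allclose(sa, sb, atol=tol)

# The path 1-2-3 under two vertex labelings, versus the triangle.
p1 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
p2 = [[0, 0, 1], [0, 0, 1], [1, 1, 0]]   # same path, center relabeled
tri = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(maybe_isomorphic(p1, p2), maybe_isomorphic(p1, tri))  # -> True False
```

Because cospectral non-isomorphic graphs exist, a complete method must break such ties, e.g. with the eigenvector-component information the thesis describes.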

    Almost Symmetries and the Unit Commitment Problem

    This thesis explores two main topics. The first is almost symmetry detection on graphs. The presence of symmetry in combinatorial optimization problems has long been considered anathema, but in the past decade considerable progress has been made. Modern integer and constraint programming solvers have automatic symmetry detection built in to either exploit or avoid symmetric regions of the search space. Automatic symmetry detection generally works by converting the input problem to a graph that is in exact correspondence with the problem formulation. Symmetry can then be detected on this graph using one of the excellent existing algorithms; these are also the symmetries of the problem formulation. The motivation for detecting almost symmetries on graphs is that almost symmetries in an integer program can force the solver to explore nearly symmetric regions of the search space. Because of the known correspondence between integer programming formulations and graphs, this is a first step toward detecting almost symmetries in integer programming formulations. Though we are only able to compute almost symmetries for graphs of modest size, the results indicate that almost symmetry is definitely present in some real-world combinatorial structures, and likely warrants further investigation. The second topic explored in this thesis is integer programming formulations for the unit commitment problem. The unit commitment problem involves scheduling power generators to meet anticipated energy demand while minimizing total system operation cost. Today, practitioners usually formulate and solve unit commitment as a large-scale mixed integer linear program. The original intent of this project was to bring the analysis of almost symmetries to the unit commitment problem. Two power generators are almost symmetric in the unit commitment problem if they have almost identical parameters. Along the way, however, new formulations for power generators were discovered that warranted a thorough investigation of their own. Chapters 4 and 5 are a result of this research. Thus this work makes three contributions to the unit commitment problem: a convex hull description for a power generator accommodating many types of constraints, an improved formulation for time-dependent start-up costs, and an exact symmetry reduction technique via reformulation.
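The notion of almost-symmetric generators can be sketched as a parameter-tolerance test. The parameter names and the 5% tolerance below are illustrative assumptions, not taken from the thesis.

```python
def almost_symmetric(g1, g2, rel_tol=0.05):
    """Treat two generators as almost symmetric when every shared
    parameter agrees within a relative tolerance."""
    if g1.keys() != g2.keys():
        return False
    for k in g1:
        a, b = g1[k], g2[k]
        scale = max(abs(a), abs(b), 1e-12)  # guard division by zero
        if abs(a - b) / scale > rel_tol:
            return False
    return True

# Hypothetical generator parameter sets (MW limits, hours, start-up cost).
gen_a = {"p_min": 100.0, "p_max": 400.0, "min_up": 4, "start_cost": 900.0}
gen_b = {"p_min": 102.0, "p_max": 395.0, "min_up": 4, "start_cost": 910.0}
gen_c = {"p_min": 50.0,  "p_max": 400.0, "min_up": 4, "start_cost": 900.0}
print(almost_symmetric(gen_a, gen_b), almost_symmetric(gen_a, gen_c))  # -> True False
```

Such a pairwise test only identifies candidate pairs; exploiting them in the integer program requires the reformulation techniques the thesis develops.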

    Robust Anomaly Detection with Applications to Acoustics and Graphs

    Our goal is to develop a robust anomaly detector that can be incorporated into pattern recognition systems that may need to learn, but will never be shunned for making egregious errors. The ability to know what we do not know is a concept often overlooked when developing classifiers to discriminate between different types of normal data in controlled experiments. We believe that an anomaly detector should be used to produce warnings in real applications when operating conditions change dramatically, especially when other classifiers only have a fixed set of bad candidates from which to choose. Our approach to distributional anomaly detection is to gather local information using features tailored to the domain, aggregate all such evidence to form a global density estimate, and then compare it to a model of normal data. A good match to a recognizable distribution is not required. By design, this process can detect the "unknown unknowns" [1] and properly react to the "black swan events" [2] that can have devastating effects on other systems. We demonstrate that our system is robust to anomalies that may not be well-defined or well-understood even if they have contaminated the training data that is assumed to be non-anomalous. In order to develop a more robust speech activity detector, we reformulate the problem to include acoustic anomaly detection and demonstrate state-of-the-art performance using simple distribution modeling techniques that can be used at incredibly high speed. We begin by demonstrating our approach when training on purely normal conversational speech and then remove all annotation from our training data and demonstrate that our techniques can robustly accommodate anomalous training data contamination. When comparing continuous distributions in higher dimensions, we develop a novel method of discarding portions of a semi-parametric model to form a robust estimate of the Kullback-Leibler divergence. 
Finally, we demonstrate the generality of our approach by using the divergence between distributions of vertex invariants as a graph distance metric, achieving state-of-the-art performance when detecting graph anomalies with neighborhoods of excessive or negligible connectivity. [1] D. Rumsfeld. (2002) Transcript: DoD news briefing - Secretary Rumsfeld and Gen. Myers. [2] N. N. Taleb, The Black Swan: The Impact of the Highly Improbable. Random House, 2007.
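The distribution-comparison step can be illustrated in its simplest discrete form. The thesis works with semi-parametric models and a robust KL divergence estimate in continuous spaces, which this histogram sketch does not attempt; bin counts and names are illustrative.

```python
from math import log

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q); eps guards empty bins."""
    return sum(pi * log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

def anomaly_score(sample_counts, normal_counts):
    """Compare a sample's feature histogram against the normal-data model;
    larger divergence means the sample looks more anomalous."""
    return kl_divergence(normalize(sample_counts), normalize(normal_counts))

normal = [40, 30, 20, 10]   # histogram aggregated from non-anomalous data
typical = [40, 29, 21, 10]  # similar sample -> small score
odd = [1, 1, 1, 97]         # mass shifted to a rare bin -> large score
print(anomaly_score(typical, normal) < anomaly_score(odd, normal))  # -> True
```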