    Gaming security by obscurity

    Shannon sought security against the attacker with unlimited computational powers: *if an information source conveys some information, then Shannon's attacker will surely extract that information*. Diffie and Hellman refined Shannon's attacker model by taking into account the fact that real attackers are computationally limited. This idea became one of the greatest new paradigms in computer science, and led to modern cryptography. Shannon also sought security against the attacker with unlimited logical and observational powers, expressed through the maxim that "the enemy knows the system". This view is still endorsed in cryptography. The popular formulation, going back to Kerckhoffs, is that "there is no security by obscurity", meaning that the algorithms cannot be kept obscured from the attacker, and that security should rely only upon the secret keys. In fact, modern cryptography goes even further than Shannon or Kerckhoffs in tacitly assuming that *if there is an algorithm that can break the system, then the attacker will surely find that algorithm*. The attacker is not viewed as an omnipotent computer any more, but he is still construed as an omnipotent programmer. The Diffie-Hellman step from unlimited to limited computational powers has thus not been extended into a step from unlimited to limited logical or programming powers. Is the assumption that all feasible algorithms will eventually be discovered and implemented really different from the assumption that everything that is computable will eventually be computed? The present paper explores some ways to refine the current models of the attacker, and of the defender, by taking into account their limited logical and programming powers. If the adaptive attacker actively queries the system to seek out its vulnerabilities, can the system gain some security by actively learning the attacker's methods and adapting to them?
    Comment: 15 pages, 9 figures, 2 tables; final version appeared in the Proceedings of the New Security Paradigms Workshop 2011 (ACM 2011); typos corrected

    Algorithmic Assembly of Nanoscale Structures

    The development of nanotechnology has become one of the most significant endeavors of our time. A natural objective of this field is discovering how to engineer nanoscale structures. Limitations of current top-down techniques inspire investigation into bottom-up approaches to reach this objective. A fundamental precondition for a bottom-up approach is the ability to control the behavior of nanoscale particles. Many abstract representations have been developed to model systems of particles and to research methods for controlling their behavior. This thesis develops theories on two such approaches for building complex structures: the self-assembly of simple particles, and the use of simple robot swarms. The concepts for these two approaches are straightforward. Self-assembly is the process by which simple particles, following the rules of some behavior-governing system, naturally coalesce into a more complex form. The other method of bottom-up assembly involves controlling nanoscale particles through explicit directions and assembling them into a desired form. Regarding the self-assembly of nanoscale structures, we present two construction methods in a variant of a popular theoretical model known as the 2-Handed Tile Self-Assembly Model. The first technique achieves shape construction at only a constant scale factor, while the second result uses only a constant number of unique particle types. Regarding the use of robot swarms for construction, we first develop a novel technique for reconfiguring a swarm of globally-controlled robots into a desired shape even when the robots can only move maximally in a commanded direction. We then expand on this work by formally defining an entire hierarchy of shapes which can be built in this manner, and we provide a technique for doing so.
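
    A minimal Python sketch (an illustration only, not the thesis's constructions) of the global-control movement model described above, under assumed simplifications: robots live on a rectangular grid and, on each command, every robot slides in the commanded direction until it reaches a boundary, a wall, or another robot.

        def tilt(robots, walls, width, height, direction):
            """One global command: every robot slides maximally in `direction`."""
            dx, dy = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}[direction]
            occupied = set(robots)
            # Settle the robots nearest the destination wall first so they block the rest.
            for pos in sorted(robots, key=lambda p: -(p[0] * dx + p[1] * dy)):
                occupied.discard(pos)
                x, y = pos
                while (0 <= x + dx < width and 0 <= y + dy < height
                       and (x + dx, y + dy) not in walls
                       and (x + dx, y + dy) not in occupied):
                    x, y = x + dx, y + dy
                occupied.add((x, y))
            return occupied

        # Example: tilt three robots east on an empty 4x4 board.
        print(tilt({(0, 0), (1, 0), (0, 2)}, walls=set(), width=4, height=4, direction="E"))
        # The robots come to rest at (2, 0), (3, 0) and (3, 2).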

    Three dimensional pattern recognition using feature-based indexing and rule-based search

    In flexible automated manufacturing, robots can perform routine operations as well as recover from atypical events, provided that process-relevant information is available to the robot controller. Real-time vision is among the most versatile sensing tools, yet the reliability of machine-based scene interpretation can be questionable. The effort described here is focused on the development of machine-based vision methods to support autonomous nuclear fuel manufacturing operations in hot cells.

    This thesis presents a method to efficiently recognize 3D objects from 2D images based on feature-based indexing. Object recognition is the identification of correspondences between parts of a current scene and stored views of known objects, using chains of segments or indexing vectors. To create indexed object models, characteristic model image features are extracted during preprocessing. Feature vectors representing model object contours are acquired from several points of view around each object and stored. Recognition is the process of matching stored views with features or patterns detected in a test scene.

    Two sets of algorithms were developed: one for preprocessing and indexed database creation, and one for pattern searching and matching during recognition. At recognition time, the indexing vectors with the highest match probability are retrieved from the model image database using a nearest neighbor search algorithm. The nearest neighbor search predicts the best possible match candidates. Extended searches are guided by a search strategy that employs knowledge-base (KB) selection criteria. The knowledge-based system simplifies the recognition process and minimizes the number of iterations and memory usage.

    Novel contributions include the use of a feature-based indexing data structure together with a knowledge base. Both components improve the efficiency of the recognition process by improving the structuring of the database of object features and by reducing database size. This database organization according to object features facilitates machine learning in the context of a knowledge-base-driven recognition algorithm. Lastly, feature-based indexing permits the recognition of 3D objects from a comparatively small number of stored views, further limiting the size of the feature database.

    Experiments with real images as well as synthetic images, including occluded (partially visible) objects, are presented. The experiments show almost perfect recognition with feature-based indexing if the detected features in the test scene are viewed from the same angle as the view on which the model is based. The experiments also show that the knowledge base is a highly effective and efficient search tool: recognition performance is improved without increasing the database size requirements. The experimental results indicate that feature-based indexing in combination with a knowledge-based system will be a useful methodology for automatic target recognition (ATR).
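
    The retrieval step can be illustrated with a minimal Python sketch, assuming toy indexing vectors and a brute-force nearest neighbor search; the object names, vectors, and distance measure below are invented for illustration, and the thesis's feature extraction and knowledge-based search rules are not reproduced.

        import math

        def nearest_views(index, scene_vector, k=3):
            """Return the k stored (object, view) keys whose indexing vectors best match the scene."""
            def dist(a, b):
                return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
            ranked = sorted(index, key=lambda key: dist(index[key], scene_vector))
            return ranked[:k]

        # Toy model database: (object, view angle in degrees) -> indexing vector.
        index = {("pellet", 0): [0.9, 0.1, 0.4],
                 ("pellet", 45): [0.7, 0.3, 0.5],
                 ("gripper", 0): [0.2, 0.8, 0.6]}
        print(nearest_views(index, [0.85, 0.15, 0.42], k=2))
        # -> [('pellet', 0), ('pellet', 45)]  (best candidates, to be checked by the KB rules)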

    Unmanned vehicles formation control in 3D space and cooperative search

    The first problem considered in this dissertation is the decentralized non-planar formation control of multiple unmanned vehicles using graph rigidity. The three-dimensional formation control problem consists of n vehicles operating in a plane Q and r vehicles operating in an upper layer outside of the plane Q. This can be referred to as layered formation control, where the objective is for all vehicles to cooperatively acquire a predefined formation shape using a decentralized control law. The proposed control strategy is based on regulating the inter-vehicle distances and uses backstepping and Lyapunov approaches. Three models of increasing complexity are considered for the multi-vehicle system: the single-integrator vehicle model, the double-integrator vehicle model, and a model that represents the dynamics of a class of robotic vehicles including wheeled mobile robots, underwater vehicles at constant depth, aircraft at constant altitude, and marine vessels. A rigorous stability analysis is presented that guarantees convergence of the inter-vehicle distances to the desired values. Additionally, a new Neural Network (NN)-based control algorithm that uses graph rigidity and the relative positions of the vehicles is proposed to solve the formation control problem of unmanned vehicles in 3D space. The control law for each vehicle consists of a nonlinear component that depends on the closed-loop error dynamics plus an NN component that is linear in the output weights (a one-tunable-layer NN is used). A Lyapunov analysis shows that the proposed distance-based control strategy achieves uniformly ultimately bounded stability of the desired infinitesimally and minimally rigid formation and that the NN weights remain bounded. Simulation results are included to demonstrate the performance of the proposed method.

    The second problem addressed in this dissertation is cooperative search with unmanned vehicles. In search and surveillance operations, deploying a team of unmanned vehicles provides a robust solution with multiple advantages over a single vehicle in efficiency and in minimizing exploration time. The cooperative search problem addresses the challenge of identifying target(s) in a given environment with a team of unmanned vehicles; a novel method for mapping the environment and moving vehicle teams in a cooperative manner is proposed. The approach consists of two parts. First, the region is partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow for more natural and flexible environment mapping; this partitioning also gives the vehicles efficient travel paths while performing searches. Second, a team of unmanned vehicles that move in a cooperative manner and utilize the Tabu Random algorithm is used to search for the target(s). Due to the ever-increasing use of robotics and unmanned systems, the field of cooperative multi-vehicle search has recently developed many applications that would benefit from the approach presented in this dissertation, including search and rescue operations, surveillance, data collection, and border patrol. Simulation results are presented that show the performance of the Tabu Random search algorithm in combination with hexagonal partitioning.
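
    For the simplest of the three vehicle models, a minimal Python sketch of the distance-based idea (single-integrator vehicles, a toy triangular formation, assumed gain and step size); the dissertation's backstepping, NN-augmented, and cooperative search algorithms are not reproduced here.

        import numpy as np

        def formation_step(p, edges, d_des, k=1.0, dt=0.01):
            """One Euler step of the gradient law u_i = -k * sum_j e_ij * (p_i - p_j)."""
            u = np.zeros_like(p)
            for (i, j), d in zip(edges, d_des):
                diff = p[i] - p[j]
                e = diff @ diff - d ** 2          # squared inter-vehicle distance error
                u[i] -= k * e * diff
                u[j] += k * e * diff
            return p + dt * u

        edges = [(0, 1), (1, 2), (0, 2)]          # rigid graph of a triangle
        d_des = [1.0, 1.0, 1.0]                   # desired side lengths
        p = np.random.rand(3, 3)                  # three vehicles at random 3D positions
        for _ in range(2000):
            p = formation_step(p, edges, d_des)
        print([round(float(np.linalg.norm(p[i] - p[j])), 3) for i, j in edges])  # ~[1.0, 1.0, 1.0]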

    The Formats of Spatial Representations

    Mental representations are the essence of cognition. Yet, to understand how the mind works, we must understand not just the content of mental representations (i.e., what information is stored), but also the format of those representations (i.e., how that information is stored). If we want to understand how sensory information is translated into symbolic representations, if we want to know how the mind forms 'cognitive maps', if we want to know how the firing of neurons can lead to the emergent phenomenon of human cognition: all of these things require us to understand how information is organized in the mind. In this thesis, I describe three 'case studies' of representational format in the domain of spatial cognition. I focus on spatial cognition for several reasons. First, spatial cognition is ubiquitous in the animal kingdom; thus, understanding spatial cognition in the human mind has the potential to reveal insights that generalize to all minds. Second, spatial cognition may be the single domain for which we know the most about the format of representations; indeed, the field was essentially founded on the premise that there exists a discernible 'cognitive map' within the mind. As such, it serves as an apt domain in which to study representational format. Finally, spatial representations (location representations in particular) may serve as the format of other higher-level information (e.g., numerical information, social information, etc.). Understanding the formats of spatial representation, therefore, may shed light on how other kinds of information are represented and organized in the mind.

    The first case study pertains to the format of location representations. I show that, using a simple 'error correlation' analysis, we can uncover from simple spatial tasks the coordinate systems underlying spatial behavior. Using this approach, I argue that locations are spontaneously represented in polar coordinates, but flexibly in other coordinate systems (e.g., Cartesian coordinates) as needed.

    The second case study pertains to the format of size representations. It has been known for many decades that the perception of size is illusory; for example, larger objects are perceived as being relatively less large. However, these illusions are typically explained by vague, unfalsifiable theories of size perception. I offer a simpler (and falsifiable) explanation of size illusions: that perceived size is equal to the sum of an object's dimensions rather than their product. Here, I focus primarily on the perception of area in adults, but the phenomenon appears to be highly general: I briefly allude to similar illusions that children experience, as well as similar illusions of volume.

    The final case study pertains to how spatial information is used as a format to represent other information. I show that task-irrelevant 'spatial structure' spontaneously improves working memory. This effect is specific to spatial information; color information and audio information produce no such benefit. I discuss how these findings relate to existing models of working memory and help us understand the relationship between space and memory more broadly. I conclude with some final remarks about how understanding spatial behavior in light of the formats of representations can help us to understand the building blocks of cognition.
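
    A toy numerical illustration (invented stimuli, not the thesis's experiments) of the additive account of size illusions described in the second case study: if judged size tracks the sum of an object's dimensions rather than their product, a long thin bar can be judged larger than a square of greater true area.

        shapes = {"thin bar (10 x 1)": (10, 1), "square (4 x 4)": (4, 4)}
        for name, (w, h) in shapes.items():
            print(f"{name}: true area = {w * h}, additive 'perceived size' = {w + h}")
        # thin bar: area 10, sum 11; square: area 16, sum 8. The additive model
        # therefore predicts the bar is judged larger, despite its smaller true area.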

    Branching Boogaloo: Botanical Adventures in Multi-Mediated Morphologies

    FormaLeaf is a software interface for exploring leaf morphology using parallel string rewriting grammars called L-systems. Scanned images of dicotyledonous angiosperm leaves removed from plants around Bard's campus are displayed on the left and analyzed using the computer vision library OpenCV. Morphometrical information and terminological labels are reported in a side panel. "Slider mode" allows the user to control the structural template and growth parameters of the generated L-system leaf displayed on the right. "Vision mode" shows the input and generated leaves as the computer 'sees' them. "Search mode" attempts to automatically produce a formally defined graphical representation of the input by evaluating the visual similarity of a generated pool of candidate leaves. The system seeks to derive a possible internal structural configuration for venation based purely on a visual analysis of external shape. Viewed in succession, the iterations of the generated L-system leaves appear as a hypothetical developmental sequence. FormaLeaf was written in Processing.
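
    The parallel string rewriting at the core of an L-system can be sketched in a few lines of Python (a textbook branching grammar, not FormaLeaf's actual rules or its Processing implementation): every symbol in the string is rewritten simultaneously at each iteration.

        def rewrite(axiom, rules, iterations):
            """Apply the production rules to every symbol in parallel, `iterations` times."""
            s = axiom
            for _ in range(iterations):
                s = "".join(rules.get(ch, ch) for ch in s)
            return s

        # Classic branching L-system often used for plant-like forms:
        # F = draw forward, + / - = turn, [ ] = push / pop a branch.
        rules = {"X": "F[+X][-X]FX", "F": "FF"}
        print(rewrite("X", rules, 3))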