
    Drawing graphs for cartographic applications

    Graph Drawing is a relatively young area that combines elements of graph theory, algorithms, (computational) geometry and (computational) topology. Research in this field concentrates on developing algorithms for drawing graphs while satisfying certain aesthetic criteria. These criteria are often expressed in properties like edge complexity, number of edge crossings, angular resolution, shapes of faces or graph symmetries, and in general aim at creating a drawing of a graph that conveys the information to the reader in the best possible way. Graph drawing has applications in a wide variety of areas, including cartography, VLSI design and information visualization. In this thesis we consider several graph drawing problems. The first problem we address is rectilinear cartogram construction. A cartogram, also known as a value-by-area map, is a technique used by cartographers to visualize statistical data over a set of geographical regions like countries, states or counties. The regions of a cartogram are deformed such that the area of a region corresponds to a particular geographic variable. The shapes of the regions depend on the type of cartogram. We consider rectilinear cartograms of constant complexity, that is, cartograms where each region is a rectilinear polygon with a constant number of vertices. Whether a cartogram is good is determined by how closely the cartogram resembles the original map and how precisely the areas of its regions describe the associated values. The cartographic error is defined for each region as |Ac - As| / As, where Ac is the area of the region in the cartogram and As is the specified area of that region, given by the geographic variable to be shown. In this thesis we consider the construction of rectilinear cartograms that have correct adjacencies of the regions and zero cartographic error. 
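The cartographic error above is a simple relative measure and can be stated in a few lines of code. In this sketch the function name and the example areas are invented for illustration; they are not taken from the thesis:

```python
# Minimal sketch of the cartographic error |Ac - As| / As defined above.
# Function name and example areas are illustrative only.

def cartographic_error(area_cartogram: float, area_specified: float) -> float:
    """Relative deviation of a region's drawn area from its specified area."""
    return abs(area_cartogram - area_specified) / area_specified

# A zero-error cartogram reproduces every specified area exactly:
print(cartographic_error(120.0, 120.0))  # 0.0
# A region drawn 10% too large has error 0.1:
print(cartographic_error(110.0, 100.0))
```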
We show that any plane triangulated graph admits a rectilinear cartogram where every region has at most 40 vertices, which can be constructed in O(n log n) time. We also present experimental results showing that in practice the algorithm works significantly better than suggested by the complexity bounds. In our experiments on real-world data we were always able to construct a cartogram where the average number of vertices per region does not exceed five. Since a rectangle has four vertices, this means that most of the regions of our rectilinear cartograms are in fact rectangles. Moreover, the maximum number of vertices of each region in these cartograms never exceeded ten. The second problem we address in this thesis concerns cased drawings of graphs. The vertices of a drawing are commonly marked with a disk, but differentiating between vertices and edge crossings in a dense graph can still be difficult. Edge casing is a well-known method—used, for example, in electrical drawings, when depicting knots, and, more generally, in information visualization—to alleviate this problem and to improve the readability of a drawing. A cased drawing orders the edges of each crossing and interrupts the lower edge in an appropriate neighborhood of the crossing. One can also envision that every edge is encased in a strip of the background color and that the casing of the upper edge covers the lower edge at the crossing. If there are no application-specific restrictions that dictate the order of the edges at each crossing, then we can in principle choose freely how to arrange them. However, certain orders will lead to a more readable drawing than others. In this thesis we formulate aesthetic criteria for a cased drawing as optimization problems and solve these problems. For most of the problems we present either a polynomial-time algorithm or demonstrate that the problem is NP-hard. 
Finally we consider a combinatorial question in computational topology concerning three types of objects: closed curves in the plane, surfaces immersed in the plane, and surfaces embedded in space. In particular, we study casings of closed curves in the plane to decide whether these curves can be embedded as the boundaries of certain special surfaces. We show that it is NP-complete to determine whether an immersed disk is the projection of a surface embedded in space, or whether a curve is the boundary of an immersed surface in the plane that is not constrained to be a disk. However, when a casing is supplied with a self-intersecting curve, describing which component of the curve lies above and which below at each crossing, we can determine in time linear in the number of crossings whether the cased curve forms the projected boundary of a surface in space. As a related result, we show that an immersed surface with a single boundary curve that crosses itself n times has at most 2^(n/2) combinatorially distinct spatial embeddings, and we discuss the existence of fixed-parameter tractable algorithms for related problems.

    Developing a labelled object-relational constraint database architecture for the projection operator

    Current relational databases have been developed in order to improve the handling of stored data; however, there are some types of information that have to be analysed for which no suitable tools are available. These new types of data can be represented and treated as constraints, allowing a set of data to be represented through equations, inequations and Boolean combinations of both. To this end, constraint databases were defined and some prototypes were developed. Since there are aspects that can be improved, we propose a new architecture called labelled object-relational constraint database (LORCDB). This provides more expressiveness, since the database is adapted in order to support more types of data, instead of the data having to be adapted to the database. In this paper, the projection operator of SQL is extended so that it works with linear and polynomial constraints and variables of constraints. In order to optimize query evaluation efficiency, some strategies and algorithms have been used to obtain an efficient query plan. Most work on constraint databases uses spatiotemporal data as case studies. However, this paper proposes model-based diagnosis, since it is a promising research area and permits more complicated queries than spatiotemporal examples. Our architecture permits queries over constraints to be defined over different sets of variables by using symbolic substitution and elimination of variables. Ministerio de Ciencia y Tecnología DPI2006-15476-C02-0
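The abstract does not spell out how the extended projection operator eliminates variables. For linear constraints, one standard technique a LORCDB-style projection could build on is Fourier-Motzkin elimination; the sketch below uses its own invented representation (a constraint is a coefficient dict plus a bound, meaning sum(c[x]*x) <= b), not the paper's:

```python
# Sketch of Fourier-Motzkin elimination: project a system of linear
# inequalities onto the remaining variables by eliminating one variable.
# Representation and function name are invented for this illustration.

def eliminate(constraints, var):
    """Each constraint is (coeffs, b) meaning sum(coeffs[x]*x) <= b.
    Returns an equivalent system over the variables other than `var`."""
    pos, neg, rest = [], [], []
    for coeffs, b in constraints:
        c = coeffs.get(var, 0)
        (pos if c > 0 else neg if c < 0 else rest).append((coeffs, b))
    out = list(rest)
    # Pair every upper bound on `var` with every lower bound on `var`.
    for cp, bp in pos:
        for cn, bn in neg:
            a, d = cp[var], -cn[var]          # both positive by construction
            merged = {x: d * cp.get(x, 0) + a * cn.get(x, 0)
                      for x in set(cp) | set(cn) if x != var}
            out.append((merged, d * bp + a * bn))
    return out

# Project {x + y <= 4, -x + y <= 2} onto y: eliminating x yields 2y <= 6.
print(eliminate([({'x': 1, 'y': 1}, 4), ({'x': -1, 'y': 1}, 2)], 'x'))
```

Combining every lower bound with every upper bound on the eliminated variable preserves exactly the satisfiable assignments of the remaining variables, which is what projection over constraints requires.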

    Smart Fabric sensors for foot motion monitoring

    Smart Fabrics, fabrics that have the characteristics of sensors, are a wide and emerging field of study. This thesis summarizes an investigation into the development of fabric sensors for use in sensorized socks that can be used to gather real-time information about the foot, such as gait features. Conventional technologies usually provide 2D information about the foot. Sensorized socks are able to provide angular data in which foot angles are correlated to the output from the sensor, enabling 3D monitoring of foot position. Current angle detection mechanisms are mainly heavy and cumbersome; the sensorized socks, by contrast, are not only portable but also non-invasive to the subject who wears them. The incorporation of wireless features into the sensorized socks enables remote monitoring of the foot.

    A Dual-SLIP Model For Dynamic Walking In A Humanoid Over Uneven Terrain


    Topology optimization for additive manufacture

    Additive manufacturing (AM) offers a way to manufacture highly complex designs with potentially enhanced performance as it is free from many of the constraints associated with traditional manufacturing. However, current design and optimisation tools, which were developed much earlier than AM, do not allow efficient exploration of AM's design space. Among these tools are a set of numerical methods/algorithms often used in the field of structural optimisation called topology optimisation (TO). These powerful techniques emerged in the 1980s and have since been used to achieve structural solutions with superior performance to those of other types of structural optimisation. However, such solutions are often constrained during optimisation to minimise structural complexities, thereby ensuring that solutions can be manufactured via traditional manufacturing methods. With the advent of AM, it is necessary to restructure these techniques to maximise AM's capabilities. Such restructuring should involve identification and relaxation of the optimisation constraints within the TO algorithms that restrict design for AM. These constraints include the initial design, optimisation parameters and mesh characteristics of the optimisation problem being solved. A typical TO with certain mesh characteristics would involve the movement of an assumed initial design to another with improved structural performance. It was anticipated that the complexity and performance of a solution would be affected by the optimisation constraints. This work restructured a TO algorithm called bidirectional evolutionary structural optimisation (BESO) for AM. MATLAB and MSC Nastran were coupled to study and investigate BESO for both two- and three-dimensional problems. It was observed that certain parametric values promote the realization of complex structures, and this could be further enhanced by including an adaptive meshing strategy (AMS) in the TO. 
Such a strategy reduced the degrees of freedom required to reach a given solution quality compared with optimization without the AMS.
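As a rough illustration of the kind of update step BESO performs, the sketch below ranks elements by a sensitivity number and thresholds the design toward a target volume fraction, removing material where sensitivities are low and keeping it where they are high. This is a generic sketch with placeholder random sensitivities, not the MATLAB/MSC Nastran implementation used in the thesis, where sensitivities would come from a finite-element analysis:

```python
import numpy as np

def beso_update(sensitivity: np.ndarray, target_frac: float) -> np.ndarray:
    """One thresholding step: keep the fraction of elements with the highest
    sensitivity numbers, remove the rest (0/1 element densities)."""
    n_keep = int(round(target_frac * sensitivity.size))
    threshold = np.sort(sensitivity)[::-1][n_keep - 1]
    return (sensitivity >= threshold).astype(float)

rng = np.random.default_rng(0)
alpha = rng.random(100)          # placeholder sensitivities (from FEA in practice)
x = beso_update(alpha, 0.5)      # move the design toward 50% volume
print(x.mean())                  # fraction of solid elements, here 0.5
```

A full BESO loop would alternate this update with a finite-element solve, and cap how much material may be added or removed per iteration to keep the evolution stable.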

    Numerical modelling of additive manufacturing process for stainless steel tension testing samples

    Nowadays additive manufacturing (AM) technologies, including 3D printing, are growing rapidly and are expected to replace conventional subtractive manufacturing technologies to some extent. During a selective laser melting (SLM) process, one of the most popular AM technologies for metals, a large amount of heat is required to melt metal powders, and this leads to distortions and/or shrinkage of additively manufactured parts. It is useful to predict the behaviour of 3D printed parts in order to control unwanted distortions and shrinkage before printing. This study develops a two-phase numerical modelling and simulation process for AM of 17-4PH stainless steel, and it considers the importance of post-processing and the need for calibration to achieve high-quality printing. By using this proposed AM modelling and simulation process, optimal process parameters, material properties, and topology can be obtained to ensure a part is 3D printed successfully.

    Proceedings of the ECCOMAS Thematic Conference on Multibody Dynamics 2015

    This volume contains the full papers accepted for presentation at the ECCOMAS Thematic Conference on Multibody Dynamics 2015 held in the Barcelona School of Industrial Engineering, Universitat Politècnica de Catalunya, on June 29 - July 2, 2015. The ECCOMAS Thematic Conference on Multibody Dynamics is an international meeting held once every two years in a European country. Continuing the very successful series of past conferences that have been organized in Lisbon (2003), Madrid (2005), Milan (2007), Warsaw (2009), Brussels (2011) and Zagreb (2013), this edition will once again serve as a meeting point for the international researchers, scientists and experts from academia, research laboratories and industry working in the area of multibody dynamics. Applications are related to many fields of contemporary engineering, such as vehicle and railway systems, aeronautical and space vehicles, robotic manipulators, mechatronic and autonomous systems, smart structures, biomechanical systems and nanotechnologies. The topics of the conference include, but are not restricted to: ● Formulations and Numerical Methods ● Efficient Methods and Real-Time Applications ● Flexible Multibody Dynamics ● Contact Dynamics and Constraints ● Multiphysics and Coupled Problems ● Control and Optimization ● Software Development and Computer Technology ● Aerospace and Maritime Applications ● Biomechanics ● Railroad Vehicle Dynamics ● Road Vehicle Dynamics ● Robotics ● Benchmark Problems

    Hollow condensates, topological ladders and quasiperiodic chains

    This thesis presents three distinct topics pertaining to the intersection of condensed matter and atomic, molecular and optical (AMO) physics. We theoretically address the physics of hollow Bose-Einstein condensates and the behavior of vortices within them, then discuss localization-delocalization physics of one-dimensional quasiperiodic models, and end by focusing on the physics of localized edge modes and topological phases in quasi-one-dimensional ladder models. For all three topics we maintain a focus on experimentally accessible, physically realistic systems and explicitly discuss experimental implementations of our work or its implications for future experiments. First, we study shell-shaped Bose-Einstein condensates (BECs). This work is motivated by experiments aboard the International Space Station (ISS) in the Cold Atom Laboratory (CAL), where hollow condensates are being engineered. Additionally, shell-like structures of superfluids form in the interiors of neutron stars and with ultracold bosons in three-dimensional optical lattices. Our work serves as a theoretical parallel to CAL studies and a step towards understanding these more complex systems. We model hollow BECs as confined by a trapping potential that allows for transitions between fully-filled and hollow geometries. Our study is the first to consider such a real-space topological transition. We find that collective mode frequencies of spherically symmetric condensates show non-monotonic features at the hollowing-out point. We further determine that for fully hollow spherically symmetric BECs the effects of Earth's gravity are very destructive, and we consequently focus on microgravity environments. Finally, we study quantized vortices on hollow condensate shells and their response to system rotation. Vortex behavior is interesting as a building block for studies of more complicated quantum fluid equilibration processes and the physics of rotating neutron star interiors. 
Condensate shells' closed and hollow geometry constrains possible vortex configurations. We find that those configurations are stable only for high rotation rates. Further, we determine that vortex lines nucleate at lower rotation rates for hollow condensates than for fully-filled ones. Second, we analyze the effects of quasiperiodicity in one-dimensional systems. Distinct from truly disordered systems, these models exhibit delocalization, in contrast to well-known facts about Anderson localization. We study the famous Aubry-Andre-Harper (AAH) model, a one-dimensional tight-binding model that localizes only for sufficiently strong quasiperiodic on-site modulation and is equivalent to the Hofstadter problem at its critical point. Generalizations of the AAH model have been studied numerically, and a generalized self-dual AAH model has been proposed and analytically analyzed by S. Ganeshan, J. Pixley and S. Das Sarma (GPD). For extended and generalized AAH models the appearance of a mobility edge, i.e. an energy cut-off dictating which wavefunctions undergo the localization-delocalization transition, is expected. For the GPD model this critical energy has been theoretically determined. We employ transfer matrices to study one-dimensional quasiperiodic systems. Transfer matrices characterize localization physics through Lyapunov exponents. The symplectic nature of transfer matrices allows us to represent them as points on a torus. We then obtain information about wavefunctions of the system by studying toroidal curves corresponding to transfer matrix products. Toroidal curves for localized, delocalized and critical wavefunctions are distinct, demonstrating a geometrical characterization of localization physics. Applying the transfer matrix method to AAH-like models, we formulate a geometrical picture that captures the emergence of the mobility edge. 
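The transfer-matrix approach described above can be illustrated numerically. The sketch below is a generic textbook computation, not the thesis's geometrical torus construction: it estimates the Lyapunov exponent of the AAH model from the growth rate of a transfer-matrix product, using a mid-spectrum eigenvalue of a finite chain as a representative energy. For modulation strength lambda above the critical value 2 the exponent is positive (localized states); below it the exponent is close to zero (delocalized states):

```python
import numpy as np

BETA = (np.sqrt(5) - 1) / 2          # inverse golden ratio, irrational

def aah_hamiltonian(n, lam, phi=0.0):
    """Open AAH chain: unit nearest-neighbor hopping plus quasiperiodic
    on-site potential lam * cos(2*pi*BETA*m + phi)."""
    diag = lam * np.cos(2 * np.pi * BETA * np.arange(n) + phi)
    return np.diag(diag) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

def lyapunov(energy, lam, n=20000, phi=0.0):
    """Growth rate of the transfer-matrix product for the AAH equation
    psi_{m+1} + psi_{m-1} + lam*cos(2*pi*BETA*m + phi)*psi_m = E*psi_m."""
    v = np.array([1.0, 0.0])
    total = 0.0
    for m in range(1, n + 1):
        t = np.array([[energy - lam * np.cos(2 * np.pi * BETA * m + phi), -1.0],
                      [1.0, 0.0]])
        v = t @ v
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v /= norm                     # renormalize to avoid overflow
    return total / n

gammas = {}
for lam in (0.5, 4.0):
    e_mid = np.linalg.eigvalsh(aah_hamiltonian(1000, lam))[500]  # mid-spectrum energy
    gammas[lam] = lyapunov(e_mid, lam)
    print(lam, round(gammas[lam], 2))
```

For lam = 4 the estimate approaches log(lam/2) = log 2, the known value on the spectrum in the localized phase; for lam = 0.5 it stays near zero, signalling extended states.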
Additionally, we connect with experimental findings concerning a realization of the GPD model in an interacting ultracold atomic system. Third, we consider a generalization of the Su-Schrieffer-Heeger (SSH) model. The SSH chain is a one-dimensional tight-binding model that can host localized bound states at its ends. It is celebrated as the simplest model having topological properties captured by invariants calculated from its band structure. We study two coupled SSH chains, i.e. the SSH ladder. The SSH ladder has a complex phase diagram determined by inter-chain and intra-chain couplings. We find three distinct phases: a topological phase hosting localized zero-energy modes, a topologically trivial phase having no edge modes, and a phase akin to a weak topological insulator where edge modes are not robust. The topological phase of the SSH ladder is analogous to the Kitaev chain, which is known to support localized Majorana fermion end modes. Bound states of the SSH ladder having the same spatial wavefunction profiles as these Majorana end modes are Dirac fermions or bosons. The SSH ladder is consequently more suited for experimental observation than the Kitaev chain. For quasiperiodic variations of the inter-chain coupling, the SSH ladder topological phase diagram reproduces Hofstadter's butterfly pattern. This system is thus a candidate for experimental observation of the famous fractal. We discuss one possible experimental setup for realizing the SSH ladder in its Kitaev chain-like phase in a mechanical meta-material system. This approach could also be used to experimentally study the Hofstadter butterfly in the future. Presented together, these three topics illustrate the richness of the intersection of condensed matter and AMO physics and the many exciting prospects of combining theoretical work in the former with experimental advances in the latter.
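The localized zero-energy end modes of the SSH chain mentioned above are easy to exhibit numerically. The sketch below is a standard single-chain diagnostic, not the ladder calculation of the thesis: it diagonalizes an open SSH chain with intra-cell hopping v and inter-cell hopping w, and compares the smallest-magnitude eigenvalue in the two phases:

```python
import numpy as np

def ssh_chain(n_cells, v, w):
    """Open SSH tight-binding chain: hoppings alternate v (intra-cell)
    and w (inter-cell) along 2*n_cells sites."""
    n = 2 * n_cells
    hop = np.array([v if i % 2 == 0 else w for i in range(n - 1)])
    return np.diag(hop, 1) + np.diag(hop, -1)

min_energies = {}
for v, w in ((1.0, 0.5), (0.5, 1.0)):       # trivial vs topological phase
    energies = np.linalg.eigvalsh(ssh_chain(40, v, w))
    min_energies[(v, w)] = np.abs(energies).min()
    print(v, w, min_energies[(v, w)])
```

In the topological phase (w > v) the smallest |E| is exponentially small in the chain length, since the two end modes hybridize only weakly, while in the trivial phase (v > w) the spectrum is gapped down to the band edge at |v - w|.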

    The Significance of Evidence-based Reasoning for Mathematics, Mathematics Education, Philosophy and the Natural Sciences

    In this multi-disciplinary investigation we show how an evidence-based perspective of quantification, in terms of algorithmic verifiability and algorithmic computability, admits evidence-based definitions of well-definedness and effective computability, which yield two unarguably constructive interpretations of the first-order Peano Arithmetic PA, over the structure N of the natural numbers, that are complementary, not contradictory. The first yields the weak, standard, interpretation of PA over N, which is well-defined with respect to assignments of algorithmically verifiable Tarskian truth values to the formulas of PA under the interpretation. The second yields a strong, finitary, interpretation of PA over N, which is well-defined with respect to assignments of algorithmically computable Tarskian truth values to the formulas of PA under the interpretation. We situate our investigation within a broad analysis of quantification vis-à-vis: * Hilbert's epsilon-calculus * Goedel's omega-consistency * The Law of the Excluded Middle * Hilbert's omega-Rule * An Algorithmic omega-Rule * Gentzen's Rule of Infinite Induction * Rosser's Rule C * Markov's Principle * The Church-Turing Thesis * Aristotle's particularisation * Wittgenstein's perspective of constructive mathematics * An evidence-based perspective of quantification. By showing how these are formally inter-related, we highlight the fragility of both the persisting, theistic, classical/Platonic interpretation of quantification grounded in Hilbert's epsilon-calculus, and the persisting, atheistic, constructive/Intuitionistic interpretation of quantification rooted in Brouwer's belief that the Law of the Excluded Middle is non-finitary. We then consider some consequences for mathematics, mathematics education, philosophy, and the natural sciences of an agnostic, evidence-based, finitary interpretation of quantification that challenges classical paradigms in all these disciplines.