
    Dynamic Connectivity in Disk Graphs

    Let S ⊆ ℝ² be a set of n sites in the plane, such that every site s ∈ S has an associated radius r_s > 0. Let D(S) be the disk intersection graph defined by S, i.e., the graph with vertex set S and an edge between two distinct sites s, t ∈ S if and only if the disks with centers s, t and radii r_s, r_t intersect. Our goal is to design data structures that maintain the connectivity structure of D(S) as sites are inserted into and/or deleted from S. First, we consider unit disk graphs, i.e., we fix r_s = 1 for all sites s ∈ S. For this case, we describe a data structure that has O(log² n) amortized update time and O(log n / log log n) query time. Second, we look at disk graphs with bounded radius ratio Ψ, i.e., for all s ∈ S we have 1 ≤ r_s ≤ Ψ, for a parameter Ψ that is known in advance. Here, we investigate not only the fully dynamic case, but also the incremental and the decremental scenarios, where only insertions or only deletions of sites are allowed. In the fully dynamic case, we achieve amortized expected update time O(Ψ log⁴ n) and query time O(log n / log log n). This improves the currently best known update time by a factor of Ψ. In the incremental case, we achieve a logarithmic dependency on Ψ, with a data structure that has O(α(n)) amortized query time and O(log Ψ log⁴ n) amortized expected update time, where α(n) denotes the inverse Ackermann function. For the decremental setting, we first develop an efficient decremental disk-revealing data structure: given two sets R and B of disks in the plane, we can delete disks from B, and upon each deletion we receive a list of all disks in R that no longer intersect the union of B. Using this data structure, we obtain decremental data structures with a query time of O(log n / log log n) that support deletions in O(n log Ψ log⁴ n) overall expected time for disk graphs with bounded radius ratio Ψ and in O(n log⁵ n) overall expected time for disk graphs with arbitrary radii, assuming that the deletion sequence is oblivious to the internal random choices of the data structures.
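    To make the setting concrete, here is a minimal baseline sketch for the incremental unit-disk case only: unit disks intersect exactly when their centers are at distance at most 2, so connectivity under insertions can be maintained with a union-find structure and a linear scan per insertion. The class names (`UnionFind`, `IncrementalUnitDiskConnectivity`) are illustrative and not from the paper, and this naive O(n)-per-insertion approach is far from the polylogarithmic data structures described above.

```python
import math

class UnionFind:
    """Disjoint-set forest with path compression and union by size."""
    def __init__(self):
        self.parent, self.size = {}, {}

    def add(self, x):
        if x not in self.parent:
            self.parent[x], self.size[x] = x, 1

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]


class IncrementalUnitDiskConnectivity:
    """Naive baseline: unit disks (radius 1) intersect iff their centers are at distance <= 2."""
    def __init__(self):
        self.sites = []
        self.uf = UnionFind()

    def insert(self, s):
        self.uf.add(s)
        for t in self.sites:              # O(n) scan per insertion
            if math.dist(s, t) <= 2.0:    # the two unit disks intersect
                self.uf.union(s, t)
        self.sites.append(s)

    def connected(self, s, t):
        return self.uf.find(s) == self.uf.find(t)


if __name__ == "__main__":
    dc = IncrementalUnitDiskConnectivity()
    for p in [(0.0, 0.0), (1.5, 0.0), (10.0, 10.0)]:
        dc.insert(p)
    print(dc.connected((0.0, 0.0), (1.5, 0.0)))    # True: distance 1.5 <= 2
    print(dc.connected((0.0, 0.0), (10.0, 10.0)))  # False
```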

    The infrared structure of perturbative gauge theories

    Infrared divergences in the perturbative expansion of gauge theory amplitudes and cross sections have been a focus of theoretical investigations for almost a century. New insights still continue to emerge as higher perturbative orders are explored and high-precision phenomenological applications demand an ever more refined understanding. This review aims to provide a pedagogical overview of the subject. We briefly cover some of the early historical results, provide some simple examples of low-order applications in the context of perturbative QCD, and discuss the tools necessary to extend these results to all perturbative orders. Finally, we describe recent developments concerning the calculation of soft anomalous dimensions in multi-particle scattering amplitudes at high orders, and we provide a brief introduction to the very active field of infrared subtraction for the calculation of differential distributions at colliders. © 2022 Elsevier B.V.

    Computational and experimental studies on the reaction mechanism of bio-oil components with additives for increased stability and fuel quality

    As one of the world’s largest palm oil producers, Malaysia faces a major disposal problem, as vast amounts of oil palm biomass waste are produced. To overcome this problem, these biomass wastes can be liquefied into biofuel with fast pyrolysis technology. However, further upgrading of fast pyrolysis bio-oil via direct solvent addition is required to overcome its undesirable attributes. In addition, the high production cost of biofuels often hinders their commercialisation. Thus, the designed solvent-oil blend needs to achieve both fuel functionality and economic targets to be competitive with conventional diesel fuel. In this thesis, a multi-stage computer-aided molecular design (CAMD) framework was employed for bio-oil solvent design. In the design problem, molecular signature descriptors were applied to accommodate different classes of property prediction models. However, the complexity of the CAMD problem increases with the height of the signature due to the combinatorial nature of higher-order signatures. Thus, a consistency rule was developed to reduce the size of the CAMD problem. The CAMD problem was then further extended to address economic aspects via a fuzzy multi-objective optimisation approach. Next, a rough-set-based machine learning (RSML) model was proposed to correlate feedstock characterisation and pyrolysis conditions with pyrolysis bio-oil properties by generating decision rules. The generated decision rules were analysed from a scientific standpoint to identify the underlying patterns, while ensuring the rules were logical. The decision rules can be used to select the optimal feedstock composition and pyrolysis conditions to produce pyrolysis bio-oil with targeted fuel properties. Next, the results obtained from the computational approaches were verified through an experimental study. The generated pyrolysis bio-oils were blended with the identified solvents at various mixing ratios. In addition, emulsification of the solvent-oil blend in diesel was also conducted with the help of surfactants. Lastly, potential extensions and prospective work for this study are discussed in the later part of this thesis. To conclude, this thesis presents a combination of computational and experimental approaches to upgrading the fuel properties of pyrolysis bio-oil. As a result, high-quality biofuel can be generated as a cleaner-burning replacement for conventional diesel fuel.
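    As a rough illustration of the fuzzy multi-objective optimisation step mentioned above, the sketch below uses the classic max-min (Zimmermann-style) aggregation: each objective is mapped to a degree of satisfaction between 0 and 1, and the candidate with the largest worst-case degree is selected. All names, property values, and bounds (`heating_value`, `cost`, the candidate blends) are placeholders, not data or models from the thesis.

```python
def satisfaction(value, worst, best):
    """Linear fuzzy membership: 0 at the worst acceptable value, 1 at the target value."""
    mu = (value - worst) / (best - worst)
    return max(0.0, min(1.0, mu))

# Hypothetical candidate solvent-oil blends with (heating value in MJ/kg, cost in $/kg).
candidates = {
    "blend_A": {"heating_value": 28.0, "cost": 0.65},
    "blend_B": {"heating_value": 31.5, "cost": 0.80},
    "blend_C": {"heating_value": 25.0, "cost": 0.50},
}

# Placeholder aspiration levels: higher heating value is better, lower cost is better.
bounds = {
    "heating_value": {"worst": 20.0, "best": 35.0},
    "cost":          {"worst": 1.00, "best": 0.40},
}

def overall_satisfaction(props):
    """Max-min aggregation: a blend is only as good as its worst-satisfied objective."""
    return min(
        satisfaction(props["heating_value"], **bounds["heating_value"]),
        satisfaction(props["cost"], **bounds["cost"]),
    )

best = max(candidates, key=lambda name: overall_satisfaction(candidates[name]))
print(best, round(overall_satisfaction(candidates[best]), 3))
```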

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Autonomous and Efficient Exploration of Unknown Underground Mine Stopes with a Tethered Drone

    Underground mining stopes are often mapped using a sensor located at the end of a pole that the operator introduces into the stope from a secure area. The sensor emits laser beams that provide the distance to a detected wall, thus creating a 3D map. This produces shadow zones and a low point density on the distant walls. To address these challenges, a research team from the Université de Sherbrooke is designing a tethered drone equipped with a rotating LiDAR for this mission, thus benefiting from several points of view. The wired transmission allows for unlimited flight time, shared computing, and real-time communication. To remain compatible with the drone's movement when the tether gets snagged, the excess length is stored on an onboard spool, which contributes to the drone payload. During manual piloting, the human factor causes problems in perceiving and understanding a virtual 3D environment, as well as in executing an optimal mission. This thesis focuses on autonomous navigation in two aspects: path planning and exploration. The system must compute a trajectory that maps the entire environment, minimizing the mission time and respecting the maximum onboard tether length. Path planning using a Rapidly-exploring Random Tree (RRT) quickly finds a feasible path, but the optimization is computationally expensive and the performance is variable and unpredictable. Exploration by the frontier method is representative of the space to be explored, and the path can be optimized by solving a Traveling Salesman Problem (TSP), but existing techniques for a tethered drone only consider the 2D case and do not optimize the global path. To meet these challenges, this thesis presents two new algorithms. The first one, RRT-Rope, produces a path equal to or shorter than those of existing algorithms in a significantly shorter computation time, up to 70% faster than the next best algorithm in a representative environment. A modified version of RRT-connect computes a feasible path, which is then shortened with a deterministic technique that takes advantage of previously added intermediate nodes. The second algorithm, TAPE, is the first 3D cavity exploration method that focuses on minimizing mission time and unwound tether length. On average, the overall path is 4% longer than with the method that solves the TSP, but the tether remains under the allowed length in 100% of the simulated cases, compared to 53% with the initial method. The approach uses a 2-level hierarchical architecture: global planning solves a TSP after frontier extraction, and local planning minimizes the path cost and tether length via a decision function. The integration of these two tools in the NetherDrone produces an intelligent system for autonomous exploration, with semi-autonomous features for operator interaction. This work opens the door to new navigation approaches in the field of inspection, mapping, and Search and Rescue missions.
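    The deterministic path-shortening idea can be illustrated with a generic shortcutting sketch (a simplification, not the exact RRT-Rope procedure): starting from a feasible piecewise-linear path, repeatedly jump from each waypoint to the farthest later waypoint reachable by a collision-free straight segment. The collision checker `is_point_free` and the toy environment are assumptions for the example.

```python
import math

def segment_is_free(p, q, is_point_free, step=0.05):
    """Conservatively sample the segment p-q and check each sample for collision."""
    dist = math.dist(p, q)
    n = max(1, int(dist / step))
    for i in range(n + 1):
        t = i / n
        sample = tuple(pi + t * (qi - pi) for pi, qi in zip(p, q))
        if not is_point_free(sample):
            return False
    return True

def shortcut_path(path, is_point_free):
    """Greedy shortcutting: from each waypoint, jump to the farthest visible later waypoint."""
    shortened = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not segment_is_free(path[i], path[j], is_point_free):
            j -= 1
        shortened.append(path[j])
        i = j
    return shortened

if __name__ == "__main__":
    # Toy 2D world: a thin wall near x = 1 with a gap around y in [0.9, 1.1].
    def is_point_free(p):
        x, y = p
        return not (0.95 <= x <= 1.05 and not (0.9 <= y <= 1.1))

    zigzag = [(0.0, 0.0), (0.5, 0.8), (1.0, 1.0), (1.5, 0.8), (2.0, 0.0)]
    print(shortcut_path(zigzag, is_point_free))  # drops unnecessary waypoints
```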

    Design of new algorithms for gene network reconstruction applied to in silico modeling of biomedical data

    Programa de Doctorado en Biotecnología, Ingeniería y Tecnología Química. Línea de Investigación: Ingeniería, Ciencia de Datos y Bioinformática. Clave Programa: DBI. Código Línea: 111. The root causes of disease are still poorly understood. The success of current therapies is limited because persistent diseases are frequently treated based on their symptoms rather than the underlying cause of the disease. Therefore, biomedical research is experiencing a technology-driven shift to data-driven, holistic approaches to better characterize the molecular mechanisms causing disease. Using omics data as input, emerging disciplines like network biology attempt to model the relationships between biomolecules. To this effect, gene co-expression networks arise as a promising tool for deciphering the relationships between genes in large transcriptomic datasets. However, because of their low specificity and high false positive rate, they demonstrate a limited capacity to retrieve the disrupted mechanisms that lead to disease onset, progression, and maintenance. Within the context of statistical modeling, we dove deeper into the reconstruction of gene co-expression networks with the specific goal of discovering disease-specific features directly from expression data. Using ensemble techniques, which combine the results of various metrics, we were able to capture biologically significant relationships between genes more precisely. With the help of prior biological knowledge and the development of new network inference techniques, we were able to find potential disease-specific features de novo. Through our different approaches, we analyzed large gene sets across multiple samples and used gene expression as a surrogate marker for the inherent biological processes, reconstructing robust gene co-expression networks that are simple to explore. By mining disease-specific gene co-expression networks, we come up with a useful framework for identifying new omics-phenotype associations from conditional expression datasets. In this sense, understanding diseases from the perspective of biological network perturbations will improve personalized medicine, impacting rational biomarker discovery, patient stratification and drug design, and ultimately leading to more targeted therapies. Universidad Pablo de Olavide de Sevilla. Departamento de Deporte e Informática.
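    As a minimal sketch of ensemble co-expression inference of the kind described above (not the thesis's actual method), the example below scores every gene pair with two association measures, Pearson and Spearman correlation, averages their ranks, and keeps the top-ranked pairs as network edges. The threshold, the toy expression matrix, and the function name `coexpression_edges` are illustrative assumptions.

```python
from itertools import combinations

import numpy as np
from scipy.stats import pearsonr, spearmanr

def coexpression_edges(expr, genes, keep_fraction=0.1):
    """expr: (genes x samples) matrix. Returns the top-scoring gene pairs as edges.

    Ensemble score = mean rank of |Pearson| and |Spearman| over all pairs.
    """
    pairs = list(combinations(range(len(genes)), 2))
    pearson = np.array([abs(pearsonr(expr[i], expr[j])[0]) for i, j in pairs])
    spearman = np.array([abs(spearmanr(expr[i], expr[j])[0]) for i, j in pairs])
    # Rank each metric separately, then average: robust to their differing scales.
    ranks = (pearson.argsort().argsort() + spearman.argsort().argsort()) / 2.0
    cutoff = np.quantile(ranks, 1.0 - keep_fraction)
    return [(genes[i], genes[j]) for (i, j), r in zip(pairs, ranks) if r >= cutoff]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = 30
    g1 = rng.normal(size=samples)
    g2 = g1 + 0.1 * rng.normal(size=samples)  # strongly co-expressed with g1
    g3 = rng.normal(size=samples)             # unrelated gene
    expr = np.vstack([g1, g2, g3])
    print(coexpression_edges(expr, ["G1", "G2", "G3"], keep_fraction=0.34))
```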

    Algorithms for Geometric Facility Location: Centers in a Polygon and Dispersion on a Line

    We study three geometric facility location problems in this thesis. First, we consider the dispersion problem in one dimension. We are given an ordered list of (possibly overlapping) intervals on a line. We wish to choose exactly one point from each interval such that their left to right ordering on the line matches the input order. The aim is to choose the points so that the distance between the closest pair of points is maximized, i.e., they must be socially distanced while respecting the order. We give a new linear-time algorithm for this problem that produces a lexicographically optimal solution. We also consider some generalizations of this problem. For the next two problems, the domain of interest is a simple polygon with n vertices. The second problem concerns the visibility center. The convention is to think of a polygon as the top view of a building (or art gallery) where the polygon boundary represents opaque walls. Two points in the domain are visible to each other if the line segment joining them does not intersect the polygon exterior. The distance to visibility from a source point to a target point is the minimum geodesic distance from the source to a point in the polygon visible to the target. The question is: Where should a single guard be located within the polygon to minimize the maximum distance to visibility? For m point sites in the polygon, we give an O((m + n) log (m + n)) time algorithm to determine their visibility center. Finally, we address the problem of locating the geodesic edge center of a simple polygon—a point in the polygon that minimizes the maximum geodesic distance to any edge. For a triangle, this point coincides with its incenter. The geodesic edge center is a generalization of the well-studied geodesic center (a point that minimizes the maximum distance to any vertex). Center problems are closely related to farthest Voronoi diagrams, which are well-studied for point sites in the plane, and less well-studied for line segment sites in the plane. When the domain is a polygon rather than the whole plane, only the case of point sites has been addressed—surprisingly, more general sites (with line segments being the simplest example) have been largely ignored. En route to our solution, we revisit, correct, and generalize (sometimes in a non-trivial manner) existing algorithms and structures tailored to work specifically for point sites. We give an optimal linear-time algorithm for finding the geodesic edge center of a simple polygon.
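    The one-dimensional dispersion problem has a simple decision version that makes the structure visible. The sketch below (function names `feasible` and `max_min_separation` are illustrative) is a generic greedy feasibility check combined with a binary search on the separation d, a numerical approximation rather than the exact lexicographic linear-time algorithm of the thesis: for a candidate d, sweep the intervals left to right and place each point as far left as possible while staying at least d after the previous point.

```python
def feasible(intervals, d):
    """Can one point be chosen per interval, in input order, with consecutive gaps >= d?"""
    prev = None
    for left, right in intervals:
        pos = left if prev is None else max(left, prev + d)  # greedy leftmost placement
        if pos > right:
            return False
        prev = pos
    return True

def max_min_separation(intervals, tol=1e-9):
    """Binary search for the largest feasible separation d (numerical approximation)."""
    lo, hi = 0.0, intervals[-1][1] - intervals[0][0]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(intervals, mid):
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    # Overlapping intervals given in left-to-right input order.
    intervals = [(0.0, 2.0), (1.0, 3.0), (2.0, 4.0)]
    print(round(max_min_separation(intervals), 6))  # 2.0: choose points 0, 2, 4
```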

    A New Deterministic Algorithm for Fully Dynamic All-Pairs Shortest Paths

    We study the fully dynamic All-Pairs Shortest Paths (APSP) problem in undirected edge-weighted graphs. Given an n-vertex graph G with non-negative edge lengths that undergoes an online sequence of edge insertions and deletions, the goal is to support approximate distance queries and shortest-path queries. We provide a deterministic algorithm for this problem that, for a given precision parameter ϵ, achieves approximation factor (log log n)^(2^O(1/ϵ³)) and has amortized update time O(n^ϵ log L) per operation, where L is the ratio of the longest to the shortest edge length. The query time for a distance query is O(2^O(1/ϵ) · log n · log log L), and the query time for a shortest-path query is O(|E(P)| + 2^O(1/ϵ) · log n · log log L), where P is the path that the algorithm returns. To the best of our knowledge, even allowing any o(n)-approximation factor, no adaptive-update algorithms with better than Θ(m) amortized update time and better than Θ(n) query time were known prior to this work. We also note that our guarantees are stronger than the best current guarantees for APSP in decremental graphs in the adaptive-adversary setting.
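    To make the interface concrete, here is a naive exact baseline for fully dynamic APSP (unrelated to the paper's algorithm): updates are applied directly to an adjacency map in O(1) time, and each query runs a fresh Dijkstra computation, so queries cost roughly O(m log n) instead of the polylogarithmic-in-n query bounds above. The class name `NaiveDynamicAPSP` is illustrative.

```python
import heapq
from collections import defaultdict

class NaiveDynamicAPSP:
    """Exact baseline: O(1) updates, one Dijkstra run per query on the current graph."""
    def __init__(self):
        self.adj = defaultdict(dict)  # adj[u][v] = non-negative edge length

    def insert_edge(self, u, v, w):
        self.adj[u][v] = w
        self.adj[v][u] = w

    def delete_edge(self, u, v):
        self.adj[u].pop(v, None)
        self.adj[v].pop(u, None)

    def distance(self, s, t):
        dist, _ = self._dijkstra(s, t)
        return dist.get(t, float("inf"))

    def shortest_path(self, s, t):
        dist, prev = self._dijkstra(s, t)
        if t not in dist:
            return None
        path = [t]
        while path[-1] != s:
            path.append(prev[path[-1]])
        return path[::-1]

    def _dijkstra(self, s, t):
        dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            if u == t:
                break
            for v, w in self.adj[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        return dist, prev

if __name__ == "__main__":
    g = NaiveDynamicAPSP()
    g.insert_edge("a", "b", 1.0)
    g.insert_edge("b", "c", 2.0)
    g.insert_edge("a", "c", 5.0)
    print(g.distance("a", "c"), g.shortest_path("a", "c"))  # 3.0 ['a', 'b', 'c']
    g.delete_edge("a", "b")
    print(g.distance("a", "c"))                             # 5.0
```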

    Graphonomics and your Brain on Art, Creativity and Innovation: Proceedings of the 19th International Graphonomics Conference (IGS 2019 – Your Brain on Art)

    “Graphonomics and your brain on art, creativity and innovation”: a single-track, international forum for discussion of recent advances at the intersection of the creative arts, neuroscience, engineering, media, technology, industry, education, design, forensics, and medicine. The contributions reviewed the state of the art, identified challenges and opportunities, and created a roadmap for the field of graphonomics and your brain on art. The topics addressed include: integrative strategies for understanding neural, affective and cognitive systems in realistic, complex environments; neural and behavioral individuality and variation; neuroaesthetics (the use of neuroscience to explain and understand aesthetic experiences at the neurological level); creativity and innovation; neuroengineering and brain-inspired art, creative concepts and wearable mobile brain-body imaging (MoBI) designs; creative art therapy; informal learning; education; and forensics.

    On a Vehicle Routing Problem with Customer Costs and Multi Depots

    The Vehicle Routing Problem with Customer Costs (VRPCC for short) was developed for railway maintenance scheduling. In detail, corrective maintenance jobs for unexpectedly occurring failures are planned over a short time horizon. These jobs are geographically distributed in the railway network. Furthermore, depending on the severity of a failure, it can be necessary to reduce the top speed on the affected track section in order to avoid safety risks or excessively fast deterioration. For fatal failures, it can even be necessary to close the track section. The resulting limitations on railway service lead to penalty costs for the maintenance operator, which must be paid until the track is repaired and the restrictions are removed. By scheduling the maintenance tasks appropriately, these penalty costs can be reduced by carrying out the corresponding maintenance tasks earlier. However, this may in return lead to increased costs for moving the maintenance machines and crews. The VRPCC was developed for this scheduling problem. In it, a route is defined for each maintenance vehicle and crew that describes the order in which maintenance tasks are processed. Two kinds of costs are considered: first, travel costs for machinery and crew; and second, penalty costs for an unsafe track condition, which have to be paid for each day from failure detection to maintenance completion. To model the penalties, novel customer costs are defined: for each maintenance activity, a customer cost coefficient is given which is incurred for each day between failure detection and failure repair. The objective function of this problem is defined as the sum of travel costs and time-dependent customer costs. With it, the priority of customers can be taken into account without losing sight of travel costs. This new vehicle routing problem was introduced in this thesis via a non-linear partition and permutation model. In this model, a feasible solution is defined by a partition of the job set into subsets, representing the allocation of jobs to vehicles, and a permutation of each subset, representing the order in which the jobs are processed. The start times of the jobs are then calculated based on the order given by the permutations, taking into account that work can only be done in eight-hour shifts during the night. Based on the start times, the customer cost value of each job is computed, which equals the penalty costs paid. The cost of a schedule is then calculated as the sum of travel costs and customer costs. To solve the VRPCC with a commercial linear programming solver, different formulations of the VRPCC as mixed-integer linear programs were developed; in these, the start times become decision variables. It turned out that including customer costs leads to problems that are harder to solve than vehicle routing problems in which only travel costs are minimized. Furthermore, several construction heuristics for the VRPCC were designed and investigated in the thesis, and two local search algorithms, first and best improvement, were applied. The computational experiments showed that the solutions generated by the local search algorithms were much better than the solutions of the construction heuristics. The main part of this thesis was the design of a Branch-and-Bound algorithm for the VRPCC. For this purpose, new lower bounds for the customer cost part of the objective function were formulated. The computational experiments showed that a lower bound computed from the LP relaxation of a specific bin packing problem offered the best trade-off between computational effort and bound quality. For the travel cost part of the objective function, several known lower bounds from the TSP were compared. To design a Branch-and-Bound algorithm, besides efficient lower bounds, suitable branching strategies are also necessary to split the problem space into smaller subspaces. In this thesis, two branching strategies were developed which are based on the non-linear partition and permutation model in order to take advantage of the problem structure. More precisely, new branches are generated by appending or inserting a job into an uncompleted schedule. Consequently, the start times can be computed directly from the jobs planned so far, and tighter lower bounds can be computed for the jobs not yet planned. By means of computational experiments, the developed Branch-and-Bound algorithms were compared with the classical approach, i.e., solving a mixed-integer linear program of the VRPCC with a commercial solver. The results showed that both Branch-and-Bound algorithms solved the small instances faster than the classical approach.
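    The objective function described above is easy to evaluate for a fixed schedule. The sketch below assumes a simplified timing model (jobs on a route run back to back, measured in whole days, rather than the thesis's night-shift model) and placeholder data and function names; it sums Euclidean travel costs over each route and adds each job's daily customer cost coefficient multiplied by the days elapsed from failure detection (day 0) to repair completion.

```python
import math

def route_travel_cost(route, coords, depot, cost_per_km=1.0):
    """Travel cost of one vehicle: depot -> jobs in order -> depot (Euclidean distances)."""
    stops = [depot] + [coords[j] for j in route] + [depot]
    return cost_per_km * sum(math.dist(a, b) for a, b in zip(stops, stops[1:]))

def route_customer_cost(route, duration_days, daily_penalty, start_day=0):
    """Customer cost: each job pays its daily penalty until its repair is completed."""
    day, total = start_day, 0.0
    for job in route:
        day += duration_days[job]          # simplified: jobs run back to back
        total += daily_penalty[job] * day  # days elapsed since detection (day 0)
    return total

def schedule_cost(routes, coords, depot, duration_days, daily_penalty):
    return sum(
        route_travel_cost(r, coords, depot)
        + route_customer_cost(r, duration_days, daily_penalty)
        for r in routes
    )

if __name__ == "__main__":
    # Placeholder instance: two vehicles, four maintenance jobs.
    coords = {"j1": (0, 5), "j2": (2, 6), "j3": (8, 1), "j4": (9, 3)}
    duration_days = {"j1": 1, "j2": 2, "j3": 1, "j4": 1}
    daily_penalty = {"j1": 100.0, "j2": 40.0, "j3": 250.0, "j4": 10.0}
    depot = (0, 0)
    routes = [["j1", "j2"], ["j3", "j4"]]  # job order per vehicle
    print(round(schedule_cost(routes, coords, depot, duration_days, daily_penalty), 1))
```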