
    Method for prediction and control by uncertain microsatellite magnetic cleanliness based on calculation and compensation magnetic field spatial harmonics

    Aim. Development of a method for predicting and controlling microsatellite magnetic cleanliness that takes into account the uncertainty of the microsatellite's magnetic characteristics, based on calculating the spatial spherical harmonics of the magnetic field in the area of the onboard magnetometer installation and on using compensating multipoles. Methodology. The spatial spherical harmonics of the microsatellite magnetic field in the area of the onboard magnetometer installation are calculated as the solution of a nonlinear minimax optimization problem based on near-field measurements, in order to predict the far-field magnitude of the spacecraft magnetic field. The nonlinear objective function is calculated as the weighted sum of squared residuals between the measured and predicted magnetic field. The values of the compensating dipoles, quadrupoles and octupoles, and the coordinates of their placement inside the spacecraft for compensating the dipole, quadrupole and octupole components of the microsatellite's initial magnetic field, are likewise calculated as the solution of a nonlinear minimax optimization problem. Both nonlinear minimax optimization problems are solved with particle swarm nonlinear optimization algorithms. Results. Results are presented for predicting the spacecraft's far magnetic field magnitude from the spatial spherical harmonics of its magnetic field using near-field measurements, and for compensating the dipole, quadrupole and octupole components of the initial magnetic field while taking the uncertainty of the spacecraft's magnetic characteristics into account, so as to ensure microsatellite magnetic cleanliness. Originality. A method is developed for predicting and controlling spacecraft magnetic cleanliness based on calculating the spatial spherical harmonics of the magnetic field in the area of the onboard magnetometer installation, using compensation of the dipole, quadrupole and octupole components of the initial magnetic field and taking the uncertainty of the magnetic characteristics into account. Practical value.
    The important practical problem of ensuring the magnetic cleanliness of the «Sich-2» microsatellite family is solved. Based on the spatial spherical-harmonics model of the magnetic field, the dipole, quadrupole and octupole components of the initial magnetic field of the sensor for the kinetic parameters of the neutral component of the space plasma are compensated at the installation point of the onboard magnetometer LEMI-016 by placing compensating dipoles, quadrupoles and octupoles, with the uncertainty of the spacecraft's magnetic characteristics taken into account.
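    The minimax fitting step described above can be sketched numerically. The toy below fits a single dipole moment to synthetic near-field measurements by minimising the worst weighted squared residual with a plain particle swarm optimiser; the paper's method fits full spherical-harmonic multipole expansions and handles characteristic uncertainty, so every name, constant and value here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def dipole_field(m, r):
    """Point-dipole flux density at positions r (physical constant absorbed)."""
    rn = np.linalg.norm(r, axis=1, keepdims=True)
    return 3.0 * r * (r @ m)[:, None] / rn**5 - m / rn**3

# Synthetic "near-field measurements" from a known moment (toy ground truth).
m_true = np.array([0.5, -0.2, 0.3])
sensors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                    [-1, 0, 0], [0, -1, 0], [0, 0, -1]], dtype=float)
B_meas = dipole_field(m_true, sensors)
weights = np.ones(len(sensors))

def objective(m):
    """Minimax objective: worst weighted squared residual over all sensors."""
    res = np.sum((B_meas - dipole_field(m, sensors))**2, axis=1)
    return np.max(weights * res)

# Plain global-best PSO over the 3 dipole components.
n_part, n_iter = 60, 300
x = rng.uniform(-2, 2, (n_part, 3))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([objective(p) for p in x])
g = pbest[np.argmin(pbest_f)]

for _ in range(n_iter):
    r1, r2 = rng.random((n_part, 1)), rng.random((n_part, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = x + v
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    g = pbest[np.argmin(pbest_f)]

print(np.round(g, 3))  # recovered moment, should approach m_true
```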

    Optimization of Construction Projects Time-Cost-Quality-Environment Trade-off Problem Using Adaptive Selection Slime Mold Algorithm

    Artificial intelligence (AI) is employed in the construction industry to address optimization problems, which aids the growth and popularization of AI in the field. This study utilizes a hybrid algorithm called the Adaptive Selection Slime Mold Algorithm (ASSMA), which combines Tournament Selection (TS) with the Slime Mould Algorithm (SMA) to address the four-factor optimization problem in construction projects: the time, cost, quality and environmental impact (TCQE) trade-off, whose solution requires efficient resource management. The combination improves the original algorithm's performance, speeds up the search for solutions and achieves good convergence toward the Pareto front. Case studies illustrate the capabilities of the new model, and ASSMA results are compared to those of the data envelopment analysis (DEA) method used by previous researchers. To demonstrate the suggested model's superiority and effectiveness, it is also compared to multi-objective particle swarm optimization (MOPSO), the multi-objective artificial bee colony (MOABC) and the non-dominated sorting genetic algorithm (NSGA-II). The overall results show that the ASSMA model maintains diversity and offers a robust and convincing set of optimal solutions, allowing readers to understand the potential of the proposed model.
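    A minimal sketch (not the authors' code) of the Pareto-front bookkeeping that any multi-objective TCQE optimiser such as ASSMA relies on. Time, cost and environmental impact are minimised; quality is maximised, so it is negated to obtain a pure minimisation problem. All sample numbers are illustrative.

```python
def dominates(a, b):
    """True if a is at least as good as b in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated (time, cost, -quality, environment) vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# (time [days], cost [k$], -quality score, CO2 [t]) for four candidate schedules
candidates = [(120, 800, -0.90, 50),
              (100, 950, -0.85, 55),
              (120, 820, -0.90, 50),   # dominated by the first candidate
              (140, 700, -0.80, 45)]
print(pareto_front(candidates))
```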

    A revisited branch-and-cut algorithm for large-scale orienteering problems

    The orienteering problem is a route optimization problem that consists of finding a simple cycle maximizing the total collected profit subject to a maximum distance limitation. In the last few decades, the occurrence of this problem in real-life applications has boosted the development of many heuristic algorithms to solve it. However, over the same period, little research has been devoted to exact algorithms for the orienteering problem. The aim of this work is to develop an exact method able to obtain the optimum on a wider set of instances than previous methods or, where optimality remains out of reach, to improve the known lower and upper bounds. We propose a revisited version of the branch-and-cut algorithm for the orienteering problem which includes new contributions in the separation algorithms for inequalities stemming from the cycle problem, in the separation loop, in the variable pricing, and in the calculation of the lower and upper bounds of the problem. Our proposal is compared to three state-of-the-art algorithms on 258 benchmark instances with up to 7397 nodes. The computational experiments show the relevance of the designed components: 18 new optima, 76 new best-known solutions and 85 new upper-bound values were obtained.
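    A hedged sketch (not the paper's implementation) of one classic separation ingredient in branch-and-cut for cycle problems: find the connected components of the "support graph" (edges whose LP value exceeds a tolerance); a support split into several components indicates a violated connectivity cut. The edge list and fractional values below are illustrative.

```python
def support_components(n_nodes, edges, x, eps=1e-6):
    """Union-find over edges e with x[e] > eps; returns the node components."""
    parent = list(range(n_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for (u, v), val in zip(edges, x):
        if val > eps:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv

    comps = {}
    for i in range(n_nodes):
        comps.setdefault(find(i), set()).add(i)
    return list(comps.values())

# Fractional solution whose support splits into two triangles; an LP solution
# like this violates a connectivity cut between the two components.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
x     = [1.0,    0.5,    1.0,    1.0,    1.0,    1.0,    0.0]
print(support_components(6, edges, x))
```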

    A heuristic with a performance guarantee for the Commodity constrained Split Delivery Vehicle Routing Problem

    The Commodity constrained Split Delivery Vehicle Routing Problem (C-SDVRP) is a routing problem where customer demands are composed of multiple commodities. A fleet of capacitated vehicles must serve customer demands in a way that minimizes the total routing costs. Vehicles can transport any set of commodities and customers are allowed to be visited multiple times. However, the demand for a single commodity must be delivered by one vehicle only. In this work, we develop a heuristic with a performance guarantee to solve the C-SDVRP. The proposed heuristic is based on a set covering formulation, where the exponentially many variables correspond to routes. First, a subset of the variables is obtained by solving the linear relaxation of the formulation by means of a column generation approach, which embeds a new pricing heuristic aimed at reducing the computational time. Solving the linear relaxation gives a valid lower bound used as a performance guarantee for the heuristic. Then, we devise a restricted master heuristic to provide good upper bounds: the formulation is restricted to the subset of variables found so far and solved as an integer program with a commercial solver. A local search based on a mathematical programming operator is applied to improve the solution. We test the heuristic algorithm on benchmark instances from the literature. Several new best-known solutions are found in reasonable computational time. The comparison with state-of-the-art heuristics for the C-SDVRP shows that our approach significantly improves the solution time while keeping a comparable solution quality.
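    The set-covering view used by the heuristic can be illustrated in a few lines: each column is a route with a cost and the set of (customer, commodity) pairs it serves. The paper solves the restricted master as an integer program with a commercial solver; here a classic greedy cover stands in for that step, and the route pool and costs are made-up numbers.

```python
def greedy_cover(routes, demands):
    """Pick routes by best cost per newly covered (customer, commodity) pair.

    Assumes the route pool can cover every demand pair (as the restricted
    master does once column generation has run).
    """
    uncovered, chosen, total = set(demands), [], 0.0
    while uncovered:
        best = min(
            (i for i, (c, served) in enumerate(routes) if served & uncovered),
            key=lambda i: routes[i][0] / len(routes[i][1] & uncovered))
        cost, served = routes[best]
        chosen.append(best)
        total += cost
        uncovered -= served
    return chosen, total

# Columns: (route cost, {(customer, commodity), ...}) -- illustrative pool.
routes = [
    (10.0, {("c1", "A"), ("c1", "B")}),
    (12.0, {("c2", "A"), ("c3", "A")}),
    (4.0,  {("c1", "B")}),
    (20.0, {("c1", "A"), ("c2", "A"), ("c3", "A")}),
]
demands = {("c1", "A"), ("c1", "B"), ("c2", "A"), ("c3", "A")}

chosen, total = greedy_cover(routes, demands)
print(chosen, total)   # greedy cover; not necessarily the optimal selection
```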

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Data-assisted modeling of complex chemical and biological systems

    Complex systems are abundant in chemistry and biology; they can be multiscale, possibly high-dimensional or stochastic, with nonlinear dynamics and interacting components. It is often nontrivial (and sometimes impossible) to determine and study the macroscopic quantities of interest and the equations they obey. One can only (judiciously or randomly) probe the system, gather observations and study trends. In this thesis, Machine Learning is used as a complement to traditional modeling and numerical methods to enable data-assisted (or data-driven) dynamical systems. As case studies, three complex systems are sourced from diverse fields: The first one is a high-dimensional computational neuroscience model of the Suprachiasmatic Nucleus of the human brain, where bifurcation analysis is performed by simply probing the system. Then, manifold learning is employed to discover a latent space of neuronal heterogeneity. Second, Machine Learning surrogate models are used to optimize dynamically operated catalytic reactors. An algorithmic pipeline is presented through which it is possible to program catalysts with active learning. Third, Machine Learning is employed to extract laws of Partial Differential Equations describing bacterial Chemotaxis. It is demonstrated how Machine Learning manages to capture the rules of bacterial motility at the macroscopic level, starting from diverse data sources (including real-world experimental data). More importantly, a framework is constructed through which already existing, partial knowledge of the system can be exploited. 
These applications showcase how Machine Learning can be used synergistically with traditional simulations in different scenarios: (i) equations are available but the overall system is so high-dimensional that efficiency and explainability suffer, (ii) equations are available but lead to highly nonlinear black-box responses, (iii) only data are available (of varying source and quality) and equations need to be discovered. For such data-assisted dynamical systems, we can perform fundamental tasks, such as integration, steady-state location, continuation and optimization. This work aims to unify traditional scientific computing and Machine Learning in an efficient, data-economical, generalizable way, where both the physical system and the algorithm matter.
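    A minimal illustration (my own toy example, not from the thesis) of the "equations need to be discovered" scenario: given only a sampled trajectory, estimate the right-hand side of a dynamical system by least squares on finite-difference derivatives. Here the hidden law is dx/dt = a·x with a = -0.5; real equation-discovery pipelines use far richer candidate term libraries and noisy, heterogeneous data.

```python
import numpy as np

a_true, dt = -0.5, 0.01
t = np.arange(0.0, 5.0, dt)
x = np.exp(a_true * t)                    # exact trajectory: the only "data"

dxdt = (x[2:] - x[:-2]) / (2 * dt)        # central finite differences
# Regress dx/dt on the candidate term x to recover the hidden coefficient.
a_hat = np.linalg.lstsq(x[1:-1, None], dxdt, rcond=None)[0][0]
print(round(float(a_hat), 4))             # ~ -0.5
```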

    Machine-learning-aided design optimization of internal flow channel cross-sections


    Adaptive Robotic Information Gathering via Non-Stationary Gaussian Processes

    Robotic Information Gathering (RIG) is a foundational research topic that answers how a robot (team) collects informative data to efficiently build an accurate model of an unknown target function under robot embodiment constraints. RIG has many applications, including but not limited to autonomous exploration and mapping, 3D reconstruction or inspection, search and rescue, and environmental monitoring. A RIG system relies on a probabilistic model's prediction uncertainty to identify critical areas for informative data collection. Gaussian Processes (GPs) with stationary kernels have been widely adopted for spatial modeling. However, real-world spatial data is typically non-stationary -- different locations do not have the same degree of variability. As a result, the prediction uncertainty does not accurately reveal prediction error, limiting the success of RIG algorithms. We propose a family of non-stationary kernels named Attentive Kernel (AK), which is simple, robust, and can extend any existing kernel to a non-stationary one. We evaluate the new kernel in elevation mapping tasks, where AK provides better accuracy and uncertainty quantification over the commonly used stationary kernels and the leading non-stationary kernels. The improved uncertainty quantification guides the downstream informative planner to collect more valuable data around the high-error area, further increasing prediction accuracy. A field experiment demonstrates that the proposed method can guide an Autonomous Surface Vehicle (ASV) to prioritize data collection in locations with significant spatial variations, enabling the model to characterize salient environmental features.
    Comment: International Journal of Robotics Research (IJRR). arXiv admin note: text overlap with arXiv:2205.0642
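    The core idea behind this style of non-stationary kernel can be sketched as mixing several stationary base kernels with input-dependent weights, k(x, x') = Σ_m w_m(x) w_m(x') k_m(x, x'), so different regions get different effective length-scales. The actual Attentive Kernel learns the weights with a small network; the hand-coded softmax weights and length-scales below are purely illustrative assumptions.

```python
import numpy as np

lengthscales = np.array([0.1, 1.0])   # two base RBF kernels (toy choice)

def rbf(x, xp, ell):
    return np.exp(-0.5 * (x - xp) ** 2 / ell**2)

def weights(x):
    """Hypothetical attention: prefer the short length-scale for x > 0."""
    logits = np.stack([5.0 * x, -5.0 * x])
    e = np.exp(logits - logits.max(axis=0))
    return e / e.sum(axis=0)                  # softmax over the 2 base kernels

def attentive_k(xs):
    W = weights(xs)                           # shape (2, n)
    K = np.zeros((len(xs), len(xs)))
    for m, ell in enumerate(lengthscales):
        Km = rbf(xs[:, None], xs[None, :], ell)   # stationary base kernel
        K += np.outer(W[m], W[m]) * Km            # input-dependent mixing
    return K

xs = np.linspace(-2, 2, 25)
K = attentive_k(xs)
print(K.shape, np.allclose(K, K.T))
```

    Each term is a Schur product of positive semidefinite matrices, so the mixed kernel remains a valid covariance function.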

    BOtied: Multi-objective Bayesian optimization with tied multivariate ranks

    Many scientific and industrial applications require joint optimization of multiple, potentially competing objectives. Multi-objective Bayesian optimization (MOBO) is a sample-efficient framework for identifying Pareto-optimal solutions. We show a natural connection between non-dominated solutions and the highest multivariate rank, which coincides with the outermost level line of the joint cumulative distribution function (CDF). We propose the CDF indicator, a Pareto-compliant metric for evaluating the quality of approximate Pareto sets that complements the popular hypervolume indicator. At the heart of MOBO is the acquisition function, which determines the next candidate to evaluate by navigating the best compromises among the objectives. Multi-objective acquisition functions that rely on box decomposition of the objective space, such as the expected hypervolume improvement (EHVI) and entropy search, scale poorly to a large number of objectives. We propose an acquisition function, called BOtied, based on the CDF indicator. BOtied can be implemented efficiently with copulas, a statistical tool for modeling complex, high-dimensional distributions. We benchmark BOtied against common acquisition functions, including EHVI and random scalarization (ParEGO), in a series of synthetic and real-data experiments. BOtied performs on par with the baselines across datasets and metrics while being computationally efficient.
    Comment: 10 pages (+5 appendix), 9 figures. Submitted to NeurIPS

    Vibration-based damage localisation: Impulse response identification and model updating methods

    Structural health monitoring has gained more and more interest over the recent decades. As the technology has matured and monitoring systems are employed commercially, the development of more powerful and precise methods is the logical next step in this field. Especially vibration sensor networks with few measurement points combined with utilisation of ambient vibration sources are attractive for practical applications, as this approach promises to be cost-effective while requiring minimal modification to the monitored structures. Since efficient methods for damage detection have already been developed for such sensor networks, the research focus shifts towards extracting more information from the measurement data, in particular to the localisation and quantification of damage. Two main concepts have produced promising results for damage localisation. The first approach involves a mechanical model of the structure, which is used in a model updating scheme to find the damaged areas of the structure. Second, there is a purely data-driven approach, which relies on residuals of vibration estimations to find regions where damage is probable. While much research has been conducted following these two concepts, different approaches are rarely directly compared using the same data sets. Therefore, this thesis presents advanced methods for vibration-based damage localisation using model updating as well as a data-driven method and provides a direct comparison using the same vibration measurement data. The model updating approach presented in this thesis relies on multiobjective optimisation. Hence, the applied numerical optimisation algorithms are presented first. On this basis, the model updating parameterisation and objective function formulation is developed. The data-driven approach employs residuals from vibration estimations obtained using multiple-input finite impulse response filters. 
    Both approaches are then verified using a simulated cantilever beam considering multiple damage scenarios. Finally, experimentally obtained data from an outdoor girder mast structure are used to validate the approaches. In summary, this thesis provides an assessment of model updating and residual-based damage localisation by means of verification and validation cases. It is found that the residual-based method exhibits numerical performance sufficient for real-time applications while providing a high sensitivity towards damage. However, the localisation accuracy is found to be superior using the model updating method.
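    A schematic version (my assumptions, not the thesis code) of the data-driven branch: identify a finite impulse response (FIR) filter between two vibration channels from healthy data, then monitor the estimation residual on new data; damage changes the transmission path and inflates the residual. The filter order, noise level and "damage" model are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, order = 2000, 8
h_healthy = rng.standard_normal(order)   # toy "true" transmission path
h_damaged = 0.8 * h_healthy              # damage weakens the path globally

def simulate(h, u, noise=0.01):
    """Channel response to excitation u plus measurement noise."""
    return np.convolve(u, h)[:len(u)] + noise * rng.standard_normal(len(u))

def regressor(u, order):
    """Rows [u[t], u[t-1], ..., u[t-order+1]] for t = order-1 .. len(u)-1."""
    return np.column_stack([u[order - 1 - k: len(u) - k] for k in range(order)])

# Identify the FIR filter on healthy training data (least squares).
u_train = rng.standard_normal(n)
y_train = simulate(h_healthy, u_train)
h_hat, *_ = np.linalg.lstsq(regressor(u_train, order),
                            y_train[order - 1:], rcond=None)

def residual_rms(u, y):
    """RMS of the estimation residual against the healthy-state filter."""
    return np.sqrt(np.mean((y[order - 1:] - regressor(u, order) @ h_hat) ** 2))

u_new = rng.standard_normal(n)
r_healthy = residual_rms(u_new, simulate(h_healthy, u_new))
r_damaged = residual_rms(u_new, simulate(h_damaged, u_new))
print(r_healthy < r_damaged)   # residual inflates under damage -> True
```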