1,969 research outputs found

    AI: Limits and Prospects of Artificial Intelligence

    The emergence of artificial intelligence has triggered enthusiasm and the promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and attendant economic hopes and fears, utopias and dystopias, that are associated with the current and future development of artificial intelligence.

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented at international conferences, seminars, workshops, and journals after the dissemination of the fourth volume in 2015, or they are new. The contributions in each part of this volume are ordered chronologically.

The first part of this book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR based on set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, together with their Matlab codes.

Because more applications of DSmT have emerged since the appearance of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision-making, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.

Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
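To make the PCR mechanism concrete, here is a minimal Python sketch of the classical two-source PCR5 rule (the base rule only, not the improved or fast variants collected in this volume); the dictionary representation of mass assignments is an illustrative choice:

```python
from itertools import product

def pcr5(m1, m2):
    """PCR5 combination of two belief mass assignments (a sketch).

    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    Conjunctive consensus is kept on non-empty intersections; conflicting
    mass is redistributed back to the conflicting focal elements in
    proportion to the masses that produced the conflict.
    """
    combined = {}
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:  # conjunctive consensus
            combined[inter] = combined.get(inter, 0.0) + a * b
        elif a + b > 0:  # proportional conflict redistribution
            combined[A] = combined.get(A, 0.0) + a * a * b / (a + b)
            combined[B] = combined.get(B, 0.0) + b * a * b / (a + b)
    return combined

# Example: two highly conflicting sources over the frame {t, f}
t, f, tf = frozenset("t"), frozenset("f"), frozenset("tf")
m1 = {t: 0.6, tf: 0.4}
m2 = {f: 0.7, tf: 0.3}
print(pcr5(m1, m2))  # masses still sum to 1, conflict redistributed
```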

    Runway Safety Improvements Through a Data Driven Approach for Risk Flight Prediction and Simulation

    Runway overrun is one of the most frequently occurring types of flight accident threatening the safety of aviation. Sensors have improved with recent technological advancements and allow data collection during flights. The recorded data help to better identify the characteristics of runway overruns. These improved technological capabilities and the growing air traffic have increased the momentum for reducing flight risk using artificial intelligence. Discussions on incorporating artificial intelligence to enhance flight safety are timely and critical. Using artificial intelligence, we may be able to develop the tools needed to better identify runway overrun risk and increase awareness of runway overruns. This work seeks to increase the attitude, skill, and knowledge (ASK) of runway overrun risks by predicting the flight states near touchdown and simulating flights exposed to runway overrun precursors. To achieve this, the methodology develops a prediction model and a simulation model. During the flight training process, the prediction model is used in flight to identify potential risks, and the simulation model is used post-flight to review the flight behavior. The prediction model identifies potential risks by predicting the flight parameters that best characterize the landing performance during the final approach phase. The predicted flight parameters are used to alert the pilots to any runway overrun precursors that may pose a threat. The predictions and alerts are made when thresholds of various flight parameters are exceeded. The flight simulation model simulates the final approach trajectory with an emphasis on capturing the effect wind has on the aircraft. The focus is on wind because it is a relatively significant factor during the final approach, when the aircraft is typically already stabilized. The flight simulation is used to quickly assess the differences between flight patterns that have triggered overrun precursors and normal flights with no abnormalities. These differences are crucial in learning how to mitigate adverse flight conditions. Both models are built on neural networks. The main challenges of developing a neural network model are that the model design space is unique to each problem, so one design space cannot accommodate multiple problems, and that the design space can be significantly large depending on the depth of the model. Therefore, a hyperparameter optimization algorithm is investigated and used to design the data and model structures that best characterize the aircraft behavior during the final approach. A series of experiments is performed to observe how the model accuracy changes with different data pre-processing methods for the prediction model and different neural network models for the simulation model. The data pre-processing methods include indexing the data by different frequencies, using different window sizes, and data clustering. The neural network models include simple Recurrent Neural Networks, Gated Recurrent Units, Long Short-Term Memory networks, and Neural Network Autoregressive with Exogenous Input. Another series of experiments is performed to evaluate the robustness of these models to adverse wind and flare, because different wind conditions and flares represent controls that the models need to map to the predicted flight states.
The most robust models are then used to identify significant features for the prediction model and the feasible control space for the simulation model. The outcomes of the most robust models are also mapped to the required landing distance metric so that the results of the prediction and simulation are easily read. The methodology is then demonstrated with a sample flight exposed to an overrun precursor, high approach speed, to show how the models can potentially increase the attitude, skill, and knowledge of runway overrun risk. The main contribution of this work is the evaluation of the accuracy and robustness of prediction and simulation models trained using Flight Operational Quality Assurance (FOQA) data. Unlike many studies that focused on optimizing only the model structures, this work optimized both the data and model structures to ensure that the data capture well the dynamics of the aircraft they represent. To achieve this, this work introduced a hybrid genetic algorithm that combines the benefits of conventional and quantum-inspired genetic algorithms to converge quickly to an optimal configuration while exploring the design space. With the optimized model, this work identified the data features from the final approach with a higher contribution to predicting airspeed, vertical speed, and pitch angle near touchdown. The top contributing features are altitude, angle of attack, core rpm, and airspeeds. For both the prediction and the simulation models, this study examines the impact of various data pre-processing methods on the accuracy of the two models. The results may help future studies identify the right data pre-processing methods for their work. Another contribution of this work is the evaluation of how flight control and wind affect both the prediction and the simulation models. This is achieved by mapping the model accuracy at various levels of control surface deflection, wind speed, and wind direction change. The results showed fairly consistent prediction and simulation accuracy at different levels of control surface deflection and wind conditions, demonstrating that neural-network-based models are effective in creating robust prediction and simulation models of aircraft during the final approach. The results also showed that data frequency has a significant impact on prediction and simulation accuracy, so it is important to have sufficient training data in the conditions in which the models will be used. The final contribution of this work is the demonstration of how the prediction and the simulation models can be used to increase awareness of runway overrun.
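As an illustration of the kind of sequence model compared above, here is a minimal PyTorch sketch of an LSTM mapping a window of final-approach features to the three touchdown parameters highlighted in the abstract; the architecture, feature set, and dimensions are illustrative assumptions, not the author's exact configuration:

```python
import torch
import torch.nn as nn

class TouchdownPredictor(nn.Module):
    """LSTM mapping a window of final-approach features to touchdown states."""
    def __init__(self, n_features=4, hidden=64, n_targets=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, x):               # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # predict from the last time step

# Illustrative inputs: altitude, angle of attack, core rpm, airspeed,
# sampled over a 50-step window for a batch of 8 approaches.
model = TouchdownPredictor()
x = torch.randn(8, 50, 4)
pred = model(x)                         # (8, 3): airspeed, vertical speed, pitch
print(pred.shape)
```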

    Progressive failure analysis of composite laminates under a three-dimensional stress state using layered finite elements

    Laminar composites are extensively used in civil engineering due to their exceptional strength, stiffness, corrosion resistance, and cost-effectiveness, and they are ideal for high-reliability applications. The 21st century's focus on environmental protection has led to the increased use of natural-based materials such as cross-laminated timber (CLT) in building construction. CLT panels have a high stiffness-to-weight ratio, making them well suited as load-bearing elements such as walls and floors. The optimal design of laminar composites is often hindered by uncertainties in failure prediction and by the computational cost of progressive failure analysis (PFA), particularly for larger structures. This study introduces a novel prediction model that combines the smeared crack band (SCB) damage model with the full layerwise theory (FLWT). The aim is to enhance the computational efficiency of PFA in laminar composites while maintaining the accuracy of 3D finite element models. The SCB model accurately captures the response of a damaged lamina in both the fiber and matrix directions using distinct strain-softening curves, ensuring a precise representation of post-failure behaviour. The damage law is derived from the assumption that the total energy required to cause failure in an element (the released strain energy) is equivalent to the energy necessary to create a crack passing through it. To alleviate mesh dependency, the fracture energy is scaled by a characteristic element length. Failure initiation and failure modes are determined using the Hashin failure criterion. Furthermore, the model is extended to account for the different failure behaviour of timber in tension and compression. This extension widens the computational framework's applicability to the computational mechanics of bio-based composites such as CLT. The validity of the model is confirmed through an extensive experimental program carried out at the Faculty of Civil Engineering, University of Belgrade. The application of layered finite elements to continuum damage modelling in laminar composites remains largely unexplored in the literature, particularly in combination with the SCB damage model. The FLWT-based model accurately captures the 3D stress state within each lamina, including continuous transverse stresses between adjacent layers, which is crucial for accurate prediction of failure initiation. Furthermore, the FLWT exhibits only a weak coupling between the size of the considered domain and the mesh, a notable difference from standard finite element models. The developed FLWT-SCB prediction model is integrated into the original FLWTFEM framework, offering a user-friendly graphical environment for easy visualization of input and output data. The proposed model's efficiency has been verified on numerous benchmark examples in progressive failure analyses of laminar composites and CLT panels with arbitrary geometries, loading and boundary conditions, and stacking sequences. The model has demonstrated its accuracy in predicting the response of both intact and damaged laminar composites, and valuable recommendations for future research in this field are included.
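Failure initiation here is governed by the Hashin criterion. As a concrete reference, below is a minimal Python sketch of the classical plane-stress Hashin failure indices; the thesis works with a full 3D stress state and a timber extension, so this 2D form is only a simplified illustration:

```python
def hashin_2d(s11, s22, t12, XT, XC, YT, YC, S12, S23):
    """Plane-stress Hashin failure indices (an index >= 1.0 flags initiation).

    Conventions: compressive stresses are negative; strengths are positive.
    XT/XC: fibre tension/compression, YT/YC: matrix tension/compression,
    S12/S23: in-plane and transverse shear strengths.
    """
    modes = {}
    if s11 >= 0.0:   # fibre tension: axial stress plus in-plane shear
        modes["fibre_tension"] = (s11 / XT) ** 2 + (t12 / S12) ** 2
    else:            # fibre compression
        modes["fibre_compression"] = (s11 / XC) ** 2
    if s22 >= 0.0:   # matrix tension
        modes["matrix_tension"] = (s22 / YT) ** 2 + (t12 / S12) ** 2
    else:            # matrix compression (Hashin's 1980 quadratic form)
        modes["matrix_compression"] = ((s22 / (2 * S23)) ** 2
                                       + ((YC / (2 * S23)) ** 2 - 1) * s22 / YC
                                       + (t12 / S12) ** 2)
    return modes

# Example: a lamina loaded in transverse tension with some shear
print(hashin_2d(s11=400, s22=30, t12=40,
                XT=1500, XC=1200, YT=50, YC=200, S12=70, S23=50))
```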

    On the motion planning & control of nonlinear robotic systems

    In the last decades, we have seen a soaring interest in autonomous robots, boosted not only by academia and industry but also by an ever-increasing demand from civil users. As a matter of fact, autonomous robots are rapidly spreading into all aspects of human life: we can see them clean houses, navigate through city traffic, or harvest fruits and vegetables. Almost all commercial drones already exhibit unprecedented and sophisticated skills that make them suitable for these applications, such as obstacle avoidance, simultaneous localisation and mapping, path planning, visual-inertial odometry, and object tracking. The major limitations of such robotic platforms lie in the limited payload they can carry, in their cost, and in the limited autonomy imposed by finite battery capacity. For this reason, researchers have started to develop new algorithms able to run even on platforms that are resource-constrained both in computational capability and in the types of sensors they carry, focusing especially on very cheap sensors and hardware. The possibility of using a limited number of sensors has allowed UAV sizes to shrink considerably, while the implementation of new, efficient algorithms that perform the same task in less time helps compensate for the limited autonomy. However, the robots developed so far are not mature enough to operate fully autonomously without human supervision, owing to dimensions that are still too large (especially for aerial vehicles), which make these platforms unsafe around humans, and to the non-negligible probability of numerical and decision errors that robots may make. From this perspective, this thesis aims to review and improve the current state-of-the-art solutions for autonomous navigation from a purely practical point of view. In particular, we focus deeply on the problems of robot control, trajectory planning, environment exploration, and obstacle avoidance.
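For a flavour of one classical building block such a review covers, here is a minimal sketch of artificial-potential-field obstacle avoidance; this is a textbook method used as an illustrative stand-in, not necessarily the approach developed in the thesis, and all gains and names are assumptions:

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                         influence=2.0, step=0.05):
    """One gradient step of the artificial potential field method.

    An attractive force pulls toward the goal; each obstacle within its
    influence radius pushes the robot away. Returns the updated position.
    """
    force = k_att * (goal - pos)                       # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < influence:                          # repulsive term
            force += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    return pos + step * force

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 5.0])
obstacles = [np.array([2.5, 2.4])]
for _ in range(400):
    pos = potential_field_step(pos, goal, obstacles)
print(pos)  # ends near the goal while skirting the obstacle
```

The well-known weakness of this method, local minima where attraction and repulsion cancel, is one reason reviews like this one compare it against sampling- and optimisation-based planners.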

    Optimal Surface Fitting of Point Clouds Using Local Refinement: Application to GIS Data

    This open access book provides insights into the novel Locally Refined B-spline (LR B-spline) surface format, which is suited to representing terrain and seabed data in a compact way. It provides an alternative to the well-known raster and triangulated surface representations. An LR B-spline surface has an overall smooth behavior and allows the modeling of local details with only a limited growth in data volume. In regions where many data points belong to the same smooth area, LR B-splines give a very lean representation of the shape by locally adapting the resolution of the spline space to the size and local shape variations of the region. The iterative fitting method can be modified to improve the accuracy in particular domains of a point cloud. Statistical information criteria can help determine the optimal threshold, the number of iterations to perform, and some parameters of the underlying mathematical functions (degree of the splines, parameter representation). The resulting surfaces are well suited for analysis and for computing secondary information such as contour curves and minimum and maximum points. Deformation analysis is another potential application of fitting point clouds with LR B-splines.
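The fitting procedure is an iterative fit-check-refine loop. LR B-splines are not available in common Python libraries, so the sketch below uses a global smoothing spline from scipy as a stand-in: where LR B-splines would refine the spline space locally around points with large residuals, it simply relaxes the smoothing factor. It illustrates only the loop structure, under those stated assumptions:

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def adaptive_fit(x, y, z, tol=0.05, max_iter=8):
    """Fit-check-refine loop in the spirit of LR B-spline surface fitting."""
    s = float(len(x))                  # smoothing factor on scipy's scale
    for _ in range(max_iter):
        surf = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=s)
        resid = np.abs(surf.ev(x, y) - z)
        if resid.max() <= tol:         # all points within tolerance: stop
            break
        s *= 0.5                       # stand-in for local refinement
    return surf, resid

# Scattered synthetic terrain-like samples with a little noise
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
z = np.sin(3 * x) * np.cos(3 * y) + rng.normal(0, 0.01, 200)
surf, resid = adaptive_fit(x, y, z)
print(resid.max())
```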

    AI for time-resolved imaging: from fluorescence lifetime to single-pixel time of flight

    Time-resolved imaging is a field of optics which measures the arrival time of light at the camera. This thesis looks at two time-resolved imaging modalities: fluorescence lifetime imaging and time-of-flight measurement for depth imaging and ranging. Both of these applications require temporal accuracy on the order of pico- to nanosecond (10⁻¹²–10⁻⁹ s) scales. This demands special camera technology and optics that can sample light intensity extremely quickly, much faster than an ordinary video camera can. However, such detectors can be very expensive compared to regular cameras while offering lower image quality. Further, the information of interest is often hidden (encoded) in the raw temporal data. Therefore, computational imaging algorithms are used to enhance, analyse, and extract information from time-resolved images. "A picture is worth a thousand words." This describes a fundamental blessing and curse of image analysis: images contain extreme amounts of data. Consequently, it is very difficult to design algorithms that encompass all the possible pixel permutations and combinations that can encode this information. Fortunately, the rise of AI and machine learning (ML) allows us instead to create algorithms in a data-driven way. This thesis demonstrates the application of ML to time-resolved imaging tasks, ranging from parameter estimation in noisy data and the decoding of overlapping information, through super-resolution, to inferring 3D information from 1D (temporal) data.
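For a flavour of the parameter-estimation task in fluorescence lifetime imaging, here is a minimal sketch of the conventional non-ML baseline: least-squares fitting of a mono-exponential decay to a noisy photon-arrival histogram. The decay model, time window, and parameter values are illustrative assumptions; ML estimators replace this fitting step when decays overlap or photon counts are very low:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, background):
    """Mono-exponential fluorescence decay with a constant background."""
    return amplitude * np.exp(-t / tau) + background

# Simulate a noisy decay histogram: tau = 2.5 ns over a 10 ns window
t = np.linspace(0, 10, 256)                      # time bins in ns
ideal = decay(t, amplitude=1000, tau=2.5, background=20)
counts = np.random.default_rng(0).poisson(ideal)  # photon shot noise

# Least-squares estimate of the lifetime from the noisy histogram
popt, _ = curve_fit(decay, t, counts, p0=(counts.max(), 1.0, 0.0))
print(f"estimated lifetime: {popt[1]:.2f} ns")
```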

    Strong-Field Physics in QED and QCD: From Fundamentals to Applications

    We provide a pedagogical review article on the fundamentals and applications of quantum dynamics in strong electromagnetic fields in QED and QCD. The fundamentals include the basic picture of Landau quantization and the resummation techniques applied to the class of higher-order diagrams that are enhanced by large magnitudes of the external fields. We then discuss observable effects of the vacuum fluctuations in the presence of strong fields, which constitute the interdisciplinary research field of nonlinear QED. We also discuss extensions of the Heisenberg-Euler effective theory to finite temperature/density and to non-Abelian theories, with some applications. Next, we proceed to the paradigm of dimensional reduction emerging in the low-energy dynamics in strong magnetic fields. The mechanisms of superconductivity, the magnetic catalysis of chiral symmetry breaking, and the Kondo effect are addressed from a unified point of view in terms of the renormalization-group method. We provide an up-to-date summary of lattice QCD simulations in magnetic fields for chiral symmetry breaking and related topics as of the end of 2022. Finally, we discuss novel transport phenomena induced by the chiral anomaly and the axial-charge dynamics. These discussions are supported by a number of appendices.
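For orientation, the Landau quantization underlying this picture is the standard relativistic Landau-level spectrum; a minimal statement of the textbook result, in conventions that may differ from the review's, for a Dirac fermion of mass m and charge q in a constant magnetic field B along z is:

```latex
% Relativistic Landau levels in a constant magnetic field B \hat{z}
E_n(p_z) = \sqrt{\, m^2 + p_z^2 + 2n|qB| \,}, \qquad n = 0, 1, 2, \dots
% Each level carries a transverse degeneracy |qB|/2\pi per unit area;
% at energies below \sqrt{2|qB|} only the n = 0 level is active, which is
% the effective dimensional reduction to 1+1 dimensions discussed above.
```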

    Connecting core-collapse supernova remnants and their central neutron stars

    Get PDF
    The topic of this thesis is the investigation of core-collapse supernova remnants and their central neutron stars, with an emphasis on the intrinsic connection between the two. This is achieved via spectroscopic and astrometric analysis of their X-ray emission, which is characteristic of the hot plasma and the energetic nonthermal processes occurring in these objects. On the one hand, this work investigates the velocity distribution of central compact objects (CCOs), a class of neutron stars characterized by purely thermal emission. Via careful astrometric calibration of data from the Chandra X-ray telescope, it is shown that none of the investigated CCOs exhibits a problematically high velocity in the celestial plane that would exceed the expectation for the recoil experienced by the neutron star during the explosion. Furthermore, the presented proper-motion measurements are used, in combination with the expansion of the associated supernova remnants (SNRs), to constrain their ages in a model-independent manner. This allows dating the SNR Puppis A to an age of 4600 ± 700 years, and establishing strict upper limits on the ages of the SNRs G350.1-0.3 and RX J1713.7-3946, at 700 and 1700 years, respectively. This is complemented by spectroscopic analysis of the X-ray emission of the SNRs Puppis A and Vela with physically motivated models, using data from the recently launched SRG/eROSITA telescope. In Puppis A, this allows the identification of regions which were only recently shock-heated, as well as a detailed search for ejecta produced during the explosion. This reveals that Puppis A contains an atypically high fraction of silicon for a core-collapse SNR, as well as a misalignment of intermediate-mass elements with the recoil direction implied by the motion of the neutron star. Similarly, in the much older Vela SNR, an unexpected composition of X-ray detected ejecta is revealed, with strongly supersolar abundance ratios of neon and magnesium relative to oxygen. Furthermore, the presented analysis allows, for the first time, the isolation of the hard synchrotron emission of the central Vela pulsar wind nebula from the softer thermal emission of the SNR. This effort demonstrates a much larger extent of the synchrotron nebula than is visible at other wavelengths, reaching a radius of up to three degrees. A possible explanation for its phenomenology is the slow diffusion of high-energy electrons through a relatively weak ambient magnetic field, similar to the gamma-ray halos observed around several older pulsars.