Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume of Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions from researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals since the dissemination of the fourth volume in 2015, or are new. The contributions in each part of this volume are ordered chronologically.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR based on set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged since the publication of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief masses, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
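Since PCR5 recurs throughout the first two parts, a minimal sketch of the classical two-source PCR5 rule may help fix ideas. The frame and the mass values below are illustrative assumptions, not taken from the book (whose reference implementations are the Matlab codes mentioned above).

```python
from itertools import product

def pcr5(m1, m2):
    """Combine two basic belief assignments (dicts mapping frozenset
    focal elements to masses) with the two-source PCR5 rule:
    conjunctive consensus plus proportional redistribution of each
    partial conflict back to the two elements that generated it."""
    out = {}
    for (x1, w1), (x2, w2) in product(m1.items(), m2.items()):
        inter = x1 & x2
        if inter:  # consensus part
            out[inter] = out.get(inter, 0.0) + w1 * w2
        else:      # partial conflict: redistribute to x1 and x2
            denom = w1 + w2
            if denom > 0:
                out[x1] = out.get(x1, 0.0) + w1**2 * w2 / denom
                out[x2] = out.get(x2, 0.0) + w2**2 * w1 / denom
    return out

# Toy frame {A, B}; the masses below are purely illustrative.
A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, B: 0.3, A | B: 0.1}
m2 = {A: 0.2, B: 0.7, A | B: 0.1}
combined = pcr5(m1, m2)
print({"".join(sorted(k)): round(v, 4) for k, v in combined.items()})
assert abs(sum(combined.values()) - 1.0) < 1e-12  # masses still sum to 1
```

Note that, unlike Dempster's rule, PCR5 keeps each partial conflict local: the mass w1*w2 of a conflicting pair is split back between the two contributing focal elements in proportion to their own masses, rather than being renormalized over all focal elements.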
Eddy current defect response analysis using sum of Gaussian methods
This dissertation is a study of methods to automatically detect eddy current differential-coil defect signatures and approximate them as a summed collection of Gaussian functions (SoG). Datasets of varying material, defect size, inspection frequency, and coil diameter were investigated. Dimensionally reduced representations of the defect responses were obtained using common existing reduction methods and novel SoG-based enhancements to them. The efficacy of the SoG-enhanced representations was studied using common interpretable machine learning (ML) classifier designs, with the SoG representations yielding significant improvements in common analysis metrics.
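As a rough illustration of the SoG idea (not the dissertation's actual pipeline), the following sketch fits a sum of Gaussians to a synthetic two-lobed signature reminiscent of a differential-coil response; the signal model, noise level, and initial guess are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def sum_of_gaussians(x, *params):
    """Sum of K Gaussians; params = (a1, mu1, s1, a2, mu2, s2, ...)."""
    y = np.zeros_like(x)
    for a, mu, s in zip(params[0::3], params[1::3], params[2::3]):
        y += a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    return y

# Synthetic stand-in for a differential-coil defect signature:
# one positive and one negative lobe, plus measurement noise.
x = np.linspace(-1, 1, 400)
truth = sum_of_gaussians(x, 1.0, -0.15, 0.08, -0.9, 0.15, 0.08)
rng = np.random.default_rng(0)
y = truth + 0.02 * rng.standard_normal(x.size)

p0 = [0.8, -0.2, 0.1, -0.8, 0.2, 0.1]          # rough initial guess
popt, _ = curve_fit(sum_of_gaussians, x, y, p0=p0)
print(np.round(popt, 3))  # recovered (amplitude, centre, width) triples
```

The fitted triples then act as a compact, physically interpretable feature vector for downstream classifiers, which is the sense in which an SoG representation can enhance dimensionality reduction.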
Roadmap on Label-Free Super-resolution Imaging
Label-free super-resolution (LFSR) imaging relies on light-scattering processes in nanoscale objects, without the need for the fluorescent (FL) staining required in super-resolved FL microscopy. The objectives of this Roadmap are to present a comprehensive vision of the developments and the state of the art in this field, and to discuss the resolution boundaries and hurdles that need to be overcome to break the classical diffraction limit of label-free imaging. The scope of this Roadmap spans from advanced interference detection techniques, where the diffraction-limited lateral resolution is combined with unsurpassed axial and temporal resolution, to techniques with true lateral super-resolution capability that are based on understanding resolution as an information-science problem, on using novel structured illumination, near-field scanning, and nonlinear optics approaches, and on designing superlenses based on nanoplasmonics, metamaterials, transformation optics, and microsphere-assisted approaches. To this end, this Roadmap brings under the same umbrella researchers from the physics and biomedical optics communities, in which such studies have often developed separately. The ultimate intent of this paper is to create a vision for the current and future development of LFSR imaging based on its physical mechanisms, and to provide an opening for a series of articles in this field.
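For reference, the classical diffraction limit that the Roadmap sets out to break is the Abbe limit; the numerical example below is illustrative:

```latex
d_{\min} \;=\; \frac{\lambda}{2\,\mathrm{NA}}
\qquad\text{e.g. } \lambda = 500\,\text{nm},\ \mathrm{NA} = 1.4
\;\Rightarrow\; d_{\min} \approx 179\,\text{nm}
```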
Studies of hybrid pixel detectors for use in Transmission Electron Microscopy
Hybrid pixel detectors (HPDs) are a class of direct electron detectors that have been adopted for use in a wide variety of experimental modalities across all branches of electron microscopy. Nevertheless, there remains scope for further improving and optimising their performance for specific applications, and for increasing the range of experiments for which they are suitable. The aims of this thesis are two-fold: firstly, to develop a more comprehensive understanding of current-generation HPDs using Si sensors, with a view to optimising their design; secondly, to determine the advantages of alternative sensor materials that, in principle, should improve the performance of HPDs in transmission electron microscopy (TEM) due to their increased stopping power.
The first three chapters review the relevant theoretical background. Chapter 1 covers the physics underpinning the performance of semiconductor-based sensors in electron microscopy, the operation of detectors more generally, and the theory underlying the metrics used to evaluate detector performance. Chapter 2 introduces TEM as a key tool in the study of nano- and atomic-scale systems, along with an overview of the detector technologies used in TEM. Chapter 3 describes the experimental methods and software packages used to acquire the results presented in the latter half of the thesis.
Chapter 4, the first results chapter, presents a comparison of the performance of Medipix3 detectors with Si sensors, with various combinations of pixel pitch and sensor thickness, for 60 keV and 200 keV electrons. In Chapter 5, simulations of the interactions of electrons with energies ranging from 30-300 keV with GaAs:Cr and CdTe/CZT, two of the most viable alternatives to Si for use in the sensors of HPDs, are compared with simulations of the interactions of electrons with Si. A comparative study of the performance of a Medipix3 device with a GaAs:Cr sensor against that of a Si sensor of the same thickness and pixel pitch, for electrons with energies ranging from 60-300 keV, is presented in Chapter 6. Also included in this chapter are the results of investigations into the defects present in the GaAs:Cr sensor material and how these affect device performance. These consist of confocal scanning transmission electron microscopy scans used to estimate the size and shape of individual pixels and how these relate to the linearity of pixels' response, as well as studies of how the efficacy of a standard flat-field correction depends on the incident electron flux. In the final results chapter, the focus shifts to preliminary measurements of the response of an integrating detector with a GaAs:Cr sensor to electrons. These initial experimental measurements prompted further simulations investigating how the backside contact of GaAs:Cr sensors can be improved.
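As background to the detector metrics discussed in Chapter 1, the following is a generic sketch (not the thesis code) of estimating a modulation transfer function (MTF) from a one-dimensional edge-spread function; the synthetic edge model and the 55 um pitch are assumptions, the latter chosen to echo a Medipix3-like geometry.

```python
import numpy as np

def mtf_from_edge(edge_profile, pixel_pitch_um):
    """Estimate the MTF from a 1-D edge-spread function (ESF):
    differentiate to get the line-spread function (LSF), then take
    the magnitude of its Fourier transform, normalised at zero
    frequency. Frequencies are returned in cycles/mm."""
    lsf = np.gradient(edge_profile)
    lsf *= np.hanning(lsf.size)               # taper to limit leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_um / 1000.0)
    return freqs, mtf

# Synthetic ESF sampled on a 55 um pitch grid.
x = np.arange(256)
esf = 1.0 / (1.0 + np.exp(-(x - 128) / 2.0))   # smooth model edge
freqs, mtf = mtf_from_edge(esf, pixel_pitch_um=55.0)
print(freqs[:4], np.round(mtf[:4], 3))
```

In practice, charge sharing between neighbouring pixels broadens the LSF and depresses the MTF at high spatial frequencies, which is one reason sensor material and thickness matter so much in the comparisons above.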
Boosting the sensitivity of continuous gravitational waves all-sky searches using advanced filtering techniques
The work presented in this PhD thesis has been carried out in the context of gravitational-wave searches. Since the first detection on 14 September 2015 by the LIGO-Virgo collaboration, a growing number of gravitational-wave events have been detected, all emitted by the coalescence of binary systems involving black holes and/or neutron stars. My work focuses on the search for continuous gravitational waves, for which a first detection is still missing. These signals are expected to be emitted, for instance, by spinning neutron stars whose shape is asymmetric with respect to the rotation axis, and are at least five orders of magnitude weaker than the typical amplitude of detected binary coalescences. In this PhD thesis I report on the work done in four different projects, with the common purpose of increasing the sensitivity of continuous-wave searches, involving both data-analysis and instrumental aspects. The first project is a contribution to the commissioning of the Virgo interferometer in view of the next observing run, O4, which will start in May 2023. My contribution has been mainly devoted to noise hunting, focused on the identification and mitigation of instrumental-noise sources that can degrade the sensitivity of continuous-wave searches.
The other three projects concern data analysis. I have focused, in particular, on all-sky searches for sources without an electromagnetic counterpart and on long-lasting signals from rapidly evolving newly born neutron stars. I have studied in detail the robustness of an all-sky data-analysis method in the case of overlapping signals. This is relevant for some exotic classes of continuous-wave sources and, more generally, in view of third-generation detectors such as the Einstein Telescope. I have developed a two-dimensional filter, called the triangular filter, to be applied to the search for long-lasting gravitational waves from unstable neutron stars, showing that thanks to this method an increase of the search sensitivity of about is achievable. Finally, I describe the first steps of a broader effort to develop a new procedure for all-sky continuous-wave searches, exploiting a statistic based on the sidereal modulation that the Earth's rotation imprints on astrophysical signals.
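The abstract does not specify the triangular filter itself, so the following is only a generic stand-in: a lower-triangular 2-D kernel convolved with a toy time-frequency map to boost a frequency track that decays over time, as expected for a spinning-down newly born neutron star. The kernel shape, map size, and signal model are all assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def triangular_kernel(height, width):
    """Lower-triangular 2-D kernel as a crude stand-in for a template
    matched to a frequency track that decays over time. Purely
    illustrative; normalised to unit sum."""
    k = np.tril(np.ones((height, width)))
    return k / k.sum()

# Toy time-frequency map (rows = frequency bins, cols = time bins):
# noise plus a weak descending track.
rng = np.random.default_rng(1)
tfmap = rng.random((128, 128))
for t in range(100):
    tfmap[100 - t, t + 10] += 0.8            # frequency decreasing in time

score = convolve2d(tfmap, triangular_kernel(8, 8), mode="same")
f_idx, t_idx = np.unravel_index(np.argmax(score), score.shape)
print("strongest response near (freq bin, time bin):", f_idx, t_idx)
```

The general point is that a filter whose support follows the expected track accumulates signal power coherently across time-frequency bins, whereas a single-bin statistic does not.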
Accelerating inference in cosmology and seismology with generative models
Statistical analyses in many physical sciences require running simulations of the system under examination. Such simulations provide information complementary to theoretical analytic models and represent an invaluable tool for investigating the dynamics of complex systems. However, running simulations is often computationally expensive, and the large number of mock realizations required to reach sufficient statistical precision often makes the problem intractable. In recent years, machine learning has emerged as a possible way to speed up the generation of scientific simulations. Machine-learning generative models usually rely on iteratively feeding true simulations to the algorithm until it learns the important common features and is capable of producing accurate simulations in a fraction of the time. In this thesis, advanced machine-learning algorithms are explored and applied to the challenge of accelerating physical simulations. Various techniques are applied to problems in cosmology and seismology, and the benefits and limitations of this approach are shown through a critical analysis. The algorithms are applied to compelling problems in these fields, including surrogate models for the seismic wave equation, the emulation of cosmological summary statistics, and the fast generation of large simulations of the Universe. These problems are formulated within a relevant statistical framework and tied to real data-analysis pipelines. In the conclusions, a critical overview of the results is provided, together with an outlook on possible future extensions of the work presented in this thesis.
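As a minimal illustration of the emulation idea (one of several generative techniques the thesis explores, none of which this sketch reproduces), a surrogate model can replace an expensive simulator after being trained on a handful of runs. The toy simulator, the Gaussian-process surrogate, and the kernel choice below are all assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy "simulator": an expensive code reduced here to a closed form,
# mapping one parameter to a scalar summary statistic.
def expensive_simulation(theta):
    return np.sin(3.0 * theta) + 0.5 * theta

# Run the simulator at a handful of training points only.
theta_train = np.linspace(0.0, 2.0, 8)[:, None]
y_train = expensive_simulation(theta_train).ravel()

# Emulator: cheap to evaluate, with built-in uncertainty estimates.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                              normalize_y=True).fit(theta_train, y_train)
theta_test = np.linspace(0.0, 2.0, 200)[:, None]
mean, std = gp.predict(theta_test, return_std=True)
print("max emulator std:", std.max())   # flags where more sims would help
```

The predictive uncertainty is the key design feature: it tells the analysis pipeline where the surrogate can be trusted and where additional true simulations should be run.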
Image and Video Forensics
Nowadays, images and videos have become the main modalities of information exchanged in everyday life, and their pervasiveness has led the image forensics community to question their reliability, integrity, confidentiality, and security. Multimedia content is generated in many different ways through the use of consumer electronics and high-quality digital imaging devices, such as smartphones, digital cameras, tablets, and wearable and IoT devices. The ever-increasing convenience of image acquisition has facilitated the instant distribution and sharing of digital images on social platforms, generating a great volume of exchanged data. Moreover, the pervasiveness of powerful image-editing tools has allowed the manipulation of digital images for malicious or criminal ends, up to the creation of synthesized images and videos using deep learning techniques. In response to these threats, the multimedia forensics community has devoted major research efforts to source identification and manipulation detection. In all cases where images and videos serve as critical evidence (e.g., forensic investigations, fake-news debunking, information warfare, and cyberattacks), forensic technologies that help determine the origin, authenticity, and integrity of multimedia content can become essential tools. This book collects a diverse and complementary set of articles that demonstrate new developments and applications in image and video forensics to tackle new and serious challenges in ensuring media authenticity.
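As one concrete example of the kind of manipulation-detection heuristic this literature builds on, the sketch below implements simple error level analysis (ELA) with Pillow. ELA is a classic baseline rather than a technique singled out by this book, and the file name is a placeholder.

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Recompress a JPEG at a known quality and return the absolute
    difference image. Regions originally saved at a different quality
    (e.g. spliced-in content) often stand out. A coarse heuristic,
    not a complete forensic pipeline."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    diff = ImageChops.difference(original, recompressed)
    # Stretch the (usually faint) residual so it is visible.
    extrema = max(ch[1] for ch in diff.getextrema())
    scale = 255.0 / max(extrema, 1)
    return diff.point(lambda p: min(255, int(p * scale)))

# 'suspect.jpg' is a placeholder path for illustration.
error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

Production forensic tools combine many such cues (sensor noise fingerprints, compression history, inconsistencies in lighting and geometry) rather than relying on any single heuristic.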
The Road to General Intelligence
Humans have always dreamed of automating laborious physical and intellectual tasks, but the latter has proved more elusive than naively expected. Seven decades of systematic study of Artificial Intelligence have witnessed cycles of hubris and despair. The successful realization of General Intelligence (evidenced by the kind of cross-domain flexibility enjoyed by humans) will spawn an industry worth billions and transform the range of viable automation tasks. The recent notable successes of Machine Learning have led to conjecture that it might be the appropriate technology for delivering General Intelligence. In this book, we argue that the framework of machine learning is fundamentally at odds with any reasonable notion of intelligence and that essential insights from previous decades of AI research are being forgotten. We claim that a fundamental change in perspective is required, mirroring that which took place in the philosophy of science in the mid-20th century. We propose a framework for General Intelligence, together with a reference architecture that emphasizes the need for anytime bounded rationality and a situated denotational semantics. We give necessary emphasis to compositional reasoning, with the required compositionality provided via principled symbolic-numeric inference mechanisms based on universal constructions from category theory.
• Details the pragmatic requirements for real-world General Intelligence.
• Describes how machine learning fails to meet these requirements.
• Provides a philosophical basis for the proposed approach.
• Provides mathematical detail for a reference architecture.
• Describes a research program intended to address issues of concern in contemporary AI.
The book includes an extensive bibliography, with ~400 entries covering the history of AI and many related areas of computer science and mathematics. The target audience is the entire gamut of Artificial Intelligence/Machine Learning researchers and industrial practitioners. There is a mixture of descriptive and rigorous sections, according to the nature of the topic. Undergraduate mathematics is in general sufficient. Familiarity with category theory is advantageous for a complete understanding of the more advanced sections, but these may be skipped by the reader who desires an overall picture of the essential concepts. This is an open-access book.
Artificial Intelligence for Multimedia Signal Processing
Artificial intelligence technologies are being actively applied to broadcasting and multimedia processing. A great deal of research has been conducted in a wide variety of fields, such as content creation, transmission, and security, and in the past two to three years attempts have been made to improve image, video, speech, and other data compression efficiency in areas related to MPEG media processing technology. Additionally, technologies for media creation, processing, editing, and scenario creation are very important areas of research in multimedia processing and engineering. This book collects topics broadly spanning advanced computational intelligence algorithms and technologies for emerging multimedia signal processing, such as computer vision, speech/sound/text processing, and content analysis/information mining.
Holistic improvement of image acquisition and reconstruction in fluorescence microscopy
Recent developments in microscopic imaging have led to a better understanding of intra- and intercellular metabolic processes and have made it possible, for example, to visualize structural properties of viral pathogens. In this thesis, the imaging process of widefield and confocal scanning microscopy techniques is treated holistically in order to highlight general strategies and maximise their information content. Poisson (shot) noise is assumed to be the fundamental noise process for the given measurements. A stable focus position is a basic requirement, e.g. for long-term measurements, in order to provide reliable information about potential changes inside the field of view. While newer microscopy systems can be equipped with a hardware autofocus, this is not yet a widespread standard. For image-based focus analysis, different metrics are presented for ideal, noisy, and aberrated measurements, in the case of spherical aberration and astigmatism. A stable focus position is also relevant in the example of 2-photon confocal imaging, and the situation is further aggravated in the given example, the measurement of the retina in the living mouse: in addition to the natural drift of the focal position, which can be evaluated by means of the previously introduced metrics, there are the rhythmic heartbeat, respiration, and unrhythmic muscle twitching and movement of the anaesthetized mouse. A dejittering algorithm is presented for the measurement data obtained under these circumstances. Using the additional information about the sample distribution in ISM, a method for reconstructing 3D from 2D image data is presented in the form of thick-slice unmixing. This method can further be used to suppress light generated outside the focal layer of 3D data stacks and is compared to selective-layer multi-view deconvolution. To reduce phototoxicity and save valuable measurement time for a 3D stack, the zLEAP method is presented, by which omitted z-planes are subsequently calculated and inserted.
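As an illustration of image-based focus metrics under Poisson noise, the sketch below evaluates two common metrics, normalized variance and Tenengrad, on a toy focus sweep. These particular metrics and the synthetic scene are assumptions, not necessarily those analysed in the thesis.

```python
import numpy as np
from scipy import ndimage

def normalized_variance(img):
    """Variance-based sharpness, normalised by the mean so that the
    score is less sensitive to overall intensity (relevant when the
    dominant noise is Poisson and scales with the signal)."""
    m = img.mean()
    return float(img.var() / m) if m > 0 else 0.0

def tenengrad(img):
    """Gradient-magnitude sharpness based on Sobel derivatives."""
    gx = ndimage.sobel(img.astype(float), axis=0)
    gy = ndimage.sobel(img.astype(float), axis=1)
    return float(np.mean(gx**2 + gy**2))

# Toy focus sweep: the same Poisson-noisy scene under growing blur.
rng = np.random.default_rng(2)
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 200.0
for sigma in (0.0, 1.0, 2.0, 4.0):
    frame = rng.poisson(ndimage.gaussian_filter(scene, sigma) + 5.0)
    print(f"sigma={sigma}: NV={normalized_variance(frame):.2f}, "
          f"Ten={tenengrad(frame):.1f}")
```

Both scores decrease monotonically with defocus blur in this toy example, which is the property a focus-tracking loop exploits; how robustly each metric behaves under noise and aberrations is exactly the kind of question the thesis addresses.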