Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals since the dissemination of the fourth volume in 2015, or they are new. The contributions of each part of this volume are ordered chronologically.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, together with their Matlab codes.
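To give a concrete feel for how proportional conflict redistribution works, below is a minimal Python sketch of the classical two-source PCR5 rule (our illustration, not the book's optimized Matlab or RUST code): each conflicting product m1(X)·m2(Y) with X ∩ Y = ∅ is redistributed back to X and Y proportionally to the masses that generated it, while non-empty intersections receive the usual conjunctive consensus.

```python
from itertools import product

def pcr5_combine(m1, m2):
    """Two-source PCR5 combination. m1, m2 map frozenset focal elements
    of the frame of discernment to masses summing to 1 (toy example)."""
    out = {}
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        inter, p = x & y, mx * my
        if inter:  # conjunctive consensus on non-empty intersections
            out[inter] = out.get(inter, 0.0) + p
        else:      # conflict: split p between x and y proportionally
            out[x] = out.get(x, 0.0) + mx * p / (mx + my)
            out[y] = out.get(y, 0.0) + my * p / (mx + my)
    return out

A, B = frozenset({'A'}), frozenset({'B'})
print(pcr5_combine({A: 0.6, B: 0.4}, {A: 0.1, B: 0.9}))
# ≈ {frozenset({'A'}): 0.284, frozenset({'B'}): 0.716}; masses still sum to 1
```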
Because many more applications of DSmT have emerged since the appearance of the fourth volume in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions relate to decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, as well as hybrid techniques mixing deep learning with belief functions.
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
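To make the scanning mechanism concrete, a back-of-the-envelope Python sketch follows (our illustration, not the paper's design data: frequency, effective waveguide width and LC permittivity range are all assumed values). In a dielectric-filled SIW the TE10-like phase constant is β = √(εr·k0² − (π/a)²), and the leaky-wave beam points near θ ≈ arcsin(β/k0) from broadside, so sweeping the LC permittivity with the DC bias steers the beam at fixed frequency.

```python
import numpy as np

c = 3e8
f = 28e9                  # assumed fixed operating frequency (Hz)
a = 3.5e-3                # assumed effective SIW width (m)
k0 = 2 * np.pi * f / c    # free-space wavenumber
kc = np.pi / a            # cutoff wavenumber of the TE10-like mode

for eps_r in np.linspace(2.5, 3.3, 5):      # assumed LC tuning range
    beta = np.sqrt(eps_r * k0**2 - kc**2)   # guided phase constant
    theta = np.degrees(np.arcsin(min(beta / k0, 1.0)))
    print(f"eps_r = {eps_r:.2f} -> beam at {theta:5.1f} deg from broadside")
```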
The use of scRNA-seq to characterise the tumour microenvironment of high grade serous ovarian carcinoma (HGSOC)
High Grade Serous Ovarian Carcinoma (HGSOC) is the most common type of ovarian cancer. Patients with this disease typically relapse following surgical debulking and initially effective chemotherapy. HGSOC has been intensely studied at the genomic and transcriptomic levels in efforts to advance knowledge of the biological mechanisms that drive the behaviour of this malignancy, and so that new treatment strategies may curb disease progression and relapse.
This body of work contributes an optimised protocol for generating robust 10X scRNA-seq libraries from fresh and preserved HGSOC tissue, aiming to dissect the cellular heterogeneity of the HGSOC tumour microenvironment (TME). Through unsupervised clustering analysis, it uncovers distinct cellular communities, elucidates transcriptomic signatures across HGSOC tumours, and augments bulk RNA-seq datasets via computational deconvolution, enhancing understanding of HGSOC's cellular complexity across an expanded clinical cohort.
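As a pointer to how such an analysis is typically structured, the following is a minimal Scanpy-style sketch of the unsupervised clustering step; the input path, QC thresholds and clustering resolution are placeholders, not the thesis's actual pipeline settings.

```python
import scanpy as sc

# Placeholder path to a 10X Cell Ranger output directory.
adata = sc.read_10x_mtx("filtered_feature_bc_matrix/")

# Basic QC: drop low-complexity barcodes and rarely detected genes.
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)

# Normalise library sizes, log-transform, keep highly variable genes.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)

# Embed and cluster: PCA -> kNN graph -> Leiden communities -> UMAP.
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=0.5)
sc.tl.umap(adata)
sc.pl.umap(adata, color="leiden")
```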
The sequencing and analysis of these HGSOC patient tumours revealed 11 distinct cell types, including two that are novel in this tumour type: ciliated epithelial cells and metallothionein-expressing T cells. These 11 cell types can be broadly categorised into three TME components (tumour, stroma and immune), as in previous tumour scRNA-seq studies. An additional analysis of these components examined copy number variation (CNV) in the profiled cells and revealed HGSOC tumour cells to be mostly aneuploid, while ciliated epithelial cells were diploid. A novel integrative subcluster analysis of HGSOC aneuploid tumour cells identified several apparently tumourigenic gene expression signatures, including a KRT17+ protease-inhibitory signature, an increased cellular metabolism signature, and an immune-reactive signature. Additionally, a ciliated cluster re-emerged within the HGSOC tumour cells, even though the diploid ciliated epithelial cells were not included in the integrative analysis.
Finally, the high granularity of HGSOC cellular composition revealed by scRNA-seq is utilised to perform deconvolution analyses that estimate cellular proportions and infer the TME of earlier bulk RNA-seq profiled HGSOC tumour samples. This investigation of earlier sequenced HGSOC samples revealed heterogeneity in the proportions of the TME compartments across the patient cohorts. Survival analysis using these inferred cellular proportions suggests that immune cell presence alone is not associated with survival, but that metastatic fibroblast burden in tumour samples is significantly associated with worse overall survival in HGSOC patients.
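A minimal sketch of how such inferred proportions can be tested against overall survival is shown below, using a Cox proportional-hazards model from the lifelines package; the data frame and its columns are hypothetical toy values, not the cohort data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical table: one row per bulk RNA-seq tumour, with deconvolved
# cell-type fractions plus follow-up time and a death event indicator.
df = pd.DataFrame({
    "metastatic_fibroblast_frac": [0.05, 0.22, 0.31, 0.10, 0.44, 0.18],
    "immune_frac":                [0.30, 0.12, 0.08, 0.25, 0.05, 0.20],
    "os_months":                  [60, 34, 18, 55, 12, 40],
    "death":                      [0, 1, 1, 0, 1, 1],
})

# Cox regression of overall survival on the inferred TME proportions
# (toy data, illustrative only).
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()
```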
In conclusion, the laboratory protocol, the scRNA-seq datasets produced, and their analysis and application presented in this work expand the collective knowledge base of HGSOC, specifically by characterising the cells of the HGSOC tumour microenvironment and the nuanced expression signatures of the malignant cells. The deconvolution approach showcases how scRNA-seq data can expand the clinical utility of earlier RNA-seq HGSOC datasets in a scalable way.
(b2023 to 2014) The UNBELIEVABLE similarities between the ideas of some people (2006-2016) and my ideas (2002-2008) in physics (quantum mechanics, cosmology), cognitive neuroscience, philosophy of mind, and philosophy (this manuscript would require a REVOLUTION in international academy environment!)
Gaining Insights into Denoising by Inpainting
The filling-in effect of diffusion processes is a powerful tool for various image analysis tasks such as inpainting-based compression and dense optic flow computation. For noisy data, an interesting side effect occurs: the interpolated data have higher confidence, since they average information from many noisy sources. This observation forms the basis of our denoising by inpainting (DbI) framework, which averages multiple inpainting results from different noisy subsets. Our goal is to obtain fundamental insights into key properties of DbI and its connections to existing methods. As in inpainting-based image compression, we choose homogeneous diffusion as a very simple inpainting operator that performs well for highly optimized data. We propose several strategies for choosing the locations of the selected pixels. Moreover, to further improve the global approximation quality, we also allow changing the function values of the noisy pixels. In contrast to traditional denoising methods that adapt the operator to the data, our approach adapts the data to the operator. Experimentally, we show that replacing homogeneous diffusion inpainting by biharmonic inpainting does not improve the reconstruction quality. This again emphasizes the importance of data adaptivity over operator adaptivity. On the foundational side, we establish deterministic and probabilistic theories with convergence estimates. In the non-adaptive 1-D case, we derive equivalence results between DbI on shifted regular grids and classical homogeneous diffusion filtering via an explicit relation between the density and the diffusion time.
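The core DbI idea can be sketched in a few lines of Python: inpaint several random subsets of the noisy pixels by homogeneous diffusion and average the results. The toy version below uses uniformly random masks and periodic boundaries for brevity, whereas the framework above additionally optimizes the pixel locations and their values.

```python
import numpy as np

def diffusion_inpaint(img, mask, iters=2000):
    """Homogeneous diffusion inpainting: fix pixels where mask is True,
    fill the rest by iterating discrete Laplace (Jacobi) smoothing.
    Periodic boundaries via np.roll, for brevity."""
    u = np.where(mask, img, img[mask].mean())
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, img, avg)  # keep the kept pixels fixed
    return u

def denoise_by_inpainting(noisy, n_subsets=8, keep=0.3, seed=0):
    """DbI sketch: average inpaintings from random noisy-pixel subsets."""
    rng = np.random.default_rng(seed)
    results = [diffusion_inpaint(noisy, rng.random(noisy.shape) < keep)
               for _ in range(n_subsets)]
    return np.mean(results, axis=0)

noisy = np.clip(0.5 + 0.1 * np.random.randn(64, 64), 0.0, 1.0)
print(denoise_by_inpainting(noisy).shape)  # (64, 64)
```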
Vision Transformer Advanced by Exploring Intrinsic Inductive Bias
Vision models have experienced a paradigm shift from convolutional neural networks (CNNs) to transformers. Compared with convolutions, transformers capture both short- and long-range dependencies, making them better suited to large-scale datasets. However, this adaptability comes at a cost: vision transformers are data-hungry and prone to overfitting with limited training data, restricting their applications in various vision tasks. This thesis aims to mitigate these shortcomings through advancements in architectural design and training methodologies, encompassing a comprehensive assessment involving various vision tasks. We investigate the data-hungry nature of transformers, which stems from their lack of inductive bias. Our proposed remedy incorporates convolution blocks alongside multi-head self-attention (MHSA) mechanisms within each transformer block. This integration injects an inductive bias into the architecture, forming the ViTAE model. Moreover, we present an innovative self-supervised learning approach, RegionCL, which bolsters the training process by emphasizing local information via region swapping. Finally, a ViTPose-G model, based on ViTAE-G, is introduced and demonstrates exceptional performance in pose estimation tasks across various datasets.
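The following PyTorch sketch conveys the general idea of running a convolutional branch in parallel with MHSA inside a transformer block to inject a locality bias; the dimensions and fusion scheme are illustrative assumptions, not the published ViTAE design.

```python
import torch
import torch.nn as nn

class ConvAttentionBlock(nn.Module):
    """Transformer block with a parallel convolutional branch, sketching
    the locality-bias idea (assumed shapes, not the ViTAE architecture)."""
    def __init__(self, dim=384, heads=6, grid=14):
        super().__init__()
        self.grid = grid
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1)  # local cues
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):            # x: (batch, grid*grid tokens, dim)
        b, n, d = x.shape
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        # Reshape tokens to a feature map so the conv sees 2-D locality.
        fmap = h.transpose(1, 2).reshape(b, d, self.grid, self.grid)
        conv_out = self.conv(fmap).flatten(2).transpose(1, 2)
        x = x + attn_out + conv_out  # fuse global (MHSA) + local (conv) paths
        return x + self.mlp(self.norm2(x))

block = ConvAttentionBlock()
tokens = torch.randn(2, 14 * 14, 384)
print(block(tokens).shape)  # torch.Size([2, 196, 384])
```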
Learning-based Wavelet-like Transforms For Fully Scalable and Accessible Image Compression
The goal of this thesis is to improve the existing wavelet transform with the aid of machine learning techniques, so as to enhance the coding efficiency of wavelet-based image compression frameworks such as JPEG 2000.
In this thesis, we first propose to augment the conventional base wavelet transform with two additional learned lifting steps -- a high-to-low step followed by a low-to-high step. The high-to-low step suppresses aliasing in the low-pass band by using the detail bands at the same resolution, while the low-to-high step aims to further remove redundancy from detail bands by using the corresponding low-pass band. These two additional steps reduce redundancy (notably aliasing information) amongst the wavelet subbands, and also improve the visual quality of reconstructed images at reduced resolutions.
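A minimal PyTorch sketch of this augmented lifting structure is given below, using a 1-D Haar-like base lifting and tiny convolutional networks as stand-ins for the learned steps (the network shapes are our assumptions, not the proposal-opacity networks described below). Because each learned step only subtracts a function of the other band, perfect reconstruction is preserved: the inverse adds the same network outputs back in reverse order.

```python
import torch
import torch.nn as nn

def haar_lift(x):
    """One level of a Haar-like lifting: split, predict, update (1-D)."""
    even, odd = x[..., ::2], x[..., 1::2]
    detail = odd - even             # predict step: high-pass band
    approx = even + 0.5 * detail    # update step: low-pass band
    return approx, detail

class LiftStep(nn.Module):
    """Tiny conv net standing in for a learned lifting step (assumed)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv1d(8, 1, 3, padding=1))
    def forward(self, band):
        return self.net(band)

high_to_low, low_to_high = LiftStep(), LiftStep()

x = torch.randn(1, 1, 64)               # (batch, channels, samples)
approx, detail = haar_lift(x)
approx = approx - high_to_low(detail)   # suppress aliasing in the low band
detail = detail - low_to_high(approx)   # strip residual detail redundancy
```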
To train these two networks in an end-to-end fashion, we develop a backward annealing approach to overcome the non-differentiability of the quantization and cost functions during back-propagation. Importantly, the two additional networks share a common architecture, named a proposal-opacity topology, which is inspired and guided by a specific theoretical argument related to geometric flow. This particular network topology is compact, with limited non-linearities, allowing a fully scalable system; one pair of trained network parameters is applied for all levels of decomposition and for all bit-rates of interest. By employing the additional lifting networks within the JPEG 2000 image coding standard, we can achieve up to 17.4% average BD bit-rate savings over a wide range of bit-rates, while retaining the quality and resolution scalability features of JPEG 2000.
Building upon the success of the high-to-low and low-to-high steps, we then study more broadly the extension of neural networks to all lifting steps that correspond to the base wavelet transform. The purpose of this comprehensive study is to understand the most effective way to develop learned wavelet-like transforms for highly scalable and accessible image compression. Specifically, we examine the impact of the number of learned lifting steps, the number of layers and channels in each learned lifting network, and the kernel support in each layer. To facilitate the study, we develop a generic training methodology that is simultaneously appropriate for all lifting structures considered. Experimental results ultimately suggest that, to improve the existing wavelet transform, it is more profitable to augment a larger wavelet transform with more diverse high-to-low and low-to-high steps than to develop deep, fully learned lifting structures.
Gabriel Vacariu (c2023 to 2014) The UNBELIEVABLE similarities between the ideas of some people (2006-2016) and my ideas (2002-2008) in physics (quantum mechanics, cosmology), cognitive neuroscience, philosophy of mind, and philosophy
Unbelievably similar ideas to my ideas, published long before…
Breaking the Curse of Dimensionality in Deep Neural Networks by Learning Invariant Representations
Artificial intelligence, particularly the subfield of machine learning, has seen a paradigm shift towards data-driven models that learn from and adapt to data. This has resulted in unprecedented advancements in various domains such as natural language processing and computer vision, largely attributed to deep learning, a special class of machine learning models. Deep learning arguably surpasses traditional approaches by learning the relevant features from raw data through a series of computational layers.

This thesis explores the theoretical foundations of deep learning by studying the relationship between the architecture of these models and the inherent structures found within the data they process. In particular, we ask: what drives the efficacy of deep learning algorithms and allows them to beat the so-called curse of dimensionality, i.e., the difficulty of learning generic functions in high dimensions due to the exponentially increasing need for data points as dimensionality grows? Is it their ability to learn relevant representations of the data by exploiting their structure? How do different architectures exploit different data structures? To address these questions, we push forward the idea that the structure of the data can be effectively characterized by its invariances, i.e., aspects that are irrelevant for the task at hand.
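The curse of dimensionality itself is easy to demonstrate numerically: with a fixed sampling budget, nearest-neighbour distances in the unit hypercube grow rapidly with dimension, so any method relying on local interpolation alone needs exponentially many samples. The toy experiment below is our illustration, not taken from the thesis.

```python
import numpy as np

# Fixed budget of n samples in [0,1]^d: watch the distance from a random
# query point to its nearest neighbour blow up as d increases.
rng = np.random.default_rng(0)
n = 1000
for d in (1, 2, 5, 10, 50, 100):
    pts = rng.random((n, d))
    q = rng.random(d)
    dists = np.linalg.norm(pts - q, axis=1)
    print(f"d = {d:3d}: nearest-neighbour distance ~ {dists.min():.3f}")
```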
Our methodology takes an empirical approach to deep learning, combining experimental studies with physics-inspired toy models. These simplified models allow us to investigate and interpret the complex behaviors we observe in deep learning systems, offering insights into their inner workings, with the far-reaching goal of bridging the gap between theory and practice.
Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services
This reprint aims to collect state-of-the-art research contributions that address challenges in the design, dimensioning and optimization of emerging 5G networks. The design, dimensioning and optimization of communication network resources and services have always been an inseparable part of telecom network development. These networks must convey a large volume of traffic, serving traffic streams with highly differentiated requirements in terms of bit-rate and service time, as well as required quality-of-service and quality-of-experience parameters. Such a communication infrastructure presents many important challenges, such as the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, low-layer network design, network management and security issues, and new technologies in general, which are discussed in this book.
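As a concrete flavour of such dimensioning problems, the classic Erlang B formula gives the blocking probability of a loss system; the short Python sketch below (an illustrative textbook example, not taken from the book) uses the standard recurrence B(n) = a·B(n−1)/(n + a·B(n−1)) to size a link for a 1% blocking target.

```python
def erlang_b(servers: int, traffic: float) -> float:
    """Blocking probability for `traffic` Erlangs offered to `servers`
    channels, via the numerically stable recurrence
    B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = traffic * b / (n + traffic * b)
    return b

# Toy dimensioning question: how many channels keep blocking below 1%
# for 40 Erlangs of offered traffic?
n = 1
while erlang_b(n, 40.0) > 0.01:
    n += 1
print(n)  # smallest channel count meeting the 1% target
```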