
    Multi-omics approaches to studying gastrointestinal microbiome in the context of precision medicine and machine learning

    The human gastrointestinal (gut) microbiome plays a critical role in maintaining host health and has been increasingly recognized as an important factor in precision medicine. High-throughput sequencing technologies have revolutionized -omics data generation, facilitating the characterization of the human gut microbiome with exceptional resolution. The analysis of various -omics data, including metatranscriptomics, metagenomics, glycomics, and metabolomics, holds potential for personalized therapies by revealing information about functional genes, microbial composition, glycans, and metabolites. This multi-omics approach has not only provided insights into the role of the gut microbiome in various diseases but has also facilitated the identification of microbial biomarkers for diagnosis, prognosis, and treatment. Machine learning algorithms have emerged as powerful tools for extracting meaningful insights from complex datasets, and more recently have been applied to metagenomics data to efficiently identify microbial signatures, predict disease states, and determine potential therapeutic targets. Despite these rapid advancements, several challenges remain, such as key knowledge gaps, algorithm selection, and bioinformatics software parametrization. In this mini-review, our primary focus is metagenomics, while recognizing that other -omics can enhance our understanding of the functional diversity of organisms and how they interact with the host. We aim to explore the current intersection of multi-omics, precision medicine, and machine learning in advancing our understanding of the gut microbiome. A multidisciplinary approach holds promise for improving patient outcomes in the era of precision medicine, as we unravel the intricate interactions between the microbiome and human health.
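As a concrete illustration of the kind of machine-learning step described above, the sketch below classifies gut-microbiome samples by nearest centroid after a centred log-ratio (CLR) transform, a common preprocessing step for compositional abundance data. All taxa, abundance values, and labels are invented for illustration; this is not the pipeline of any specific study.

```python
import math

# Hypothetical relative-abundance profiles (rows: samples, columns: taxa).
# All numbers are invented for illustration.
healthy = [[0.60, 0.30, 0.10], [0.55, 0.35, 0.10], [0.65, 0.25, 0.10]]
disease = [[0.20, 0.30, 0.50], [0.25, 0.25, 0.50], [0.15, 0.35, 0.50]]

def clr(x, eps=1e-9):
    """Centred log-ratio transform, standard for compositional microbiome data."""
    logs = [math.log(v + eps) for v in x]
    mean = sum(logs) / len(logs)
    return [v - mean for v in logs]

def centroid(samples):
    """Mean CLR profile of a group of samples."""
    transformed = [clr(s) for s in samples]
    return [sum(col) / len(col) for col in zip(*transformed)]

centroids = {"healthy": centroid(healthy), "disease": centroid(disease)}

def predict(sample):
    """Assign the class whose CLR centroid is nearest in Euclidean distance."""
    z = clr(sample)
    return min(centroids, key=lambda label: math.dist(z, centroids[label]))
```

Real metagenomics classifiers operate on hundreds of taxa with regularised models; the nearest-centroid rule simply makes the signature-matching idea explicit.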

    Measurement of the H → γγ and H → ZZ∗ → 4ℓ cross-sections in pp collisions at √s = 13.6 TeV with the ATLAS detector

    The inclusive Higgs boson production cross-section is measured in the di-photon and the ZZ∗ → 4ℓ decay channels using 31.4 and 29.0 fb−1 of pp collision data respectively, collected with the ATLAS detector at a centre-of-mass energy of √s = 13.6 TeV. To reduce the model dependence, the measurement in each channel is restricted to a particle-level phase space that closely matches the channel's detector-level kinematic selection, and it is corrected for detector effects. These measured fiducial cross-sections are σ_fid,γγ = 76 +14/−13 fb and σ_fid,4ℓ = 2.80 ± 0.74 fb, in agreement with the corresponding Standard Model predictions of 67.6 ± 3.7 fb and 3.67 ± 0.19 fb. Assuming Standard Model acceptances and branching fractions for the two channels, the fiducial measurements are extrapolated to the full phase space, yielding total cross-sections of σ(pp → H) = 67 +12/−11 pb and 46 ± 12 pb at 13.6 TeV from the di-photon and ZZ∗ → 4ℓ measurements respectively. The two measurements are combined into a total cross-section measurement of σ(pp → H) = 58.2 ± 8.7 pb, to be compared with the Standard Model prediction of σ(pp → H)_SM = 59.9 ± 2.6 pb.
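As a rough sanity check on the quoted combination, a naive inverse-variance weighted average of the two per-channel total cross-sections (symmetrising the di-photon uncertainty to ±11.5 pb, and ignoring the correlations and asymmetric likelihoods that the full ATLAS combination accounts for) lands close to, but not exactly at, the published 58.2 ± 8.7 pb:

```python
import math

def ivw_combine(measurements):
    """Inverse-variance weighted average of independent (value, sigma) pairs."""
    weights = [1.0 / sigma ** 2 for _, sigma in measurements]
    mean = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    return mean, math.sqrt(1.0 / sum(weights))

# Di-photon: 67 +12/-11 pb (symmetrised to +/-11.5 pb); ZZ* -> 4l: 46 +/- 12 pb.
combined, sigma = ivw_combine([(67.0, 11.5), (46.0, 12.0)])
```

The naive average gives roughly 56.9 ± 8.3 pb; the residual difference from the published value reflects the full likelihood treatment used in the actual combination.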

    Off the Radar: Uncertainty-Aware Radar Place Recognition with Introspective Querying and Map Maintenance

    Localisation with Frequency-Modulated Continuous-Wave (FMCW) radar has gained increasing interest due to its inherent resistance to challenging environments. However, complex artefacts of the radar measurement process require appropriate uncertainty estimation to ensure the safe and reliable application of this promising sensor modality. In this work, we propose a multi-session map management system which constructs the best maps for further localisation based on learned variance properties in an embedding space. Using the same variance properties, we also propose a new way to introspectively reject localisation queries that are likely to be incorrect. For this, we apply robust noise-aware metric learning, which both leverages the short-timescale variability of radar data along a driven path (for data augmentation) and predicts the downstream uncertainty in metric-space-based place recognition. We demonstrate the effectiveness of our method in extensive cross-validated tests on the Oxford Radar RobotCar and MulRan datasets. In these, we outperform the current state-of-the-art in radar place recognition and other uncertainty-aware methods when using only single nearest-neighbour queries. We also show consistent performance increases when rejecting queries based on uncertainty in a difficult test environment, which we did not observe for a competing uncertainty-aware place recognition system. Comment: 8 pages, 6 figures.
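A minimal sketch of the introspective-query idea, assuming a hypothetical embedding database with per-entry learned variances: queries whose predicted variance is too high are rejected outright, and high-variance map entries are excluded from matching. Place names, embeddings, and thresholds are invented; the real system operates on learned radar descriptors.

```python
import math

# Hypothetical map database: (place id, embedding, learned variance).
map_db = [
    ("place_A", (0.0, 0.0), 0.05),
    ("place_B", (1.0, 0.0), 0.05),
    ("place_C", (0.0, 1.0), 0.80),  # high predicted variance: unreliable entry
]

def query(embedding, query_var, reject_var=0.5):
    """Single nearest-neighbour lookup with introspective rejection:
    refuse to localise when the query's predicted variance is too high,
    and skip map entries whose own variance exceeds the threshold."""
    if query_var > reject_var:
        return None  # likely-incorrect query: reject instead of guessing
    candidates = [(name, math.dist(embedding, emb))
                  for name, emb, var in map_db if var <= reject_var]
    return min(candidates, key=lambda c: c[1])[0]
```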

    SSC-RS: Elevate LiDAR Semantic Scene Completion with Representation Separation and BEV Fusion

    Semantic scene completion (SSC) jointly predicts the semantics and geometry of the entire 3D scene, which plays an essential role in 3D scene understanding for autonomous driving systems. SSC has achieved rapid progress with the help of semantic context in segmentation. However, how to effectively exploit the relationship between the semantic context in semantic segmentation and the geometric structure in scene completion remains under-explored. In this paper, we propose to solve outdoor SSC from the perspective of representation separation and BEV fusion. Specifically, we present a network, named SSC-RS, which uses separate branches with deep supervision to explicitly disentangle the learning of the semantic and geometric representations. In addition, a BEV fusion network equipped with the proposed Adaptive Representation Fusion (ARF) module aggregates the multi-scale features effectively and efficiently. Due to its low computational burden and powerful representation ability, our model generalises well while running in real time. Extensive experiments on SemanticKITTI demonstrate that SSC-RS achieves state-of-the-art performance. Comment: 8 pages, 5 figures, IROS202
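The role of an adaptive fusion module, weighting semantic against geometric features per channel before combining them, can be caricatured with a simple sigmoid gate. This is a deliberately simplified, non-convolutional stand-in, not the paper's ARF architecture; the gate form and parameters are assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def adaptive_fuse(semantic, geometric, gate_weights, gate_bias):
    """Per-channel gated fusion: fused = g * semantic + (1 - g) * geometric,
    where the gate g is computed from the features themselves. A toy,
    non-convolutional stand-in for an adaptive fusion module."""
    fused = []
    for sem, geo, w in zip(semantic, geometric, gate_weights):
        gate = sigmoid(w * (sem - geo) + gate_bias)
        fused.append(gate * sem + (1.0 - gate) * geo)
    return fused
```

Because the gate is a convex weight, each fused channel always lies between its semantic and geometric inputs, which keeps the fusion numerically stable.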

    An advanced deep learning models-based plant disease detection: A review of recent research

    Plants play a crucial role in supplying food globally. Various environmental factors lead to plant diseases, which result in significant production losses. However, manual detection of plant diseases is a time-consuming and error-prone process and can be an unreliable way to identify and prevent the spread of plant diseases. Adopting advanced technologies such as Machine Learning (ML) and Deep Learning (DL) can help to overcome these challenges by enabling early identification of plant diseases. In this paper, the recent advancements in the use of ML and DL techniques for the identification of plant diseases are explored. The research focuses on publications between 2015 and 2022, and the experiments discussed in this study demonstrate the effectiveness of these techniques in improving the accuracy and efficiency of plant disease detection. This study also addresses the challenges and limitations associated with using ML and DL for plant disease identification, such as issues with data availability, imaging quality, and the differentiation between healthy and diseased plants. The research provides valuable insights for plant disease detection researchers, practitioners, and industry professionals by offering solutions to these challenges and limitations, providing a comprehensive understanding of the current state of research in this field, highlighting the benefits and limitations of these methods, and proposing potential solutions to overcome the challenges of their implementation.

    The State of the Art in Deep Learning Applications, Challenges, and Future Prospects: A Comprehensive Review of Flood Forecasting and Management

    Floods are a devastating natural calamity that may seriously harm both infrastructure and people. Accurate flood forecasts and control are essential to lessen these effects and safeguard populations. By utilizing its capacity to handle massive amounts of data and provide accurate forecasts, deep learning has emerged as a potent tool for improving flood prediction and control. The current state of deep learning applications in flood forecasting and management is thoroughly reviewed in this work. The review discusses a variety of subjects, such as the data sources utilized, the deep learning models used, and the assessment measures adopted to judge their efficacy. It assesses current approaches critically and points out their advantages and disadvantages. The article also examines challenges with data accessibility, the interpretability of deep learning models, and ethical considerations in flood prediction. The review also describes potential directions for deep-learning research to enhance flood predictions and control. Incorporating uncertainty estimates into forecasts, integrating many data sources, developing hybrid models that mix deep learning with other methodologies, and enhancing the interpretability of deep learning models are a few of these. These research goals can help deep learning models become more precise and effective, which will result in better flood control plans and forecasts. Overall, this review is a useful resource for academics and professionals working on the topic of flood forecasting and management. By reviewing the current state of the art, emphasizing difficulties, and outlining potential areas for future study, it lays a solid basis. Communities may better prepare for and lessen the destructive effects of floods by implementing cutting-edge deep learning algorithms, thereby protecting people and infrastructure.

    MD-HIT: Machine learning for materials property prediction with dataset redundancy control

    Materials datasets usually contain many redundant (highly similar) materials owing to the tinkering material design practice over the history of materials research. For example, the Materials Project database has many perovskite cubic structure materials similar to SrTiO3. This sample redundancy within a dataset undermines random-split evaluation of machine learning models, so that ML models tend to achieve over-estimated predictive performance, which is misleading for the materials science community. This issue is well known in the field of bioinformatics for protein function prediction, in which a redundancy reduction procedure (CD-HIT) is routinely applied to reduce sample redundancy by ensuring that no pair of samples has a sequence similarity greater than a given threshold. This paper surveys the overestimated ML performance reported in the literature for both composition-based and structure-based material property prediction. We then propose a materials dataset redundancy reduction algorithm called MD-HIT and evaluate it with several composition-based and structure-based distance thresholds for reducing dataset sample redundancy. We show that with this control, the predicted performance tends to better reflect the models' true prediction capability. Our MD-HIT code can be freely accessed at https://github.com/usccolumbia/MD-HIT. Comment: 12 pages.
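The CD-HIT-style greedy filtering that MD-HIT adapts can be sketched in a few lines: keep a sample as a representative only if it is farther than a threshold from every representative already kept. The scalar descriptor and absolute-difference distance below are toy stand-ins for the paper's composition- and structure-based distances.

```python
def reduce_redundancy(samples, distance, threshold):
    """Greedy CD-HIT-style filtering: keep a sample as a representative only
    if it is farther than `threshold` from every representative kept so far."""
    representatives = []
    for sample in samples:
        if all(distance(sample, rep) > threshold for rep in representatives):
            representatives.append(sample)
    return representatives

# Toy 1-D descriptor with absolute difference as the "materials distance".
data = [0.10, 0.11, 0.12, 0.50, 0.52, 0.90]
kept = reduce_redundancy(data, lambda a, b: abs(a - b), threshold=0.05)
```

Here `kept` retains one representative per cluster of near-duplicates, which is exactly what prevents near-identical materials from straddling a random train-test split.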

    Distinct genomic routes underlie transitions to specialised symbiotic lifestyles in deep-sea annelid worms.

    Bacterial symbioses allow annelids to colonise extreme ecological niches, such as hydrothermal vents and whale falls. Yet, the genetic principles sustaining these symbioses remain unclear. Here, we show that different genomic adaptations underpin the symbioses of phylogenetically related annelids with distinct nutritional strategies. Genome compaction and extensive gene losses distinguish the heterotrophic symbiosis of the bone-eating worm Osedax frankpressi from the chemoautotrophic symbiosis of deep-sea Vestimentifera. Osedax's endosymbionts complement many of the host's metabolic deficiencies, including the loss of pathways to recycle nitrogen and synthesise some amino acids. Osedax's endosymbionts possess the glyoxylate cycle, which could allow more efficient catabolism of bone-derived nutrients and the production of carbohydrates from fatty acids. Unlike in most Vestimentifera, innate immunity genes are reduced in O. frankpressi, which, however, has an expansion of matrix metalloproteases to digest collagen. Our study supports the view that distinct nutritional interactions influence host genome evolution differently in highly specialised symbioses.

    Memory Effects, Multiple Time Scales and Local Stability in Langevin Models of the S&P500 Market Correlation

    The analysis of market correlations is crucial for optimal portfolio selection of correlated assets, but their memory effects have often been neglected. In this work, we analyse the mean market correlation of the S&P500, which corresponds to the main market mode in principal component analysis. We fit a generalised Langevin equation (GLE) to the data, whose memory kernel implies that there is a significant memory effect in the market correlation reaching back at least three trading weeks. The memory kernel improves the forecasting accuracy of the GLE compared to models without memory, and hence such a memory effect has to be taken into account for optimal portfolio selection to minimise risk or for predicting future correlations. Moreover, a Bayesian resilience estimation provides further evidence for non-Markovianity in the data and suggests the existence of a hidden slow time scale that operates on much slower timescales than the observed daily market data. Assuming that such a slow time scale exists, our work supports previous research on the existence of locally stable market states. Comment: 15 pages (excluding references and appendix).
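To make the memory-kernel idea concrete, the toy below integrates an overdamped, Euler-discretised GLE in which the drift at each step is a kernel-weighted sum over past values; a kernel with entries beyond lag zero encodes memory, while a single-entry kernel collapses to a memoryless (Markovian) process. The kernel, time step, and noise level are illustrative, not fitted to S&P500 data.

```python
import random

def simulate_gle(x0, kernel, steps, dt=1.0, noise_std=0.0, seed=0):
    """Euler-discretised, overdamped toy GLE:
    x(t+dt) = x(t) - dt * (dt * sum_i kernel[i] * x(t - i*dt)) + noise.
    kernel entries beyond index 0 couple the drift to past values (memory)."""
    rng = random.Random(seed)
    xs = [x0]
    for t in range(steps):
        # Quadrature of the memory integral over the available history.
        memory = dt * sum(k * xs[t - i] for i, k in enumerate(kernel) if i <= t)
        xs.append(xs[-1] - memory * dt + rng.gauss(0.0, noise_std))
    return xs

trajectory = simulate_gle(1.0, kernel=[0.5], steps=3)
```

With the single-entry kernel above the trajectory decays geometrically; a longer kernel makes each step depend on several past values, which is the property the fitted memory kernel exploits for forecasting.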