
    Discovering novel cancer bio-markers in acquired lapatinib resistance using Bayesian methods.

    Signal transduction pathways (STPs) are commonly hijacked by many cancers for their growth and malignancy, but demystifying their underlying mechanisms is difficult. Here, we developed a fully Bayesian methodology for discovering novel driver biomarkers in aberrant STPs from high-throughput gene expression (GE) data. The project, 'PathTurbEr' (Pathway Perturbation Driver), uses a GE dataset derived from lapatinib (an EGFR/HER2 dual inhibitor) sensitive and resistant samples of the SKBR3 breast cancer cell line. Differential expression analysis revealed 512 differentially expressed genes (DEGs), and pathway enrichment of these DEGs revealed 13 highly perturbed signalling pathways in lapatinib resistance, including the PI3K-AKT, Chemokine, Hippo and TGF-β signalling pathways. Next, the aberration in the TGF-β STP was modelled as a causal Bayesian network (BN) using three MCMC sampling methods: the Neighbourhood sampler (NS), the Hit-and-Run (HAR) sampler and the Metropolis-Hastings (MH) sampler, of which NS and HAR potentially yield robust inference, with a lower chance of becoming stuck at local optima and faster convergence than other state-of-the-art methods. Next, we examined the structural features of the optimal BN as a statistical process generating the global structure, using the p1-model, a special class of Exponential Random Graph Models (ERGMs), with MCMC methods for hyper-parameter sampling. This step enabled the identification of the key drivers of aberration within the perturbed BN structure of the STP, yielding 34, 34 and 23 perturbation driver genes out of the 80 constituent genes of the three perturbed TGF-β signalling STP models inferred by the NS, HAR and MH sampling methods, respectively. Functional-relevance and disease-relevance analyses suggested their significant associations with breast cancer progression and resistance.
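    The structure-sampling step can be illustrated with a minimal sketch. The code below is not PathTurbEr's NS or HAR sampler; it is a generic Metropolis-Hastings sampler over DAG structures with an illustrative linear-Gaussian BIC score, included only to show how MCMC explores Bayesian network structures.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_dag(adj):
    """Check acyclicity by repeatedly pruning nodes with no incoming edges."""
    active = np.ones(len(adj), dtype=bool)
    while True:
        # nodes with no incoming edge from any still-active node
        removable = active & ~adj[active].any(axis=0)
        if not removable.any():
            break
        active &= ~removable
    return not active.any()

def gaussian_bic(adj, X):
    """BIC-style score of a linear-Gaussian network: regress each node on its parents."""
    n, d = X.shape
    total = 0.0
    for j in range(d):
        parents = np.flatnonzero(adj[:, j])
        A = np.column_stack([np.ones(n)] + [X[:, p] for p in parents])
        beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ beta
        total += -0.5 * n * np.log(resid.var() + 1e-12) \
                 - 0.5 * np.log(n) * (len(parents) + 1)
    return total

def mh_structure_sample(X, steps=2000):
    """Metropolis-Hastings over DAG structures via single-edge toggles."""
    d = X.shape[1]
    adj = np.zeros((d, d), dtype=bool)
    cur = gaussian_bic(adj, X)
    best, best_score = adj.copy(), cur
    for _ in range(steps):
        i, j = rng.choice(d, size=2, replace=False)
        prop = adj.copy()
        prop[i, j] = ~prop[i, j]
        if not is_dag(prop):
            continue  # reject cyclic proposals outright
        new = gaussian_bic(prop, X)
        if np.log(rng.random()) < new - cur:  # MH acceptance rule
            adj, cur = prop, new
            if cur > best_score:
                best, best_score = adj.copy(), cur
    return best

# toy data with a strong 0 -> 1 dependency and an independent node 2
x0 = rng.normal(size=500)
x1 = 2.0 * x0 + rng.normal(scale=0.1, size=500)
x2 = rng.normal(size=500)
best = mh_structure_sample(np.column_stack([x0, x1, x2]))
```

    The NS and HAR samplers used in the paper differ mainly in their proposal distributions, which is what gives them better mixing; the accept/reject skeleton is the same.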

    BeyondPixels: A Comprehensive Review of the Evolution of Neural Radiance Fields

    Neural rendering combines ideas from classical computer graphics and machine learning to synthesize images from real-world observations. NeRF, short for Neural Radiance Fields, is a recent innovation that uses AI algorithms to create 3D objects from 2D images. By leveraging an interpolation approach, NeRF can produce new 3D reconstructed views of complicated scenes. Rather than directly reconstructing the whole 3D scene geometry, NeRF generates a volumetric representation called a "radiance field", which can produce a color and density for every point within the relevant 3D space. The broad appeal and prominence of NeRF make it imperative to examine the existing research on the topic comprehensively. While previous surveys on 3D rendering have primarily focused on traditional computer-vision-based or deep-learning-based approaches, only a handful of them discuss the potential of NeRF. However, such surveys have predominantly focused on NeRF's early contributions and have not explored its full potential. NeRF is a relatively new technique continuously being investigated for its capabilities and limitations. This survey reviews recent advances in NeRF and categorizes them according to their architectural designs, especially in the field of novel view synthesis.
    Comment: 22 pages, 1 figure, 5 tables
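    The "radiance field" idea reduces, per camera ray, to a simple quadrature: sample densities and colors along the ray and alpha-composite them. A minimal numpy sketch of this standard volume-rendering step (not tied to any particular NeRF implementation):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Discrete NeRF-style volume rendering along one ray.

    sigmas: (T,) densities at the samples; colors: (T, 3) RGB at the samples;
    deltas: (T,) distances between consecutive samples.
    alpha_i = 1 - exp(-sigma_i * delta_i); each sample's weight is its alpha
    times the transmittance T_i accumulated before it.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # T_i
    weights = trans * alphas
    rgb = weights @ colors  # expected color along the ray
    return rgb, weights

# a single opaque red sample between two empty ones
sig = np.array([0.0, 50.0, 0.0])
col = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dl = np.full(3, 0.1)
rgb, w = composite_ray(sig, col, dl)
```

    The weights sum to at most 1 (the remainder is the light that passes through the whole ray), which is why the same weights can also be reused to composite depth or other per-sample quantities.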

    A comprehensive integrated drug similarity resource for in-silico drug repositioning and beyond.

    Drug similarity studies are driven by the hypothesis that similar drugs should display similar therapeutic actions and thus can potentially treat a similar constellation of diseases. Drug-drug similarity has been derived from a variety of direct and indirect sources of evidence and has frequently shown high predictive power in discovering validated repositioning candidates, as well as in other in-silico drug development applications. Yet, existing resources either have limited coverage or rely on an individual source of evidence, overlooking the wealth and diversity of drug-related data sources. Hence, there has been an unmet need for a comprehensive resource integrating diverse drug-related information to derive multi-evidenced drug-drug similarities. We addressed this resource gap by compiling heterogeneous information for an exhaustive set of small-molecule drugs (a total of 10 367 in the current version) and systematically integrated multiple sources of evidence to derive a multi-modal drug-drug similarity network. The resulting database, 'DrugSimDB', currently includes 238 635 drug pairs with significant aggregated similarity, complemented by an interactive, user-friendly web interface (http://vafaeelab.com/drugSimDB.html), which not only enables ease of access, search, filtration and export, but also provides a variety of complementary information on queried drugs and interactions. The integration approach can flexibly incorporate further drug information into the similarity network, providing an easily extendable platform. The database compilation and construction source code is well documented and semi-automated for any-time upgrades to account for new drugs and up-to-date drug information.
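    The multi-evidence integration can be sketched as follows. This is not DrugSimDB's actual aggregation scheme; it is a minimal illustration in which each evidence source contributes a similarity matrix in [0, 1], NaN marks pairs a source does not cover, and each pair is averaged over the sources that do cover it:

```python
import numpy as np

def aggregate_similarities(mats):
    """Mean over evidence sources, skipping NaN (pair not covered by a source)."""
    stack = np.stack(mats)            # (k_sources, n_drugs, n_drugs)
    agg = np.nanmean(stack, axis=0)
    return np.nan_to_num(agg)         # pairs with no evidence at all -> 0

def similarity_edges(agg, threshold=0.5):
    """Keep undirected pairs whose aggregated similarity clears a threshold."""
    i, j = np.where(np.triu(agg, k=1) >= threshold)
    return list(zip(i.tolist(), j.tolist()))

# two hypothetical evidence sources over two drugs; source b lacks the pair
a = np.array([[1.0, 0.8], [0.8, 1.0]])
b = np.array([[1.0, np.nan], [np.nan, 1.0]])
agg = aggregate_similarities([a, b])
edges = similarity_edges(agg, threshold=0.5)
```

    Handling missing coverage per source, rather than imputing zeros, matters here: a zero would silently drag down pairs that one evidence type simply cannot score.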

    Machine Learning Approaches to Identify Patient Comorbidities and Symptoms That Increased Risk of Mortality in COVID-19

    Providing appropriate care for people suffering from COVID-19, the disease caused by the pandemic SARS-CoV-2 virus, is a significant global challenge. Many individuals who become infected have pre-existing conditions that may interact with COVID-19 to increase symptom severity and mortality risk. COVID-19 patient comorbidities are likely to be informative regarding the individual risk of severe illness and mortality. Determining the degree to which comorbidities are associated with severe symptoms and mortality would thus greatly assist in COVID-19 care planning and provision. To assess this, we performed a meta-analysis of the published global literature and a machine learning predictive analysis using an aggregated COVID-19 global dataset. Our meta-analysis identified chronic obstructive pulmonary disease (COPD), cerebrovascular disease (CEVD), cardiovascular disease (CVD), type 2 diabetes, malignancy, and hypertension as the conditions most significantly associated with COVID-19 severity in the current published literature. Machine learning classification using novel aggregated cohort data similarly found COPD, CVD, chronic kidney disease (CKD), type 2 diabetes, malignancy, and hypertension, as well as asthma, to be the most significant features for classifying those deceased versus those who survived COVID-19. While age and gender were the most significant predictors of mortality, among symptom–comorbidity combinations it was Pneumonia–Hypertension, Pneumonia–Diabetes, and Acute Respiratory Distress Syndrome (ARDS)–Hypertension that showed the most significant associations with COVID-19 mortality. These results highlight the patient cohorts most likely to be at risk of COVID-19-related severe morbidity and mortality, with implications for the prioritization of hospital resources.
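    The association measures pooled in such a meta-analysis are typically per-study odds ratios with confidence intervals. As a minimal sketch of the per-study computation (the counts below are hypothetical, not from the paper):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = cases with comorbidity, b = non-cases with comorbidity,
    c = cases without it,      d = non-cases without it."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical study: 10 severe / 20 non-severe with hypertension,
# 5 severe / 40 non-severe without it
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
```

    An interval whose lower bound stays above 1 is what "significantly associated with severity" means operationally; pooling across studies then weights each log-OR by its inverse variance.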

    Deep CNN-LSTM With Self-Attention Model for Human Activity Recognition Using Wearable Sensor

    Human Activity Recognition (HAR) systems are devised for continuously observing human behavior, primarily in the fields of environmental compatibility, sports injury detection, senior care, rehabilitation, entertainment, and surveillance in intelligent home settings. Inertial sensors, e.g., accelerometers, linear acceleration sensors, and gyroscopes, are frequently employed for this purpose and are now compacted into smart devices, e.g., smartphones. Since the use of smartphones is so widespread nowadays, activity data acquisition for HAR systems is a pressing need. In this article, we have conducted smartphone sensor-based raw data collection, namely H-Activity, using an Android-OS-based application for the accelerometer, gyroscope, and linear acceleration. Furthermore, a hybrid deep learning model is proposed, coupling a convolutional neural network and a long short-term memory network (CNN-LSTM), empowered by a self-attention algorithm to enhance the predictive capabilities of the system. In addition to our collected dataset (H-Activity), the model has been evaluated on benchmark datasets, e.g., MHEALTH and UCI-HAR, to demonstrate the comparative performance of our model. When compared to other models, the proposed model achieves an accuracy of 99.93% using our collected H-Activity data, and 98.76% and 93.11% using data from the MHEALTH and UCI-HAR databases respectively, indicating its efficacy in human activity recognition. We hope that our developed model could be applicable in clinical settings and that the collected data could be useful for further research.
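    The self-attention component can be sketched in isolation. The function below is plain scaled dot-product self-attention over per-timestep features, with the learned query/key/value projections omitted for brevity; it illustrates the mechanism, not the paper's exact layer:

```python
import numpy as np

def self_attention(H):
    """Scaled dot-product self-attention over a sequence H of shape (T, d),
    e.g. the per-window features emitted by a CNN-LSTM backbone.

    Each timestep attends to every other: attention weights are a softmax of
    pairwise dot products, and the output is the weighted mix of the inputs.
    """
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)                 # (T, T) similarity scores
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)             # rows sum to 1
    return A @ H, A

# four timesteps of 3-dimensional features
H = np.arange(12, dtype=float).reshape(4, 3)
out, A = self_attention(H)
```

    In a HAR pipeline this lets the classifier weight the sensor windows that are most discriminative for an activity, rather than relying only on the LSTM's final hidden state.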

    Transmission policy for wireless sensor networks

    Wireless sensor networks (WSNs) are commonly recognized as the technological cornerstone of ubiquitous computing, where future computing devices will be invisibly embedded in the world around us and accessed through simple and intelligent interfaces. From an applications perspective, WSNs can be classified as data-gathering and query-based networks: in the former, sensors send their data proactively, either periodically or on detecting some event, while in the latter, sensors report only in response to an explicit user request. Irrespective of the data model, deployment of a WSN poses a number of technical challenges that stem primarily from i) the constraints imposed by simple sensor devices, e.g., limited power, transmission range, and bandwidth; ii) node heterogeneity in terms of data rate, energy storage, deployment density, etc.; and iii) scalability, since the network size and the number of nodes deployed can vary widely. Moreover, WSNs exhibit a highly unbalanced traffic flow due to the many-to-one communication paradigm, where a large number of sensors communicate concomitantly with a central sink, which reduces network lifetime drastically. This thesis aims to address these issues by developing transmission policies for sensor data that regulate the communication ranges and the associated duty cycles that nodes use over their lifetime. Such a policy attains efficient and balanced energy usage among sensors despite the inherently unbalanced traffic flow of WSNs and thereby significantly extends the network lifetime, a key deployment issue. We start by designing transmission policies for data-gathering homogeneous sensor networks, where the policies differ in their degree of flexibility in using variable transmission ranges and associated duty cycles among nodes. We then extend our formulations to multi-tier heterogeneous sensor networks.
    We develop a traffic model considering collaboration among multiple services deploying multiple classes of heterogeneous sensors. Finally, we propose a distributed query optimization and processing framework for such collaborative heterogeneous sensor networks. Rigorous analytical and experimental analyses confirm the efficacy of the proposed strategies over notable works in the related area.
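    The hotspot problem motivating these policies can be illustrated numerically. Under the common range**alpha radio-cost assumption, neither fixed policy balances energy in a linear many-to-one network, which is why mixing transmission ranges and duty cycles helps (a toy model, not the thesis's actual formulation):

```python
def per_node_energy(n, alpha=2.0):
    """Per-round transmit energy in a line of n unit-spaced nodes
    (node 1 nearest the sink), under two fixed policies:
    - multi-hop: every node forwards to its nearest neighbour (range 1),
      so node i carries its own packet plus those of the n - i nodes behind it;
    - direct: every node transmits its own packet straight to the sink
      with range i.
    Transmit cost per packet is modelled as range**alpha."""
    multi = [(n - i + 1) * 1**alpha for i in range(1, n + 1)]
    direct = [i**alpha for i in range(1, n + 1)]
    return multi, direct

multi, direct = per_node_energy(10)
```

    Pure multi-hop drains the node next to the sink n times faster than the farthest node, while pure direct transmission reverses the imbalance; a policy that assigns each node a mix of ranges and duty cycles can equalize drain and so extend the time until the first node dies.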