18 research outputs found

    Approximate kernel reconstruction for time-varying networks

    Most existing algorithms for modeling and analyzing molecular networks assume a static or time-invariant network topology. Such a view, however, does not capture the temporal evolution of the underlying biological process, as molecular networks are typically “re-wired” over time in response to cellular development and environmental changes. In our previous work, we formulated the inference of time-varying or dynamic networks as a tracking problem, where the target state is the ensemble of edges in the network. We used the Kalman filter to track the network topology over time. Unfortunately, the output of the Kalman filter does not reflect known properties of molecular networks, such as sparsity.
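
    As a minimal illustration of the tracking formulation above, the sketch below runs a Kalman filter over a vector of edge weights, assuming a random-walk state model and linear noisy observations; the model matrices and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kalman_track_edges(y_seq, H, Q, R, x0, P0):
    """Track a vector of network edge weights over time.

    Assumes x_t = x_{t-1} + w_t (random walk) and y_t = H x_t + v_t,
    with process/measurement noise covariances Q and R (hypothetical
    modeling choices, not the paper's exact formulation).
    """
    x, P = x0, P0
    estimates = []
    for y in y_seq:
        # Predict: a random walk leaves the mean unchanged, inflates covariance.
        P = P + Q
        # Update: standard Kalman gain and measurement correction.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (y - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return estimates
```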

    Robust Learning via Ensemble Density Propagation in Deep Neural Networks

    Learning in uncertain, noisy, or adversarial environments is a challenging task for deep neural networks (DNNs). We propose a new, theoretically grounded and efficient approach for robust learning that builds upon Bayesian estimation and variational inference. We formulate the problem of density propagation through layers of a DNN and solve it using an Ensemble Density Propagation (EnDP) scheme. The EnDP approach allows us to propagate moments of the variational probability distribution across the layers of a Bayesian DNN, enabling the estimation of the mean and covariance of the predictive distribution at the output of the model. Our experiments on the MNIST and CIFAR-10 datasets show a significant improvement in the robustness of the trained models to random noise and adversarial attacks.
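
    A minimal sketch of the ensemble idea, assuming the variational posterior can be sampled and the network evaluated for a given weight draw; both `sample_weights` and `forward` are hypothetical placeholders, and the moment estimates are simply the empirical mean and covariance of the propagated ensemble.

```python
import numpy as np

def endp_predictive_moments(x, sample_weights, forward, n_members=50):
    """Estimate predictive mean/covariance by propagating an ensemble.

    sample_weights() draws one weight set from the variational posterior;
    forward(x, w) runs the network on input x with weights w. Both are
    assumed to be provided by the surrounding Bayesian DNN code.
    """
    outputs = np.stack([forward(x, sample_weights()) for _ in range(n_members)])
    mean = outputs.mean(axis=0)
    # The empirical covariance of the ensemble outputs approximates the
    # covariance of the predictive distribution at the model output.
    cov = np.cov(outputs, rowvar=False)
    return mean, cov
```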

    Inception Modules Enhance Brain Tumor Segmentation.

    Magnetic resonance images of brain tumors are routinely used in neuro-oncology clinics for diagnosis, treatment planning, and post-treatment tumor surveillance. Currently, physicians spend considerable time manually delineating different structures of the brain. Spatial and structural variations, as well as intensity inhomogeneity across images, make the problem of computer-assisted segmentation very challenging. We propose a new image segmentation framework for tumor delineation that benefits from two state-of-the-art machine learning architectures in computer vision, i.e., Inception modules and the U-Net image segmentation architecture. Furthermore, our framework includes two learning regimes, i.e., learning to segment intra-tumoral structures (necrotic and non-enhancing tumor core, peritumoral edema, and enhancing tumor) or learning to segment glioma sub-regions (whole tumor, tumor core, and enhancing tumor). These learning regimes are incorporated into a newly proposed loss function based on the Dice similarity coefficient (DSC). In our experiments, we quantified the impact of introducing the Inception modules in the U-Net architecture, as well as changing the objective function for the learning algorithm from segmenting the intra-tumoral structures to glioma sub-regions. We found that incorporating Inception modules significantly improved the segmentation performance (p < 0.001) for all glioma sub-regions. Moreover, in architectures with Inception modules, the models trained with the learning objective of segmenting the intra-tumoral structures outperformed the models trained with the objective of segmenting the glioma sub-regions for the whole tumor (p < 0.001). The improved performance is linked to the multiscale features extracted by the newly introduced Inception modules and the modified loss function based on the DSC.
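
    The DSC-based objective can be written as a soft Dice loss averaged over the chosen set of regions (intra-tumoral structures or glioma sub-regions). The sketch below is a common formulation of that loss, not necessarily the paper's exact variant.

```python
import numpy as np

def soft_dice_loss(pred, target, regions, eps=1e-6):
    """Soft Dice loss averaged over a set of regions.

    pred/target are probability and binary maps of shape (region, H, W);
    regions is an iterable of region indices, e.g. range(3) for the
    three glioma sub-regions. eps guards against empty regions.
    """
    losses = []
    for r in regions:
        p, t = pred[r].ravel(), target[r].ravel()
        dsc = (2.0 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)
        losses.append(1.0 - dsc)  # perfect overlap gives loss 0
    return float(np.mean(losses))
```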

    Dilated Inception U-Net (DIU-Net) for Brain Tumor Segmentation

    Magnetic resonance imaging (MRI) is routinely used for brain tumor diagnosis, treatment planning, and post-treatment surveillance. Recently, various models based on deep neural networks have been proposed for the pixel-level segmentation of tumors in brain MRIs. However, the structural variations, spatial dissimilarities, and intensity inhomogeneity in MRIs make segmentation a challenging task. We propose a new end-to-end brain tumor segmentation architecture based on U-Net that integrates Inception modules and dilated convolutions into its contracting and expanding paths. This allows us to extract local structural as well as global contextual information. We performed segmentation of glioma sub-regions, including tumor core, enhancing tumor, and whole tumor, using the Brain Tumor Segmentation (BraTS) 2018 dataset. Our proposed model performed significantly better than the state-of-the-art U-Net-based model (p < 0.05) for tumor core and whole tumor segmentation.
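
    A minimal sketch of an Inception-style block whose parallel branches use increasing dilation rates, so local structure and wider context are captured side by side; the branch widths and dilation rates here are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DilatedInceptionBlock(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation, concatenated.

    Branch counts and dilation rates are illustrative, not the DIU-Net
    paper's exact settings.
    """
    def __init__(self, in_ch, branch_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding=d keeps the spatial size unchanged for a
                # 3x3 kernel with dilation d.
                nn.Conv2d(in_ch, branch_ch, 3, padding=d, dilation=d),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        # Concatenate branch outputs along the channel dimension, mixing
        # local structure (d=1) with wider context (d=2, 4).
        return torch.cat([b(x) for b in self.branches], dim=1)
```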

    PremiUm-CNN: Propagating Uncertainty Towards Robust Convolutional Neural Networks

    Deep neural networks (DNNs) have surpassed human-level accuracy in various learning tasks. However, unlike humans, who have a natural cognitive intuition for probabilities, DNNs cannot express their uncertainty in the output decisions. This limits the deployment of DNNs in mission-critical domains, such as warfighter decision-making or medical diagnosis. Bayesian inference provides a principled approach to reason about a model's uncertainty by estimating the posterior distribution of the unknown parameters. The challenge in DNNs remains the multiple layers of non-linearities, which make the propagation of high-dimensional distributions mathematically intractable. This paper establishes the theoretical and algorithmic foundations of uncertainty or belief propagation by developing new deep learning models named PremiUm-CNNs (Propagating Uncertainty in Convolutional Neural Networks). We introduce a tensor normal distribution as a prior over convolutional kernels and estimate the variational posterior by maximizing the evidence lower bound (ELBO). We start by deriving the first-order mean-covariance propagation framework. Later, we develop a framework based on the unscented transformation (correct at least up to the second order) that propagates sigma points of the variational distribution through layers of a CNN. The propagated covariance of the predictive distribution captures uncertainty in the output decision. Comprehensive experiments conducted on diverse benchmark datasets demonstrate: 1) superior robustness against noise and adversarial attacks, 2) self-assessment through predictive uncertainty that increases quickly with increasing levels of noise or attacks, and 3) an ability to detect a targeted attack from ambient noise.
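
    The unscented-transformation step can be sketched as follows: draw sigma points from the variational Gaussian, push them through a nonlinearity, and recombine them into a transformed mean and covariance. The scaling parameters below are conventional defaults, not values from the paper.

```python
import numpy as np

def unscented_propagate(mean, cov, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate N(mean, cov) through a nonlinearity f via sigma points.

    Returns the transformed mean and covariance, which match the true
    moments to at least second order for the standard sigma-point
    weights used here. f maps an n-vector to an m-vector.
    """
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    # 2n+1 sigma points: the mean plus/minus matrix square-root columns.
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma])
    y_mean = wm @ Y
    diff = Y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov
```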

    Deep Ensemble for Rotorcraft Attitude Prediction

    Historically, the rotorcraft community has experienced a higher fatal accident rate than other aviation segments, including commercial and general aviation. Recent advancements in artificial intelligence (AI) and the application of these technologies in different areas of our lives are both intriguing and encouraging. When developed appropriately for the aviation domain, AI techniques provide an opportunity to help design systems that can address rotorcraft safety challenges. Our recent work demonstrated that AI algorithms could use video data from onboard cameras and correctly identify different flight parameters from cockpit gauges, e.g., indicated airspeed. These AI-based techniques provide a potentially cost-effective solution, especially for small helicopter operators, to record the flight state information and perform post-flight analyses. We also showed that carefully designed and trained AI systems could accurately predict rotorcraft attitude (i.e., pitch and yaw) from outside scenes (images or video data). Ordinary off-the-shelf video cameras were installed inside the rotorcraft cockpit to record the outside scene, including the horizon. The AI algorithm could correctly identify rotorcraft attitude at an accuracy in the range of 80%. In this work, we combined five different onboard camera viewpoints to improve attitude prediction accuracy to 94%. The five onboard camera views included the pilot windshield, co-pilot windshield, pilot Electronic Flight Instrument System (EFIS) display, co-pilot EFIS display, and the attitude indicator gauge. Using video data from each camera view, we trained various convolutional neural networks (CNNs), which achieved prediction accuracy in the range of 79% to 90%. We subsequently ensembled the learned knowledge from all CNNs and achieved an ensemble accuracy of 93.3%.
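
    One plausible way to ensemble the per-view CNNs is soft voting over their class probabilities, sketched below; the paper's exact combination scheme may differ.

```python
import numpy as np

def ensemble_attitude_prediction(view_probs):
    """Combine per-view class probabilities by averaging (soft voting).

    view_probs: list of arrays, one per camera view, each of shape
    (n_classes,) holding that view's CNN softmax output. Soft voting is
    an assumed scheme, not necessarily the paper's.
    """
    avg = np.mean(np.stack(view_probs), axis=0)
    return int(np.argmax(avg))  # index of the predicted attitude class
```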

    Evaluating the Safety and Mobility Impacts of American Dream Complex: Phase I (Feasibility Study, and Data Acquisition)

    Traffic congestion and motor vehicle crashes are perceived as pivotal concerns that are particularly difficult to manage in high-density urban areas. Thus, mitigating traffic congestion and improving users' safety on roadways are top priorities of the United States Department of Transportation (USDOT). The American Dream Complex, located outside New York City, is an entertainment and retail center that officially opened in October 2019. The complex is expected to attract over 40 million annual visitors once fully operational, which may potentially result in substantial mobility and safety issues for road users in the area. The present research work evaluates the mobility and safety concerns of the transportation network in the vicinity of the American Dream Complex due to its partial official opening. In terms of mobility, the performance of four surrounding corridors was explored by incorporating travel time inflation (TI) as a performance measure. Based on the results obtained from the mobility analysis, no considerable congestion was observed on the surrounding corridors on the opening day of the American Dream Complex. Additionally, StreetLight data were explored for Interstate 95, NJ Route 3, and NJ Route 120 for a period of 120 days before and 120 days after the opening of the complex. Findings showed an increase in the trips made after the opening; however, the travel duration was not significantly impacted by the opening. To achieve the second goal of this study, the research team developed an innovative artificial intelligence (AI)-based video analytic tool to assess intersection safety using surrogate safety measures. To extract the trajectory data, the proposed work integrates a real-time AI detection algorithm, YOLO-V5, with tracking using the Deep SORT algorithm. The proposed approach achieved a relative accuracy between 95 and 98 percent in detecting and tracking vehicle trajectories.
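
    The report does not quote a formula for travel time inflation here, so the sketch below assumes a common definition: the relative increase of observed travel time over the free-flow travel time.

```python
def travel_time_inflation(observed_tt, free_flow_tt):
    """Travel time inflation (TI): relative increase of the observed
    travel time over the free-flow travel time. This definition is an
    assumption, not quoted from the study."""
    return (observed_tt - free_flow_tt) / free_flow_tt

# Example: a 12-minute observed trip on a corridor with a 9-minute
# free-flow time gives TI = 0.33, i.e., 33% inflation.
ti = travel_time_inflation(12.0, 9.0)
```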

    A Real-time Proactive Intersection Safety Monitoring System Based on Video Data

    In recent years, identifying road users' behavior and conflicts at intersections has become an essential data source for evaluating traffic safety. According to the Federal Highway Administration (FHWA), in 2020, more than 50% of fatal and injury crashes occurred at or near intersections, necessitating further investigation. This study developed an innovative artificial intelligence (AI)-based video analytic tool to assess intersection safety using surrogate safety measures. Surrogate safety measures (e.g., Post-encroachment Time (PET) and Time-to-Collision (TTC)) are extensively used to identify future threats, such as rear-end and left-turning collisions due to vehicle and road users' interactions. To extract the trajectory data, this project integrates a real-time AI detection model, YOLO-v5, with a tracking framework based on the DeepSORT algorithm. Fifty-four hours of high-resolution video data were collected at six signalized intersections (three 3-leg intersections and three 4-leg intersections) in Glassboro, New Jersey. Non-compliance behaviors, such as red-light running and pedestrian jaywalking, were captured to better understand the risky behaviors at these locations. The proposed approach achieved an accuracy of 92% to 98% for detecting and tracking the road users' trajectories. Additionally, a user-friendly web-based application was developed that provides directional traffic volumes, pedestrian volumes, vehicles running a red light, pedestrian jaywalking events, and PET and TTC for crossing conflicts between two road users. In addition, extreme value theory (EVT) was used to estimate the number of crashes at each intersection utilizing the frequency of PETs and TTCs. Finally, the intersections were ranked based on the calculated score, considering the severity of crashes. Overall, the developed tool, as well as the crash estimation model and ranking method, can provide valuable information for engineers and policymakers to assess the safety of intersections and implement effective countermeasures to mitigate intersection-involved crashes.
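
    The two surrogate measures can be sketched directly from trajectory timing, assuming conflict-point crossing times and spacings have already been extracted from the tracked trajectories; smaller values of either measure indicate a more severe interaction.

```python
def post_encroachment_time(t_first_exits, t_second_enters):
    """PET: the gap between the first road user leaving the conflict
    point and the second one reaching it (seconds)."""
    return t_second_enters - t_first_exits

def time_to_collision(gap_m, closing_speed_mps):
    """TTC for a following conflict: spacing divided by closing speed.
    Returns infinity when the road users are not closing in."""
    return gap_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")
```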

    Diagnosing growth in low-grade gliomas with and without longitudinal volume measurements: A retrospective observational study.

    BACKGROUND: Low-grade gliomas cause significant neurological morbidity by brain invasion. There is no universally accepted objective technique available for detection of enlargement of low-grade gliomas in the clinical setting; subjective evaluation by clinicians using visual comparison of longitudinal radiological studies is the gold standard. The aim of this study is to determine whether a computer-assisted diagnosis (CAD) method helps physicians detect earlier growth of low-grade gliomas. METHODS AND FINDINGS: We reviewed 165 patients diagnosed with grade 2 gliomas, seen at the University of Alabama at Birmingham clinics from 1 July 2017 to 14 May 2018. MRI scans were collected during the spring and summer of 2018. Fifty-six gliomas met the inclusion criteria, including 19 oligodendrogliomas, 26 astrocytomas, and 11 mixed gliomas in 30 males and 26 females with a mean age of 48 years and a follow-up range of 150.2 months (difference between the longest and shortest follow-up). None received radiation therapy. We also studied 7 patients with an imaging abnormality without pathological diagnosis, who were clinically stable at the time of retrospective review (14 May 2018). This study compared growth detection by 7 physicians aided by the CAD method with retrospective clinical reports. The tumors of 63 patients (56 + 7) in 627 MRI scans were digitized, including 34 grade 2 gliomas with radiological progression and 22 radiologically stable grade 2 gliomas. The CAD method consisted of tumor segmentation, computing volumes, and flagging growth with an online abrupt change-point method, which considers only past measurements. Independent scientists have evaluated the segmentation method. In 29 of the 34 patients with progression, the median time to growth detection was only 14 months for CAD compared to 44 months for current standard-of-care radiological evaluation (p < 0.001). Using CAD, accurate detection of tumor enlargement was possible with a median of only 57% change in the tumor volume, as compared to a median of 174% change of volume necessary to diagnose tumor growth using standard-of-care clinical methods (p < 0.001). In the radiologically stable group, CAD facilitated growth detection in 13 out of 22 patients. CAD did not detect growth in the imaging abnormality group. The main limitation of this study was its retrospective design; nevertheless, the results depict the current state of a gold standard in clinical practice that allowed a significant increase in tumor volumes from baseline before detection. Such large increases in tumor volume would not be permitted in a prospective design. The number of glioma patients (n = 56) is a limitation; however, it is equivalent to the number of patients in phase II clinical trials. CONCLUSIONS: The current practice of visual comparison of longitudinal MRI scans is associated with significant delays in detecting growth of low-grade gliomas. Our findings support the idea that physicians aided by CAD detect growth at significantly smaller volumes than physicians using visual comparison alone. This study does not answer the questions of whether to treat or not and which treatment modality is optimal. Nonetheless, early growth detection sets the stage for future clinical studies that address these questions and whether early therapeutic interventions prolong survival and improve quality of life.
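
    The paper's online change-point detector is described only as considering past measurements, so the sketch below substitutes a one-sided CUSUM over the longitudinal volume series as a stand-in; its tuning parameters are hypothetical, not the paper's.

```python
def cusum_growth_alarm(volumes, drift=0.0, threshold=5.0):
    """One-sided CUSUM over longitudinal tumor volumes (e.g., in cc).

    Stand-in for the paper's online change-point method: it sees only
    past measurements and raises an alarm at the first scan where the
    cumulative upward deviation from the running mean exceeds the
    threshold. drift/threshold are illustrative tuning parameters.
    """
    s, mean, n = 0.0, 0.0, 0
    for i, v in enumerate(volumes):
        n += 1
        mean += (v - mean) / n          # running mean of the scans so far
        s = max(0.0, s + (v - mean) - drift)
        if s > threshold:
            return i                    # index of the alarming scan
    return None                         # no growth flagged
```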

    A deep learning framework for joint image restoration and recognition

    Image restoration and recognition are important computer vision tasks representing an inherent part of autonomous systems. These two tasks are often implemented in a sequential manner, in which the restoration process is followed by recognition. In contrast, this paper proposes a joint framework that simultaneously performs both tasks within a shared deep neural network architecture. This joint framework integrates the restoration and recognition tasks by incorporating (i) common layers, (ii) restoration layers, and (iii) classification layers. The total loss function combines the restoration and classification losses. The proposed joint framework, based on capsules, provides an efficient solution that can cope with challenges due to noise, image rotations, and occlusions. The developed framework has been validated and evaluated on a public vehicle logo dataset under various degradation conditions, including Gaussian noise, rotation, and occlusion. The results show that the joint framework improves the accuracy compared with the single-task networks.
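
    The total loss described above can be sketched as a weighted sum of a restoration term and a classification term; the MSE/cross-entropy choices and the weight alpha are illustrative assumptions, as the abstract only states that the two losses are combined.

```python
import torch
import torch.nn.functional as F

def joint_loss(restored, clean, logits, labels, alpha=0.5):
    """Weighted sum of a restoration loss (MSE to the clean image) and
    a classification loss (cross-entropy over logits).

    alpha balances the two objectives; its value and the specific loss
    terms are assumptions for illustration, not the paper's settings.
    """
    l_rest = F.mse_loss(restored, clean)
    l_cls = F.cross_entropy(logits, labels)
    return alpha * l_rest + (1.0 - alpha) * l_cls
```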