    Efficient Neural Network Approximation via Bayesian Reasoning

    Approximate Computing (AxC) trades off the accuracy required by the user against the precision provided by the computing system to achieve optimizations such as performance improvement and energy and area reduction. Several AxC techniques have been proposed in the literature; they work at different abstraction levels and include both hardware and software implementations. A common issue of all existing approaches is the lack of a methodology for estimating the impact of a given AxC technique on application-level accuracy. This paper proposes a probabilistic approach based on Bayesian networks to quickly estimate the impact of a given approximation technique on application-level accuracy. We also show how Bayesian networks allow a backtrack analysis that automatically identifies the most sensitive components; this influence analysis dramatically reduces the design-space exploration for approximation techniques. Preliminary results on a simple artificial neural network show the efficiency of the proposed approach.
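
    A minimal sketch of the kind of analysis described, using the pgmpy library: a toy Bayesian network in which two hypothetical components may be approximated, queried to rank which one degrades application-level accuracy more. The network structure, probabilities, and component names are invented for illustration and are not the paper's actual model.

        # Hypothetical sketch: ranking component sensitivity with a tiny Bayesian network.
        from pgmpy.models import BayesianNetwork
        from pgmpy.factors.discrete import TabularCPD
        from pgmpy.inference import VariableElimination

        # Two components that may be approximated (0 = exact, 1 = approximated)
        # and an application-level "AccuracyLoss" node (0 = acceptable, 1 = degraded).
        model = BayesianNetwork([("CompA", "AccuracyLoss"), ("CompB", "AccuracyLoss")])

        cpd_a = TabularCPD("CompA", 2, [[0.5], [0.5]])
        cpd_b = TabularCPD("CompB", 2, [[0.5], [0.5]])
        # Illustrative conditional table: approximating CompA hurts accuracy more.
        cpd_loss = TabularCPD(
            "AccuracyLoss", 2,
            values=[[0.99, 0.90, 0.70, 0.55],   # P(acceptable | CompA, CompB)
                    [0.01, 0.10, 0.30, 0.45]],  # P(degraded   | CompA, CompB)
            evidence=["CompA", "CompB"], evidence_card=[2, 2],
        )
        model.add_cpds(cpd_a, cpd_b, cpd_loss)

        # "Backtrack"-style query: how likely is accuracy degradation when a given
        # component is approximated? Larger values flag the more sensitive component.
        infer = VariableElimination(model)
        for comp in ["CompA", "CompB"]:
            q = infer.query(["AccuracyLoss"], evidence={comp: 1}, show_progress=False)
            print(comp, "-> P(degraded | approximated) =", round(q.values[1], 3))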

    Post-transcriptional knowledge in pathway analysis increases the accuracy of phenotypes classification

    Motivation: Prediction of phenotypes from high-dimensional data is a crucial task in precision biology and medicine. Many technologies employ genomic biomarkers to characterize phenotypes. However, such elements are not sufficient to explain the underlying biology. To address this, pathway analysis techniques have been proposed. Nevertheless, such methods have shown a lack of accuracy in phenotype classification. Results: Here we propose a novel methodology called MITHrIL (Mirna enrIched paTHway Impact anaLysis) for the analysis of signaling pathways, built on top of the work of Tarca et al. (2009). MITHrIL extends pathways by adding missing regulatory elements, such as microRNAs, and their interactions with genes. The method takes as input the expression values of genes and/or microRNAs and returns a list of pathways sorted according to their degree of deregulation, together with the corresponding statistical significance (p-values). Our analysis shows that MITHrIL outperforms its competitors even in the worst case. In addition, our method is able to correctly classify sets of tumor samples drawn from TCGA. Availability: MITHrIL is freely available at the following URL: http://alpha.dmi.unict.it/mithril
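
    As context for the pathway scoring that MITHrIL builds on, the sketch below illustrates the perturbation-propagation idea from Tarca et al. (2009): each gene's perturbation is its expression change plus the normalized, weighted perturbations of its upstream regulators, PF(g) = dE(g) + sum_u beta(u,g) * PF(u) / Nds(u). The toy three-gene pathway and expression values are invented, and MITHrIL's actual extensions (microRNAs, p-value computation) are not shown.

        # Hedged sketch of pathway perturbation propagation on a toy pathway.
        import numpy as np

        genes = ["G1", "G2", "G3"]
        dE = np.array([1.2, -0.4, 0.0])            # measured log fold-changes (illustrative)

        # adj[u, g] = +1 if u activates g, -1 if u inhibits g (toy pathway: G1 -> G2 -| G3).
        adj = np.array([[0,  1,  0],
                        [0,  0, -1],
                        [0,  0,  0]], dtype=float)

        n_ds = adj.astype(bool).sum(axis=1)        # number of downstream genes per node
        n_ds[n_ds == 0] = 1                        # avoid division by zero for leaves
        B = (adj / n_ds[:, None]).T                # B[g, u] = beta(u, g) / Nds(u)

        # PF = dE + B @ PF  =>  (I - B) PF = dE
        PF = np.linalg.solve(np.eye(len(genes)) - B, dE)
        accumulation = (PF - dE).sum()             # pathway-level net accumulation term
        print(dict(zip(genes, PF.round(3))), "accumulation:", round(accumulation, 3))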

    Analyzing the Effect of Basic Data Augmentation for COVID-19 Detection through a Fractional Factorial Experimental Design

    The COVID-19 pandemic has created a worldwide healthcare crisis. Convolutional Neural Networks (CNNs) have recently been used with encouraging results to help detect COVID-19 from chest X-ray images. However, to generalize well to unseen data, CNNs require large labeled datasets, and due to the lack of publicly available COVID-19 datasets, most CNNs apply various data augmentation techniques during training. Yet there has been no thorough statistical analysis of how data augmentation operations affect classification performance for COVID-19 detection. In this study, a fractional factorial experimental design is used to examine the impact of basic augmentation methods on COVID-19 detection. This design makes it possible to identify which data augmentation techniques and interactions have a statistically significant impact, positive or negative, on classification performance. Using the CoroNet architecture and two publicly available COVID-19 datasets, the most common basic augmentation methods in the literature are evaluated. The experiments show that zoom range and height shift positively impact the model's accuracy on dataset 1, while performance on dataset 2 is unaffected by any of the data augmentation operations. Additionally, a new state-of-the-art performance is achieved on both datasets by training CoroNet with the best augmentation values found using the experimental design: on dataset 1, 97% accuracy, 93% precision, and 97.7% recall were attained, while on dataset 2, 97% accuracy, 97% precision, and 97.6% recall were achieved. These results indicate that analyzing the effects of data augmentation on a particular task and dataset is essential for the best performance. DOI: 10.28991/ESJ-2023-SPER-01
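
    A hedged sketch of how such a screening design might be set up in code: a 2^(4-1) half-fraction over four basic augmentations, mapped to Keras ImageDataGenerator arguments. The factor levels and the chosen augmentations are placeholders rather than the paper's exact settings, and the CoroNet training step is only indicated by a comment.

        # Hypothetical 2^(4-1) fractional factorial over four augmentation factors.
        from itertools import product
        from tensorflow.keras.preprocessing.image import ImageDataGenerator

        # High (+1) levels for the four factors; low (-1) means the augmentation is off.
        HIGH = {"rotation_range": 15, "width_shift_range": 0.1,
                "height_shift_range": 0.1, "zoom_range": 0.2}

        # Half fraction: the fourth factor's level is the product of the first three
        # (defining relation D = ABC), giving 8 runs instead of the full 16.
        runs = [(a, b, c, a * b * c) for a, b, c in product([-1, 1], repeat=3)]

        for run in runs:
            params = {name: value for (name, value), level in zip(HIGH.items(), run)
                      if level == 1}
            datagen = ImageDataGenerator(rescale=1.0 / 255, **params)
            print(run, "->", sorted(params))  # train CoroNet with `datagen`, record accuracy

    Main effects and two-factor interactions can then be estimated from the eight recorded accuracies, which is what makes the screening statistically interpretable rather than trial-and-error.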

    Explainable Machine Learning Techniques in Medical Image Analysis Based on Classification with Feature Extraction

    COVID-19 is a quickly spreading virus that infects both humans and animals. This fatal viral disease affects people's daily lives, health, and a nation's economy. Deep learning is among the most effective machine learning methods, offering insightful analysis of the large numbers of chest X-ray images that bear on COVID-19 screening. This research proposes a novel technique for lung image analysis that detects COVID-related lung infection using explainable machine learning. The input is a dataset of COVID patients' lung images, which is processed for noise removal and smoothing. Features of the processed images are extracted using a spatio-transfer neural network integrated with a DenseNet+ architecture. The extracted features are classified using a stacked auto-Boltzmann encoder machine with VGG-19Net+. With a transfer learning method integrated into the binary classification process, the proposed algorithm achieves good classification accuracy. The experimental analysis is carried out on various COVID datasets in terms of accuracy, precision, recall, F1-score, RMSE, and MAP. The proposed technique attains an accuracy of 95%, precision of 91%, recall of 85%, F1-score of 80%, RMSE of 61%, and MAP of 51%.
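
    A simplified, hedged sketch of the general pipeline the abstract describes, CNN feature extraction followed by a binary classifier, using a plain Keras DenseNet121 backbone rather than the paper's DenseNet+/VGG-19Net+ variants; the random arrays stand in for preprocessed chest X-ray images.

        # Illustrative transfer-learning pipeline: pretrained CNN features + small classifier.
        import numpy as np
        from tensorflow.keras.applications import DenseNet121
        from tensorflow.keras.applications.densenet import preprocess_input
        from tensorflow.keras import layers, models
        from tensorflow.keras.metrics import Precision, Recall

        # Frozen ImageNet-pretrained backbone used purely as a feature extractor.
        extractor = DenseNet121(weights="imagenet", include_top=False, pooling="avg",
                                input_shape=(224, 224, 3))

        classifier = models.Sequential([
            layers.Input(shape=(extractor.output_shape[-1],)),
            layers.Dense(128, activation="relu"),
            layers.Dense(1, activation="sigmoid"),   # COVID vs. non-COVID
        ])
        classifier.compile(optimizer="adam", loss="binary_crossentropy",
                           metrics=["accuracy", Precision(), Recall()])

        # Placeholder data standing in for preprocessed chest X-ray images and labels.
        x = preprocess_input(np.random.rand(8, 224, 224, 3) * 255)
        features = extractor.predict(x, verbose=0)
        labels = np.random.randint(0, 2, size=(8, 1))
        classifier.fit(features, labels, epochs=1, verbose=0)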

    Comparative analysis of techniques for shooting trajectory reconstruction.

    Shooting trajectory reconstruction is a common discipline within crime scene reconstruction. The discipline, however, has little published history concerning its origins, its most valuable techniques and methods, or supporting journal articles. In addition, this type of reconstruction is used daily by law enforcement, yet its procedures and tools have not been validated. This study advanced knowledge of shooting trajectory reconstruction by applying geometric principles and modern-day crime scene reconstruction techniques and by documenting the discipline's history. Through comparative analysis, this project determined beneficial and hindering aspects of modern tools. This was accomplished by analyzing three shooting scene reconstruction tools, specifically the Smart Tool, an angle finder, and manual calculations using a digital caliper, to determine the efficiency of each technique. The project examined data taken by participants and determined the accuracy, precision, and error rate for the three tools. The study provided law enforcement with research enabling them to choose the most effective tool for crime scene analysis among the tools tested in this project. This research also began the process of validating the discipline of shooting trajectory reconstruction. A statistically significant difference in accuracy (mean bias) and precision (standard deviation) was found among the three tools: the difference between the Smart Tool and the angle finder was not statistically significant, but the difference between those two tools and the digital caliper was. This difference means the tools cannot be averaged together when calculating results from a crime scene. No statistically significant difference in accuracy or precision was found between examiners and students, independent of the tools. Results indicate the angle finder was the most efficient tool tested. A statistically significant interaction was also found between the impact angles tested in this project, which makes an overall error rate for the tools or the discipline impossible to establish.
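
    For context on the manual-calculation approach, the snippet below shows one geometric relation commonly used with caliper measurements: estimating the impact angle of an elliptical bullet defect as the arcsine of the minor-to-major axis ratio. The measurements are invented, and this is not presented as the study's exact protocol.

        # Worked example of the ellipse method for estimating an impact angle.
        import math

        def impact_angle_deg(width_mm: float, length_mm: float) -> float:
            """Impact angle (degrees) from the defect's minor and major axes."""
            return math.degrees(math.asin(width_mm / length_mm))

        # Hypothetical caliper readings: 9.0 mm wide, 12.7 mm long elliptical defect.
        print(round(impact_angle_deg(9.0, 12.7), 1))   # ~45.1 degrees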

    Impact of fault prediction on checkpointing strategies

    This paper deals with the impact of fault prediction techniques on checkpointing strategies. We extend the classical analysis of Young and Daly to the presence of a fault prediction system, which is characterized by its recall and its precision, and which provides either exact or window-based time predictions. We derive the optimal value of the checkpointing period (thereby minimizing the waste of resource usage due to checkpoint overhead) in all scenarios. These results allow us to analytically assess the key parameters that impact the performance of fault predictors at very large scale. In addition, the results of this analytical evaluation are nicely corroborated by a comprehensive set of simulations, demonstrating the validity of the model and the accuracy of the results.
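
    As a worked reference point, the sketch below computes the classical Young/Daly checkpointing period that the paper extends, W_opt = sqrt(2 * mu * C), with mu the platform MTBF and C the checkpoint cost. The (1 - recall) adjustment shown is only a first-order illustration of how proactively handling a fraction of predicted faults lengthens the optimal period; the paper itself should be consulted for the exact formulas.

        # Young/Daly checkpointing period and an illustrative recall-based adjustment.
        import math

        def young_daly_period(mtbf_s: float, checkpoint_cost_s: float) -> float:
            # Classical result: W_opt = sqrt(2 * MTBF * C).
            return math.sqrt(2.0 * mtbf_s * checkpoint_cost_s)

        def period_with_prediction(mtbf_s: float, checkpoint_cost_s: float,
                                   recall: float) -> float:
            # Illustrative: only un-predicted faults (fraction 1 - recall) strike unannounced.
            return math.sqrt(2.0 * mtbf_s * checkpoint_cost_s / (1.0 - recall))

        mtbf, cost = 86_400.0, 600.0        # 1-day MTBF, 10-minute checkpoint (illustrative)
        print(round(young_daly_period(mtbf, cost)))            # ~10182 s
        print(round(period_with_prediction(mtbf, cost, 0.8)))  # ~22768 s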

    Where Should We Begin? A Low-Level Exploration of Weight Initialization Impact on Quantized Behaviour of Deep Neural Networks

    With the proliferation of deep convolutional neural network (CNN) algorithms for mobile processing, limited-precision quantization has become an essential tool for CNN efficiency. Consequently, various works have sought to design fixed-precision quantization algorithms and quantization-focused optimization techniques that minimize quantization-induced performance degradation. However, there is little concrete understanding of how various CNN design decisions and best practices affect quantized inference behaviour. Weight initialization strategies are often associated with solving issues such as vanishing/exploding gradients, but an often-overlooked aspect is their impact on the final trained distributions of each layer. We present an in-depth, fine-grained ablation study of the effect of different weight initializations on the final distributions of weights and activations of different CNN architectures. The fine-grained, layerwise analysis enables us to gain deep insights into how initial weight distributions affect final accuracy and quantized behaviour. To the best of our knowledge, we are the first to perform such a low-level, in-depth quantitative analysis of weight initialization and its effect on quantized behaviour.
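
    A hedged, minimal illustration of the kind of low-level analysis described: comparing how two common initialization schemes shape a layer's weight distribution, and how much error a simple 8-bit uniform quantizer introduces on each. The layer size and quantizer are assumptions for illustration, not the authors' experimental protocol.

        # Compare weight distributions under two initializations and their quantization error.
        import numpy as np

        def init_weights(fan_in, fan_out, scheme, rng):
            if scheme == "glorot_uniform":
                limit = np.sqrt(6.0 / (fan_in + fan_out))
                return rng.uniform(-limit, limit, size=(fan_in, fan_out))
            if scheme == "he_normal":
                return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
            raise ValueError(scheme)

        def quantize_uniform(w, bits=8):
            # Symmetric uniform quantizer scaled to the tensor's max magnitude.
            scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
            return np.round(w / scale) * scale

        rng = np.random.default_rng(0)
        for scheme in ["glorot_uniform", "he_normal"]:
            w = init_weights(256, 256, scheme, rng)
            err = np.abs(w - quantize_uniform(w)).mean()
            print(f"{scheme:>15}: std={w.std():.4f}  mean |quant error|={err:.6f}")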

    Factorial Design and Machine Learning Strategies: Impacts on Pharmaceutical Analysis

    Pharmaceutical analysis is progressing rapidly as the concept of multivariate data analysis (MVA) becomes gradually more assimilated. Pharmaceutical analysis comprises a range of processes covering both the chemical and physical assessment of drugs and their formulations using different analytical techniques. With the revolution in instrumental analysis and the huge amount of information produced, an up-to-date data-processing tool is needed; this is where chemometrics comes in. MVA can effectively draw a complete picture of the investigated process. Moreover, MVA captures the arithmetic influence of variables and their interactions through a smaller number of trials, saving both effort and cost. Spectrophotometry, whether direct (single-component) or derivative (multi-component), is among the most extensively used techniques in pharmaceutical analysis. In addition to these recognized benefits, using chemometrics in conjunction with spectrophotometry affects three vital characteristics: accuracy, precision, and robustness. The chapter reveals the impact on the analytical laboratory of coupling spectrophotometric analytical techniques with chemometrics (experimental design and support vector machines). A theoretical background on the different factorial designs and their relevance is provided, and readers can use this chapter as a guide to select the appropriate design for a given problem.
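
    A minimal sketch of a two-level full factorial design with main-effect estimation, the basic building block behind the factorial designs the chapter surveys; the three factor names and the response values are invented for illustration.

        # Hypothetical 2^3 full factorial design and main-effect estimation.
        from itertools import product
        import numpy as np

        factors = ["pH", "temperature", "reagent_volume"]
        design = np.array(list(product([-1, 1], repeat=len(factors))))  # 2^3 = 8 runs

        # Invented responses (e.g., recovery %) measured for each of the 8 runs.
        response = np.array([96.1, 97.0, 95.8, 97.5, 96.4, 97.2, 96.0, 97.8])

        # Main effect of each factor: mean response at +1 minus mean response at -1.
        for j, name in enumerate(factors):
            effect = (response[design[:, j] == 1].mean()
                      - response[design[:, j] == -1].mean())
            print(f"{name:>15}: main effect = {effect:+.2f}")

    A fractional design would drop half of these runs by aliasing a higher-order interaction, which is exactly the trade-off between effort and resolution that MVA exploits.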