14 research outputs found

    A Deep Wavelet AutoEncoder Scheme for Image Compression

    For many years since its appearance, the Discrete Wavelet Transform (DWT) has been used with great success in a wide range of applications, especially image compression and signal de-noising. Combined with various approaches, this powerful mathematical tool has shown its strength in compressing images with high compression ratios and good visual quality. This paper attempts to demonstrate that it is unnecessary to follow the classical three-stage compression process of pixel transformation, quantization, and binary coding when compressing images with the baseline method. Instead, we propose a new image compression scheme based on an unsupervised Convolutional AutoEncoder (CAE) that reconstructs the approximation sub-band issued from the image decomposition by the DWT. To evaluate the model's performance, we use the Kodak dataset, a set of 24 images never compressed with a lossy algorithm, and apply the approach to each of them. We compare our results with those obtained using the standard compression method in terms of four performance parameters: Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), and Compression Ratio (CR). The proposed scheme offers significant improvement in distortion metrics over the traditional image compression method when evaluated for perceptual quality; moreover, it produces images of better visual quality with clearer details and textures, which demonstrates its effectiveness and robustness.
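
    A minimal sketch of the idea, assuming PyWavelets and PyTorch: take a one-level DWT of a grayscale image and train a small convolutional autoencoder to reconstruct the approximation sub-band. The wavelet choice, layer sizes, and training loop are illustrative placeholders, not the authors' architecture.

    # Hedged sketch: DWT approximation sub-band + small convolutional autoencoder.
    # The "haar" wavelet, channel counts, and 100-step loop are assumptions.
    import numpy as np
    import pywt
    import torch
    import torch.nn as nn

    img = np.random.rand(256, 256).astype(np.float32)      # stand-in for a Kodak image
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")              # cA is the approximation sub-band

    class CAE(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(                      # downsampling bottleneck
                nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(8, 4, 3, stride=2, padding=1), nn.ReLU())
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(4, 8, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1))
        def forward(self, x):
            return self.dec(self.enc(x))

    x = torch.from_numpy(cA.astype(np.float32)).unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
    model = CAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):                                   # unsupervised reconstruction
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)
        loss.backward()
        opt.step()
    cA_hat = model(x).squeeze().detach().numpy()           # reconstructed sub-band
    rec = pywt.idwt2((cA_hat, (cH, cV, cD)), "haar")       # back to the image domain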

    On quadrature rules for solving Partial Differential Equations using Neural Networks

    Neural Networks have been widely used to solve Partial Differential Equations. These methods require approximating definite integrals using quadrature rules. Here, we illustrate via 1D numerical examples the quadrature problems that may arise in these applications and propose several alternatives to overcome them, namely: Monte Carlo methods, adaptive integration, polynomial approximations of the Neural Network output, and the inclusion of regularization terms in the loss. We also discuss the advantages and limitations of each proposed numerical integration scheme. We advocate the use of Monte Carlo methods for high dimensions (above 3 or 4), and adaptive integration or polynomial approximations for low dimensions (3 or below). The use of regularization terms is a mathematically elegant alternative that is valid for any spatial dimension; however, it requires certain regularity assumptions on the solution and complex mathematical analysis when dealing with sophisticated Neural Networks.
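
    The 1D quadrature issue is easy to reproduce. The sketch below, a toy not taken from the paper, compares a fixed midpoint rule against a Monte Carlo estimate of a loss of the form of the integral of u(x)^2 over [0, 1] for a small network u; the network and sample sizes are assumptions.

    # Hedged 1D illustration: fixed quadrature vs. Monte Carlo for a NN integral.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    u = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # toy network

    # Midpoint rule: deterministic, but may under-resolve sharp features of u
    # once training moves them in between the fixed nodes.
    n = 16
    x_mid = (torch.arange(n, dtype=torch.float32).unsqueeze(1) + 0.5) / n
    quad_est = (u(x_mid) ** 2).mean()           # equal weights 1/n on [0, 1]

    # Monte Carlo: unbiased in any dimension; resampling each iteration acts
    # like a stochastic, self-refreshing quadrature rule.
    x_mc = torch.rand(4096, 1)
    mc_est = (u(x_mc) ** 2).mean()

    print(f"midpoint: {quad_est.item():.6f}  monte carlo: {mc_est.item():.6f}")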

    To Compress or Not to Compress -- Self-Supervised Learning and Information Theory: A Review

    Deep neural networks have demonstrated remarkable performance in supervised learning tasks but require large amounts of labeled data. Self-supervised learning offers an alternative paradigm, enabling the model to learn from data without explicit labels. Information theory has been instrumental in understanding and optimizing deep neural networks. Specifically, the information bottleneck principle has been applied to optimize the trade-off between compression and relevant information preservation in supervised settings. However, the optimal information objective in self-supervised learning remains unclear. In this paper, we review various approaches to self-supervised learning from an information-theoretic standpoint and present a unified framework that formalizes the self-supervised information-theoretic learning problem. We integrate existing research into a coherent framework, examine recent self-supervised methods, and identify research opportunities and challenges. Moreover, we discuss empirical measurement of information-theoretic quantities and their estimators. This paper offers a comprehensive review of the intersection between information theory, self-supervised learning, and deep neural networks.
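
    As a concrete example of an information-theoretic objective in this space, here is a minimal sketch of InfoNCE, a widely used lower-bound estimator of the mutual information between two views. The batch size, embedding width, and temperature are illustrative, and this is one estimator among those such reviews cover, not the paper's proposal.

    # Hedged sketch: InfoNCE contrastive loss; log(N) - loss lower-bounds I(z1; z2).
    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.1):
        """z1, z2: (batch, dim) embeddings of two views of the same inputs."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature      # pairwise view similarities
        labels = torch.arange(z1.size(0))       # positives sit on the diagonal
        return F.cross_entropy(logits, labels)  # -log p(positive | candidates)

    z1, z2 = torch.randn(64, 128), torch.randn(64, 128)   # mock view embeddings
    loss = info_nce(z1, z2)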

    Research on prognostic risk assessment model for acute ischemic stroke based on imaging and multidimensional data

    Accurately assessing the prognostic outcomes of patients with acute ischemic stroke, and adjusting treatment plans in a timely manner for those with poor prognosis, is crucial for intervening in modifiable risk factors. However, there is still controversy regarding how well imaging-based predictions correlate with complications in acute ischemic stroke. To address this, we developed a cross-modal attention module for integrating multidimensional data, including clinical information, imaging features, treatment plans, prognosis, and complications, so that the modalities complement one another. The fused features preserve magnetic resonance imaging (MRI) characteristics while supplementing clinically relevant information, providing a more comprehensive and informative basis for clinical diagnosis and treatment. The proposed multidimensional-data framework for activities of daily living (ADL) scoring in patients with acute ischemic stroke achieves higher accuracy than other state-of-the-art network models, and ablation experiments confirm the effectiveness of each module in the framework.
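
    A cross-modal attention module of the kind described can be sketched as follows; the dimensions, token count, and single-block design are assumptions for illustration, not the authors' implementation. Clinical features form the queries and attend over MRI feature tokens, so the fused output keeps imaging structure while injecting clinical context.

    # Hedged sketch: clinical features attend over MRI feature-map tokens.
    import torch
    import torch.nn as nn

    class CrossModalAttention(nn.Module):
        def __init__(self, clin_dim=32, img_dim=256, embed_dim=128, heads=4):
            super().__init__()
            self.q = nn.Linear(clin_dim, embed_dim)    # queries from clinical data
            self.kv = nn.Linear(img_dim, embed_dim)    # keys/values from MRI tokens
            self.attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)

        def forward(self, clinical, mri_tokens):
            q = self.q(clinical).unsqueeze(1)          # (B, 1, E)
            kv = self.kv(mri_tokens)                   # (B, T, E)
            fused, _ = self.attn(q, kv, kv)            # cross-modal attention
            return fused.squeeze(1)                    # (B, E) fused representation

    m = CrossModalAttention()
    out = m(torch.randn(8, 32), torch.randn(8, 49, 256))   # 49 = 7x7 feature map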

    DeepGD: A Multi-Objective Black-Box Test Selection Approach for Deep Neural Networks

    Deep neural networks (DNNs) are widely used in various application domains such as image processing, speech recognition, and natural language processing. However, testing DNN models may be challenging due to the complexity and size of their input domain. In particular, testing DNN models often requires generating or exploring large unlabeled datasets. In practice, DNN test oracles, which identify the correct outputs for given inputs, often require expensive manual effort to label test data, possibly involving multiple experts to ensure labeling correctness. In this paper, we propose DeepGD, a black-box multi-objective test selection approach for DNN models. It reduces the cost of labeling by prioritizing the selection of test inputs with high fault-revealing power from large unlabeled datasets. DeepGD not only selects test inputs with high uncertainty scores to trigger as many mispredicted inputs as possible but also maximizes the probability of revealing distinct faults in the DNN model by selecting diverse mispredicted inputs. Experimental results on four widely used datasets and five DNN models show that, in terms of fault-revealing ability: (1) white-box, coverage-based approaches fare poorly, (2) DeepGD outperforms existing black-box test selection approaches in fault detection, and (3) DeepGD also provides better guidance for retraining DNN models when the selected inputs are used to augment the training set.
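
    To make the two objectives concrete, here is a deliberately simplified, greedy stand-in: prediction entropy as the uncertainty score and max-min embedding distance as the diversity term. DeepGD itself performs a multi-objective search over candidate subsets; this toy selector and all names in it are illustrative only.

    # Hedged sketch: greedy uncertainty-plus-diversity test selection
    # (not DeepGD's actual multi-objective search).
    import numpy as np

    def select_tests(probs, feats, budget):
        """probs: (N, C) softmax outputs; feats: (N, D) input embeddings."""
        entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)   # uncertainty
        chosen = [int(entropy.argmax())]                         # most uncertain seed
        while len(chosen) < budget:
            d = np.linalg.norm(feats[:, None] - feats[chosen][None], axis=2).min(axis=1)
            score = entropy * d                                  # combine objectives
            score[chosen] = -np.inf                              # no repeats
            chosen.append(int(score.argmax()))
        return chosen

    probs = np.random.dirichlet(np.ones(10), size=500)   # mock model outputs
    feats = np.random.randn(500, 64)                     # mock input embeddings
    subset = select_tests(probs, feats, budget=20)       # inputs to label first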

    Number Systems for Deep Neural Network Architectures: A Survey

    Deep neural networks (DNNs) have become an enabling component for a myriad of artificial intelligence applications. DNNs have shown performance that is sometimes superior even to humans in cases such as self-driving, health applications, etc. Because of their computational complexity, deploying DNNs in resource-constrained devices still faces many challenges related to computing complexity, energy efficiency, latency, and cost. To this end, several research directions are being pursued by both academia and industry to accelerate and efficiently implement DNNs. One important direction is determining the appropriate data representation for the massive amount of data involved in DNN processing. Using conventional number systems has been found to be sub-optimal for DNNs. Alternatively, a great body of research focuses on exploring suitable number systems. This article aims to provide a comprehensive survey and discussion of alternative number systems for more efficient representation of DNN data. Various number systems (conventional and unconventional) exploited for DNNs are discussed. The impact of these number systems on the performance and hardware design of DNNs is considered. In addition, this paper highlights the challenges associated with each number system and various solutions that have been proposed for addressing them. The reader will be able to understand the importance of an efficient number system for DNNs, learn about the widely used number systems for DNNs, understand the trade-offs between various number systems, and consider various design aspects that affect the impact of number systems on DNN performance. Finally, recent trends and related research opportunities are highlighted.
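
    A toy example of one alternative representation such surveys discuss: symmetric 8-bit fixed-point quantization of a weight tensor. The per-tensor scale and the mock weights are illustrative; real deployments calibrate scales per tensor or per channel.

    # Hedged sketch: int8 fixed-point quantization of DNN weights.
    import numpy as np

    def quantize_int8(w):
        scale = np.abs(w).max() / 127.0                  # widest weight -> int8 range
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale              # approximate float recovery

    w = np.random.randn(256, 256).astype(np.float32)     # mock weight matrix
    q, s = quantize_int8(w)
    err = np.abs(w - dequantize(q, s)).max()             # worst-case rounding error
    print(f"scale={s:.5f}  max abs error={err:.5f}")     # error bounded by scale/2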