182 research outputs found

    Investigating Patterns in Convolution Neural Network Parameters Using Probabilistic Support Vector Machines

    Get PDF
    Artificial neural networks (ANNs) are recognized as high-performance models for classification problems. They have proved to be efficient tools for many of today's applications, such as automated driving, image and video recognition and restoration, and big-data analysis. However, high-performance deep neural networks have millions of parameters, so the iterative training procedure carries a very high computational cost. This research studies the relationships between parameters in convolutional neural networks (CNNs). I assume there exists a certain relation between adjacent convolutional layers and propose a machine learning model (MLM) that can be trained to represent this relation. The MLM's generalization ability is evaluated through the network it creates based only on knowledge of the initial layer. Experiments show that the MLM is able to generate a CNN that has very similar performance but different parameters. In addition, taking advantage of this difference, I insert noise when creating CNNs from the MLM and use ensemble methods to increase performance on the original classification problems.
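    The abstract describes learning a mapping from one convolutional layer's parameters to the next and then perturbing the generated weights to build an ensemble. The sketch below illustrates that idea with a plain linear regressor in PyTorch; the layer shapes, the name mlm, and the use of MSE regression are illustrative assumptions and do not reproduce the thesis's probabilistic-SVM formulation.

import torch
import torch.nn as nn

# Hypothetical shapes: layer k has weights (16, 8, 3, 3), layer k+1 has (32, 16, 3, 3).
in_dim = 16 * 8 * 3 * 3
out_dim = 32 * 16 * 3 * 3

# Stand-in "MLM": a linear map from flattened layer-k weights to layer-(k+1) weights.
mlm = nn.Linear(in_dim, out_dim)
optimizer = torch.optim.Adam(mlm.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# In practice these would be flattened weights collected from many trained CNNs;
# random tensors stand in here so the sketch runs on its own.
w_k = torch.randn(16, in_dim)
w_k1 = torch.randn(16, out_dim)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(mlm(w_k), w_k1)
    loss.backward()
    optimizer.step()

# At generation time, noise is added to the predicted weights to obtain
# diverse CNNs for an ensemble, as the abstract describes.
generated_w_k1 = mlm(w_k[:1]) + 0.01 * torch.randn(1, out_dim)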

    Fast Adversarial Training with Smooth Convergence

    Full text link
    Fast adversarial training (FAT) is beneficial for improving the adversarial robustness of neural networks. However, previous FAT work has encountered a significant issue known as catastrophic overfitting when dealing with large perturbation budgets, i.e., the adversarial robustness of models declines to near zero during training. To address this, we analyze the training process of prior FAT work and observe that catastrophic overfitting is accompanied by the appearance of loss convergence outliers. We therefore argue that a moderately smooth loss convergence process yields a stable FAT process free of catastrophic overfitting. To obtain a smooth loss convergence process, we propose a novel oscillatory constraint (dubbed ConvergeSmooth) that limits the loss difference between adjacent epochs. The convergence stride of ConvergeSmooth is introduced to balance convergence and smoothing. Likewise, we design weight centralization without introducing additional hyperparameters other than the loss balance coefficient. Our proposed methods are attack-agnostic and thus can improve the training stability of various FAT techniques. Extensive experiments on popular datasets show that the proposed methods efficiently avoid catastrophic overfitting and outperform all previous FAT methods. Code is available at https://github.com/FAT-CS/ConvergeSmooth.
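    To make the constraint concrete, here is a minimal sketch of the idea as stated in the abstract: penalize the gap between the current adversarial loss and the previous epoch's loss once it exceeds a convergence stride. The FGSM inner step and the names gamma, beta, and prev_epoch_loss are assumptions for illustration, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps):
    """Single-step FGSM adversarial example, the usual inner step of FAT."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def smoothed_adv_loss(model, x, y, eps, prev_epoch_loss, gamma=0.1, beta=1.0):
    """Adversarial loss plus a penalty on the epoch-to-epoch loss gap."""
    x_adv = fgsm_example(model, x, y, eps)
    adv_loss = F.cross_entropy(model(x_adv), y)
    if prev_epoch_loss is not None:
        gap = (adv_loss - prev_epoch_loss).abs()
        # Only penalize jumps larger than the convergence stride gamma.
        adv_loss = adv_loss + beta * F.relu(gap - gamma)
    return adv_loss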

    Horizontal Federated Learning and Secure Distributed Training for Recommendation System with Intel SGX

    Full text link
    With the advent of the big-data era and the development of artificial intelligence and related technologies, data security and privacy protection have become increasingly important. Recommendation systems have many applications in our society, but building their models is often inseparable from users' data. For deep learning-based recommendation systems in particular, owing to the complexity of the models and the characteristics of deep learning itself, training not only requires long training times and abundant computational resources but also a large amount of user data, which poses a considerable challenge for data security and privacy protection. How to train a distributed recommendation system while ensuring data security has become an urgent problem. In this paper, we implement two schemes, Horizontal Federated Learning and Secure Distributed Training, based on Intel SGX (Software Guard Extensions), an implementation of a trusted execution environment, and the TensorFlow framework, to achieve secure, distributed recommendation-system learning in different scenarios. We experiment on the classical Deep Learning Recommendation Model (DLRM), a neural-network-based machine learning model designed for personalization and recommendation, and the results show that our implementation introduces almost no loss in model performance, while the training speed remains within acceptable limits. Comment: 5 pages, 8 figures
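    As a reading aid, the sketch below shows horizontal federated learning at the level of weight aggregation only (FedAvg-style averaging of client updates). Everything SGX- and TensorFlow-specific, such as attestation or running workers inside enclaves, is omitted, and the function local_train is a made-up placeholder.

import numpy as np

def local_train(weights, local_data):
    """Placeholder for one client's local training pass on its private data."""
    # In the described system each client would train inside an SGX enclave;
    # a small random perturbation stands in for a real gradient step here.
    return [w - 0.01 * np.random.randn(*w.shape) for w in weights]

def federated_round(global_weights, clients_data):
    """One round: every client trains locally, the server averages the results."""
    client_weights = [local_train(global_weights, d) for d in clients_data]
    return [np.mean([cw[i] for cw in client_weights], axis=0)
            for i in range(len(global_weights))]

# Toy usage: three clients and a single 4x4 weight matrix.
global_w = [np.zeros((4, 4))]
for _ in range(5):
    global_w = federated_round(global_w, clients_data=[None, None, None])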

    Catastrophic Overfitting: A Potential Blessing in Disguise

    Full text link
    Fast Adversarial Training (FAT) has gained increasing attention within the research community owing to its efficacy in improving adversarial robustness. Particularly noteworthy is the challenge posed by catastrophic overfitting (CO) in this field. Although existing FAT approaches have made strides in mitigating CO, the gain in adversarial robustness comes with a non-negligible decline in classification accuracy on clean samples. To tackle this issue, we first use the feature activation differences between clean and adversarial examples to analyze the underlying causes of CO. Intriguingly, our findings reveal that CO can be attributed to the feature coverage induced by a few specific pathways. By intentionally manipulating feature activation differences in these pathways with well-designed regularization terms, we can effectively mitigate or induce CO, providing further evidence for this observation. Notably, models trained stably with these terms exhibit superior performance compared to prior FAT work. On this basis, we harness CO to achieve 'attack obfuscation', aiming to bolster model performance. Consequently, models suffering from CO can attain optimal classification accuracy on both clean and adversarial data when random noise is added to inputs during evaluation. We also validate their robustness against transferred adversarial examples and the necessity of inducing CO to improve robustness. Hence, CO may not be a problem that has to be solved.
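    The regularization terms mentioned above act on feature activation differences between clean and adversarial inputs along selected "pathways". A rough sketch of such a term is given below; the channel-subset selection, the coefficient lam, and the toy activation shapes are assumptions for illustration, not the paper's exact design.

import torch

def pathway_regularizer(feat_clean, feat_adv, channel_idx, lam=1.0):
    """Penalty on clean-vs-adversarial activation differences in chosen channels."""
    diff = (feat_clean[:, channel_idx] - feat_adv[:, channel_idx]).abs()
    return lam * diff.mean()

# Toy usage with random maps standing in for one layer's activations.
feat_clean = torch.randn(8, 64, 14, 14)
feat_adv = torch.randn(8, 64, 14, 14)
reg = pathway_regularizer(feat_clean, feat_adv, channel_idx=[0, 3, 7])
# Adding this term to the training loss suppresses the divergence (mitigating CO),
# while subtracting it encourages divergence (inducing CO), mirroring the abstract.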

    Drag reduction mechanism of Paramisgurnus dabryanus loach with self-lubricating and flexible micro-morphology

    Get PDF
    Underwater machinery encounters great resistance in water, which can consume a large amount of power. Inspired by the loach's ability to move quickly in mud, this paper discusses the drag reduction mechanism of the Paramisgurnus dabryanus loach. Subjected to the compression and scraping of water and sediments, a loach not only secretes a lubricating mucus film but, more importantly, retains that mucus well, its surface micro-structure preventing it from being lost rapidly. In addition, it has been found that flexible deformations can maximize the drag reduction rate. This self-adaptation keeps the drag reduction rate at a high level over a wide range of speeds. Therefore, even though the surface of underwater machinery cannot secrete mucus, it can be designed to imitate this bionic micro-morphology so that it absorbs and stores fluid and eventually forms a self-lubricating film that reduces resistance. In the present study, the Paramisgurnus dabryanus loach is taken as the bionic prototype to learn how to avoid or slow the loss of mucus through the body surface. This combination of flexible and micro-morphological methods provides a potential reference for drag reduction of underwater machinery.
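    The abstract reports results in terms of a drag reduction rate. A common way to compute it (an assumption here, since the text does not give the formula) is the relative drop in drag compared with a smooth reference surface:

def drag_reduction_rate(drag_smooth, drag_bionic):
    """Percentage reduction in drag relative to the smooth reference surface."""
    return (drag_smooth - drag_bionic) / drag_smooth * 100.0

# e.g. 1.00 N measured on the reference surface vs. 0.82 N on the bionic surface
print(drag_reduction_rate(1.00, 0.82))  # -> 18.0 (% drag reduction); values are made up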

    SIAD: Self-supervised Image Anomaly Detection System

    Full text link
    Recent trends in AIGC have effectively boosted the application of visual inspection. However, most available systems work in a human-in-the-loop manner and cannot provide long-term support for online applications. To take a step forward, this paper outlines an automatic annotation system called SsaA, which works in a self-supervised learning manner to perform continuous online visual inspection in manufacturing automation scenarios. Benefiting from self-supervised learning, SsaA can support a visual inspection application across the whole manufacturing life-cycle. In the early stage, with only anomaly-free data available, unsupervised algorithms handle the pretext task and generate coarse labels for the subsequent data. Supervised algorithms are then trained for the downstream task. With user-friendly web-based interfaces, SsaA makes it convenient to integrate and deploy both the unsupervised and the supervised algorithms. So far, the SsaA system has been adopted in several real-life industrial applications. Comment: 4 pages, 3 figures, ICCV 2023 Demo Track
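    The two-stage life-cycle described above (an unsupervised pretext task producing coarse labels, then supervised downstream training) can be sketched as follows. The choice of IsolationForest and logistic regression, and the synthetic features, are illustrative assumptions, not SsaA's actual components.

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 1.0, size=(500, 16))   # early stage: anomaly-free data
new_normal = rng.normal(0.0, 1.0, size=(150, 16))     # later production data...
new_defect = rng.normal(4.0, 1.0, size=(50, 16))      # ...including clearly shifted defects
incoming_feats = np.vstack([new_normal, new_defect])

# Stage 1 (pretext task): fit on anomaly-free data, coarse-label the incoming data.
detector = IsolationForest(random_state=0).fit(normal_feats)
coarse_labels = (detector.predict(incoming_feats) == -1).astype(int)  # 1 = anomaly

# Stage 2 (downstream task): train a supervised model on the coarse labels.
classifier = LogisticRegression(max_iter=1000).fit(incoming_feats, coarse_labels)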

    Development of Physics Learning Media in the Form of a Pocket-Book Bulletin for Grade VIII Physics Lessons on Force, Evaluated in Terms of Students' Reading Interest

    Full text link
    The aim of this study is to develop learning media in the form of a bulletin presented as a pocket book for Grade VIII physics lessons on the topic of Force, evaluated with respect to content, construct, and language as well as students' reading interest. This is a development study using the Research and Development (R&D) method and follows a procedural development model, i.e. a descriptive model that lays out the steps to be followed to produce the learning media. The data collected are both qualitative and quantitative, gathered through questionnaires and interviews, and are analysed with descriptive qualitative and quantitative techniques. The results show that the developed media, a physics bulletin in pocket-book form, meets the "very good" criterion: the subject-matter expert, the Indonesian-language expert, and the media expert gave an average rating of 86.56%. The media is also rated very good with respect to the improvement of students' reading interest, as shown by the initial and final reading-interest questionnaires given to the students, which indicate an average increase of 11.13%. In addition, a paired t-test was applied to the data of each trial group to assess the significance of the increase in reading interest. For the individual trial, t_calculated = 6.957 > t_table = 1.943 with Sig. = 0.001 < 0.05, which is highly significant. For the small group, t_calculated = 7.848 > t_table = 1.725 with Sig. = 0.000 < 0.05, highly significant. For the large group, t_calculated = 20.214 > t_table = 1.725 with Sig. = 0.000 < 0.05, also highly significant. The conclusion is that learning media in the form of a pocket-book bulletin meets the "very good" criterion in terms of content, construct, and language as well as students' reading interest.
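    For readers unfamiliar with the paired t-test used above, here is a minimal sketch of the computation; the score lists are made-up placeholders, not the study's data.

from scipy import stats

pre_scores = [62, 70, 65, 68, 71, 66, 64]    # hypothetical reading-interest scores before
post_scores = [73, 78, 74, 77, 80, 75, 72]   # hypothetical scores after using the bulletin

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
# The increase is judged significant when p_value < 0.05 (equivalently, when the
# t statistic exceeds the critical t-value for n-1 degrees of freedom).
print(t_stat, p_value)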