
    Machine Learning-Enhanced Advancements in Quantum Cryptography: A Comprehensive Review and Future Prospects

    Get PDF
    Quantum cryptography has emerged as a promising paradigm for secure communication, leveraging the fundamental principles of quantum mechanics to guarantee information confidentiality and integrity. In recent years, the field of quantum cryptography has witnessed remarkable advancements, and the integration of machine learning techniques has further accelerated its progress. This research paper presents a comprehensive review of the latest developments in quantum cryptography, with a specific focus on the utilization of machine learning algorithms to enhance its capabilities. The paper begins by providing an overview of the principles underlying quantum cryptography, such as quantum key distribution (QKD) and quantum secure direct communication (QSDC). Subsequently, it highlights the limitations of traditional quantum cryptographic schemes and introduces how machine learning approaches address these challenges, leading to improved performance and security. To illustrate the synergy between quantum cryptography and machine learning, several case studies are presented, showcasing successful applications of machine learning in optimizing key aspects of quantum cryptographic protocols. These applications encompass various tasks, including error correction, key rate optimization, protocol efficiency enhancement, and adaptive protocol selection. Furthermore, the paper delves into the potential risks and vulnerabilities introduced by integrating machine learning with quantum cryptography. The discussion revolves around adversarial attacks, model vulnerabilities, and potential countermeasures to bolster the robustness of machine learning-based quantum cryptographic systems. The future prospects of this combined field are also examined, highlighting potential avenues for further research and development. These include exploring novel machine learning architectures tailored for quantum cryptographic applications, investigating the interplay between quantum computing and machine learning in cryptographic protocols, and devising hybrid approaches that synergistically harness the strengths of both fields. In conclusion, this research paper emphasizes the significance of machine learning-enhanced advancements in quantum cryptography as a transformative force in securing future communication systems. The paper serves as a valuable resource for researchers, practitioners, and policymakers interested in understanding the state-of-the-art in this multidisciplinary domain and charting the course for its future advancements.
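    As background for the QKD protocols this review builds on, the toy simulation below illustrates the basis-sifting step of a BB84-style key exchange: Alice and Bob keep only the bits where their randomly chosen measurement bases agree. It is an illustrative sketch assuming an ideal, noise-free channel with no eavesdropper check, not an implementation from the paper.

```python
import secrets

def bb84_sift(n_bits: int = 1000):
    """Toy BB84 sifting: keep only bits where Alice's and Bob's
    randomly chosen bases happen to agree (about 50% of them)."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n_bits)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_bits)]  # 0: rectilinear, 1: diagonal
    bob_bases   = [secrets.randbelow(2) for _ in range(n_bits)]

    sifted = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:   # matching bases: Bob's measurement equals Alice's bit
            sifted.append(bit)   # (ideal channel assumed; no noise, no Eve)
    return sifted

key = bb84_sift()
print(f"sifted key length: {len(key)} of 1000 raw bits")
```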

    A New Approach to Synthetic Image Evaluation

    Get PDF
    This study is dedicated to enhancing the effectiveness of Optical Character Recognition (OCR) systems, with a special emphasis on Arabic handwritten digit recognition. The choice to focus on Arabic handwritten digits is twofold: first, there has been relatively less research conducted in this area compared to its English counterpart; second, the recognition of Arabic handwritten digits presents more challenges due to the inherent similarities between different Arabic digits. OCR systems, engineered to decipher both printed and handwritten text, often face difficulties in accurately identifying low-quality or distorted handwritten text. The quality of the input image and the complexity of the text significantly influence their performance. However, data augmentation strategies can notably improve these systems' performance. These strategies generate new images that closely resemble the original ones, albeit with minor variations, thereby enriching the model's learning and enhancing its adaptability. The research found Conditional Variational Autoencoders (C-VAE) and Conditional Generative Adversarial Networks (C-GAN) to be particularly effective in this context. These two generative models stand out due to their superior image generation and feature extraction capabilities. A significant contribution of the study has been the formulation of the Synthetic Image Evaluation Procedure, a systematic approach designed to evaluate and amplify the generative models' image generation abilities. This procedure facilitates the extraction of meaningful features, computation of the Fréchet Inception Distance (FID) score, and supports hyper-parameter optimization and model modifications.
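    The FID score mentioned above is the standard Fréchet distance between Gaussian fits of real and generated feature sets. A minimal sketch follows, assuming feature matrices have already been extracted by an Inception-style network; the paper's exact evaluation pipeline may differ.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """FID between two feature sets of shape (n_samples, n_features):
    ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)

    covmean = linalg.sqrtm(cov_r @ cov_f)   # matrix square root of the covariance product
    if np.iscomplexobj(covmean):            # numerical error can add tiny imaginary parts
        covmean = covmean.real

    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```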

    Trustworthy Federated Learning: A Survey

    Full text link
    Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI), enabling collaborative model training across distributed devices while maintaining data privacy. As the importance of FL increases, addressing trustworthiness issues in its various aspects becomes crucial. In this survey, we provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL. Despite the growth in literature on trustworthy centralized Machine Learning (ML)/Deep Learning (DL), further efforts are necessary to identify trustworthiness pillars and evaluation metrics specific to FL models, as well as to develop solutions for computing trustworthiness levels. We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy. Each pillar represents a dimension of trust, further broken down into different notions. Our survey covers trustworthiness challenges at every level in FL settings. We present a comprehensive architecture of Trustworthy FL, addressing the fundamental principles underlying the concept, and offer an in-depth analysis of trust assessment mechanisms. In conclusion, we identify key research challenges related to every aspect of Trustworthy FL and suggest future research directions. This comprehensive survey serves as a valuable resource for researchers and practitioners working on the development and implementation of Trustworthy FL systems, contributing to a more secure and reliable AI landscape.
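    For readers new to FL, here is a minimal FedAvg-style sketch of the collaborative training loop the survey assumes: clients train locally on private data and only model weights travel to the server. Client weighting, secure aggregation, and communication details are omitted, and all names (`fed_avg`, `client_loaders`) are illustrative.

```python
import copy
import torch
import torch.nn as nn

def fed_avg(global_model: nn.Module, client_loaders, rounds: int = 10, lr: float = 0.01):
    """Each round: every client trains a copy of the global model on its
    own data, then the server averages the resulting weights."""
    for _ in range(rounds):
        client_states = []
        for loader in client_loaders:           # raw data never leaves the client
            local = copy.deepcopy(global_model)
            opt = torch.optim.SGD(local.parameters(), lr=lr)
            for x, y in loader:                 # one local epoch, for brevity
                opt.zero_grad()
                nn.functional.cross_entropy(local(x), y).backward()
                opt.step()
            client_states.append(local.state_dict())

        # Unweighted mean of client weights (real FedAvg weights by data size).
        avg_state = {
            k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
            for k in client_states[0]
        }
        global_model.load_state_dict(avg_state)
    return global_model
```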

    Fourteenth Biennial Status Report: March 2017 - February 2019

    No full text

    BC4LLM: Trusted Artificial Intelligence When Blockchain Meets Large Language Models

    Full text link
    In recent years, artificial intelligence (AI) and machine learning (ML) have been reshaping society's production methods and productivity, and also changing the paradigm of scientific research. Among them, the AI language model represented by ChatGPT has made great progress. Such large language models (LLMs) serve people in the form of AI-generated content (AIGC) and are widely used in consulting, healthcare, and education. However, it is difficult to guarantee the authenticity and reliability of AIGC training data. In addition, distributed AI training carries hidden risks of privacy disclosure. Moreover, content generated by LLMs is difficult to identify and trace, and cross-platform mutual recognition is hard to achieve. In the coming era of AI powered by LLMs, these information security issues will be amplified enormously and affect everyone's life. We therefore consider empowering LLMs with blockchain technology, whose security features are well suited to this task, and propose a vision for trusted AI. This paper mainly introduces the motivation and technical route of blockchain for LLMs (BC4LLM), including a reliable learning corpus, a secure training process, and identifiable generated content. It also reviews potential applications and future challenges, especially in the frontier communication networks field, including network resource allocation, dynamic spectrum sharing, and semantic communication. Combining this work with the prospects of blockchain and LLMs, we expect it to help the early realization of trusted AI and provide guidance for the academic community.

    MACHINE LEARNING APPLICATIONS TO DATA RECONSTRUCTION IN MARINE BIOGEOCHEMISTRY.

    Get PDF
    Driven by the increase of greenhouse gas emissions, climate change is causing significant shifts in the Earth's climatic patterns, profoundly affecting our oceans. In recent years, our capacity to monitor and understand the state and variability of the ocean has been significantly enhanced, thanks to improved observational capacity, new data-driven approaches, and advanced computational capabilities. Contemporary marine analyses typically integrate multiple data sources: numerical models, satellite data, autonomous instruments, and ship-based measurements. Temperature, salinity, and several other essential ocean variables, such as oxygen, chlorophyll, and nutrients, are among the most frequently monitored. Each of these sources and variables, while providing valuable insights, has distinct limitations in terms of uncertainty, spatial and temporal coverage, and resolution. The application of deep learning offers a promising avenue for addressing challenges in data prediction, notably in data reconstruction and interpolation, thus enhancing our ability to monitor and understand the ocean. This thesis proposes and evaluates the performance of a variety of neural network architectures, examining the intricate relationship between methods, ocean data sources, and challenges. A special focus is given to the biogeochemistry of the Mediterranean Sea. A primary objective is predicting low-sampled biogeochemical variables from high-sampled ones. For this purpose, two distinct deep learning models have been developed, each specifically tailored to the dataset used for training. Addressing this challenge not only boosts our capability to predict biogeochemical variables in the highly heterogeneous Mediterranean Sea region but also increases the usefulness of observational systems such as the BGC-Argo floats. Additionally, a method is introduced to integrate BGC-Argo float observations with outputs from an existing deterministic marine ecosystem model, refining our ability to interpolate and reconstruct biogeochemical variables in the Mediterranean Sea. As the development of novel neural network methods progresses rapidly, the task of establishing benchmarks for data-driven ocean modeling is far from complete. This work offers insights into various applications, highlighting their strengths and limitations, as well as the important relationship between methods and datasets.
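    As a rough illustration of the central task, predicting a low-sampled biogeochemical variable from high-sampled ones, here is a minimal PyTorch regressor. The feature set, architecture, and all names are placeholder assumptions, not the models developed in the thesis.

```python
import torch
import torch.nn as nn

class ProfileRegressor(nn.Module):
    """Illustrative mapping from high-sampled inputs (e.g., temperature,
    salinity, oxygen, depth, location) to one low-sampled target
    (e.g., nitrate or chlorophyll)."""
    def __init__(self, n_inputs: int = 5, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

model = ProfileRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 5)   # stand-in for float/model input features
y = torch.randn(256, 1)   # stand-in for the target variable
for _ in range(100):
    opt.zero_grad()
    nn.functional.mse_loss(model(x), y).backward()
    opt.step()
```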

    Partnerships for skills : investing in training for the 21st century

    Get PDF

    Sensing the care: Advancing unobtrusive sensing solutions to support informal caregivers of older adults with cognitive impairment

    Get PDF
    Older adults (65 years and above) make up a growing proportion of the world's population, which is anticipated to increase further in the coming decades. As individuals age, they often become more vulnerable to cognitive impairments, necessitating a diverse array of care and support services from their caregivers to uphold their quality of life. However, the scarcity of professional caregivers and care facilities, compounded by the preference of many older adults to remain in their own homes, places a significant burden on informal caregivers, adversely affecting their physical, mental, and social well-being. To assist informal caregivers, numerous sensing solutions have been developed; however, many are not optimally suited for older adult care, particularly in cases of cognitive impairment. The overarching aim of this thesis was therefore to develop and evaluate Unobtrusive Sensing Solutions (USSs) for in-home monitoring of older adults with cognitive impairment (OwCI) who live alone in their own houses, to ease the burden on their informal caregivers. In the 'Explore and Scope' part, a scoping review was conducted to identify available unobtrusive sensing technology that can be implemented in older adult care. Subsequently, in the 'Develop and Test' part, Wi-Fi CSI technology was utilized to collect a dataset illustrating physical agitation activities (Wi-Gitation). Evaluation of the Wi-Gitation dataset revealed a challenge of generalization across different domains (environments). To address this, the Inter-data Selected Sequential Transfer Learning framework was proposed and implemented, as sketched below. Lastly, in the 'Design to Communicate' part, the thesis focused on identifying the needs and requirements of informal caregivers of OwCI towards USSs. These were gathered through interviews and surveys, informing the development of a Lo-Fi prototype for an interaction platform. Overall, the results obtained in this thesis not only enhance the development of Wi-Fi CSI sensing (specifically for OwCI care) but also provide valuable insights into the informational and design requirements of informal caregivers, thereby promoting the context-aware development of USSs.
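    The abstract names the Inter-data Selected Sequential Transfer Learning framework without detail; the sketch below shows only the general shape of sequential fine-tuning through source domains before adapting to a data-scarce target domain. The fixed ordering, training loop, and all names are our illustrative assumptions, not the thesis's selection policy.

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 1, lr: float = 1e-3):
    """Plain supervised fine-tuning on one domain's data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            opt.step()
    return model

def sequential_transfer(model: nn.Module, source_loaders, target_loader):
    """Fine-tune through a selected sequence of source domains
    (e.g., other rooms/environments for Wi-Fi CSI data), then adapt
    on the scarce target-domain data last."""
    for loader in source_loaders:
        model = train(model, loader)
    return train(model, target_loader)
```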

    Face Spoof Detection Using Convolutional Neural Networks

    Get PDF
    Department: Engineering and Applied Sciences. Major: 060509 – Computer Science; Software of Computer Systems and Networks. Supervisor: PhD, Associate Professor Leyla Muradkhanli.
    In recent years, the use of facial recognition technology has become increasingly prevalent, finding application in various areas, including security, authentication, and access management. With the extensive employment of face recognition technology has come an increase in the prevalence of face spoofing, wherein offenders manipulate the system with unauthentic facial information. This issue poses a major risk to the dependability and protection of facial recognition technology and calls for advanced, robust techniques to detect face spoofing effectively. This thesis proposes a technique that employs convolutional neural networks (CNN) to identify fraudulent facial manipulation. The proposed method involves training a deep neural network on a comprehensive collection of genuine and fabricated facial images. Two streams are employed: in the first, RGB images are converted to grayscale and facial reflection features are extracted; in the second, face color features are extracted directly from the RGB images. These two sets of features are then combined and used to identify face spoofing. The CNN comprises several convolution and pooling layers, enabling it to identify distinguishing features in the input images. After training, the model is employed to classify a presented facial image as either authentic or fraudulent. To determine the efficacy of the proposed technique, I employ a standardized dataset for identifying counterfeit or altered facial attributes, on which the proposed approach achieves an average precision of 89%. The method presents several benefits compared to current face spoof detection techniques. First, the deep CNN enables the model to learn intricate and discriminative features from the input images, improving classification accuracy. Additionally, the method is computationally efficient, enabling its use in real-time scenarios. It also withstands a range of presentation attacks on facial recognition systems, such as print and replay attacks. The findings from this study aid the progression of face recognition technology by enhancing the accuracy and dependability of spoof detection systems, with practical applications in security, biometric identification, and digital forensics. The suggested method could substantially enhance the dependability and safety of facial recognition systems, boosting their practical value and credibility.
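    A minimal PyTorch sketch of the two-stream design described above: one stream sees a grayscale conversion (reflection-like cues), the other the RGB image (color cues), and the pooled features are concatenated for genuine-vs-spoof classification. Layer sizes and all names are illustrative assumptions, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    """Conv + ReLU + 2x2 max-pool, the repeated unit of each stream."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
    )

class TwoStreamSpoofNet(nn.Module):
    """Stream 1: grayscale input (reflection-like cues).
       Stream 2: RGB input (color cues).
       Pooled features are concatenated and classified genuine vs. spoof."""
    def __init__(self):
        super().__init__()
        self.gray_stream = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
        self.rgb_stream  = nn.Sequential(conv_block(3, 16), conv_block(16, 32))
        self.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(64, 2)   # 32 + 32 pooled features

    def forward(self, rgb):
        # luminance-weighted grayscale conversion inside the model
        gray = 0.299 * rgb[:, 0:1] + 0.587 * rgb[:, 1:2] + 0.114 * rgb[:, 2:3]
        f1 = self.pool(self.gray_stream(gray))
        f2 = self.pool(self.rgb_stream(rgb))
        return self.classifier(torch.cat([f1, f2], dim=1))

logits = TwoStreamSpoofNet()(torch.randn(4, 3, 112, 112))  # -> shape (4, 2)
```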

    Computational Framework For Neuro-Optics Simulation And Deep Learning Denoising

    Get PDF
    The application of machine learning techniques to microscopic image restoration has shown superior performance. However, the development of such techniques has been hindered by the demand for large datasets and the lack of ground truth. To address these challenges, this study introduces a computer simulation model that accurately captures the neural anatomic volume, fluorescence light transport within the tissue volume, and the photon collection process of microscopic imaging sensors. The primary goal of this simulation is to generate realistic image data for training and validating machine learning models. One notable aspect of this study is the incorporation of a machine learning denoiser into the simulation, which accelerates the computational efficiency of the entire process. By reducing noise levels in the generated images, the denoiser significantly enhances the simulation's performance, allowing for faster and more accurate modeling and analysis of microscopy images. This approach addresses the limitations of data availability and ground-truth annotation, offering a practical and efficient solution for microscopic image restoration, and opens new possibilities for training and validating machine learning models despite the challenges of large datasets and missing ground truth.
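    The abstract does not specify the denoiser's architecture; the following is a minimal DnCNN-style residual denoiser trained on simulated noisy/clean pairs, as one plausible instantiation. All names, shapes, and the noise model are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """DnCNN-style denoiser: the network predicts the noise component
    and subtracts it from the input (residual learning)."""
    def __init__(self, channels: int = 1, width: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)

# Training loop over simulator output (stand-in tensors here).
model = ResidualDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(8, 1, 64, 64)                # simulated ground truth
noisy = clean + 0.1 * torch.randn_like(clean)   # simulated sensor noise
for _ in range(50):
    opt.zero_grad()
    nn.functional.mse_loss(model(noisy), clean).backward()
    opt.step()
```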