1,152 research outputs found

    A New Blind Method for Detecting Novel Steganography

    Get PDF
    Steganography is the art of hiding a message in plain sight. Modern steganographic tools that conceal data in innocuous-looking digital image files are widely available. The use of such tools by terrorists, hostile states, criminal organizations, etc., to camouflage the planning and coordination of their illicit activities poses a serious challenge. Most steganography detection tools rely on signatures that describe particular steganography programs. Signature-based classifiers offer strong detection capabilities against known threats, but they suffer from an inability to detect previously unseen forms of steganography. Novel steganography detection requires an anomaly-based classifier. This paper describes and demonstrates a blind classification algorithm that uses hyper-dimensional geometric methods to model steganography-free JPEG images. The geometric model, comprising one or more convex polytopes, hyper-spheres, or hyper-ellipsoids in the attribute space, provides superior anomaly detection compared to previous research. Experimental results show that the classifier detects, on average, 85.4% of Jsteg steganography images with a mean embedding rate of 0.14 bits per pixel, compared to previous research that achieved a mean detection rate of just 65%. Further, the classification algorithm creates models for as many training classes of data as are available, resulting in a hybrid anomaly/signature or signature-only based classifier, which increases Jsteg detection accuracy to 95%.
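
    As a rough illustration of the geometric modeling idea described above, the sketch below fits a single hyper-sphere to attribute vectors extracted from known-clean images and flags anything falling outside it as anomalous. The feature extraction, the 95th-percentile radius, and the synthetic data are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Minimal sketch of a hyper-sphere anomaly model: fit on attribute vectors
# from known stego-free images, then flag anything outside the fitted
# sphere as potential steganography. The 95th-percentile radius and the
# synthetic features are illustrative choices, not the paper's method.

def fit_hypersphere(clean_features: np.ndarray, quantile: float = 0.95):
    """clean_features: (n_images, n_attributes) array from clean images."""
    center = clean_features.mean(axis=0)
    distances = np.linalg.norm(clean_features - center, axis=1)
    radius = np.quantile(distances, quantile)  # boundary of the "normal" region
    return center, radius

def is_anomalous(features: np.ndarray, center: np.ndarray, radius: float) -> bool:
    """True if the image's attributes fall outside the clean-image model."""
    return np.linalg.norm(features - center) > radius

# Usage with synthetic attribute vectors (stand-ins for real JPEG statistics):
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 16))
center, radius = fit_hypersphere(clean)
suspect = rng.normal(1.5, 1.0, size=16)        # shifted statistics, e.g. embedding artifacts
print(is_anomalous(suspect, center, radius))
```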

    Functional and timing implications of transient faults in critical systems

    Get PDF
    Embedded systems in critical domains, such as the automotive, aviation, and space domains, are often required to guarantee both functional and temporal correctness. Considering transient faults, fault analysis and mitigation approaches are implemented at various levels of the system design in order to maintain functional correctness. However, transient faults and their mitigation methods have a timing impact, which can affect the temporal correctness of the system. In this work, we expose the functional and timing implications of transient faults for critical systems. More precisely, we initially highlight the timing effect of transient faults occurring in the combinational and sequential logic of a processor. Furthermore, we propose a full-stack vulnerability analysis that drives the design of selective hardware-based mitigation for real-time applications. Lastly, we study the timing impact of software-based reliability mitigation methods applied in a COTS GPU, using a fault-tolerant middleware. This work has been partially funded by ANR-FASY (ANR-21-CE25-0008-01) and received funding from ESA through the 4000136514/21/NL/GLC/my co-funded PhD activity "Mixed Software/Hardware-based Fault-tolerance Techniques for Complex COTS System-on-Chip in Radiation Environments" and the GPU4S (GPU for Space) project. Moreover, it was partially supported by the Spanish Ministry of Economy and Competitiveness under grants PID2019-107255GB-C21 and IJC2020-045931-I (Spanish State Research Agency / http://dx.doi.org/10.13039/501100011033), by the European Union's Horizon 2020 grant agreement No 739551 (KIOS CoE), and by the Government of the Republic of Cyprus through the Cyprus Deputy Ministry of Research, Innovation and Digital Policy. Peer Reviewed. Postprint (author's final draft).
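
    To make the timing concern concrete, here is a deliberately simple model (not the paper's analysis) of how a detect-and-re-execute mitigation inflates a task's worst-case execution time; every parameter is hypothetical.

```python
# Illustrative sketch only: estimate worst-case execution time (WCET)
# inflation when transient faults are handled by detection plus task
# re-execution. Parameters and the pessimistic bound are assumptions,
# not the vulnerability analysis proposed in the paper.

def wcet_with_reexecution(base_wcet_ms: float,
                          detection_overhead_ms: float,
                          max_faults_in_window: int) -> float:
    """Pessimistic bound: every tolerated fault forces one full re-execution,
    and the detection mechanism adds a fixed overhead to every run."""
    single_run = base_wcet_ms + detection_overhead_ms
    return single_run * (1 + max_faults_in_window)

# Example: a 10 ms task with 0.5 ms detection overhead, tolerating up to 2 faults.
print(wcet_with_reexecution(10.0, 0.5, 2))   # 31.5 ms worst case
```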

    The Quantum Frontier

    Full text link
    The success of the abstract model of computation, in terms of bits, logical operations, programming language constructs, and the like, makes it easy to forget that computation is a physical process. Our cherished notions of computation and information are grounded in classical mechanics, but the physics underlying our world is quantum. In the early 1980s researchers began to ask how computation would change if we adopted a quantum mechanical, instead of a classical mechanical, view of computation. Slowly, a new picture of computation arose, one that gave rise to a variety of faster algorithms, novel cryptographic mechanisms, and alternative methods of communication. Small quantum information processing devices have been built, and efforts are underway to build larger ones. Even apart from the existence of these devices, the quantum view of information processing has provided significant insight into the nature of computation and information, and a deeper understanding of the physics of our universe and its connections with computation. We start by describing aspects of quantum mechanics that are at the heart of a quantum view of information processing. We give our own idiosyncratic view of a number of these topics in the hope of correcting common misconceptions and highlighting aspects that are often overlooked. A number of the phenomena described were initially viewed as oddities of quantum mechanics. It was quantum information processing, first quantum cryptography and then, more dramatically, quantum computing, that turned the tables and showed that these oddities could be put to practical effect. It is these applications that we describe next. We conclude with a section describing some of the many questions left for future work, especially the mysteries surrounding where the power of quantum information ultimately comes from. Comment: Invited book chapter for Computation for Humanity - Information Technology to Advance Society, to be published by CRC Press. Concepts clarified and style made more uniform in version 2. Many thanks to the referees for their suggestions for improvement.
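
    As a minimal illustration of the kind of quantum behavior the chapter starts from, the following generic state-vector sketch puts a qubit into superposition with a Hadamard gate and samples measurement outcomes; it is not code from the chapter.

```python
import numpy as np

# Toy illustration of one idea behind quantum information processing: a qubit
# can be in a superposition, and measurement yields probabilistic classical
# outcomes. This is a generic simulation, not taken from the book chapter.

ket0 = np.array([1.0, 0.0])                              # |0>
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)     # Hadamard gate

state = H @ ket0                                         # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2                       # Born rule: [0.5, 0.5]

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probabilities)
print(probabilities, np.bincount(samples))               # roughly 500 zeros, 500 ones
```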

    Federated Learning in Wireless Networks

    Get PDF
    Artificial intelligence (AI) is transitioning from a long development period into reality. Notable instances like AlphaGo, Tesla’s self-driving cars, and the recent innovation of ChatGPT stand as widely recognized exemplars of AI applications. These examples collectively enhance the quality of human life. An increasing number of AI applications are expected to integrate seamlessly into our daily lives, further enriching our experiences. Although AI has demonstrated remarkable performance, it is accompanied by numerous challenges. At the forefront of AI’s advancement lies machine learning (ML), a cutting-edge technique that acquires knowledge by emulating the human brain’s cognitive processes. Like humans, ML requires a substantial amount of data to build its knowledge repository. Computational capabilities have surged in alignment with Moore’s law, leading to the realization of cloud computing services like Amazon AWS. Presently, we find ourselves in the era of the IoT, characterized by the ubiquitous presence of smartphones, smart speakers, and intelligent vehicles. This landscape facilitates decentralizing data processing tasks, shifting them from the cloud to local devices. At the same time, a growing emphasis on privacy protection has emerged, as individuals are increasingly concerned with sharing personal data with corporate giants such as Google and Meta. Federated learning (FL) is a new distributed machine learning paradigm. It fosters a scenario where clients collaborate by sharing learned models rather than raw data, thus safeguarding client data privacy while providing a collaborative and resilient model. FL has promised to address privacy concerns. However, it still faces many challenges, particularly within wireless networks. Within the FL landscape, four main challenges stand out: high communication costs, system heterogeneity, statistical heterogeneity, and privacy and security. When many clients participate in the learning process and the wireless communication resources remain constrained, accommodating all participating clients becomes very complex. The contemporary realm of deep learning relies on models encompassing millions and, in some cases, billions of parameters, exacerbating communication overhead when transmitting these parameters. The heterogeneity of the system manifests itself across device disparities, deployment scenarios, and connectivity capabilities. Simultaneously, statistical heterogeneity encompasses variations in data distribution and model composition. Furthermore, the distributed architecture makes FL susceptible to attacks inside and outside the system. This dissertation presents a suite of algorithms designed to address these challenges effectively. New communication schemes are introduced, including Non-Orthogonal Multiple Access (NOMA), over-the-air computation, and approximate communication. These techniques are coupled with gradient compression, client scheduling, and power allocation, each significantly mitigating communication overhead. Asynchronous FL is implemented as a remedy for the intricate issue of system heterogeneity. Both independent and identically distributed (IID) and non-IID data are considered in all scenarios of statistical heterogeneity. Finally, the aggregation of model updates and individual client model initialization collaboratively address security and privacy issues.
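
    The sketch below illustrates the basic federated averaging loop the paradigm above refers to: clients train locally on private data and share only model parameters, which the server averages weighted by dataset size. It is a bare NumPy toy; the communication, scheduling, and privacy mechanisms the dissertation studies are omitted.

```python
import numpy as np

# Minimal federated averaging sketch: local gradient-descent updates on each
# client's private data, followed by a size-weighted average at the server.
# Linear regression is used only to keep the example self-contained.

def local_update(weights, X, y, lr=0.1, epochs=5):
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)    # squared-error gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    sizes = np.asarray(client_sizes, dtype=float)
    return np.average(np.stack(client_weights), axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                               # three clients with private data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                              # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
print(global_w)                                  # approaches [2, -1]
```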

    Zero-Knowledge Proof Systems for QMA

    Full text link
    © 2016 IEEE. Prior work has established that all problems in NP admit classical zero-knowledge proof systems, and under reasonable hardness assumptions for quantum computations, these proof systems can be made secure against quantum attacks. We prove a result representing a further quantum generalization of this fact, which is that every problem in the complexity class QMA has a quantum zero-knowledge proof system. More specifically, assuming the existence of an unconditionally binding and quantum computationally concealing commitment scheme, we prove that every problem in the complexity class QMA has a quantum interactive proof system that is zero-knowledge with respect to efficient quantum computations. Our QMA proof system is sound against arbitrary quantum provers, but only requires an honest prover to perform polynomial-time quantum computations, provided that it holds a quantum witness for a given instance of the QMA problem under consideration.
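
    For readers unfamiliar with commitments, the toy sketch below shows only the commit/reveal interface such a proof system builds on. It is hash-based and therefore only computationally binding and hiding; the scheme the paper assumes is unconditionally binding and quantum computationally concealing, which requires a different construction.

```python
import hashlib
import secrets

# Toy commit/reveal interface. NOTE: this hash-based version is only
# computationally binding and hiding; it illustrates the interface, not the
# unconditionally binding scheme assumed by the paper.

def commit(message: bytes):
    opening = secrets.token_bytes(32)                      # random blinding value
    commitment = hashlib.sha256(opening + message).digest()
    return commitment, opening

def verify(commitment: bytes, message: bytes, opening: bytes) -> bool:
    return hashlib.sha256(opening + message).digest() == commitment

c, r = commit(b"witness bit: 1")
print(verify(c, b"witness bit: 1", r), verify(c, b"witness bit: 0", r))  # True False
```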

    Large-Scale Simulation of Shor's Quantum Factoring Algorithm

    Get PDF
    Shor's factoring algorithm is one of the most anticipated applications of quantum computing. However, the limited capabilities of today's quantum computers only permit a study of Shor's algorithm for very small numbers. Here we show how large GPU-based supercomputers can be used to assess the performance of Shor's algorithm for numbers that are out of reach for current and near-term quantum hardware. First, we study Shor's original factoring algorithm. While theoretical bounds suggest success probabilities of only 3-4%, we find average success probabilities above 50%, due to a high frequency of "lucky" cases, defined as successful factorizations despite unmet sufficient conditions. Second, we investigate a powerful post-processing procedure, by which the success probability can be brought arbitrarily close to one, with only a single run of Shor's quantum algorithm. Finally, we study the effectiveness of this post-processing procedure in the presence of typical errors in quantum processing hardware. We find that the quantum factoring algorithm exhibits a particular form of universality and resilience against the different types of errors. The largest semiprime that we have factored by executing Shor's algorithm on a GPU-based supercomputer, without exploiting prior knowledge of the solution, is 549755813701 = 712321 * 771781. We put forward the challenge of factoring, without oversimplification, a non-trivial semiprime larger than this number on any quantum computing device. Comment: differs from the published version in formatting and style; open source code available at https://jugit.fz-juelich.de/qip/shorgp
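
    The classical post-processing step referred to above can be sketched as follows: given the order r of a modulo N (the quantity Shor's quantum subroutine estimates), candidate factors are obtained from gcd(a^(r/2) ± 1, N). In the sketch below the order is found by brute force, which stands in for the quantum part and only works for tiny N.

```python
from math import gcd

# Sketch of the classical post-processing in Shor's algorithm: given the order
# r of a modulo N, attempt to split N. Brute-force order finding replaces the
# quantum subroutine here and is only feasible for tiny N.

def order(a: int, n: int) -> int:
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_from_order(n: int, a: int):
    r = order(a, n)
    if r % 2:                       # odd order: this 'a' yields no factor
        return None
    y = pow(a, r // 2, n)
    if y == n - 1:                  # a^(r/2) == -1 (mod n): also a failure case
        return None
    p, q = gcd(y - 1, n), gcd(y + 1, n)
    return (p, q) if 1 < p < n else None

n = 21
for a in range(2, n):
    if gcd(a, n) == 1 and (result := factor_from_order(n, a)):
        print(a, result)            # e.g. a=2 yields the factor pair (7, 3)
        break
```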