21 research outputs found

    Roadmap on optical security


    A Survey on Federated Learning for the Healthcare Metaverse: Concepts, Applications, Challenges, and Future Directions

    Recent technological advancements have considerably improved healthcare systems, enabling a range of intelligent healthcare services and improving quality of life. Federated learning (FL), a new branch of artificial intelligence (AI), opens opportunities to deal with privacy issues in healthcare systems and to exploit the data and computing resources available at distributed devices. Additionally, the Metaverse, by integrating emerging technologies such as AI, cloud-edge computing, the Internet of Things (IoT), blockchain, and semantic communications, has transformed many vertical domains in general and the healthcare sector in particular. FL clearly offers many benefits and new opportunities for both conventional and Metaverse healthcare, motivating us to provide a survey on the use of FL for Metaverse healthcare systems. First, we present preliminaries on IoT-based healthcare systems, FL in conventional healthcare, and Metaverse healthcare. The benefits of FL in Metaverse healthcare are then discussed, ranging from improved privacy, scalability, interoperability, data management, and security to automation and low-latency healthcare services. Subsequently, we discuss several applications of FL-enabled Metaverse healthcare, including medical diagnosis, patient monitoring, medical education, infectious disease, and drug discovery. Finally, we highlight significant challenges and potential solutions toward the realization of FL in Metaverse healthcare. Comment: Submitted to peer review.
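
    The FL mechanism this survey builds on keeps raw health records on each device and shares only model updates with an aggregator. A minimal, self-contained sketch of that federated-averaging idea is given below; the function names, the linear model, and the simulated clients are illustrative assumptions, not the survey's own implementation.

        import numpy as np

        def local_update(global_weights, local_data, lr=0.01, epochs=1):
            # Hypothetical client-side step: fit a linear least-squares model on
            # local data starting from the global weights, without sharing raw records.
            w = global_weights.copy()
            X, y = local_data
            for _ in range(epochs):
                grad = X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
                w -= lr * grad
            return w, len(y)

        def federated_average(global_weights, clients):
            # Server-side federated averaging: combine client updates weighted by sample count.
            updates = [local_update(global_weights, data) for data in clients]
            total = sum(n for _, n in updates)
            return sum(w * (n / total) for w, n in updates)

        # Toy run with three simulated devices, each holding private data.
        rng = np.random.default_rng(0)
        clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
        weights = np.zeros(3)
        for _ in range(10):
            weights = federated_average(weights, clients)

    Only the weight vectors leave the simulated devices in each round, which is the property that makes FL attractive for privacy-sensitive healthcare data.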

    Enhanced image encryption scheme with new MapReduce approach for big size images

    Achieving a secure image encryption scheme (IES) for sensitive and confidential data communications, especially in a Hadoop environment, is challenging. An accurate and secure cryptosystem for colour images requires the generation of intricate secret keys that protect the images from diverse attacks. To attain this goal, this work proposed an improved shuffled confusion-diffusion-based colour IES using a hyper-chaotic plain image. First, five different sequences of random numbers were generated. Two of the sequences were used to shuffle the image pixels and bits, while the remaining three were XORed with the values of the image pixels. The performance of the developed IES was evaluated in terms of various measures such as key space size, correlation coefficient, entropy, mean squared error (MSE), peak signal-to-noise ratio (PSNR), and differential analysis. The values of correlation coefficient (0.000732), entropy (7.9997), PSNR (7.61), and MSE (11258) were found to be better, against various attacks, than those of existing techniques, and the developed IES thus outperformed other comparable cryptosystems. It is therefore asserted that the developed IES can be advantageous for encrypting big data sets on parallel machines. Additionally, the developed IES was implemented in a Hadoop environment using MapReduce to evaluate its performance against known attacks. In this process, the given image was first divided and represented in a key-value format. The Map function was then invoked for every key-value pair by implementing a mapper. The Map function processed data splits, represented as key-value pairs, in parallel and without any communication between map processes, consuming a series of key-value pairs and generating zero or more key-value pairs. The Map function also divided the input image into partitions before generating the secret key and XOR matrix, which were then used to encrypt the image. The Reduce function merged the resultant images from the Map tasks to produce the final image. When the developed IES was evaluated against known attacks on both the standard dataset and big-size images, the PSNR did not exceed 7.61 and the correlation coefficient did not exceed 0.000732. As the handling of big-size images differs from that of standard-size images, the findings of this study suggest that the developed IES could be most beneficial for big data and big-size images.
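
    As a rough, single-machine illustration of the shuffle-then-XOR idea described above (a sketch only: a seeded NumPy generator stands in for the hyper-chaotic key streams, and the Hadoop partitioning is omitted), the five sequences could be realised as follows.

        import numpy as np

        def encrypt_image(img, seed=42):
            # Sketch of the shuffle-then-XOR scheme: one random sequence permutes
            # pixel positions, one permutes bit planes, and three are XORed with
            # the pixel values. A seeded PRNG replaces the hyper-chaotic generator.
            rng = np.random.default_rng(seed)
            h, w, c = img.shape
            flat = img.reshape(-1)

            # Sequence 1: shuffle pixel positions (confusion).
            pixel_perm = rng.permutation(flat.size)
            shuffled = flat[pixel_perm]

            # Sequence 2: shuffle the 8 bit planes of every byte.
            bit_perm = rng.permutation(8)
            bits = np.unpackbits(shuffled.reshape(-1, 1), axis=1)[:, bit_perm]
            shuffled = np.packbits(bits, axis=1).reshape(-1)

            # Sequences 3-5: XOR with three key streams (diffusion).
            for _ in range(3):
                key = rng.integers(0, 256, size=shuffled.size, dtype=np.uint8)
                shuffled ^= key

            return shuffled.reshape(h, w, c), pixel_perm, bit_perm

        plain = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
        cipher, pixel_perm, bit_perm = encrypt_image(plain)

    In the Hadoop setting described above, each mapper would apply a routine of this kind to its image partition and the reducer would merge the encrypted partitions into the final cipher image.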

    Automated framework for robust content-based verification of print-scan degraded text documents

    Fraudulent documents frequently cause severe financial damage and security breaches to civil and government organizations. The rapid advances in technology and the widespread availability of personal computers have not reduced the use of printed documents. While digital documents can be verified by many robust and secure methods such as digital signatures and digital watermarks, verification of printed documents still relies on manual inspection of embedded physical security mechanisms. The objective of this thesis is to propose an efficient automated framework for robust content-based verification of printed documents. The principal issue is to achieve robustness with respect to the degradations and increased levels of noise that arise from multiple cycles of printing and scanning. It is shown that classic OCR systems fail under such conditions; moreover, OCR systems typically rely heavily on the use of high-level linguistic structures to improve recognition rates. However, inferring knowledge about the contents of the document image from a priori statistics is contrary to the nature of document verification. Instead, a system is proposed that utilizes specific knowledge of the document to perform highly accurate content verification based on a Print-Scan degradation model and character shape recognition. Such specific knowledge is a reasonable choice in the verification domain, since the document contents must already be known in order to verify them. The system analyses digital multi-font PDF documents to generate a descriptive summary of the document, referred to as a "Document Description Map" (DDM). The DDM is later used for verifying the content of printed and scanned copies of the original documents. The system utilizes 2-D Discrete Cosine Transform based features and an adaptive hierarchical classifier trained with synthetic data generated by a Print-Scan degradation model. The system is tested with varying degrees of Print-Scan channel corruption on a variety of documents, with corruption produced by repetitive printing and scanning of the test documents. Results show that the approach achieves excellent accuracy and robustness despite the high level of noise.
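
    To make the feature-extraction step concrete, the sketch below shows how low-frequency 2-D DCT coefficients of a character image can serve as shape features and be matched against synthetically degraded training glyphs. It is illustrative only: the nearest-neighbour classifier and the crude noise-plus-rebinarisation degradation are assumed stand-ins for the thesis's adaptive hierarchical classifier and Print-Scan degradation model.

        import numpy as np
        from scipy.fft import dctn
        from sklearn.neighbors import KNeighborsClassifier

        def dct_features(glyph, keep=8):
            # Low-frequency 2-D DCT coefficients of a normalised character image;
            # the top-left keep x keep block captures the coarse shape and is
            # fairly robust to print-scan noise.
            coeffs = dctn(glyph.astype(float), norm="ortho")
            return coeffs[:keep, :keep].ravel()

        def degrade(glyph, rng):
            # Crude stand-in for a print-scan degradation model: additive noise
            # followed by re-binarisation.
            noisy = glyph + rng.normal(0.0, 0.3, glyph.shape)
            return (noisy > 0.5).astype(float)

        # Train on synthetically degraded copies of known glyphs, then verify a
        # freshly degraded ("scanned") glyph against the learned shapes.
        rng = np.random.default_rng(0)
        glyphs = {label: (rng.random((32, 32)) > 0.5).astype(float) for label in "ABC"}
        X, y = [], []
        for label, glyph in glyphs.items():
            for _ in range(20):
                X.append(dct_features(degrade(glyph, rng)))
                y.append(label)
        clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
        print(clf.predict([dct_features(degrade(glyphs["A"], rng))]))

    Because the expected glyphs are known in advance, verification reduces to checking that each scanned character's features match the shape recorded for it in the DDM, rather than performing unconstrained OCR.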