
    Inverse Design of Metamaterials for Tailored Linear and Nonlinear Optical Responses Using Deep Learning

    The conventional process for developing an optimal design for nonlinear optical responses is based on a trial-and-error approach that is largely inefficient and does not necessarily lead to an ideal result. Deep learning can automate this process and widen the realm of nonlinear geometries and devices. This research illustrates a deep learning framework used to create an optimal plasmonic design for metamaterials with specific desired optical responses, both linear and nonlinear. The algorithm can produce plasmonic patterns that maximize the second-harmonic nonlinear effects of a nonlinear metamaterial. A nanolaminate metamaterial is used as the nonlinear material, and plasmonic patterns are fabricated on the prepared nanolaminate to demonstrate the validity and efficacy of the deep learning algorithm for second-harmonic generation. Photonic upconversion from the infrared regime to the visible spectrum can occur through sum-frequency generation. The deep learning algorithm was improved to optimize a nonlinear plasmonic metamaterial for sum-frequency generation. The framework was then further expanded using transfer learning to lessen the computational resources required to optimize metamaterials for new design parameters. The deep learning architecture applied in this research can be extended to other optical responses and drive the innovation of novel optical applications.
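
    The abstract gives no implementation details, but the general idea of surrogate-assisted inverse design can be sketched roughly. Below is a minimal, hypothetical PyTorch illustration (the network shape, pattern size, and forward model are all invented): a surrogate network stands in for full-wave simulation of the second-harmonic response, and gradient ascent on the input pattern searches for a design that maximizes it.

        import torch
        import torch.nn as nn

        # Hypothetical surrogate: maps a flattened 16x16 plasmonic pattern mask to
        # a predicted second-harmonic intensity. In practice it would be trained on
        # electromagnetic simulation data before being used for design.
        surrogate = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

        # Inverse design: optimize the pattern itself by gradient ascent on the
        # surrogate's predicted response.
        pattern = torch.rand(1, 256, requires_grad=True)
        opt = torch.optim.Adam([pattern], lr=0.01)
        for step in range(500):
            opt.zero_grad()
            shg = surrogate(torch.sigmoid(pattern))  # sigmoid keeps pixels in [0, 1]
            (-shg).mean().backward()                 # ascend on predicted intensity
            opt.step()

        design = (torch.sigmoid(pattern) > 0.5).float()  # binarize to a fabricable mask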

    DeepMem: ML Models as storage channels and their (mis-)applications

    Machine learning (ML) models are overparameterized to support generality and avoid overfitting. Prior works have shown that these additional parameters can be used for both malicious purposes (e.g., hiding a model covertly within a trained model) and beneficial ones (e.g., watermarking a model). In this paper, we propose a novel information-theoretic perspective on the problem: we consider the ML model as a storage channel whose capacity increases with overparameterization. Specifically, we consider a sender that embeds arbitrary information in the model at training time, which can be extracted by a receiver with black-box access to the deployed model. We derive an upper bound on the capacity of the channel based on the number of available parameters. We then explore black-box write and read primitives that allow the attacker to (i) store data in an optimized way within the model by augmenting the training data at the transmitter side, and (ii) read it by querying the model after it is deployed. We also analyze the detectability of the writing primitive and consider a new version of the problem that takes information storage covertness into account. Specifically, to obtain storage covertness, we introduce a new constraint such that the data augmentation used for the write primitive minimizes the distribution shift from the initial (baseline task) distribution. This constraint introduces a level of "interference" with the initial task, thereby limiting the channel's effective capacity. Therefore, we develop optimizations to improve the capacity in this case, including a novel ML-specific substitution-based error correction protocol. We believe that the proposed modeling of the problem offers new tools to better understand and mitigate potential vulnerabilities of ML, especially in the context of increasingly large models.
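
    As a toy illustration of the write/read primitives described above (not the paper's actual protocol), one can encode message bits as the labels of fixed "key" inputs appended to the training set, then read them back with black-box queries. The key placement, classifier, and sizes below are invented, and the covertness constraint (minimizing distribution shift) is deliberately ignored here.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)

        # Baseline task: a toy two-class problem.
        X = rng.normal(size=(500, 20))
        y = (X[:, 0] > 0).astype(int)

        # Write primitive: each message bit becomes the label of a fixed random
        # key input added to the training data (a crude form of augmentation).
        bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
        keys = rng.normal(loc=5.0, size=(len(bits), 20))  # off-distribution probes
        model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
        model.fit(np.vstack([X, keys]), np.concatenate([y, bits]))

        # Read primitive: black-box queries on the key inputs recover the message.
        decoded = model.predict(keys)
        print(decoded, (decoded == bits).mean())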

    Information Forensics and Security: A quarter-century-long journey

    Information forensics and security (IFS) is an active R&D area whose goal is to ensure that people use devices, data, and intellectual property for authorized purposes and to facilitate the gathering of solid evidence to hold perpetrators accountable. For over a quarter century, since the 1990s, the IFS research area has grown tremendously to address the societal needs of the digital information era. The IEEE Signal Processing Society (SPS) has emerged as an important hub and leader in this area, and this article celebrates some landmark technical contributions. In particular, we highlight the major technological advances made by the research community in selected focus areas of the field during the past 25 years and present future trends.

    Cybersecurity: Past, Present and Future

    The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, and governments, society as a whole, and the day-to-day life of individuals. With these improvements come new challenges, and one of the main challenges is security. The security of this new cyberspace is called cybersecurity. Cyberspace has given rise to new technologies and environments such as cloud computing, smart devices, and the IoT. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines upcoming areas of research in cyber intelligence, such as hybrid augmented and explainable artificial intelligence (AI). Human-AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study and has great potential to improve the role of AI in cybersecurity.
    Comment: Author's copy of the book published under ISBN: 978-620-4-74421-

    The Informal Screen Media Economy of Ukraine

    This research explores informal film translation (voice over and subtitling) and distribution (pirate streaming and torrenting) practices in Ukraine, which together comprise what I call the informal screen media economy of Ukraine. This study addresses wider debates around the distinct reasons media piracy exists in non-Western economies. There is already a considerable body of research on piracy outside the traditional anti-piracy discourse, one that recognises that informal media are neither unequivocally destructive nor necessarily marginal, particularly in non-Western countries. Yet there remain gaps in the range of geographies and specific types of pirate practices being studied. Furthermore, academics often insufficiently address the intricate conditions of the context within which a given pirate activity is undertaken. Finally, whereas many researchers talk about pirates, considerably fewer talk to them. This project sets out to address these gaps. Specifically, I examine the distinct practicalities of informal screen media practices in Ukraine through netnographic observations of pirate sites and in-depth interviews with Ukrainian informal screen media practitioners. I explore their notably diverse motivations for engaging in these activities and how they negotiate their practices within the complex economic, cultural, and regulatory context of Ukraine. I find that, contrary to common perceptions, Ukrainian pirates do not oppose copyright law but operate largely within and around it. Instead, a more important factor in Ukrainian piracy is the economics of the Ukrainian language. This is reflected in the language exclusivity inherent to most Ukrainian pirate distribution platforms, as well as in the motives of some informal translators, for whom their practice is a form of language activism. Overall, I argue for a more holistic approach to researching the informal space of the media economy, especially in non-Western contexts, one that recognises the heterogeneity of this space and accordingly explores the intricate factors behind its existence. In addition, this project offers a methodological contribution by providing a detailed reflection on the use of ethnographic methods to study a pirate economy in a non-Western, non-anglophone country.

    SoC-based FPGA architecture for image analysis and other highly demanding applications

    Nowadays, the development of algorithms focuses on performance-efficient and energy-efficient computations. Technologies such as the field-programmable gate array (FPGA) and the FPGA-based system on chip (FPGA/SoC) have shown their ability to accelerate compute-intensive applications while saving power, owing to their capacity for high parallelism and architectural reconfiguration. Currently, existing design cycles for FPGA/SoC are time-consuming because of the complexity of the architecture. Therefore, to bridge the gap between applications and FPGA/SoC architectures and to obtain efficient hardware designs for image analysis and other highly demanding applications using high-level synthesis (HLS) tools, two complementary strategies are considered: ad-hoc techniques and performance estimation. Regarding ad-hoc techniques, three highly demanding applications were accelerated through HLS tools: a pulse shape discriminator for cosmic rays, automatic pest classification, and re-ranking for information retrieval, emphasizing the benefits of applying compression techniques to such applications when targeting FPGA/SoC devices. Furthermore, a comprehensive performance estimator for hardware acceleration is proposed in this thesis to effectively predict resource utilization and latency for FPGA/SoC, building a bridge between the application and architectural domains. The tool integrates analytical models for performance prediction and a design space explorer (DSE) engine that provides high-level insights to hardware developers, composed of two independent sub-engines: a DSE based on single-objective optimization and a DSE based on evolutionary multi-objective optimization.
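
    As a loose illustration of what an analytical performance model coupled with a single-objective DSE might look like (all constants and the cost model below are invented, not the thesis's estimator):

        # Hypothetical analytical model: latency and DSP usage of a loop nest as a
        # function of the HLS unroll factor.
        def estimate(unroll, trip_count=1024, ii=1, dsp_per_lane=3):
            latency = (trip_count // unroll) * ii  # cycles, idealized pipeline
            dsps = unroll * dsp_per_lane
            return latency, dsps

        # Single-objective DSE: minimize latency subject to a DSP budget.
        DSP_BUDGET = 220
        feasible = [(estimate(u)[0], u) for u in (1, 2, 4, 8, 16, 32, 64)
                    if estimate(u)[1] <= DSP_BUDGET]
        best_latency, best_unroll = min(feasible)
        print(f"unroll={best_unroll}, estimated latency={best_latency} cycles")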

    A dual watermarking scheme for identity protection

    A novel dual watermarking scheme with potential applications in identity protection, media integrity maintenance, and copyright protection in both electronic and printed media is presented. The proposed watermarking scheme uses the owner's signature and fingerprint as watermarks, through which the ownership and validity of the media can be proven and kept intact. To begin with, the proposed watermarking scheme is implemented on continuous-tone/greyscale images, and it is later extended to images produced via multitoning, an advanced version of halftoning-based printing. The proposed watermark embedding is robust and imperceptible. Experimental simulations and evaluations of the proposed method show excellent results from both objective and subjective viewpoints.
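
    The paper's embedding method is not described in the abstract; purely to illustrate the dual-watermark idea (and with none of the robustness the paper claims), here is a naive sketch that hides two binary watermarks, a "signature" and a "fingerprint", in the two least significant bit planes of a greyscale image:

        import numpy as np

        def embed(cover, wm1, wm2):
            out = cover & 0b11111100            # clear the two lowest bit planes
            return out | (wm1 & 1) | ((wm2 & 1) << 1)

        def extract(marked):
            return marked & 1, (marked >> 1) & 1

        cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        wm1 = np.random.randint(0, 2, (64, 64), dtype=np.uint8)   # "signature"
        wm2 = np.random.randint(0, 2, (64, 64), dtype=np.uint8)   # "fingerprint"
        marked = embed(cover, wm1, wm2)
        r1, r2 = extract(marked)
        assert (r1 == wm1).all() and (r2 == wm2).all()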

    Teaching and Collecting Technical Standards: A Handbook for Librarians and Educators

    Technical standards are a vital source of information, providing guidelines for the design, manufacture, testing, and use of whole products, materials, and components. To prepare students—especially engineering students—for the workforce, universities are increasing the use of standards within the curriculum, and employers believe it is important for recent university graduates to be familiar with standards. Despite the critical role standards play within academia and the workforce, little information is available on the development of standards information literacy, which includes the ability to understand the standardization process; identify types of standards; and locate, evaluate, and use standards effectively. Libraries and librarians are a critical part of standards education, and much of the discussion has focused on the curation of standards within libraries. However, librarians also have substantial experience in developing and teaching standards information literacy curricula. With the need for universities to develop a workforce that is well educated in the use of standards, librarians and course instructors can apply their experience in information literacy toward teaching students the knowledge and skills regarding standards that they will need to be successful in their field. This title provides background information for librarians on technical standards as well as collection development best practices. It also creates a model for librarians and course instructors to use when building a standards information literacy curriculum.

    An Improved VGG16 and CNN-LSTM Deep Learning Model for Image Forgery Detection

    As the fields of image processing and computer vision continue to develop, it is possible to create edited images that seem more natural than ever before, and distinguishing real photos from fakes has become a formidable obstacle. Image forgery has become more common as the multimedia capabilities of personal computers have developed over the previous several years, since it is now simpler to produce fake images. Because image object fabrication can obscure critical evidence, techniques for detecting it have been intensively investigated for quite some time. The publicly available datasets are insufficient to deal with these problems adequately. Our work recommends using a deep learning based image inpainting technique to create a model that detects fabricated images. To further detect copy-move forgeries in images, we use a CNN-LSTM and an improved VGG adaptation network. Our approach could be useful in cases where classifying the data is otherwise impossible. Researchers seldom use deep learning theory for this problem, preferring instead to depend on tried-and-true techniques such as image processing and classifiers. In this article, we recommend the CNN-LSTM and improved VGG-16 convolutional neural networks for intra-frame forensic analysis of altered images.
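
    The exact architecture is not given in the abstract; the sketch below is a generic guess at how a CNN-LSTM forgery classifier could be wired in PyTorch: a small CNN extracts per-patch features, an LSTM aggregates them across a sequence of patches, and a linear head predicts authentic vs. forged.

        import torch
        import torch.nn as nn

        class CNNLSTMDetector(nn.Module):
            def __init__(self, hidden=128):
                super().__init__()
                self.cnn = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (N, 64)
                )
                self.lstm = nn.LSTM(64, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 2)             # authentic / forged

            def forward(self, patches):                      # (B, T, 3, H, W)
                b, t = patches.shape[:2]
                feats = self.cnn(patches.flatten(0, 1)).view(b, t, -1)
                out, _ = self.lstm(feats)
                return self.head(out[:, -1])                 # classify at last step

        logits = CNNLSTMDetector()(torch.rand(2, 8, 3, 64, 64))  # 8 patches/image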