SCALING UP TASK EXECUTION ON RESOURCE-CONSTRAINED SYSTEMS
Machine learning tasks are now routinely executed on embedded systems, which makes efficient execution of neural networks under tight CPU, memory, and energy constraints increasingly important. Unlike high-end computing systems, where resources are abundant and reliable, resource-constrained systems have only limited computational capability, limited memory, and a limited energy supply. This dissertation focuses on taking full advantage of the limited resources of these systems to improve task execution efficiency across different stages of the execution pipeline. While the existing literature primarily shrinks the model size to fit the resource constraints, this dissertation improves the execution efficiency of a given set of tasks in two ways. First, we propose SmartON, the first batteryless active event detection system that considers both the event arrival pattern and the harvested energy to determine when the system should wake up and what the duty cycle should be. Second, we propose Antler, which exploits the affinity between all pairs of tasks in a multitask inference system to construct a compact graph representation of the task set for a given overall size budget. To support these algorithmic proposals, we propose two hardware solutions: a controllable capacitor array that can expand the system's energy storage on the fly, and an FRAM array that can accommodate multiple neural networks running on one system.
Doctor of Philosophy
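The affinity-driven merging idea behind Antler can be illustrated with a toy sketch. All numbers and the merging rule below are invented for illustration; this is not the dissertation's actual algorithm:

```python
# Toy sketch of affinity-driven task merging under a size budget.
# Sizes (KB), affinities, and the sharing rule are all hypothetical.
sizes = {"A": 40, "B": 45, "C": 30}
affinity = {("A", "B"): 0.8, ("A", "C"): 0.1, ("B", "C"): 0.3}

def merged_size(size_a, size_b, aff):
    # assume sharing eliminates an affinity-sized fraction of the smaller model
    return size_a + size_b - aff * min(size_a, size_b)

# greedily merge the most affine pair first, then add the remaining tasks
pair, aff = max(affinity.items(), key=lambda kv: kv[1])
total = merged_size(sizes[pair[0]], sizes[pair[1]], aff)
total += sum(s for t, s in sizes.items() if t not in pair)
print(pair, total)  # ('A', 'B') 83.0 -- fits a 100 KB budget
```

The point of the sketch is only that high pairwise affinity lets merged tasks share capacity, shrinking the total footprint below the sum of standalone sizes.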
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops, and journals after the dissemination of the fourth volume in 2015, or they are new. The contributions within each part of this volume are chronologically ordered.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged since the appearance of the fourth book of DSmT in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
Covert Communication in Autoencoder Wireless Systems
The broadcast nature of wireless communications presents security and privacy challenges. Covert communication is a wireless security practice that focuses on intentionally hiding transmitted information. Recently, wireless systems have experienced significant growth, including the emergence of autoencoder-based models. These models, like other DNN architectures, are vulnerable to adversarial attacks, highlighting the need to study their susceptibility to covert communication. While there is ample research on covert communication in traditional wireless systems, the investigation of autoencoder wireless systems remains scarce. Furthermore, many existing covert methods are either detectable analytically or difficult to adapt to diverse wireless systems. The first part of this thesis provides a comprehensive examination of autoencoder-based communication systems in various scenarios and channel conditions. It begins with an introduction to autoencoder communication systems, followed by a detailed discussion of our own implementation and evaluation results. This serves as a solid foundation for the subsequent part of the thesis, in which we propose a GAN-based covert communication model. By treating the covert sender, covert receiver, and observer as generator, decoder, and discriminator neural networks, respectively, we conduct joint training in an adversarial setting to develop a covert communication scheme that can be integrated into any normal autoencoder. Our proposal minimizes the impact on ongoing normal communication, addressing the shortcomings of previous works. We also introduce a training algorithm that allows for the desired tradeoff between covertness and reliability. Numerical results demonstrate the establishment of a reliable and undetectable channel between covert users, regardless of the cover signal or channel condition, with minimal disruption to normal system operation.
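As a toy illustration of the covert-channel idea itself (not the GAN training procedure the thesis proposes), the sketch below superimposes a secret low-power signature on a cover signal. The spreading-code scheme, dimensions, and power level are invented stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
# shared secret spreading code known only to the covert pair (invented scheme)
code = rng.standard_normal(64)
code /= np.linalg.norm(code)

def covert_encode(cover, bit, alpha=0.05):
    # superimpose a low-power signature on the cover; its sign carries the bit
    return cover + (1.0 if bit else -1.0) * alpha * code

def covert_decode(received):
    # correlate with the secret code; an observer without the code sees noise
    return (received @ code) > 0.0

cover = rng.standard_normal(64)
cover -= (cover @ code) * code  # demo only: make decoding deterministic
assert covert_decode(covert_encode(cover, True))
assert not covert_decode(covert_encode(cover, False))
```

In the thesis's setting the encoder and decoder are learned jointly against a discriminator instead of using a fixed code, which is what lets the scheme stay undetectable across cover signals and channel conditions.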
Evaluation Methodologies in Software Protection Research
Man-at-the-end (MATE) attackers have full control over the system on which
the attacked software runs, and try to break the confidentiality or integrity
of assets embedded in the software. Both companies and malware authors want to
prevent such attacks. This has driven an arms race between attackers and
defenders, resulting in a plethora of different protection and analysis
methods. However, it remains difficult to measure the strength of protections
because MATE attackers can reach their goals in many different ways and a
universally accepted evaluation methodology does not exist. This survey
systematically reviews the evaluation methodologies of papers on obfuscation, a
major class of protections against MATE attacks. For 572 papers, we collected
113 aspects of their evaluation methodologies, ranging from sample set types
and sizes, through sample treatment, to the measurements performed. We provide
detailed insights into how the academic state of the art evaluates both the
protections and analyses thereon. In summary, there is a clear need for better
evaluation methodologies. We identify nine challenges for software protection
evaluations, which represent threats to the validity, reproducibility, and
interpretation of research results in the context of MATE attacks.
A Survey on Reversible Image Data Hiding Using the Hierarchical Block Embedding Technique
The use of graphics for data concealment has significantly advanced the fields of secure communication and identity verification. Reversible data hiding (RDH) involves hiding data within host media, such as images, while allowing for the recovery of the original cover. Various RDH approaches have been developed, including difference expansion, interpolation techniques, prediction, and histogram modification; however, these methods have primarily been applied to plain (unencrypted) images. This study introduces a novel reversible image transformation technique called Block Hierarchical Substitution (BHS). BHS enhances the quality of encrypted images and enables lossless restoration of the secret image, as measured by the Peak Signal-to-Noise Ratio (PSNR). The cover image is divided into non-overlapping blocks, and the pixel values within each block are encrypted using the modulo function. This ensures that the linear prediction difference within a block remains consistent before and after encryption, enabling independent data extraction without decrypting the image. To address the challenges of secure multimedia data processing, such as data encryption during transmission and storage, this survey investigates the specific issues of reversible data hiding in encrypted images (RDHEI). Our proposed solution aims to enhance security (low Mean Squared Error) and improve the PSNR value when the method is applied to encrypted images.
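The prediction-preserving property of block-wise modulo encryption can be sketched in a few lines. This is a minimal illustration; the block size and key below are arbitrary choices, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def encrypt_block(block, key):
    # add the same key to every pixel of the block, modulo 256
    return (block.astype(np.int32) + key) % 256

block = rng.integers(0, 256, size=(4, 4))
enc = encrypt_block(block, key=173)  # hypothetical per-block key

# The linear prediction difference between neighbouring pixels, taken mod 256,
# is identical before and after encryption: ((a+k) - (b+k)) mod 256 = (a-b) mod 256.
# This is what lets data be extracted without decrypting the image.
d_plain = np.diff(block.astype(np.int32), axis=1) % 256
d_enc = np.diff(enc, axis=1) % 256
assert np.array_equal(d_plain, d_enc)
```

Because the per-block key cancels in every within-block difference, any embedding scheme keyed to those differences works identically on the plain and encrypted image.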
Iris Biometric Watermarking for Authentication Using Multiband Discrete Wavelet Transform and Singular-Value Decomposition
Watermarking is an advanced technology for preventing intruders from accessing a database, and various techniques have been developed for information security. Many biometric modalities, such as fingerprints, palm prints, gait, iris, and speech, have been recommended for linking with watermarks. Digital watermarking is among the most successful of the available methods. In this paper, multiband wavelet transforms and singular value decomposition are used to establish a watermarking strategy based on biometric information. The use of biometrics instead of conventional watermarks can enhance information protection. The biometric modality used is the iris: the iris template serves as the watermark, and embedding it as a watermark in an image supports information security. The research verifies authentication against different attacks, namely no attack, JPEG compression, Gaussian noise, median filtering, and blurring. The algorithm increases durability and resilience when exposed to geometric and frequency attacks. Finally, the proposed framework can be applied not only to iris biometrics but also to other areas where privacy is critical.
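A common SVD embedding step perturbs the cover's singular values with the (here, iris-derived) watermark. The numpy sketch below shows only the SVD half with an arbitrary strength alpha; it omits the multiband wavelet decomposition and is not necessarily the paper's exact algorithm:

```python
import numpy as np

def embed(cover, wm, alpha=0.1):
    # classic SVD embedding: add the watermark to the singular-value matrix,
    # take its SVD, and rebuild the cover with the new singular values
    U, S, Vt = np.linalg.svd(cover)
    Uw, Sw, Vwt = np.linalg.svd(np.diag(S) + alpha * wm)
    marked = U @ np.diag(Sw) @ Vt
    return marked, (Uw, Vwt, S)  # side information kept for extraction

def extract(marked, side_info, alpha=0.1):
    Uw, Vwt, S = side_info
    _, Sm, _ = np.linalg.svd(marked)          # recover the modified singular values
    D = Uw @ np.diag(Sm) @ Vwt                # rebuild diag(S) + alpha * wm
    return (D - np.diag(S)) / alpha

rng = np.random.default_rng(2)
cover = rng.standard_normal((8, 8))
wm = rng.standard_normal((8, 8)) * 0.1        # stand-in for an iris template
marked, side = embed(cover, wm)
assert np.allclose(extract(marked, side), wm, atol=1e-8)
```

Because the watermark lives in the singular values, which change little under many geometric and frequency attacks, this family of schemes tends to be robust to the attack types listed above.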
Cybersecurity: Past, Present and Future
The digital transformation has created a new digital space known as
cyberspace. This new cyberspace has improved the workings of businesses,
organizations, governments, society as a whole, and the day-to-day life of the
individual. With these improvements come new challenges, and one of the main
challenges is security. The security of the new cyberspace is called
cybersecurity. Cyberspace has created new technologies and environments such as
cloud computing, smart devices, IoTs, and several others. To keep pace with
these advancements in cyber technologies there is a need to expand research and
develop new cybersecurity methods and tools to secure these domains and
environments. This book is an effort to introduce the reader to the field of
cybersecurity, highlight current issues and challenges, and provide future
directions to mitigate or resolve them. The main specializations of
cybersecurity covered in this book are software security, hardware security,
the evolution of malware, biometrics, cyber intelligence, and cyber forensics.
We must learn from the past, evolve our present and improve the future. Based
on this objective, the book covers the past, present, and future of these main
specializations of cybersecurity. The book also examines the upcoming areas of
research in cyber intelligence, such as hybrid augmented and explainable
artificial intelligence (AI). Human and AI collaboration can significantly
increase the performance of a cybersecurity system. Interpreting and explaining
machine learning models, i.e., explainable AI, is an emerging field of study
that has a lot of potential to improve the role of AI in cybersecurity.
Comment: Author's copy of the book published under ISBN: 978-620-4-74421-
AlexNet Convolutional Neural Network Architecture with Cosine and Hamming Similarity/Distance Measures for Fingerprint Biometric Matching
In information security, fingerprint verification is one of the most common recent approaches for verifying human identity through a distinctive pattern. The verification process works by comparing a pair of fingerprint templates and identifying the similarity/matching among them. Several research studies have utilized different techniques for the matching process, such as fuzzy vault and image filtering approaches. Yet, these approaches still suffer from imprecise articulation of the biometrics' interesting patterns.
The emergence of deep learning architectures such as the Convolutional Neural Network (CNN), used extensively for image processing and object detection tasks, has shown outstanding performance compared to traditional image filtering techniques. This paper utilizes a specific CNN architecture known as AlexNet for the fingerprint-matching task. Using this architecture, the study extracts the significant features of the fingerprint image, generates a key based on those biometric features, and stores it in a reference database. Then, using Cosine similarity and Hamming Distance measures, the testing fingerprints are matched against the reference. On the FVC2002 database, the proposed method showed a False Acceptance Rate (FAR) of 2.09% and a False Rejection Rate (FRR) of 2.81%. Comparing these results against studies that utilized traditional approaches such as the Fuzzy Vault demonstrates the efficacy of CNNs for fingerprint matching and emphasizes the usefulness of Cosine similarity and Hamming Distance for the matching step.
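The two matching measures can be sketched in a few lines (toy vectors below, not actual AlexNet features):

```python
import numpy as np

def cosine_similarity(a, b):
    # 1.0 means identical direction; used to compare real-valued feature vectors
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def hamming_distance(a, b):
    # number of differing positions; used to compare binarised fingerprint keys
    return int(np.count_nonzero(a != b))

feat = np.array([0.2, 0.9, 0.4])
assert abs(cosine_similarity(feat, feat) - 1.0) < 1e-12

key_a = np.array([1, 0, 1, 1, 0])
key_b = np.array([1, 1, 1, 0, 0])
assert hamming_distance(key_a, key_b) == 2
```

In a matching pipeline, a probe is accepted when its cosine similarity to a stored template exceeds a threshold (or its Hamming distance falls below one); the FAR/FRR tradeoff reported above is set by that threshold.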
Development of a New Approach to the Problem of Detecting the Integrity Violations of a Digital Image
The energy sector is an integral part of the critical infrastructure of any state and is particularly sensitive to the quality of information security systems. Because of the diversity of activities of the various organizations in the energy industry, almost all threats to information security are relevant to them, in particular unauthorized changes to information content. The aim of this work is to develop a new approach to examining the integrity of images, one that enables the construction of universal detection methods effective regardless of the strength and type of the influences that changed them. This aim was achieved by introducing the concept of a "frequency index" for the singular vectors of the image matrix and by studying the properties of the dependence of a singular vector's frequency index on its number. The most important result of the work is the almost constant growth rate of the trend of the studied function, which results in the linearity of the frequency index of singular vectors for digital images whose integrity has not been violated. This characteristic is sensitive to any changes in the image matrix. The significance of this result is that using the established properties of the dependence of the frequency index of singular vectors on their number makes it possible to develop universal integrity examination methods that are effective regardless of the strength and type of the disturbances that violated the integrity of the image.
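Taking the frequency index of a singular vector to be its number of sign changes (an assumed reading of the abstract), the quantity can be computed as follows. Random data is used only to demonstrate the computation; the reported near-linear trend holds for intact natural images:

```python
import numpy as np

def frequency_index(v):
    # number of sign changes along the singular vector (assumed definition)
    return int(np.count_nonzero(np.diff(np.sign(v)) != 0))

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(32, 32)).astype(float)

U, S, Vt = np.linalg.svd(img)
idx = [frequency_index(U[:, k]) for k in range(U.shape[1])]
# For an intact natural image the abstract reports approximately linear growth
# of idx with k; deviations from that linear trend would flag tampering.
```

An integrity check in this spirit would fit a line to `idx` versus `k` and flag the image when the residuals exceed a tolerance.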