
    Towards trustworthy computing on untrustworthy hardware

    Historically, hardware was considered inherently secure and trusted due to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern hardware faces threats that manifest mainly as undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system- and microarchitectural-level attacks. If realized, these threats can push hardware into abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in daily life and in safety-critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. Our knowledge of the potential consequences of real-life threats to hardware trust remains limited, however, given the small number of real-life reports and the plethora of ways in which hardware trust could be undermined. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence. This last line of defence allows us to confront live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs. This thesis proposes three frameworks towards trustworthy computing on untrustworthy hardware. The presented frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender. The framework is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial-of-service attacks. The framework is implemented on a DDR3 DRAM after demonstrating its vulnerability to obscured latency-extension attacks. The third framework studies the possible deployment of untrustworthy hardware elements in the analog front end and the consequent integrity issues that might arise at the analog-digital boundary of systems on chip. The framework uses machine-learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
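    The core idea of the second framework, characterizing normal latency and flagging extensions, can be illustrated with a minimal sketch. This is not the thesis's actual model; the function names and the simple Gaussian profile (flagging samples beyond k standard deviations) are illustrative assumptions standing in for the machine-learning characterization it describes.

```python
import statistics

def characterize(baseline):
    """Fit a simple Gaussian profile (mean, stdev) to trusted baseline latencies."""
    return statistics.mean(baseline), statistics.stdev(baseline)

def is_suspicious(latency, profile, k=4.0):
    """Flag a latency deviating more than k standard deviations from the profile."""
    mu, sigma = profile
    return abs(latency - mu) > k * sigma

# Illustrative DRAM read latencies (ns) measured under trusted conditions
baseline = [45, 47, 46, 48, 44, 46, 47, 45, 46, 48]
profile = characterize(baseline)

print(is_suspicious(46, profile))   # nominal access
print(is_suspicious(120, profile))  # obscured latency extension
```

A real monitor would learn a richer model of the latency distribution, but the decision step, comparing observed latency against a learned profile, has this shape.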

    ClearMark: Intuitive and Robust Model Watermarking via Transposed Model Training

    Due to the costly effort of data acquisition and model training, Deep Neural Networks (DNNs) belong to the intellectual property of the model creator. Hence, unauthorized use, theft, or modification may lead to legal repercussions. Existing DNN watermarking methods for ownership proof are often non-intuitive, embed human-invisible marks, require trust in algorithmic assessments that lack human-understandable attributes, and rely on rigid thresholds, making them susceptible to failure in cases of partial watermark erasure. This paper introduces ClearMark, the first DNN watermarking method designed for intuitive human assessment. ClearMark embeds visible watermarks, enabling human decision-making without rigid value thresholds while still allowing technology-assisted evaluations. ClearMark defines a transposed model architecture that allows the model to be used in a backward fashion, interweaving the watermark with the main task within all model parameters. Compared to existing watermarking methods, ClearMark produces visual watermarks that are easy for humans to understand without requiring complex verification algorithms or strict thresholds. The watermark is embedded within all model parameters and entangled with the main task, exhibiting superior robustness. It shows an 8,544-bit watermark capacity, comparable to the strongest existing work. Crucially, ClearMark's effectiveness is model- and dataset-agnostic and resilient against adversarial model manipulations, as demonstrated in a comprehensive study performed with four datasets and seven architectures. Comment: 20 pages, 18 figures, 4 tables

    The Image Bank: Reflections on an Incomplete Archive

    This thesis examines the development of a digital archive for The Image Bank at GSU as a process of excavation and reconstruction. It defines the digital archive as a medium for the institutionalization of knowledge, its reproduction, and its preservation. In addition, this thesis examines the digital archive as it operates on a continuum of materiality and immateriality, encompassing fractured distinctions between its possibilities and impossibilities in an increasingly dematerialized, digitized landscape.

    Secured Mechanism Towards Integrity of Digital Images Using DWT, DCT, LSB and Watermarking Integrations

    Watermarking is a method in which digital information is embedded in a carrier signal; the hidden information should be related to the carrier signal. There are many different types of digital watermarking, including traditional watermarking applied to visible media (such as photographs, images, or video), and a signal may carry several watermarks at once. A digital watermark can be embedded in any signal that can tolerate noise, such as audio, video, or image data. To protect copyright information in media files, a digital watermark must be able to withstand changes made to the carrier signal. The goal of digital watermarking is to ensure the integrity of data, whereas steganography focuses on making information undetectable to humans. Unlike public-key encryption, watermarking does not alter the original digital image but rather creates a new one with embedded security features for integrity; there are no residual effects of encryption on decrypted documents. This work focuses on robust digital image watermarking algorithms for copyright protection. Watermarks of various sorts and uses are discussed, along with a review of current watermarking techniques and attacks. The project shows how to watermark an image in the frequency domain using DCT and DWT, as well as in the spatial domain using the LSB approach. When it comes to noise and compression, frequency-domain approaches are far more resilient than LSB. All of these scenarios require the original image to remove the watermark. Of the three, the DWT approach has provided the best results. By embedding watermarks in these locations, the resilience of the watermark can be improved with little to no additional impact on image quality.
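    The spatial-domain LSB approach mentioned above is the simplest of the three: the watermark bits replace the least significant bit of each pixel. A minimal sketch, with hypothetical function names and toy pixel values, illustrates why it is imperceptible yet fragile (any noise or compression that perturbs the low bit destroys the mark):

```python
def embed_lsb(pixels, bits):
    """Embed watermark bits into the least significant bit of each pixel."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # clear the LSB, then set it to the mark bit
    return out

def extract_lsb(pixels, n):
    """Recover the first n embedded bits by reading each pixel's LSB."""
    return [p & 1 for p in pixels[:n]]

cover = [120, 121, 122, 123, 200, 201, 202, 203]  # grayscale pixel values
mark = [1, 0, 1, 1]
stego = embed_lsb(cover, mark)
assert extract_lsb(stego, 4) == mark
# Each pixel changes by at most 1 intensity level, so the mark is invisible.
```

The DCT/DWT variants embed the same bits into frequency coefficients instead of raw pixels, which is what makes them survive compression.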

    A blind recovery technique with integer wavelet transforms in image watermarking

    The development of internet technology has simplified the sharing and modification of digital image information. The aim of this study is to propose a new blind recovery technique based on the integer wavelet transform (BRIWT) that utilizes the image content. An LSB adjustment technique on the integer wavelet transform is used to embed recovery data into the two least significant bits (LSBs) of the image content. Authentication bits are embedded into the current locations of the LSBs of the image content, while the recovery information is embedded into different block locations based on the proposed block mapping. The embedded recovery data is securely placed at random locations within the two LSBs using a secret key. A three-layer embedding of authentication bits is used to validate the integrity of the image contents, achieving high precision and accuracy, and tamper-localization accuracy is employed to identify recovery bits from the image content. This research also investigates an image-inpainting method to enhance recovery from tampered images; the proposed inpainting identifies non-tampered pixels surrounding the tamper localization. The results demonstrate that the proposed scheme can produce highly imperceptible watermarked images, with an average SSIM value of 0.9978 and a PSNR value of 46.20 dB. The proposed scheme significantly improves the accuracy of tamper localization, with a precision of 0.9943 and an accuracy of 0.9971. The proposed recovery technique achieves high-quality blind recovery with an SSIM value of 0.9934 under a tampering rate of 10%, and the findings reveal that it improves the quality of blind recovery by 14.2% under a tampering rate of 80%.
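    The key-driven random placement described above, recovery bits scattered to positions only the key holder can reproduce, can be sketched minimally. This simplification uses one LSB per pixel and a seeded shuffle; the paper's actual scheme uses two LSBs, block mapping, and layered authentication bits, and the function names here are hypothetical:

```python
import random

def embed_recovery(pixels, bits, key):
    """Scatter recovery bits across the image's LSBs at key-derived positions."""
    out = list(pixels)
    positions = list(range(len(out)))
    random.Random(key).shuffle(positions)        # secret-key placement order
    for pos, b in zip(positions, bits):
        out[pos] = (out[pos] & ~1) | b
    return out

def extract_recovery(pixels, n_bits, key):
    """Recover the bits by regenerating the same placement from the same key."""
    positions = list(range(len(pixels)))
    random.Random(key).shuffle(positions)
    return [pixels[pos] & 1 for pos in positions[:n_bits]]

cover = [10, 20, 30, 40, 50, 60, 70, 80]
bits = [1, 1, 0, 1]
stego = embed_recovery(cover, bits, key="secret")
assert extract_recovery(stego, 4, key="secret") == bits
```

Without the key, an attacker cannot tell which LSBs carry recovery data, which is what makes the placement "securely" random rather than merely scrambled.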

    Sharing Secret Colour Images with Embedded Visual Cryptography Using the Stamping Algorithm and OTP Procedure

    Ensuring the safety of media is becoming increasingly important as digital media usage grows. Visual cryptography (VC) offers an efficient method for sending images securely: images protected with visual encryption can be decoded using the human visual system. Email is not a highly safe method of exchanging private data, because someone else can easily compromise the content. In the visual cryptography technique we present for colour pictures, the divided shares are enclosed in additional pictures using stamping, and the shares are created using a random number generator. Visual cryptography schemes (VCS) are a method of encoding pictures that conceals the secret information present in them. In a straightforward visual cryptography technique, a secret image is encrypted by splitting it into n shares, and the stamping operation is carried out by overlapping k shares; this is useful for hiding a secret image. Because decryption in simple visual cryptographic algorithms can be completed by the human eye, using such schemes alone for information exchange could cause security problems. To address this issue, we use a One-Time Password (OTP) procedure. In the past, static IDs and passwords were employed, making systems susceptible to replay and eavesdropping attacks; OTP technology, which generates a unique password each time, solves this issue. The suggested approach strengthens the security of the created transparencies by applying an envelope to each share and employing a stamping technique, addressing security vulnerabilities of previous methods such as pixel expansion and noise.
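    The share-splitting idea at the heart of the abstract can be shown with a classic (2, 2) scheme. This sketch is not the paper's stamping/OTP pipeline; it uses the XOR-based variant of visual cryptography (rather than pixel-expansion stacking) on a toy binary image, and the function names are illustrative:

```python
import secrets

def make_shares(secret_bits):
    """Split a binary image into two shares; each share alone is uniformly random."""
    share1 = [secrets.randbelow(2) for _ in secret_bits]   # pure random noise
    share2 = [s ^ r for s, r in zip(secret_bits, share1)]  # noise XOR secret
    return share1, share2

def stack(share1, share2):
    """Recombine the shares (XOR models stacking the two transparencies)."""
    return [a ^ b for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]   # one row of a binary secret image
s1, s2 = make_shares(secret)
assert stack(s1, s2) == secret      # both shares together reveal the image
```

Either share in isolation is statistically independent of the secret, which is why an intercepted single share (for example, one sent by email) leaks nothing.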

    A Feminist Rhetorical Analysis of Queer Appropriation in Digital Spaces

    Appropriation is an invariable part of the way in which people communicate. In order to better understand what appropriation is and how it functions communicatively, this research defines appropriation from a rhetorical perspective. As the world becomes more interconnected through popular social media platforms such as TikTok and YouTube, the prevalence of appropriation only continues to grow. This research focuses on popular examples of how queer culture is appropriated and used within mainstream culture by straight individuals as a way to gain financial and social capital. Keywords: appropriation, pop culture, digital spaces, TikTok, YouTube, queer studies

    Advanced Methods for Detecting Steganographic Content

    Steganography can be used for illegal activities, so it is essential to be prepared. To detect steganographic images, we have a counter-technique known as steganalysis. There are different types of steganalysis, depending on whether the original artifact (cover work) is known and whether we know which algorithm was used for embedding. In terms of practical use, the most important are "blind steganalysis" methods, which can be applied to image files for which we do not have the original cover work for comparison. This doctoral thesis describes a methodology for image steganalysis. In this work, it is crucial to understand the behavior of the targeted steganography algorithm; we can then exploit its weaknesses to increase detection capability and categorization success. We focus primarily on breaking the steganography algorithm OutGuess 2.0 and secondarily on breaking the F5 algorithm. We analyze the ability of a detector that utilizes a calibration process, a blockiness calculation, and a shallow neural network to detect the presence of a steganographic message in a suspect image. The new approach and results are discussed in this Ph.D. thesis.
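    The blockiness calculation the detector relies on measures discontinuity across JPEG 8x8 block boundaries, which embedding tends to increase relative to a calibrated reference. A minimal sketch of the feature (vertical boundaries only, on a 2-D list of grayscale values; the function name is illustrative, and the full detector additionally uses calibration and a shallow neural network):

```python
def blockiness(img):
    """Sum absolute differences across vertical 8x8 JPEG block boundaries.

    `img` is a 2-D list of grayscale values. A stego-modified JPEG tends to
    show larger boundary discontinuities than its calibrated cover estimate.
    """
    total = 0
    for row in img:
        # Boundaries fall between columns 7|8, 15|16, 23|24, ...
        for col in range(7, len(row) - 1, 8):
            total += abs(row[col] - row[col + 1])
    return total

# Toy one-row "image" of width 16: a single boundary between columns 7 and 8
img = [[10] * 8 + [30] * 8]
print(blockiness(img))  # 20
```

In a calibrated detector the image is cropped by a few pixels and recompressed to estimate the cover's blockiness; the difference between the two values feeds the classifier.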

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, by contrast, guides the course of low-level heuristics to search beyond local optimality, a limitation that impairs traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.