
    Introductory Computer Forensics

    INTERPOL (the International Criminal Police Organization) built cybercrime programs to keep up with emerging cyber threats, and aims to coordinate and assist international operations for fighting crimes involving computers. Although significant international efforts are being made in dealing with cybercrime and cyber-terrorism, finding effective, cooperative, and collaborative ways to deal with complicated cases that span multiple jurisdictions has proven difficult in practice.

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to give practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.

    The 1992 4th NASA SERC Symposium on VLSI Design

    Papers from the fourth annual NASA Symposium on VLSI Design, co-sponsored by the IEEE, are presented. Each year this symposium is organized by the NASA Space Engineering Research Center (SERC) at the University of Idaho and is held in conjunction with a quarterly meeting of the NASA Data System Technology Working Group (DSTWG). One task of the DSTWG is to develop new electronic technologies that will meet next-generation electronic data system needs. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data system performance. The NASA SERC is proud to offer, at its fourth symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories, the electronics industry, and universities. These speakers share insights into next-generation advances that will serve as a basis for future VLSI design.

    Combined use of congestion control and frame discarding for Internet video streaming

    Increasing demand for video applications over the Internet and the inherently uncooperative behavior of the User Datagram Protocol (UDP), currently the transport protocol of choice for video networking applications, are known to be leading toward congestion collapse of the Internet. Congestion collapse can be prevented by network mechanisms that penalize uncooperative flows such as UDP, or by employing end-to-end congestion control. Since today's vision for the Internet architecture is based on moving complexity toward the edges of the network, employing end-to-end congestion control for video applications has recently been a hot area of research. One alternative is to use a Transmission Control Protocol (TCP)-friendly end-to-end congestion control scheme. Such schemes, like TCP, probe the network to estimate the bandwidth available to the session they belong to. The average bandwidth available to a session using a TCP-friendly congestion control scheme has to be the same as that of a session using TCP. Some TCP-friendly congestion control schemes are as highly responsive as TCP itself, leading to undesired oscillations in the estimated bandwidth and thus fluctuating quality. Slowly responsive TCP-friendly congestion control schemes have recently been proposed in the literature to prevent this type of behavior. The main goal of this thesis is to develop an architecture for video streaming in IP networks using slowly responsive TCP-friendly end-to-end congestion control; in particular, we use Binomial Congestion Control (BCC). In this architecture, the video streaming device intelligently discards some of the lower-priority video packets before injecting them into the network, in order to match the incoming video rate to the bandwidth estimated using BCC and to ensure high throughput for the higher-priority video packets. We demonstrate the efficacy of this architecture using simulations in a variety of scenarios.
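    The BCC family referenced in this abstract generalizes TCP's additive-increase/multiplicative-decrease: per loss-free round trip the window grows by α/wᵏ, and on loss it shrinks by β·wˡ, with k + l = 1 keeping the scheme TCP-friendly (k = 0, l = 1 recovers TCP's AIMD; k = l = 0.5 is the slowly responsive SQRT variant). The sketch below illustrates these update rules together with priority-based frame discarding; the parameter values, the Frame structure, and the discard policy are illustrative assumptions, not the thesis's actual implementation.

```python
# A minimal sketch of Binomial Congestion Control (BCC) window updates
# plus priority-based frame discarding. Parameters and data structures
# are illustrative assumptions, not the thesis's implementation.

from dataclasses import dataclass

@dataclass
class Frame:
    size: int       # bytes
    priority: int   # lower value = more important (e.g. I=0, P=1, B=2)

class BinomialCongestionControl:
    """w += alpha / w**k per loss-free RTT; w -= beta * w**l on loss.
    k + l = 1 keeps the scheme TCP-friendly (k=0, l=1 is TCP's AIMD)."""
    def __init__(self, k=0.5, l=0.5, alpha=1.0, beta=0.5, w0=1.0):
        self.k, self.l, self.alpha, self.beta = k, l, alpha, beta
        self.window = w0  # congestion window in packets

    def on_rtt_without_loss(self):
        self.window += self.alpha / (self.window ** self.k)

    def on_loss(self):
        self.window = max(1.0, self.window - self.beta * (self.window ** self.l))

    def allowed_rate(self, packet_size, rtt):
        """Estimated fair bandwidth in bytes/s for the current window."""
        return self.window * packet_size / rtt

def discard_frames(frames, budget_bytes):
    """Keep as many high-priority frames as fit in the rate budget;
    lower-priority frames are dropped before entering the network."""
    kept, used = [], 0
    for f in sorted(frames, key=lambda f: f.priority):
        if used + f.size <= budget_bytes:
            kept.append(f)
            used += f.size
    return kept

# Usage: estimate the fair rate, then discard frames to match it.
cc = BinomialCongestionControl()
for _ in range(50):
    cc.on_rtt_without_loss()
cc.on_loss()
budget = cc.allowed_rate(packet_size=1400, rtt=0.1)   # bytes per second
frames = [Frame(12000, 0), Frame(8000, 1), Frame(8000, 2)]
print(len(discard_frames(frames, budget * 0.1)))      # frames sent in a 100 ms slot
```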

    An architecture for an ATM network continuous media server exploiting temporal locality of access

    With the continuing drop in the price of memory, Video-on-Demand (VoD) solutions that have so far focused on maximising the throughput of disk units with minimal use of physical memory may now employ significant amounts of cache memory. The subject of this thesis is the study of a technique to best utilise a memory buffer within such a VoD solution. In particular, knowledge of the streams active on the server is used to allocate cache memory. Stream optimised caching exploits reuse of data among streams that are temporally close to each other within the same clip: the data fetched on behalf of the leading stream may be cached and reused by the following streams. Therefore, only the leading stream requires access to the physical disk, and the potential level of service provision allowed by the server may be increased. The use of stream optimised caching may consequently be limited to environments where reuse of data is significant. As such, the technique examined within this thesis focuses on a classroom environment where user progress is generally linear and all users progress at approximately the same rate; for such an environment, reuse of data is guaranteed. The analysis of stream optimised caching begins with a detailed theoretical discussion of the technique and suggests possible implementations. Later chapters describe both the design and construction of a prototype server that employs the caching technique, and experiments that use the prototype to assess the effectiveness of the technique for the chosen environment using 'emulated' users. The conclusions of these experiments indicate that stream optimised caching may be applicable to VoD systems of larger scale than small teaching environments.
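    The mechanism described above is straightforward to sketch: the leading stream's disk reads are retained in a bounded in-memory buffer, and followers that are temporally close on the same clip are served from that buffer rather than from disk. A minimal illustration follows; the class name, block-level API, and LRU eviction policy are assumptions made for illustration, not the prototype's actual design.

```python
# A minimal sketch of stream-optimised caching: the leader's disk reads
# are kept in a bounded buffer so temporally close followers on the same
# clip hit memory instead of disk. Names and eviction policy are
# illustrative assumptions.

from collections import OrderedDict

class StreamOptimisedCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()   # (clip_id, block_no) -> data, LRU order
        self.disk_reads = 0

    def _read_disk(self, clip_id, block_no):
        self.disk_reads += 1
        return f"<data {clip_id}:{block_no}>"  # stand-in for a real disk read

    def read(self, clip_id, block_no):
        key = (clip_id, block_no)
        if key in self.cache:                      # a follower reuses the leader's data
            self.cache.move_to_end(key)
            return self.cache[key]
        data = self._read_disk(clip_id, block_no)  # only the leading stream hits disk
        self.cache[key] = data
        if len(self.cache) > self.capacity:        # evict the least recently used block
            self.cache.popitem(last=False)
        return data

# Two users start the same clip 3 blocks apart; with an 8-block buffer
# the follower never touches the disk.
server = StreamOptimisedCache(capacity_blocks=8)
for t in range(20):
    server.read("lecture1", t)                     # leading stream
    if t >= 3:
        server.read("lecture1", t - 3)             # following stream
print(server.disk_reads)                           # 20 disk reads for 37 block requests
```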

    PDE-based image compression based on edges and optimal data

    This thesis investigates image compression with partial differential equations (PDEs) based on edges and optimal data. It first presents a lossy compression method for cartoon-like images. Edges, together with some adjacent pixel values, are extracted and encoded. During decoding, information not covered by this data is reconstructed by PDE-based inpainting with homogeneous diffusion. The result is a compression codec based on perceptually meaningful image features that is able to outperform JPEG and JPEG2000. In contrast, the second part of the thesis focuses on the optimal selection of inpainting data. The proposed methods allow a general image to be recovered almost perfectly from only 4% of all pixels, even with homogeneous diffusion inpainting. A simple conceptual encoding shows the potential of optimal data selection for image compression: the results beat the quality of JPEG2000 when anisotropic diffusion is used for inpainting. Finally, the thesis shows that combining the two concepts allows for further improvements.
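    Homogeneous diffusion inpainting, the reconstruction step this abstract refers to, amounts to solving the Laplace equation Δu = 0 on the unknown pixels while the stored pixels are held fixed as Dirichlet data. A minimal sketch follows, using a plain Jacobi iteration; the boundary handling (periodic, via np.roll) and the test image are simplifying assumptions, and the thesis's codec additionally encodes edge locations and, in its second part, uses anisotropic diffusion.

```python
# A minimal sketch of homogeneous diffusion inpainting: stored pixels act
# as Dirichlet data and the rest are filled by solving the Laplace
# equation (steady state of u_t = Laplacian(u)) with Jacobi iterations.
# Boundary handling and the test image are simplifying assumptions.

import numpy as np

def inpaint_homogeneous(image, mask, iters=2000):
    """image: 2-D float array; mask: True where a pixel value was stored.
    Returns a reconstruction with the masked pixels kept fixed."""
    u = np.where(mask, image, image[mask].mean())  # init unknowns with the mean
    for _ in range(iters):
        # 4-neighbour average = one Jacobi step for the discrete Laplacian;
        # np.roll wraps around (periodic boundaries), a real codec would reflect.
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, image, avg)             # re-impose the known data
    return u

# Reconstruct a smooth ramp from roughly 4% of its pixels.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)
img = np.outer(x, x)
mask = rng.random(img.shape) < 0.04
rec = inpaint_homogeneous(img, mask)
print(float(np.abs(rec - img).mean()))             # small mean error on this smooth image
```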