Introductory Computer Forensics
INTERPOL (the International Criminal Police Organization) has built cybercrime programs to keep up with emerging cyber threats, and aims to coordinate and assist international operations for fighting crimes involving computers. Although significant international efforts are being made in dealing with cybercrime and cyber-terrorism, finding effective, cooperative, and collaborative ways to deal with complicated cases that span multiple jurisdictions has proven difficult in practice.
Advancing Multimedia: Application Sharing, Latency Measurements and User-Created Services
Online collaboration tools exist and have been used since the early days of the Internet. Asynchronous tools such as wikis and discussion boards, and real-time tools such as instant messaging and voice conferencing, were the only viable collaboration solutions until recently, due to the low bandwidth between participants. With the increasing bandwidth in computer networks, multimedia collaboration such as application sharing and video conferencing has become feasible. Application and desktop sharing allows any application to be shared with one or more people over the Internet. The participants receive the screen-view of the shared application from the server; their mouse and keyboard events are delivered and regenerated at the server. Application and desktop sharing enables collaborative work, software tutoring, and e-learning over the Internet. I have developed a high-performance application and desktop sharing system called BASS that is efficient, reliable, and operating-system independent, scales well via heterogeneous multicast, supports all applications, and features true application sharing. An application sharing session is usually more useful when combined with audio and video conferencing. High-quality video conferencing requires a fair amount of bandwidth, and unfortunately the Internet bandwidth of home users is still limited and shared by more than one application and user. Therefore, I measured the performance of popular video conferencing applications under congestion to understand whether they are flexible enough to adapt to fluctuating and limited bandwidth conditions. In particular, I analyzed how Skype, Windows Live Messenger, Eyebeam, and X-Lite react to changes in available bandwidth, the presence of HTTP and BitTorrent traffic, and wireless packet losses.
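The server-centric sharing model described above (viewers receive the screen-view, and their input events are regenerated at the server) can be sketched as a toy model; all class names are illustrative and not the BASS API:

```python
from dataclasses import dataclass, field

@dataclass
class InputEvent:
    kind: str       # "key" or "mouse"
    payload: str

class Viewer:
    def __init__(self):
        self.frame = ""   # the screen-view this participant currently sees

@dataclass
class SharingServer:
    screen: str = ""                         # stands in for the shared app's frame buffer
    viewers: list = field(default_factory=list)

    def attach(self, viewer):
        self.viewers.append(viewer)
        viewer.frame = self.screen           # a new viewer gets the current screen-view

    def inject(self, event: InputEvent):
        # Remote mouse/keyboard events are regenerated at the server,
        # so the shared application reacts as if the input were local.
        if event.kind == "key":
            self.screen += event.payload
        self.broadcast()

    def broadcast(self):
        # In BASS this distribution is done via heterogeneous multicast;
        # here we simply copy the screen-view to every viewer.
        for v in self.viewers:
            v.frame = self.screen

server = SharingServer(screen="$ ")
a, b = Viewer(), Viewer()
server.attach(a); server.attach(b)
server.inject(InputEvent("key", "ls"))
print(a.frame)   # "$ ls" — both viewers see the effect of the regenerated input
```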
To perform these measurements more effectively, I also developed vDelay, a novel tool for measuring the capture-to-display latency (CDL) and frame rate of real-time video conferencing sessions. vDelay enables developers and testers to measure the CDL and frame rate of any video conferencing application without modifying its source code. Further, it does not require any specialized hardware. I have used vDelay to measure the CDL and frame rate of popular video chat applications including Skype, Windows Live Messenger, and GMail video chat. vDelay can also be used to measure the CDL and frame rate of these applications in the presence of bandwidth variations. The results of the performance study showed that existing products such as Skype adapt to bandwidth fluctuations fairly well and can differentiate between wireless and congestion-based packet losses. Therefore, rather than trying to improve video conferencing tools, I shifted my focus to end-user-created communication services that increase the utility of existing stand-alone Internet services, devices in the physical world, communications, and online social networks. I have developed SECE (Sense Everything, Control Everything), a new language and supporting software infrastructure for user-created services. SECE allows non-technical end users to create services that combine communication, social networks, presence, calendaring, location, and devices in the physical world. SECE is an event-driven system that uses a natural-English-like language to trigger action scripts. Users associate actions with events, and when an event happens, its associated action is executed. Presence updates, social network updates, incoming calls, email, calendar and time events, sensor inputs, and location updates can trigger rules. SECE retrieves all this information from multiple sources to personalize services and to adapt them to changes in the user's context and preferences.
Actions can control the delivery of email, change the handling of phone calls, update social network status, and set the state of actuators such as lights, thermostats, and electrical appliances.
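The event-to-action model described above can be sketched with a tiny rule engine. This is purely illustrative: SECE's actual language is natural-English-like and far richer than Python callbacks, and the rule shown is a hypothetical example.

```python
class RuleEngine:
    """Minimal event-driven rule engine: events trigger associated actions."""
    def __init__(self):
        self.rules = {}   # event name -> list of action callables

    def on(self, event, action):
        """Associate an action with an event."""
        self.rules.setdefault(event, []).append(action)

    def fire(self, event, **context):
        """When an event happens, execute its associated actions with the
        current context (presence, location, calendar state, ...)."""
        return [action(**context) for action in self.rules.get(event, [])]

engine = RuleEngine()
# "When a call arrives while I am busy, send it to voicemail; otherwise ring."
engine.on("incoming_call",
          lambda caller, busy: f"voicemail({caller})" if busy else f"ring({caller})")

print(engine.fire("incoming_call", caller="alice", busy=True))   # ['voicemail(alice)']
```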
Multimedia Forensics
This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to give practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.
The 1992 4th NASA SERC Symposium on VLSI Design
Papers from the fourth annual NASA Symposium on VLSI Design, co-sponsored by the IEEE, are presented. Each year this symposium is organized by the NASA Space Engineering Research Center (SERC) at the University of Idaho and is held in conjunction with a quarterly meeting of the NASA Data System Technology Working Group (DSTWG). One task of the DSTWG is to develop new electronic technologies that will meet next generation electronic data system needs. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data systems performance. The NASA SERC is proud to offer, at its fourth symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories, the electronics industry, and universities. These speakers share insights into next generation advances that will serve as a basis for future VLSI design
Combined use of congestion control and frame discarding for Internet video streaming
Increasing demand for video applications over the Internet, together with the inherently uncooperative behavior of the User Datagram Protocol (UDP) currently used as the transport protocol of choice for video networking applications, is known to be leading toward congestion collapse of the Internet. Congestion collapse can be prevented by network mechanisms that penalize uncooperative flows such as UDP, or by employing end-to-end congestion control. Since today's vision for the Internet architecture is based on moving complexity towards the edges of the network, employing end-to-end congestion control for video applications has recently been an active area of research. One alternative is to use a Transmission Control Protocol (TCP)-friendly end-to-end congestion control scheme. Such schemes, like TCP, probe the network to estimate the bandwidth available to the session they belong to. The average bandwidth available to a session using a TCP-friendly congestion control scheme has to be the same as that of a session using TCP. Some TCP-friendly congestion control schemes are as highly responsive as TCP itself, leading to undesired oscillations in the estimated bandwidth and thus fluctuating quality. Slowly responsive TCP-friendly congestion control schemes that prevent this type of behavior have recently been proposed in the literature. The main goal of this thesis is to develop an architecture for video streaming in IP networks using slowly responsive TCP-friendly end-to-end congestion control; in particular, we use Binomial Congestion Control (BCC). In this architecture, the video streaming device intelligently discards some of the lower-priority video packets before injecting them into the network, in order to match the incoming video rate to the bandwidth estimated using BCC and to ensure high throughput for the higher-priority video packets. We demonstrate the efficacy of this architecture using simulations in a variety of scenarios.
Yücesan, Ongun (M.S. thesis)
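The priority-based frame discarding described in this abstract can be sketched as follows. The function name and the layer sizes are hypothetical; in the thesis's architecture the bandwidth budget would come from BCC's end-to-end estimate.

```python
def discard_to_fit(frames, available_bits):
    """Keep the highest-priority frames whose total size fits the bandwidth
    budget estimated by the congestion controller; discard the rest before
    they are injected into the network.

    frames: list of (priority, size_bits); lower number = higher priority.
    """
    kept, budget = [], available_bits
    for frame in sorted(frames, key=lambda f: f[0]):
        prio, size = frame
        if size <= budget:
            kept.append(frame)
            budget -= size
    return kept

# Hypothetical layered video frames: base layer (priority 1) and one
# enhancement layer fit the 8000-bit budget; the lowest-priority layer is dropped.
frames = [(1, 4000), (2, 3000), (3, 5000)]
print(discard_to_fit(frames, 8000))   # [(1, 4000), (2, 3000)]
```

The key design point mirrors the abstract: discarding happens at the sender, before packets enter the network, so scarce bandwidth is never wasted carrying packets that would be dropped downstream.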
An architecture for an ATM network continuous media server exploiting temporal locality of access
With the continuing drop in the price of memory, Video-on-Demand (VoD) solutions that have so far focused on maximising the throughput of disk units with minimal use of physical memory may now employ significant amounts of cache memory. The subject of this thesis is the study of a technique to best utilise a memory buffer within such a VoD solution. In particular, knowledge of the streams active on the server is used to allocate cache memory. Stream optimised caching exploits reuse of data among streams that are temporally close to each other within the same clip; the data fetched on behalf of the leading stream may be cached and reused by the following streams. Therefore, only the leading stream requires access to the physical disk, and the potential level of service provision allowed by the server may be increased. The use of stream optimised caching may consequently be limited to environments where reuse of data is significant. As such, the technique examined within this thesis focuses on a classroom environment where user progress is generally linear and all users progress at approximately the same rate; for such an environment, reuse of data is guaranteed. The analysis of stream optimised caching begins with a detailed theoretical discussion of the technique and suggests possible implementations. Later chapters describe both the design and construction of a prototype server that employs the caching technique, and experiments that use the prototype to assess the effectiveness of the technique for the chosen environment using `emulated' users. The conclusions of these experiments indicate that stream optimised caching may be applicable to larger-scale VoD systems beyond small-scale teaching environments. Future development of stream optimised caching is considered.
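The leader/follower reuse at the heart of stream optimised caching can be sketched as a small block cache. Class names, block granularity, and the FIFO eviction policy are illustrative choices, not the prototype server's actual design.

```python
from collections import OrderedDict

class StreamCache:
    """Sketch: only the leading stream reads from disk; the blocks it fetches
    are cached so that temporally close following streams hit memory."""
    def __init__(self, capacity_blocks):
        self.cache = OrderedDict()
        self.capacity = capacity_blocks
        self.disk_reads = 0

    def read(self, clip, block):
        key = (clip, block)
        if key in self.cache:
            return self.cache[key]           # follower: served from cache
        self.disk_reads += 1                 # leader: must touch the disk
        data = f"{clip}:{block}"             # stand-in for the real media block
        self.cache[key] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the oldest cached block
        return data

cache = StreamCache(capacity_blocks=8)
for block in range(5):        # leading stream plays the clip
    cache.read("lecture1", block)
for block in range(5):        # follower, temporally close behind
    cache.read("lecture1", block)
print(cache.disk_reads)       # 5 — the follower generated no disk traffic
```

Capacity matters: the cache must hold at least the blocks spanning the time gap between leader and follower, which is why the technique pays off best when users progress at roughly the same rate, as in the classroom setting above.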
PDE-based image compression based on edges and optimal data
This thesis investigates image compression with partial differential equations (PDEs) based on edges and optimal data.
It first presents a lossy compression method for cartoon-like images. Edges, together with some adjacent pixel values, are extracted and encoded. During decoding, information not covered by this data is reconstructed by PDE-based inpainting with homogeneous diffusion. The result is a compression codec based on perceptually meaningful image features that is able to outperform JPEG and JPEG2000.
In contrast, the second part of the thesis focuses on the optimal selection of inpainting data. The proposed methods make it possible to recover a general image almost perfectly from only 4% of all pixels, even with homogeneous diffusion inpainting. A simple conceptual encoding shows the potential of optimal data selection for image compression: the results beat the quality of JPEG2000 when anisotropic diffusion is used for inpainting.
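Homogeneous diffusion inpainting, as used in both parts of the thesis, can be sketched as a Jacobi iteration on the discrete Laplace equation: unknown pixels relax toward the average of their neighbours while known pixels stay fixed. This is a toy illustration, not the thesis's codec; boundary handling is periodic for brevity.

```python
import numpy as np

def inpaint_homogeneous(img, mask, iters=2000):
    """Homogeneous diffusion inpainting (Laplace interpolation) sketch.
    img: 2-D array; mask: True where pixel values are known and kept fixed.
    Unknown pixels converge to the steady state of u_t = Laplacian(u)."""
    u = np.where(mask, img, img[mask].mean())   # initialise gaps with the mean
    for _ in range(iters):
        # one Jacobi step: average of the four neighbours
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, img, avg)            # re-impose the known data
    return u

# A smooth (here: constant) image is recovered from 4% of its pixels.
rng = np.random.default_rng(0)
img = np.full((32, 32), 0.5)
mask = rng.random((32, 32)) < 0.04
print(np.allclose(inpaint_homogeneous(img, mask), img, atol=1e-3))   # True
```

Real images need far better mask selection (the "optimal data" of the second part) and, for top quality, anisotropic rather than homogeneous diffusion, but the reconstruction loop keeps this shape.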
Finally, the thesis shows that the combination of these concepts allows for further improvements.