83 research outputs found

    Lossless Compression Methods for Real-Time Images

    This paper proposes and implements two lossless methods for compressing real-time greyscale medical images: Huffman coding and a new lossless method called the Reduced Lossless Compression Method (RLCM). Both were tested on a random sample of greyscale medical images of 256×256 pixels. Several factors were measured to assess compression performance, such as compression time, compressed image size, and compression ratio (CR). The system is fully implemented on a field-programmable gate array (FPGA) using an entirely hardware-based system architecture (no software-driven processor). A Terasic DE4 board was used as the main platform for implementing and testing the system, with Quartus II software and tools for design and debugging. Compressing the image and carrying the compressed data through parallel lines has an effect comparable to compressing the same image in a single core with a higher compression ratio; in this system the ratio ranges between 7.5 and 126.8.
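    As a rough software illustration of the Huffman half of the comparison (the abstract's FPGA pipeline and the RLCM method are not reproduced here), the sketch below builds a Huffman code for 8-bit greyscale pixel data and reports the resulting compression ratio. All function names are illustrative, not from the paper.

```python
import heapq
from collections import Counter

def huffman_code_lengths(data: bytes) -> dict:
    """Return the Huffman code length (in bits) for each symbol in `data`."""
    freq = Counter(data)
    if len(freq) == 1:          # degenerate case: a single symbol still needs 1 bit
        return {next(iter(freq)): 1}
    # Heap entries: (subtree frequency, tie-breaker, {symbol: depth so far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merging two subtrees pushes every contained symbol one level deeper.
        merged = {s: depth + 1 for s, depth in {**c1, **c2}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def compression_ratio(data: bytes) -> float:
    """Uncompressed size (8 bits per pixel) divided by the Huffman-coded size."""
    lengths = huffman_code_lengths(data)
    freq = Counter(data)
    coded_bits = sum(freq[s] * lengths[s] for s in freq)
    return (8 * len(data)) / coded_bits

# A flat 256x256 "image" compresses maximally; uniformly distributed pixels do not.
flat = bytes([128]) * (256 * 256)
print(compression_ratio(flat))   # 8.0
```

    Real greyscale images fall between these extremes, depending on how skewed their intensity histogram is.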

    Video coding for compression and content-based functionality

    The lifetime of this research project has seen two dramatic developments in the area of digital video coding. The first has been the progress of compression research, leading to a factor-of-two improvement over existing standards, much wider deployment possibilities, and the development of the new international ITU-T Recommendation H.263. The second has been a radical change in the approach to video content production, with the introduction of the content-based coding concept and the addition of scene-composition information to the encoded bit-stream. Content-based coding is central to the latest international standards efforts from the ISO/IEC MPEG working group. This thesis reports on extensions to existing compression techniques that exploit a priori knowledge about scene content. Existing, standardised, block-based compression coding techniques were extended with work on arithmetic entropy coding and intra-block prediction, which form part of the H.263 and MPEG-4 specifications respectively. Object-based coding techniques were developed within a collaborative simulation model, known as SIMOC, then extended with ideas on grid motion-vector modelling and vector-accuracy confidence estimation. An improved confidence measure for encouraging motion smoothness is proposed. Object-based coding ideas, together with those from other model- and layer-based coding approaches, influenced the development of content-based coding within MPEG-4. This standard made considerable progress in the newly adopted content-based video coding field, defining normative techniques for arbitrary shape and texture coding. The means to generate this information (the analysis problem) for the content to be coded was intentionally not specified. Further research in this area concentrated on video segmentation and analysis techniques to exploit the benefits of content-based coding for generic frame-based video. The work reported here introduces the use of a clustering algorithm on raw data features to provide an initial segmentation of video data and subsequent tracking of those image regions through video sequences. Collaborative video analysis frameworks from COST 211quat and MPEG-4, combining results from many other segmentation schemes, are also introduced.
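    The initial-segmentation idea above — clustering raw data features such as pixel position and intensity — can be sketched as a plain k-means pass over per-pixel feature vectors. This is a minimal stand-in, not the thesis's actual algorithm; the toy 8×8 "frame" and the explicit initial centres are assumptions made for a deterministic illustration.

```python
def kmeans(points, init_centers, iters=10):
    """Plain k-means: `points` are feature vectors, e.g. [x, y, intensity] per pixel."""
    centers = [list(c) for c in init_centers]
    k = len(centers)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre (squared Euclidean).
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        # Update step: move each centre to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centers

# Toy 8x8 "frame": a dark region (intensity 20) left, a bright region (200) right.
frame = [[20] * 4 + [200] * 4 for _ in range(8)]
feats = [[x, y, frame[y][x]] for y in range(8) for x in range(8)]
# Seed one centre in each intensity mode so the toy example converges deterministically.
labels, centers = kmeans(feats, [feats[0], feats[4]])
```

    Because the intensity gap (180) dwarfs the spatial distances here, the two clusters recover the left and right regions; on real frames one would normalise the features and track cluster labels from frame to frame.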

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, the advances in deep learning and computational imaging contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to give practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.

    Digital Image Processing

    This book presents several recent advances related to, or falling under the umbrella of, 'digital image processing', with the purpose of providing insight into the possibilities offered by digital image processing algorithms in various fields. The presented mathematical algorithms are accompanied by graphical representations and illustrative examples for enhanced readability. The chapters are written so that even a reader with basic experience and knowledge of the digital image processing field can properly understand the presented algorithms. At the same time, the information is structured so that fellow scientists can use it to push the development of the presented subjects even further.

    Information Extraction and Modeling from Remote Sensing Images: Application to the Enhancement of Digital Elevation Models

    To deal with highly complex data such as remote sensing images with metric resolution over large areas, an innovative, fast, and robust image processing system is presented. Modeling of increasing levels of information is used to extract, represent, and link image features to semantic content. The potential of the proposed techniques is demonstrated with an application that enhances and regularizes digital elevation models based on information collected from remote sensing images.

    A survey of the application of soft computing to investment and financial trading


    View generated database

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics

    Enhancing Operational Flood Detection Solutions through an Integrated Use of Satellite Earth Observations and Numerical Models

    Among natural disasters, floods are the most common and widespread hazards worldwide (CRED and UNISDR, 2018). Making communities more resilient to floods is therefore a priority, particularly in large flood-prone areas located in emerging countries, because the effects of extreme events severely set back the development process (Wright, 2013). In this context, operational flood preparedness requires novel modeling approaches for a fast delineation of flooding in riverine environments. Starting from a review of advances in the flood modeling domain and a selection of the most suitable open toolsets available in the literature, a new method for the Rapid Estimation of FLood EXtent (REFLEX) at multiple scales (Arcorace et al., 2019) is proposed. The simplified hydraulic modeling adopted in this method consists of a hydro-geomorphological approach based on the Height Above the Nearest Drainage (HAND) model (Nobre et al., 2015). The hydraulic component employs a simplified version of fluid-mechanics equations for natural river channels. The input runoff volume is distributed from channel to hillslope cells of the DEM using an iterative flood-volume optimization based on Manning's equation. The model also includes a GIS-based method to expand HAND contours across neighboring watersheds in flat areas, which is particularly useful when extending flood modeling over coastal zones. REFLEX's flood modeling has been applied in multiple case studies in both surveyed and ungauged river basins. The development and implementation of the whole modeling chain have enabled a rapid estimation of flood extent over multiple basins at different scales. Where possible, flood modeling results are compared with reference flood hazard maps or with detailed flood simulations. Despite the limitations imposed by the simplified hydraulic modeling approach, the results obtained are promising in terms of flood extent and water depth.
    Given the geomorphological nature of the method, it does not require the initial and boundary conditions needed in traditional 1D/2D hydraulic modeling. Its usage therefore fits better in data-poor environments or large-scale flood modeling. This lightweight method has been employed extensively by CIMA Research Foundation researchers for flood hazard mapping over multiple African countries. As collateral research, multiple types of Earth observation (EO) data have been employed in the REFLEX modeling chain. Remotely sensed satellite data are, in fact, a source not only of input digital terrain models but also of maps of flooded areas. Thus, in this work, different EO data exploitation methods are used to estimate water extent and surface height. Preliminary results using Copernicus's Sentinel-1 SAR and Sentinel-3 radar altimetry data highlighted their potential, mainly for model calibration and validation. In conclusion, REFLEX combines the advantages of geomorphological models with those of traditional hydraulic modeling to ensure a simplified steady-flow computation of flooding in open channels. This work highlights the pros and cons of the method and indicates the way forward for future research in the hydro-geomorphological domain.
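    A minimal sketch of the HAND-based flooding step described above, assuming a uniform grid and unit cell area: Manning's steady-flow discharge, plus a bisection on water stage until the volume ponded over the HAND surface matches a target runoff volume (a simplified stand-in for REFLEX's iterative flood-volume optimization). The function names and the toy HAND grid are illustrative, not from the thesis.

```python
def manning_discharge(area, wetted_perimeter, slope, n):
    """Steady uniform flow, Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    r = area / wetted_perimeter          # hydraulic radius R = A / P
    return (area * r ** (2.0 / 3.0) * slope ** 0.5) / n

def stage_for_volume(hand, cell_area, target_volume, tol=1e-4):
    """Bisect the water stage until the volume ponded over the HAND surface
    matches the target runoff volume."""
    def ponded(stage):
        return sum(max(stage - h, 0.0) * cell_area for row in hand for h in row)
    lo = 0.0
    hi = max(max(row) for row in hand) + 100.0   # generous upper bracket
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ponded(mid) < target_volume:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy 2x2 HAND grid (heights above the nearest drainage, in metres).
hand = [[0.0, 1.0], [2.0, 3.0]]
stage = stage_for_volume(hand, cell_area=1.0, target_volume=4.0)
# Cells whose HAND value lies below the stage are flooded.
flooded = [[h <= stage for h in row] for row in hand]
```

    With a target volume of 4 over this grid the stage settles near 7/3 m, flooding every cell except the highest one; a real chain would derive channel geometry from the DEM rather than assume it.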