
    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
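
    The vector-space view of color mentioned in this abstract can be illustrated with a small sketch (not from the paper): a pixel is treated as a 3-vector and mapped to device-independent CIE XYZ values with the widely published linear-sRGB-to-XYZ matrix. The matrix values are standard; their use here is purely illustrative.

```python
import numpy as np

# Linear sRGB -> CIE XYZ (D65) matrix; standard published coefficients,
# used here only to illustrate treating a color as a 3-vector.
M = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

rgb_linear = np.array([0.5, 0.25, 0.1])  # one linear-RGB pixel as a vector
xyz = M @ rgb_linear                     # device-independent tristimulus values
print(xyz)
```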

    Lempel Ziv Welch data compression using associative processing as an enabling technology for real time application

    Data compression refers to the reduction of data representation requirements in storage, transmission, or both. A commonly used algorithm for compression is the Lempel-Ziv-Welch (LZW) method proposed by Terry A. Welch [1]. LZW is an adaptive, dictionary-based, lossless algorithm. This provides a general compression mechanism that is applicable to a broad range of inputs. Furthermore, the lossless nature of LZW means that it is a reversible process: the original file or message is fully recoverable from the compressed data. A variant of this algorithm is currently the foundation of the UNIX compress program. Additionally, LZW is one of the compression schemes defined in the TIFF standard [12], as well as in the CCITT V.42bis standard. One of the challenges in designing an efficient compression mechanism such as LZW for real-time applications is the speed of the search into the data dictionary. In this paper an Associative Processing implementation of the LZW algorithm is presented; this approach provides an efficient solution to that requirement. Additionally, it is shown that Associative Processing (ASP) allows for rapid and elegant development of the LZW algorithm, generally outperforming standard approaches in complexity, readability, and performance.
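
    For background on the dictionary-search bottleneck discussed above, here is a minimal software sketch of LZW encoding (a plain sequential version, not the paper's associative-processor implementation). The per-symbol dictionary lookup highlighted in the comments is the step that associative processing is meant to accelerate.

```python
def lzw_encode(data: bytes) -> list[int]:
    """Minimal LZW encoder: emits dictionary indices for byte strings."""
    # Initialize the dictionary with all single-byte strings (codes 0-255).
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:            # the dictionary search that dominates run time
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = next_code  # grow the dictionary adaptively
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

print(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT"))
```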

    Exclusive-or preprocessing and dictionary coding of continuous-tone images.

    The field of lossless image compression studies the various ways to represent image data in the most compact and efficient manner possible while still allowing the image to be reproduced without any loss. One of the most efficient strategies used in lossless compression is to introduce entropy reduction through decorrelation. This study focuses on using the exclusive-or logic operator in a decorrelation filter as the preprocessing phase of lossless compression of continuous-tone images. The exclusive-or operator is simply and reversibly applied to continuous-tone images to extract differences between neighboring pixels, and its use does not introduce data expansion. Traditional as well as innovative prediction methods are included for the creation of inputs to the exclusive-or based decorrelation filter. The results of the filter are then encoded by a variation of the Lempel-Ziv-Welch dictionary coder. Dictionary coding is selected for the coding phase of the algorithm because it does not require the storage of code tables or probabilities and because it is lower in complexity than other popular options such as Huffman or arithmetic coding. The first modification of the Lempel-Ziv-Welch dictionary coder is that image data can be read in a sequence that is linear, 2-dimensional, or an adaptive combination of both. The second modification is that the coder can maintain multiple, dynamically chosen dictionaries. Experiments indicate that the exclusive-or based decorrelation filter, when combined with the modified Lempel-Ziv-Welch dictionary coder, provides compression comparable to algorithms that represent the current standard in lossless compression. The proposed algorithm's compression performance is below that of the Context-Based, Adaptive, Lossless Image Compression (CALIC) algorithm by 23%, below the Low Complexity Lossless Compression for Images (LOCO-I) algorithm by 19%, and below the Portable Network Graphics implementation of the Deflate algorithm by 7%, but above the Zip implementation of the Deflate algorithm by 24%. The proposed algorithm uses the exclusive-or operator in the modeling phase and modified Lempel-Ziv-Welch dictionary coding in the coding phase to form a low-complexity, reversible, and dynamic method of lossless image compression.
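
    To make the preprocessing idea concrete, here is a minimal sketch of one simple variant (an assumption for illustration, not necessarily the authors' exact filter): each pixel is XORed with its left neighbor as the prediction. Because XOR is its own inverse, the original row is recovered exactly and no data expansion occurs.

```python
import numpy as np

def xor_decorrelate(row: np.ndarray) -> np.ndarray:
    """XOR each pixel with its left neighbor (first pixel left unchanged)."""
    out = row.copy()
    out[1:] = row[1:] ^ row[:-1]   # residuals cluster near 0 for smooth images
    return out

def xor_restore(filtered: np.ndarray) -> np.ndarray:
    """Invert the filter: XOR is reversible, so reconstruction is lossless."""
    out = filtered.copy()
    for i in range(1, len(out)):
        out[i] ^= out[i - 1]
    return out

row = np.array([200, 201, 203, 203, 202], dtype=np.uint8)
filtered = xor_decorrelate(row)
assert np.array_equal(xor_restore(filtered), row)  # lossless round trip
print(filtered)
```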

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. (Comment: 205 pages; to appear in Foundations and Trends in Computer Graphics and Vision.)
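
    As a small illustration of "representing data with linear combinations of a few dictionary elements", the sketch below (not from the monograph) runs ISTA, a standard proximal-gradient method, on the l1-regularized least-squares (lasso) formulation of sparse coding. The dictionary D, the signal x, and the parameter values are placeholder assumptions.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Sparse coding by ISTA: minimize 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))         # placeholder dictionary: 64-dim signals, 256 atoms
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = D[:, [3, 40, 100]] @ np.array([1.0, -0.5, 2.0])  # signal built from 3 atoms
a = ista_sparse_code(D, x)
print(np.sum(np.abs(a) > 1e-3), "nonzero coefficients")
```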

    Iris Information Management in Object-Relational Databases

    Biometrics is a developing technology whose adoption has been driven by increasing security concerns in organizations at all levels. Public agencies that employ this technology need to consult biometric data efficiently and share them with other agencies; hence the need for data models and standards that allow interoperability between systems and facilitate data searches. The objective of this work is to develop a generic architecture using object-relational database (ORDB) technology, following international standards, for identifying people by means of iris recognition. In addition, a model expressed as a Unified Modeling Language (UML) class diagram is proposed, in which the domain data types to be used in the architecture are defined. This architecture will allow organizations to interoperate efficiently and safely.
    XII Workshop Bases de Datos y Minería de Datos (WBDDM)
    Red de Universidades con Carreras en Informática (RedUNCI)
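
    The paper's actual class diagram is not reproduced here, but a minimal, hypothetical sketch of the kind of domain data types such a model might describe is shown below. All class and field names are illustrative assumptions, not the schema proposed in the paper.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical domain types for an iris-record store; names and fields are
# illustrative assumptions, not the paper's proposed model.
@dataclass
class IrisImage:
    eye: str                 # "left" or "right"
    capture_date: datetime
    image_format: str        # identifier of the raster format used
    data: bytes              # raw or compressed iris image

@dataclass
class Subject:
    subject_id: str
    iris_images: list[IrisImage] = field(default_factory=list)

s = Subject("P-0001")
s.iris_images.append(IrisImage("left", datetime(2024, 1, 15), "raw", b"\x00\x01"))
print(len(s.iris_images), "iris image(s) stored for", s.subject_id)
```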

    Utilization of CT scanning associated with complex spine surgery.

    Background: Due to the risk associated with exposure to ionizing radiation, there is an urgent need to identify areas of CT scanning overutilization. While increased use of diagnostic spinal imaging has been documented, no previous research has estimated the magnitude of follow-up imaging used to evaluate the postoperative spine. Methods: This retrospective cohort study quantifies the association between spinal surgery and CT utilization. An insurance database (Humana, Inc.) with ≈ 19 million enrollees was employed, representing 8 consecutive years (2007-2014). Surgical and imaging procedures were captured by anatomic-specific CPT codes. Complex surgeries included all cervical, thoracic, and lumbar instrumented spine fusions; simple surgeries included discectomy and laminectomy. Imaging was restricted to CT and MRI, and postoperative imaging frequency extended to 5 years post-surgery. Results: There were 140,660 complex spinal procedures, 39,943 discectomies, and 49,889 laminectomies. MRI was the predominant preoperative imaging modality for all surgical procedures (median: 80%; range: 73-82%). Postoperatively, CT prevalence following complex procedures increased more than two-fold from 6 months (18%) to 5 years (≥40%), and patients having a postoperative CT averaged two scans. For simple procedures, the prevalence of postoperative CT scanning never exceeded 30%. Conclusions: CT scanning is used frequently for follow-up imaging evaluation following complex spine surgery. There is emerging evidence of an increased cancer risk due to ionizing radiation exposure with CT. In the setting of complex spine surgery, actions to mitigate this risk should be considered, including reducing nonessential scans, using the lowest possible radiation dose protocols, exerting greater selectivity in monitoring the developing fusion construct, and adopting non-ferromagnetic implant biomaterials that facilitate postoperative MRI.
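
    To illustrate how a postoperative imaging prevalence of this kind is typically computed from claims data, the sketch below uses a toy table; the column names, event labels, and numbers are invented placeholders, not values from the Humana dataset or the study's actual code.

```python
import pandas as pd

# Toy claims table; patient IDs, dates, and event labels are placeholders.
claims = pd.DataFrame({
    "patient": [1, 1, 2, 2, 3],
    "event":   ["fusion", "ct", "fusion", "mri", "fusion"],
    "days_after_surgery": [0, 400, 0, 90, 0],
})

# Patients who had the index (complex) surgery.
surgery_patients = set(claims.loc[claims["event"] == "fusion", "patient"])

# Patients with at least one CT within 5 years of surgery.
ct_within_5y = set(
    claims.loc[(claims["event"] == "ct")
               & (claims["days_after_surgery"] <= 5 * 365), "patient"]
)

prevalence = len(ct_within_5y & surgery_patients) / len(surgery_patients)
print(f"Postoperative CT prevalence: {prevalence:.0%}")
```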