
    Image Segmentation using Human Visual System Properties with Applications in Image Compression

    In order to represent a digital image, a very large number of bits is required. For example, a 512 x 512 pixel, 256 gray-level image requires over two million bits. This large number of bits is a substantial drawback when it is necessary to store or transmit a digital image. Image compression, often referred to as image coding, attempts to reduce the number of bits used to represent an image while keeping the degradation in the decoded image to a minimum. One approach is segmentation-based image compression: the image to be compressed is segmented, i.e. the pixels in the image are divided into mutually exclusive spatial regions based on some criteria. Once the image has been segmented, information is extracted describing the shapes and interiors of the image segments, and compression is achieved by representing the segments efficiently. In this thesis we propose an image segmentation technique which is based on centroid-linkage region growing and takes advantage of human visual system (HVS) properties. We systematically determine, through subjective experiments, the parameters of our segmentation algorithm that produce the most visually pleasing segmented images, and demonstrate the effectiveness of our method. We also propose a method for the quantization of segmented images based on HVS contrast sensitivity, and investigate the effect of quantization on segmented images.
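    The centroid-linkage idea mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction under assumed details (raster-scan order, left/upper neighbours, a fixed gray-level threshold), not the thesis's actual algorithm or its experimentally derived HVS parameters:

    ```python
    def segment(image, threshold=16):
        """Centroid-linkage region growing (illustrative sketch): a single
        raster-scan pass assigns each pixel to the adjacent region whose
        running mean gray level is closest, provided the difference is
        below `threshold`; otherwise a new region is started.  `image` is
        a list of rows of integer gray levels."""
        h, w = len(image), len(image[0])
        labels = [[0] * w for _ in range(h)]
        means, counts = {}, {}
        next_label = 1
        for y in range(h):
            for x in range(w):
                g = image[y][x]
                # candidate regions: left and upper neighbours (raster order)
                cands = []
                if x > 0:
                    cands.append(labels[y][x - 1])
                if y > 0:
                    cands.append(labels[y - 1][x])
                best = None
                for lab in cands:
                    d = abs(g - means[lab])
                    if d < threshold and (best is None or d < abs(g - means[best])):
                        best = lab
                if best is None:
                    best = next_label
                    next_label += 1
                    means[best], counts[best] = float(g), 0
                # update the region's centroid (running mean gray level)
                counts[best] += 1
                means[best] += (g - means[best]) / counts[best]
                labels[y][x] = best
        return labels
    ```

    Pixels joining a region pull its centroid toward themselves, which is what distinguishes centroid linkage from single-linkage growing (comparison against the neighbouring pixel alone).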

    The Space and Earth Science Data Compression Workshop

    This document is the proceedings from a Space and Earth Science Data Compression Workshop, which was held on March 27, 1992, at the Snowbird Conference Center in Snowbird, Utah. This workshop was held in conjunction with the 1992 Data Compression Conference (DCC '92), which was held at the same location, March 24-26, 1992. The workshop explored opportunities for data compression to enhance the collection and analysis of space and Earth science data. The workshop consisted of eleven papers presented in four sessions. These papers describe research that is integrated into, or has the potential of being integrated into, a particular space and/or Earth science data information system. Presenters were encouraged to take into account the scientists' data requirements, and the constraints imposed by the data collection, transmission, distribution, and archival systems.

    Motion compensated video coding

    The result of many years of international co-operation in video coding has been the development of algorithms that remove interframe redundancy, such that only changes in the image that occur over a given time are encoded for transmission to the recipient. The primary process used here is the derivation of pixel differences, encoded by a method referred to as Differential Pulse-Code Modulation (DPCM), and this has provided the basis of contemporary research into low bit-rate hybrid codec schemes. There are, however, instances when the DPCM technique cannot successfully code a segment of the image sequence, because motion is a major cause of interframe differences. Motion Compensation (MC) can be used to improve the efficiency of the predictive coding algorithm. This thesis examines current thinking in the area of motion-compensated video compression and contrasts the application of differing algorithms to the general requirements of interframe coding. A novel technique is proposed in which the constituent features in an image are segmented, classified, and their motion tracked by a local search algorithm. Although originally intended to complement the DPCM method in a predictive hybrid codec, it is demonstrated that the evaluation of feature displacement can, in its own right, form the basis of a low bit-rate video codec of low complexity. After an extensive discussion of the issues involved, a description of laboratory simulations shows how the postulated technique is applied to standard test sequences. Measurements of image quality and compression efficiency are made and compared with a contemporary standard method of low bit-rate video coding.
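    The local-search motion estimation mentioned above can be illustrated with classic exhaustive block matching; this is a minimal sketch of the general idea (block size, search window, and the SAD criterion are assumptions here), not the thesis's feature-based tracker:

    ```python
    def best_motion_vector(ref, cur, bx, by, bsize=4, search=2):
        """Exhaustive block-matching motion search.  Finds the displacement
        (dx, dy) within +/-`search` pixels that minimises the sum of
        absolute differences (SAD) between the block at (bx, by) in the
        current frame `cur` and the displaced block in the reference
        frame `ref`.  Frames are lists of rows of gray levels."""
        h, w = len(ref), len(ref[0])
        best = None
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                # skip displacements that fall outside the reference frame
                if not (0 <= bx + dx and bx + dx + bsize <= w and
                        0 <= by + dy and by + dy + bsize <= h):
                    continue
                sad = sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
                          for j in range(bsize) for i in range(bsize))
                if best is None or sad < best[0]:
                    best = (sad, dx, dy)
        return best[1], best[2]  # motion vector (dx, dy)
    ```

    In a hybrid codec the winning vector is transmitted and only the (small) displaced-block difference is DPCM-coded, which is where motion compensation recovers the efficiency that plain interframe differencing loses.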

    Map online system using internet-based image catalogue

    Digital maps carry geodata, such as coordinates, that is essential to topographic and thematic maps and is especially meaningful in the military field. Because the maps carry this information, the image files are very large; the bigger the file, the more storage is required and the longer the loading time. These conditions make raw map images unsuitable for an image-catalogue approach in an Internet environment. With compression techniques, the image size can be reduced while the quality of the image is preserved with little visible change. This report focuses on an image compression technique based on wavelet technology, which currently performs better than many other image compression techniques. The compressed images are applied to a system called Map Online, which uses an Internet-based image catalogue approach. The system allows users to buy maps online and to download the maps they have bought, as well as to search for maps using several meaningful keywords. The system is expected to be used by Jabatan Ukur dan Pemetaan Malaysia (JUPEM) to help realise the organisation's vision.
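    The core of wavelet-based compression can be shown with a one-level Haar transform; this is a toy illustration of the principle (the report does not specify which wavelet filter it uses):

    ```python
    def haar_compress(signal, threshold=0.0):
        """One-level Haar wavelet decomposition with detail thresholding.
        Pairwise averages capture the coarse content; small detail
        coefficients are zeroed (this is where the bit savings come
        from), and the signal is then reconstructed."""
        # analysis: pairwise averages (approximation) and differences (detail)
        avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        # "compression": discard detail coefficients below the threshold
        det = [d if abs(d) >= threshold else 0.0 for d in det]
        # synthesis: invert the transform
        out = []
        for a, d in zip(avg, det):
            out.extend([a + d, a - d])
        return out
    ```

    A practical map coder would apply a 2D multi-level transform and entropy-code the surviving coefficients, but the trade-off is the same: a higher threshold gives smaller files at the cost of fine detail.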

    A Modular Approach to Lung Nodule Detection from Computed Tomography Images Using Artificial Neural Networks and Content Based Image Representation

    Lung cancer is one of the most lethal cancer types. Research in computer-aided detection (CAD) and diagnosis for lung cancer aims at providing effective tools to assist physicians in cancer diagnosis and treatment and thereby save lives. In this dissertation, we focus on developing a CAD framework for automated detection of lung cancer nodules from 3D lung computed tomography (CT) images. Nodule detection is a challenging task in which no machine intelligence has surpassed human capability to date. Human recognition power, however, is limited by vision capacity and may suffer from work overload and fatigue, whereas automated nodule detection systems can complement experts' efforts to achieve better detection performance. The proposed CAD framework encompasses several desirable properties: mimicking physicians by means of geometric multi-perspective analysis, computational efficiency, and, most importantly, high detection accuracy. As the central part of the framework, we develop a novel hierarchical modular decision engine implemented with Artificial Neural Networks. One advantage of this decision engine is that it supports the combination of spatial-level and feature-level information analysis in an efficient way. Our methodology overcomes some of the limitations of current lung nodule detection techniques by combining geometric multi-perspective analysis with global and local feature analysis. The proposed modular decision engine design is flexible to modifications in the decision modules; the engine structure can accommodate such modifications without requiring a redesign of the entire system. The engine can easily accommodate multiple learning schemes and parallel implementation, so that each information type can be processed (in parallel) by the learning technique best suited to it.
    We have also developed a novel shape representation technique that is invariant under rigid-body transformation, and we derived new features based on this shape representation for nodule detection. We implemented a prototype nodule detection system as a demonstration of the proposed framework. Experiments were conducted to assess the performance of the proposed methodologies using real-world lung CT data, with several performance measures for detection accuracy used in the assessment. The results show that the decision engine is able to classify patterns efficiently with very good classification performance.
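    The modular, swappable structure described above can be sketched in miniature. Everything here is hypothetical: the dissertation's engine uses trained neural-network modules and a hierarchical design, whereas this sketch uses hand-written scoring rules and a weighted-sum fusion purely to show the interface:

    ```python
    def modular_decision(candidate, modules, weights, threshold=0.5):
        """Fuse independent decision modules into one nodule/non-nodule
        decision.  Each module scores one information type on its own,
        so any module can be retrained or replaced without redesigning
        the rest of the engine."""
        scores = [m(candidate) for m in modules]
        fused = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
        return fused >= threshold, fused


    # hypothetical per-perspective modules, each scoring a candidate in [0, 1]
    shape_module = lambda c: 1.0 if c["sphericity"] > 0.8 else 0.2
    size_module = lambda c: 1.0 if 3 <= c["diameter_mm"] <= 30 else 0.0
    ```

    Because each module sees only the candidate description and returns a score, parallel evaluation of the modules is straightforward, which is the property the abstract highlights.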

    Design of a digital compression technique for shuttle television

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return-link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.
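    A nonadaptive two-dimensional DPCM coder of the kind mentioned above can be sketched as follows; the predictor (mean of the left and upper reconstructed neighbours) and the uniform quantiser step are assumptions for illustration, not the shuttle design:

    ```python
    def dpcm2d_encode(image, step=4):
        """Non-adaptive 2D DPCM sketch.  Each pixel is predicted from its
        left and upper *reconstructed* neighbours (so encoder and decoder
        stay in step), and the prediction error is uniformly quantised
        with step `step`.  Returns the transmitted symbols and the
        decoder-side reconstruction."""
        h, w = len(image), len(image[0])
        recon = [[0] * w for _ in range(h)]
        codes = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                left = recon[y][x - 1] if x > 0 else 128  # mid-gray at borders
                up = recon[y - 1][x] if y > 0 else 128
                pred = (left + up) // 2
                err = image[y][x] - pred
                q = round(err / step)          # transmitted symbol
                codes[y][x] = q
                recon[y][x] = pred + q * step  # decoder's reconstruction
        return codes, recon
    ```

    The small quantised symbols have a much narrower distribution than raw pixel values, which is what allows the bandwidth reduction at modest hardware cost.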

    Digital image compression


    Studies and simulations of the DigiCipher system

    During this period, the development of simulators for the various high definition television (HDTV) systems proposed to the FCC was continued. The FCC has indicated that it wants the various proposers to collaborate on a single system. Based on all available information, this system will look very much like the advanced digital television (ADTV) system, with major contributions only from the DigiCipher system. The results of our simulations of the DigiCipher system are described. The simulator was tested using test sequences from the MPEG committee, and the results are extrapolated to HDTV video sequences. Once again, some caveats are in order: the sequences used for testing the simulator and generating the results are those used for testing the MPEG algorithm. These sequences are of much lower resolution than HDTV sequences would be, and therefore the extrapolations are not totally accurate; one would expect to get significantly higher compression, in terms of bits per pixel, with sequences of higher resolution. However, the simulator itself is a valid one, and should HDTV sequences become available, they could be used directly with it. A brief overview of the DigiCipher system is given, and some coding results obtained using the simulator are presented and compared with those obtained using the ADTV system. We evaluate these results in the context of the CCSDS specifications and make some suggestions as to how the DigiCipher system could be implemented in the NASA network. Simulations such as those reported here can be biased by the particular source sequence used; to obtain more complete information about the system, one needs a reasonable set of models which mirror the various kinds of sources encountered during video coding. A set of models which can effectively represent the various possible scenarios is provided.
    As this is somewhat tangential to the other work reported, the results are included as an appendix.

    Graphical user interface for image processing

    A user-friendly, menu-driven, highly interactive X Windows package for image processing applications, using the Motif widget set under the Motif Window Manager, is developed. Modules related to segmentation, enhancement, representation, and transformations are developed; these routines are useful for image manipulation. The current gray-scale/binary image is displayed in the window. An online histogram is provided so that the user can change the threshold value interactively. The OSF/Motif toolkit is used efficiently, together with Xlib calls to display the image by allocating a colormap. An online image-manipulation help menu facility is incorporated in the tool to make it more versatile.
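    The computation behind the interactive histogram-and-threshold feature can be sketched independently of the Motif code (which is not shown in the abstract); the function names here are illustrative, not the tool's actual API:

    ```python
    def gray_histogram(image, levels=256):
        """Gray-level histogram of the kind displayed behind an
        interactive threshold control: counts of each gray level in the
        image (a list of rows of integer gray values)."""
        hist = [0] * levels
        for row in image:
            for g in row:
                hist[g] += 1
        return hist


    def binarize(image, threshold):
        """Binarise the current gray-scale image at the user-chosen
        threshold: pixels at or above it map to 1, the rest to 0."""
        return [[1 if g >= threshold else 0 for g in row] for row in image]
    ```

    In the GUI, moving the threshold slider would simply re-run `binarize` and redisplay the result, with the histogram helping the user pick a value between the image's gray-level modes.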