
    Improved picture-rate conversion using classification-based LMS-filters

    Due to the recent explosion of multimedia formats and the need to convert between them, more attention is being drawn to picture-rate conversion. Moreover, growing demands on video motion portrayal without judder or blur require improved format conversion. The simplest conversion repeats the latest picture until a more recent one becomes available. Advanced methods estimate the motion of moving objects to interpolate their correct position in additional images. Although motion blur and judder have been reduced using motion compensation, artifacts, especially around moving objects in sequences with fast motion, may be disturbing. Previous work has reduced this so-called 'halo' artifact, but the overall result is still perceived as sub-optimal due to the complexity of the heuristics involved. In this paper, we aim at reducing the heuristics by designing LMS up-conversion filters optimized for pre-defined local spatio-temporal image classes. The design, the evaluation, and a benchmark against earlier techniques are discussed. In general, the proposed approach gives better results.
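    The classify-then-optimize idea behind such filters can be sketched as follows: training patches are binned into local image classes, and a least-squares-optimal (LMS) filter is solved per class; at conversion time, each output pixel is produced by the filter of its patch's class. The ADRC-style binary classification and the synthetic training setup below are illustrative assumptions, not the paper's exact design.

    ```python
    import numpy as np

    def adrc_class(patch):
        """Binary pattern classification: bit is 1 where the pixel is at or
        above the patch mean (an ADRC-style class index; the paper's exact
        class definition may differ)."""
        bits = (patch >= patch.mean()).astype(int)
        return int("".join(map(str, bits)), 2)

    def train_lms_filters(patches, targets):
        """Solve one least-squares-optimal filter per class from
        (patch, target-pixel) training pairs."""
        by_class = {}
        for p, t in zip(patches, targets):
            by_class.setdefault(adrc_class(p), []).append((p, t))
        filters = {}
        for c, pairs in by_class.items():
            A = np.array([p for p, _ in pairs], dtype=float)
            b = np.array([t for _, t in pairs], dtype=float)
            filters[c], *_ = np.linalg.lstsq(A, b, rcond=None)
        return filters

    def interpolate(patch, filters):
        """Apply the class-specific filter; fall back to the patch mean
        for classes unseen in training."""
        c = adrc_class(patch)
        return float(patch @ filters[c]) if c in filters else float(patch.mean())
    ```

    In practice the patches would be spatio-temporal neighbourhoods drawn from adjacent pictures, and the targets would come from ground-truth frames at the interpolated temporal position.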

    MASCOT : metadata for advanced scalable video coding tools : final report

    The goal of the MASCOT project was to develop new video coding schemes and tools that provide both increased coding efficiency and extended scalability features compared to the technology available at the beginning of the project. Towards that goal, the following tools were to be used: metadata-based coding tools; new spatiotemporal decompositions; and new prediction schemes. Although the initial goal was to develop a single codec architecture able to combine all the new coding tools foreseen when the project was formulated, it became clear that this would limit the selection of new tools. The consortium therefore decided to develop two codec frameworks within the project, a standard hybrid DCT-based codec and a 3D wavelet-based codec, which together are able to accommodate all tools developed during the course of the project.

    Assessment of visual quality and spatial accuracy of fast anisotropic diffusion and scan conversion algorithms for real-time three-dimensional spherical ultrasound

    Three-dimensional ultrasound machines based on matrix phased-array transducers are gaining predominance for real-time dynamic screening in cardiac and obstetric practice. These transducer arrays acquire three-dimensional data in spherical coordinates, along lines tilted in azimuth and elevation angles at incremental depths. This study evaluates fast filtering and scan conversion algorithms applied in the spherical domain, prior to visualization in Cartesian coordinates, in terms of visual quality and spatial measurement accuracy. Fast 3D scan conversion algorithms were implemented with interpolation kernels of different orders. Downsizing and smoothing of sampling artifacts were integrated into the scan conversion process. In addition, a denoising scheme for spherical-coordinate data based on 3D anisotropic diffusion was implemented and applied prior to scan conversion to improve image quality. Reconstruction results under different parameter settings, such as interpolation kernel, scaling factor, smoothing options, and denoising, are reported. Image quality was evaluated on several data sets via visual inspection and measurements of cylinder object dimensions. Error measurements of the cylinders' radii, reported in this paper, show that the proposed fast scan conversion algorithm can correctly reconstruct three-dimensional ultrasound in Cartesian coordinates under tuned parameter settings. Denoising via three-dimensional anisotropic diffusion greatly improved the quality of the resampled data without affecting the accuracy of spatial information, after the introduction of a variable gradient threshold parameter.
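    The core scan conversion step can be sketched as an inverse mapping: each Cartesian voxel is converted to spherical coordinates (depth, azimuth, elevation) and sampled from the acquired volume by trilinear interpolation. The sector geometry, angle conventions, and grid sizes below are illustrative assumptions, and the downsizing and smoothing integrated into the paper's pipeline are omitted.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def scan_convert(vol, r_lim, az_lim, el_lim, shape):
        """Resample a spherical-coordinate volume vol[r, az, el] onto a
        Cartesian grid (z along the central beam axis) using trilinear
        interpolation. Voxels outside the scanned sector are set to 0."""
        nr, na, ne = vol.shape
        nx, ny, nz = shape
        x = np.linspace(-r_lim * np.sin(az_lim), r_lim * np.sin(az_lim), nx)
        y = np.linspace(-r_lim * np.sin(el_lim), r_lim * np.sin(el_lim), ny)
        z = np.linspace(0.0, r_lim, nz)
        X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
        # Cartesian voxel -> spherical coordinates (assumed conventions)
        r = np.sqrt(X**2 + Y**2 + Z**2)
        az = np.arctan2(X, Z)          # tilt of the scan line in azimuth
        el = np.arctan2(Y, Z)          # tilt of the scan line in elevation
        # fractional indices into the spherical sampling grid
        ir = r / r_lim * (nr - 1)
        ia = (az + az_lim) / (2 * az_lim) * (na - 1)
        ie = (el + el_lim) / (2 * el_lim) * (ne - 1)
        out = map_coordinates(vol, [ir, ia, ie], order=1, cval=0.0)
        outside = (r > r_lim) | (np.abs(az) > az_lim) | (np.abs(el) > el_lim)
        out[outside] = 0.0
        return out
    ```

    Higher-order kernels, as compared in the study, would correspond to larger `order` values in `map_coordinates` (e.g. `order=3` for cubic interpolation).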

    Exploring space situational awareness using neuromorphic event-based cameras

    The orbits around Earth are a limited natural resource, one that hosts a vast range of vital space-based systems that support international systems used by commercial industry, civil organisations, and national defence. The availability of this space resource is rapidly depleting due to the ever-growing presence of space debris and rampant overcrowding, especially in the limited and highly desirable slots in geosynchronous orbit. The field of Space Situational Awareness encompasses tasks aimed at mitigating these hazards to on-orbit systems through the monitoring of satellite traffic. Essential to this task is the collection of accurate and timely observation data. This thesis explores the use of a novel sensor paradigm to optically collect and process sensor data to enhance and improve space situational awareness tasks. Solving this issue is critical to ensure that we can continue to utilise the space environment in a sustainable way. However, these tasks pose significant engineering challenges that involve the detection and characterisation of faint, highly distant, and high-speed targets. Recent advances in neuromorphic engineering have led to the availability of high-quality neuromorphic event-based cameras that provide a promising alternative to the conventional cameras used in space imaging. These cameras offer the potential to improve the capabilities of existing space tracking systems and have been shown to detect and track satellites, or 'Resident Space Objects', at low data rates, high temporal resolutions, and in conditions typically unsuitable for conventional optical cameras. This thesis presents a thorough exploration of neuromorphic event-based cameras for space situational awareness tasks and establishes a rigorous foundation for event-based space imaging.
The work conducted in this project demonstrates how to enable event-based space imaging systems that serve the goals of space situational awareness by providing accurate and timely information on the space domain. By developing and implementing event-based processing techniques, the asynchronous operation, high temporal resolution, and dynamic range of these novel sensors are leveraged to provide low-latency target acquisition and rapid reaction to challenging satellite tracking scenarios. The algorithms and experiments developed in this thesis study the properties and trade-offs of event-based space imaging and provide comparisons with traditional observing methods and conventional frame-based sensors. The outcomes of this thesis demonstrate the viability of event-based cameras for use in tracking and space imaging tasks and therefore contribute to the growing efforts of the international space situational awareness community and the development of event-based technology in astronomy and space science applications.
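    To give a flavour of how asynchronous event streams are turned into something a target-acquisition stage can act on, the sketch below accumulates events into an exponentially decaying "time surface", a common event-based representation, and extracts the centroid of recent activity. The event tuple format, decay constant, and detection threshold are illustrative assumptions, not the specific algorithms developed in the thesis.

    ```python
    import numpy as np

    def time_surface(events, shape, t_ref, tau=0.05):
        """Exponentially decaying time surface: each pixel holds
        exp(-(t_ref - t_last) / tau), where t_last is the timestamp of the
        most recent event at that pixel. Recent activity -> values near 1."""
        last = np.full(shape, -np.inf)
        for t, x, y, polarity in events:   # polarity is unused in this sketch
            last[y, x] = t
        surf = np.zeros(shape)
        active = np.isfinite(last)
        surf[active] = np.exp(-(t_ref - last[active]) / tau)
        return surf

    def detect_target(surf, threshold=0.5):
        """Centroid of recently active pixels: a crude low-latency target
        acquisition step; returns None if no pixel is recent enough."""
        ys, xs = np.nonzero(surf > threshold)
        if len(xs) == 0:
            return None
        return xs.mean(), ys.mean()
    ```

    Because the surface is updated per event rather than per frame, such representations preserve the microsecond-scale timing of the sensor while still exposing a dense array that standard detection and tracking machinery can consume.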