12 research outputs found

    Single-shot ultrafast optical imaging

    Single-shot ultrafast optical imaging can capture two-dimensional transient scenes in the optical spectral range at ≥100 million frames per second. This rapidly evolving field surpasses conventional pump-probe methods by offering real-time imaging capability, which is indispensable for recording nonrepeatable and difficult-to-reproduce events and for understanding physical, chemical, and biological mechanisms. In this mini-review, we comprehensively survey state-of-the-art single-shot ultrafast optical imaging. Based on the illumination requirement, we categorize the field into active-detection and passive-detection domains. Depending on the specific image acquisition and reconstruction strategies, these two categories are further divided into a total of six subcategories. Under each subcategory, we describe operating principles, present representative cutting-edge techniques with a particular emphasis on their methodology and applications, and discuss their advantages and challenges. Finally, we envision prospects for technical advancement in this field.

    Spatio-Spectral Sampling and Color Filter Array Design

    Owing to the growing ubiquity of digital image acquisition and display, several factors must be considered when developing systems to meet future color image processing needs, including improved quality, increased throughput, and greater cost-effectiveness. In consumer still-camera and video applications, color images are typically obtained via a spatial subsampling procedure implemented as a color filter array (CFA), a physical construction whereby only a single component of the color space is measured at each pixel location. Substantial work in both industry and academia has been dedicated to post-processing this acquired raw image data as part of the so-called image processing pipeline, including in particular the canonical demosaicking task of reconstructing a full-color image from the spatially subsampled and incomplete data acquired using a CFA. However, as we detail in this chapter, the inherent shortcomings of contemporary CFA designs mean that subsequent processing steps often yield diminishing returns in terms of image quality. For example, though distortion may be masked to some extent by motion blur and compression, the loss of image quality resulting from all but the most computationally expensive state-of-the-art methods is unambiguously apparent to the practiced eye. … As the CFA represents one of the first steps in the image acquisition pipeline, it largely determines the maximal resolution and computational efficiencies achievable by subsequent processing schemes. Here, we show that the attainable spatial resolution yielded by a particular choice of CFA is quantifiable and propose new CFA designs to maximize it. In contrast to the majority of the demosaicking literature, we explicitly consider the interplay between CFA design and properties of typical image data and its implications for spatial reconstruction quality. Formally, we pose the CFA design problem as simultaneously maximizing the allowable spatio-spectral support of luminance and chrominance channels, subject to a partitioning requirement in the Fourier representation of the sensor data. This classical aliasing-free condition preserves the integrity of the color image data and thereby guarantees exact reconstruction when demosaicking is implemented as demodulation (demultiplexing in frequency).
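
    As a rough illustration of the frequency-domain view described above, the sketch below mosaics a synthetic image with a standard Bayer (RGGB) CFA and inspects the Fourier spectrum of the resulting sensor data, in which luminance occupies the baseband and chrominance is modulated toward the spectral corners. This is a minimal sketch assuming NumPy and a conventional Bayer pattern; it is not one of the CFA designs proposed in the chapter.

```python
# Minimal sketch (not the chapter's proposed designs): mosaic a synthetic
# RGB image with a Bayer (RGGB) CFA and look at the Fourier spectrum of the
# sensor data, where luminance sits at baseband and the chrominance
# components are modulated toward the spectral corners.
import numpy as np

def bayer_mosaic(rgb):
    """Keep one color component per pixel according to an RGGB pattern."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red on even rows/even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue on odd rows/odd cols
    return mosaic

# Synthetic test image (a smooth color gradient stands in for real data).
h, w = 256, 256
y, x = np.mgrid[0:h, 0:w]
rgb = np.stack([x / w, y / h, (x + y) / (w + h)], axis=-1)

sensor = bayer_mosaic(rgb)
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(sensor)))

# Energy near DC corresponds to the luminance channel; energy near the
# corners of the spectrum corresponds to the modulated chrominance carriers
# that a demosaicker recovers by demodulation (demultiplexing in frequency).
baseband = spectrum[h//2 - 8:h//2 + 8, w//2 - 8:w//2 + 8].sum()
corner = spectrum[:16, :16].sum()
print(f"baseband (luma) energy: {baseband:.1f}, corner (chroma) energy: {corner:.1f}")
```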

    Moving-object reconstruction from camera-blurred sequences using interframe and interregion constraints

    Also issued as Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1988. Includes bibliographical references. Supported by the AT&T Foundation through a Bell Laboratories Ph.D. scholarship. Stephen Charles Hsu.

    Sparse variational regularization for visual motion estimation

    The computation of visual motion is a key component in numerous computer vision tasks such as object detection, visual object tracking and activity recognition. Despite extensive research effort, efficient handling of motion discontinuities, occlusions and illumination changes still remains elusive in visual motion estimation. The work presented in this thesis utilizes variational methods to handle the aforementioned problems because these methods allow the integration of various mathematical concepts into a single energy minimization framework. This thesis applies the concepts from signal sparsity to the variational regularization for visual motion estimation. The regularization is designed in such a way that it handles motion discontinuities and can detect object occlusions.
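
    To make the variational framing concrete, here is a minimal sketch of a classical Horn-Schunck-style energy: a brightness-constancy data term plus a quadratic penalty on the flow gradients, minimized by iterating its Euler-Lagrange update. It assumes NumPy/SciPy and illustrates only the generic energy-minimization framework; the sparse regularizer and occlusion handling developed in the thesis are not reproduced here.

```python
# Minimal sketch of the variational framing only: a Horn-Schunck-style
# energy with a brightness-constancy data term and a quadratic smoothness
# penalty on the flow gradients, minimized by iterating the Euler-Lagrange
# update. The thesis's sparse regularizer is not reproduced here.
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=10.0, n_iter=100):
    """Dense optical flow (u, v) between two grayscale frames im1 -> im2."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)
    kx = 0.25 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    ky = 0.25 * np.array([[-1.0, -1.0], [1.0, 1.0]])
    # Spatial and temporal image derivatives averaged over both frames.
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, np.full((2, 2), 0.25)) - convolve(im1, np.full((2, 2), 0.25))

    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    avg = np.array([[1.0, 2.0, 1.0], [2.0, 0.0, 2.0], [1.0, 2.0, 1.0]]) / 12.0
    for _ in range(n_iter):
        u_avg = convolve(u, avg)
        v_avg = convolve(v, avg)
        # Pointwise update derived from the Euler-Lagrange equations.
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v
```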

    3D reconstruction of coronary arteries from angiographic sequences for interventional assistance

    Introduction -- Review of literature -- Research hypothesis and objectives -- Methodology -- Results and discussion -- Conclusion and future perspectives

    Picture processing for enhancement and recognition

    Recent years have been characterized by incredible growth in computing power and storage capabilities, communication speed, and bandwidth availability, for both desktop platforms and mobile devices. The combination of these factors has led to a new era of multimedia applications: browsing of huge image archives, consultation of online video databases, location-based services, and many others. Multimedia is almost everywhere and requires high-quality data, easy retrieval of multimedia content, and increased network access capacity and bandwidth per user. Meeting all these requirements calls for efforts in various research areas, ranging from signal processing and image and video analysis to communication protocols. The research activity developed during these three years concerns the field of multimedia signal processing, with particular attention to image and video analysis and processing. Two main topics have been addressed: the first relates to image and video reconstruction/restoration (using super-resolution techniques) in web-based applications for the consumption of multimedia content; the second relates to image analysis for location-based systems in indoor scenarios.

    The first topic concerns image and video processing; in particular, the focus has been on developing algorithms for super-resolution reconstruction of images and video sequences, in order to ease the consumption of multimedia data over the web. On one hand, recent years have been characterized by an incredible proliferation and surprising success of user-generated multimedia content, as well as of distributed and collaborative multimedia databases over the web. This has raised serious issues related to their management and maintenance: bandwidth limitations and service costs are important factors when dealing with mobile multimedia consumption. On the other hand, the current multimedia consumer market has been characterized by the advent of cheap but rather high-quality high-definition displays. However, this trend is only partially supported by the deployment of high-resolution multimedia services, so the resulting disparity between content and display formats has to be addressed, and older productions need to be either re-mastered or post-processed in order to be broadcast in HD. In this scenario, super-resolution reconstruction represents a major solution. Image and video super-resolution techniques allow the original spatial resolution to be restored from low-resolution compressed data. In this way, both content and service providers, not to mention the end users, are relieved of the burden of providing and supporting large multimedia data transfers.

    The second topic addressed during my PhD research activity is the implementation of an image-based positioning system for an indoor navigator. As modern mobile devices become faster, classical signal processing can be applied to new applications such as location-based services. The exponential growth of portable devices such as smartphones and PDAs, equipped with embedded motion (accelerometer) and rotation (gyroscope) sensors, Internet connectivity, and high-resolution cameras, makes them ideal for INS (Inertial Navigation System) applications that aim to support the localization/navigation of objects and/or users in indoor environments where common localization systems, such as GPS (Global Positioning System), fail; hence the need for alternative positioning techniques.

    A series of intensive tests has been carried out, showing how modern signal processing techniques can be successfully applied in different scenarios, from image and video enhancement to image recognition for localization purposes, providing low-cost solutions and ensuring real-time performance.
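
    As a toy illustration of the multi-frame idea behind super-resolution reconstruction, the sketch below fuses several low-resolution frames with known sub-pixel shifts onto a finer grid (shift-and-add). It assumes NumPy and that registration is already known; it is not the reconstruction algorithm developed in the thesis.

```python
# Toy multi-frame super-resolution sketch (shift-and-add): fuse several
# low-resolution frames with known sub-pixel shifts onto a finer grid.
# Registration is assumed given; this is not the thesis's algorithm.
import numpy as np

def shift_and_add(low_res_frames, shifts, scale=2):
    """low_res_frames: list of (h, w) arrays; shifts: list of (dy, dx)
    sub-pixel offsets in low-resolution pixel units."""
    h, w = low_res_frames[0].shape
    hr_sum = np.zeros((h * scale, w * scale))
    hr_count = np.zeros_like(hr_sum)
    for frame, (dy, dx) in zip(low_res_frames, shifts):
        # Place each low-res sample at its (rounded) high-res grid position.
        ys = (np.arange(h) * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w) * scale + int(round(dx * scale))) % (w * scale)
        hr_sum[np.ix_(ys, xs)] += frame
        hr_count[np.ix_(ys, xs)] += 1
    hr_count[hr_count == 0] = 1  # leave never-observed pixels at zero
    return hr_sum / hr_count

# Example usage with four quarter-pixel-shifted observations of one scene:
#   hr = shift_and_add([f00, f01, f10, f11],
#                      [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)], scale=2)
```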

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging at the interface between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while an understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book collects representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology, and applications of pattern recognition.

    Object-based 3-d motion and structure analysis for video coding applications

    Ankara : Department of Electrical and Electronics Engineering and the Institute of Engineering and Sciences of Bilkent University, 1997. Thesis (Ph.D.)--Bilkent University, 1997. Includes bibliographical references (leaves 102-115). Novel 3-D motion analysis tools, which can be used in object-based video codecs, are proposed. In these tools, the movements of the objects, which are observed through 2-D video frames, are modeled in 3-D space. Segmentation of 2-D frames into objects and 2-D dense motion vectors for each object are necessary as inputs for the proposed 3-D analysis. 2-D motion-based object segmentation is obtained by a Gibbs formulation; the initialization is achieved by using a fast graph-theory-based region segmentation algorithm, which is further improved to utilize the motion information. Moreover, the same Gibbs formulation gives the needed dense 2-D motion vector field. Formulations for the 3-D motion models are given for both rigid and non-rigid moving objects. Deformable motion is modeled by a Markov random field which permits elastic relations between neighbors, whereas rigid 3-D motion parameters are estimated using the E-matrix method. Some improvements to the E-matrix method are proposed to make this algorithm more robust to gross errors, such as those resulting from incorrect segmentation of 2-D correspondences between frames. Two algorithms are proposed to obtain dense depth estimates, which are robust to input errors and suitable for encoding, respectively. While the former of these two algorithms simply gives a MAP estimate, the latter uses rate-distortion theory. Finally, the 3-D motion models are further utilized for occlusion detection and motion-compensated temporal interpolation, and it is observed that for both applications 3-D motion models have superiority over their 2-D counterparts. Simulation results on artificial and real data show the advantages of the 3-D motion models in object-based video coding algorithms. Alatan, A. Aydin. Ph.D.
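
    Since the abstract refers to estimating rigid 3-D motion parameters with the E-matrix method, the following is a minimal sketch of a linear eight-point estimate of the essential matrix from calibrated point correspondences, using NumPy's SVD. It is meant only to make the term concrete; the robustified E-matrix estimation proposed in the thesis is not reproduced here.

```python
# Minimal sketch of the linear eight-point estimate of an essential matrix
# from calibrated point correspondences; not the thesis's robustified method.
import numpy as np

def estimate_essential(x1, x2):
    """x1, x2: (N, 2) arrays of calibrated (normalized) image coordinates
    of corresponding points in two frames, with N >= 8."""
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        # Each correspondence contributes one row of A @ vec(E) = 0,
        # from the epipolar constraint x2^T E x1 = 0.
        A[i] = [u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    E = vt[-1].reshape(3, 3)
    # Enforce the essential-matrix structure: two equal singular values, one zero.
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```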