
    Evaluation of pixel- and motion vector-based global motion estimation for camera motion characterization

    Pixel-based and motion vector-based global motion estimation (GME) techniques are evaluated in this paper with an automatic system for camera motion characterization. First, the GME techniques are compared using a frame-by-frame PSNR measurement on five video sequences. The best motion vector-based GME method is then evaluated together with a common and a simplified pixel-based GME technique for camera motion characterization. For this, selected unedited videos from the TRECVid 2005 BBC rushes corpus are used. We evaluate how the estimation accuracy of the global motion parameters affects the results for camera motion characterization in terms of retrieval measures. The results for this characterization show that the simplified pixel-based GME technique obtains results comparable to the common pixel-based GME method and significantly outperforms an earlier proposed motion vector-based GME approach.
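
    The abstract does not spell out the PSNR protocol. A minimal sketch of the kind of frame-by-frame check it implies might look like the following, assuming 8-bit grayscale frames stored as NumPy arrays and a list of 3x3 homographies holding the estimated global motion parameters; the frame-to-frame mapping convention and all names here are illustrative, not the paper's.

```python
import cv2
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between two 8-bit frames."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def frame_by_frame_psnr(frames, homographies):
    """Warp each previous frame with the estimated global motion and compare
    it against the current frame. homographies[i] is assumed to map frame i
    into the coordinates of frame i + 1 (an assumption made for this sketch)."""
    scores = []
    for i, H in enumerate(homographies):
        h, w = frames[i + 1].shape[:2]
        compensated = cv2.warpPerspective(frames[i], H, (w, h))
        scores.append(psnr(frames[i + 1], compensated))
    return scores
```

    A higher average PSNR of the motion-compensated frames indicates more accurate global motion parameters, which is the quantity the comparison in the paper is built on.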

    Design of an imaging spectrometer for Earth observation using freeform mirrors

    Thomas Peschel (1), Christoph Damm (1), Matthias Beier (1), Andreas Gebhard (1), Stefan Risse (1), Ingo Walter (2), Ilse Sebastian (2), David Krutz (2); (1) Fraunhofer Institut für Angewandte Optik und Feinwerktechnik, Jena; (2) DLR, Institut für Optische Sensorsysteme, Berlin.

    In 2017 the new hyperspectral DLR Earth Sensing Imaging Spectrometer (DESIS) will be integrated into the Multi-User System for Earth Sensing (MUSES) platform /1/ installed on the International Space Station (ISS). The DESIS instrument is developed under the responsibility of DLR. It will deliver images of the Earth with a spatial resolution of 30 m on the ground in 235 spectral channels covering the wavelength range from 400 nm to 1 µm. As a partner in the development team, Fraunhofer IOF is responsible for the optical system of the imaging spectrometer.

    The optical system consists of two primary components: a compact Three-Mirror Anastigmat (TMA) telescope images the ground strip under observation onto a slit, and the following spectrometer re-images the slit onto the detector while performing the spectral separation with a reflective grating. The whole optical system is realized with metal-based mirrors whose surfaces are produced by Single-Point Diamond Turning (SPDT). Since the spectral range lies in the visible, post-processing of the surfaces by nickel plating is necessary; the final surface shape and roughness are achieved by a second SPDT step and subsequent Magneto-Rheological Finishing. The TMA provides a focal length of 320 mm at an aperture of F/2.8. Its mechanical design relies on the Duolith technology of IOF as well as on optical and mechanical reference structures on the mirrors /2/ manufactured in the same SPDT run. This strategy allows for a significantly simplified adjustment of the optical system /3/.

    The spectrometer was designed on the basis of the so-called Offner scheme. Because of the high aperture of the system, a freeform mirror had to be introduced in order to provide good imaging quality over the whole spectral range. This optical design requires a grating on a curved surface; technologies are being developed to fabricate the grating either by SPDT or, alternatively, by laser lithography. The mechanical design uses lightweight housing elements which wrap the optical path to suppress stray light. An athermal design is obtained by using the same metal for mirrors and housing. To provide high adjustment precision, the housing elements also carry reference and mounting features made by SPDT. This approach allows for a stiff mechanical set-up of the system, compatible with the harsh requirements of a space flight.

    References:
    /1/ N. Humphrey, "A View From Above: Imaging from the ISS", Teledyne DALSA 2014, http://possibility.teledynedalsa.com/a-view-from-above/
    /2/ S. Scheiding et al., "Ultra-precisely manufactured mirror assemblies with well-defined reference structures", Proc. SPIE 7739, 2010.
    /3/ T. Peschel et al., "Anamorphotic telescope for earth observation in the mid-infrared range", ICSO 201

    AllerCatPro 2.0: a web server for predicting protein allergenicity potential

    Proteins in food and personal care products can pose a risk for an immediate immunoglobulin E (IgE)-mediated allergic response. Bioinformatic tools can help predict and investigate the allergenic potential of proteins. Here we present AllerCatPro 2.0, a web server that predicts protein allergenicity potential with better accuracy than other computational methods and offers new features that help assessors make informed decisions. AllerCatPro 2.0 predicts the similarity between input proteins, using both their amino acid sequences and predicted 3D structures, towards the most comprehensive datasets of reliable proteins associated with allergenicity. These datasets currently include 4979 protein allergens, 162 low allergenic proteins, and 165 autoimmune allergens with manual expert curation from the databases of WHO/International Union of Immunological Societies (IUIS), Comprehensive Protein Allergen Resource (COMPARE), Food Allergy Research and Resource Program (FARRP), UniProtKB and Allergome. Various examples of profilins, autoimmune allergens, low allergenic proteins, very large proteins, and nucleotide input sequences showcase the utility of AllerCatPro 2.0 for predicting protein allergenicity potential. The AllerCatPro 2.0 web server is freely accessible at https://allercatpro.bii.a-star.edu.sg
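
    AllerCatPro's actual similarity computation is not described in this abstract. Purely as an illustration of where a sequence-level screen against a reference set fits into such a workflow, a toy comparison using only the Python standard library could look like this; the reference dictionary, its placeholder sequence, and the function names are invented for the example and are not part of AllerCatPro.

```python
from difflib import SequenceMatcher

# Hypothetical, truncated placeholder entry; a real assessment would query
# curated allergen datasets such as those listed above (WHO/IUIS, COMPARE, FARRP, ...).
REFERENCE_ALLERGENS = {
    "example_profilin": "MSWQAYVDDHLMCEIEGHHLASAAILGHDGSVWAQS",
}

def closest_reference(query_sequence):
    """Return the reference entry with the highest crude similarity ratio."""
    best_name, best_ratio = None, 0.0
    for name, seq in REFERENCE_ALLERGENS.items():
        ratio = SequenceMatcher(None, query_sequence, seq).ratio()
        if ratio > best_ratio:
            best_name, best_ratio = name, ratio
    return best_name, best_ratio
```

    This crude ratio says nothing about the structure-aware 3D comparison AllerCatPro 2.0 performs; it only shows the slot in a screening pipeline that a proper similarity score would occupy.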

    Rushes video summarization using a collaborative approach

    This paper describes the video summarization system developed by the partners of the K-Space European Network of Excellence for the TRECVID 2008 BBC rushes summarization evaluation. We propose an original method based on individual content segmentation and selection tools in a collaborative system. Our system is organized in several steps: first, we segment the video; second, we identify relevant and redundant segments; and finally, we select a subset of segments to concatenate and build the final summary with video acceleration incorporated. We analyze the performance of our system through the TRECVID evaluation.
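
    As a rough sketch of the segment-selection stage described above, a greedy selection under a duration budget could be organized as follows. The data layout, scoring fields, and acceleration handling are placeholders chosen for illustration, not the K-Space implementation.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float      # seconds into the rushes video
    end: float
    relevance: float  # higher = more likely to be kept
    redundant: bool   # flagged by a redundancy detector

def select_segments(segments, target_duration, acceleration=2.0):
    """Greedily pick non-redundant segments by relevance until the
    accelerated total length reaches the summary budget."""
    chosen, used = [], 0.0
    for seg in sorted(segments, key=lambda s: s.relevance, reverse=True):
        if seg.redundant:
            continue
        length = (seg.end - seg.start) / acceleration
        if used + length <= target_duration:
            chosen.append(seg)
            used += length
    # Present the summary in temporal order.
    return sorted(chosen, key=lambda s: s.start)
```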

    A collaborative approach to video summarization

    This poster describes an approach to video summarization based on the combination of several decision mechanisms provided by the partners of the K-Space European Network of Excellence. The system has been applied to the TRECVID 2008 BBC rushes summarization task.

    From Sprites to Global Motion Estimation

    Video coding techniques have evolved considerably over recent decades. Since digital video representation and transmission replaced their analogue counterparts, efficient compression of digitized video has been a very important part of the whole processing chain. In addition to established systems such as TV broadcast and storage media like DVD or Blu-ray Disc, new platforms and devices displaying video have emerged, most notably handheld devices such as mobile phones and portable game consoles, and the Internet. Popular Internet platforms, e.g. YouTube, Myvideo or Sevenload, illustrate how drastically the amount of video data is growing. Furthermore, the development of High-Definition (HD) displays demands high-definition content, i.e. video with a higher resolution than the established TV broadcast format. It has already been shown that the latest video coding standard, H.264/AVC, which has outstanding coding performance up to Standard-Definition (SD) resolution, can be significantly improved for HD content by applying enhanced and new techniques. All these aspects point to an almost inconceivable growth of digital video material across all media, requiring ongoing research and development to extend existing coding techniques and to find new approaches for efficiently encoding this huge amount of data.

    Several approaches to video coding have been developed. The most successful technique is DCT-based motion-compensated prediction. This so-called hybrid video coding approach has been adopted in several standardization processes and is used today in almost all of the applications mentioned above. Alternative approaches have also been pursued. One of them, which can be described as model-based, first analyzes the video content, separates it into objects, and encodes and transmits these objects separately; at the receiver, the objects are decoded and recomposed into the original content. This technique brought a very high coding gain compared with the hybrid approach and was therefore standardized about ten years ago. However, this object-based coding approach has significant drawbacks, e.g. the object segmentation required in the pre-analysis step and the content-dependent coding performance. Since the standardization, continuous efforts have been made to improve this type of coding. Improvements have been shown for one object representation called a sprite, in which the entire background of a video sequence is mapped into a single image. New and more efficient algorithms have been developed to build such a sprite, and sprite representations have been integrated into encoding environments to obtain coding gains over hybrid video coding. Nevertheless, a significant number of open issues remain before this kind of coding, known as sprite coding since the standardization process, can be brought to market.

    The motivation of this thesis is therefore to build a bridge between sprite coding and hybrid video coding, combining the advantages of both and minimizing their drawbacks. It starts by improving the classical sprite coding technique in all parts of the processing chain to reach maximum compression efficiency in the classical setting. The sprite-based representation is then integrated into a coding environment that uses the latest standardized video codec, H.264/AVC. Although H.264/AVC was not designed for model- or object-based representations of the input data, a significant improvement in coding efficiency is shown for this environment. Furthermore, pre-analysis steps are examined, such as automatic object segmentation and content analysis that decides whether the sprite-based approach is applicable to a given video. Results are evaluated with established objective image-quality metrics, including metrics adapted to subjective human perception, which emphasize the subjective improvement of videos coded with sprites. Finally, a filter design is introduced that reuses techniques from sprite generation and shows great potential not only in coding environments but also as post-processing for video enhancement and as pre-processing for further video analysis techniques.
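
    To make the sprite idea concrete, the sketch below registers consecutive frames with a global (projective) motion model and pastes them into a common background image. It uses OpenCV feature tracking plus RANSAC homography fitting as one plausible global motion estimation backend; the thesis's own estimation and blending algorithms are more elaborate, and the parameter values and canvas size here are illustrative only.

```python
import cv2
import numpy as np

def global_motion(prev_gray, curr_gray):
    """Estimate a homography mapping the current frame into the previous one."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    H, _mask = cv2.findHomography(nxt[ok], pts[ok], cv2.RANSAC, 3.0)
    return H

def build_sprite(gray_frames, sprite_size=(2048, 1024)):
    """Accumulate a list of grayscale frames into one background sprite,
    with frame 0 defining the sprite coordinate system."""
    sprite = np.zeros(sprite_size[::-1], dtype=np.uint8)
    H_acc = np.eye(3)  # frame 0 -> sprite (identity)
    for i, frame in enumerate(gray_frames):
        if i > 0:
            # Compose frame i -> frame i-1 with the accumulated frame i-1 -> sprite map.
            H_acc = H_acc @ global_motion(gray_frames[i - 1], frame)
        warped = cv2.warpPerspective(frame, H_acc, sprite_size)
        sprite[sprite == 0] = warped[sprite == 0]  # naive "first hit wins" blending
    return sprite
```

    The accumulated homography chain is exactly where long-term registration errors build up, which is why the later abstracts in this listing pay so much attention to accurate frame-to-frame estimation and to limiting error accumulation.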

    Object-Based Multiple Sprite Coding of Unsegmented Videos using H.264/AVC

    In spite of recent progress in the development of hybrid block-based video codecs, it has been shown that for low-bitrate scenarios there is still coding gain to be obtained with object-based techniques. We present a sprite-based codec, built on the latest H.264 features, that uses an inbuilt segmentation approach for scenes recorded by a rotating camera. The segmentation itself builds on reliable background estimation from the sprite and short-term image registration. Moreover, we generate multiple sprites based on physical camera parameter estimation, which overcomes three of the main drawbacks of sprite coding techniques: first, the coding cost for the sprite image is minimized; second, multiple sprites allow temporal background refresh; and finally, registration error accumulation is kept very small. Experimental results show that this coding approach significantly outperforms the latest H.264 extensions using hierarchical B-pictures. Index Terms: Object-based video coding, sprite coding, multiple sprites, H.264/AVC.
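
    The "reliable background estimation from the sprite" is not detailed in this abstract. A common baseline that such an estimator can be compared against is a per-pixel temporal median over frames already registered into the sprite coordinate system; the sketch below shows that baseline only, under that assumption, and is not the paper's estimator.

```python
import numpy as np

def median_background(registered_frames):
    """Per-pixel temporal median over globally aligned frames.

    registered_frames: sequence of H x W (or H x W x 3) arrays, all warped into
    a common (sprite) coordinate system. Static background survives the median,
    while foreground that moves relative to the background is suppressed.
    """
    stack = np.stack(list(registered_frames), axis=0).astype(np.float32)
    return np.median(stack, axis=0)
```

    A frame-wise difference against such a background image then yields a rough foreground/background labeling of the kind an inbuilt segmentation stage needs as input.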

    Windowed image registration for robust mosaicing of scenes with large background occlusions

    We propose an enhanced window-based approach to local image registration for robust video mosaicing in scenes with arbitrarily moving foreground objects. Unlike other approaches, we estimate the image transformation accurately, without any pre-segmentation, even if large background regions are occluded. We apply a windowed hierarchical frame-to-frame registration based on image pyramid decomposition. At the lowest resolution level, phase correlation is used for initial parameter estimation, while at the subsequent levels a robust Newton-based energy minimization of the mean-squared error of the compensated images is conducted. To overcome the degradation caused by spatial image interpolation during warping, i.e. aliasing effects from under-sampling, final pixel values are assigned in an up-sampled image domain using a Daubechies bi-orthogonal synthesis filter. Experimental results show the excellent performance of the method compared to recently published methods. The image registration is sufficiently accurate to allow open-loop parameter accumulation for long-term motion estimation.
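
    Only the coarse initialization step is easy to reproduce from the abstract: OpenCV exposes a translation estimate via phase correlation that can be run on a downsampled pyramid level, as sketched below. The windowing scheme and the robust Newton-based refinement of the full motion model are not shown, and the pyramid depth is an illustrative choice.

```python
import cv2
import numpy as np

def coarse_translation(ref, target, levels=3):
    """Translation estimate from phase correlation at a coarse pyramid level.

    ref, target: single-channel images. `levels` controls how far the images
    are downsampled before correlation; the result is scaled back to
    full-resolution pixels.
    """
    a = ref.astype(np.float32)
    b = target.astype(np.float32)
    for _ in range(levels):
        a, b = cv2.pyrDown(a), cv2.pyrDown(b)
    (dx, dy), _response = cv2.phaseCorrelate(a, b)
    scale = 2.0 ** levels
    return dx * scale, dy * scale
```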

    BTDI Detector technology for reconnaissance application

    The Institute of Optical Sensor Systems (OS) at the Robotics and Mechatronics Center of the German Aerospace Center (DLR) has more than 30 years of experience with high-resolution imaging technology. This paper presents the institute's scientific results on a leading-edge detector design in a BTDI (Bidirectional Time Delay and Integration) architecture. The project demonstrates an approved technological design for high- or multi-spectral resolution spaceborne instruments. DLR OS and BAE Systems have been driving the technology of new detectors and focal plane array (FPA) designs for future projects, along with new manufacturing accuracy, in order to keep pace with ambitious scientific and user requirements. Driven by customer requirements and available technologies, the current generation of spaceborne sensor systems focuses on high spectral resolution in the VIS/NIR range to meet the requirements of Earth and planetary observation systems. The combination of large swath and high spectral resolution with intelligent synchronization control, fast read-out ADC chains and new focal plane concepts opens the door to new remote sensing and smart deep-space instruments. The paper gives an overview of the detector development and verification program at DLR at detector module level, new control possibilities in synchronization control mode, and key parameters such as PRNU, DSNU, MTF, SNR, linearity, spectral response, and quantum efficiency.
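
    The abstract lists PRNU, DSNU, MTF, SNR, linearity, spectral response and quantum efficiency without giving the measurement procedure. As a hedged illustration of how the first two are commonly estimated from dark and flat-field frame stacks (working definitions only, not DLR's verification program), a rough calculation looks like this.

```python
import numpy as np

def dsnu_prnu(dark_frames, flat_frames):
    """Rough DSNU and PRNU estimates from frame stacks of shape (N, H, W).

    DSNU: spatial standard deviation of the mean dark frame (fixed-pattern
    offset non-uniformity). PRNU: spatial standard deviation of the
    dark-corrected mean flat field, relative to its mean response.
    """
    dark_mean = np.mean(dark_frames, axis=0)
    flat_mean = np.mean(flat_frames, axis=0) - dark_mean
    dsnu = float(np.std(dark_mean))
    prnu = float(np.std(flat_mean) / np.mean(flat_mean))
    return dsnu, prnu
```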