
    Markov mezők a képmodellezésben, alkalmazásuk az automatikus képszegmentálás területén = Markovian Image Models: Applications in Unsupervised Image Segmentation

    1) We proposed a monogrid MRF model that combines color and texture features to improve the quality of segmentation results, and we also solved the estimation of the model parameters. This work was published in the Image and Vision Computing journal. 2) We proposed an RJMCMC (Reversible Jump Markov Chain Monte Carlo) sampling method that identifies multi-dimensional Gaussian mixtures. Using this technique, we developed a fully automatic color image segmentation algorithm. These results were published at the BMVC 2004 international conference and in the Image and Vision Computing journal. 3) We proposed a new multilayer MRF model that segments an image based on multiple cues (such as color, texture, or motion) and applied it to the color- and motion-based segmentation of video objects. This work was published at the HACIPPR 2005 and ACCV 2006 international conferences. The related work on optic flow computation and on color-, texture-, and motion-based GVF active contours, done with my student, Mr. Peter Horvath, won first prize at the local Student Research Competition in 2004; those results were presented at the KEPAF 2004 conference. 4) We introduced a new shape prior, called the 'gas of circles', using active contour models, yielding a segmentation method that takes the shape of the target object into account. This work was done in collaboration with the Ariana group of INRIA, France, and my PhD student, Mr. Peter Horvath; the results were published at the ICPR 2006 and ICCVGIP 2006 conferences. As a preliminary study, we also developed an active contour model based on shape moments, published at HACIPPR 2005.
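
    As a rough illustration of the MRF machinery behind result 1), the sketch below segments a grayscale image with per-class Gaussian likelihoods and a Potts smoothness prior, optimized by ICM (iterated conditional modes). It is a minimal reconstruction, not the published monogrid color-texture model: the class means and variances are assumed known here, whereas the paper also estimates them.

        import numpy as np

        def icm_segment(image, means, sigmas, beta=1.0, iters=10):
            """image: 2D array; means/sigmas: per-class Gaussian parameters."""
            k = len(means)
            # Per-class log-likelihood of each pixel (up to a constant).
            ll = np.stack([-0.5 * ((image - m) / s) ** 2 - np.log(s)
                           for m, s in zip(means, sigmas)])
            labels = ll.argmax(axis=0)  # initialize by maximum likelihood
            h, w = image.shape
            for _ in range(iters):
                for y in range(h):
                    for x in range(w):
                        best, best_e = labels[y, x], np.inf
                        for c in range(k):
                            # Energy = -log-likelihood + Potts penalty for
                            # each 4-neighbor carrying a different label.
                            e = -ll[c, y, x]
                            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                                ny, nx = y + dy, x + dx
                                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != c:
                                    e += beta
                            if e < best_e:
                                best, best_e = c, e
                        labels[y, x] = best
            return labels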

    Silhouette coverage analysis for multi-modal video surveillance

    In order to improve the accuracy of video-based object detection, the proposed multi-modal video surveillance system takes advantage of the different kinds of information represented by visual, thermal and/or depth imaging sensors. The multi-modal object detector of the system can be split into two consecutive parts: registration and coverage analysis. The multi-modal image registration is performed using a three-step silhouette-mapping algorithm which detects the rotation, scale and translation between moving objects in the visual, (thermal) infrared and/or depth images. First, moving object silhouettes are extracted to separate the calibration objects, i.e., the foreground, from the static background. Key components are dynamic background subtraction, foreground enhancement and automatic thresholding. Then, 1D contour vectors are generated from the resulting multi-modal silhouettes using silhouette boundary extraction, a Cartesian-to-polar transform and radial vector analysis. Next, to retrieve the rotation angle and the scale factor between the multi-sensor images, these contours are mapped onto each other using circular cross-correlation and contour scaling. Finally, the translation between the images is calculated by maximizing the binary correlation. The silhouette coverage analysis also starts with moving object silhouette extraction. Then, it uses the registration information, i.e., the rotation angle, scale factor and translation vector, to map the thermal, depth and visual silhouette images onto each other. Finally, the coverage of the resulting multi-modal silhouette map is computed and analyzed over time to reduce false alarms and to improve object detection. Prior experiments on real-world multi-sensor video sequences indicate that automated multi-modal video surveillance is promising, and this paper shows that merging information from multi-modal video further improves the detection results.
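
    The rotation-recovery step described above (circular cross-correlation of 1D polar contour signatures) can be sketched as follows. The function names, the angular binning and the FFT-based correlation are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def polar_signature(silhouette, n_bins=360):
            """Radial distance from the centroid to the silhouette boundary,
            sampled over angle (a 1D contour vector)."""
            ys, xs = np.nonzero(silhouette)
            cy, cx = ys.mean(), xs.mean()
            angles = np.arctan2(ys - cy, xs - cx)
            radii = np.hypot(ys - cy, xs - cx)
            bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
            sig = np.zeros(n_bins)
            np.maximum.at(sig, bins, radii)  # keep the outermost point per bin
            return sig

        def rotation_offset(sig_a, sig_b):
            """Circular cross-correlation via FFT; the lag that maximizes it
            gives the rotation angle between the two silhouettes."""
            corr = np.fft.ifft(np.fft.fft(sig_a) * np.conj(np.fft.fft(sig_b))).real
            lag = int(np.argmax(corr))
            return 360.0 * lag / len(sig_a)  # degrees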

    Interaction between high-level and low-level image analysis for semantic video object extraction


    Advanced content-based semantic scene analysis and information retrieval: the SCHEMA project

    The aim of the SCHEMA Network of Excellence is to bring together a critical mass of universities, research centers, industrial partners and end users in order to design a reference system for content-based semantic scene analysis, interpretation and understanding. Relevant research areas include: content-based multimedia analysis and automatic annotation of semantic multimedia content, combined textual and multimedia information retrieval, the Semantic Web, the MPEG-7 and MPEG-21 standards, user interfaces and human factors. In this paper, recent advances in content-based analysis, indexing and retrieval of digital media within the SCHEMA Network are presented. These advances will be integrated in the SCHEMA module-based, expandable reference system.

    The aceToolbox: low-level audiovisual feature extraction for retrieval and classification

    In this paper we present an overview of a software platform, termed the aceToolbox, that has been developed within the aceMedia project and provides global and local low-level feature extraction from audio-visual content. The toolbox is based on the MPEG-7 eXperimental Model (XM), with extensions to provide descriptor extraction from arbitrarily shaped image segments, thereby supporting local descriptors that reflect real image content. We describe the architecture of the toolbox and give an overview of the descriptors supported to date. We also briefly describe the segmentation algorithm provided. We then demonstrate the usefulness of the toolbox in the context of two different content processing scenarios: similarity-based retrieval in large collections and scene-level classification of still images.
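
    To illustrate the idea of descriptor extraction from arbitrarily shaped segments, the toy sketch below computes a color histogram restricted to a binary segment mask instead of a rectangular window. This is an assumed illustration, not aceToolbox or MPEG-7 XM code.

        import numpy as np

        def masked_color_histogram(image, mask, bins=8):
            """image: HxWx3 uint8 array; mask: HxW boolean segment mask.
            Returns a normalized joint RGB histogram of the segment pixels."""
            pixels = image[mask]  # only the pixels inside the segment
            hist, _ = np.histogramdd(pixels, bins=(bins,) * 3,
                                     range=((0, 256),) * 3)
            return (hist / max(len(pixels), 1)).ravel()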

    Segmentation and tracking of video objects for a content-based video indexing context

    This paper examines the problem of segmentation and tracking of video objects for content-based information retrieval. Segmentation and tracking of video objects play an important role in the index creation and user request definition steps. The object is initially selected using a semi-automatic approach: a user-based selection is required to roughly define the object to be tracked. In this paper, we propose two different methods to derive an accurate contour definition from the user selection. The first is based on an active contour model which progressively refines the selection by fitting the natural edges of the object, while the second uses a binary partition tree.
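
    The first refinement strategy can be sketched with scikit-image's active contour (snake) implementation: a rough circular user selection is deformed toward nearby image edges. The helper name and all parameter values are illustrative assumptions, not those of the paper.

        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        def refine_selection(gray_image, center, radius, n_points=200):
            """Refine a rough circular selection around `center` (row, col)."""
            t = np.linspace(0, 2 * np.pi, n_points)
            init = np.column_stack([center[0] + radius * np.sin(t),
                                    center[1] + radius * np.cos(t)])
            # Smoothing lets the snake feel broad edges before locking on.
            snake = active_contour(gaussian(gray_image, sigma=3), init,
                                   alpha=0.015, beta=10, gamma=0.001)
            return snake  # refined (row, col) contour points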

    Hand gesture recognition with jointly calibrated Leap Motion and depth sensor

    Novel 3D acquisition devices like depth cameras and the Leap Motion have recently reached the market. Depth cameras provide a complete 3D description of the framed scene, while the Leap Motion sensor is a device explicitly targeted at hand gesture recognition that provides only a limited set of relevant points. This paper shows how to jointly exploit the two types of sensors for accurate gesture recognition. An ad hoc solution for the joint calibration of the two devices is presented first. Then a set of novel feature descriptors is introduced for both the Leap Motion and the depth data. Various schemes based on the distances of the hand samples from the centroid, on the curvature of the hand contour and on the convex hull of the hand shape are employed, and the use of Leap Motion data to aid feature extraction is also considered. The proposed feature sets are fed to two different classifiers, one based on multi-class SVMs and one exploiting Random Forests. Different feature selection algorithms have also been tested in order to reduce the complexity of the approach. Experimental results show that very high accuracy can be obtained with the proposed method, and the current implementation is able to run in real time.
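
    One of the feature families above, distances of hand contour samples from the centroid, is easy to sketch; the fixed-length resampling, normalization and classifier setup below are assumptions for illustration, not the paper's exact pipeline.

        import numpy as np
        from sklearn.svm import SVC

        def centroid_distance_features(contour, n_samples=64):
            """contour: Nx2 points of the hand outline. Returns a fixed-length,
            scale-normalized vector of distances from the centroid."""
            centroid = contour.mean(axis=0)
            d = np.linalg.norm(contour - centroid, axis=1)
            idx = np.linspace(0, len(d) - 1, n_samples).astype(int)  # resample
            d = d[idx]
            return d / (d.max() + 1e-9)  # scale invariance

        # Hypothetical usage with labeled training contours:
        # X = np.stack([centroid_distance_features(c) for c in train_contours])
        # clf = SVC(kernel="rbf").fit(X, train_labels)  # multi-class SVM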