1,257 research outputs found

    A survey of the application of soft computing to investment and financial trading


    Contributions to region-based image and video analysis: feature aggregation, background subtraction and description constraining

    Unpublished doctoral thesis read at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Date of defence: 22-01-2016. Access to the full text is embargoed until 22-07-2017.
    The use of regions for image and video analysis has traditionally been motivated by their ability to reduce the number of processed units and, hence, the number of required decisions. However, as we explore in this thesis, this is just one of the potential advantages that regions provide. When dealing with regions, two description spaces can be differentiated: the decision space, in which regions are shaped (region segmentation), and the feature space, in which regions are used for analysis (region-based applications). These two spaces are highly related: the choices made in the decision space severely affect performance in the feature space. Accordingly, this thesis contributes to both spaces.
    The contributions to region segmentation are twofold. First, we give a twist to a classical segmentation technique, Mean-Shift, by exploring new ways to set the spectral kernel bandwidth automatically, allowing the density of a given feature to be estimated locally. Second, we propose a method to describe the micro-texture of a pixel neighbourhood using an easily customisable filter-bank methodology based on the discrete cosine transform (DCT).
    The rest of the thesis is devoted to region-based approaches to several highly topical problems in computer vision, organised around two broad tasks: background subtraction (BS) and local description (LD). Concerning BS, regions are used as complementary cues to refine pixel-based BS algorithms: they provide illumination-robust cues and store the background dynamics in a region-driven background model. Concerning LD, the region is used to reshape the description area that is usually fixed for local descriptors. Region-masked versions of classical two-dimensional and three-dimensional local descriptors are designed, and the resulting descriptors are applied to the task of object identification under a novel neural-oriented strategy. Furthermore, a local description scheme based on a fuzzy use of the region membership is derived. This characterisation scheme has been geometrically adapted to account for projective deformations, providing a suitable tool for finding corresponding points in wide-baseline scenarios.
    Experiments have been conducted for every contribution, discussing the potential benefits and the limitations of the proposed schemes. Overall, the results suggest that the region, conditioned on a successful aggregation process, is a reliable and useful tool for extrapolating pixel-level results, reducing semantic noise, isolating significant object cues and constraining local descriptions. The methods and approaches described throughout this thesis thus provide alternative or complementary solutions to pixel-based image processing.
    This work was partially supported by the Spanish Government through its FPU grant programme and the projects TEC2007-65400 (SemanticVideo), TEC2011-25995 (EventVideo) and TEC2014-53176-R (HAVideo); the European Commission (IST-FP6-027685, Mesh); the Comunidad de Madrid (S-0505/TIC-0223, ProMultiDis-CM); and the Spanish Administration Agency CENIT 2007-1007 (VISION).
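    The DCT-based micro-texture idea above can be illustrated with a minimal sketch: project a square pixel neighbourhood onto an orthonormal 2D DCT basis and keep the AC coefficients as a descriptor of local variability. The function names, patch size and the choice to discard the DC term are illustrative assumptions, not the thesis's actual filter bank.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal 1D DCT-II basis matrix (n x n); row k is frequency k."""
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0, :] *= np.sqrt(1.0 / n)
    basis[1:, :] *= np.sqrt(2.0 / n)
    return basis

def micro_texture_descriptor(patch):
    """Describe a square pixel neighbourhood by its 2D DCT coefficients.

    The DC coefficient (mean intensity) is discarded so the descriptor
    captures local variability rather than absolute brightness.
    """
    n = patch.shape[0]
    D = dct_basis(n)
    coeffs = D @ patch @ D.T          # separable 2D DCT
    return coeffs.flatten()[1:]       # drop the DC term

patch = np.random.default_rng(0).random((8, 8))
desc = micro_texture_descriptor(patch)
print(desc.shape)  # (63,)
```

    Because the basis is orthonormal, the full coefficient set is an invertible transform of the patch; truncating or weighting coefficients yields the customisable filter-bank behaviour the abstract refers to.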

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval. Published or submitted for publication.

    Object Tracking

    Object tracking is the estimation of the trajectories of moving objects in a sequence of images. Automating computer-based object tracking is a difficult task: the dynamics of the many changing parameters that represent the features and motion of the objects, as well as temporary partial or full occlusion of the tracked objects, have to be considered. This monograph presents the development of object-tracking algorithms, methods and systems. Both the state of the art of object-tracking methods and the new trends in research are described in this book. Fourteen chapters are split into two sections: Section 1 presents new theoretical ideas, whereas Section 2 presents real-life applications. Despite the variety of topics it contains, this monograph constitutes a consistent body of knowledge in the field of computer object tracking. The editor's intention was to follow up the very quick progress in the development of methods as well as the extension of their applications.

    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefits in terms of reduced trauma, improved recovery and shortened hospitalisation are well established, there is a sustained need for improved training in the existing procedures and for the development of new smart instruments that tackle the issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view in the presence of complex anatomy can easily disorient the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual-reality or mixed-reality simulators, can help to improve the spatial awareness, consistency and safety of these procedures.
    This thesis examines the use of endoscopic videos for both simulation and navigation purposes. More specifically, it addresses the challenging problem of how to build high-fidelity, subject-specific simulation environments for improved training and skills assessment; issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in terms of both 3D shape reconstruction and localisation and mapping, vision-based techniques have enjoyed significant interest in recent years for surgical navigation. The thesis therefore also tackles the problem of how to use vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view, improving spatial awareness and avoiding operator disorientation. The key advantage of this approach is that it does not require additional hardware, and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation that takes tissue structural and appearance changes into account. Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the proposed method.

    Exploiting Sparse Structures in Source Localization and Tracking

    This thesis deals with the modelling of structured signals under different sparsity constraints. Many phenomena exhibit an inherent structure that may be exploited when setting up models; examples include audio waves, radar, sonar, and image objects. These structures allow us to model, identify, and classify the processes, enabling parameter estimation for, e.g., identification, localisation, and tracking. In this work, such structures are exploited with the goal of achieving efficient localisation and tracking of a structured source signal. Specifically, two scenarios are considered. In papers A and B, the aim is to find a sparse subset of a structured signal such that the signal parameters and source locations may be estimated in an optimal way. For the sparse subset selection, a combinatorial optimisation problem is approximately solved by means of convex relaxation, with the result that different types of a priori information can be incorporated in the optimisation. In paper C, a sparse subset of data is provided, and a generative model is used to find the locations of an unknown number of jammers in a wireless network, with the jammers' movement in the network being tracked as additional observations become available.
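    The abstract does not give the papers' actual formulation, but the general recipe of relaxing a combinatorial subset selection to a convex problem can be sketched generically: maximise a D-optimality (log-determinant) criterion over the capped simplex {0 <= w_i <= 1, sum w_i = k} by projected gradient ascent, then round to the k largest weights. The objective, solver and all names below are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def project_capped_simplex(w, k):
    """Euclidean projection onto {w : 0 <= w_i <= 1, sum(w) = k}, via bisection
    on the shift tau in clip(w - tau, 0, 1)."""
    lo, hi = w.min() - 1.0, w.max()
    for _ in range(60):
        tau = 0.5 * (lo + hi)
        if np.clip(w - tau, 0.0, 1.0).sum() > k:
            lo = tau
        else:
            hi = tau
    return np.clip(w - 0.5 * (lo + hi), 0.0, 1.0)

def select_sensors(A, k, steps=300, lr=0.1):
    """Relaxed subset selection: maximise log det(sum_i w_i a_i a_i^T)
    over the capped simplex, then round to the k largest weights."""
    m, d = A.shape
    w = np.full(m, k / m)                       # feasible starting point
    for _ in range(steps):
        F = (A * w[:, None]).T @ A + 1e-9 * np.eye(d)
        Finv = np.linalg.inv(F)
        # d logdet(F) / d w_i = a_i^T F^{-1} a_i
        grad = np.einsum('ij,jk,ik->i', A, Finv, A)
        w = project_capped_simplex(w + lr * grad, k)
    return np.argsort(w)[-k:]

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 4))   # 30 candidate sensors, 4 unknown parameters
chosen = select_sensors(A, 6)
print(sorted(chosen.tolist()))
```

    The relaxation makes it easy to add further convex a-priori constraints on w, which matches the abstract's remark that prior information can be incorporated in the optimisation.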

    Proceedings of the GIS Research UK 18th Annual Conference GISRUK 2010

    This volume holds the papers from the 18th annual GIS Research UK conference (GISRUK), which this year was hosted at University College London (UCL) from Wednesday 14 to Friday 16 April 2010. The conference covered core geographic information science research as well as application domains such as crime and health, and technological developments in LBS and the geoweb. UCL's research mission as a global university is based around a series of Grand Challenges that affect us all, and these were accommodated in GISRUK 2010. The overarching theme this year was "Global Challenges", with specific focus on the following themes:
    * Crime and Place
    * Environmental Change
    * Intelligent Transport
    * Public Health and Epidemiology
    * Simulation and Modelling
    * London as a global city
    * The geoweb and neo-geography
    * Open GIS and Volunteered Geographic Information
    * Human-Computer Interaction and GIS
    Traditionally, GISRUK has provided a platform for early-career researchers as well as those with a significant track record of achievement in the area. As such, the conference provides a welcome blend of innovative thinking and mature reflection. GISRUK is the premier academic GIS conference in the UK and we are keen to maintain its outstanding record of achievement in developing GIS in the UK and beyond.

    Robust and real-time hand detection and tracking in monocular video

    In recent years, personal computing devices such as laptops, tablets and smartphones have become ubiquitous. Moreover, intelligent sensors are being integrated into many consumer devices such as eyeglasses, wristwatches and smart televisions. With the advent of touchscreen technology, a new human-computer interaction (HCI) paradigm arose that allows users to interface with their devices in an intuitive manner. Using simple gestures, such as swipe or pinch movements, a touchscreen can be used to interact directly with a virtual environment. Nevertheless, touchscreens still form a physical barrier between the virtual interface and the real world. An increasingly popular field of research that tries to overcome this limitation is video-based gesture recognition, hand detection and hand tracking. Gesture-based interaction allows users to interact with the computer in a natural manner, exploring a virtual reality using nothing but their own body language.
    In this dissertation, we investigate how robust hand detection and tracking can be accomplished under real-time constraints. In the context of human-computer interaction, real-time means both low latency and low complexity, such that a complete video frame can be processed before the next one becomes available. Furthermore, for practical applications, the algorithms should be robust to illumination changes, camera motion and cluttered backgrounds in the scene. Finally, the system should be able to initialise automatically, and to detect and recover from tracking failure. We study a wide variety of existing algorithms, and propose significant improvements and novel methods to build a complete detection and tracking system that meets these requirements. Hand detection, hand tracking and hand segmentation are related yet technically different challenges.
    Whereas detection deals with finding an object in a static image, tracking considers temporal information and is used to follow the position of an object over time, throughout a video sequence. Hand segmentation is the task of estimating the hand contour, thereby separating the object from its background. Detecting hands in individual video frames allows us to initialise our tracking algorithm automatically, and to detect and recover from tracking failure. Human hands are highly articulated objects, consisting of finger parts connected by joints. As a result, the appearance of a hand can vary greatly depending on the assumed hand pose. Traditional detection algorithms often assume that the appearance of the object of interest can be described using a rigid model, and therefore cannot robustly detect human hands. We therefore developed an algorithm that detects hands by exploiting their articulated nature. Instead of resorting to a template-based approach, we probabilistically model the spatial relations between the different hand parts and the centroid of the hand. Detecting hand parts, such as fingertips, is much easier than detecting a complete hand, and based on our model of the spatial configuration of hand parts, the detected parts can be used to obtain an estimate of the complete hand's position. To comply with the real-time constraints, we developed techniques to speed up the process by efficiently discarding unimportant information in the image. Experimental results show that our method is competitive with the state of the art in object detection while reducing the computational complexity by a factor of 1,000. Furthermore, we showed that our algorithm can also be used to detect other articulated objects, such as persons or animals, and is therefore not restricted to the task of hand detection. Once a hand has been detected, a tracking algorithm can be used to track its position continuously over time.
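    The part-based idea, estimating the hand position from easily detectable parts and a spatial model, can be sketched as a simple voting scheme: each detected part casts a vote for the centroid via a learned mean offset, and the votes are fused robustly. The offset table, score weighting and mean-shift fusion below are illustrative assumptions, not the dissertation's exact probabilistic model.

```python
import numpy as np

def estimate_hand_centroid(part_detections, part_offsets, bandwidth=20.0):
    """Fuse part detections into a hand-centroid estimate.

    part_detections: list of (part_id, (x, y), score) tuples
    part_offsets:    dict part_id -> learned mean offset (dx, dy) to the centroid

    Each part votes for the centroid by adding its offset; votes are fused
    with a Gaussian-weighted mean-shift step so that outlier votes are
    down-weighted instead of corrupting the estimate.
    """
    votes, weights = [], []
    for part_id, (x, y), score in part_detections:
        dx, dy = part_offsets[part_id]
        votes.append((x + dx, y + dy))
        weights.append(score)
    votes = np.asarray(votes, dtype=float)
    weights = np.asarray(weights, dtype=float)

    # Start at the weighted mean, then refine with a few mean-shift iterations.
    c = np.average(votes, axis=0, weights=weights)
    for _ in range(10):
        d2 = ((votes - c) ** 2).sum(axis=1)
        w = weights * np.exp(-d2 / (2.0 * bandwidth ** 2))
        c = np.average(votes, axis=0, weights=w)
    return c
```

    A usage example: two hypothetical fingertip detections on opposite sides of the palm, whose offsets both point at the same centroid, yield that centroid exactly.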
    We developed a probabilistic tracking method that can cope with uncertainty caused by image noise, incorrect detections, changing illumination and camera motion. Furthermore, our tracking system automatically determines the number of hands in the scene, and can cope with hands entering or leaving the video canvas. We introduced several novel techniques that greatly increase tracking robustness and that can also be applied in domains other than hand tracking. To achieve real-time processing, we investigated several techniques to reduce the search space of the problem, and deliberately employ methods that are easily parallelised on modern hardware. Experimental results indicate that our methods outperform the state of the art in hand tracking, while having a much lower computational complexity. One of the methods used by our probabilistic tracking algorithm is optical flow estimation. Optical flow is a 2D vector field describing the apparent velocities of objects in a 3D scene, projected onto the image plane. Optical flow is known to be used by many insects and birds to visually track objects and to estimate their ego-motion. However, most optical flow estimation methods described in the literature are either too slow to be used in real-time applications, or not robust to illumination changes and fast motion. We therefore developed an optical flow algorithm that can cope with large displacements and that is illumination independent. Furthermore, we introduce a regularisation technique that ensures a smooth flow field. This regularisation scheme effectively reduces the number of noisy and incorrect flow-vector estimates, while maintaining the ability to handle motion discontinuities caused by object boundaries in the scene. The above methods are combined into a hand tracking framework that can be used for interactive applications in unconstrained environments.
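    For readers unfamiliar with optical flow, the classical Lucas-Kanade estimator gives a minimal baseline for the concept discussed above; it is emphatically not the dissertation's large-displacement, illumination-independent method. It solves the 2x2 normal equations of the brightness-constancy constraint over a small window.

```python
import numpy as np

def lucas_kanade_point(I0, I1, x, y, win=7):
    """Classical Lucas-Kanade optical flow at a single pixel (baseline sketch).

    Linearises brightness constancy I0(p) = I1(p + v) around p, then solves
    the least-squares system A v = -b over a (win x win) window, where A
    stacks the spatial gradients and b is the temporal difference.
    """
    r = win // 2
    Ix = np.gradient(I0, axis=1)          # spatial gradient along x (columns)
    Iy = np.gradient(I0, axis=0)          # spatial gradient along y (rows)
    It = I1 - I0                          # temporal difference
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    gx, gy, gt = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    A = np.stack([gx, gy], axis=1)
    ATA = A.T @ A
    if np.linalg.det(ATA) < 1e-6:         # ill-conditioned: aperture problem
        return np.zeros(2)
    return -np.linalg.solve(ATA, A.T @ gt)

# Synthetic check: a smooth pattern shifted right by one pixel.
X, Y = np.meshgrid(np.arange(41), np.arange(41))
I0 = np.sin(0.3 * X) * np.sin(0.3 * Y)
I1 = np.sin(0.3 * (X - 1)) * np.sin(0.3 * Y)
v = lucas_kanade_point(I0, I1, 10, 10)
print(v)  # roughly (1, 0): one pixel of rightward motion
```

    The singular-matrix check illustrates the aperture problem the regularisation scheme in the dissertation is designed to overcome: where local gradients do not constrain both flow components, a smoothness term must fill in the missing information.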
    To demonstrate the possibilities of gesture-based human-computer interaction, we developed a new type of computer display. This display is completely transparent, allowing multiple users to perform collaborative tasks while maintaining eye contact. Furthermore, our display produces an image that seems to float in thin air, such that users can touch the virtual image with their hands. This floating-image display has been showcased at several national and international events and trade shows. The research described in this dissertation has been evaluated thoroughly by comparing its detection and tracking results with those obtained by state-of-the-art algorithms. These comparisons show that the proposed methods outperform most algorithms in terms of accuracy, while achieving a much lower computational complexity, resulting in a real-time implementation. Results are discussed in depth at the end of each chapter. This research further resulted in an international journal publication; a second journal paper that has been submitted and is under review at the time of writing this dissertation; nine international conference publications; a national conference publication; a commercial license agreement concerning the research results; two hardware prototypes of a new type of computer display; and a software demonstrator.