
    Application of Fast Deviation Correction Algorithm Based on Shape Matching Algorithm in Component Placement

    PC-based template matching for component placement faces a trade-off between accuracy and speed. Exploiting the high-speed parallel computing capability of FPGAs, this paper presents an FPGA-based fast deviation-correction algorithm built on shape matching. The angular deviation of chip components is computed on the FPGA using shape matching and the least-squares method, and the algorithm is further accelerated with a single-instruction-stream scheme. Experimental results show that, compared with traditional PC-based template matching, the proposed algorithm improves correction accuracy and greatly reduces correction time, yielding stable and efficient machine-vision correction for SMT placement machines.
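    The abstract gives no implementation detail for the least-squares step; as a rough illustration only, the sketch below estimates a component's angular deviation from matched template and image contour points (function and variable names are hypothetical, not taken from the paper):

```python
import numpy as np

def estimate_angular_deviation(template_pts, matched_pts):
    """Estimate the in-plane rotation (degrees) that maps template contour
    points onto their matched image points, in a least-squares sense.

    template_pts, matched_pts: (N, 2) arrays of corresponding (x, y) points.
    """
    # Center both point sets so only rotation (not translation) remains.
    t = template_pts - template_pts.mean(axis=0)
    m = matched_pts - matched_pts.mean(axis=0)
    # Closed-form least-squares rotation (2D Procrustes / Kabsch solution).
    h = t.T @ m
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    # Guard against a reflection appearing in the SVD solution.
    if np.linalg.det(r) < 0:
        vt[-1, :] *= -1
        r = vt.T @ u.T
    return np.degrees(np.arctan2(r[1, 0], r[0, 0]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1, 1, size=(50, 2))
    theta = np.radians(3.5)                      # simulated placement error
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    print(estimate_angular_deviation(pts, pts @ rot.T))  # ~3.5
```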

    Hardware dedicado para sistemas empotrados de visión

    The continuous evolution of Information and Communication Technologies (ICT) has not only allowed more than half of the global population to be interconnected through the Internet, but has also been the breeding ground for new paradigms such as the Internet of Things (IoT) and Ambient Intelligence (AmI). These paradigms expose the need to interconnect elements with different functionalities in order to achieve a digital, sensitive, adaptive and responsive environment that provides services of very different kinds to its users. Achieving this environment requires the development of low-cost devices that, with small size and light weight, are able to interact with their surroundings, operate with maximum autonomy and provide a high level of intelligence. The functionality of many of these devices will include the capacity to acquire, process and transmit images, extracting, interpreting or modifying the visual information of interest for a given application. This PhD Thesis arises in the context of that challenge; its central theme is the development of dedicated hardware for the implementation of image and video processing algorithms used in embedded vision systems. The work has a two-fold purpose: on the one hand, the search for solutions that, by their performance, can be incorporated into systems meeting the strict requirements of functionality, size, power consumption and operating speed demanded by new applications; on the other hand, the design of a set of functional blocks, packaged and implemented as IP modules, that alleviate the computational load of the processing units of the systems into which they are integrated. The Thesis proposes specific solutions for the implementation of two operations commonly found in computer vision systems: background subtraction and connected component labelling. The different alternatives result from applying a suitable trade-off between functionality and cost, with cost understood in terms of computing resources, operating speed and power consumption, which allows a wide range of applications to be covered. In some of the proposed solutions, inference techniques based on Fuzzy Logic have also been applied to improve the quality of the resulting vision systems. The functional blocks were developed following a model-based design methodology, which allowed the whole development cycle to be carried out within a single working environment. That environment combines CAD tools that facilitate algorithm coding, circuit design, physical implementation, and functional and timing verification of the different alternatives, thereby accelerating all phases of the design flow and enabling a more efficient exploration of the space of possible solutions. Finally, to demonstrate the functionality of the contributions of this PhD Thesis, some of the proposed solutions have been integrated into real video systems that use common standard buses. The devices selected for these demonstrators are Xilinx FPGAs and SoPCs, since their excellent properties for prototyping and for building systems that combine software and hardware components make them ideal candidates for supporting the implementation of this kind of system.
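    As a purely illustrative software reference for the two operations the thesis implements in hardware, a minimal background-subtraction and connected-component-labelling sketch might look as follows (parameter values are arbitrary and do not reflect the thesis's design):

```python
import numpy as np
from scipy import ndimage  # software stand-in for the hardware labelling block

def update_background(background, frame, alpha=0.05):
    """Exponential running-average background model (background held as float)."""
    return (1.0 - alpha) * background + alpha * frame

def segment_moving_objects(background, frame, threshold=25.0):
    """Background subtraction followed by connected-component labelling."""
    mask = np.abs(frame.astype(np.float32) - background) > threshold
    labels, num_objects = ndimage.label(mask)  # 4-connectivity by default
    return labels, num_objects
```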

    Automated Segmentation of Left and Right Ventricles in MRI and Classification of the Myocardium Abnormalities

    A fundamental step in the diagnosis of cardiovascular diseases, automated left and right ventricle (LV and RV) segmentation in cardiac magnetic resonance imaging (MRI) is still acknowledged to be a difficult problem. Although algorithms for LV segmentation exist, they require either extensive training or intensive user input. RV segmentation in MRI remains largely unsolved because the RV's shape is neither symmetric nor circular, its deformations are complex and vary extensively over the cardiac phases, and it includes papillary muscles. In this thesis, I investigate fast detection of the LV endo- and epicardium surfaces (3D) and contours (2D) in cardiac MRI via convex relaxation and distribution matching. A rapid 3D segmentation of the RV in cardiac MRI via distribution-matching constraints on segment shape and appearance is also investigated. These algorithms require only a single subject for training and a very simple user input, which amounts to one click. The solution is sought by optimizing functionals containing probability product kernel constraints on the distributions of intensity and geometric features. The formulations lead to challenging optimization problems that are not directly amenable to convex-optimization techniques. For each functional, the problem is split into a sequence of sub-problems, each of which can be solved exactly and globally via a convex relaxation and the augmented Lagrangian method. Finally, an information-theoretic artificial neural network (ANN) is proposed for classifying LV myocardium motion as normal or abnormal. Using the LV segmentation results, the LV cavity points are estimated via a Kalman filter and a recursive dynamic Bayesian filter. However, due to the similarities between the statistical information of normal and abnormal points, differentiating between their distributions is a challenging problem. The problem was investigated with a global measure based on Shannon's differential entropy (SDE) and further examined with two other information-theoretic criteria, one based on Rényi entropy and the other on Fisher information. Unlike existing information-theoretic studies, the approach explicitly addresses the overlap between the distributions of normal and abnormal cases, thereby yielding competitive performance. I further propose an algorithm based on a supervised 3-layer ANN to differentiate between the distributions. The ANN is trained and tested with five different information measures of radial distance and velocity for points on the endocardial boundary
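    A minimal sketch of the kind of probability product kernel such functionals constrain, here the Bhattacharyya coefficient between two intensity histograms; this only illustrates the matching term in general and is not the thesis's exact formulation (bin count and intensity range are assumptions):

```python
import numpy as np

def bhattacharyya_coefficient(samples_a, samples_b, bins=64, value_range=(0.0, 1.0)):
    """Probability product kernel with exponent 1/2 (Bhattacharyya coefficient)
    between the intensity distributions of two pixel sample sets.

    Values close to 1 indicate well-matched distributions, which is the sense in
    which distribution matching is used as a segmentation constraint.
    """
    pa, _ = np.histogram(samples_a, bins=bins, range=value_range)
    pb, _ = np.histogram(samples_b, bins=bins, range=value_range)
    # Normalise to discrete probability mass functions before taking the kernel.
    pa = pa / (pa.sum() + 1e-12)
    pb = pb / (pb.sum() + 1e-12)
    return float(np.sum(np.sqrt(pa * pb)))
```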

    Early Vision Optimization: Parametric Models, Parallelization and Curvature

    Early vision is the processing that occurs before any semantic interpretation of an image takes place. Motion estimation, object segmentation and detection are all parts of early vision, but recognition is not. Many of these tasks are formulated as optimization problems, and one of the key factors in the success of recent methods is that they seek globally optimal solutions. This thesis is concerned with improving the efficiency and extending the applicability of the current state of the art. This is achieved by introducing new methods for computing solutions to image segmentation and other problems of early vision. The first part studies parametric problems where model parameters are estimated in addition to an image segmentation. For a small number of parameters these problems can still be solved optimally. In the second part the focus shifts toward curvature regularization, i.e. when the commonly used length and area regularization is replaced by curvature in two and three dimensions. These problems can be discretized over a mesh, and special attention is given to the mesh geometry. Specifically, hexagonal meshes are compared to square ones, and a method for generating adaptive meshes is introduced and evaluated. The framework is then extended to curvature regularization of surfaces. In the third part, fast methods for finding minimal graph cuts and solving related problems on modern parallel hardware are developed and extensively evaluated. Finally, the thesis concludes with two applications to early vision problems: heart segmentation and image registration
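    For readers unfamiliar with graph-cut segmentation, the toy sketch below builds a standard s-t minimum cut for a binary labelling problem; it illustrates the general construction only, not the parallel algorithms developed in the thesis (all names and weights are hypothetical):

```python
import networkx as nx

def binary_segmentation_min_cut(unary_fg, unary_bg, edges, smooth_weight=1.0):
    """Toy binary segmentation via an s-t minimum cut.

    unary_fg[i], unary_bg[i]: costs of assigning pixel i to foreground/background.
    edges: iterable of (i, j) neighbouring pixel pairs.
    Returns the set of pixel indices labelled as foreground.
    """
    g = nx.DiGraph()
    for i, (cost_fg, cost_bg) in enumerate(zip(unary_fg, unary_bg)):
        g.add_edge("s", i, capacity=cost_bg)  # cut (paid) when i is labelled background
        g.add_edge(i, "t", capacity=cost_fg)  # cut (paid) when i is labelled foreground
    for i, j in edges:
        # Smoothness term: penalise neighbouring pixels taking different labels.
        g.add_edge(i, j, capacity=smooth_weight)
        g.add_edge(j, i, capacity=smooth_weight)
    _, (source_side, _) = nx.minimum_cut(g, "s", "t")
    return {i for i in source_side if i != "s"}

# Tiny 1D example: three pixels in a row, middle pixel ambiguous.
print(binary_segmentation_min_cut([1.0, 5.0, 9.0], [9.0, 5.0, 1.0], [(0, 1), (1, 2)]))
```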

    Marine Vessel Inspection as a Novel Field for Service Robotics: A Contribution to Systems, Control Methods and Semantic Perception Algorithms.

    This cumulative thesis introduces a novel field for service robotics: the inspection of marine vessels using mobile inspection robots. In this thesis, three scientific contributions are provided and experimentally verified in the field of marine inspection, but are not limited to this type of application. The inspection scenario is merely a golden thread to combine the cumulative scientific results presented in this thesis. The first contribution is an adaptive, proprioceptive control approach for hybrid leg-wheel robots, such as the robot ASGUARD described in this thesis. The robot is able to deal with rough terrain and stairs, due to the control concept introduced in this thesis. The proposed system is a suitable platform to move inside the cargo holds of bulk carriers and to deliver visual data from inside the hold. Additionally, the proposed system also has stair climbing abilities, allowing the system to move between different decks. The robot adapts its gait pattern dynamically based on proprioceptive data received from the joint motors and based on the pitch and tilt angle of the robot's body during locomotion. The second major contribution of the thesis is an independent ship inspection system, consisting of a magnetic wall climbing robot for bulkhead inspection, a particle filter based localization method, and a spatial content management system (SCMS) for spatial inspection data representation and organization. The system described in this work was evaluated in several laboratory experiments and field trials on two different marine vessels in close collaboration with ship surveyors. The third scientific contribution of the thesis is a novel approach to structural classification using semantic perception approaches. By these methods, a structured environment can be semantically annotated, based on the spatial relationships between spatial entities and spatial features. This method was verified in the domain of indoor perception (logistics and household environment), for soil sample classification, and for the classification of the structural parts of a marine vessel. The proposed method allows the description of the structural parts of a cargo hold in order to localize the inspection robot or any detected damage. The algorithms proposed in this thesis are based on unorganized 3D point clouds, generated by a LIDAR within a ship's cargo hold. Two different semantic perception methods are proposed in this thesis. One approach is based on probabilistic constraint networks; the second approach is based on Fuzzy Description Logic and spatial reasoning using a spatial ontology about the environment
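    The thesis itself uses probabilistic constraint networks and Fuzzy Description Logic; as a loose, simplified illustration of grading a single spatial relation between entities with a fuzzy membership function (field names and thresholds are hypothetical):

```python
def fuzzy_above(entity, reference, soft_margin=0.5):
    """Fuzzy degree (0..1) to which `entity` lies above `reference`, based on the
    vertical gap between their bounding boxes.

    entity, reference: dicts with 'z_min' and 'z_max' heights in metres.
    """
    gap = entity["z_min"] - reference["z_max"]
    if gap >= soft_margin:
        return 1.0          # clearly above
    if gap <= -soft_margin:
        return 0.0          # clearly not above
    # Linear ramp between "clearly below" and "clearly above".
    return (gap + soft_margin) / (2.0 * soft_margin)

# Example: a stiffener whose lower edge sits 0.3 m above a tank top plate.
print(fuzzy_above({"z_min": 2.3, "z_max": 2.6}, {"z_min": 0.0, "z_max": 2.0}))
```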

    Pattern Recognition

    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications and others. The signals processed are commonly one-, two- or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented work. The authors of these 25 contributions present and advocate recent achievements of their research related to the field of pattern recognition

    Development of a Multiphoton Photoacoustic Microscope

    Cellular/subcellular imaging of biological tissue is an important tool for understanding disease mechanisms. Many current techniques for subcellular absorption contrast imaging, such as two-photon excited fluorescence (TPEF), require exogenous contrast agents to gain access to many naturally occurring biomolecules. Non-fluorescent biomolecules must have a fluorescent marker (tag) chemically bound in order to be observed by TPEF. Contrast agents and markers, while effective, are not an optimal solution because they can change the local environment in the biological system and require FDA approval for human use. Photoacoustic microscopy (PAM) is an imaging modality with high endogenous absorption contrast and penetration depth due to its ability to detect acoustic waves, which are attenuated much less than light in tissue. However, this technique suffers from poor axial resolution, precluding it from consideration for subcellular imaging. This manuscript describes the author's efforts to improve the axial resolution of traditional PAM by merging it with pump-probe spectroscopy. Pump-probe spectroscopy is a non-linear optical technique that exploits a physical process called transient absorption, providing spatial resolution equivalent to two-photon microscopy and access to molecular-specific traits, such as the ground state recovery time and transient absorption spectrum. These traits provide molecular contrast to the imaging technique, which is highly desirable in a complex, multi-chromophore biological system. In this manuscript, a novel technique called transient absorption ultrasonic microscopy (TAUM) is designed and characterized in detail. A second-generation TAUM system is also described, which improves speed and sensitivity of TAUM by up to 1000-fold. This system is validated by collecting volumes of red blood cells in blood smears and tissue samples. These results constitute the first time single cells have been fully resolved using a photoacoustic microscope. Finally, the TAUM system is modified to measure chromophore ground state recovery times. This technique is validated by measuring the recovery time of Rhodamine 6G, which matches well with published values of the fluorescence lifetime. Recovery times of oxidized and reduced forms of hemoglobin are also measured and shown to statistically differ from one another, suggesting the possibility of subcellular measurements of oxygen saturation in future iterations of TAUM
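    Ground-state recovery times such as the one reported for Rhodamine 6G are commonly extracted by fitting an exponential decay to the pump-probe delay scan; the generic curve-fitting sketch below illustrates only that step and is not the authors' processing code (all parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_decay(t, amplitude, tau, offset):
    """Single-exponential model for a transient-absorption recovery trace."""
    return amplitude * np.exp(-t / tau) + offset

def fit_recovery_time(delay_ps, signal):
    """Fit a pump-probe delay scan and return the recovery time constant (ps)."""
    p0 = (signal.max() - signal.min(), 1.0, signal.min())  # rough initial guess
    popt, _ = curve_fit(exponential_decay, delay_ps, signal, p0=p0)
    return popt[1]

if __name__ == "__main__":
    t = np.linspace(0, 20, 200)                        # delay in picoseconds
    trace = exponential_decay(t, 1.0, 4.0, 0.05)
    trace += np.random.default_rng(1).normal(0, 0.01, t.size)
    print(fit_recovery_time(t, trace))                 # ~4.0 ps
```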

    A survey of the application of soft computing to investment and financial trading


    System Abstractions for Scalable Application Development at the Edge

    Recent years have witnessed an explosive growth of Internet of Things (IoT) devices, which collect or generate huge amounts of data. Given diverse device capabilities and application requirements, data processing takes place across a range of settings, from on-device to a nearby edge server/cloud and remote cloud. Consequently, edge-cloud coordination has been studied extensively from the perspectives of job placement, scheduling and joint optimization. Typical approaches focus on performance optimization for individual applications. This often requires domain knowledge of the applications, but also leads to application-specific solutions. Application development and deployment over diverse scenarios thus incur repetitive manual effort. There are two overarching challenges in providing system-level support for application development at the edge. First, there is inherent heterogeneity at the device hardware level. Execution settings range from a small cluster acting as an edge cloud to on-device inference on embedded devices, differing in hardware capability and programming environment. Further, application performance requirements vary significantly, making it even more difficult to map different applications onto already heterogeneous hardware. Second, there are trends towards combining edge and cloud resources and incorporating multi-modal data. Together, these add further dimensions to the design space and increase the complexity significantly. In this thesis, we propose a novel framework to simplify application development and deployment over a continuum of edge to cloud. Our framework provides key connections between different dimensions of design considerations, corresponding to the application abstraction, data abstraction and resource management abstraction respectively. First, our framework masks hardware heterogeneity with abstract resource types through containerization, and abstracts application processing pipelines into generic flow graphs. It also supports a notion of degradable computing for application scenarios at the edge that are driven by multimodal sensory input. Next, as video analytics is the killer app of edge computing, we include a generic data management service between video query systems and a video store to organize video data at the edge. We propose a video data unit abstraction based on a notion of distance between objects in the video, quantifying the semantic similarity among video data. Last, considering concurrent application execution, our framework supports multi-application offloading with device-centric control, with a userspace scheduler service that wraps over the operating system scheduler.
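    The abstract does not detail the flow-graph abstraction; the toy sketch below only illustrates the idea of expressing a processing pipeline as a generic flow graph of named stages (stage names and operators are hypothetical, and a real scheduler would also map stages onto edge or cloud resources):

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Stage:
    """One node of a generic processing flow graph: a named operator plus the
    names of the stages whose outputs it consumes."""
    name: str
    op: Callable[..., Any]
    inputs: List[str] = field(default_factory=list)

def run_flow_graph(stages, source):
    """Execute stages in the given (topological) order, wiring outputs to named inputs."""
    results = {"source": source}
    for stage in stages:
        args = [results[name] for name in stage.inputs]
        results[stage.name] = stage.op(*args)
    return results

# Hypothetical pipeline: decode -> detect -> annotate.
pipeline = [
    Stage("decode", lambda raw: raw.lower(), ["source"]),
    Stage("detect", lambda frame: [w for w in frame.split() if len(w) > 3], ["decode"]),
    Stage("annotate", lambda objs: {"objects": objs, "count": len(objs)}, ["detect"]),
]
print(run_flow_graph(pipeline, "Three Cars Near The Gate")["annotate"])
```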