171,056 research outputs found

    Automatic visual recognition using parallel machines

    Invariant features and fast matching algorithms are two major concerns in automatic visual recognition: the former reduce the size of the established model database, and the latter shorten computation time. This dissertation discusses both line invariants under perspective projection and a parallel implementation of a dynamic programming technique for shape recognition. The feasibility of using parallel machines is demonstrated through the dramatically reduced time complexity: our algorithms are implemented on the AP1000 MIMD parallel machine, and for processing an object with n features the proposed parallel algorithm runs in O(n) time, while a uniprocessor needs O(n²). Two applications, one for shape matching and the other for chain-code extraction, demonstrate the usefulness of our methods. Invariants of four general lines under perspective projection are also discussed. In contrast to approaches based on epipolar geometry, we investigate invariants under isotropy subgroups. Theoretically, two independent invariants can be found for four general lines in 3D space; in practice, we show how to obtain these two invariants from the projective images of four general lines without camera calibration. A projective-invariant recognition system based on a hypothesis-generation-testing scheme runs on a hypercube parallel architecture: object recognition is achieved by matching scene projective invariants to model projective invariants, a step called transfer, and the hypothesis-generation-testing scheme is then implemented on the hypercube.
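    As a concrete illustration of the chain-code extraction application, the boundary of a binary object can be encoded as 8-directional Freeman codes by a Moore-style trace. The sketch below is a minimal serial Python version under assumed conventions (topmost-leftmost start pixel, counter-clockwise direction indexing); it is not the dissertation's AP1000 implementation.

        import numpy as np

        # 8-connected Freeman directions, counter-clockwise starting from east.
        DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

        def freeman_chain_code(img: np.ndarray) -> list[int]:
            """Trace the outer boundary of the first foreground blob found in
            raster order; return its 8-directional Freeman chain code."""
            ys, xs = np.nonzero(img)
            if len(ys) == 0:
                return []
            start = (ys[0], xs[0])        # topmost-leftmost foreground pixel
            code, cur, d = [], start, 0   # begin searching eastward
            while True:
                # scan the 8 neighbours counter-clockwise, starting just past the
                # direction we arrived from; step to the first foreground pixel
                for k in range(8):
                    nd = (d + k) % 8
                    ny, nx = cur[0] + DIRS[nd][0], cur[1] + DIRS[nd][1]
                    if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                            and img[ny, nx]):
                        code.append(nd)
                        cur, d = (ny, nx), (nd + 6) % 8  # re-aim the search window
                        break
                else:
                    return code           # isolated pixel: no neighbours at all
                if cur == start:          # simple stop rule; Jacob's criterion
                    return code           # is more robust for pinched shapes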

    Implementation of a Synchronized Oscillator Circuit for Fast Sensing and Labeling of Image Objects

    We present an application-specific integrated circuit (ASIC) CMOS chip that implements a synchronized oscillator cellular neural network with a matrix size of 32 × 32 for object sensing and labeling in binary images. Networks of synchronized oscillators are a recently developed tool for image segmentation and analysis. Their parallel operation is based on the “temporal correlation” theory, which attempts to describe scene recognition as performed by the human brain: the synchronized oscillations of a neuron group attract a person’s attention when he or she focuses on a coherent stimulus (an image object). When more than one stimulus is perceived, these synchronized patterns switch in time between different neuron groups, forming temporal maps that code several features of the analyzed scene. In this paper, a new oscillator circuit based on a mathematical model is proposed, and the network architecture and chip functional blocks are presented and discussed. The proposed chip is implemented in AMIS 0.35 μm C035M-D 5M/1P technology. An application of the proposed network chip to the segmentation of insulin-producing pancreatic islets in magnetic resonance liver images is presented.
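    In conventional software terms, the labeling that the oscillator network performs in parallel corresponds to connected-component labeling: pixels of one object end up sharing a label, much as neurons of one group share an oscillation phase. A minimal sequential analogue in Python, purely for illustration and in no way a model of the chip's circuitry:

        import numpy as np
        from collections import deque

        def label_objects(img: np.ndarray) -> np.ndarray:
            """4-connected component labeling of a binary image via BFS.
            Pixels of the same object receive the same integer label."""
            labels = np.zeros(img.shape, dtype=int)
            current = 0
            for y, x in zip(*np.nonzero(img)):
                if labels[y, x]:
                    continue              # already swept into an earlier object
                current += 1
                labels[y, x] = current
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                                and img[ny, nx] and not labels[ny, nx]):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
            return labels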

    A parallel Homological Spanning Forest framework for 2D topological image analysis

    In [14], a topologically consistent framework to support parallel topological analysis and recognition for 2D digital objects was introduced. Based on this theoretical work, we focus on the problem of finding efficient algorithmic solutions for topological interrogation of a 2D digital object of interest D within a pre-segmented digital image I, using 4-adjacency between pixels of D. In order to maximize the degree of parallelization of the topological processes, we use as many elementary processing units as the image I has pixels. The mathematical model underlying this framework is an appropriate extension of the classical concept of abstract cell complex: a primal–dual abstract cell complex (pACC for short). This versatile data structure encompasses the notion of Homological Spanning Forest fostered in [14,15]. Starting from a symmetric pACC associated with I, the modus operandi is to construct, via combinatorial operations, another asymmetric one presenting the maximal number of non-null primal elementary interactions between the cells of D. The fundamental topological tools have been transformed so as to promote an efficient parallel implementation on any parallel-oriented architecture (GPUs, multi-threaded computers, SIMD kernels and so on). A software prototype modeling such a parallel framework is built.
    Ministerio de Educación y Ciencia TEC2012-37868-C04-02/0
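    To make the flavor of such topological interrogation concrete, the sketch below computes the Euler characteristic of a binary object from its cubical complex of closed unit squares. This is a simple baseline rather than the pACC algorithm, and note that the closed-square model counts components under 8-adjacency rather than the paper's 4-adjacency.

        import numpy as np

        def euler_characteristic(img: np.ndarray) -> int:
            """Euler characteristic chi = V - E + F of the cubical complex whose
            2-cells are the foreground pixels (closed unit squares), so that
            chi = #components - #holes (components under 8-adjacency here)."""
            verts, edges = set(), set()
            faces = 0
            for y, x in zip(*np.nonzero(img)):
                faces += 1
                # four corner vertices of this pixel's unit square
                for v in ((y, x), (y, x+1), (y+1, x), (y+1, x+1)):
                    verts.add(v)
                # four bounding unit edges, keyed by their endpoints
                edges.add(((y, x), (y, x+1)))        # top
                edges.add(((y+1, x), (y+1, x+1)))    # bottom
                edges.add(((y, x), (y+1, x)))        # left
                edges.add(((y, x+1), (y+1, x+1)))    # right
            return len(verts) - len(edges) + faces

    For example, a full 3 × 3 block gives chi = 16 − 24 + 9 = 1 (one component, no holes), and removing its center pixel gives chi = 0 (one component, one hole).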

    Trainable Regularization in Dense Image Matching Problems

    This study examines the development of specialized models for solving image-matching problems. Its purpose is to develop a technique based on energy tensor aggregation for dense image matching. The task is relevant because image comparison underpins problems such as reconstructing a three-dimensional model of an object, building panoramic scenes, and object recognition. The paper examines in detail the key features of an image-matching process based on binocular stereo reconstruction, describes how the matching energies are computed, and lays out the main parts of the proposed method in diagrams and formulas. A machine learning model is developed that solves image-matching problems on real data using parallel programming tools, and the architecture of the convolutional recurrent neural network underlying the method is described in detail. Computational experiments compare the results with methods proposed in the scientific literature; the method discussed here is more efficient in both runtime and error rate. DOI: 10.28991/HIJ-2023-04-03-011
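    For readers unfamiliar with matching energies in binocular stereo, they are commonly organized as a cost volume over candidate disparities. The NumPy sketch below builds a mean-absolute-difference cost volume with box aggregation and a winner-take-all readout; the function and its parameters are illustrative assumptions, not the paper's trainable tensor aggregation.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def mad_cost_volume(left, right, max_disp, patch=5):
            """Cost volume C[d, y, x]: mean absolute difference between a patch
            around (y, x) in the left image and (y, x - d) in the right image."""
            h, w = left.shape
            cost = np.full((max_disp, h, w), np.inf)
            for d in range(max_disp):
                diff = np.abs(left[:, d:] - right[:, :w - d])
                cost[d, :, d:] = uniform_filter(diff, size=patch)  # box aggregation
            return cost

        # Winner-take-all disparity map: per pixel, the d with minimal cost.
        # disparity = mad_cost_volume(L, R, max_disp=64).argmin(axis=0)

    A trainable regularizer replaces the fixed box aggregation with learned filtering of this volume, which is where the paper's contribution sits.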

    Parallel architectures for image analysis

    This thesis is concerned with the problem of designing an architecture specifically for image analysis and object recognition. Image analysis is a complex subject area that remains only partially defined and only partially solved, which makes designing an architecture to efficiently implement image analysis and recognition algorithms a difficult task. This work describes a massively parallel heterogeneous architecture, the Warwick Pyramid Machine, consisting of SIMD, MIMD and MSIMD modes of parallelism, each directed at a different part of the problem. The performance of this architecture is analysed on many tasks drawn from very different areas of the image analysis problem, including an efficient straight-line extraction algorithm and a robust and novel geometric model-based recognition system. The straight-line extraction method is based on local extraction of line segments using a Hough-style algorithm, followed by careful global matching and merging. The recognition system avoids quantising the pose space, hence overcoming many of the problems inherent in this class of methods, and includes an analytical verification stage. Results and detailed implementations of both tasks are given.
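    The local Hough-style stage can be pictured with a minimal (rho, theta) accumulator; the Python sketch below is a generic illustration of Hough voting, not the Warwick Pyramid Machine implementation.

        import numpy as np

        def hough_accumulator(edges: np.ndarray, n_theta: int = 180):
            """Vote edge pixels into a (rho, theta) accumulator, with
            rho = x*cos(theta) + y*sin(theta) quantized to integer bins."""
            h, w = edges.shape
            diag = int(np.ceil(np.hypot(h, w)))
            thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
            acc = np.zeros((2 * diag, n_theta), dtype=int)
            ys, xs = np.nonzero(edges)
            for t, theta in enumerate(thetas):
                rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
                np.add.at(acc, (rhos + diag, t), 1)  # offset: rho may be negative
            return acc, thetas

        # Peaks in acc correspond to straight lines; a segment extractor would
        # then walk the supporting edge pixels and merge collinear runs, which
        # is the role of the global matching-and-merging step described above.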

    Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching

    This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select and execute among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu
    Comment: Project webpage: http://arc.cs.princeton.edu Summary video: https://youtu.be/6fG7zwGfIk
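    At match time, the cross-domain recognition step reduces to a nearest-neighbour lookup between an observed-object embedding and a bank of product-image embeddings. A minimal cosine-similarity matcher, assuming embeddings come from some fixed network (the helper below is hypothetical, not the released code at the project page):

        import numpy as np

        def match_to_products(obs_emb: np.ndarray, prod_embs: np.ndarray) -> int:
            """Return the index of the product image whose embedding is closest
            in cosine similarity to the observed-object embedding.
            obs_emb: (d,) vector; prod_embs: (n, d), one row per product."""
            obs = obs_emb / np.linalg.norm(obs_emb)
            prods = prod_embs / np.linalg.norm(prod_embs, axis=1, keepdims=True)
            return int(np.argmax(prods @ obs))

        # A never-before-seen object is recognized as long as a product image of
        # it exists: only this lookup changes when new products are added, so no
        # retraining is needed.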

    Error Correction for Dense Semantic Image Labeling

    Pixelwise semantic image labeling is an important, yet challenging, task with many applications. Typical approaches to tackle this problem involve either the training of deep networks on vast amounts of images to directly infer the labels or the use of probabilistic graphical models to jointly model the dependencies of the input (i.e., images) and output (i.e., labels). Yet, the former approaches do not capture the structure of the output labels, which is crucial for the performance of dense labeling, and the latter rely on carefully hand-designed priors that require costly parameter tuning via optimization techniques, which in turn leads to long inference times. To alleviate these restrictions, we explore how to arrive at dense semantic pixel labels given both the input image and an initial estimate of the output labels. We propose a parallel architecture that: 1) exploits the context information through a LabelPropagation network to propagate correct labels from nearby pixels to improve the object boundaries, 2) uses a LabelReplacement network to directly replace possibly erroneous, initial labels with new ones, and 3) combines the different intermediate results via a Fusion network to obtain the final per-pixel label. We experimentally validate our approach on two different datasets for the semantic segmentation and face parsing tasks, respectively, where we show improvements over the state-of-the-art. We also provide both a quantitative and qualitative analysis of the generated results.
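    The three-branch design can be sketched as per-pixel fusion of two candidate score maps under a weight map predicted by the Fusion network; the shapes and the soft blending rule below are illustrative assumptions, not the paper's exact architecture.

        import numpy as np

        def fuse_labels(prop_scores, repl_scores, fusion_weight):
            """Blend two (C, H, W) per-class score maps with a per-pixel weight
            map (H, W) in [0, 1], then take the per-pixel argmax over classes
            as the final label map."""
            w = fusion_weight[None, :, :]            # broadcast over classes
            fused = w * prop_scores + (1.0 - w) * repl_scores
            return fused.argmax(axis=0)              # (H, W) integer labels

        # prop_scores would come from a LabelPropagation-style branch (context,
        # boundary refinement), repl_scores from a LabelReplacement-style branch
        # (outright corrections); the weight map decides, per pixel, which
        # branch to trust.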

    The What-And-Where Filter: A Spatial Mapping Neural Network for Object Recognition and Image Understanding

    The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system based upon oriented filters, such as the role of cliff filters with and without normalization, the double-peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation as opposed to across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
    Advanced Research Projects Agency (ONR-N00014-92-J-4015, AFOSR 90-0083); British Petroleum (89-A-1204); National Science Foundation (IRI-90-00530, Graduate Fellowship); Office of Naval Research (N00014-91-J-4100, N00014-95-1-0409, N00014-95-1-0657); Air Force Office of Scientific Research (F49620-92-J-0499, F49620-92-J-0334)
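    The Where filter's front end, a multiscale array of oriented detectors followed by competition across position, orientation, and size, can be caricatured with a Gabor-like filter bank and a pointwise maximum; the kernel parameters below are illustrative assumptions, not the paper's filters.

        import numpy as np
        from scipy.signal import convolve2d

        def oriented_kernel(size, theta, wavelength=4.0, sigma=2.0):
            """Gabor-like oriented detector: a sinusoid along direction theta
            under an isotropic Gaussian envelope."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            u = x * np.cos(theta) + y * np.sin(theta)
            envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
            return envelope * np.cos(2 * np.pi * u / wavelength)

        def where_map(img, n_orient=8, sizes=(7, 11, 15)):
            """Per pixel, the strongest response over orientations and scales:
            a crude position-orientation-size map in the spirit of the Where
            filter (the paper's competitive interactions are far richer)."""
            best = np.zeros(img.shape)
            arg = np.zeros(img.shape + (2,), dtype=int)  # (orientation, size)
            for si, s in enumerate(sizes):
                for oi in range(n_orient):
                    k = oriented_kernel(s, np.pi * oi / n_orient)
                    r = np.abs(convolve2d(img, k, mode="same"))
                    mask = r > best
                    best[mask] = r[mask]
                    arg[mask] = (oi, si)
            return best, arg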