
    Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems

    Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up: as knowledge is added to these systems, a point is reached where their performance degrades rapidly. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck can be avoided by distributing the processing power across the memory of the computer; in this scheme the memory becomes the processor (a "smart memory"). This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD butterfly-switch machine).
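
    As a rough illustration of the "smart memory" idea (not code from the paper), the sketch below contrasts a scalar von Neumann-style loop with a data-parallel update. NumPy vectorization on serial hardware only mimics what a fine-grained SIMD machine does with one processor per datum, and the thresholding task and function names are invented for illustration.

```python
import numpy as np

def threshold_sequential(image, t):
    # Von Neumann style: one processor, one memory; every pixel is
    # shuttled across the memory-processor channel in turn, so that
    # channel's bandwidth bounds the whole computation.
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = 255 if image[i, j] > t else 0
    return out

def threshold_data_parallel(image, t):
    # "Smart memory" style: conceptually one processor sits at every
    # memory cell and all execute the same instruction on their own
    # local datum, as on a fine-grained SIMD machine.
    return np.where(image > t, 255, 0)
```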

    Edge detection and image smoothing using two deterministic relaxation algorithms. Implementation on the connection machine CM2

    Recently, many algorithms that minimize a non-convex energy function have been proposed to solve low-level vision problems. Different kinds of relaxation methods are available. Stochastic techniques, such as simulated annealing, asymptotically converge (under certain conditions) to the global minimum, but at a high computational cost. Deterministic relaxation methods, though sub-optimal, give good results and are faster than the stochastic ones. In this paper, we focus on the parallel implementation of two deterministic algorithms for edge detection and image smoothing: graduated non-convexity (GNC), originally proposed by Blake & Zisserman, and mean field annealing (MFA), introduced by Geiger & Girosi and extended to anisotropic compound Gauss-Markov random fields by Zerubia & Chellappa. Both methods are based on a weak-membrane model, and both algorithms are inherently serial: each step produces a pixel map which is taken as the input of the next step. For the GNC, we implement a checkerboard version of the successive over-relaxation (SOR) method to minimize the energy. For the MFA, we use an optimal-step conjugate gradient descent.
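
    The checkerboard decomposition mentioned above can be made concrete: under a red-black colouring of the grid, each pixel's four neighbours carry the opposite colour, so all pixels of one colour can be relaxed simultaneously. Below is a minimal Python/NumPy sketch of such a sweep for a purely quadratic membrane with periodic boundaries; it is a simplified stand-in for one stage of the GNC schedule, not the paper's actual energy, line process, or CM-2 code.

```python
import numpy as np

def checkerboard_sor(d, lam=1.0, omega=1.5, n_iter=100):
    """Red-black SOR for the quadratic membrane energy
    E(u) = sum((u - d)**2) + lam * sum over 4-neighbour pairs (u_i - u_j)**2.
    Each colour class is updated in one parallel step (conceptually
    one pixel per processor on a machine like the CM-2)."""
    u = d.astype(float).copy()
    yy, xx = np.indices(d.shape)
    red = (yy + xx) % 2 == 0
    for _ in range(n_iter):
        for mask in (red, ~red):
            # Sum of the four neighbours; np.roll gives periodic boundaries.
            nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                  np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u_star = (d + lam * nb) / (1.0 + 4.0 * lam)  # local minimiser
            u[mask] += omega * (u_star[mask] - u[mask])  # over-relaxed update
    return u
```

    With omega between 1 and 2 the Gauss-Seidel step is over-relaxed; omega = 1 recovers plain checkerboard Gauss-Seidel.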

    Automatic visual recognition using parallel machines

    Invariant features and quick matching algorithms are two major concerns in the area of automatic visual recognition. The former reduces the size of an established model database, and the latter shortens the computation time. This dissertation discusses both line invariants under perspective projection and a parallel implementation of a dynamic programming technique for shape recognition. The feasibility of using parallel machines is demonstrated through the dramatically reduced time complexity. Our algorithms are implemented on the AP1000 MIMD parallel machine. For processing an object with n features, the time complexity of the proposed parallel algorithm is O(n), while that of a uniprocessor is O(n²). Two applications, one for shape matching and the other for chain-code extraction, are used to demonstrate the usefulness of our methods. Invariants of four general lines under perspective projection are also discussed. In contrast to approaches using epipolar geometry, we investigate invariants under isotropy subgroups. Theoretically, two independent invariants can be found for four general lines in 3D space. In practice, we show how to obtain these two invariants from the projective images of four general lines without camera calibration. Finally, a projective invariant recognition system based on a hypothesis-generation-testing scheme is implemented on the hypercube parallel architecture: object recognition is achieved by matching the scene projective invariants to the model projective invariants, a step called transfer.
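
    The O(n²)-to-O(n) reduction quoted above is the classic wavefront property of dynamic programming: every cell on one anti-diagonal of the DP table depends only on earlier anti-diagonals, so with about n processors each diagonal fills in constant time. The sketch below uses plain edit distance between chain-code strings as a hypothetical stand-in for the dissertation's matching cost, with the anti-diagonal sweep made explicit; the inner loop over i is the part that runs in parallel.

```python
import numpy as np

def dp_match(a, b):
    """Edit-distance DP between two chain-code sequences.
    Serial time is O(n*m); all cells with equal i + j are mutually
    independent, which is what a machine like the AP1000 exploits."""
    n, m = len(a), len(b)
    D = np.zeros((n + 1, m + 1), dtype=int)
    D[:, 0] = np.arange(n + 1)
    D[0, :] = np.arange(m + 1)
    for k in range(2, n + m + 1):                          # sweep anti-diagonals
        for i in range(max(1, k - m), min(n, k - 1) + 1):  # parallelizable loop
            j = k - i
            cost = 0 if a[i - 1] == b[j - 1] else 1
            D[i, j] = min(D[i - 1, j] + 1,                 # deletion
                          D[i, j - 1] + 1,                 # insertion
                          D[i - 1, j - 1] + cost)          # match / substitute
    return D[n, m]
```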

    NETRA - A Parallel Architecture for Integrated Vision Systems I: Architecture and Organization

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory.
    National Aeronautics and Space Administration / NASA-NAG-1-61

    Parallel Architectures and Parallel Algorithms for Integrated Vision Systems

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers, and this work addresses several issues in parallel architectures and parallel algorithms for integrated vision systems.
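
    A hypothetical skeleton of that three-level organisation, with invented stage names, is sketched below; it shows only the data flow from one level to the next, not any particular parallel mapping (a real IVS would typically map each level onto the processor organisation best suited to it).

```python
from typing import Callable, List, Tuple

Stage = Tuple[str, Callable]  # (level name, transform for that level)

def run_ivs(image, stages: List[Stage]):
    data = image
    for name, fn in stages:
        data = fn(data)  # each level's output feeds the next level
    return data

# Placeholder transforms; real ones would be edge detectors, grouping
# or segmentation algorithms, model matchers, and so on.
pipeline: List[Stage] = [
    ("low: filtering / edge detection", lambda im: im),
    ("intermediate: grouping / segmentation", lambda edges: edges),
    ("high: matching / recognition", lambda features: features),
]
```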

    Author index—Volumes 1–89


    Design Patterns for Parallel Vision Applications

    Computer vision is a challenging application for high-performance computing. To meet its computational demands, a number of SIMD- and MIMD-based parallel machines have been proposed and developed. However, due to high costs and long design times, these machines have not been widely used. Recently, network-based environments, such as a cluster of workstations, have provided effective and economical platforms for high-performance computing. But developing parallel applications on such machines involves complex decisions about the distribution of processes over processors, the scheduling of processor time between competing processes, communication patterns, and so on. Writing explicit code to control these decisions increases program complexity and reduces program reliability and code re-usability. We propose a design methodology based on design patterns, intended to support the parallelization of vision applications on a cluster of workstations. We identify common algorithmic forms occurring repeatedly in parallel vision algorithms and formulate these as design patterns. We specify various aspects of the parallel behaviour of a design pattern, such as process placement or communication patterns, either in its definition or separately as issues to be addressed explicitly during its implementation. Design patterns ensure program reliability and code re-usability because they capture the essence of working designs in a form that makes them usable in different situations and in future work. This research presents a catalogue of design patterns for implementing various forms of parallelism in vision applications on a cluster of workstations. Using the relevant design patterns, we implement representative vision algorithms for low-, intermediate-, and high-level vision tasks. The majority of these implementations show promising results: for example, given a 512x512 image, an image-restoration algorithm based on a Markov random field model completes in less than 45 seconds on a network of 16 workstations (Sun SPARCstation 5), whereas the same task takes more than 10 minutes on a single such workstation.
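
    As an illustration (not code from the thesis), one pattern such a catalogue would typically contain, a task farm over independent image bands, can be sketched with Python's multiprocessing standing in for the workstation cluster; the worker body, band size, and worker count below are all assumptions.

```python
from multiprocessing import Pool
import numpy as np

def relax_band(band):
    # Stand-in worker: a real MRF restoration step (e.g., one ICM or
    # annealing sweep) would go here, and bands would overlap so that
    # boundary rows could be exchanged between neighbours.
    return band

def task_farm(image, n_workers=16, band_rows=32):
    """Task-farm pattern: a farmer splits the image into row bands,
    workers process the bands independently, and the farmer
    reassembles the result. Process placement and communication are
    exactly the decisions such a design pattern is meant to
    encapsulate."""
    bands = np.array_split(image, max(1, image.shape[0] // band_rows))
    with Pool(n_workers) as pool:
        out = pool.map(relax_band, bands)
    return np.vstack(out)
```

    On platforms that spawn rather than fork worker processes, the call to task_farm must sit under an `if __name__ == "__main__":` guard.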