
    Parallel programs for the recognition of P-invariant segments


    Automatic visual recognition using parallel machines

    Invariant features and quick matching algorithms are two major concerns in the area of automatic visual recognition. The former reduces the size of an established model database, and the latter shortens the computation time. This dissertation discusses both line invariants under perspective projection and a parallel implementation of a dynamic programming technique for shape recognition. The feasibility of using parallel machines is demonstrated through the dramatically reduced time complexity. In this dissertation, our algorithms are implemented on the AP1000 MIMD parallel machine. For processing an object with n features, the time complexity of the proposed parallel algorithm is O(n), while that of a uniprocessor is O(n²). Two applications, one for shape matching and the other for chain-code extraction, are used to demonstrate the usefulness of our methods. Invariants from four general lines under perspective projection are also discussed. In contrast to approaches that use the epipolar geometry, we investigate the invariants under isotropy subgroups. Theoretically, two independent invariants can be found for four general lines in 3D space. In practice, we show how to obtain these two invariants from the projective images of four general lines without the need for camera calibration. A projective invariant recognition system based on a hypothesis-generation-testing scheme is run on a hypercube parallel architecture: object recognition is achieved by matching the scene projective invariants to the model projective invariants, a step called transfer, after which the hypothesis-generation-testing scheme verifies the candidate matches.
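    To illustrate how the dynamic programming step parallelises, the sketch below (a hypothetical Python reconstruction, not the dissertation's AP1000 code) matches two chain codes with an edit-distance table filled by anti-diagonals; cells on one anti-diagonal are mutually independent, which is what lets n processors bring the O(n²) table down to O(n) steps.

```python
def chain_code_distance(a, b):
    """Edit distance between two chain-code strings, filled by anti-diagonals.
    Sequentially the table costs O(n^2); each anti-diagonal depends only on
    the previous two, so its cells can be computed concurrently, giving O(n)
    parallel steps on n processors."""
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for s in range(2, n + m + 1):                 # one anti-diagonal per step
        for i in range(max(1, s - m), min(n, s - 1) + 1):
            j = s - i
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[n][m]

print(chain_code_distance("00112233", "0012233"))  # -> 1
```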

    Efficient parallel processing with optical interconnections

    With advances in VLSI technology, it is now possible to build chips that each contain thousands of processors. The efficiency of such chips in executing parallel algorithms depends heavily on the interconnection topology of the processors. It is not possible to build a fully interconnected network of processors with constant fan-in/fan-out using electrical interconnections. Free-space optics is a remedy to this limitation. Qualities exclusive to the optical medium are its ability to be directed for propagation in free space and the property that optical channels can cross in space without any interference. In this thesis, we present an electro-optical interconnected architecture named the Optical Reconfigurable Mesh (ORM), based on an existing optical model of computation. The architecture has two layers: the processing layer is a reconfigurable mesh, and the deflecting layer contains optical devices to deflect light beams. ORM provides three types of communication mechanisms. The first serves arbitrary planar connections among sets of locally connected processors using the reconfigurable mesh. The second serves arbitrary connections among N of the processors using the electrical buses on the processing layer and N² fixed passive deflecting units on the deflection layer. The third serves arbitrary connections among any of the N² processors using the N² mechanically reconfigurable deflectors in the deflection layer. The third mechanism is significantly slower than the other two, so it is desirable not to reconfigure it during the execution of an algorithm; instead, the optical reconfiguration can be done before each algorithm begins. Determining a deflector configuration suitable for the entire execution of a task is studied in this thesis. The basic data movements for each of the mechanisms are studied. Finally, to show the power of ORM, we use all three types of communication mechanisms in the first O(log N) time algorithm, presented in this thesis, for finding the convex hulls of all figures in an N × N binary image.
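    The tiering of the three mechanisms can be pictured with the toy model below; the class, field names and decision rule are assumptions made for this sketch, not the thesis's design.

```python
# Illustrative toy only: how an algorithm might pick among ORM's three
# communication mechanisms, slowest last.
from dataclasses import dataclass

@dataclass
class Request:
    planar: bool               # reachable through locally connected mesh buses?
    passively_deflected: bool  # covered by one of the N^2 fixed passive deflectors?

def choose_mechanism(r: Request) -> str:
    if r.planar:
        # Mechanism 1: reconfigurable-mesh buses on the processing layer.
        return "mesh buses"
    if r.passively_deflected:
        # Mechanism 2: electrical buses plus fixed passive deflecting units.
        return "fixed passive deflectors"
    # Mechanism 3: mechanically reconfigurable deflectors reach any processor,
    # but reconfiguring them is slow, so the configuration is chosen once
    # before the algorithm starts rather than during execution.
    return "mechanical deflectors (preconfigured before execution)"

print(choose_mechanism(Request(planar=False, passively_deflected=True)))
```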

    Systolic Array Implementations With Reduced Compute Time.

    The goal of this research is to establish a formal methodology for developing computational structures better suited to the changing nature of real-time signal processing and control applications. A major effort is devoted to the following question: given a systolic array designed to execute a particular algorithm, what other algorithms can be executed on the same array? One approach to answering this question is based on a general model of array operations using graph-theoretic techniques. As a result, a systematic procedure is introduced that models array operations as a function of the compute cycle. As a consequence of the analysis, the dissertation develops the concept of fast algorithm realizations. This concept characterizes specific realizations that can be evaluated in a reduced number of cycles: the operations remain in the same class but with reduced execution time. The concept takes advantage of the data dependencies of the algorithm at hand, which allows existing structures to be modified by reordering the input data. Applications of the principle allow optimum-time band and triangular matrix products on arrays designed for dense matrices. A second approach to analyzing the families of algorithms implementable on an array is based on the concept of time-constrained array operation. The principle uses the number of compute cycles as an additional degree of freedom to expand the class of transformations generated by a single array. A mathematical approach, based on concepts from multilinear algebra, is introduced to model the recursive transformations implemented in linear arrays at each compute cycle. The proposed representation is general enough to encompass a large class of signal processing and control applications. A complete analytical model of the linear maps implementable by the array at each compute cycle is developed. The proposed methodology results in arrays that are more adaptable to the changing nature of operations. Lessons learned from analyzing existing arrays are used to design smart arrays for special algorithm realizations. Applications of the methodology include the design of flexible time structures and the ability to decompose a full-size array into subarrays implementing smaller problems.
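    To make "array operations as a function of the compute cycle" concrete, the sketch below is a cycle-by-cycle simulation of a generic linear systolic array computing a matrix-vector product; it is a standard textbook arrangement, not the dissertation's formalism. With n cells working in parallel, the product completes in 2n − 1 cycles.

```python
def systolic_matvec(A, x):
    """Toy cycle-accurate simulation of a linear systolic array computing y = A x.
    Cell j permanently holds x[j]; partial sum y_i enters the array at cycle i
    and moves one cell right per cycle, picking up A[i][j] * x[j] at cell j.
    All cells operate in parallel, so the whole product takes 2n - 1 cycles."""
    n = len(x)
    y = [0] * n
    total_cycles = 2 * n - 1
    for t in range(total_cycles):      # one iteration = one compute cycle
        for i in range(n):             # in hardware these updates run concurrently
            j = t - i                  # the cell that y_i visits at cycle t
            if 0 <= j < n:
                y[i] += A[i][j] * x[j]
    return y, total_cycles

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
x = [1, 0, 2]
print(systolic_matvec(A, x))  # -> ([7, 16, 25], 5)
```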

    Energy efficient enabling technologies for semantic video processing on mobile devices

    Semantic object-based processing will play an increasingly important role in future multimedia systems due to the ubiquity of digital multimedia capture/playback technologies and increasing storage capacity. Although the object-based paradigm has many undeniable benefits, numerous technical challenges remain before such applications become pervasive, particularly on computationally constrained mobile devices. A fundamental issue is the ill-posed problem of semantic object segmentation. Furthermore, on battery-powered mobile computing devices, the additional algorithmic complexity of semantic object-based processing compared to conventional video processing is highly undesirable from both a real-time operation and a battery life perspective. This thesis tackles these issues by first constraining the solution space and focusing on the human face as a primary semantic concept of use to users of mobile devices. A novel face detection algorithm is proposed which, from the outset, was designed to be amenable to offloading from the host microprocessor to dedicated hardware, thereby providing real-time performance and reducing power consumption. The algorithm uses an Artificial Neural Network (ANN) whose topology and weights are evolved via a genetic algorithm (GA). The computational burden of the ANN evaluation is offloaded to a dedicated hardware accelerator capable of processing any evolved network topology. Efficient arithmetic circuitry, which leverages modified Booth recoding, column compressors and carry-save adders, is adopted throughout the design. To tackle the increased computational cost of object tracking and object-based shape encoding, a novel energy-efficient binary motion estimation architecture is proposed. Energy is reduced in the proposed motion estimation architecture by minimising the redundant operations inherent in the binary data. Both architectures are shown to compare favourably with the relevant prior art.
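    The appeal of binary motion estimation in hardware is that, for 1-bit pixels, the block-matching cost collapses to an XOR followed by a population count. The sketch below illustrates that reduction in Python; it is a minimal functional model, not the proposed architecture, and the frame size and search range are arbitrary.

```python
import numpy as np

def binary_block_match(cur_block, ref_frame, top, left, search=4):
    """Full-search block matching on binary frames: the sum of absolute
    differences reduces to a bit count of the XOR, which is what makes
    binary motion estimation cheap in hardware. Many of those XOR results
    are zero, the redundancy an energy-aware architecture can skip."""
    h, w = cur_block.shape
    best = (None, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue
            cand = ref_frame[y:y + h, x:x + w]
            cost = np.count_nonzero(np.bitwise_xor(cur_block, cand))  # XOR + popcount
            if cost < best[1]:
                best = ((dy, dx), cost)
    return best

rng = np.random.default_rng(0)
ref = rng.integers(0, 2, size=(32, 32), dtype=np.uint8)
cur = ref[10:18, 12:20].copy()                       # 8x8 block taken from (10, 12)
print(binary_block_match(cur, ref, top=8, left=10))  # best offset (2, 2) with cost 0
```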

    The anthropometric, environmental and genetic determinants of right ventricular structure and function

    BACKGROUND: Measures of right ventricular (RV) structure and function have significant prognostic value. The right ventricle is currently assessed by global measures, or point surrogates, which are insensitive to regional and directional changes. We aim to create a high-resolution three-dimensional RV model to improve understanding of its structural and functional determinants. These may be of particular interest in pulmonary hypertension (PH), a condition in which RV function and outcome are strongly linked.
    PURPOSE: To investigate the feasibility and additional benefit of applying three-dimensional phenotyping and contemporary statistical and genetic approaches to large patient populations.
    METHODS: Healthy subjects and incident PH patients were prospectively recruited. Using a semi-automated atlas-based segmentation algorithm, 3D models characterising RV wall position and displacement were developed, validated and compared with anthropometric, physiological and genetic influences. Statistical techniques were adapted from other high-dimensional approaches to deal with the problems of multiple testing, contiguity, sparsity and computational burden.
    RESULTS: 1527 healthy subjects successfully completed high-resolution 3D CMR and automated segmentation. Of these, 927 subjects underwent next-generation sequencing of the sarcomeric gene titin and 947 subjects completed genotyping of common variants for a genome-wide association study. 405 incident PH patients were recruited, of whom 256 completed phenotyping. 3D modelling demonstrated significant reductions in required sample size compared to two-dimensional approaches. 3D analysis demonstrated that RV basal free-wall function reflects global functional changes most accurately and that a similar region in PH patients provides stronger survival prediction than all anthropometric, haemodynamic and functional markers. Vascular stiffness, titin-truncating variants and common variants may also contribute to changes in RV structure and function.
    CONCLUSIONS: High-resolution phenotyping coupled with computational analysis methods can improve insights into the determinants of RV structure and function in both healthy subjects and PH patients. Large, population-based approaches offer physiological insights relevant to clinical care in selected patient groups.
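    As a loose illustration of the multiple-testing issue raised in the methods (not the statistical techniques actually adapted in the thesis), the sketch below runs a mass-univariate association test over a simulated per-vertex phenotype and applies a Benjamini-Hochberg correction; the data, effect size and threshold are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_vertices = 200, 5000
covariate = rng.standard_normal(n_subjects)        # e.g. a standardised clinical covariate
phenotype = rng.standard_normal((n_subjects, n_vertices))
phenotype[:, :50] += 0.4 * covariate[:, None]      # 50 truly associated vertices

# One p-value per mesh vertex (mass-univariate testing).
p = np.array([stats.pearsonr(covariate, phenotype[:, v])[1] for v in range(n_vertices)])

def fdr_bh(p, q=0.05):
    """Benjamini-Hochberg: reject the k smallest p-values, where k is the
    largest index with p_(k) <= k * q / V."""
    order = np.argsort(p)
    thresh = q * np.arange(1, len(p) + 1) / len(p)
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    sig = np.zeros(len(p), dtype=bool)
    sig[order[:k]] = True
    return sig

print(fdr_bh(p).sum(), "vertices significant at 5% FDR")
```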

    Development of Novel Independent Component Analysis Techniques and their Applications

    Real-world problems very often provide minimal information about their causes, mainly due to system complexity and the noninvasive techniques scientists and engineers employ to study such systems. Signal and image processing techniques used for analyzing such systems therefore tend to be blind. Earlier, techniques based on training signals were used extensively for such analyses, but in many cases these training signals are either impractical for the analyzer to obtain or become a burden on the system itself. Hence blind signal and image processing techniques are becoming predominant in modern real-time systems. In fact, blind signal processing has become a very important topic of research and development in many areas, especially biomedical engineering, medical imaging, speech enhancement, remote sensing, communication systems, exploration seismology, geophysics, econometrics, data mining and sensor networks. Blind signal processing has three major areas: blind signal separation and extraction, Independent Component Analysis (ICA), and multichannel blind deconvolution and equalization. The ICA technique has also typically been applied to the other two areas. Hence ICA research, with its wide range of applications, is of considerable interest and has been taken up as the central domain of the present work.
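    As a minimal, self-contained example of the ICA idea (using scikit-learn's FastICA rather than the novel techniques developed in the thesis), the sketch below recovers two independent sources from two observed mixtures without knowledge of the mixing matrix.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * t)                        # sinusoidal source
s2 = np.sign(np.sin(3 * np.pi * t))               # square-wave source
S = np.c_[s1, s2] + 0.02 * rng.standard_normal((2000, 2))

A = np.array([[1.0, 0.5],                         # mixing matrix, unknown to the separator
              [0.4, 1.0]])
X = S @ A.T                                       # only the mixtures are observed

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                      # estimated sources

# Recovery is only up to permutation, sign and scale, as is inherent to ICA;
# entries near +/-1 indicate each source was found.
print(np.round(np.corrcoef(S.T, S_hat.T)[:2, 2:], 2))
```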