414 research outputs found

    Distributed estimation over a low-cost sensor network: a review of state-of-the-art

    The proliferation of low-cost, lightweight, and power-efficient sensors, together with advances in networked systems, enables the deployment of multiple sensors. Distributed estimation provides a scalable and fault-robust fusion framework with a peer-to-peer communication architecture. For this reason, there is a real need for a critical review of existing and, more importantly, recent advances in distributed estimation over low-cost sensor networks. This paper presents a comprehensive review of the state-of-the-art solutions in this research area, exploring their characteristics, advantages, and challenging issues. Additionally, several open problems and future avenues of research are highlighted.
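    As a minimal illustration of the peer-to-peer fusion idea reviewed here (not an algorithm taken from the paper), the sketch below shows plain consensus averaging in Python: each node repeatedly averages its estimate with those of its neighbours, so all nodes converge on a common fused value without a central node. The ring topology, step size, and noise model are assumptions made for the example.

```python
# Minimal sketch (assumed example, not from the paper): each sensor node holds a
# noisy estimate of a common scalar and repeatedly averages with its neighbours,
# illustrating peer-to-peer (fully distributed) fusion without a fusion centre.
import numpy as np

def consensus_average(estimates, neighbours, steps=50, weight=0.2):
    """estimates: initial local estimates, one per node.
    neighbours: dict mapping node index -> list of neighbouring node indices.
    weight: consensus step size (must be small enough for stability)."""
    x = np.asarray(estimates, dtype=float).copy()
    for _ in range(steps):
        x_new = x.copy()
        for i, nbrs in neighbours.items():
            # Move each node's estimate toward the average of its neighbours.
            x_new[i] += weight * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_value = 10.0
    measurements = true_value + rng.normal(0.0, 1.0, size=5)   # noisy local readings
    ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}   # ring topology
    print(consensus_average(measurements, ring))               # all nodes converge near the mean
```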

    Scalable and adaptable tracking of humans in multiple camera systems

    The aim of this thesis is to track objects on a network of cameras, both within (intra) and across (inter) cameras. The algorithms must be adaptable to change and are learnt in a scalable manner. Uncalibrated cameras that are spatially separated are used, and therefore tracking must be able to cope with object occlusions, illumination changes, and gaps between cameras.

    Hadoop neural network for parallel and distributed feature selection

    In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative memory (binary) neural network which is highly amenable to parallel and distributed processing and fits with the Hadoop paradigm. There are many feature selectors described in the literature, each with various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing. Each feature selector can be divided into subtasks and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel), allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation, and the overall processing can be greatly reduced by processing the common aspects of the feature selectors only once and propagating these aspects across all five feature selectors as necessary. This allows the best feature selector, and the actual features to select, to be identified for large and high-dimensional data sets by exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop.
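    The division into independently processable subtasks can be illustrated with a small sketch. This is an assumption-laden stand-in, using Python multiprocessing instead of Hadoop YARN and a simple mutual-information score on binary features instead of the paper's binary associative-memory network; it is meant only to show why per-feature scoring parallelises so naturally.

```python
# Minimal sketch (illustrative assumption, not the paper's Hadoop framework):
# scoring each feature is an independent subtask, so the subtasks can run in
# parallel -- here multiprocessing workers stand in for Hadoop map tasks.
from multiprocessing import Pool
import numpy as np

def score_feature(args):
    """One subtask: mutual-information score of a single binary feature vs. the labels."""
    column, labels = args
    score = 0.0
    for f_val in (0, 1):                      # assumes binary (0/1) features
        for y_val in set(labels.tolist()):
            joint = np.mean((column == f_val) & (labels == y_val))
            p_f = np.mean(column == f_val)
            p_y = np.mean(labels == y_val)
            if joint > 0:
                score += joint * np.log(joint / (p_f * p_y))
    return score

def select_top_k(X, y, k=10, workers=4):
    tasks = [(X[:, j], y) for j in range(X.shape[1])]
    with Pool(workers) as pool:
        scores = pool.map(score_feature, tasks)      # subtasks processed in parallel
    return np.argsort(scores)[::-1][:k]              # indices of the k best features

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.integers(0, 2, size=(200, 50))           # 200 samples, 50 binary features
    y = X[:, 3] & X[:, 7]                            # labels depend on features 3 and 7
    print(select_top_k(X, y, k=5))                   # features 3 and 7 should rank highly
```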

    Fractal image compression and the self-affinity assumption: a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
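    The block-based encoding step described above can be sketched concretely. The following is a simplified illustration, not the dissertation's coder: fixed-size range blocks, non-overlapping domain blocks contracted by averaging, and a grey-level affine map s·D + o fitted by least squares, with no block isometries, quantisation, or entropy coding.

```python
# Minimal sketch of fractal block matching (illustrative assumptions: 4x4 range blocks,
# 8x8 non-overlapping domain blocks contracted by 2x2 averaging, grey-level map s*D + o
# only). Assumes image dimensions are divisible by the domain block size.
import numpy as np

def encode_block(range_block, domain_pool):
    """Find the domain block and affine grey-level map (s, o) minimising ||s*D + o - R||^2."""
    best = None
    r = range_block.ravel().astype(float)
    for idx, dom in enumerate(domain_pool):
        d = dom.ravel().astype(float)
        s, o = np.polyfit(d, r, 1)                    # least-squares contrast and brightness
        err = np.sum((s * d + o - r) ** 2)
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best[1:]                                    # (domain index, contrast, brightness)

def fractal_encode(image, r_size=4):
    d_size = 2 * r_size
    # Build the domain pool: each d_size block, spatially contracted to r_size by averaging.
    domains = []
    for y in range(0, image.shape[0] - d_size + 1, d_size):
        for x in range(0, image.shape[1] - d_size + 1, d_size):
            blk = image[y:y + d_size, x:x + d_size].astype(float)
            domains.append(blk.reshape(r_size, 2, r_size, 2).mean(axis=(1, 3)))
    codes = []
    for y in range(0, image.shape[0], r_size):
        for x in range(0, image.shape[1], r_size):
            codes.append(encode_block(image[y:y + r_size, x:x + r_size], domains))
    return codes                                       # the "fractal code": one triple per range block
```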

    De/Mystifying smartphone-video through Vilém Flusser's quanta

    Videos made on smartphones are recognised in popular culture in a manner that is not reciprocated in media theory and fine art practice. The difference between smartphone-video and other film and video technology has been obscured within post-medium contexts such as "moving image," where an ideological indifference creates new physical and psychological barriers between video 'user' and moving image 'artist.' This thesis considers smartphone-video as a significantly different gesture to other moving image technologies, which I raise through media theorist Vilém Flusser's interpretation of "quanta," and his interest in 'the gesture of video' as a "quantised phenomenon." I approach these ideas through my own smartphone-videos, which are initially influenced by principles of Peter Gidal's structural/materialist filmmaking. By readdressing Gidal's methods of non-illusionist demystification, smartphone-video can be considered a very different gesture to filmmaking. Film becomes stable, causal, and Newtonian, while video becomes unstable, probable, and quantum. Developments in digital imaging and computer processors highlight such quantum mechanics, which, although complex, function in ways classical physics cannot explain. This thesis proposes how Flusser's concept of quanta can account for the unstable qualities found in smartphone-video's manner of operation when de/mystified through principles of Gidal's structural/materialist filmmaking. Such observations consider video's quantum instability through AI-driven automation and user-friendly features that enable "quantum dialogues" between user and machine as decision-makers. Observing smartphone-videos as non-polarised quantum dialogues through improvisation in the act of recording expresses Flusser's theory of gestures and elucidates his proto-decolonial efforts against "universal phenomena." The gesture of smartphone-video encompasses much more than I had imagined, and subsequently, with the aid of Karen Barad, considerations are made towards a de/mystification of video's gesture, operating through proximity in an intra-subjective network of user(s).

    Identification of robotic manipulators' inverse dynamics coefficients via model-based adaptive networks

    The values of a given manipulator's dynamics coefficients need to be accurately identified in order to employ model-based algorithms in the control of its motion. This thesis details the development of a novel form of adaptive network which is capable of accurately learning the coefficients of systems, such as manipulator inverse dynamics, where the algebraic form is known but the coefficients' values are not. Empirical motion data from a pair of PUMA 560s has been processed by the Context-Sensitive Linear Combiner (CSLC) network developed, and the coefficients of their inverse dynamics identified. The resultant precision of control is shown to be superior to that achieved from employing dynamics coefficients derived from direct measurement. As part of the development of the CSLC network, the process of network learning is examined. This analysis reveals that current network architectures for processing analogue output systems with high input order are highly unlikely to produce solutions that are good estimates throughout the entire problem space. In contrast, the CSLC network is shown to generalise intrinsically as a result of its structure, whilst its training is greatly simplified by the presence of only one minimum in the network's error hypersurface. Furthermore, a fine-tuning algorithm for network training is presented which takes advantage of the CSLC network's single adaptive layer structure and does not rely upon gradient descent of the network error hypersurface, which commonly slows the later stages of network training.
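    The CSLC network itself is specific to the thesis, but the underlying identification problem, where the algebraic form is known and only the coefficients are unknown, can be illustrated generically: manipulator inverse dynamics is linear in its coefficients, tau = Y(q, q̇, q̈)·theta, so recorded motion data admit a least-squares estimate of theta. The one-link model, trajectory, and parameter values below are assumptions for the example, not data from the thesis.

```python
# Generic illustration (not the thesis's CSLC network): with the algebraic form known,
# the joint torque is linear in the unknown coefficients, tau = Y(q, qd, qdd) @ theta,
# so the coefficients can be recovered from motion data by least squares.
import numpy as np

def regressor(q, qd, qdd):
    """Regressor Y for a toy one-link arm with viscous friction and gravity:
    tau = I*qdd + b*qd + m*g*l*sin(q), unknowns theta = [I, b, m*g*l]."""
    return np.column_stack([qdd, qd, np.sin(q)])

def identify_coefficients(q, qd, qdd, tau):
    Y = regressor(q, qd, qdd)
    theta, *_ = np.linalg.lstsq(Y, tau, rcond=None)
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.linspace(0, 10, 500)
    # A smooth, sufficiently exciting trajectory (two sinusoids) and its derivatives.
    q = np.sin(t) + 0.5 * np.sin(3.1 * t)
    qd = np.cos(t) + 1.55 * np.cos(3.1 * t)
    qdd = -np.sin(t) - 4.805 * np.sin(3.1 * t)
    theta_true = np.array([0.05, 0.3, 1.2])               # [inertia, friction, gravity term]
    tau = regressor(q, qd, qdd) @ theta_true + rng.normal(0, 0.01, t.size)
    print(identify_coefficients(q, qd, qdd, tau))          # close to theta_true
```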

    Parallel architectures for image analysis

    This thesis is concerned with the problem of designing an architecture specifically for the application of image analysis and object recognition. Image analysis is a complex subject area that remains only partially defined and only partially solved. This makes the task of designing an architecture aimed at efficiently implementing image analysis and recognition algorithms a difficult one. Within this work a massively parallel heterogeneous architecture, the Warwick Pyramid Machine, is described. This architecture consists of SIMD, MIMD and MSIMD modes of parallelism, each directed at a different part of the problem. The performance of this architecture is analysed with respect to many tasks drawn from very different areas of the image analysis problem. These tasks include an efficient straight line extraction algorithm and a robust and novel geometric model based recognition system. The straight line extraction method is based on the local extraction of line segments using a Hough style algorithm followed by careful global matching and merging. The recognition system avoids quantising the pose space, hence overcoming many of the problems inherent with this class of methods, and includes an analytical verification stage. Results and detailed implementations of both of these tasks are given.
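    The local line-extraction stage can be illustrated with a basic Hough-style vote. The sketch below is a simplification (a single global accumulator in plain Python/NumPy rather than the thesis's per-patch extraction and global merging on the Warwick Pyramid Machine), with illustrative parameter choices.

```python
# Minimal sketch of a Hough-style straight-line vote (illustrative only; the thesis
# extracts segments locally per image patch and then matches and merges them globally,
# which is not reproduced here).
import numpy as np

def hough_lines(edge_points, img_shape, n_theta=180, n_rho=200, top_k=5):
    """edge_points: (N, 2) array of (row, col) edge pixels.
    Returns the top_k (rho, theta) line parameters with the most votes."""
    h, w = img_shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for r, c in edge_points:
        # Each edge pixel votes for every line (rho, theta) passing through it:
        # rho = x*cos(theta) + y*sin(theta), with x = column, y = row.
        rho_vals = c * cos_t + r * sin_t
        rho_idx = np.round((rho_vals + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[rho_idx, np.arange(n_theta)] += 1
    best = np.argsort(acc.ravel())[::-1][:top_k]
    rho_i, theta_i = np.unravel_index(best, acc.shape)
    return [(rhos[i], thetas[j]) for i, j in zip(rho_i, theta_i)]
```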

    Coupling AAA protein function to regulated gene expression

    AAA proteins (ATPases Associated with various cellular Activities) are involved in almost all essential cellular processes, ranging from DNA replication and transcription regulation to protein degradation. One class of AAA proteins has evolved to adapt to the specific task of coupling ATPase activity to activating transcription. These upstream promoter DNA-bound AAA activator proteins contact their target substrate, the σ54-RNA polymerase holoenzyme, through DNA looping, reminiscent of the eukaryotic enhancer-binding proteins. These specialised macromolecular machines remodel their substrates through ATP hydrolysis that ultimately leads to transcriptional activation. We will discuss how AAA proteins are specialised for this specific task. This article is part of a Special Issue entitled: AAA ATPases: structure and function.