Distributed estimation over a low-cost sensor network: a review of state-of-the-art
The proliferation of low-cost, lightweight, and power-efficient sensors, together with advances in networked systems, enables the deployment of multiple sensors. Distributed estimation provides a scalable and fault-robust fusion framework with a peer-to-peer communication architecture. For this reason, there is a real need for a critical review of existing and, more importantly, recent advances in the domain of distributed estimation over low-cost sensor networks. This paper presents a comprehensive review of state-of-the-art solutions in this research area, exploring their characteristics, advantages, and challenging issues. Additionally, several open problems and future avenues of research are highlighted.
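The peer-to-peer fusion architecture such reviews survey can be illustrated with a minimal average-consensus sketch; the ring topology, gain, and starting values below are illustrative assumptions, not taken from the paper:

```python
def consensus_step(estimates, neighbours, epsilon=0.2):
    """One synchronous consensus iteration: each node moves its local
    estimate toward its neighbours' estimates (no fusion centre needed)."""
    updated = {}
    for node, x in estimates.items():
        diff = sum(estimates[j] - x for j in neighbours[node])
        updated[node] = x + epsilon * diff
    return updated

# Ring of four sensors, each starting from a noisy local measurement.
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
estimates = {0: 9.8, 1: 10.4, 2: 10.1, 3: 9.9}
for _ in range(50):
    estimates = consensus_step(estimates, neighbours)
# All nodes converge to the network-wide average of the measurements.
```

With a symmetric topology and a sufficiently small gain (epsilon below the inverse of the maximum node degree), every node converges to the same network-wide average without any node ever seeing all the data, which is the scalability and fault-robustness argument in miniature.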
Scalable and adaptable tracking of humans in multiple camera systems
The aim of this thesis is to track objects on a network of cameras both within (intra) and across (inter) cameras. The algorithms must be adaptable to change and are learnt in a scalable approach. Uncalibrated cameras are used that are spatially separated, and therefore tracking must be able to cope with object occlusions, illumination changes, and gaps between cameras.
Hadoop neural network for parallel and distributed feature selection
In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative-memory (binary) neural network which is highly amenable to parallel and distributed processing and fits the Hadoop paradigm. There are many feature selectors described in the literature, each with various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing: each feature selector can be divided into subtasks, and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel), allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation, and the overall processing can be greatly reduced by processing the common aspects of the feature selectors only once and propagating these aspects across all five feature selectors as necessary. This allows the best feature selector, and the actual features to select, to be identified for large and high-dimensional data sets by exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop.
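The subtask decomposition described above, in which each feature is scored as an independent unit of work that can run in parallel, can be sketched as follows; the Pearson-correlation score and the thread pool are illustrative stand-ins for the paper's binary associative-memory criterion and Hadoop YARN:

```python
from concurrent.futures import ThreadPoolExecutor

def score_feature(column, labels):
    """Score one feature independently of all others -- the unit of work
    that maps onto one parallel subtask (absolute Pearson correlation
    with the label is used as a simple stand-in criterion)."""
    n = len(column)
    mx, my = sum(column) / n, sum(labels) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(column, labels))
    vx = sum((x - mx) ** 2 for x in column) ** 0.5
    vy = sum((y - my) ** 2 for y in labels) ** 0.5
    return abs(cov / (vx * vy)) if vx and vy else 0.0

def select_top_k(features, labels, k=2):
    """Score every feature in parallel, then keep the k highest-scoring."""
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda f: score_feature(f, labels), features))
    return sorted(range(len(features)), key=lambda i: scores[i], reverse=True)[:k]
```

Because each feature's score depends only on that feature and the labels, the map step is embarrassingly parallel, which is exactly the property that makes the decomposition fit a MapReduce-style framework.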
Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective
Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation.
The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
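The block-matching search at the heart of fractal coding, finding for each range block a domain block and a grey-level affine map s·d + o under which the image is approximately invariant, can be sketched as below; this is a simplified sketch (real coders also search spatially contracted and isometrically transformed domain blocks):

```python
import numpy as np

def best_affine_match(range_block, domain_blocks):
    """For one range block, find the domain block and the grey-level
    affine map s*d + o that best approximates it -- the core search
    step of fractal block coding."""
    best = None
    r = range_block.ravel().astype(float)
    for idx, block in enumerate(domain_blocks):
        d = block.ravel().astype(float)
        # Least-squares fit of contrast s and brightness o in r ~ s*d + o.
        A = np.vstack([d, np.ones_like(d)]).T
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
        err = float(np.sum((s * d + o - r) ** 2))
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best  # (error, domain index, contrast, brightness)
```

The "self-affinity" question the dissertation examines is precisely whether natural images contain domain blocks for which this residual error is small enough to make the resulting codebook competitive.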
De/Mystifying smartphone-video through Vilém Flusser's quanta
Videos made on smartphones are recognised in popular culture in a manner that is not reciprocated in media theory and fine art practice. The difference between smartphone-video and other film and video technology has been obscured within post-medium contexts such as "moving image", where an ideological indifference creates new physical and psychological barriers between video "user" and moving image "artist". This thesis considers smartphone-video as a significantly different gesture to other moving image technologies, which I raise through media theorist Vilém Flusser's interpretation of "quanta", and his interest in "the gesture of video" as a "quantised phenomenon". I approach these ideas through my own smartphone-videos, which are initially influenced by principles of Peter Gidal's structural/materialist filmmaking. By readdressing Gidal's methods of non-illusionist demystification, smartphone-video can be considered a very different gesture to filmmaking. Film becomes stable, causal, and Newtonian; while video becomes unstable, probable, and quantum. Developments in digital imaging and computer processors highlight such quantum mechanics, which, although complex, function in ways classical physics cannot explain. This thesis proposes how Flusser's concept of quanta can account for the unstable qualities found in smartphone-video's manner of operation when de/mystified through principles of Gidal's structural/materialist filmmaking. Such observations consider video's quantum instability through AI-driven automation and user-friendly features that enable "quantum dialogues" between user and machine as decision-makers.
Observing smartphone-videos as non-polarised quantum dialogues through improvisation in the act of recording expresses Flusser's theory of gestures, and elucidates his proto-decolonial efforts against "universal phenomena." The gesture of smartphone-video encompasses much more than I had imagined, and subsequently, with the aid of Karen Barad, considerations are made towards a de/mystification of video's gesture, operating through proximity in an intra-subjective network of user(s).
Identification of robotic manipulators' inverse dynamics coefficients via model-based adaptive networks
The values of a given manipulator's dynamics coefficients need to be accurately identified in order to employ model-based algorithms in the control of its motion. This thesis details the development of a novel form of adaptive network which is capable of accurately learning the coefficients of systems, such as manipulator inverse dynamics, where the algebraic form is known but the coefficients' values are not. Empirical motion data from a pair of PUMA 560s has been processed by the Context-Sensitive Linear Combiner (CSLC) network developed, and the coefficients of their inverse dynamics identified. The resultant precision of control is shown to be superior to that achieved from employing dynamics coefficients derived from direct measurement.
As part of the development of the CSLC network, the process of network learning is examined. This analysis reveals that current network architectures for processing analogue output systems with high input order are highly unlikely to produce solutions that are good estimates throughout the entire problem space. In contrast, the CSLC network is shown to generalise intrinsically as a result of its structure, whilst its training is greatly simplified by the presence of only one minimum in the network's error hypersurface. Furthermore, a fine-tuning algorithm for network training is presented which takes advantage of the CSLC network's single adaptive layer structure and does not rely upon gradient descent of the network error hypersurface, which commonly slows the later stages of network training.
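Because an inverse-dynamics model with known algebraic form is linear in its unknown coefficients, a single-adaptive-layer identification problem of this kind has the convex, single-minimum error surface of ordinary least squares. A hedged one-joint sketch (the three-term model and its coefficient values are invented for illustration, not taken from the thesis):

```python
import numpy as np

# Hypothetical one-joint model: tau = a*qdd + b*sin(q) + c*qd
# (inertia, gravity, and viscous-friction terms). The torque is linear
# in (a, b, c), so least squares finds the unique error minimum directly.
rng = np.random.default_rng(0)
q, qd, qdd = rng.uniform(-1, 1, (3, 200))          # sampled motion data
true = np.array([2.0, 9.0, 0.5])                   # "unknown" coefficients
tau = true @ np.vstack([qdd, np.sin(q), qd]) + 0.01 * rng.standard_normal(200)

Phi = np.column_stack([qdd, np.sin(q), qd])        # regressor matrix
coeffs, *_ = np.linalg.lstsq(Phi, tau, rcond=None) # single global minimum
```

The single minimum is what removes the slow late-stage gradient descent mentioned above: with a linear-in-parameters regressor there are no local minima to escape, and the solution can be computed in closed form.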
Parallel architectures for image analysis
This thesis is concerned with the problem of designing an architecture specifically for the application of image analysis and object recognition. Image analysis is a complex subject area that remains only partially defined and only partially solved. This makes the task of designing an architecture aimed at efficiently implementing image analysis and recognition algorithms a difficult one.
Within this work a massively parallel heterogeneous architecture, the Warwick Pyramid Machine, is described. This architecture consists of SIMD, MIMD and MSIMD modes of parallelism, each directed at a different part of the problem. The performance of this architecture is analysed with respect to many tasks drawn from very different areas of the image analysis problem. These tasks include an efficient straight-line extraction algorithm and a robust and novel geometric model-based recognition system. The straight-line extraction method is based on the local extraction of line segments using a Hough-style algorithm, followed by careful global matching and merging. The recognition system avoids quantising the pose space, hence overcoming many of the problems inherent in this class of methods, and includes an analytical verification stage. Results and detailed implementations of both of these tasks are given.
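The Hough-style voting step underlying such line extraction can be sketched with a minimal accumulator; the parameter resolutions below are arbitrary choices, and the subsequent global matching and merging stage is omitted:

```python
import math
from collections import defaultdict

def hough_peak(points, n_theta=180, rho_step=1.0):
    """Vote each point into (theta, rho) accumulator cells for the line
    parameterisation rho = x*cos(theta) + y*sin(theta); the highest-voted
    cell corresponds to the dominant straight line through the points."""
    acc = defaultdict(int)
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_step))] += 1
    # Returns ((theta index, rho bin), vote count) of the best cell.
    return max(acc.items(), key=lambda kv: kv[1])
```

Each collinear point votes for the same (theta, rho) cell, so a peak in the accumulator directly identifies a line without searching over point pairings; this per-cell independence is also what makes the voting step amenable to SIMD parallelism.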
Communication-protocol-based analysis and synthesis of networked systems: progress, prospects and challenges
In recent years, communication-protocol-based synthesis and analysis issues have gained substantial research interest owing mainly to their significance in networked systems. In this work, we survey the control and filtering problems of networked systems under the effects induced by communication protocols. First, we introduce the engineering background of networked systems as well as the theoretical frameworks established to deal with communication-protocol-based analysis and synthesis problems. Then, recent advances (especially the latest results) are reviewed on the stability analysis issue subject to protocol scheduling. Subsequently, particular effort is devoted to presenting the latest progress on various communication-protocol-based control and filtering problems according to the characteristics of networked systems (e.g. time-varying nature, random behaviours, types of parameter uncertainties, and kinds of distributed structure). After that, we provide a systematic review of communication-protocol-based fault diagnosis problems. Finally, some research challenges of communication-protocol-based control and filtering problems are outlined for future research.
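A round-robin protocol, one of the scheduling policies such surveys consider, can be sketched as a zero-order-hold update in which only one node transmits per step while the estimator holds the last received value from the others (node count and values below are illustrative):

```python
def round_robin_update(received, fresh, step):
    """One step of round-robin scheduling: node (step mod n) gets the
    channel and transmits its fresh measurement; all other entries keep
    their previously received values (zero-order hold)."""
    n = len(received)
    active = step % n
    updated = list(received)
    updated[active] = fresh[active]
    return updated
```

This is the basic modelling device behind protocol-based analysis: the estimator never sees all measurements at once, so stability must be established for the time-varying, periodically refreshed information pattern rather than for full-information updates.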
Coupling AAA protein function to regulated gene expression
AAA proteins (ATPases Associated with various cellular Activities) are involved in almost all essential cellular processes, ranging from DNA replication and transcription regulation to protein degradation. One class of AAA proteins has evolved to adapt to the specific task of coupling ATPase activity to activating transcription. These upstream promoter DNA-bound AAA activator proteins contact their target substrate, the σ54-RNA polymerase holoenzyme, through DNA looping, reminiscent of the eukaryotic enhancer-binding proteins. These specialised macromolecular machines remodel their substrates through ATP hydrolysis that ultimately leads to transcriptional activation. We discuss how AAA proteins are specialised for this specific task. This article is part of a Special Issue entitled: AAA ATPases: structure and function.
3D multiple description coding for error resilience over wireless networks
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Mobile communications has gained growing interest from both customers and service providers alike in the last one to two decades. Visual information is used in many application domains such as remote health care, video-on-demand, broadcasting, and video surveillance. In order to enhance the visual effects of digital video content, depth perception needs to be provided alongside the actual visual content. 3D video has earned significant interest from the research community in recent years, due to the tremendous impact it leaves on viewers and its enhancement of the user's quality of experience (QoE). In the near future, 3D video is likely to be used in most video applications, as it offers a greater sense of immersion and perceptual experience. When 3D video is compressed and transmitted over error-prone channels, the associated packet loss leads to visual quality degradation. When a picture is lost or corrupted so severely that the concealment result is not acceptable, the receiver typically pauses video playback and waits for the next INTRA picture to resume decoding. Error propagation caused by employing predictive coding may degrade the video quality severely. There are several ways to mitigate the effects of such transmission errors. One widely used technique in international video coding standards is error resilience.
The motivation behind this research work is that existing schemes for 2D colour video compression such as MPEG, JPEG and H.263 cannot be applied to 3D video content. 3D video signals contain depth as well as colour information and are bandwidth-demanding, as they require the transmission of multiple high-bandwidth 3D video streams. On the other hand, the capacity of wireless channels is limited, and wireless links are prone to various types of errors caused by noise, interference, fading, handoff, error bursts and network congestion. Given the maximum bit-rate budget to represent the 3D scene, bit-rate allocation between texture and depth information should be optimised so that rendering distortion/losses are minimised. To mitigate the effect of these errors on perceptual 3D video quality, error-resilient video coding needs to be investigated further to offer a better quality of experience (QoE) to end users.
This research work aims at enhancing the error resilience capability of compressed 3D video, when transmitted over mobile channels, using Multiple Description Coding (MDC) in order to improve the user's quality of experience (QoE).
Furthermore, this thesis examines the sensitivity of the human visual system (HVS) when employed to view 3D video scenes. The approach used in this study is to use subjective testing in order to rate people's perception of 3D video under error-free and error-prone conditions through the use of a carefully designed bespoke questionnaire. Petroleum Technology Development Fund (PTDF).
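Multiple Description Coding can be illustrated with its simplest temporal form, odd/even frame splitting; this is a generic MDC sketch, not necessarily the 3D scheme developed in the thesis:

```python
def split_descriptions(frames):
    """Split a frame sequence into two independently decodable
    descriptions by temporal subsampling (even and odd frames)."""
    return frames[0::2], frames[1::2]

def reconstruct(even, odd):
    """Interleave whichever descriptions arrive; if one is lost, conceal
    by repeating each surviving frame (half the temporal resolution)."""
    if not odd:
        return [f for frame in even for f in (frame, frame)]
    if not even:
        return [f for frame in odd for f in (frame, frame)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    if len(even) > len(odd):
        out.append(even[-1])
    return out
```

The error-resilience argument is that each description is useful on its own: losing one entire description degrades quality gracefully instead of stalling the decoder until the next INTRA picture.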
- …