
    High-Performance FPGA Implementation of Equivariant Adaptive Separation via Independence Algorithm for Independent Component Analysis

    Independent Component Analysis (ICA) is a dimensionality reduction technique that can boost the efficiency of machine learning models that deal with probability density functions, e.g., Bayesian neural networks. Algorithms that implement adaptive ICA converge more slowly than their nonadaptive counterparts; however, they are capable of tracking changes in the underlying distributions of input features. This intrinsically slow convergence of adaptive methods, combined with existing hardware implementations that operate at very low clock frequencies, necessitates fundamental improvements in both algorithm and hardware design. This paper presents an algorithm that allows efficient hardware implementation of ICA. Compared to previous work, our FPGA implementation of adaptive ICA improves clock frequency by at least one order of magnitude and throughput by at least two orders of magnitude. Our proposed algorithm is not limited to ICA and can be used in various machine learning problems that use stochastic gradient descent optimization.
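    As a rough illustration of the adaptive setting described above, the sketch below implements the standard EASI stochastic-gradient update in NumPy. It is a minimal sketch, not the paper's hardware-oriented variant: the sub-Gaussian test sources, cubic nonlinearity, step size, and dimensions are all our assumptions.

```python
# Minimal NumPy sketch of the EASI stochastic-gradient update; all
# sources, sizes, and tuning values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, T, mu = 3, 30000, 1e-3
S = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, T))  # sub-Gaussian sources
A = rng.standard_normal((n, n))                        # unknown mixing matrix
X = A @ S
X /= X.std(axis=1, keepdims=True)                      # normalize for stability

W, I = np.eye(n), np.eye(n)
g = lambda y: y ** 3                                   # classic EASI nonlinearity
for x in X.T:                                          # adaptive: sample by sample
    y = W @ x
    # relative gradient: decorrelation term + antisymmetric higher-order term
    G = np.outer(y, y) - I + np.outer(g(y), y) - np.outer(y, g(y))
    W -= mu * (G @ W)

print(np.round(W @ A, 2))   # approximately a scaled permutation when separated
```

    Because each update consumes one sample at a time, the same loop keeps tracking a slowly drifting mixing matrix, which is the property the abstract trades against convergence speed.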

    Accelerating Audio Data Analysis with In-Network Computing

    Digital transformation will bring massive connectivity and massive data handling, implying a growing demand for computing in communication networks due to network softwarization. Moreover, digital transformation will host very sensitive verticals that require high end-to-end reliability and low latency. Accordingly, the concept of "in-network computing" has emerged: integrating network communications with computing and performing computations on the transport path of the network. This can be used to deliver actionable information directly to end users instead of raw data. However, this paradigm shift raises disruptive challenges for current communication networks. In-network computing (i) expects the network to host general-purpose softwarized network functions and (ii) encourages the packet payload to be modified. Yet today's networks are designed to focus on packet-forwarding functions, and under current end-to-end transport mechanisms packet payloads should not be touched on the forwarding path.

    This dissertation presents full-stack in-network computing solutions, jointly designed from the network and computing perspectives, to accelerate data analysis applications, specifically acoustic data analysis. In the computing domain, two design paradigms for computational logic, progressive computing and traffic filtering, are proposed for data reconstruction and feature extraction tasks. Two widely used practical use cases, Blind Source Separation (BSS) and anomaly detection, are selected to demonstrate the design of computing modules for these tasks in the in-network computing scheme. Following these two design paradigms, the dissertation designs two computing modules: progressive ICA (pICA) for BSS and You only hear once (Yoho) for anomaly detection. These lightweight computing modules can cooperatively perform computational tasks along the forwarding path. In this way, computational virtual functions can be introduced into the network, addressing the first challenge mentioned above, namely that the network should be able to host general-purpose softwarized network functions. Quantitative simulations show that the computing time of pICA and Yoho in in-network computing scenarios is significantly reduced, since pICA and Yoho are performed simultaneously with data forwarding. At the same time, pICA guarantees the same computing accuracy, and Yoho's computing accuracy is improved.

    Furthermore, the dissertation proposes a stateful transport module in the network domain to support in-network computing under the end-to-end transport architecture. The stateful transport module extends the IP packet header so that network packets carry message-related metadata (message-based packaging). Additionally, the forwarding layer of the network device is optimized to process the packet payload based on the computational state (state-based transport component). This tackles the second challenge posed by in-network computing, namely supporting the modification of packet payloads. The two computing modules and the stateful transport module together form the designed in-network computing solutions.

    By merging pICA and Yoho with the stateful transport module, two emulation systems, in-network pICA and in-network Yoho, have been implemented in the Communication Networks Emulator (ComNetsEmu). Quantitative emulations show that in-network pICA accelerates the overall service time of BSS by up to 32.18%, while in-network Yoho accelerates the overall service time of anomaly detection by up to 30.51%. These are promising results for the design and realization of future communication networks.
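    The progressive-computing paradigm lends itself to a small illustration. The following is a conceptual toy of our own (not ComNetsEmu or the actual pICA code), in which every in-network node spends a small iteration budget refining an intermediate result and forwards it, so the destination has less work left to finish.

```python
# Conceptual toy of progressive computing along a forwarding path.
# All names and numbers are illustrative assumptions.
import numpy as np

def refine(state, data, iters):
    """Stand-in for one stage of an iterative task such as pICA."""
    target = data.mean(axis=1)                     # the "answer" being approached
    for _ in range(iters):
        state = 0.9 * state + 0.1 * target         # toy fixed-point iteration
    return state

data = np.random.default_rng(2).standard_normal((4, 256))
state = np.zeros(4)
for hop, budget in enumerate([5, 5, 5], start=1):  # three in-network nodes
    state = refine(state, data, budget)            # compute while "forwarding"
    err = np.abs(state - data.mean(axis=1)).max()
    print(f"after hop {hop}: residual {err:.4f}")
state = refine(state, data, 50)                    # destination finalizes the task
```

    Even in this toy, the residual the destination inherits shrinks at every hop, which is how computation overlaps with transport instead of being serialized after it.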

    Parallel Independent Component Analysis with Reference for Imaging Genetics: A Semi-Blind Multivariate Approach

    Imaging genetics is an emerging field dedicated to the study of genetic underpinnings of brain structure and function. Over the last decade, brain imaging techniques such as magnetic resonance imaging (MRI) have been increasingly applied to measure morphometry, task-based function and connectivity in living brains. Meanwhile, high-throughput genotyping employing genome-wide techniques has made it feasible to sample the entire genome of a substantial number of individuals. While there is growing interest in image-wide and genome-wide approaches which allow unbiased searches over a large range of variants, one of the most challenging problems is the correction for the huge number of statistical tests used in univariate models. In contrast, a reference-guided multivariate approach shows specific advantages for simultaneously assessing many variables for aggregate effects while leveraging prior information, and it can improve the robustness of results compared to a fully blind approach. In this dissertation we present a semi-blind multivariate approach, parallel independent component analysis with reference (pICA-R), to better reveal relationships between hidden factors of particular attributes. First, a consistency-based order estimation approach is introduced to advance the application of ICA to genotype data. The pICA-R approach is then presented, where independent components are extracted from two modalities in parallel and inter-modality associations are subsequently optimized for pairs of components. In particular, prior information is incorporated to elicit components of particular interest, which helps identify factors carrying small amounts of variance in large complex datasets. The pICA-R approach is further extended to accommodate multiple references whose interrelationships are unknown, allowing the investigation of the functional influence on neurobiological traits of potentially related genetic variants. Applied to a schizophrenia study, pICA-R reveals that a complex genetic factor involving multiple pathways underlies schizophrenia-related gray matter deficits in prefrontal and temporal regions. The extended multi-reference approach, when employed to study alcohol dependence, delineates a complex genetic architecture, where the CREB-BDNF pathway plays a key role in the genetic factor underlying a proportion of variation in cue-elicited brain activations, which in turn plays a role in phenotypic symptoms of alcohol dependence. In summary, our work makes several important contributions to advancing the application of ICA to imaging genetics studies, which hold promise for improving our understanding of the genetics underlying brain structure and function in health and disease.
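    To make the semi-blind idea more concrete, here is a hedged sketch of the ingredients pICA-R combines: per-modality component extraction, an inter-modality association score, and a reference-similarity score that elicits components matching prior knowledge. It uses off-the-shelf FastICA and synthetic stand-in data, not the paper's joint optimization; all sizes and the reference set are assumptions.

```python
# Conceptual sketch of pICA-R ingredients (not the authors' joint algorithm):
# per-modality ICA, cross-modality loading correlations, reference similarity.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
L = rng.laplace(size=(100, 5))                   # hidden subject factors (assumed)
X_img = L @ rng.standard_normal((5, 500)) + 0.1 * rng.standard_normal((100, 500))
X_gen = L @ rng.standard_normal((5, 800)) + 0.1 * rng.standard_normal((100, 800))
ref = np.zeros(800); ref[:10] = 1.0              # hypothetical reference SNP set

A_img = FastICA(n_components=5, random_state=0).fit_transform(X_img)
ica_gen = FastICA(n_components=5, random_state=0)
A_gen = ica_gen.fit_transform(X_gen)             # subject loadings per component

# Inter-modality association: correlate subject loadings across modalities.
link = np.corrcoef(A_img.T, A_gen.T)[:5, 5:]

# Reference guidance: cosine similarity of genetic component maps to the prior.
M = ica_gen.mixing_                              # SNPs x components
sim = np.abs(ref @ M) / (np.linalg.norm(M, axis=0) * np.linalg.norm(ref))
print(np.round(link, 2)); print(np.round(sim, 2))
```

    In pICA-R proper, the association and reference terms enter the optimization itself, steering the decomposition toward linked, reference-like components even when they carry little variance, rather than being scored after the fact as above.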

    An Open Architecture for Signal Monitoring and Recording Based on SDR and Docker Containers: A GNSS Use Case

    Signal monitoring and recording station architectures based on software-defined radio (SDR) have been proposed and implemented for several years. However, the large amount of data to be transferred, stored, and managed when high sampling frequency and high quantization depth are required poses a limit to performance, mostly because of data losses during transfer between the front end and the storage unit. To overcome these limitations, and thus allow the reliable, high-fidelity signal recording required by some applications, a novel architecture named SMART (Signal Monitoring, Analysis and Recording Tool), based on running Docker containers directly on a Network Attached Storage (NAS) unit, is presented. This paradigm allows for a fully open-source system that is more affordable and flexible than previous prototypes. The proposed architecture reduces computational complexity, increases efficiency, and provides a compact, cost-effective system that is easy to move and deploy. As a case study, the architecture is used to monitor Radio-Frequency Interference (RFI) in the Global Navigation Satellite System (GNSS) L1/E1 and L5/E5 bands. Sample results show the benefits of stable, long-term capture at a high sampling frequency for effectively characterizing the RFI spectral signature.
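    As a small illustration of the analysis such captures enable, the sketch below computes a spectrogram of synthetic complex baseband samples and locates a narrowband interferer. The sampling rate, tone offset, and noise level are made-up parameters, not values from the article.

```python
# Toy RFI spectral-signature check on synthetic complex baseband samples;
# sampling rate, tone offset, and noise level are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(3)
fs = 25e6                                   # assumed sampling frequency (Hz)
t = np.arange(int(fs * 0.01)) / fs          # 10 ms of complex baseband samples
x = np.exp(2j * np.pi * 1.2e6 * t)          # assumed CW interferer at +1.2 MHz
x = x + 0.3 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=4096, return_onesided=False)
peak = np.argmax(Sxx.mean(axis=1))          # average over time, find peak bin
print(f"strongest component near {f[peak] / 1e6:+.2f} MHz")
```

    A long, gapless record is what makes the time average in the last step meaningful: intermittent interferers that a short capture would miss still accumulate energy in their frequency bins.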

    Exploration of disruptive technologies for low cost RFID manufacturing

    Thesis (S.M.), Massachusetts Institute of Technology, System Design & Management Program, 2004. Includes bibliographical references (p. 81-83).

    Significant developments have taken place in defining technology standards and identifying avenues for technological innovation to reduce the cost of manufacturing RFID tags below the $0.05 price point. The Auto-ID Center at MIT has been the central coordinating body, with participation from 5 universities and over 100 industry partners. The primary focus of these efforts has been developing a standard that minimizes the logic capability of on-chip circuitry and using radical innovations to reduce the cost of assembling RFID tags. Various disruptive innovations are underway to explore lithographic techniques that can reduce the cost of fabrication in the sub-100 nm regime, where photolithography faces significant challenges. This research analyzes the value chain of the RFID industry and reviews potential technology strategies using the double-helix model of business dynamics and Porter's five forces framework. It also explores the current state of the art in RFID tag manufacturing and proposes the application of disruptive technologies, in conjunction with innovations in assembly and packaging, to enable a low-cost RFID system design. Five key emerging technologies are examined in detail: Nanoimprint Lithography, Step and Flash Imprint Lithography, Inkjet Printing, Soft Lithography, and Spherical Integrated Circuit Processing. These are analyzed in terms of their application to RFID tag manufacturing. Current innovations in high-speed, low-cost assembly and packaging techniques are also examined: Fluidic Self-Assembly, Vibratory Assembly, and Chip-on-Paper techniques are reviewed in terms of their application to RFID manufacturing. A systems-thinking approach is also pursued to explore the drivers for wider acceptance of RFID-based applications, rather than depending on cost reduction alone to cross the chasm from early adopters to wider market penetration.

    Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support

    A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of cloud computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demand for multimedia content. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be triggered by request patterns and timing issues. To the best of our knowledge, existing work on fog computing focuses on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. Besides that, we evaluate such service migration for video services. Finally, we present potential research challenges and trends.
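    As a toy illustration of a request-pattern-driven migration decision (entirely our own sketch, with assumed thresholds, not the article's mechanism), a fog orchestrator might track a sliding window of request timestamps per service:

```python
# Toy request-pattern trigger for cloud-to-fog migration (illustrative only).
from collections import deque

WINDOW_S = 10.0        # assumed sliding-window length (seconds)
THRESHOLD_RPS = 50.0   # assumed request rate that justifies migration

class MigrationTrigger:
    """Tracks recent request timestamps for one service/region pair."""

    def __init__(self):
        self.stamps = deque()

    def on_request(self, now: float) -> bool:
        """Record a request; return True if the service should move to fog."""
        self.stamps.append(now)
        while self.stamps and now - self.stamps[0] > WINDOW_S:
            self.stamps.popleft()
        return len(self.stamps) / WINDOW_S > THRESHOLD_RPS
```

    A real controller would add hysteresis and per-tier capacity checks before moving a video cache down to a fog node; the sliding window only captures the request-pattern signal in its simplest form.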

    Space shuttle avionics system

    The Space Shuttle avionics system, which was conceived in the early 1970s and became operational in the 1980s, represents a significant advancement of avionics system technology in the areas of system and redundancy management, digital data base technology, flight software, flight control integration, digital fly-by-wire technology, crew display interface, and operational concepts. The origins and evolution of the system are traced; the requirements, constraints, and other factors that led to the final configuration are outlined; and the functional operation of the system is described. An overall system block diagram is included.