
    Learning-Based Hardware Design for Data Acquisition Systems

    This multidisciplinary research work investigates optimized information extraction from signals or data volumes and develops tailored hardware implementations that trade off the complexity of data acquisition against that of data processing, conceptually allowing radically new device designs. The mathematical results of classical Compressive Sampling (CS) support the paradigm of Analog-to-Information Conversion (AIC) as a replacement for conventional ADC technologies. AICs perform data acquisition and compression simultaneously, directly sampling signals for specific tasks rather than acquiring a full signal at the Nyquist rate only to throw most of it away via compression. Our contention is that for CS to live up to its name, both theory and practice must leverage concepts from learning. This work demonstrates our contention in hardware prototypes, with key trade-offs, for two different fields of application: edge computing and big-data computing. In edge computing, such as wearable and implantable ecosystems, the power budget is defined by the battery capacity, which generally limits device performance and usability. This is most evident in challenging fields such as medical monitoring, where high performance is required for the device to process information with high accuracy. Furthermore, in applications like implantable medical monitoring, the system must satisfy small-area as well as low-power requirements in order to facilitate implant bio-compatibility and avoid rejection by the human body. Based on our new mathematical foundations, we built several prototypes culminating in a neural signal acquisition chip that not only rigorously trades off its area, energy consumption, and output signal quality, but also significantly outperforms the state of the art in all aspects.
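    The CS principle underlying an AIC can be sketched in a few lines: acquire far fewer random linear measurements than Nyquist samples, then recover the sparse signal by a greedy solver. The dimensions, seed, and the use of Orthogonal Matching Pursuit are illustrative assumptions, not the thesis's actual front-end.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5          # hypothetical sizes: n-sample signal, m measurements, k nonzeros

# k-sparse signal (the "information" in the Nyquist-rate waveform)
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)

# Random Gaussian sensing matrix: an idealized stand-in for the AIC's analog mixing
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x                     # only m numbers acquired instead of n

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy sparse recovery from compressed measurements."""
    residual, idx = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(rel_err)                # small when the support is correctly identified
```

With m = 64 measurements for a 256-sample signal, the acquisition rate is a quarter of Nyquist, which is the trade-off between acquisition and processing complexity the abstract describes.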
In the framework of big-data and high-performance computing, such as high-end server applications, the RF circuits that transmit data chip-to-chip or chip-to-memory face strict low-power requirements, since the heat generated by the integrated circuits is only partially dissipated through the chip package; the overall system power budget is thus defined by the affordable cooling capacity. For this reason, application-specific architectures and innovative techniques are used for low-power implementation. In this work, we have developed a single-ended multi-lane receiver for high-speed I/O links in server applications. The receiver operates at 7 Gbps by learning the inter-symbol interference and electromagnetic coupling noise in chip-to-chip communication systems. The learning-based approach yields a versatile receiver circuit which not only copes with large channel attenuation but also implements novel crosstalk-reduction techniques, allowing single-ended multi-line transmission without sacrificing overall bandwidth for a given area within the interconnect's data path.
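    "Learning inter-symbol interference" in a receiver is classically done with an adaptive equalizer. The sketch below uses a least-mean-squares (LMS) FIR equalizer on a toy ISI channel; the channel taps, step size, and filter length are assumptions for illustration, not the receiver's actual circuit parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ISI channel: each received sample mixes the current symbol
# with attenuated echoes of previous ones (chip-to-chip trace dispersion).
channel = np.array([1.0, 0.45, 0.2])
symbols = rng.choice([-1.0, 1.0], size=5000)            # bipolar data stream
received = np.convolve(symbols, channel)[:len(symbols)]
received += 0.01 * rng.standard_normal(len(symbols))    # thermal noise

# LMS adaptive FIR equalizer: learns an approximate channel inverse on the fly.
taps = np.zeros(5)
buf = np.zeros(5)
mu = 0.02                     # adaptation step size
errors = 0
for i, d in enumerate(symbols):
    buf = np.roll(buf, 1)
    buf[0] = received[i]      # newest received sample enters the delay line
    y = taps @ buf            # equalizer output
    e = d - y                 # error against the known training symbol
    taps += mu * e * buf      # LMS weight update
    if i > 1000 and np.sign(y) != d:
        errors += 1           # decision errors after the learning transient
print(errors, taps[0])
```

After convergence the leading tap approximates the inverse of the channel's main cursor, and post-transient decision errors should be rare, mirroring how a learned receiver copes with channel attenuation.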

    Low-power neuromorphic sensor fusion for elderly care

    Smart wearable systems have become part of our daily life, with applications ranging from entertainment to healthcare. In the wearable healthcare domain, the development of fall-recognition bracelets based on embedded systems is receiving considerable market attention. However, in low-power embedded scenarios, processing the sensor's signals poses hard challenges for machine learning: traditional methods require a large number of computations for data classification, making real-time signal processing difficult to implement on low-power embedded systems. Ensuring low-power, real-time data classification that fuses a variety of sensor signals on an embedded system is a major challenge, and it calls for neuromorphic computing with a hardware/software co-design of the system. This thesis reviews various neuromorphic computing algorithms, investigates the feasibility of their hardware circuits, and integrates captured sensor data to realise data-classification applications. It also explores a human-activity benchmark dataset, with different defined difficulty levels, to design the activity-classification task. In this study, the data-classification algorithm is first applied to human-movement sensors to validate neuromorphic computing on human activity recognition tasks. Second, a data-fusion framework is presented that combines multiple sensing signals so that neuromorphic computing achieves sensor-fusion results with improved classification accuracy. Third, an analog circuit module implementing a neural network algorithm for low-power, real-time processing in hardware is proposed. Together these form a hardware/software co-design system combining the above work.
By adopting multi-sensing signals on the embedded system, the designed software-based feature-extraction method helps fuse data from the various sensors into an input for the neuromorphic computing hardware. Finally, the results show that the classification accuracy of the neuromorphic data-fusion framework exceeds that of traditional machine learning and deep neural networks, reaching 98.9%. Moreover, the framework can flexibly combine acquisition hardware signals: it is not limited to single-sensor data and can use multi-sensing information to give the algorithm better stability.
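    The basic unit assumed by most neuromorphic classifiers is the leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward rest, integrates input current, and emits a spike on crossing a threshold. The sketch below is a minimal discrete-time LIF, with time constant and threshold chosen purely for illustration; it is not the thesis's analog circuit.

```python
import numpy as np

def lif(current, dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron; returns a binary spike train."""
    v, spikes = 0.0, []
    for i in current:
        v += dt / tau * (-v + i)  # leaky integration toward the input level
        if v >= v_th:
            spikes.append(1)      # threshold crossed: emit a spike...
            v = v_reset           # ...and reset the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

# A weak input settles below threshold and never fires; a strong input
# fires periodically. A downstream layer can classify activities from
# such rate patterns, which is the essence of spike-based sensor fusion.
weak = lif(np.full(200, 0.8))     # steady state 0.8 < threshold 1.0
strong = lif(np.full(200, 1.5))   # steady state 1.5 > threshold 1.0
print(weak.sum(), strong.sum())
```

Because each step is one add-compare-reset, such neurons map naturally onto the low-power analog hardware the thesis targets.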

    Deep Task-Based Analog-to-Digital Conversion

    Analog-to-digital converters (ADCs) allow physical signals to be processed using digital hardware. Their conversion consists of two stages: sampling, which maps a continuous-time signal into discrete time, and quantization, i.e., representing the continuous-amplitude quantities using a finite number of bits. ADCs typically implement generic uniform conversion mappings that are ignorant of the task for which the signal is acquired, and can be costly when operating at high rates and fine resolutions. In this work, we design task-oriented ADCs which learn from data how to map an analog signal into a digital representation such that the system task can be carried out efficiently. We propose a model for sampling and quantization that facilitates the learning of non-uniform mappings from data. Based on this learnable ADC mapping, we present a mechanism for optimizing a hybrid acquisition system comprised of analog combining, tunable ADCs with fixed rates, and digital processing, by jointly learning its components end-to-end. We then show how the representation of hybrid acquisition systems as a deep network can be exploited to optimize the sampling and quantization rates for a given task using Bayesian meta-learning techniques. We evaluate the proposed deep task-based ADC in two case studies: the first considers symbol detection in multi-antenna digital receivers, where multiple analog signals are acquired simultaneously in order to recover a set of discrete information symbols; the second is the beamforming of analog channel data acquired in ultrasound imaging. Our numerical results demonstrate that the proposed approach achieves performance comparable to operating with high sampling rates and fine-resolution quantization, while operating at a reduced overall bit rate.
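    The gain from learned non-uniform quantization over a generic uniform mapping can be illustrated with a 2-bit scalar quantizer. Here Lloyd's algorithm adapts the reproduction levels to the data distribution; this is a stand-in for the paper's end-to-end task-driven learning, and the distribution, level count, and initialization are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)         # analog amplitudes to be quantized

def quantize(x, levels):
    """Map each sample to its nearest reproduction level."""
    idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
    return levels[idx]

# Generic uniform 2-bit quantizer spanning the full range, ignorant of the data
uniform = np.linspace(-3.0, 3.0, 4)

# Data-adapted levels via Lloyd's algorithm: alternate nearest-level
# assignment with centroid (conditional-mean) updates
levels = np.array([-2.0, -0.5, 0.5, 2.0])
for _ in range(50):
    q = quantize(x, levels)
    levels = np.array([x[q == l].mean() if (q == l).any() else l
                       for l in levels])

mse_uniform = np.mean((x - quantize(x, uniform)) ** 2)
mse_learned = np.mean((x - quantize(x, levels)) ** 2)
print(mse_learned < mse_uniform)        # adapted levels cut distortion
```

The same number of bits is spent in both cases; only the mapping changes, which is why learning the mapping lets the system operate at a reduced overall bit rate for a given task performance.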

    Context Aware Computing for The Internet of Things: A Survey

    As we move towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown significant growth of sensor deployments over the past decade and has predicted that this growth rate will accelerate further. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it, and the collection, modelling, reasoning, and distribution of context in relation to sensor data plays a critical role in this challenge. Context-aware computing has proven successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We begin with the necessary background, introducing the IoT paradigm and context-aware fundamentals, and then provide an in-depth analysis of the context life cycle. We evaluate a subset of 50 projects, representing the majority of research and commercial solutions proposed in the field of context-aware computing over the last decade (2001-2011), based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and the IoT. Our goal is not only to analyse, compare, and consolidate past research but also to appreciate its findings and discuss its applicability to the IoT. Comment: IEEE Communications Surveys & Tutorials Journal, 201

    Evaluating cost taxonomies for information systems management

    The consideration of costs, benefits and risks underpins many information system (IS) evaluation decisions. Yet vendors and project champions alike tend to focus much of their effort on the benefits achievable from the adoption of new technology, as it is often not in the interest of key stakeholders to spend too much time considering the wider cost and risk implications of enterprise-wide technology adoptions. Identifying a void in the literature, the authors present a critical analysis of IS-cost taxonomies and establish that such taxonomies tend to be esoteric and difficult to operationalize, as they lack detail. Therefore, to develop a deeper understanding of IS-related costs, the authors position the need to identify, control and reduce IS-related costs within the information systems evaluation domain, collating and then synthesizing the literature into a frame of reference that supports the evaluation of information systems through a deeper understanding of IS-cost taxonomies. The paper concludes by emphasizing that the total costs associated with IS adoption can only be determined after considering the multi-faceted dimensions of information system investments.