Accelerating Time Series Analysis via Processing using Non-Volatile Memories
Time Series Analysis (TSA) is a critical workload for consumer-facing
devices. Accelerating TSA is vital for many domains, as it enables the
extraction of valuable information and the prediction of future events. The
state-of-the-art algorithm in TSA is the subsequence Dynamic Time Warping
(sDTW) algorithm. However, sDTW's computation complexity increases
quadratically with the time series' length, resulting in two performance
implications. First, the amount of data parallelism available is significantly
higher than the small number of processing units enabled by commodity systems
(e.g., CPUs). Second, sDTW is bottlenecked by memory because it 1) has low
arithmetic intensity and 2) incurs a large memory footprint. To tackle these
two challenges, we leverage Processing-using-Memory (PuM) by performing in-situ
computation where data resides, using the memory cells. PuM provides a
promising solution to alleviate data movement bottlenecks and exposes immense
parallelism.
In this work, we present MATSA, the first MRAM-based Accelerator for Time
Series Analysis. The key idea is to exploit magneto-resistive memory crossbars
to enable energy-efficient and fast time series computation in memory. MATSA
provides the following key benefits: 1) it leverages high levels of parallelism
in the memory substrate by exploiting column-wise arithmetic operations, and 2)
it significantly reduces data movement costs by performing computation using
the memory cells. We evaluate three versions of MATSA to match the requirements
of different environments (e.g., embedded, desktop, or HPC computing) based on
MRAM technology trends. We perform a design space exploration and demonstrate
that our HPC version of MATSA can improve performance by 7.35x/6.15x/6.31x and
energy efficiency by 11.29x/4.21x/2.65x over server CPU, GPU and PNM
architectures, respectively.
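The quadratic-cost recurrence at the heart of sDTW, which the abstract identifies as the bottleneck, can be sketched in a few lines. This is the generic textbook formulation, not MATSA's in-memory implementation, and all names are illustrative:

```python
def sdtw(query, series):
    """Subsequence DTW: minimal DTW distance of `query` against any
    subsequence of `series` (free start and end points).
    Runs in O(n*m) time, which grows quadratically with series length."""
    n, m = len(query), len(series)
    INF = float("inf")
    # Row 0 of the cost matrix is all zeros, so an alignment may start
    # at any column of `series`.
    prev = [0.0] * (m + 1)
    for i in range(1, n + 1):
        curr = [INF] * (m + 1)  # column 0 is INF: the query cannot be skipped
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - series[j - 1])
            # Classic DTW recurrence: match, insertion, or deletion.
            curr[j] = cost + min(prev[j], curr[j - 1], prev[j - 1])
        prev = curr
    # Free end point: best cost anywhere in the last row.
    return min(prev[1:])
```

Each cell depends only on its three neighbors, which is what exposes the column-wise parallelism that a PuM substrate can exploit.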
Tuning the Computational Effort: An Adaptive Accuracy-aware Approach Across System Layers
This thesis introduces a novel methodology to realize accuracy-aware systems, which will help designers integrate accuracy awareness into their systems. It proposes an adaptive accuracy-aware approach across system layers that addresses current challenges in that domain, combining and tuning accuracy-aware methods on different system layers. To widen the scope of accuracy-aware computing, including approximate computing, to other domains, this thesis presents innovative accuracy-aware methods and techniques for different system layers.
The required tuning is handled by a configuration layer that adjusts the available knobs of the accuracy-aware methods integrated into a system.
ECG Biometric for Human Authentication using Hybrid Method
Recently, deep learning has seen increasing use in biometrics, and electrocardiogram (ECG)-based person authentication is no exception. However, the performance of deep learning networks relies heavily on the datasets and training. In this work, we propose a fusion of pre-trained Convolutional Neural Networks (CNNs), such as GoogLeNet, with an SVM for person authentication using the ECG as a biometric. The one-dimensional ECG signals are filtered and converted into a standard size and suitable format before they are used to train the networks. A performance evaluation shows good results with the pre-trained GoogLeNet network. The accuracy results reveal that the proposed fusion method performs best, with an average accuracy of 95.0%.
System-on-Chip Solution for Patients Biometric: A Compressive Sensing-Based Approach
The ever-increasing demand for biometric solutions for Internet of Things (IoT)-based connected health applications is mainly driven by the need to tackle fraud, along with the imperative to improve patient privacy, safety, and personalized medical assistance. However, the advantages offered by IoT platforms come with the burden of big data and its associated challenges in terms of computational complexity, bandwidth availability, and power consumption. This paper proposes a solution that tackles both privacy issues and big data transmission by combining the theory of compressive sensing (CS) with a simple yet efficient identification mechanism that uses the electrocardiogram (ECG) signal as a biometric trait. Moreover, the paper presents a hardware implementation of the proposed solution on a system-on-chip (SoC) platform, with an architecture optimized to further reduce hardware resource usage. First, we investigate the feasibility of compressing the ECG data while maintaining high identification quality. The obtained results show a 98.88% identification rate using a compression ratio of only 30%. Furthermore, the proposed system has been implemented on a Zynq SoC using a heterogeneous software/hardware solution, which accelerates the software implementation by a factor of 7.73 with a power consumption of 2.318 W.
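The acquisition side of compressive sensing, reducing an N-sample ECG window to M = CR x N random measurements, can be sketched as follows. This is a generic illustration with a Bernoulli sensing matrix, not the paper's optimized SoC design, and all names are ours:

```python
import random

def cs_acquire(x, cr=0.30, seed=0):
    """Project signal x (length N) to M = round(cr * N) measurements
    y = Phi @ x, using a random +1/-1 (Bernoulli) sensing matrix Phi.
    Only y needs to be transmitted, cutting bandwidth to roughly cr."""
    rng = random.Random(seed)       # fixed seed: sender and receiver share Phi
    n = len(x)
    m = max(1, round(cr * n))
    phi = [[rng.choice((-1.0, 1.0)) for _ in range(n)] for _ in range(m)]
    y = [sum(p * s for p, s in zip(row, x)) for row in phi]
    return y, phi

window = [float(i % 7) for i in range(100)]  # stand-in for an ECG window
y, phi = cs_acquire(window)                  # 30 measurements from 100 samples
```

Recovering x from y requires a sparse-reconstruction solver on the receiver side; the point here is only that acquisition itself is a cheap matrix-vector product, which suits constrained IoT nodes.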
Digital Twins in Industry
Digital Twins in Industry is a compilation of works by authors with specific emphasis on industrial applications. Much of the research on digital twins has been conducted by academia, in both theoretical considerations and laboratory-based prototypes. Industry, while taking the lead on larger-scale implementations of Digital Twins (DT) using sophisticated software, is concentrating on dedicated solutions that are not within the reach of average-sized industries. This book comprises 11 chapters on various implementations of DT. It provides insight for companies contemplating the adoption of DT technology, as well as for researchers and senior students exploring the potential of DT and its associated technologies.
Secure and Unclonable Integrated Circuits
Semiconductor manufacturing is increasingly reliant on offshore foundries, which has raised concerns about counterfeiting, piracy, and unauthorized overproduction by the contract foundry. The recent semiconductor shortage has aggravated these problems, with the electronic components market being flooded by recycled, remarked, out-of-spec, or even defective parts. Moreover, modern internet-connected applications require mechanisms that enable secure communication, which must be protected by security countermeasures to mitigate various types of attacks. In this thesis, we describe techniques that aid counterfeit prevention and mitigate secret-extraction attacks that exploit power consumption information.
Counterfeit prevention requires simple and trustworthy identification. Physical unclonable functions (PUFs) harvest process variation to create a unique and unclonable digital fingerprint of an IC. However, learning attacks can model the PUF behavior, invalidating its unclonability claims. In this thesis, we research circuits and architectures to make PUFs more resilient to learning attacks. First, we propose the concept of non-monotonic response quantization, where responses do not always encode the best-performing circuit structure. Then, we explore the design space of PUF compositions, assessing the trade-off between stability and resilience to learning attacks. Finally, we introduce a lightweight key-based challenge obfuscation technique that uses a chip-unique secret to construct PUFs that are more resilient to learning attacks.
Modern internet protocols demand message integrity, confidentiality, and (often) non-repudiation. Adding support for such mechanisms requires on-chip storage of a secret key. Even if the key is produced by a PUF, it will be subject to key-extraction attacks that use power consumption information. Secure integrated circuits must address power analysis attacks with appropriate countermeasures. Traditional mitigation techniques have limited scope of protection and impose several restrictions on how sensitive data must be manipulated. We demonstrate a bit-serial RISC-V microprocessor implementation with no data in the clear, where all values are protected using Boolean masking and differential domino logic. Software can run with little to no countermeasures, reducing code size and performance overheads. Our methodology is fully automated and can be applied to designs of arbitrary size or complexity. We also provide details on other key components such as the clock randomizer, memory protection, and random number generator.
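First-order Boolean masking, the principle behind the protection described above, can be illustrated in a few lines: a secret is split into shares whose XOR recovers it, and linear (XOR) operations are applied share-wise so the plain value never exists in one place. This is a generic software sketch, not the thesis's domino-logic hardware, and all names are ours:

```python
import secrets

def mask(value, bits=8):
    """Split `value` into two shares (mask, value ^ mask) whose XOR
    equals the secret. Each share alone is uniformly random."""
    m = secrets.randbits(bits)
    return (m, value ^ m)

def masked_xor(a, b):
    """XOR two masked values without ever recombining the shares.
    XOR is linear over GF(2), so it distributes over the shares."""
    return (a[0] ^ b[0], a[1] ^ b[1])

def unmask(shares):
    """Recombine shares; in hardware this happens only at a safe boundary."""
    return shares[0] ^ shares[1]
```

Non-linear operations (e.g., AND) are the hard part of any masking scheme and need dedicated gadgets; the sketch covers only the linear case.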
Transforming Time Series for Efficient and Accurate Classification
Time series data refer to sequences of data that are ordered either temporally, spatially or in another defined order. They can be frequently found in a variety of domains, including financial data analysis, medical and health monitoring and industrial automation applications. Due to their abundance and wide application scenarios, there has been an increasing need for efficient machine learning algorithms to extract information and build knowledge from these data. One of the major tasks in time series mining is time series classification (TSC), which consists of applying a learning algorithm on labeled data to train a model that will then be used to predict the classes of samples from an unlabeled data set. Due to the sequential characteristic of time series data, state-of-the-art classification algorithms (such as SVM and Random Forest) that perform well for generic data are usually not suitable for TSC. In order to improve the performance of TSC tasks, this dissertation proposes different methods to transform time series data for a better feature extraction process as well as novel algorithms to achieve better classification performance in terms of computation efficiency and classification accuracy.
In the first part of this dissertation, we conduct a large-scale empirical study that takes advantage of the discrete wavelet transform (DWT) for time series dimensionality reduction. We first transform real-valued time series data using different families of DWT. Then we apply dynamic time warping (DTW)-based 1NN classification on 39 datasets and find that existing DWT-based lossy compression approaches can help to overcome the challenges of storage and computation time. Furthermore, we provide assurances to practitioners by empirically showing, with various datasets and with several DWT approaches, that TSC algorithms yield similar accuracy on both compressed (i.e.,
approximated) and raw time series data. We also show that, in some datasets, wavelets may actually help in reducing noisy variations which deteriorate the performance of TSC tasks. In a few cases, we note that the residual details/noises from compression are more useful for recognizing data patterns.
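A minimal Haar transform, the simplest member of the wavelet families studied in this part, shows how keeping only the approximation coefficients yields lossy compression. This is a generic sketch with names of our choosing (libraries such as PyWavelets provide production implementations):

```python
def haar_step(x):
    """One level of the Haar DWT: pairwise averages (approximation)
    and pairwise differences (details), both scaled by 1/sqrt(2)."""
    s = 2 ** 0.5
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def dwt_compress(x, levels=2):
    """Lossy compression: apply `levels` Haar steps and keep only the
    final approximation coefficients (length shrinks by 2**levels)."""
    for _ in range(levels):
        x, _ = haar_step(x)
    return x
```

Discarding the detail coefficients is exactly what removes high-frequency, often noisy, variation, which is why compressed series can classify as well as (or occasionally better than) the raw data.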
In the second part, we propose a language model-based approach for TSC named Domain Series Corpus (DSCo), in order to take advantage of mature techniques from both time series mining and Natural Language Processing (NLP) communities.
After transforming real-valued time series into texts using Symbolic Aggregate approXimation (SAX), we build per-class language models (unigrams and bigrams) from these symbolized text corpora. To classify unlabeled samples, we compute the fitness of each symbolized sample against all per-class models and choose the class represented by the model with the best fitness score. Through extensive experiments on an open dataset archive, we demonstrate that DSCo performs similarly to approaches working with original uncompressed numeric data. We further propose DSCo-NG to improve the computation efficiency and classification accuracy of DSCo. In contrast to DSCo, where we try to find the best way to recursively segment time series, DSCo-NG breaks time series into smaller segments of the same size. This simplification also leads to simpler language model inference in the training phase and slightly higher classification accuracy.
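The SAX step that turns real-valued series into symbol strings proceeds in three stages: z-normalize, average equal-width frames (Piecewise Aggregate Approximation), then map each average to a letter via equiprobable Gaussian breakpoints. The sketch below is the standard SAX construction, not DSCo's exact pipeline, and the names are ours:

```python
import statistics
from bisect import bisect

# Breakpoints that split the standard normal into equiprobable regions.
BREAKPOINTS = {3: [-0.43, 0.43], 4: [-0.6745, 0.0, 0.6745]}

def sax(ts, n_segments, alphabet_size=4):
    """Convert a real-valued series into a SAX word of n_segments letters."""
    mu = statistics.fmean(ts)
    sigma = statistics.pstdev(ts) or 1.0   # guard against constant series
    z = [(v - mu) / sigma for v in ts]
    # PAA: mean of each of n_segments equal-width frames.
    seg = len(z) / n_segments
    paa = [statistics.fmean(z[int(i * seg):int((i + 1) * seg)])
           for i in range(n_segments)]
    cuts = BREAKPOINTS[alphabet_size]
    # bisect finds which region each PAA value falls into -> a letter.
    return "".join(chr(ord("a") + bisect(cuts, v)) for v in paa)
```

For example, a steadily rising series maps to a monotonically increasing word, which is what makes n-gram language models over these symbols meaningful.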
The third part of this dissertation presents a multiscale visibility graph representation for time series as well as feature extraction methods for TSC, so that both global and local features are fully extracted from time series data. Unlike traditional TSC approaches that seek to find global similarities in time series databases (e.g., 1NN-DTW) or methods specializing in locating local patterns/subsequences (e.g., shapelets), we extract solely statistical features from graphs that are generated from time series. Specifically, we augment time series by means of their multiscale approximations, which are further transformed into a set of visibility graphs. After extracting probability distributions of small motifs, density, assortativity, etc., these features are used for building highly accurate classification models using generic classifiers (e.g., Support Vector Machine and eXtreme Gradient Boosting). Based on extensive experiments on a large number of open datasets and comparison with five state-of-the-art TSC algorithms, our approach is shown to be both accurate and efficient: it is more accurate than Learning Shapelets and at the same time faster than Fast Shapelets.
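The (natural) visibility graph construction underlying this representation connects two samples whenever the straight line between them clears every intermediate sample. The sketch below is the generic single-scale construction, not the dissertation's multiscale pipeline, and the function name is ours:

```python
def visibility_edges(ts):
    """Natural visibility graph: nodes are samples; (a, b) is an edge
    iff every sample strictly between them lies strictly below the
    straight line joining (a, ts[a]) and (b, ts[b])."""
    n = len(ts)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            # Linear interpolation of the sight line at position c.
            if all(ts[c] < ts[a] + (ts[b] - ts[a]) * (c - a) / (b - a)
                   for c in range(a + 1, b)):
                edges.append((a, b))
    return edges
```

Adjacent samples are always mutually visible, so the graph is connected; graph statistics (degree distribution, motif counts, assortativity) then serve as classification features.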
Finally, we list a few industrial applications relevant to our research work, including Non-Intrusive Load Monitoring as well as anomaly detection and visualization by means of hierarchical clustering for time series data.
In summary, this dissertation explores different possibilities to improve the efficiency and accuracy of TSC algorithms. To that end, we employ a range of techniques including wavelet transforms, symbolic approximations, language models and graph mining algorithms. We experiment with and evaluate our approaches using publicly available time series datasets. Comparison with the state of the art shows that the approaches developed in this dissertation perform well and contribute to advancing the field of TSC.
Developing A Toolbox To Probe Reaction Dynamics With Strong Field Ionization And Non-Linear Attosecond Spectroscopy
Electronic motions, which occur on timescales of 10 to 100 attoseconds, are at the heart of all processes in nature. Monitoring and extracting details at this fundamental level will therefore open new prospects in areas such as information technology, basic energy science, medicine and the life sciences. The challenge is to develop a tool that reaches such fast time scales for real-time observation at the atomic level. In this thesis work, we address this matter using two approaches related to laser-matter interaction: strong-field ionization and nonlinear attosecond spectroscopy. The first part covers studies of the strong-field ionization probe, which was verified to be sensitive to the sign of the magnetic quantum number, demonstrating its capability to probe atomic orientation. The second part is based on nonlinear attosecond spectroscopy. Using a 1 kHz laser and a loose focusing geometry, we were able to produce attosecond pulse trains with sufficient flux to perform two-photon double ionization. Furthermore, we extracted ion-electron coincidence measurements of the double-ionization event in an XUV-pump-XUV-probe scheme for the first time. Extended studies will combine our newly developed 3D detector with the current setup, enabling triple-coincidence capabilities.