
    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state of the art in smart processing of data coming from wearable, implantable, or portable sensors. Each paper presents the design, the databases used, the methodological background, the obtained results, and their interpretation for biomedical applications. Illustrative examples include brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, a thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.

    Four Essays in Econometrics and Macroeconomics

    Chapter 1 proposes simple and robust diagnostic tests for spatial dependence, specifically for spatial error autocorrelation and spatial lag dependence. The idea of our tests is to reformulate the testing problem such that the outer product of gradients (OPG) variant of the LM test can be employed. Our versions of the tests are based on simple auxiliary regressions, where ordinary regression t- and F-statistics can be used to test for spatial autocorrelation and lag dependence. Monte Carlo simulations show that, while our tests perform similarly to the established LM tests under homoskedasticity, the latter suffer from severe size distortions under heteroskedasticity. Our approach therefore gives practitioners an easy-to-implement and robust alternative to existing tests.

    Chapter 2 proposes various tests for serial correlation in fixed-effects panel data regression models with a small number of time periods. First, a simplified version of the test for serial correlation suggested by Wooldridge (2002) and Drukker (2003) is considered. The second test is based on the LM statistic suggested by Baltagi and Li (1995), and the third test is a modification of the classical Durbin-Watson statistic. Under the null hypothesis of no serial correlation, all tests possess a standard normal limiting distribution as N tends to infinity with T fixed. Analyzing the local power of the tests, we find that the LM statistic has superior power properties. Furthermore, a generalization to test for autocorrelation up to some given lag order and a test statistic that is robust against time-dependent heteroskedasticity are proposed.

    Chapter 3 analyzes the role of policy risk in explaining business cycle fluctuations using an estimated New Keynesian model featuring policy risk as well as uncertainty about technology. The aftermath of the financial and economic crisis was clearly characterized by extraordinary uncertainty regarding U.S. economic policy. Hence, the argument that policy risk, i.e., uncertainty about monetary and fiscal policy, has been holding back the economic recovery in the U.S. during the Great Recession has large popular appeal. But the empirical literature is still inconclusive with respect to the aggregate effects of (mostly TFP) uncertainty: studies using different proxies and identification schemes to uncover the effects of uncertainty have produced a variety of results. We directly measure uncertainty from aggregate time series using sequential Monte Carlo methods. While we find considerable evidence of policy risk in the data, we show that the "pure uncertainty" effect of policy risk is unlikely to play a major role in business cycle fluctuations. In the estimated model, output effects are relatively small due to i) dampening general equilibrium effects that imply low amplification and ii) counteracting partial effects of uncertainty. Finally, we show that policy risk has effects that are an order of magnitude larger than those of uncertainty about aggregate TFP.

    Central banks regularly communicate about financial stability issues by publishing Financial Stability Reports (FSRs) and through speeches and interviews. Chapter 4 asks how such communications affect financial markets. For that purpose, we construct a unique and novel database on central bank communication comprising more than 1000 releases of FSRs and speeches/interviews by central bank governors from 37 central banks over the period from 1996 to 2009, i.e., spanning nearly one and a half decades. The degree of optimism expressed in these communications is determined using computerized textual-analysis software. We then use an event-study approach to analyze how financial sector stock indices react to the release of such communication. The findings suggest that FSRs have a significant and potentially long-lasting effect on stock market returns and, at the same time, tend to reduce stock market volatility. Speeches and interviews, in contrast, have little effect on market returns and do not generate a volatility reduction during tranquil times; however, they had a substantial effect during the 2007-10 financial crisis. It seems that financial stability communication by central banks is perceived by markets to contain relevant information, underlining the importance of differentiating between communication tools, their content, and the environment in which they are employed.
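
    As a concrete illustration of the simplified Wooldridge/Drukker procedure from Chapter 2, the following Python sketch runs the test on simulated panel data: first-difference out the fixed effects, estimate the differenced model by pooled OLS, and then test whether the coefficient in a regression of the residuals on their own lag equals -0.5, as it must under the null of no serial correlation. The simulated data, sample sizes, and the cluster-robust variance construction are illustrative assumptions, not the thesis's actual implementation.

import numpy as np

rng = np.random.default_rng(0)

# Simulated fixed-effects panel: N units, T periods, serially uncorrelated errors.
N, T = 200, 6
alpha = rng.normal(size=N)                     # unit fixed effects
x = rng.normal(size=(N, T))
u = rng.normal(size=(N, T))                    # no serial correlation (H0 holds)
y = 1.0 + 0.5 * x + alpha[:, None] + u

# Step 1: first-difference within each unit to sweep out the fixed effects.
dy = np.diff(y, axis=1).ravel()
dx = np.diff(x, axis=1).ravel()

# Step 2: pooled OLS on the differenced model; keep the residuals e_it.
Z = np.column_stack([np.ones_like(dx), dx])
beta = np.linalg.lstsq(Z, dy, rcond=None)[0]
e = (dy - Z @ beta).reshape(N, T - 1)

# Step 3: regress e_it on e_{i,t-1}. If the level errors are serially
# uncorrelated, corr(e_it, e_{i,t-1}) = -0.5, so we test H0: rho = -0.5
# with a standard error clustered by unit.
e_t, e_lag = e[:, 1:].ravel(), e[:, :-1].ravel()
rho = (e_lag @ e_t) / (e_lag @ e_lag)
resid = (e_t - rho * e_lag).reshape(N, T - 2)
scores = (e[:, :-1] * resid).sum(axis=1)       # per-unit score sums
se = np.sqrt(scores @ scores) / (e_lag @ e_lag)
print(f"rho = {rho:.3f}, t-statistic vs -0.5 = {(rho + 0.5) / se:.2f}")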

    CompNet: A New Scheme for Single Image Super Resolution Based on Deep Convolutional Neural Network

    The features produced by the layers of a neural network become increasingly sparse as the network gets deeper; consequently, the learning capability of the network is not further enhanced as the number of layers is increased. In this paper, a novel residual deep network, called CompNet, is proposed for the single image super resolution problem without an excessive increase in network complexity. The idea behind the proposed network is to compose a residual signal that is more representative of the features produced by the different layers of the network and is less sparse. The proposed network is evaluated on several benchmark datasets and is shown to outperform the state-of-the-art schemes designed to solve the super resolution problem.
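
    To make the composition idea tangible, here is a minimal PyTorch sketch of a super-resolution network that forms the predicted residual from the features of every depth (via concatenation and a 1x1 convolution) rather than from the last layer alone, and adds it to the interpolated input. This is a generic illustration of the principle, not the authors' CompNet; the depth, channel width, and fusion operator are assumptions.

import torch
import torch.nn as nn

class ResidualComposeSR(nn.Module):
    """Toy SISR network: the output residual is composed from the feature
    maps of all intermediate layers, which keeps it less sparse than the
    last layer's features alone (illustrative sketch only)."""

    def __init__(self, channels=64, depth=6):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.body = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(inplace=True))
            for _ in range(depth)
        ])
        # Fuse the features collected at every depth into a single residual.
        self.fuse = nn.Conv2d(channels * depth, 1, 1)

    def forward(self, x):                  # x: bicubic-upscaled low-res image
        feats, h = [], torch.relu(self.head(x))
        for block in self.body:
            h = block(h)
            feats.append(h)                # keep each layer's features
        residual = self.fuse(torch.cat(feats, dim=1))
        return x + residual                # global residual learning

# Smoke test on a random single-channel patch.
net = ResidualComposeSR()
print(net(torch.randn(1, 1, 32, 32)).shape)   # torch.Size([1, 1, 32, 32])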

    Similarity modeling for machine learning

    Similarity is the extent to which two objects resemble each other. Modeling similarity is an important topic for both machine learning and computer vision. In this dissertation, we first propose a discriminative similarity learning method, then introduce two novel sparse similarity modeling methods for high dimensional data from the perspectives of manifold learning and subspace learning. Our sparse similarity modeling methods learn sparse similarity and consequently generate a sparse graph over the data. The generated sparse graph leads to superior performance in clustering and semi-supervised learning, compared to existing sparse graph based methods such as the $\ell^1$-graph and Sparse Subspace Clustering (SSC). More concretely, our discriminative similarity learning method adopts a novel pairwise clustering framework that bridges the gap between clustering and multi-class classification. This pairwise clustering framework learns an unsupervised nonparametric classifier from each data partition, and searches for the optimal partition of the data by minimizing the generalization error of the learned classifiers associated with the data partitions. Regarding our sparse similarity modeling methods, we propose a novel $\ell^0$-regularized $\ell^1$-graph ($\ell^0$-$\ell^1$-graph) to improve the $\ell^1$-graph from the perspective of manifold learning. Our $\ell^0$-$\ell^1$-graph generates a sparse graph that is aligned to the manifold structure of the data for better clustering performance. From the perspective of learning the subspace structures of high dimensional data, we propose the $\ell^0$-graph, which generates a subspace-consistent sparse graph for clustering and semi-supervised learning. A subspace-consistent sparse graph is a sparse graph in which a data point is only connected to other data points that lie in the same subspace; the representative method Sparse Subspace Clustering (SSC) provably generates a subspace-consistent sparse graph under certain assumptions on the subspaces and the data, e.g., independent/disjoint subspaces and subspace incoherence/affinity. In contrast, our $\ell^0$-graph can generate a subspace-consistent sparse graph for arbitrary distinct underlying subspaces under far less restrictive assumptions, i.e., only i.i.d. random data generation according to an arbitrary continuous distribution. Extensive experimental results on various data sets demonstrate the superiority of the $\ell^0$-graph compared to other methods, including SSC, for both clustering and semi-supervised learning. The proposed sparse similarity modeling methods require sparse coding using the entire data set as the dictionary, which can be inefficient, especially for large-scale data. In order to overcome this challenge, we propose Support Regularized Sparse Coding (SRSC), in which a compact dictionary is learned. The data similarity induced by the support regularized sparse codes leads to compelling clustering performance. Moreover, a feed-forward neural network, termed Deep-SRSC, is designed as a fast encoder to approximate the codes generated by SRSC, further improving the efficiency of SRSC.
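
    For intuition, the Python sketch below builds the baseline $\ell^1$-graph that the proposed $\ell^0$-regularized variants improve on: each point is sparsely coded over the remaining points with the Lasso, the absolute coefficients become edge weights, and the symmetrized graph is fed to spectral clustering. The toy subspace data and the regularization value are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def l1_graph(X, alpha=0.05):
    """Code each point over all *other* points with the Lasso and use the
    absolute coefficients as directed edge weights (baseline l1-graph)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.arange(n) != i
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(X[idx].T, X[i])          # dictionary = the other points
        W[i, idx] = np.abs(coder.coef_)
    return (W + W.T) / 2                   # symmetrize for spectral clustering

# Toy data: two clusters lying in two different 2-dimensional subspaces of R^5,
# so sparse coding tends to pick same-subspace points as dictionary atoms.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(30, 2)) @ rng.normal(size=(2, 5)),
               rng.normal(size=(30, 2)) @ rng.normal(size=(2, 5))])
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(l1_graph(X))
print(labels)

    The $\ell^0$-regularized methods described above additionally constrain the support of these codes so that the resulting graph follows the manifold or subspace structure of the data more faithfully.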

    Biometrics

    Biometrics comprises methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits. In computer science in particular, biometrics is used as a form of identity access management and access control. It is also used to identify individuals in groups that are under surveillance. The book consists of 13 chapters, each focusing on a certain aspect of the problem, divided into three sections: physical biometrics, behavioral biometrics, and medical biometrics. The key objective of the book is to provide a comprehensive reference and text on human authentication and people identity verification from physiological, behavioral, and other points of view. It presents new insights into current innovations in computer systems and technology for the development of biometrics and its applications. The book was reviewed by the editor, Dr. Jucheng Yang, and by guest editors including Dr. Girija Chetty, Dr. Norman Poh, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park, and Dr. Sook Yoon, who also made significant contributions to the book.

    Deep Learning Enabling Accurate Imaging Beyond Device Limitations

    Tohoku University doctoral thesis (Information Sciences).

    A built-in self-test technique for high speed analog-to-digital converters

    Fundação para a Ciência e a Tecnologia (FCT) - PhD grant (SFRH/BD/62568/2009).

    Lightweight cryptography on ultra-constrained RFID devices

    Devices of extremely small computational power, like RFID tags, are used in practice to a rapidly growing extent, a trend commonly referred to as ubiquitous computing. Despite their severely constrained resources, the security burden these devices have to carry is often enormous, as their fields of application range from everyday access control to human-implantable chips providing sensitive medical information about a person. Unfortunately, established cryptographic primitives such as AES are way too 'heavy' (e.g., in terms of circuit size or power consumption) to be used in corresponding RFID systems, calling for new solutions and thus initiating the research area of lightweight cryptography. In this thesis, we focus on the currently most restricted form of such devices and refer to them as ultra-constrained RFIDs. To make this notion concrete and to create a sound basis for our subsequent cryptographic development, we start this work by providing a comprehensive summary of conditions that should be met by lightweight cryptographic schemes targeting ultra-constrained RFID devices.

    Building on these insights, we then turn towards the two main topics of this thesis: lightweight authentication and lightweight stream ciphers. To this end, we first provide a general introduction to the broad field of authentication and study existing (allegedly) lightweight approaches. Drawing on this, with the (n,k,L)^-protocol, we suggest our own lightweight authentication scheme and, on the basis of corresponding hardware implementations for FPGAs and ASICs, demonstrate its suitability for ultra-constrained RFIDs.

    Subsequently, we leave the path of searching for dedicated authentication protocols and turn towards stream cipher design, where we first revisit some prominent classical examples and, in particular, analyze their state initialization algorithms. Following this, we investigate the rather young area of small-state stream ciphers, which try to overcome the limit imposed by time-memory-data tradeoff (TMD-TO) attacks on the security of classical stream ciphers. Here, we present some new attacks, but also corresponding design ideas for countering them. Paving the way for our own small-state stream cipher, we then propose and analyze the LIZARD-construction, which combines the explicit use of packet mode with a new type of state initialization algorithm. For corresponding keystream generator-based designs of inner state length n, we prove a tight $2^{2n/3}$ bound on the security against TMD-TO key recovery attacks. Building on these theoretical results, we finally present LIZARD, our new lightweight stream cipher for ultra-constrained RFIDs. Its hardware efficiency and security result from combining a Grain-like design with the LIZARD-construction. Most notably, besides lower area requirements, the estimated power consumption of LIZARD is also about 16 percent below that of Grain v1, making it particularly suitable for passive RFID tags, which obtain their energy exclusively through an electromagnetic field radiated by the reading device. The thesis is concluded by an extensive 'Future Research Directions' chapter, introducing various new ideas and thus showing that the search for lightweight cryptographic solutions is far from complete.
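
    To make the 'Grain-like design' mentioned above concrete, the following toy Python sketch couples an LFSR with an NFSR and taps both registers through a filter function, which is the structural pattern shared by Grain v1 and LIZARD. All register sizes, tap positions, and feedback functions here are invented for illustration; they are not the actual Grain v1 or LIZARD specifications, and the sketch omits state initialization and packet mode entirely.

def keystream(key_bits, iv_bits, n_bits):
    """Toy Grain-like keystream generator (illustrative taps only)."""
    lfsr = list(iv_bits)               # 16-bit toy LFSR state
    nfsr = list(key_bits)              # 16-bit toy NFSR state
    out = []
    for _ in range(n_bits):
        # Filter function taps both registers, with one nonlinear term.
        z = lfsr[1] ^ nfsr[3] ^ (lfsr[7] & nfsr[9]) ^ nfsr[15]
        out.append(z)
        # Purely linear feedback for the LFSR.
        fl = lfsr[0] ^ lfsr[2] ^ lfsr[3] ^ lfsr[5]
        # Nonlinear NFSR feedback, masked by the outgoing LFSR bit,
        # as in Grain-like constructions.
        fn = lfsr[0] ^ nfsr[0] ^ (nfsr[4] & nfsr[11]) ^ nfsr[13]
        lfsr = lfsr[1:] + [fl]
        nfsr = nfsr[1:] + [fn]
    return out

bits = keystream([1, 0] * 8, [0, 1] * 8, 32)
print("".join(map(str, bits)))

    In a real design of this shape, the LFSR's guaranteed period protects against short cycles of the nonlinear register, while the NFSR and the nonlinear filter provide resistance against algebraic and correlation attacks.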