
    Mixed-signal quadratic operators for the feature extraction of neural signals

    This paper presents design principles for reusing charge-redistribution SAR ADCs as digital multipliers. This is illustrated with an 8-b fully differential rail-to-rail SAR ADC/multiplier designed in a 180 nm HV CMOS technology. This reconfigurability can be exploited to extract product-related features of neural signals, such as energy content, or to discriminate spikes using the Teager operator. Funding: Ministerio de Economía y Competitividad TEC2012-33634; Office of Naval Research (USA) N0001414135.
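    The Teager operator mentioned above has a compact discrete form. The following is a minimal plain-Python sketch of that operator on a synthetic signal; nothing here comes from the paper's mixed-signal implementation, and the test sinusoid is invented for illustration.

```python
import math

def teager(x):
    """Teager energy psi[x(n)] = x(n)^2 - x(n-1)*x(n+1) for samples 1..len(x)-2."""
    return [x[n] * x[n] - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

# For a pure sinusoid A*cos(w*n), the operator yields the constant
# A^2 * sin(w)^2 -- a joint measure of amplitude and frequency, which is
# why it serves as an "energy content" feature for spike discrimination.
A, w = 2.0, 0.3
sig = [A * math.cos(w * n) for n in range(100)]
psi = teager(sig)
expected = (A * math.sin(w)) ** 2
```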

    A Survey of the methods on fingerprint orientation field estimation

    Fingerprint orientation field (FOF) estimation plays a key role in the performance of automated fingerprint identification systems (AFIS): accurate FOF estimation can markedly improve AFIS performance. However, despite the considerable attention FOF estimation has received over the past decades, accurate estimation, especially for poor-quality fingerprints, remains a challenging task. In this paper, we review and categorize the large number of FOF estimation methods proposed in the specialized literature, with particular attention to the most recent work in this area. Broadly speaking, existing FOF estimation methods fall into three categories: gradient-based methods, mathematical-model-based methods, and learning-based methods. Identifying and explaining the advantages and limitations of these methods is of fundamental importance for fingerprint identification, because only a full understanding of their nature can shed light on the most essential issues in FOF estimation. We provide a comprehensive discussion and analysis of these methods with respect to their advantages and limitations, and we have conducted experiments on publicly available competition datasets to compare the performance of the most relevant algorithms and methods.
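    To make the first category concrete, here is the classic gradient-based block-orientation estimator (doubled-angle averaging of image gradients), applied to a synthetic ridge pattern. This is a textbook stand-in for the gradient-based family the survey covers, not an algorithm taken from any specific surveyed paper.

```python
import math

def block_orientation(img):
    """Gradient-based ridge-orientation estimate for one image block (radians).

    Gradients are averaged in doubled-angle form (2*Gx*Gy, Gx^2 - Gy^2) so
    that opposite gradient vectors reinforce rather than cancel; the ridge
    orientation is perpendicular to the dominant gradient direction.
    """
    h, w = len(img), len(img[0])
    sxx = sxy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0  # central differences
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            sxy += 2.0 * gx * gy
            sxx += gx * gx - gy * gy
    phi = 0.5 * math.atan2(sxy, sxx)       # dominant gradient direction
    return (phi + math.pi / 2) % math.pi   # ridges run perpendicular to it

# Synthetic "fingerprint" block: parallel cosine ridges at a known angle.
alpha = math.radians(30)   # true ridge orientation
f = 0.5                    # spatial frequency of the ridges
img = [[math.cos(f * (x * math.sin(alpha) - y * math.cos(alpha)))
        for x in range(32)] for y in range(32)]
est = block_orientation(img)
```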

    Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene

    © 2016 IEEE. Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine low- and mid-level cues for classification without accounting for any spatial structure. For applications such as scene understanding, how visual cues are spatially distributed in an image is essential for successful analysis. This paper extends the framework of deep neural networks by accounting for structural cues in visual signals. In particular, two kinds of neural networks are proposed. First, we develop a multitask deep convolutional network that simultaneously detects the presence of a target and its geometric attributes (location and orientation) with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can handle the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to define explicitly. Both networks are demonstrated on the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional network provides auxiliary geometric information to help the subsequent modeling of the given lane structures, and the recurrent network automatically detects lane boundaries, including areas containing no marks, without any explicit prior knowledge or secondary modeling.
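    The multitask idea (one shared representation feeding a classification head and a geometry-regression head) can be sketched as a toy forward pass. This is not the paper's architecture; all layer sizes, weights, and inputs below are invented, and there is no convolution or training, just the head structure.

```python
import math, random

random.seed(0)

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

# Toy multitask head: a shared feature vector feeds two output heads,
# mirroring joint prediction of target presence and geometric attributes.
D_IN, D_SHARED = 16, 8
W_shared = rand_mat(D_SHARED, D_IN)
W_cls = rand_mat(1, D_SHARED)   # presence head -> probability
W_geo = rand_mat(2, D_SHARED)   # geometry head -> (location, orientation)

def forward(x):
    h = [math.tanh(z) for z in matvec(W_shared, x)]    # shared features
    p = 1.0 / (1.0 + math.exp(-matvec(W_cls, h)[0]))   # sigmoid presence score
    loc, ori = matvec(W_geo, h)                        # regression outputs
    return p, loc, ori

p, loc, ori = forward([random.random() for _ in range(D_IN)])
```

In a real multitask network the two heads would be trained jointly, so the shared layers learn features useful for both detection and geometry.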

    Feature and Decision Level Fusion Using Multiple Kernel Learning and Fuzzy Integrals

    The work collected in this dissertation addresses the problem of data fusion: making decisions (also known as classification in the machine learning and statistics communities) when data from multiple sources are available, or when decisions or confidence levels from a panel of decision-makers are accessible. This problem has become increasingly important in recent years, especially with the ever-increasing popularity of autonomous systems outfitted with suites of sensors and the dawn of the "age of big data." While data fusion is a very broad topic, this dissertation considers two specific techniques: feature-level fusion and decision-level fusion. In general, the fusion methods proposed throughout this dissertation rely on kernel methods and fuzzy integrals. Both are powerful tools, but both also come with challenges, some of which are summarized below and addressed in this dissertation. Kernel methods for classification are a well-studied area in which data are implicitly mapped from a lower-dimensional space to a higher-dimensional space to improve classification accuracy. However, for most kernel methods one must still choose a kernel for the problem at hand. Since there is, in general, no way of knowing which kernel is best, multiple kernel learning (MKL) is a technique for learning the aggregation of a set of valid kernels into a single (ideally) superior kernel. The aggregation can be done using weighted sums of the pre-computed kernels, but determining the summation weights is not a trivial task. Furthermore, MKL does not work well with large datasets because of limited storage space and prediction speed. These challenges are tackled by the new algorithms introduced in the following chapters, which also address MKL's storage and speed drawbacks, allowing MKL-based techniques to be applied to big data efficiently.
Some algorithms in this work are based on the Choquet fuzzy integral, a powerful nonlinear aggregation operator parameterized by the fuzzy measure (FM). These decision-level fusion algorithms learn a fuzzy measure by minimizing a sum-of-squared-error (SSE) criterion on a set of training data. The flexibility of the Choquet integral comes at a cost, however: given a set of N decision-makers, the size of the FM the algorithm must learn is 2^N. This means that the training data must be diverse enough to include 2^N independent observations, which is rarely the case in practice. I address this in the following chapters via several regularization functions, a popular technique in machine learning and statistics used to prevent overfitting and improve model generalization. Finally, the aggregation behavior of the Choquet integral is not intuitive; I tackle this by proposing a quantitative visualization strategy that shows the FM and the Choquet integral's behavior simultaneously.
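    The discrete Choquet integral described above is short enough to sketch directly. The fuzzy measure below is an additive one built from weights (so the integral reduces to a plain weighted mean, a sanity check rather than a learned measure), but it makes the 2^N storage cost visible: a measure over N = 3 sources already needs 8 values.

```python
from itertools import combinations

def choquet(x, g):
    """Discrete Choquet integral of inputs x w.r.t. fuzzy measure g.

    g maps frozensets of source indices to [0, 1], with g({}) = 0 and
    g(all sources) = 1. Inputs are sorted in descending order, and each
    drop in value is weighted by the measure of the coalition of sources
    whose inputs are at least that large.
    """
    order = sorted(range(len(x)), key=lambda i: x[i], reverse=True)
    total, coalition = 0.0, set()
    for rank, i in enumerate(order):
        coalition.add(i)
        nxt = x[order[rank + 1]] if rank + 1 < len(x) else 0.0
        total += (x[i] - nxt) * g[frozenset(coalition)]
    return total

# A fuzzy measure over N = 3 sources needs 2^3 = 8 values -- exactly the
# exponential growth the dissertation regularizes against. This additive
# measure is a made-up example, not one learned from data.
w = [0.2, 0.3, 0.5]
g = {frozenset(s): sum(w[i] for i in s)
     for r in range(4) for s in combinations(range(3), r)}
result = choquet([1.0, 0.5, 0.25], g)   # equals the plain weighted mean here
```

Non-additive measures make the same operator behave like a minimum, a maximum, or anything in between, which is why visualizing the learned FM is valuable.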

    On the 3D point cloud for human-pose estimation

    This thesis aims at investigating methodologies for estimating a human pose from a 3D point cloud that is captured by a static depth sensor. Human-pose estimation (HPE) is important for a range of applications, such as human-robot interaction, healthcare, surveillance, and so forth. Yet, HPE is challenging because of the uncertainty in sensor measurements and the complexity of human poses. In this research, we focus on addressing challenges related to two crucial components in the estimation process, namely, human-pose feature extraction and human-pose modeling. In feature extraction, the main challenge involves reducing feature ambiguity. We propose a 3D-point-cloud feature called viewpoint and shape feature histogram (VISH) to reduce feature ambiguity by capturing geometric properties of the 3D point cloud of a human. The feature extraction consists of three steps: 3D-point-cloud pre-processing, hierarchical structuring, and feature extraction. In the pre-processing step, 3D points corresponding to a human are extracted and outliers from the environment are removed to retain the 3D points of interest. This step is important because it allows us to reduce the number of 3D points by keeping only those points that correspond to the human body for further processing. In the hierarchical structuring, the pre-processed 3D point cloud is partitioned and replicated into a tree structure as nodes. Viewpoint feature histogram (VFH) and shape features are extracted from each node in the tree to provide a descriptor to represent each node. As the features are obtained based on histograms, coarse-level details are highlighted in large regions and fine-level details are highlighted in small regions. Therefore, the features from the point cloud in the tree can capture coarse level to fine level information to reduce feature ambiguity. 
In human-pose modeling, the main challenges involve reducing the dimensionality of human-pose space and designing appropriate factors that represent the underlying probability distributions for estimating human poses. To reduce the dimensionality, we propose a non-parametric action-mixture model (AMM). It represents high-dimensional human-pose space using low-dimensional manifolds in searching human poses. In each manifold, a probability distribution is estimated based on feature similarity. The distributions in the manifolds are then redistributed according to the stationary distribution of a Markov chain that models the frequency of human actions. After the redistribution, the manifolds are combined according to a probability distribution determined by action classification. Experiments were conducted using VISH features as input to the AMM. The results showed that the overall error and standard deviation of the AMM were reduced by about 7.9% and 7.1%, respectively, compared with a model without action classification. To design appropriate factors, we consider the AMM as a Bayesian network and propose a mapping that converts the Bayesian network to a neural network called NN-AMM. The proposed mapping consists of two steps: structure identification and parameter learning. In structure identification, we have developed a bottom-up approach to build a neural network while preserving the Bayesian-network structure. In parameter learning, we have created a part-based approach to learn synaptic weights by decomposing a neural network into parts. Based on the concept of distributed representation, the NN-AMM is further modified into a scalable neural network called NND-AMM. A neural-network-based system is then built by using VISH features to represent 3D-point-cloud input and the NND-AMM to estimate 3D human poses. The results showed that the proposed mapping can be utilized to design AMM factors automatically. 
The NND-AMM can provide more accurate human-pose estimates with fewer hidden neurons than both the AMM and NN-AMM can. Both the NN-AMM and NND-AMM can adapt to different types of input, showing the advantage of using neural networks to design factors.
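    The redistribution step described above (reweighting per-action pose distributions by the stationary distribution of a Markov chain over actions) can be sketched with power iteration. The transition matrix and per-manifold scores below are invented placeholders, not values from the thesis.

```python
def stationary(P, iters=500):
    """Stationary distribution of a row-stochastic matrix via power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 3-action chain (e.g. walk / sit / stand transition frequencies).
P = [[0.8, 0.1, 0.1],
     [0.2, 0.7, 0.1],
     [0.3, 0.2, 0.5]]
pi = stationary(P)

# Reweight per-action pose scores by how frequently each action occurs,
# mimicking the AMM's redistribution across manifolds.
scores = [0.5, 0.9, 0.2]                  # made-up per-manifold scores
weighted = [s * p for s, p in zip(scores, pi)]
```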

    Bottom-up design of porous electrodes by combining a genetic algorithm and a pore network model

    The microstructure of porous electrodes determines multiple performance-defining properties, such as the available reactive surface area, mass-transfer rates, and hydraulic resistance. Thus, optimizing the electrode architecture is a powerful approach to enhancing the performance and cost-competitiveness of electrochemical technologies. To expand our current arsenal of electrode materials, we need to build predictive frameworks that can screen a large geometrical design space while remaining physically representative. Here, we present a novel approach for the bottom-up optimization of porous electrode microstructures that couples a genetic algorithm with a previously validated electrochemical pore network model. In this first demonstration, we focus on optimizing redox flow battery electrodes. The genetic algorithm manipulates the pore and throat size distributions of an artificially generated microstructure with fixed pore positions by selecting the best-performing networks, based on the hydraulic and electrochemical performance computed by the model. For the studied VO²⁺/VO₂⁺ electrolyte, we find a 75 % increase in fitness compared to the initial configuration by minimizing the pumping power and maximizing the electrochemical power of the system. The algorithm generates structures with improved fluid distribution through the formation of a bimodal pore size distribution containing preferential longitudinal flow pathways, resulting in a 73 % decrease in the required pumping power. Furthermore, the optimization yielded a 47 % increase in surface area, resulting in an electrochemical performance improvement of 42 %. Our results show the potential of combining genetic algorithms with pore network models to optimize porous electrode microstructures for a wide range of electrolyte compositions and operating conditions.
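    The select-mutate-recombine loop described above can be illustrated with a toy genetic algorithm. The genome is a vector of pore radii and the fitness is a crude invented surrogate (surface-area reward minus a Hagen-Poiseuille-like resistance penalty); it is emphatically not the validated pore network model used in the paper.

```python
import random

random.seed(1)

N_PORES, POP, GENS = 12, 30, 40

def fitness(radii):
    """Made-up surrogate: reward surface area, penalize hydraulic resistance."""
    area = sum(radii)                                # larger pores: more surface proxy
    resistance = sum(1.0 / r ** 4 for r in radii)    # Hagen-Poiseuille-like term
    return area - 0.05 * resistance

def mutate(radii):
    # Gaussian perturbation, clamped to a physically plausible radius range.
    return [min(2.0, max(0.2, r + random.gauss(0, 0.1))) for r in radii]

def crossover(a, b):
    cut = random.randrange(1, N_PORES)
    return a[:cut] + b[cut:]

pop = [[random.uniform(0.2, 2.0) for _ in range(N_PORES)] for _ in range(POP)]
best0 = max(fitness(g) for g in pop)
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 3]                          # elitist selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]
best = max(fitness(g) for g in pop)
```

Because the elite third is carried over unchanged each generation, the best fitness is non-decreasing; in the paper the expensive fitness evaluation is performed by the pore network model instead of a closed-form surrogate.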

    Machine Learning Tool for Transmission Capacity Forecasting of Overhead Lines based on Distributed Weather Data

    Increasing the share of intermittent renewable energy sources in the electrical power system is a challenge for grid operators. One example is the growing north-south transmission of wind energy in Germany, which increases congestion on overhead lines and translates directly into higher electricity costs for end consumers. Besides building new overhead lines, weather-dependent overhead line operation is one solution for improving the utilization of the existing system. Analysis of a test line in Germany showed that an increase of about 28% in current-carrying capacity can reduce the cost of congestion-management measures by about 55%. This benefit can only be realized by the grid operator if a capacity forecast is available for scheduling the generation of conventional power plants. The system presented in this dissertation forecasts the current-carrying capacity of overhead lines 48 hours ahead, improving forecast accuracy over the state of the art by 6.13% on average. The approach adapts meteorological forecasts to the local weather situation along the line. These adjustments are necessary because of changes in topography along the line route and wind shadowing from surrounding trees, which meteorological models cannot capture. Moreover, the model developed in this dissertation is able to compensate for the day-night differences in weather-forecast accuracy, which benefits the capacity forecast. The reliability, and therefore the efficiency, of the generation schedule for the next 48 hours was increased by 10% over the state of the art.
In addition, this work developed a method for positioning weather stations so as to cover the most important locations along the line while minimizing the number of stations. If a distributed sensor network were deployed across Germany, the savings in redispatch costs would correspond to a return on investment of roughly three years. The developed system also supports transient analysis, allowing congestion situations to be resolved for a few minutes without reaching the maximum conductor temperature. This document seeks to highlight the benefits of overhead line monitoring systems and presents a solution to support the flexible electrical grid required for a successful energy transition.
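    The physical intuition behind weather-dependent line rating is a steady-state heat balance: the allowed current follows from I²R = q_convection + q_radiation − q_solar. The sketch below illustrates only that intuition; every coefficient is an illustrative placeholder (not an IEEE-738 value), and it is unrelated to the dissertation's machine-learning forecasting model.

```python
import math

def ampacity(wind_speed, t_ambient, solar_wm, t_max=80.0,
             r_ac=7e-5, diameter=0.03):
    """Very rough steady-state ampacity (A) from a conductor heat balance.

    All coefficients (convection law, emissivity, absorptivity, AC
    resistance) are made-up placeholders chosen only for illustration.
    """
    dT = t_max - t_ambient
    q_conv = (2.0 + 10.0 * math.sqrt(wind_speed)) * dT * diameter   # W/m, crude
    emiss, sigma = 0.8, 5.67e-8
    q_rad = emiss * sigma * math.pi * diameter * (
        (t_max + 273.15) ** 4 - (t_ambient + 273.15) ** 4)          # W/m
    q_sun = 0.8 * solar_wm * diameter                               # absorbed W/m
    net = max(q_conv + q_rad - q_sun, 0.0)
    return math.sqrt(net / r_ac)

# A windy, cool, overcast hour allows far more current than a calm, hot,
# sunny one -- the headroom that weather-based rating makes usable.
calm_hot = ampacity(wind_speed=0.5, t_ambient=35.0, solar_wm=1000.0)
windy_cool = ampacity(wind_speed=6.0, t_ambient=10.0, solar_wm=100.0)
```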

    Control of quantum phenomena: Past, present, and future

    Quantum control is concerned with the active manipulation of physical and chemical processes on the atomic and molecular scale. This work presents a perspective on progress in the field of control over quantum phenomena, tracing the evolution of theoretical concepts and experimental methods from early developments to the most recent advances. The current experimental successes would be impossible without the development of intense femtosecond laser sources and pulse shapers. The two most critical theoretical insights were (1) realizing that ultrafast atomic and molecular dynamics can be controlled via manipulation of quantum interferences and (2) understanding that optimally shaped ultrafast laser pulses are the most effective means for producing the desired quantum interference patterns in the controlled system. These theoretical and experimental advances were brought together by the crucial concept of adaptive feedback control: a laboratory procedure employing measurement-driven, closed-loop optimization to identify the best shapes of femtosecond laser control pulses for steering quantum dynamics towards the desired objective. Optimization in adaptive feedback control experiments is guided by a learning algorithm, with stochastic methods proving especially effective. Adaptive feedback control of quantum phenomena has found numerous applications in many areas of the physical and chemical sciences, and this paper reviews the extensive experiments. Other subjects discussed include quantum optimal control theory, quantum control landscapes, the role of theoretical control designs in experimental realizations, and real-time quantum feedback control. The paper concludes with a prospective of open research directions that are likely to attract significant attention in the future. (Review article, final version, significantly updated; 76 pages; accepted for publication in New J. Phys., focus issue on quantum control.)
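    The closed-loop idea described above (propose a pulse shape, measure the outcome, let a stochastic learning algorithm keep improvements) can be mimicked in a few lines. The "experiment" below is a made-up surrogate objective (overlap of the shaped pulse with a target waveform), not a quantum system, and the hill-climbing rule stands in for the more sophisticated stochastic optimizers used in real adaptive feedback control.

```python
import math, random

random.seed(7)

N = 16
target = [math.sin(2 * math.pi * k / N) ** 2 for k in range(N)]

def measure(pulse):
    """Surrogate 'experiment': negative squared error vs the target shape."""
    return -sum((p - t) ** 2 for p, t in zip(pulse, target))

pulse = [0.5] * N                       # initial unshaped pulse
score = measure(pulse)
for _ in range(4000):                   # closed feedback loop
    k = random.randrange(N)
    trial = list(pulse)
    trial[k] += random.gauss(0, 0.05)   # stochastic pulse-shape update
    s = measure(trial)
    if s > score:                       # keep only measured improvements
        pulse, score = trial, s
```

The key property this toy shares with laboratory adaptive feedback control is that no model of the "system" is needed: only repeated measurements guide the search.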

    Optimization Methods Applied to Power Systems II

    Electrical power systems are complex networks comprising the electrical components that deliver the electricity generated by conventional and renewable power plants to distribution systems and, ultimately, to final consumers (businesses and homes). In practice, power system management requires solving various design, operation, and control problems. Since computers are used to solve these complex optimization problems, this book collects recent contributions to the field covering a wide variety of topics: controllers for the frequency response of microgrids, post-contingency overflow analysis, line overloads after line and generation contingencies, power quality disturbances, earthing-system touch voltages, security-constrained optimal power flow, voltage regulation planning, intermittent generation in power systems, location of partial-discharge sources in gas-insulated switchgear, electric vehicle charging stations, optimal power flow with photovoltaic generation, hydroelectric plant location selection, cold-thermal-electric integrated energy systems, high-efficiency resonant devices for microwave power generation, security-constrained unit commitment, and economic dispatch problems.
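    As a concrete taste of one problem type from that list, here is the classic economic dispatch with quadratic generator costs and no unit limits, solved via the equal-incremental-cost condition by bisecting on the system lambda. The generator data are invented for illustration.

```python
gens = [  # (b_i, c_i): cost_i(P) = b_i * P + c_i * P^2
    (8.0, 0.010),
    (6.0, 0.015),
    (7.0, 0.020),
]
demand = 500.0  # MW

def output_at(lmbda):
    # At the optimum every unit runs where its marginal cost b + 2cP
    # equals the common system lambda, so P_i = (lambda - b_i) / (2 c_i).
    return [(lmbda - b) / (2 * c) for b, c in gens]

lo, hi = 0.0, 100.0
for _ in range(80):                 # bisection on the system lambda
    mid = (lo + hi) / 2
    if sum(output_at(mid)) < demand:
        lo = mid
    else:
        hi = mid
lmbda = (lo + hi) / 2
dispatch = output_at(lmbda)
```

With generator limits or losses the condition gains inequality constraints, which is where the security-constrained formulations mentioned above come in.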