465 research outputs found

    The Influence of Global Constraints on Similarity Measures for Time-Series Databases

    Full text link
    A time series consists of a series of values or events obtained over repeated measurements in time. Analysis of time series represents an important tool in many application areas, such as stock market analysis, process and quality control, observation of natural phenomena, medical treatments, etc. A vital component in many types of time-series analysis is the choice of an appropriate distance/similarity measure. Numerous measures have been proposed to date, with the most successful ones based on dynamic programming. These measures are of quadratic time complexity, however, so global constraints are often employed to limit the search space in the matrix during the dynamic programming procedure, in order to speed up computation. Furthermore, it has been reported that such constrained measures can also achieve better accuracy. In this paper, we investigate two representative time-series distance/similarity measures based on dynamic programming, Dynamic Time Warping (DTW) and Longest Common Subsequence (LCS), and the effects of global constraints on them. Through extensive experiments on a large number of time-series data sets, we demonstrate how global constraints can significantly reduce the computation time of DTW and LCS. We also show that, if the constraint parameter is tight enough (less than 10-15% of the time-series length), the constrained measure becomes significantly different from its unconstrained counterpart, in the sense of producing qualitatively different 1-nearest-neighbor graphs. This observation explains the potential for accuracy gains when using constrained measures, highlighting the need for careful tuning of constraint parameters in order to achieve a good trade-off between speed and accuracy.
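
    The constraint referred to here is typically a Sakoe-Chiba band that limits how far the warping path may stray from the diagonal of the dynamic programming matrix. The following is a minimal Python sketch (a generic illustration, not the authors' implementation) of DTW restricted to such a band; the band half-width corresponds to the constraint parameter expressed as a fraction of the series length.

        import numpy as np

        def dtw_constrained(x, y, window=None):
            """DTW distance between 1-D series x and y with an optional
            Sakoe-Chiba band of half-width `window` (in samples)."""
            n, m = len(x), len(y)
            w = max(window, abs(n - m)) if window is not None else max(n, m)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                # only visit cells within the band around the diagonal
                for j in range(max(1, i - w), min(m, i + w) + 1):
                    cost = (x[i - 1] - y[j - 1]) ** 2
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return np.sqrt(D[n, m])

        # Example: a band of 10% of the series length
        x = np.sin(np.linspace(0.0, 6.28, 100))
        y = np.sin(np.linspace(0.3, 6.58, 100))
        d = dtw_constrained(x, y, window=int(0.10 * len(x)))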

    On fault diagnosis for high-g accelerometers via data-driven models

    Get PDF
    Shock testing is a pivotal stage in designing and manufacturing space instruments. As the essential components in shock test systems for measuring shock signals accurately, high-g accelerometers are usually exposed to hazardous shock environments and can be subjected to various damages. Because such damage can result in erroneous measurements that in turn lead to shock test failures, accurately diagnosing the fault type of each high-g accelerometer is vital to ensuring the reliability of shock test experiments. Additionally, in practice, an accelerometer with a given malfunction usually outputs mutable signal waveforms, so it is difficult to judge the fault type of the accelerometer empirically from the erroneous readings. Moreover, traditional hardware diagnosis approaches require disassembling the sensor's package shell and manually inspecting the damage to the elements inside the sensor, which is inefficient and uneconomical. Aiming at these problems, several data-driven approaches are incorporated to diagnose the fault types of high-g accelerometers in this work. Firstly, several high-g accelerometers with the most frequent types of damage are collected, and a shock signal dataset is gathered by conducting shock tests on these faulty accelerometers. Then, the obtained dataset is used to train several base classifiers to identify the fault types in a supervised fashion. Lastly, a hybrid ensemble learning model is established by integrating these base classifiers with both heterogeneous and homogeneous models. Experimental results show that these data-driven methods can accurately identify the fault types of high-g accelerometers from their mutable erroneous readings.
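
    As a rough illustration of the final step, the Python sketch below combines a few heterogeneous base classifiers into a soft-voting ensemble. The feature extraction, the particular base models, and the hybrid ensemble structure used in the paper are not specified here, so these choices are assumptions.

        # Hypothetical ensemble of heterogeneous base classifiers for fault-type
        # classification; the models and parameters are illustrative assumptions.
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        def build_ensemble():
            base = [
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
            ]
            # soft voting averages the predicted class probabilities
            return VotingClassifier(estimators=base, voting="soft")

        # X: features extracted from recorded shock signals, y: fault-type labels
        # model = build_ensemble().fit(X, y)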

    Design and Software Validation of Coded Communication Schemes using Multidimensional Signal Sets without Constellation Expansion Penalty in Band-Limited Gaussian and Fading Channels

    Get PDF
    It has been well reported that the use of multidimensional constellation signals can help to reduce the bit error rate in additive Gaussian channels by using the hyperspace geometry more efficiently. Similarly, in fading channels, dimensionality provides an inherent signal space diversity (distinct components between two constellation points), so the amplitude degradation of the signal is combated significantly better. Moreover, the set of n-dimensional signals also provides great compatibility with various trellis-coded modulation (TCM) schemes: N-dimensional signaling joined with a convolutional encoder uses fewer redundant bits for each 2D signaling interval, and increases the intra-subset minimum squared Euclidean distance (MSED) to approach the ultimate capacity limit predicted by Shannon's theory. Multidimensional signals perform better, for the same complexity, than two-dimensional schemes. The inherent constellation expansion penalty paid for using classical mapping structures can be decreased by enlarging the constellation's dimension. In this thesis, a multidimensional signal set construction paradigm that completely avoids the constellation expansion penalty is applied in band-limited channels and in fading channels. As such, theoretical work on performance analysis and computer simulations for Quadrature-Quadrature Phase Shift Keying (Q2PSK), Constant Envelope (CE) Q2PSK, and trellis-coded 16D CEQ2PSK in ideal band-limited channels of various bandwidths is presented, along with a novel discussion of visualization techniques for 4D Q2PSK, Saha's CEQ2PSK, and Cartwright's CEQ2PSK in ideal band-limited channels. Furthermore, a metric designed for use in fading channels, with Hamming distance (HD) as the primary concern and Euclidean distance (ED) as secondary, is also introduced. Simulation results show that the 16D TCM CEQ2PSK system performs well in channels with AWGN and fading, even with the simplest convolutional encoder tested; achievable coding gains using 16D CEQ2PSK expanded TCM schemes under various conditions are finally reported.
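
    As a small illustration of the distance quantities discussed above, the Python sketch below computes the minimum squared Euclidean distance (MSED) of a constellation; the 4D hypercube set in the example is hypothetical and is not the Q2PSK/CEQ2PSK signal sets studied in the thesis.

        import numpy as np
        from itertools import combinations, product

        def min_squared_euclidean_distance(points):
            """Minimum squared Euclidean distance over all pairs of points."""
            return min(float(np.sum((a - b) ** 2))
                       for a, b in combinations(points, 2))

        # Hypothetical unit-energy 4-D hypercube constellation (components +/- 1/2)
        points = [np.array(p) / 2.0 for p in product([-1, 1], repeat=4)]
        msed = min_squared_euclidean_distance(points)   # 1.0 for this set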

    Advances in Patient Classification for Traditional Chinese Medicine: A Machine Learning Perspective

    Get PDF
    As a complementary and alternative medicine in the medical field, traditional Chinese medicine (TCM) has drawn great attention both domestically and overseas. In practice, TCM provides a methodology for patient diagnosis and treatment that is quite distinct from that of Western medicine (WM). A syndrome (ZHENG, or pattern) is differentiated from a set of symptoms and signs examined in an individual by four main diagnostic methods: inspection, auscultation and olfaction, interrogation, and palpation; these reflect the pathological and physiological changes of disease occurrence and development. Patient classification divides patients into several classes based on different criteria. In this paper, from the machine learning perspective, we survey the patient classification problem across three major aspects of TCM: sign classification, syndrome differentiation, and disease classification. With consideration of different diagnostic data analyzed by different computational methods, we present an overview of four subfields of TCM diagnosis. For each subfield, we design a rectangular reference list with applications in the horizontal direction and machine learning algorithms in the vertical direction. In light of the current development of objective TCM diagnosis for patient classification, a discussion of the research issues around machine learning techniques with applications to TCM diagnosis is given to facilitate further research on TCM patient classification.

    Unsupervised Similarity-Based Risk Stratification for Cardiovascular Events Using Long-Term Time-Series Data

    Get PDF
    In medicine, one often bases decisions upon a comparative analysis of patient data. In this paper, we build upon this observation and describe similarity-based algorithms to risk stratify patients for major adverse cardiac events. We evolve the traditional approach of comparing patient data in two ways. First, we propose similarity-based algorithms that compare patients in terms of their long-term physiological monitoring data. Symbolic mismatch identifies functional units in long-term signals and measures changes in the morphology and frequency of these units across patients. Second, we describe similarity-based algorithms that are unsupervised and do not require comparisons to patients with known outcomes for risk stratification. This is achieved by using an anomaly detection framework to identify patients who are unlike other patients in a population and may potentially be at an elevated risk. We demonstrate the potential utility of our approach by showing how symbolic mismatch-based algorithms can be used to classify patients as being at high or low risk of major adverse cardiac events by comparing their long-term electrocardiograms to those of a large population. We describe how symbolic mismatch can be used in three different existing methods: one-class support vector machines, nearest neighbor analysis, and hierarchical clustering. When evaluated on a population of 686 patients with available long-term electrocardiographic data, symbolic mismatch-based comparative approaches were able to identify patients at roughly a two-fold increased risk of major adverse cardiac events in the 90 days following acute coronary syndrome. These results were consistent even after adjusting for other clinical risk variables. (National Science Foundation (U.S.), CAREER Award 1054419)
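
    A minimal sketch of the unsupervised, anomaly-detection style of risk stratification described above is given below, using a one-class SVM on a generic per-patient feature matrix; the symbolic-mismatch comparison itself is not reproduced here, so the feature representation is an assumption.

        # Flag patients that lie far from the bulk of the population as
        # potentially high risk (an assumption-laden stand-in for the paper's
        # symbolic-mismatch-based comparisons).
        import numpy as np
        from sklearn.svm import OneClassSVM

        def stratify(features, anomaly_fraction=0.1):
            model = OneClassSVM(kernel="rbf", nu=anomaly_fraction, gamma="scale")
            model.fit(features)
            scores = model.decision_function(features)   # lower = more anomalous
            return scores < 0                            # True = potentially high risk

        # rng = np.random.default_rng(0)
        # high_risk = stratify(rng.normal(size=(686, 20)))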

    Evaluation and Analysis of Array Antennas for Passive Coherent Location (PCL) Systems

    Get PDF
    Passive Coherent Location (PCL) systems use a special form of radar receiver that exploits ambient radiation in the environment to detect and track targets. Typical transmissions of opportunity that might be exploited include television and FM radio broadcasts. PCL implies the use of non-radar electromagnetic sources of illumination, such as commercial radio or television broadcasts, also referred to as transmitters of opportunity. The use of such illumination sources means that the receiver needs to process waveforms that are not designed for radar purposes. As a consequence, receivers for PCL systems must be much more customized than traditional receivers in order to obtain the best possible signal. Since antennas are the eyes of the receiver, processing of an incoming signal starts with the antennas. Yet, because PCL systems are non-traditional, little work has been done on the evaluation of these antennas, even though PCL systems place some demanding constraints on the antenna system. In this research, various array antenna designs are studied in terms of their radiation patterns, gain, input impedance, power efficiency, and other features by simulating these arrays in a computer environment. The goal is to show that array antennas perform better than the traditional Yagi-Uda antennas currently used for PCL systems.
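
    As a simple example of the kind of quantity such simulations evaluate, the Python sketch below computes the radiation pattern (array factor) of a uniform linear array with half-wavelength element spacing; the element count and steering angle are illustrative assumptions rather than parameters from this research.

        import numpy as np

        def array_factor(theta, n_elements=8, d_over_lambda=0.5, steer_deg=0.0):
            """Normalised |AF| versus angle theta (radians from broadside)."""
            k_d = 2.0 * np.pi * d_over_lambda
            steer = np.deg2rad(steer_deg)
            n = np.arange(n_elements)[:, None]
            phase = k_d * n * (np.sin(theta) - np.sin(steer))
            return np.abs(np.exp(1j * phase).sum(axis=0)) / n_elements

        theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
        pattern_db = 20.0 * np.log10(np.maximum(array_factor(theta), 1e-6))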

    A novel power management and control design framework for resilient operation of microgrids

    Get PDF
    This thesis concerns the integration of the microgrid, a form of future electric grid, with renewable energy sources and electric vehicles (EVs). It presents an innovative, modular, tri-level hierarchical management and control design framework for the future grid as a radical departure from the ‘centralised’ paradigm of conventional systems, by capturing and exploiting the unique characteristics of a host of new actors in the energy arena: renewable energy sources, storage systems and electric vehicles. The formulation of the tri-level hierarchical management and control design framework involves a new perspective on the power management of EVs within a microgrid, with consideration of, among others, the bi-directional energy flow between storage and renewable sources. The chronological structure of the tri-level hierarchical operation facilitates a modular power management and control framework across three levels: Microgrid Operator (MGO), Charging Station Operator (CSO), and Electric Vehicle Operator (EVO). At the top level, the MGO handles long-term decisions of balancing the power flow between the Distributed Generators (DGs) and the electrical demand for a restructured, realistic microgrid model. Optimal scheduling of the DGs and EVs is used within the MGO to minimise the total combined operating and emission costs of a hybrid microgrid, including the unit commitment strategy. The results have convincingly revealed that discharging EVs can reduce the total cost of microgrid operation. At the middle level, the CSO manages medium-term decisions of centralising the operation of aggregated EVs connected to the bus-bar of the microgrid. An energy management concept for charging or discharging EV power in different situations, including the impacts of frequency and voltage deviation on the system, is developed upon the MGO model above. Comprehensive case studies show that the EVs can act as a regulator of the microgrid, and can control their participating role by discharging active or reactive power to mitigate frequency and/or voltage deviations. Finally, at the low level, the EVO handles the short-term decisions of decentralising the functioning of an EV and the essential power interfacing circuitry, as well as the generation of low-level switching functions. The EVO level is a novel Power and Energy Management System (PEMS), which is further structured into three modular, hierarchical processes: Energy Management Shell (EMS), Power Management Shell (PMS), and Power Electronic Shell (PES). The shells operate chronologically, each with a different objective and a different time horizon. Controlling the power electronics interfacing circuitry is an essential part of the integration of EVs into the microgrid within the EMS. A modified multi-level H-bridge cascade inverter without a main (bulky) inductor is proposed to achieve good performance, high power density, and high efficiency. The proposed inverter can operate with multiple energy resources connected in series to create a synergised energy system. In addition, the integration of EVs into a simulated microgrid environment via a modified multi-level architecture with a novel method of Space Vector Modulation (SVM) implemented by the PES is validated experimentally. The results from the SVM implementation demonstrate a viable alternative switching scheme for high-performance inverters in EV applications.
The comprehensive simulation results from the MGO and CSO models, together with the experimental results at the EVO level, not only validate the distinctive functionality of each layer within a novel synergy to harness multiple energy resources, but also provide compelling evidence for the potential of the proposed energy management and control framework in the design of future electric grids. The design framework provides an essential basis for grid modernisation.
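
    As a highly simplified illustration of the MGO-level scheduling idea, the Python sketch below solves a single-period economic dispatch that minimises combined operating and emission cost subject to a power balance; the generator data and the treatment of EV discharge are illustrative assumptions, and the thesis model additionally covers unit commitment and multi-period scheduling.

        import numpy as np
        from scipy.optimize import linprog

        op_cost = np.array([40.0, 55.0, 25.0])   # $/MWh for DG1, DG2, EV discharge
        em_cost = np.array([12.0, 8.0, 0.0])     # emission cost, $/MWh
        p_max = np.array([5.0, 4.0, 2.0])        # MW upper limits
        demand = 8.0                             # MW load to be served

        res = linprog(c=op_cost + em_cost,
                      A_eq=np.ones((1, 3)), b_eq=[demand],
                      bounds=list(zip(np.zeros(3), p_max)),
                      method="highs")
        dispatch = res.x   # cheapest feasible mix of DG and EV power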

    Finite Element Analysis of Lamb Waves Acting within a Thin Aluminum Plate

    Get PDF
    Structural health monitoring (SHM) is an emerging technology that can be used to identify, locate and quantify structural damage before failure. Among SHM techniques, Lamb waves have become widely used since they can cover large areas from a single location. With the development of various structural simulation programs, there is increasing interest in whether SHM data obtained from simulation can be verified by experimentation. The objective of this thesis is to determine Lamb wave responses using SHM models in ABAQUS CAE (a Finite Element Analysis (FEA) program). These results are then compared to experimental results and theoretical predictions under isothermal and thermal-gradient conditions in order to assess the sensitivity of piezo-generated Lamb wave propagation. Simulations of isothermal tests are conducted over a temperature range of 0-190 deg F with 100 kHz and 300 kHz excitation signal frequencies. The changes in temperature-dependent material properties are correlated to measurable differences in the response signal's waveform and propagation speed. An analysis of the simulated signal response data demonstrates that elevated temperatures delay wave propagation, although the delays are minimal at the temperatures tested in this study.
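
    A typical excitation for piezo-generated Lamb waves in studies of this kind is a Hann-windowed sinusoidal tone burst; the Python sketch below generates such a burst at the two frequencies mentioned. The cycle count and sampling rate are common choices and are assumptions here, not values taken from the thesis.

        import numpy as np

        def tone_burst(freq_hz, n_cycles=5, fs=10e6):
            """Hann-windowed tone burst of n_cycles at freq_hz, sampled at fs."""
            duration = n_cycles / freq_hz
            t = np.arange(0.0, duration, 1.0 / fs)
            window = 0.5 * (1.0 - np.cos(2.0 * np.pi * t / duration))   # Hann window
            return t, window * np.sin(2.0 * np.pi * freq_hz * t)

        t100, s100 = tone_burst(100e3)   # 100 kHz excitation
        t300, s300 = tone_burst(300e3)   # 300 kHz excitation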

    Modeling and Optimization Algorithm for SiC-based Three-phase Motor Drive System

    Get PDF
    More electric aircraft (MEA) and electrified aircraft propulsion (EAP) have become important topics in the area of transportation electrification, promising remarkable environmental and economic benefits. However, they bring urgent challenges for power electronics design, since the new power architectures in electrified aircraft require many benchmark designs and comparisons. Also, a large number of power electronics converter designs with different specifications and system-level configurations need to be conducted for MEA and EAP, which demands huge design effort and cost. Moreover, the long debugging and testing process increases the time to market because of gaps between the paper design and the implementation. To address these issues, this dissertation covers modeling and optimization algorithms for SiC-based three-phase motor drive systems in aviation applications. The improved models help reduce the gaps between the paper design and the implementation, and the implemented optimization algorithms reduce the required execution time of the design program. Models related to magnetic-core-based inductors, geometry layouts, switching behaviors, device loss, and cooling design have been explored and improved, and several modeling techniques, including analytical, numerical, and curve-fitting methods, are applied. With the developed models, more physical characteristics of power electronics components are incorporated, and the design accuracy can be improved. To improve design efficiency and reduce design time, optimization schemes for the filter design, device selection combined with cooling design, and system-level optimization are studied and implemented. For the filter design, two optimization schemes, Ap-based weight prediction and particle swarm optimization, are adopted to reduce the search effort. For device selection and the related cooling design, a design iteration considering practical layouts and switching speed is proposed. For system-level optimization, the design algorithm enables the evaluation of different topologies, modulation schemes, switching frequencies, filter configurations, cooling methods, and paralleled converter structures. To reduce the execution time of system-level optimization, a switching-function-based simulation and waveform synthesis method is adopted. Furthermore, combined with the concept of design automation, software integrating the developed models, optimization algorithms, and simulations is developed to enable visualization of design configurations, database management, and design results.
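
    As an illustration of one of the optimization schemes named above, the Python sketch below implements a generic particle swarm optimiser; the cost function in the usage comment is a stand-in, not the Ap-based weight model or the actual filter objective from the dissertation.

        import numpy as np

        def pso(cost, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimise `cost` over box `bounds` with a basic particle swarm."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
            gbest = pbest[pbest_val.argmin()].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([cost(p) for p in x])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest, pbest_val.min()

        # Example (toy trade-off between an inductance value and switching frequency):
        # best, best_cost = pso(lambda p: p[0] * 1e3 + 1.0 / (p[0] * p[1]),
        #                       bounds=[(1e-5, 1e-3), (1e4, 1e5)])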