
    On the deployment of on-chip noise sensors

    Relentless technology scaling has led to significantly reduced noise margins and increasingly complex functionality. As such, design-time techniques per se are less likely to ensure power integrity, resulting in runtime voltage emergencies. To alleviate the issue, several recent works have shed light on the possibilities of dynamic noise-management systems. Most of these works rely on on-chip noise sensors to accurately capture voltage emergencies. However, they all assume that the placement of the sensors is given; how to optimally place a given number of noise sensors for the best voltage-emergency detection remains an open problem in the literature. The problem of noise sensor placement is first defined, along with a novel sensing quality metric (SQM) to be maximized. The threshold voltage at which noise sensors report emergencies serves as a critical tuning knob between the system failure rate and the false-alarm rate. The problem of minimizing the system alarm rate subject to a given system failure-rate constraint is then formulated. It is further shown that, with the help of IDDQ measurements during testing, which reveal process-variation information, it is possible to efficiently compute a per-chip optimal threshold voltage. In the third chapter, a novel framework is proposed to predict the resonance frequency using existing on-chip noise sensors, based on the theory of 1-bit compressed sensing. The proposed framework can help obtain the resonance frequency of individual chips so as to effectively avoid resonance noise at runtime --Abstract, page iii
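As a rough illustration of the threshold trade-off described above, the following Python sketch picks the largest sensor threshold whose miss rate stays under a failure-rate constraint. The droop distributions, the 0.08 V emergency definition, and the 1% constraint are all hypothetical, not values from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical droop samples (V below nominal): benign activity plus
# rarer deep droops that constitute true voltage emergencies.
droops = np.concatenate([rng.normal(0.03, 0.01, 9000),   # benign noise
                         rng.normal(0.12, 0.02, 1000)])  # emergencies
is_emergency = droops > 0.08                              # ground-truth label

def rates(threshold):
    """Alarm rate and failure (missed-emergency) rate at a threshold."""
    alarms = droops > threshold
    alarm_rate = alarms.mean()                 # how often we interrupt
    missed = is_emergency & ~alarms
    failure_rate = missed.sum() / max(is_emergency.sum(), 1)
    return alarm_rate, failure_rate

# Choose the highest threshold (fewest alarms) whose failure rate stays
# under the constraint -- the per-chip tuning knob from the abstract.
candidates = np.linspace(0.02, 0.12, 101)
feasible = [t for t in candidates if rates(t)[1] <= 0.01]
best = max(feasible)
```

Raising the threshold trades fewer false alarms against more missed emergencies; the per-chip IDDQ information described in the abstract would shift where this optimum sits for each die.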

    Design and Implementation of Smart Sensors with Capabilities of Process Fault Detection and Variable Prediction

    A typical sensor consists of a sensing element and a transmitter. The major functions of a transmitter are limited to data acquisition and communication. Recently developed transmitters with 'smart' functions have focused on easy setup and maintenance of the transmitter itself, such as self-calibration and self-configuration. Recognizing the growing computational capabilities of the microcontroller units (MCUs) used in these transmitters, and their underutilized computational resources, this thesis investigates the feasibility of adding functionality to a transmitter to make it 'smart' without modifying its footprint or adding supplementary hardware. Hence, a smart sensor is defined as sensing elements combined with a smart transmitter. The added functionality enhances a smart sensor with respect to process fault detection and variable prediction. The thesis starts with a literature review to identify the state of the art in this field and to determine potential industry needs for the added functionality. Particular attention has been paid to an existing commercial temperature transmitter, the NCS-TT105 from Microcyber Corporation. Its internal hardware architecture, software execution environment, and the additional computational resources available for accommodating new functions have been examined in detail. Furthermore, the algorithms for realizing process fault detection and variable prediction have been examined from both theoretical and feasibility perspectives for incorporation onboard the NCS-TT105. An important body of the thesis is the implementation of the additional functions in the MCUs of the NCS-TT105 by allocating real-time execution of different tasks with assigned priorities in the real-time operating system (RTOS). The enhanced NCS-TT105 has gone through extensive evaluation on a physical process-control test facility under various normal and fault conditions.
The test results are satisfactory, and the design specifications have been achieved. To the best knowledge of the author, this is the first time that process fault detection and variable prediction have been implemented directly onboard a commercial transmitter. The enhanced smart transmitter is capable of providing information about incipient faults in the process and future changes of critical process variables. It is believed that this is an initial step towards the realization of distributed intelligence in process control, where important decisions regarding the process can be made at the sensor level
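The abstract does not specify the fault-detection scheme used onboard, so the following is only a hedged sketch of the kind of lightweight detector an MCU-class transmitter could run online: each sample is compared against an EWMA prediction, and residuals beyond a noise scale estimated from an assumed fault-free warmup window are flagged. All parameters and data here are invented for illustration:

```python
import numpy as np

def detect_faults(measurements, alpha=0.2, k=6.0, warmup=50):
    """Flag samples whose deviation from an EWMA prediction exceeds
    k times a noise scale estimated from a warmup window that is
    assumed to be fault-free."""
    m = np.asarray(measurements, dtype=float)
    sigma = np.std(np.diff(m[:warmup]))        # sample-to-sample noise scale
    ewma = m[0]
    flags = np.zeros(len(m), dtype=bool)
    for i, x in enumerate(m):
        flags[i] = abs(x - ewma) > k * sigma   # residual test
        ewma = alpha * x + (1 - alpha) * ewma  # update prediction
    return flags

rng = np.random.default_rng(1)
temps = np.concatenate([20.0 + 0.05 * rng.standard_normal(200),
                        np.full(20, 35.0)])    # abrupt fault at sample 200
flags = detect_faults(temps)
```

A scheme of this shape needs only a handful of state variables per channel, which is what makes it plausible within the spare MCU resources the thesis describes.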

    Combining DNA Methylation with Deep Learning Improves Sensitivity and Accuracy of Eukaryotic Genome Annotation

    Thesis (Ph.D.) - Indiana University, School of Informatics, Computing, and Engineering, 2020The genome assembly process has significantly decreased in computational complexity since the advent of third-generation long-read technologies. However, genome annotations still require significant manual effort from scientists to produce trust-worthy annotations required for most bioinformatic analyses. Current methods for automatic eukaryotic annotation rely on sequence homology, structure, or repeat detection, and each method requires a separate tool, making the workflow for a final product a complex ensemble. Beyond the nucleotide sequence, one important component of genetic architecture is the presence of epigenetic marks, including DNA methylation. However, no automatic annotation tools currently use this valuable information. As methylation data becomes more widely available from nanopore sequencing technology, tools that take advantage of patterns in this data will be in demand. The goal of this dissertation was to improve the annotation process by developing and training a recurrent neural network (RNN) on trusted annotations to recognize multiple classes of elements from both the reference sequence and DNA methylation. We found that our proposed tool, RNNotate, detected fewer coding elements than GlimmerHMM and Augustus, but those predictions were more often correct. When predicting transposable elements, RNNotate was more accurate than both Repeat-Masker and RepeatScout. Additionally, we found that RNNotate was significantly less sensitive when trained and run without DNA methylation, validating our hypothesis. To our best knowledge, we are not only the first group to use recurrent neural networks for eukaryotic genome annotation, but we also innovated in the data space by utilizing DNA methylation patterns for prediction
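One plausible way to feed both sequence and methylation to an RNN, as the abstract describes, is a one-hot base encoding with an extra methylation channel. This is an illustrative encoding only; RNNotate's actual feature format may differ:

```python
import numpy as np

BASES = "ACGT"

def encode(seq, methylated_positions):
    """Encode a DNA sequence as a (length, 5) matrix: four one-hot base
    channels plus a fifth channel marking methylated positions, the kind
    of joint sequence+epigenetics input an annotation RNN could consume."""
    x = np.zeros((len(seq), 5))
    for i, b in enumerate(seq.upper()):
        if b in BASES:
            x[i, BASES.index(b)] = 1.0   # one-hot nucleotide channel
    for p in methylated_positions:
        x[p, 4] = 1.0                    # methylation call (e.g. from nanopore)
    return x

x = encode("ACGGTCGA", methylated_positions=[3, 6])
```

The point of the joint encoding is that the network sees methylation in register with the sequence, so it can learn, for example, that gene bodies and repeats carry different methylation patterns.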

    Community Detection in Complex Networks

    The stochastic block model is a powerful tool for inferring community structure from network topology. However, the simple block model treats community structure as the only underlying attribute shaping the relational interactions among nodes; this makes it prefer a Poisson degree distribution within each community, while most real-world networks have a heavy-tailed degree distribution. This is essentially because the simple assumption underlying the traditional block model is inconsistent with real-world settings where factors other than community membership, such as overall popularity, also heavily affect the pattern of relational interactions. The degree-corrected block model can accommodate arbitrary degree distributions within communities by taking nodes' popularity, or degree, into account. But since it takes the vertex degrees as parameters rather than generating them, it cannot use them to help classify the vertices, and its natural generalization to directed graphs cannot even use the orientations of the edges. We developed several variants of the block model with the best of both worlds: they can use vertex degrees and edge orientations in the classification process while tolerating heavy-tailed degree distributions within communities. We show that for some networks, including synthetic networks and networks of word adjacencies in English text, these new block models achieve a higher accuracy than either standard or degree-corrected block models. Another part of my work is the development of even more general block models, which incorporate other attributes of the nodes. Many data sets contain rich information about objects, as well as pairwise relations between them. For instance, in networks of websites, scientific papers, patents, and other documents, each node has content consisting of a collection of words, as well as hyperlinks or citations to other nodes.
In order to perform inference on such data sets, and to make predictions and recommendations, it is useful to have models that can capture the processes generating the text at each node as well as the links between nodes. Our work combines classic ideas in topic modeling with a variant of the mixed-membership block model recently developed in the statistical physics community. The resulting model has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm. We test our model on three data sets, performing unsupervised topic classification and link prediction. For both tasks, our model outperforms several existing state-of-the-art methods, achieving higher accuracy with significantly less computation, analyzing a data set with 1.3 million words and 44 thousand links in a few minutes
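The degree-corrected model discussed above comes with a simple partition score: the Karrer-Newman log-likelihood, which rewards partitions whose between-group edge counts deviate from what group degrees alone would predict. A minimal sketch, scoring a planted two-clique split against a mixed assignment:

```python
import numpy as np

def dcsbm_loglike(adj, groups):
    """Unnormalized degree-corrected SBM log-likelihood
    (Karrer & Newman 2011): sum over group pairs (r, s) of
    m_rs * log(m_rs / (kappa_r * kappa_s)), where m_rs counts adjacency
    entries between groups and kappa_r is the total degree of group r.
    Higher means a better-fitting partition."""
    groups = np.asarray(groups)
    deg = adj.sum(axis=1)
    score = 0.0
    for r in np.unique(groups):
        for s in np.unique(groups):
            m_rs = adj[np.ix_(groups == r, groups == s)].sum()
            kappa = deg[groups == r].sum() * deg[groups == s].sum()
            if m_rs > 0:
                score += m_rs * np.log(m_rs / kappa)
    return score

# Two 3-node cliques joined by a single bridge edge.
A = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
planted = dcsbm_loglike(A, [0, 0, 0, 1, 1, 1])  # the true split
mixed   = dcsbm_loglike(A, [0, 1, 0, 1, 0, 1])  # an arbitrary split
```

The planted split scores higher than the mixed one, which is the objective a community-detection search would climb; the thesis's variants extend this score to use degrees and edge orientations generatively.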

    Divergent Plume Reduction of a High-Efficiency Multistage Plasma Thruster

    High-Efficiency Multistage Plasma Thrusters (HEMPTs) are a relatively new form of electric propulsion that show promise for use on a variety of missions and have several advantages over older EP competitors. One such advantage is their long predicted lifetime and minimal wall erosion, due to a unique periodic permanent magnet system. A laboratory HEMPT was built and donated by JPL for testing at Cal Poly. Previous work characterized the performance of this thruster and found it to exhibit a large plume divergence, resulting in decreased thrust and specific impulse. This thesis explores the design and application of a magnetic shield to modify the thruster's magnetic field and force more ion current towards the centerline. A previous Cal Poly thesis explored the same concept, and that work is continued and furthered here. The previous thesis tested a shield which increased centerline current but decreased performance; a new shield design intended to avoid this performance decrease is studied here. Magnetic modelling of the thruster was performed using COMSOL. The model was verified using gaussmeters to measure the field strength at many discrete points within and near the HEMPT, with a focus on the ionization channel and exit plane. A shield design expected to significantly reduce the radial field strength at the exit plane without affecting the ionization-channel field was modelled and implemented. The HEMPT was tested in a vacuum chamber with and without the shield to characterize any change in performance. Data were collected using a nude Faraday probe and a retarding potential analyzer (RPA). The data show a significant increase in centerline current with the application of the shield, but due to an RPA malfunction and a thruster failure the actual change in performance could not be determined. The unshielded HEMPT was characterized, however, and was found to produce 12.1 +/- 1.3 mN of thrust with a specific impulse of 1361 +/- 147 s.
The thruster operated at a total efficiency of 10.63 +/- 3.66%, much lower than expected. A large contributor to this low efficiency is likely the use of argon in place of xenon: argon's lower mass and higher ionization energy make it a less efficient propellant choice. Further, the thruster is prone to overheating, indicating that significant thermal losses are present in this design
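The measured thrust and specific impulse quoted above imply a propellant mass flow rate through the textbook relation F = mdot * Isp * g0. This is a simple rearrangement for illustration, not a calculation taken from the thesis:

```python
G0 = 9.80665  # standard gravity, m/s^2

def mass_flow(thrust_n, isp_s):
    """Propellant mass flow rate implied by thrust and specific impulse,
    from F = mdot * Isp * g0."""
    return thrust_n / (isp_s * G0)

# Measured unshielded performance from the abstract: 12.1 mN, 1361 s
mdot = mass_flow(12.1e-3, 1361.0)   # kg/s
mdot_mg_s = mdot * 1e6              # roughly 0.9 mg/s of argon
```

Sub-milligram-per-second flow rates like this are typical of small electric thrusters, which is why EP trades high thrust for very high specific impulse.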

    Estimating UK House Prices using Machine Learning

    House price estimation is an important subject for property owners, property developers, investors, and buyers. It has featured in many academic research papers and some government and commercial reports. The price of a house may vary depending on several features, including geographic location, tenure, age, type, size, and market, among others. Existing studies have largely focused on applying single or multiple machine learning techniques to single datasets or groups of datasets to identify the best-performing algorithms, models, and/or most important predictors; this paper instead proposes a cumulative layering approach in what it describes as a Multi-feature House Price Estimation (MfHPE) framework. The MfHPE is a process-oriented, data-driven, machine-learning-based framework that does not just identify the best-performing algorithms or the features that drive model accuracy; it also exploits a cumulative multi-feature layering approach to creating, optimising, and evaluating machine learning models so as to produce tangible insights that support decision-making for stakeholders within the housing ecosystem and a more realistic estimation of house prices. Fundamentally, the development of the MfHPE framework leverages the Design Science Research Methodology (DSRM), and HM Land Registry's Price Paid Data is ingested as the base transactions data. 1.1 million London-based transaction records between January 2011 and December 2020 have been used for model design, optimisation, and evaluation, while 84,051 transactions from 2021 have been used for model validation.
With the capacity for updates to existing datasets and the introduction of new datasets and algorithms, the proposed framework has also leveraged a range of neighbourhood and macroeconomic features, including the locations of rail stations, supermarkets, and bus stops, the inflation rate, GDP, the employment rate, the Consumer Price Index (CPIH), and the unemployment rate, to explore their impact on the estimation of house prices and their influence on the behaviour of machine learning algorithms. Five machine learning algorithms have been used and three evaluation metrics applied. Results show that the layered introduction of new varieties of features in multiple tiers led to improved performance in 50% of the models and a change in the best-performing models as new features were introduced, and that the choice of evaluation metrics should not be based on technical problem types alone but on three components: (i) critical business objectives or project goals; (ii) the variety of features; and (iii) the machine learning algorithms
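The abstract does not name its three evaluation metrics; assuming the common regression trio (RMSE, MAE, R^2), they can be computed as below. The price data here are made up for illustration:

```python
import numpy as np

def rmse(y, p):
    """Root mean squared error: penalises large misses quadratically."""
    return float(np.sqrt(np.mean((y - p) ** 2)))

def mae(y, p):
    """Mean absolute error: average miss in price units."""
    return float(np.mean(np.abs(y - p)))

def r2(y, p):
    """Coefficient of determination: variance explained by the model."""
    ss_res = np.sum((y - p) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

y = np.array([250_000, 420_000, 310_000, 515_000], dtype=float)  # actual
p = np.array([260_000, 400_000, 305_000, 530_000], dtype=float)  # predicted

scores = {"RMSE": rmse(y, p), "MAE": mae(y, p), "R2": r2(y, p)}
```

Because RMSE and MAE rank models differently when errors are skewed, evaluating each feature tier under all three metrics, as the framework does, guards against a metric-specific "best" model.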

    Characterization and Avoidance of Critical Pipeline Structures in Aggressive Superscalar Processors

    In recent years, with only small fractions of a modern processor now accessible in a single cycle, computer architects constantly fight against signal-propagation issues across the die. Unfortunately this trend continues to shift inward, and now even the most internal features of the pipeline are designed around communication, not computation. To address the inward creep of this constraint, this work focuses on the characterization of communication within the pipeline itself, architectural techniques to avoid it when possible, and layout co-design for early detection of problems. I present a novel detection tool for common-case operand movement which can rapidly characterize an application's dataflow patterns. The results are suitable for exploitation, as a small number of patterns can describe a significant portion of modern applications. Work on dynamic dependence collapsing takes the observations from the pattern results and shows how certain groups of operations can be dynamically grouped, avoiding unnecessary communication between individual instructions. This technique also amplifies the efficiency of pipeline data structures such as the reorder buffer, increasing both IPC and frequency. I also identify the same sets of collapsible instructions at compile time, producing the same benefits with minimal hardware complexity. This is done in a backward-compatible manner, as the groups are exposed by simple reordering of the binary's instructions. I present aggressive pipelining approaches for these resources which avoid the critical timing often presumed necessary in aggressive superscalar processors. As these structures are designed for the worst case, pipelining them can produce a greater frequency benefit than the IPC loss it incurs. I also use the observation that the dynamic issue order for instructions in aggressive superscalar processors is predictable.
Thus, a hardware mechanism is introduced for efficiently caching the wakeup order for groups of instructions. These wakeup vectors are then used to speculatively schedule instructions, avoiding dynamic scheduling when it is not necessary. Finally, I present a novel approach to fast and high-quality chip layout. By allowing architects to quickly evaluate what-if scenarios during early high-level design, chip designs are less likely to encounter implementation problems later in the process.
Ph.D. Committee Chair: Scott Wills; Committee Member: David Schimmel; Committee Member: Gabriel Loh; Committee Member: Hsien-Hsin Lee; Committee Member: Yorai Ward
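As a toy model of the dependence-collapsing idea above (the thesis's actual grouping rules are richer and operate in hardware), the sketch below scans a register-transfer sequence for producer/consumer pairs where the produced value is read exactly once, the simplest case a collapsing stage could fuse:

```python
from collections import Counter

def collapsible_pairs(instrs):
    """Find producer/consumer pairs that a dependence-collapsing stage
    could fuse: the producer's destination register is read by exactly
    one later instruction.  Instructions are (dest, src1, src2) tuples.
    (Toy single-consumer criterion, for illustration only.)"""
    uses = Counter()
    for _, s1, s2 in instrs:
        uses[s1] += 1
        uses[s2] += 1
    pairs = []
    for i, (dest, _, _) in enumerate(instrs):
        consumers = [j for j in range(i + 1, len(instrs))
                     if dest in instrs[j][1:]]
        if uses[dest] == 1 and len(consumers) == 1:
            pairs.append((i, consumers[0]))
    return pairs

# r3 = r1+r2; r4 = r3+r5; r6 = r4+r4 -- r3 has one reader, r4 has two
prog = [("r3", "r1", "r2"), ("r4", "r3", "r5"), ("r6", "r4", "r4")]
pairs = collapsible_pairs(prog)
```

Fusing such a pair removes one operand broadcast and frees a reorder-buffer slot, which is the communication saving the abstract describes.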

    Power System Stability Analysis Using Wide Area Measurement System

    Advances in wide area measurement systems have transformed power system operation from simple visualization, state estimation, and post-mortem analysis tools to real-time protection and control at the system level. Transient disturbances (such as lightning strikes) exist only for a fraction of a second but create transient stability issues and often trigger cascading-type failures. The most common practice to prevent instabilities is local generator out-of-step protection. Unfortunately, out-of-step protection at the generator may not operate fast enough, and an instability may take down nearby generators and the rest of the system by the time the local generator relay operates. Hence, it is important to assess power system stability over transmission lines as soon as the transient instability is detected, instead of relying on purely localized out-of-step protection in generators. This thesis proposes a synchrophasor-based out-of-step prediction methodology at the transmission line level using wide area measurements from optimal phasor measurement unit (PMU) locations in the interconnected system. Voltage and current measurements from wide area measurement systems (WAMS) are utilized to find the swing angles. The proposed scheme was used to predict the first-swing out-of-step condition in a Western Systems Coordinating Council (WSCC) 9 bus power system. A coherency analysis was first performed in this multi-machine system to determine the two coherent groups of generators. The coherent generator groups were then represented by a two-machine equivalent system, and the synchrophasor-based out-of-step prediction algorithm was applied to the reduced equivalent system. The coherency among the groups of generators was determined within 100 ms for the contingency scenarios tested. The proposed technique is able to predict the instability 141.66 to 408.33 ms before the system actually reaches out-of-step conditions.
The power swing trajectory is either a steady-state trajectory, a monotonically increasing type (when the system becomes unstable), or an oscillatory type (under stable conditions). Under large disturbance conditions, the swing can also become non-stationary. The mean and variance of the signal are not constant when it is monotonically increasing or non-stationary. An autoregressive integrated (ARI) approach was therefore developed in this thesis, in which successive samples are differenced to make the mean and variance constant and facilitate time-series prediction of the swing curve. Electromagnetic transient simulations with a real-time digital simulator (RTDS) were used to test the accuracy of the proposed algorithm in predicting transient instability conditions. The studies show that the proposed method is computationally efficient and accurate for larger power systems. The proposed technique was also compared with a conventional two-blinder technique and the swing center voltage method. The proposed method was also implemented with actual PMU measurements from a relay (a General Electric (GE) N60 relay). The testing was carried out with an interface between the N60 relay and the RTDS. The WSCC 9 bus system was modeled in the simulator, and the analog time signals from the optimal location in the network were communicated to the N60 relay. The synchrophasor data from the PMUs in the N60 were used to back-calculate the rotor angles of the generators in the system. Once coherency was established, the swing curves for the coherent groups of generators were found from time-series prediction (the ARI model). The test results with the actual PMUs match well with the results obtained from virtual PMU-based testing in the RTDS. The calculation times for the time-series prediction are also very small.
This thesis also discusses a novel out-of-step detection technique, investigated in the course of this work for an IEEE Power Systems Relaying Committee J-5 Working Group document, that uses real-time measurements of generator accelerating power. Taking the derivative or second derivative of a measurement variable significantly amplifies the noise term, which has limited the practical application of some methods in the literature, such as those based on local measurements of voltage or voltage deviations at generator terminals. Another problem with the voltage-based methods is that, when averaging over a period, the intermediate values cancel out and, as a result, only the first and last sample values are used to find the speed; the samples in between are effectively unused. The first solution proposed to overcome this is a polynomial fit of the calculated derivative points (to estimate speed). The second solution is the integral-of-accelerating-power method, which eliminates taking a derivative altogether. This technique exploits the direct relationship of electrical power deviation to rotor acceleration, and of the integral of accelerating power to generator speed deviation. The accelerating-power changes are straightforward to measure, and the values obtained are more stable during transient conditions. A single machine infinite bus (SMIB) system was used to verify the proposed local-measurement-based method
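The integral-of-accelerating-power relation described above follows from the per-unit swing equation, d(dw)/dt = Pa / (2H), so the speed deviation is obtained by integrating Pa rather than differentiating a noisy measurement. The sketch below integrates numerically; the inertia constant, time step, and disturbance are assumed values, not data from the thesis:

```python
import numpy as np

H = 3.5     # generator inertia constant, s (assumed machine data)
DT = 1e-3   # sampling interval, s (assumed)

def speed_deviation(p_acc_pu):
    """Per-unit speed deviation from the integral of accelerating power:
    dw(t) = (1 / 2H) * integral of Pa dt.  Integration smooths measurement
    noise instead of amplifying it as differentiation would."""
    return np.cumsum(p_acc_pu) * DT / (2 * H)

# Constant 0.2 pu accelerating power sustained for 0.5 s (hypothetical)
pa = np.full(500, 0.2)
dw = speed_deviation(pa)
# closed form for this case: dw(0.5 s) = 0.2 * 0.5 / (2 * 3.5) pu
```

Because the integral accumulates every sample, all intermediate values contribute to the speed estimate, which is exactly the shortcoming of the period-averaging voltage methods that the abstract points out.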