
    Integrating Neural Networks with a Quantum Simulator for State Reconstruction

    We demonstrate quantum many-body state reconstruction from experimental data generated by a programmable quantum simulator, by means of a neural network model incorporating known experimental errors. Specifically, we extract restricted Boltzmann machine (RBM) wavefunctions from data produced by a Rydberg quantum simulator with eight and nine atoms in a single measurement basis, and apply a novel regularization technique to mitigate the effects of measurement errors in the training data. Reconstructions of modest complexity are able to capture one- and two-body observables not accessible to experimentalists, as well as more sophisticated observables such as the Rényi mutual information. Our results open the door to integration of machine learning architectures with intermediate-scale quantum hardware. Comment: 15 pages, 13 figures
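
    As a rough, hedged sketch of the RBM wavefunction ansatz referred to above (not the paper's training procedure or its error-regularization scheme), the snippet below builds a positive RBM wavefunction over eight spins with hypothetical, randomly initialized parameters a, b, W, enumerates the small Hilbert space exactly, and reads off a one-body observable.

```python
import numpy as np
from itertools import product

def rbm_amplitude(v, a, b, W):
    """Unnormalized positive RBM wavefunction amplitude for a configuration v in {0,1}^N."""
    # Hidden units are summed out analytically: p(v) = exp(a.v) * prod_j 2*cosh(b_j + (v W)_j);
    # the real, positive wavefunction is taken as the square root of that weight.
    return np.sqrt(np.exp(a @ v) * np.prod(2.0 * np.cosh(b + v @ W)))

# Hypothetical, randomly initialized parameters for N visible (atoms) and M hidden units.
rng = np.random.default_rng(0)
N, M = 8, 8
a, b = 0.1 * rng.standard_normal(N), 0.1 * rng.standard_normal(M)
W = 0.1 * rng.standard_normal((N, M))

# For 8 atoms the full Hilbert space (2^8 = 256 configurations) can be enumerated exactly.
configs = np.array(list(product([0, 1], repeat=N)), dtype=float)
amps = np.array([rbm_amplitude(v, a, b, W) for v in configs])
probs = amps**2 / np.sum(amps**2)

# Example one-body observable: occupation probability of each site.
print(probs @ configs)
```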

    Measurement of the properties of lossy materials inside a finite conducting cylinder

    A computer code was developed to automatically perform swept-frequency reflection and transmission measurements using an HP5510B Network Analyzer and computer. This software is used in conjunction with a modified high-temperature test rig to obtain reflection measurements from a flat material sample. The software allows data processing to eliminate measurement errors and to obtain a reflection coefficient in the frequency or time domain. A description of the program is presented.
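
    The frequency-to-time-domain processing mentioned above can be illustrated with a small, hypothetical example (this is not the report's code, which drives the network analyzer hardware): a windowed, zero-padded inverse FFT turns a swept-frequency reflection coefficient into a time-domain response in which the sample reflection can be separated, by time gating, from spurious fixture reflections.

```python
import numpy as np

# Hypothetical swept-frequency reflection coefficient: 201 points from 8 to 12 GHz.
freqs = np.linspace(8e9, 12e9, 201)
df = freqs[1] - freqs[0]
tau = 1.2e-9                                          # assumed round-trip delay to the sample (s)
gamma = 0.30 * np.exp(-2j * np.pi * freqs * tau)      # sample reflection
gamma += 0.05 * np.exp(-2j * np.pi * freqs * 3.0e-9)  # spurious late reflection (e.g. fixture)

# Window to reduce ringing, then zero-padded inverse FFT into the time domain.
n = 1024
window = np.hanning(len(freqs))
time_response = np.fft.ifft(gamma * window, n=n)
t = np.arange(n) / (n * df)                           # time axis of the IFFT grid

# Time-gate around the expected sample delay to isolate its reflection.
gate = np.abs(t - tau) < 0.5e-9
peak_time = t[np.argmax(np.abs(time_response) * gate)]
print(f"dominant gated reflection at {peak_time * 1e9:.2f} ns")
```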

    A Matrix-Variate t Model for Networks

    Networks represent a useful tool to describe relationships among financial firms, and network analysis has been used extensively in recent years to study financial connectedness. An aspect that is often neglected is that network observations come with errors from different sources, such as estimation and measurement errors, so a proper statistical treatment of the data is needed before network analysis can be performed. We show that node centrality measures can be heavily affected by random errors and propose a flexible model based on the matrix-variate t distribution, together with a Bayesian inference procedure, to de-noise the data. We provide an application to a network among European financial institutions.
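
    The sensitivity of centrality measures to random errors can be illustrated with the simple sketch below. It is not the paper's matrix-variate t model or its Bayesian de-noising procedure; it only adds heavy-tailed (Student-t) noise, as a crude stand-in, to a hypothetical adjacency matrix and shows how eigenvector centrality rankings degrade.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20

# Hypothetical "true" weighted adjacency matrix among n institutions.
A = np.abs(rng.normal(0.0, 1.0, (n, n)))
np.fill_diagonal(A, 0.0)

def eigen_centrality(M):
    """Eigenvector centrality: leading eigenvector of the symmetrized matrix."""
    S = (M + M.T) / 2.0
    vals, vecs = np.linalg.eigh(S)
    v = np.abs(vecs[:, -1])
    return v / v.sum()

# Add heavy-tailed measurement noise (Student-t, df=3) as a crude stand-in for the
# matrix-variate t errors considered in the paper.
noise = 0.3 * rng.standard_t(df=3, size=(n, n))
A_obs = A + noise

c_true, c_obs = eigen_centrality(A), eigen_centrality(A_obs)
ranks_true = np.argsort(np.argsort(c_true))
ranks_obs = np.argsort(np.argsort(c_obs))
print("rank correlation before/after noise:", np.corrcoef(ranks_true, ranks_obs)[0, 1])
```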

    Fault Location in Power Distribution Systems via Deep Graph Convolutional Networks

    This paper develops a novel graph convolutional network (GCN) framework for fault location in power distribution networks. The proposed approach integrates multiple measurements at different buses while taking system topology into account. The effectiveness of the GCN model is corroborated on the IEEE 123-bus benchmark system. Simulation results show that the GCN model significantly outperforms other widely used machine learning schemes with very high fault location accuracy. In addition, the proposed approach is robust to measurement noise and data loss errors. Data visualization results of two competing neural networks are presented to explore the mechanism of GCN's superior performance. A data augmentation procedure is proposed to increase the robustness of the model under various levels of noise and data loss errors. Further experiments show that the model can adapt to topology changes of distribution networks and perform well with a limited number of measured buses. Comment: Accepted by IEEE Journal on Selected Areas in Communications
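
    A minimal sketch of the kind of two-layer graph convolution underlying such a model (Kipf-Welling style normalization; not the paper's architecture or the IEEE 123-bus data), with hypothetical bus features and per-bus class scores:

```python
import numpy as np

def gcn_forward(A, X, W0, W1):
    """Two-layer graph convolution: softmax(A_hat @ relu(A_hat @ X @ W0) @ W1)."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt          # symmetric normalization
    H = np.maximum(A_norm @ X @ W0, 0.0)              # hidden node embeddings (ReLU)
    logits = A_norm @ H @ W1                          # per-node scores
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)           # per-node class probabilities

# Hypothetical toy system: 6 buses, 4 measured features per bus, 2 classes
# ("faulted" vs "not faulted" at each bus).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0, 0, 0],
              [1, 0, 1, 1, 0, 0],
              [0, 1, 0, 0, 1, 0],
              [0, 1, 0, 0, 0, 1],
              [0, 0, 1, 0, 0, 0],
              [0, 0, 0, 1, 0, 0]], dtype=float)       # radial feeder topology
X = rng.normal(size=(6, 4))                           # e.g. voltage magnitude/angle features
W0, W1 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))
print(gcn_forward(A, X, W0, W1))
```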

    The effect of clock, media, and station location errors on Doppler measurement accuracy

    Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision, primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effects of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.
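
    Individual calibration errors are typically rolled up into a total angular accuracy by a root-sum-square combination of independent contributions. The sketch below uses placeholder values chosen only for illustration, not the article's actual error budget.

```python
import math

# Hypothetical per-source contributions to geocentric angular error, in microradians.
# (Placeholder values for illustration only; see the article for actual budgets.)
contributions = {
    "troposphere": 0.030,
    "ionosphere/solar plasma": 0.020,
    "station locations": 0.025,
    "Earth orientation (UT1, polar motion)": 0.020,
    "station/spacecraft clocks": 0.010,
}

# Independent error sources are combined in quadrature (root-sum-square).
total = math.sqrt(sum(v**2 for v in contributions.values()))
for name, v in contributions.items():
    print(f"{name:40s} {v:.3f} urad")
print(f"{'root-sum-square total':40s} {total:.3f} urad")
```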

    Validation of northern latitude Tropospheric Emission Spectrometer stare ozone profiles with ARC-IONS sondes during ARCTAS: sensitivity, bias and error analysis

    We compare Tropospheric Emission Spectrometer (TES) versions 3 and 4, V003 and V004, respectively, nadir-stare ozone profiles with ozonesonde profiles from the Arctic Intensive Ozonesonde Network Study (ARCIONS, http://croc.gsfc.nasa.gov/arcions/) during the Arctic Research on the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) field mission. The ozonesonde data are from launches timed to match Aura's overpass, where 11 coincidences spanned 44° N to 71° N from April to July 2008. Using the TES "stare" observation mode, 32 observations are taken over each coincidental ozonesonde launch. By effectively sampling the same air mass 32 times, comparisons are made between the empirically calculated random errors and the expected random errors from measurement noise, temperature and interfering species, such as water. This study represents the first validation of high latitude (>70°) TES ozone. We find that the calculated errors are consistent with the actual errors, with a similar vertical distribution that varies between 5% and 20% for V003 and V004 TES data. In general, TES ozone profiles are positively biased (by less than 15%) from the surface to the upper-troposphere (~1000 to 100 hPa) and negatively biased (by less than 20%) from the upper-troposphere to the lower-stratosphere (100 to 30 hPa) when compared to the ozonesonde data. Lastly, for V003 and V004 TES data between 44° N and 71° N there is variability in the mean biases (from −14 to +15%), mean theoretical errors (from 6 to 13%), and mean random errors (from 9 to 19%).
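
    The repeated-sampling idea behind the stare comparison can be sketched as follows, with synthetic numbers rather than TES retrievals: the empirical random error at each pressure level is the standard deviation across the 32 observations of the same air mass, which is then compared against the reported (expected) error.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 32 stare observations of the same air mass on 10 pressure levels.
n_obs, n_levels = 32, 10
true_profile = np.linspace(60.0, 300.0, n_levels)     # ozone (ppbv), surface to stratosphere
reported_error = 0.10 * true_profile                  # expected random error at each level

# Simulated retrievals: the same profile plus random retrieval noise.
retrievals = true_profile + rng.normal(0.0, reported_error, size=(n_obs, n_levels))

# Empirical random error = scatter across the 32 repeated observations at each level.
empirical_error = retrievals.std(axis=0, ddof=1)

# Consistency check: a ratio near 1 means the error model matches the observed scatter.
print(np.round(empirical_error / reported_error, 2))
```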

    Data Validation and reconstruction for performance enhancement and maintenance of water networks

    In a real water network, a telecontrol system must periodically acquire, store and validate data gathered by sensor measurements in order to achieve accurate monitoring of the whole network in real time. For each sensor measurement, data are usually represented by one-dimensional time series. These values, known as raw data, need to be validated before further use to ensure the reliability of the results obtained from them. In real operation, problems affecting the communication system, lack of reliability of sensors, or other inherent errors often arise, generating missing or false data during certain periods of time. Such erroneous data must be detected and replaced by estimated values. Thus, it is important to provide the data system with procedures that can detect such problems and assist the user in monitoring and processing the incoming data. Data validation is an essential step to improve data reliability. The validated data represent measurements of the variables in the required form, with unnecessary information from the raw data removed. In this paper, a methodology for data validation and reconstruction of sensor data in a water network is used to analyze the performance of the sectors of a water network. Finally, from this analysis several indicators of the components (sensors, actuators and pipes) and of the sectors themselves can be derived in order to organize useful plans for performance enhancement and maintenance. These practices have been developed over a long period in the water network of the company ATLL Concessionària de la Generalitat de Catalunya, S.A. Postprint (author's final draft)
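
    A minimal sketch of the kind of raw-data validation and reconstruction step described above, assuming a one-dimensional sensor time series: simple range and rate-of-change checks flag invalid samples, which are then reconstructed by interpolation from neighbouring valid data. The series and thresholds are hypothetical.

```python
import numpy as np

def validate_and_reconstruct(raw, low, high, max_step):
    """Flag out-of-range or abrupt-jump samples as invalid, then fill them by interpolation."""
    x = raw.astype(float).copy()
    invalid = np.isnan(x) | (x < low) | (x > high)                 # range check
    invalid |= np.abs(np.diff(x, prepend=x[0])) > max_step         # rate-of-change check
    good = ~invalid
    # Reconstruct invalid samples from neighbouring valid ones (linear interpolation).
    x[invalid] = np.interp(np.flatnonzero(invalid), np.flatnonzero(good), x[good])
    return x, invalid

# Hypothetical hourly flow measurements (m3/h) with a stuck sensor, a spike and a gap.
raw = np.array([120, 122, 121, 0, 0, 125, 940, 128, np.nan, 130], dtype=float)
clean, flags = validate_and_reconstruct(raw, low=50, high=500, max_step=100)
print(clean)   # reconstructed series
print(flags)   # which samples were flagged as invalid
```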

    Modeling social networks from sampled data

    Network models are widely used to represent relational information among interacting units and the structural implications of these relations. Recently, social network studies have focused a great deal of attention on random graph models of networks whose nodes represent individual social actors and whose edges represent a specified relationship between the actors. Most inference for social network models assumes that the presence or absence of all possible links is observed, that the information is completely reliable, and that there are no measurement (e.g., recording) errors. This is clearly not true in practice, as much network data is collected through sample surveys. In addition, even if a census of a population is attempted, individuals and links between individuals are missed (i.e., do not appear in the recorded data). In this paper we develop the conceptual and computational theory for inference based on sampled network information. We first review forms of network sampling designs used in practice. We consider inference within the likelihood framework, and develop a typology of network data that reflects their treatment within this frame. We then develop inference for social network models based on information from adaptive network designs. We motivate and illustrate these ideas by analyzing the effect of link-tracing sampling designs on a collaboration network. Comment: Published at http://dx.doi.org/10.1214/08-AOAS221 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
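
    The link-tracing designs analyzed in the paper can be sketched, in simplified form, as a snowball-style procedure on a hypothetical adjacency list: an initial probability sample of actors is drawn, and observed ties are followed outward for a fixed number of waves. The paper's likelihood-based inference is not reproduced here.

```python
import random

def link_tracing_sample(adj, seeds, waves):
    """Snowball-style link tracing: start from seed nodes, follow edges for a number of waves."""
    sampled = set(seeds)
    frontier = set(seeds)
    observed_edges = set()
    for _ in range(waves):
        next_frontier = set()
        for u in frontier:
            for v in adj.get(u, []):
                observed_edges.add((min(u, v), max(u, v)))
                if v not in sampled:
                    sampled.add(v)
                    next_frontier.add(v)
        frontier = next_frontier
    return sampled, observed_edges

# Hypothetical collaboration network as an adjacency list.
adj = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2, 5], 5: [4, 6], 6: [5]}
random.seed(0)
seeds = random.sample(sorted(adj), k=2)               # initial probability sample of actors
nodes, edges = link_tracing_sample(adj, seeds, waves=1)
print("seeds:", seeds, "sampled nodes:", sorted(nodes), "observed edges:", sorted(edges))
```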