
    Solving anharmonic oscillator with null states: Hamiltonian bootstrap and Dyson-Schwinger equations

    As basic quantum mechanical models, anharmonic oscillators have recently been revisited using bootstrap methods. One effective approach makes use of positivity constraints in Hermitian theories; an alternative avenue is based on the null state condition, which applies to both Hermitian and non-Hermitian theories. In this work, we carry out an analytic bootstrap study of the quartic oscillator based on the small coupling expansion. In the Hamiltonian formalism, we obtain the anharmonic generalization of Dirac's ladder operators. Furthermore, the Schrödinger equation can be interpreted as a null state condition generated by an anharmonic ladder operator. This provides an explicit example in which dynamics is incorporated into the principle of nullness. In the Lagrangian formalism, we show that the existence of null states can effectively eliminate the indeterminacy of the Dyson-Schwinger equations and systematically determine n-point Green's functions. Comment: v2: 33 pages, references updated, discussions improved
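    For orientation, a minimal sketch of the setup in conventions assumed here for illustration (not taken verbatim from the paper): the quartic Hamiltonian and the g = 0 ladder operator whose null state condition the anharmonic construction generalizes.

```latex
% Conventions assumed for illustration (hbar = m = omega = 1).
\[
  H = \tfrac{1}{2}\,p^2 + \tfrac{1}{2}\,x^2 + g\,x^4 , \qquad [x, p] = i .
\]
% At g = 0, Dirac's ladder operator
\[
  a = \tfrac{1}{\sqrt{2}}\,(x + i p) , \qquad [H_0, a] = -a , \qquad a\,|0\rangle = 0 ,
\]
% fixes the ground state through a null state condition. The anharmonic
% generalization deforms a order by order in the small coupling g, and the
% corresponding null state condition reproduces the Schrödinger equation.
```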

    Generative Adversarial Network (GAN)-assisted data quality monitoring approach for out-of-distribution detection of high dimensional data

    Data quality monitoring plays a critical role in many real-world engineering system inspection problems. Anomalous or invalid inspection records commonly arise from computer/human recording errors, sensor faults, etc., so an efficient tool to detect data anomalies is critically needed. Detection is challenging, however, due to high dimensionality, unknown underlying distributions, insufficient sample sizes, and high noise levels. To address these challenges, an approach was developed that learns the underlying distribution of normal data and derives anomaly detection rules from it. A Generative Adversarial Network (GAN) is employed to identify the underlying distribution of normal data and filter out noise; after the trained GAN generates points from the learned distribution, a k-nearest-neighbor-based approach defines the anomaly detection rules. Specifically, after training the GAN on the normal records, the pairwise distances over all GAN-generated data points are calculated and the k nearest neighbors of each point are determined. The average distance from each point to its k nearest neighbors is then used as the statistic indicating data quality, and a control chart is established from it. When a new record arrives, its similarity to the GAN-generated distribution is evaluated against the established control chart to decide whether the record is anomalous. Industrial Engineering and Management
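    A minimal sketch of the kNN control rule described above, assuming the GAN has already been trained on normal records and that the upper control limit is taken as a quantile of the generated-sample statistics (an illustrative choice; the abstract does not specify how the limit is set):

```python
# Sketch of the kNN-based control rule built on GAN-generated samples.
# The GAN training itself is omitted; the quantile-based control limit is an
# illustrative assumption.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_control_rule(gan_samples: np.ndarray, k: int = 5, alpha: float = 0.99):
    """gan_samples: points drawn from the trained generator, shape (n, d)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(gan_samples)
    # average distance of each generated point to its k nearest neighbors
    # (drop the first column, which is the zero distance to the point itself)
    dists, _ = nn.kneighbors(gan_samples)
    stats = dists[:, 1:].mean(axis=1)
    limit = np.quantile(stats, alpha)   # upper control limit
    return nn, limit

def is_anomalous(record: np.ndarray, nn: NearestNeighbors, limit: float, k: int = 5) -> bool:
    """Flag a new record whose average distance to its k nearest
    GAN-generated neighbors exceeds the control limit."""
    dists, _ = nn.kneighbors(record.reshape(1, -1), n_neighbors=k)
    return bool(dists.mean() > limit)
```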

    Music Artist Classification with WaveNet Classifier for Raw Waveform Audio Data

    Models for music artist classification usually were operated in the frequency domain, in which the input audio samples are processed by the spectral transformation. The WaveNet architecture, originally designed for speech and music generation. In this paper, we propose an end-to-end architecture in the time domain for this task. A WaveNet classifier was introduced which directly models the features from a raw audio waveform. The WaveNet takes the waveform as the input and several downsampling layers are subsequent to discriminate which artist the input belongs to. In addition, the proposed method is applied to singer identification. The model achieving the best performance obtains an average F1 score of 0.854 on benchmark dataset of Artist20, which is a significant improvement over the related works. In order to show the effectiveness of feature learning of the proposed method, the bottleneck layer of the model is visualized.Comment: 12 page
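    An illustrative PyTorch sketch of such a raw-waveform classifier: a stack of dilated 1-D convolutions followed by downsampling layers and a classification head. Layer sizes, the dilation schedule, and the 20-class output (for Artist20) are assumptions, not the paper's exact configuration:

```python
# Illustrative WaveNet-style classifier for raw waveforms (hyperparameters assumed).
import torch
import torch.nn as nn

class WaveNetClassifier(nn.Module):
    def __init__(self, n_classes: int = 20, channels: int = 64):
        super().__init__()
        # stack of dilated 1-D convolutions applied directly to the waveform
        dilations = [1, 2, 4, 8, 16, 32]
        layers, in_ch = [], 1
        for d in dilations:
            layers += [nn.Conv1d(in_ch, channels, kernel_size=3, dilation=d, padding=d),
                       nn.ReLU()]
            in_ch = channels
        self.dilated = nn.Sequential(*layers)
        # downsampling layers that shrink the time axis before classification
        self.downsample = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=4, stride=4), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # bottleneck summary of the whole excerpt
        )
        self.head = nn.Linear(channels, n_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, n_samples) raw audio
        x = self.dilated(waveform)
        x = self.downsample(x).squeeze(-1)   # (batch, channels) bottleneck features
        return self.head(x)                  # artist logits

# Example: logits for a batch of two 1-second clips sampled at 16 kHz
# logits = WaveNetClassifier()(torch.randn(2, 1, 16000))
```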

    Area-based depth estimation for monochromatic feature-sparse orthographic capture

    With the rapid development of light field technology, depth estimation has been highlighted as one of the critical problems in the field, and a number of approaches have been proposed to extract the depth of a scene. However, depth estimation by stereo matching becomes difficult and unreliable when the captured images lack both color and feature information. In this paper, we propose a scheme that extracts robust depth from monochromatic, feature-sparse scenes recorded in orthographic sub-aperture images. Unlike approaches that rely on rich color and texture information across the sub-aperture views, our approach is based on depth-from-focus techniques. First, we superimpose shifted sub-aperture images on top of an arbitrarily chosen central image. To focus at different depths, the shift amount is varied based on the micro-lens array properties. Next, an area-based depth estimation approach is applied to find the best match among the focal stack and generate a dense depth map. This process is repeated for each sub-aperture image. Finally, occlusions are handled by merging the depth maps generated from different central images, followed by a voting process. Results show that the proposed scheme is more suitable than conventional depth estimation approaches for orthographic captures with insufficient color and feature information, such as microscopic fluorescence imaging.
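    A simplified numpy/scipy sketch of the shift-and-superimpose focal stack and the per-pixel best-match search; the linear shift-per-depth model and the windowed sum-of-absolute-differences criterion are illustrative assumptions rather than the paper's exact formulation, and the occlusion-handling voting step is omitted:

```python
# Simplified depth-from-focus sketch for orthographic sub-aperture images
# (illustrative shift model and matching cost).
import numpy as np
from scipy.ndimage import shift as subpixel_shift, uniform_filter

def focal_stack(views, depths):
    """views: dict mapping the (u, v) offset of each sub-aperture image
    (relative to the chosen central view) to a 2-D image.
    depths: candidate depth/disparity values controlling the shift amount."""
    stack = []
    for d in depths:
        # superimpose the shifted sub-aperture images for this focus depth
        refocused = np.mean([subpixel_shift(img, (d * u, d * v), order=1)
                             for (u, v), img in views.items()], axis=0)
        stack.append(refocused)
    return np.stack(stack)                       # (n_depths, H, W)

def area_based_depth(stack, central, depths, win: int = 9):
    """Pick, per pixel, the depth whose refocused image best matches the
    central view over a local window (sum of absolute differences)."""
    cost = np.stack([uniform_filter(np.abs(layer - central), size=win)
                     for layer in stack])        # (n_depths, H, W)
    return np.asarray(depths)[np.argmin(cost, axis=0)]
```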