The Construction of Mobile Internet-Based Ideological and Political Education of University Students Based on the Idea of U-Learning
As an educational concept, u-learning has gradually moved from theory to practice on the strength of the continuous development of modern educational technology. The mobile Internet provides a good opportunity to bring the idea of u-learning into the ideological and political education of university students. Universities should therefore vigorously construct mobile Internet-based ideological and political education, breaking through the time and space restrictions of the traditional educational pattern and realizing a ubiquitous ideological and political education of university students.
Quantum Phase Recognition via Quantum Kernel Methods
The application of quantum computation to accelerate machine learning
algorithms is one of the most promising areas of research in quantum
algorithms. In this paper, we explore the power of quantum learning algorithms
in solving an important class of Quantum Phase Recognition (QPR) problems,
which are crucially important in understanding many-particle quantum systems.
We prove that, under widely believed complexity theory assumptions, there
exists a wide range of QPR problems that cannot be efficiently solved by
classical learning algorithms with classical resources. In contrast, using a
quantum computer, we prove the efficiency and robustness of quantum kernel
methods in solving QPR problems through linear order parameter observables. We numerically
benchmark our algorithm for a variety of problems, including recognizing
symmetry-protected topological phases and symmetry-broken phases. Our results
highlight the capability of quantum machine learning in predicting such quantum
phase transitions in many-particle systems.
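
A minimal sketch of the general quantum kernel idea this abstract relies on, simulated classically with NumPy: data are mapped to quantum states, the kernel entry is the state overlap, and a standard SVM consumes the precomputed kernel. The feature map, system size, and synthetic "phase" labels below are illustrative assumptions, not the paper's construction.

```python
# Generic fidelity-kernel classifier, simulated classically for tiny systems.
import numpy as np
from sklearn.svm import SVC

def feature_state(x, n_qubits=2):
    """Encode a scalar x into an n-qubit product state via RY rotations."""
    def single(theta):
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])
    state = single(x)
    for _ in range(n_qubits - 1):
        state = np.kron(state, single(x))
    return state

def fidelity_kernel(X1, X2):
    """K[i, j] = |<psi(x_i)|psi(x_j)>|^2, the overlap of encoded states."""
    S1 = np.array([feature_state(x) for x in X1])
    S2 = np.array([feature_state(x) for x in X2])
    return np.abs(S1 @ S2.T) ** 2

# Toy "phase labels": sign of a synthetic order parameter (an assumption).
rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=200)
y = (np.cos(X) > 0).astype(int)

K = fidelity_kernel(X, X)
clf = SVC(kernel="precomputed").fit(K, y)
print("train accuracy:", clf.score(K, y))
```
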
Scenario Generation for Cooling, Heating, and Power Loads Using Generative Moment Matching Networks
Scenario generation for cooling, heating, and power loads is of great
significance for the economic operation and stability analysis of integrated
energy systems. In this paper, a novel deep generative network is proposed to
model cooling, heating, and power load curves based on a generative moment
matching network (GMMN), where an auto-encoder transforms high-dimensional load
curves into low-dimensional latent variables and the maximum mean discrepancy
represents the similarity metrics between the generated samples and the real
samples. After training the model, the new scenarios are generated by feeding
Gaussian noises to the scenario generator of the GMMN. Unlike the explicit
density models, the proposed GMMN does not need to artificially assume the
probability distribution of the load curves, which leads to stronger
universality. The simulation results show that the GMMN not only fits the
probability distribution of multi-class load curves well, but also accurately
captures the shape (e.g., large peaks, fast ramps, and fluctuation),
frequency-domain characteristics, and temporal-spatial correlations of cooling,
heating, and power loads. Furthermore, the energy consumption of generated
samples closely resembles that of real samples.
Comment: This paper has been accepted by the CSEE Journal of Power and Energy Systems.
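
As a rough illustration of the core GMMN training signal, the sketch below trains a small generator by minimizing the maximum mean discrepancy (MMD) between generated and real batches. The auto-encoder stage described in the abstract is omitted, and the network sizes, kernel bandwidths, and stand-in "load curves" are assumptions made for the example.

```python
import torch
import torch.nn as nn

def mmd(x, y, bandwidths=(0.5, 1.0, 2.0)):
    """Biased MMD^2 estimate with a mixture of Gaussian kernels."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return sum(torch.exp(-d2 / (2 * s ** 2)) for s in bandwidths)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Generator: Gaussian noise -> 24-point daily profile.
gen = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 24))
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

# Stand-in for real load curves: noisy single-peak daily profiles.
t = torch.linspace(0.0, 1.0, 24)
real = torch.sin(torch.pi * t) + 0.1 * torch.randn(512, 24)

for step in range(2000):
    batch = real[torch.randint(0, 512, (128,))]
    loss = mmd(gen(torch.randn(128, 8)), batch)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final MMD^2:", float(loss))
```
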
A Review of Graph Neural Networks and Their Applications in Power Systems
Deep neural networks have revolutionized many machine learning tasks in power
systems, ranging from pattern recognition to signal processing. The data in
these tasks is typically represented in Euclidean domains. Nevertheless, there
is an increasing number of applications in power systems, where data are
collected from non-Euclidean domains and represented as graph-structured data
with high dimensional features and interdependency among nodes. The complexity
of graph-structured data has brought significant challenges to the existing
deep neural networks defined in Euclidean domains. Recently, many publications
generalizing deep neural networks for graph-structured data in power systems
have emerged. In this paper, a comprehensive overview of graph neural networks
(GNNs) in power systems is provided. Specifically, several classical paradigms
of GNN structures (e.g., graph convolutional networks) are summarized, and key
applications in power systems, such as fault scenario application, time series
prediction, power flow calculation, and data generation, are reviewed in detail.
Furthermore, the main issues and research trends concerning the application of
GNNs in power systems are discussed.
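
For concreteness, here is a minimal sketch of one graph convolutional layer, the classical GNN paradigm the review cites, using the standard symmetric-normalization propagation rule. The toy 4-bus adjacency matrix and feature sizes are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)       # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 4-bus network: ring adjacency, 2 features per node, 8 hidden units.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = np.random.randn(4, 2)
W = np.random.randn(2, 8)
print(gcn_layer(A, H, W).shape)  # (4, 8)
```
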
NeTO: Neural Reconstruction of Transparent Objects with Self-Occlusion Aware Refraction-Tracing
We present a novel method, called NeTO, for capturing 3D geometry of solid
transparent objects from 2D images via volume rendering. Reconstructing
transparent objects is a very challenging task, for which general-purpose
reconstruction techniques are ill-suited due to specular light transport
phenomena. Although existing refraction-tracing based methods, designed
specifically for this task, achieve impressive results, they still suffer from
unstable optimization and loss of fine details, since the explicit surface
representation they adopt is difficult to optimize and the self-occlusion
problem is ignored during refraction-tracing. In this paper, we
propose to leverage implicit Signed Distance Function (SDF) as surface
representation, and optimize the SDF field via volume rendering with a
self-occlusion aware refractive ray tracing. The implicit representation
enables our method to reconstruct high-quality geometry even from a limited
set of images, and the self-occlusion aware strategy makes it possible to
accurately reconstruct the self-occluded regions.
Experiments show that our method achieves faithful reconstruction results and
outperforms prior works by a large margin. Visit our project page at
\url{https://www.xxlong.site/NeTO/}.
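
Two ingredients the abstract leans on, sketched in isolation: an implicit signed distance function (an analytic sphere standing in for a learned SDF network) queried by sphere tracing, and Snell-law refraction of the ray at the recovered surface. The volume-rendering optimization and self-occlusion handling of the actual method are not reproduced here.

```python
import numpy as np

def sdf(p):
    return np.linalg.norm(p) - 1.0          # unit sphere as a stand-in SDF

def normal(p, eps=1e-5):
    """Surface normal: normalized finite-difference gradient of the SDF."""
    g = np.array([sdf(p + eps * e) - sdf(p - eps * e) for e in np.eye(3)])
    return g / np.linalg.norm(g)

def refract(d, n, eta):
    """Refract unit direction d at normal n with index ratio eta = n1/n2."""
    cos_i = -np.dot(d, n)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None                          # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

# Sphere-trace a ray to the surface, then refract it into the object.
o, d = np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])
t = 0.0
for _ in range(64):
    t += sdf(o + t * d)
p = o + t * d
print(refract(d, normal(p), eta=1.0 / 1.5))
```
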
Complexity analysis of weakly noisy quantum states via quantum machine learning
Quantum computers capable of fault-tolerant operation are expected to provide
provable advantages over classical computational models. However, the question
of whether quantum advantages exist in the noisy intermediate-scale quantum era
remains a fundamental and challenging problem. The root of this challenge lies
in the difficulty of exploring and quantifying the power of noisy quantum
states. In this work, we focus on the complexity of weakly noisy states, which
we define as the size of the shortest quantum circuit required to prepare the
noisy state. To analyze the complexity, we propose a quantum machine learning
(QML) algorithm that exploits the intrinsic-connection property of structured
quantum neural networks. The proposed QML algorithm enables efficient
prediction of the complexity of weakly noisy states from measurement results,
representing a paradigm shift in our ability to characterize the power of noisy
quantum computation.
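
A loose toy analogue of the premise, with heavy caveats: it tries to predict a complexity proxy (the preparation depth of random two-qubit circuits) from a handful of measurement expectations, using plain linear regression in place of the paper's structured quantum neural network. Everything below is an assumption made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
CNOT = np.eye(4)[[0, 1, 3, 2]]
Z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))   # Pauli Z on qubit 0

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def random_state(depth):
    """Apply `depth` random RY layers followed by CNOTs to |00>."""
    psi = np.zeros(4)
    psi[0] = 1.0
    for _ in range(depth):
        layer = np.kron(ry(rng.uniform(0, np.pi)), ry(rng.uniform(0, np.pi)))
        psi = CNOT @ layer @ psi
    return psi

# Features: a few measurement expectations; target: preparation depth.
depths = rng.integers(1, 8, size=300)
feats = []
for d in depths:
    p = random_state(int(d))
    feats.append([p @ Z0 @ p, abs(p[0]) ** 2, abs(p[3]) ** 2])

X = np.array(feats)
A = np.c_[X, np.ones(len(X))]                    # linear model with bias
w, *_ = np.linalg.lstsq(A, depths, rcond=None)
print("mean abs depth error:", np.mean(np.abs(A @ w - depths)))
```
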
Orbital Expansion Variational Quantum Eigensolver: Enabling Efficient Simulation of Molecules with Shallow Quantum Circuit
In the noisy-intermediate-scale-quantum era, Variational Quantum Eigensolver
(VQE) is a promising method to study ground state properties in quantum
chemistry, materials science, and condensed matter physics. However, general
quantum eigensolvers lack systematic improvability, and achieving rigorous
convergence is generally hard in practice, especially for strongly correlated
systems. Here, we propose an Orbital Expansion VQE (OE-VQE)
framework to construct an efficient convergence path. The path starts from a
highly correlated compact active space and rapidly expands and converges to the
ground state, enabling the simulation of ground states with much shallower
quantum circuits. We benchmark OE-VQE on a series of typical molecules including
H-chain, H-ring and N, and the simulation results show that the
proposed convergence paths dramatically enhance the performance of general
quantum eigensolvers.
Comment: Wu et al 2023 Quantum Sci. Technol.
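
For orientation, a minimal plain-VQE loop (not the orbital-expansion scheme itself): a shallow parameterized circuit is tuned so that the energy expectation approaches the ground-state energy. The two-qubit Hamiltonian and ansatz below are illustrative toys, not a real molecular system.

```python
import numpy as np
from scipy.optimize import minimize

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
CNOT = np.eye(4)[[0, 1, 3, 2]]
# Illustrative toy Hamiltonian: ZZ coupling plus a transverse field.
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def energy(theta):
    """<psi(theta)|H|psi(theta)> for a shallow RY-CNOT-RY ansatz."""
    psi = np.zeros(4)
    psi[0] = 1.0                                  # start from |00>
    psi = CNOT @ np.kron(ry(theta[0]), ry(theta[1])) @ psi
    psi = np.kron(ry(theta[2]), ry(theta[3])) @ psi
    return psi @ H @ psi

res = minimize(energy, x0=0.1 * np.ones(4), method="COBYLA")
print("VQE energy:  ", res.fun)
print("exact ground:", np.linalg.eigvalsh(H).min())
```
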
Trainability Analysis of Quantum Optimization Algorithms from a Bayesian Lens
The Quantum Approximate Optimization Algorithm (QAOA) is an extensively
studied variational quantum algorithm utilized for solving optimization
problems on near-term quantum devices. A significant focus is placed on
determining the effectiveness of training the n-qubit QAOA circuit, i.e.,
whether the optimization error can converge to a constant level as the number
of optimization iterations scales polynomially with the number of qubits. In
realistic scenarios, the landscape of the corresponding QAOA objective function
is generally non-convex and contains numerous local optima. In this work,
motivated by the favorable performance of Bayesian optimization in handling
non-convex functions, we theoretically investigate the trainability of the QAOA
circuit through the lens of the Bayesian approach. This lens considers the
corresponding QAOA objective function as a sample drawn from a specific
Gaussian process. Specifically, we focus on two scenarios: the noiseless QAOA
circuit and the noisy QAOA circuit subjected to local Pauli channels. Our first
result demonstrates that the noiseless QAOA circuit can be trained efficiently
up to a certain circuit depth, based on the widely accepted assumption that
either the left or right slice of each block in the circuit forms a local
1-design. Furthermore, we show that if each quantum gate is affected by a local
Pauli channel with noise strength up to 0.1, the noisy QAOA circuit can also be
trained
efficiently. Our results offer valuable insights into the theoretical
performance of quantum optimization algorithms in the noisy intermediate-scale
quantum era.
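
A compact sketch of the Bayesian lens in isolation: model an unknown non-convex objective with a Gaussian-process posterior and pick new evaluation points by expected improvement. The one-dimensional toy function stands in for a QAOA energy landscape; no quantum circuit is simulated, and the kernel length scale is an assumption.

```python
import numpy as np
from scipy.stats import norm

np.random.seed(0)

def f(g):
    """Toy non-convex 1-D objective standing in for a QAOA energy."""
    return -np.sin(3 * g) - 0.6 * np.cos(5 * g)

def kern(a, b, ell=0.3):
    """RBF kernel: the GP prior from which f is imagined to be drawn."""
    return np.exp(-0.5 * np.subtract.outer(a, b) ** 2 / ell ** 2)

Xs = list(np.random.uniform(0, np.pi, 3))    # initial random evaluations
ys = [f(x) for x in Xs]
grid = np.linspace(0, np.pi, 200)

for _ in range(15):
    X = np.array(Xs)
    K = kern(X, X) + 1e-6 * np.eye(len(X))
    k_star = kern(grid, X)
    mu = k_star @ np.linalg.solve(K, np.array(ys))   # GP posterior mean
    v = np.linalg.solve(K, k_star.T)
    var = np.clip(1.0 - np.einsum("ij,ji->i", k_star, v), 1e-12, None)
    z = (min(ys) - mu) / np.sqrt(var)
    ei = (min(ys) - mu) * norm.cdf(z) + np.sqrt(var) * norm.pdf(z)
    x_next = grid[np.argmax(ei)]             # expected-improvement pick
    Xs.append(x_next)
    ys.append(f(x_next))

print("best point:", Xs[int(np.argmin(ys))], "value:", min(ys))
```
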
Mutual Information Learned Regressor: an Information-theoretic Viewpoint of Training Regression Systems
As one of the central tasks in machine learning, regression finds many
applications across different fields. A common practice for solving
regression problems is the mean square error (MSE) minimization approach or its
regularized variants, which require prior knowledge about the models. Recently,
Yi et al. proposed a mutual information based supervised learning framework
where they introduced a label entropy regularization which does not require any
prior knowledge. When applied to classification tasks and solved via a
stochastic gradient descent (SGD) optimization algorithm, their approach
achieved significant improvement over the commonly used cross entropy loss and
its variants. However, they did not provide a theoretical convergence analysis
of the SGD algorithm for the proposed formulation. Besides, applying the
framework to regression tasks is nontrivial due to the potentially infinite
support set of the label. In this paper, we investigate the regression under
the mutual information based supervised learning framework. We first argue that
the MSE minimization approach is equivalent to a conditional entropy learning
problem, and then propose a mutual information learning formulation for solving
regression problems by using a reparameterization technique. For the proposed
formulation, we give the convergence analysis of the SGD algorithm for solving
it in practice. Finally, we consider a multi-output regression data model where
we derive the generalization performance lower bound in terms of the mutual
information associated with the underlying data distribution. The result shows
that high dimensionality can be a blessing rather than a curse, which is
controlled by a threshold. We hope our work will serve as a good starting point
for further research on mutual information based regression.
Comment: 28 pages, 2 figures, presubmitted to AISTATS2023 for reviewing.
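
One way to see the claimed MSE/conditional-entropy equivalence, under a homoscedastic Gaussian model assumption made here for illustration (the paper's reparameterization may differ):

```latex
% Assume, for illustration, the model family q_f(y|x) = N(f(x), sigma^2)
% with fixed sigma. The conditional cross-entropy then splits as
\begin{align*}
-\,\mathbb{E}\,\log q_f(y \mid x)
  &= \frac{1}{2\sigma^2}\,\mathbb{E}\,\big(y - f(x)\big)^2
     + \tfrac{1}{2}\log\!\big(2\pi\sigma^2\big),
\end{align*}
% so minimizing the MSE over f minimizes this cross-entropy; and since the
% conditional entropy is the infimum of the cross-entropy over all models,
\begin{align*}
H(Y \mid X) = \inf_{q}\ \big(-\,\mathbb{E}\,\log q(y \mid x)\big),
\end{align*}
% MSE minimization is a conditional entropy learning problem restricted to
% the Gaussian family.
```
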