
    Hybrid Advanced Optimization Methods with Evolutionary Computation Techniques in Energy Forecasting

    More accurate and precise energy demand forecasts are required when energy decisions are made in a competitive environment. Particularly in the Big Data era, forecasting models typically combine complex functions, and energy data exhibit complicated characteristics such as seasonality, cyclicity, fluctuation, and dynamic nonlinearity. When models cannot capture these data characteristics and patterns, the result is an over-reliance on informal judgment and higher expenses. Hybridizing optimization methods with superior evolutionary algorithms can yield important improvements through good parameter determination in the optimization process, which is of great assistance to energy decision-makers. This book aimed to attract researchers with an interest in the research areas described above. Specifically, it sought contributions on hybrid optimization methods (e.g., quadratic programming techniques, chaotic mapping, fuzzy inference theory, quantum computing) combined with advanced algorithms (e.g., genetic algorithms, ant colony optimization, particle swarm optimization) that outperform traditional optimization approaches by overcoming some of their embedded drawbacks, and on the application of these advanced hybrid approaches to significantly improve forecasting accuracy.
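As a minimal sketch of the kind of hybridization this abstract describes, the toy example below uses particle swarm optimization (one of the evolutionary algorithms named above) to fit the parameters of a simple trend-plus-seasonality demand model to synthetic data. The series, the model form, and the PSO hyperparameters are all illustrative assumptions, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "energy demand" series: linear trend + 24-step seasonal cycle + noise.
t = np.arange(96, dtype=float)
demand = 50.0 + 0.2 * t + 8.0 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1.0, t.size)

def mse(theta):
    # Forecast model: level a, trend b, seasonal amplitude c (assumed form).
    a, b, c = theta
    pred = a + b * t + c * np.sin(2 * np.pi * t / 24)
    return np.mean((demand - pred) ** 2)

def pso(f, dim=3, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Standard global-best PSO: inertia w, cognitive c1, social c2.
    pos = rng.uniform(-10, 60, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([f(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

theta, err = pso(mse)
```

In a full hybrid method, the PSO layer would tune the hyperparameters of a richer forecasting model (e.g., a support vector regression) rather than the model coefficients directly.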

    Nonclassicality of photon-added squeezed vacuum and its decoherence in thermal environment

    We study the nonclassicality of the photon-added squeezed vacuum (PASV) and its decoherence in a thermal environment in terms of sub-Poissonian statistics and the negativity of the Wigner function (WF). By converting the PASV to a squeezed Hermite-polynomial excitation state, we derive a compact expression for the normalization factor of the m-PASV, which is an m-order Legendre polynomial of the squeezing parameter r. We also derive the explicit expression of the WF of the m-PASV and find the negative region of the WF in phase space. We show that there is an upper bound on r for this state to exhibit sub-Poissonian statistics, and this bound increases as m increases. We then derive the explicit analytical expression for the time evolution of the WF of the m-PASV in the thermal channel and discuss the loss of nonclassicality using the negativity of the WF. The threshold value of the decay time is presented for the single PASV. Comment: 14 pages and 7 figures.
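The negativity-of-Wigner-function diagnostic used in this abstract can be illustrated with a simpler, textbook case: the Wigner function of a Fock state |n⟩, W_n(α) = (2/π)(−1)^n e^{−2|α|²} L_n(4|α|²), which is negative at the phase-space origin for odd n. This standard quantum-optics result is a stand-in illustration, not the m-PASV expression derived in the paper.

```python
import numpy as np

def laguerre(n, x):
    # Laguerre polynomial L_n(x) via the three-term recurrence
    # (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}.
    if n == 0:
        return np.ones_like(x)
    lm1, l = np.ones_like(x), 1.0 - x
    for k in range(1, n):
        lm1, l = l, ((2 * k + 1 - x) * l - k * lm1) / (k + 1)
    return l

def wigner_fock(n, alpha):
    # W_n(alpha) = (2/pi) (-1)^n exp(-2|alpha|^2) L_n(4|alpha|^2)
    a2 = np.abs(alpha) ** 2
    return (2 / np.pi) * (-1) ** n * np.exp(-2 * a2) * laguerre(n, 4 * a2)

# The single-photon Wigner function is negative at the origin: W_1(0) = -2/pi.
w1 = wigner_fock(1, 0.0 + 0.0j)
```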

    Higgs compositeness in Sp(2N) gauge theories --- Resymplecticisation, scale setting and topology

    As part of an ongoing programme to study Sp(2N) gauge theories as potential realisations of composite Higgs models, we consider the case of Sp(4) on the lattice, both as a pure gauge theory and with two Dirac fermion flavors in the fundamental representation. In order to compare results between these two cases and maintain control of lattice artefacts, we make use of the gradient flow to set the scale of the simulations. We present some technical aspects of the simulations, including preliminary results for the scale setting in the two cases and results for the topological charge history. Comment: 8 pages, 6 figures; talk presented at the 35th International Symposium on Lattice Field Theory, 18-24 June 2017, Granada, Spain.
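For context, gradient-flow scale setting conventionally follows Lüscher's construction: the gauge field is evolved along a fictitious flow time t, and a reference scale t_0 is defined from the flowed action density. A sketch in standard notation (the reference value E_0 = 0.3 is the common SU(3) choice; the appropriate value for Sp(2N) is a convention the study itself must fix):

```latex
\partial_t B_\mu = D_\nu G_{\nu\mu}, \qquad B_\mu\big|_{t=0} = A_\mu,
\qquad
t^2 \langle E(t) \rangle \big|_{t=t_0} = E_0, \qquad
E = \tfrac{1}{4}\, G^a_{\mu\nu} G^a_{\mu\nu}.
```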

    Early developing syntactic knowledge influences sequential statistical learning in infancy

    Adults' linguistic background influences their sequential statistical learning of an artificial language characterized by conflicting forward-going and backward-going transitional probabilities. English-speaking adults favor backward-going transitional probabilities, consistent with the head-initial structure of English. Korean-speaking adults favor forward-going transitional probabilities, consistent with the head-final structure of Korean. These experiments assess when infants develop this directional bias. In the experiments, 7-month-old infants showed no bias for forward-going or backward-going regularities. By 13 months, however, English-learning infants favored backward-going transitional probabilities over forward-going transitional probabilities, consistent with English-speaking adults. This indicates that statistical learning rapidly adapts to the predominant syntactic structure of the native language. Such adaptation may facilitate subsequent learning by highlighting statistical structures that are likely to be informative in the native linguistic environment.
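The two statistics contrasted in this abstract are easy to make concrete: the forward transitional probability of a syllable pair (x, y) is P(y | x), while the backward one is P(x | y). The sketch below computes both from a toy syllable stream; the stream itself is an invented example, not the stimuli used in the experiments.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward TP(x, y) = P(y | x); backward TP(x, y) = P(x | y)."""
    pairs = list(zip(syllables, syllables[1:]))
    pair_n = Counter(pairs)                     # count of each adjacent pair
    first_n = Counter(p[0] for p in pairs)      # how often x begins a pair
    second_n = Counter(p[1] for p in pairs)     # how often y ends a pair
    fwd = {p: c / first_n[p[0]] for p, c in pair_n.items()}
    bwd = {p: c / second_n[p[1]] for p, c in pair_n.items()}
    return fwd, bwd

# Hypothetical stream: "pa" always predicts "bi" (forward TP = 1),
# but "bi" is also preceded by "la", so the backward TP of (pa, bi) is lower.
stream = ["pa", "bi", "ku", "pa", "bi", "go", "la", "bi", "ku"]
fwd, bwd = transitional_probabilities(stream)
```

A learner with a forward-going bias would segment on high P(y | x); a backward-going learner on high P(x | y), and the two can disagree on the same stream, as here.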

    Improving Multi-Task Generalization via Regularizing Spurious Correlation

    Multi-Task Learning (MTL) is a powerful learning paradigm that improves generalization performance via knowledge sharing. However, existing studies find that MTL can sometimes hurt generalization, especially when two tasks are less correlated. One possible cause is spurious correlation: some knowledge is spurious and not causally related to task labels, but the model may mistakenly exploit it and thus fail when such correlation changes. The MTL setup poses several unique challenges regarding spurious correlation. First, the risk of acquiring non-causal knowledge is higher, as the shared MTL model needs to encode all knowledge from different tasks, and causal knowledge for one task can be spurious for another. Second, confounders between task labels introduce a different type of spurious correlation into MTL. We theoretically prove that MTL is more prone than single-task learning to taking in non-causal knowledge from other tasks, and thus generalizes worse. To solve this problem, we propose the Multi-Task Causal Representation Learning framework, which represents multi-task knowledge via disentangled neural modules and learns which module is causally related to each task via an MTL-specific invariant regularization. Experiments show that it can enhance MTL models' performance by 5.5% on average over Multi-MNIST, MovieLens, Taskonomy, CityScape, and NYUv2 by alleviating the spurious correlation problem. Comment: Published at NeurIPS 2022.
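The failure mode this abstract targets can be demonstrated without any neural network. In the hypothetical setup below, a spurious feature agrees with the label 95% of the time during training but only at chance at test time, so a classifier that latches onto it collapses under the distribution shift, while one using the causal feature transfers. This is a generic illustration of spurious correlation, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_split(n, spurious_agreement):
    # Binary label driven by a causal feature; a second feature merely agrees
    # with the label with probability `spurious_agreement` (hypothetical setup).
    y = rng.integers(0, 2, n)
    causal = y + rng.normal(0, 0.5, n)                  # noisy but causal
    agree = rng.random(n) < spurious_agreement
    spurious = np.where(agree, y, 1 - y).astype(float)  # correlated, not causal
    return causal, spurious, y

# Train: spurious feature matches the label 95% of the time; test: 50%.
c_tr, s_tr, y_tr = make_split(2000, 0.95)
c_te, s_te, y_te = make_split(2000, 0.50)

# A classifier that latches onto the spurious feature looks great in training
# but drops to chance when the correlation disappears.
spur_train_acc = np.mean((s_tr > 0.5) == y_tr)   # high
spur_test_acc = np.mean((s_te > 0.5) == y_te)    # near chance

# A classifier using the causal feature transfers across the shift.
caus_test_acc = np.mean((c_te > 0.5) == y_te)
```

The paper's invariant regularization is, in spirit, a mechanism for steering the shared representation toward the `causal`-like modules and away from the `spurious`-like ones.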