Blowup Equations for Refined Topological Strings
Göttsche-Nakajima-Yoshioka K-theoretic blowup equations characterize the
Nekrasov partition function of five-dimensional supersymmetric
gauge theories compactified on a circle, which via geometric engineering
correspond to refined topological string theory on the associated local geometries. In
this paper, we study the K-theoretic blowup equations for general local
Calabi-Yau threefolds. We find that both vanishing and unity blowup equations
exist for the partition function of refined topological string, and the crucial
ingredients are the fields introduced in our previous paper. These
blowup equations are in fact the functional equations for the partition
function and each of them results in infinite identities among the refined free
energies. Evidence shows that they can be used to determine the full refined
BPS invariants of local Calabi-Yau threefolds. This provides an independent and
sometimes more powerful way to compute the partition function, beyond the
refined topological vertex in the A-model and the refined holomorphic anomaly
equations in the B-model. We study the modular properties of the blowup
equations and provide a procedure to determine all the vanishing and unity
fields from the polynomial part of the refined topological string at the large
radius point. We also find that a certain form of blowup equations exists at
generic loci of the moduli space.
Blowup Equations for 6d SCFTs. I
We propose novel functional equations for the BPS partition functions of 6d
(1,0) SCFTs, which can be regarded as an elliptic version of
Göttsche-Nakajima-Yoshioka's K-theoretic blowup equations. From the viewpoint
of geometric engineering, these are the generalized blowup equations for
refined topological strings on certain local elliptic Calabi-Yau threefolds. We
derive recursion formulas for elliptic genera of self-dual strings on the
tensor branch from these functional equations and in this way obtain a
universal approach for determining refined BPS invariants. As examples, we
study in detail the minimal 6d SCFTs with SU(3) and SO(8) gauge symmetry. In
companion papers, we will study the elliptic blowup equations for all other
non-Higgsable clusters.
Empirical tests of the Fama-French three-factor model and Principal Component Analysis on the Chinese stock market
Date: 2014-06-03
Authors: Kaiwen Wang, Jingjing Guo
Tutor: Anders Vilhelmsson, Department of Business Administration, Lund University
Purpose: This paper aims to verify that the Fama-French three-factor model (FF) captures more cross-sectional variation in returns on the Chinese stock market than the CAPM over the period January 2004 to December 2013. Furthermore, we construct statistically optimal factors using principal component analysis (PCA) on the Fama-French portfolios and test whether the FF model leaves anything significant that can be explained by the PCA factors.
Method: Following the procedure in Fama and French (1993), we first construct FF factors and portfolios based on firm size and book-to-market equity, and then compare the performance of the CAPM and FF models using time-series regressions. For a deeper comparison, we explain the return matrix (120x9) with principal component analysis, which produces several PCs for new time-series regressions, and study the overall fit and factor loadings of both the FF and PCA models. To see which model captures the most variation, we run cross-sectional regressions with respect to all three aforementioned models.
Conclusion: Our results show that the FF model tends to be more powerful than the CAPM at explaining variation in cross-sectional returns. Yet within the FF model, our data diverges from the US market in one respect: we find a reversal of the book-to-market equity effect. Finally, our results suggest that the PCA model performs better than the FF model.
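The two regression stages described above (principal components extracted from the portfolio return matrix, then time-series regressions on those PC factors) can be sketched as follows; synthetic data stands in for the 120x9 matrix of monthly portfolio returns, and all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 120x9 matrix of monthly portfolio returns
# (120 months, 9 size/book-to-market sorted portfolios).
n_months, n_portfolios = 120, 9
returns = rng.normal(0.0, 0.05, size=(n_months, n_portfolios))

# Stage 1: principal component analysis via SVD of the demeaned return matrix.
demeaned = returns - returns.mean(axis=0)
U, S, Vt = np.linalg.svd(demeaned, full_matrices=False)
n_factors = 3                                   # keep three PCs, mirroring the three FF factors
pc_factors = U[:, :n_factors] * S[:n_factors]   # time series of PC factors

# Stage 2: time-series OLS of each portfolio's returns on the PC factors.
X = np.column_stack([np.ones(n_months), pc_factors])  # intercept column gives the alpha
betas, *_ = np.linalg.lstsq(X, returns, rcond=None)   # shape: (1 + n_factors, n_portfolios)

# R^2 per portfolio: share of return variance explained by the PC factors.
fitted = X @ betas
resid = returns - fitted
r2 = 1 - resid.var(axis=0) / returns.var(axis=0)
print(r2.round(3))
```

Cross-sectional regressions (average returns on the estimated loadings) would then follow the same `lstsq` pattern with `betas` as the regressors.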
Reputation Analysis of E-commerce Products Based on Online Reviews—Take Amazon as an Example
Based on Amazon product review data from 2006 to 2016, this paper measures the correlation between star ratings and text reviews. First, a supervised SVM model is trained to classify the sentiment of the review text. Second, a word2vec model is built for unsupervised sentiment analysis of the review text. Third, the relationship between specific text features and the score is obtained using a grey metric model. In addition, the paper applies principal component analysis to product reputation and identifies representative high-quality and inferior products. Finally, the paper offers suggestions for merchants, e-commerce platforms, and governments.
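A minimal sketch of the first step, supervised sentiment classification of review text with a linear SVM: the toy data, bag-of-words features, and subgradient training below are illustrative stand-ins, not the paper's actual pipeline:

```python
import numpy as np

# Toy labeled reviews (+1 positive, -1 negative), standing in for Amazon review text.
reviews = [
    ("great product works perfectly", 1),
    ("love it excellent quality", 1),
    ("terrible broke after one day", -1),
    ("awful waste of money", -1),
    ("excellent great value", 1),
    ("broke terrible quality", -1),
]

# Binary bag-of-words features over the corpus vocabulary.
vocab = sorted({w for text, _ in reviews for w in text.split()})
idx = {w: i for i, w in enumerate(vocab)}

def vectorize(text):
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in idx:
            v[idx[w]] = 1.0
    return v

X = np.array([vectorize(t) for t, _ in reviews])
y = np.array([label for _, label in reviews], dtype=float)

# Linear SVM via subgradient descent on the regularized hinge loss:
#   L(w) = lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i * (x_i . w))
w, lam, lr = np.zeros(len(vocab)), 1e-3, 0.1
for _ in range(200):
    margins = y * (X @ w)
    active = margins < 1            # samples currently violating the margin
    grad = lam * w - (y[active][:, None] * X[active]).sum(axis=0) / len(y)
    w -= lr * grad

train_acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {train_acc:.2f}")
```

On this tiny separable corpus the learned weights end up positive for words like "excellent" and negative for words like "terrible", which is the signal the star-rating correlation study relies on.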
ViCor: Bridging Visual Understanding and Commonsense Reasoning with Large Language Models
In our work, we explore the synergistic capabilities of pre-trained
vision-and-language models (VLMs) and large language models (LLMs) for visual
commonsense reasoning (VCR). We categorize the problem of VCR into visual
commonsense understanding (VCU) and visual commonsense inference (VCI). For
VCU, which involves perceiving the literal visual content, pre-trained VLMs
exhibit strong cross-dataset generalization. On the other hand, in VCI, where
the goal is to infer conclusions beyond image content, VLMs face difficulties.
We find that a baseline where VLMs provide perception results (image captions)
to LLMs leads to improved performance on VCI. However, we identify a challenge
with VLMs' passive perception, which often misses crucial context information,
leading to incorrect or uncertain reasoning by LLMs. To mitigate this issue, we
suggest a collaborative approach where LLMs, when uncertain about their
reasoning, actively direct VLMs to concentrate on and gather relevant visual
elements to support potential commonsense inferences. In our method, named
ViCor, pre-trained LLMs serve as problem classifiers to analyze the problem
category, VLM commanders to leverage VLMs differently based on the problem
classification, and visual commonsense reasoners to answer the question. VLMs
will perform visual recognition and understanding. We evaluate our framework on
two VCR benchmark datasets and outperform all other methods that do not require
in-domain supervised fine-tuning.
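The classify-then-route pipeline described above can be sketched in plain control flow. All functions below are hypothetical stubs standing in for real LLM/VLM calls; only the routing structure mirrors the abstract:

```python
# Schematic of the ViCor-style routing: the LLM first labels a question as
# visual commonsense understanding (VCU) or inference (VCI), then decides how
# to use the VLM. Every function here is a hypothetical stub.

def llm_classify(question: str) -> str:
    """Stub problem classifier: literal 'what is' questions -> VCU, else VCI."""
    return "VCU" if question.lower().startswith("what is") else "VCI"

def vlm_answer(image, question: str) -> str:
    """Stub direct VQA call for literal visual content."""
    return f"<VLM answer to {question!r}>"

def vlm_caption(image, focus: str = "") -> str:
    """Stub captioner; an optional focus directs attention to a visual element."""
    suffix = f" focusing on {focus}" if focus else ""
    return f"<caption of image{suffix}>"

def llm_reason(question: str, caption: str):
    """Stub reasoner: returns (answer, confident?, visual element to inspect)."""
    return f"<LLM answer given {caption}>", False, "key object"

def vicor(image, question: str) -> str:
    if llm_classify(question) == "VCU":
        # Literal visual content: the pre-trained VLM answers directly.
        return vlm_answer(image, question)
    # VCI: the VLM supplies perception (a caption); the LLM draws the inference.
    answer, confident, focus = llm_reason(question, vlm_caption(image))
    if not confident:
        # Uncertain reasoning: the LLM directs the VLM to gather missing context.
        answer, _, _ = llm_reason(question, vlm_caption(image, focus=focus))
    return answer

print(vicor(None, "What is the person holding?"))
print(vicor(None, "Why might the person be in a hurry?"))
```

The second call exercises the active-perception branch: an unconfident LLM sends the VLM back for a focused caption before answering.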
HyperSNN: A new efficient and robust deep learning model for resource constrained control applications
In light of the increasing adoption of edge computing in areas such as
intelligent furniture, robotics, and smart homes, this paper introduces
HyperSNN, an innovative method for control tasks that uses spiking neural
networks (SNNs) in combination with hyperdimensional computing. HyperSNN
substitutes expensive 32-bit floating point multiplications with 8-bit integer
additions, resulting in reduced energy consumption while enhancing robustness
and potentially improving accuracy. Our model was tested on AI Gym benchmarks,
including Cartpole, Acrobot, MountainCar, and Lunar Lander. HyperSNN achieves
control accuracies that are on par with conventional machine learning methods
but with only 1.36% to 9.96% of the energy expenditure. Furthermore, our
experiments showed increased robustness when using HyperSNN. We believe that
HyperSNN is especially suitable for interactive, mobile, and wearable devices,
promoting energy-efficient and robust system design. Furthermore, it paves the
way for the practical implementation of complex algorithms like model
predictive control (MPC) in real-world industrial scenarios.
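The core trade described above, replacing 32-bit floating-point multiplications with 8-bit integer additions, can be illustrated with a generic symmetric int8 quantization sketch; the scale choice and the addition step (bundling, in hyperdimensional-computing terms) are illustrative assumptions, not HyperSNN's published scheme:

```python
import numpy as np

def quantize(x: np.ndarray, scale: float) -> np.ndarray:
    """Map float32 values to int8 with a shared scale factor."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
a = rng.uniform(-1, 1, size=1000).astype(np.float32)
b = rng.uniform(-1, 1, size=1000).astype(np.float32)

scale = 1.0 / 127.0                      # covers the [-1, 1] value range
qa, qb = quantize(a, scale), quantize(b, scale)

# Combining two quantized hypervectors is an elementwise integer addition;
# accumulate in int16 so the 8-bit operands cannot overflow.
bundled = qa.astype(np.int16) + qb.astype(np.int16)

# The integer result tracks the float sum up to quantization error.
err = np.abs(dequantize(bundled, scale) - (a + b)).max()
print(f"max abs error: {err:.4f}")
```

The integer pipeline stays within half a quantization step per operand, which is the sense in which cheap int8 arithmetic can match float accuracy on bounded signals.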