115 research outputs found

    Empirical tests of the Fama-French three-factor model and Principal Component Analysis on the Chinese stock market

    Date: 2014-06-03. Authors: Kaiwen Wang, Jingjing Guo. Title: Empirical tests of the Fama-French three-factor model and Principal Component Analysis on the Chinese stock market. Tutor: Anders Vilhelmsson, Department of Business Administration, Lund University. Purpose: This paper aims to verify that the Fama-French three-factor model (FF) captures more cross-sectional variation in returns on the Chinese stock market than the CAPM over the period January 2004 to December 2013. Furthermore, we construct statistically optimal factors by applying principal component analysis (PCA) to the Fama-French portfolios and test whether the FF model leaves anything significant that can be explained by the PCA factors. Method: Following the procedure in Fama and French (1993), we first construct FF factors and portfolios based on firm size and book-to-market equity, and then compare the performance of the CAPM and FF models using time-series regressions. For a deeper comparison, we explain the return matrix (120 x 9) with principal component analysis, which produces several principal components for new time-series regressions, and we study the overall fit and factor loadings of both the FF and PCA models. To see which model captures the most variation, we run cross-sectional regressions for all three aforementioned models. Conclusion: Our results show that the FF model is more powerful than the CAPM at explaining the variation in cross-sectional returns. Yet within the FF model, our data diverge from the US market in one respect: we find a reversal of the book-to-market equity effect. Finally, our results suggest that the PCA model performs better than the FF model.
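    The PCA step described in the Method can be sketched in a few lines. This is an illustrative reconstruction on simulated data, not the thesis's actual series: extract principal components of a 120 x 9 return matrix via SVD and regress one portfolio's returns on them.

```python
import numpy as np

def principal_components(returns, k):
    """First k principal components (as factor time series) of a T x N
    return matrix, via SVD of the column-demeaned data."""
    demeaned = returns - returns.mean(axis=0)
    u, s, _ = np.linalg.svd(demeaned, full_matrices=False)
    return u[:, :k] * s[:k]                      # T x k factor matrix

def ols_r2(y, X):
    """R^2 from regressing y on X plus an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# simulated stand-in for the 120-month x 9-portfolio return matrix
rng = np.random.default_rng(0)
factors = rng.normal(size=(120, 3))              # placeholder "MKT, SMB, HML"
loadings = rng.normal(size=(3, 9))
returns = factors @ loadings + 0.1 * rng.normal(size=(120, 9))

pcs = principal_components(returns, 3)           # statistically optimal factors
r2 = ols_r2(returns[:, 0], pcs)                  # time-series regression fit
```

    Because the simulated returns have a three-factor structure, three PCs recover nearly all of the systematic variation, which is the comparison the thesis runs against the FF factors.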

    Singular Trudinger--Moser inequality involving $L^{p}$ norm in bounded domains

    In this paper, we use the method of blow-up analysis and capacity estimates to derive the singular Trudinger--Moser inequality involving the $N$-Finsler--Laplacian and the $L^{p}$ norm. Precisely, for any $p>1$, $0\leq\gamma<\gamma_{1}:=\inf\limits_{u\in W^{1,N}_{0}(\Omega)\backslash\{0\}}\frac{\int_{\Omega}F^{N}(\nabla u)\,dx}{\|u\|_{p}^{N}}$ and $0\leq\beta<N$, we have \begin{align} \sup_{u\in W_{0}^{1,N}(\Omega),\;\int_{\Omega}F^{N}(\nabla u)\,dx-\gamma\|u\|_{p}^{N}\leq1}\int_{\Omega}\frac{e^{\lambda_{N}(1-\frac{\beta}{N})\lvert u\rvert^{\frac{N}{N-1}}}}{F^{o}(x)^{\beta}}\,\mathrm{d}x<+\infty\notag, \end{align} where $\lambda_{N}=N^{\frac{N}{N-1}}\kappa_{N}^{\frac{1}{N-1}}$ and $\kappa_{N}$ is the volume of the unit Wulff ball in $\mathbb{R}^{N}$; moreover, extremal functions for the inequality are also obtained. When $F=\lvert\cdot\rvert$ and $p=N$, the above inequality yields the singular version of the Tintarev-type inequality, namely, for any $0\leq\alpha<\alpha_{1}(\Omega):=\inf\limits_{u\in W^{1,N}_{0}(\Omega)\backslash\{0\}}\frac{\int_{\Omega}\lvert\nabla u\rvert^{N}\,dx}{\|u\|_{N}^{N}}$ and $0\leq\beta<N$, it holds that \begin{align} \sup_{u\in W_{0}^{1,N}(\Omega),\;\int_{\Omega}\lvert\nabla u\rvert^{N}\,\mathrm{d}x-\alpha\|u\|_{N}^{N}\leq1}\int_{\Omega}\frac{e^{\alpha_{N}(1-\frac{\beta}{N})\lvert u\rvert^{\frac{N}{N-1}}}}{\lvert x\rvert^{\beta}}\,\mathrm{d}x<+\infty\notag, \end{align} where $\alpha_{N}:=N^{\frac{N}{N-1}}\omega_{N}^{\frac{1}{N-1}}$ and $\omega_{N}$ is the volume of the unit ball in $\mathbb{R}^{N}$. Our results extend many well-known Trudinger--Moser type inequalities to a more general setting.

    Size Effect and Scaling in Quasi-static and Fatigue Fracture of Graphene Polymer Nanocomposites

    This work investigated how structure size affects the quasi-static and fatigue behaviors of graphene polymer nanocomposites, a topic that has often been overlooked. The results showed that both quasi-static and fatigue failure of these materials scale nonlinearly with structure size, due to the significant Fracture Process Zone (FPZ) ahead of the crack tip induced by graphene nanomodification. Such a complicated size effect and scaling, in either the quasi-static or the fatigue scenario, cannot be described by Linear Elastic Fracture Mechanics (LEFM) but can be well captured by the Size Effect Law (SEL), which accounts for the FPZ. Thanks to the SEL, the enhanced quasi-static and fatigue fracture properties were properly characterized and shown to be independent of structure size. In addition, the differences in morphological and mechanical behavior between quasi-static fracture and fatigue fracture were identified and clarified in this work. The experimental data and analyses reported in this paper are important for a deep understanding of the mechanics of polymer-based nanocomposites and other quasi-brittle materials (e.g., fiber-reinforced polymers or their hybrids with nanoparticles), and for advancing the development of computational models capable of capturing size-dependent fracture under various loading conditions. Comment: 41 pages, 18 figures
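    The Size Effect Law referred to here is commonly written in Bažant's form $\sigma_N = \sigma_0/\sqrt{1+D/D_0}$, which becomes linear after the substitution $1/\sigma_N^2 = A\,D + C$. A minimal fitting sketch on synthetic numbers (not the paper's measurements):

```python
import numpy as np

def fit_sel(sizes, strengths):
    """Fit Bazant's size effect law  sigma_N = sigma0 / sqrt(1 + D/D0)
    by linear regression of 1/sigma_N^2 on size D:
        1/sigma_N^2 = A*D + C,  so  sigma0 = 1/sqrt(C),  D0 = C/A."""
    y = 1.0 / np.asarray(strengths) ** 2
    A, C = np.polyfit(sizes, y, 1)
    return 1.0 / np.sqrt(C), C / A               # (sigma0, D0)

def sel_strength(D, sigma0, D0):
    """Nominal strength predicted by the SEL."""
    return sigma0 / np.sqrt(1.0 + np.asarray(D) / D0)

# synthetic geometrically scaled specimens that follow the law exactly
sizes = np.array([10.0, 20.0, 40.0, 80.0])
strengths = sel_strength(sizes, sigma0=50.0, D0=25.0)
s0, d0 = fit_sel(sizes, strengths)
```

    Small specimens ($D \ll D_0$) approach a constant strength (strength criterion), while large ones approach the LEFM slope of $-1/2$ in a log-log plot, which is the nonlinear scaling the abstract describes.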

    Pure Message Passing Can Estimate Common Neighbor for Link Prediction

    Message Passing Neural Networks (MPNNs) have emerged as the {\em de facto} standard in graph representation learning. However, when it comes to link prediction, they often struggle and are surpassed by simple heuristics such as Common Neighbor (CN). This discrepancy stems from a fundamental limitation: while MPNNs excel at node-level representation, they stumble at encoding the joint structural features essential to link prediction, such as CN. To bridge this gap, we posit that, by harnessing the orthogonality of input vectors, pure message passing can indeed capture joint structural features. Specifically, we study the proficiency of MPNNs in approximating CN heuristics. Based on our findings, we introduce the Message Passing Link Predictor (MPLP), a novel link prediction model. MPLP taps into quasi-orthogonal vectors to estimate link-level structural features, all while preserving node-level complexities. Moreover, our approach demonstrates that leveraging message passing to capture structural features can offset MPNNs' expressiveness limitations, at the expense of estimation variance. We conduct experiments on benchmark datasets from various domains, where our method consistently outperforms the baseline methods. Comment: preprint
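    The core estimation idea can be illustrated concretely: give each node a random, quasi-orthogonal vector, aggregate once over neighbors, and read common-neighbor counts off inner products. A toy sketch (not the authors' implementation):

```python
import numpy as np

def estimate_common_neighbors(adj, dim=4096, seed=0):
    """One round of message passing over random node vectors. Entries of
    x are i.i.d. N(0, 1/dim), so the node vectors are quasi-orthogonal
    with roughly unit norm, and <h_u, h_v> with h = A x concentrates
    around the common-neighbor count |N(u) & N(v)|."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    x = rng.normal(scale=1.0 / np.sqrt(dim), size=(n, dim))
    h = adj @ x                      # aggregate each node's neighbor vectors
    return h @ h.T                   # (u, v) entry estimates CN(u, v)

# 4-cycle 0-1-2-3-0: nodes 0 and 2 share the two neighbors {1, 3}
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
cn_est = estimate_common_neighbors(adj)
cn_true = adj @ adj                  # exact counts (diagonal = degree)
```

    The cross terms between distinct random vectors average to zero, so the dot product is dominated by the shared-neighbor norms; the residual noise is the estimation variance the abstract trades for expressiveness.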

    Precheck Sequence Based False Base Station Detection During Handover: A Physical Layer Based Security Scheme

    False Base Station (FBS) attacks have been a severe security problem for cellular networks since the 2G era. During handover, the user equipment (UE) periodically receives state information from surrounding base stations (BSs) and uploads it to the source BS. The source BS compares the uploaded signal powers and shifts the UE to the BS that provides the strongest signal. An FBS can transmit a signal with suitable power and attract the UE to connect to it. In this paper, based on the 3GPP standard, a Precheck Sequence-based Detection (PSD) scheme is proposed to secure the UE's transition to a legal base station (LBS). The scheme first analyzes the structure of received signals in blocks and symbols. Several additional symbols are added to the current signal sequence for verification. By designing a long table of symbol sequences, every UE that needs handover is allocated a specific sequence from this table. Simulation results show that the PSD scheme outperforms existing schemes, even when a specific transmit power is designed for the FBS.
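    A toy sketch of the precheck idea, with a hypothetical table, allocation policy, and symbol values (the physical-layer signal details are abstracted away and are not the paper's design):

```python
import hashlib

def allocate_sequence(table, ue_id):
    """Allocate a precheck symbol sequence to a UE from the shared table.
    Hashing the UE id is an illustrative allocation policy, not the paper's."""
    idx = int(hashlib.sha256(ue_id.encode()).hexdigest(), 16) % len(table)
    return table[idx]

def append_precheck(block, sequence):
    """A legitimate BS appends the UE's allocated symbols to its signal block."""
    return block + sequence

def verify_precheck(received, sequence):
    """The UE accepts the handover target only if the trailing symbols match."""
    return received[-len(sequence):] == sequence

table = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]   # toy symbol-sequence table
seq = allocate_sequence(table, "ue-42")
legit = append_precheck([0, 0, 1], seq)              # LBS knows the sequence
fake = [0, 0, 1, 9, 9, 9, 9]                         # FBS cannot reproduce it
```

    The point of the table is that an FBS, which merely mimics signal power, does not know which sequence the UE was allocated, so its transmission fails the check.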

    Universal Link Predictor By In-Context Learning on Graphs

    Link prediction is a crucial task in graph machine learning, where the goal is to infer missing or future links within a graph. Traditional approaches leverage heuristic methods based on widely observed connectivity patterns, offering broad applicability and generalizability without the need for model training. Despite their utility, these methods are limited by their reliance on human-derived heuristics and lack the adaptability of data-driven approaches. Conversely, parametric link predictors excel at automatically learning connectivity patterns from data and achieve state-of-the-art performance, but they fall short of transferring directly across different graphs; instead, they require extensive training and hyperparameter optimization to adapt to each target graph. In this work, we introduce the Universal Link Predictor (UniLP), a novel model that combines the generalizability of heuristic approaches with the pattern-learning capabilities of parametric models. UniLP is designed to autonomously identify connectivity patterns across diverse graphs, ready for immediate application to any unseen graph dataset without targeted training. We address the challenge of conflicting connectivity patterns, which arise from the unique distributions of different graphs, through In-context Learning (ICL). This approach allows UniLP to dynamically adjust to various target graphs based on contextual demonstrations, thereby avoiding negative transfer. Through rigorous experimentation, we demonstrate UniLP's effectiveness in adapting to new, unseen graphs at test time, showing that it performs comparably to, or even outperforms, parametric models fine-tuned for specific datasets. Our findings highlight UniLP's potential to set a new standard in link prediction, combining the strengths of heuristic and parametric methods in a single, versatile framework. Comment: Preprint
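    One way to picture the in-context mechanism (an illustrative toy, not UniLP's architecture): score a candidate link by its similarity to labeled demonstration links from the target graph, with no gradient updates. The feature names and numbers below are invented for illustration.

```python
import numpy as np

def icl_link_score(candidate_feat, demo_feats, demo_labels, temp=1.0):
    """Score a candidate link as a softmax-weighted vote over labeled
    demonstration links (the in-context examples): closer demonstrations
    get more weight, and no parameters are updated."""
    d2 = ((demo_feats - candidate_feat) ** 2).sum(axis=1)  # squared distances
    w = np.exp(-d2 / temp)
    w /= w.sum()
    return float(w @ demo_labels)                # soft label in [0, 1]

# hypothetical 2-d structural features for links (e.g., scaled CN, Jaccard)
demos = np.array([[0.9, 0.8],    # demonstration of a true link
                  [0.1, 0.0]])   # demonstration of a non-link
labels = np.array([1.0, 0.0])
score_hi = icl_link_score(np.array([0.8, 0.7]), demos, labels)
score_lo = icl_link_score(np.array([0.0, 0.1]), demos, labels)
```

    Because the demonstrations come from the target graph, the same scorer adapts to graphs with conflicting connectivity patterns simply by swapping the context, which is the negative-transfer argument in the abstract.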

    Grouped Knowledge Distillation for Deep Face Recognition

    Compared with feature-based distillation methods, logits distillation relaxes the requirement of consistent feature dimensions between teacher and student networks, but its performance is deemed inferior in face recognition. One major challenge is that the lightweight student network has difficulty fitting the target logits due to its low model capacity, which is attributable to the significant number of identities in face recognition. Therefore, we seek to probe the target logits to extract the primary knowledge related to face identity and discard the rest, to make distillation more achievable for the student network. Specifically, there is a tail group with near-zero values in the prediction, containing only minor knowledge for distillation. To provide a clear perspective on its impact, we first partition the logits into two groups, i.e., a Primary Group and a Secondary Group, according to the cumulative probability of the softened prediction. Then, we reorganize the Knowledge Distillation (KD) loss of the grouped logits into three parts, i.e., Primary-KD, Secondary-KD, and Binary-KD. Primary-KD distills the primary knowledge from the teacher, Secondary-KD aims to refine minor knowledge but increases the difficulty of distillation, and Binary-KD ensures consistency of the knowledge distribution between teacher and student. We experimentally found that (1) Primary-KD and Binary-KD are indispensable for KD, and (2) Secondary-KD is the culprit restricting KD at the bottleneck. Therefore, we propose Grouped Knowledge Distillation (GKD), which retains Primary-KD and Binary-KD but omits Secondary-KD in the ultimate KD loss calculation. Extensive experimental results on popular face recognition benchmarks demonstrate the superiority of the proposed GKD over state-of-the-art methods. Comment: 9 pages, 2 figures, 7 tables, accepted by AAAI 202
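    The grouping described above can be sketched as follows. This is an illustrative reconstruction: the cumulative-probability threshold, temperature, and exact loss composition are assumptions, not the paper's hyperparameters.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def grouped_kd_loss(t_logits, s_logits, T=4.0, cum_thresh=0.9):
    """Sketch of a grouped KD loss: split classes into a Primary Group
    (top classes whose softened teacher probability accumulates to
    cum_thresh) and a Secondary Group (the near-zero tail), then sum
    Primary-KD and Binary-KD while omitting Secondary-KD."""
    pt, ps = softmax(t_logits, T), softmax(s_logits, T)
    order = np.argsort(pt)[::-1]                 # classes by teacher prob
    keep = order[np.cumsum(pt[order]) <= cum_thresh]
    if keep.size == 0:                           # always keep the top class
        keep = order[:1]
    kl = lambda p, q: float(np.sum(p * np.log(p / q)))
    # Primary-KD: KL over the Primary Group, renormalized within the group
    primary = kl(pt[keep] / pt[keep].sum(), ps[keep] / ps[keep].sum())
    # Binary-KD: KL between the two-group (primary vs. tail) probability masses
    bt = np.array([pt[keep].sum(), 1.0 - pt[keep].sum()])
    bs = np.array([ps[keep].sum(), 1.0 - ps[keep].sum()])
    binary = kl(bt, bs)
    return (primary + binary) * T * T            # usual T^2 scaling; no Secondary-KD

loss_same = grouped_kd_loss([5.0, 2.0, 0.0, -3.0], [5.0, 2.0, 0.0, -3.0])
loss_diff = grouped_kd_loss([5.0, 2.0, 0.0, -3.0], [0.0, 5.0, 2.0, -3.0])
```

    Dropping the per-class term over the tail is what removes the hard-to-fit Secondary-KD, while Binary-KD still forces the student to put the right total mass on the tail.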

    Towards an Understanding of Large Language Models in Software Engineering Tasks

    Large Language Models (LLMs) have drawn widespread attention and research due to their astounding performance in tasks such as text generation and reasoning. Derivative products, like ChatGPT, have been extensively deployed and are highly sought after. Meanwhile, the evaluation and optimization of LLMs in software engineering tasks, such as code generation, have become a research focus. However, there is still a lack of systematic research on the application and evaluation of LLMs in the field of software engineering. Therefore, this paper is the first to comprehensively investigate and collate the research and products combining LLMs with software engineering, aiming to answer two questions: (1) What are the current integrations of LLMs with software engineering? (2) Can LLMs effectively handle software engineering tasks? To find the answers, we collected related literature as extensively as possible from seven mainstream databases and selected 123 papers for analysis. We categorized these papers in detail and reviewed the current research status of LLMs from the perspective of seven major software engineering tasks, hoping to help researchers better grasp research trends and address issues when applying LLMs. Meanwhile, we also organized and presented papers with evaluation content to reveal the performance and effectiveness of LLMs in various software engineering tasks, providing guidance for researchers and developers to optimize