264 research outputs found

    Testing the CAPM and Three Factors Model in China: Evidence from the Shanghai Stock Exchange

    Get PDF
    Since its inception, China’s stock market has grown rapidly and has become one of the most important emerging markets in the world. However, much of the popular financial media depicts China’s stock market as irrational. Moreover, empirical studies on asset pricing in China’s stock market do not reach consistent conclusions across different periods. This study tests the Capital Asset Pricing Model (CAPM) and the Fama-French Three-Factor Model on the Shanghai Stock Exchange, China. To test the validity of the CAPM, I follow the Fama-MacBeth (1973) procedure on a data set of 180 A-shares at daily frequency. Considerable evidence supports the conclusion that the CAPM is not suitable for predicting stock returns in China’s stock market: beta alone cannot measure risk, and stock returns are not linearly related to it. To test the validity of the Three-Factor Model, I employ the Fama-French (1993) procedure to examine whether the size and book-to-market effects exist in China. Empirical results confirm the “small firm effect” but challenge Fama and French (1996), who state that value firms outperform growth firms. The results also provide evidence that the Three-Factor Model has better explanatory power than the CAPM. The findings suggest that mean-variance efficient investors can form portfolios of small and low book-to-market equity firms to achieve higher risk-adjusted returns.
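    The two-pass Fama-MacBeth (1973) procedure mentioned above can be sketched as follows. This is a minimal illustration on synthetic data (all variable names and the data-generating process are hypothetical), not the study's actual estimation code:

```python
import numpy as np

def fama_macbeth(excess_returns, market_excess):
    """Two-pass Fama-MacBeth procedure.

    Pass 1: time-series regression per asset to estimate betas.
    Pass 2: cross-sectional regression of returns on betas each period;
    the average slope estimates the market risk premium."""
    T, N = excess_returns.shape
    X = np.column_stack([np.ones(T), market_excess])
    betas = np.linalg.lstsq(X, excess_returns, rcond=None)[0][1]   # shape (N,)
    Z = np.column_stack([np.ones(N), betas])
    gammas = np.array([np.linalg.lstsq(Z, excess_returns[t], rcond=None)[0][1]
                       for t in range(T)])
    premium = gammas.mean()
    t_stat = premium / (gammas.std(ddof=1) / np.sqrt(T))
    return premium, t_stat

# Synthetic panel with a known positive market premium.
rng = np.random.default_rng(0)
T, N = 500, 30
mkt = rng.normal(0.5, 1.0, T)                      # market excess returns
true_betas = rng.uniform(0.5, 1.5, N)
rets = np.outer(mkt, true_betas) + rng.normal(0.0, 0.5, (T, N))
premium, t_stat = fama_macbeth(rets, mkt)
```

    A CAPM test in this framework asks whether the cross-sectional intercepts are zero and the estimated premium is positive and significant; the abstract's rejection of the CAPM corresponds to those conditions failing on the A-share sample.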

    Fabrication of imitative cracks by 3D printing for electromagnetic nondestructive testing and evaluations

    Get PDF
    This study demonstrates that 3D printing technology offers a simple, easy, and cost-effective method to fabricate artificial flaws that simulate real cracks from the viewpoint of eddy current testing. The method does not attempt to produce a flaw whose morphology mirrors that of a real crack, but instead produces a relatively simple artificial flaw whose parameters with dominant effects on eddy current signals can be quantitatively controlled. Three artificial flaws in type 316L austenitic stainless steel plates were fabricated using a powder-bed-based laser metal additive manufacturing machine. The three artificial flaws were designed to have the same length, depth, and opening but different branching and electrical contacts between flaw surfaces. The flaws were measured by eddy current testing using an absolute-type pancake probe. The signals due to the three flaws clearly differed from each other, although the flaws had the same length and depth. These results were supported by subsequent destructive tests and finite element analyses.

    EDIS: Entity-Driven Image Search over Multimodal Web Content

    Full text link
    Making image retrieval methods practical for real-world search applications requires significant progress in dataset scales, entity comprehension, and multimodal information fusion. In this work, we introduce Entity-Driven Image Search (EDIS), a challenging dataset for cross-modal image search in the news domain. EDIS consists of 1 million web images from actual search engine results and curated datasets, with each image paired with a textual description. Unlike datasets that assume a small set of single-modality candidates, EDIS reflects real-world web image search scenarios by including a million multimodal image-text pairs as candidates. EDIS encourages the development of retrieval models that simultaneously address cross-modal information fusion and matching. To achieve accurate ranking results, a model must: 1) understand named entities and events from text queries, 2) ground entities onto images or text descriptions, and 3) effectively fuse textual and visual representations. Our experimental results show that EDIS challenges state-of-the-art methods with dense entities and a large-scale candidate set. The ablation study also shows that fusing textual features with visual features is critical to improving retrieval results.
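    The fusion-and-ranking requirement can be illustrated with a simple late-fusion scorer over precomputed embeddings. This is a hedged sketch with random vectors; the embeddings and the weighting `alpha` are assumptions for illustration, not EDIS's actual model:

```python
import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def rank_candidates(query_emb, image_embs, caption_embs, alpha=0.5):
    """Late fusion: score each image-text candidate by a weighted sum of
    image-query and caption-query cosine similarity; return indices best
    first along with the scores."""
    q = l2_normalize(query_emb)
    sims = (alpha * (l2_normalize(image_embs) @ q)
            + (1 - alpha) * (l2_normalize(caption_embs) @ q))
    return np.argsort(-sims), sims

rng = np.random.default_rng(0)
d, n = 128, 8
query = rng.normal(size=d)
image_embs = rng.normal(size=(n, d))
caption_embs = rng.normal(size=(n, d))
image_embs[2] = query          # plant candidate 2 as the true match
caption_embs[2] = query
ranking, sims = rank_candidates(query, image_embs, caption_embs)
```

    Real systems replace the random vectors with learned encoders and often fuse earlier (e.g. cross-attention) rather than at the score level; the abstract's point is that score-level shortcuts like this are insufficient for entity-dense queries.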

    Overhead Line Defect Recognition Based on Unsupervised Semantic Segmentation

    Full text link
    Overhead line inspection greatly benefits from defect recognition using visible-light imagery. To address the limitations of existing feature extraction techniques and the heavy data dependency of deep learning approaches, this paper introduces a novel defect recognition framework built on the Faster R-CNN network and complemented by unsupervised semantic segmentation. The approach identifies the type and location of the target equipment, uses semantic segmentation to separate the device from its background, and finally employs similarity measures and logical rules to categorize the type of defect. Experimental results indicate that this methodology focuses on the equipment rather than on the defects when identifying issues in overhead lines, leading to a notable enhancement in accuracy and impressive adaptability, and thus offering a fresh perspective for automating the inspection of distribution network equipment.
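    The final rule-based stage described above (a similarity measure combined with a logical rule) might look like the following sketch. The feature vectors, threshold, and labels are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def classify_device_region(device_feats, reference_feats, threshold=0.8):
    """Compare features of the segmented device region against a healthy
    reference template; low cosine similarity triggers a defect label
    (the threshold value is a hypothetical choice)."""
    a = device_feats / np.linalg.norm(device_feats)
    b = reference_feats / np.linalg.norm(reference_feats)
    similarity = float(a @ b)
    label = "defect" if similarity < threshold else "normal"
    return label, similarity

# A region identical to the reference is "normal"; a dissimilar one is not.
label_ok, sim_ok = classify_device_region(np.array([1.0, 0.0]),
                                          np.array([1.0, 0.0]))
label_bad, sim_bad = classify_device_region(np.array([1.0, 0.0]),
                                            np.array([0.0, 1.0]))
```

    Because the rule operates on the segmented device rather than on a learned defect class, it needs no labeled defect examples, which is the data-dependency advantage the abstract claims.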

    Theoretical Explanation and Improvement of Deep Learning-aided Cryptanalysis

    Get PDF
    At CRYPTO 2019, Gohr demonstrated that differential-neural distinguishers (DNDs) for Speck32/64 can learn more features than classical cryptanalysis's differential distribution tables (DDTs). Furthermore, Gohr devised a non-classical key recovery procedure by combining the Upper Confidence Bound (UCB) strategy and the BayesianKeySearch algorithm. Consequently, the time complexity of 11-round key recovery attacks on Speck32/64 is significantly reduced compared with the state-of-the-art results in classical cryptanalysis. This advancement in deep learning-assisted cryptanalysis has opened up new possibilities. However, the specific encryption features exploited by DNDs remain unclear. In this paper, we begin by analyzing the features learned by a DND based on the probability distribution of a ciphertext pair. Our analysis reveals that the DND not only learns the differential features of the ciphertext pair but also captures the XOR information of the left and right branches of the ciphertext pair. This explains why the performance of DNDs can outperform DDTs in certain cases. For other ciphers, we can likewise predict whether deep learning methods can achieve superior results to classical methods based on the probability distribution of the ciphertext pair. Next, we modify the input data format and network structure based on the specific features that can be learned, in order to train DNDs specifically. With these modifications, it is possible to reduce the number of network parameters to only 1/16 of that of the previous networks while maintaining high precision. Additionally, the training time for the DNDs is significantly reduced. Finally, to improve the efficiency of deep learning-assisted cryptanalysis, we introduce Bayes-UCB to select promising ciphertext structures more efficiently. We also introduce an improved BayesianKeySearch algorithm that retains the guessed keys with the highest scores during key guessing.
    We use both methods to launch 11-round, 12-round, and 13-round key recovery attacks on Speck32/64. The results show that, under the same conditions, the success rate of the 11-round key recovery attack increases from Gohr's 36.1% to 52.8%, that of the 12-round attack from Gohr's 39% to 50%, and that of the 13-round attack from Zhang et al.'s 21% to 24%. In addition, the time complexity of these attacks is significantly reduced.
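    For reference, Speck32/64 (the cipher targeted by these attacks) is small enough to sketch in full. The sketch below follows the published specification (16-bit words, rotation amounts 7 and 2, 22 rounds); the round-trip check is only a self-consistency exercise, not a cryptanalytic tool:

```python
MASK = 0xFFFF  # Speck32/64 operates on 16-bit words

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def ror(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK

def key_schedule(key):
    """key = (l2, l1, l0, k0), four 16-bit words; returns the 22 round keys."""
    l = [key[2], key[1], key[0]]      # l0, l1, l2
    k = [key[3]]
    for i in range(21):
        l.append(((k[i] + ror(l[i], 7)) & MASK) ^ i)
        k.append(rol(k[i], 2) ^ l[i + 3])
    return k

def encrypt(pt, round_keys):
    x, y = pt
    for rk in round_keys:
        x = ((ror(x, 7) + y) & MASK) ^ rk
        y = rol(y, 2) ^ x
    return x, y

def decrypt(ct, round_keys):
    x, y = ct
    for rk in reversed(round_keys):   # each step inverts one round
        y = ror(x ^ y, 2)
        x = rol(((x ^ rk) - y) & MASK, 7)
    return x, y
```

    A DND in Gohr-style attacks is trained on pairs of such ciphertexts produced from plaintext pairs with a fixed input difference, which is where the XOR-of-branches features discussed above come from.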

    VELMA: Verbalization Embodiment of LLM Agents for Vision and Language Navigation in Street View

    Full text link
    Incremental decision making in real-world environments is one of the most challenging tasks in embodied artificial intelligence. One particularly demanding scenario is Vision and Language Navigation (VLN), which requires visual and natural language understanding as well as spatial and temporal reasoning capabilities. The embodied agent needs to ground its understanding of navigation instructions in observations of a real-world environment like Street View. Despite the impressive results of LLMs in other research areas, how best to connect them with an interactive visual environment remains an open problem. In this work, we propose VELMA, an embodied LLM agent that uses a verbalization of the trajectory and of visual environment observations as a contextual prompt for the next action. Visual information is verbalized by a pipeline that extracts landmarks from the human-written navigation instructions and uses CLIP to determine their visibility in the current panorama view. We show that VELMA is able to successfully follow navigation instructions in Street View with only two in-context examples. We further finetune the LLM agent on a few thousand examples and achieve a 25%-30% relative improvement in task completion over the previous state of the art on two datasets. Comment: Accepted at AAAI 202
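    The verbalization idea can be illustrated with a simple prompt builder. The template below is an assumption for illustration, not VELMA's exact format, and the CLIP visibility check it relies on is not shown:

```python
def verbalize(instruction, trajectory, visible_landmarks):
    """Render the agent's state as a text prompt for the LLM.

    trajectory: list of past actions, e.g. ["forward", "turn left"].
    visible_landmarks: landmark names judged visible in the current
    panorama (e.g. by thresholding CLIP image-text similarity)."""
    lines = [f"Instructions: {instruction}", "Actions so far:"]
    lines += [f"- {a}" for a in trajectory]
    if visible_landmarks:
        lines.append("You can see: " + ", ".join(visible_landmarks) + ".")
    lines.append("Next action:")
    return "\n".join(lines)

prompt = verbalize(
    "Walk past the bakery and turn left at the bank.",
    ["forward", "forward"],
    ["bakery"],
)
```

    The LLM's completion of the final line is parsed back into a discrete action, so the entire perception-action loop is mediated by text.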