70 research outputs found

    An Empirical Assessment of IAS 40 Investment Property

    Get PDF
    This study examines the value relevance of the fair value model versus the cost model for investment properties under IAS 40 Investment Property. Contrary to the popular belief that fair value is the most relevant measurement attribute, we find that the coefficient estimate on investment properties for Chinese companies that adopted IAS 40’s fair value model is significantly smaller than its theoretical value and not significantly different from zero, suggesting that investors do not perceive the reported fair values of the sample firms as value relevant. Furthermore, investors tend to adjust the valuation of fair value companies’ non-investment property assets downward. The findings do not support the claim that fair value is superior to historical cost for investment property valuation, and they highlight the need for more implementation guidance from the IASB to enhance the value relevance of fair value estimates under IAS 40.
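    The value-relevance test described above is, in essence, a price-level regression in which a coefficient on investment property near its theoretical value of one indicates value relevance. A minimal sketch with entirely synthetic data and an assumed Ohlson-style specification follows; the variable names and numbers are hypothetical, not the paper's actual model or sample.

```python
import numpy as np

# Illustrative price-level regression (synthetic data, assumed specification):
#   P_i = a + b1*BV_i + b2*IP_i + b3*E_i + e_i
# where IP is investment property per share; under full value relevance
# the coefficient b2 should be close to its theoretical value of 1.
rng = np.random.default_rng(0)
n = 500
bv = rng.uniform(1, 10, n)   # book value per share, excluding investment property
ip = rng.uniform(0, 5, n)    # reported investment property per share
ep = rng.uniform(0, 2, n)    # earnings per share
# simulate a market that heavily discounts reported investment property (b2 = 0.1)
price = 0.5 + 1.0 * bv + 0.1 * ip + 4.0 * ep + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), bv, ip, ep])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
# coef[2] is the estimate on investment property; a value far below 1,
# as simulated here, mirrors the paper's low-value-relevance finding
```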

    Mobility Accelerates Learning: Convergence Analysis on Hierarchical Federated Learning in Vehicular Networks

    Full text link
    Hierarchical federated learning (HFL) enables distributed training of models across multiple devices, with the help of several edge servers and a cloud server, in a privacy-preserving manner. In this paper, we consider HFL with highly mobile devices, mainly targeting vehicular networks. Through convergence analysis, we show that mobility influences the convergence speed both by fusing the edge data and by shuffling the edge models. While mobility is usually considered a challenge from the communication perspective, we prove that it increases the convergence speed of HFL with edge-level heterogeneous data, since more diverse data can be incorporated. Furthermore, we demonstrate that a higher speed leads to faster convergence, since it accelerates the fusion of data. Simulation results show that mobility increases the model accuracy of HFL by up to 15.1% when training a convolutional neural network on the CIFAR-10 dataset. Comment: Submitted to IEEE for possible publication.
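    The hierarchical aggregation loop described above, with mobility modelled as clients re-associating with edge servers each round, can be sketched as follows. The toy objective, update rule, and equal-size reshuffling are illustrative assumptions, not the paper's actual system model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients, n_edges, dim = 12, 3, 4
models = rng.normal(size=(n_clients, dim))  # per-client model parameters
init_mean = models.mean(axis=0)             # kept for reference

def toy_grad(w):
    # gradient of the toy objective ||w||^2 (stands in for local SGD)
    return 2 * w

for rnd in range(5):
    # mobility: clients re-associate with edge servers each round
    # (a random permutation into equal-size groups, as a simplification)
    assignment = rng.permutation(np.repeat(np.arange(n_edges), n_clients // n_edges))
    # local update on every client
    models = models - 0.1 * toy_grad(models)
    # edge aggregation: each edge server averages its current clients' models
    edge_models = np.array([models[assignment == e].mean(axis=0)
                            for e in range(n_edges)])
    # cloud aggregation: average the edge models, broadcast back to all clients
    global_model = edge_models.mean(axis=0)
    models = np.tile(global_model, (n_clients, 1))
```

With equal-size edge groups, each round contracts the global model toward the optimum; mobility's benefit in the paper comes from the reshuffling mixing heterogeneous edge data, which this toy objective does not capture.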

    A Systematic Evaluation of Large Language Models on Out-of-Distribution Logical Reasoning Tasks

    Full text link
    Large language models (LLMs), such as GPT-3.5 and GPT-4, have greatly advanced the performance of artificial systems on various natural language processing tasks to human-like levels. However, their generalisation and robustness in logical reasoning remain under-evaluated. To probe this ability, we propose three new logical reasoning datasets, "ReClor-plus", "LogiQA-plus" and "LogiQAv2-plus", each featuring three subsets: the first with randomly shuffled options, the second with the correct choices replaced by "none of the other options are correct", and the third combining the previous two perturbations. We carry out experiments on these datasets with both discriminative and generative LLMs and show that these simple tricks greatly hinder the models' performance. Despite their superior performance on the original publicly available datasets, all models struggle to answer our newly constructed datasets. We show that introducing task variations by perturbing a sizable training set can markedly improve a model's generalisation and robustness in logical reasoning tasks. Moreover, applying logic-driven data augmentation for fine-tuning, combined with prompting, can enhance the generalisation performance of both discriminative and generative large language models. These results offer insights into assessing and improving the generalisation and robustness of large language models on logical reasoning tasks. We make our source code and data publicly available at \url{https://github.com/Strong-AI-Lab/Logical-and-abstract-reasoning}. Comment: Accepted for oral presentation at the LLM@IJCAI 2023 non-archival symposium.
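    The two perturbations described above (option shuffling and answer substitution) amount to small transformations of each multiple-choice item. A minimal sketch, assuming a simple dict layout for items — the actual dataset schema may differ:

```python
import random

def shuffle_options(item, seed=0):
    """Perturbation 1: randomly reorder the options, remapping the answer index."""
    order = list(range(len(item["options"])))
    random.Random(seed).shuffle(order)
    return {"question": item["question"],
            "options": [item["options"][i] for i in order],
            "answer": order.index(item["answer"])}

def replace_correct(item):
    """Perturbation 2: substitute the correct choice with
    'none of the other options are correct'."""
    options = list(item["options"])
    options[item["answer"]] = "none of the other options are correct"
    return {"question": item["question"], "options": options,
            "answer": item["answer"]}

# hypothetical item for illustration
item = {"question": "Which conclusion follows from the passage?",
        "options": ["A", "B", "C", "D"], "answer": 2}
shuffled = shuffle_options(item)
replaced = replace_correct(item)
```

The third subset simply composes the two, e.g. `replace_correct(shuffle_options(item))`.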

    Enhancing Logical Reasoning of Large Language Models through Logic-Driven Data Augmentation

    Full text link
    Combining large language models with logical reasoning enhances their capacity to address problems robustly and reliably. Nevertheless, the intricate nature of logical reasoning poses challenges to gathering reliable data from the web for building comprehensive training datasets, which in turn affects performance on downstream tasks. To address this, we introduce a novel logic-driven data augmentation approach, AMR-LDA. AMR-LDA converts the original text into an Abstract Meaning Representation (AMR) graph, a structured semantic representation that encapsulates the logical structure of the sentence, upon which operations are performed to generate logically modified AMR graphs. The modified AMR graphs are subsequently converted back into text to create augmented data. Notably, our methodology is architecture-agnostic: it enhances generative large language models, such as GPT-3.5 and GPT-4, through prompt augmentation, and fine-tunes discriminative large language models through contrastive learning with logic-driven data augmentation. Empirical evidence underscores the efficacy of our proposed method, with performance improvements across seven downstream tasks, such as logical reasoning reading comprehension, textual entailment, and natural language inference. Furthermore, our method ranked first on the ReClor leaderboard \url{https://eval.ai/web/challenges/challenge-page/503/leaderboard/1347}. The source code and data are publicly available at \url{https://github.com/Strong-AI-Lab/Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning}. Comment: Accepted for oral presentation at the LLM@IJCAI 2023 non-archival symposium.
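    The logical modifications AMR-LDA performs include equivalences such as contraposition. As a text-level simplification (the actual method operates on the AMR graph, not on strings), contraposition of a simple conditional can be sketched as:

```python
import re

def contrapositive(sentence):
    """Rewrite 'If A, then B.' as its logically equivalent contrapositive.

    A naive string-level illustration of one AMR-LDA equivalence; it only
    handles sentences of the exact form 'If A, then B.'.
    """
    m = re.match(r"If (.+), then (.+)\.", sentence)
    if not m:
        return None  # pattern not recognised; real AMR-LDA is far more general
    a, b = m.groups()
    return f"If it is not true that {b}, then it is not true that {a}."

augmented = contrapositive("If it rains, then the ground is wet.")
```

Pairing a sentence with its contrapositive (label: equivalent) or with a corrupted variant (label: not equivalent) yields the positive/negative pairs used for contrastive fine-tuning.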

    Efficient COI barcoding using high throughput single-end 400 bp sequencing

    Get PDF
    Background: Over the last decade, the rapid development of high-throughput sequencing platforms has accelerated species description and assisted morphological classification through DNA barcoding. However, current high-throughput DNA barcoding methods cannot obtain full-length barcode sequences due to read length limitations (e.g. a maximum read length of 300 bp for Illumina’s MiSeq system), or are hindered by a relatively high cost or low sequencing output (e.g. a maximum of eight million reads per cell for PacBio’s SEQUEL II system). Results: Pooled cytochrome c oxidase subunit I (COI) barcodes from individual specimens were sequenced on the MGISEQ-2000 platform using the single-end 400 bp (SE400) module. We present a bioinformatic pipeline, HIFI-SE, that takes reads generated from the 5′ and 3′ ends of the COI barcode region and assembles them into full-length barcodes. HIFI-SE is written in Python and includes four function modules: filter, assign, assembly, and taxonomy. We applied HIFI-SE to a set of 845 samples (30 marine invertebrates, 815 insects) and delivered a total of 747 fully assembled COI barcodes as well as 70 Wolbachia and fungal symbionts. Compared to their corresponding Sanger sequences (72 available), nearly all samples (71/72) were correctly and accurately assembled, including 46 with a similarity score of 100% and 25 of ca. 99%. Conclusions: The HIFI-SE pipeline represents an efficient way to produce standard full-length barcodes, while the reasonable cost and high sensitivity of our method can contribute considerably more DNA barcodes under the same budget. Our method thereby advances DNA-based species identification from diverse ecosystems and increases the number of relevant applications.
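    The core assembly step — joining a 5′ read and a 3′ read that overlap in the middle of the ~658 bp COI barcode — can be sketched with a naive suffix/prefix overlap search. This is an illustration only: the pipeline's actual algorithm and read handling are more involved, and the sequence below is a random stand-in.

```python
import random

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

def merge_reads(read5, read3, min_overlap=20):
    """Merge a 5' read with a 3' read (reported as a reverse complement)
    by finding the longest suffix/prefix overlap between them."""
    rc3 = revcomp(read3)
    for k in range(min(len(read5), len(rc3)), min_overlap - 1, -1):
        if read5[-k:] == rc3[:k]:
            return read5 + rc3[k:]
    return None  # no sufficient overlap found

# random 160 bp stand-in for a full-length barcode
barcode = "".join(random.Random(0).choices("ACGT", k=160))
read5 = barcode[:100]           # read off the 5' end
read3 = revcomp(barcode[60:])   # 3'-end read, as the sequencer reports it
merged = merge_reads(read5, read3)
```

Two 400 bp SE400 reads from opposite ends of a 658 bp barcode overlap by roughly 140 bp, which is what makes full-length assembly possible.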

    Multi-Level Variational Spectroscopy using a Programmable Quantum Simulator

    Full text link
    Energy spectroscopy is a powerful tool with diverse applications across various disciplines. The advent of programmable digital quantum simulators opens new possibilities for conducting spectroscopy on various models using a single device. Variational quantum-classical algorithms have emerged as a promising approach for achieving such tasks on near-term quantum simulators, despite significant quantum and classical resource overheads. Here, we experimentally demonstrate multi-level variational spectroscopy for fundamental many-body Hamiltonians using a superconducting programmable digital quantum simulator. By exploiting symmetries, we effectively reduce circuit depth and the number of optimization parameters, allowing us to go beyond the ground state. Combined with the subspace-search method, this lets us achieve full spectroscopy of a 4-qubit Heisenberg spin chain, with an average deviation of 0.13 between experimental and theoretical energies, assuming unity coupling strength. When extended to 8-qubit Heisenberg and transverse-field Ising Hamiltonians, our method successfully determines the three lowest energy levels. In achieving the above, we introduce a circuit-agnostic waveform compilation method that enhances the robustness of our simulator against signal crosstalk. Our study highlights symmetry-assisted resource efficiency in variational quantum algorithms and lays the foundation for practical spectroscopy on near-term quantum simulators, with potential applications in quantum chemistry and condensed matter physics.
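    The theoretical energies the experiment is compared against come from exact diagonalization of the Heisenberg Hamiltonian. A classical sketch for the 4-qubit chain with unity coupling (open boundary conditions are assumed here; the abstract does not state them):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_ops, n=4):
    """Tensor a dict {site: 2x2 operator} up to an n-qubit operator."""
    out = np.array([[1.0 + 0j]])
    for q in range(n):
        out = np.kron(out, site_ops.get(q, I2))
    return out

# H = sum_i (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1}), open chain, J = 1
H = sum(op_on({i: P, i + 1: P})
        for i in range(3) for P in (X, Y, Z))

energies = np.linalg.eigvalsh(H)  # full 16-level spectrum, ascending
```

A variational algorithm such as subspace search minimizes a weighted sum of Rayleigh quotients over orthogonal trial states, and its converged values are checked against `energies` as above.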