
    Stable Isotope Paleolimnology of Barry Lake, Ontario, Canada Since AD ~1268

    The paleolimnology of Barry Lake, SE Ontario, Canada, is described using mineralogy, magnetic susceptibility, carbon:nitrogen ratios, mass accumulation rates, grain size, δ¹⁸O and δ¹³C of authigenic calcite and mollusc aragonite, δ¹³C and δ¹⁵N of organic matter, and archival records. These sediments span the Medieval Warm Period (MWP), the Little Ice Age (LIA), and human settlement. Interval I proxies (AD 1268-1350, MWP) indicate warmer and drier conditions and elevated lacustrine production. Interval II (AD 1350-1615) was cooler and wetter, with lower lacustrine production and low-oxygen conditions causing loss of shelly fauna. Interval III (AD 1615-1850, LIA) was colder, with lower lacustrine production beginning at AD 1720 and European activity beginning at AD 1830. Interval IV (AD 1850-2011) is marked by rising temperature and lacustrine production, declining human impact, and, since AD ~1950, a new nitrogen input (fertilizer?). These data provide a baseline against which future climatic and anthropogenic changes affecting Barry Lake can be assessed.

    Essays in financial intermediation

    The thesis consists of three papers. "Credit Rating and Competition" (co-authored with Pragyan Deb and Nelson Camanho) studies the behaviour of credit rating agencies in a competitive framework in the presence of conflicts of interest. We show that competition for market share through reputation is insufficient to discipline rating agencies in equilibrium. More importantly, our results suggest that, in most cases, competition will aggravate the lax behaviour of rating agencies, resulting in greater ratings inflation. This result has important policy implications, since it suggests that enhanced competition in the ratings industry is likely to make the situation worse. "Credit Default Swaps: Default Risk, Counter-party Risk and Systemic Risk" examines the implications of CDS for systemic risk. I show that CDS can contribute to systemic risk in two ways: through counter-party risk and through the sharing of default risks. A central clearing house, which can only reduce counter-party risk, is by no means a panacea. More importantly, excessive risk taken by one reckless institution may spread to the entire financial system via the CDS market. This could potentially explain the US government's decision to bail out AIG during the recent financial crisis. Policies requiring regulatory disclosure of CDS trades would be desirable. "Investor Cash Flow and Mutual Fund Behaviour" (co-authored with Zhigang Qiu) analyzes the trading incentives of mutual fund managers. In open-ended funds, investors are only willing to invest in the fund when the share price of the fund is expected to increase, i.e., when the fund is expected to make profits in the future. We show that the fund manager may buy the asset even when he perceives it to be over-valued, given that his portfolio choices are disclosed to investors and that he is paid a fixed fraction of the terminal value of the fund.

    A comparative study on data science and information science: From the perspective of job market demands in China

    With the development of big data, data science-related positions are in high demand in the job market. Since information science and data science greatly overlap and share similar concerns, this paper compares the two from the perspective of job market demands in China. We crawled 2,680 recruitment posts related to data science and information science and then conducted a comparative study of the two domains in terms of skills, salary, and clusters of position responsibilities. The results showed that the two domains place different emphases on skills, qualification standards, and application areas.
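
    The paper does not reproduce its analysis pipeline here; the following is a minimal, hypothetical sketch of how position responsibilities could be clustered with TF-IDF features and k-means. The field names, example posts, and cluster count are illustrative assumptions, and real Chinese-language posts would first need a Chinese tokenizer such as jieba.

        # Hypothetical clustering sketch (not the paper's code): TF-IDF + k-means
        # over a "responsibilities" field of crawled recruitment posts.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        posts = [  # stand-ins for the 2,680 crawled posts
            {"domain": "data science", "responsibilities": "build machine learning models and data pipelines"},
            {"domain": "information science", "responsibilities": "manage library metadata and retrieval systems"},
        ]  # real Chinese-language text would need a Chinese tokenizer (e.g. jieba)

        texts = [p["responsibilities"] for p in posts]
        X = TfidfVectorizer(max_features=5000, stop_words="english").fit_transform(texts)

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # cluster count is a guess
        for post, label in zip(posts, labels):
            print(post["domain"], "-> cluster", label)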

    FX Resilience around the World: Fighting Volatile Cross-Border Capital Flows

    We show that capital flow (CF) volatility exerts an adverse effect on exchange rate (FX) volatility, regardless of whether capital controls have been put in place. However, this effect can be significantly moderated by certain macroeconomic fundamentals reflecting trade openness, foreign asset holdings, monetary policy easing, fiscal sustainability, and financial development. Once these macroeconomic fundamentals pass their threshold levels, the adverse effect of CF volatility may become negligible. We further construct an intuitive FX resilience measure, which provides an assessment of the strength of a country's exchange rates.
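
    The threshold argument can be illustrated with a simple interaction regression in which the effect of CF volatility on FX volatility is allowed to differ above and below a fundamental's threshold. The sketch below uses synthetic data, an arbitrary threshold, and a stand-in fundamental; it is not the paper's specification.

        # Illustrative threshold-interaction regression on synthetic data (not the
        # paper's specification): the CF-volatility coefficient changes once a macro
        # fundamental (here a stand-in for trade openness) passes a threshold.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        cf_vol = rng.gamma(2.0, 1.0, n)              # capital-flow volatility
        openness = rng.uniform(0.0, 1.0, n)          # assumed fundamental
        above = (openness > 0.5).astype(float)       # 0.5 is an arbitrary threshold
        fx_vol = 0.4 * cf_vol - 0.3 * cf_vol * above + rng.normal(0.0, 0.5, n)

        X = sm.add_constant(np.column_stack([cf_vol, above, cf_vol * above]))
        fit = sm.OLS(fx_vol, X).fit()
        print(fit.params)  # a negative interaction term means fundamentals moderate the effect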

    Design of cycloidal rays in optical waveguides in analogy to the fastest descending problem

    In this work, we present the design of cycloidal waveguides from a gradient refractive index (GRIN) medium, in analogy to the fastest descending (brachistochrone) problem in classical mechanics. Light rays propagate along cycloids in this medium, whose refractive index can be determined by relating it to the descent speed under gravity. The medium can be used as a GRIN lens or waveguide, and its frequency-specific focusing and imaging properties are discussed. The results suggest that the waveguide can be viewed as an optical filter whose frequency response characteristics change with the refractive index profile and the device geometry. Comment: 10 pages, 6 figures.
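
    The analogy invoked here is the textbook brachistochrone correspondence: Fermat's principle with n(y) proportional to 1/v(y) = 1/sqrt(2gy) yields ray paths that are cycloids. The sketch below encodes that profile and the cycloid parametrization with illustrative parameter values, not the paper's actual device design.

        # Textbook brachistochrone-optics analogy (illustrative values, not the
        # paper's design): with n(y) ~ 1/sqrt(y), rays follow cycloids
        # x = a*(t - sin t), y = a*(1 - cos t).
        import numpy as np

        n_ref, y_ref = 1.5, 1.0e-6   # assumed reference index and depth

        def grin_index(y):
            """GRIN profile n(y) = n_ref * sqrt(y_ref / y)."""
            return n_ref * np.sqrt(y_ref / y)

        def cycloid_ray(a, num=200):
            """Sample one cycloid arch of rolling radius a."""
            t = np.linspace(0.0, 2.0 * np.pi, num)
            return a * (t - np.sin(t)), a * (1.0 - np.cos(t))

        x, y = cycloid_ray(a=2.0e-6)
        print(x.max(), y.max())                                  # ray span and maximum depth
        print(grin_index(np.array([0.5e-6, 1.0e-6, 2.0e-6])))    # index decreases with depth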

    Multi-Agent-Based Cloud Architecture of Smart Grid

    The power system is a huge, hierarchically controlled network. Large volumes of data exist within the system, and the requirements for real-time analysis and processing are high. With the construction of the smart grid, these requirements will increase further. The emergence of cloud computing provides an effective way to solve these problems at low cost, efficiently, and reliably. This paper analyzes the feasibility of cloud computing for the construction of the smart grid and extends cloud computing to cloud-client computing. Through the "Energy Hub", the microgrid is separated into a three-tier network that matches the concept of cloud-client computing. Multi-agent technology is introduced to control each node in the system. On this basis, a cloud architecture for the smart grid is proposed. Finally, an example is given to illustrate the application of cloud computing in the power grid CPS structure.
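
    As a purely illustrative sketch of the agent-per-node, three-tier idea (device agents reporting to an "Energy Hub" tier, which reports to a cloud tier), the toy classes below aggregate measurements up the hierarchy; the tier names, fields, and methods are assumptions rather than the paper's design.

        # Toy hierarchy only; tier names, fields, and methods are assumptions.
        from dataclasses import dataclass, field

        @dataclass
        class NodeAgent:
            name: str
            tier: str                                  # e.g. "device", "microgrid", "cloud"
            children: list = field(default_factory=list)

            def local_measurement(self) -> float:
                return 0.0                             # placeholder for a real meter reading

            def collect(self) -> dict:
                """Aggregate this node's reading with everything reported by its children."""
                readings = {self.name: self.local_measurement()}
                for child in self.children:
                    readings.update(child.collect())
                return readings

        devices = [NodeAgent(f"meter-{i}", "device") for i in range(3)]
        hub = NodeAgent("energy-hub-1", "microgrid", children=devices)
        cloud = NodeAgent("grid-cloud", "cloud", children=[hub])
        print(cloud.collect())                         # cloud tier sees all node readings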

    A Data-Centric Solution to NonHomogeneous Dehazing via Vision Transformer

    Recent years have witnessed increased interest in image dehazing. Many deep learning methods have been proposed to tackle this challenge and have made significant accomplishments in dealing with homogeneous haze. However, these solutions cannot maintain comparable performance when applied to images with non-homogeneous haze, e.g., the NH-HAZE23 dataset introduced by the NTIRE challenges. One reason for such failures is that non-homogeneous haze does not obey one of the assumptions required for modeling homogeneous haze. In addition, traditional end-to-end training approaches require a large number of pairs of non-homogeneous hazy images and their clean counterparts, while the NH-HAZE23 dataset is of limited size. Although it is possible to augment the NH-HAZE23 dataset by leveraging other non-homogeneous dehazing datasets, we observe that it is necessary to design a proper data-preprocessing approach that reduces the distribution gap between the target dataset and the augmented one. This finding aligns with the essence of data-centric AI. With a novel network architecture and a principled data-preprocessing approach that systematically enhances data quality, we present an innovative dehazing method. Specifically, we apply RGB-channel-wise transformations on the augmented datasets and incorporate state-of-the-art transformers as the backbone in a two-branch framework. We conduct extensive experiments and an ablation study to demonstrate the effectiveness of the proposed method. Comment: Accepted by CVPRW 2023.
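
    One common instance of an RGB-channel-wise transformation is per-channel mean/standard-deviation matching between an auxiliary dataset and the target dataset. The sketch below shows that transform as an assumption-laden illustration, not necessarily the exact transform used in the paper.

        # Assumed per-channel mean/std matching; the paper's exact transform may differ.
        import numpy as np

        def channel_stats(images):
            """Per-channel mean and std over a batch of HxWx3 images in [0, 1]."""
            flat = np.stack(images).reshape(-1, 3)
            return flat.mean(axis=0), flat.std(axis=0)

        def match_channels(image, src_stats, tgt_stats):
            """Shift and scale each RGB channel from source statistics to target statistics."""
            (src_mean, src_std), (tgt_mean, tgt_std) = src_stats, tgt_stats
            return np.clip((image - src_mean) / (src_std + 1e-8) * tgt_std + tgt_mean, 0.0, 1.0)

        aux = [np.random.rand(8, 8, 3) for _ in range(4)]      # stand-ins for auxiliary hazy images
        target = [np.random.rand(8, 8, 3) for _ in range(4)]   # stand-ins for NH-HAZE23 images
        aligned = [match_channels(im, channel_stats(aux), channel_stats(target)) for im in aux]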

    Check Me If You Can: Detecting ChatGPT-Generated Academic Writing using CheckGPT

    With ChatGPT under the spotlight, the use of large language models (LLMs) for academic writing has drawn significant discussion and concern in the community. While substantial research efforts have been devoted to detecting LLM-generated content (LLM-content), most attempts are still at an early stage of exploration. In this paper, we present a holistic investigation of detecting LLM-generated academic writing, providing a dataset, evidence, and algorithms in order to inspire more community effort to address the concern of LLM academic misuse. We first present GPABenchmark, a benchmarking dataset of 600,000 samples of human-written, GPT-written, GPT-completed, and GPT-polished abstracts of research papers in CS, physics, and humanities and social sciences (HSS). We show that existing open-source and commercial GPT detectors perform unsatisfactorily on GPABenchmark, especially on GPT-polished text. Moreover, through a user study of 150+ participants, we show that it is highly challenging for human users, including experienced faculty members and researchers, to identify GPT-generated abstracts. We then present CheckGPT, a novel LLM-content detector consisting of a general representation module and an attentive-BiLSTM classification module, which is accurate, transferable, and interpretable. Experimental results show that CheckGPT achieves an average classification accuracy of 98% to 99% for both the task-specific, discipline-specific detectors and the unified detectors. CheckGPT is also highly transferable: without tuning, it achieves ~90% accuracy in new domains, such as news articles, while a model tuned with approximately 2,000 samples in the target domain achieves ~98% accuracy. Finally, we demonstrate the explainability insights obtained from CheckGPT to reveal the key behaviors of how LLMs generate text.
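
    The classification module is described as an attentive BiLSTM over features from a general representation module; a minimal sketch of such a head is given below, with the hidden sizes, the attention form, and the upstream feature extractor all assumed rather than taken from the paper.

        # Sketch of an attentive BiLSTM head; sizes and attention form are assumed.
        import torch
        import torch.nn as nn

        class AttentiveBiLSTMClassifier(nn.Module):
            def __init__(self, feat_dim=768, hidden=256, num_classes=2):
                super().__init__()
                self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
                self.attn = nn.Linear(2 * hidden, 1)          # scores each position
                self.classifier = nn.Linear(2 * hidden, num_classes)

            def forward(self, features):                      # (batch, seq, feat_dim)
                h, _ = self.bilstm(features)                  # (batch, seq, 2*hidden)
                weights = torch.softmax(self.attn(h), dim=1)  # attention over positions
                pooled = (weights * h).sum(dim=1)             # attention-weighted pooling
                return self.classifier(pooled)

        # e.g. 128 token features of width 768 from a frozen representation module
        logits = AttentiveBiLSTMClassifier()(torch.randn(2, 128, 768))
        print(logits.shape)  # torch.Size([2, 2]): human-written vs. GPT-involved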

    Credit rating and competition

    In principle, credit rating agencies are supposed to be impartial observers that bridge the gap between the private information of issuers and the information available to the wider pool of investors. However, since the 1970s, rating agencies have relied on an issuer-pay model, creating a conflict of interest: the largest source of income for the rating agencies is the fees paid by the very issuers they are supposed to rate impartially. In this paper, we explore the trade-off between reputation and fees and find that, relative to monopoly, rating agencies are more prone to inflate ratings under competition, resulting in lower expected welfare. Our results suggest that more competition by itself is undesirable under the current issuer-pay model and will do little to resolve the conflict-of-interest problem.