Value Relevance of Foreign Currency Accounts: A study of US publicly listed firms' value relevance with foreign currency transaction and translation factors
Foreign currency risk has become an increasing concern for multinational companies, due to
the expansion of business geographical scope and increased volatility of foreign exchange
rates. The robustness of firm foreign currency exposure management is receiving more
attention from many stakeholders. Companies exposed to foreign currency risk may experience severe unpredictability in profits and losses, which can complicate business decision-making and lead firms, investors, and the market to misinterpret performance. It is therefore crucial to understand how foreign currency accounts are disclosed in firms' financial statements and how they influence the market value of public firms.
In this thesis, I discuss the research question regarding the value relevance of foreign currency
transaction gains or losses and translation adjustment using a sample of US publicly listed
firms that disclose such information from 2002 to 2020. I find that foreign currency
transaction gains or losses and translation adjustments are positively related to firm stock return. Testing firms that report foreign currency transaction gains and those that report losses separately, I find that their associations with firm value are positive and negative, respectively. I further find that foreign currency translation adjustment is positively associated with firm stock return in the manufacturing industry. Foreign currency transactions are significantly value relevant for firms in the new economy when gains and losses are analyzed separately. I also find that foreign currency transactions are more value relevant than earnings, and that this relevance is stronger when they represent a higher proportion of earnings. The value relevance of foreign currency transaction gains strengthens as the return horizon increases from three months to one year.
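The return regressions summarized above follow a standard value relevance design, regressing stock returns on earnings and the foreign currency item. As an illustration only, here is a minimal synthetic-data sketch in NumPy; the variable names (ret, earn, fx_gl) and coefficients are hypothetical assumptions, not drawn from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000  # synthetic firm-year observations

# Hypothetical panel: earnings and FX transaction gain/loss, both scaled.
earn = rng.normal(0.05, 0.02, n)    # earnings / lagged price
fx_gl = rng.normal(0.0, 0.01, n)    # FX transaction gain or loss / lagged price

# Assumed "true" return-generating process, for illustration only.
ret = 0.02 + 1.5 * earn + 2.0 * fx_gl + rng.normal(0.0, 0.01, n)

# Value relevance regression: ret = b0 + b1*earn + b2*fx_gl + e
X = np.column_stack([np.ones(n), earn, fx_gl])
beta, *_ = np.linalg.lstsq(X, ret, rcond=None)
b0, b1, b2 = beta
print(f"intercept={b0:.3f}, earnings coef={b1:.3f}, FX coef={b2:.3f}")
```

In this design, a significant coefficient on the FX item incremental to earnings is what is read as "value relevance."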
Bidding to drive: Car license auction policy in Shanghai and its public acceptance
Growth in automobile ownership and use in China over the last two decades has increased energy consumption, worsened air pollution, and exacerbated congestion. However, the countrywide growth in car ownership conceals great variation among cities. For example, Shanghai and Beijing each had about 2 million motor vehicles in 2004, but by 2010, Beijing had 4.8 million motor vehicles whereas Shanghai had only 3.1 million. Among the factors contributing to this divergence is Shanghai’s vehicle control policy, which uses monthly license auctions to limit the number of new cars. The policy appears to be effective: in addition to dampening growth in car ownership, it generates annual revenues up to 5 billion CNY (800 million USD). But, despite these apparent successes, the degree to which the public accepts this policy is unknown.
This study surveys 524 employees at nine Shanghai companies to investigate the policy acceptance of Shanghai’s license auction by the working population, and the factors that contribute to that acceptance: perceived policy effectiveness, affordability, equity concerns, and implementation. Respondents perceive the policy to be effective, but are moderately negative towards the policy nonetheless. However, they expect that others accept the policy more than they do. Respondents also hold consistently negative perceptions about the affordability of the license, the effects on equity, and the implementation process. Revenue usage is not seen as transparent, a perception exacerbated by the belief that government vehicles enjoy advantages in obtaining a license, by issues with the bidding process and technology, and by difficulties in obtaining information about the auction policy. Nevertheless, respondents believe that license auctions and congestion charges are more effective and acceptable than parking charges and fuel taxes. To improve public acceptability of the policy, we make five recommendations: transparency in revenue usage; transparency in government vehicle licensing and use; categorizing licenses by vehicle type; implementation and technology improvements to increase bidding convenience; and policies that restrict vehicle usage in congested locations.
A Systematic Survey on Deep Generative Models for Graph Generation
Graphs are important data representations for describing objects and their
relationships, which appear in a wide diversity of real-world scenarios. As a
critical problem in this area, graph generation considers learning the
distributions of given graphs and generating novel graphs. Owing to its
wide range of applications, generative models for graphs have a rich history,
which, however, are traditionally hand-crafted and only capable of modeling a
few statistical properties of graphs. Recent advances in deep generative models
for graph generation are an important step towards improving the fidelity of
generated graphs and pave the way for new kinds of applications. This article
provides an extensive overview of the literature in the field of deep
generative models for graph generation. First, the formal definition of
deep generative models for graph generation, as well as preliminary
knowledge, is provided. Second, two taxonomies of deep generative models, for
unconditional and conditional graph generation respectively, are proposed; the
existing works in each category are compared and analyzed. After that, an overview of
the evaluation metrics in this specific domain is provided. Finally, the
applications that deep graph generation enables are summarized and five
promising future research directions are highlighted.
Subunit-Selective Interrogation of CO Recombination in Carbonmonoxy Hemoglobin by Isotope-Edited Time-Resolved Resonance Raman Spectroscopy
Hemoglobin (Hb) is an allosteric tetrameric protein made up of αβ heterodimers. The α and β chains are similar, but are chemically and structurally distinct. To investigate dynamical differences between the chains, we have prepared tetramers in which the chains are isotopically distinguishable, via reconstitution with 15N-heme. Ligand recombination and heme structural evolution, following HbCO dissociation, were monitored with chain selectivity by resonance Raman (RR) spectroscopy. For α but not for β chains, the frequency of the ν4 porphyrin breathing mode increased on the microsecond time scale. This increase is a manifestation of proximal tension in the Hb T-state, and its time course parallels the formation of T contacts, as determined previously by UVRR spectroscopy. Despite the localization of proximal constraint in the α chains, geminate recombination was found to be equally probable in the two chains, with yields of 39 ± 2%. We discuss the possibility that this equivalence is coincidental, in the sense that it arises from the evolutionary pressure for cooperativity, or that it reflects mechanical coupling across the αβ interface, evidence for which has emerged from UVRR studies of site mutants.
Overcoming Language Priors in Visual Question Answering via Distinguishing Superficially Similar Instances
Despite the great progress of Visual Question Answering (VQA), current VQA
models heavily rely on the superficial correlation between the question type
and its corresponding frequent answers (i.e., language priors) to make
predictions, without really understanding the input. In this work, we define
the training instances with the same question type but different answers as
\textit{superficially similar instances}, and attribute the language priors to
the confusion of the VQA model on such instances. To solve this problem, we propose
a novel training framework that explicitly encourages the VQA model to
distinguish between the superficially similar instances. Specifically, for each
training instance, we first construct a set that contains its superficially
similar counterparts. Then we exploit the proposed distinguishing module to
increase the distance between the instance and its counterparts in the answer
space. In this way, the VQA model is forced to further focus on the other parts
of the input beyond the question type, which helps to overcome the language
priors. Experimental results show that our method achieves the state-of-the-art
performance on VQA-CP v2. Code is available at
\href{https://github.com/wyk-nku/Distinguishing-VQA.git}{Distinguishing-VQA}.
Comment: Published in COLING 202
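The distinguishing module above increases the distance between an instance and its superficially similar counterparts in the answer space. The idea can be sketched as a margin-based hinge loss; the following is only an illustrative NumPy toy (the function name and margin are assumptions, not the authors' implementation):

```python
import numpy as np

def distinguishing_loss(anchor, counterparts, margin=1.0):
    """Hinge-style loss that pushes the anchor instance's answer-space
    embedding at least `margin` away from each superficially similar
    counterpart (same question type, different answer)."""
    dists = np.linalg.norm(counterparts - anchor, axis=1)
    return float(np.maximum(margin - dists, 0.0).mean())

anchor = np.array([1.0, 0.0])
close = np.array([[1.1, 0.0], [0.9, 0.1]])  # confusable counterparts
far = np.array([[3.0, 0.0], [0.0, 4.0]])    # well-separated counterparts

print(distinguishing_loss(anchor, close))  # positive: still confusable
print(distinguishing_loss(anchor, far))    # 0.0: instances distinguished
```

Minimizing such a loss alongside the usual VQA objective forces the model to attend to input features beyond the question type.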
NoiSER: Noise is All You Need for Low-Light Image Enhancement
In this paper, we present an embarrassingly simple yet effective solution to
a seemingly impossible mission, low-light image enhancement (LLIE) without
access to any task-related data. The proposed solution, Noise SElf-Regression
(NoiSER), simply learns a convolutional neural network equipped with an
instance-normalization layer by taking a random noise image,
$\mathcal{N}(0,\sigma^2)$ for each pixel, as both input and output for each
training pair; the low-light image is then fed to the learned network to
predict the normal-light image. Technically, an intuitive explanation for
its effectiveness is as follows: 1) the self-regression reconstructs the
contrast between adjacent pixels of the input image, 2) the
instance-normalization layers may naturally remediate the overall
magnitude/lighting of the input image, and 3) the $\mathcal{N}(0,\sigma^2)$
assumption for each pixel enforces the output image to follow the well-known
gray-world hypothesis \cite{Gary-world_Hypothesis} when the image size is big
enough, namely, that the averages of the three RGB components of an image converge to
the same value. Compared to existing SOTA LLIE methods with access to different
task-related data, NoiSER is surprisingly highly competitive in enhancement
quality, yet with a much smaller model size, and much lower training and
inference cost. With only about 1K parameters, NoiSER takes about 1 minute
to train and 1.2 ms per inference at 600x400 resolution on an RTX 2080 Ti.
As a bonus, NoiSER possesses an automated over-exposure suppression ability and
shows excellent performance on over-exposed photos.
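The gray-world argument in point 3) can be checked numerically: when every pixel of every channel is drawn i.i.d. from one distribution, the three RGB channel averages converge to the same value as the image grows. A minimal NumPy sketch, with the image size and σ chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Random "noise image": every pixel of every RGB channel drawn i.i.d.
# from the same zero-mean Gaussian, as in the self-regression training pairs.
h, w, sigma = 400, 600, 0.1
img = rng.normal(0.0, sigma, size=(h, w, 3))

# Gray-world hypothesis: for a big enough image, the three per-channel
# averages converge to the same value (here, the common mean 0).
r_mean, g_mean, b_mean = img.mean(axis=(0, 1))
print(r_mean, g_mean, b_mean)  # all three close to 0 and to each other
```

With 240,000 samples per channel, the standard error of each channel mean is σ/√240000 ≈ 0.0002, so the three averages are essentially indistinguishable.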