User Cost of Credit Card Services under Risk with Intertemporal Nonseparability
This paper derives the user cost of monetary assets and credit card services with interest rate risk under the assumption of intertemporal non-separability. Barnett and Su (2016) derived theory permitting inclusion of credit card transaction services into Divisia monetary aggregates. The risk adjustment in their theory is based on CCAPM under intertemporal separability. The equity premium puzzle focuses on downward bias in the CCAPM risk adjustment to common stock returns. Despite the high risk of credit card interest rates, the risk adjustment under the CCAPM assumption of intertemporal separability might nevertheless be similarly small. While the known downward bias of CCAPM risk adjustments is of little concern with Divisia monetary aggregates containing only low-risk monetary assets, that downward bias cannot be ignored once high-risk credit card services are included. We believe that extending to intertemporal non-separability could provide a non-negligible risk adjustment, as has been emphasized by Barnett and Wu (2015).
In this paper, we extend the credit-card-augmented Divisia monetary quantity aggregates to the case of risk aversion and intertemporal non-separability in consumption. Our results are for the “representative consumer” aggregated over all consumers. While credit-card interest-rate risk may be low for some consumers, the volatility of credit card interest rates for the representative consumer is high, as reflected by the high volatility of the Federal Reserve’s data on credit card interest rates aggregated over consumers. One method of introducing intertemporal non-separability is to assume habit formation. We explore that possibility.
To implement our theory, we introduce a pricing kernel, in accordance with the approach advocated by Barnett and Wu (2015). We assume that the pricing kernel is a linear function of the rate of return on a well-diversified wealth portfolio. We find that the risk adjustment of the credit-card-services user cost to its certainty equivalence level can be measured by its beta. That beta depends upon the covariance between the interest rates on credit card services and on the wealth portfolio of the consumer, in a manner analogous to the standard CAPM adjustment. As a result, credit card services’ risk premia depend on their market portfolio risk exposure, which is measured by the beta of the credit card interest rates.
We are currently conducting research on the empirical implementation of the theory proposed in this paper.
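The covariance-based beta described in the abstract can be illustrated with a small numerical sketch. All numbers below are simulated and purely illustrative (the rates, the assumed risk-free rate, and the factor loading are our assumptions, not the paper's estimates); the sketch only shows the mechanics of computing a CAPM-style beta for the credit card interest rate against a wealth-portfolio return:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: returns on a diversified wealth portfolio
# and credit card interest rates with an assumed loading of 0.6 on it.
r_wealth = rng.normal(0.02, 0.04, size=200)
r_credit = 0.14 + 0.6 * r_wealth + rng.normal(0.0, 0.01, size=200)

# CAPM-style beta: covariance of the credit card rate with the wealth
# portfolio return, scaled by the variance of the wealth portfolio return.
beta = np.cov(r_credit, r_wealth)[0, 1] / np.var(r_wealth, ddof=1)

# The risk adjustment of the user cost away from its certainty-equivalent
# level is then proportional to beta times the market risk premium
# (risk-free rate of 0.5% is an arbitrary illustrative choice).
market_premium = r_wealth.mean() - 0.005
risk_adjustment = beta * market_premium
```

With these simulated data, the estimated beta recovers the assumed loading of roughly 0.6; the same covariance-over-variance computation applies to any pair of observed rate series.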
Faster Depth-Adaptive Transformers
Depth-adaptive neural networks can dynamically adjust depths according to the
hardness of input words, and thus improve efficiency. The main challenge is how
to measure such hardness and decide the required depth (i.e., the number of
layers) to use. Previous works generally build a halting unit to decide whether the
computation should continue or stop at each layer. As there is no specific
supervision of depth selection, the halting unit may be under-optimized and
inaccurate, which results in suboptimal and unstable performance when modeling
sentences. In this paper, we get rid of the halting unit and estimate the
required depths in advance, which yields a faster depth-adaptive model.
Specifically, two approaches are proposed to explicitly measure the hardness of
input words and estimate corresponding adaptive depth, namely 1) mutual
information (MI) based estimation and 2) reconstruction loss based estimation.
We conduct experiments on the text classification task with 24 datasets in
various sizes and domains. Results confirm that our approaches can speed up the
vanilla Transformer (up to 7x) while preserving high accuracy. Moreover,
efficiency and robustness are significantly improved when compared with other
depth-adaptive approaches.
Comment: AAAI-2021. Code will appear at:
https://github.com/Adaxry/Adaptive-Transforme
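Since the paper's two estimators are only named here, the following toy sketch uses token surprisal as a stand-in hardness score (the probabilities, the bounds, and the linear depth mapping are all our assumptions, not the paper's method) just to illustrate the core idea: fixing each token's depth in advance rather than querying a halting unit at every layer.

```python
import math

def token_hardness(token, unigram_prob):
    """Surprisal-style hardness proxy: rarer tokens are treated as harder.
    (An illustrative stand-in for the paper's MI-based estimator.)"""
    return -math.log(unigram_prob.get(token, 1e-6))

def assign_depth(hardness, max_depth=12, h_min=0.0, h_max=14.0):
    """Linearly map a hardness score to a number of Transformer layers,
    computed before decoding instead of by a per-layer halting unit."""
    h = min(max(hardness, h_min), h_max)
    frac = (h - h_min) / (h_max - h_min)
    return max(1, round(frac * max_depth))

# Toy unigram probabilities (made up for illustration).
probs = {"the": 0.05, "of": 0.03, "serendipity": 1e-5}
depths = {w: assign_depth(token_hardness(w, probs)) for w in probs}
```

Here common words receive shallow depths and the rare word a deep one, so easy inputs exit the stack early without any per-layer stopping decision.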
N-Terminus of GRXCR2 Interacts With CLIC5 and Is Essential for Auditory Perception
Stereocilia of cochlear hair cells are specialized mechanosensing organelles that convert sound-induced vibration into electrical signals. Glutaredoxin domain-containing cysteine-rich protein 2 (GRXCR2) is localized at the base of stereocilia and is necessary for stereocilia morphogenesis and auditory perception. However, the detailed functions of GRXCR2 in hair cells are still largely unknown. Here, we report that GRXCR2 interacts with chloride intracellular channel protein 5 (CLIC5), which is also localized at the base of stereocilia and required for normal hearing in humans and mice. Immunolocalization analyses suggest that GRXCR2 is not required for the localization of CLIC5 to the stereociliary base during development, or vice versa. Using the clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 system, we deleted 60 amino acids near the N-terminus of GRXCR2 essential for its interaction with CLIC5. Interestingly, mice harboring this in-frame deletion in Grxcr2 exhibit moderate hearing loss at lower frequencies and severe hearing loss at higher frequencies, although the morphogenesis of stereocilia is minimally affected. Thus, our findings reveal that the interaction between GRXCR2 and CLIC5 is crucial for normal hearing.
Correlation between adherence rates measured by MEMS and self-reported questionnaires: a meta-analysis
Purpose: It is vital to understand the associations between medication event monitoring systems (MEMS) and self-reported questionnaires (SRQs) because both are often used to measure medication adherence and can produce different results. In addition, the economic implication of using alternative measures is important, as the cost of electronic monitoring devices is not covered by insurance, while self-reports are the most practical and cost-effective method in clinical settings. This meta-analysis examined the correlation between two measurements of medication adherence: MEMS and SRQs.
Methods: The literature search (1980-2009) used PubMed, OVID MEDLINE, PsycINFO (EBSCO), CINAHL (EBSCO), OVID HealthStar, EMBASE (Elsevier), and Cochrane Databases. Studies were included if the correlation coefficients [Pearson (r_p) or Spearman (r_s)] between adherence measured by both MEMS and SRQs were available or could be calculated from other statistics in the articles. Data were independently abstracted in duplicate with a standardized protocol and abstraction form including 1) first author's name; 2) year of publication; 3) disease status of participants; 4) sample size; 5) mean age (years); 6) duration of trial (months); 7) SRQ names if available; 8) adherence (%) measured by MEMS; 9) adherence (%) measured by SRQ; 10) correlation coefficient and related information, including p-value and 95% confidence interval (CI). A meta-analysis was conducted to pool the correlation coefficients using a random-effects model.
Results: Eleven studies (N = 1,684 patients) met the inclusion criteria. The mean adherence measured by MEMS was 74.9% (range 53.4%-92.9%), versus 84.0% by SRQ (range 68.35%-95%). The correlation between adherence measured by MEMS and SRQs ranged from 0.24 to 0.87. The pooled correlation coefficient for the 11 studies was 0.45 (p = 0.001, 95% CI: 0.34-0.56). The subgroup meta-analyses on the seven studies reporting r_p and the four studies reporting r_s yielded pooled correlation coefficients of 0.46 (p = 0.011, 95% CI: 0.33-0.59) and 0.43 (p = 0.0038, 95% CI: 0.23-0.64), respectively. No differences were found for other subgroup analyses.
Conclusion: Medication adherence measured by MEMS and SRQs tends to be at least moderately correlated, suggesting that SRQs give a good estimate of medication adherence.
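The pooling step described above can be sketched with Fisher's z transformation and a DerSimonian-Laird random-effects estimate. The correlations and sample sizes below are made up for illustration and are not the eleven studies in this meta-analysis:

```python
import math

def pool_correlations(rs, ns):
    """Random-effects pooling of correlation coefficients via Fisher's z
    (DerSimonian-Laird); a simplified sketch of the meta-analytic step."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]   # Fisher z
    vs = [1.0 / (n - 3) for n in ns]        # within-study variance of z
    ws = [1.0 / v for v in vs]              # fixed-effect weights
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))
    df = len(rs) - 1
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - df) / c)           # between-study variance
    ws_re = [1.0 / (v + tau2) for v in vs]  # random-effects weights
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return math.tanh(z_re)                  # back-transform z to r

# Hypothetical study correlations and sample sizes (illustrative only).
r_pooled = pool_correlations([0.24, 0.45, 0.60, 0.87], [120, 200, 150, 90])
```

The pooled estimate lands between the smallest and largest study correlations, with the between-study variance tau-squared widening the weights when studies disagree.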
Improving Translation Faithfulness of Large Language Models via Augmenting Instructions
Large Language Models (LLMs) present strong general capabilities, and a
current compelling challenge is stimulating their specialized capabilities,
such as machine translation, through low-cost instruction tuning. The standard
instruction-following data is sequentially organized as the concatenation of an
instruction, an input, and a response. As the attention mechanism of LLMs has
limitations on local focus, LLMs tend to focus more on the words or sentences
nearby at each position. This leads to a high risk of instruction forgetting
during decoding. To alleviate the above issues, we propose SWIE
(Segment-Weighted Instruction Embedding) and an instruction-following dataset
OVERMISS. SWIE improves the model's instruction understanding by adding a
global instruction representation to the subsequent input and response representations.
OVERMISS improves model faithfulness by comparing over-translation and
miss-translation results with the correct translation. We apply our methods to
two mainstream open-source LLMs, BLOOM and LLaMA. The experimental results
demonstrate significant improvements in translation performance with SWIE based
on BLOOMZ-3b, particularly in zero-shot and long text translations due to
reduced instruction forgetting risk. Additionally, OVERMISS outperforms the baseline in translation performance (e.g., an increase in BLEU scores from 0.69 to 3.12 and an average improvement of 0.48 percentage points in COMET scores for LLaMA-7b), with further enhancements seen in models combining OVERMISS and SWIE (e.g., BLEU score increases of up to 0.56 from English to German across three different backbones), and both exhibit improvements in the faithfulness metric based on word alignment.
Comment: Our code and datasets are released in Github:
https://github.com/pppa2019/swie_overmiss_llm4m
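As a rough illustration of adding a global instruction representation onto later positions, consider the toy sketch below. The mean pooling, the distance-decay weighting, and all shapes are our assumptions for illustration; SWIE's actual architecture may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                # toy hidden size

# Toy hidden states: a few instruction tokens followed by the
# input/response tokens that come after them in the sequence.
instr = rng.normal(size=(5, d))      # instruction token representations
rest = rng.normal(size=(10, d))      # input + response representations

# One global instruction vector, here obtained by mean pooling.
g = instr.mean(axis=0)

# Add the global instruction vector to every later position, with a
# weight that decays with distance, so that the instruction remains
# visible even far from its original position.
weights = 1.0 / (1.0 + np.arange(len(rest)))
augmented = rest + weights[:, None] * g
```

The motivation mirrors the abstract: because attention concentrates on nearby tokens, distant positions risk "forgetting" the instruction, and injecting a global summary at each position keeps it in scope.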
Stock Market Prediction via Deep Learning Techniques: A Survey
Stock market prediction is a traditional yet complex problem that has been
researched across diverse research areas and application domains due to its
non-linear, highly volatile, and complex nature. Existing surveys on stock
market prediction often focus on traditional machine learning methods rather
than deep learning methods, even though deep learning has dominated many
domains and gained much success and popularity in stock market prediction in
recent years. This
motivates us to provide a structured and comprehensive overview of the research
on stock market prediction focusing on deep learning techniques. We present
four elaborated subtasks of stock market prediction and propose a novel
taxonomy to summarize the state-of-the-art models based on deep neural networks
from 2011 to 2022. In addition, we also provide detailed statistics on the
datasets and evaluation metrics commonly used in the stock market. Finally, we
highlight some open issues and point out several future directions by sharing
some new perspectives on stock market prediction.