
    Monetary Policy in the Small Open Economy with Market Segmentation

    We extend a New Keynesian small open economy DSGE model with non-tradable goods and intermediate inputs. Firstly, we show that optimal monetary policy faces a trade-off between composite domestic inflation and output gap stabilization due to net export externalities. Secondly, we rank alternative monetary policy rules by welfare and show that adjusting interest rates gradually towards their target levels, rather than responding immediately, is desirable. However, when the economy is highly exposed to foreign goods market shocks and non-tradable productivity shocks, the CPI-based Taylor rule can be the best alternative policy. Lastly, we identify linkages between the final and intermediate sectors and explain “sectoral heterogeneity” under the optimal policy and alternative monetary policy regimes.
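The gradual-adjustment result above can be illustrated with a smoothed interest-rate rule. The sketch below is not the paper's calibrated model; the parameter values and function name are illustrative assumptions, showing only how a smoothing coefficient makes the policy rate move partway toward its Taylor-rule target each period instead of jumping immediately.

```python
def taylor_rate(i_prev, pi, y_gap, i_star=0.02, phi_pi=1.5, phi_y=0.5, rho=0.8):
    """Interest-rate rule with partial adjustment (smoothing).

    rho = 0 reproduces an immediate response to the Taylor-rule
    target; rho close to 1 moves the rate only gradually toward
    that target. All parameter values here are illustrative, not
    taken from the paper's calibration.
    """
    target = i_star + phi_pi * pi + phi_y * y_gap
    return rho * i_prev + (1.0 - rho) * target

# With CPI inflation 1% above target and a closed output gap, the
# smoothed rule (rho = 0.8) covers only a fifth of the distance to
# the target rate in one period.
i1 = taylor_rate(i_prev=0.02, pi=0.01, y_gap=0.0)   # 0.023, vs. target 0.035
```

Under this rule, repeated application converges to the same target rate as the immediate-response rule; only the transition path differs, which is where the welfare ranking in the abstract comes from.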

    Financial Frictions in the Small Open Economy

    This paper introduces a global banking system in a small open economy DSGE model with financial frictions. The model features global relative price adjustments with incomplete asset markets. Three main findings stand out. Firstly, foreign financial shocks capture negative spillovers from the foreign country in a global financial crisis. We show that country differences in the severity of the shocks depend on the degree of trade openness and banking system stability. Secondly, credit policy could be more powerful than monetary policy in alleviating foreign financial shocks, since an expansionary monetary policy and alternative policy rules are not sufficient tools in a global financial crisis. In particular, credit policy based on the international credit spread outperforms credit policy based on the domestic credit spread, since the latter leads to “excess smoothness” in the real exchange rate. Lastly, foreign credit policy has a negligible influence on domestic welfare, so the small open economy can effectively reduce welfare losses only if its own central bank injects credit.
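A spread-based credit policy of the kind compared above can be sketched as a simple feedback rule: the central bank injects credit in proportion to the deviation of a credit spread from its steady-state value. The function name, the feedback coefficient, and the linear form are all illustrative assumptions, not the paper's specification; the paper's point is that keying such a rule to the international rather than the domestic spread performs better.

```python
def credit_injection(spread, spread_ss, nu=10.0):
    """Central-bank credit injection (as a share of total credit)
    responding to the gap between a credit spread and its steady
    state. nu is an illustrative feedback coefficient; the rule is
    inactive when the spread is at or below steady state.
    """
    return max(0.0, nu * (spread - spread_ss))

# A crisis spread of 3% against a 1% steady state triggers an
# injection equal to 20% of total credit under this calibration.
injection = credit_injection(spread=0.03, spread_ss=0.01)   # 0.2
```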

    Diagnostic performance of ultrasonography-guided core-needle biopsy according to MRI LI-RADS diagnostic categories

    Purpose According to the American Association for the Study of Liver Diseases (AASLD) guidelines, biopsy is a diagnostic option for focal hepatic lesions depending on the Liver Imaging Reporting and Data System (LI-RADS) category. We evaluated the diagnostic performance of ultrasonography-guided core-needle biopsy (CNB) according to LI-RADS categories. Methods A total of 145 high-risk patients for hepatocellular carcinoma (HCC) who underwent magnetic resonance imaging (MRI) followed by CNB for a focal hepatic lesion preoperatively were retrospectively enrolled. Focal hepatic lesions on MRI were evaluated according to LI-RADS version 2018. Pathologic results were categorized into HCC, non-HCC malignancies, and benignity. The categorization was defined as correct when the CNB pathology and surgical pathology reports were identical. Nondiagnostic results were defined as CNB pathology findings inadequate for a specific diagnosis. The proportion of correct categorizations was calculated for each LI-RADS category, excluding nondiagnostic results. Results After excluding 16 nondiagnostic results, 131 lesions were analyzed (45 LR-5, 24 LR-4, 4 LR-3, and 58 LR-M). All LR-5 lesions were HCC, and CNB correctly categorized 97.8% (44/45) of LR-5 lesions. CNB correctly categorized all 24 LR-4 lesions, 16.7% (4/24) of which were non-HCC malignancies. All LR-M lesions were malignant, and 62.1% (36/58) were non-HCC malignancies. CNB correctly categorized 93.1% (54/58) of LR-M lesions, and 12.5% (3/24) of lesions with CNB results of HCC were confirmed as non-HCC malignancies. Conclusion In agreement with AASLD guidelines, CNB could be helpful for LR-4 lesions, but is unnecessary for LR-5 lesions. In LR-M lesions, CNB results of HCC did not exclude non-HCC malignancy.

    Prediction of cognitive impairment via deep learning trained with multi-center neuropsychological test data

    Background Neuropsychological tests (NPTs) are important tools for informing diagnoses of cognitive impairment (CI). However, interpreting NPTs requires specialists and is thus time-consuming. To streamline the application of NPTs in clinical settings, we developed and evaluated the accuracy of a machine learning algorithm using multi-center NPT data. Methods Multi-center data were obtained from 14,926 formal neuropsychological assessments (Seoul Neuropsychological Screening Battery), which were classified into normal cognition (NC), mild cognitive impairment (MCI), and Alzheimer's disease dementia (ADD). We trained a machine learning model with an artificial neural network algorithm using TensorFlow (https://www.tensorflow.org) to distinguish cognitive states from the 46-variable data, and measured prediction accuracies on 10 randomly selected datasets. The features of the NPT were ranked by their contribution to the outcome using Recursive Feature Elimination. Results The mean accuracies over ten runs for identifying CI (MCI and ADD) were 96.66 ± 0.52% on the balanced dataset and 97.23 ± 0.32% on the clinic-based dataset, and the accuracies for predicting cognitive states (NC, MCI, or ADD) were 95.49 ± 0.53% and 96.34 ± 1.03%, respectively. In the balanced dataset, the sensitivities for detecting CI and MCI were 96.0% and 96.0%, and the specificities were 96.8% and 97.4%, respectively. The time orientation and 3-word recall scores of the MMSE were highly ranked features in predicting CI and cognitive state. Twelve features, reduced from the 46 NPT variables and combined with age and education, contributed to more than 90% accuracy in predicting cognitive impairment.
Conclusions The machine learning algorithm for NPTs shows potential for use as a reference in differentiating cognitive impairment in the clinical setting. The publication costs, design of the study, data management, and writing of the manuscript for this article were supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2017S1A6A3A01078538), the Korea Ministry of Health & Welfare, and the Original Technology Research Program for Brain Science through the National Research Foundation of Korea funded by the Korean Government (MSIP; No. 2014M3C7A1064752).
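The setup described above, a feedforward network mapping 46 NPT variables to three cognitive states, can be sketched as a single forward pass. The weights below are random and untrained (the paper trains its network in TensorFlow on 14,926 assessments); the layer sizes, function names, and hidden width are illustrative assumptions showing only the input/output shape of such a classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict(x, W1, b1, W2, b2):
    """One forward pass of a small feedforward network mapping
    46 NPT variables to probabilities over {NC, MCI, ADD}."""
    h = np.maximum(0.0, x @ W1 + b1)      # ReLU hidden layer
    return softmax(h @ W2 + b2)           # class probabilities

# Illustrative untrained weights: 46 inputs -> 32 hidden -> 3 classes.
W1, b1 = rng.normal(size=(46, 32)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=(32, 3)) * 0.1, np.zeros(3)

# Five hypothetical assessments, standardized to zero mean / unit variance.
probs = predict(rng.normal(size=(5, 46)), W1, b1, W2, b2)   # shape (5, 3)
```

Each output row sums to one; the predicted state is the argmax over the three classes, and feature-ranking methods such as Recursive Feature Elimination would operate on the 46 input columns.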

    FeedFormer: Revisiting Transformer Decoder for Efficient Semantic Segmentation

    With the success of the Vision Transformer (ViT) in image classification, its variants have yielded great success in many downstream vision tasks. Among those, the semantic segmentation task has also benefited greatly from the advance of ViT variants. However, most studies of the transformer for semantic segmentation focus only on designing efficient transformer encoders, rarely giving attention to designing the decoder. Several studies have attempted to use the transformer decoder as the segmentation decoder with class-wise learnable queries. Instead, we aim to use the encoder features directly as the queries. This paper proposes the Feature Enhancing Decoder transFormer (FeedFormer), which enhances structural information using the transformer decoder. Our goal is to decode the high-level encoder features using the lowest-level encoder feature. We do this by formulating the high-level features as queries, and the lowest-level feature as the key and value. This enhances the high-level features by collecting structural information from the lowest-level feature. Additionally, we use a simple reformation trick of pushing the encoder blocks to take the place of the decoder's existing self-attention module to improve efficiency. We show the superiority of our decoder over various lightweight transformer-based decoders on popular semantic segmentation datasets. Despite its minimal computation, our model achieves state-of-the-art performance in the performance-computation trade-off. Our model FeedFormer-B0 surpasses SegFormer-B0 by 1.8% mIoU with 7.1% less computation on ADE20K, and by 1.7% mIoU with 14.4% less computation on Cityscapes. Code will be released at: https://github.com/jhshim1995/FeedFormer
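The core decoding step described above, high-level features as queries attending to the lowest-level feature as key and value, can be sketched as single-head cross-attention. This is a minimal numpy sketch, not the released implementation: the function name, single head, token counts, and feature dimension are illustrative assumptions, and the real model adds projections, multiple heads, and normalization.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def feature_enhancing_attention(high, low, Wq, Wk, Wv):
    """Cross-attention in the FeedFormer spirit: high-level encoder
    features form the queries, while the lowest-level feature
    supplies keys and values, injecting structural detail into the
    high-level representation. Single-head and illustrative."""
    q = high @ Wq                                   # (N_high, d)
    k = low @ Wk                                    # (N_low, d)
    v = low @ Wv                                    # (N_low, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (N_high, N_low)
    return attn @ v                                 # enhanced high-level features

d = 8
high = rng.normal(size=(16, d))   # tokens from a coarse (high-level) stage
low = rng.normal(size=(64, d))    # tokens from the lowest-level stage
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = feature_enhancing_attention(high, low, Wq, Wk, Wv)   # shape (16, 8)
```

The output keeps the high-level token count and dimension, so it can replace the original high-level features in the rest of the decoder; using encoder features as queries avoids the class-wise learnable queries of earlier transformer decoders.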