
    RNA-Sequencing Analysis: Read Alignment and Discovery and Reconstruction of Fusion Transcripts

    RNA-sequencing technologies, which sequence the RNA molecules being transcribed in cells, allow us to explore the process of transcription in exquisite detail. One of the primary goals of RNA sequencing analysis is to reconstruct the full set of transcripts (isoforms) of genes that were present in the original cells. In addition to the transcript structures, experimenters need to estimate the expression levels for all transcripts. The first step in the analysis process is to map the RNA-seq reads against the reference genome, which provides the location from which the reads originated. In contrast to DNA sequence alignment, RNA-seq mapping algorithms face two additional challenges. First, any RNA-seq alignment program must be able to handle gapped (or spliced) alignment with very large gaps due to introns, typically 50 to 100,000 bases in mammalian genomes. Second, the presence of processed pseudogenes, from which introns have been removed, may cause many exon-spanning reads to map incorrectly. To cope with these problems effectively, I have developed new alignment algorithms and implemented them in TopHat2, a second version of TopHat (one of the first spliced aligners for RNA-seq reads). The new TopHat2 program can align reads of various lengths produced by the latest sequencing technologies, while allowing for variable-length insertions and deletions with respect to the reference genome. TopHat2 combines the ability to discover novel splice sites with direct mapping to known transcripts, producing more sensitive and accurate alignments, even for highly repetitive genomes or in the presence of processed pseudogenes. These new capabilities will contribute to improvements in the quality of downstream analysis. In addition to its splice junction mapping algorithm, I have developed novel algorithms to align reads across fusion break points, which result from the breakage and re-joining of two different chromosomes, or from rearrangements within a chromosome. Based on this new fusion alignment algorithm, I have developed TransFUSE, one of the first systems for reconstruction and quantification of full-length fusion gene transcripts. TransFUSE can be run with or without known gene annotations, and it can discover novel fusion transcripts that are transcribed from known or unknown genes.
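
    As a rough illustration of the exon-spanning problem just described, the minimal Python sketch below places a read as two exact-match segments separated by an intron-sized gap. The toy sequences, function name, and brute-force search are invented for illustration; TopHat2's actual algorithm uses indexed, mismatch-tolerant alignment and splice-site signals.

    ```python
    # Place a read as two segments separated by an intron-sized gap.
    # Exact matching on a toy genome only; not TopHat2's real method.
    MIN_INTRON, MAX_INTRON = 50, 100_000

    def spliced_align(read, genome, anchor=5):
        for split in range(anchor, len(read) - anchor + 1):
            left, right = read[:split], read[split:]
            i = genome.find(left)
            while i != -1:
                lo = i + len(left) + MIN_INTRON       # gap must look intronic
                hi = i + len(left) + MAX_INTRON
                j = genome.find(right, lo, hi + len(right))
                if j != -1:
                    return i, j, j - (i + len(left))  # positions, intron size
                i = genome.find(left, i + 1)
        return None

    exon1, exon2 = "ATGGCTCTAGTCCGAAGTCA", "TTGGACCATGCAGGATCCTA"
    genome = exon1 + "GT" + "A" * 56 + "AG" + exon2   # 60 bp toy intron
    read = exon1[-10:] + exon2[:10]                   # junction-spanning read
    print(spliced_align(read, genome))                # -> (10, 80, 60)
    ```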

    Role of dermal melanocytes in cutaneous pigmentation of stasis dermatitis: a histopathological study of 20 cases.

    Stasis dermatitis is an itchy, scaly, and hyperpigmented condition of the lower leg due to venous insufficiency. Hemosiderin and/or melanin have been considered responsible for the brown pigmentation; however, histopathologic studies are scarce. In this retrospective study, the hospital records and biopsy slides of 20 patients were reviewed to determine the pathogenetic mechanisms of brown pigmentation in stasis dermatitis. Fifteen patients were men (75%) and 5 were women (25%), with a mean age of 46.2±8.2 yr (18-76), a mean age at onset of 43.4±18.0 yr (17-73), and a mean disease duration of 2.8±2.5 yr (0.25-10). All patients had varicose veins and complained of pruritus. On histopathologic evaluation, two cases out of 20 (3 skin biopsy specimens from 25 samples) showed dermal melanocytes containing melanin, and incontinence of melanin pigment was observed in 5 cases, indicating that melanin pigment from the epidermis could contribute to cutaneous pigmentation in stasis dermatitis. However, the existence of dermal melanocytes in two cases cannot be readily explained, because the dermis normally contains no melanocytes. Further studies concerning the role of iron or inflammatory cytokines in the development of dermal melanocytes should be conducted.

    Uses and Misuses of the Black-Litterman Model in Portfolio Construction

    The Black-Litterman model has gained popularity in applications in the area of quantitative equity portfolio management. Unfortunately, many recent applications of the Black-Litterman model to novel aspects of quantitative portfolio management have neglected the rigor of the original Black-Litterman modelling. In this article, we critically examine some of these applications from a Bayesian perspective. We identify three reasons why these applications may create losses for investors: (1) using a prior without anchoring it to an equilibrium model, (2) using a prior and an equilibrium model that conflict with one another, and (3) ignoring the implications of the estimation error of the variance-covariance matrix. We quantify the loss, first analytically and then numerically, based on historical data on 10 major world stock market indices. Our conservative estimate of the loss is around a 1% reduction in the annualized return of the portfolio.
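
    To make reason (1) concrete, the sketch below anchors the prior to an equilibrium model by reverse optimization and then blends in a single view, following the canonical Black-Litterman formulas. All numbers are invented, and the symbols (delta, tau, P, q, Omega) are the standard notation rather than anything specific to this article.

    ```python
    import numpy as np

    Sigma = np.array([[0.04, 0.01],
                      [0.01, 0.09]])        # asset covariance (invented)
    w_mkt = np.array([0.6, 0.4])            # market-cap weights
    delta, tau = 2.5, 0.05                  # risk aversion, prior scaling

    pi = delta * Sigma @ w_mkt              # equilibrium (prior) returns

    P = np.array([[1.0, -1.0]])             # one view: asset 1 outperforms 2
    q = np.array([0.02])                    # ... by 2 percentage points
    Omega = P @ (tau * Sigma) @ P.T         # view uncertainty (common choice)

    A = np.linalg.inv(tau * Sigma) + P.T @ np.linalg.inv(Omega) @ P
    b = np.linalg.inv(tau * Sigma) @ pi + P.T @ np.linalg.inv(Omega) @ q
    mu_bl = np.linalg.solve(A, b)           # posterior expected returns
    print(pi, mu_bl)
    ```

    Dropping the equilibrium anchor amounts to replacing pi with an ad hoc guess, which is exactly the first misuse the article describes.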

    Mean-Variance Efficiency of Reserve Portfolios

    This paper analyzes the mean-variance efficiency of the reserve portfolios of central banks in an effort to shed light on the recent debate regarding the need for portfolio diversification. Using likelihood ratio test statistics, we examine the efficiency of the reserve portfolios of 18 countries from 2000 to 2009. The null hypothesis of efficiency is rejected for approximately half of the countries. However, overall inefficiency appears to have decreased over time, particularly in those countries whose portfolio diversification was previously inefficient. Together with the continued dominance of the US dollar in reserve portfolios, our findings suggest that the status of the US dollar as an international reserve currency has not declined.
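
    As a back-of-the-envelope version of what such tests measure, a portfolio is mean-variance efficient only if its variance equals the minimum variance attainable at its own expected return. The sketch below computes that gap from the closed-form Markowitz frontier; the inputs are invented, and the paper's actual likelihood ratio statistic is not reproduced here.

    ```python
    import numpy as np

    mu = np.array([0.02, 0.04, 0.06])            # expected returns (invented)
    Sigma = np.diag([0.01, 0.02, 0.04])          # toy diagonal covariance
    w = np.array([0.5, 0.3, 0.2])                # observed reserve weights

    inv = np.linalg.inv(Sigma)
    ones = np.ones_like(mu)
    A, B, C = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu

    m = w @ mu                                   # portfolio's expected return
    frontier_var = (A * m**2 - 2 * B * m + C) / (A * C - B**2)
    actual_var = w @ Sigma @ w
    print(f"efficiency gap: {actual_var - frontier_var:.6f}")  # 0 on frontier
    ```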

    Proxy Anchor-based Unsupervised Learning for Continuous Generalized Category Discovery

    Recent advances in deep learning have significantly improved the performance of various computer vision applications. However, discovering novel categories in an incremental learning scenario remains a challenging problem due to the lack of prior knowledge about the number and nature of new categories. Existing methods for novel category discovery are limited by their reliance on labeled datasets and on prior knowledge about the number of novel categories and the proportion of novel samples in the batch. To address these limitations and more accurately reflect real-world scenarios, in this paper we propose a novel unsupervised class-incremental learning approach for discovering novel categories on unlabeled sets without prior knowledge. The proposed method fine-tunes the feature extractor and proxy anchors on labeled sets, then splits the samples of the unlabeled dataset into old and novel categories and clusters them. Furthermore, proxy-anchor-based exemplars generate representative category vectors to mitigate catastrophic forgetting. Experimental results demonstrate that our proposed approach outperforms state-of-the-art methods on fine-grained datasets under real-world scenarios.
    Comment: Accepted to ICCV 202
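
    For background, the proxy anchors that the method fine-tunes come from the standard Proxy Anchor metric-learning loss (Kim et al., CVPR 2020). The PyTorch sketch below is our rendering of that published formula, not this paper's code; alpha and delta are the loss's usual scale and margin.

    ```python
    import torch
    import torch.nn.functional as F

    def proxy_anchor_loss(embeddings, labels, proxies, alpha=32.0, delta=0.1):
        sim = F.normalize(embeddings) @ F.normalize(proxies).t()  # (N, C)
        pos = F.one_hot(labels, proxies.size(0)).float()          # (N, C)

        pos_exp = torch.exp(-alpha * (sim - delta)) * pos         # pull
        neg_exp = torch.exp(alpha * (sim + delta)) * (1.0 - pos)  # push

        with_pos = pos.sum(dim=0) > 0            # proxies with >=1 positive
        pos_term = torch.log1p(pos_exp.sum(0))[with_pos].sum() / with_pos.sum()
        neg_term = torch.log1p(neg_exp.sum(0)).sum() / proxies.size(0)
        return pos_term + neg_term
    ```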

    Analysis on Response Characteristics of Semiconductor Methane Gas Sensor by Ultrasonic Process

    Thin films of Pt and Zn were coated on the electrode on the board. The Pt films were fabricated by an ion plasma method and the Zn films by DC sputtering. The coated boards were then processed by ultrasonic chemical deposition in a 0.01 M aqueous solution of C6H12N4 and Zn(NO3)2·6H2O. To form zinc oxide, the prepared substrates were annealed at 600 °C for 1 h, and the response characteristics of the ZnO-structured sensors were tested with methane gas. In the experiments, methane concentrations from 15% to 40% LEL were used. We measured the change in voltage before and after methane injection to judge whether the device performed suitably as a methane gas sensor. The measurements confirmed that the voltage increases with the methane concentration. The sensor response increased steadily, so the calibration curve was linear. The fabricated sensors also showed a very short stabilization time and fast response and recovery. These results suggest that the detector could be used in industrial facilities.
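
    Because the response is reported to be linear in concentration, the sensor's sensitivity can be summarized by a straight-line calibration. The sketch below fits one with NumPy; the readings are invented stand-ins, not the paper's measurements.

    ```python
    import numpy as np

    conc = np.array([15, 20, 25, 30, 35, 40])             # % LEL methane
    dV = np.array([0.21, 0.28, 0.36, 0.43, 0.50, 0.58])   # voltage change (V)

    slope, intercept = np.polyfit(conc, dV, 1)            # least-squares line
    print(f"sensitivity ~ {slope * 1000:.1f} mV per % LEL")
    # estimate an unknown concentration from a hypothetical 0.40 V reading
    print((0.40 - intercept) / slope)
    ```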

    ConMatch: Semi-Supervised Learning with Confidence-Guided Consistency Regularization

    We present a novel semi-supervised learning framework, dubbed ConMatch, that intelligently leverages the consistency regularization between the model's predictions from two strongly-augmented views of an image, weighted by the confidence of the pseudo-label. While the latest semi-supervised learning methods use weakly- and strongly-augmented views of an image to define a directional consistency loss, how to define such a direction for the consistency regularization between two strongly-augmented views remains unexplored. To account for this, we present novel confidence measures for pseudo-labels from strongly-augmented views, using the weakly-augmented view as an anchor, in non-parametric and parametric approaches. In particular, in the parametric approach we present, for the first time, a way to learn the confidence of a pseudo-label within the network, which is learned with the backbone model in an end-to-end manner. In addition, we present a stage-wise training scheme to boost convergence. When incorporated into existing semi-supervised learners, ConMatch consistently boosts performance. We conduct experiments to demonstrate the effectiveness of ConMatch over the latest methods and provide extensive ablation studies. Code has been made publicly available at https://github.com/JiwonCocoder/ConMatch.
    Comment: Accepted at ECCV 202
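
    To make the idea concrete, the PyTorch sketch below implements one plausible confidence-guided consistency term between two strongly-augmented views, with the weakly-augmented view serving as the anchor. It is our reading of the abstract (closest to the non-parametric variant), not the authors' exact objective; the names and the threshold tau are invented.

    ```python
    import torch
    import torch.nn.functional as F

    def conmatch_style_loss(logits_w, logits_s1, logits_s2, tau=0.95):
        probs_w = F.softmax(logits_w.detach(), dim=-1)    # weak view = anchor
        conf, pseudo = probs_w.max(dim=-1)                # pseudo-labels
        mask = (conf >= tau).float()                      # confidence gate

        # confidence of each strong view = agreement with the weak anchor
        c1 = (F.softmax(logits_s1, dim=-1) * probs_w).sum(-1).detach()
        c2 = (F.softmax(logits_s2, dim=-1) * probs_w).sum(-1).detach()

        # direct the strong-strong consistency toward the more confident view
        ce12 = F.cross_entropy(logits_s2, logits_s1.argmax(-1), reduction="none")
        ce21 = F.cross_entropy(logits_s1, logits_s2.argmax(-1), reduction="none")
        strong = torch.where(c1 >= c2, c1 * ce12, c2 * ce21)

        sup = F.cross_entropy(logits_s1, pseudo, reduction="none")
        return ((sup + strong) * mask).mean()
    ```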

    AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation

    We present a novel adversarially penalized self-knowledge distillation method, named adversarial learning and implicit regularization for self-knowledge distillation (AI-KD), which regularizes the training procedure by adversarial learning and implicit distillation. Our model not only distills deterministic and progressive knowledge from the predictive probabilities of the pre-trained model and of the previous epoch, but also transfers the knowledge of the deterministic predictive distributions using adversarial learning. The motivation is that self-knowledge distillation methods regularize the predictive probabilities with soft targets, but the exact distributions may be hard to predict. Our method deploys a discriminator to distinguish between the distributions of the pre-trained and student models, while the student model is trained to fool the discriminator during training. Thus, the student model not only learns the pre-trained model's predictive probabilities but also aligns its distributions with those of the pre-trained model. We demonstrate the effectiveness of the proposed method with various network architectures on multiple datasets, and show that the proposed method achieves better performance than state-of-the-art methods.
    Comment: 12 pages, 7 figures
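
    The adversarial ingredient can be sketched as follows: a small discriminator tries to tell the pre-trained model's predictive distributions from the student's, and the student is additionally trained to fool it. The architecture and loss weights below are invented for illustration; this is not the authors' released AI-KD code.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    num_classes = 10
    disc = nn.Sequential(nn.Linear(num_classes, 64), nn.ReLU(),
                         nn.Linear(64, 1))                 # distribution critic

    def discriminator_loss(p_teacher, p_student):
        real = F.binary_cross_entropy_with_logits(
            disc(p_teacher), torch.ones(p_teacher.size(0), 1))
        fake = F.binary_cross_entropy_with_logits(
            disc(p_student.detach()), torch.zeros(p_student.size(0), 1))
        return real + fake

    def student_loss(logits_student, p_teacher, labels, beta=0.1):
        log_p = F.log_softmax(logits_student, dim=-1)
        ce = F.cross_entropy(logits_student, labels)           # hard targets
        kd = F.kl_div(log_p, p_teacher, reduction="batchmean") # soft targets
        fool = F.binary_cross_entropy_with_logits(             # fool the critic
            disc(log_p.exp()), torch.ones(logits_student.size(0), 1))
        return ce + kd + beta * fool
    ```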

    Bankruptcy of Lehman Brothers: Determinants of Cross-country Impacts on Stock Market Volatility

    We empirically examine the determinants of the short-term cross-country impacts of Lehman Brothers' bankruptcy on the volatility of stock prices. According to the results of this study, countries with lower financial openness and greater stock market depth experienced a smaller increase in stock price volatility. This suggests that capital controls and greater stock market development were relatively more useful in maintaining the stability of stock markets at the time of Lehman's failure. On the other hand, we find little evidence for the role of international imbalances, trade openness, economic size and income levels, and macroeconomic fundamentals.
    Keywords: Lehman Brothers; volatility of stock prices; financial openness; stock market development
    JEL Classifications: F32; F36; F38; F4

    The Effect of Ad Exposure on Consumer Visual Attentions Depending on Time Lapse in the Context of eSports

    PURPOSE This study examined how consumers' visual attention to ads during eSports media consumption varies over time. METHODS An experimental study with a single-factor, three-level within-subjects design was conducted, using an eye-tracker to measure visual attention, including fixation count and duration. Seventy-eight students from a national university in city B participated in the experiment. A repeated-measures ANOVA was conducted using the open-source statistical program R to test the research hypothesis. RESULTS Both the fixation count and duration were highest for the first ad and then gradually decreased for the second and third ads. CONCLUSIONS It is recommended that eSports sponsors differentiate ad pricing based on the order of exposure, expose the first-presented ad more frequently and for longer periods, and vary ad shapes, colors, and movements to prevent adaptation after the initial allocation of attention.
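
    For readers who prefer Python, the reported design (one within-subjects factor with three levels) can be run with statsmodels' AnovaRM instead of the authors' R workflow; the fixation counts below are simulated stand-ins, not the study's data.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(0)
    subjects = np.repeat(np.arange(1, 79), 3)            # 78 participants
    order = np.tile(["first", "second", "third"], 78)    # ad position
    means = {"first": 12, "second": 9, "third": 7}       # declining attention
    fix = [rng.poisson(means[o]) for o in order]

    df = pd.DataFrame({"subject": subjects, "ad_order": order,
                       "fixation": fix})
    print(AnovaRM(df, depvar="fixation", subject="subject",
                  within=["ad_order"]).fit())
    ```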