
    Mining the Relationship Between COVID-19 Sentiment and Market Performance

    At the beginning of the COVID-19 outbreak in March 2020, we observed one of the largest stock market crashes in history, followed in the subsequent months by a volatile bullish climb back to pre-pandemic levels and beyond. In this paper, we study stock market behavior during the initial months of the COVID-19 pandemic in relation to COVID-19 sentiment. Using text sentiment analysis of Twitter data, we examine tweets that contain keywords related to the COVID-19 pandemic, together with each tweet's sentiment, to understand whether sentiment can serve as an indicator of stock market performance. Previous research has applied natural language processing and text sentiment analysis to understanding stock market performance; given how pervasive the impact of COVID-19 is on the economy, we extend these techniques to the relationship between COVID-19 and stock market performance. Our findings show a strong relationship between COVID-19 sentiment derived from tweets and stock market performance, which could be used to predict stock market performance in the future. Comment: 18 pages, 7 figures, 5 tables
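
    As a rough illustration of the kind of pipeline described above, the sketch below scores COVID-19-related tweets with VADER and correlates daily mean sentiment with index returns; the paper does not specify its exact sentiment model, keyword list, or market index, so those choices here are assumptions.

```python
# Illustrative sketch only: score COVID-related tweets with VADER and
# correlate daily mean sentiment with daily index returns.
import pandas as pd
from scipy.stats import pearsonr
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # needs nltk.download('vader_lexicon')

KEYWORDS = ("covid", "coronavirus", "pandemic", "lockdown")  # assumed keyword list

def daily_sentiment(tweets: pd.DataFrame) -> pd.Series:
    """tweets has columns ['date', 'text']; returns mean compound score per day."""
    sia = SentimentIntensityAnalyzer()
    mask = tweets["text"].str.lower().str.contains("|".join(KEYWORDS))
    scored = tweets.loc[mask].copy()
    scored["compound"] = scored["text"].map(lambda t: sia.polarity_scores(t)["compound"])
    return scored.groupby("date")["compound"].mean()

def sentiment_return_correlation(sentiment: pd.Series, close: pd.Series) -> float:
    """close holds daily closing prices indexed by date; returns the Pearson r."""
    returns = close.pct_change().dropna()
    joined = pd.concat([sentiment, returns], axis=1, join="inner").dropna()
    r, _ = pearsonr(joined.iloc[:, 0], joined.iloc[:, 1])
    return r
```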

    BiRA-Net: Bilinear Attention Net for Diabetic Retinopathy Grading

    Diabetic retinopathy (DR) is a common retinal disease that leads to blindness. For diagnostic purposes, DR image grading aims to provide automatic DR grade classification, which is not addressed by conventional research on binary DR image classification. Small objects in eye images, such as lesions and microaneurysms, are essential to DR grading in medical imaging, but they can easily be influenced by other objects. To address these challenges, we propose a new deep learning architecture, called BiRA-Net, which combines an attention model for feature extraction with a bilinear model for fine-grained classification. Furthermore, considering the distance between different DR grades, we propose a new loss function, called grading loss, which leads to improved training convergence of the proposed approach. Experimental results are provided to demonstrate the superior performance of the proposed approach. Comment: Accepted at ICIP 2019
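
    The abstract names two ingredients, bilinear pooling and a distance-aware grading loss; the PyTorch sketch below shows one plausible form of each. The layer sizes, the five-grade assumption, and the particular loss formula are illustrative guesses, not the published BiRA-Net definition.

```python
# Hedged PyTorch sketch: bilinear pooling head + a distance-weighted grading loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearHead(nn.Module):
    """Outer-product (bilinear) pooling over a CNN feature map for fine-grained grading."""
    def __init__(self, channels: int, num_grades: int = 5):
        super().__init__()
        self.fc = nn.Linear(channels * channels, num_grades)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        x = feat.view(b, c, h * w)
        bilinear = torch.bmm(x, x.transpose(1, 2)) / (h * w)                 # (b, c, c)
        bilinear = bilinear.view(b, -1)
        bilinear = torch.sign(bilinear) * torch.sqrt(bilinear.abs() + 1e-8)  # signed square root
        return self.fc(F.normalize(bilinear, dim=1))

def grading_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Assumed form: cross-entropy plus the expected |predicted grade - true grade|."""
    probs = F.softmax(logits, dim=1)
    grades = torch.arange(logits.size(1), device=logits.device).float()
    distance = (grades.unsqueeze(0) - target.float().unsqueeze(1)).abs()
    return F.cross_entropy(logits, target) + (probs * distance).sum(dim=1).mean()
```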

    No-Reference Light Field Image Quality Assessment Based on Micro-Lens Image

    Light field image quality assessment (LF-IQA) plays a significant role because it guides Light Field (LF) content acquisition, processing, and application. The LF can be represented as a 4-D signal, and its quality depends on both angular consistency and spatial quality. However, few existing LF-IQA methods concentrate on the effects caused by angular inconsistency. In particular, no-reference methods lack effective utilization of 2-D angular information. In this paper, we focus on measuring 2-D angular consistency for LF-IQA. The Micro-Lens Image (MLI) refers to the angular domain of the LF image, and it can simultaneously record the angular information in both horizontal and vertical directions. Since the MLI contains 2-D angular information, we propose a No-Reference Light Field image Quality assessment model based on the MLI (LF-QMLI). Specifically, we first utilize the Global Entropy Distribution (GED) and the Uniform Local Binary Pattern descriptor (ULBP) to extract features from the MLI, and then pool them together to measure angular consistency. In addition, the information entropy of the Sub-Aperture Image (SAI) is adopted to measure spatial quality. Extensive experimental results show that LF-QMLI achieves state-of-the-art performance.
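
    To make the named feature types concrete, the sketch below computes entropy and uniform-LBP histograms over micro-lens images and an entropy feature for a sub-aperture image using scikit-image; the paper's exact GED pooling, ULBP parameters, and quality regressor are not reproduced, so these are assumptions.

```python
# Hedged sketch: entropy + uniform LBP features on micro-lens images (angular cue)
# and information entropy of a sub-aperture image (spatial cue).
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.measure import shannon_entropy

def ulbp_histogram(gray: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    """Uniform LBP histogram of one grayscale micro-lens image."""
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def mli_angular_features(mli_patches: list) -> np.ndarray:
    """Pool entropy and ULBP features over all micro-lens images as an angular-consistency cue."""
    entropies = np.array([shannon_entropy(p) for p in mli_patches])
    lbp_hists = np.stack([ulbp_histogram(p) for p in mli_patches])
    return np.concatenate([[entropies.mean(), entropies.std()], lbp_hists.mean(axis=0)])

def sai_spatial_feature(sai: np.ndarray) -> float:
    """Information entropy of a sub-aperture image as a spatial-quality cue."""
    return float(shannon_entropy(sai))
```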

    Semi-Supervised Self-Taught Deep Learning for Finger Bones Segmentation

    Segmentation stands at the forefront of many high-level vision tasks. In this study, we focus on segmenting finger bones within a newly introduced semi-supervised self-taught deep learning framework that consists of a student network and a stand-alone teacher module. The whole system is trained in a life-long learning manner, wherein at each step the teacher module provides a refinement for the student network to learn from newly available unlabeled data. Experimental results demonstrate the superiority of the proposed method over conventional supervised deep learning methods. Comment: Accepted at IEEE BHI 2019
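
    A schematic version of the teacher-student loop is sketched below: the teacher refines the student's predictions on unlabeled images into pseudo-labels that the student then trains on. The function names, the form of the teacher module, and the loss combination are assumptions, not the paper's implementation.

```python
# Schematic self-taught training step (assumed structure, PyTorch-style).
import torch

def self_taught_step(student, teacher_refine, optimizer, labeled_loader, unlabeled_loader, loss_fn):
    """One life-long learning step: supervised loss plus a loss on teacher-refined pseudo-labels."""
    student.train()
    for (x_l, y_l), x_u in zip(labeled_loader, unlabeled_loader):
        with torch.no_grad():
            pseudo = teacher_refine(student(x_u))  # teacher module refines the student's raw prediction
        optimizer.zero_grad()
        loss = loss_fn(student(x_l), y_l) + loss_fn(student(x_u), pseudo)
        loss.backward()
        optimizer.step()
```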

    In Vivo Direct Reprogramming of Reactive Glial Cells into Functional Neurons after Brain Injury and in an Alzheimer’s Disease Model

    Loss of neurons after brain injury and in neurodegenerative disease is often accompanied by reactive gliosis and scarring, which are difficult to reverse with existing treatment approaches. Here, we show that reactive glial cells in the cortex of stab-injured or Alzheimer’s disease (AD) model mice can be directly reprogrammed into functional neurons in vivo using retroviral expression of a single neural transcription factor, NeuroD1. Following expression of NeuroD1, astrocytes were reprogrammed into glutamatergic neurons, while NG2 cells were reprogrammed into glutamatergic and GABAergic neurons. Cortical slice recordings revealed both spontaneous and evoked synaptic responses in NeuroD1-converted neurons, suggesting that they integrated into local neural circuits. NeuroD1 expression was also able to reprogram cultured human cortical astrocytes into functional neurons. Our studies therefore suggest that direct reprogramming of reactive glial cells into functional neurons in vivo could provide an alternative approach for repair of injured or diseased brain.

    Quality Assessment of Stereoscopic 360-degree Images from Multi-viewports

    Objective quality assessment of stereoscopic panoramic images has become a challenging problem owing to the rapid growth of 360-degree content. Different from traditional 2D image quality assessment (IQA), more complex aspects are involved in 3D omnidirectional IQA, especially the unlimited field of view (FoV) and the extra depth perception, which make it difficult to evaluate the quality of experience (QoE) of 3D omnidirectional images. In this paper, we propose a multi-viewport based full-reference stereo 360 IQA model. Because viewports change freely when browsing in a head-mounted display (HMD), our proposed approach processes the image inside the FoV rather than a projected one such as the equirectangular projection (ERP). In addition, since the overall QoE depends on both image quality and depth perception, we utilize features estimated from the difference map between the left and right views, which can reflect disparity. The depth perception features, along with binocular image qualities, are employed to predict the overall QoE of 3D 360 images. The experimental results on our public Stereoscopic OmnidirectionaL Image quality assessment Database (SOLID) show that the proposed method achieves a significant improvement over some well-known IQA metrics and can accurately reflect the overall QoE of perceived images.
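
    As a toy illustration of two cues named above, the sketch below computes simple statistics of the left/right difference map as a depth-perception proxy and fuses them with an average viewport quality score; the viewport sampling, the per-viewport quality metric, and the fusion weights of the actual model are not reproduced here.

```python
# Toy sketch: difference-map statistics (disparity proxy) fused with viewport quality.
import numpy as np

def difference_map_features(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Simple statistics of the left-right difference map as a depth-perception proxy."""
    diff = left.astype(np.float64) - right.astype(np.float64)
    return np.array([diff.mean(), diff.std(), np.abs(diff).mean()])

def viewport_qoe_score(viewport_quality: np.ndarray, depth_features: np.ndarray,
                       weights: np.ndarray) -> float:
    """Assumed fusion: linear combination of mean viewport quality and depth features."""
    feats = np.concatenate([[viewport_quality.mean()], depth_features])
    return float(weights @ feats)
```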