7 research outputs found

    What Twitter Profile and Posted Images Reveal About Depression and Anxiety

    Previous work has found strong links between the choice of social media images and users' emotions, demographics, and personality traits. In this study, we examine which attributes of profile and posted images are associated with the depression and anxiety of Twitter users. We used a sample of 28,749 Facebook users to build a language prediction model of survey-reported depression and anxiety, and validated it on Twitter on a sample of 887 users who had taken anxiety and depression surveys. We then applied it to a different set of 4,132 Twitter users to impute language-based depression and anxiety labels, and extracted interpretable features of posted and profile pictures to uncover the associations with users' depression and anxiety, controlling for demographics. For depression, we find that profile pictures suppress positive emotions rather than display more negative emotions, likely because of social media self-presentation biases. They also tend to show the single face of the user (rather than show her in groups of friends), marking an increased focus on the self, emblematic of depression. Posted images are dominated by grayscale and low aesthetic cohesion across a variety of image features. Profile images of anxious users are similarly marked by grayscale and low aesthetic cohesion, but less so than those of depressed users. Finally, we show that image features can be used to predict depression and anxiety, and that multitask learning that includes a joint modeling of demographics improves prediction performance. Overall, we find that the image attributes that mark depression and anxiety offer a rich lens into these conditions, largely congruent with the psychological literature, and that images on Twitter allow inferences about the mental health status of users. Comment: ICWSM 201
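
The multitask setup mentioned in the abstract — jointly predicting depression, anxiety, and demographics from the same image features — can be sketched as a shared ridge regression in which all tasks are solved against one feature design. This is a minimal illustration, not the paper's actual model; the feature matrix, targets, and regularization strength are all assumptions:

```python
import numpy as np

def multitask_ridge(X, Y, lam=1.0):
    """Fit all tasks (e.g. depression, anxiety, demographic attributes, as columns
    of Y) against the same image features X with a single ridge solve, so the
    tasks share one feature design. A simplified stand-in for the multitask
    learning described in the abstract."""
    d = X.shape[1]
    # closed-form ridge: (X^T X + lam I) W = X^T Y, one column of W per task
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
```

With a near-zero `lam` and noiseless targets, the solve recovers the generating weights, one column per task.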

    Blood vessel enhancement via multi-dictionary and sparse coding: Application to retinal vessel enhancing

    Blood vessel images can provide considerable information about many diseases and are widely used by ophthalmologists for disease diagnosis and surgical planning. In this paper, we propose a novel method for blood Vessel Enhancement via Multi-dictionary and Sparse Coding (VE-MSC). In the proposed method, two dictionaries are utilized to capture the vascular structures and details: the Representation Dictionary (RD), generated from the original vascular images, and the Enhancement Dictionary (ED), extracted from the corresponding label images. Sparse coding is used to represent the original target vessel image over RD. The enhanced target vessel image can then be reconstructed using the obtained sparse coefficients and ED. The proposed method has been evaluated for retinal vessel enhancement on the DRIVE and STARE databases. Experimental results indicate that the proposed method can not only effectively improve image contrast but also enhance retinal vascular structures and details.
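
The coding-and-reconstruction step of VE-MSC can be sketched as follows: a patch is sparse-coded over the representation dictionary RD, and the same coefficients are applied to the enhancement dictionary ED. The greedy orthogonal matching pursuit below is a simplified stand-in for the paper's sparse coding stage, and the dictionary shapes and sparsity level are assumptions:

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: sparse-code x over dictionary D
    (atoms as columns), selecting at most k atoms."""
    residual = x.astype(float).copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # atom most correlated with residual
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)  # refit on selected atoms
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

def enhance_patch(patch, RD, ED, k=3):
    """VE-MSC idea: code the patch over RD, rebuild it with ED using the
    same sparse coefficients."""
    return ED @ omp(RD, patch, k)
```

In the actual method, ED atoms come from label images, so the rebuilt patch emphasizes vessel structure rather than reproducing the input.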

    Retinal Fundus Image Enhancement Using the Normalized Convolution and Noise Removing

    Retinal fundus images play an important role in the diagnosis of retinal diseases. Detailed information in the retinal fundus image, such as small vessels, microaneurysms, and exudates, may be in low contrast, and retinal image enhancement usually helps in analyzing diseases related to the retinal fundus. Current image enhancement methods may lead to artificial boundaries, abrupt changes in color levels, and the loss of image detail. In order to avoid these side effects, a new retinal fundus image enhancement method is proposed. First, the original retinal fundus image is processed by the normalized convolution algorithm with a domain transform to obtain an image with the basic information of the background. Then, this background image is fused with the original retinal fundus image to obtain an enhanced fundus image. Lastly, the fused image is denoised by a two-stage denoising method combining fourth-order PDEs and the relaxed median filter. Retinal image databases, including the DRIVE, STARE, and DIARETDB1 databases, were used to evaluate the enhancement effects. The results show that the method enhances the retinal fundus image prominently. Moreover, unlike some other fundus image enhancement methods, the proposed method can directly enhance color images.
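
The background-estimation-plus-fusion idea can be sketched with a plain mean filter standing in for the normalized convolution with a domain transform (the smoothing kernel, fusion weight, and [0, 1] value range are assumptions, and the paper's two-stage denoising step is omitted):

```python
import numpy as np

def box_blur(img, r=2):
    """Crude background estimate: (2r+1)x(2r+1) mean filter via edge padding."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def enhance(img, alpha=1.5):
    """Fuse the original with its smoothed background to lift local contrast,
    a simplified stand-in for the normalized-convolution + fusion steps."""
    background = box_blur(img)
    fused = background + alpha * (img - background)
    return np.clip(fused, 0.0, 1.0)
```

Running this per channel is what lets the approach operate directly on color images rather than on a single luminance plane.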

    Color Image Enhancement via Combine Homomorphic Ratio and Histogram Equalization Approaches: Using Underwater Images as Illustrative Examples

    The histogram is one of the important characteristics of grayscale images, and histogram equalization is an effective method of image enhancement. When processing color images in models such as the RGB model, histogram equalization can be applied to each color component, and a new color image is then composed from the processed components. This is a traditional way of processing color images, but it does not preserve the existing relation or correlation between the colors at each pixel. In this work, a new model of color image enhancement is proposed that preserves the ratios of colors at all pixels after processing the image. The model is described for color histogram equalization (HE), and examples of application to color images are given. Our preliminary results show that the model with HE can be effectively used for enhancing color images, including underwater images. Intensive computer simulations show that for single underwater image enhancement, the presented method increases image contrast and brightness and yields a good natural appearance and relatively genuine color.
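
The ratio-preserving model can be sketched by equalizing a single intensity plane and then applying the same per-pixel gain to all three channels, which leaves the R:G:B ratios unchanged wherever no clipping occurs. The bin count and the mean-of-channels intensity definition are assumptions, not taken from the paper:

```python
import numpy as np

def equalize_preserving_ratios(rgb):
    """Histogram-equalize the intensity plane of an RGB image in [0, 1], then
    rescale R, G, B by the same per-pixel gain so color ratios are preserved."""
    intensity = rgb.mean(axis=2)
    # standard histogram equalization on the intensity plane
    hist, bins = np.histogram(intensity, bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum() / intensity.size
    eq = np.interp(intensity, bins[:-1], cdf)
    # one gain per pixel, applied to every channel: ratios stay intact
    gain = np.where(intensity > 0, eq / np.maximum(intensity, 1e-8), 0.0)
    return np.clip(rgb * gain[..., None], 0.0, 1.0)
```

Contrast this with per-channel equalization, where each channel gets its own mapping and the color at a pixel can shift hue.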

    Multimodal Mental Health Analysis in Social Media

    Depression is a major public health concern in the U.S. and globally. While successful early identification and treatment can lead to many positive health and behavioral outcomes, depression remains undiagnosed, untreated, or undertreated for several reasons, including denial of the illness as well as cultural and social stigma. With the ubiquity of social media platforms, millions of people now share their online persona by expressing their thoughts, moods, emotions, and even their daily struggles with mental health on social media. Unlike traditional observational cohort studies conducted through questionnaires and self-reported surveys, we explore the reliable detection of depressive symptoms from tweets obtained unobtrusively. In particular, we examine and exploit multimodal big (social) data to discern depressive behaviors using a wide variety of features, including individual-level demographics. By developing a multimodal framework and employing statistical techniques to fuse heterogeneous sets of features obtained through the processing of visual, textual, and user interaction data, we significantly improve on the current state-of-the-art approaches for identifying depressed individuals on Twitter (raising the average F1-score by 5 percent) and facilitate demographic inferences from social media. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.
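
The fusion of heterogeneous feature sets can be sketched as early fusion: standardize each modality separately so no single feature scale dominates, then concatenate per user. This is a minimal illustration; the paper's actual visual, textual, and interaction features and its statistical fusion techniques are not reproduced here:

```python
import numpy as np

def fuse_features(visual, textual, interaction):
    """Early fusion of per-user feature matrices (one row per user): z-score
    each modality independently, then concatenate along the feature axis."""
    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.concatenate([zscore(visual), zscore(textual), zscore(interaction)], axis=1)
```

The fused matrix can then be fed to any classifier to predict the depression label per user.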