
    What Twitter Profile and Posted Images Reveal About Depression and Anxiety

    Previous work has found strong links between the choice of social media images and users' emotions, demographics and personality traits. In this study, we examine which attributes of profile and posted images are associated with depression and anxiety in Twitter users. We used a sample of 28,749 Facebook users to build a language prediction model of survey-reported depression and anxiety, and validated it on Twitter on a sample of 887 users who had taken anxiety and depression surveys. We then applied it to a different set of 4,132 Twitter users to impute language-based depression and anxiety labels, and extracted interpretable features of posted and profile pictures to uncover associations with users' depression and anxiety, controlling for demographics. For depression, we find that profile pictures suppress positive emotions rather than display more negative emotions, likely because of social media self-presentation biases. They also tend to show the single face of the user (rather than showing the user in groups of friends), marking an increased focus on the self that is emblematic of depression. Posted images are dominated by grayscale and low aesthetic cohesion across a variety of image features. Profile images of anxious users are similarly marked by grayscale and low aesthetic cohesion, but less so than those of depressed users. Finally, we show that image features can be used to predict depression and anxiety, and that multitask learning that includes joint modeling of demographics improves prediction performance. Overall, we find that the image attributes that mark depression and anxiety offer a rich lens into these conditions, largely congruent with the psychological literature, and that images on Twitter allow inferences about the mental health status of users.
    Comment: ICWSM 201
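
    As a rough illustration of the kind of pipeline the abstract describes, the sketch below extracts a few interpretable image attributes (saturation as a grayscale proxy, brightness, a crude colorfulness measure) and fits a simple classifier against imputed labels. The feature set, file names and model choice are assumptions for illustration, not the authors' actual features or code.

```python
# Hedged sketch: a few interpretable image attributes of the kind the abstract
# mentions (grayscale tendency, brightness, colorfulness), fed to a simple
# classifier. Paths, labels and the model choice are illustrative placeholders.
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def image_features(path):
    """Return [mean saturation, mean brightness, crude colorfulness] for one image."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float) / 255.0
    s, v = hsv[..., 1], hsv[..., 2]
    return np.array([s.mean(), v.mean(), s.std() + v.std()])

paths = ["user1.jpg", "user2.jpg", "user3.jpg", "user4.jpg"]  # hypothetical per-user images
labels = np.array([1, 0, 1, 0])                               # imputed labels (1 = depressed)

X = np.vstack([image_features(p) for p in paths])
clf = LogisticRegression().fit(X, labels)
print(clf.coef_)  # coefficient signs hint at which attributes associate with the label
```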

    Inter-CubeSat Communication with V-band "Bull's eye" antenna

    We present the study of a simple communication scenario between two CubeSats using a V-band "Bull's eye" antenna that we designed for this purpose. The antenna has a -10 dB return-loss bandwidth of 0.7 GHz and a gain of 15.4 dBi at 60 GHz. Moreover, its low-profile shape makes it easy to integrate into a CubeSat chassis. The communication scenario study shows that, using 0.01 W VubiQ modules and V-band "Bull's eye" antennas, CubeSats can efficiently transmit data within a 500 MHz bandwidth and with a 10^-6 BER while separated by up to 98 m under ideal conditions, or 50 m under worst-case operating conditions (5° pointing misalignment in the E- and H-planes of the antenna, and 5° polarisation misalignment).
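
    For context, the quoted numbers support a simple Friis link-budget check; the sketch below estimates the received SNR at 98 m with 0.01 W transmit power, 15.4 dBi antennas at both ends, and a 500 MHz bandwidth at 60 GHz. The receiver noise figure and the SNR needed for a 10^-6 BER are assumptions here, not values taken from the paper.

```python
# Hedged Friis link-budget sketch for the scenario in the abstract.
# The receiver noise figure is an assumed value, not from the paper.
import math

P_tx_dBm = 10 * math.log10(0.01 * 1e3)   # 0.01 W transmit power -> +10 dBm
G_ant_dBi = 15.4                         # gain of each "Bull's eye" antenna
f_Hz, B_Hz = 60e9, 500e6                 # carrier frequency and bandwidth
NF_dB = 6.0                              # assumed receiver noise figure
d_m = 98.0                               # separation quoted for ideal conditions

fspl_dB = 20 * math.log10(4 * math.pi * d_m * f_Hz / 3e8)   # free-space path loss
P_rx_dBm = P_tx_dBm + 2 * G_ant_dBi - fspl_dB
noise_dBm = -174 + 10 * math.log10(B_Hz) + NF_dB            # thermal noise floor
print(f"FSPL {fspl_dB:.1f} dB, Prx {P_rx_dBm:.1f} dBm, SNR {P_rx_dBm - noise_dBm:.1f} dB")
```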

    Modeling Human Categorization of Natural Images Using Deep Feature Representations

    Over the last few decades, psychologists have developed sophisticated formal models of human categorization using simple artificial stimuli. In this paper, we use modern machine learning methods to extend this work into the realm of naturalistic stimuli, enabling human categorization to be studied over the complex visual domain in which it evolved and developed. We show that representations derived from a convolutional neural network can be used to model behavior over a database of >300,000 human natural image classifications, and find that a group of models based on these representations perform well, near the reliability of human judgments. Interestingly, this group includes both exemplar and prototype models, contrasting with the dominance of exemplar models in previous work. We are able to improve the performance of the remaining models by preprocessing neural network representations to more closely capture human similarity judgments.
    Comment: 13 pages, 7 figures, 6 tables. Preliminary work presented at CogSci 201
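
    The exemplar/prototype contrast the abstract highlights can be written compactly over deep feature vectors; the sketch below uses the generic forms from the categorization literature (summed exponential similarity to stored exemplars vs. similarity to class means). The random feature vectors and the sensitivity parameter c are placeholders, not the representations or fitted values from the paper.

```python
# Generic exemplar (GCM-style) vs. prototype categorization over feature vectors.
# Features are random stand-ins for CNN representations; c is a placeholder.
import numpy as np

def exemplar_probs(x, feats, labels, n_classes, c=1.0):
    """Summed exponential similarity to every stored exemplar of each class."""
    sim = np.exp(-c * np.linalg.norm(feats - x, axis=1))
    strength = np.array([sim[labels == k].sum() for k in range(n_classes)])
    return strength / strength.sum()

def prototype_probs(x, feats, labels, n_classes, c=1.0):
    """Similarity to each class mean (prototype) only."""
    protos = np.vstack([feats[labels == k].mean(axis=0) for k in range(n_classes)])
    sim = np.exp(-c * np.linalg.norm(protos - x, axis=1))
    return sim / sim.sum()

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))          # low-dimensional stand-in for deep features
labels = rng.integers(0, 2, size=100)
query = rng.normal(size=8)
print(exemplar_probs(query, feats, labels, 2), prototype_probs(query, feats, labels, 2))
```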

    Learning Visual Importance for Graphic Designs and Data Visualizations

    Knowing where people look and click on visual designs can provide clues about how the designs are perceived, and where the most important or relevant content lies. The most important content of a visual design can be used for effective summarization or to facilitate retrieval from a database. We present automated models that predict the relative importance of different elements in data visualizations and graphic designs. Our models are neural networks trained on human clicks and importance annotations on hundreds of designs. We collected a new dataset of crowdsourced importance, and analyzed the predictions of our models with respect to ground truth importance and human eye movements. We demonstrate how such predictions of importance can be used for automatic design retargeting and thumbnailing. User studies with hundreds of MTurk participants validate that, with limited post-processing, our importance-driven applications are on par with, or outperform, current state-of-the-art methods, including natural image saliency. We also provide a demonstration of how our importance predictions can be built into interactive design tools to offer immediate feedback during the design process.
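
    One way such predictions get used downstream, as the abstract notes for retargeting and thumbnailing, is to aggregate a predicted importance map over each design element and rank the elements. The map and bounding boxes below are random placeholders standing in for a model's output, not data or code from the paper.

```python
# Sketch: turn a per-pixel importance map into a per-element ranking.
# importance_map and the element boxes are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
importance_map = rng.random((256, 256))     # stand-in for a predicted map in [0, 1]
elements = {                                # hypothetical boxes as (y0, y1, x0, x1)
    "title": (10, 40, 20, 230),
    "chart": (60, 200, 30, 220),
    "footer": (220, 250, 20, 230),
}

scores = {name: importance_map[y0:y1, x0:x1].mean()
          for name, (y0, y1, x0, x1) in elements.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")           # higher score = keep/emphasize when retargeting
```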

    CMB component separation by parameter estimation

    We propose a solution to the CMB component separation problem based on standard parameter estimation techniques. We assume a parametric spectral model for each signal component, and fit the corresponding parameters pixel by pixel in a two-stage process. First we fit for the full parameter set (e.g., component amplitudes and spectral indices) in low-resolution and high signal-to-noise ratio maps using MCMC, obtaining both best-fit values for each parameter, and the associated uncertainty. The goodness-of-fit is evaluated by a chi^2 statistic. Then we fix all non-linear parameters at their low-resolution best-fit values, and solve analytically for high-resolution component amplitude maps. This likelihood approach has many advantages: the fitted model may be chosen freely, and the method is therefore completely general; all assumptions are transparent; no restrictions on spatial variations of foreground properties are imposed; the results may be rigorously monitored by goodness-of-fit tests; and, most importantly, we obtain reliable error estimates on all estimated quantities. We apply the method to simulated Planck and six-year WMAP data based on realistic models, and show that separation at the muK level is indeed possible in these cases. We also outline how the foreground uncertainties may be rigorously propagated through to the CMB power spectrum and cosmological parameters using a Gibbs sampling technique.
    Comment: 20 pages, 10 figures, submitted to ApJ. For a high-resolution version, see http://www.astro.uio.no/~hke/docs/eriksen_et_al_fgfit.p
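
    The second stage described in the abstract reduces to a per-pixel linear solve once the non-linear spectral parameters are fixed: the amplitude estimate is a_hat = (A^T N^-1 A)^-1 A^T N^-1 d, with the inverse term also supplying the error estimates. The sketch below illustrates this for an assumed two-component (CMB plus power-law foreground) model with made-up band frequencies and noise levels; it is not the authors' code or data.

```python
# Per-pixel generalized least-squares amplitude solve with spectral parameters fixed,
# as in the second stage described in the abstract. Frequencies, the spectral index,
# noise levels and unit conventions are simplified, illustrative assumptions.
import numpy as np

nu = np.array([30., 44., 70., 100., 143., 217.])   # band frequencies in GHz (assumed)
beta = -3.0                                        # foreground index fixed from the low-res fit

A = np.column_stack([np.ones_like(nu),             # CMB column (flat, simplified units)
                     (nu / 30.0) ** beta])         # power-law foreground scaled to 30 GHz
sigma = np.array([2.0, 2.5, 3.0, 1.0, 0.8, 1.2])   # per-band noise rms (assumed)
N_inv = np.diag(1.0 / sigma**2)

rng = np.random.default_rng(2)
d = A @ np.array([50.0, 20.0]) + rng.normal(scale=sigma)   # simulated single-pixel data

cov = np.linalg.inv(A.T @ N_inv @ A)               # amplitude covariance -> error estimates
a_hat = cov @ (A.T @ N_inv @ d)
chi2 = (d - A @ a_hat) @ N_inv @ (d - A @ a_hat)   # goodness-of-fit for this pixel
print(a_hat, np.sqrt(np.diag(cov)), chi2)
```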

    Machine learning methods for histopathological image analysis

    Abundant accumulation of digital histopathological images has led to the increased demand for their analysis, such as computer-aided diagnosis using machine learning techniques. However, digital pathological images and related tasks have some issues to be considered. In this mini-review, we introduce the application of digital pathological image analysis using machine learning algorithms, address some problems specific to such analysis, and propose possible solutions.
    Comment: 23 pages, 4 figure