29 research outputs found

    See and Read: Detecting Depression Symptoms in Higher Education Students Using Multimodal Social Media Data

    Mental disorders such as depression and anxiety have been increasing at alarming rates in the worldwide population. Notably, major depressive disorder has become a common problem among higher education students, aggravated, and perhaps even occasioned, by the academic pressures they face. While the reasons for this alarming situation remain unclear (although widely investigated), students already facing this problem must receive treatment. To that end, their symptoms must first be screened. Traditionally, screening relies on clinical consultations or questionnaires. Nowadays, however, data shared on social media is a ubiquitous source that can be used to detect depression symptoms even when a student cannot afford or does not seek professional care. Previous works have relied on social media data to detect depression in the general population, usually focusing on either posted images or texts, or on metadata. In this work, we focus on detecting the severity of depression symptoms in higher education students by comparing deep learning models to feature-engineering models induced from both the pictures and the captions posted on Instagram. The experimental results show that students with a BDI score of 20 or higher can be detected with 0.92 recall and 0.69 precision in the best case, reached by a fusion model. Our findings show the potential of large-scale depression screening, which could shed light on at-risk students. Comment: This article was accepted (15 November 2019) and will appear in the proceedings of ICWSM 202
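The recall and precision figures quoted above come from the standard confusion-matrix definitions. As a minimal illustration (the BDI scores, labels, and predictions below are invented, not from the article), screening at the BDI >= 20 threshold can be scored like this:

```python
# Hypothetical sketch: recall and precision for a binary screen that
# flags students with BDI >= 20. All data here are invented examples.
def recall_precision(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision

bdi = [25, 10, 31, 19, 22, 8]                    # invented BDI scores
y_true = [s >= 20 for s in bdi]                  # ground-truth labels
y_pred = [True, False, True, True, True, False]  # invented model outputs
r, p = recall_precision(y_true, y_pred)
```

High recall with lower precision, as in the reported 0.92/0.69 result, is the usual trade-off for a screening tool, where missing an at-risk student is costlier than a false alarm.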

    Distance perception in a natural outdoor setting: is there a developmental trend to overconstancy?

    The main purpose of the present study was to investigate whether, in a natural environment using very large physical distances, there is a trend toward overconstancy in distance estimates during development. One hundred and twenty-nine children aged 5 to 13 years and twenty-one adults (a control group) participated as observers. The observers' task was to bisect egocentric distances, ranging from 1.0 to 296.0 m, presented in a large open field. The analyses focused on two parameters, constant errors and variable errors, which measure accuracy and precision, respectively. A third analysis focused on the developmental pattern of shifts in constancy as a function of age and range of distances. The constant-error analysis showed that two parameters are relevant for accuracy: age and range of distances. For short distances, there are three developmental stages: 5-7 years, when children give unstable responses; 7-11 years, underconstancy; and 13 years to adulthood, when accuracy is reached. For large distances, there are two developmental stages: 5-11 years, with severe underconstancy, and beyond this age, with mild underconstancy. The variable-error analyses indicate that precision is reached by 7-year-old children, independently of the range of distances. The constancy analyses indicated a shift from constancy (or slight overconstancy) to underconstancy as a function of physical distance for all age groups. The age difference appears in the magnitude of the underconstancy at larger distances, where adults presented lower levels of underconstancy than children. The present data were interpreted as reflecting a developmental change in cognitive processing rather than changes in visual space perception.
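The two error measures used in the analysis can be stated concretely. As a minimal sketch with invented bisection responses (these numbers are illustrative, not from the study), the constant error is the mean signed deviation from the true midpoint, and the variable error is the dispersion of the judgments:

```python
import statistics

# Illustrative only: constant error (accuracy) and variable error
# (precision) for repeated bisection judgments. Data are invented.
def constant_error(estimates, target):
    # mean signed deviation from the true midpoint
    return statistics.mean(e - target for e in estimates)

def variable_error(estimates):
    # population standard deviation of the judgments around their own mean
    return statistics.pstdev(estimates)

target = 148.0                       # true midpoint of a 296 m distance
estimates = [140.0, 144.0, 138.0]    # invented bisection responses (m)
ce = constant_error(estimates, target)  # negative bias here
ve = variable_error(estimates)
```

A systematically negative constant error at long distances would correspond to the underconstancy pattern the study reports, while a small variable error indicates precise, if biased, responding.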

    An integrative model of visual control of action

    The present study offers an integrative proposal for a model of visual control of action, specifically in relation to visually directed tasks. Within these models, the calibration between visual signals and vestibulo-kinesthetic signals is of fundamental importance, especially in the case of visually directed tasks. The Hierarchical Control Model (Marken, 1985), the Functional Organization Model (Rieser et al., 1995), the Time-based Heuristics (Lederman et al., 1987), and the Model of Visual Control of Locomotion (Lee & Lishman, 1977b) are integrated into a single model, which also incorporates recent developments in empirical research. The proposed model provides a theoretical framework to guide experimental research on the visual control of action, in order to determine the processing steps and paths not yet clarified by the empirical evidence.

    INTERAÇÕES ENTRE SISTEMAS DE REFERÊNCIA ALOCÊNTRICOS E EGOCÊNTRICOS: EVIDÊNCIAS DOS ESTUDOS COM DIREÇÃO PERCEBIDA

    Frames of reference are defined as a locus, or set of loci, relative to which spatial locations are determined. Egocentric frames of reference define spatial locations relative to the observer, whereas in allocentric frames of reference, locations are determined relative to loci external to the observer. We analyze various lines of evidence for the use of these frames of reference by the human visual system, in both visual perception tasks and navigation tasks. Hypotheses about the dissociation of the visual system according to these frames of reference are supported by results from neuropsychological studies. Taken together, the experimental evidence indicates an effective interaction between encodings in egocentric and allocentric frames of reference, such that the visual and visuomotor systems transform the coordinates of one system into those of the other, adapting the information to the characteristics of different tasks.
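The coordinate transformation between the two frames can be made explicit. As a minimal geometric sketch (not from the article; the function, landmark, and heading convention are invented for illustration), an allocentric location in world coordinates becomes egocentric once it is expressed relative to the observer's position and heading:

```python
import math

# Illustrative sketch: converting an allocentric location (world
# coordinates) into egocentric coordinates relative to an observer.
# heading_deg is a compass-style bearing, measured clockwise from the
# +y ("north") axis. All numbers below are invented.
def to_egocentric(landmark, observer, heading_deg):
    # translate so the observer sits at the origin
    dx = landmark[0] - observer[0]
    dy = landmark[1] - observer[1]
    # rotate so the observer's facing direction becomes the +y axis
    h = math.radians(heading_deg)
    x = dx * math.cos(h) - dy * math.sin(h)
    y = dx * math.sin(h) + dy * math.cos(h)
    return (x, y)

# A landmark 3 m north of an observer facing north lies straight ahead.
ego = to_egocentric(landmark=(0.0, 3.0), observer=(0.0, 0.0), heading_deg=0.0)
```

The inverse transformation (egocentric back to allocentric) exists as well, which is one way to picture the bidirectional interaction between encodings that the abstract describes.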

    A questão ontológica da percepção de cor

    The present study focuses on questions about the ontology of color, essentially the debate between Physicalism and Subjectivism. In analyzing these debates, between a physicalist conception, which considers color a physical property of external objects, and a subjectivist conception, in which color is identified with mental processing, diverse arguments are found, from syllogistic questions to experimental evidence. However, an alternative hypothesis is offered, Psychophysicalism, in which evidence from the psychophysics of color and evolutionary models provides a description of the essence of color. Color should be considered a dual property, with closely related physical and psychological aspects.

    Screening for Depressed Individuals by Using Multimodal Social Media Data

    Depression has increased at alarming rates in the worldwide population. One way to find depressed individuals is to use social media data to train machine learning (ML) models that identify depressed cases automatically. Previous works have already relied on ML to solve this task with reasonably good F-measure scores. Still, several limitations prevent these models from reaching their full potential. In this work, we show that the task of identifying depression through social media is better modeled as a Multiple Instance Learning (MIL) problem, which can exploit the temporal dependencies between posts.
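The MIL framing treats each user as a "bag" of posts: instances are scored individually and the bag label is aggregated from them. The sketch below is hypothetical (the feature names, weights, and threshold are invented, and max pooling is only one possible aggregator); it shows the bag structure, not the article's actual model:

```python
# Hypothetical sketch of the Multiple Instance Learning (MIL) framing:
# a user is a bag of posts, each post gets an instance score, and the
# bag-level label is aggregated from the instances (max pooling here).
def score_post(post_features):
    # stand-in for a learned per-post classifier (invented weights)
    weights = {"negative_words": 0.6, "night_posting": 0.4}
    return sum(weights.get(k, 0.0) * v for k, v in post_features.items())

def classify_user(posts, threshold=0.5):
    # MIL max pooling: flag the user if any single post scores high
    instance_scores = [score_post(p) for p in posts]
    return max(instance_scores) >= threshold, instance_scores

posts = [
    {"negative_words": 0.2, "night_posting": 0.1},  # invented features
    {"negative_words": 0.9, "night_posting": 0.8},
]
flagged, scores = classify_user(posts)
```

Max pooling is order-insensitive; exploiting the temporal dependencies the abstract emphasizes would call for a sequence-aware aggregator (e.g., a recurrent or attention-based one) over the instance scores instead.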

    Apresentação

    Exploring Binocular Visual Attention by Presenting Rapid Dichoptic and Dioptic Series

    This study addresses the distribution of attention in the binocular visual system using RSVP tasks under Attentional Blink (AB) experimental protocols. In Experiment 1, we employed dichoptic RSVP to verify whether, under interocular competition, attention may be captured by a monocular channel. Experiment 2 was a control experiment in which a monoptic RSVP, viewed with both eyes or only one, determined whether the results of the monocular condition in Experiment 1 were due to an allocation of attention to one eye. Experiment 3 was also a control experiment, designed to determine whether the results of Experiment 1 were due to interocular competition or to diminished visual contrast. Results from Experiment 1 revealed that dichoptic presentations caused a delay in the type stage of Wyble's eSTST model, postponing the subsequent tokenization process. The delay in monocular conditions may be further explained by visual attenuation due to fusion of the target with an empty frame. Experiment 2 evidenced attentional allocation to monocular channels when forced by eye occlusion. Experiment 3 showed that monocular performance in Experiment 1 differs significantly from conditions with interocular competition: while both experiments revealed similar performance in monocular conditions, rivalry conditions exhibited lower detection rates, suggesting that competing stimuli were not responsible for the Experiment 1 results. These findings highlight the differences between dichoptic and monoptic presentations of stimuli, particularly on the AB effect, which appears attenuated or absent in dichoptic settings. Furthermore, the results suggest that monoptic presentation and binocular fusion stages were a necessary condition for attentional allocation.