
    Screenomics: a new approach for observing and studying individuals' digital lives

    This study describes when and how adolescents engage with their fast-moving and dynamic digital environment as they go about their daily lives. We illustrate a new approach—screenomics—for capturing, visualizing, and analyzing screenomes, the record of individuals’ day-to-day digital experiences. The sample includes over 500,000 smartphone screenshots provided by four Latino/Hispanic youth, aged 14 to 15 years, from low-income, racial/ethnic minority neighborhoods. Screenomes, collected from smartphones for 1 to 3 months as sequences of screenshots obtained every 5 seconds that the device is activated, are analyzed using computational machinery for processing images and text, machine learning algorithms, human labeling, and qualitative inquiry. Adolescents’ digital lives differ substantially across persons, days, hours, and minutes. Screenomes highlight the extent of switching among multiple applications, and how each adolescent is exposed to different content at different times for different durations—with apps, food-related content, and sentiment as illustrative examples. We propose that the screenome provides the fine granularity of data needed to study individuals’ digital lives, to test existing theories about media use, and to generate new theory about the interplay between digital media and development.
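One analysis the abstract describes is quantifying app switching and per-app exposure time from the 5-second screenshot stream. A minimal sketch of that summary step, assuming per-screenshot app labels are already available (the label names and the `summarize` helper are illustrative, not the authors' actual pipeline):

```python
# Toy screenome summary: given per-screenshot app labels captured at
# 5-second intervals, count app switches and per-app viewing time.

INTERVAL_S = 5  # one screenshot every 5 seconds the screen is on

def summarize(labels):
    """Return (number of app switches, seconds spent per app)."""
    switches = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    seconds = {}
    for app in labels:
        seconds[app] = seconds.get(app, 0) + INTERVAL_S
    return switches, seconds
```

For example, the label sequence `["chat", "chat", "video", "chat"]` yields 2 switches and 15 seconds of chat versus 5 seconds of video.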

    A Robust Algorithm for Emoji Detection in Smartphone Screenshot Images

    The increasing use of smartphones and social media apps for communication results in a massive number of screenshot images. These images enrich the written language through text and emojis. Several studies in the image analysis field have considered text in this regard; however, they ignored the use of emojis. In this study, a robust two-stage algorithm for detecting emojis in screenshot images is proposed. The first stage localizes the regions of candidate emojis by using the proposed RGB-channel analysis method, followed by a connected component method with a set of proposed rules. In the second, verification stage, emojis and non-emojis are classified by using the proposed features with a decision tree classifier. Experiments were conducted to evaluate each stage independently and to assess the performance of the complete algorithm on a self-collected dataset. The results showed that the proposed RGB-channel analysis method achieved better performance than the Niblack and Sauvola methods. Moreover, the proposed feature extraction method with a decision tree classifier outperformed the LBP feature extraction method combined with Bayesian network, perceptron neural network, and decision table classifiers. Overall, the proposed algorithm exhibited high efficiency in detecting emojis in screenshot images.
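The two-stage structure (candidate localization, then verification) can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the channel-spread threshold stands in for their RGB-channel analysis, the size rule stands in for their decision-tree verification, and all thresholds are invented:

```python
# Stage 1: mark colorful pixels and group them into connected components.
# Stage 2: verify each candidate component (here, a simple size rule as a
# placeholder for the paper's feature + decision-tree classifier).

def candidate_mask(rgb, spread_threshold=60):
    """Mark pixels whose max-min channel spread suggests colorful content."""
    return [[(max(p) - min(p)) > spread_threshold for p in row] for row in rgb]

def connected_components(mask):
    """4-connected component labeling via iterative flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

def verify(comp, min_pixels=4):
    """Placeholder for the decision-tree verification stage."""
    return len(comp) >= min_pixels
```

On a gray image with one colorful 2x2 patch, stage 1 yields a single candidate component, which the stage-2 rule accepts.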

    On-Device Information Extraction from Screenshots in the Form of Tags

    We propose a method to make mobile screenshots easily searchable. In this paper, we present a workflow in which we: 1) preprocessed a collection of screenshots, 2) identified the script present in each image, 3) extracted unstructured text from the images, 4) identified the language of the extracted text, 5) extracted keywords from the text, 6) identified tags based on image features, 7) expanded the tag set by identifying related keywords, and 8) ranked and indexed the image tags with their relevant images to make them searchable on device. The pipeline supports multiple languages and executes on-device, which addresses privacy concerns. We developed novel architectures for the components in the pipeline and optimized performance and memory for on-device computation. Our experiments show that the solution can reduce overall user effort and improve the end-user experience while searching.
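The later steps of the workflow (keyword extraction, tagging, indexing for search) can be illustrated with a toy stand-in. The OCR text is assumed as input, and the stopword list, frequency ranking, and inverted index are illustrative simplifications of the paper's on-device components:

```python
# Toy versions of steps 5 and 8: frequency-based keyword extraction from
# OCR text, and an inverted index mapping tags to screenshot ids.
from collections import Counter, defaultdict

STOPWORDS = {"the", "a", "an", "to", "of", "and", "in", "is", "about"}

def keywords(text, k=3):
    """Rank non-stopword tokens by frequency."""
    tokens = [t.lower().strip(".,!?") for t in text.split()]
    counts = Counter(t for t in tokens if t and t not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]

class ScreenshotIndex:
    """Inverted index from tags to screenshot ids, for on-device search."""
    def __init__(self):
        self.index = defaultdict(set)

    def add(self, shot_id, ocr_text, extra_tags=()):
        for tag in keywords(ocr_text) + list(extra_tags):
            self.index[tag].add(shot_id)

    def search(self, query):
        return sorted(self.index.get(query.lower(), set()))
```

Adding a screenshot with OCR text and an image-derived tag makes it retrievable by either kind of tag; unseen queries return an empty result.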

    Computer screenshot classification for boosting ADHD productivity in a VR environment

    Individuals with ADHD face significant challenges in their daily lives due to difficulties with attention, hyperactivity, and impulsivity. These challenges are especially pronounced in the workplace or educational settings, where the ability to sustain attention and manage time effectively is crucial for success. Virtual reality (VR) software has emerged as a promising tool for improving productivity in individuals with ADHD. However, the effectiveness of such software depends on the identification of potential distractions and timely intervention. The proposed computer screenshot classification approach addresses this need by providing a means for identifying and analyzing potential distractions within VR software. By integrating Convolutional Neural Networks (CNNs), Optical Character Recognition (OCR), and Natural Language Processing (NLP), the proposed approach can accurately classify screenshots and extract features, facilitating the identification of distractions and enabling timely intervention to minimize their impact on productivity. The implications of this research are significant, as ADHD affects a substantial portion of the population and has a significant impact on productivity and quality of life. By providing a novel approach for studying, detecting, and enhancing productivity, this research has the potential to improve outcomes for individuals with ADHD and increase the efficiency and effectiveness of workplaces and educational settings. Moreover, the proposed approach holds promise for wider applicability to other productivity studies involving computer users, where the classification of screenshots and feature extraction play a crucial role in discerning behavioral patterns.
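A minimal sketch of how the CNN and OCR/NLP outputs might be fused into a distraction decision. The model predictions are assumed to be given as inputs, and the category set, keyword list, and threshold are invented for illustration; the paper's actual classifier and features are not shown here:

```python
# Fuse an image-classifier label with OCR'd text to flag distractions:
# a screenshot is flagged when its class is non-work and its text
# contains distraction-associated keywords.

WORK_CATEGORIES = {"ide", "document", "spreadsheet"}
DISTRACTION_HINTS = {"trending", "like", "subscribe", "sale"}

def is_distraction(cnn_label, ocr_text, threshold=1):
    """Return True when the screenshot likely shows a distraction."""
    if cnn_label.lower() in WORK_CATEGORIES:
        return False
    hits = sum(w in DISTRACTION_HINTS for w in ocr_text.lower().split())
    return hits >= threshold
```

A simple rule combiner like this makes the intervention trigger explainable: either the screen class or specific on-screen text justifies the flag.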

    UbiqLog: a generic mobile phone based life-log framework

    Smartphones are conquering the mobile phone market; they are not just phones but also media players, gaming consoles, personal calendars, storage devices, and more. They are portable computers with fewer computing capabilities than personal computers; however, unlike personal computers, users can carry their smartphones with them at all times. The ubiquity of mobile phones and their computing capabilities provide an opportunity to use them as life-logging devices. Life-logs (personal e-memories) are used to record users' daily life events and assist them in memory augmentation. In a more technical sense, life-logs sense and store users' contextual information from their environment through sensors, which are core components of life-logs. Spatio-temporal aggregation of sensor information can be mapped to users' life events. We propose UbiqLog, a lightweight, configurable and extendable life-log framework that uses the mobile phone as a life-logging device. The proposed framework extends previous research in this field, which investigated mobile phones as life-log tools through continuous sensing. Its openness in terms of sensor configuration allows developers to create flexible, multipurpose life-log tools. In addition, the framework contains a data model and an architecture that can serve as a reference model for further life-log development, including extension to other devices such as e-book readers and TVs.
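A sensor-agnostic record type is the kind of data model such a framework needs: each entry carries the sensor name, a sensor-specific payload, and a timestamp so records can be aggregated spatio-temporally. The field names below are assumptions for illustration, not UbiqLog's actual schema:

```python
# Minimal generic life-log record: any sensor writes the same envelope
# (sensor name, free-form payload, UTC timestamp), serialized as JSON.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

def _now_iso():
    return datetime.now(timezone.utc).isoformat()

@dataclass
class LogRecord:
    sensor: str        # e.g. "location", "app_usage", "call"
    payload: dict      # sensor-specific key/value data
    timestamp: str = field(default_factory=_now_iso)

    def to_json(self):
        return json.dumps(asdict(self))
```

Because the payload is free-form, adding a new sensor (or a new device type) only requires agreeing on its payload keys, not changing the envelope.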

    GLOBEM Dataset: Multi-Year Datasets for Longitudinal Human Behavior Modeling Generalization

    Recent research has demonstrated the capability of behavior signals captured by smartphones and wearables for longitudinal behavior modeling. However, there is a lack of a comprehensive public dataset that serves as an open testbed for fair comparison among algorithms. Moreover, prior studies mainly evaluate algorithms using data from a single population within a short period, without measuring the cross-dataset generalizability of these algorithms. We present the first multi-year passive sensing datasets, containing over 700 user-years and 497 unique users' data collected from mobile and wearable sensors, together with a wide range of well-being metrics. Our datasets can support multiple cross-dataset evaluations of behavior modeling algorithms' generalizability across different users and years. As a starting point, we provide the benchmark results of 18 algorithms on the task of depression detection. Our results indicate that both prior depression detection algorithms and domain generalization techniques show potential but need further research to achieve adequate cross-dataset generalizability. We envision our multi-year datasets can support the ML community in developing generalizable longitudinal behavior modeling algorithms. (Comment: Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track)
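The cross-dataset evaluation the abstract describes is, in protocol terms, a leave-one-dataset-out loop. A minimal sketch of that protocol, where `train` and `evaluate` are placeholder callables rather than GLOBEM's actual benchmark API:

```python
# Leave-one-dataset-out evaluation: for each dataset (e.g. one study
# year), train on all the others and test on the held-out one, so the
# score measures cross-dataset generalizability rather than in-sample fit.

def leave_one_dataset_out(datasets, train, evaluate):
    """datasets: name -> data; returns name -> held-out score."""
    results = {}
    for held_out, test_data in datasets.items():
        train_data = [d for name, d in datasets.items() if name != held_out]
        model = train(train_data)
        results[held_out] = evaluate(model, test_data)
    return results
```

The same loop works across users instead of years by keying `datasets` on user cohorts.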

    LifeLogging: personal big data

    We have recently observed a convergence of technologies fostering the emergence of lifelogging as a mainstream activity. Computer storage has become significantly cheaper, and advancements in sensing technology allow for the efficient sensing of personal activities, locations and the environment. This is best seen in the growing popularity of the quantified-self movement, in which life activities are tracked using wearable sensors in the hope of better understanding human performance in a variety of tasks. This review aims to provide a comprehensive summary of lifelogging, covering its research history, current technologies, and applications. Thus far, most lifelogging research has focused predominantly on visual lifelogging in order to capture details of life activities, hence we maintain this focus in this review. However, we also reflect on the challenges lifelogging poses to an information retrieval scientist. This review is a suitable reference for those seeking an information retrieval scientist’s perspective on lifelogging and the quantified self.

    Distributed, Low-Cost, Non-Expert Fine Dust Sensing with Smartphones

    This dissertation addresses the question of how particulate matter can be measured at high temporal and spatial resolution with low-cost sensors. To this end, it presents a new sensor system based on low-cost off-the-shelf sensors and smartphones, develops corresponding robust signal-processing algorithms, and presents findings on interaction design for measurements carried out by laypeople. Atmospheric aerosol particles are a serious problem for human health on a global scale, manifesting in respiratory and cardiovascular disease and shortening life expectancy. Until now, air quality has been assessed solely from data collected at relatively few fixed measuring stations and extrapolated to high spatial resolution using models, so its representativeness for the population-wide exposure remains unclear. Such spatial mappings are impossible to determine with today's static measurement networks. In the health-related assessment of pollutants, the trend is therefore strongly toward spatially differentiated measurements. A promising approach for achieving high spatial and temporal coverage is participatory sensing: distributed measurement by end users with the help of their personal devices. Air-quality measurement in particular raises a number of challenges, from new sensors that are low-cost and portable, through robust algorithms for signal evaluation and calibration, to applications that support laypeople in carrying out measurements correctly while protecting their privacy. 
This work focuses on the application scenario of participatory environmental sensing, in which smartphone-based sensors are used to measure the environment, typically by laypeople in a relatively uncontrolled manner. The main contributions are: 1. Systems for measuring particulate matter with smartphones (low-cost sensors and new hardware): Building on earlier research into particulate measurement with low-cost off-the-shelf sensors, a sensor concept was developed in which the measurement is performed with a passive attachment mounted on a smartphone camera. Sensor performance was assessed partly through laboratory measurements with artificially generated dust and partly through field evaluations co-located with official state measuring stations. 2. Algorithms for signal processing and evaluation: For the new sensor designs, combinations of well-known OpenCV image-processing algorithms (background subtraction, contour detection, etc.) are used for image analysis. In contrast to evaluating aggregate light-scattering signals, the resulting algorithm counts particles directly from their individual light traces. A second novel algorithm exploits the fact that such processes exhibit signal-dependent noise whose ratio to the signal mean is known. This makes it possible to analyze signals affected by unknown systematic errors on the basis of their noise and to reconstruct the "true" signal. 3. Algorithms for distributed, privacy-preserving calibration: One challenge of participatory environmental sensing is the recurring need for sensor calibration, owing both to the instability of low-cost air-quality sensors in particular and to the fact that end users usually lack the means to calibrate. 
Existing approaches to so-called cross-calibration of sensors co-located with a reference station or with other sensors were applied to data from low-cost particulate sensors and extended with mechanisms that allow sensors to be calibrated against one another without disclosing private information (identity, location). 4. Human-computer interaction design guidelines for participatory sensing: On the basis of several small exploratory user studies, a taxonomy of the errors laypeople make when measuring environmental information with smartphones was derived empirically. From this, possible countermeasures were collected and classified. In a large summative study with many participants, the effect of several of these measures was evaluated by comparing four variants of an app for participatory measurement of ambient noise. The resulting findings form the basis for guidelines for designing efficient user interfaces for participatory sensing on mobile devices. 5. Design patterns for participatory sensing games on mobile devices (gamification): A further approach investigated concerns gamifying the measurement process in order to minimize user errors through suitable game mechanisms. The measurement process is embedded, for example, in a smartphone minigame that performs the measurement in the background when the context is suitable. To develop this concept, dubbed "sensified gaming", core tasks in participatory sensing were identified and matched against game design patterns collected from the literature.
    • 
