
    Eye Tracking Methods for Analysis of Visuo-Cognitive Behavior in Medical Imaging

    Predictive modeling of human visual search behavior and the underlying metacognitive processes is now possible thanks to significant advances in bio-sensing technology and machine intelligence. Eye-tracking bio-sensors, for example, can measure psycho-physiological responses through change events in the configuration of the human eye. These events include positional changes, such as visual fixations, saccadic movements, and scanpaths, and non-positional changes, such as blinks and pupil dilation and constriction. Using data from eye-tracking sensors, we can model human perception, cognitive processes, and responses to external stimuli. In this study, we investigated the visuo-cognitive behavior of clinicians during the diagnostic decision process for breast cancer screening under clinically equivalent experimental conditions involving multiple monitors and breast projection views. Using a head-mounted eye-tracking device and a customized user interface, we recorded eye change events and diagnostic decisions from 10 clinicians (three breast-imaging radiologists and seven radiology residents) for a corpus of 100 screening mammograms (comprising cases of varied pathology and breast parenchyma density). We proposed novel features and gaze analysis techniques that encode discriminative pattern changes in positional and non-positional measures of eye events. These changes were shown to correlate with individual image readers’ identity and experience level, mammographic case pathology and breast parenchyma density, and diagnostic decision. Furthermore, our results suggest that a combination of machine intelligence and bio-sensing modalities can provide adequate predictive capability for characterizing a mammographic case and an image reader’s diagnostic performance. Lastly, features characterizing eye movements can be used for biometric identification. These findings have implications for real-time performance monitoring and for personalized, intelligent training and evaluation systems in screening mammography. Moreover, the developed algorithms are applicable in other domains involving high-risk visual tasks.
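    As a concrete illustration of the kind of gaze analysis described above, the sketch below detects fixations with a standard dispersion-threshold (I-DT) rule and summarizes one reading into a small feature vector spanning positional and non-positional measures. The thresholds, feature set, and function names are illustrative assumptions, not the study's exact pipeline.

        # Minimal sketch: I-DT fixation detection plus summary gaze features.
        # t, x, y, pupil are 1-D numpy arrays (time in s, gaze in degrees,
        # pupil diameter); thresholds and features are assumptions.
        import numpy as np

        def detect_fixations(t, x, y, max_dispersion=1.0, min_duration=0.1):
            """Group gaze samples into fixations: a span of samples counts as
            a fixation if its spatial dispersion (x-range + y-range) stays
            under max_dispersion and it lasts at least min_duration seconds.
            Returns (start_time, end_time, centroid_x, centroid_y) tuples."""
            fixations, start = [], 0
            while start < len(t):
                end = start
                while end + 1 < len(t):
                    xs, ys = x[start:end + 2], y[start:end + 2]
                    if (xs.max() - xs.min()) + (ys.max() - ys.min()) > max_dispersion:
                        break
                    end += 1
                if t[end] - t[start] >= min_duration:
                    fixations.append((t[start], t[end],
                                      x[start:end + 1].mean(),
                                      y[start:end + 1].mean()))
                start = end + 1
            return fixations

        def gaze_features(t, x, y, pupil):
            """Positional and non-positional summary features for one reading."""
            fx = detect_fixations(t, x, y)
            if not fx:
                return np.zeros(6)
            durations = np.array([b - a for a, b, _, _ in fx])
            centroids = np.array([[cx, cy] for _, _, cx, cy in fx])
            amplitudes = (np.linalg.norm(np.diff(centroids, axis=0), axis=1)
                          if len(fx) > 1 else np.zeros(1))  # saccade sizes
            return np.array([len(fx), durations.mean(), durations.std(),
                             amplitudes.mean(), pupil.mean(), pupil.std()])

    Feature vectors of this kind, computed per reader and per case, could then be fed to any standard classifier to predict reader experience level, case pathology, or reader identity.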

    Recognition of Everyday Activities through Wearable Sensors and Machine Learning

    Over the past several years, the use of wearable devices has increased dramatically, primarily for fitness monitoring, largely due to their improved sensor reliability, expanded functionality, smaller size, greater ease of use, and lower cost. These devices have helped many people of all ages live healthier lives and achieve their personal fitness goals, as they are able to see quantifiable and graphical results of their efforts every step of the way (i.e., in real time). Yet, while these device systems work well within the fitness domain, they have yet to achieve a convincing level of functionality in the larger domain of healthcare. As an example, according to the Alzheimer’s Association, there are currently approximately 5.5 million Americans with Alzheimer’s disease, and approximately 5.3 million of them are over the age of 65, comprising 10% of this age group in the U.S. The economic toll of this disease is estimated to be around $259 billion. By 2050, the number of Americans with Alzheimer’s disease is predicted to reach around 16 million, with an economic toll of over $1 trillion. There are other prevalent and chronic health conditions that are critically important to monitor, such as diabetes, complications from obesity, congestive heart failure, and chronic obstructive pulmonary disease (COPD). The goal of this research is to explore and develop accurate and quantifiable sensing and machine learning techniques for eventual real-time health monitoring by wearable device systems. To that end, a two-tier recognition system is presented that is designed to identify health activities in a naturalistic setting based on accelerometer data of common activities. In Tier I, a traditional activity recognition approach is employed to classify short windows of data, while in Tier II these classified windows are grouped to identify instances of a specific activity. The everyday activities explored in this research include brushing one’s teeth, combing one’s hair, scratching one’s chin, washing one’s hands, taking medication, and drinking. Results show that an F-measure of 0.83 is achievable when distinguishing these activities from one another, and an F-measure of 0.82 is achievable when identifying instances of tooth brushing over the course of a day.
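    To make the two-tier idea concrete, the sketch below classifies short, fixed-length accelerometer windows (Tier I) and then groups consecutive window labels into activity instances (Tier II). The window features, classifier, and run-length rule are illustrative assumptions, not the exact configuration used in this work.

        # Minimal sketch of a two-tier activity recognizer. The feature set,
        # classifier, and min_run threshold are illustrative choices.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def window_features(w):
            """Per-axis statistics for one window of shape (n_samples, 3)."""
            return np.concatenate([w.mean(axis=0), w.std(axis=0),
                                   np.abs(np.diff(w, axis=0)).mean(axis=0)])

        def tier1_train(windows, labels):
            """Tier I: learn to label short, fixed-length windows of raw data."""
            X = np.stack([window_features(w) for w in windows])
            return RandomForestClassifier(n_estimators=100).fit(X, labels)

        def tier2_instances(window_labels, min_run=3):
            """Tier II: group consecutive identical window labels into activity
            instances; runs shorter than min_run windows are treated as
            momentary misclassifications and dropped."""
            instances, start = [], 0
            for i in range(1, len(window_labels) + 1):
                if i == len(window_labels) or window_labels[i] != window_labels[start]:
                    if i - start >= min_run:
                        instances.append((window_labels[start], start, i))
                    start = i
            return instances

    The two tiers separate concerns: Tier I only has to be right about short windows, while Tier II absorbs isolated window-level errors when deciding where an activity instance begins and ends.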

    Recognizing Seatbelt-Fastening Behavior with Wearable Technology and Machine Learning

    In many fatal automobile accidents, the victims are found not to have been wearing a seatbelt, despite the numerous safety sensors and warning indicators embedded within modern vehicles. Clearly, there is still room for improvement in seatbelt adoption. This work aims to lay the foundation for a novel method of encouraging seatbelt use: the utilization of wearable technology. Wearable technology has enabled considerable advances in health and wellness. Specifically, fitness trackers have achieved widespread popularity for their ability to quantify and analyze patterns of physical activity. Thanks to wearable technology’s ease of use and convenient integration with mobile phones, users have been quick to adopt it. Of course, the practicality of wearable technology depends on activity recognition: the models and algorithms used to identify a pattern of sensor data as a particular physical activity (e.g., running, sitting, sleeping). Activity recognition is the basis of this research. To apply wearable trackers to the problem of seatbelt use, there must be a system for identifying whether a user has buckled their seatbelt; building such a system was our primary goal. To develop it, we collected motion data from 20 different users. From these data, we identified trends that inspired the development of novel features. With these features, machine learning was used to train a model that identifies the motion of fastening a seatbelt in real time. This model serves as the basis for future work on systems that may provide more intelligent feedback, as well as methods for intervening in dangerous user behavior.
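    As a sketch of how such a recognizer might run in real time, the class below buffers streaming wrist-accelerometer samples and classifies the current window on each new sample. The window size, feature set, and classifier interface are illustrative assumptions, not the authors’ implementation.

        # Minimal sketch: streaming detection of a seatbelt-fastening motion.
        # `model` is any fitted binary classifier with a predict() method
        # (e.g. from scikit-learn); window size and features are assumptions.
        from collections import deque
        import numpy as np

        class SeatbeltDetector:
            def __init__(self, model, window_size=100):
                self.model = model
                self.buffer = deque(maxlen=window_size)  # ring buffer of samples

            def _features(self):
                w = np.asarray(self.buffer)      # shape (window_size, 3)
                mag = np.linalg.norm(w, axis=1)  # overall motion intensity
                return np.concatenate([w.mean(axis=0), w.std(axis=0),
                                       [mag.max(), mag.mean()]]).reshape(1, -1)

            def update(self, sample):
                """Feed one (x, y, z) sample; returns True when the current
                window is classified as a seatbelt-fastening motion."""
                self.buffer.append(sample)
                if len(self.buffer) < self.buffer.maxlen:
                    return False
                return bool(self.model.predict(self._features())[0])

    A deployment could additionally debounce consecutive positive windows before acting, so that a single positive prediction does not by itself trigger feedback.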