9 research outputs found

    Recognizing Hospital Care Activities with a Coat Pocket Worn Smartphone

    In this work, we show how a smartphone worn unobtrusively in a nurse's coat pocket can be used to document the patient care activities performed during a regular morning routine. The main contribution is to show how, taking certain domain-specific boundary conditions into account, a single sensor node worn in a location that is unfavorable from a sensing point of view can still recognize complex, sometimes subtle activities. We evaluate our approach on a large real-life dataset from day-to-day hospital operation. In total, 4 runs of patient care per day were collected for 14 days at a geriatric ward and annotated in high detail by following the performing nurses for the entire duration. This amounts to over 800 hours of sensor data, including acceleration, gyroscope, compass, Wi-Fi and sound, annotated with ground truth at less than 1-minute resolution.

    Remember and Transfer what you have Learned - Recognizing Composite Activities based on Activity Spotting

    Activity recognition approaches have been shown to achieve good performance for a wide variety of applications. Most approaches rely on machine learning techniques requiring significant amounts of training data for each application. Consequently, they have to be retrained for each new application, limiting the real-world applicability of today's activity recognition methods. This paper explores the possibility of transferring learned knowledge from one application to others, thereby significantly reducing the required training data for new applications. To achieve this transferability, the paper proposes a new layered activity recognition approach that lends itself to transferring knowledge across applications. Besides allowing knowledge transfer across applications, this layered approach also shows improved recognition performance, both for composite activities and for activity events.
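A layered approach of this kind can be sketched very simply: a lower layer spots short activity events, and an upper layer recognizes composite activities from the histogram of spotted events, so only the compact upper layer needs retraining for a new application. The event names, composite prototypes, and overlap score below are invented for illustration and are not taken from the paper.

```python
# Toy two-layer recognizer: composite activities are matched against the
# histogram of spotted low-level events. All names/counts are hypothetical.
from collections import Counter

EVENTS = ["open_door", "pour", "stir", "wipe"]

# Upper layer: composite activities described as typical event distributions.
COMPOSITES = {
    "prepare_drink": Counter({"pour": 3, "stir": 2, "open_door": 1}),
    "clean_table":   Counter({"wipe": 4, "open_door": 1}),
}

def recognize_composite(spotted_events):
    """Pick the composite whose event histogram best overlaps the spotted one."""
    hist = Counter(spotted_events)
    def overlap(proto):
        return sum(min(hist[e], proto[e]) for e in EVENTS)
    return max(COMPOSITES, key=lambda name: overlap(COMPOSITES[name]))

print(recognize_composite(["open_door", "pour", "stir", "pour"]))
```

Because the lower layer's event spotters are application-agnostic, only the small prototype table has to change when moving to a new application, which is the intuition behind the transferability claim.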

    Evolutionary Computation Techniques Applied to Classification from Physical Activity Monitors

    Currently, several factors are making the field of human activity recognition increasingly important, such as the proliferation of wearable devices that can monitor physical activity and the global population's tendency towards an ever more sedentary lifestyle. This sedentary lifestyle translates into insufficient physical activity and is considered one of the greatest health risks, ranking among the leading risk factors for mortality worldwide according to the WHO [11]. Within the field of health and wellness, advances in sensor miniaturization, which even allow sensors to be incorporated into people's clothing, make automatic activity recognition a solution to problems of very diverse kinds, such as disease prevention, active aging and remote patient monitoring, as well as a wide range of applications in sports. 
    For that reason, wearable sensors are extremely useful monitoring devices in other research areas as well, bringing human activity recognition to ubiquitous computing, entertainment, the logging of daily personal activities, and the monitoring of sports and professional performance. With the main motivation of exploring new research directions through an approach different from previous work, this project proposes an automatic activity recognition system that integrates an evolutionary algorithm for the activity classification task and a particle swarm for a clustering step that improves the machine learning. The system has been evaluated with leave-one-subject-out (LOSO) cross-validation, to assess its performance when recognition must be subject-independent, obtaining an accuracy of 52.37%. The system has also been evaluated with standard 10-fold cross-validation within each subject, to analyze its capacity in subject-dependent classification, reaching an accuracy of 98.07%. This significantly better result shows that the system lends itself to the personalization of activity recognition. In addition, the system was evaluated with standard 10-fold cross-validation over the set of all subjects, obtaining an accuracy of 70.23%, which supports the conclusion above that the system works better when activity recognition is personalized.
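The gap between the subject-independent and subject-dependent results hinges on the evaluation protocol. A minimal sketch of leave-one-subject-out cross-validation, using a trivial nearest-centroid classifier and synthetic data as placeholders (not the evolutionary/PSO system described in the abstract):

```python
# Leave-one-subject-out (LOSO) cross-validation: each subject is held out in
# turn, the model is trained on the remaining subjects, and accuracies are
# averaged. Classifier and data are illustrative stand-ins.
import numpy as np

def loso_accuracy(X, y, subjects):
    """Average held-out-subject accuracy of a nearest-centroid classifier."""
    accs = []
    for s in np.unique(subjects):
        test = subjects == s
        train = ~test
        # "Training": one mean feature vector (centroid) per class.
        classes = np.unique(y[train])
        centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in classes])
        # Predict the class whose centroid is closest to each test sample.
        dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[np.argmin(dists, axis=1)]
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

# Synthetic data: 3 subjects, 2 activity classes with separable features.
rng = np.random.default_rng(0)
subjects = np.repeat([0, 1, 2], 20)
y = np.tile(np.repeat([0, 1], 10), 3)
X = rng.normal(0, 0.3, size=(60, 2)) + y[:, None] * 2.0
print(loso_accuracy(X, y, subjects))
```

Because no data from the held-out subject ever reaches training, LOSO scores are typically much lower than within-subject 10-fold scores, which is exactly the pattern the abstract reports.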

    Rhythm Modelling of Long-Term Activity Data

    Long-term monitoring for activity recognition opens up new possibilities for deriving characteristics from the data, such as daily activity rhythms, quality measures for the activity performed, or similarities and differences in daily routines. This thesis investigates the detection of activities with wearable sensors and addresses two major challenges in particular: the modelling of a person's behaviour into rhythmic patterns and the detection of high-level activities, e.g., having lunch or sleeping. To meet these challenges, this thesis makes the following contributions: First, we study different platforms that are suitable for long-term data recording: a wrist-worn sensor and mobile phones. The latter have shown different carrying behaviours across users, which has to be considered in ubiquitous systems for accurately recognizing the user's context. We evaluate our findings in a study with a wrist-worn accelerometer by correlating its data with the inertial data of a smartphone. Second, we investigate datasets that exhibit rhythmic patterns to be used for recognizing high-level activities. Such statistical information, obtained over a population, is collected with time use surveys, which describe how often certain activities are performed. From such datasets we extract features like time and location to describe which activities are detectable by making use of prior information, showing also the benefits and limits of such data. Third, in order to improve the recognition rates of high-level activities from wearable sensor data alone, we propose the use of the aforementioned prior information from time use data. In our approach we investigate the results of a common classifier for several high-level activities, after which we compare them to the outcome of a maximum-likelihood estimation on the time use survey data. In a last step, we show how these two classification approaches are fused to raise the recognition rates. 
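The fusion of a sensor-based classifier with a time-use prior can be sketched as a simple Bayesian-style reweighting. The prior table and the scores below are invented for illustration; they are not the thesis's survey data or its actual fusion rule.

```python
# Hedged sketch: the classifier's class probabilities are reweighted by how
# probable each high-level activity is at the current hour of day, then
# renormalized. All numbers are hypothetical.
import numpy as np

ACTIVITIES = ["sleeping", "having_lunch", "working"]

# P(activity | hour) from a hypothetical time-use survey; each row sums to 1.
time_use_prior = {
    3:  np.array([0.97, 0.01, 0.02]),
    13: np.array([0.05, 0.70, 0.25]),
}

def fuse(sensor_probs, hour):
    """Multiply the sensor posterior by the time prior and renormalize."""
    p = sensor_probs * time_use_prior[hour]
    return p / p.sum()

# Ambiguous sensor output at 13:00: stationary, could be sleep or lunch.
sensor_probs = np.array([0.45, 0.40, 0.15])
fused = fuse(sensor_probs, hour=13)
print(ACTIVITIES[int(np.argmax(fused))])
```

The prior breaks ties that the wearable sensor alone cannot resolve: at 13:00 the ambiguous stationary pattern is pushed towards lunch rather than sleep.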
    In a fourth contribution, we introduce a recording platform to capture sleep and sleep behaviour in the user's everyday environment, enabling the unobtrusive monitoring of patterns over several weeks. We use a wrist-worn sensor to record inertial data from which we extract sleep segments. For this purpose, we present three different sleep detection approaches: a Gaussian-based, a generative-model-based and a stationary-segments-based algorithm are evaluated and found to exhibit different accuracies for detecting sleep. The latter algorithm is pitted against two clinically evaluated sleep detection approaches, indicating that we are able to reach an optimal trade-off between sleep and wake segments, while the two common algorithms tend to overestimate sleep. Further, we investigate the rhythmic patterns within sleep: we classify sleep postures and detect muscle contractions with high confidence, enabling physicians to efficiently browse through the data.
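A minimal sketch of a stationary-segments sleep detector in the spirit described above: windows of wrist acceleration with low variance are marked as candidates, and long runs of such windows become sleep segments. The thresholds, window sizes, and the synthetic signal are illustrative assumptions, not the thesis's parameters.

```python
# Stationary-segments sleep detection sketch: low-variance windows that form
# a sufficiently long run are reported as a sleep segment.
import numpy as np

def sleep_segments(accel_mag, fs=1.0, win_s=60, var_thresh=0.02, min_len_wins=30):
    """Return (start, end) sample indices of detected sleep segments."""
    win = int(win_s * fs)
    n_wins = len(accel_mag) // win
    stationary = np.array([
        accel_mag[i * win:(i + 1) * win].var() < var_thresh
        for i in range(n_wins)
    ])
    segments, run_start = [], None
    for i, flag in enumerate(stationary):
        if flag and run_start is None:
            run_start = i
        elif not flag and run_start is not None:
            if i - run_start >= min_len_wins:
                segments.append((run_start * win, i * win))
            run_start = None
    if run_start is not None and n_wins - run_start >= min_len_wins:
        segments.append((run_start * win, n_wins * win))
    return segments

# Synthetic half-day at 1 Hz: 2 h active, 8 h still (asleep), 2 h active.
rng = np.random.default_rng(1)
sig = np.concatenate([
    rng.normal(1.0, 0.5, 2 * 3600),   # active
    rng.normal(1.0, 0.01, 8 * 3600),  # still / asleep
    rng.normal(1.0, 0.5, 2 * 3600),   # active
])
print(sleep_segments(sig))
```

The abstract's observation that common actigraphy algorithms overestimate sleep corresponds here to the threshold choice: a looser `var_thresh` sweeps restful wake into the sleep segments.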

    Capture, Recognition and Qualitative Analysis of Human Motion

    The visions of the Internet of Things and of the seamless embedding of the virtual world into daily human life have long since become reality, thanks to ubiquitous networking, stationary and mobile computers, and miniaturized sensors. Combined with data mining algorithms and concepts from artificial intelligence, this infrastructure enables context-aware services and interconnected everyday objects that create immense added value in private, commercial and industrial settings. Within this scenario, much-noticed fields of research are the extraction of human context and the recognition of human activity using mobile sensors. While the purely quantitative recognition of human activity is a well-studied area, with many methods for predicting and recognizing movement events from motion or depth information and visual sensors, concepts for fine-grained, automated analysis with a qualitative focus are still scarce. Typical application areas for such concepts are, for example, the detection of emergency situations in medical environments or the recognition of misalignments and anomalies in physical human activity. 
    To address such questions in the capture, recognition and qualitative analysis of human movement activity, this work first specifies a holistic, distributed sensor system that analyzes human movement activity on the basis of motion information. Subsequently, it presents an approach for the automated, qualitative analysis of individual, recurrent human movement events, using a new adaptive segmentation method and a concept for formalizing and discretizing subjective quality characteristics in human movement sequences. The work then focuses on the qualitative analysis of non-recurrent, unpredictable human movement activity. For this purpose, new segmentation concepts and a new approach for the generic projection of physical human performance onto discrete feature vectors are introduced. In summary, this thesis presents a comprehensive package for the generic analysis of human movement activity, enabling efficient quantitative and qualitative analyses of both recurrent and non-recurrent human movement events.
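The segmentation of a motion stream into individual movement events can be illustrated with a simple energy-based scheme: a window belongs to an event when its energy exceeds a baseline estimated from the data itself. The window size, the threshold factor, and the synthetic signal are assumptions for illustration; the thesis's adaptive segmentation method is more elaborate.

```python
# Illustrative event segmentation: per-window signal spread is compared
# against an adaptively estimated baseline (the median over all windows).
import numpy as np

def segment_events(sig, win=50, k=3.0):
    """Return (start, end) sample ranges of windows whose energy exceeds k * baseline."""
    n = len(sig) // win
    energy = np.array([sig[i * win:(i + 1) * win].std() for i in range(n)])
    baseline = np.median(energy)          # adaptive: estimated from the signal itself
    active = energy > k * baseline
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            events.append((start * win, i * win))
            start = None
    if start is not None:
        events.append((start * win, n * win))
    return events

# Mostly quiet signal with two bursts of movement.
rng = np.random.default_rng(2)
sig = rng.normal(0, 0.05, 5000)
sig[1000:1500] += rng.normal(0, 1.0, 500)
sig[3000:3600] += rng.normal(0, 1.0, 600)
print(segment_events(sig))
```

Using the median as the baseline makes the detector adapt to each recording's own noise floor, which is the sense in which such a segmentation is "adaptive" rather than relying on a fixed global threshold.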

    Combining visual recognition and computational linguistics: linguistic knowledge for visual recognition and natural language descriptions of visual content

    Extensive efforts are being made to improve visual recognition and the semantic understanding of language. However, surprisingly little has been done to exploit the mutual benefits of combining the two fields. In this thesis we show how these different fields of research can profit from each other. First, we scale recognition to 200 unseen object classes and show how to extract robust semantic relatedness from linguistic resources. Our novel approach extends zero-shot to few-shot recognition and exploits unlabeled data by adopting label propagation for transfer learning. Second, we capture the high variability but low availability of composite activity videos by extracting the essential information from text descriptions. For this we recorded and annotated a corpus for fine-grained activity recognition. We show improvements in a supervised setting, but we are also able to recognize unseen composite activities. Third, we present a corpus of videos and aligned descriptions. We use it for grounding activity descriptions and for learning how to automatically generate natural language descriptions for a video. We show that our proposed approach is also applicable to image description and that it outperforms baselines and related work. In summary, this thesis presents a novel approach to automatic video description and shows the benefits of extracting linguistic knowledge for object and activity recognition, as well as the advantage of visual recognition for understanding activity descriptions.
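Label propagation, the transfer-learning ingredient named in the abstract, spreads labels from known classes to unseen ones over a similarity graph. The toy similarity matrix below stands in for relatedness scores mined from linguistic resources; it is not the thesis's data.

```python
# Minimal label propagation sketch: iterate Y <- alpha * S @ Y + (1-alpha) * Y0
# where S is the row-normalized similarity matrix and Y0 holds the seed labels.
import numpy as np

def propagate(W, Y0, alpha=0.8, iters=50):
    """Diffuse label scores over a similarity graph W from seed matrix Y0."""
    S = W / W.sum(axis=1, keepdims=True)
    Y = Y0.copy()
    for _ in range(iters):
        Y = alpha * (S @ Y) + (1 - alpha) * Y0
    return Y

# 4 classes over 2 labels; classes 0 and 1 are seeded, 2 is semantically
# related to 0, and 3 to 1 (hypothetical relatedness scores).
W = np.array([
    [1.0, 0.1, 0.9, 0.1],
    [0.1, 1.0, 0.1, 0.9],
    [0.9, 0.1, 1.0, 0.1],
    [0.1, 0.9, 0.1, 1.0],
])
Y0 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
labels = np.argmax(propagate(W, Y0), axis=1)
print(labels)
```

The unseeded nodes inherit the label of their most related seeded neighbour, which is how semantic relatedness from text lets the recognizer cover unseen object classes without visual training examples.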