2,518 research outputs found

    Multimodal segmentation of lifelog data

    A personal lifelog of visual and audio information can be very helpful as a human memory augmentation tool. The SenseCam, a passive wearable camera, used in conjunction with an iRiver MP3 audio recorder, will capture over 20,000 images and 100 hours of audio per week. Used constantly, this quickly builds up into a substantial collection of personal data. To gain real value from this collection it is important to automatically segment the data into meaningful units or activities. This paper investigates the optimal combination of data sources for segmenting personal data into such activities. Five data sources were logged and processed to segment a collection of personal data: image processing on captured SenseCam images; audio processing on captured iRiver audio data; and processing of the temperature, white light level, and accelerometer sensors onboard the SenseCam device. The results indicate that a combination of the image, light, and accelerometer sensor data segments our collection of personal data better than a combination of all five data sources. The accelerometer sensor is good at detecting when the user moves to a new location, while the image and light sensors are good at detecting changes in wearer activity within the same location, as well as detecting when the wearer socially interacts with others.
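    The sketch below illustrates the general idea described in this abstract, not the paper's actual method: per-source change scores are computed from each sensor stream, normalised, and fused, and candidate activity boundaries are placed where the fused score spikes. The fusion weights, threshold, window length, and synthetic data are all illustrative assumptions.

import numpy as np

def change_score(signal: np.ndarray, window: int = 30) -> np.ndarray:
    """Rolling mean of absolute first differences, as a crude novelty measure."""
    diff = np.abs(np.diff(signal, axis=0))
    if diff.ndim > 1:
        diff = diff.mean(axis=-1)
    kernel = np.ones(window) / window
    return np.convolve(diff, kernel, mode="same")

def segment(image_feat, light, accel, weights=(0.4, 0.3, 0.3), thresh=1.5):
    """Return sample indices treated as candidate activity boundaries (assumed weights)."""
    scores = [change_score(s) for s in (image_feat, light, accel)]
    # z-normalise each source so no single sensor dominates the fusion
    scores = [(s - s.mean()) / (s.std() + 1e-9) for s in scores]
    fused = sum(w * s for w, s in zip(weights, scores))
    return np.where(fused > thresh)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 2000
    image_feat = rng.normal(size=(n, 8))   # e.g. colour-histogram features per image
    light = rng.normal(size=n)             # white light level sensor
    accel = rng.normal(size=(n, 3))        # tri-axial accelerometer
    light[1000:1040] += 6.0                # simulated move to a brighter location
    accel[1000:1040] *= 8.0                # with a burst of movement
    print("candidate boundaries:", segment(image_feat, light, accel)[:5])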

    Detection and classification of small impacts on vehicles based on deep learning algorithms

    Master's dissertation in Informatics Engineering. This thesis explores the detection of damage-causing impacts based on data retrieved by an accelerometer placed inside a vehicle and subsequently classified by deep learning algorithms. The real-world application of this work lies in the car-sharing market, where it provides an automated service that allows constant monitoring of the vehicle's status. The proposed solution was set out as an alternative to the machine learning algorithms currently in use. Previous research showed that deep learning algorithms achieve better performance than non-deep-learning algorithms. We use data retrieved from two types of events, normal driving and damage-causing situations, to test whether the models are capable of generalising damage events. The approach to achieving this objective consisted in exploring and testing different algorithms: Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN). Results revealed promising performance, with the MLP reaching an 82% true positive rate. Although this does not match the result obtained by the current non-deep-learning algorithm, it allows us to conclude that deep learning is a strong alternative in the long term as more data is collected.
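    As a rough illustration of the classification setup described above, and not the thesis code, the sketch below trains a small MLP on flattened accelerometer windows to separate normal driving from damage-like events. The window length, synthetic data generator, and network size are assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)

def make_windows(n, damage: bool, length=128):
    """Synthetic 3-axis accelerometer windows; damage events add a sharp spike."""
    x = rng.normal(scale=0.3, size=(n, length, 3))
    if damage:
        idx = rng.integers(0, length, size=n)
        x[np.arange(n), idx, :] += rng.normal(loc=4.0, scale=1.0, size=(n, 3))
    return x.reshape(n, -1)  # flatten each window for the MLP input layer

X = np.vstack([make_windows(500, False), make_windows(500, True)])
y = np.concatenate([np.zeros(500), np.ones(500)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"true positive rate: {tp / (tp + fn):.2f}")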

    Map matching by using inertial sensors: literature review

    This literature review aims to clarify what is known about map matching using inertial sensors and what the requirements are for map matching, inertial sensors, sensor placement, and possible complementary positioning technology. The target is to develop a wearable location system that can position itself within a complex construction environment automatically with the aid of an accurate building model. The wearable location system should work on a tablet computer running an augmented reality (AR) solution that is capable of tracking and visualizing 3D CAD models in the real environment. The wearable location system is needed to support the system in initializing the accurate camera pose calculation and in automatically finding the right location in the 3D CAD model. One type of sensor which does seem applicable to people tracking is the inertial measurement unit (IMU). The IMU sensors used in aerospace applications, based on laser gyroscopes, are large but provide very accurate position estimation with limited drift. Small and light units such as those based on Micro-Electro-Mechanical Systems (MEMS) sensors are becoming very popular, but they have a significant bias, therefore suffer from large drifts, and require a calibration method such as map matching. The system requires very little fixed infrastructure, and its monetary cost is proportional to the number of users rather than to the coverage area, as is the case for traditional absolute indoor location systems.
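    A minimal sketch of the core idea, under assumed corridor coordinates and a simple step-and-heading model rather than anything from the review: dead-reckoned positions drift because of MEMS gyroscope bias, and map matching constrains them by projecting each estimate onto the nearest edge of the building model.

import numpy as np

CORRIDORS = [((0.0, 0.0), (20.0, 0.0)),   # assumed building-model edges, in metres
             ((20.0, 0.0), (20.0, 15.0))]

def snap_to_map(p: np.ndarray) -> np.ndarray:
    """Project point p onto the closest corridor segment of the floor-plan graph."""
    best, best_d = p, np.inf
    for (ax, ay), (bx, by) in CORRIDORS:
        a, b = np.array([ax, ay]), np.array([bx, by])
        t = np.clip(np.dot(p - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
        proj = a + t * (b - a)
        d = np.linalg.norm(p - proj)
        if d < best_d:
            best, best_d = proj, d
    return best

# Step-and-heading dead reckoning with a small heading bias to mimic MEMS drift.
rng = np.random.default_rng(1)
pos, heading = np.array([0.0, 0.0]), 0.0
for step in range(30):
    heading += rng.normal(0.0, 0.02) + 0.01      # heading noise + gyro bias
    pos = pos + 0.7 * np.array([np.cos(heading), np.sin(heading)])
    corrected = snap_to_map(pos)                 # map matching limits the drift
print("raw:", pos.round(2), "matched:", corrected.round(2))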

    Evaluation of Pedometer Performance Across Multiple Gait Types Using Video for Ground Truth

    This dissertation is motivated by improving healthcare through the development of wearable sensors. This work seeks to improve the evaluation and development of pedometer algorithms, and is composed of two chapters: one describing the collection of the dataset and one describing the implementation and evaluation of three previously developed pedometer algorithms on the dataset collected. Our goal is to analyze pedometer algorithms under the more natural conditions that occur during daily living, where gaits change frequently or remain regular for only brief periods of time. We video recorded 30 participants performing 3 activities: walking around a track, walking through a building, and moving around a room. The ground truth time of each step was manually marked in the accelerometer signals through video observation. Collectively, 60,853 steps were recorded and annotated. A subclass of steps called shifts was identified: those occurring at the beginning and end of regular strides, during gait changes, and during pivots changing the direction of motion. While shifts comprised only 0.03% of steps in the regular stride activity, they comprised 10-25% of steps in the semi-regular and unstructured activities. We believe these motions should be identified separately, as they produce different accelerometer signals and likely result in different amounts of energy expenditure. This dataset will be the first to specifically allow pedometer algorithms to be evaluated on unstructured gaits that more closely model natural activities.

    In order to provide pilot evaluation data, a commercial pedometer, the Fitbit Charge 2, and three prior step detection algorithms were analyzed. The Fitbit consistently underestimated the total number of steps taken across each gait type. Because the Fitbit algorithm is proprietary, it could not be reimplemented and examined beyond a raw step count comparison. Three previously published step detection algorithms, however, were implemented and examined in detail on the dataset. The three algorithms are based on three different methods of step detection: peak detection, zero crossing (threshold based), and autocorrelation. The evaluation of these algorithms was performed across 5 dimensions, namely algorithm, parameter set, gait type, sensor position, and evaluation metric, which yielded 108 individual measures of accuracy. Accuracy across each of the 5 dimensions was examined individually in order to determine trends. In general, training parameters to this dataset caused a significant accuracy improvement. The most accurate algorithm depended on gait type, sensor position, and evaluation metric, indicating no clear "best approach" to step detection. In general, algorithms were most accurate for regular gait and least accurate for unstructured gait. Accuracy was higher for hip- and ankle-worn sensors than for wrist-worn sensors. Finally, evaluation across running count accuracy (RCA) and step detection accuracy (SDA) revealed similar trends across gait type and sensor position, but each metric indicated a different algorithm with the highest overall accuracy.

    A classifier was developed to identify gait type in an effort to use this information to improve pedometer accuracy. The classifier's features are based on the Fast Fourier Transform (FFT) applied to the accelerometer data gathered from each sensor throughout each activity.
    A peak detector was developed to identify the maximum value of the FFT, the width of the peak yielding that maximum value, and the number of peaks in each FFT. These features were then applied to a Naive Bayes classifier, which correctly identified the gait (regular, semi-regular, or unstructured) with 84% accuracy. A varying-algorithm pedometer was then developed that switched between the peak detection, threshold crossing, and autocorrelation-based algorithms depending on which algorithm performed best for the sensor location and detected gait type. This process yielded a step detection accuracy of 84%, a 3% improvement over the greatest accuracy achieved by the best-performing individual algorithm, the peak detection algorithm. It was also identified that, in order to provide quicker real-time transitions between algorithms, the data should be examined in smaller windows. Window sizes of 3, 5, 8, 10, 15, 20, and 30 seconds were tested, and the highest overall accuracy was found for a window size of 5 seconds. These smaller windows of time included behaviors which do not correspond directly with the regular, semi-regular, and unstructured gait activities. Instead, three stride types were identified: steady stride, irregular stride, and idle. These stride types were identified with 82% accuracy. This experiment showed that, at an activity level, gait detection can improve pedometer accuracy, and indicated that applying the same principles to a smaller window size could allow for more responsive real-time algorithm selection.
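    The three step-detection families compared in this abstract can be sketched as follows on an accelerometer magnitude signal. These functions are simplified stand-ins for the published algorithms, and the sampling rate, thresholds, and synthetic walking signal are assumptions.

import numpy as np
from scipy.signal import find_peaks

FS = 50  # assumed sampling rate, Hz

def peak_count(mag, height=1.2, min_gap=0.3):
    """Peak detection: count local maxima above a threshold, at least min_gap seconds apart."""
    peaks, _ = find_peaks(mag, height=height, distance=int(min_gap * FS))
    return len(peaks)

def zero_cross_count(mag, thresh=1.0):
    """Threshold crossing: count upward crossings of a fixed level."""
    above = mag > thresh
    return int(np.sum(~above[:-1] & above[1:]))

def autocorr_count(mag):
    """Autocorrelation: estimate the dominant step period, then divide the duration by it."""
    x = mag - mag.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lag_lo, lag_hi = int(0.3 * FS), int(2.0 * FS)       # plausible step periods
    period = lag_lo + int(np.argmax(ac[lag_lo:lag_hi]))
    return int(len(mag) / period)

if __name__ == "__main__":
    t = np.arange(0, 30, 1 / FS)
    mag = 1.0 + 0.5 * np.sin(2 * np.pi * 1.8 * t)       # ~1.8 steps/s synthetic walk
    mag += np.random.default_rng(3).normal(0, 0.05, t.size)
    print(peak_count(mag), zero_cross_count(mag), autocorr_count(mag))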

    Recognition of elementary arm movements using orientation of a tri-axial accelerometer located near the wrist

    In this paper we present a method for recognising three fundamental movements of the human arm (reach and retrieve, lift cup to mouth, rotation of the arm) by determining the orientation of a tri-axial accelerometer located near the wrist. Our objective is to detect the occurrence of such movements performed with the impaired arm of a stroke patient during normal daily activities as a means of assessing their rehabilitation. The method relies on accurately mapping transitions between predefined, standard orientations of the accelerometer to corresponding elementary arm movements. To evaluate the technique, kinematic data were collected from four healthy subjects and four stroke patients as they performed a number of activities involved in a representative activity of daily living, 'making-a-cup-of-tea'. Our experimental results show that the proposed method can independently recognise all three of the elementary upper limb movements investigated, with accuracies in the range 91–99% for healthy subjects and 70–85% for stroke patients.
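    A minimal sketch of the underlying idea, not the paper's actual mapping: estimate the wrist orientation from the gravity component of the tri-axial accelerometer, quantise it into a few predefined orientations, and treat transitions between orientations as candidate elementary movements. The orientation labels and angle thresholds below are assumptions.

import numpy as np

def orientation_label(sample: np.ndarray) -> str:
    """Classify a single (ax, ay, az) gravity reading, in g, into a coarse orientation."""
    ax, ay, az = sample / (np.linalg.norm(sample) + 1e-9)
    pitch = np.degrees(np.arcsin(-ax))      # assumed axis convention
    roll = np.degrees(np.arctan2(ay, az))
    if pitch > 45:
        return "forearm_raised"             # e.g. lifting a cup towards the mouth
    if abs(roll) > 60:
        return "forearm_rotated"
    return "forearm_level"

def transitions(samples: np.ndarray):
    """Return the sequence of orientation changes over a recording."""
    labels = [orientation_label(s) for s in samples]
    return [(i, a, b) for i, (a, b) in enumerate(zip(labels, labels[1:])) if a != b]

if __name__ == "__main__":
    demo = np.array([[0.0, 0.0, 1.0],    # level
                     [-0.9, 0.0, 0.4],   # raised (lift-to-mouth style)
                     [0.0, 0.9, 0.3]])   # rotated
    print(transitions(demo))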

    Dance of the bulrushes: building conversations between social creatures

    The interactive installation is in vogue. Interaction design and physical installations are accepted fixtures of modern life, and with these technology-driven installations beginning to exert influence on modes of mass communication and general expectations for user experiences, it seems appropriate to explore the variety of interactions that exist. This paper surveys a number of successful projects with a critical eye toward assessing the type of communication and/or conversation generated between interactive installations and human participants. Moreover, this exploration seeks to identify whether specific tactics and/or technologies are particularly suited to engendering layers of dialogue or 'conversations' within interactive physical computing installations. It is asserted that thoughtful designs incorporating self-organizational abilities can foster rich dialogues in which participants and the installation collaboratively generate value in the interaction. To test this hypothesis an interactive installation was designed and deployed in locations in and around London. Details of the physical objects and employed technologies are discussed, and results of the installation sessions are shown to corroborate the key tenets of this argument, in addition to highlighting other concerns that are specifically relevant to the broad topic of interaction design.

    Using infrastructure-provided context filters for efficient fine-grained activity sensing

    Ministry of Education, Singapore under its Academic Research Funding Tier