
    Transportation mode recognition fusing wearable motion, sound and vision sensors

    We present the first work investigating the potential of improving transportation mode recognition by fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, one per sensor type. We then propose two schemes that fuse the classification results of the three mono-modal classifiers. The first scheme makes an ensemble decision with fixed rules, including Sum, Product, Majority Voting, and Borda Count. The second scheme is an adaptive fuser built as another classifier (Naive Bayes, Decision Tree, Random Forest or Neural Network) that learns enhanced predictions by combining the outputs of the three mono-modal classifiers. We verify the advantage of the proposed method on the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score is remarkably improved to 94.5% and 95.5% by the two data fusion schemes, respectively. Recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation. When assessing generalization to unseen data, we show that while performance is reduced, as expected, for each individual classifier, the benefits of fusion are retained, with performance improved by 15 percentage points. Beyond the performance increase itself, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
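    The fixed-rule fusion schemes named above (Sum, Product, Majority Voting) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the class list matches the SHL activities, but the probability vectors are invented for the example.

```python
import numpy as np

# Illustrative sketch of fixed-rule fusion over three mono-modal
# classifiers. Probability values below are made up, not from the paper.
CLASSES = ["Still", "Walk", "Run", "Bike", "Bus", "Car", "Train", "Subway"]

def fuse(probs, rule="sum"):
    """Fuse per-classifier class-probability vectors into one prediction.

    probs: array of shape (n_classifiers, n_classes), rows sum to 1.
    """
    probs = np.asarray(probs, dtype=float)
    if rule == "sum":                 # Sum rule: add class posteriors
        scores = probs.sum(axis=0)
    elif rule == "product":           # Product rule: multiply posteriors
        scores = probs.prod(axis=0)
    elif rule == "vote":              # Majority voting over per-classifier argmax
        votes = probs.argmax(axis=1)
        scores = np.bincount(votes, minlength=probs.shape[1])
    else:
        raise ValueError(f"unknown rule: {rule}")
    return CLASSES[int(scores.argmax())]

# Example: motion and sound favour "Walk", vision is unsure.
motion = [0.05, 0.60, 0.10, 0.05, 0.05, 0.05, 0.05, 0.05]
sound  = [0.10, 0.50, 0.20, 0.05, 0.05, 0.04, 0.03, 0.03]
vision = [0.30, 0.25, 0.25, 0.05, 0.05, 0.04, 0.03, 0.03]

print(fuse([motion, sound, vision], "sum"))   # Walk
```

    The adaptive fuser described in the abstract would instead treat the concatenated probability vectors as features for a second-stage classifier (stacking).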

    Photonic Crystal Hydrogels: Simulation, Fabrication & Biomedical Application

    Photonic crystal (PhC) hydrogels are a unique class of material with tremendous promise as biomedical sensors. The underlying crystal structure allows simple analysis of microstructural properties by assessing the diffraction pattern generated under laser illumination. The hydrogel medium provides elasticity, regenerability, and potential functionalization. Combining these two properties, PhC hydrogels have the potential for sensing physical forces and chemical reagents on a low-cost, reusable platform. The development of biomedical sensors using this material has been limited by the lack of a method to accurately predict the generated diffraction pattern. To overcome this, a computational model was developed specifically for PhCs; assessment of its accuracy against existing analytical equations and a more generalized multiparticle scattering model from the literature, CELES, found clear alignment. Another challenge is the lack of a technique to non-destructively assess the position of each particle in the crystal structure. To overcome this, a novel fabrication approach was created using fluorescent particles, allowing subsequent confocal fluorescence microscopy and analyses to extract per-particle position information. This technique was used to directly compare experimental, computational, and analytical results within a single sample. To demonstrate a novel biomedical application of this material, ultrasound detection was chosen, since it leverages both the elastomeric structure of the PhC hydrogel and the ability to optically measure small changes in crystal microstructure. The sensitivity, frequency bandwidth, and limit of detection of fabricated PhC hydrogels were assessed using three ultrasound transducers. All transducers created a measurable optical response, with the limit of detection growing steadily with transducer frequency. These results provide evidence that the platform can be utilized across a variety of biomedical disciplines. For biomedical imaging, it can be used for all-optical, non-contact ultrasound sensing. For cell and tissue engineering, it can provide a novel approach for characterizing and monitoring contractile cells, such as cardiomyocytes. Finally, for environmental engineering, it can be used as a continuous monitoring solution for dangerous toxins in environmental waterways.
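    The simplest analytical model for the diffraction the abstract refers to is the first-order Bragg condition. The sketch below is illustrative only; the lattice spacing and effective refractive index are assumed values, not parameters from the thesis.

```python
import math

# Hedged sketch: first-order Bragg diffraction, the basic analytical
# model for photonic-crystal diffraction peaks. Parameter values are
# illustrative assumptions, not taken from the thesis.

def bragg_wavelength(d_nm, n_eff, theta_deg=90.0, order=1):
    """Diffracted wavelength (nm) from m*lambda = 2*d*n_eff*sin(theta)."""
    return 2.0 * d_nm * n_eff * math.sin(math.radians(theta_deg)) / order

# An assumed ~200 nm plane spacing in a hydrogel (n_eff ~ 1.35) at normal
# incidence diffracts green light; compressing the lattice (smaller d)
# blue-shifts the peak, which is what makes the material a strain sensor.
print(round(bragg_wavelength(200.0, 1.35), 1))  # 540.0
```

    This shift of the diffraction peak under mechanical deformation is the mechanism that lets the hydrogel transduce ultrasound pressure into an optical signal.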

    Navigation capabilities of mid-cost GNSS/INS vs. smartphone analysis and comparison in urban navigation scenarios

    Proceedings of: 17th International Conference on Information Fusion (FUSION 2014), Salamanca, Spain, 7-10 July 2014. High-accuracy navigation usually requires expensive sensors and/or their careful integration into a complex and finely tuned system. Smartphones pack a high number of sensors into a portable format, becoming a source of low-quality information with high heterogeneity and redundancy. This work compares pure GNSS/INS capabilities on both types of platform and discusses the weaknesses and opportunities offered by the smartphone. The analysis is carried out in a modular context-aware sensor fusion architecture developed in a previous work. It intends to serve as a preparation for answering bigger questions: can smartphones provide robust and high-quality navigation in vehicles? In which conditions? Where are the limits in the different navigation scenarios? This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485).
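    The core idea behind GNSS/INS integration, blending a smooth but drifting inertial estimate with noisy but unbiased absolute fixes, can be illustrated with a one-dimensional complementary filter. This is a generic teaching sketch under invented numbers, not the paper's fusion architecture.

```python
# Hedged illustration (not the paper's architecture): a 1-D complementary
# filter blending integrated inertial rates with absolute GNSS-like fixes.
# All sensor values below are made up for the example.

def complementary_filter(rates, fixes, dt, alpha=0.98, x0=0.0):
    """Blend dead-reckoned rate measurements with absolute position fixes.

    alpha close to 1 trusts the inertial prediction; lower alpha trusts
    the absolute fix more and suppresses inertial drift faster.
    """
    estimate = x0
    out = []
    for rate, fix in zip(rates, fixes):
        predicted = estimate + rate * dt                   # dead-reckoning step
        estimate = alpha * predicted + (1 - alpha) * fix   # correct toward fix
        out.append(estimate)
    return out

# True motion: 1 m/s. The rate sensor has a +0.1 m/s bias (INS-like drift);
# the fixes are unbiased absolute positions (GNSS-like).
dt = 1.0
rates = [1.1] * 10
fixes = [(i + 1) * 1.0 for i in range(10)]
est = complementary_filter(rates, fixes, dt)
```

    With the biased rate sensor alone the position error after 10 s would be 1 m; the fix correction keeps it under about 0.9 m here, and a smaller alpha would bound it tighter at the cost of more fix noise passing through.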

    A Construction Kit for Efficient Low Power Neural Network Accelerator Designs

    Implementing embedded neural network processing at the edge requires efficient hardware acceleration that couples high computational performance with low power consumption. Driven by the rapid evolution of network architectures and their algorithmic features, accelerator designs are constantly updated and improved. To evaluate and compare hardware design choices, designers can refer to a myriad of accelerator implementations in the literature. Surveys provide an overview of these works but are often limited to system-level and benchmark-specific performance metrics, making it difficult to quantitatively compare the individual effect of each utilized optimization technique. This complicates the evaluation of optimizations for new accelerator designs, slowing down research progress. This work provides a survey of neural network accelerator optimization approaches that have been used in recent works and reports their individual effects on edge processing performance. It presents the list of optimizations and their quantitative effects as a construction kit, allowing designers to assess the design choices for each building block separately. Reported optimizations range from memory savings of up to 10,000x to energy reductions of up to 33x, providing chip designers with an overview of design choices for implementing efficient low-power neural network accelerators.
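    One representative building block behind the memory savings such surveys quantify is weight quantization. The sketch below shows generic symmetric int8 post-training quantization on a synthetic tensor; it is an illustration of the technique class, not a specific optimization from the survey.

```python
import numpy as np

# Hedged sketch: symmetric linear quantization of float32 weights to
# int8, a common source of accelerator memory savings. The weight
# tensor is synthetic, not from any surveyed design.

def quantize_int8(w):
    """Quantize so that w ~= scale * q, with q stored as int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

print(w.nbytes // q.nbytes)  # 4  (4 bytes/weight -> 1 byte/weight)
err = np.abs(w - q.astype(np.float32) * scale).max()
```

    The worst-case reconstruction error is bounded by half the quantization step (scale / 2), which is why int8 inference usually costs little accuracy while cutting weight storage, and hence memory traffic and energy, by 4x.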

    Methods for monitoring the human circadian rhythm in free-living

    Our internal clock, the circadian clock, determines at which time we have our best cognitive abilities, are physically strongest, and when we are tired. Circadian clock phase is influenced primarily through exposure to light. A direct pathway from the eyes to the suprachiasmatic nucleus, where the circadian clock resides, synchronises the circadian clock to external light-dark cycles. In modern society, with the ability to work anywhere at any time and a full social agenda, many struggle to keep internal and external clocks synchronised. Living against our circadian clock makes us less efficient and poses serious health risks, especially over long periods of time, e.g. in shift workers. Assessing circadian clock phase is a cumbersome and uncomfortable task. A common method, dim light melatonin onset testing, requires a series of eight saliva samples taken at hourly intervals while the subject stays in dim light conditions from 5 hours before until 2 hours past their habitual bedtime. At the same time, sensor-rich smartphones have become widely available and wearable computing is on the rise. The hypothesis of this thesis is that smartphones and wearables can be used to record sensor data to monitor human circadian rhythms in free-living. To test this hypothesis, we conducted research on specialised wearable hardware and smartphones to record relevant data, and developed algorithms to monitor circadian clock phase in free-living. We first introduce our smart eyeglasses concept, which can be personalised to the wearer's head and 3D-printed. Furthermore, hardware was integrated into the eyewear to recognise typical activities of daily living (ADLs). A light sensor integrated into the eyeglasses bridge was used to detect screen use. In addition to wearables, we also investigate whether sleep-wake patterns can be revealed from smartphone context information. We introduce novel methods to detect sleep opportunity, which incorporate expert knowledge to filter and fuse classifier outputs. Furthermore, we estimate light exposure from smartphone sensor and weather information. We applied the Kronauer model to compare the phase shift resulting from head light measurements, wrist measurements, and smartphone estimations. We found it was possible to monitor circadian phase shift from light estimation based on smartphone sensor and weather information with a weekly error of 32±17 min, which outperformed wrist measurements in 11 out of 12 participants. Sleep could be detected from smartphone use with an onset error of 40±48 min and a wake error of 42±57 min. Screen use could be detected with smart eyeglasses at 0.9 ROC AUC for ambient light intensities below 200 lux. Nine clusters of ADLs were distinguished using Gaussian mixture models with an average accuracy of 77%. In conclusion, a combination of the proposed smartphone and smart eyeglasses applications could support users in synchronising their circadian clock to external clocks, and thus in living a healthier lifestyle.
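    A naive baseline for the sleep-opportunity detection described above is to treat the longest overnight gap between smartphone interactions as the sleep window. The sketch below shows only that baseline idea; the thesis's actual method additionally filters and fuses classifier outputs with expert knowledge, and the timestamps here are invented.

```python
from datetime import datetime, timedelta

# Hedged baseline sketch (not the thesis's classifier-fusion method):
# the longest gap between phone interactions approximates the nightly
# sleep opportunity. Timestamps are invented for illustration.

def longest_gap(events):
    """Return (start, end) of the longest gap between timestamps."""
    events = sorted(events)
    gaps = [(b - a, a, b) for a, b in zip(events, events[1:])]
    _, start, end = max(gaps)
    return start, end

day = datetime(2024, 1, 1)
# Phone used at 08:00, 12:30, 18:00, 22:45, then next day 07:00.
events = [day + timedelta(hours=h) for h in (8, 12.5, 18, 22.75, 7 + 24)]
start, end = longest_gap(events)
print(start.strftime("%H:%M"), end.strftime("%H:%M"))  # 22:45 07:00
```

    Real data needs the extra filtering the thesis describes, since daytime phone abstinence (meetings, commutes) also produces long gaps that this baseline would misclassify.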

    Edge Computing for Internet of Things

    The Internet of Things is becoming an established technology, with devices being deployed in homes, workplaces, and public areas at an increasingly rapid rate. IoT devices are the core technology of smart homes, smart cities, and intelligent transport systems, and promise to optimise travel, reduce energy usage, and improve quality of life. With the prevalence of IoT, the problem of how to manage the vast volume, wide variety, and erratic generation patterns of the data produced is becoming increasingly clear and challenging. This Special Issue focuses on solving this problem through the use of edge computing. Edge computing offers a solution to managing IoT data by processing it close to the location where it is generated. Edge computing allows computation to be performed locally, thus reducing the volume of data that needs to be transmitted to remote data centres and Cloud storage. It also allows decisions to be made locally, without having to wait for Cloud servers to respond.

    Activity-Based User Authentication Using Smartwatches

    Smartwatches, which contain an accelerometer and gyroscope, have recently been used to implement gait- and gesture-based biometrics; however, prior studies have long-established drawbacks. For example, data for both training and evaluation was captured from single sessions (which is not realistic and can lead to overly optimistic performance results), and in cases where a multi-day scenario was considered, the evaluation was often either done improperly or the results were very poor (i.e., greater than 20% EER). Moreover, limited activities were considered (i.e., gait or gestures), and data was captured within a controlled environment, which tends to be far less realistic for real-world applications. Therefore, this study remedies these past problems by training and evaluating the smartwatch-based biometric system on data from different days, using a large dataset involving 60 users, and considering different activities (i.e., normal walking (NW), fast walking (FW), typing on a PC keyboard (TypePC), playing a mobile game (GameM), and texting on a mobile (TypeM)). Unlike the prior art, which focussed on laboratory-controlled data, a more realistic dataset, captured within an unconstrained environment, is used to evaluate the performance of the proposed system. Two principal experiments were carried out, focusing upon constrained and unconstrained environments. The first experiment included a comprehensive analysis of the aforementioned activities, tested under two different scenarios (i.e., same day and cross day). Using all the extracted features (i.e., 88 features) and same-day evaluation, EERs of the acceleration readings were 0.15%, 0.31%, 1.43%, 1.52%, and 1.33% for NW, FW, TypeM, TypePC, and GameM, respectively. The EERs increased to 0.93%, 3.90%, 5.69%, 6.02%, and 5.61% when cross-day data was utilized. For comparison, a more selective set of features was used and significantly improved the system performance under the cross-day scenario, with best EERs of 0.29%, 1.31%, 2.66%, 3.83%, and 2.3% for the aforementioned activities, respectively. A realistic methodology was used in the second experiment, using data collected within an unconstrained environment. A light activity detection approach was developed to divide the raw signals into gait (i.e., NW and FW) and stationary activities. Competitive results were reported, with EERs of 0.60%, 0%, and 3.37% for NW, FW, and stationary activities, respectively. The findings suggest that the nature of the signals captured is sufficiently discriminative to be useful in performing transparent and continuous user authentication.
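    The equal error rate (EER) reported throughout the abstract is the operating point where the false accept rate equals the false reject rate. The sketch below computes it by a threshold sweep over genuine and impostor match scores; the score arrays are synthetic, not the study's data.

```python
import numpy as np

# Hedged sketch: computing EER from genuine and impostor match scores
# by sweeping the decision threshold. Scores below are synthetic.

def eer(genuine, impostor):
    """EER = error rate where false accepts and false rejects balance."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        best = min(best, max(far, frr))
    return best

genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95, 0.75])   # same-user scores
impostor = np.array([0.2, 0.3, 0.1, 0.4, 0.72, 0.25])   # cross-user scores
print(eer(genuine, impostor))  # ~0.167 (1/6)
```

    Cross-day evaluation raises EER precisely because genuine scores drop when enrolment and test data come from different days, pushing the two distributions closer together.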