Piezoelectric Sensors Used for Daily Life Monitoring
This chapter presents an unrestrained, predictive sensor system for analyzing human behavior patterns, especially those that occur when a patient leaves a bed. Our prototype system comprises three sensors: a pad sensor, a pillow sensor, and a bolt sensor. A triaxial accelerometer is used for the pillow sensor, and piezoelectric elements are used for the pad sensor and the bolt sensor, which were installed under a bed mat and on a bed handrail, respectively. The noteworthy features of these sensors are their easy installation, low cost, high reliability, and toughness. We developed a machine-learning-based method to recognize bed-leaving behavior patterns from the sensor signals. Our prototype system was evaluated in an examination with 10 subjects in an environment representing a clinical site. The experimentally obtained results revealed that the mean recognition accuracy for seven behavior patterns was 75.5%. In particular, the recognition accuracies for longitudinal sitting, terminal sitting, and leaving the bed were 83.3%, 98.3%, and 95.0%, respectively. However, misrecognized patterns remained within the respective behavior categories of sleeping and sitting. Our prototype system is thus applicable to actual environments as a novel, restraint-free sensor system for patients.
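As a rough sketch of how behavior patterns might be recognized from such sensor features (the abstract does not name the classifier; the feature values, the three-pattern subset, and the nearest-centroid rule below are illustrative assumptions, not the authors' method):

```python
import numpy as np

# Hypothetical per-window features: [pad-sensor RMS, pillow accelerometer
# variance, bolt-sensor RMS] for three of the behavior patterns named above.
rng = np.random.default_rng(1)
centers = {
    "sleeping":             np.array([0.2, 0.1, 0.0]),
    "longitudinal sitting": np.array([0.8, 0.6, 0.1]),
    "left the bed":         np.array([0.0, 0.0, 0.9]),
}
train = {k: c + 0.05 * rng.standard_normal((20, 3)) for k, c in centers.items()}

# Nearest-centroid rule: assign a new feature window to the behavior whose
# training centroid is closest in Euclidean distance.
centroids = {k: v.mean(axis=0) for k, v in train.items()}

def classify(x):
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

label = classify(np.array([0.05, 0.02, 0.85]))   # strong bolt-sensor response
```

A real system would, as the abstract notes, chiefly need to keep the sleeping and sitting categories apart, which is exactly where the reported confusions occurred.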
Mallard Detection Using Microphone Arrays Combined with Delay-and-Sum Beamforming for Smart and Remote Rice–Duck Farming
This paper presents a method for estimating the location of a sound source (pre-recorded mallard calls) from acoustic information using two microphone arrays combined with delay-and-sum beamforming. Rice farming using mallards saves labor because the mallards work instead of farmers. Nevertheless, the number of mallards declines when they are preyed upon by natural enemies such as crows, kites, and weasels. We consider that efficient management can be achieved by locating and identifying mallards and their natural enemies using acoustic information that can be sensed widely across a paddy field. For this study, we developed a prototype system comprising two sets of microphone arrays with 64 microphones in total, installed on sensor mounts of our own design and assembly. We obtained three acoustic datasets in an outdoor environment for our benchmark evaluation. The experimentally obtained results demonstrated that the proposed system provides adequate accuracy for application to rice–duck farming.
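The delay-and-sum principle named in the title can be sketched as follows. This is a minimal far-field simulation with an invented eight-microphone linear array, not the authors' two-array prototype; all geometry and signal parameters are assumptions:

```python
import numpy as np

def delay_and_sum(signals, mic_positions, angle, fs, c=343.0):
    """Steer a linear array toward `angle` (rad) and sum the aligned channels."""
    # Far-field arrival delays relative to the first microphone (seconds).
    delays = (mic_positions - mic_positions[0]) * np.sin(angle) / c
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, delays):
        shift = int(round(d * fs))
        out += np.roll(sig, -shift)   # advance each channel to align wavefronts
    return out / len(signals)

# Synthetic test: 8-mic linear array, 0.05 m spacing, source at 30 degrees.
fs, c = 16000, 343.0
mics = np.arange(8) * 0.05
true_angle = np.deg2rad(30)
t = np.arange(2048) / fs
rng = np.random.default_rng(0)
src = np.sin(2 * np.pi * 500 * t)
sigs = np.stack([
    np.roll(src, int(round(m * np.sin(true_angle) / c * fs)))
    + 0.05 * rng.standard_normal(t.size)
    for m in mics
])

# Scan candidate angles; the steered-power peak estimates the source bearing.
angles = np.deg2rad(np.arange(-90, 91, 2))
powers = [np.mean(delay_and_sum(sigs, mics, a, fs) ** 2) for a in angles]
est = np.rad2deg(angles[int(np.argmax(powers))])
```

When the steering angle matches the true bearing, the per-channel delays cancel and the channels add coherently, so the output power peaks there; off-bearing angles sum incoherently and stay low.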
Unsupervised Category Formation and Its Application to Robot Vision
Doctoral dissertation, Doctor of Engineering, Nara Institute of Science and Technology (Dissertation No. 962).
Category Maps Describe Driving Episodes Recorded with Event Data Recorders
This study was conducted to create driving episodes using machine-learning-based algorithms that address long-term memory (LTM) and topological mapping. This paper presents a novel episodic memory model for driving safety according to traffic scenes. The model incorporates three important features: adaptive resonance theory (ART), which learns time-series features incrementally while maintaining stability and plasticity; self-organizing maps (SOMs), which represent input data as a map with topological relations using self-mapping characteristics; and counter propagation networks (CPNs), which label category maps using input features and counter signals. Category maps represent driving episode information that includes driving contexts and facial expressions. The bursting states of the respective maps produce LTM, created on ART, as episodic memory. For a preliminary experiment using a driving simulator (DS), we measured the gazes and face orientations of drivers as internal information to create driving episodes. Moreover, we measured cognitive distraction from its effects on facial features shown in reaction to simulated near-misses. Evaluation of the experimentally obtained results shows the possibility of using recorded driving episodes with image datasets obtained using an event data recorder (EDR) with two cameras. Using category maps, we visualize driving features according to driving scenes on a public road and an expressway.
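A counter propagation network of the kind used above to label category maps can be sketched minimally: a competitive (Kohonen) layer learns the input topology, and a Grossberg layer learns labels from counter signals. The unit count, learning schedule, and the synthetic two-class "driving context" data below are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def train_cpn(X, y, n_units=10, epochs=50, seed=0):
    """Tiny counter propagation network (CPN)."""
    rng = np.random.default_rng(seed)
    n_classes = int(y.max()) + 1
    W = rng.standard_normal((n_units, X.shape[1]))   # Kohonen weights
    G = np.zeros((n_units, n_classes))               # Grossberg (label) weights
    for epoch in range(epochs):
        lr = 0.3 * (1 - epoch / epochs)
        for i in rng.permutation(len(X)):
            win = int(np.argmin(np.linalg.norm(W - X[i], axis=1)))
            W[win] += lr * (X[i] - W[win])           # move winner toward input
            G[win] += lr * (np.eye(n_classes)[y[i]] - G[win])  # counter signal
    return W, G

def predict(W, G, x):
    """Label of the winning unit's Grossberg row."""
    return int(np.argmax(G[np.argmin(np.linalg.norm(W - x, axis=1))]))

# Demo on two synthetic "driving context" clusters labeled 0 and 1.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
W, G = train_cpn(X, y)
acc = np.mean([predict(W, G, x) == t for x, t in zip(X, y)])
```

The trained Kohonen layer is the category map; the Grossberg rows attach a label to each map unit, which is how category maps can describe episodes.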
Visualization and Semantic Labeling of Mood States Based on Time-Series Features of Eye Gaze and Facial Expressions by Unsupervised Learning
This study is intended to develop a stress measurement and visualization system for stress management in terms of simplicity and reliability. We present a classification and visualization method for mood states based on unsupervised machine learning (ML) algorithms. Our proposed method examines the relation between mood states and extracted categories in human communication from facial expressions, gaze distribution area and density, and rapid eye movements, defined as saccades. Using a psychological check sheet and a communication video with an interlocutor, an original benchmark dataset was obtained from 20 subjects (10 male, 10 female) in their 20s over four or eight weeks at weekly intervals. We used the Profile of Mood States, Second Edition (POMS2) psychological check sheet to extract total mood disturbance (TMD) and friendliness (F). These two indicators were classified into five categories using self-organizing maps (SOMs) and a U-Matrix. The relation between gaze and facial expressions was analyzed for the five extracted categories. Data from subjects in the positive categories were found to correlate positively with concentrated distributions of gaze and saccades. Regarding facial expressions, these subjects showed a constant expression time for intentional smiles; by contrast, subjects in the negative categories showed a time difference in intentional smiles. Moreover, three comparative experiments demonstrated that adding gaze and facial-expression features to TMD and F clarified the category boundaries obtained from the U-Matrix. We verify that the use of the SOM and its two variants is the best combination for the visualization of mood states.
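The U-Matrix used above to delimit categories is computed from a trained SOM's weight grid: each node gets the mean distance between its weight vector and those of its grid neighbors, and high-value ridges mark category boundaries. A minimal sketch on a hand-built toy weight grid (the grid and its two prototypes are invented):

```python
import numpy as np

def u_matrix(weights):
    """U-Matrix of an (h, w, dim) SOM weight grid: per-node mean distance
    to 4-connected grid neighbors. High values mark category boundaries."""
    h, w, _ = weights.shape
    U = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            neigh = [(i + di, j + dj)
                     for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= i + di < h and 0 <= j + dj < w]
            U[i, j] = np.mean([np.linalg.norm(weights[i, j] - weights[a, b])
                               for a, b in neigh])
    return U

# Toy map: the left half of the grid holds one prototype, the right half
# another, so the boundary columns should carry the largest U-Matrix values.
W = np.zeros((6, 6, 3))
W[:, :3] = [0.0, 0.0, 0.0]
W[:, 3:] = [1.0, 1.0, 1.0]
U = u_matrix(W)
```

On a real trained map the ridges are noisier, which is why the boundary extraction reported above benefits from the added gaze and facial-expression features.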
Automatic Calibration of Piezoelectric Bed-Leaving Sensor Signals Using Genetic Network Programming Algorithms
This paper presents a filter-generation method that modifies sensor signals using genetic network programming (GNP) for automatic calibration, absorbing individual differences. In our earlier study, we developed a prototype that incorporates bed-leaving detection sensors using piezoelectric films and a machine-learning-based behavior recognition method using counter propagation networks (CPNs). Our method learns the topology of, and relations between, input features and teaching signals. Nevertheless, CPNs have been insufficient to address individual differences in parameters such as weight and height in bed-leaving behavior recognition. For this study, we actualize automatic calibration of sensor signals so that they are invariant to these body parameters. This paper presents two sets of experimental results, both obtained using the low-accuracy sensor signals from our earlier study. In the preliminary experiment, we optimized the original sensor signals to approximate high-accuracy ideal sensor signals using the generated filters, with fitness assessing the difference between the original and ideal signal patterns. In the application experiments, we used fitness calculated from the recognition accuracy obtained using CPNs. The experimentally obtained results reveal that our method improved the mean accuracies for three datasets.
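GNP evolves graph-structured programs, which is beyond a short sketch. As a much-simplified stand-in, the example below uses a plain genetic algorithm to evolve a gain/offset filter so that a low-accuracy signal approximates an ideal one, with fitness defined, as in the preliminary experiment above, by the difference between the filtered and ideal signal patterns. The signal shapes and GA parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
ideal = np.sin(2 * np.pi * t)        # high-accuracy reference signal
raw = 0.4 * ideal + 0.2              # low-accuracy sensor: wrong gain and offset

def fitness(ind):
    """Negative mean squared error of the filtered signal vs. the ideal one."""
    gain, offset = ind
    return -np.mean((gain * raw + offset - ideal) ** 2)

# Plain elitist GA: truncation selection plus Gaussian mutation.
pop = rng.uniform(-3, 3, (30, 2))                            # (gain, offset) pairs
for _ in range(60):
    pop = pop[np.argsort([fitness(p) for p in pop])[::-1]]   # best first
    parents = pop[:10]
    children = parents[rng.integers(0, 10, 20)] \
        + 0.1 * rng.standard_normal((20, 2))
    pop = np.vstack([parents, children])
best = max(pop, key=fitness)   # converges near gain 2.5, offset -0.5
```

The analytic optimum is gain 2.5 and offset -0.5 (undoing the 0.4 gain and 0.2 offset), so the surviving filter effectively calibrates the raw signal, which is the role the generated GNP filters play above.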
Vision-Based Indoor Scene Recognition from Time-Series Aerial Images Obtained Using an MAV-Mounted Monocular Camera
This paper presents a vision-based indoor scene recognition method using aerial time-series images obtained with a micro air vehicle (MAV). The proposed method comprises two procedures: a codebook feature description procedure and a recognition procedure using category maps. In the former, codebooks are created automatically as visual words using self-organizing maps (SOMs) after extracting part-based local features with a part-based descriptor from time-series scene images. In the latter, category maps are created using counter propagation networks (CPNs), with category boundaries extracted using a unified distance matrix (U-Matrix). Using category maps, topologies of image features are mapped into a low-dimensional space based on competitive and neighborhood learning. We obtained five sets of aerial time-series images for two flight routes: a round flight route and a zigzag flight route. The experimentally obtained results with leave-one-out cross-validation (LOOCV) revealed mean recognition accuracies over 10 zones of 71.7% for the round flight datasets (RFDs) and 65.5% for the zigzag flight datasets (ZFDs). The category maps captured the complexity of the scenes through the segmented categories. Although the category boundaries extracted using the U-Matrix were partially discontinuous, we obtained comprehensive boundaries that segment the scenes into several categories.
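The codebook step can be sketched with a simple quantizer. The paper builds its codebooks with SOMs; k-means is used below as a stand-in to illustrate the same visual-words idea of mapping local descriptors to a fixed-length histogram. The descriptors and prototypes are synthetic:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Tiny Lloyd's k-means; evenly spaced initialization keeps it deterministic."""
    centers = X[:: max(1, len(X) // k)][:k].copy()
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors to their nearest codeword; the normalized
    histogram is the scene's fixed-length feature vector."""
    words = np.argmin(np.linalg.norm(descriptors[:, None] - codebook, axis=2),
                      axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Synthetic local descriptors drawn around three invented prototypes.
rng = np.random.default_rng(5)
protos = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
desc = np.vstack([p + 0.1 * rng.standard_normal((30, 2)) for p in protos])
codebook = kmeans(desc, k=3)
h = bow_histogram(desc[:30], codebook)   # a "scene" seen near one prototype
```

In the paper's pipeline this fixed-length histogram would then feed the CPN to form the category map; here it simply shows how variable numbers of local features become a uniform scene descriptor.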