
    Developmental Stages of Perception and Language Acquisition in a Perceptually Grounded Robot

    The objective of this research is to develop a system for language learning based on a minimum of pre-wired language-specific functionality that is compatible with observations of perceptual and language capabilities in the human developmental trajectory. In the proposed system, meaning (in terms of descriptions of events and spatial relations) is extracted from video images based on detection of position, motion, physical contact and their parameters. Mapping of sentence form to meaning is performed by learning grammatical constructions that are retrieved from a construction inventory based on the constellation of closed-class items uniquely identifying the target sentence structure. The resulting system displays robust acquisition behavior that reproduces certain observations from developmental studies, with very modest “innate” language specificity.
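
    A minimal sketch of how such a construction inventory might be indexed by the constellation of closed-class items is shown below; the word lists, construction entries, and role labels are illustrative assumptions rather than the paper's actual inventory.

```python
# Hypothetical sketch of a construction inventory indexed by the constellation of
# closed-class items in a sentence, loosely following the approach described above.
# The word lists, constructions, and role labels are illustrative assumptions.

CLOSED_CLASS = {"to", "was", "by", "the", "that", "it"}

# Each construction maps the ordered open-class slots to thematic roles.
CONSTRUCTIONS = {
    (): ("agent", "action", "object"),                    # "John pushed the block"
    ("to",): ("agent", "action", "object", "recipient"),  # "John gave the block to Mary"
    ("was", "by"): ("object", "action", "agent"),         # "the block was pushed by John"
}

def parse(sentence):
    tokens = sentence.lower().split()
    # "the" is ignored when forming the constellation, for simplicity.
    constellation = tuple(t for t in tokens if t in CLOSED_CLASS and t != "the")
    open_class = [t for t in tokens if t not in CLOSED_CLASS]
    roles = CONSTRUCTIONS.get(constellation)
    if roles is None:
        return None  # unknown construction: would trigger learning of a new entry
    return dict(zip(roles, open_class))

print(parse("John pushed the block"))
print(parse("the block was pushed by John"))
```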

    Evaluation of deep learning models in contactless human motion detection system for next generation healthcare

    Recent decades have witnessed the growing importance of human motion detection systems based on artificial intelligence (AI). The growing interest in such systems stems from the advantages of automation in monitoring patients remotely and warning doctors promptly. Currently, wearable devices are frequently used for human motion detection. However, such devices have several limitations: for example, elderly users may not wear the devices due to lack of comfort or forgetfulness, and battery life is limited. To overcome the problems of wearable devices, we propose an AI-driven (deep learning-based) human motion detection system using channel state information (CSI) extracted from Radio Frequency (RF) signals. The main contribution of this paper is to improve the performance of the deep learning models through techniques including structure modification and dimension reduction of the original data. In this work, we first collected CSI data at a center frequency of 5.32 GHz and implemented the basic deep learning network structure from our previous work. After that, we modified the basic network by increasing the depth, increasing the width, adopting some advanced network structures, and reducing dimensions. After these modifications, we observed the results and analyzed how to further improve the deep learning performance of this contactless AI-enabled human motion detection system. We found that reducing the dimension of the original data can work better than modifying the structure of the deep learning model.
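
    The dimension-reduction step can be illustrated with a small sketch: project high-dimensional CSI frames onto their top principal components before classification. The data shapes and synthetic CSI below are assumptions, not the authors' dataset or code.

```python
# Minimal sketch (not the paper's code) of the dimension-reduction step: project
# high-dimensional CSI frames onto their top principal components before feeding
# them to a classifier. Shapes and the synthetic data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
# Assume 1000 CSI frames, each flattened to 3 antennas x 30 subcarriers = 90 values.
X = rng.normal(size=(1000, 90))

def pca_reduce(X, n_components=10):
    """Project rows of X onto the top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are principal directions, ordered by singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

Z = pca_reduce(X, n_components=10)
print(X.shape, "->", Z.shape)   # (1000, 90) -> (1000, 10)
```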

    Biomimetic hydrogel-CNT network induced enhancement of fluid-structure interactions for ultrasensitive nanosensors

    Flexible, self-powered, miniaturized, ultrasensitive flow sensors are in high demand for human motion detection, myoelectric prostheses, biomedical robots, and health-monitoring devices. This paper reports a biomimetic nanoelectromechanical system (NEMS) flow sensor featuring a PVDF nanofiber sensing membrane with a hydrogel-infused, vertically aligned carbon nanotube (VACNT) bundle that mechanically interacts with the flow. The hydrogel-VACNT structure mimics the cupula structure in biological flow sensors and gives the NEMS flow sensor ultrahigh sensitivity via a material-induced drag force enhancement mechanism. Through hydrodynamic experimental flow characterization, this work investigates the contributions of the mechanical and structural properties of the hydrogel in offering a sensing performance superior to that of conventional sensors. The ultrahigh sensitivity of the developed sensor enabled the detection of minute flows generated during human motion and micro-droplet propagation. The novel fabrication strategies and combination of materials used in the biomimetic NEMS sensor fabrication may guide the development of several wearable, flexible, and self-powered nanosensors in the future.
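
    The drag-force enhancement idea can be illustrated with a back-of-the-envelope estimate using the standard quadratic drag law; the dimensions, drag coefficient, and flow speed below are assumed values, not measurements from the paper.

```python
# Back-of-the-envelope sketch (not from the paper) of the drag-force enhancement idea:
# encapsulating the VACNT pillar in a hydrogel cupula enlarges the area interacting with
# the flow, so the drag force transferred to the sensing membrane grows. All numbers
# below (dimensions, drag coefficient, flow speed) are illustrative assumptions.

def drag_force(rho, cd, area, velocity):
    """Quadratic drag: F = 0.5 * rho * Cd * A * v^2."""
    return 0.5 * rho * cd * area * velocity ** 2

rho_water = 1000.0            # kg/m^3
cd = 1.0                      # assumed drag coefficient for a blunt pillar
v = 0.01                      # 10 mm/s flow (assumed)

bare_area = 50e-6 * 400e-6    # bare VACNT bundle: ~50 um wide, 400 um tall (assumed)
cupula_area = 300e-6 * 600e-6 # hydrogel cupula: ~300 um wide, 600 um tall (assumed)

f_bare = drag_force(rho_water, cd, bare_area, v)
f_cupula = drag_force(rho_water, cd, cupula_area, v)
print(f"bare: {f_bare:.2e} N, with hydrogel cupula: {f_cupula:.2e} N "
      f"(~{f_cupula / f_bare:.0f}x larger)")
```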

    Development of Real-Time Electromyography Controlled 3D Printed Robot Hand Prototype

    Developing an anthropomorphic robotic hand (ARH) has become a relevant research field due to the need to help amputees live their lives as normal people do. However, the current state of research is unsatisfactory, especially in terms of structural design and the robot control method. This paper proposes a 3D printed ARH structure that follows the average size of an adult human hand and consists of five fingers, each with a tendon-driven actuator mechanism embedded in its structure. In addition, the movement capability of the developed 3D printed robot hand was validated using motion capture analysis to ensure that the motion range expected from the structural design is achieved. Its system functionality test was conducted in three stages: (1) muscular activity detection, (2) object detection for individual finger movement control, and (3) integration of both stages in one algorithm. Finally, an ARH that resembles human hand features was developed, together with a reliable system that can perform an open hand palm posture and some grasping postures for daily use.
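
    The muscular activity detection stage can be sketched as a simple EMG envelope threshold that triggers a grasp command; the sampling rate, threshold, and synthetic signal below are assumptions, not the authors' controller.

```python
# Illustrative sketch (not the authors' controller) of the "muscular activity detection"
# stage: rectify and smooth a raw EMG signal, then compare the envelope against a
# threshold to trigger a grasp command. Sampling rate, threshold, and the synthetic
# signal are assumptions.
import numpy as np

fs = 1000                                  # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
emg = 0.02 * rng.normal(size=t.size)       # baseline noise
emg[fs // 2 : fs] += 0.3 * rng.normal(size=fs // 2)   # burst of muscle activity

def envelope(signal, window=100):
    """Moving average of the rectified signal (simple linear envelope)."""
    kernel = np.ones(window) / window
    return np.convolve(np.abs(signal), kernel, mode="same")

env = envelope(emg)
threshold = 0.05                           # assumed activation threshold
active = env > threshold
print("grasp command active during", active.mean() * 100, "% of the recording")
```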

    Recognition of Human Periodic Movements From Unstructured Information Using A Motion-based Frequency Domain Approach

    Feature-based motion cues play an important role in biological visual perception. We present a motion-based frequency-domain scheme for human periodic motion recognition. As a baseline study of feature-based recognition, we use unstructured feature-point kinematic data obtained directly from a marker-based optical motion capture (MoCap) system, rather than bootstrapping from low-level image processing for feature detection. Motion power spectral analysis is applied to a set of unidentified trajectories of feature points representing whole-body kinematics. Feature power vectors are extracted from motion power spectra and mapped to a low-dimensional feature space as motion templates that offer frequency-domain signatures to characterise different periodic motions. Recognition of a new instance of periodic motion against pre-stored motion templates is carried out by seeking the best motion power spectral similarity. We test this method on nine examples of human periodic motion using MoCap data. The recognition results demonstrate that feature-based spectral analysis allows classification of periodic motions from a low-level, unstructured interpretation without recovering underlying kinematics. Contrasting with common structure-based spatio-temporal approaches, this motion-based frequency-domain method avoids a time-consuming recovery of underlying kinematic structures in visual analysis and largely reduces the parameter domain in the presence of human motion irregularities.
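
    A minimal sketch of the spectral pipeline is given below: compute power spectra of unlabelled trajectories, pool them into a feature power vector, and classify a new motion by spectral similarity to stored templates. The synthetic trajectories and the cosine similarity measure are assumptions for illustration.

```python
# Minimal sketch of the frequency-domain idea described above (not the authors' code).
import numpy as np

fs = 60.0  # assumed MoCap frame rate, Hz

def feature_power_vector(trajectories, n_bins=32):
    """trajectories: (n_markers, n_frames) array -> pooled, normalised power vector."""
    detrended = trajectories - trajectories.mean(axis=1, keepdims=True)
    power = np.abs(np.fft.rfft(detrended, axis=1)) ** 2
    pooled = power.mean(axis=0)[:n_bins]          # pool over (unidentified) markers
    return pooled / pooled.sum()

def classify(sample, templates):
    """Pick the template with the highest cosine similarity to the sample."""
    sims = {name: np.dot(sample, tpl) / (np.linalg.norm(sample) * np.linalg.norm(tpl))
            for name, tpl in templates.items()}
    return max(sims, key=sims.get)

# Synthetic "walk" (1.0 Hz) and "run" (2.5 Hz) marker trajectories, 10 markers each.
t = np.arange(0, 5, 1 / fs)
walk = np.vstack([np.sin(2 * np.pi * 1.0 * t + p) for p in np.linspace(0, 3, 10)])
run = np.vstack([np.sin(2 * np.pi * 2.5 * t + p) for p in np.linspace(0, 3, 10)])

templates = {"walk": feature_power_vector(walk), "run": feature_power_vector(run)}
test = feature_power_vector(run + 0.1 * np.random.default_rng(2).normal(size=run.shape))
print(classify(test, templates))   # expected: "run"
```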

    Efficient Pedestrian Detection in Urban Traffic Scenes

    Pedestrians are important participants in urban traffic environments, and thus act as an interesting category of objects for autonomous cars. Automatic pedestrian detection is an essential task for protecting pedestrians from collision. In this thesis, we investigate and develop novel approaches by interpreting spatial and temporal characteristics of pedestrians, in three different aspects: shape, cognition and motion. The special upright human body shape, especially the geometry of the head and shoulder area, is the most discriminative characteristic distinguishing pedestrians from other object categories. Inspired by the success of Haar-like features for detecting human faces, which also exhibit a uniform shape structure, we propose to design particular Haar-like features for pedestrians. Tailored to a pre-defined statistical pedestrian shape model, Haar-like templates with multiple modalities are designed to describe local differences of the shape structure. Cognition theories aim to explain how human visual systems process input visual signals in an accurate and fast way. By emulating the center-surround mechanism in human visual systems, we design multi-channel, multi-direction and multi-scale contrast features, and boost them to respond to the appearance of pedestrians. In this way, our detector can be considered a top-down saliency system. In the last part of this thesis, we exploit the temporal characteristics of moving pedestrians and employ motion information for feature design, as well as for region of interest (ROI) selection. Motion segmentation on optical flow fields enables us to select those blobs most probably containing moving pedestrians; a combination of Histogram of Oriented Gradients (HOG) and motion self-difference features further enables robust detection. We test our three approaches on image and video data captured in urban traffic scenes, which are rather challenging due to dynamic and complex backgrounds. The achieved results demonstrate that our approaches reach and surpass state-of-the-art performance, and can also be employed for other applications, such as indoor robotics or public surveillance.
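
    A simplified illustration of combining an appearance descriptor with a motion cue for a candidate window is sketched below; the coarse HOG-like histogram, the frame self-difference feature, and the synthetic frames are assumptions rather than the thesis implementation.

```python
# Simplified numpy sketch (an illustration, not the thesis implementation) of combining
# an appearance descriptor (a coarse HOG-like orientation histogram) with a motion cue
# (frame self-difference) for a candidate pedestrian window. Window size, cell size and
# the synthetic frames are assumptions.
import numpy as np

def hog_like(window, n_orient=9, cell=8):
    """Very coarse HOG: per-cell histograms of gradient orientation, magnitude-weighted."""
    gy, gx = np.gradient(window.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)
    h, w = window.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=n_orient, range=(0, np.pi), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

def motion_self_difference(window_t, window_t1, cell=8):
    """Mean absolute frame difference per cell, as a simple motion feature."""
    diff = np.abs(window_t.astype(float) - window_t1.astype(float))
    h, w = diff.shape
    return np.array([diff[y:y + cell, x:x + cell].mean()
                     for y in range(0, h - cell + 1, cell)
                     for x in range(0, w - cell + 1, cell)])

rng = np.random.default_rng(3)
frame_t = rng.integers(0, 255, size=(128, 64))       # 128x64 candidate window at time t
frame_t1 = np.roll(frame_t, 2, axis=1)               # shifted copy simulates motion
feature = np.concatenate([hog_like(frame_t), motion_self_difference(frame_t, frame_t1)])
print(feature.shape)   # combined appearance + motion descriptor for a classifier
```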

    Multi-disciplinary Collaborations in Measurement of Human Motion

    Comparative Medicine - OneHealth and Comparative Medicine Poster Session
    Bioengineering is a broad and rapidly growing discipline defined as the application of engineering principles to biological systems. Although bioengineering is diverse in nature, the study of human movement is common to many bioengineering subdisciplines such as biomechanics and biometrics. Biomechanics is the science that examines the forces acting upon and within a biological structure and the effects produced by such forces [1]. Measurements of ground reaction forces, limb motion, and muscle activation are fundamental research components in musculoskeletal biomechanics. Researchers in this field have used these measurements to quantify human gait, balance, and posture in a multitude of applications including age-related fall risk [2-4], muscle fatigue [5-7], and balance-related pathologies such as Parkinson's disease [8-10] and stroke [11, 12]. Additionally, these measurements play a vital role in computational biomechanics models. For example, the inverse dynamics method incorporates measured ground reaction forces and body motions to calculate the net reaction forces and torques acting on body joints [13]. Biometrics is the science of confirming or discovering individuals' identities based on their specific biological or behavioral traits [14]. Gait is one such modality that can be used for biometric identification; it is based on the uniqueness of an individual's locomotion patterns [15]. In addition, we are interested in high-speed video analyses of micro-saccades and blink reflexes for spoof-proofing of biometric identification systems, biometric identification, and psychometry. We have shown that startle blink intensity can be derived from high-speed video [18], enabling video-based psychophysiological biometrics for detection of subject-specific affective-cognitive information [19]. The Human Motion Laboratory at the University of Missouri - Kansas City is dedicated to measuring the characteristics of human motion. The lab includes a VICON MX 6-camera motion capture system, 4 AMTI OR6-6 force platforms, and a Delsys Myomonitor IV 16-channel wireless EMG system. This equipment represents an experimental infrastructure mutually supporting the biomechanics and biometrics research efforts of four research labs. The scope of these research efforts includes aging, affective computing, psychophysiological biometrics, orthopedics, and human dynamics pathology. The lab capitalizes on a synergistic environment for characterization and measurement of human movement and the interrelated nature of the research activities. The four main research areas that the Human Motion Laboratory supports are:
    • Computational Biomechanics
    • Biometrics of Human Motion
    • Experimental Biomechanics
    • Body Area Sensor Network
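
    The inverse dynamics method mentioned above can be illustrated with a toy 2D Newton-Euler calculation for a single foot segment; all masses, positions, and accelerations below are made-up numbers for illustration.

```python
# Toy 2D inverse-dynamics sketch for a single (foot) segment: measured ground reaction
# force plus segment kinematics give the net ankle joint force and moment. All masses,
# positions and accelerations below are assumed numbers, not data from the lab.
import numpy as np

g = np.array([0.0, -9.81])

def joint_force_moment(m, I, a_com, alpha, grf, cop, ankle, com):
    """Net joint reaction force and moment at the proximal (ankle) joint of the segment."""
    # Force balance: F_joint + F_grf + m*g = m*a  ->  solve for F_joint
    f_joint = m * a_com - m * g - grf
    # Moment balance about the segment COM (2D cross product, z-component only)
    cross = lambda r, f: r[0] * f[1] - r[1] * f[0]
    m_joint = I * alpha - cross(ankle - com, f_joint) - cross(cop - com, grf)
    return f_joint, m_joint

f, mz = joint_force_moment(
    m=1.2,                        # foot mass, kg (assumed)
    I=0.01,                       # foot moment of inertia about COM, kg*m^2 (assumed)
    a_com=np.array([0.5, 0.2]),   # COM linear acceleration, m/s^2
    alpha=1.0,                    # angular acceleration, rad/s^2
    grf=np.array([20.0, 700.0]),  # measured ground reaction force, N
    cop=np.array([0.15, 0.0]),    # centre of pressure
    ankle=np.array([0.05, 0.08]), # ankle joint centre
    com=np.array([0.10, 0.04]),   # foot COM position
)
print("ankle force [N]:", f, " ankle moment [N*m]:", round(mz, 2))
```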

    A space-variant architecture for active visual target tracking

    An active visual target tracking system is an automatic feedback control system that can track a moving target by controlling the movement of a camera or sensor array. This kind of system is often used in applications such as automatic surveillance and human-computer interaction. The design of an effective target tracking system is challenging because the system should be able to precisely detect the fine movements of a target while still being able to detect a large range of target velocities. Achieving this in a computationally efficient manner is difficult with a conventional system architecture. This thesis presents an architecture for an active visual target tracking system based on the idea of space-variant motion detection. In general, space-variant imaging involves the use of a non-uniform distribution of sensing elements across a sensor array, similar to how the photoreceptors in the human eye are not evenly distributed. In the proposed architecture, space-variant imaging is used to design an array of elementary motion detectors (EMDs). The EMDs are tuned in such a way as to make it possible to detect motion both precisely and over a wide range of velocities in a computationally efficient manner. The increased ranges are achieved without additional computational costs beyond the basic mechanism of motion detection. The technique is general in that it can be used with different motion detection mechanisms and the overall space-variant structure can be varied to suit a particular application. The design of a tracking system based on a space-variant motion detection array is a difficult task. This thesis presents a method of analysis and design for such a tracking system. The method of analysis consists of superimposing a phase-plane plot of the continuous-time dynamics of the tracking system onto a map of the detection capabilities of the array of EMDs. With the help of this 'sensory-motor' plot, a simple optimization algorithm is used to design a tracking system to meet particular objectives for settling time, steady-state error and overshoot. Several simulations demonstrate the effectiveness of the method. A complete active vision system is implemented and a set of target tracking experiments are performed. Experimental results support the effectiveness of the approach.
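
    The space-variant EMD idea can be sketched as a set of correlation-type detectors whose input spacing grows with eccentricity; the sinusoidal stimulus, delay, and spacings below are assumptions, not the thesis design.

```python
# Conceptual sketch (an assumption-laden illustration, not the thesis implementation) of a
# space-variant array of correlation-type elementary motion detectors (EMDs): detector
# spacing grows with eccentricity, shifting the preferred velocity upward for a fixed
# delay at no extra computational cost per detector.
import numpy as np

def emd_response(signal_a, signal_b, delay):
    """Reichardt-style correlator: delayed A * B minus A * delayed B, averaged over time."""
    a_d = np.roll(signal_a, delay)
    b_d = np.roll(signal_b, delay)
    return np.mean(a_d[delay:] * signal_b[delay:] - signal_a[delay:] * b_d[delay:])

# A 1-D luminance grating drifting at an assumed velocity (pixels per frame).
n_pix, n_frames, velocity, wavelength = 256, 200, 3.0, 64.0
x = np.arange(n_pix)
frames = np.array([np.sin(2 * np.pi * (x - velocity * t) / wavelength)
                   for t in range(n_frames)])

# Space-variant sampling: the spacing between the two inputs of each EMD doubles with
# eccentricity from the centre of the array.
centre, delay = n_pix // 2, 2
spacings = [1, 2, 4, 8, 16]
position = centre
for s in spacings:
    resp = emd_response(frames[:, position], frames[:, position + s], delay)
    print(f"spacing {s:2d} px at pixel {position}: response {resp:+.3f}")
    position += s
```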

    A spatio-temporal learning approach for crowd activity modelling to detect anomalies

    With security and surveillance gaining paramount importance in recent years, it has become important to reliably automate some surveillance tasks for monitoring crowded areas. The need to automate this process also supports human operators who are overwhelmed by the large number of security screens to monitor. Crowd events such as excess usage throughout the day, sudden peaks in crowd volume, and chaotic motion (obvious to spot) all emerge over time, which requires constant monitoring in order to be informed of the event build-up. To ease this task, the computer vision community has been addressing some surveillance tasks using image processing and machine learning techniques. Currently, tasks such as crowd density estimation or people counting, crowd detection, and abnormal crowd event detection are being addressed. Most of the work has focused on crowd detection and estimation, with the focus slowly shifting towards crowd event learning for abnormality detection. This thesis addresses crowd abnormality detection. However, by way of the modelling approach used, the tasks of crowd detection and estimation are implicitly handled as well. The existing approaches in the literature have a number of drawbacks that keep them from being scalable to any public scene. Most pieces of work use simple scene settings where motion occurs wholly in the near-field or far-field of the camera view. Thus, with assumptions on the expected location of person motion, small blobs are arbitrarily filtered out as noise when they may be legitimate motion in the far-field. Such an approach makes it difficult to deal with complex scenes where entry/exit points occur in the centre of the scene, or where multiple pathways running from the near- to the far-field of the camera view produce blobs of differing sizes. Further, most authors assume the number of directions people's motion should exhibit rather than discovering what these may be. Approaches with such assumptions lose accuracy when dealing with (say) a railway platform, which shows a number of motion directions, namely two-way, one-way, dispersive, etc. Finally, very few contributions use time as a video feature to model the human intuitiveness of time-of-day abnormalities; that is, certain motion patterns may be abnormal if they have not been seen for a given time of day. Most works use time only as an extra qualifier to spatial data for trajectory definition. In this thesis most of these drawbacks are addressed in the modelling of crowd activity. Firstly, no assumptions are made on scene structure or on the blob sizes resulting from it. The optical flow algorithm used is robust, and even the noise present (which is in fact unwanted motion of swaying hands and legs as opposed to motion of the torso) is fairly consistent and can therefore be factored into the modelling. Blobs, no matter what their size, are not discarded, as they may be legitimate emerging motion in the far-field. The modelling also deals with paths extending from the far- to the near-field of the camera view and segments these such that each segment contains self-comparable fields of motion. The need for a normalisation factor for comparisons across near- and far-field motion fields would imply prior knowledge of the scene. As the system is intended for generic public locations with varying scene structures, normalisation is not an option in the processing used, and yet the near- and far-field motion changes are accounted for.
Secondly, this thesis describes a system that learns the true distribution of motion along the detected paths and maintains these distributions. The approach does not generalise the direction distributions, which would cause a loss in precision. No impositions are made on the expected motion: if the underlying motion is well defined (one-way or two-way), it is represented as a well-defined distribution, and as a mixture of directions if the underlying motion presents itself as such. Finally, time as a video feature is used to allow activity to reinforce itself on a daily basis, such that motion patterns for a given time and space begin to define themselves through reinforcement, which acts as the model used for abnormality detection in time and space (spatio-temporal). The system has been tested with real-world datasets with varying fields of camera view. The testing has shown no false negatives and very few false positives, and the system detects crowd abnormalities quite well with respect to the ground truths of the datasets used.
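
    The spatio-temporal reinforcement idea can be sketched as per-cell direction histograms accumulated for each time-of-day bin, with an anomaly score for directions rarely seen at that place and time; the grid size, bin counts, and simulated flow below are assumptions, not the thesis system.

```python
# Schematic sketch (assumptions throughout, not the thesis system) of the spatio-temporal
# idea above: accumulate per-cell histograms of optical-flow direction for each
# time-of-day bin, and flag an observation as anomalous when its direction has rarely
# been seen at that place and time.
import numpy as np

N_DIR, N_HOURS = 8, 24            # direction bins and time-of-day bins (assumed)

class CrowdModel:
    def __init__(self, grid_shape):
        # counts[cell_y, cell_x, hour, direction], reinforced daily by observations
        self.counts = np.ones(grid_shape + (N_HOURS, N_DIR))   # Laplace prior

    def update(self, cell, hour, angle):
        self.counts[cell[0], cell[1], hour, self._bin(angle)] += 1

    def anomaly_score(self, cell, hour, angle):
        hist = self.counts[cell[0], cell[1], hour]
        p = hist[self._bin(angle)] / hist.sum()
        return -np.log(p)         # high when this direction is rare here at this hour

    @staticmethod
    def _bin(angle):
        return int((angle % (2 * np.pi)) / (2 * np.pi) * N_DIR) % N_DIR

model = CrowdModel(grid_shape=(10, 10))
rng = np.random.default_rng(4)
# Simulated training: one-way flow (roughly eastward) in cell (5, 5) during the 8am hour.
for _ in range(500):
    model.update((5, 5), 8, rng.normal(0.0, 0.2))

print("usual direction:   ", model.anomaly_score((5, 5), 8, 0.0))
print("opposite direction:", model.anomaly_score((5, 5), 8, np.pi))
```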