Human behavioural analysis with self-organizing map for ambient assisted living
This paper presents a system for automatically classifying the resting locations of a moving object in an indoor environment. The system uses an unsupervised neural network (a Self-Organising Feature Map, SOFM) fully implemented on a low-cost, low-power automated home-based surveillance system capable of monitoring the activity level of elders living alone. The proposed system runs on an embedded platform with a specialised ceiling-mounted video sensor for intelligent activity monitoring. The system learns resting locations, measures overall activity levels and detects specific events such as potential falls. First-order motion information, including first-order moving-average smoothing, is generated from the 2D image coordinates (trajectories). A novel edge-based object detection algorithm capable of running at a reasonable speed on the embedded platform has been developed. The classification is dynamic and achieved in real time using the SOFM together with a probabilistic model. Experimental results show a classification error of less than 20%, demonstrating the robustness of our approach compared with others in the literature, with minimal power consumption. The head location of the subject is also estimated by a novel approach capable of running on any resource-limited platform with power constraints.
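The dynamic classification step described above can be illustrated with a minimal NumPy-only Self-Organising Feature Map that clusters 2D resting locations taken from trajectories. This is a sketch under assumptions: the grid size, learning schedule and synthetic resting locations below are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(points, grid=(3, 3), epochs=50, lr0=0.5, sigma0=1.0):
    """Train a small SOFM on (N, 2) image coordinates (illustrative parameters)."""
    n_units = grid[0] * grid[1]
    # Coordinates of each unit on the map grid, used by the neighbourhood function.
    unit_xy = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    weights = rng.uniform(points.min(0), points.max(0), size=(n_units, 2))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3    # shrinking neighbourhood
        for p in rng.permutation(points):
            bmu = np.argmin(((weights - p) ** 2).sum(1))    # best-matching unit
            d2 = ((unit_xy - unit_xy[bmu]) ** 2).sum(1)     # squared grid distance
            h = np.exp(-d2 / (2 * sigma ** 2))              # neighbourhood weights
            weights += lr * h[:, None] * (p - weights)
    return weights

def classify(points, weights):
    """Assign each point to its nearest SOM unit (its resting-location class)."""
    return np.argmin(((points[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)

# Two synthetic "resting locations" in 2D image coordinates (assumed data).
pts = np.vstack([rng.normal((50, 60), 3, (40, 2)),
                 rng.normal((200, 150), 3, (40, 2))])
w = train_som(pts)
labels = classify(pts, w)
```

After training, points from the two synthetic resting areas map to different map units, which is the property the dynamic classifier exploits; the paper additionally combines the SOFM with a probabilistic model, which this sketch omits.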
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Due to this interest, an increasing demand of
methods to process these videos, possibly in real-time, is expected. Current
approaches present a particular combinations of different image features and
quantitative methods to accomplish specific objectives like object detection,
activity recognition, user machine interaction and so on. This paper summarizes
the evolution of the state of the art in First Person Vision video analysis
between 1997 and 2014, highlighting, among others, the most commonly used features,
methods, challenges and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction
Editorial Special Issue on Enhancement Algorithms, Methodologies and Technology for Spectral Sensing
This editorial introduces the special issue on enhancement algorithms, methodologies and technology for spectral sensing, and serves as a useful reference for researchers and technologists interested in the evolving state of the art and the emerging science and technology base associated with spectral-based sensing and monitoring problems. The issue is particularly relevant to those seeking new and improved solutions for detecting chemical, biological, radiological and explosive threats on land, at sea and in the air.
A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones
Fully-autonomous miniaturized robots (e.g., drones) with artificial
intelligence (AI) based visual navigation capabilities are extremely
challenging drivers of Internet-of-Things edge intelligence capabilities.
Visual navigation based on AI approaches, such as deep neural networks (DNNs),
is becoming pervasive for standard-size drones, but is considered out of
reach for nano-drones with a size of a few centimetres. In this work, we
present the first (to the best of our knowledge) demonstration of a navigation
engine for autonomous nano-drones capable of closed-loop end-to-end DNN-based
visual navigation. To achieve this goal we developed a complete methodology for
parallel execution of complex DNNs directly on board resource-constrained
milliwatt-scale nodes. Our system is based on GAP8, a novel parallel
ultra-low-power computing platform, and a 27 g commercial, open-source
CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the
software mapping techniques that enable the state-of-the-art deep convolutional
neural network presented in [1] to be fully executed on-board within a strict 6
fps real-time constraint with no compromise in terms of flight results, while
all processing is done with only 64 mW on average. Our navigation engine is
flexible and can be used to span a wide performance range: at its peak
performance corner it achieves 18 fps while still consuming on average just
3.5% of the power envelope of the deployed nano-aircraft.
Comment: 15 pages, 13 figures, 5 tables, 2 listings, accepted for publication in the IEEE Internet of Things Journal (IEEE IOTJ)
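The closed-loop behaviour the abstract describes, where the DNN's outputs directly drive the vehicle, can be illustrated with a DroNet-style control step in which the network referenced as [1] predicts a steering command and a collision probability. This is a sketch under assumptions: the maximum speed, the low-pass filter constant and the class/variable names are illustrative, not values from the paper.

```python
V_MAX = 1.0   # maximum forward speed, m/s (assumed)
ALPHA = 0.7   # low-pass filter coefficient for smooth flight (assumed)

class NavController:
    """Turns the two DNN outputs into velocity set-points each frame."""

    def __init__(self):
        self.v = 0.0      # filtered forward-velocity command
        self.yaw = 0.0    # filtered yaw/steering command

    def step(self, steer_pred, p_collision):
        """One control step.

        steer_pred:   predicted steering in [-1, 1]
        p_collision:  predicted probability of collision in [0, 1]
        """
        # Slow down as the collision probability rises; stop when certain.
        v_target = V_MAX * (1.0 - p_collision)
        # Low-pass filter both set-points so commands change smoothly.
        self.v = ALPHA * self.v + (1 - ALPHA) * v_target
        self.yaw = ALPHA * self.yaw + (1 - ALPHA) * steer_pred
        return self.v, self.yaw

ctrl = NavController()
for _ in range(20):                    # free path ahead: speed ramps up
    v, yaw = ctrl.step(0.0, 0.0)
v_free = v
v_block, _ = ctrl.step(0.0, 1.0)       # obstacle detected: command drops
```

In the real system this loop would run at the 6-18 fps rates the abstract reports, with the DNN inference executed on the GAP8 platform; the controller above only shows how the two network outputs could close the loop.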
HOG, LBP and SVM based Traffic Density Estimation at Intersection
Increased vehicular traffic on roads is a significant issue. Heavy
traffic causes congestion, unwanted delays, pollution, monetary loss, health
issues, accidents, blocked emergency-vehicle passage and traffic violations,
all of which end up reducing productivity. In peak hours the issues become
even worse. Traditional traffic management and control systems fail to tackle
this problem. Currently, the traffic lights at intersections are not adaptive
and have fixed time delays. There is a need for an optimized, intelligent
control system that would enhance the efficiency of traffic flow. Smart
traffic systems estimate the traffic density and adjust the traffic lights
according to the quantity of traffic. We propose an efficient way to estimate
the traffic density at an intersection in real time using image processing
and machine learning techniques. The proposed methodology takes pictures of
traffic at the junction to estimate the traffic density. We use a Histogram
of Oriented Gradients (HOG), Local Binary Patterns (LBP) and Support Vector
Machine (SVM) based approach for traffic density estimation. The strategy is
computationally inexpensive and can run efficiently on a Raspberry Pi board.
Code is released at https://github.com/DevashishPrasad/Smart-Traffic-Junction.
Comment: paper accepted at IEEE PuneCon 201
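The feature pipeline named in the abstract (HOG and LBP descriptors feeding an SVM) can be sketched in simplified, NumPy-only form. This is a sketch under assumptions: a real deployment would use optimised implementations (e.g. OpenCV or scikit-image) and a trained SVM classifier, and the parameters below (9 orientation bins, 8-neighbour LBP, 16-bin LBP histogram) are common defaults assumed for illustration, not values from the paper.

```python
import numpy as np

def hog_features(img, bins=9):
    """Global histogram of unsigned gradient orientations, magnitude-weighted."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]      # central differences, x
    gy[1:-1, :] = img[2:, :] - img[:-2, :]      # central differences, y
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)     # fold into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)           # L1-normalised

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern codes for interior pixels."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit   # set bit if neighbour >= centre
    return code

def frame_features(img):
    """Concatenate the HOG histogram with a coarse LBP-code histogram."""
    lbp_hist, _ = np.histogram(lbp_codes(img), bins=16, range=(0, 256))
    return np.concatenate([hog_features(img), lbp_hist / (lbp_hist.sum() + 1e-9)])

# A toy frame with one strong vertical edge; its feature vector is what a
# trained linear SVM would separate from, e.g., an empty-road frame.
frame = np.zeros((32, 32))
frame[:, 16:] = 1.0
feat = frame_features(frame)
```

The concatenated vector would be computed per image region, and density classes (e.g. low/medium/high) predicted by an SVM trained on labelled junction images; this keeps the per-frame cost low enough for a Raspberry Pi-class board.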