19,751 research outputs found
Real-time 2D–3D door detection and state classification on a low-power device
In this paper, we propose three methods for door state classification with the goal of improving robot navigation in indoor
spaces. These methods were also developed to be usable in other areas and applications, since they are not limited to door
detection as other related works are. Our methods work offline, on low-powered computers such as the Jetson Nano, in real-time,
with the ability to differentiate between open, closed and semi-open doors. We use the 3D object classification network PointNet;
real-time semantic segmentation algorithms such as FastFCN, FC-HarDNet, SegNet and BiSeNet; the object detection
algorithm DetectNet; and the 2D object classification networks AlexNet and GoogLeNet. We built a 3D and RGB door dataset
with images from several indoor environments using a RealSense D435 3D camera. This dataset is freely available online.
All methods are analysed taking into account their accuracy and speed on a low-powered computer.
We conclude that it is possible to have a door classification algorithm running in real-time on a low-power device.
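The three methods pair 2D segmentation or detection with 3D classification. As a loose, hypothetical illustration of why depth data helps for this task (this is not PointNet or any of the paper's networks, and every threshold below is invented for the sketch), a simple heuristic can already separate door states by how many depth readings inside the segmented door region lie beyond the frame plane:

```python
def classify_door_state(depths_in_mask, frame_depth,
                        open_frac=0.6, semi_frac=0.2, tol=0.10):
    """Toy heuristic: fraction of door-region depth readings (metres) that
    lie beyond the door-frame plane.  A mostly-deeper region suggests the
    doorway is open; all thresholds are illustrative, not the paper's."""
    if not depths_in_mask:
        return "unknown"
    beyond = sum(d > frame_depth + tol for d in depths_in_mask) / len(depths_in_mask)
    if beyond >= open_frac:
        return "open"
    if beyond >= semi_frac:
        return "semi-open"
    return "closed"

# Depth readings sampled inside the segmentation mask of a door at ~2 m.
open_door   = [3.5, 3.4, 3.6, 3.5]   # sensor sees the room behind the frame
closed_door = [2.0, 2.0, 2.1, 2.0]   # sensor sees the door leaf itself
print(classify_door_state(open_door, frame_depth=2.0))    # open
print(classify_door_state(closed_door, frame_depth=2.0))  # closed
```

A learned 3D classifier like PointNet replaces this hand-set rule with features learned from the point cloud, which is what makes the paper's methods robust to varied doors and viewpoints.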
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Due to this interest, an increasing demand for
methods to process these videos, possibly in real-time, is expected. Current
approaches present particular combinations of different image features and
quantitative methods to accomplish specific objectives like object detection,
activity recognition, user-machine interaction and so on. This paper summarizes
the evolution of the state of the art in First Person Vision video analysis
between 1997 and 2014, highlighting, among others, the most commonly used features,
methods, challenges and opportunities within the field.
A Home Security System Based on Smartphone Sensors
Several new smartphones are released every year. Many people upgrade to new phones, and their old phones are not put to any further use. In this paper, we explore the feasibility of using such retired smartphones and their on-board sensors to build a home security system. We observe that door-related events such as opening and closing have unique vibration signatures when compared to many types of environmental vibrational noise. These events can be captured by the accelerometer of a smartphone when the phone is mounted on a wall near a door. The rotation of a door can also be captured by the magnetometer of a smartphone when the phone is mounted on a door. We design machine learning and threshold-based methods to detect door opening events based on accelerometer and magnetometer data and build a prototype home security system that can detect door openings and notify the homeowner via email, SMS and phone calls upon break-in detection. To further augment our security system, we explore using the smartphone's built-in microphone to detect door and window openings across multiple doors and windows simultaneously. Experiments in a residential home show that the accelerometer-based detection can detect door open events with an accuracy higher than 98%, and magnetometer-based detection has 100% accuracy. By using the magnetometer method to automate the training phase of a neural network, we find that sound-based detection of door openings has an accuracy of 90% across multiple doors.
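The threshold-based branch of the method above can be sketched as a spike detector on the acceleration magnitude. This is a simplified illustration, not the paper's implementation; the window, threshold and refractory values are invented:

```python
import math

def detect_door_events(samples, threshold=1.5, refractory=10):
    """Flag sample indices where the acceleration magnitude spikes above a
    threshold, merging spikes closer than `refractory` samples into one event.

    samples    -- list of (ax, ay, az) accelerometer readings, in g
    threshold  -- magnitude (in g) above which we call a vibration event
    refractory -- minimum number of samples between two distinct events
    """
    events, last = [], -refractory
    for i, (ax, ay, az) in enumerate(samples):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and i - last >= refractory:
            events.append(i)
            last = i
    return events

# Quiet signal (gravity only) with one door-slam-like spike at index 5.
quiet = [(0.0, 0.0, 1.0)] * 5 + [(1.2, 0.8, 1.5)] + [(0.0, 0.0, 1.0)] * 5
print(detect_door_events(quiet))  # [5]
```

In practice the threshold would be calibrated per installation, since door mass and wall stiffness change the vibration signature; the paper's machine-learning branch avoids that manual step.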
Deep learning model for doors detection: a contribution for context-awareness recognition of patients with Parkinson's disease
Freezing of gait (FoG) is one of the most disabling motor symptoms in Parkinson's disease (PD). It is described as a symptom where walking is interrupted by a brief, episodic absence, or marked reduction, of forward progression despite the intention to continue walking. Although FoG causes are multifaceted, episodes often occur in response to environmental triggers, such as turning and passing through narrow spaces like a doorway. This symptom appears to be overcome using external sensory cues. The recognition of such environments has consequently become a pertinent issue for the PD-affected community. This study aimed to implement a real-time deep-learning-based door detection model to be integrated into a wearable biofeedback device for delivering on-demand proprioceptive cues. Transfer-learning concepts were used to train a MobileNet-SSD in the TensorFlow environment. The model was then integrated into a Raspberry Pi, after being converted to a faster and lighter model using TensorFlow Lite settings. The model showed a considerable precision of 97.2%, a recall of 78.9% and a good F1-score of 0.869. In real-time testing with the wearable device, the deep-learning model proved temporally efficient (~2.87 FPS) at detecting doors accurately in real-life scenarios. Future work will include the integration of sensory cues with the developed model in the wearable biofeedback device, aiming to validate the final solution with end-users.
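For reference, the reported F1-score is the harmonic mean of precision and recall, F1 = 2PR/(P+R). Recomputing it from the rounded precision and recall figures lands very close to the stated value:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.972, 0.789), 3))  # ~0.871, vs. the reported 0.869;
                                         # the small gap comes from rounding
                                         # in the published P and R values.
```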
RFID Localisation For Internet Of Things Smart Homes: A Survey
The Internet of Things (IoT) enables numerous business opportunities in
fields as diverse as e-health, smart cities, smart homes, among many others.
The IoT incorporates multiple long-range, short-range, and personal area
wireless networks and technologies into the designs of IoT applications.
Localisation in indoor positioning systems plays an important role in the IoT.
Location-based IoT applications include tracking objects and people in
real-time, asset management, agriculture, assisted monitoring technologies for
healthcare, and smart homes, to name a few. Radio-frequency-based systems for
indoor positioning such as Radio Frequency Identification (RFID) are a key
enabling technology for the IoT due to their cost-effectiveness, high readability
rates, automatic identification and, importantly, their energy efficiency.
This paper reviews the state-of-the-art RFID technologies in
IoT Smart Homes applications. It presents several comparable studies of
RFID-based projects in smart homes and discusses the applications, techniques,
algorithms, and challenges of adopting RFID technologies in IoT smart home
systems.
Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations
Automated methods of real-time, unobtrusive monitoring of human ambulation, activity, and wellness, together with data analysis using various algorithmic techniques, have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These research studies have resulted in a large amount of literature. This paper presents a holistic articulation of the research studies and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device framework and sensor types; data collection, processing and analysis; and applications, limitations and challenges. The aim is to present a systematic and comprehensive survey of the literature in the area in order to identify research gaps and prioritize future research directions.
Wireless sensors and IoT platform for intelligent HVAC control
Energy consumption of buildings (residential and non-residential) represents approximately 40% of total world electricity consumption, with half of this energy consumed by HVAC systems. Model-Based Predictive Control (MBPC) is perhaps the technique most often proposed for HVAC control, since it offers an enormous potential for energy savings. Despite the large number of papers on this topic during the last few years, there are only a few reported applications of MBPC for existing buildings under normal occupancy conditions and, to the best of our knowledge, no commercial solution yet. A marketable solution has recently been presented by the authors, coined the IMBPC HVAC system. This paper describes the design, prototyping and validation of two components of this integrated system: the self-powered wireless sensors and the IoT platform developed. Results for the use of IMBPC in a real building under normal occupation demonstrate savings in the electricity bill while maintaining thermal comfort during the whole occupation schedule.
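The core MBPC idea can be sketched in a few lines: simulate candidate control sequences over a short horizon against a building model, score each by energy use plus comfort violations, and apply only the first action of the cheapest sequence. This is a deliberately crude illustration with an invented first-order thermal model, not the IMBPC implementation:

```python
from itertools import product

def mpc_step(temp, horizon=4, t_out=10.0, comfort=(20.0, 24.0),
             k_loss=0.1, k_heat=1.0, penalty=100.0):
    """Brute-force receding-horizon control for an on/off heater.

    Enumerate all on/off sequences over `horizon` steps, roll each through a
    first-order thermal model, and return the first action of the sequence
    with the lowest cost (energy used plus comfort-violation penalties).
    All model parameters here are invented for the sketch."""
    best_cost, best_first = float("inf"), 0
    for seq in product([0, 1], repeat=horizon):
        t, cost = temp, 0.0
        for u in seq:
            t += k_heat * u - k_loss * (t - t_out)  # simple thermal model
            cost += u                                # unit energy cost per step
            if not comfort[0] <= t <= comfort[1]:
                cost += penalty                      # discomfort penalty
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

print(mpc_step(20.2))  # 1: heater turns on to avoid falling below 20 °C
print(mpc_step(23.9))  # 0: warm enough to coast for now
```

Real MBPC replaces the brute-force search with an optimiser and the toy model with an identified building model, but the receding-horizon structure is the same, which is where the energy savings come from: the controller pre-heats or coasts based on predicted, not just current, conditions.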
Artificial Vision for Humans
According to the World Health Organization and the International Agency for the
Prevention of Blindness, 253 million people are blind or vision-impaired (2015). One
hundred seventeen million have moderate or severe distance vision impairment, and 36
million are blind. Over the years, portable navigation systems have been developed to help
visually impaired people navigate. The first primary mobile navigation system was the
white-cane. It is still the most common mobile system used by visually impaired people,
since it is cheap and reliable. The disadvantage is that it only provides obstacle information
at feet level, and it is not hands-free. Initially, the portable systems being developed
focused on obstacle avoidance, but these days they are not limited to that. With the advances
of computer vision and artificial intelligence, these systems are no longer restricted to
obstacle avoidance and are capable of describing the world, recognising text and even
recognising faces. The most notable portable navigation systems of this type nowadays are
the Brain Port Pro Vision and the Orcam MyEye system, both of which are hands-free.
These systems can improve visually impaired people's quality of life, but they are
not accessible to everyone. About 89% of vision-impaired people live in low- and
middle-income countries, and most of the 11% who do not live in these countries still lack
access to a portable navigation system like the previous ones.
The goal of this project was to develop a portable navigation system that uses computer
vision and image processing algorithms to help visually impaired people navigate. This
compact system has two modes, one for solving specific problems of visually impaired
people and the other for generic obstacle avoidance. It was also a goal of this project to
continuously improve the system based on the feedback of real users, but due to the
SARS-CoV-2 pandemic I could not achieve this objective. The specific
problem most studied in this work was the Door Problem. This is, according to
visually impaired and blind people, a typical problem that usually occurs in indoor
environments shared with other people. Another problem of visually impaired people that
was also studied was the Stairs Problem, but due to its rarity I focused more on the
previous one. By doing an extensive overview of the methods that the newest portable
navigation systems were using, I found that they use computer vision and image processing
algorithms to provide descriptive information about the world. I also reviewed the work of
Ricardo Domingos, an undergraduate student at UBI, on solving the Door Problem on a
desktop computer, which served as a baseline for this work.
I built two portable navigation systems to help visually impaired people navigate. One
is based on the Raspberry Pi 3 B+ and the other uses the Nvidia Jetson Nano. The
first system was used for collecting data, and the other is the final prototype system that
I propose in this work. This system is hands-free, does not overheat, is light, and can be
carried in a simple backpack or suitcase. The prototype has two modes: one works as a
car parking-sensor system and is used for obstacle avoidance; the other is used to solve
the Door Problem by providing information about the state of the door (open,
semi-open or closed). So, in this document, I propose three different methods to
solve the Door Problem, all of which use computer vision algorithms and work on the
prototype system. The first one is based on 2D semantic segmentation and 3D object
classification; it can detect the door and classify it, and works at 3 FPS. The second
method is a smaller version of the previous one. It is based on 3D object classification
alone, but works at 5 to 6 FPS. The third method is based on 2D semantic segmentation,
object detection and 2D image classification. It can detect the door and classify it. This
method works at 1 to 2 FPS, but it is the best in terms of door classification accuracy.
I also propose a Door dataset and a Stairs dataset that contain 3D and 2D information.
These datasets were used to train the computer vision algorithms used in the proposed
methods to solve the Door Problem. They are freely available online for scientific
purposes, along with the information of the train, validation, and test sets. All methods
work on the final prototype portable system in real-time. The developed system is a
cheaper alternative for visually impaired people who cannot afford the most recent
portable navigation systems. The contributions of this work are the two mobile navigation
systems developed, the three methods produced for solving the Door Problem, and the
dataset built for training the computer vision algorithms. This work can also be scaled to
other areas. The methods developed for door detection and classification can be used by
a portable robot that works in indoor environments. The dataset can be used to compare
results and to train other
neural network models for different tasks and systems.
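The prototype's obstacle-avoidance mode described above behaves like a car parking sensor: the closer the nearest obstacle, the more urgent the alert. As a hedged sketch of that mapping (the distance bands and alert levels are invented, not the thesis's calibrated values):

```python
def alert_level(min_depth_m):
    """Map the nearest-obstacle distance (metres) to an alert level, in the
    spirit of a car parking sensor: closer obstacle, faster beeps.
    Thresholds are illustrative, not the thesis's calibrated values."""
    if min_depth_m < 0.5:
        return "continuous"   # imminent collision, constant tone
    if min_depth_m < 1.0:
        return "fast"         # rapid beeping
    if min_depth_m < 2.0:
        return "slow"         # occasional beeping
    return "silent"           # nothing within warning range

for d in (0.3, 0.8, 1.5, 3.0):
    print(d, alert_level(d))
```

On the device, `min_depth_m` would come from the minimum of the depth camera's readings over the walking corridor in front of the user, refreshed every frame.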