
    Multispectral Image Analysis Using Random Forest

    Classical methods for classifying pixels in multispectral images include supervised classifiers such as the maximum-likelihood classifier, neural network classifiers, fuzzy neural networks, support vector machines, and decision trees. Recently, interest has grown in ensemble learning, a method that generates many classifiers and aggregates their results. Breiman proposed Random Forest in 2001 for classification and clustering. Random Forest grows many decision trees for classification. To classify a new object, the input vector is run through each decision tree in the forest; each tree gives a classification, and the forest chooses the classification with the most votes. Random Forest provides a robust algorithm for classifying large datasets, but its potential has not yet been explored for analyzing multispectral satellite images. To evaluate the performance of Random Forest, we classified multispectral images using various classifiers, including the maximum-likelihood classifier, a neural network, a support vector machine (SVM), and Random Forest, and compared their results.
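    The voting step the abstract describes can be sketched in a few lines of Python. The stump classifiers and band thresholds below are invented for illustration, not taken from the paper; in practice a library such as scikit-learn would grow the trees from training data.

```python
from collections import Counter

def forest_predict(trees, pixel):
    """Classify one pixel vector by majority vote over the trees."""
    votes = [tree(pixel) for tree in trees]
    return Counter(votes).most_common(1)[0][0]

# Illustrative stumps over spectral bands [R, G, B, NIR]; thresholds invented.
trees = [
    lambda px: "water" if px[3] < 50 else "vegetation",  # low NIR suggests water
    lambda px: "water" if px[0] > 80 else "vegetation",
    lambda px: "vegetation" if px[3] > 40 else "water",
]

print(forest_predict(trees, [90, 60, 55, 20]))  # all three stumps vote "water"
```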

    A smart home environment to support safety and risk monitoring for the elderly living independently

    The elderly prefer to live independently despite vulnerability to age-related challenges. Constant monitoring is required in cases where the elderly are living alone. The home can be a dangerous environment for the elderly living independently because adverse events can occur at any time. The potential risks for the elderly living independently can be categorised as injury in the home, home environmental risks, and inactivity due to unconsciousness. The main research objective was to develop a Smart Home Environment (SHE) that can support risk and safety monitoring for the elderly living independently. An unobtrusive, low-cost SHE solution that uses a Raspberry Pi 3 Model B, a Microsoft Kinect sensor, and an Aeotec 4-in-1 Multisensor was implemented. The Aeotec Multisensor was used to measure temperature, motion, lighting, and humidity in the home. Data from the multisensor were collected using OpenHAB as the smart-home operating system. The information was processed on the Raspberry Pi 3, and push notifications were sent when risk situations were detected. An experimental evaluation was conducted to determine the accuracy with which the prototype SHE detected abnormal events; each evaluation script was run five times. The results show that the prototype has an average accuracy, sensitivity, and specificity of 94%, 96.92%, and 88.93%, respectively. The sensitivity implies that the chance of the prototype missing a risk situation is 3.08%, and the specificity implies that the chance of incorrectly classifying a non-risk situation is 11.07%. The prototype does not require any interaction on the part of the elderly, and relatives and caregivers can remotely monitor the elderly person living independently via a mobile application or a web portal. The total cost of the equipment used was below R3000.
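    The risk categories above suggest a simple rule layer over the sensor stream. A minimal sketch follows, assuming invented thresholds, labels, and function names; the actual prototype implements its rules through OpenHAB rather than raw Python.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    temperature: float  # degrees Celsius
    motion: bool        # motion detected during the sampling interval
    humidity: float     # percent relative humidity

def classify_risk(readings, max_temp=40.0, inactivity_limit=6):
    """Return a risk label: abnormal temperature is an environmental risk;
    no motion for `inactivity_limit` consecutive readings suggests
    inactivity due to unconsciousness."""
    idle = 0
    for r in readings:
        if r.temperature > max_temp:
            return "environmental-risk"
        idle = 0 if r.motion else idle + 1
        if idle >= inactivity_limit:
            return "inactivity-risk"
    return "no-risk"

print(classify_risk([Reading(21.0, False, 45.0)] * 6))  # prints "inactivity-risk"
```

    In the prototype, a positive classification would trigger the push notification to relatives or caregivers.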

    MRsensing: environmental monitoring and context recognition with cooperative mobile robots in catastrophic incidents

    Master's dissertation in Electrical and Computer Engineering, presented to the Faculty of Sciences and Technology of the University of Coimbra. Multi-sensor information fusion concerns the environmental-perception activities that combine data from multiple sensory resources. Humans, like other animals, gather information from the environment around them using different biological sensors; combining them allows structuring decisions and actions when interacting with the environment. Under disaster conditions, effective multi-robot sensor fusion can yield better situation awareness to support collective decision-making. Mobile robots can gather information from the environment by combining data from different sensors as a way to organize decisions and augment human perception. This is especially useful for retrieving contextual environmental information in catastrophic incidents where human perception may be limited (e.g., lack of visibility). To that end, this work proposes a specific configuration of sensors assembled on a mobile robot, which can be used as a proof of concept to measure important environmental variables in an urban search and rescue (USAR) mission, such as toxic-gas density, temperature gradient, and smoke-particle density. These data are processed through a support vector machine classifier to detect relevant contexts in the course of the mission. The outcome of the experiments conducted with TraxBot and Pioneer 3-DX robots under the Robot Operating System framework opens the door to new multi-robot applications in USAR scenarios. This work was developed within the CHOPIN research project, which aims at exploiting the cooperation between human and robotic teams in catastrophic accidents.
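    At inference time, a trained linear SVM reduces to the sign of w·x + b over the fused feature vector. The sketch below illustrates that decision rule; the weights, bias, and context labels are invented placeholders, since the thesis does not publish its trained parameters.

```python
# Inference rule of a linear SVM: sign(w . x + b). The weights, bias, and
# context labels below are invented, not values from the thesis.

def svm_decision(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "fire-context" if score > 0 else "clear-context"

# Feature vector: [toxic-gas density, temperature gradient, smoke density]
w, b = [0.8, 0.5, 1.2], -1.0
print(svm_decision(w, b, [0.9, 0.7, 0.8]))  # prints "fire-context"
```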

    A neural-network model for recognizing bomb craters from satellite data

    The master's thesis contains 109 pages, 20 illustrations, and 181 references. The task of recognizing craters left by bombings is becoming more and more acute. After the full-scale military aggression of the Russian Federation, many foundations are trying to assess the damage caused to infrastructure, civilian buildings, and so on. A neural-network model for recognizing bomb craters from satellite data enables a comprehensive assessment of the scale of destruction, which can later be used to estimate damages. To achieve this goal, the following were used: the U-Net neural-network model; Google Colaboratory; and the libraries pytorch, torchvision, matplotlib, Pillow, imutils, scikit-learn, tqdm, gdal, and numpy.
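    The abstract does not describe how segmentation quality was scored. As one common choice for models like U-Net, per-pixel intersection-over-union between the predicted and reference crater masks can be computed as follows; this is a generic sketch, not the thesis's own metric.

```python
def iou(pred, truth):
    """Intersection-over-union of two binary crater masks,
    given as flattened lists of 0/1 pixel values."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

pred  = [0, 1, 1, 1, 0, 0]  # pixels the model marks as crater
truth = [0, 1, 1, 0, 0, 0]  # reference annotation
print(iou(pred, truth))  # 2 shared pixels over 3 in the union
```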


    Wireless Sensor Networks And Data Fusion For Structural Health Monitoring Of Aircraft

    This thesis discusses the architecture and design of a sensor web for structural health monitoring of an aircraft, along with several prototypes of critical parts of the sensor web. The proposed sensor web will utilize sensor nodes situated throughout the structure. These nodes and one or more workstations will support agents that communicate and collaborate to monitor the health of the structure. An agent can be any internal or external autonomous entity that has direct access to affect a given system; for the purposes of this document, an agent is defined as an autonomous software resource that can make decisions for itself based on given tasks and abilities while also collaborating with others to find a feasible answer to a given problem in the structural health monitoring system. Once the agents have received relevant data from nodes, they will use applications that perform data fusion to classify events and further improve the system's accuracy on future classifications. Agents will also pass alerts up a self-configuring hierarchy of monitor agents and make them available for review by personnel. This thesis makes use of previous results from applying the Gaia methodology to the analysis and design of the multiagent system.
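    The alert-escalation behaviour described above can be sketched as a tiny agent hierarchy. The class and field names below are invented for illustration and do not come from the thesis's Gaia design.

```python
class MonitorAgent:
    """Sketch of a monitor agent that records alerts locally and
    passes them up a hierarchy of parent agents."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.log = name, parent, []

    def alert(self, message):
        self.log.append(message)              # available for review by personnel
        if self.parent is not None:           # escalate up the hierarchy
            self.parent.alert(f"{self.name}: {message}")

root = MonitorAgent("workstation")
node = MonitorAgent("wing-node-7", parent=root)
node.alert("strain threshold exceeded")
print(root.log)  # the escalated alert reaches the top-level monitor
```

    In the proposed sensor web, the hierarchy is self-configuring rather than fixed at construction time as in this sketch.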
