Coordinating a team of agents in the forest firefighting domain
Confidential document; not available for consultation. Master's thesis, Artificial Intelligence and Intelligent Systems, 2006. Faculdade de Engenharia, Universidade do Porto
Coordination methodologies applied to RoboCup: a graphical definition of setplays
Integrated master's thesis, Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 200
Automated tree-level forest quantification using airborne LiDAR
Traditional forest management relies on small field samples and interpretation of aerial photography, which are not only costly to execute but also yield inaccurate estimates of the forest in question. Airborne light detection and ranging (LiDAR) is a remote sensing technology that records point clouds representing the 3D structure of a forest canopy and the terrain underneath. We present a method for segmenting individual trees from LiDAR point clouds without making prior assumptions about tree crown shapes and sizes. We then present a method that vertically stratifies the point cloud into an overstory and multiple understory tree canopy layers. Using the stratification method, we modeled the occlusion caused by higher canopy layers in terms of point density. We also present a distributed computing approach that enables processing the massive data of an arbitrarily large forest. Lastly, we investigated using deep learning for coniferous/deciduous classification of point cloud segments representing individual tree crowns. We applied the developed methods to the University of Kentucky Robinson Forest, a natural, predominantly deciduous, closed-canopy forest. 90% of overstory and 47% of understory trees were detected, with false positive rates of 14% and 2%, respectively. Vertical stratification improved the detection rate of understory trees to 67%, at the cost of increasing their false positive rate to 12%. According to our occlusion model, a point density of about 170 pt/m² is needed to segment understory trees located in the third layer as accurately as overstory trees. Using our distributed processing method, we segmented about two million trees within a 7400-ha forest in 2.5 hours on 192 processing cores, a speedup of ~170. Our deep learning experiments showed high classification accuracies (~82% coniferous and ~90% deciduous) without the need to manually engineer the features.
In conclusion, the methods developed are steps toward remote, accurate quantification of large natural forests at the individual-tree level.
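The overstory/understory stratification the abstract describes can be sketched in miniature. This is a simplified illustration using fixed-height bands measured down from the canopy top; the thesis's actual stratification method is more sophisticated, and all names, coordinates, and the 10 m band width here are illustrative assumptions:

```python
# Minimal sketch of vertical point-cloud stratification.
# Band width and sample points are illustrative, not from the thesis.

def stratify(points, layer_height=10.0):
    """Assign each point (x, y, z_above_ground) to a canopy layer index.

    Layer 0 is the highest band (overstory); larger indices are deeper
    understory layers, mirroring the overstory/understory split above.
    """
    top = max(z for _, _, z in points)
    layers = {}
    for x, y, z in points:
        idx = int((top - z) // layer_height)  # 0 = overstory band
        layers.setdefault(idx, []).append((x, y, z))
    return layers

# Four sample returns: two near the canopy top, two progressively lower.
pts = [(0, 0, 28.0), (1, 0, 25.5), (0, 1, 12.0), (1, 1, 3.0)]
banded = stratify(pts)
# banded[0] holds the two overstory points; banded[1] and banded[2]
# each hold one understory point
```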
Multi-strategy learning made easy
This paper presents the AFRANCI tool for the development of multi-strategy learning systems. Designing a multi-strategy system with AFRANCI is a two-step process: the user interactively designs the structure of the target system as a collection of interconnected modules, and then chooses a learning strategy for each module. After the datasets are provided, all modules are trained automatically; the tool uses the interdependencies among modules to determine the training sequence. AFRANCI includes several built-in machine learning algorithms and has interfaces that enable it to use external learning tools such as WEKA and CN2. For each module, the system uses a wrapper to automatically tune the parameters of the chosen learning algorithm. In the final step of the design sequence, AFRANCI generates compact, legible, ready-to-use open-source ANSI C code for the final system.
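The dependency-aware training order described above amounts to a topological sort of the module graph: a module is trained only after the modules whose outputs feed it. A minimal sketch, in which the module names and graph are invented for illustration and not taken from AFRANCI itself:

```python
# Hypothetical module-dependency graph; training must respect it.
from graphlib import TopologicalSorter

# module -> set of modules whose outputs it consumes
deps = {
    "vision":   set(),
    "sonar":    set(),
    "fusion":   {"vision", "sonar"},
    "decision": {"fusion"},
}

# static_order() yields a valid training sequence: every dependency
# appears before the module that depends on it.
order = list(TopologicalSorter(deps).static_order())
```

`graphlib.TopologicalSorter` also raises `CycleError` on circular dependencies, which a tool like this would need to report to the user at design time.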
Multi-robot Task Allocation using Agglomerative Clustering
The main objective of this thesis is to solve the problem of balancing tasks in the multi-robot task allocation problem domain. When allocating a large number of tasks to a multi-robot system, it is important to balance the load effectively across the robots in the system. This thesis proposes an algorithm in which tasks are allocated through clustering, investigating the effectiveness of agglomerative hierarchical clustering as compared to K-means clustering. Once the tasks are clustered, each agent claims a cluster through greedy self-assignment. The thesis investigates performance both when all tasks are known ahead of time and when new tasks are injected into the system periodically; to account for new tasks, both global re-clustering and greedy clustering methods are considered. Three metrics are used to compare the performance of the overall system in environments with and without obstacles: 1) total travel cost, 2) maximum distance traveled per robot, and 3) a balancing cost index. The experimental results show that agglomerative hierarchical clustering is deterministic and better at minimizing the total travel cost, especially for large numbers of agents, whereas K-means works better at balancing costs. In addition, the greedy approach for clustering new tasks works better for frequently appearing tasks than for infrequent ones.
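The cluster-then-claim pipeline above can be sketched end to end: merge task clusters bottom-up until one cluster per robot remains, then let each robot greedily claim the nearest unclaimed cluster. The single-linkage merge criterion and the centroid-distance claiming rule are illustrative assumptions, not necessarily the choices made in the thesis:

```python
# Sketch of multi-robot task allocation via agglomerative clustering.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def agglomerate(tasks, n_clusters):
    """Repeatedly merge the closest pair of clusters (single linkage)
    until only n_clusters remain."""
    clusters = [[t] for t in tasks]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

def greedy_assign(robots, clusters):
    """Each robot claims the nearest unclaimed cluster (by centroid)."""
    def centroid(c):
        return (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
    assignment, free = {}, list(range(len(clusters)))
    for name, pos in robots.items():
        k = min(free, key=lambda i: dist(pos, centroid(clusters[i])))
        free.remove(k)
        assignment[name] = clusters[k]
    return assignment

# Two spatial groups of tasks, one robot near each group.
tasks = [(0, 0), (0, 1), (10, 10), (11, 10)]
clusters = agglomerate(tasks, n_clusters=2)
alloc = greedy_assign({"r1": (0, 0), "r2": (12, 12)}, clusters)
```

The O(n³) pairwise merge loop is fine for a sketch; a real implementation would maintain a distance heap, and the choice of linkage directly affects the travel-cost versus balance trade-off the thesis measures.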
Practical and conceptual issues in the use of agent-based modelling for disaster management
Application of agent-based modelling (ABM) technology to disaster management has to date been limited. Existing research has concentrated on extending the model structures and agent architectures of complex algorithms to test the robustness and extensibility of this simulation approach. Less attention has been paid to testing the current state of the art in ABM for modelling real-life systems.
This thesis aims to take first steps toward remedying this gap. It focuses on identifying the practical and conceptual issues that preclude wider use of ABM in disaster management, and it finds that insufficient attention is paid to incorporating real-life information and domain knowledge into model definitions. This research first proposes a methodology by which some of these issues may be overcome, and then tests and evaluates it through the implementation of InSiM (Incident Simulation Model), which depicts the reaction of pedestrians to a CBRN (chemical, biological, radiological or nuclear) explosion in a city centre.
A number of steps are taken to obtain real-life information about human response to CBRN incidents. This information is then used for the design and parameterisation of InSiM, which is implemented in three configurations. To identify the effects that the use of real-life data has on the simulation results, each configuration incorporates the information at a different level of complexity. The effects are assessed by comparing the generated dispersion patterns of agents across the city centre. However, the use of conventional statistical goodness-of-fit tests for assessing the degree of difference was challenged by the inhomogeneous nature of the data, so alternative approaches are also adopted to allow the results to be assessed qualitatively. Nevertheless, the evaluation reveals significant differences at both the global and local levels.
This research highlights that incorporating real-life information and domain knowledge into ABM is not without problems: each time one problem was addressed, additional issues emerged. Most of these challenges relate to the generalisation of the complex real-life systems that the model represents. Therefore, further investigation is needed at every methodological step before ABM can fully realise its potential to support disaster management.
- …