
    Automated Archaeological Feature Detection Using Deep Learning on Optical UAV Imagery: Preliminary Results

    This communication article provides a call for unmanned aerial vehicle (UAV) users in archaeology to make imagery data more publicly available, while also presenting a new application that facilitates the use of a common deep learning algorithm (the mask region-based convolutional neural network, Mask R-CNN) for instance segmentation. The intent is to provide specialists with a GUI-based tool that can apply the annotations used to train neural network models, support the training and development of segmentation models, and classify imagery data to enable auto-discovery of features. The tool is generic and can be used in a variety of settings, although it was tested using datasets from the United Arab Emirates (UAE), Oman, Iran, Iraq, and Jordan. Current outputs suggest that trained models can help identify ruined structures, that is, burials, exposed building ruins, and other surface features in some degraded state. Qanats, ancient underground water channels with surface access holes, and mounded sites, which have distinctive hill-shaped features, are also identified. Other classes are possible, and the tool lets users define their own training-based approach and feature identification classes. To improve accuracy, we strongly urge greater publication of UAV imagery data by projects through open journal publications and public repositories. This is already done in other fields using UAV data and is now needed in heritage and archaeology. Our tool is provided as part of the outputs given.
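    The annotate-train-classify workflow this abstract describes typically rests on COCO-style instance annotations, the de facto input format for Mask R-CNN training pipelines. Below is a minimal sketch of such an annotation record; the class names (drawn from the feature types mentioned above) and all identifiers are illustrative assumptions, not the tool's actual schema:

    ```python
    import json

    # Hypothetical category list modelled on the feature classes in the abstract.
    CATEGORIES = [
        {"id": 1, "name": "ruined_structure"},
        {"id": 2, "name": "qanat"},
        {"id": 3, "name": "mounded_site"},
    ]

    def polygon_bbox(polygon):
        """Derive a [x, y, width, height] bounding box from a flat
        [x1, y1, x2, y2, ...] polygon, as the COCO format expects."""
        xs, ys = polygon[0::2], polygon[1::2]
        x0, y0 = min(xs), min(ys)
        return [x0, y0, max(xs) - x0, max(ys) - y0]

    def make_annotation(ann_id, image_id, category_id, polygon):
        """Build one COCO-style instance-segmentation annotation record."""
        return {
            "id": ann_id,
            "image_id": image_id,
            "category_id": category_id,
            "segmentation": [polygon],   # one polygon per instance part
            "bbox": polygon_bbox(polygon),
            "iscrowd": 0,
        }

    if __name__ == "__main__":
        # A qanat access hole outlined as a small quadrilateral (pixel coords).
        ann = make_annotation(
            1, 101, 2,
            [120.0, 80.0, 140.0, 80.0, 140.0, 100.0, 120.0, 100.0],
        )
        print(json.dumps(ann["bbox"]))  # [120.0, 80.0, 20.0, 20.0]
    ```

    Records in this shape can be aggregated into the single JSON file that common Mask R-CNN implementations consume for training.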

    Identifying Simple Shapes to Classify the Big Picture

    In recent years, Deep Artificial Neural Networks (DNNs) have demonstrated their ability to solve visual classification problems. An impediment, however, is their lack of transparency: it is difficult to interpret why an object is classified in a particular way, and difficult to validate whether a learned model truly represents the problem space. Learning Classifier Systems (LCSs) are an Evolutionary Computation technique capable of producing human-readable rules that explain why an instance has been classified, i.e. the system is fully transparent. However, because they encode complex relationships between features, they are not well suited to domains with a large number of input features, e.g. classification of pixel images. The aim of this work is therefore to develop a novel DNN-LCS system in which the former extracts features from pixels and the latter classifies objects from these features with clear decision boundaries. Results show that the system can explain its classification decisions on curated image data, e.g. that plates have elliptical or rectangular shapes. This work represents a promising step towards explainable artificial intelligence in computer vision.
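    The division of labour the abstract describes can be illustrated with a toy version of its second stage: LCS-style rules, in the classic ternary alphabet where '#' is a wildcard ("don't care") position, classifying discrete shape features that a DNN front end would extract from pixels. The rules, feature encoding, and class names here are illustrative assumptions, not the paper's actual system:

    ```python
    # Features are a bit string over [is_elliptical, is_rectangular, has_rim].
    RULES = [
        # (condition, class) — conditions are human-readable, so every
        # classification decision can be traced back to the rule that fired.
        ("1##", "plate"),   # elliptical objects are plates
        ("01#", "plate"),   # rectangular objects are plates too
        ("00#", "other"),
    ]

    def matches(condition, features):
        """A rule fires when every non-wildcard position equals the feature bit."""
        return all(c == "#" or c == f for c, f in zip(condition, features))

    def classify(features):
        """Return the class of the first matching rule."""
        for condition, label in RULES:
            if matches(condition, features):
                return label
        return "unknown"

    print(classify("100"))  # plate (elliptical)
    print(classify("010"))  # plate (rectangular)
    print(classify("001"))  # other
    ```

    In a full LCS the rule population would be evolved from data rather than hand-written, but the transparency property is the same: each decision names the condition that produced it.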