
    Design Knowledge for Deep-Learning-Enabled Image-Based Decision Support Systems

    With the ever-increasing societal dependence on electricity, one of the critical tasks in power supply is maintaining the power line infrastructure. To make informed, cost-effective, and timely decisions, maintenance engineers must rely on human-created, heterogeneous, structured, and largely unstructured information. Maturing research on vision-based power line inspection, driven by advancements in deep learning, offers initial possibilities for moving towards more holistic, automated, and safe decision-making. However, current research focuses solely on the extraction of information rather than on its use in decision-making processes. The paper addresses this shortcoming by designing, instantiating, and evaluating a holistic deep-learning-enabled image-based decision support system artifact for power line maintenance at a distribution system operator in southern Germany. Following the design science research paradigm, two main components of the artifact are designed: a deep-learning-based model component responsible for automatic fault detection on power line parts, and a user-oriented interface responsible for presenting the captured information in a way that enables more informed decisions. As a basis for both components, preliminary design requirements are derived from the literature and the application field. Drawing on justificatory knowledge from deep learning as well as decision support systems, tentative design principles are derived. Based on these design principles, a prototype of the artifact is implemented that allows rigorous evaluation of the design knowledge in multiple evaluation episodes covering different angles. Through a technical experiment, the novelty of the artifact's capability to capture selected faults (regarding insulators and safety pins) in unmanned aerial vehicle (UAV)-captured image data (the model component) is validated. Subsequent interviews, surveys, and workshops in a natural environment confirm the usefulness of both the model component and the user interface component. The evaluation provides evidence (1) that the image processing approach addresses the gap of power line component inspection and (2) that the proposed holistic design knowledge for image-based decision support systems enables more informed decision-making. The paper therefore contributes to research and practice in three ways. First, the technical feasibility of detecting certain maintenance-intensive parts of power lines with the help of unique UAV image data is shown. Second, the distribution system operator's specific problem is solved by supporting maintenance decisions with the proposed image-based decision support system. Third, precise design knowledge for image-based decision support systems is formulated that can inform future system designs of a similar nature.
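
    The abstract does not disclose the concrete detection architecture. As a minimal, purely illustrative sketch of what the model component could look like, the snippet below assumes a torchvision Faster R-CNN detector fine-tuned for two hypothetical fault classes named after the faults mentioned in the abstract ("insulator_fault", "safety_pin_missing"); the class names, thresholds, and function names are assumptions, not taken from the paper.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # Hypothetical label set: background plus the two fault types mentioned in the abstract.
    CLASSES = ["background", "insulator_fault", "safety_pin_missing"]

    def build_fault_detector(num_classes: int = len(CLASSES)):
        # Start from a COCO-pretrained detector and swap the box predictor head
        # so it outputs the power-line fault classes instead of COCO categories.
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
        return model

    def detect_faults(model, uav_image: torch.Tensor, score_threshold: float = 0.5):
        # uav_image: float tensor of shape (3, H, W) with values in [0, 1].
        model.eval()
        with torch.no_grad():
            prediction = model([uav_image])[0]
        keep = prediction["scores"] >= score_threshold
        return [
            (CLASSES[label], score.item(), box.tolist())
            for label, score, box in zip(
                prediction["labels"][keep],
                prediction["scores"][keep],
                prediction["boxes"][keep],
            )
        ]

    In the described artifact, detections of this kind would feed the user-oriented interface component that presents the captured information for maintenance decision-making.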

    On Creating Benchmark Dataset for Aerial Image Interpretation: Reviews, Guidances and Million-AID

    The past years have witnessed great progress in remote sensing (RS) image interpretation and its wide applications. With RS images becoming more accessible than ever before, there is an increasing demand for their automatic interpretation. In this context, benchmark datasets serve as essential prerequisites for developing and testing intelligent interpretation algorithms. After reviewing existing benchmark datasets in the RS image interpretation research community, this article discusses how to efficiently prepare a suitable benchmark dataset for RS image interpretation. Specifically, we first analyze the current challenges of developing intelligent algorithms for RS image interpretation with bibliometric investigations. We then present general guidance on creating benchmark datasets in an efficient manner. Following this guidance, we also provide an example of building an RS image dataset, Million-AID, a new large-scale benchmark dataset containing a million instances for RS image scene classification. Several challenges and perspectives in RS image annotation are finally discussed to facilitate research on benchmark dataset construction. We hope this paper will provide the RS community with an overall perspective on constructing large-scale and practical image datasets for further research, especially data-driven research.
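
    The article's guidance is described at a high level. Purely as an illustrative sketch, assuming the common layout in which scene images are stored in one folder per scene class, indexing such a benchmark for scene classification might look like the following; the function name, file extension, and layout are assumptions, not taken from the paper.

    from pathlib import Path

    def index_scene_dataset(root: str):
        """Index a scene-classification dataset laid out as root/<class_name>/<image>.jpg."""
        root_path = Path(root)
        classes = sorted(p.name for p in root_path.iterdir() if p.is_dir())
        class_to_idx = {name: idx for idx, name in enumerate(classes)}
        samples = [
            (str(img), class_to_idx[cls_dir.name])
            for cls_dir in root_path.iterdir() if cls_dir.is_dir()
            for img in sorted(cls_dir.glob("*.jpg"))
        ]
        # samples: list of (image_path, class_index) pairs; class_to_idx: label mapping.
        return samples, class_to_idx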

    An Automated Machine Learning Framework in Unmanned Aircraft Systems: New Insights into Agricultural Management Practices Recognition Approaches

    The recent trend of automated machine learning (AutoML) has been driving significant technological innovation in applied artificial intelligence through automated algorithm selection and hyperparameter optimization of deployable pipeline models for solving substantive problems. However, a knowledge gap remains in integrating AutoML technology with unmanned aircraft systems (UAS) for image-based data classification tasks. We therefore employed a state-of-the-art (SOTA), fully open-source AutoML framework, Auto-sklearn, which is built on one of the most widely used ML libraries, Scikit-learn, and combined it with two novel AutoML visualization tools to focus on the recognition of agricultural management practices (AMP) from UAS-derived multispectral vegetation index (VI) data. The practices comprise soil tillage methods (STM), cultivation methods (CM), and manure application (MA) across four crop fields (red clover-grass mixture, spring wheat, pea-oat mixture, and spring barley); these practices have not yet been efficiently examined in UAS applications, and accessible parameters for them are lacking. We compared AutoML performance against three common machine learning classifiers: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). AutoML achieved the highest overall classification accuracy after 1200 s of computation. RF yielded the second-best accuracy, while SVM and ANN proved less capable on some of the datasets. Regarding the classification of AMPs, the best period for data capture was the crop vegetative growth stage (in May), and CM yielded the best classification performance, followed by MA and STM. Our framework provides new insights into plant–environment interactions with capable classification performance, and it illustrates that such an automated system could become an important tool for future sustainable smart farming and field-based crop phenotyping research across a diverse range of agricultural environmental assessment and management applications.
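
    The abstract names Auto-sklearn, the baseline classifiers, and the 1200 s search budget, but not the exact configuration. The following is a minimal sketch, assuming tabular UAS-derived vegetation-index features X and AMP class labels y; parameter choices beyond the 1200 s budget are assumptions, not taken from the paper.

    import autosklearn.classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    def compare_classifiers(X, y):
        # X: vegetation-index features per sample, y: AMP class labels (both assumed tabular).
        X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

        # AutoML with the 1200 s search budget mentioned in the abstract.
        automl = autosklearn.classification.AutoSklearnClassifier(
            time_left_for_this_task=1200, per_run_time_limit=120, seed=0
        )
        classifiers = {
            "AutoML": automl,
            "RF": RandomForestClassifier(random_state=0),
            "SVM": SVC(random_state=0),
            "ANN": MLPClassifier(random_state=0, max_iter=1000),
        }
        scores = {}
        for name, clf in classifiers.items():
            clf.fit(X_train, y_train)
            scores[name] = accuracy_score(y_test, clf.predict(X_test))
        return scores  # overall accuracy per classifier on the held-out split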

    Using Prior Knowledge for Verification and Elimination of Stationary and Variable Objects in Real-time Images

    With the evolving technologies in the autonomous vehicle industry, it has become possible for automobile passengers to relax instead of driving the car. Technologies such as object detection, object identification, and image segmentation enable an autonomous car to identify and detect objects on the road in order to drive safely. While an autonomous car drives itself, the objects surrounding it can be dynamic (e.g., cars and pedestrians), stationary (e.g., buildings and benches), or variable (e.g., trees), depending on whether the location or shape of an object changes. In contrast to existing image-based approaches for detecting and recognizing objects in the scene, this research employs a 3D virtual world to verify and eliminate stationary and variable objects so that the autonomous car can focus on the dynamic objects that may endanger its driving. The methodology takes advantage of prior knowledge of the stationary and variable objects present in a virtual city and verifies their existence in the real-time scene by matching keypoints between the virtual and real objects. When a stationary or variable object does not exist in the virtual world because of incomplete pre-existing information, the method falls back on machine learning for object detection. Verified objects are then removed from the real-time image with a combined algorithm using contour detection and class activation maps (CAM), which improves the efficiency and accuracy of recognizing moving objects.
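
    The abstract does not name the keypoint detector used for verification. As a minimal sketch of that step, assuming ORB features and brute-force Hamming matching via OpenCV, checking whether a known virtual object appears in a real-time frame could look like this; the distance and match-count thresholds are assumptions, not taken from the paper.

    import cv2

    def verify_object(virtual_patch, real_frame, min_matches: int = 20):
        """Return True if keypoints of the virtual object are found in the real-time frame.

        Both inputs are assumed to be 8-bit grayscale images (the virtual rendering of the
        object and the current camera frame)."""
        orb = cv2.ORB_create()
        kp_v, des_v = orb.detectAndCompute(virtual_patch, None)
        kp_r, des_r = orb.detectAndCompute(real_frame, None)
        if des_v is None or des_r is None:
            return False  # no usable keypoints on one of the images
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_v, des_r)
        # Keep only reasonably close descriptor matches before counting.
        good = [m for m in matches if m.distance < 50]
        return len(good) >= min_matches

    Objects that pass this verification would then be removed from the real-time image using the contour-detection and class-activation-map step described in the abstract, leaving only potentially dynamic objects for downstream recognition.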