
    Feature extraction of overlapping hevea leaves: a comparative study

    Automation of rubber tree clone classification has inspired research into new methods of leaf feature extraction. In current practice, rubber clone inspectors use several leaf features to identify clone types. One of the unique features of the rubber tree leaf is its palmate leaflets. This characteristic produces different leaflet positions, in which the leaves are either overlapping or separated. In this research, we propose keypoint extraction and line detection methods to extract the shape and axil (the angle between petioles) features of leaflet positions. Three keypoint extraction methods, namely SIFT, Harris, and FAST, were compared and discussed for shape feature extraction. Next, the Hough transform and boundary-tracing methods were compared to identify the more suitable axil detection method. The evaluation identifies the appropriate keypoint extraction method for shape context and shows the clear advantage of the Hough transform in the accuracy of angle detection.
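
    As a rough illustration of the line-detection half of this comparison, the sketch below estimates an axil angle from the two dominant line segments returned by a probabilistic Hough transform. The file name, the Canny and Hough thresholds, and the two-longest-segments heuristic are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: estimate the axil (angle between petioles) via a
# probabilistic Hough transform. Parameters are illustrative only.
import cv2
import numpy as np

img = cv2.imread("leaflet.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
edges = cv2.Canny(img, 50, 150)

# Detect straight line segments that approximate the petioles.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=5)

def segment_angle(seg):
    x1, y1, x2, y2 = seg
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

if lines is not None and len(lines) >= 2:
    # Assume the two longest segments correspond to the two petioles.
    segs = sorted((l[0] for l in lines),
                  key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]),
                  reverse=True)[:2]
    a, b = (segment_angle(s) for s in segs)
    diff = abs(a - b) % 180
    print(f"estimated axil angle: {min(diff, 180 - diff):.1f} degrees")
```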

    Towards a dynamic balanced scorecard model for humanitarian relief organizations' performance management

    Purpose: In recent years, the Balanced Scorecard (BSC) has received considerable interest among practitioners for managing their organizations' performance. Unfortunately, existing BSC frameworks, particularly for humanitarian supply chains, lack causal relationships among performance indicators, actions, and outcomes. They cannot provide a dynamic perspective of the organization or of the factors that drive its behavior towards its mission, and a lack of conceptual references appears to hinder the development of performance measurement systems in this direction. Design/methodology/approach: We formulate the interdependencies among key performance indicators (KPIs) as cause-and-effect relationships, based on case studies published in international journals from 1996 to 2017. Findings: The conceptual interdependencies among KPIs are identified and represented in the form of a conceptual model. Research limitations/implications: The study is based solely on the existing literature; further practical research is needed to validate the interdependencies among the performance indicators. Practical implications: The proposed conceptual model provides the structure of a Dynamic Balanced Scorecard (DBSC) for the humanitarian supply chain and should serve as a starting reference for the development of a practical DBSC model. Originality/value: Existing BSC frameworks do not provide a dynamic perspective of the organization. The proposed conceptual framework is a useful reference for further work in developing a DBSC for humanitarian organizations.

    Context-based image explanations for deep neural networks

    With the increased use of machine learning in decision-making scenarios, there has been growing interest in explaining and understanding the outcomes of machine learning models. Despite this interest, existing work on interpretability and explanation has mostly been intended for expert users. Explanations for general users have been neglected in many usable and practical applications (e.g., image tagging, caption generation). It is important for non-technical users to understand the features and how they affect an instance-specific prediction, satisfying the need for justification. In this paper, we propose a model-agnostic method for generating context-based explanations aimed at general users. We apply partial masking to segmented components to identify the contextual importance of each segment in scene classification tasks. We then generate explanations based on feature importance. We present visual and text-based explanations: (i) a saliency map that presents the pertinent components with a descriptive textual justification, and (ii) a visual map with a color bar graph showing the relative importance of each feature for the prediction. Evaluating the explanations in a user study (N = 50), we observed that our proposed explanation method visually outperformed existing gradient- and occlusion-based methods. Hence, our method could be deployed to explain models' decisions to non-expert users in real-world applications.
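
    The following sketch illustrates the partial-masking idea in its simplest form: over-segment the image, grey out one segment at a time, and score each segment by the drop in the predicted class probability. The `predict` interface, the SLIC segmenter, and the grey fill value are assumptions for illustration; the paper's exact masking scheme is not reproduced here.

```python
# Rough sketch of segment-wise occlusion scoring for scene classification.
# `predict` is an assumed placeholder mapping an image batch to class probs.
import numpy as np
from skimage.segmentation import slic

def segment_importance(image, predict, n_segments=40):
    """image: HxWx3 float array in [0, 1]; predict: batch -> class probs."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    base = predict(image[np.newaxis])[0]
    target = int(np.argmax(base))           # explain the top predicted class
    scores = np.zeros(segments.max() + 1)
    for seg_id in range(segments.max() + 1):
        masked = image.copy()
        masked[segments == seg_id] = 0.5    # grey out one segment
        prob = predict(masked[np.newaxis])[0][target]
        scores[seg_id] = base[target] - prob  # importance = probability drop
    return segments, scores
```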

    Enhancement of template-based method for overlapping rubber tree leaf identification

    The position of rubber tree leaflets is one of the critical features for rubber clone classification. Leaflets occur in three possible positions: overlapping, touching, or separated. This paper proposes a template-based method for identifying overlapping rubber tree leaves. First, a keypoint-based feature extraction method is adopted. The key features of overlapping and non-overlapping leaves assist in identifying similar shapes through comparison using the nearest-neighbor algorithm. This process is implemented by constructing a directory of rubber leaf template images in different positions. Next, the keypoints of the input leaf image are compared with the keypoints of the template images to identify the position of the leaflets. The outcome of this study shows that the template-based method is suitable for identifying both overlapping and non-overlapping rubber tree leaves.
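
    A minimal sketch of the keypoint-plus-nearest-neighbour template matching described above is given below, using ORB features and a brute-force matcher for convenience. The detector choice, the template directory layout, and the distance cutoff are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch: match a query leaf against a directory of position templates
# by counting nearest-neighbour keypoint matches. Parameters illustrative.
import os
import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = orb.detectAndCompute(img, None)
    return des

def best_template(query_path, template_dir):
    """Return the template file whose keypoints best match the query leaf."""
    q = descriptors(query_path)
    if q is None:
        return None
    best_name, best_score = None, -1
    for name in os.listdir(template_dir):       # e.g. overlapping_01.png
        t = descriptors(os.path.join(template_dir, name))
        if t is None:
            continue
        matches = bf.match(q, t)
        good = [m for m in matches if m.distance < 50]  # illustrative cutoff
        if len(good) > best_score:
            best_name, best_score = name, len(good)
    return best_name
```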

    From spoken thoughts to automated driving commentary: Predicting and explaining intelligent vehicles' actions

    Commentary driving is a technique in which drivers verbalise their observations, assessments, and intentions. By speaking their thoughts out loud, both learner and expert drivers create a better understanding and awareness of their surroundings. In the intelligent-vehicle context, automated driving commentary can provide intelligible explanations of driving actions and thereby assist a driver or end-user during driving operations in challenging and safety-critical scenarios. In this paper, we conducted a field study in which we deployed a research vehicle in an urban environment to obtain data. While collecting sensor data of the vehicle's surroundings, we obtained driving commentary from a driving instructor using the think-aloud protocol. Analysing the commentary, we uncovered a consistent explanation style: the driver first announces his observations, then his plans, and finally makes general remarks; he also makes counterfactual comments. We demonstrated how factual and counterfactual natural-language explanations that follow this style can be generated automatically using a simple tree-based approach. Generated explanations for longitudinal actions (e.g., stopping and moving) were deemed more intelligible and plausible by human judges than those for lateral actions, such as lane changes. We discuss how our approach can be built on in future work to realise more robust and effective explainability for driver assistance as well as for partial and conditional automation of driving functions.
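
    As a toy illustration of the tree-based generation idea, the sketch below fits a small decision tree on invented driving features and verbalises the decision path as a factual explanation, plus a simple counterfactual. The feature names, data, and explanation templates are all hypothetical, not the paper's model.

```python
# Toy sketch: verbalise a decision tree's path as factual and
# counterfactual driving explanations. All data here is invented.
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["traffic_light_red", "pedestrian_ahead", "lead_car_braking"]
X = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]
y = ["stop", "stop", "stop", "move"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain(sample):
    t = clf.tree_
    node, reasons = 0, []
    while t.children_left[node] != -1:           # -1 marks a leaf node
        f, thr = t.feature[node], t.threshold[node]
        went_left = sample[f] <= thr
        reasons.append(f"{FEATURES[f]} is {'false' if went_left else 'true'}")
        node = t.children_left[node] if went_left else t.children_right[node]
    action = clf.classes_[t.value[node].argmax()]
    factual = f"I {action} because {' and '.join(reasons)}."
    counterfactual = f"If {reasons[-1]} had been different, I might not {action}."
    return factual, counterfactual

print(explain([1, 0, 0]))
```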

    Creating 3D/Mid-air gestures

    Researchers and developers have continually proposed various forms of gestures for computing applications. Real problems arise when choosing the best set of gestures for a given application, and there is a strong debate in the literature over whether to include users in the design of the gestures. This paper contributes to this debate by synthesizing the ideas and theories put forth in previous work, and it describes the emergence of the user-centered approach in a research area primarily dominated by developer-based approaches. Three influential methods are summarized to represent the essence of the user-centered approach, and recent works that applied these methods are reviewed to consider the various ways they were adopted and adapted in creating gesture languages for computing systems. By presenting an overview of our observations and findings, we hope to provide another perspective on the user-centered design approach that will assist other researchers with similar interests in this area.

    Towards explainable and trustworthy autonomous physical systems

    The safe deployment of autonomous physical systems in real-world scenarios requires them to be explainable and trustworthy, especially in critical domains. In contrast with ‘black-box’ systems, explainable and trustworthy autonomous physical systems lend themselves to easy assessment by system designers and regulators. This promises to pave the way for improvements that lead to enhanced performance as well as increased public trust. In this one-day virtual workshop, we aim to gather a globally distributed group of researchers and practitioners to discuss the opportunities and social challenges in the design, implementation, and deployment of explainable and trustworthy autonomous physical systems, especially in a post-pandemic era. Interactions will be fostered through panel discussions and a series of spotlight talks. To ensure a lasting impact, we will conduct a pre-workshop survey examining the public perception of the trustworthiness of autonomous physical systems, and we will publish a summary report detailing the survey as well as the challenges identified in the workshop’s panel discussions.

    Explainable Agents as Static Web Pages: UAV Simulation Example

    Motivated by the apparent societal need to design complex autonomous systems whose decisions and actions are humanly intelligible, the study of explainable artificial intelligence, and with it, research on explainable autonomous agents, has gained increased attention from the research community. One important objective of research on explainable agents is the evaluation of explanation approaches in human-computer interaction studies. In this demonstration paper, we present a way to facilitate such studies by implementing explainable agents and multi-agent systems that (i) can be deployed as static files, not requiring the execution of server-side code, which minimizes administration and operation overhead, and (ii) can be embedded into web front ends and other JavaScript-enabled user interfaces, increasing their ability to reach a broad range of users. We then demonstrate the approach with an application designed to assess the effect of different explainability approaches on the human intelligibility of an unmanned aerial vehicle simulation.

    Decision Theory Meets Explainable AI

    Explainability has been a core research topic in AI for decades, so it is surprising that the current concept of Explainable AI (XAI) seems to have been launched as late as 2016. This is a problem with current XAI research, because it tends to ignore existing knowledge and wisdom gathered over decades or even centuries by other relevant domains. This paper presents the notion of Contextual Importance and Utility (CIU), which is based on known notions and methods of Decision Theory. CIU extends the notions of importance and utility to the non-linear models of AI systems, notably those produced by machine learning methods. CIU provides a universal and model-agnostic foundation for XAI.
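
    A minimal sketch of how CI and CU can be estimated for one feature of a black-box model is given below, under the usual reading of CIU: contextual importance relates the output range achievable by varying the feature in its context to the model's global output range, and contextual utility locates the current output within that contextual range. The sampling grid, the scalar `predict` interface, and the known output range are simplifying assumptions.

```python
# Sketch: estimate Contextual Importance (CI) and Contextual Utility (CU)
# for one input feature by sampling the model over the feature's range.
import numpy as np

def ciu_feature(predict, x, i, lo, hi, out_lo=0.0, out_hi=1.0, n=50):
    """predict: 1-D feature vector -> scalar output (e.g. class probability).
    [lo, hi]: feature i's value range in this context.
    [out_lo, out_hi]: the model's global output range (assumed known)."""
    outs = []
    for v in np.linspace(lo, hi, n):    # vary feature i, hold the rest fixed
        xv = np.array(x, dtype=float)
        xv[i] = v
        outs.append(float(predict(xv)))
    cmin, cmax = min(outs), max(outs)
    ci = (cmax - cmin) / (out_hi - out_lo)           # contextual importance
    denom = (cmax - cmin) if cmax > cmin else 1.0
    cu = (float(predict(np.array(x, dtype=float))) - cmin) / denom
    return ci, cu
```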