    Argumentation for Knowledge Representation, Conflict Resolution, Defeasible Inference and Its Integration with Machine Learning

    Modern machine learning is devoted to the construction of algorithms and computational procedures that can automatically improve with experience and learn from data. Defeasible argumentation has emerged as a sub-topic of artificial intelligence aimed at formalising common-sense qualitative reasoning. The former is an inductive approach to inference while the latter is deductive, each having advantages and limitations. A great challenge for theoretical and applied research in AI is their integration. The first aim of this chapter is to provide readers, informally, with the basic notions of defeasible and non-monotonic reasoning. It then describes argumentation theory, a paradigm for implementing defeasible reasoning in practice, as well as the common multi-layer schema upon which argument-based systems are usually built. The second aim is to describe a selection of argument-based applications in the medical and health-care sectors, informed by the multi-layer schema. A summary of the features that emerge from the applications under review shows why defeasible argumentation is attractive for knowledge representation, conflict resolution and inference under uncertainty. Open problems and challenges in the field of argumentation are subsequently described, followed by a future outlook in which three points of integration with machine learning are proposed.
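
    To make the idea of argument-based conflict resolution concrete, the following is a minimal, hedged sketch of Dung-style abstract argumentation; the tiny medical example and argument names are invented here, and the chapter's multi-layer systems are richer than this. The sketch computes the grounded extension, i.e. the arguments that can be defended against all attacks.

    # Illustrative sketch only: Dung-style abstract argumentation.
    # Arguments are strings; "attacks" maps an argument to the set of
    # arguments it attacks. The example arguments are hypothetical.

    def grounded_extension(arguments, attacks):
        """Least fixed point of the characteristic function F(S)."""
        def attackers_of(a):
            return {b for b in arguments if a in attacks.get(b, set())}

        extension = set()
        while True:
            # An argument is acceptable w.r.t. the current extension if every
            # one of its attackers is attacked by an already-accepted argument.
            acceptable = {
                a for a in arguments
                if all(attackers_of(b) & extension for b in attackers_of(a))
            }
            if acceptable == extension:
                return extension
            extension = acceptable

    # Toy conflict: "treat" is attacked by "allergy", which is in turn
    # defeated by "negative_allergy_test".
    args = {"treat", "allergy", "negative_allergy_test"}
    atk = {"allergy": {"treat"}, "negative_allergy_test": {"allergy"}}
    print(grounded_extension(args, atk))  # {'treat', 'negative_allergy_test'}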

    Enhancing Data Classification Quality of Volunteered Geographic Information

    Geographic data is one of the fundamental components of any Geographic Information System (GIS). Nowadays, the utility of GIS is part of everyday life activities, such as searching for a destination, planning a trip, or looking for weather information. Without a reliable data source, systems cannot provide guaranteed services. In the past, geographic data was collected and processed exclusively by experts and professionals. However, the ubiquity of advanced technology has led to the evolution of Volunteered Geographic Information (VGI), in which geographic data is collected and produced by the general public. These changes influence the availability of geographic data, as ordinary people can work together to collect geographic data and produce maps. This trend is known as collaborative mapping. In collaborative mapping, the general public shares an online platform to collect, manipulate, and update information about geographic features. OpenStreetMap (OSM) is a prominent example of a collaborative mapping project, which aims to produce a free world map editable and accessible by anyone. During the last decade, VGI has expanded based on the power of crowdsourcing. The involvement of the public in data collection raises great concern about the resulting data quality. There exist various perspectives on geographic data quality; this dissertation focuses particularly on the quality of data classification (i.e., thematic accuracy). In professional data collection, data is classified based on quantitative and/or qualitative observations. According to a pre-defined classification model, which is usually constructed by experts, data is assigned to appropriate classes. In contrast, in most collaborative mapping projects data classification is mainly based on individuals' cognition. Through online platforms, contributors collect information about geographic features and transform their perceptions into classified entities. In VGI projects, the contributors mostly have limited experience in geography and cartography. Therefore, the acquired data may have questionable classification quality. This dissertation investigates the challenges of data classification in VGI-based mapping projects (i.e., collaborative mapping projects). In particular, it lists the challenges relevant to the evolution of VGI as well as to the characteristics of geographic data. Furthermore, this work proposes a guiding approach to enhance the data classification quality in such projects. The proposed approach is based on the following premises: (i) the availability of large amounts of data, which fosters applying machine learning techniques to extract useful knowledge, (ii) utilization of the extracted knowledge to guide contributors to appropriate data classification, (iii) the humanitarian spirit of contributors to provide precise data when they are supported by a guidance system, and (iv) the power of crowdsourcing in data collection as well as in ensuring the data quality. This cumulative dissertation consists of five peer-reviewed publications in international conference proceedings and international journals.
    The publications divide the dissertation into three parts: the first part presents a comprehensive literature review of relevant previous work on VGI quality assurance procedures (Chapter 2), the second part studies the foundations of the approach (Chapters 3-4), and the third part discusses the proposed approach and provides a validation example for implementing it (Chapters 5-6). Furthermore, Chapter 1 presents an overview of the research questions and the adopted research methodology, while Chapter 7 concludes the findings and summarizes the contributions. The proposed approach is validated through empirical studies and an implemented web application. The findings reveal the feasibility of the proposed approach. The output shows that applying the proposed approach results in enhanced data classification quality. Furthermore, the research highlights the need for intuitive data collection and data interpretation approaches suited to VGI-based mapping projects. An interactive data collection approach is required to guide contributors toward enhanced data quality, while an intuitive data interpretation approach is needed to derive more precise information from rich VGI resources.
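
    As a rough illustration of premises (i) and (ii), the sketch below is hypothetical and does not reproduce the dissertation's actual guidance system: it learns simple classification rules from already-classified, OSM-like tag data and uses them to suggest a class for a new contribution. The tags, class labels, and the choice of a decision tree are assumptions made for demonstration only.

    # Hypothetical sketch: learn classification rules from already-classified,
    # OSM-like tag data and suggest a class for a new contribution. Tags,
    # classes, and the decision-tree choice are assumptions for illustration.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.tree import DecisionTreeClassifier

    # Invented training data: attribute tags of features classified by experts
    # or experienced contributors.
    features = [
        {"building": "yes", "amenity": "school"},
        {"building": "yes", "shop": "bakery"},
        {"highway": "residential"},
        {"highway": "footway"},
    ]
    labels = ["education", "retail", "road", "path"]

    vec = DictVectorizer(sparse=False)
    X = vec.fit_transform(features)
    model = DecisionTreeClassifier(random_state=0).fit(X, labels)

    # A new, incompletely tagged contribution: suggest the most likely class
    # so the contributor can confirm or correct it.
    new_feature = {"building": "yes", "shop": "supermarket"}
    suggestion = model.predict(vec.transform([new_feature]))[0]
    print("Suggested class:", suggestion)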

    A Normative Model For Strategic Planning

    The thesis proposes a normative model for strategic planning using stakeholder theory as the primary theoretical framework. Development of the normative model is achieved by analysis of the literature and corroborative engagement with local government practitioners. Strategic planning processes in public sector agencies involve many challenges; the processes are directed by government but influenced by many stakeholders who have an interest in the outcomes. Effective management of the strategic planning process suggests it is important for organisations to identify how stakeholders use their status and position to influence the process and the final decision. Organisations can then apply the appropriate processes to manage stakeholders' interests and expectations, to improve the quality of information used to inform decision making, and to improve the accountability and transparency of decision making. A review of stakeholder theory identifies the fundamental requirements for effective stakeholder management. A further comprehensive review and analysis of the literature from sustainable development and strategic management allows a normative model for decision making to be developed based on those perspectives. (A model can be viewed as a likeness of something (Frankfort-Nachmias & Nachmias, 1997); Frankfort-Nachmias and Nachmias add that models are used to gain insight into phenomena that the scientist cannot observe directly, while Hardina (2002) describes models as constructs used to understand or visualize patterns of relationships among concepts, individuals, groups and organisations. In this case the final normative model is made up of literature and practitioner perspectives of reality.) The model is then used to specify criteria for a targeted assessment of New Zealand government documentation and local authorities' statements and processes. The scope and boundaries of the thesis are established through an initial analysis of four studies (international and New Zealand), an audit report, 28 local authorities' documents and New Zealand government legislation. The analysis highlights issues of understanding devolution, accountability, responsibility and participation in decision making. Selected local authority interviewees rate the characteristics and processes of the original normative model to provide feedback on their relative importance to local authorities' strategic planning processes. Furthermore, the interviewees share their views on additional requirements to further improve the model. The final analysis distinguishes between the original normative model (what may occur), how local authorities currently complete strategic planning (what does occur) and the modified normative model (what should occur). The thesis concludes with a modified normative model which, if adopted by local authorities (or indeed other public sector agencies), has the potential to improve strategic planning through more effective stakeholder management.

    Through a Model, Darkly: An Investigation of Modellers’ Conceptualisation of Uncertainty in Climate and Energy Systems Modelling and an Application to Epidemiology

    Policy responses to climate change require the use of complex computer models to understand the physical dynamics driving change, to evaluate its impacts, and to assess the efficacy and costs of different mitigation and adaptation options. These models are often built by large teams of dedicated researchers. All modelling requires assumptions, approximations and analytic conveniences to be employed. No model is without uncertainty. Authors have attempted to understand these uncertainties over the years and have developed detailed typologies to deal with them. However, it remains unknown how modellers themselves conceptualise the uncertainty inherent in their work. The core of this thesis involves interviews with 38 modellers from climate science, energy systems modelling and integrated assessment to understand how they conceptualise the uncertainty in their work. This study finds that there is diversity in how uncertainty is understood and that various concepts from the literature are selectively employed to organise uncertainties. Uncertainty analysis is conceived as consisting of different phases in the model development process. The interplay between the complexity of the model and the capacities of modellers to manipulate these models shapes the ways in which uncertainty can be conceptualised. How we can grapple with uncertainty in the present is determined by the path-dependent decisions made in the past; decisions that are influenced by a variety of factors within the context of the model's creation. Furthermore, this thesis examines the application of these concepts to another field, epidemiology, to examine their generalisability in other contexts. This thesis concludes that in a situation such as climate change, where the nature of the problem changes in a dynamic way, emphasis should be placed on reducing the grip of these path dependencies and the resource costs of adapting models to face new challenges and answer new policy questions.

    Executive Orders in Court


    THE EFFECTS OF VIRTUAL PANOPTICISM

    As technology further integrates into everyday life, the effects of technological advancement surface. The research contained in this thesis places philosopher Michel Foucault's ideas of the panoptic, discipline, punishment and a carceral society in a virtual reality, thus creating a virtual panopticon. Adapting Foucault's theories to the present-day technological climate allows researchers to begin understanding the why behind humans' interactions with various forms of technology (e.g. iPhone usage, Smart TVs, online banking, Alexa/Echo, etc.). Additionally, virtual panopticism sheds light on the corruption of those who manipulate information online to wield power, maintain control and make money. I discuss surveillance capitalism and highlight Foucault's main influences, such as Karl Marx and Friedrich Nietzsche. Through a voluntary survey, participants revealed how they operate within a virtual panopticon, specifically in the areas of religion, personal technology usage, literature and film, and education. Since thinking directly affects actions, understanding this information is critical to interpreting modern-day culture. The goal of this research is to reveal the effects of virtual panoptical structures on thinking, while simultaneously emphasizing the need for technological accountability.

    Special issue on challenges for reasoning under uncertainty, inconsistency, vagueness, and preferences: a topical snapshot

    Managing uncertainty, inconsistency, vagueness, and preferences has been extensively explored in artificial intelligence (AI). In recent years, especially with the emergence of smart services and devices, technologies for managing uncertainty, inconsistency, vagueness, and preferences in dynamic, real-world scenarios have also started to play a key role in other areas, such as information systems and the (Social and/or Semantic) Web. These application areas have sparked another wave of strong interest in formalisms and logics for dealing with uncertainty, inconsistency, vagueness, and preferences. Important examples are fuzzy and probabilistic approaches for description logics, rule systems for handling vagueness and uncertainty in the Semantic Web, and formalisms for handling user preferences in the context of ontological knowledge in the Social Semantic Web. While the scalability of these approaches is an important issue to be addressed, the need to combine several of these approaches with each other and/or with more classical ways of reasoning has also become obvious (hybrid reasoning under uncertainty). This special issue presents several state-of-the-art formalisms and methodologies for managing uncertainty, inconsistency, vagueness, and preferences.
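
    To illustrate one of the themes named above, handling vague predicates with fuzzy truth degrees, here is a minimal sketch; the membership functions, the rule, and the use of the Goedel t-norm are invented for demonstration and are not taken from any paper in the issue.

    # Invented example of evaluating a fuzzy rule over vague predicates;
    # membership functions and the rule are for demonstration only.

    def warm(temp_c):
        """Degree to which a temperature counts as 'warm' (15 C -> 0, 25 C -> 1)."""
        return max(0.0, min(1.0, (temp_c - 15.0) / 10.0))

    def crowded(people):
        """Degree to which a room counts as 'crowded' (0 people -> 0, 50+ -> 1)."""
        return max(0.0, min(1.0, people / 50.0))

    def godel_and(a, b):
        """Goedel t-norm: conjunction of fuzzy truth degrees."""
        return min(a, b)

    # Rule: IF warm AND crowded THEN open_window, which holds to a degree
    # rather than being simply true or false.
    degree = godel_and(warm(22.0), crowded(30))
    print(f"open_window holds to degree {degree:.2f}")  # min(0.70, 0.60) = 0.60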

    Tätigkeitsbericht 2014-2016 (Activity Report 2014-2016)
