
    Human-Intelligence and Machine-Intelligence Decision Governance Formal Ontology

    Since the beginning of the human race, decision making and rational thinking have played a pivotal role in whether humankind exists and succeeds or fails and becomes extinct. Self-awareness, cognitive thinking, creativity, and emotional depth have allowed us to advance civilization and to take further steps toward previously unreachable goals. From the wheel to the rocket, and from the telegraph to the satellite, all technological ventures have gone through many upgrades and updates. Recently, increasing computer CPU power and memory capacity have contributed to smarter and faster computing appliances that, in turn, have accelerated the integration of artificial intelligence (AI) into organizational processes and everyday life. Artificial intelligence can now be found in a wide range of organizational systems, including healthcare and medical diagnosis, automated stock trading, robotic production, telecommunications, space exploration, and homeland security. Self-driving cars and drones are only the latest extensions of AI. This thrust of AI into organizations and daily life rests on the AI community's unstated assumption that human learning and intelligence can be completely replicated in AI. Unfortunately, even today the AI community is not close to completely encoding and emulating human intelligence in machines. Despite the digital and technological revolution at the applications level, there has been little to no research addressing the question of decision-making governance in human-intelligent and machine-intelligent (HI-MI) systems. There also exist no foundational, core reference, or domain ontologies for HI-MI decision governance systems.
Further, in the absence of an expert reference base or body of knowledge (BoK) integrated with an ontological framework, decision makers must rely on best practices or standards that differ from organization to organization and government to government, contributing to systems failure in complex, mission-critical situations. It is still debatable whether and when human or machine decision capacity should govern, or when a joint human-intelligence and machine-intelligence (HI-MI) decision capacity is required, in any given decision situation. To address this deficiency, this research establishes a formal, top-level foundational ontology of HI-MI decision governance in parallel with a grounded-theory-based body of knowledge, which together form the theoretical foundation of a systemic HI-MI decision governance framework.

    The Generation of Common Purpose in Innovation Partnerships: A Design Perspective

    The official version of the article is available here: http://www.emeraldinsight.com/journals.htm?articleid=17042798&ini=aob. Purpose - Scholars and practitioners have both emphasized the importance of collaboration in innovation contexts. They have also largely acknowledged that the definition of a common purpose is a major driver of successful collaboration; surprisingly, however, researchers have put little effort into investigating the process whereby the partners define that common purpose. This research aims to explore the Generation of Common Purpose (GCP) in innovation partnerships. Design/methodology/approach - An action-research approach combined with modeling was followed. The research is based on an in-depth qualitative case study of a cross-industry exploratory partnership through which four partners, from very different arenas, aim to collectively define innovation projects based on micro-nanotechnologies. Based on a design reasoning framework, the mechanisms of GCP are depicted. Findings - Two main interdependent facets of GCP are identified: (1) the determination of existing intersections between the parties' concept and knowledge spaces ('Matching'); and (2) an introspective learning process that allows the parties to transform those spaces ('Building'). Practical implications - A better understanding of GCP, together with the specific notion of "C-K profiles", an original way to characterize each partner involved in a partnership, should improve the capability of organizations to efficiently define collaborative innovation projects. Originality/value - This article explores one of the cornerstones of successful collaboration in innovation: the process whereby several parties define the common purpose of their partnership.
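    The two facets identified above — 'Matching' of existing concept (C) and knowledge (K) spaces, and 'Building' through introspective learning — can be illustrated with a minimal sketch. This is a hypothetical simplification in which C-K profiles are reduced to plain sets; the class, function, and partner names are illustrative and not drawn from the article's formal model.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class CKProfile:
        """A partner's C-K profile: its concept space C and knowledge space K."""
        name: str
        concepts: set = field(default_factory=set)    # C-space: design concepts explored
        knowledge: set = field(default_factory=set)   # K-space: validated knowledge held

    def matching(partners):
        """'Matching': the intersections of the partners' C and K spaces --
        the existing common ground on which a shared purpose can rest."""
        shared_c = set.intersection(*(p.concepts for p in partners))
        shared_k = set.intersection(*(p.knowledge for p in partners))
        return shared_c, shared_k

    def building(partner, new_concepts=(), new_knowledge=()):
        """'Building': introspective learning that transforms a partner's own
        spaces, potentially enlarging the future intersection."""
        partner.concepts |= set(new_concepts)
        partner.knowledge |= set(new_knowledge)

    # Hypothetical partners in a micro-nanotechnology partnership.
    a = CKProfile("chipmaker", {"smart sensor"}, {"MEMS process"})
    b = CKProfile("carmaker", {"smart sensor", "autonomy"}, {"vehicle integration"})
    print(matching([a, b]))  # the shared concept and knowledge intersections
    ```

    In this toy reading, a common purpose becomes definable once Matching yields a non-empty intersection, and Building is what the partners do when it does not.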

    Toward Cyborg PPGIS: exploring socio-technical requirements for the use of web-based PPGIS in two municipal planning cases, Stockholm region, Sweden

    Web-based Public Participation Geographic Information Systems (PPGIS) are increasingly used to survey place values and inform municipal planning in contexts of urban densification. However, research is lagging behind the rapid deployment of PPGIS applications. The main opportunities and challenges for the uptake and implementation of web-based PPGIS are derived from a literature review and two case studies of municipal planning for urban densification in the Stockholm region, Sweden. A simple clustering analysis identified three interconnected themes that together determine the performance of PPGIS: (i) tool design and affordances; (ii) organisational capacity; and (iii) governance. The results of the case studies augment the existing literature on the connections between the socio-technical dimensions of the design, implementation, and evaluation of PPGIS applications in municipal planning. A cyborg approach to PPGIS is then proposed to improve the theoretical basis for addressing these dimensions together.

    A Two-Level Information Modelling Translation Methodology and Framework to Achieve Semantic Interoperability in Constrained GeoObservational Sensor Systems

    As geographical observational data capture, storage, and sharing technologies such as in situ remote monitoring systems and spatial data infrastructures evolve, the vision of a Digital Earth, first articulated by Al Gore in 1998, is getting ever closer. However, many challenges and open research questions remain. For example, data quality, provenance, and heterogeneity remain issues due to the complexity of geo-spatial data and information representation. Observational data are often inadequately semantically enriched by geo-observational information systems or spatial data infrastructures, and so they often do not fully capture the true meaning of the associated datasets. Furthermore, the data models underpinning these information systems are typically too rigid in their data representation to allow for the ever-changing and evolving nature of geo-spatial domain concepts. This impoverished approach to observational data representation reduces the ability of multi-disciplinary practitioners to share information in an interoperable and computable way. The health domain experiences similar challenges in representing complex and evolving domain information concepts. Within any complex domain (such as Earth system science or health), two categories or levels of domain concepts exist: those that remain stable over a long period of time, and those that are prone to change as domain knowledge evolves and new discoveries are made. Health informaticians have developed a sophisticated two-level modelling systems design approach for electronic health documentation over many years and, with the use of archetypes, have shown how data, information, and knowledge interoperability among heterogeneous systems can be achieved.
This research investigates whether two-level modelling can be translated from the health domain to the geo-spatial domain and applied to observing scenarios to achieve semantic interoperability within and between spatial data infrastructures, beyond what is possible with current state-of-the-art approaches. A detailed review of state-of-the-art SDIs, geo-spatial standards, and the two-level modelling methodology was performed. A cross-domain translation methodology was developed, and a proof-of-concept geo-spatial two-level modelling framework was defined and implemented. The Open Geospatial Consortium's (OGC) Observations & Measurements (O&M) standard was re-profiled to aid investigation of the two-level information modelling approach. An evaluation of the method was undertaken using two specific use-case scenarios. Information modelling was performed using the two-level method to show how existing historical ocean observing datasets can be expressed semantically and harmonized. The flexibility of the approach was also investigated by applying the method to an air quality monitoring scenario using a technologically constrained monitoring sensor system. This work has demonstrated that two-level modelling can be translated to the geo-spatial domain and then further developed for use within a constrained technological sensor system, using traditional wireless sensor networks, semantic web technologies, and Internet of Things-based technologies. Domain-specific evaluation results show that two-level modelling presents a viable approach to achieving semantic interoperability between constrained geo-observational sensor systems and spatial data infrastructures for ocean observing and city-based air quality observing scenarios. This has been demonstrated through the re-purposing of selected existing geospatial data models and standards.
However, it was found that re-using existing standards requires careful ontological analysis of each domain concept, and so caution is recommended in assuming the wider applicability of the approach. While the benefits of adopting a two-level information modelling approach to geospatial information modelling are potentially great, translation to a new domain is complex. This complexity was found to be a barrier to adoption, especially in commercial projects where standards implementation is low on implementation road maps and the perceived benefits of standards adherence are low. Arising from this work, a novel set of base software components, methods, and fundamental geo-archetypes has been developed. However, during this work it was not possible to form the rich community of supporters required to fully validate the geo-archetypes; therefore, the findings are not exhaustive, and the archetype models produced are only indicative. The findings can be used as a basis to encourage further investigation and uptake of two-level modelling within the Earth system science and geo-spatial domains. Ultimately, this work recommends further development and evaluation of the approach, building on the positive results thus far and on the base software artefacts developed to support it.
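    The two-level pattern described above — a stable reference model (level one) constrained by evolvable domain archetypes (level two) — can be sketched in a few lines. This is a minimal illustration of the general pattern only; the class and field names are hypothetical and are not taken from the thesis's geo-archetypes or from the re-profiled OGC O&M standard.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Observation:
        """Level 1: a stable, generic reference model that rarely changes."""
        observed_property: str
        result: float
        unit: str
        metadata: dict = field(default_factory=dict)

    @dataclass
    class Archetype:
        """Level 2: a domain concept expressed as constraints on the reference
        model. Archetypes can evolve without touching the level-1 schema."""
        name: str
        allowed_properties: set
        allowed_units: set

        def validate(self, obs: Observation) -> bool:
            # An observation conforms when it satisfies the archetype's constraints.
            return (obs.observed_property in self.allowed_properties
                    and obs.unit in self.allowed_units)

    # A hypothetical air-quality archetype constraining the generic model.
    air_quality = Archetype("AirQualityObservation",
                            allowed_properties={"PM2.5", "NO2"},
                            allowed_units={"ug/m3"})

    obs = Observation("PM2.5", 12.4, "ug/m3")
    print(air_quality.validate(obs))
    ```

    The design point carried over from health informatics is that new or revised domain concepts arrive as new archetype definitions — data, not code — so heterogeneous systems sharing the level-1 model stay interoperable as the domain evolves.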

    Re-energising Knowledge Management: Communication challenges, interdisciplinary intersections, and paradigm change

    Knowledge Management (KM) in the 1990s was a key upwardly mobile management discipline. Indeed, a proliferation of articles suggested that KM had the potential to make a radical departure from conventional views of organisational assets and resources, and even held the promise of transforming economies. Instead, however, KM has tended to become incorporated as a subset of traditional management. This thesis suggests that, as a result, knowledge has been perceived simply as another resource to be managed for competitive advantage. It further argues that KM need not subscribe to conventional views of management and that knowledge need not be just another resource to be exploited, hoarded, and traded. Instead, it contends that knowledge is an outcome of the process of connecting to one another in new ways, and it explores the field's still-unrealised potential for generating fresh approaches relevant to contemporary conditions. In seeking to revive the excitement, and rekindle the potential, that originally surrounded the field, the thesis intervenes in current debates in KM. It attends to, and expands, the existing discourses of KM while presenting the case for a re-energised understanding of the communication of knowledge. Exploring intersections with other disciplines as well as KM's own multidisciplinary base, it proposes transdisciplinary research as a productive focus for KM. In making these recommendations for KM's future, the thesis seeks to make the field more responsive to current complex and dynamic academic, organisational, and social contexts. Its overall goal is not only to ensure KM's ongoing relevance and effectiveness as a field, but also to direct KM towards fulfilling its early potential.

    Epistemology of Intelligence Agencies

    This paper draws an analogy between the epistemological and methodological aspects of the activity of intelligence agencies and those of certain scientific disciplines, advocating a more scientific approach to the process of collecting and analyzing information within the intelligence cycle. I assert that the theoretical, ontological, and epistemological aspects of the activity of many intelligence agencies are underestimated, leading to an incomplete understanding of current phenomena and to confusion in inter-institutional collaboration. After a brief Introduction, which includes a history of the evolution of the intelligence concept after World War II, the Intelligence Activity section defines the objectives and organization of intelligence agencies, the core model of these organizations (the intelligence cycle), and the relevant aspects of intelligence gathering and intelligence analysis. In the Ontology section, I highlight the ontological aspects and the entities that threaten and are threatened. The Epistemology section covers aspects specific to intelligence activity, with an analysis of the traditional (Singer) model and a possible epistemological approach through the concept of tacit knowledge developed by the scientist Michael Polanyi. The Methodology section presents various methodological theories, with an emphasis on structural analytical techniques, together with some analogies with science, archeology, business, and medicine. In the Conclusions, I argue for the possibility of a more scientific approach to the methods of intelligence gathering and analysis used by intelligence agencies. CONTENTS: Abstract; 1. Introduction; 1.1. History; 2. Intelligence Activity; 2.1. Organizations; 2.2. Intelligence Cycle; 2.3. Intelligence Gathering; 2.4. Intelligence Analysis; 2.5. Counterintelligence; 2.6. Epistemic Communities; 3. Ontology; 4. Epistemology; 4.1. Tacit Knowledge (Polanyi); 5. Methodologies; 6. Analogies with Other Disciplines; 6.1. Science; 6.2. Archeology; 6.3. Business; 6.4. Medicine; 7. Conclusions; Bibliography. DOI: 10.13140/RG.2.2.12971.4944

    Consortia as Technology Innovation Management Vehicles: Toward a Framework for Success in Venture Based Public-Private Partnerships

    The purpose of this research was to explore the approach taken by federal/state agency, university, and private sector consortia to developing and managing the commercialization of innovative technologies. The evaluation, support, and management of technology-based consortia have traditionally resided in the private sector, and there is a somewhat mature literature guiding innovation management there (Utterback 1996; Rosenberg et al. 1994; Quinn 1997, 1992). However, consortia consisting of universities, industrial/private sector entities, and government agencies are increasingly joining in collaborative efforts to launch technology-based initiatives. These consortia are non-traditional, the applicability of traditional venture models to them is questionable, and the literature guiding their assessment and management is sparsely developed. The specific research questions explored are: (1) What are the major sources of consortia support for innovative technology-based new ventures that seem to work? and (2) What approaches to managing the commercial viability of advanced innovative technology-based new ventures through partnerships of industry, governmental agencies, and universities are effective? The research used an embedded case study method (Yin 1994) to explore these questions. Consortia development of technology innovation projects by a state government agency located in the southeastern United States was selected as the focus of the case study, and four independent projects launched by the consortia were selected as embedded units of analysis. The research was conducted in three phases. In Phase I, the literature was reviewed and a framework for the assessment of new ventures was developed. In Phase II, the framework was used to guide data collection and the formation of the case database.
Qualitative analysis methods (Patton 1990) were used to analyze transcripts from sixteen semi-structured interviews of consortia partners, together with project documents. The data analysis from this phase produced an embedded unit-of-analysis summary for each consortia project. These summaries were validated for each of the four units analyzed and added to the case database. In the third phase, the case was constructed and validated by consortia members from the government agency responsible for consortia assessment. The research produced an in-depth case study of the unique development considerations for university, government agency, and private industry consortia, in relation to traditional assessment models and considerations for private sector ventures. In addition, directions for future research on the assessment, development, and management of university, industry, and government consortia were developed.
