
    Business Process Innovation using the Process Innovation Laboratory

    Most organizations today are required not only to establish effective business processes but also to accommodate changing business conditions at an increasing rate. Many business processes extend beyond the boundary of the enterprise into the supply chain, and the information infrastructure is therefore critical. Today nearly every business relies on its Enterprise System (ES) for process integration, and future generations of enterprise systems will increasingly be driven by business process models. Consequently, process modeling and improvement will become vital for business process innovation (BPI) in future organizations. There is a significant body of knowledge on various aspects of process innovation, e.g. on conceptual modeling, business processes, supply chains and enterprise systems. Still, an overall comprehensive and consistent theoretical framework with guidelines for practical application has not been identified. The aim of this paper is to establish a conceptual framework for business process innovation in the supply chain based on advanced enterprise systems. The main approach to business process innovation in this context is to create a new methodology for exploring process models and patterns of applications. The paper thus presents a new concept for business process innovation called the Process Innovation Laboratory, a.k.a. the Π-Lab. The Π-Lab is a comprehensive framework for BPI using advanced enterprise systems: a collaborative workspace for experimenting with process models and an explorative approach to studying integrated modeling in a controlled environment. The Π-Lab facilitates innovation by using an integrated action-learning approach to process modeling that includes contemporary technological, organizational and business perspectives.
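
    Although the Π-Lab itself is a conceptual framework, the kind of process-model exploration it describes can be made concrete. Below is a minimal sketch, in Python, of one common representation: a business process as a directed graph, with a basic structural check that every activity is reachable from the start event. The order-to-cash fragment, the activity names, and the graph encoding are illustrative assumptions, not the paper's notation.

```python
# Minimal sketch: a business process model as a directed graph, with a
# basic structural check (every activity reachable from the start event).
from collections import deque

# Order-to-cash fragment: activity -> successor activities (invented data)
process = {
    "start": ["receive_order"],
    "receive_order": ["check_credit"],
    "check_credit": ["pick_goods", "reject_order"],
    "pick_goods": ["ship_goods"],
    "ship_goods": ["invoice"],
    "reject_order": ["end"],
    "invoice": ["end"],
    "end": [],
}

def reachable(model, source="start"):
    """Return the set of activities reachable from the source node."""
    seen, frontier = {source}, deque([source])
    while frontier:
        for nxt in model[frontier.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

unreachable = set(process) - reachable(process)
print("unreachable activities:", unreachable or "none")
```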

    Knowledge Warehouse: An Architectural Integration of Knowledge Management, Decision Support, Artificial Intelligence and Data Warehousing

    Decision support systems (DSS) are becoming increasingly critical to the daily operation of organizations. Data warehousing, an integral part of this, provides an infrastructure that enables businesses to extract, cleanse, and store vast amounts of data. The basic purpose of a data warehouse is to empower knowledge workers with information that allows them to make decisions based on a solid foundation of fact. However, only a fraction of the needed information exists on computers; the vast majority of a firm’s intellectual assets exist as knowledge in the minds of its employees. What is needed is a new generation of knowledge-enabled systems that provides the infrastructure needed to capture, cleanse, store, organize, leverage, and disseminate not only data and information but also the knowledge of the firm. The purpose of this paper is to propose, as an extension to the data warehouse model, a knowledge warehouse (KW) architecture that will not only facilitate the capturing and coding of knowledge but also enhance the retrieval and sharing of knowledge across the organization. The knowledge warehouse proposed here suggests a different direction for DSS in the next decade, based on an expanded purpose of DSS: namely, that the purpose of DSS is knowledge improvement. This expanded purpose also suggests that the effectiveness of a DSS will, in the future, be measured by how well it promotes and enhances knowledge, how well it improves the mental model(s) and understanding of the decision maker(s), and thereby how well it improves his/her decision making.
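
    To make the proposed extension concrete, here is a minimal illustrative sketch (not the authors' architecture) of the difference between a warehouse fact and a knowledge-enriched record: the latter carries provenance, an expert annotation, and a coded decision rule, so retrieval returns decision context rather than raw numbers. All names and data are invented for the example.

```python
# Illustrative sketch: wrapping a data-warehouse fact with knowledge
# artifacts -- provenance, an expert annotation, and a reusable rule.
from dataclasses import dataclass, field

@dataclass
class FactRecord:
    """A conventional data-warehouse fact."""
    region: str
    quarter: str
    revenue: float

@dataclass
class KnowledgeRecord:
    """A fact plus captured, coded knowledge for later reuse."""
    fact: FactRecord
    source: str                                 # provenance of the knowledge
    annotation: str                             # tacit insight made explicit
    rules: list = field(default_factory=list)   # coded decision heuristics

kw = [
    KnowledgeRecord(
        fact=FactRecord("EMEA", "2023Q4", 1.2e6),
        source="regional sales lead",
        annotation="Q4 spike driven by a one-off government contract.",
        rules=["discount Q4 revenue when forecasting baseline demand"],
    )
]

# A knowledge-aware query returns the decision context alongside the number.
for rec in kw:
    if rec.fact.region == "EMEA":
        print(rec.fact.revenue, "-", rec.annotation)
```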

    Towards Adaptive Enterprises Using Digital Twins

    Modern enterprises are large complex systems operating in highly dynamic environments, thus requiring quick responses to a variety of change drivers. Moreover, they are systems of systems in which understanding is available only in localized contexts, and even that understanding is typically partial and uncertain. With overall system behaviour hard to know a priori, and with conventional techniques for system-wide analysis either lacking in rigour or defeated by the scale of the problem, current practice often relies exclusively on human expertise for monitoring and adaptation. We present an approach that combines ideas from modeling & simulation, reinforcement learning and control theory to make enterprises adaptive. The approach hinges on the concept of a Digital Twin: a set of relevant models that are amenable to analysis and simulation. The paper illustrates the approach in two real-world use cases.
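
    As a rough illustration of the adaptive loop sketched in the abstract, the following Python fragment uses a toy digital twin (a crude queueing-cost model) to evaluate candidate adaptation actions, and a simple epsilon-greedy learner to converge on the best one. The model, the action set, and the reward are all invented assumptions; they only show how simulation on a twin can stand in for experimentation on the live enterprise.

```python
# Toy digital twin + epsilon-greedy learner: adaptation actions are
# evaluated against the twin, never against the production system.
import random

def twin_simulate(servers, arrival_rate=8.0, service_rate=3.0):
    """Toy twin: negative total cost of queueing delay plus staffing."""
    utilisation = arrival_rate / (servers * service_rate)
    backlog = 100.0 if utilisation >= 1.0 else utilisation / (1 - utilisation)
    return -(backlog + 2.0 * servers)           # reward = negative cost

actions = [1, 2, 3, 4, 5]                       # candidate server counts
value = {a: 0.0 for a in actions}
counts = {a: 0 for a in actions}

for step in range(500):
    # Explore 10% of the time, otherwise exploit the best estimate so far.
    a = random.choice(actions) if random.random() < 0.1 else max(value, key=value.get)
    reward = twin_simulate(a)                   # evaluate on the twin
    counts[a] += 1
    value[a] += (reward - value[a]) / counts[a] # incremental mean update

print("recommended adaptation:", max(value, key=value.get), "servers")
```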

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author’s and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: “Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent.” (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan’s predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT in the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central for the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings a lot of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
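
    A small example of the semantically-informed retrieval and service brokering described above, written in Python with the rdflib library (an assumption; AKT's own tooling is not shown here). A broker-style SPARQL query finds services that advertise a needed capability; the namespace and data are invented.

```python
# Broker-style lookup over an RDF service registry using rdflib
# (assumed installed: pip install rdflib).
from rdflib import Graph, Literal, Namespace, RDF

AKT = Namespace("http://example.org/akt/")   # hypothetical namespace
g = Graph()
g.add((AKT.service1, RDF.type, AKT.KnowledgeService))
g.add((AKT.service1, AKT.capability, Literal("ontology-mapping")))
g.add((AKT.service2, RDF.type, AKT.KnowledgeService))
g.add((AKT.service2, AKT.capability, Literal("information-retrieval")))

# Find every service advertising the capability we need.
results = g.query("""
    PREFIX akt: <http://example.org/akt/>
    SELECT ?service WHERE {
        ?service a akt:KnowledgeService ;
                 akt:capability "ontology-mapping" .
    }
""")
for row in results:
    print(row.service)
```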

    Ethical Control of Unmanned Systems: lifesaving/lethal scenarios for naval operations

    Prepared for: Raytheon Missiles & Defense under NCRADA-NPS-19-0227.

    This research in Ethical Control of Unmanned Systems applies precepts of Network Optional Warfare (NOW) to develop a three-step Mission Execution Ontology (MEO) methodology for validating, simulating, and implementing mission orders for unmanned systems. First, mission orders are represented in ontologies that are understandable by humans and readable by machines. Next, the MEO is validated and tested for logical coherence using Semantic Web standards. The validated MEO is then refined for implementation in simulation and visualization, and this process is iterated until the MEO is ready for implementation. The methodology is applied to four Naval scenarios, in order of the increasing challenges that the operational environment and the adversary impose on the Human-Machine Team. The extent of the challenge to Ethical Control in each scenario is used to refine the MEO for the unmanned system. The research also considers Data-Centric Security and blockchain distributed ledgers as enabling technologies for Ethical Control. Data-Centric Security is a combination of structured messaging, efficient compression, digital signature, and document encryption, applied in the correct order, for round-trip messaging. A blockchain distributed ledger has the potential to add further integrity measures for aggregated message sets, confirming receipt/response/sequencing without undetected message loss. When implemented, these technologies together form the end-to-end data security that ensures mutual trust and command authority in real-world operational environments, despite the potential presence of interfering network conditions, intermittent gaps, or opponent intercept. A coherent Ethical Control approach to command and control of unmanned systems is thus feasible. This research therefore concludes that maintaining human control of unmanned systems over long durations and distances, in denied, degraded, and deceptive environments, is possible through well-defined mission orders and data security technologies. Finally, as the human role remains essential in Ethical Control of unmanned systems, this research recommends the development of an unmanned system qualification process for Naval operations, as well as additional research prioritized by urgency and impact.

    Approved for public release; distribution is unlimited.
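
    As an illustration of the "correct order" that the abstract prescribes for Data-Centric Security (structure, compress, sign, encrypt on the way out; reverse on the way in), here is a simplified Python round trip. The algorithms are stand-in assumptions: zlib for compression, an HMAC in place of a true digital signature, and Fernet from the cryptography package in place of the project's actual message standards and key management.

```python
# Simplified round-trip messaging: structure -> compress -> sign -> encrypt,
# then the reverse on receipt. Keys are hard-coded stand-ins for real
# key management; HMAC stands in for an actual digital signature.
import hashlib, hmac, json, zlib
from cryptography.fernet import Fernet   # assumed: pip install cryptography

SIGN_KEY = b"pre-shared-signing-key"     # illustrative pre-shared key
cipher = Fernet(Fernet.generate_key())   # stand-in for a shared session key

def send(order: dict) -> bytes:
    payload = zlib.compress(json.dumps(order).encode())          # structure, compress
    tag = hmac.new(SIGN_KEY, payload, hashlib.sha256).digest()   # sign
    return cipher.encrypt(tag + payload)                         # encrypt last

def receive(blob: bytes) -> dict:
    data = cipher.decrypt(blob)                                  # decrypt first
    tag, payload = data[:32], data[32:]                          # SHA-256 HMAC is 32 bytes
    expected = hmac.new(SIGN_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):                   # verify before trusting
        raise ValueError("integrity check failed")
    return json.loads(zlib.decompress(payload))                  # decompress, parse

print(receive(send({"mission": "search-and-rescue", "sequence": 1})))
```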

    Teaching Analytics: A Demonstration of Association Discovery with SAS Enterprise Miner

    In the current age of data analytics, there has been a push for technologies that allow interactive analysis of extensive amounts of quickly produced, highly varied data. These technologies require people (nicknamed “data scientists”) from many business disciplines who are capable of managing and analyzing this data for use in decision-making processes. In order to educate and train more such people, there has been an increase in the teaching of analytical tools in both Management Information Systems (MIS) and Business Analytics (BA) programs. This article describes an exercise on business analytics specially tailored for the Introduction to MIS or BA course. The main goal of the exercise is to educate first-year business students about the importance and usefulness of data analytics without discouraging them with excessive coverage of technical software details.
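
    The exercise itself is built on SAS Enterprise Miner; the short Python sketch below only illustrates the underlying idea of association discovery, computing the support and confidence of a candidate rule over a handful of invented market-basket transactions.

```python
# Association discovery in miniature: support and confidence of the
# rule {diapers} -> {beer} over invented market-basket data.
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

antecedent, consequent = {"diapers"}, {"beer"}
conf = support(antecedent | consequent) / support(antecedent)
print(f"support={support(antecedent | consequent):.2f}, confidence={conf:.2f}")
```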

    Ontological Engineering: What are Ontologies and How Can We Build Them?

    Ontologies are formal, explicit specifications of shared conceptualizations. There is much literature on what they are, how they can be engineered, and where they can be used inside applications. All of this literature can be grouped under the term “Ontological Engineering,” which is defined as the set of activities that concern the ontology development process, the ontology lifecycle, the principles, methods and methodologies for building ontologies, and the tool suites and languages that support them. In this chapter we provide an overview of Ontological Engineering, describing the current trends, issues, and problems.
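
    As a minimal illustration of building an ontology programmatically, the following Python sketch uses the rdflib library and the RDFS vocabulary to declare two classes, a subclass relation, and a typed property. The domain and URIs are invented; real ontological engineering, as the chapter stresses, also covers methodology, lifecycle and tooling.

```python
# Building a tiny RDFS ontology with rdflib (assumed installed).
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/onto#")   # hypothetical namespace
g = Graph()

# Shared conceptualization: a Researcher is a kind of Person.
g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.Researcher, RDF.type, RDFS.Class))
g.add((EX.Researcher, RDFS.subClassOf, EX.Person))

# Formal, explicit specification of a relation between concepts.
g.add((EX.Publication, RDF.type, RDFS.Class))
g.add((EX.authorOf, RDF.type, RDF.Property))
g.add((EX.authorOf, RDFS.domain, EX.Researcher))
g.add((EX.authorOf, RDFS.range, EX.Publication))

print(g.serialize(format="turtle"))
```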