
    Agent Based Modeling and Simulation: An Informatics Perspective

    The term computer simulation refers to the use of a computational model to improve the understanding of a system's behavior and/or to evaluate strategies for its operation, in explanatory or predictive schemes. There are cases in which practical or ethical reasons make direct observation impossible: in these cases, the possibility of carrying out 'in-machina' experiments may represent the only way to study, analyze and evaluate models of those realities. Many situations and systems are characterized by the presence of autonomous entities whose local behaviors (actions and interactions) determine the evolution of the overall system; agent-based models are particularly suited to supporting the definition of models of such systems, and also the design and implementation of simulators. Agent-based models and Multi-Agent Systems (MAS) have been adopted to simulate very different kinds of complex systems, from socio-economic systems to the elaboration of scenarios for logistics optimization, from biological systems to urban planning. This paper discusses the specific aspects of this approach to modeling and simulation from the perspective of Informatics, describing the typical elements of an agent-based simulation model and the relevant research. Keywords: Multi-Agent Systems, Agent-Based Modeling and Simulation.
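The idea that local agent behaviors determine the evolution of the overall system can be sketched in a few lines of code. The following toy contagion model is purely illustrative: the agents, parameters and dynamics are invented here for the sketch, not taken from the paper.

```python
import random

random.seed(42)

class Agent:
    """A minimal agent: local state plus a local interaction rule."""
    def __init__(self, infected=False):
        self.infected = infected

    def interact(self, other, p_transmit=0.3):
        # Local rule: a trait may pass between a pair of agents.
        if other.infected and not self.infected and random.random() < p_transmit:
            self.infected = True

def simulate(n_agents=100, n_steps=50):
    # One initially "infected" agent; everything else emerges from
    # repeated pairwise interactions, with no global controller.
    agents = [Agent(infected=(i == 0)) for i in range(n_agents)]
    history = []
    for _ in range(n_steps):
        for agent in agents:
            agent.interact(random.choice(agents))
        history.append(sum(a.infected for a in agents))
    return history

history = simulate()
print(history[-1])  # number of affected agents after 50 steps
```

Even in this toy, the global trajectory (the `history` curve) is not specified anywhere in the code; it emerges from the local interaction rule, which is the property that makes agent-based models attractive for the systems the abstract describes.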

    Filling the Ontology Space for Coalition Battle Management Language

    The Coalition Battle Management Language is a language for representing and exchanging plans, orders, and reports across live, constructive and robotic forces in multi-service, multi-national and multi-organizational operations. Standardization efforts in the Simulation Interoperability Standards Organization seek to define this language through three parallel activities: (1) specify a sufficient data model to unambiguously define a set of orders using the Joint Command, Control, and Consultation Information Exchange Data Model (JC3IEDM) as a starting point; (2) develop a formal grammar (lexicon and production rules) to formalize the definition of orders, requests, and reports; (3) develop a formal battle management ontology to enable conceptual interoperability across software systems. This paper focuses on the third activity, development of a formal battle management ontology, by describing an ontology space for potential technical approaches. An ontology space is a notional three dimensional space with qualitative axes representing: (1) the Ontological Spectrum; (2) the Levels of Conceptual Interoperability Model; and (3) candidate representation sources that can contribute to conceptual interoperability for the Coalition Battle Management Language. The first dimension is the Ontological Spectrum, which shows increasing levels of semantic formalism using various ontology representation artifacts. The second dimension is the Levels of Conceptual Interoperability Model, which describes varying levels of interoperability that can be attained across systems. The third dimension is a survey of likely candidate sources to provide the representation elements required for interoperability. 
This third dimension will be further described in relation to the artifact capabilities of the first dimension and the conceptual interoperability capabilities of the second dimension, to highlight what is possible for ontological representation in C-BML with existing sources, and what needs to be added. The paper identifies requirements for building the ontology artifacts (starting with a controlled vocabulary) for conceptual interoperability, the highest level described in the LCIM, and gives a path ahead for increasingly logical artifacts.
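The notional three-dimensional ontology space can be pictured as a simple data structure: each candidate approach is a point with a position on each axis. This sketch is illustrative only; the axis labels below are simplified stand-ins for the actual Ontological Spectrum and LCIM levels, and the class name is invented here.

```python
from dataclasses import dataclass

# Simplified, ordered stand-ins for the two qualitative axes; the real
# level names come from the Ontological Spectrum and LCIM literature.
ONTOLOGICAL_SPECTRUM = ["taxonomy", "thesaurus", "conceptual model", "logical theory"]
LCIM_LEVELS = ["technical", "syntactic", "semantic", "pragmatic", "dynamic", "conceptual"]

@dataclass
class OntologySpacePoint:
    """A point in the notional three-dimensional ontology space."""
    artifact: str           # position on the Ontological Spectrum
    interoperability: str   # level reached in the LCIM
    source: str             # candidate representation source, e.g. "JC3IEDM"

    def reaches_conceptual(self) -> bool:
        # Conceptual interoperability is the highest LCIM level listed above.
        return self.interoperability == LCIM_LEVELS[-1]

p = OntologySpacePoint(artifact="conceptual model",
                       interoperability="semantic",
                       source="JC3IEDM")
print(p.reaches_conceptual())  # False: a more formal artifact is still needed
```

Plotting candidate sources as such points is one way to make visible the gap the paper describes: which existing sources stop short of the conceptual level, and what must be added to close the distance.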

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties. In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: “Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent.” (McLuhan 1962, p.5, emphasis in original) Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised.
Yet the integration of electronic systems as open systems remains in its infancy. Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain. The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities. The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide.
As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge. AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management. Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies. Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications.
The brokering meta-services that are envisaged will have to deal with this heterogeneity. The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge; they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough. Complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.

    Financial Information Mediation: A Case Study of Standards Integration for Electronic Bill Presentment and Payment Using the COIN Mediation Technology

    Each player in the financial industry, whether bank, stock exchange, government agency, or insurance company, operates its own financial information system or systems. By its very nature, financial information, like the money that it represents, changes hands. Therefore the interoperation of financial information systems is the cornerstone of the financial services they support. E-services frameworks such as web services represent an unprecedented opportunity for the flexible interoperation of financial systems. Naturally, the critical economic role and the complexity of financial information have led to the development of various standards. Yet standards alone are not a panacea: different groups of players use different standards, or different interpretations of the same standard. We believe that the solution lies in the convergence of flexible E-services, such as web services, with the semantically rich meta-data promised by the semantic Web; a mediation architecture can then be used for the documentation, identification, and resolution of semantic conflicts arising from the interoperation of heterogeneous financial services. In this paper we illustrate the nature of the problem in the Electronic Bill Presentment and Payment (EBPP) industry and the viability of the solution we propose. We describe and analyze the integration of services using four different formats: the IFX, OFX and SWIFT standards, and an example proprietary format. To accomplish this integration we use the COntext INterchange (COIN) framework. The COIN architecture leverages a model of sources' and receivers' contexts, in reference to a rich domain model or ontology, for the description and resolution of semantic heterogeneity. Singapore-MIT Alliance (SMA)
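The core mediation idea, resolving conflicts by comparing the declared contexts of source and receiver, can be illustrated with a deliberately tiny sketch. The context names, field names and scale factors below are invented for illustration; they are not taken from IFX, OFX, SWIFT, or the actual COIN implementation.

```python
# Hypothetical contexts for a bill amount exchanged between two systems.
# One system reports amounts in dollars, the other in cents; each context
# declares its own convention rather than hard-coding the conversion.
CONTEXTS = {
    "source_A": {"currency": "USD", "scale": 1},     # whole dollars
    "source_B": {"currency": "USD", "scale": 0.01},  # cents
}

def mediate(amount, from_ctx, to_ctx):
    """Resolve a scale-factor conflict between two declared contexts."""
    src, dst = CONTEXTS[from_ctx], CONTEXTS[to_ctx]
    if src["currency"] != dst["currency"]:
        # A real mediator would consult an exchange-rate source here.
        raise ValueError("currency conversion needs an external rate source")
    # Convert to the canonical unit, then to the receiver's unit.
    return amount * src["scale"] / dst["scale"]

print(mediate(42.50, "source_A", "source_B"))  # 4250.0 (cents)
```

The point of declaring contexts, rather than writing pairwise converters, is that adding an n-th format requires one context description instead of n-1 new translators, which is what makes the approach attractive when IFX, OFX, SWIFT and proprietary formats must all interoperate.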

    Cities, systems and structures: an ontological approach to urban studies

    Ontological issues lie at the heart of the epistemological, methodological and theoretical discussions over the workings and operations of cities; however, as Castells and Zukin point out, they often find themselves mired in the midst of the praxis of urban studies, for reasons not entirely clear. In this work we attempt to trace a path through the ontology of urban studies, seeking to show the reasons for this apparent omission, as well as a possible line of convergence, through the lens of systems theory, towards a common ontology. We propose a set of analytical classes drawn from the literature, motivated by the need to discuss social phenomena, tracing some of the ontological compromises made in light of such entities. In line with that, we argue there is a need for a transparent phenomenological epistemology, tracing some methodological implications of such processes. We conclude by drawing some scientific and practical implications of these debates for a more general discussion of ontology in urban studies and research.

    Interactive Knowledge Construction in the Collaborative Building of an Encyclopedia

    One of the major challenges of Applied Artificial Intelligence is to provide environments where high-level human activities, like learning, constructing theories or performing experiments, are enhanced by Artificial Intelligence technologies. This paper starts with the description of an ambitious project: EnCOrE. The specific real-world EnCOrE scenario, representative of a much wider class of potential applicative contexts, is dedicated to the building of an Encyclopedia of Organic Chemistry in the context of Virtual Communities of experts and students. Its description is followed by a brief survey of some major AI questions and propositions in relation to the problems raised by the EnCOrE project. The third part of the paper starts with definitions of a set of “primitives” for rational actions, and then integrates them into a unified conceptual framework for the interactive construction of knowledge. Finally, we sketch out protocols aimed at guiding both the collaborative construction process and the collaborative learning process in the EnCOrE project. The current major result is the emerging conceptual model supporting interaction between human agents and AI tools integrated in Grid services within a socio-constructivist approach, consisting of cycles of deductions, inductions and abductions upon facts (the shared reality) and concepts (their subjective interpretation), submitted to negotiation and finally converging to a socially validated consensus.

    Aesthetics after the Ontological Turn: An Ecological Approach to Artificial Creativity

    The development of chatbots and other generative systems powered by AI, particularly the latest version of ChatGPT, has rekindled many discussions on topics such as intelligence and creativity, even leading some to suggest that we may be undergoing a “fourth narcissistic wound”. Starting from Margaret Boden’s approach to creativity, we argue that while computational systems have always excelled at combinatorial creativity, current AI systems stand out at exploratory creativity but are perceived as still falling flat regarding transformational creativity. This paper explores some of the reasons for this, including how, despite the immensity of the conceptual space that results from the training of large language models and other machine learning systems, these systems do not, for the most part, share models of the world with us, and thus become cognitively inaccessible. This paper argues that rather than trying to bring AI systems to imitate us, our umwelt and our psychology, to understand their full creative potential we need to understand them from an ecological and non-anthropocentric perspective, which implies an ontological turn both in science and technology studies and in art studies.

    On the emergent Semantic Web and overlooked issues

    The emergent Semantic Web, despite being in its infancy, has already received a lot of attention from academia and industry. This has resulted in an abundance of prototype systems and discussion, most of which is centred on the underlying infrastructure. However, when we critically review the work done to date, we realise that there is little discussion with respect to the vision of the Semantic Web. In particular, there is an observed dearth of discussion on how to deliver knowledge sharing in an environment such as the Semantic Web in an effective and efficient manner. There are many overlooked issues, ranging from agents and trust to hidden assumptions made with respect to knowledge representation and robust reasoning in a distributed environment. These issues could potentially hinder further development if not considered at the early stages of designing Semantic Web systems. In this perspectives paper, we aim to help engineers and practitioners of the Semantic Web by raising awareness of these issues.

    Semantically intelligent semi-automated ontology integration

    An ontology is a means of categorizing and storing information. Web ontologies help in retrieving precise, relevant information over the web. However, the problem of heterogeneity between ontologies may arise when multiple ontologies of the same domain are used. The integration of ontologies provides a solution to this heterogeneity problem, and to the problem of interoperability in knowledge-based systems. Ontology integration provides a mechanism for finding the semantic associations between a pair of reference ontologies based on their concepts. Many researchers have worked on the problem of ontology integration; however, multiple issues related to it remain unaddressed. This dissertation investigates the ontology integration problem and proposes a layer-based enhanced framework as a solution. In the concept matching process of ontology integration, the comparison between concepts of the reference ontologies is based on their semantics as well as their syntax. The semantic relationship of a concept with other concepts across the ontologies, and the provision of user confirmation (only for the problematic cases), are also taken into account in this process. The proposed framework is implemented and validated by comparing the proposed concept matching technique with existing techniques. Test case scenarios are provided in order to compare and analyse the proposed framework in the analysis phase. The results of the experiments demonstrate the efficacy and success of the proposed framework.
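The combination of syntactic and semantic comparison with user confirmation for borderline cases can be sketched as follows. This is not the dissertation's framework: the synonym table is a toy stand-in for a semantic resource, and the weights and threshold are invented for the illustration.

```python
from difflib import SequenceMatcher

# Toy synonym table standing in for a semantic resource such as WordNet;
# pairs are stored in sorted order so lookup is symmetric.
SYNONYMS = {
    ("automobile", "car"),
    ("author", "writer"),
}

def syntactic_score(a, b):
    """String similarity between two concept labels, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_score(a, b):
    """1.0 if the labels are identical or known synonyms, else 0.0."""
    pair = tuple(sorted((a.lower(), b.lower())))
    return 1.0 if a.lower() == b.lower() or pair in SYNONYMS else 0.0

def match(concept_a, concept_b, w_syn=0.4, w_sem=0.6, threshold=0.5):
    """Weighted combination; borderline cases are referred to the user."""
    score = (w_syn * syntactic_score(concept_a, concept_b)
             + w_sem * semantic_score(concept_a, concept_b))
    if abs(score - threshold) < 0.1:
        return "ask_user"  # problematic case: request user confirmation
    return "match" if score >= threshold else "no_match"

print(match("Car", "Automobile"))
```

Note how semantics rescues pairs that syntax alone would reject ("Car" and "Automobile" share almost no characters), while the `ask_user` band models the dissertation's idea of involving the user only for the problematic cases rather than for every comparison.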