
    A FOUNDATION FOR OPEN INFORMATION ENVIRONMENTS

    Traditionally, information systems were developed within organizations for use by known audiences for known purposes. Advances in information technology have changed this landscape dramatically. The reach of information systems frequently extends beyond organizational boundaries for use by unknown audiences and for purposes not originally anticipated. Individuals and informal communities can generate and use information in ways previously restricted to formal organizations. We term applications with these characteristics open information environments (OIEs). OIEs are marked by diversity of the information available, flexibility in accommodating new sources, users, and uses, and information management with minimal controls on structure, content, and access. This creates opportunities to generate new information and use it in unexpected ways. However, OIEs also come with challenges in managing the semantic diversity, flexibility of use, and information quality issues arising from the range of users and the lack of controls. In this paper, we propose a set of principles for managing OIEs effectively. We outline a research program to examine the potential of OIEs, the challenges they present, and how to design OIEs to realize the benefits while mitigating the challenges. We highlight our ongoing research in this area and conclude with a call for more research on this important phenomenon.

    Information practices of disaster preparedness professionals in multidisciplinary groups

    OBJECTIVE: This article summarizes the results of a descriptive qualitative study addressing the question: what are the information practices of the various professionals involved in disaster preparedness? We present key results but focus on issues of choice and adaptation of models and theories for the study. METHODS: Primary and secondary literature on theory and models of information behavior was consulted. Taylor's Information Use Environments (IUE) model, Institutional Theory, and Dervin's Sense-Making metatheory were used in the design of an open-ended interview schedule. Twelve individual face-to-face interviews were conducted with disaster professionals drawn from the Pennsylvania Preparedness Leadership Institute (PPLI) scholars. The IUE model served as a preliminary coding framework for the transcribed interviews. RESULTS: Disaster professionals varied in their use of libraries, peer-reviewed literature, and information management techniques, but many practices were similar across professions, including heavy Internet and email use, satisficing, and a preference for sources that are socially and physically accessible. CONCLUSIONS: The IUE model provided an excellent foundation for the coding scheme but required modification to place the workplace in the larger social context of the current information society. It is not possible to confidently attribute all work-related information practices to professional culture. The differences in information practice observed may arise from professional training and organizational environment, while many of the observed similarities seem to arise from everyday information practices common to non-work settings.

    Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"

    According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust, and efficient. The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system, and the overall design, description, and performance of the system itself. Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
    • The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
    • The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
    • The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise to their deletion. The behaviour of the entities may vary over time.
    • The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
    The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems. This workshop covers the aspects related to languages and programming environments as well as analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. A year after the start of the projects, the goal of the workshop is to fix the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, as well as to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative. We acknowledge the Dipartimento di Informatica and Tlc of the University of Trento, the Comune di Rovereto, and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for the valuable collaboration.

    ConceptFusion: Open-set Multimodal 3D Mapping

    Building 3D maps of the environment is central to robot navigation, planning, and interaction with objects in a scene. Most existing approaches that integrate semantic concepts with 3D maps largely remain confined to the closed-set setting: they can only reason about a finite set of concepts, pre-defined at training time. Further, these maps can only be queried using class labels or, in recent work, using text prompts. We address both these issues with ConceptFusion, a scene representation that is (i) fundamentally open-set, enabling reasoning beyond a closed set of concepts, and (ii) inherently multimodal, enabling a diverse range of possible queries to the 3D map, from language, to images, to audio, to 3D geometry, all working in concert. ConceptFusion leverages the open-set capabilities of today's foundation models, pre-trained on internet-scale data, to reason about concepts across modalities such as natural language, images, and audio. We demonstrate that pixel-aligned open-set features can be fused into 3D maps via traditional SLAM and multi-view fusion approaches. This enables effective zero-shot spatial reasoning without any additional training or finetuning, and retains long-tailed concepts better than supervised approaches, outperforming them by a margin of more than 40% in 3D IoU. We extensively evaluate ConceptFusion on a number of real-world datasets, simulated home environments, a real-world tabletop manipulation task, and an autonomous driving platform. We showcase new avenues for blending foundation models with 3D open-set multimodal mapping. For more information, visit our project page at https://concept-fusion.github.io or watch our 5-minute explainer video at https://www.youtube.com/watch?v=rkXgws8fiD
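    The fusion-and-query idea described above can be made concrete with a small sketch. The class below is an assumed simplification, not the ConceptFusion implementation: per-point features (random stand-ins for pixel-aligned foundation-model features) are averaged into voxel cells, and any query embedding in the same feature space retrieves cells by cosine similarity.

```python
# Minimal sketch of open-set feature fusion into a 3D map (assumed design,
# not the ConceptFusion implementation): per-point features are averaged
# across views, and a query embedding from any modality retrieves map cells
# by cosine similarity.
import numpy as np

class OpenSetPointMap:
    def __init__(self, voxel_size: float = 0.05):
        self.voxel_size = voxel_size
        self.features = {}   # voxel index -> (running feature sum, observation count)

    def fuse(self, points: np.ndarray, feats: np.ndarray) -> None:
        """Accumulate per-point features for the voxels the 3D points fall into."""
        keys = np.floor(points / self.voxel_size).astype(int)
        for key, f in zip(map(tuple, keys), feats):
            s, n = self.features.get(key, (np.zeros_like(f), 0))
            self.features[key] = (s + f, n + 1)

    def query(self, embedding: np.ndarray, top_k: int = 5):
        """Rank map cells by cosine similarity to a (text/image/audio) query embedding."""
        keys = list(self.features.keys())
        fused = np.stack([s / n for s, n in self.features.values()])
        fused /= np.linalg.norm(fused, axis=1, keepdims=True) + 1e-8
        q = embedding / (np.linalg.norm(embedding) + 1e-8)
        sims = fused @ q
        order = np.argsort(-sims)[:top_k]
        return [(keys[i], float(sims[i])) for i in order]

# Toy usage with random stand-ins for real foundation-model features.
rng = np.random.default_rng(0)
m = OpenSetPointMap()
m.fuse(rng.uniform(0, 1, (100, 3)), rng.normal(size=(100, 512)))
print(m.query(rng.normal(size=512), top_k=3))
```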

    An efficient and versatile approach to trust and reputation using hierarchical Bayesian modelling

    In many dynamic open systems, autonomous agents must interact with one another to achieve their goals. Such agents may be self-interested and, when trusted to perform an action, may betray that trust by not performing the action as required. Due to the scale and dynamism of these systems, agents will often need to interact with other agents with which they have little or no past experience. Each agent must therefore be capable of assessing and identifying reliable interaction partners, even if it has no personal experience with them. To this end, we present HABIT, a Hierarchical And Bayesian Inferred Trust model for assessing how much an agent should trust its peers based on direct and third-party information. This model is robust in environments in which third-party information is malicious, noisy, or otherwise inaccurate. Although existing approaches claim to achieve this, most rely on heuristics with little theoretical foundation. In contrast, HABIT is based exclusively on principled statistical techniques: it can cope with multiple discrete or continuous aspects of trustee behaviour; it does not restrict agents to using a single shared representation of behaviour; it can improve assessment by using any observed correlation between the behaviour of similar trustees or information sources; and it provides a pragmatic solution to the whitewasher problem (in which unreliable agents assume a new identity to avoid a bad reputation). In this paper, we describe the theoretical aspects of HABIT and present experimental results that demonstrate its ability to predict agent behaviour in both a simulated environment and one based on data from a real-world webserver domain. In particular, these experiments show that HABIT can predict trustee performance based on multiple representations of behaviour, and is up to twice as accurate as BLADE, an existing state-of-the-art trust model that is statistically principled and has previously been shown to outperform a number of other probabilistic trust models.
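    As a point of reference for the statistical flavour of such models, the sketch below shows a plain Beta-Bernoulli trust update, a far simpler construction than the hierarchical HABIT model itself: direct observations and down-weighted third-party reports update a posterior over a trustee's reliability.

```python
# Minimal sketch of Bayesian trust estimation (a plain Beta-Bernoulli model,
# not the hierarchical HABIT model described above): observed successes and
# failures update a Beta posterior over the probability that a trustee
# fulfils its commitments.
from dataclasses import dataclass

@dataclass
class BetaTrust:
    alpha: float = 1.0  # prior pseudo-count of successful interactions
    beta: float = 1.0   # prior pseudo-count of failed interactions

    def update(self, successes: int, failures: int, weight: float = 1.0) -> None:
        """Fold in observations; third-party reports can be down-weighted."""
        self.alpha += weight * successes
        self.beta += weight * failures

    @property
    def expected_trust(self) -> float:
        return self.alpha / (self.alpha + self.beta)

trust = BetaTrust()
trust.update(successes=8, failures=2)               # direct experience
trust.update(successes=1, failures=4, weight=0.5)   # discounted third-party report
print(f"expected reliability: {trust.expected_trust:.2f}")
```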

    Ontological View-driven Semantic Integration in Open Environments

    In an open computing environment, such as the World Wide Web or an enterprise intranet, various information systems are expected to work together to support information exchange, processing, and integration. However, information systems are usually built by different people, at different times, to fulfil different requirements and goals. Consequently, in the absence of an architectural framework for information integration geared toward semantic integration, there are widely varying viewpoints and assumptions regarding what is essentially the same subject. Therefore, communication among the components supporting various applications is not possible without at least some translation. This problem, however, is much more than a simple agreement on tags or mappings between roughly equivalent sets of tags in related standards. Industry-wide initiatives and academic studies have shown that complex representation issues can arise. To deal with these issues, a deep understanding and appropriate treatment of semantic integration is needed. Ontology is an important and widely accepted approach to semantic integration. However, explicit ontologies usually do not accompany information systems. Rather, the associated semantics are implied within the supporting information model, which reflects a specific conceptualization and thus implicitly defines an ontological view. This research proposes to adopt ontological views to facilitate semantic integration for information systems in open environments. It proposes a theoretical foundation of ontological views, practical assumptions, and related solutions for the research issues. The proposed solutions focus on three aspects: the architecture of a semantic-integration-enabled environment, ontological view modeling and representation, and semantic equivalence relationship discovery. The solutions are applied to the collaborative intelligence project in the collaborative promotion/advertisement domain. Various quality aspects of the solutions are evaluated and future directions of the research are discussed.
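    To make the notion of semantic equivalence discovery concrete, the sketch below applies a purely lexical heuristic to two hypothetical schemas; it illustrates the problem only, not the ontological-view matching proposed in the research above.

```python
# Minimal sketch of semantic-equivalence candidate discovery between two
# information models (a lexical heuristic for illustration; the schemas and
# threshold are hypothetical).
from difflib import SequenceMatcher

def candidate_equivalences(terms_a, terms_b, threshold=0.5):
    """Pair up terms from two schemas whose names are lexically similar."""
    pairs = []
    for a in terms_a:
        for b in terms_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return sorted(pairs, key=lambda p: -p[2])

crm_schema = ["CustomerName", "PromotionCode", "AdChannel"]
erp_schema = ["customer_name", "promo_code", "advert_channel"]
print(candidate_equivalences(crm_schema, erp_schema))
```

    In practice, candidate pairs produced this way would still need validation against the implicit semantics of each information model, which is exactly where the ontological-view approach above is aimed.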

    Audio Visual Language Maps for Robot Navigation

    While interacting in the world is a multi-sensory experience, many robots continue to rely predominantly on visual perception to map and navigate in their environments. In this work, we propose Audio-Visual-Language Maps (AVLMaps), a unified 3D spatial map representation for storing cross-modal information from audio, visual, and language cues. AVLMaps integrate the open-vocabulary capabilities of multimodal foundation models pre-trained on Internet-scale data by fusing their features into a centralized 3D voxel grid. In the context of navigation, we show that AVLMaps enable robot systems to index goals in the map based on multimodal queries, e.g., textual descriptions, images, or audio snippets of landmarks. In particular, the addition of audio information enables robots to more reliably disambiguate goal locations. Extensive experiments in simulation show that AVLMaps enable zero-shot multimodal goal navigation from multimodal prompts and provide 50% better recall in ambiguous scenarios. These capabilities extend to mobile robots in the real world, navigating to landmarks referring to visual, audio, and spatial concepts. Videos and code are available on the project page: https://avlmaps.github.io
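    The multimodal goal-indexing idea can be illustrated with a toy sketch, assuming each modality produces a per-voxel relevance heatmap; combining the heatmaps is one simple way to disambiguate between candidate goal locations (the actual AVLMaps pipeline is not reproduced here).

```python
# Minimal sketch of multimodal goal indexing over a voxel grid (an assumed
# simplification): each modality scores every voxel, and multiplying the
# normalized scores disambiguates between candidate goals.
import numpy as np

def goal_from_multimodal_scores(score_maps):
    """Combine per-modality heatmaps and return the most likely goal voxel."""
    combined = np.ones_like(score_maps[0])
    for s in score_maps:
        combined *= s / (s.sum() + 1e-8)   # treat each map as a rough likelihood
    return np.unravel_index(np.argmax(combined), combined.shape)

rng = np.random.default_rng(1)
grid = (8, 8, 4)                               # toy voxel grid
text_scores = rng.uniform(size=grid)           # e.g. heatmap for "go to the door"
audio_scores = rng.uniform(size=grid)          # e.g. heatmap for a heard sound cue
print(goal_from_multimodal_scores([text_scores, audio_scores]))
```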

    OmniScribe – Enhancing AAR in an LVC Environment

    Innovations in live, virtual, and constructive (LVC) environments geared for US military joint force training allow more effective use of space and time for training exercises across the globe. As this use becomes more prominent, the need for a suitable after action review (AAR) tool that can incorporate an ever-increasing number of data sources is fast becoming a requirement. To perform an AAR fully, data from a variety of input sources must be saved, synchronized, and analyzed. It is important to equip military trainers with an effective tool that meets this need for comprehensive data in AARs, so as to maximize the effectiveness of LVC training environments. Iowa State University is developing an open source software tool for the U.S. Army to address shortcomings of existing AAR tools. Using an innovative modular, domain-independent API, users can combine inputs from multiple sources, such as simulation data, physiological sensor information, discrete events, and video feeds, into a single application. The aggregated information can then be replayed during an AAR session, allowing simulation event information to be supplemented with sources not traditionally incorporated in AAR and providing a framework to greatly enhance AAR. This paper describes such a system (OmniScribe) at its current stage of development, describing its API for the integration of disparate inputs within a single tool and illustrating it with a working prototype. It discusses the current state of the architectural framework, designed to allow users to add playback functionality by developing their own modules, and the prototype. Additionally, the paper briefly discusses the implications that a foundation of disparate data stream integration within LVC training will have for future real-time data mining, decision visualization, and deep behavioral analysis of trainee performance.
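    A minimal sketch of the kind of modular, domain-independent replay API described above is given below; the interface and module names are hypothetical, not OmniScribe's actual API. Each module exposes a stream of timestamped records, and the replayer merges the streams into one chronologically ordered feed.

```python
# Minimal sketch of a plug-in replay API (hypothetical names, not the
# OmniScribe API): each data-source module yields timestamped records, and
# the replayer merges them into a single time-ordered stream for review.
import heapq
from typing import Iterable, Iterator, Tuple

Record = Tuple[float, str, dict]  # (timestamp_s, source_name, payload)

class DataSourceModule:
    """Base class a plug-in module implements for one input stream."""
    name = "unnamed"
    def records(self) -> Iterable[Record]:
        raise NotImplementedError

class SimulationEvents(DataSourceModule):
    name = "sim"
    def records(self):
        yield (0.0, self.name, {"event": "exercise_start"})
        yield (4.2, self.name, {"event": "contact_reported"})

class HeartRateSensor(DataSourceModule):
    name = "physio"
    def records(self):
        yield (1.0, self.name, {"bpm": 72})
        yield (5.0, self.name, {"bpm": 94})

def replay(modules: Iterable[DataSourceModule]) -> Iterator[Record]:
    """Merge all module streams into one chronologically ordered stream."""
    yield from heapq.merge(*(m.records() for m in modules), key=lambda r: r[0])

for ts, src, payload in replay([SimulationEvents(), HeartRateSensor()]):
    print(f"{ts:5.1f}s [{src}] {payload}")
```

    The design point this illustrates is that new input types (video feeds, discrete events, additional sensors) can be added by writing another module against the same interface, without touching the replayer.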

    A mobile agent strategy for grid interoperable virtual organisations

    During the last few years, much effort has been put into developing grid computing and proposing an open and interoperable framework for grid resources capable of defining a decentralized control setting. Such environments may define new rules and actions relating to internal Virtual Organisation (VO) members, thereby posing new challenges for an extended cooperation model of grids. More specifically, VO policies, from the viewpoint of internal knowledge and capabilities, may be expressed in the form of intelligent agents, thus providing a more autonomous solution for inter-communicating members. In this paper, we propose an interoperable mobile agent model that migrates to any interacting VO member and, by travelling within each domain, allows resources to be discovered dynamically. The originality of our approach is the mobility mechanism, based on travelling and migration, which stores useful information along the route to each visited member. The method follows the Foundation for Intelligent Physical Agents (FIPA) standard, which provides an on-demand resource provisioning model for autonomous mobile agents. Finally, the decentralization of the proposed model is achieved by providing each member with a public profile of personal information, which is available upon request from any interconnected member during the resource discovery process.
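    The itinerary-based discovery idea can be sketched as follows; this is a conceptual stand-in only, since a real deployment would run on a FIPA-compliant agent platform with genuine code migration rather than in-process calls.

```python
# Minimal sketch of itinerary-based resource discovery by a mobile agent
# (a conceptual stand-in; member names and profile fields are hypothetical).
class VOMember:
    def __init__(self, name, public_profile):
        self.name = name
        self.public_profile = public_profile   # resources the member advertises

class MobileAgent:
    def __init__(self):
        self.collected = {}   # knowledge accumulated along the route

    def migrate(self, member: VOMember) -> None:
        """'Travel' to a member and record its advertised resources."""
        self.collected[member.name] = member.public_profile

    def discover(self, itinerary):
        for member in itinerary:
            self.migrate(member)
        return self.collected

vo = [VOMember("site-A", {"cpus": 64, "storage_tb": 10}),
      VOMember("site-B", {"cpus": 16, "gpus": 4})]
print(MobileAgent().discover(vo))
```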

    Relational Neurogenesis for Lifelong Learning Agents

    Reinforcement learning systems have shown tremendous potential for modeling meritorious behavior in virtual agents and robots. The ability to learn through continuous reinforcement and interaction with an environment removes the need for painstakingly curated datasets and hand-crafted features. However, the ability to learn multiple tasks in a sequential manner, referred to as lifelong or continual learning, remains unresolved. The search for lifelong learning algorithms creates the foundation for this work. While much research has been conducted on lifelong learning in supervised learning domains, the reinforced lifelong learning domain remains open for much exploration. Furthermore, current implementations either concentrate on preserving information in fixed-capacity networks or propose incrementally growing networks that randomly search through an unconstrained solution space. In order to develop a comprehensive lifelong learning algorithm, it seems essential to amalgamate these approaches into a condensed algorithm that can both perform neuroevolution and constrain network growth automatically. This thesis proposes a novel algorithm for continual learning using neurogenesis in reinforcement learning agents. It builds upon existing neuroevolutionary techniques and incorporates several new mechanisms for limiting memory resources while expanding neural network learning capacity. The algorithm is tested on a custom set of sequential virtual environments that emulate several meaningful scenarios for intellectually down-scaled species and autonomous robots. Additionally, a library for connecting an unconstrained range of machine learning tools, in a variety of programming languages, to the Unity3D simulation engine is proposed for the development of future learning algorithms and environments.
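    The core mechanism, growing network capacity only when learning stalls and only within a memory budget, can be sketched as below; this is an assumed illustration of the general idea, not the algorithm proposed in the thesis.

```python
# Minimal sketch of capacity-limited neurogenesis (assumed illustration, not
# the thesis algorithm): hidden units are added when a reward plateau is
# detected, but never beyond a fixed memory budget.
import numpy as np

class GrowingPolicyNet:
    def __init__(self, n_in, n_hidden, n_out, max_hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.rng, self.max_hidden = rng, max_hidden
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.rewards = []

    def forward(self, x):
        return self.W2 @ np.tanh(self.W1 @ x)

    def maybe_grow(self, reward, window=20, tol=1e-3, n_new=2):
        """Add neurons if average reward has stopped improving and budget allows."""
        self.rewards.append(reward)
        if len(self.rewards) < 2 * window or self.W1.shape[0] >= self.max_hidden:
            return False
        recent = np.mean(self.rewards[-window:])
        earlier = np.mean(self.rewards[-2 * window:-window])
        if recent - earlier < tol:
            new_rows = self.rng.normal(scale=0.1, size=(n_new, self.W1.shape[1]))
            self.W1 = np.vstack([self.W1, new_rows])
            # New output weights start at zero so existing behaviour is preserved.
            self.W2 = np.hstack([self.W2, np.zeros((self.W2.shape[0], n_new))])
            return True
        return False

net = GrowingPolicyNet(n_in=4, n_hidden=8, n_out=2)
for step in range(100):
    net.maybe_grow(reward=0.5)   # flat reward signal triggers growth up to the cap
print("hidden units:", net.W1.shape[0])
print("policy output:", net.forward(np.ones(4)))
```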