
    IMAGINE Final Report


    Cyber-Virtual Systems: Simulation, Validation & Visualization

    We describe our ongoing work on, and our view of, the simulation, validation and visualization of cyber-physical systems in industrial automation during development, operation and maintenance. A system model may combine an existing physical part, for example an existing robot installation, with a software-simulated part, for example a possible future extension. We call such systems cyber-virtual systems. In this paper, we present the existing VITELab infrastructure for visualization tasks in industrial automation; the new simulation and validation methodology motivated in this paper integrates that infrastructure. We target scenarios in which industrial sites, possibly in remote locations, are modeled and visualized from different sites anywhere in the world. Complementing the visualization work, we also concentrate on the software modeling challenges raised by cyber-virtual systems and on simulation, testing, validation and verification techniques for them. Software models of industrial sites require behavioural models of the sites' components, such as tools, robots, workpieces and other machinery, as well as of their communication and sensor facilities. Furthermore, collaboration between sites is an important goal of our work.

    Comment: Preprint, 9th International Conference on Evaluation of Novel Approaches to Software Engineering (ENASE 2014).
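    To make the idea of mixing physical and simulated parts concrete, here is a minimal sketch of such a behavioural component model in Python. All names (Component, PhysicalRobot, SimulatedRobot, Site) and the toy kinematics are hypothetical illustrations, not the VITELab API or the paper's actual models.

    from abc import ABC, abstractmethod

    class Component(ABC):
        """Behavioural model of one industrial component (tool, robot, workpiece)."""

        @abstractmethod
        def step(self, dt: float) -> None:
            """Advance the component's state by dt seconds."""

        @abstractmethod
        def state(self) -> dict:
            """Observable state, e.g. for visualization or validation."""

    class PhysicalRobot(Component):
        """Wraps an existing robot installation; state comes from live sensors."""

        def __init__(self, sensor_feed):
            self.sensor_feed = sensor_feed  # assumed callable returning readings

        def step(self, dt: float) -> None:
            pass  # the real hardware advances on its own

        def state(self) -> dict:
            return self.sensor_feed()

    class SimulatedRobot(Component):
        """Software-simulated extension that does not yet exist physically."""

        def __init__(self):
            self.position = 0.0

        def step(self, dt: float) -> None:
            self.position += 0.1 * dt  # toy kinematics for illustration

        def state(self) -> dict:
            return {"position": self.position}

    class Site:
        """A cyber-virtual system: a mix of physical and simulated components."""

        def __init__(self, components):
            self.components = components

        def step(self, dt: float) -> None:
            for c in self.components:
                c.step(dt)

        def snapshot(self) -> list:
            return [c.state() for c in self.components]

    site = Site([SimulatedRobot()])
    site.step(0.5)
    print(site.snapshot())  # [{'position': 0.05}]

    The shared step/state interface is what would let validation and visualization treat physical and simulated parts of a site uniformly.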

    BonFIRE: A multi-cloud test facility for internet of services experimentation

    BonFIRE offers a Future Internet, multi-site cloud testbed, targeted at the Internet of Services community, that supports large-scale testing of applications, services and systems over multiple, geographically distributed, heterogeneous cloud testbeds. The aim of BonFIRE is to provide an infrastructure that gives experimenters the ability to control and monitor the execution of their experiments to a degree not found in traditional cloud facilities. The BonFIRE architecture has been designed to support key functionalities such as resource management; monitoring of virtual and physical infrastructure metrics; elasticity; single-document experiment descriptions; and scheduling. As of January 2012, BonFIRE release 2 is operational, supporting seven pilot experiments. Future releases will enhance the offering, including interconnection with networking facilities to provide access to routers, switches and bandwidth-on-demand systems. BonFIRE will be open for general use in late 2012.
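    As an illustration of what a "single-document experiment description" might look like, the sketch below shows one self-contained descriptor requesting resources across two testbed sites, plus a loader that summarizes it. The JSON schema, site names and field names are hypothetical stand-ins, not BonFIRE's actual descriptor format or API.

    import json

    # Hypothetical descriptor: one document captures every resource the
    # experiment needs, across geographically distributed sites.
    EXPERIMENT = json.loads("""
    {
      "name": "service-elasticity-test",
      "duration_hours": 4,
      "resources": [
        {"site": "site-a", "type": "compute", "instances": 3, "image": "web-tier"},
        {"site": "site-b", "type": "compute", "instances": 1, "image": "load-gen"},
        {"site": "site-a", "type": "storage", "size_gb": 20}
      ],
      "monitoring": {"metrics": ["cpu", "memory", "net_io"], "interval_s": 10}
    }
    """)

    def summarize(experiment: dict) -> None:
        """Print what the descriptor asks each site to provision."""
        print(f"Experiment: {experiment['name']} ({experiment['duration_hours']} h)")
        for r in experiment["resources"]:
            print(f"  {r['site']}: {r}")

    summarize(EXPERIMENT)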

    Trusted Computing and Secure Virtualization in Cloud Computing

    Large-scale deployment and use of cloud computing in industry is accompanied, and at the same time hampered, by concerns regarding the protection of data handled by cloud computing providers. One consequence of moving data processing and storage off company premises is that organizations have less control over their infrastructure. As a result, cloud service (CS) clients must trust that the CS provider is able to protect their data and infrastructure from both external and internal attacks. Currently, however, such trust can only rely on organizational processes declared by the CS provider and cannot be remotely verified and validated by an external party. Enabling the CS client to verify the integrity of the host where the virtual machine (VM) instance will run, and to ensure that the virtual machine image has not been tampered with, are steps towards building trust in the CS provider. Having the tools to perform such verifications prior to the launch of the VM instance allows CS clients to decide at runtime whether certain data should be stored, or certain calculations performed, on the VM instance offered by the CS provider.

    This thesis combines three components -- trusted computing, virtualization technology and cloud computing platforms -- to address issues of trust and security in public cloud computing environments. Of the three components, virtualization technology has had the longest evolution and is a cornerstone of the realization of cloud computing. Trusted computing is a recent industry initiative that aims to implement the root of trust in a hardware component, the trusted platform module; the initiative has been formalized in a set of specifications and is currently at version 1.2. Cloud computing platforms pool virtualized computing, storage and network resources in order to serve a large number of customers through a multi-tenant model that offers on-demand self-service over broad network access. Open source cloud computing platforms are, like trusted computing, a fairly recent technology in active development.

    The issue of trust in public cloud environments is addressed by examining the state of the art in cloud computing security and subsequently addressing the problem of establishing trust in the launch of a generic virtual machine in a public cloud environment. As a result, the thesis proposes a trusted launch protocol that allows CS clients to verify and ensure the integrity of the VM instance at launch time, as well as the integrity of the host where the VM instance is launched. The protocol relies on the Trusted Platform Module (TPM) for key generation and data protection; the TPM also plays an essential part in the integrity attestation of the VM instance host. Along with a theoretical, platform-agnostic protocol, the thesis describes a detailed implementation design of the protocol on the OpenStack cloud computing platform. To verify that the proposed protocol is implementable, a prototype has been built using a distributed deployment of OpenStack. While the protocol covers only the trusted launch procedure for generic virtual machine images, it is a step towards the creation of a secure and trusted public cloud computing environment.
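    The following is a minimal sketch of the trusted-launch idea, under simplifying assumptions: the client draws a fresh nonce, the host answers with a quote over its measurement and that nonce, and the client launches only if the quote verifies, the measurement matches a known-good value, and the VM image hash matches its expectation. An HMAC over a shared key stands in for the TPM's signed quote, and a hard-coded constant for the attestation database; the real protocol uses TPM 1.2 asymmetric attestation keys and is considerably more involved.

    import hashlib
    import hmac
    import os

    # Known-good measurement of a trusted host software stack (assumed to
    # be published by, e.g., an attestation service).
    KNOWN_GOOD_HOST = hashlib.sha1(b"trusted-host-software-stack").hexdigest()
    TPM_KEY = os.urandom(32)  # stand-in for a TPM-protected attestation key

    def tpm_quote(measurement: str, nonce: bytes) -> bytes:
        """Host side: 'sign' (measurement, nonce), as a TPM quote would."""
        return hmac.new(TPM_KEY, measurement.encode() + nonce, hashlib.sha256).digest()

    def trusted_launch(vm_image: bytes, expected_image_hash: str) -> bool:
        """Client side: attest the host, then check VM image integrity."""
        nonce = os.urandom(16)  # freshness, so old quotes cannot be replayed

        # 1. The host reports its measurement and a quote over it plus our nonce.
        reported = KNOWN_GOOD_HOST
        quote = tpm_quote(reported, nonce)

        # 2. Verify the quote, then compare the measurement to the known-good value.
        expected = hmac.new(TPM_KEY, reported.encode() + nonce, hashlib.sha256).digest()
        if not hmac.compare_digest(quote, expected):
            return False
        if reported != KNOWN_GOOD_HOST:
            return False

        # 3. Launch only if the VM image has not been tampered with.
        return hashlib.sha256(vm_image).hexdigest() == expected_image_hash

    image = b"generic-vm-image-bytes"
    print("launch allowed:", trusted_launch(image, hashlib.sha256(image).hexdigest()))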

    Bonobos extract meaning from call sequences

    This research was funded by a Leverhulme Trust Research Leadership Award and the Wissenschaftskolleg zu Berlin. The funders had no role in study design, data collection and analysis, the decision to publish, or preparation of the manuscript.

    Studies on language-trained bonobos have revealed their remarkable abilities in representational and communication tasks. Surprisingly, however, corresponding research into their natural communication has largely been neglected. We address this issue with a first playback study of the natural vocal behaviour of bonobos. Bonobos produce five acoustically distinct call types when finding food, which they regularly mix into longer call sequences. We found that individual call types were relatively poor indicators of food quality, while context specificity was much greater at the call-sequence level. We therefore investigated whether receivers could extract meaning about the quality of food encountered by a caller by integrating across different call sequences. We first trained four captive individuals to find two types of food, kiwi (preferred) and apples (less preferred), at two different locations. We then conducted naturalistic playback experiments in which we broadcast sequences of four calls, originally produced by a familiar individual responding to either kiwi or apples. All sequences contained the same number of calls but varied in the composition of call types. Following playbacks, we found that subjects devoted significantly more search effort to the field indicated by the call sequence. Rather than attending to individual calls, bonobos attended to entire sequences to make inferences about the food encountered by a caller. These results provide the first empirical evidence that bonobos can extract information about external events by attending to the vocal sequences of other individuals, and they highlight the importance of call combinations in the bonobos' natural communication system.