
    IoTwins: Design and implementation of a platform for the management of digital twins in industrial scenarios

    With the increase in the volume of data produced by IoT devices, there is a growing demand for applications capable of processing data anywhere along the IoT-to-Cloud path (Edge/Fog). In industrial environments, strict real-time constraints require computation to run as close to the data origin as possible (e.g., IoT Gateway or Edge nodes), whilst batch-wise tasks such as Big Data analytics and Machine Learning model training are advised to run on the Cloud, where computing resources are abundant. The H2020 IoTwins project leverages the digital twin concept to implement virtual representations of physical assets (e.g., machine parts, machines, production/control processes) and deliver a software platform that will help enterprises, and in particular SMEs, to build highly innovative, AI-based services that exploit the potential of the IoT/Edge/Cloud computing paradigms. In this paper, we discuss the design principles of the IoTwins reference architecture, delving into technical details of its components and offered functionalities, and propose an exemplary software implementation.
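    The placement policy described above (real-time work near the data origin, batch work in the cloud) can be sketched as a toy scheduler. Everything here is illustrative: the `Task` fields, the tier names, and the 50 ms latency threshold are assumptions, not part of the IoTwins platform.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        deadline_ms: float   # real-time constraint on the task
        batch: bool          # True for batch analytics / ML model training

    def place(task: Task) -> str:
        """Toy placement policy: batch workloads go to the cloud,
        latency-critical tasks run at the edge, the rest in the fog."""
        if task.batch:
            return "cloud"
        if task.deadline_ms < 50:   # assumed real-time threshold
            return "edge"
        return "fog"

    print(place(Task("anomaly-detect", deadline_ms=10, batch=False)))  # edge
    print(place(Task("model-training", deadline_ms=0, batch=True)))    # cloud
    ```

    A real platform would of course weigh resource availability and network conditions, not just a static deadline cutoff.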

    Exascale Computing Deployment Challenges

    As Exascale computing proliferates, we see an accelerating shift towards clusters with thousands of nodes and thousands of cores per node, often on the back of commodity graphics processing units. This paper argues that this drives a once-in-a-generation shift of computation, and that the fundamentals of computer science therefore need to be re-examined. Exploiting the full power of Exascale computation will require attention to the fundamentals of programme design and specification, programming language design, systems and software engineering, analytic, performance and cost models, fundamental algorithmic design, and to the increasing replacement of human bandwidth by computational analysis. As part of this, we argue that Exascale computing will require a significant degree of co-design and close attention to the economics underlying the challenges ahead.

    High Speed Simulation Analytics

    Simulation, especially Discrete-event simulation (DES) and Agent-based simulation (ABS), is widely used in industry to support decision making. It is used to create predictive models, or Digital Twins, of systems, which are used to analyse what-if scenarios, perform sensitivity analysis on data and decisions, and even to optimise the impact of decisions. Simulation-based Analytics, or just Simulation Analytics, therefore has a major role to play in Industry 4.0. However, a major issue in Simulation Analytics is speed. The extensive, continuous experimentation demanded by Industry 4.0 can take a significant time, especially if many replications are required. This is compounded by detailed models, as these can take a long time to simulate. Distributed Simulation (DS) techniques use multiple computers either to speed up the simulation of a single model by splitting it across the computers and/or to speed up experimentation by running experiments across multiple computers in parallel. This chapter discusses how DS and Simulation Analytics, as well as concepts from contemporary e-Science, can be combined to address the speed problem by creating a new approach called High Speed Simulation Analytics. We present a vision of High Speed Simulation Analytics to show how this might be integrated with the future of Industry 4.0.
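    The second DS technique mentioned above, running independent replications in parallel across workers, is easy to sketch with Python's `multiprocessing`. The "model" here is a stand-in random experiment, not a real DES engine; only the fan-out pattern is the point.

    ```python
    import random
    from multiprocessing import Pool

    def replicate(seed: int) -> float:
        """One independent simulation replication. A stand-in model:
        average positive gap between two exponential streams."""
        rng = random.Random(seed)                 # per-replication seed
        samples = [rng.expovariate(1.2) - rng.expovariate(1.0)
                   for _ in range(10_000)]
        return sum(max(s, 0.0) for s in samples) / len(samples)

    if __name__ == "__main__":
        with Pool() as pool:                      # one worker per CPU core
            results = pool.map(replicate, range(20))  # 20 replications in parallel
        print(f"mean over replications: {sum(results) / len(results):.3f}")
    ```

    Because each replication is seeded independently, the parallel run is reproducible and statistically equivalent to running the replications sequentially, which is exactly what makes this form of speed-up attractive for experimentation.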

    Análisis y modelado económico de la nube (Economic analysis and modelling of the cloud)

    [EN] This final degree project analyzes the Cloud Computing situation in the world with a focus on the Spanish case, and models Cloud Computing supply and demand. Different analyses of the Cloud Computing economic model have been carried out before, but they either focus on the general terms of the Cloud or on specific service models. We try to give a global perspective including all three main service models. To do so, the work focuses on the definition of Cloud Computing and its elements, and on the factors that may influence demand and supply. Then, several theories are considered, and a scenario case is calculated to help understand how supply meets demand and its applications. In the final section, the conclusions about the different Cloud Computing elements and the factors that affect its adoption are summarized, and a forecast about future developments in the industry closes the paper.

    [ES, translated] Companies, both technological and non-technological, are progressively shifting the management of their information and communication technology infrastructure towards the cloud computing paradigm. In other words, there is a trend whereby the physical resources of information systems departments tend to be outsourced. Faced with a decision of this magnitude, it is advisable to have suitable economic models that evaluate the costs and benefits of adopting this paradigm. The aim of this work is to review the main economic models of cloud computing, determine the main variables involved in the decision to adopt the cloud, and apply the models in various scenarios.

    Pinós Ordiñana, P. (2016). Análisis y modelado económico de la nube. http://hdl.handle.net/10251/68901
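    The cost/benefit comparison this thesis describes can be illustrated with a minimal amortisation model. The functions and all figures below are hypothetical examples, not taken from the thesis: on-premises cost is amortised capital expenditure plus operating cost, while cloud cost is pure pay-per-use.

    ```python
    def on_prem_cost(capex: float, lifetime_months: int,
                     opex_per_month: float, months: int) -> float:
        """Amortised capital cost over the hardware lifetime,
        plus operating cost, over a planning horizon."""
        return capex * months / lifetime_months + opex_per_month * months

    def cloud_cost(price_per_hour: float, hours_per_month: float,
                   months: int) -> float:
        """Pure pay-per-use cost with no upfront investment."""
        return price_per_hour * hours_per_month * months

    # Illustrative figures only: a 3-year horizon, 5-year hardware lifetime.
    months = 36
    print(on_prem_cost(capex=20_000, lifetime_months=60,
                       opex_per_month=300, months=months))
    print(cloud_cost(price_per_hour=0.10, hours_per_month=730, months=months))
    ```

    Even a model this simple exposes the key adoption variables the thesis discusses: utilisation (hours per month), hardware lifetime, and the trade-off between capital and operating expenditure.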