
    CloudSimSC: A Toolkit for Modeling and Simulation of Serverless Computing Environments

    Serverless computing is gaining traction as an attractive model for deploying a multitude of workloads in the cloud. Designing and building effective resource management solutions for any computing environment requires extensive long-term testing, experimentation, and analysis of the achieved performance metrics. Using real test beds and serverless platforms for such experimentation is often not possible due to resource, time, and cost constraints. Employing simulators to model these environments is therefore key to examining the viability of novel resource management ideas. Existing simulation software for serverless environments lacks generalizability both in its architecture and in the resource management aspects it covers; most tools focus purely on modeling function performance under a specific platform architecture. In contrast, we have developed a serverless simulation model with flexibility built into its architecture as well as into the key resource management aspects of function scheduling and scaling. Further, we incorporate techniques for easily deriving the monitoring metrics required to evaluate any solutions implemented by users. Our work is presented as CloudSimSC, a modular extension to CloudSim, a simulator extensively used by the research community for modeling cloud environments. We discuss the implemented features of our simulation tool using multiple use cases.
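
    The kind of scheduling and scaling behaviour such a simulator has to reproduce can be illustrated with a minimal sketch, assuming a per-function container pool with warm starts, cold starts, and an upper scaling limit; the class and method names and the cold-start penalty below are illustrative assumptions and are not part of the CloudSimSC API.

```python
# Minimal sketch of serverless function scheduling and scaling.
# Names and numbers are hypothetical, not the CloudSimSC API.
from dataclasses import dataclass, field

@dataclass
class Container:
    free_at: float = 0.0      # simulated time when the container becomes idle
    invocations: int = 0

@dataclass
class FunctionDeployment:
    exec_time: float          # service time per invocation (s)
    max_containers: int       # scaling limit
    containers: list = field(default_factory=list)

def schedule(deployment: FunctionDeployment, arrival: float) -> float:
    """Route one invocation; scale out if every container is busy."""
    idle = [c for c in deployment.containers if c.free_at <= arrival]
    if idle:
        target = idle[0]                      # warm start
        start = arrival
    elif len(deployment.containers) < deployment.max_containers:
        target = Container()                  # cold start: new container
        deployment.containers.append(target)
        start = arrival + 0.5                 # assumed cold-start penalty (s)
    else:
        target = min(deployment.containers, key=lambda c: c.free_at)
        start = target.free_at                # queue on earliest-free container
    target.free_at = start + deployment.exec_time
    target.invocations += 1
    return target.free_at - arrival           # response-time metric

fn = FunctionDeployment(exec_time=0.2, max_containers=4)
latencies = [schedule(fn, t) for t in (0.0, 0.05, 0.1, 0.12, 0.3, 1.0)]
print(f"mean latency: {sum(latencies)/len(latencies):.3f} s, "
      f"containers used: {len(fn.containers)}")
```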

    Communication models insights meet simulations

    It is well known that taking communications into account while scheduling jobs on large-scale parallel computing platforms is a crucial issue. In modern hierarchical platforms, communication times differ greatly depending on whether they occur inside a cluster or between clusters. Allocating jobs with locality constraints in mind is therefore a key factor in achieving good performance. However, several theoretical results prove that imposing such constraints reduces the solution space and may thus degrade performance. In practice, such constraints simplify implementations and most often lead to better results. Our aim in this work is to bridge theoretical and practical intuitions and to examine, through simulation, the differences between constrained and unconstrained schedules (namely with respect to locality and node contiguity). We have developed a generic tool, using SimGrid as the base simulator, that enables interactions with external batch schedulers in order to evaluate their scheduling policies. The results confirm that insights gained from theoretical models are ill-suited to current architectures and should be reevaluated.
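
    A toy model of the locality question discussed above is sketched below, assuming a flat two-cluster topology, fixed intra- and inter-cluster communication costs, and a first-fit allocation rule; these choices are illustrative and do not reflect the SimGrid-based tool itself.

```python
# Toy illustration of the locality trade-off: placing a job inside one cluster
# keeps communications fast, while an unconstrained allocation may straddle
# clusters and pay inter-cluster latency. All values are illustrative assumptions.

NODES_PER_CLUSTER = 4
INTRA, INTER = 1.0, 5.0                  # assumed per-message communication costs

def comm_cost(allocation, volume):
    """Inter-cluster cost applies as soon as the allocation spans two clusters."""
    spans = len({node // NODES_PER_CLUSTER for node in allocation}) > 1
    return volume * (INTER if spans else INTRA)

def allocate(free_nodes, size, locality_constrained):
    if locality_constrained:
        # only accept a set of nodes drawn from a single cluster
        for cluster in sorted({n // NODES_PER_CLUSTER for n in free_nodes}):
            nodes = [n for n in free_nodes if n // NODES_PER_CLUSTER == cluster]
            if len(nodes) >= size:
                return nodes[:size]
        return None                       # job must wait for a full cluster
    return free_nodes[:size]              # first-fit across cluster boundaries

free = [2, 3, 4, 5, 6, 7]                 # nodes 0-1 (cluster 0) are busy
for constrained in (True, False):
    alloc = allocate(free, size=4, locality_constrained=constrained)
    total = 20 + comm_cost(alloc, volume=10)   # compute time + communications
    print(f"constrained={constrained}: nodes={alloc}, completion={total}")
```

    In this tiny example the constrained allocation happens to win because a whole cluster is free; the theoretical concern is the opposite case, where insisting on a single cluster forces the job to wait.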

    Initial clinical validation of a hybrid in silico—in vitro cardiorespiratory simulator for comprehensive testing of mechanical circulatory support systems

    Simulators are expected to assume a prominent role in the design, development, and testing of cardiovascular medical devices. For this purpose, simulators should capture the complexity of human cardiorespiratory physiology in a realistic way. High-fidelity simulations of pathophysiology not only allow testing of the medical device itself, but also make it possible to advance practically relevant monitoring and control features while the device operates under realistic conditions. We propose a physiologically controlled cardiorespiratory simulator developed in a mixed in silico-in vitro simulation environment. As inherent to this approach, most of the physiological model complexity is implemented in silico, while the in vitro system acts as an interface to connect a medical device. As case scenarios, severe heart failure was modeled at rest and during exercise, and a left ventricular assist device (LVAD) was connected to the simulator as the medical device. For initial validation, the simulator output was compared against clinical data from chronic heart failure patients supported by an LVAD who underwent exercise tests at different intensities with a concomitant increase in LVAD speed. Simulations were conducted reproducing the same protocol as applied in patients, in terms of exercise intensity and the related LVAD speed titration. The results show that the simulator captures the principal parameters of the main adaptive cardiovascular and respiratory processes occurring in the human body from rest to exercise. The simulated functional interaction with the LVAD is comparable to the one observed clinically in terms of ventricular unloading, cardiac output, and pump flow. Overall, the proposed simulation system offers a high-fidelity in silico-in vitro representation of human cardiorespiratory pathophysiology. It can be used as a test bench to comprehensively analyze the performance of physically connected medical devices under clinically realistic, critical scenarios, thus aiding the future development of physiologically responding, patient-adjustable medical devices. Further validation studies will be conducted to assess the performance of the simulator in other pathophysiological conditions.
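
    A minimal sketch of the in silico side of such a hybrid simulator is given below, assuming a time-varying-elastance left ventricle, a two-element Windkessel afterload, and a linear LVAD pressure-flow characteristic; the equations and every parameter value are illustrative assumptions and do not correspond to the authors' validated model.

```python
# Minimal 0D (lumped-parameter) cardiovascular sketch with an LVAD attached.
# All equations and parameter values are illustrative assumptions.
import math

T, T_SYS, DT = 0.8, 0.3, 5e-4             # cardiac period, systole duration, step (s)
EMIN, EMAX, V0 = 0.06, 1.0, 10.0           # elastance (mmHg/mL), unstressed volume (mL)
R_MV, R_AV, R_SYS, C_AO = 0.05, 0.05, 1.0, 2.0
P_VEN = 10.0                               # constant venous filling pressure (mmHg)
K_SPEED, K_HEAD = 0.03, 0.5                # assumed LVAD pressure-flow coefficients

def elastance(t):
    """Time-varying elastance: raised-cosine activation during systole."""
    phase = t % T
    act = 0.5 * (1 - math.cos(2 * math.pi * phase / T_SYS)) if phase < T_SYS else 0.0
    return EMIN + (EMAX - EMIN) * act

def simulate(rpm, duration=10.0):
    v_lv, p_ao = 150.0, 70.0               # initial ventricular volume, aortic pressure
    flows, pressures = [], []
    t = 0.0
    while t < duration:
        p_lv = elastance(t) * (v_lv - V0)
        q_mv = max(0.0, (P_VEN - p_lv) / R_MV)                     # filling flow
        q_av = max(0.0, (p_lv - p_ao) / R_AV)                      # aortic valve flow
        q_vad = max(0.0, K_SPEED * rpm - K_HEAD * (p_ao - p_lv))   # pump flow (mL/s)
        v_lv += DT * (q_mv - q_av - q_vad)                          # forward Euler
        p_ao += DT * (q_av + q_vad - p_ao / R_SYS) / C_AO
        if t > duration - 2 * T:                                    # record last two beats
            flows.append(q_vad)
            pressures.append(p_ao)
        t += DT
    return sum(flows) / len(flows), sum(pressures) / len(pressures)

for rpm in (2500, 3000, 3500):              # crude stand-in for an LVAD speed titration
    q, p = simulate(rpm)
    print(f"{rpm} rpm: mean pump flow {q * 0.06:.1f} L/min, mean arterial pressure {p:.0f} mmHg")
```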

    Not your Grandpa's SSD: The Era of Co-Designed Storage Devices


    DRackSim: Simulator for Rack-scale Memory Disaggregation

    Memory disaggregation has emerged as an alternative to traditional server architecture in data centers. This paper introduces DRackSim, a simulation infrastructure for modeling rack-scale hardware-disaggregated memory. DRackSim models multiple compute nodes, memory pools, and a rack-scale interconnect similar to GenZ. An application-level simulation approach simulates an x86 out-of-order multi-core processor with a multi-level cache hierarchy at the compute nodes. A queue-based simulation models the remote memory controller and the rack-level interconnect, allowing both cache-based and page-based access to remote memory. DRackSim models a central memory manager that manages the address space of the memory pools. We integrate the community-accepted DRAMSim2 to perform memory simulation for local and remote memory using multiple DRAMSim2 instances. An incremental approach is followed to validate the core and cache subsystem of DRackSim against that of Gem5. We measure the performance of various HPC workloads and show the performance impact of different node/pool configurations.
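
    The queue-based remote-memory idea can be sketched as follows, assuming Poisson access arrivals, a fixed interconnect latency, and a single FIFO remote memory controller; the latencies and the arrival model are illustrative assumptions, not DRackSim parameters.

```python
# Rough sketch of a queue-based remote-memory latency model: local accesses pay
# only DRAM latency, remote accesses also traverse the interconnect and queue at
# a shared remote memory controller. All values are illustrative assumptions.
import random

LOCAL_NS = 80.0          # assumed local DRAM access latency
LINK_NS = 250.0          # assumed one-way rack interconnect latency
SERVICE_NS = 60.0        # assumed service time at the remote memory controller

def simulate(n_accesses=100_000, remote_fraction=0.3, arrival_gap_ns=20.0, seed=1):
    random.seed(seed)
    controller_free_at = 0.0
    total, remote_total, remotes = 0.0, 0.0, 0
    t = 0.0
    for _ in range(n_accesses):
        t += random.expovariate(1.0 / arrival_gap_ns)      # Poisson arrivals
        if random.random() < remote_fraction:
            arrive = t + LINK_NS                           # request reaches the pool
            start = max(arrive, controller_free_at)        # FIFO queueing delay
            controller_free_at = start + SERVICE_NS
            latency = (controller_free_at - t) + LINK_NS   # response travels back
            remotes += 1
            remote_total += latency
        else:
            latency = LOCAL_NS
        total += latency
    return total / n_accesses, remote_total / max(remotes, 1)

avg, avg_remote = simulate()
print(f"average access latency: {avg:.0f} ns (remote-only: {avg_remote:.0f} ns)")
```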

    IoT protocols, architectures, and applications

    The proliferation of embedded systems, wireless technologies, and Internet protocols has made it possible for the Internet of Things (IoT) to bridge the gap between the physical and the virtual world, thereby enabling monitoring and control of the physical environment by data processing systems. IoT refers to the inter-networking of everyday objects that are equipped with sensing, computing, and communication capabilities. These networks can collaborate to autonomously solve a variety of tasks. Due to the very diverse set of applications and application requirements, no single communication technology is able to provide cost-effective and close-to-optimal performance in all scenarios. In this chapter, we report on research carried out on a selected number of IoT topics: low-power wide-area networks, in particular LoRa and narrowband IoT (NB-IoT); IP version 6 over IEEE 802.15.4 time-slotted channel hopping (6TiSCH); vehicular antenna design, integration, and processing; security aspects of vehicular networks; energy efficiency and harvesting for IoT systems; and software-defined networking and network functions virtualization (SDN/NFV) for IoT.

    Earth orbital teleoperator system man-machine interface evaluation

    The teleoperator system man-machine interface evaluation develops and implements a program to determine human performance requirements in teleoperator systems.

    Energy-Efficient, Thermal-Aware Modeling and Simulation of Datacenters: The CoolEmAll Approach and Evaluation Results

    This paper describes the CoolEmAll project and its approach to modeling and simulating energy-efficient and thermal-aware data centers. The aim of the project was to address the energy and thermal efficiency of data centers by combining the optimization of IT, cooling, and workload management. The paper provides a complete data center model that considers workload profiles, application profiling, a power model, and a cooling model. Different energy efficiency metrics are proposed, and various resource management and scheduling policies are presented. The proposed strategies are validated through simulation at different levels of a data center.
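
    The flavour of power and efficiency modeling described above can be sketched with a linear server power model, a cooling term derived from a fixed coefficient of performance, and PUE as the ratio of total facility power to IT power; all parameter values and the linear model are illustrative assumptions rather than CoolEmAll's actual models.

```python
# Minimal sketch of a datacenter power/efficiency model. Values are assumptions.

P_IDLE_W, P_PEAK_W = 100.0, 300.0   # assumed per-server idle and full-load power
COOLING_COP = 3.0                   # assumed cooling coefficient of performance

def server_power(utilization):
    """Linear power model between idle and peak power."""
    return P_IDLE_W + (P_PEAK_W - P_IDLE_W) * utilization

def datacenter_metrics(utilizations):
    it_power = sum(server_power(u) for u in utilizations)
    cooling_power = it_power / COOLING_COP     # heat removed tracks IT power drawn
    total_power = it_power + cooling_power
    return it_power, total_power, total_power / it_power   # last value is PUE

# Consolidated workload (two busy servers, two suspended and drawing 0 W)
# versus the same work spread over four lightly loaded servers.
consolidated = [0.9, 0.9]
spread = [0.45, 0.45, 0.45, 0.45]
for name, utils in (("consolidated", consolidated), ("spread", spread)):
    it, total, pue = datacenter_metrics(utils)
    print(f"{name:13s}: IT {it:.0f} W, total {total:.0f} W, PUE {pue:.2f}")
```

    Under these assumptions consolidation saves energy only because the idle servers can be suspended; with a proportional cooling model the PUE itself stays constant, which is why richer thermal models are needed for cooling-aware scheduling studies.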