13 research outputs found

    Resilience Analysis of the IMS based Networks

    Get PDF

    Factores que influyen en la evaluación de parámetros de performance y escalabilidad para clouds híbridos

    Get PDF
    The remarkable advance of technologies such as distributed computing, the Internet, and grid computing has made it possible for Cloud Computing to become part of a new computing and business model. Cloud Computing is transforming the traditional ways in which companies use and acquire Information Technology (IT) resources. It represents a new kind of value in networked computing, delivering greater efficiency, massive scalability, and faster, easier software development. New programming models and new IT infrastructure are driving new business models. After an initial boom of public clouds, companies have begun to deploy hybrid clouds, which offer the advantages of Cloud Computing together with privacy for the data they consider strategic. Today, data from in-house systems are stored on private servers, while many other assets, such as websites and e-mail services, are hosted by a remote provider. A hybrid cloud solution allows the integration of both systems. Despite these advantages, no in-depth studies of scalability and efficiency have been carried out, nor have parameters been defined for this kind of solution that would allow developers to propose the most suitable architecture or to configure the schedulers appropriately. Facultad de Informática

    Empowering Cloud Data Centers with Network Programmability

    Get PDF
    Cloud data centers are a critical infrastructure for modern Internet services such as web search, social networking, and e-commerce. However, the gradual slowdown of Moore’s law has put a burden on the growth of data centers’ performance and energy efficiency. In addition, the rise of millisecond-scale and microsecond-scale tasks places higher throughput and latency requirements on cloud applications. Today’s server-based solutions struggle to meet these performance requirements in many scenarios, such as resource management, scheduling, and high-speed traffic monitoring and testing. In this dissertation, we study these problems from a network perspective. We investigate a new architecture that leverages the programmability of new-generation network switches to improve the performance and reliability of clouds. As programmable switches provide only very limited memory and functionality, we exploit compact data structures and deeply co-design software and hardware to make the best use of these resources. More specifically, this dissertation presents four systems: (i) NetLock: a new centralized lock management architecture that co-designs programmable switches and servers to simultaneously achieve high performance and rich policy support. It provides orders-of-magnitude higher throughput than existing systems with microsecond-level latency, and supports many commonly used policies such as performance isolation. (ii) HCSFQ: a scalable and practical solution to implement hierarchical fair queueing on commodity hardware at line rate. Instead of relying on a hierarchy of queues with complex queue management, HCSFQ keeps no per-flow state and uses only one queue to achieve hierarchical fair queueing. (iii) AIFO: a new approach to programmable packet scheduling that uses only a single FIFO queue. AIFO employs an admission control mechanism to approximate PIFO, which is theoretically ideal but hard to implement with commodity devices. (iv) Lumina: a tool that enables fine-grained analysis of hardware network stacks. By exploiting network programmability to emulate various network scenarios, Lumina helps users understand the micro-behaviors of hardware network stacks.
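
    To make the single-FIFO idea concrete, here is a minimal Python sketch of an AIFO-style admission check. It is an illustration, not the dissertation's implementation: the class name AifoQueue, the window size, the burst_k slack parameter, and the exact admission rule are assumptions about how one FIFO plus admission control can approximate a PIFO's rank-based ordering.

```python
from collections import deque

class AifoQueue:
    """Toy AIFO-style queue: one FIFO plus rank-based admission control.
    (Illustrative sketch only; parameters and rule are assumptions.)"""

    def __init__(self, capacity=4, window=8, burst_k=0.1):
        self.fifo = deque()                  # the single FIFO queue
        self.capacity = capacity             # queue capacity in packets
        self.window = deque(maxlen=window)   # ranks of recent arrivals
        self.burst_k = burst_k               # slack for admitting bursts

    def enqueue(self, packet, rank):
        self.window.append(rank)
        # Quantile of this packet's rank among recently seen ranks.
        quantile = sum(r < rank for r in self.window) / len(self.window)
        # Fraction of the queue that is still free.
        headroom = (self.capacity - len(self.fifo)) / self.capacity
        # Admit low-rank packets; shed high-rank ones as the queue fills.
        if len(self.fifo) < self.capacity and quantile <= headroom / (1 - self.burst_k):
            self.fifo.append(packet)
            return True
        return False  # dropped by admission control

    def dequeue(self):
        return self.fifo.popleft() if self.fifo else None

q = AifoQueue()
for i, rank in enumerate([3, 1, 7, 2, 9]):
    q.enqueue(f"pkt{i}", rank)   # high-rank pkt2 and pkt4 get dropped
print([q.dequeue() for _ in range(3)])  # ['pkt0', 'pkt1', 'pkt3']
```

    The intuition: packets whose rank falls in a low quantile of recently observed ranks are admitted, while high-rank packets are dropped as headroom shrinks, which is roughly the ordering a PIFO would enforce.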

    An Integrated Modeling Framework for Managing the Deployment and Operation of Cloud Applications

    Get PDF
    Cloud computing can help Software as a Service (SaaS) providers take advantage of a wide range of cloud benefits such as agility, continuity, cost reduction, autonomy, and easy management of resources. To reap these benefits, SaaS providers should build their applications to utilize the cloud platform's capabilities. However, this is a daunting task. First, it requires a full understanding of the service offerings from different providers and of the meta-data artifacts each provider requires to configure the platform to efficiently deploy, run, and manage the application. Second, it involves complex decisions made by different stakeholders. Examples include financial decisions (e.g., selecting a platform that reduces costs), architectural decisions (e.g., partitioning the application to maximize scalability), and operational decisions (e.g., distributing modules to ensure availability and porting the application to other platforms). Finally, while each stakeholder may make a certain change to address a specific concern, the impact of a change may span multiple models and influence the decisions of several stakeholders. These factors motivate the need for: (i) a new architectural view model that focuses on service operation and reflects the cloud stakeholders' perspectives, and (ii) a novel framework that facilitates providing holistic as well as partial architectural views and generating the required platform artifacts by fragmenting the model into artifacts that can easily be modified separately. This PhD research devises a novel architecture framework, "The 5+1 Architectural View Model", for cloud applications, in which each view corresponds to a different perspective on cloud application deployment. The architectural framework is realized as a cloud modeling framework, called "StratusML", which consists of a modeling language that uses layers to specify the cloud configuration space, and a transformation engine to generate the configuration space artifacts. The usefulness and practical applicability of StratusML for modeling multi-cloud and multi-tenant applications have been demonstrated through a representative domain example. Moreover, to automate the framework's evolution as new concerns and cloud platforms emerge, this research also introduces a novel schema matching technique, called "Liberate". Liberate supports the process of domain model creation, evolution, and transformation, and helps address the vendor lock-in problem by reducing the manual effort required to map complex correspondences between cloud schemas whose domain concepts do not share linguistic similarities. The evaluation of Liberate shows its superiority in the cloud domain over existing schema matching approaches.
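
    As a rough illustration of fragmenting one shared model into separately modifiable artifacts, here is a hypothetical Python sketch in the spirit of StratusML's transformation engine. The layer names, artifact kinds, and render rules are invented for illustration and are not the framework's actual modeling language.

```python
import json

# A toy layered model: each layer captures one stakeholder concern.
model = {
    "service":      {"modules": ["web", "billing"]},
    "performance":  {"web": {"min_instances": 2, "max_instances": 10}},
    "availability": {"billing": {"replicas": 3, "zones": ["a", "b"]}},
}

def render_artifact(kind, model):
    """Fragment the shared model into one concern-specific artifact,
    so each artifact can be regenerated or edited on its own."""
    modules = model["service"]["modules"]
    if kind == "scaling-config":
        return {m: model["performance"].get(m, {}) for m in modules}
    if kind == "availability-config":
        return {m: model["availability"].get(m, {}) for m in modules}
    raise ValueError(f"unknown artifact kind: {kind}")

for kind in ("scaling-config", "availability-config"):
    print(kind, json.dumps(render_artifact(kind, model)))
```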

    Learning, Verifying, and Erasing Errors on a Chaotic and Highly Entangled Programmable Quantum Simulator

    Get PDF
    Controlled quantum systems have the potential to make major advancements in tasks ranging from computing to metrology. In recent years, quantum devices have experienced tremendous progress, reaching meaningful, intermediate-scale sizes and demonstrating advantage over their classical counterparts. Still, sensing, learning, verifying, and hopefully mitigating errors in these systems is an outstanding and ubiquitous challenge facing all modern quantum platforms. Here we review and expound upon one such platform: arrays of Rydberg atoms trapped in optical tweezers. We demonstrate several key advancements, including the first experimental realization of erasure conversion to prepare two-qubit Bell states with a fidelity in excess of 0.999, and to cool atoms to their motional ground state. We further showcase the tools of universal quantum processing via arbitrary single-qubit gates, fixed two-qubit gates, and mid-circuit measurement, and discuss applications of these techniques for metrology and computing. Then, we turn to the many-body regime, generating highly entangled states with up to 60 atoms through analog quench dynamics. We reveal the emergence of random behavior from unitary quantum evolution, and uncover a universal form of quantum ergodicity linking quantum and statistical mechanics. We exploit these discoveries to verify the global many-body fidelity and then realize practical applications like parameter estimation and noise learning. Finally, we compare against both state-of-the-art quantum and classical processors: we introduce a new proxy for the experimental mixed-state entanglement that is comparable across all quantum platforms and reflects the classical complexity of quantum simulation.

    Model-Driven Online Capacity Management for Component-Based Software Systems

    Get PDF
    Capacity management is a core activity when designing and operating distributed software systems. It comprises the provisioning of data center resources and the deployment of software components to these resources. The goal is to continuously provide adequate capacity, i.e., service level agreements should be satisfied while keeping investment and operating costs reasonably low. Traditional capacity management strategies are rather static and pessimistic: resources are provisioned for anticipated peak workload levels. In particular, enterprise application systems are exposed to highly varying workloads, leading to unnecessarily high total cost of ownership due to the poor resource usage efficiency caused by this static capacity management approach. During the past years, technologies emerged that enable dynamic data center infrastructures, e.g., leveraged by cloud computing products. These technologies build the foundation for elastic online capacity management, i.e., adapting the provided capacity to workload demands on a short-term horizon. Because manual online capacity management is not an option, automatic control approaches have been proposed. However, most of these approaches focus on coarse-grained adaptation actions, and adaptation decisions are based on aggregated system-level measures. Architectural information about the controlled software system is rarely considered. This thesis introduces a model-driven online capacity management approach for distributed component-based software systems, called SLAstic. The core contributions of this approach are a) modeling languages to capture relevant architectural information about a controlled software system, b) an architecture-based online capacity management framework based on the common MAPE-K control loop architecture, c) model-driven techniques supporting the automation of the approach, d) architectural runtime reconfiguration operations for controlling a system’s capacity, and e) an integration of the Palladio Component Model. A qualitative and quantitative evaluation of the approach is performed through case studies, lab experiments, and simulation.
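
    Because the approach is built around the MAPE-K control loop, a minimal Python sketch of such a loop may help fix ideas. It is not SLAstic itself: the utilization thresholds, node bounds, and randomized "measurements" are assumptions standing in for real monitoring data and the framework's architectural reconfiguration operations.

```python
import random

class MapeKController:
    """Minimal sketch of a MAPE-K style online capacity manager.
    (Illustrative; thresholds and actions are assumptions.)"""

    def __init__(self, min_nodes=1, max_nodes=8):
        self.nodes = min_nodes   # K: shared knowledge of the allocation
        self.min_nodes, self.max_nodes = min_nodes, max_nodes

    def monitor(self):
        # Stand-in for runtime measurements (e.g., per-node utilization).
        return {"utilization": random.uniform(0.0, 1.0)}

    def analyze(self, metrics):
        # Detect over-/under-provisioning against simple thresholds.
        if metrics["utilization"] > 0.8:
            return "scale_out"
        if metrics["utilization"] < 0.2:
            return "scale_in"
        return None

    def plan(self, symptom):
        # Map a symptom to a reconfiguration step within the bounds.
        if symptom == "scale_out" and self.nodes < self.max_nodes:
            return +1
        if symptom == "scale_in" and self.nodes > self.min_nodes:
            return -1
        return 0

    def execute(self, delta):
        # Apply the reconfiguration (here: just adjust the node count).
        self.nodes += delta

    def run(self, steps=5):
        for _ in range(steps):
            m = self.monitor()
            self.execute(self.plan(self.analyze(m)))
            print(f"utilization={m['utilization']:.2f} nodes={self.nodes}")

MapeKController().run()
```

    In SLAstic, the analyze and plan steps would additionally consult architectural models of the controlled system rather than a single aggregated metric, which is exactly the gap the thesis identifies in prior control approaches.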

    Energy Management

    Get PDF
    Forecasts point to a huge increase in energy demand over the next 25 years, with a direct and immediate impact on the exhaustion of fossil fuels, rising pollution levels, and global warming, all with significant consequences for every sector of society. Irrespective of the likelihood of these predictions, or of what researchers in different scientific disciplines may believe or publicly say about how critical the energy situation may be on a world level, it is without doubt one of the great debates that has stirred up public interest in modern times. We should probably already be thinking about the design of a worldwide strategic plan for energy management across the planet. It would include measures to raise awareness, educate the different actors involved, develop policies, provide resources, prioritise actions and establish contingency plans. This process is complex and depends on political, social, economic and technological factors that are hard to take into account simultaneously. Even before such a plan is formulated, studies such as those described in this book can serve to illustrate what Information and Communication Technologies have to offer in this sphere and, with luck, to provide a reference that encourages investigators in the pursuit of new and better solutions.

    WSN based sensing model for smart crowd movement with identification: a conceptual model

    Get PDF
    With the advancement of IT and the growth of the world population, Crowd Management (CM) has become a subject of intense study among researchers. Technology provides fast, readily available means of transport and up-to-date information access, which draws crowds to public places. This poses a major challenge for crowd safety and security at public places such as airports, railway stations, and checkpoints; consider, for example, the crowd of pilgrims during Hajj and Umrah crossing the borders of Makkah, Kingdom of Saudi Arabia. To minimize these safety and security risks, identification and verification of people are necessary, which adds unwanted processing time. Managing a crowd during a specific time period (Hajj and Umrah) while also identifying and verifying people is therefore a challenge. At present, advanced technologies such as the Internet of Things (IoT) are being used to address the crowd management problem with minimal processing time. In this paper, we present a Wireless Sensor Network (WSN) based conceptual model for smart crowd movement with minimal processing time for people identification. It handles the crowd by forming groups and provides proactive support to manage them in an organized manner. As a result, the crowd can be moved safely from one place to another with group identification. Group identification minimizes the processing time and moves the crowd in a smart way.
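
    A back-of-the-envelope Python sketch shows why group identification can reduce processing time: verifying one group credential amortizes the checkpoint cost across its members, leaving only a quick per-member tag scan. All timing constants are hypothetical and serve only to show the shape of the saving.

```python
# Hypothetical per-check costs (seconds); illustrative values only.
PER_PERSON_CHECK = 2.0   # fully identify one individual
PER_GROUP_CHECK = 3.0    # verify one group credential
PER_MEMBER_SCAN = 0.2    # match a member's tag to its group

def individual_time(people):
    """Total checkpoint time when every person is checked separately."""
    return people * PER_PERSON_CHECK

def group_time(people, group_size):
    """Total checkpoint time when people cross in identified groups."""
    groups = -(-people // group_size)  # ceiling division
    return groups * PER_GROUP_CHECK + people * PER_MEMBER_SCAN

people = 10_000
print(f"individual checks: {individual_time(people):,.0f} s")
for size in (10, 25, 50):
    print(f"groups of {size:>2}:      {group_time(people, size):,.0f} s")
```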