134 research outputs found

    A service-oriented and cloud-based statistical analysis framework

    Cloud Computing has gained popularity in e-Science environments because it delivers IT services economically and offers a range of resources for the support, maintenance, and security of computation-based applications. As a recently emerged technology, it offers various deployment and service models; in the Software as a Service (SaaS) model, applications and software run in the cloud and are available on a pay-per-use basis. As computing becomes more pervasive within organizations, the growing complexity of managing infrastructures of disparate architectures, distributed data, and software has made computing very expensive, and cloud offerings promise to deliver all the functionality of existing information technology services at an economical cost. Researchers and scientists use cloud resources to handle large research datasets and results. The main advantage of cloud computing is dynamic scaling of resources, which adapts to changes in demand. A further advantage is multi-tenancy, which allows resources to be shared between different users to achieve economies of scale while treating data isolation as a dominant requirement. The Representational State Transfer (REST) architectural style has gained popularity for designing web services thanks to features such as statelessness, modifiability, portability, and simplicity; REST focuses on the components involved and their interactions, along with the interpretation of the significant data elements. Recognising the intricacies of the computation and analysis that e-Science deals with, this Master's thesis presents a framework for statistical analysis. Computational and numerical libraries are made available to the user, and their functions provide results in the desired format; providing such libraries can decrease computation time while simultaneously decreasing the monetary cost of running such analyses. To enable scalability, a cloud-bursting technique is used to shift workload from a private cloud to a public cloud at times of capacity spikes, provisioning additional public-cloud resources to meet user needs.
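    As a rough illustration of the service style described above, the sketch below exposes one statistical routine behind a stateless REST endpoint. It is a minimal sketch only: the route, the payload shape, and the use of Flask are assumptions made for illustration, not the thesis's actual API.

    ```python
    # Hypothetical REST endpoint for a cloud statistical service (illustrative).
    from statistics import mean, stdev

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/analyses/descriptive", methods=["POST"])
    def descriptive():
        # The client POSTs the whole data set; the service keeps no session
        # state between calls, matching REST's statelessness constraint.
        values = request.get_json()["values"]
        return jsonify({"n": len(values),
                        "mean": mean(values),
                        "stdev": stdev(values)})

    if __name__ == "__main__":
        app.run()
    ```

    A client would then call, for example, `curl -X POST -H 'Content-Type: application/json' -d '{"values": [1, 2, 3]}' localhost:5000/analyses/descriptive` and receive the summary statistics as JSON.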

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016)

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016), Timisoara, Romania, February 8-11, 2016. The PhD Symposium was a very good opportunity for young researchers to share information and knowledge, to present their current research, and to discuss topics with other students in order to look for synergies and common research topics. The format proved very successful, and the assessment made by the PhD students was very positive. It also helped to achieve one of the major goals of the NESUS Action: to establish an open European research network targeting sustainable solutions for ultrascale computing, aiming at cross-fertilization among HPC, large-scale distributed systems, and big data management; contributing to training; helping to bring together disparate researchers working across different areas; and providing a meeting ground for researchers in these separate areas to exchange ideas, identify synergies, and pursue common activities in research topics such as sustainable software solutions (applications and the system software stack), data management, energy efficiency, and resilience. European Cooperation in Science and Technology (COST).

    Proceedings of the Third International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2016) Sofia, Bulgaria

    Proceedings of the Third International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2016), Sofia, Bulgaria, October 6-7, 2016.

    National Science Foundation Advisory Committee for Cyberinfrastructure Task Force on Campus Bridging Final Report

    The mission of the National Science Foundation (NSF) Advisory Committee on Cyberinfrastructure (ACCI) is to advise the NSF as a whole on matters related to vision and strategy regarding cyberinfrastructure (CI). In early 2009 the ACCI charged six task forces with making recommendations to the NSF in strategic areas of cyberinfrastructure: Campus Bridging; Cyberlearning and Workforce Development; Data and Visualization; Grand Challenges; High Performance Computing (HPC); and Software for Science and Engineering. Each task force was asked to offer advice on the basis of which the NSF would modify existing programs and create new programs. This document is the final, overall report of the Task Force on Campus Bridging. National Science Foundation.

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, in turn, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Development of a simulator for networks of evolutionary processors (NEPs) in the cloud (Spark)

    Máster Universitario en Investigación e Innovación en Tecnologías de la Información y las Comunicaciones (i2-TIC). Nature-inspired computing has become one of the most frequently used techniques for handling complex problems such as NP-hard optimization problems. This kind of computing has several advantages over traditional computing, including resiliency, parallel data processing, and low power consumption. One of the active research areas in nature-inspired algorithms is Networks of Evolutionary Processors (NEPs). A NEP consists of several cells attached together: the cells represent the nodes of a graph, while the edges of the graph transfer data between the nodes. In this thesis we construct a NEP system implemented on the Apache Spark environment. The Spark platform is essential to this work because of the capabilities it supplies: it is a suitable environment for solving complicated problems and can implement the NEP system in a distributed manner, so the thesis details how to install, design, and operate the system on it. The work delivers a NEP simulation together with an analysis of the system's parameters, evaluating performance by individually examining each factor that affects the NEPs. After testing the system, it became clear that running NEPs on a decentralized cloud ecosystem is an effective method for handling data of different formats and for executing optimization problems such as the Adleman, 3-colorability, and Massive-NEP problems. Moreover, the scheme is robust and can adapt to data that scales up to big data, which is characterized by its volume and heterogeneity, where heterogeneity refers to collecting data from different sources. Using Spark as the platform for the NEP system also has its benefits: the environment is fast at handing chunks of data to the underlying Hadoop architecture, which is based mainly on the map and reduce functions. Distributing the tasks of the NEP system over this cloud-based environment made it possible to obtain sound results in all three of the examples investigated and examined in this work.
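    To make the idea concrete, here is a minimal sketch of one NEP evolution-plus-communication step on Spark, assuming PySpark. The two-cell topology, the substitution-rule format, and the input/output filters are illustrative assumptions, not the design used in the thesis.

    ```python
    # Illustrative two-cell NEP step on Spark (rules and filters hypothetical).
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "nep-sketch")

    # Each cell holds a set of words; rules are single-symbol substitutions.
    cells = {"c1": sc.parallelize(["aab", "abb"]),
             "c2": sc.parallelize([])}
    rules = {"c1": [("a", "b")], "c2": []}            # evolution rules per cell
    out_ok = {"c1": lambda w: "a" not in w}           # words allowed to leave c1
    in_ok = {"c2": lambda w: True}                    # words that c2 accepts

    def apply_rules(word, cell_rules):
        # Evolution step: apply every substitution at every matching position.
        results = [word[:i] + new + word[i + 1:]
                   for old, new in cell_rules
                   for i, ch in enumerate(word) if ch == old]
        return results or [word]

    evolved = {c: rdd.flatMap(lambda w, r=rules[c]: apply_rules(w, r))
               for c, rdd in cells.items()}
    # Communication step: words passing c1's output filter and c2's input
    # filter migrate along the edge c1 -> c2.
    migrating = evolved["c1"].filter(out_ok["c1"]).filter(in_ok["c2"])
    cells["c2"] = evolved["c2"].union(migrating)
    print(cells["c2"].collect())                      # ['bbb']
    sc.stop()
    ```

    Iterating this map/filter/union cycle until a halting configuration is reached gives the full simulation, with Spark distributing each step's flatMap and filter across the cluster.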

    B3: Fuzzy-Based Data Center Load Optimization in Cloud Computing

    Cloud computing started a new era in which clients can reach a variety of information pools through various Internet connections from any connected device, and it provides a pay-per-use method for consuming services. The data center, a sophisticated high-end server facility, runs applications virtually in cloud computing: applications, services, and data are moved to a large data center. A data center provides a high service level that covers a maximum number of users, so assessing its utilization is a definite prerequisite for finding overall load efficiency. Hence, we propose a novel method to find the efficiency of the data center in cloud computing. The goal is to optimize data center utilization in terms of three big factors: bandwidth, memory, and Central Processing Unit (CPU) cycles. We construct a fuzzy expert system model to obtain maximum Data Center Load Efficiency (DCLE) in cloud computing environments. The advantage of the proposed system lies in computing the DCLE, which allows regular evaluation of services for any number of clients. This approach indicates that current clouds need an order-of-magnitude improvement in data center management to be used in next-generation computing.
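    To convey the flavour of such a system, the sketch below fuzzifies the three utilization factors and combines them into a single DCLE score. It is a minimal sketch only: the membership functions, rule base, and output centroids are invented for illustration and are not the rule base of this paper.

    ```python
    # Illustrative fuzzy DCLE estimate (all sets and rules are assumptions).

    def tri(x, a, b, c):
        """Triangular membership function with feet at a, c and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def load_level(u):
        """Fuzzify a utilization in [0, 100] into low/medium/high degrees."""
        return {"low": tri(u, -1, 0, 50),
                "medium": tri(u, 25, 50, 75),
                "high": tri(u, 50, 100, 101)}

    def dcle(bandwidth, memory, cpu):
        """Combine the three utilizations into one efficiency score.

        Rule sketch: efficiency is high when all inputs are moderate and
        low when any input saturates; defuzzify by weighted average.
        """
        bw, mem, c = load_level(bandwidth), load_level(memory), load_level(cpu)
        # Rule firing strengths (min = fuzzy AND, max = fuzzy OR).
        efficient = min(bw["medium"], mem["medium"], c["medium"])
        overloaded = max(bw["high"], mem["high"], c["high"])
        idle = max(bw["low"], mem["low"], c["low"])
        # Assumed centroids of the output sets: efficient=90, idle=50, overloaded=20.
        num = efficient * 90 + idle * 50 + overloaded * 20
        den = efficient + idle + overloaded
        return num / den if den else 0.0

    print(dcle(bandwidth=55, memory=48, cpu=60))  # moderate load -> high score
    ```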

    Energy Efficiency

    Energy efficiency has finally become a common-sense term. Nowadays almost everyone knows that using energy more efficiently saves money, reduces the emission of greenhouse gases, and lowers dependence on imported fossil fuels. We are living in a fossil age at the peak of its strength. Competition to secure resources for fuelling economic development is increasing; the price of fuels will rise while their availability gradually declines. Small nations will be the first to suffer if caught unprepared in the midst of the struggle for resources among the large players. This is where energy efficiency has the potential to lead toward the natural next step: the transition away from imported fossil fuels. Someone said that the only thing more harmful than fossil fuel is fossilized thinking. It is our sincere hope that some of the chapters in this book will influence you to take a fresh look at the transition to a low-carbon economy and the role that energy efficiency can play in that process.
    • …