
    Development Of A Cloud Computing Application For Water Resources Modelling And Optimization Based On Open Source Software

    Cloud computing is the latest advancement in Information and Communication Technology (ICT): it provides computing as a service, delivering computation, software, data access and storage services without requiring end-user knowledge of the physical location and configuration of the underlying systems. Cloud computing, service-oriented architecture and web geographic information systems are the technologies used to develop the cloud computing application for water resources modelling and optimization presented here. The cloud application is deployed and tested in a distributed computing environment running on three virtual machines (VMs). The cloud application has five web services: (1) spatial data infrastructure 1 (SDI-1), (2) SDI-2, (3) support for water resources modelling, (4) water resources optimization and (5) user authentication. The cloud application is developed using several programming languages (PHP, Ajax, Java and JavaScript), libraries (OpenLayers and jQuery), open-source software components (GeoServer, PostgreSQL and PostGIS) and OGC standards (WMS, WFS and WFS-T). The web services for support of water resources modelling and user authentication are deployed on Amazon Web Services and communicate with the two SDI web services using WFS. The two SDI web services run on two separate VMs and provide geospatial data and services. The fourth web service (water resources optimization) is deployed on a separate VM because of its expected large computational requirements. The cloud application is scalable and interoperable, and it creates a real-time multi-user collaboration platform. All code and components used are open source. The cloud application was tested with multiple concurrent users. The performance, security and utilization of the distributed computing environment are monitored and analysed together with the users' experience and satisfaction. The applicability of the presented solution and its future development are elaborated.
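    The paper does not include code, but the service-to-service communication it describes is standard OGC WFS. A minimal sketch, assuming a hypothetical GeoServer endpoint and layer name, of the kind of GetFeature request the modelling service could issue against one of the SDI services (Python with the requests library; the paper's own components are written in PHP, Java and JavaScript):

```python
# Minimal sketch of a WFS GetFeature call such as the modelling service
# could use to pull geospatial features from an SDI web service backed by
# GeoServer and PostGIS. The endpoint URL and the layer name
# "sdi:river_network" are hypothetical; the paper does not publish them.
import requests

GEOSERVER_WFS = "http://sdi-1.example.org/geoserver/wfs"  # hypothetical SDI-1 endpoint

params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "sdi:river_network",     # hypothetical feature type served from PostGIS
    "outputFormat": "application/json",  # ask GeoServer for GeoJSON
    "maxFeatures": 100,
}

response = requests.get(GEOSERVER_WFS, params=params, timeout=30)
response.raise_for_status()
features = response.json()["features"]
print(f"Fetched {len(features)} features from the SDI service")
```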

    Model Based Development of Quality-Aware Software Services

    Modelling languages and development frameworks provide support for the functional and structural description of software architectures. Quality-aware applications, however, require languages that allow QoS to be expressed as a first-class concept during architecture design and service composition, and require existing tools and infrastructures to be extended with support for modelling, evaluating, managing and monitoring QoS aspects. In addition to the functional behaviour and internal structure of each service, its developer must consider the fulfilment of its quality requirements. If the service is flexible, the output quality depends both on the input quality and on the available resources (e.g., amounts of CPU execution time and memory). From a software engineering point of view, modelling quality-aware requirements and architectures requires modelling support for the description of quality concepts, support for the analysis of quality properties (e.g., model checking, consistency of quality constraints and assembly of quality), and tool support for the transition from quality requirements to quality-aware architectures and from quality-aware architectures to service run-time infrastructures. Quality management in run-time service infrastructures must support the dynamic handling of quality concepts. QoS-aware modelling frameworks and QoS-aware runtime management infrastructures must evolve together to achieve their integration.
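    To make the idea of flexible, resource-dependent output quality concrete, the following is an illustrative sketch (not from the paper): each service model declares the input quality it requires and how its output quality degrades with the resources it is granted, and a composition can be checked for consistency before deployment. All names and the quality function are invented for illustration.

```python
# Hypothetical sketch of QoS as a first-class design-time concept: services
# declare quality constraints, and a chain of services is checked before
# deployment by propagating quality from input to output.
from dataclasses import dataclass

@dataclass
class Resources:
    cpu_ms: int      # CPU execution time budget
    memory_mb: int   # memory budget

@dataclass
class ServiceModel:
    name: str
    min_input_quality: float  # lowest input quality the service accepts

    def output_quality(self, input_quality: float, res: Resources) -> float:
        # Invented quality function: more resources mean less quality loss.
        loss = 0.2 * (100 / max(res.cpu_ms, 1)) + 0.1 * (64 / max(res.memory_mb, 1))
        return max(0.0, min(1.0, input_quality - loss))

def check_composition(services, input_quality, res):
    """Propagate quality through a service chain and flag violated constraints."""
    q = input_quality
    for s in services:
        if q < s.min_input_quality:
            return False, f"{s.name} requires input quality >= {s.min_input_quality}, got {q:.2f}"
        q = s.output_quality(q, res)
    return True, f"end-to-end output quality {q:.2f}"

ok, msg = check_composition(
    [ServiceModel("decode", 0.5), ServiceModel("analyse", 0.6)],
    input_quality=0.9,
    res=Resources(cpu_ms=200, memory_mb=256),
)
print(ok, msg)
```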

    Quality-aware model-driven service engineering

    Service engineering and service-oriented architecture, as an integration and platform technology, are a recent approach to software systems integration. Quality aspects ranging from interoperability to maintainability to performance are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence the quality attributes of the implemented software systems. Besides the benefits of explicit architectures for maintainability and reuse, architectural constraints such as styles, reference architectures and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is the process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model level in the context of model-driven service engineering. The focus on architecture-level models is a consequence of the black-box character of services.

    The financial clouds review

    This paper demonstrates financial enterprise portability, which involves moving entire application services from desktops to clouds and between different clouds, and which is transparent to users, who can work as if on their familiar systems. To demonstrate portability, reviews of several financial models are studied, from which Monte Carlo Methods (MCM) and the Black-Scholes Model (BSM) are chosen. A special technique in MCM, the Least Squares Method, is used to reduce errors while performing accurate calculations. The coding algorithm for MCM, written in MATLAB, is explained. Simulations for MCM are performed on different types of clouds. Benchmark and experimental results are presented for discussion. 3D Black-Scholes is used to explain the impacts and added value for risk analysis, and three different scenarios with 3D risk analysis are explained. We also discuss implications for banking and ways to track risks in order to improve accuracy. We have used a conceptual Cloud platform to explain our contributions in Financial Software as a Service (FSaaS) and the IBM Fine-Grained Security Framework. Our objective is to demonstrate the portability, speed, accuracy and reliability of applications in the clouds, while demonstrating portability for FSaaS and the Cloud Computing Business Framework (CCBF), which is proposed to deal with cloud portability.
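    The paper's MCM implementation is written in MATLAB and is not reproduced here; the following is a minimal Python sketch of the underlying idea for a European call option: price it by Monte Carlo simulation of geometric Brownian motion and compare the estimate with the closed-form Black-Scholes value. Parameter values are illustrative, not taken from the paper, and the Least Squares refinement is omitted.

```python
# Monte Carlo vs. closed-form Black-Scholes for a European call.
# All parameter values below are illustrative only.
import math
import numpy as np

S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0  # spot, strike, rate, volatility, maturity
n_paths = 200_000

# Monte Carlo estimate: simulate terminal prices under the risk-neutral measure.
rng = np.random.default_rng(42)
z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
mc_price = math.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

# Closed-form Black-Scholes price for the same call.
def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs_price = S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

print(f"Monte Carlo: {mc_price:.4f}   Black-Scholes: {bs_price:.4f}")
```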

    Towards a grid-enabled simulation framework for nano-CMOS electronics

    The electronics design industry is facing major challenges as transistors continue to decrease in size. The next generation of devices will be so small that the position of individual atoms will affect their behaviour. This will cause the transistors on a chip to have highly variable characteristics, which in turn will impact circuit and system design tools. The EPSRC project "Meeting the Design Challenges of Nano-CMOS Electronics" (Nano-CMOS) has been funded to explore this area. In this paper, we describe the distributed data-management and computing framework under development within Nano-CMOS. A key aspect of this framework is the need for robust and reliable security mechanisms that support distributed electronics design groups who wish to collaborate by sharing designs, simulations, workflows, datasets and computation resources. This paper presents the system design and an early prototype of the project, which has been useful in helping us understand the benefits of such a grid infrastructure. In particular, we present two typical use cases: user authentication and the execution of large-scale device simulations.

    Grid-enabling FIRST: Speeding up simulation applications using WinGrid

    The vision of grid computing is to make computational power, storage capacity, data and applications available to users as readily as electricity and other utilities. Grid infrastructures and applications have traditionally been geared towards dedicated, centralized, high-performance clusters running on UNIX-flavour operating systems (commonly referred to as cluster-based grid computing). This can be contrasted with desktop-based grid computing, which refers to the aggregation of non-dedicated, decentralized, commodity PCs connected through a network and running (mostly) the Microsoft Windows™ operating system. Large-scale adoption of such Windows™-based grid infrastructure may be facilitated by grid-enabling existing Windows applications. This paper presents the WinGrid™ approach to grid-enabling existing Windows™-based commercial-off-the-shelf (COTS) simulation packages (CSPs). Through the use of a case study developed in conjunction with Ford Motor Company, the paper demonstrates how experimentation with the CSP Witness™ and FIRST can achieve a linear speedup when WinGrid™ is used to harness idle PC computing resources. This, combined with the lessons learned from the case study, has encouraged us to develop Web service extensions to WinGrid™. It is hoped that this will facilitate wider acceptance of WinGrid™ among enterprises that have stringent security policies in place.
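    The near-linear speedup follows from the fact that the CSP experiments are independent simulation runs that can be farmed out to idle machines and collected as they finish. A minimal sketch of that pattern, using local Python worker processes in place of WinGrid-managed PCs, with run_experiment standing in for a call into the commercial simulation package (all names and parameters are invented):

```python
# Illustrative sketch (not WinGrid itself): independent simulation
# experiments are distributed across a pool of workers, so the elapsed
# time shrinks roughly in proportion to the number of workers available.
import time
from multiprocessing import Pool

def run_experiment(params):
    """Stand-in for one Witness/FIRST simulation run with a given scenario."""
    time.sleep(0.5)                      # pretend the simulation takes a while
    return params, sum(params.values())  # dummy "result"

if __name__ == "__main__":
    scenarios = [{"buffer_size": b, "machines": m} for b in (5, 10, 20) for m in (2, 4)]

    start = time.perf_counter()
    with Pool(processes=4) as pool:      # 4 workers playing the role of idle PCs
        results = pool.map(run_experiment, scenarios)
    elapsed = time.perf_counter() - start

    print(f"{len(results)} experiments in {elapsed:.1f}s "
          f"(sequential would take ~{0.5 * len(scenarios):.1f}s)")
```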

    Towards business integration as a service 2.0 (BIaaS 2.0)

    The Cloud Computing Business Framework (CCBF) is a framework for the design and implementation of Cloud Computing solutions. This proposal focuses on how CCBF can help to address linkage in Cloud Computing implementations. This leads to the development of Business Integration as a Service 1.0 (BIaaS 1.0), which allows different services, roles and functionalities to work together in a linkage-oriented framework where the outcome of one service can be the input to another, without the need to translate between domains or languages. BIaaS 2.0 aims to allow automation, enhanced security, advanced risk modelling and improved collaboration between processes in BIaaS 1.0. The benefits of adopting BIaaS 1.0 and developing BIaaS 2.0 are illustrated using a case study from the University of Southampton and several collaborators, including IBM US. BIaaS 2.0 can work with mainstream technologies such as scientific workflows, and the proposal and demonstration of BIaaS 2.0 are aimed at benefiting both industry and academia. © 2011 IEEE
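    A hypothetical sketch of the linkage-oriented idea: each service exposes a uniform interface and the output of one service is passed directly as the input of the next, so no translation between domains is needed. Service names and payloads below are invented for illustration and are not the paper's BIaaS implementation.

```python
# Invented sketch of chaining services so that one service's output
# becomes the next service's input, without any domain translation step.
from typing import Callable, Dict, List

Service = Callable[[Dict], Dict]

def risk_modelling(payload: Dict) -> Dict:
    payload["risk_score"] = 0.42          # placeholder for an actual risk model
    return payload

def business_analysis(payload: Dict) -> Dict:
    payload["decision"] = "invest" if payload["risk_score"] < 0.5 else "hold"
    return payload

def run_linked_services(services: List[Service], payload: Dict) -> Dict:
    """Feed the output of each service straight into the next one."""
    for service in services:
        payload = service(payload)
    return payload

result = run_linked_services([risk_modelling, business_analysis], {"portfolio": "demo"})
print(result)
```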