
    How do goals drive the engineering of capacity-driven Web services?

    This paper discusses a goal-based approach to the engineering of capacity-driven Web services. In this approach, goals are set to define the roles that these Web services could play in business applications, to frame the requirements that could be put on them, and to identify the processes, in terms of business logic, that they could implement. Because of the specificities of capacity-driven Web services compared with regular (i.e., mono-capacity) Web services, their engineering in terms of design, development, and deployment needs to be conducted in a completely different way. A Web service that is empowered with several capacities, which are basically operations to execute, has to know which of these capacities to trigger at run-time. For this purpose, the Web service takes into account the different types of requirements, such as data and privacy, that are put on each capacity empowering it. In addition, this paper shows that the goals in the approach to engineering capacity-driven Web services are geared towards three aspects: business logic, requirements, and capacities. © 2010 IEEE
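
    As a rough sketch of the run-time capacity selection this abstract describes, the snippet below models a Web service empowered with several capacities, each carrying the requirements put on it; every class, field, and requirement name is a hypothetical stand-in, not taken from the paper.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Capacity:
        """One operation the service can execute, plus the requirements put on it."""
        name: str
        requirements: set = field(default_factory=set)   # e.g. {"data", "privacy"}

    @dataclass
    class CapacityDrivenService:
        capacities: list

        def select_capacity(self, satisfied):
            """Pick the first capacity whose requirements are all met at run-time."""
            for cap in self.capacities:
                if cap.requirements <= satisfied:
                    return cap
            return None

    service = CapacityDrivenService([
        Capacity("full_report", {"data", "privacy"}),
        Capacity("summary_only", {"data"}),
    ])
    # Only the "data" requirement is met, so the service triggers summary_only.
    print(service.select_capacity({"data"}).name)
    ```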

    Desain Dan Implementasi Layanan Penyedia Data Penerimaan Mahasiswa Baru Berbasis Web Services Untuk Menunjang Executive Support System (Design and Implementation of a Web-Services-Based Data Provider for New Student Admissions to Support an Executive Support System)

    This work designs and implements a web-services-based data provider (services provider) for new student admissions to meet the data needs of an executive support system without adding to the workload of the academic database server, while ensuring interoperability and system security at Lampung State Polytechnic. With this web-services-based data provider in place, the academic database server can in future be accessed and processed by multi-platform applications. The system development method used in this study is a software engineering approach, the Linear Model. It starts with analysis, collecting and analysing data through field studies, followed by the design stage, covering the architectural design of the data provider services and the design of the data, interfaces, and required applications. The next stage is implementation, in which the designed service architecture, data, interfaces, and applications are deployed under real conditions. The final step is testing with the black-box testing method to verify that the service provider architecture and the entire system run well.
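
    One way to picture the data-provider idea, assuming a simple cache-based design: a thin service layer answers executive-support queries from its own replica of the data so that routine requests never reach the academic database server. All names and the TTL policy below are illustrative assumptions, not the system's actual design.

    ```python
    import time

    class AdmissionsDataProvider:
        """Hypothetical data-provider layer: serves admissions data to the
        executive support system from a cached replica so that each request
        does not hit the academic database server directly."""

        def __init__(self, fetch_from_academic_db, ttl_seconds=3600):
            self._fetch = fetch_from_academic_db   # callable hitting the real DB
            self._ttl = ttl_seconds
            self._cache = {}                       # year -> (timestamp, rows)

        def get_admissions(self, year):
            cached = self._cache.get(year)
            if cached and time.time() - cached[0] < self._ttl:
                return cached[1]                   # serve from cache
            rows = self._fetch(year)               # refresh from academic DB
            self._cache[year] = (time.time(), rows)
            return rows

    # Example: a stub standing in for the academic database query.
    provider = AdmissionsDataProvider(lambda year: [{"year": year, "admitted": 1200}])
    print(provider.get_admissions(2012))
    ```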

    Desain Dan Implementasi Services Provider Berbasis Web Services Push Pangkalan Data Perguruan Tinggi Pada Sistem Informasi Akademik Politeknik Negeri Lampung (Design and Implementation of a Services Provider Based on Web Services Push for the Higher Education Database (PDPT) in the Academic Information System of Politeknik Negeri Lampung)

    The long-term goal of this research is to improve the quality, efficiency, and effectiveness of the data reporting services for Study Program Evaluation Based on Self Evaluation (EPSBED) at Lampung State Polytechnic through one of the mechanisms of the web-services-based reporting system of the Higher Education Data Station (PDPT) of the Directorate General (DIRJEN) of Higher Education (DIKTI). The specific targets of this research are to implement a web services architecture and technologies for data communication between different platforms, and to design services equipped with query mapping on the academic information system of Politeknik Negeri Lampung for responding to the PDPT push web services with the required data. Web services technology and query mapping on the academic information system unify the data existing in the academic information system with the data required for reporting to DIRJEN DIKTI's PDPT machine, so that problems of data accuracy, relevance, and timeliness can be resolved. The method of this research is a software engineering approach, the Linear Model. The work begins by building a PDPT web services server connected to the Internet, then installing the service provider design and web services technology on the academic information system machine. The last stage is testing the service provider architecture of the PDPT push web services with the black-box testing method to ensure that the entire system runs well.
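
    The query mapping mentioned here can be pictured as a translation table between the academic information system's schema and the fields the PDPT push service expects; the column and field names below are invented for illustration.

    ```python
    # Hypothetical mapping from local academic-DB columns to PDPT report fields.
    QUERY_MAPPING = {
        "nim":        "student_id",
        "nama_mhs":   "student_name",
        "kode_prodi": "study_program_code",
        "ipk":        "gpa",
    }

    def build_pdpt_record(local_row: dict) -> dict:
        """Translate one academic-information-system row into the shape
        required by the PDPT push web service."""
        return {pdpt_field: local_row[local_field]
                for local_field, pdpt_field in QUERY_MAPPING.items()}

    row = {"nim": "0912345", "nama_mhs": "Siti", "kode_prodi": "57401", "ipk": 3.4}
    print(build_pdpt_record(row))
    ```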

    EMF-REST: Generation of RESTful APIs from Models

    In recent years, RESTful Web services have become more and more popular as a lightweight solution for connecting remote systems in distributed and Cloud-based architectures. However, REST being an architectural style rather than a specification or standard, the proper design of RESTful Web services is not trivial, since developers have to deal with a plethora of recommendations and best practices. Model-Driven Engineering (MDE) emphasizes the use of models and model transformations to raise the level of abstraction and semi-automate the development of software. In this paper we present an approach that leverages MDE techniques to generate RESTful services. The approach, called EMF-REST, takes EMF data models as input and generates Web APIs following the REST principles and relying on well-known libraries and standards, thus facilitating their comprehension and maintainability. Additionally, EMF-REST integrates model and Web-specific features to provide model validation and security capabilities, respectively, to the generated API. For Web developers, our approach brings more agility to the Web development process by providing ready-to-run-and-test Web APIs out of data models. It also gives MDE practitioners the basis for developing Cloud-based modeling solutions as well as enhanced collaborative support.
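
    To make the model-to-API idea concrete, here is a minimal sketch that derives a conventional CRUD route table from a toy data model. It illustrates the general MDE pattern only; it is not EMF-REST's actual generator, input format, or output.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Entity:
        name: str
        fields: list

    def generate_rest_routes(entities):
        """Derive a conventional CRUD route table from a data model,
        in the spirit of model-to-API generation."""
        routes = []
        for e in entities:
            base = f"/{e.name.lower()}s"
            routes += [
                ("GET",    base,           f"list {e.name} instances"),
                ("POST",   base,           f"create a {e.name}"),
                ("GET",    base + "/{id}", f"read one {e.name}"),
                ("PUT",    base + "/{id}", f"update a {e.name}"),
                ("DELETE", base + "/{id}", f"delete a {e.name}"),
            ]
        return routes

    model = [Entity("Book", ["title", "author"]), Entity("Author", ["name"])]
    for verb, path, doc in generate_rest_routes(model):
        print(f"{verb:6} {path:12} # {doc}")
    ```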

    Strategies for including cloud-computing into an engineering modeling workflow

    With the advent of cloud computing, high-end computing, networking, and storage resources are available on demand at a relatively low price point. Internet applications in the consumer and, increasingly, in the enterprise space are making use of these resources to upgrade existing applications and build new ones. This is made possible by building decentralized applications that can be integrated with one another through web-enabled application programming interfaces (APIs). However, in the fields of engineering and computational science, cloud computing resources have been utilized primarily to augment existing high-performance computing hardware, and engineering model integrations still occur through software libraries. In this research, a novel approach is proposed in which engineering models are constructed as independent services that publish web-enabled APIs. To enable this, the engineering models are built as stateless microservices that each solve a single computational problem. Composite services are then built utilizing these independent component models, much as in the consumer application space. Interactions between component models are orchestrated by a federation management system. This proposed approach is then demonstrated by disaggregating an existing monolithic model of a cookstove into a set of component models. The component models are then reintegrated and compared with the original model for computational accuracy and run-time. Additionally, a novel engineering workflow is proposed that reuses computational data by constructing reduced-order models (ROMs). This framework is evaluated empirically for a number of producers and consumers of engineering models based on computation and data synchronization aspects. The framework is also evaluated by simulating an engineering design workflow with multiple producers and consumers at various stages of the design process. Finally, concepts from the federated system of models and ROMs are combined to propose the concept of a hybrid model (information artefact). The hybrid model is a web-enabled microservice that encapsulates information from multiple engineering models at varying fidelities and responds to queries based on the best available information. Rules for the construction of hybrid models are proposed and evaluated in the context of engineering workflows.
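
    The hybrid model concept, a service answering each query from the best available information across fidelities, might look roughly like the following sketch; the fidelity ordering and the sources are assumptions for illustration, not the thesis's implementation.

    ```python
    class HybridModel:
        """Answers queries from the highest-fidelity source that can respond:
        e.g. a cached full-simulation result if one exists, else a
        reduced-order model (ROM), else a cheap analytical estimate."""

        def __init__(self):
            # Sources ordered from highest to lowest fidelity.
            self._sources = []

        def register(self, fidelity, solve):
            self._sources.append((fidelity, solve))
            self._sources.sort(key=lambda s: -s[0])

        def query(self, inputs):
            for fidelity, solve in self._sources:
                result = solve(inputs)
                if result is not None:          # this source could answer
                    return result, fidelity
            raise LookupError("no source could answer the query")

    hybrid = HybridModel()
    hybrid.register(2, lambda x: None)            # full model: nothing cached here
    hybrid.register(1, lambda x: 0.8 * x + 1.0)   # ROM: always available
    print(hybrid.query(10.0))                     # -> (9.0, 1)
    ```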

    Integrating Engineering Data Systems for NASA Spaceflight Projects

    NASA has a large range of custom-built and commercial data systems to support spaceflight programs. Some of these systems are re-used by many programs and projects over time. Management and systems engineering processes require integration of data across many of these systems, a difficult problem given the widely diverse nature of system interfaces and data models. This paper describes an ongoing project to use a central data model with a web services architecture to support the integration and access of linked data across engineering functions for multiple NASA programs. The work involves the implementation of a web service-based middleware system called Data Aggregator to bring together data from a variety of systems to support space exploration. Data Aggregator includes a central data model registry for storing and managing links between the data in disparate systems. Initially developed for NASA's Constellation Program, Data Aggregator is currently being repurposed to support the International Space Station Program and new NASA projects whose processes involve significant aggregating and linking of data. This change in user needs led to the development of a more streamlined data model registry for Data Aggregator, in order to simplify adding new project application data, as well as standardization of the Data Aggregator query syntax to facilitate cross-application querying by client applications. This paper documents the evolution from a set of stand-alone engineering systems, from which data were manually retrieved and integrated, to a web of engineering data systems, from which the latest data are automatically retrieved and more quickly and accurately integrated. The paper includes the lessons learned through these efforts, including the design and development of a service-oriented architecture and the evolution of the data model registry approaches as the effort continues to adapt to support multiple NASA programs and priorities.
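
    A minimal sketch of what a central data model registry with a uniform cross-application query could look like; the system names, link structure, and query form are invented, not Data Aggregator's actual design.

    ```python
    from collections import defaultdict

    class DataModelRegistry:
        """Stores links between records held in different engineering systems
        and answers cross-application queries through one uniform call."""

        def __init__(self):
            self._links = defaultdict(set)   # (system, record_id) -> linked records

        def link(self, a, b):
            self._links[a].add(b)
            self._links[b].add(a)

        def query(self, system, record_id):
            """Return every record in any system linked to the given record."""
            return sorted(self._links[(system, record_id)])

    registry = DataModelRegistry()
    registry.link(("requirements_db", "REQ-101"), ("cad_vault", "PART-55"))
    registry.link(("requirements_db", "REQ-101"), ("test_system", "TST-9"))
    print(registry.query("requirements_db", "REQ-101"))
    # -> [('cad_vault', 'PART-55'), ('test_system', 'TST-9')]
    ```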

    On the Role of Context in the Design of Mobile Mashups

    This paper presents a design methodology and an accompanying platform for the design and fast development of Context-Aware Mobile mashUpS (CAMUS). The approach is characterized by the role given to context as a first-class modeling dimension used to support i) the identification of the most adequate resources that can satisfy the users' situational needs and ii) the consequent tailoring, at runtime, of the provided data and functions. Context-based abstractions are exploited to generate models specifying how the data returned by the selected services have to be merged and visualized by means of integrated views. Thanks to the adoption of Model-Driven Engineering (MDE) techniques, these models drive the flexible execution of the final mobile app on target mobile devices. A prototype of the platform, making use of novel and advanced Web and mobile technologies, is also illustrated.
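
    As an illustration of context as a first-class modeling dimension, the sketch below ranks candidate services by how well their context annotations match the user's current situation; the catalog, tags, and scoring scheme are invented, not the CAMUS platform's actual mechanism.

    ```python
    def rank_services(services, context):
        """Score each candidate service by how many of the user's current
        context attributes (e.g. location, time, activity) it matches."""
        def score(service):
            tags = service["context_tags"]
            return sum(1 for k, v in context.items() if tags.get(k) == v)
        return sorted(services, key=score, reverse=True)

    catalog = [
        {"name": "restaurant_finder", "context_tags": {"place": "city", "time": "evening"}},
        {"name": "museum_guide",      "context_tags": {"place": "city", "time": "day"}},
        {"name": "trail_maps",        "context_tags": {"place": "outdoors"}},
    ]
    situation = {"place": "city", "time": "evening"}
    print([s["name"] for s in rank_services(catalog, situation)])
    # -> ['restaurant_finder', 'museum_guide', 'trail_maps']
    ```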

    Optimised auto-scaling for cloud-based web service

    University of Technology Sydney, Faculty of Engineering and Information Technology. Elasticity and cost-effectiveness are two key features for ensuring that cloud-based web services appeal to more businesses. However, true elasticity and cost-effectiveness in the pay-per-use cloud business model have not yet been fully achieved. The explosion of cloud-based web services brings new challenges for automatically scaling service provision up and down when the workload is time-varying. This research studies the problems associated with these challenges. It proposes a novel scheme to achieve optimised auto-scaling for cloud-based web services at the three levels of the cloud structure: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). At each level, auto-scaling for cloud-based web services poses different problems and requires different solutions. At the SaaS level, this study investigates how to design and develop scalable web services, especially for time-consuming applications. To achieve the greatest efficiency, the service provision optimisation problem is studied by providing the minimum functionality and fastest scalability performance with respect to the speed-up curve and the QoS (Quality of Service) of the SLA (Service-Level Agreement). At the PaaS level, this work studies how to support dynamic re-configuration when workloads change and the effective deployment of various kinds of web services to the cloud. To achieve optimised auto-scaling of this deployment, a platform is designed to deploy all web services automatically with the minimal number of cloud resources while satisfying the QoS of the SLAs. At the IaaS level, for the two infrastructure resources of virtual machines (VMs) and virtual networks (VNs), this research focuses on two types of cloud-based web service: computation-intensive and bandwidth-intensive. To address the optimised auto-scaling problem for computation-intensive cloud-based web services, data-driven VM auto-scaling approaches are proposed to handle the workload in both stable and dynamic environments. To address the optimised auto-scaling problem for bandwidth-intensive cloud-based web services, this study proposes a novel approach to predict the volume of requests and dynamically adjust the software-defined network (SDN)-based network configuration in the cloud to auto-scale the service with minimal cost. This research offers comprehensive perspectives on solving the auto-scaling optimisation problems of cloud-based web services. The proposed approaches not only enable cloud-based web services to minimise resource consumption while auto-scaling service provision to achieve satisfactory performance, but also save energy towards the global realisation of green computing. The performance of the proposed approaches has been evaluated on a public platform (e.g., Amazon EC2) with real web service workload datasets. The experimental results demonstrate that the proposed approaches are practicable and achieve superior performance compared with other benchmark methods.
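
    One way to picture the data-driven VM auto-scaling studied at the IaaS level: predict the next interval's load from a recent window and provision the minimum number of VMs that covers the prediction at an SLA-derived per-VM capacity. The moving-average predictor and all constants are illustrative assumptions, not the thesis's actual method.

    ```python
    import math
    from collections import deque

    class VmAutoScaler:
        """Predicts next-interval request volume with a moving average and
        scales the VM pool to the minimum size that covers that prediction."""

        def __init__(self, requests_per_vm, window=5, min_vms=1):
            self.requests_per_vm = requests_per_vm  # capacity per VM under the SLA
            self.history = deque(maxlen=window)     # recent request counts
            self.min_vms = min_vms

        def observe(self, requests_this_interval):
            self.history.append(requests_this_interval)

        def target_vms(self):
            if not self.history:
                return self.min_vms
            predicted = sum(self.history) / len(self.history)
            return max(self.min_vms, math.ceil(predicted / self.requests_per_vm))

    scaler = VmAutoScaler(requests_per_vm=500)
    for load in [900, 1200, 2400, 2600, 800]:
        scaler.observe(load)
        print(f"load={load:5} -> provision {scaler.target_vms()} VMs")
    ```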

    Scientific Workflows for Metabolic Flux Analysis

    Metabolic engineering is a highly interdisciplinary research domain that interfaces biology, mathematics, computer science, and engineering. Metabolic flux analysis with carbon tracer experiments (13C-MFA) is a particularly challenging metabolic engineering application that consists of several tightly interwoven building blocks such as modeling, simulation, and experimental design. While several general-purpose workflow solutions have emerged in recent years to support the realization of complex scientific applications, these approaches are only partially transferable to 13C-MFA workflows. While problems in other research fields (e.g., bioinformatics) are primarily centered around scientific data processing, 13C-MFA workflows have more in common with business workflows. For instance, many bioinformatics workflows are designed to identify, compare, and annotate genomic sequences by "pipelining" them through standard tools like BLAST; typically, the next workflow task in the pipeline can be determined automatically by the outcome of the previous step. Five computational challenges have been identified in the endeavor of conducting 13C-MFA studies: organization of heterogeneous data, standardization of processes and the unification of tools and data, interactive workflow steering, distributed computing, and service orientation. The outcome of this thesis is a scientific workflow framework (SWF) that is custom-tailored to the specific requirements of 13C-MFA applications. The proposed approach, namely designing the SWF as a collection of loosely-coupled modules that are glued together with web services, eases the realization of 13C-MFA workflows by offering several features. By design, existing tools are integrated into the SWF using web service interfaces and foreign programming language bindings (e.g., Java or Python). Although the attributes "easy-to-use" and "general-purpose" are rarely associated with distributed computing software, the presented use cases show that the proposed Hadoop MapReduce framework eases the deployment of computationally demanding simulations on cloud and cluster computing resources. An important building block for allowing interactive researcher-driven workflows is the ability to track all data that is needed to understand and reproduce a workflow. The standardization of 13C-MFA studies using a folder structure template and the corresponding services and web interfaces improves the exchange of information within a group of researchers. Finally, several auxiliary tools are developed in the course of this work to complement the SWF modules, ranging from simple helper scripts to visualization and data conversion programs. This solution distinguishes itself from other scientific workflow approaches by offering a system of loosely-coupled components that are flexibly arranged to match the typical requirements of the metabolic engineering domain. Being a modern and service-oriented software framework, new applications are easily composed by reusing existing components.
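
    The loosely-coupled-modules design can be sketched as a registry of workflow steps behind one uniform call interface, so that a 13C-MFA pipeline is composed from interchangeable parts (wrapped as web services in the thesis itself); the step names and toy data below are invented.

    ```python
    class WorkflowRegistry:
        """Loosely-coupled workflow steps behind one uniform interface, so a
        pipeline is just an ordered list of step names."""

        def __init__(self):
            self._steps = {}

        def register(self, name):
            def wrap(fn):
                self._steps[name] = fn
                return fn
            return wrap

        def run(self, pipeline, data):
            for name in pipeline:
                data = self._steps[name](data)   # each step could be a remote service call
            return data

    swf = WorkflowRegistry()

    @swf.register("load_model")
    def load_model(data):
        return {**data, "model": "toy_network"}

    @swf.register("simulate")
    def simulate(data):
        return {**data, "flux": 0.42}

    print(swf.run(["load_model", "simulate"], {"experiment": "13C-tracer-01"}))
    ```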