13,744 research outputs found
Integrating trait-based empirical and modeling research to improve ecological restoration
A global ecological restoration agenda has led to ambitious programs in environmental policy to mitigate declines in biodiversity and ecosystem services. Current restoration programs may return desired ecosystem services only incompletely, and the resilience of restored ecosystems to future threats is unknown. It is therefore essential to advance understanding and make better use of knowledge from the ecological literature in restoration approaches. We identified an incomplete linkage between global change ecology, ecosystem function research, and restoration ecology. This gap impedes a full understanding of the interactive effects of changing environmental factors on the long-term provision of ecosystem functions, and a quantification of trade-offs and synergies among multiple services. Approaches that account for the effects of multiple changing factors on the composition of plant traits, and for their direct and indirect impact on the provision of ecosystem functions and services, can close this gap. However, studies on this multilayered relationship are currently missing. We therefore propose an integrated restoration agenda complementing trait-based empirical studies with simulation modeling. We introduce an ongoing case study to demonstrate how this framework could allow systematic assessment of the impacts of interacting environmental factors on long-term service provisioning. Our proposed agenda will benefit restoration programs by suggesting plant species compositions with specific traits that maximize the supply of multiple ecosystem services in the long term. Once the suggested compositions have been implemented in actual restoration projects, these assemblages should be monitored to assess whether they are resilient and to improve model parameterization.
Additionally, integrating empirical and simulation modeling research can improve global restoration outcomes by clarifying which restoration goals are achievable, thanks to the quantification of trade-offs and synergies among ecosystem services under a wide range of environmental conditions.
Pattern-based software architecture for service-oriented software systems
Service-oriented architecture is a recent conceptual framework for service-oriented software platforms. Architectures are of great importance for the evolution of software systems. We present a modelling and transformation technique for service-centric distributed software systems. Architectural configurations, expressed through hierarchical architectural patterns, form the core of a specification and transformation technique. Patterns on different levels of abstraction form transformation invariants that structure and constrain the transformation process. We explore the role that patterns can play in architecture transformations in terms of functional properties, as well as non-functional quality aspects.
1992 NASA Life Support Systems Analysis workshop
The 1992 Life Support Systems Analysis Workshop was sponsored by NASA's Office of Aeronautics and Space Technology (OAST) to integrate the inputs from, disseminate information to, and foster communication among NASA, industry, and academic specialists. The workshop continued discussion and definition of key issues identified in the 1991 workshop, including: (1) modeling and experimental validation; (2) definition of systems analysis evaluation criteria; (3) integration of modeling at multiple levels; and (4) assessment of process control modeling approaches. Through both the 1991 and 1992 workshops, NASA has continued to seek input from industry and university chemical process modeling and analysis experts, and to introduce and apply new systems analysis approaches to life support systems. The workshop included technical presentations, discussions, and interactive planning, with sufficient time allocated for discussion of both technology status and technology development recommendations. Key personnel currently involved with life support technology developments from NASA, industry, and academia provided input to the status and priorities of current and future systems analysis methods and requirements
Autonomous management of cost, performance, and resource uncertainty for migration of applications to infrastructure-as-a-service (IaaS) clouds
2014 Fall. Includes bibliographical references. Infrastructure-as-a-Service (IaaS) clouds abstract physical hardware to provide computing resources on demand as a software service. This abstraction leads to the simplistic view that computing resources are homogeneous and that infinite scaling potential exists to easily resolve all performance challenges. In practice, however, adoption of cloud computing presents many resource management challenges, forcing practitioners to balance cost and performance trade-offs to successfully migrate applications. These challenges can be broken down into three primary concerns: determining what, where, and when infrastructure should be provisioned. In this dissertation we address these challenges, including: (1) performance variance from resource heterogeneity, virtualization overhead, and the plethora of vaguely defined resource types; (2) virtual machine (VM) placement, component composition, service isolation, provisioning variation, and resource contention under multitenancy; and (3) dynamic scaling and resource elasticity to alleviate performance bottlenecks. These resource management challenges are addressed through the development and evaluation of autonomous algorithms and methodologies that result in demonstrably better performance and lower monetary costs for application deployments to both public and private IaaS clouds. This dissertation makes three primary contributions to advance cloud infrastructure management for application hosting. First, it includes the design of resource utilization models, based on step-wise multiple linear regression and artificial neural networks, that support prediction of better-performing component compositions. The total number of possible compositions is governed by the Bell number, which results in a combinatorially explosive search space.
Second, it includes algorithms to improve VM placements to mitigate resource heterogeneity and contention using a load-aware VM placement scheduler, and autonomous detection of under-performing VMs to spur replacement. Third, it describes a workload cost prediction methodology that harnesses regression models and heuristics to support determination of infrastructure alternatives that reduce hosting costs. Our methodology achieves infrastructure predictions with an average mean absolute error of only 0.3125 VMs for multiple workloads
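The combinatorial explosion the abstract attributes to the Bell number can be illustrated with a short sketch (not taken from the dissertation): the Bell number B(n) counts the ways to partition n application components into co-located groups, and it can be computed with the Bell triangle.

```python
def bell_numbers(n):
    """Return [B(0), ..., B(n-1)] via the Bell triangle."""
    row = [1]       # current triangle row
    bells = [1]     # B(0) = 1
    for _ in range(n - 1):
        new_row = [row[-1]]  # each row starts with the previous row's last entry
        for value in row:
            new_row.append(new_row[-1] + value)
        row = new_row
        bells.append(row[0])  # B(k) is the first entry of row k
    return bells

# B(n) grows super-exponentially, e.g. B(10) = 115975 possible
# groupings of just 10 components, which is why exhaustive search
# over compositions quickly becomes infeasible.
print(bell_numbers(11))
```

Even a modest application with ten components already admits over a hundred thousand candidate compositions, motivating the predictive models described above.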
From Design to Production Control Through the Integration of Engineering Data Management and Workflow Management Systems
At a time when many companies are under pressure to reduce "times-to-market",
the management of product information from the early stages of design through
assembly to manufacture and production has become increasingly important.
Similarly, in the construction of high-energy physics devices, the collection
of (often evolving) engineering data is central to the subsequent physics
analysis. Traditionally in industry, design engineers have employed Engineering
Data Management Systems (also called Product Data Management Systems) to
coordinate and control access to documented versions of product designs.
However, these systems provide control only at the collaborative design level
and are seldom used beyond design. Workflow management systems, on the other
hand, are employed in industry to coordinate and support the more complex and
repeatable work processes of the production environment. Commercial workflow
products cannot support the highly dynamic activities found both in the design
stages of product development and in rapidly evolving workflow definitions. The
integration of Product Data Management with Workflow Management can provide
support for product development from initial CAD/CAM collaborative design
through to the support and optimisation of production workflow activities. This
paper investigates this integration and proposes a philosophy for the support
of product data throughout the full development and production lifecycle and
demonstrates its usefulness in the construction of CMS detectors.
Comment: 18 pages, 13 figures
Proceedings of the First Karlsruhe Service Summit Workshop - Advances in Service Research, Karlsruhe, Germany, February 2015 (KIT Scientific Reports ; 7692)
Since April 2008, KSRI has fostered interdisciplinary research in order to support and advance progress in the service domain. KSRI brings together academia and industry while serving as a European research hub for service science. For the KSS2015 Research Workshop, we invited submissions of theoretical and empirical research dealing with relevant topics in the context of services, including energy, mobility, health care, social collaboration, and web technologies.
Service Provisioning through Opportunistic Computing in Mobile Clouds
Mobile clouds are a new paradigm enabling mobile users to access both the heterogeneous services present in a pervasive mobile environment and the rich service offers of cloud infrastructures. In mobile computing environments, mobile devices can also act as service providers, using approaches conceptually similar to service-oriented models. Many approaches implement service provisioning between mobile devices with the intervention of cloud-based handlers, with mobility playing a disruptive role for the functionality offered by the system. In our approach, we exploit the opportunistic computing model, whereby mobile devices exploit direct contacts to provide services to each other, without necessarily going through conventional cloud services residing in the Internet. Conventional cloud services are therefore complemented by a mobile cloud formed directly by the mobile devices. This paper presents an algorithm for service selection and composition in this type of mobile cloud environment that is able to estimate the execution time of a service composition. The model enables the system to produce an estimate of the execution time of the alternative compositions that can solve a user's request, and then to choose the best one among them. We compare the performance of our algorithm with alternative strategies, showing its superior performance from a number of standpoints. In particular, we show that our algorithm can manage a higher load of requests without causing instability in the system, unlike the other strategies. When the load of requests is manageable for all strategies, our algorithm achieves up to 75% less average time to solve requests.
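The selection step this abstract describes (picking the best among alternative compositions by predicted execution time) can be sketched minimally; the paper's estimation model itself is not reproduced here, and the names and numbers below are hypothetical.

```python
def best_composition(compositions, estimated_time):
    """Pick the composition whose predicted completion time is lowest.

    `estimated_time` stands in for the paper's execution-time model;
    here it is any callable mapping a composition to a predicted time.
    """
    return min(compositions, key=estimated_time)

# Hypothetical example: three alternative compositions with predicted times (s).
predictions = {"local-only": 12.5, "hybrid": 8.2, "cloud-only": 9.7}
print(best_composition(predictions, predictions.get))  # hybrid
```

The design point is that any execution-time estimator can be plugged in as the key function; the quality of the choice depends entirely on the accuracy of those estimates.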
08031 Abstracts Collection -- Software Engineering for Self-Adaptive Systems
From 13.01. to 18.01.2008, the Dagstuhl Seminar 08031 "Software Engineering for Self-Adaptive Systems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available