Towards a Formal Framework for Mobile, Service-Oriented Sensor-Actuator Networks
Service-oriented sensor-actuator networks (SOSANETs) are deployed in
health-critical applications like patient monitoring and have to fulfill strong
safety requirements. However, a framework for the rigorous formal modeling and
analysis of SOSANETs does not exist. In particular, there is currently no
support for the verification of correct network behavior after node failure or
loss/addition of communication links. To overcome this problem, we propose a
formal framework for SOSANETs. The main idea is to base our framework on the
\pi-calculus, a formally defined, compositional and well-established formalism.
We choose KLAIM, an existing formal language based on the \pi-calculus as the
foundation for our framework. With that, we are able to formally model SOSANETs
with possible topology changes and network failures. This provides the basis
for our future work on prediction, analysis and verification of the network
behavior of these systems. Furthermore, we illustrate the real-life
applicability of this approach by modeling and extending a use case scenario
from the medical domain. (In Proceedings FESCA 2013, arXiv:1302.478)
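The core idea above, modeling a sensor-actuator network as KLAIM-style localities whose tuple spaces survive or lose operations under topology changes, can be sketched informally. The following is a minimal, hypothetical Python sketch, not the paper's formalism: class and method names (`Locality`, `out`, `inp`, `fail`) merely echo KLAIM's tuple-space primitives.

```python
# Hypothetical sketch of a KLAIM-style network model: nodes (localities)
# hold tuple spaces, processes interact via out/in, and node failure is
# modelled by a locality that stops accepting operations.

class Locality:
    """A named node holding a tuple space."""
    def __init__(self, name):
        self.name = name
        self.tuples = []
        self.alive = True

class Net:
    def __init__(self):
        self.localities = {}

    def add(self, name):
        self.localities[name] = Locality(name)

    def out(self, name, tup):
        """KLAIM-like 'out': place a tuple at a locality (fails if node is down)."""
        loc = self.localities.get(name)
        if loc and loc.alive:
            loc.tuples.append(tup)
            return True
        return False

    def inp(self, name, pattern):
        """KLAIM-like 'in': withdraw the first tuple matching a pattern (None = wildcard)."""
        loc = self.localities.get(name)
        if not (loc and loc.alive):
            return None
        for tup in loc.tuples:
            if len(tup) == len(pattern) and all(
                    p is None or p == t for p, t in zip(pattern, tup)):
                loc.tuples.remove(tup)
                return tup
        return None

    def fail(self, name):
        """Model a topology change: the node stops responding."""
        self.localities[name].alive = False

net = Net()
net.add("sensor1")
net.out("sensor1", ("temp", 38.5))            # sensor publishes a reading
reading = net.inp("sensor1", ("temp", None))  # consumer withdraws it
net.fail("sensor1")                           # node failure
print(reading, net.out("sensor1", ("temp", 39.0)))  # → ('temp', 38.5) False
```

A real framework would of course give these operations a process-calculus semantics rather than an imperative one; the sketch only illustrates why tuple-space localities make node failure easy to express.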
Functional adaptivity for digital library services in e-infrastructures: the gCube approach
We consider the problem of e-Infrastructures that wish to reconcile the generality of their services with the bespoke requirements of diverse user communities. We motivate the requirement of functional adaptivity in the context of gCube, a service-based system that integrates Grid and Digital Library technologies to deploy, operate, and monitor Virtual Research Environments defined over infrastructural resources. We argue that adaptivity requires mapping service interfaces onto multiple implementations, truly alternative interpretations of the same functionality. We then analyse two design solutions in which the alternative implementations are, respectively, full-fledged services and local components of a single service. We associate the latter with lower development costs and increased binding flexibility, and outline a strategy to deploy them dynamically as the payload of service plugins. The result is an infrastructure in which services exhibit multiple behaviours, know how to select the most appropriate behaviour, and can seamlessly learn new behaviours.
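The design the abstract favours, alternative implementations as local components deployed dynamically as plugin payloads, can be illustrated with a small sketch. This is a hypothetical Python rendering, not gCube's actual API; the names (`SearchService`, `can_handle`, `deploy_plugin`) are illustrative.

```python
# Hypothetical sketch of functional adaptivity: one service interface,
# several locally bound implementations registered as plugins, and a
# selection step that picks the most appropriate behaviour per request.

class SearchBehaviour:
    def can_handle(self, request): ...
    def run(self, request): ...

class FullTextSearch(SearchBehaviour):
    def can_handle(self, request):
        return request.get("kind") == "fulltext"
    def run(self, request):
        return f"fulltext results for {request['query']}"

class GeoSearch(SearchBehaviour):
    def can_handle(self, request):
        return request.get("kind") == "geo"
    def run(self, request):
        return f"geo results near {request['query']}"

class SearchService:
    """A service that 'learns' new behaviours by loading plugins at runtime."""
    def __init__(self):
        self.behaviours = []

    def deploy_plugin(self, behaviour):
        # dynamic deployment of a plugin payload
        self.behaviours.append(behaviour)

    def handle(self, request):
        for b in self.behaviours:  # select the first suitable behaviour
            if b.can_handle(request):
                return b.run(request)
        raise LookupError("no behaviour for request")

svc = SearchService()
svc.deploy_plugin(FullTextSearch())
svc.deploy_plugin(GeoSearch())  # the service acquires a new behaviour
print(svc.handle({"kind": "geo", "query": "Pisa"}))  # → geo results near Pisa
```

The point of the local-component variant is visible here: adding `GeoSearch` changes the service's behaviour without deploying, discovering, or binding to a second full-fledged service.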
Matchmaking for covariant hierarchies
We describe a model of matchmaking suitable for the implementation of services, rather than for their discovery and composition. In the model, processing requirements are modelled by client requests and computational resources are software processors that compete for request processing as the covariant implementations of an open service interface. Matchmaking then relies on type analysis to rank processors against requests in support of a wide range of dispatch strategies. We relate the model to the autonomicity of service provision and briefly report on its deployment within a production-level infrastructure for scientific computing.
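The ranking step described above, type analysis ordering covariant processors by how specifically they match a request, can be sketched with ordinary subtype machinery. This is a hypothetical illustration, not the paper's model; the class names and the MRO-distance measure of specificity are assumptions.

```python
# Hypothetical sketch of covariant matchmaking: processors declare the
# request type they accept, and type analysis ranks the processors that
# can handle a request from most to least specific.

class Request: ...
class ImageRequest(Request): ...
class PngRequest(ImageRequest): ...

class Processor:
    accepts = Request          # most general processor
class ImageProcessor(Processor):
    accepts = ImageRequest     # covariant specialisation
class PngProcessor(Processor):
    accepts = PngRequest       # most specific

def rank(processors, request_type):
    """Rank processors able to handle request_type, most specific first.

    Specificity is measured here as the position of the accepted type in
    the request type's method resolution order (0 = exact match)."""
    mro = request_type.__mro__
    matches = [(mro.index(p.accepts), p) for p in processors
               if p.accepts in mro]  # drop processors that cannot match
    return [p for _, p in sorted(matches, key=lambda m: m[0])]

procs = [Processor, ImageProcessor, PngProcessor]
print([p.__name__ for p in rank(procs, PngRequest)])
# → ['PngProcessor', 'ImageProcessor', 'Processor']
```

A dispatcher can then implement different strategies over the same ranking, e.g. route to the top-ranked processor, or fan out to all matches.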
SDN and NFV for satellite infrastructures
The integration of SDN and NFV enablers into the satellite network could prove to be an essential means to save on physical sites, improve the time to bring new services to the market and open new ways to improve network resiliency, availability and efficiency. It can be considered that the above two enablers can play a central role in the integration of satellite to terrestrial technologies by using federated management of the network resources.Peer ReviewedPostprint (author's final draft
Flexible provisioning of Web service workflows
Web services promise to revolutionise the way computational resources and business processes are offered and invoked in open, distributed systems, such as the Internet. These services are described using machine-readable meta-data, which enables consumer applications to automatically discover and provision suitable services for their workflows at run-time. However, current approaches have typically assumed service descriptions are accurate and deterministic, and so have neglected to account for the fact that services in these open systems are inherently unreliable and uncertain. Specifically, network failures, software bugs and competition for services may regularly lead to execution delays or even service failures. To address this problem, the process of provisioning services needs to be performed in a more flexible manner than has so far been considered, in order to proactively deal with failures and to recover workflows that have partially failed. To this end, we devise and present a heuristic strategy that varies the provisioning of services according to their predicted performance. Using simulation, we then benchmark our algorithm and show that it leads to a 700% improvement in average utility, while successfully completing up to eight times as many workflows as approaches that do not consider service failures.
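One natural way to vary provisioning by predicted performance is to provision redundant service instances until the predicted chance that at least one succeeds meets a target. The sketch below is a hypothetical illustration of that idea, not the paper's actual heuristic: the independence assumption, the 0.95 target, and all names are my own.

```python
# Hypothetical sketch of failure-aware provisioning: for each workflow
# task, provision the smallest number of redundant service instances n
# such that 1 - (1 - p)**n >= target, where p is the predicted success
# probability of one instance (instances assumed independent).

def redundancy_needed(p_success, target=0.95, max_n=10):
    """Smallest n with 1 - (1 - p_success)**n >= target, capped at max_n."""
    p_all_fail = 1.0
    for n in range(1, max_n + 1):
        p_all_fail *= (1.0 - p_success)
        if 1.0 - p_all_fail >= target:
            return n
    return max_n

def provision(workflow):
    """workflow: list of (task, predicted per-instance success probability)."""
    return {task: redundancy_needed(p) for task, p in workflow}

plan = provision([("translate", 0.9), ("render", 0.6), ("archive", 0.99)])
print(plan)  # → {'translate': 2, 'render': 4, 'archive': 1}
```

Unreliable services thus get more redundancy and reliable ones less, which is the flexibility the abstract argues for: provisioning effort tracks predicted performance instead of treating every service description as accurate.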
Issues about the Adoption of Formal Methods for Dependable Composition of Web Services
Web Services provide interoperable mechanisms for describing, locating and
invoking services over the Internet; composition further enables to build
complex services out of simpler ones for complex B2B applications. While
current studies on these topics are mostly focused - from the technical
viewpoint - on standards and protocols, this paper investigates the adoption of
formal methods, especially for composition. We logically classify and analyze
three different (but interconnected) kinds of important issues towards this
goal, namely foundations, verification and extensions. The aim of this work is
to individuate the proper questions on the adoption of formal methods for
dependable composition of Web Services, not necessarily to find the optimal
answers. Nevertheless, we still try to propose some tentative answers based on
our proposal for a composition calculus, which we hope can animate a proper
discussion.
The role of trust in financial sector development
In any economic environment where decisions are decentralized, agents consider the risk that others might unfairly exploit informational asymmetries to their own disadvantage. Incomplete trust, especially, lies at the heart of financial transactions in which agents trade real claims for promises of future real claims. Agents thus need to invest considerable resources to assess the trustworthiness of others with whom they know they can interact only under conditions of limited and asymmetrically distributed information. Thinking of finance as the complex of institutions and instruments needed to reduce the cost of trading promises among anonymous individuals who do not fully trust each other, the author analyzes how incomplete trust shapes the transaction costs in trading assets, and how it affects resource allocation and pricing decisions by rational, forward-looking agents. His analysis leads to core propositions about the role of finance and financial efficiency in economic development. He recommends areas of financial sector reform in emerging economies aimed at improving the financial system's efficiency in dealing with incomplete trust. Among other things, the public sector can improve trust in finance by improving financial infrastructure, including legal systems, financial regulation, and security in payment and trading systems. But fundamental improvements in financial efficiency may best be gained by eliciting good conduct through market forces.
The Size and Role of Government: Economic Issues
[Excerpt] The size and role of the government is one of the most fundamental and enduring debates in American politics. Economics can be used to analyze the relative merits of government intervention in the economy in specific areas, but it cannot answer the question of whether there is “too much” or “too little” government activity overall. That is not to say that one cannot find many examples of government programs that economists would consider to be a highly inefficient, if not counterproductive, way to achieve policy goals. Reducing inefficient government spending would benefit the economy; however, reducing efficient government spending would harm it, and reducing the size of government could involve either one. Government intervention can increase economic efficiency when market failures or externalities exist. Political choices may lead to second-best economic outcomes, however, and some argue that, for that reason, market failures can be preferable to government intervention. In the absence of market failures and externalities, there is little economic justification for government intervention, which lowers efficiency and probably economic growth. But government intervention is often based on the desire to achieve social goals, such as income redistribution. Economics cannot quantitatively value social goals, although it can often offer suggestions for how to achieve those goals in the least costly way.
The government intervenes in the economy in four ways. First, it produces goods and services, such as infrastructure, education, and national defense. Measuring the effects of these goods and services is difficult because they are not bought and sold in markets. Second, it transfers income, both vertically across income levels and horizontally among groups with similar incomes and different characteristics. Third, it taxes to pay for its outlays, which can lower economic efficiency by distorting behavior. Not all taxes are equally distortionary, however, so there are ways of reducing the costs of taxation without changing the size of government. Furthermore, deficit spending does not allow the government to escape the burden of taxation since deficits impose their own burden. Finally, government regulation alters economic activity. The economic effects of regulation are the most difficult to measure, in terms of both costs and benefits, yet they cannot be neglected because they can be interchangeable with taxes or government spending.
There are many different ways to measure the size of the government, making its economic effects difficult to evaluate. Budgeting conventions are partly responsible: tax expenditures, offsetting receipts and collections, and government corporations are all excluded from the budget. But some governmental functions, like regulation, simply cannot be quantified robustly. Discussions about the overall size of government mask significant changes in the composition of government spending over time. Spending has shifted from the federal to the state and local level. Federal production of goods and services has fallen, while federal transfers have grown significantly. In 2008, nearly two-thirds of federal spending was devoted to Social Security, Medicare, Medicaid, and national defense. Thus, there is limited scope to alter the size of government without fundamentally altering these programs. The share of federal spending devoted to the elderly has burgeoned over time, and this trend is forecast to continue.
The size of government has increased significantly since the financial crisis of 2008 as a result of the government's unplanned intervention in financial markets and subsequent stimulus legislation. Much of this increase in government spending could be reversed when financial conditions return to normal, although critics are skeptical about how easy it will be for the government to extricate itself from the new commitments it has made.