
    Personalizable Service Discovery in Pervasive Systems

    Today, telecom providers face changing challenges. To stay ahead of the competition and provide market-leading offerings, carriers need to enable a global ecosystem of independent third-party application developers to deliver converged services; this is the aim of leveraging an open, standards-based service delivery platform. Identifying and coping with these challenges is the main target of the EU-funded project IST DAIDALOS II, and a central requirement for meeting changing user needs is a reliable, user-friendly, and personalized service discovery. This paper describes our work in the project on a middleware within a framework for pervasive service usage. We have designed an architecture that is fully transparent to the user, offers high compatibility and extensibility through a modular and pluggable design, and allows for interoperability with most well-known service discovery protocols. Our Multi-Protocol Service Discovery and the Four Phases Service Filtering concept, which enables personalization, aim to deliver the best possible service discovery results.
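
    The abstract centres on two mechanisms: a multi-protocol discovery layer and a phased filtering pipeline for personalization. The Python sketch below illustrates that kind of design only under stated assumptions; the class, adapter, and phase names are hypothetical and are not the DAIDALOS II middleware API.

```python
# Minimal sketch of a pluggable multi-protocol discovery layer with a phased
# filtering pipeline. All names are illustrative, not the project's actual API.
from dataclasses import dataclass, field
from typing import Callable, Iterable


@dataclass
class Service:
    name: str
    protocol: str                     # e.g. "SLP", "UPnP", "Bonjour"
    attributes: dict = field(default_factory=dict)


class ProtocolAdapter:
    """One adapter per underlying service-discovery protocol."""
    def discover(self) -> Iterable[Service]:
        raise NotImplementedError


class MultiProtocolDiscovery:
    def __init__(self, adapters: list[ProtocolAdapter],
                 phases: list[Callable[[list[Service]], list[Service]]]):
        self.adapters = adapters
        self.phases = phases          # e.g. syntactic, semantic, context, preference

    def discover(self) -> list[Service]:
        # Merge results from every protocol adapter ...
        results: list[Service] = []
        for adapter in self.adapters:
            results.extend(adapter.discover())
        # ... then narrow them down phase by phase.
        for phase in self.phases:
            results = phase(results)
        return results
```

    A concrete deployment would register one adapter per supported protocol and one filter per personalization phase, which is the kind of modular, pluggable structure the abstract describes.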

    SIMDAT


    Modeling location for pervasive environments

    The representation of spaces, locations, and the entities they contain is of great importance to location-aware systems and pervasive computing scenarios. An active research community has developed many diverse models of location, resulting in significant progress in the area. Various types of location model have evolved through experiment and experience; however, many challenges remain to be met by the research community. This paper aims to highlight previous trends in location modeling, discuss the research challenges ahead, and outline the initial design of a location model for the Strathclyde Context Infrastructure [?].
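
    As one illustration of the modelling choices such work weighs, the sketch below combines a symbolic containment hierarchy with optional geometric coordinates. It is a generic, hypothetical model, not the Strathclyde Context Infrastructure design; all names and coordinates are invented for the example.

```python
# Illustrative hybrid location model: symbolic containment plus optional
# geometric coordinates. Names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Location:
    name: str
    parent: Optional["Location"] = None                 # symbolic containment ("room in building")
    coordinates: Optional[tuple[float, float]] = None   # geometric view, if known
    children: list["Location"] = field(default_factory=list)

    def add(self, child: "Location") -> "Location":
        child.parent = self
        self.children.append(child)
        return child

    def contains(self, other: "Location") -> bool:
        # True if `other` lies somewhere below this node in the hierarchy.
        node = other.parent
        while node is not None:
            if node is self:
                return True
            node = node.parent
        return False


campus = Location("campus")
building = campus.add(Location("livingstone_tower"))
room = building.add(Location("room_1404", coordinates=(55.8617, -4.2443)))
assert campus.contains(room)
```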

    A Case Study on Formal Verification of Self-Adaptive Behaviors in a Decentralized System

    Self-adaptation is a promising approach to managing the complexity of modern software systems. A self-adaptive system is able to adapt autonomously to internal dynamics and changing conditions in the environment in order to achieve particular quality goals. Our particular interest is in decentralized self-adaptive systems, in which central control of adaptation is not an option. One important challenge in self-adaptive systems, in particular those with decentralized control of adaptation, is to provide guarantees about the intended runtime qualities. In this paper, we present a case study in which we use model checking to verify behavioral properties of a decentralized self-adaptive system. Concretely, we contribute a formalized architecture model of a decentralized traffic monitoring system and prove a number of self-adaptation properties for flexibility and robustness. To model the main processes in the system we use timed automata, and to specify the required properties we use timed computation tree logic. We use the Uppaal tool to specify the system and verify the flexibility and robustness properties.
    Comment: In Proceedings FOCLASA 2012, arXiv:1208.432
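
    To make the verification workflow concrete, the sketch below shows the general shape of Uppaal TCTL queries for such flexibility and robustness properties and one way to run them through Uppaal's command-line checker, verifyta (assumed to be on the PATH). The template and location names (Camera1, Organiser, silent, restored) are hypothetical and do not reproduce the paper's actual model.

```python
# Hedged illustration of Uppaal-style TCTL queries for self-adaptation
# properties, checked with Uppaal's verifyta command-line tool.
import subprocess

QUERIES = [
    "A[] not deadlock",                       # sanity: the model never deadlocks
    "Camera1.silent --> Organiser.restored",  # robustness: a silent node is eventually compensated
    "E<> Organiser.restored",                 # flexibility: adaptation is reachable at all
]


def verify(model_xml: str, query_file: str) -> str:
    # Write the queries, then let verifyta check them against the model.
    with open(query_file, "w") as fh:
        fh.write("\n".join(QUERIES))
    result = subprocess.run(["verifyta", model_xml, query_file],
                            capture_output=True, text=True)
    return result.stdout
```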

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding the aspects of MTC applications that can be used to characterize the domain and understanding the implications of those aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications; in particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain the full benefit of the HPC hardware.
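
    A toy sketch of the task-graph structure described above: discrete, short tasks whose explicit input/output dependencies form the edges, dispatched as soon as their prerequisites finish. This is a conceptual illustration only, not Blue Waters software or any particular MTC middleware; task names and payloads are invented.

```python
# Conceptual MTC-style execution of a small task graph with a thread pool.
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# task name -> (set of prerequisite tasks, callable producing its output)
TASKS = {
    "extract_a": (set(),                      lambda: "a"),
    "extract_b": (set(),                      lambda: "b"),
    "combine":   ({"extract_a", "extract_b"}, lambda: "ab"),
    "report":    ({"combine"},                lambda: "done"),
}


def run_graph(tasks):
    sorter = TopologicalSorter({name: deps for name, (deps, _) in tasks.items()})
    sorter.prepare()
    results = {}
    with ThreadPoolExecutor() as pool:
        while sorter.is_active():
            ready = sorter.get_ready()                         # tasks whose inputs exist
            futures = {name: pool.submit(tasks[name][1]) for name in ready}
            for name, fut in futures.items():
                results[name] = fut.result()
                sorter.done(name)                              # unlock dependents
    return results


print(run_graph(TASKS))
```

    In a real MTC setting the interesting engineering problems are exactly the ones the report names: keeping per-task dispatch overhead far below the task runtime and avoiding the file system as the only communication channel.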

    The DECIDE Project: Designing and Implementing a Prototype Service for Supporting Early Diagnosis of Alzheimer's Disease

    This paper presents the design and implementation challenges of the innovative DECIDE service, which supports research into and early diagnosis of Alzheimer's and other neurodegenerative diseases. The DECIDE service, which is based on a Grid e-Infrastructure, offers a set of tools providing quantitative measurements to help researchers and clinicians make more informed diagnoses. As the service specifically targets the clinical community, it differs significantly from other initiatives: it must comply with the requirements imposed by clinical routine in terms of accuracy, robustness, ease of use, data-handling policies, and adherence to clinical practice. Sustainability aspects are also discussed, since DECIDE aims to establish the service as a reference at the European level and possibly extend it to other pathologies. We then summarize the main results obtained to date and possible future developments.

    Cloudbus Toolkit for Market-Oriented Cloud Computing

    This keynote paper: (1) presents the 21st-century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing an SDK (Software Development Kit) for the construction of Cloud applications and their deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of third-party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on the capabilities of IaaS providers such as Amazon, along with Grid mashups; (iv) CloudSim, supporting modelling and simulation of Clouds for performance studies; (v) energy-efficient resource allocation mechanisms and techniques for the creation and management of Green Clouds; and (vi) pathways for future research.
    Comment: 21 pages, 6 figures, 2 tables. Conference paper
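
    As a rough illustration of the customer-driven, SLA-oriented allocation described in point (3), the sketch below filters provider offers by a budget and a deadline and picks the cheapest feasible one. It is not Aneka or CloudSim code; the data model and the simple pricing rule are assumptions made for the example.

```python
# Conceptual sketch of market-oriented, SLA-aware allocation: honour the
# customer's budget and deadline, then minimise cost among feasible offers.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Offer:
    provider: str
    price_per_hour: float
    expected_runtime_h: float


@dataclass
class Request:
    job_id: str
    budget: float        # customer-driven SLA: maximum spend
    deadline_h: float    # customer-driven SLA: latest acceptable completion


def allocate(request: Request, offers: list[Offer]) -> Optional[Offer]:
    # Keep only offers that satisfy both SLA terms ...
    feasible = [o for o in offers
                if o.expected_runtime_h <= request.deadline_h
                and o.price_per_hour * o.expected_runtime_h <= request.budget]
    # ... and among those pick the cheapest (a stand-in for richer risk-aware policies).
    return min(feasible, key=lambda o: o.price_per_hour * o.expected_runtime_h,
               default=None)


offers = [Offer("provider_a", 0.12, 10), Offer("provider_b", 0.08, 16)]
print(allocate(Request("job-1", budget=1.5, deadline_h=12), offers))
```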

    A Framework for Evaluating Model-Driven Self-adaptive Software Systems

    In the last few years, Model-Driven Development (MDD), Component-Based Software Development (CBSD), and context-oriented software have become interesting alternatives for the design and construction of self-adaptive software systems. In general, the ultimate goal of these technologies is to reduce development costs and effort while improving the modularity, flexibility, adaptability, and reliability of software systems. An analysis of these technologies shows that they all include the principle of separation of concerns, and their further integration is a key factor in obtaining high-quality, self-adaptable software systems. Each technology identifies different concerns and deals with them separately in order to specify the design of self-adaptive applications while, at the same time, supporting software with adaptability and context-awareness. This research studies development methodologies that employ the principles of model-driven development in building self-adaptive software systems. To this end, this article proposes an evaluation framework for analysing and evaluating the features of model-driven approaches and their ability to support software with self-adaptability and dependability in highly dynamic contextual environments. Such an evaluation framework can help software developers select a development methodology that suits their software requirements and reduces the effort of building self-adaptive software systems. The study highlights the major drawbacks of the proposed model-driven approaches in the related work and emphasises the need to consider the volatile aspects of self-adaptive software in the analysis, design, and implementation phases of the development methodologies. In addition, we argue that development methodologies should leave the selection of modelling languages and modelling tools to the software developers.
    Comment: model-driven architecture, COP, AOP, component composition, self-adaptive application, context-oriented software development
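
    The separation of concerns emphasised above can be made concrete with a small sketch in which application logic, context sensing, and adaptation logic are kept apart and behaviour is swapped at runtime. All names are illustrative; the sketch does not correspond to any specific methodology evaluated in the article.

```python
# Minimal separation-of-concerns sketch for a self-adaptive component:
# application logic stays unaware of *why* its behaviour changes.
from typing import Protocol


class RenderStrategy(Protocol):
    def render(self, data: str) -> str: ...


class RichRenderer:
    def render(self, data: str) -> str:
        return f"<rich>{data}</rich>"


class LowBandwidthRenderer:
    def render(self, data: str) -> str:
        return data[:20]                      # degrade gracefully


class AdaptiveComponent:
    """Application concern: uses whatever strategy it currently holds."""
    def __init__(self, strategy: RenderStrategy):
        self.strategy = strategy

    def show(self, data: str) -> str:
        return self.strategy.render(data)


def adaptation_rule(component: AdaptiveComponent, bandwidth_kbps: float) -> None:
    # Adaptation concern: maps sensed context onto a behavioural variant.
    component.strategy = RichRenderer() if bandwidth_kbps > 256 else LowBandwidthRenderer()


app = AdaptiveComponent(RichRenderer())
adaptation_rule(app, bandwidth_kbps=64)       # context change triggers reconfiguration
print(app.show("sensor reading stream"))
```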