Critical analysis of vendor lock-in and its impact on cloud computing migration: a business perspective
Vendor lock-in is a major barrier to the adoption of cloud computing, due to the lack of standardization. Current solutions and efforts tackling the vendor lock-in problem are predominantly technology-oriented, and few studies analyse and highlight the complexity of the vendor lock-in problem in the cloud environment. Consequently, most customers are unaware of the proprietary standards which inhibit interoperability and portability of applications when taking services from vendors. This paper provides a critical analysis of the vendor lock-in problem from a business perspective. A survey combining qualitative and quantitative approaches, conducted as part of this study, identified the main risk factors that give rise to lock-in situations. The analysis of our survey of 114 participants shows that, as computing resources migrate from on-premise to the cloud, the vendor lock-in problem is exacerbated. Furthermore, the findings underline the importance of interoperability, portability and standards in cloud computing. A number of strategies are proposed for avoiding and mitigating lock-in risks when migrating to cloud computing. The strategies relate to contracts, the selection of vendors that support standardised formats and protocols for data structures and APIs, and the development of awareness of commonalities and dependencies among cloud-based solutions. We strongly believe that the implementation of these strategies has great potential to reduce the risks of vendor lock-in.
Towards a flexible deployment of multi-cloud applications based on TOSCA and CAMP
Cloud Computing platforms offer diverse services and capabilities, each with its own features. End users could therefore combine provider services from multiple cloud platforms to deploy cloud applications made up of a set of modules, according to the best capabilities of each cloud provider. However, this is an ideal scenario: cloud platforms operate in isolation and present many interoperability and portability restrictions, which complicate the integration of diverse provider services into a heterogeneous deployment of multi-cloud applications. In this ongoing work, we present an approach based on model transformation to deploy multi-cloud applications by reusing standardization efforts related to the management and deployment of cloud applications. Specifically, using mechanisms specified by the TOSCA and CAMP standards, we propose a methodology to describe the topology and distribution of the modules of a cloud application and to deploy the interconnected modules over heterogeneous clouds. We illustrate our idea using a running example.
Work partially supported by projects TIN2012-35669, funded by the Spanish Ministry MINECO and FEDER; P11-TIC-7659, funded by the Andalusian Government; FP7-610531 SeaClouds, funded by the EU; and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.
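To make the idea of a topology-and-distribution description concrete, the following is a minimal Python sketch, not the actual TOSCA or CAMP formats used in the paper; module names, provider names and the `Placement` structure are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    """One deployable module of a multi-cloud application."""
    name: str
    requires: list = field(default_factory=list)  # names of modules it connects to

@dataclass
class Placement:
    """Assignment of a module to a cloud platform offering the needed capability."""
    module: Module
    provider: str

# Hypothetical application topology: a web front end, an API tier and a database.
modules = [
    Module("frontend", requires=["api"]),
    Module("api", requires=["db"]),
    Module("db"),
]

# Distribution of the interconnected modules over heterogeneous clouds
# (provider names are placeholders, not concrete platforms).
plan = [
    Placement(modules[0], "provider-A"),
    Placement(modules[1], "provider-B"),
    Placement(modules[2], "provider-B"),
]

for p in plan:
    deps = ", ".join(p.module.requires) or "none"
    print(f"deploy {p.module.name} on {p.provider} (depends on: {deps})")
```

A model-transformation step, as described in the abstract, would map such an abstract topology onto provider-specific deployment artefacts.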
A Development Framework Enabling the Design of Service-Based Cloud Applications
Cloud application platforms are gaining popularity and have the potential to change the way applications are developed, with applications built by composing platform basic services. To enhance the developer experience and reduce barriers in software development, a new paradigm of cloud application creation should be adopted, in which developers design their applications leveraging multiple platform basic services, independently of the target application platforms. To this end, this paper proposes a development framework for the design of service-based cloud applications comprising two main components: the meta-model and the Platform Service Manager. The meta-model describes the building blocks which enable the construction of Platform Service Connectors in a uniform way, while the Platform Service Manager coordinates the interaction of the application with the concrete service providers and further facilitates the administration of the deployed platform basic services.
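As a rough illustration of the connector/manager split described above, here is a minimal Python sketch under assumed names; the interface, the in-memory connector and the `PlatformServiceManager` methods are illustrative, not the framework's actual API.

```python
from abc import ABC, abstractmethod

class PlatformServiceConnector(ABC):
    """Uniform interface to one platform basic service (e.g. storage, messaging)."""

    @abstractmethod
    def provision(self, config: dict) -> str:
        """Create a service instance and return its identifier/endpoint."""

    @abstractmethod
    def deprovision(self, instance_id: str) -> None:
        """Release the service instance."""

class InMemoryQueueConnector(PlatformServiceConnector):
    """Stand-in connector used here instead of a real provider SDK."""
    def provision(self, config: dict) -> str:
        return f"queue://{config.get('name', 'default')}"
    def deprovision(self, instance_id: str) -> None:
        print(f"released {instance_id}")

class PlatformServiceManager:
    """Coordinates the application's interaction with registered connectors."""
    def __init__(self):
        self._connectors = {}

    def register(self, service_type: str, connector: PlatformServiceConnector) -> None:
        self._connectors[service_type] = connector

    def provision(self, service_type: str, config: dict) -> str:
        return self._connectors[service_type].provision(config)

manager = PlatformServiceManager()
manager.register("messaging", InMemoryQueueConnector())
print(manager.provision("messaging", {"name": "orders"}))  # queue://orders
```

The point of the uniform connector interface is that the application code only talks to the manager, so swapping the concrete provider behind a service type does not change the application.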
Bone Mineral Density in Patients with Ankylosing Spondylitis: Incidence and Correlation with Demographic and Clinical Variables
Objective: To evaluate bone mineral density (BMD) in patients with ankylosing spondylitis (AS) and determine its correlation with the demographic and clinical characteristics of AS. Patients and Methods: Demographic, clinical and osteodensitometric data were evaluated in a cross-sectional study that included 136 patients with AS. Spine and hip BMD were measured by means of dual energy X-ray absorptiometry (DXA). Spine mobility was assessed using the modified Schober's test. We examined the sacroiliac, anteroposterior and lateral dorso-lumbar spine radiographs in order to grade sacroiliitis and assess syndesmophytes. Disease activity was evaluated using C-reactive protein (CRP) levels and erythrocyte sedimentation rate (ESR). Demographic data and BMD measurements were compared with those of 167 age- and sex-matched healthy controls. Results: Patients with AS had a significantly lower BMD at the spine, femoral neck, trochanter and total hip as compared to age-matched controls (all p<0.01). According to the WHO classification, osteoporosis was present in 20.6% of the AS patients at the lumbar spine and in 14.6% at the femoral neck. There were no significant differences in BMD when comparing men and women with AS, except for trochanter BMD, which was lower in female patients. No correlations were found between disease activity markers (ESR, CRP) and BMD. Femoral neck BMD was correlated with disease duration, Schober's test and sacroiliitis grade. Conclusion: Patients with AS have a lower spine and hip BMD as compared to age- and sex-matched controls. Bone loss at the femoral neck is associated with disease duration and more severe AS.
Orthogonal variability modeling to support multi-cloud application configuration
Cloud service providers benefit from serving a large customer base by offering variability while profiting from the commonalities between the cloud services they provide. Recently, the number of application configuration dimensions has increased dramatically due to the multi-tenant, multi-device and multi-cloud paradigms. This intrinsic variability challenges the configuration and customization of cloud-based software, which is typically offered as a service. In this paper, we present a model-driven approach based on variability models originating from the software product line community to handle such multi-dimensional variability in the cloud. We exploit orthogonal variability models to systematically manage and create tenant-specific configurations and customizations. We also demonstrate how such variability models can be utilized to take into account already deployed application parts, enabling harmonized deployments for new tenants in a multi-cloud setting. The approach considers both functional and non-functional application requirements to provide a set of valid multi-cloud configurations. We illustrate our approach through a case study.
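To illustrate the general idea of deriving valid tenant-specific configurations from a variability model, here is a simplified Python sketch; the variation points, variants and tenant requirements are toy assumptions, not the paper's actual orthogonal variability model formalism.

```python
from itertools import product

# Hypothetical variation points and their variants.
dimensions = {
    "database": ["postgres", "mysql"],
    "region":   ["eu", "us"],
    "tier":     ["standard", "premium"],
}

def satisfies(tenant_requirements: dict, config: dict) -> bool:
    """Toy cross-dimension constraints standing in for functional and
    non-functional requirements (e.g. data residency, latency)."""
    if tenant_requirements.get("data_residency") == "eu" and config["region"] != "eu":
        return False
    if tenant_requirements.get("low_latency") and config["tier"] != "premium":
        return False
    return True

def valid_configurations(tenant_requirements: dict) -> list:
    """Enumerate all configurations of the model that satisfy the tenant."""
    names = list(dimensions)
    return [
        config
        for combo in product(*dimensions.values())
        if satisfies(tenant_requirements, (config := dict(zip(names, combo))))
    ]

# Example tenant: data must stay in the EU and latency must be low.
for cfg in valid_configurations({"data_residency": "eu", "low_latency": True}):
    print(cfg)
```

In a multi-cloud setting, the constraint function would additionally encode which parts of the application are already deployed, so that new tenant configurations stay harmonized with existing deployments.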
KCDC - The KASCADE Cosmic-ray Data Centre
KCDC, the KASCADE Cosmic-ray Data Centre, is a web portal where data of astroparticle physics experiments will be made available to the interested public. The KASCADE experiment, financed by public money, was a large-area detector for the measurement of high-energy cosmic rays via the detection of air showers. KASCADE and its extension KASCADE-Grande finally stopped the active data acquisition of all their components, including the radio EAS experiment LOPES, at the end of 2012, after more than 20 years of data taking. In a first release, KCDC provides to the public the measured and reconstructed parameters of more than 160 million air showers. In addition, KCDC provides the conceptual design for how the data can be treated and processed so that they are also usable outside the community of experts in the research field. Detailed educational examples also make the data usable for high-school students and early-stage researchers.
Comment: 8 pages, accepted proceedings of the ECRS symposium, Kiel, 201
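In the spirit of the educational examples mentioned above, the following is a minimal Python sketch of a first analysis step on reconstructed shower parameters; it assumes the user has already downloaded a selection from the portal as a CSV file, and both the file name and the 'energy' column are hypothetical, not KCDC's actual export format.

```python
import csv
import math
from collections import Counter

def energy_histogram(path: str, bins_per_decade: int = 4) -> Counter:
    """Count air showers in logarithmic energy bins read from a CSV export
    of reconstructed parameters (column names are assumptions)."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            energy = float(row["energy"])  # assumed column, e.g. energy in GeV
            if energy > 0:
                counts[math.floor(math.log10(energy) * bins_per_decade)] += 1
    return counts

if __name__ == "__main__":
    bins_per_decade = 4
    hist = energy_histogram("kcdc_showers.csv", bins_per_decade)
    for bin_index, n in sorted(hist.items()):
        low = 10 ** (bin_index / bins_per_decade)
        print(f"E >= {low:.2e}: {n} showers")
```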
First results of the air shower experiment KASCADE
The main goals of the KASCADE (KArlsruhe Shower Core and Array DEtector) experiment are the determination of the energy spectrum and elemental composition of charged cosmic rays in the energy range around the knee at ca. 5 PeV. Due to the large number of measured observables per single shower, a variety of different approaches are applied to the data, preferably on an event-by-event basis. First results are presented and the influence of the high-energy interaction models underlying the analyses is discussed.
Comment: 3 pages, 3 figures included, to appear in the TAUP 99 Proceedings, Nucl. Phys. B (Proc. Suppl.), ed. by M. Froissart, J. Dumarchez and D. Vignau
