
    CREOLE: a Universal Language for Creating, Requesting, Updating and Deleting Resources

    In the context of Service-Oriented Computing, applications can be developed following the REST (Representational State Transfer) architectural style. This style corresponds to a resource-oriented model, where resources are manipulated via CRUD (Create, Request, Update, Delete) interfaces. The diversity of CRUD languages, due to the absence of a standard, leads to composition problems related to adaptation, integration and coordination of services. To overcome these problems, we propose a pivot architecture built around a universal language to manipulate resources, called CREOLE, a CRUD Language for Resource Edition. In this architecture, scripts written in existing CRUD languages, like SQL, are compiled into CREOLE and then executed over different CRUD interfaces. After stating the requirements for a universal language for manipulating resources, we formally describe the language and informally motivate its definition with respect to the requirements. We then concretely show how the architecture solves adaptation, integration and coordination problems in the case of photo management in Flickr and Picasa, two well-known service-oriented applications. Finally, we propose a roadmap for future work. Comment: In Proceedings FOCLASA 2010, arXiv:1007.499
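
    As an informal illustration of the pivot idea only (the class and method names below are hypothetical and do not reflect CREOLE's actual syntax or API), the following Python sketch expresses Create/Request/Update/Delete operations once, against a common interface, and lets each service supply its own adapter:

```python
# Hypothetical sketch of a CRUD "pivot" layer (not CREOLE's actual syntax):
# operations are written once against a common interface and dispatched to
# service-specific adapters, decoupling clients from each service's CRUD dialect.
from abc import ABC, abstractmethod


class CrudAdapter(ABC):
    """Common resource-manipulation interface every service adapter implements."""

    @abstractmethod
    def create(self, collection: str, data: dict) -> str: ...

    @abstractmethod
    def request(self, collection: str, resource_id: str) -> dict: ...

    @abstractmethod
    def update(self, collection: str, resource_id: str, data: dict) -> None: ...

    @abstractmethod
    def delete(self, collection: str, resource_id: str) -> None: ...


class InMemoryAdapter(CrudAdapter):
    """Stand-in for a concrete adapter (e.g. one wrapping a photo-service API)."""

    def __init__(self):
        self._store, self._next_id = {}, 0

    def create(self, collection, data):
        self._next_id += 1
        rid = str(self._next_id)
        self._store.setdefault(collection, {})[rid] = dict(data)
        return rid

    def request(self, collection, resource_id):
        return self._store[collection][resource_id]

    def update(self, collection, resource_id, data):
        self._store[collection][resource_id].update(data)

    def delete(self, collection, resource_id):
        del self._store[collection][resource_id]


if __name__ == "__main__":
    backend = InMemoryAdapter()          # could be a Flickr- or Picasa-facing adapter
    photo_id = backend.create("photos", {"title": "sunset"})
    backend.update("photos", photo_id, {"title": "sunset over the sea"})
    print(backend.request("photos", photo_id))
    backend.delete("photos", photo_id)
```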

    Characterizing groundwater CH4 and 222Rn in relation to hydraulic fracturing and other environmental processes in Letcher County, KY

    Hydraulic fracturing of shale deposits has greatly increased the productivity of the natural gas industry by allowing it to exploit previously inaccessible reservoirs. However, previous research has demonstrated that this practice can contaminate shallow aquifers with CH4 [methane] from deeper formations. This study compares concentrations and isotope compositions of CH4 sampled from domestic groundwater wells in Letcher County, Kentucky, in order to characterize its occurrence and origins in relation to neighboring hydraulically fractured natural gas wells. Additionally, this study tests the reliability of 222Rn [radon] as an alternative tracer to CH4 in identifying processes of gas migration from Devonian shale. Other chemical and isotopic tracers – including isotope compositions of H2O [water] and dissolved SO4 [sulfate], as well as concentrations of major dissolved ions – were also compared in order to characterize groundwater in relation to other environmental processes. Approximately half of the 59 households sampled in Letcher County showed elevated CH4 concentrations (> 1 mg/L). CH4 concentrations measured in groundwater ranged from < 0.05 mg/L to 10 mg/L (mean: 4.92 mg/L). δ13C [delta-13 of carbon] values of CH4 ranged from -66 ‰ [per mil] to -16 ‰ (mean: -46 ‰), and δ2H [deuterium] values ranged from -286 ‰ to -86 ‰ (mean: -204 ‰). The isotope composition of observed CH4 was characteristic of an immature thermogenic or mixed biogenic/thermogenic origin, similar to that of coalbed CH4 sampled from shallower, Pennsylvanian deposits. The occurrence of 222Rn was rare, and determined not to be linked to the occurrence of CH4. CH4 and 222Rn occurrences were not correlated with proximity to hydraulically fractured natural gas wells. Instead, CH4 occurrence corresponded with groundwater abundant in Na+ [sodium], Cl- [chloride], and HCO3- [bicarbonate], and CH4 concentrations were best predicted by the oxidation/reduction potential of the aquifer sampled. These results suggest that hydraulic fracturing has had a negligible impact on processes of stray gas migration in Letcher County. Furthermore, CH4 found in shallow groundwater likely originated from shallower depositional and/or microbial processes unrelated to gas migration from Devonian shale.
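
    For readers unfamiliar with the notation used above, δ values report the deviation of a sample's isotope ratio from a reference standard in per mil; the following is the standard definition (stated here as general background, not a detail specific to this study):

```latex
% Standard per-mil delta notation for stable isotopes, e.g. carbon-13:
\delta^{13}\mathrm{C} \;=\;
  \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}
              {(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{standard}}} - 1 \right) \times 1000
% delta-2H is defined analogously from the 2H/1H ratio; values are in per mil.
```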

    Dust reddening and extinction curves towards gamma-ray bursts at z > 4

    Dust is known to be produced in the envelopes of AGB stars, the expanded shells of supernova (SN) remnants, and via in situ grain growth in the ISM, although the corresponding efficiency of each of these dust formation mechanisms at different redshifts remains a topic of debate. During the first Gyr after the Big Bang, it is widely believed that there was not enough time to form AGB stars in high numbers, so that the dust at this epoch is expected to be purely from SNe, or from subsequent grain growth in the ISM. The time period corresponding to z ~ 5-6 is thus expected to display the transition from SN-only dust to a mixture of both formation channels as we know it today. Here we aim to use afterglow observations of GRBs at redshifts z > 4 in order to derive host galaxy dust column densities along their line of sight and to test if a SN-type dust extinction curve is required for some of the bursts. GRB afterglow observations were performed with the 7-channel GROND detector at the 2.2 m MPI telescope in La Silla, Chile, and combined with data gathered with XRT. We increase the number of measured A_V values for GRBs at z > 4 by a factor of ~2-3 and find that, in contrast to samples at mostly lower redshift, all of the GRB afterglows have a visual extinction of A_V < 0.5 mag. Analysis of the GROND detection thresholds and results from a Monte Carlo simulation show that, although we partly suffer from an observational bias against highly extinguished sight-lines, GRB host galaxies at 4 < z < 6 seem to contain on average less dust than at z ~ 2. Additionally, we find that all of the GRBs can be modeled with locally measured extinction curves and that the SN-like dust extinction curve provides a better fit for only two of the afterglow SEDs. For the first time we also report a photometric redshift of z = 7.88 for GRB 100905A, making it one of the most distant GRBs known to date. Comment: 26 pages, 37 figures
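
    As general background to the A_V fits mentioned above (a standard modelling relation, not a result specific to this paper), the host-galaxy dust column is typically inferred by fitting the observed afterglow spectral energy distribution with an intrinsic power law attenuated by an extinction curve:

```latex
% Intrinsic afterglow power law attenuated by host-galaxy dust:
F_{\mathrm{obs}}(\nu) \;=\; F_{0}\, \nu^{-\beta}\, 10^{-0.4\, A_{\lambda}},
\qquad A_{\lambda} \;=\; A_{V}\, \xi(\lambda)
% beta is the spectral slope; xi(lambda) is the adopted extinction curve
% (e.g. MW, LMC, SMC or a SN-type curve) normalised to unity in the V band.
```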

    Experimental Analysis on Autonomic Strategies for Cloud Elasticity

    In spite of the indubitable advantages of elasticity in Cloud infrastructures, some technical and conceptual limitations are still to be considered. For instance, resource start-up time is generally too long to react to unexpected workload spikes. Also, the granularity of the billing cycles in existing pricing models may cause consumers to suffer from partial usage waste. We advocate that the software layer can take part in the elasticity process, as the overhead of software reconfigurations can usually be considered negligible compared to that of infrastructure reconfigurations. Thanks to this extra level of elasticity, we are able to define cloud reconfigurations that enact elasticity in both the software and infrastructure layers so as to meet demand changes while tackling those limitations. This paper presents an autonomic approach to manage cloud elasticity in a cross-layered manner. First, we enhance cloud elasticity with a software elasticity model. Then, we describe how our autonomic cloud elasticity model relies on dynamic selection of elasticity tactics. We present an experimental analysis of a subset of those elasticity tactics under different scenarios in order to provide insights on strategies that could drive the autonomic selection of the proper tactics to be applied.
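
    A minimal sketch of the cross-layer idea, assuming hypothetical metric and tactic names (this is not the controller implemented in the paper): fast software-layer tactics absorb sudden spikes while slower infrastructure-layer scaling catches up.

```python
# Minimal sketch (hypothetical names, not the paper's implementation) of a
# cross-layer elasticity controller: cheap software-layer tactics absorb
# sudden load spikes while slower infrastructure-layer scaling catches up.
from dataclasses import dataclass


@dataclass
class Metrics:
    cpu_load: float          # average CPU utilisation, 0..1
    response_time_ms: float


def select_tactic(m: Metrics, vm_booting: bool) -> str:
    """Pick an elasticity tactic for the current observation window."""
    if m.response_time_ms > 500 and not vm_booting:
        return "scale_out_vm"            # infrastructure layer: add a VM (slow)
    if m.response_time_ms > 500 and vm_booting:
        return "degrade_software_mode"   # software layer: e.g. disable costly features
    if m.cpu_load < 0.2:
        return "scale_in_vm"             # release resources to avoid usage waste
    return "no_op"


if __name__ == "__main__":
    print(select_tactic(Metrics(cpu_load=0.9, response_time_ms=800), vm_booting=True))
```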

    Standing in the Shadows of Obesity: The Local Food Environment and Obesity in Detroit

    Much of the literature examining associations between local food environments and obesity fails to consider whether or not respondents actually utilise the food stores around them. Drawing on survey data, this study examines the relationships between the neighbourhood food environment, mobility and obesity among residents from the lower eastside neighbourhoods of Detroit, Michigan. Certain dimensions of the local food environment are found to contribute to obesity, but these dimensions occur at different scales. Residents who rely on their immediate neighbourhood food environment have a higher likelihood of being obese than residents who do not utilise the stores around them. At a broader level, lower eastside Detroit residents with a greater concentration of fast food establishments around them have a higher likelihood of being obese than residents with fewer fast food restaurants around them. The salience of the fast food environment warrants additional attention in terms of public health interventions.

    Enabling Green Energy awareness in Interactive Cloud Application

    With the proliferation of Cloud computing, data centers urgently have to face energy consumption issues. Although recent efforts, such as the integration of renewable energy into data centers or energy-efficient techniques in (virtual) machines, contribute to the reduction of the carbon footprint, creating green energy awareness around Interactive Cloud Applications by smartly using the presence of green energy has not yet been addressed. By awareness, we mean the inherent capability of Software-as-a-Service applications to dynamically adapt to the availability of green energy and to reduce energy consumption when green energy is scarce or absent. In this paper, we present two application controllers based on different metrics (e.g., availability of green energy, response time, user experience level). Based on extensive experiments with a real application benchmark and workloads on Grid'5000, the results suggest that provider revenue can be increased by as much as 64%, while brown energy consumption can be reduced by 13% without deprovisioning any physical or virtual resources at the IaaS layer, and a 17-fold performance improvement can be guaranteed.
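
    The following is an illustrative sketch only, with assumed metric names and thresholds rather than the controllers evaluated in the paper: a SaaS application switches execution modes depending on green-energy availability and response time.

```python
# Illustrative sketch (assumed metric names and thresholds, not the paper's
# controllers): a SaaS application picks an execution mode for the next
# control period based on green-energy availability and response time.
def choose_mode(green_power_w: float, demand_w: float, response_time_ms: float) -> str:
    """Return an application mode for the next control period."""
    if green_power_w >= demand_w:
        return "full"          # enough renewable supply: run all features
    if response_time_ms > 1000:
        return "degraded"      # protect user experience despite scarce green energy
    return "eco"               # green energy scarce: shed optional work to cut brown energy


if __name__ == "__main__":
    for sample in [(1200, 800, 300), (200, 800, 300), (200, 800, 1500)]:
        print(sample, "->", choose_mode(*sample))
```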

    Unifying Runtime Adaptation and Design Evolution

    The increasing need for continuously available software systems has raised two key issues: self-adaptation and design evolution. The former requires software systems to monitor their execution platform and automatically adapt their configuration and/or architecture to adjust their quality of service (optimization, fault handling). The latter requires new design decisions to be reflected on the fly in the running system to ensure the needed high availability (new requirements, corrective and preventive maintenance). However, design evolution and self-adaptation are not independent, and reflecting a design evolution on a running self-adaptive system is not always safe. We propose to unify runtime adaptation and runtime evolution by monitoring both the runtime platform and the design models. Thus, it becomes possible to correlate those heterogeneous events and to use pattern matching on events to elaborate a pertinent decision for runtime adaptation. A flood prediction system deployed along the Ribble river (Yorkshire, England) is used to illustrate how to unify design evolution and runtime adaptation and to safely perform runtime evolution on adaptive systems.
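
    A minimal sketch of the event-correlation idea, with hypothetical event names (not the paper's implementation): events from the running platform and from the design models are merged into one stream, and simple patterns over that stream trigger either a runtime adaptation or a deferred evolution.

```python
# Sketch of event correlation across runtime and design-model events
# (hypothetical event names, not the paper's implementation): simple patterns
# over a sliding window of heterogeneous events drive adaptation decisions.
from collections import deque

recent = deque(maxlen=50)  # sliding window over heterogeneous events


def on_event(event: dict) -> str | None:
    recent.append(event)
    kinds = [e["kind"] for e in recent]
    # Pattern 1: a node failure followed by rising latency -> adapt now.
    if "node_failure" in kinds and kinds[-1] == "latency_high":
        return "reconfigure_replicas"
    # Pattern 2: a model change while an adaptation is in progress -> defer it.
    if "design_model_changed" in kinds and "adaptation_in_progress" in kinds:
        return "defer_evolution"
    return None


if __name__ == "__main__":
    decision = None
    for e in [{"kind": "node_failure"}, {"kind": "latency_high"}]:
        decision = on_event(e)
    print(decision)
```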

    Towards QoS-Oriented SLA Guarantees for Online Cloud Services

    Cloud Computing provides a convenient means of remote, on-demand and pay-per-use access to computing resources. However, its ad hoc management of quality of service and SLAs poses significant challenges to the performance, dependability and costs of online cloud services. This paper addresses precisely this issue and makes a threefold contribution. First, it introduces a new cloud model, the SLAaaS (SLA-aware Service) model. SLAaaS enables a systematic integration of QoS levels and SLAs into the cloud. It is orthogonal to other cloud models such as SaaS or PaaS, and may apply to any of them. Second, the paper introduces CSLA, a novel language to describe QoS-oriented SLAs associated with cloud services. Third, the paper presents a control-theoretic approach to provide performance, dependability and cost guarantees for online cloud services with time-varying workloads. The proposed approach is validated through case studies and extensive experiments with online services hosted in clouds such as Amazon EC2. The case studies illustrate SLA guarantees for various services, such as a MapReduce service, a cluster-based multi-tier e-commerce service, and a low-level locking service.
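
    As a toy illustration of the control-theoretic idea (this is neither CSLA nor the controllers described in the paper), a simple integral feedback loop can adjust provisioned capacity so that measured response time tracks an SLA target:

```python
# Toy integral feedback controller (not CSLA and not the paper's controllers):
# adjust the number of provisioned instances so that measured response time
# tracks an SLA target despite a time-varying workload.
class CapacityController:
    def __init__(self, target_ms: float, gain: float = 0.01):
        self.target_ms = target_ms
        self.gain = gain
        self.instances = 1.0

    def step(self, measured_ms: float) -> int:
        error = measured_ms - self.target_ms      # positive error: SLA at risk
        self.instances = max(1.0, self.instances + self.gain * error)
        return round(self.instances)              # capacity to provision next period


if __name__ == "__main__":
    ctrl = CapacityController(target_ms=200)
    for rt in [450, 380, 260, 210, 190]:          # example measured response times
        print("provision", ctrl.step(rt), "instances")
```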

    BCTMark: a framework for benchmarking blockchain technologies

    Over the last years, research activity on blockchain technologies has increased considerably. Since the technology was first introduced with Bitcoin, some projects have emerged to create or improve blockchain features such as privacy, while others propose to overcome technical limitations such as scalability and energy consumption. New proposals are often evaluated with ad hoc tools and experimental environments. Reproducibility and comparison of these new contributions with the state of the art of blockchain technologies are therefore complicated. To the best of our knowledge, only a few tools partially address the design of a generic benchmarking framework for blockchain technologies (e.g., load generation). In this paper, we introduce BCTMark, a generic framework for benchmarking blockchain technologies on an emulated network in a reproducible way. To illustrate the portability of experiments using BCTMark, we have conducted experiments on two different testbeds: a cluster of Dell PowerEdge R630 servers (Grid'5000) and a cluster of Raspberry Pi 3+ boards. Experiments were conducted on three different blockchain systems (Ethereum Clique, Ethereum Ethash and Hyperledger Fabric) to measure their CPU consumption and energy footprint for different numbers of clients.
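
    As a rough sketch of what such a benchmark run involves (the probe functions below are placeholders, not BCTMark's API), one can generate a fixed transaction load against a blockchain endpoint and record CPU time and energy before and after the run:

```python
# Generic benchmarking skeleton in the spirit of such frameworks: the callables
# send_transaction, read_cpu_seconds and read_energy_joules are placeholders,
# NOT BCTMark's API. It generates a fixed transaction load and records resource
# usage deltas around the run.
import time


def run_benchmark(send_transaction, read_cpu_seconds, read_energy_joules,
                  n_clients: int, tx_per_client: int) -> dict:
    cpu0, e0, t0 = read_cpu_seconds(), read_energy_joules(), time.time()
    for _ in range(n_clients):
        for _ in range(tx_per_client):
            send_transaction()                      # e.g. a simple token transfer
    return {
        "duration_s": time.time() - t0,
        "cpu_s": read_cpu_seconds() - cpu0,
        "energy_j": read_energy_joules() - e0,
        "tx_total": n_clients * tx_per_client,
    }


if __name__ == "__main__":
    # Dummy probes so the sketch runs standalone; real probes would query the
    # system under test (e.g. /proc/stat or an external power meter).
    stats = run_benchmark(lambda: None, time.process_time, lambda: 0.0,
                          n_clients=2, tx_per_client=10)
    print(stats)
```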