Ten Years of Rich Internet Applications: A Systematic Mapping Study, and Beyond
BACKGROUND: The term Rich Internet Applications (RIAs) is generally associated with Web applications that provide the features and functionality of traditional desktop applications. Ten years after the introduction of the term, an ample amount of research has been carried out to study various aspects of RIAs. It has thus become essential to summarize this research and provide an adequate overview.
OBJECTIVE: The objective of our study is to assemble, classify and analyze all RIA research performed in the scientific community, thus providing a consolidated overview thereof, and to identify well-established topics, trends and open research issues. Additionally, we provide a qualitative discussion of the most interesting findings. This work therefore serves as a reference work for beginning and established RIA researchers alike, as well as for industrial actors that need an introduction to the field, or seek pointers to (a specific subset of) the state-of-the-art.
METHOD: A systematic mapping study is performed in order to identify all RIA-related publications, define a classification scheme, and categorize, analyze, and discuss the identified research according to it.
RESULTS: Our source identification phase resulted in 133 relevant, peer-reviewed publications, published between 2002 and 2011 in a wide variety of venues. They were subsequently classified according to four facets: development activity, research topic, contribution type and research type. Pie, stacked bar and bubble charts were used to visualize and analyze the results. A deeper analysis is provided for the most interesting and/or remarkable results.
CONCLUSION: Analysis of the results shows that, although the RIA term was coined in 2002, the first RIA-related research appeared in 2004. From 2007 there was a significant increase in research activity, peaking in 2009 and decreasing to pre-2009 levels afterwards. All development phases are covered in the identified research, with emphasis on "design" (33%) and "implementation" (29%). The majority of research proposes a "method" (44%), followed by "model" (22%), "methodology" (18%) and "tools" (16%); no publications in the category "metrics" were found. The preponderant research topic is "models, methods and methodologies" (23%) and, to a lesser extent, "usability & accessibility" and "user interface" (11% each). On the other hand, the topic "localization, internationalization & multi-linguality" received no attention at all, and topics such as "deep web" (under 1%), "business processing", "usage analysis", "data management", "quality & metrics" (all under 2%), "semantics" and "performance" (slightly above 2%) received very little attention. Finally, there is a large majority of "solution proposals" (66%), few "evaluation research" papers (14%) and even fewer "validation" papers (6%), although the latter have been increasing in recent years.
InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services
Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services so as to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, hence load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions.
The overall goal is to create a computing environment that supports dynamic
expansion or contraction of capabilities (VMs, services, storage, and database)
for handling sudden variations in service demands.
This paper presents vision, challenges, and architectural elements of
InterCloud for utility-oriented federation of Cloud computing environments. The
proposed InterCloud environment supports scaling of applications across
multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in terms of response time and cost savings under dynamic workload scenarios.
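The load-coordination idea described in this abstract can be sketched as a simple placement policy: given the current utilization and network latency of each federated data center, pick the one with the lowest weighted cost that still has capacity. The data-center names, weights, and cost function below are illustrative assumptions, not InterCloud's actual mechanism:

```python
# Hypothetical sketch (not the paper's algorithm): a federation broker
# that places a service on the data center with the best trade-off
# between current load and network latency to the user region.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    capacity_vms: int   # total VM slots
    used_vms: int       # VM slots currently in use
    latency_ms: float   # network latency from the user region

def place_service(centers, needed_vms, w_load=0.7, w_latency=0.3):
    """Return the cheapest data center with room for `needed_vms`,
    or None if no data center in the federation has capacity."""
    candidates = [c for c in centers
                  if c.capacity_vms - c.used_vms >= needed_vms]
    if not candidates:
        return None

    def cost(c):
        load = c.used_vms / c.capacity_vms   # 0.0 (idle) .. 1.0 (full)
        latency = c.latency_ms / 100.0       # normalized, assuming <= 100 ms
        return w_load * load + w_latency * latency

    return min(candidates, key=cost)

centers = [
    DataCenter("us-east", capacity_vms=100, used_vms=90, latency_ms=20),
    DataCenter("eu-west", capacity_vms=100, used_vms=30, latency_ms=40),
]
best = place_service(centers, needed_vms=8)  # lightly loaded eu-west wins
```

A real broker would re-run such a decision continuously as load changes, which is the "dynamic expansion or contraction" the abstract refers to.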
Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World
This report documents the program and the outcomes of GI-Dagstuhl Seminar
16394 "Software Performance Engineering in the DevOps World".
The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration, and to set the path for long-lasting collaborations towards performance-aware DevOps.
The main goal of the seminar was to bring together young researchers (PhD
students in a later stage of their PhD, as well as PostDocs or Junior
Professors) in the areas of (i) software engineering, (ii) performance
engineering, and (iii) cloud computing and big data to present their current
research projects, to exchange experience and expertise, to discuss research
challenges, and to develop ideas for future collaborations.
Tailoring e-commerce sites to ease recovery after disruptions
Developers of e-commerce applications are often unrealistic about how their Web site is going to be used, and about possible outcomes during site usage. The most commonly considered outcomes of a user's visit to a site are firstly that the visit culminates in a sale, and secondly that the user leaves the site without buying anything - perhaps to return later. In the second case, sites often "remember" any accumulated items so that a shopper can return at a later stage to resume shopping. In this paper, we consider certain disruptions, such as breakdowns, problems caused by human errors, and interruptions, which could affect the outcome of the e-commerce shopping experience. These events have definite and possibly long-lasting effects on users, and applications should therefore be developed to cater for these eventualities so as to enhance the usability of the site and encourage further usage. We develop a model for analysing e-commerce application usage and, using this model, propose an evaluation strategy for determining whether an e-commerce site is resistant to such factors. The proposed evaluation mechanism is applied to three sites to arrive at what we call a "disruption-resistance score".
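One way to picture a "disruption-resistance score" of the kind the abstract describes is as a weighted checklist: the fraction of recovery criteria a site satisfies after a breakdown or interruption. The criteria, weights, and scoring formula below are purely illustrative assumptions, not the paper's actual evaluation strategy:

```python
# Illustrative sketch only: models a disruption-resistance score as the
# weighted fraction of recovery criteria an e-commerce site satisfies.
# Criteria names and weights are hypothetical, not taken from the paper.
CRITERIA = {
    "cart_persists_after_crash": 3,       # accumulated items survive a breakdown
    "session_resumes_after_timeout": 2,   # user can pick up where they left off
    "form_data_survives_back_button": 2,  # no re-typing after a navigation slip
    "order_status_recoverable": 3,        # interrupted checkout can be resumed
}

def disruption_resistance_score(site_features):
    """Weighted fraction (0.0 - 1.0) of recovery criteria the site meets."""
    total = sum(CRITERIA.values())
    met = sum(w for name, w in CRITERIA.items() if site_features.get(name))
    return met / total

# A site that only persists carts and recoverable order status:
site = {"cart_persists_after_crash": True, "order_status_recoverable": True}
score = disruption_resistance_score(site)  # (3 + 3) / 10 = 0.6
```

Comparing such scores across sites, as the paper does for three sites, then gives a rough ranking of how gracefully each one recovers from disruptions.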