The Making of Cloud Applications: An Empirical Study on Software Development for the Cloud
Cloud computing is gaining more and more traction as a deployment and
provisioning model for software. While a large body of research already covers
how to optimally operate a cloud system, we still lack insights into how
professional software engineers actually use clouds, and how the cloud impacts
development practices. This paper reports on the first systematic study on how
software developers build applications in the cloud. We conducted a
mixed-method study, consisting of qualitative interviews of 25 professional
developers and a quantitative survey with 294 responses. Our results show that
adopting the cloud has a profound impact throughout the software development
process, as well as on how developers utilize tools and data in their daily
work. Among other things, we found that (1) developers need better means to
anticipate runtime problems and rigorously define metrics for improved fault
localization, and (2) although the cloud offers an abundance of operational
data, developers still often rely on their experience and intuition rather than
on metrics. From our findings, we extracted a set of guidelines for cloud
development and identified challenges for researchers and tool vendors.
Geoportals: an internet marketing perspective
A geoportal is a web site that presents an entry point to geo-products (including geo-data) on the web. Despite their importance in (spatial) data infrastructures, the literature suggests stagnating or even declining trends in visitor numbers. In this paper, relevant ideas and techniques for improving performance are derived from the internet marketing literature. We tested the extent to which these ideas are already applied in practice through a survey among 48 geoportals worldwide. The results show, in many cases, a positive correlation with trends in visitor numbers. These ideas can be useful for geoportal managers developing their marketing strategies.
Using real options to select stable Middleware-induced software architectures
The requirements that force decisions towards building distributed system architectures are usually of a non-functional nature. Scalability, openness, heterogeneity, and fault-tolerance are examples of such non-functional requirements. The current trend is to build distributed systems with middleware, which provides the application developer with primitives for managing the complexity of distribution and system resources, and for realising many of the non-functional requirements. As non-functional requirements evolve, the 'coupling' between the middleware and the architecture becomes the focal point for understanding the stability of the distributed software system architecture in the face of change. It is hypothesised that the choice of a stable distributed software architecture depends on the choice of the underlying middleware and its flexibility in responding to future changes in non-functional requirements. Drawing on a case study that adequately represents a medium-size component-based distributed architecture, we report how a likely future change in scalability could impact the architectural structure of two versions, each induced with a distinct middleware: one with CORBA and the other with J2EE. An option-based model is derived to value the flexibility of the induced architectures and to guide the selection. The hypothesis is verified to be true for the given change. The paper concludes with some observations that could stimulate future research in the area of relating requirements to software architectures.
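The core idea behind option-based valuation of architectural flexibility can be illustrated with a one-period real-option calculation: flexibility is worth the discounted expected payoff of exercising a change only when it is beneficial. This is a minimal sketch, not the paper's actual model; all figures and the function name are hypothetical.

```python
def real_option_value(up_payoff, down_payoff, p_up, discount_rate):
    """One-period real-option value of an architectural change.

    Flexibility means the change is exercised only when its payoff is
    positive, so negative outcomes are floored at zero before the
    expected value is discounted back one period.
    """
    expected = p_up * max(up_payoff, 0.0) + (1 - p_up) * max(down_payoff, 0.0)
    return expected / (1 + discount_rate)

# Hypothetical numbers: a scalability change pays 100 if demand rises
# (60% chance) and -30 if it does not; 10% discount rate per period.
value = real_option_value(100.0, -30.0, 0.6, 0.10)
```

Because the downside branch is never exercised, the option value here is driven entirely by the upside scenario, which is what makes a more flexible middleware-induced architecture more valuable under uncertain future requirements.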
ClouNS - A Cloud-native Application Reference Model for Enterprise Architects
The capability to operate cloud-native applications can generate enormous
business growth and value. But enterprise architects should be aware that
cloud-native applications are vulnerable to vendor lock-in. We investigated
cloud-native application design principles, public cloud service providers, and
industrial cloud standards. All results indicate that most cloud service
categories seem to foster vendor lock-in situations which might be especially
problematic for enterprise architectures. This might sound disillusioning at
first. However, we present a reference model for cloud-native applications that
relies only on a small subset of well standardized IaaS services. The reference
model can be used for codifying cloud technologies. It can guide technology
identification, classification, adoption, research and development processes
for cloud-native applications and for vendor lock-in aware enterprise
architecture engineering methodologies.
Proposal for an IMLS Collection Registry and Metadata Repository
The University of Illinois at Urbana-Champaign proposes to design, implement, and research a collection-level registry and item-level metadata repository service that will aggregate information about digital collections and items of digital content created using funds from Institute of Museum and Library Services (IMLS) National Leadership Grants. This work will be a collaboration by the University Library and the Graduate School of Library and Information Science. All extant digital collections initiated or augmented under IMLS aegis from 1998 through September 30, 2005 will be included in the proposed collection registry. Item-level metadata will be harvested from collections making such content available using the Open Archives Initiative Protocol for Metadata Harvesting (OAI PMH). As part of this work, project personnel, in cooperation with IMLS staff and grantees, will define and document appropriate metadata schemas, help create and maintain collection-level metadata records, assist in implementing OAI compliant metadata provider services for dissemination of item-level metadata records, and research potential benefits and issues associated with these activities. The immediate outcomes of this work will be the practical demonstration of technologies that have the potential to enhance the visibility of IMLS funded online exhibits and digital library collections and improve discoverability of items contained in these resources. Experience gained and research conducted during this project will make clearer both the costs and the potential benefits associated with such services. Metadata provider and harvesting service implementations will be appropriately instrumented (e.g., customized anonymous transaction logs, online questionnaires for targeted user groups, performance monitors). 
At the conclusion of this project we will submit a final report that discusses tasks performed and lessons learned, presents business plans for sustaining registry and repository services, enumerates and summarizes potential benefits of these services, and makes recommendations regarding future implementations of these and related intermediary and end-user interoperability services by IMLS projects.
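The item-level harvesting described above relies on OAI-PMH, in which a harvester issues HTTP requests with verbs such as ListRecords and parses the XML responses. The following sketch parses a trimmed, hypothetical ListRecords response inline rather than fetching one over the network; the sample record contents are invented, but the namespace URIs are those defined by the OAI-PMH and Dublin Core specifications.

```python
import xml.etree.ElementTree as ET

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

# A trimmed (hypothetical) ListRecords response using the oai_dc format.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example.org:item1</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Sample Digital Collection Item</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def harvest_titles(xml_text):
    """Extract item-level dc:title values from a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.findall(".//dc:title", NS)]

titles = harvest_titles(SAMPLE)
```

A real harvester would additionally follow OAI-PMH resumption tokens to page through large result sets; that bookkeeping is omitted here for brevity.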
Master of Puppets: Analyzing And Attacking A Botnet For Fun And Profit
A botnet is a network of compromised machines (bots), under the control of an
attacker. Many of these machines are infected without their owners' knowledge,
and botnets are the driving force behind several misuses and criminal
activities on the Internet (for example, spam emails). Depending on its
topology, a botnet can have zero or more command and control (C&C) servers,
which are centralized machines controlled by the cybercriminal that issue
commands and receive reports back from the co-opted bots.
In this paper, we present a comprehensive analysis of the command and control
infrastructure of one of the world's largest proprietary spamming botnets
between 2007 and 2012: Cutwail/Pushdo. We identify the key functionalities
needed by a spamming botnet to operate effectively. We then develop a number of
attacks against the command and control logic of Cutwail that target those
functionalities, and make the spamming operations of the botnet less effective.
This analysis was made possible by having access to the source code of the C&C
software, as well as setting up our own Cutwail C&C server, and by implementing
a clone of the Cutwail bot. With the help of this tool, we were able to
enumerate the number of bots currently registered with the C&C server,
impersonate an existing bot to report false information to the C&C server, and
manipulate spamming statistics of an arbitrary bot stored in the C&C database.
Furthermore, we were able to make the control server inaccessible by conducting
a distributed denial of service (DDoS) attack. Our results may be used by law
enforcement and practitioners to develop better techniques to mitigate and
cripple other botnets, since many of our findings are generic and stem from the
workflow of C&C communication in general.