Human factors and the WWW: making sense of URLs
We present a study of how WWW users “make sense” of URLs. Experiments were used to investigate users’ capacity to employ the URL as a surrogate for the resource to which it refers. The results show that users can infer useful information from URLs, but that such improvisation has shortcomings as a navigation aid.
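The study is empirical rather than algorithmic, but the notion of a URL acting as a surrogate for its resource can be made concrete with a small sketch that extracts the surface cues (host, path segments, file extension) a reader might interpret; the example URL and field names are ours, not the authors'.

```python
from urllib.parse import urlparse

def url_cues(url: str) -> dict:
    """Decompose a URL into the surface cues a reader might use
    to guess at the resource behind it (illustrative only)."""
    parts = urlparse(url)
    segments = [s for s in parts.path.split("/") if s]
    last = segments[-1] if segments else ""
    return {
        "scheme": parts.scheme,        # protocol hint
        "host": parts.netloc,          # organisation hint
        "path_segments": segments,     # topic / site-structure hints
        "extension": last.rsplit(".", 1)[-1] if "." in last else None,
    }

print(url_cues("http://www.example.ac.uk/research/hci/urls.html"))
# {'scheme': 'http', 'host': 'www.example.ac.uk',
#  'path_segments': ['research', 'hci', 'urls.html'], 'extension': 'html'}
```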
Support for collaborative component-based software engineering
Collaborative system composition during design has been poorly supported by traditional CASE tools, which have usually concentrated on supporting individual projects and have focused almost exclusively on static composition. Little support has been developed for maintaining large distributed collections of heterogeneous software components across a number of projects. The CoDEEDS project addresses the collaborative determination, elaboration, and evolution of design spaces that describe both static and dynamic compositions of software components from sources such as component libraries, software service directories, and reuse repositories. The GENESIS project has focused, in the development of OSCAR, on the creation and maintenance of large software artefact repositories. The most recent extensions explicitly address the provision of cross-project global views of large software collections and historical views of individual artefacts within a collection. The long-term benefits of such support can only be realised if OSCAR and CoDEEDS are widely adopted, and steps to facilitate this are described.
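The abstract does not detail OSCAR's data model. Purely as an illustration of what "cross-project global views" and "historical views of individual artefacts" might look like, the following Python sketch models a minimal artefact store; all class and method names are hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Artefact:
    """A software artefact with its revision history (hypothetical model)."""
    artefact_id: str
    project: str
    revisions: list = field(default_factory=list)  # oldest first

class ArtefactRepository:
    """Minimal sketch of a cross-project artefact store, loosely in
    the spirit of the repository support described above."""
    def __init__(self):
        self._by_project = defaultdict(dict)  # project -> {id: Artefact}

    def commit(self, artefact_id, project, content):
        art = self._by_project[project].setdefault(
            artefact_id, Artefact(artefact_id, project))
        art.revisions.append(content)

    def global_view(self):
        """Cross-project view: every artefact's latest revision."""
        return {(p, a.artefact_id): a.revisions[-1]
                for p, arts in self._by_project.items()
                for a in arts.values()}

    def history(self, project, artefact_id):
        """Historical view of a single artefact within a collection."""
        return self._by_project[project][artefact_id].revisions
```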
This book continues to provide a forum, begun by a recent book, Software Evolution with UML and XML, where expert insights are presented on the subject.
In that book, initial efforts were made to link together three current phenomena: software evolution, UML, and XML. This book focuses on the practical side of linking them, that is, how UML and XML and their related methods and tools can assist software evolution in practice.
Considering that nowadays software starts evolving before it is delivered, a salient feature of software evolution is that it happens across all stages and all aspects.
Therefore, all possible techniques should be explored. This book explores techniques based on UML and XML, and combinations of them with other techniques, ranging from theory to tools.
Software evolution happens at all stages. Chapters in this book show that software evolution issues arise at the stages of software architecting, modeling/specifying, assessing, coding, validating, design recovery, program understanding, and reuse.
Software evolution happens in all aspects. Chapters in this book illustrate that software evolution issues are involved in Web applications, embedded systems, software repositories, component-based development, object models, development environments, software metrics, UML use case diagrams, system models, legacy systems, safety-critical systems, user interfaces, software reuse, evolution management, and variability modeling. Software evolution needs to be facilitated with all possible techniques. Chapters in this book demonstrate techniques such as formal methods, program transformation,
empirical study, tool development, standardisation, and visualisation to control system changes to meet organisational and business objectives in a cost-effective way. On the journey towards the grand challenge posed by software evolution, a journey that we all have to make, the contributing authors of this book have already made significant advances.
Learning Correlations between Linguistic Indicators and Semantic Constraints: Reuse of Context-Dependent Descriptions of Entities
This paper presents the results of a study on the semantic constraints
imposed on lexical choice by certain contextual indicators. We show how such
indicators are computed and how correlations between them and the choice of a
noun phrase description of a named entity can be automatically established
using supervised learning. Based on this correlation, we have developed a
technique for automatic lexical choice of descriptions of entities in text
generation. We discuss the underlying relationship between the pragmatics of
choosing an appropriate description that serves a specific purpose in the
automatically generated text and the semantics of the description itself. We
present our work in the framework of the more general concept of reuse of
linguistic structures that are automatically extracted from large corpora. We
present a formal evaluation of our approach and we conclude with some thoughts
on potential applications of our method.

Comment: 7 pages, uses colacl.sty and acl.bst, uses epsfig. To appear in the Proceedings of the Joint 17th International Conference on Computational Linguistics and 36th Annual Meeting of the Association for Computational Linguistics (COLING-ACL'98).
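The abstract does not give the learning setup in detail. As a rough, hypothetical illustration of casting lexical choice as supervised classification over contextual indicators, here is a toy scikit-learn sketch; the training data, features, and model choice are invented and far simpler than the paper's.

```python
# Toy sketch (not the authors' system): learn which description of a
# named entity fits a context, cast as supervised classification over
# bag-of-words contextual indicators. Data and features are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (context sentence, chosen description of "Bill Clinton")
training = [
    ("the bill was vetoed despite pressure from congress",
     "President Bill Clinton"),
    ("the summit on Bosnia ended without agreement",
     "U.S. President Bill Clinton"),
    ("the Arkansas primary results were announced",
     "former Arkansas governor Bill Clinton"),
]
contexts, descriptions = zip(*training)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(contexts, descriptions)

# Lexical choice for a new context: pick the description whose learned
# correlation with the contextual indicators is strongest.
print(model.predict(["congress debated the bill on Tuesday"])[0])
```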
OGSA first impressions: a case study re-engineering a scientific application with the Open Grid Services Architecture
We present a case study of our experience re-engineering a scientific application using the Open Grid Services Architecture (OGSA), a new specification for developing Grid applications using web service technologies such as WSDL and SOAP. During the last decade, UCL's Chemistry department has developed a computational approach for predicting the crystal structures of small molecules. However, each search involves running large iterations of computationally expensive calculations and currently takes a few months to perform. Making use of early implementations of the OGSA specification, we have wrapped the Fortran binaries into OGSI-compliant service interfaces to expose the existing scientific application as a set of loosely coupled web services. We show how the OGSA implementation facilitates the distribution of such applications across a large network, radically improving performance of the system through parallel CPU capacity, coordinated resource management, and automation of the computational process. We discuss the difficulties that we encountered turning Fortran executables into OGSA services and delivering a robust, scalable system. One unusual aspect of our approach is the way we transfer input and output data for the Fortran codes: instead of employing a file transfer service, we transform the XML-encoded data in the SOAP message to native file format, where possible using XSLT stylesheets. We also discuss a computational workflow service that enables users to distribute and manage parts of the computational process across different clusters and administrative domains. We examine how our experience re-engineering the polymorph prediction application led to this approach and to what extent our efforts have succeeded.
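The XSLT-based data transfer is the most concrete technique in the abstract. The sketch below shows the general pattern of turning an XML payload into a native plain-text input file with a stylesheet, using Python's lxml; the element names and the stylesheet itself are invented, not taken from the UCL system.

```python
# Minimal sketch of the stylesheet-based approach described above:
# XML payload -> native (plain-text) input file via XSLT, using lxml.
from lxml import etree

payload = etree.XML("""
<molecule>
  <atom symbol="C" x="0.0" y="0.0" z="0.0"/>
  <atom symbol="O" x="1.2" y="0.0" z="0.0"/>
</molecule>""")

stylesheet = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/molecule">
    <xsl:for-each select="atom">
      <xsl:value-of select="@symbol"/><xsl:text> </xsl:text>
      <xsl:value-of select="@x"/><xsl:text> </xsl:text>
      <xsl:value-of select="@y"/><xsl:text> </xsl:text>
      <xsl:value-of select="@z"/><xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>""")

native = str(etree.XSLT(stylesheet)(payload))
print(native)  # "C 0.0 0.0 0.0\nO 1.2 0.0 0.0\n"
```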
Citation sentence reuse behavior of scientists: A case study on massive bibliographic text dataset of computer science
Our current knowledge of scholarly plagiarism is largely based on the
similarity between full-text research articles. In this paper, we propose a
novel conceptualization of scholarly plagiarism in the form of
reuse of explicit citation sentences in scientific research articles. Note that
while full-text plagiarism is an indicator of a gross-level behavior, copying
of citation sentences is a more nuanced micro-scale phenomenon observed even
for well-known researchers. The current work poses several interesting
questions and attempts to answer them by empirically investigating a large
bibliographic text dataset from computer science containing millions of lines
of citation sentences. In particular, we report evidence of massive copying
behavior. We also present several striking real examples throughout the paper
to showcase the widespread adoption of this undesirable practice. In contrast to
popular perception, we find that the tendency to copy increases as an author
matures. Copying behavior exists in all fields of computer
science; however, the theoretical fields exhibit more copying than the applied
fields.
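The abstract does not state how reused citation sentences were matched. One standard way to flag near-duplicate sentences, shown below as a hypothetical sketch, is word n-gram shingling with Jaccard similarity; the threshold and example sentences are illustrative only.

```python
# Hypothetical sketch (not the paper's pipeline): flag a reused
# citation sentence via word-trigram shingles and Jaccard similarity.

def shingles(sentence: str, n: int = 3) -> set:
    """Set of word n-grams (shingles) for a sentence."""
    words = sentence.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

s1 = "Blei et al. proposed latent Dirichlet allocation for topic modeling [3]."
s2 = "Blei et al. proposed latent Dirichlet allocation for modeling topics [7]."

sim = jaccard(shingles(s1), shingles(s2))
print(f"similarity = {sim:.2f}, reused = {sim >= 0.5}")
# similarity = 0.50, reused = True
```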
Basis Token Consistency: A Practical Mechanism for Strong Web Cache Consistency
With web caching and cache-related services like CDNs and edge services playing an increasingly significant role in the modern internet, the problem of the weak consistency and coherence provisions in current web protocols is becoming increasingly significant and drawing the attention of the standards community [LCD01]. Toward this end, we present definitions of consistency and coherence for web-like environments, that is, distributed client-server information systems where the semantics of interactions with resources are more general than the read/write operations found in memory hierarchies and distributed file systems. We then present a brief review of proposed mechanisms which strengthen the consistency of caches in the web, focusing upon their conceptual contributions and their weaknesses in real-world practice. These insights motivate a new mechanism, which we call "Basis Token Consistency" or BTC; when implemented at the server, this mechanism allows any client (independent of the presence and conformity of any intermediaries) to maintain a self-consistent view of the server's state. This is accomplished by annotating responses with additional per-resource application information which allows client caches to recognize the obsolescence of currently cached entities and to identify responses from other caches which are already stale in light of what has already been seen. The mechanism requires no deviation from the existing client-server communication model, and does not require servers to maintain any additional per-client state. We discuss how our mechanism could be integrated into a fragment-assembling Content Management System (CMS), and present a simulation-driven performance comparison between the BTC algorithm and the use of the Time-To-Live (TTL) heuristic.

National Science Foundation (ANI-9986397, ANI-0095988)
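The abstract describes BTC's effect rather than its wire format. The following Python sketch is one loose interpretation: responses carry (token, version) annotations, and a client cache discards or evicts any entity whose tokens have been superseded by a newer version already seen. The version-number scheme and all names are our invention, not the BTC specification.

```python
# Loose interpretation of the basis-token idea, as sketched from the
# abstract: responses are annotated with per-resource (token, version)
# pairs; the client keeps the highest version seen per token and treats
# any entity carrying an older version as stale.

class BTCClientCache:
    def __init__(self):
        self.entries = {}   # url -> (body, {token: version})
        self.latest = {}    # highest version seen per token

    def on_response(self, url, body, annotations):
        """Record a response; use its annotations to spot obsolescence."""
        for token, version in annotations.items():
            self.latest[token] = max(self.latest.get(token, 0), version)
        # A response (even one served by an intermediary cache) is stale
        # if any of its tokens is older than a version already seen.
        if any(v < self.latest[t] for t, v in annotations.items()):
            return None  # discard: stale in light of what we've seen
        self.entries[url] = (body, annotations)
        # Evict cached entities obsoleted by the tokens just advanced.
        for u in [u for u, (_, ann) in self.entries.items()
                  if any(v < self.latest[t] for t, v in ann.items())]:
            del self.entries[u]
        return body

cache = BTCClientCache()
cache.on_response("/cart", "2 items", {"cart": 1})
cache.on_response("/cart/total", "$30", {"cart": 2})  # advances 'cart'
print("/cart" in cache.entries)  # False: the old entity is now stale
```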