317 research outputs found
A Model for Managing Information Flow on the World Wide Web
This thesis considers the nature of information management on the World Wide Web. The
web has evolved into a global information system that is completely unregulated, permitting
anyone to publish whatever information they wish. However, this information is almost
entirely unmanaged, which, together with the vast number of users who access it, places
enormous strain on the web's architecture. This has led to the exposure of inherent flaws,
which reduce its effectiveness as an information system.
The thesis presents a thorough analysis of the state of this architecture, and identifies three
flaws that could render the web unusable: link rot; a shrinking namespace; and the inevitable
increase of noise in the system. A critical examination of existing solutions to these flaws is
provided, together with a discussion on why the solutions have not been deployed or adopted.
The thesis determines that they have failed to take into account the nature of the information
flow between information provider and consumer, or the open philosophy of the web. The
overall aim of the research has therefore been to design a new solution to these flaws in the
web, based on a greater understanding of the nature of the information that flows upon it.
The realization of this objective has included the development of a new model for managing
information flow on the web, which is used to develop a solution to the flaws. The solution
comprises three new additions to the web's architecture: a temporal referencing scheme; an
Oracle Server Network for more effective web browsing; and a Resource Locator Service,
which provides automatic transparent resource migration. The thesis describes their design
and operation, and presents the concept of the Request Router, which provides a new way of
integrating such distributed systems into the web's existing architecture without breaking it.
The design of the Resource Locator Service, including the development of new protocols for
resource migration, is covered in great detail, and a prototype system that has been developed
to prove the effectiveness of the design is presented. The design is further validated by
comprehensive performance measurements of the prototype, which show that it will scale to
manage a web whose size is orders of magnitude greater than it is today.
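The abstract does not spell out the resolution protocol, but the core idea of a Resource Locator Service — a registry of migration records that a Request-Router-style component consults so that clients reach a moved resource transparently — can be sketched as follows. This is a minimal illustration under assumed semantics, not the thesis's actual design; the class name, method names, and URLs are all invented for the example.

```python
# Illustrative sketch only: a registry of "resource moved" records, and a
# resolver that follows the migration chain to the resource's current
# location, as a Resource Locator Service might do for link-rotted URLs.

class ResourceLocator:
    def __init__(self):
        # Maps an old URL to the URL the resource migrated to.
        self._moved: dict[str, str] = {}

    def register_migration(self, old_url: str, new_url: str) -> None:
        """Record that a resource has moved from old_url to new_url."""
        self._moved[old_url] = new_url

    def resolve(self, url: str) -> str:
        """Follow migration records transparently to the current location.

        A URL with no migration record is returned unchanged, so unaffected
        requests pass straight through.
        """
        seen = set()
        while url in self._moved:
            if url in seen:
                raise RuntimeError("cyclic migration records for " + url)
            seen.add(url)
            url = self._moved[url]
        return url


# Usage: a resource that has migrated twice still resolves from its old URL.
locator = ResourceLocator()
locator.register_migration("http://old.example/a", "http://mid.example/a")
locator.register_migration("http://mid.example/a", "http://new.example/a")
assert locator.resolve("http://old.example/a") == "http://new.example/a"
```

The cycle check matters in practice: migration records accumulated independently by different servers could otherwise send a resolver into an infinite loop.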
RAICS as advanced cloud backup technology in telecommunication networks
Data loss can have unpredictable and even severe consequences for an enterprise or public authority. Backup strategies counter this risk by unifying the organizational and technical measures necessary for restoring, processing, and transferring data, as well as for securing it against loss, corruption, and tampering. The modern high-performance Internet allows backup functions to be delivered as attractive (mobile) services with a Quality of Service comparable to that of Local Area Networks. One of the most efficient backup strategies is to delegate this functionality to an external provider, i.e. an online or Cloud Storage system. This article argues for intelligently distributing backups over multiple storage providers in addition to using local resources. Examples of Cloud Computing deployment in the USA and the European Union, as well as in Ukraine and the Russian Federation, are introduced to identify the benefits and challenges of distributed backup with Cloud Storage.
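The abstract does not give RAICS's internals, but the general RAID-like idea of intelligently distributing a backup over multiple storage providers can be sketched with simple XOR parity: stripe the data across n-1 providers and store a parity stripe on the n-th, so the outage of any single provider is recoverable. Everything here (function names, stripe layout, sample data) is illustrative, not the article's actual scheme.

```python
# Illustrative sketch: stripe a backup blob across several cloud providers
# with one XOR parity stripe, so that losing any single provider's share
# still allows full recovery (RAID-4-style, applied to cloud storage).
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def stripe(data: bytes, n_providers: int) -> list:
    """Split data into n_providers-1 data stripes plus one XOR parity stripe."""
    k = n_providers - 1
    stripe_len = -(-len(data) // k)                    # ceiling division
    padded = data.ljust(k * stripe_len, b"\0")         # pad to equal stripes
    stripes = [padded[i * stripe_len:(i + 1) * stripe_len] for i in range(k)]
    stripes.append(reduce(xor_bytes, stripes))         # parity stripe
    return stripes

def recover(stripes: list) -> list:
    """Rebuild the single missing stripe (a provider outage) from the rest."""
    lost = stripes.index(None)
    rest = [s for s in stripes if s is not None]
    rebuilt = list(stripes)
    rebuilt[lost] = reduce(xor_bytes, rest)            # XOR of survivors
    return rebuilt

def reassemble(stripes: list, length: int) -> bytes:
    """Concatenate the data stripes and drop the padding."""
    return b"".join(stripes[:-1])[:length]


# Usage: one of four providers goes offline; the backup is still restorable.
backup = b"quarterly customer database dump"
shares = stripe(backup, n_providers=4)
shares[1] = None                                       # simulated outage
assert reassemble(recover(shares), len(backup)) == backup
```

This tolerates one provider failure at the cost of one extra stripe of storage; real deployments trade more parity stripes (or erasure codes) for tolerance of multiple simultaneous outages.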
Service-oriented models for audiovisual content storage
What are the important topics to understand when involved with storage services that hold digital audiovisual content? This report examines how content is created and moves into and out of storage; the storage service value networks and architectures found now and expected in the future; what sort of data transfer is expected to and from an audiovisual archive; which transfer protocols to use; and a summary of security and interface issues.
ImageJ2: ImageJ for the next generation of scientific image data
ImageJ is an image analysis program extensively used in the biological
sciences and beyond. Due to its ease of use, recordable macro language, and
extensible plugin architecture, ImageJ enjoys contributions from
non-programmers, amateur programmers, and professional developers alike.
Enabling such a diversity of contributors has resulted in a large community
that spans the biological and physical sciences. However, a rapidly growing
user base, diverging plugin suites, and technical limitations have revealed a
clear need for a concerted software engineering effort to support emerging
imaging paradigms, to ensure the software's ability to handle the requirements
of modern science. Due to these new and emerging challenges in scientific
imaging, ImageJ is at a critical development crossroads.
We present ImageJ2, a total redesign of ImageJ offering a host of new
functionality. It separates concerns, fully decoupling the data model from the
user interface. It emphasizes integration with external applications to
maximize interoperability. Its robust new plugin framework allows everything
from image formats to scripting languages to visualization to be extended by
the community. The redesigned data model supports arbitrarily large,
N-dimensional datasets, which are increasingly common in modern image
acquisition. Despite the scope of these changes, backwards compatibility is
maintained such that this new functionality can be seamlessly integrated with
the classic ImageJ interface, allowing users and developers to migrate to these
new methods at their own pace. ImageJ2 provides a framework engineered for
flexibility, intended to support these requirements as well as accommodate
future needs.
- …