    A survey of UK university web management: staffing, systems and issues

    Purpose: The purpose of the paper is to summarize the findings of a survey of UK universities about how their web sites are managed and resourced, which technologies are in use, and what are seen as the main issues and priorities. Methodology/approach: The paper is based on a web-based questionnaire distributed in summer 2006, which received 104 usable responses from 87 institutions. Findings: The survey showed that some web teams were based in IT and some in external relations, yet in both cases the site typically served internal and external audiences. The role of web manager is partly management of resources, time and people, partly about marketing and liaison, and partly concerned with more technical aspects including interface design and HTML. But it is a diverse role with a wide spread of responsibilities. On the whole, web teams were relatively small. Three quarters of responding institutions had a CMS, but the specific systems in use were diverse. 60% had a portal. There was evidence of increasing use of blogs and wikis. The key driver for the web site is student recruitment, with institutional reputation and information to stakeholders also being important. The biggest perceived weaknesses were maintaining consistency with devolved content creation and the currency of content; lack of resourcing was seen as a key threat, while comprehensiveness was a key strength. Current and wished-for projects pointed again to the diversity of the sector. Research implications/limitations: The lack of comparative data and the difficulty of interpreting responses to closed questions where respondents could have quite different status (partly reflecting divergent patterns of governance of the web across the sector) create issues with the reliability of the research. Practical implications: Data about the resourcing of web management, technology in use, etc. at comparable institutions is invaluable for practitioners in their efforts to gain resources in their own context. Originality/value of paper: The paper adds more systematic, current data to our limited knowledge about how university web sites are managed.

    Techniques of data prefetching, replication, and consistency in the Internet

    The Internet has become a major infrastructure for information sharing in our daily life, and is indispensable to critical, large-scale applications in industry, government, business, and education. Internet bandwidth (the speed at which the network transfers data) has increased dramatically; however, latency (the delay in physically accessing data) has fallen at a much slower pace. This gap between rich bandwidth and lagging latency can be coped with effectively in Internet systems by three data management techniques: caching, replication, and prefetching. The focus of this dissertation is to address the latency problem on the Internet by utilizing the rich bandwidth and large storage capacity to prefetch data efficiently and thereby significantly improve Web content caching performance, and by proposing and implementing scalable data consistency maintenance methods to handle Web address caching in the Domain Name System (DNS) and massive data replication in peer-to-peer systems. While DNS service is critical to the Internet, peer-to-peer data sharing has also become an important Internet activity.

    We have made three contributions in developing prefetching techniques. First, we have proposed an efficient data structure for maintaining Web access information, called popularity-based Prediction by Partial Matching (PB-PPM), in which data are placed and replaced guided by the popularity of Web accesses, so that only important and useful information is stored. PB-PPM greatly reduces the required storage space and improves prediction accuracy. Second, a major weakness of existing Web servers is that prefetching activities are scheduled independently of dynamically changing server workloads. Without proper control and coordination between the two kinds of activities, prefetching can negatively affect Web services and degrade Web access performance. To address this problem, we have developed a queuing model to characterize the interactions. Guided by the model, we have designed a coordination scheme that dynamically adjusts the prefetching aggressiveness of Web servers. This scheme not only prevents the Web servers from being overloaded, but can also minimize the average server response time. Finally, we have proposed a scheme that effectively coordinates the sharing of access information between proxy and Web servers. With the support of this scheme, the accuracy of prefetching decisions is significantly improved.

    Regarding data consistency support for Internet caching and data replication, we have conducted three significant studies. First, we have developed a consistency support technique to maintain data consistency among replicas in structured P2P networks. We have implemented this scheme on Pastry, an existing and popular P2P system, and show that it can effectively maintain consistency while preventing hot-spot and node-failure problems. Second, we have designed and implemented a DNS cache update protocol, called DNScup, to provide strong consistency for domain/IP mappings. Finally, we have developed a dynamic lease scheme to update Internet replicas in a timely manner.
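
    The abstract does not detail PB-PPM's internals; as a rough illustration of the idea only, the sketch below keeps a Prediction-by-Partial-Matching table whose entries are placed and evicted by access popularity, so that only frequently observed patterns consume space. All class, method, and parameter names are hypothetical, not from the dissertation.

```python
from collections import defaultdict

class PopularityPPM:
    """Sketch of a popularity-guided Prediction-by-Partial-Matching table.

    Contexts (tuples of recently accessed URLs) map to counts of the
    URL observed next; eviction targets the least-popular context, so
    table space is spent only on frequently seen access patterns.
    """

    def __init__(self, order=2, capacity=10000):
        self.order = order        # longest context length tracked
        self.capacity = capacity  # rough bound on stored contexts
        self.counts = defaultdict(lambda: defaultdict(int))
        self.history = []

    def record(self, url):
        # Update successor counts for every context length 1..order.
        for k in range(1, self.order + 1):
            if len(self.history) >= k:
                ctx = tuple(self.history[-k:])
                self.counts[ctx][url] += 1
        self.history.append(url)
        self._evict_if_needed()

    def predict(self):
        # Prefer the longest matching context, falling back to shorter ones.
        for k in range(self.order, 0, -1):
            if len(self.history) < k:
                continue
            ctx = tuple(self.history[-k:])
            if ctx in self.counts:
                successors = self.counts[ctx]
                return max(successors, key=successors.get)
        return None

    def _evict_if_needed(self):
        if len(self.counts) <= self.capacity:
            return
        # Popularity-based replacement: drop the least-popular context.
        coldest = min(self.counts,
                      key=lambda c: sum(self.counts[c].values()))
        del self.counts[coldest]
```

    A server using such a table would call record() on every request and, when lightly loaded, prefetch the URL returned by predict(); the dissertation's workload-aware coordination scheme is where that prefetching aggressiveness would be throttled.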

    Synthesis-Aided Crash Consistency for Storage Systems

    Reliable storage systems must be crash consistent - guaranteed to recover to a consistent state after a crash. Crash consistency is non-trivial, as it requires maintaining complex invariants about persistent data structures in the presence of caching, reordering, and system failures. Current programming models offer little support for implementing crash consistency, forcing storage system developers to roll their own consistency mechanisms. Bugs in these mechanisms can lead to severe data loss for applications that rely on persistent storage. This paper presents a new synthesis-aided programming model for building crash-consistent storage systems. In this approach, storage systems can assume an angelic crash-consistency model, where the underlying storage stack promises to resolve crashes in favor of consistency whenever possible. To realize this model, we introduce a new labeled writes interface for developers to identify their writes to disk, and develop a program synthesis tool, DepSynth, that generates dependency rules to enforce crash consistency over these labeled writes. We evaluate our model in a case study on a production storage system at Amazon Web Services. We find that DepSynth can automate crash consistency for this complex storage system, with similar results to existing expert-written code, and can automatically identify and correct consistency and performance issues.
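
    The paper's labeled-writes interface and DepSynth's synthesized rules are not reproduced in the abstract; the following is a minimal sketch, under my own assumed rule format, of how dependency rules over labeled writes could be enforced when flushing a batch to disk. The labels, rule set, and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LabeledWrite:
    label: str    # developer-assigned label, e.g. "log" or "superblock"
    block: int    # disk block address
    data: bytes

# Assumed dependency rules (first_label, then_label): a write labeled
# then_label must not reach disk while a write labeled first_label from
# the same batch is still outstanding.
RULES = {("data", "commit"), ("log", "superblock")}

def order_for_persistence(writes):
    """Order a batch of labeled writes so every dependency rule holds;
    a synthesis tool like DepSynth would generate RULES automatically."""
    pending, ordered = list(writes), []
    while pending:
        for w in pending:
            # w may go next only if no rule forces another pending
            # write to be persisted before it.
            blocked = any((p.label, w.label) in RULES
                          for p in pending if p is not w)
            if not blocked:
                ordered.append(w)
                pending.remove(w)
                break
        else:
            raise ValueError("cyclic dependency among labeled writes")
    return ordered
```

    For example, a batch containing a "commit" record and its "data" blocks is always reordered so the data blocks persist first, which is the kind of invariant a crash must not be able to violate.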

    AOP-Based Caching of Dynamic Web Content: Experience with J2EE Applications

    Caching dynamic web content is an appealing approach to reducing Internet latency and server load. In aspect-oriented programming, caching is usually presented as an orthogonal aspect that can be automatically integrated into an application. A classic AOP motivating example is adding caching of static data with no underlying consistency. But what about caching dynamic data? In this paper, we explore the feasibility of aspectizing consistent caching of dynamically generated web documents. We use two J2EE web applications to validate our experiments: the TPC-W on-line bookstore and the RUBiS auction site. To the question "Can we consider consistent caching of dynamic web content as a separate aspect that could be transparently and efficiently integrated into a dynamic web application?", our conclusions are the following: (a) Just as in the classic AOP caching example with no consistency management, AOP provides a modular way to add caching with a strong consistency policy. (b) However, maintaining strong consistency on web pages results in prohibitively expensive run-time processing, and thus any straightforward implementation in AOP is too slow. We propose an optimization that essentially eliminates all the run-time overhead in practice. (c) Furthermore, we identify instances where consistent web caching may not be orthogonal to J2EE applications, especially those that rely on sophisticated web techniques (e.g., cookies). In summary, adding caching supporting strong consistency using AOP turned out to be an unexpected challenge.
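
    The paper itself works with AOP on J2EE; purely as a cross-language analogue of treating consistent caching as a separable concern, here is a Python decorator sketch that caches rendered pages, records which data entities each page reads, and evicts dependent pages on every update. All names are mine, not the paper's.

```python
import functools

_cache = {}   # page key -> rendered content
_deps = {}    # data entity -> set of page keys derived from it

def cached_page(*entities):
    """Decorator analogue of a caching aspect: wraps a page-rendering
    function and records the data entities each cached page depends on."""
    def wrap(render):
        @functools.wraps(render)
        def inner(key, *args, **kwargs):
            if key in _cache:
                return _cache[key]
            page = render(key, *args, **kwargs)
            _cache[key] = page
            for entity in entities:
                _deps.setdefault(entity, set()).add(key)
            return page
        return inner
    return wrap

def invalidate(entity):
    """Strong consistency: when an entity changes, evict every page
    generated from it before any further read is served."""
    for key in _deps.pop(entity, set()):
        _cache.pop(key, None)

@cached_page("book_table")
def render_book_page(key, book_id):
    return f"<html>book {book_id}</html>"  # stand-in for real rendering
```

    The sketch makes visible where per-request dependency bookkeeping accrues; the run-time overhead the paper measures comes from maintaining strong consistency along exactly such paths.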

    Care of burns in Scotland: 3-year data from the managed clinical network national registry

    Introduction: The Managed Clinical Network for Care of Burns in Scotland (COBIS) was launched in April 2007. Its primary aims included establishing and maintaining a registry of complex burn injury in Scotland and setting up mechanisms to regularly audit the outcome of burn treatment against nationally agreed standards of care. On behalf of COBIS, we present 3-year incidence and mortality data for Scottish patients admitted with a complex burn injury. Methods: From January 2010 onwards, data were prospectively collected for all patients in Scotland with complex burn injury admitted to Scottish burns units. Data collection was initially on a paper pro forma but subsequently evolved into a web-based audit data capture system that securely links the hospital sites involved in the delivery of care for complex burns. Data collected included the extent and mechanism of burn, presence of airway burn or smoke inhalation injury, comorbidities, complications, length of stay, interventions, and mortality. The quality, completeness, and consistency of data collection are audited, with feedback to the individual units. Results: In a population of approximately 5.3 million, the annual incidence of complex burn injury is 499 to 537 cases (9 to 10 per 100,000). Major burns account for 5% of burn admissions. Hospital mortality from a burn is 1 to 2.2% (see Table 1: numbers of complex burns in Scotland, 2010 to 2012). Conclusion: From these data, Scotland now has comprehensive national figures for complex burn injury. This allows benchmarking against other international indices, few of which provide comprehensive data. COBIS data can now also be correlated with other mortality data sources. As data quality improves, detailed analysis of mortality data will allow COBIS to identify contributing issues affecting burns patients. One issue already identified is that patients with burns often die soon after discharge from hospital of related and unrelated causes; subsequent analysis will allow COBIS to identify and address issues that may be contributing to these statistics.

    Maintaining Integrity Constraints in Semantic Web

    As an expressive knowledge representation language for the Semantic Web, the Web Ontology Language (OWL) plays an important role in areas like science and commerce. The problem of maintaining integrity constraints arises because OWL employs the Open World Assumption (OWA) as well as the Non-Unique Name Assumption (NUNA). These assumptions are well suited to representing knowledge distributed across the Web, where complete knowledge about a domain cannot be assumed, but they make it challenging to use OWL itself for closed-world integrity constraint validation. Integrity constraints (ICs) on ontologies have to be enforced; otherwise conflicting results would be derivable from the same knowledge base (KB). Current approaches to incorporating ICs into OWL are based on its query language SPARQL, on alternative semantics, or on logic programming. These methods usually suffer from limits on the types of constraints they can handle and/or from inherent computational expense. This dissertation presents a comprehensive and efficient approach to maintaining integrity constraints. The design enforces data consistency throughout the OWL life cycle, including the processes of OWL generation, maintenance, and interaction with other ontologies. For OWL generation, the Paraconsistent model is used to maintain integrity constraints during the translation from relational databases to OWL. A new rule-based language with set extensions is then introduced as a platform that allows users to specify constraints, along with a demonstration of 18 commonly used constraints written in this language. In addition, a new constraint maintenance system, called Jena2Drools, is proposed and implemented to show its effectiveness and efficiency. To further handle inconsistencies among multiple distributed ontologies, this work constructs a framework that breaks global constraints down into several sub-constraints for efficient parallel validation.
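
    As a toy illustration of closed-world constraint validation (unrelated to Jena2Drools' actual syntax, and with invented facts), the sketch below checks an "exactly one value" constraint over a small triple store, a violation that OWL's open-world semantics alone cannot report:

```python
# Facts are (subject, predicate, object) triples; under the OWA a
# missing hasSSN value is merely unknown, but an integrity constraint
# treats the knowledge base as complete and flags it.
triples = {
    ("alice", "rdf:type", "Person"),
    ("alice", "hasSSN", "123-45-6789"),
    ("bob", "rdf:type", "Person"),
}

def values(subject, predicate):
    return [o for s, p, o in triples if s == subject and p == predicate]

def check_exactly_one(klass, predicate):
    """Report every instance of klass lacking exactly one predicate value."""
    return [s for s, p, o in triples
            if p == "rdf:type" and o == klass
            and len(values(s, predicate)) != 1]

print(check_exactly_one("Person", "hasSSN"))   # -> ['bob']
```

    A rule-based language like the dissertation's would express such a constraint declaratively and hand enforcement to a rule engine such as Drools rather than to ad hoc code.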

    Basis Token Consistency: A Practical Mechanism for Strong Web Cache Consistency

    With web caching and cache-related services like CDNs and edge services playing an increasingly significant role in the modern Internet, the problem of the weak consistency and coherence provisions in current web protocols is drawing growing attention from the standards community [LCD01]. Toward this end, we present definitions of consistency and coherence for web-like environments, that is, distributed client-server information systems where the semantics of interactions with resources are more general than the read/write operations found in memory hierarchies and distributed file systems. We then present a brief review of proposed mechanisms that strengthen the consistency of caches in the web, focusing on their conceptual contributions and their weaknesses in real-world practice. These insights motivate a new mechanism, which we call "Basis Token Consistency" (BTC); when implemented at the server, this mechanism allows any client (independent of the presence and conformity of any intermediaries) to maintain a self-consistent view of the server's state. This is accomplished by annotating responses with additional per-resource application information that allows client caches to recognize the obsolescence of currently cached entities and to identify responses from other caches that are already stale in light of what has been seen. The mechanism requires no deviation from the existing client-server communication model and does not require servers to maintain any additional per-client state. We discuss how our mechanism could be integrated into a fragment-assembling Content Management System (CMS), and present a simulation-driven performance comparison between the BTC algorithm and the Time-To-Live (TTL) heuristic. This work was supported by the National Science Foundation (ANI-9986397, ANI-0095988).
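
    The exact annotation format is defined in the paper; the following is only a schematic sketch of the client-side bookkeeping, assuming each response carries (basis object, version) tokens. The class and method names are invented for illustration.

```python
class ClientCache:
    """Toy basis-token cache: responses are annotated with the versions
    of the server-side objects they were generated from, letting the
    client detect cached entries made obsolete by newer responses."""

    def __init__(self):
        self.entries = {}   # url -> (body, {basis object: version})
        self.latest = {}    # basis object -> highest version seen

    def on_response(self, url, body, tokens):
        # Remember the newest version seen for each basis object.
        for basis, version in tokens.items():
            self.latest[basis] = max(version, self.latest.get(basis, version))
        self.entries[url] = (body, tokens)
        self._drop_obsolete()

    def _drop_obsolete(self):
        # An entry built from an older version of any basis object than
        # one we have since observed is stale, regardless of its TTL.
        stale = [u for u, (_, toks) in self.entries.items()
                 if any(self.latest[b] > v for b, v in toks.items())]
        for u in stale:
            del self.entries[u]

cache = ClientCache()
cache.on_response("/home", "<old home>", {"news": 1})
cache.on_response("/news", "<new item>", {"news": 2})  # "/home" evicted
```

    Note that staleness is detected purely from information the server attaches to responses, so the server keeps no per-client state, matching the design constraint stated above.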

    Utilization of the Semantic Web Rule Language (SWRL) in the Development of a Prototype Travel Package Planning System for South Sumatra

    The Semantic Web is a technology in the field of information systems that uses an ontology to represent the knowledge base of a problem domain. The ontology model, expressed in OWL, is equipped with a set of SWRL rules to maintain the consistency of information arising from the relationships between objects in the OWL model. The model stores knowledge about the main tourist attractions of South Sumatra, and the study focuses on the use of SWRL for planning package tours there. The method integrates secondary data obtained from relevant agencies, such as the Tourism Office of South Sumatra Province and the Central Bureau of Statistics of South Sumatra, together with South Sumatra travel maps, and draws on previous studies of the semantic web and of travel package planning. The research produced a prototype semantic-web-based travel package planning system for South Sumatra. The first step was to build an ontology model as a representation of the knowledge base; the model is, in effect, a graph of the attractions in South Sumatra, formed by combining the secondary data with the South Sumatran map. The main objects in the ontology are the attractions, together with relationships among them such as territorial aspects and the territorial status of cities. To keep the data and the relationships between objects consistent, rules were expressed in SWRL. SWRL not only maintains the consistency of data and relationships among objects in the ontology model, but also drives the inference process for planning package tours in South Sumatra. During inference, SWRL is combined with JSP to compute tour costs, because SWRL's numerical computation abilities are limited.
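
    As a small illustration of the kind of rule involved (the actual ontology and SWRL rules are not given in the abstract; the facts and rule here are invented), a territorial-containment rule such as locatedIn(?x,?y) ^ locatedIn(?y,?z) -> locatedIn(?x,?z) can be mimicked in Python:

```python
# Illustrative facts about South Sumatran attractions (not from the paper).
facts = {
    ("Ampera Bridge", "locatedIn", "Palembang"),
    ("Palembang", "locatedIn", "South Sumatra"),
}

def infer_transitive(kb, pred="locatedIn"):
    """Apply the transitivity rule to a fixed point, as an SWRL engine
    would when classifying attractions by territory."""
    changed = True
    while changed:
        changed = False
        for (x, p1, y) in list(kb):
            for (y2, p2, z) in list(kb):
                if p1 == p2 == pred and y == y2 and (x, pred, z) not in kb:
                    kb.add((x, pred, z))
                    changed = True
    return kb

print(infer_transitive(set(facts)))
# includes ('Ampera Bridge', 'locatedIn', 'South Sumatra')
```

    Numeric work such as summing tour costs sits outside what SWRL built-ins handle comfortably, which is why the prototype delegates that computation to JSP.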
