25 research outputs found

    Digital Preservation: Handling Large Collections Case Study: Digitizing Egyptian Press Archive at Centre for Economic, Judicial, and Social Study and Documentation (CEDEJ)

    Get PDF
    Managing the digitization of large collections is a considerable challenge, not only in terms of quantity but also in terms of text and material quality, designing the workflow system that organizes the operations, and handling metadata. This has been the focus of the Bibliotheca Alexandrina during its partnership with the Centre for Economic, Judicial, and Social Study and Documentation (CEDEJ) to digitize more than 800,000 pages of press articles dating back to 1976. The project required a workflow able to manage such a massive collection efficiently, with work proceeding on four main aspects in parallel: data analysis, developing a digitization workflow, implementing and installing the necessary software tools for metadata entry, and publishing the digital archive. This paper demonstrates the workflow system implemented to manage this massive press collection, which has yielded more than 400,000 items to date. It illustrates the BA's Digital Assets Factory (DAF), the nucleus of the digitization process, and the tools and stages implemented for ingesting data into the system. The outflow is also discussed in terms of organizing and grouping multipart press clips, as well as reviewing and validating the output. The paper also discusses the challenges of associating the accessible online archive with a powerful search engine supporting multidimensional search.
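    The staged workflow described above (ingestion into the DAF, metadata entry, review and validation, then publishing) can be pictured with a small sketch; the stage names, Item structure and functions below are illustrative assumptions, not the DAF's actual interface:

        # Illustrative sketch only: an item moving through ingest, metadata entry,
        # review and publishing stages. The Item structure and stage names are
        # assumptions, not the DAF's actual API.
        from dataclasses import dataclass, field

        @dataclass
        class Item:
            identifier: str
            pages: int
            metadata: dict = field(default_factory=dict)
            stage: str = "ingested"

        def enter_metadata(item: Item, title: str, date: str) -> Item:
            # Metadata entry: catalogers describe the scanned press clip.
            item.metadata.update({"title": title, "date": date})
            item.stage = "cataloged"
            return item

        def review(item: Item) -> Item:
            # Review/validation: reject items with missing required fields.
            item.stage = "approved" if {"title", "date"} <= item.metadata.keys() else "rejected"
            return item

        def publish(item: Item) -> Item:
            # Publishing: only approved items reach the online archive.
            if item.stage == "approved":
                item.stage = "published"
            return item

        clip = Item(identifier="cedej-000001", pages=3)
        clip = publish(review(enter_metadata(clip, "Press clip", "1976-05-01")))
        print(clip.stage)  # published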

    Magnesium in type 2 diabetes mellitus and its correlation with glycemic control

    Get PDF
    Background: Hypomagnesaemia may have a negative impact on glucose homeostasis and insulin sensitivity. This study was done to compare serum Mg levels in type 2 diabetic patients with non-diabetic healthy control subjects and to assess the correlation between serum Mg levels and glycemic control in Egyptian patients. Methods: 60 type 2 diabetic patients attending the outpatient diabetes clinic at Kasr Al Aini hospital, Faculty of Medicine, Cairo University, and 30 healthy age-matched control subjects were enrolled. Fasting blood sugar, fasting insulin, fasting lipids, HbA1c and serum Mg were measured. Weight, height and blood pressure were recorded, and BMI and insulin resistance (HOMA-IR) were calculated. The data were analyzed and expressed as mean ± SD. Pearson correlation was performed to establish the relationship between Mg and metabolic variables in type 2 diabetic patients. Results: Serum Mg levels were significantly reduced in type 2 diabetic patients compared to the control group (mean ± SD 1.29 ± 0.31 mg/dl versus 2.41 ± 0.13 mg/dl, P < 0.001). There were highly significant negative correlations between serum Mg levels and HbA1c, fasting glucose and insulin resistance (r = -0.969, -0.894 and -0.653 respectively, P < 0.001). ROC curve analysis gave a best cut-off point for Mg of ≤ 2.0 mg/dl in differentiating cases from controls. Conclusion: Hypomagnesaemia is closely linked to type 2 diabetes mellitus and is strongly correlated with glycemic control. We recommend measuring serum Mg in type 2 diabetic patients and considering supplementation for those who need it.
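    The insulin resistance index and correlations mentioned above follow standard formulas that the abstract does not spell out; as a rough sketch, HOMA-IR and a Pearson correlation could be computed as below (the sample numbers are invented placeholders, not data from the study):

        # Illustrative only: the standard HOMA-IR formula and a Pearson correlation.
        # The values below are invented placeholders, not data from the study.
        from statistics import correlation  # Pearson's r, Python 3.10+

        def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uU_ml: float) -> float:
            # HOMA-IR = (fasting glucose [mg/dl] x fasting insulin [uU/ml]) / 405
            return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405

        serum_mg = [1.1, 1.3, 1.5, 1.7, 1.9]           # hypothetical serum Mg (mg/dl)
        hba1c    = [9.5, 8.8, 8.1, 7.4, 6.9]           # hypothetical HbA1c (%)

        print(round(homa_ir(126.0, 12.0), 2))          # 3.73
        print(round(correlation(serum_mg, hba1c), 3))  # negative r, as in the study's findings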

    HARP: A Hierarchical Asynchronous Replication Protocol for Massively Replicated Systems

    No full text
    This paper presents a new asynchronous replication protocol that is especially suitable for wide area and mobile systems, and allows reads and writes to occur at any replica. Updates reach other replicas using a propagation scheme based on nodes organized into a logical hierarchy. The hierarchical structure enables the scheme to scale well to thousands of replicas, while ensuring reliable delivery. A new service interface is proposed that provides different levels of asynchrony, allowing strong consistency and weak consistency to be integrated into the same framework. Further, due to the hierarchical pattern of propagation, the scheme provides the ability to locate replicas that are more up-to-date than others, depending upon the needs of various applications. It also allows a selection from a number of reconciliation techniques based on delivery order mechanisms. Restructuring operations are provided to build and reconfigure the hierarchy dynamically without disturbing normal operation.
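    The propagation scheme is described only at a high level; a minimal sketch of update diffusion over a logical hierarchy is given below, where a replica forwards an update it has not yet seen to its parent and children. The Node class, naming and flooding rule are assumptions for illustration, not the protocol as specified in the paper:

        # Illustrative sketch: asynchronous update diffusion over a logical hierarchy.
        # A node that receives an update from one neighbor forwards it to its other
        # neighbors (parent and children), so the update eventually reaches every
        # replica. This is an assumption-level model, not HARP itself.

        class Node:
            def __init__(self, name):
                self.name = name
                self.parent = None
                self.children = []
                self.seen = set()   # update ids already applied (suppresses duplicates)
                self.log = []       # applied updates, in arrival order

            def attach(self, child):
                child.parent = self
                self.children.append(child)

            def receive(self, update_id, payload, sender=None):
                if update_id in self.seen:
                    return
                self.seen.add(update_id)
                self.log.append(payload)
                for neighbor in [self.parent, *self.children]:
                    if neighbor is not None and neighbor is not sender:
                        neighbor.receive(update_id, payload, sender=self)

        root = Node("root")
        a, b = Node("a"), Node("b")
        leaf = Node("a1")
        root.attach(a); root.attach(b); a.attach(leaf)

        leaf.receive("u1", "write issued at replica a1")  # writes may start at any replica
        print([n.log for n in (root, a, b, leaf)])        # the update reaches all replicas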

    HPP: A Hierarchical Propagation Protocol for Large Scale Replication in Wide Area Networks

    No full text
    This paper describes a fast, reliable, scalable and efficient propagation protocol for weak-consistency replica management. The protocol can be used to implement a bulletin board service such as Usenet news on the Internet. It is based on organizing the nodes in a network into a logical hierarchy and maintaining a limited amount of state information at each node. It ensures that messages are not lost due to failures or partitions and are delivered once these are repaired. Further, the protocol allows messages to be diffused while nodes are down, provided the parent and child nodes of a failed node are alive. Moreover, it sends no redundant messages under normal conditions and, by maintaining minimal state information, it minimizes redundancy in a novel manner when failures occur. Keywords: distributed databases, large networks, replication, weak consistency, propagation protocol, reliability, scalability.
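    The failure-bypass behavior described above (diffusing messages past a down node through that node's own parent and children) can be sketched roughly as follows; the topology, Node class and is_up flag are illustrative assumptions rather than the HPP wire protocol:

        # Illustrative sketch: hierarchical diffusion that bypasses a failed node by
        # forwarding directly to that node's parent and children, as long as those
        # neighbors of the failure are alive. Not the HPP wire protocol itself.

        class Node:
            def __init__(self, name, is_up=True):
                self.name, self.is_up = name, is_up
                self.parent, self.children = None, []
                self.delivered = set()

            def attach(self, child):
                child.parent = self
                self.children.append(child)

            def neighbors(self):
                return [n for n in [self.parent, *self.children] if n is not None]

            def send(self, msg_id, target):
                if target.is_up:
                    target.receive(msg_id, sender=self)
                else:
                    # Bypass: diffuse through the failed node's own neighbors.
                    for n in target.neighbors():
                        if n is not self and n.is_up:
                            n.receive(msg_id, sender=target)

            def receive(self, msg_id, sender=None):
                if msg_id in self.delivered:
                    return
                self.delivered.add(msg_id)
                for n in self.neighbors():
                    if n is not sender:
                        self.send(msg_id, n)

        root, a, b, a1 = Node("root"), Node("a", is_up=False), Node("b"), Node("a1")
        root.attach(a); root.attach(b); a.attach(a1)

        root.receive("m1")            # a message originates at the root
        print("m1" in a1.delivered)   # True: a1 still gets it although its parent a is down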

    Maintaining Causal Order in Large Scale Distributed Systems Using a Logical Hierarchy

    No full text
    The paper presents a simple and efficient protocol that supports the exchange of messages among a set of nodes while preserving the causal ordering of message exchange. It is designed for a large-scale replicated system where a message is sent to every replica in the network. A desirable characteristic of the protocol is that it imposes little space overhead on each message. This is achieved by using a propagation algorithm based on nodes organized in a logical hierarchy, where each node sends and receives messages from only a few nodes. Therefore, a node only needs to keep track of messages received from those nodes and stamp messages with this information in order to verify the causal ordering. This small timestamp size results in reduced communication overhead and increased performance and scalability of the system. The protocol is fully asynchronous, and the burden of propagation is evenly distributed among the nodes, which improves system performance.
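    A minimal sketch of why small timestamps suffice when a node talks only to its hierarchy neighbors: it can deliver messages from each neighbor in per-neighbor sequence order and buffer early arrivals. This per-neighbor scheme is an illustrative assumption, not the paper's exact timestamping algorithm:

        # Illustrative sketch: causal delivery using only a per-neighbor sequence
        # counter, because each node exchanges messages with a few neighbors only.
        # Messages that arrive ahead of sequence are buffered until their turn.
        from collections import defaultdict

        class CausalReceiver:
            def __init__(self):
                self.next_expected = defaultdict(lambda: 1)   # per-neighbor counter
                self.buffered = defaultdict(dict)             # neighbor -> {seq: payload}
                self.delivered = []

            def receive(self, neighbor, seq, payload):
                if seq == self.next_expected[neighbor]:
                    self.delivered.append((neighbor, seq, payload))
                    self.next_expected[neighbor] += 1
                    # Flush any buffered messages that are now in order.
                    while self.next_expected[neighbor] in self.buffered[neighbor]:
                        nxt = self.next_expected[neighbor]
                        self.delivered.append((neighbor, nxt, self.buffered[neighbor].pop(nxt)))
                        self.next_expected[neighbor] += 1
                else:
                    self.buffered[neighbor][seq] = payload    # arrived too early; hold it

        r = CausalReceiver()
        r.receive("parent", 2, "second update")   # buffered: seq 1 not seen yet
        r.receive("parent", 1, "first update")    # delivers 1, then flushes 2
        print([m[2] for m in r.delivered])        # ['first update', 'second update']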

    DAR: Institutional Repository Integration in Action

    No full text
    The Digital Assets Repository (DAR) is a system developed at the Bibliotheca Alexandrina to manage the full lifecycle of a digital asset: its creation and ingestion, its metadata management, storage and archival, in addition to the necessary mechanisms for publishing and dissemination. In its third release, the system architecture has been revamped into a modular design built from best-of-breed components, in addition to defining a flexible content model for digital objects based on current standards and focusing on integrating DAR with different sources and applications. The goal of this paper is to demonstrate the building blocks of DAR as an example of a modern repository, and to discuss the challenges that face an institution in consolidating its assets and DAR's answers to these challenges.
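    A minimal sketch of what a flexible content model for a digital object might look like (the field names and the metadata/datastream split below are illustrative assumptions, not DAR's actual schema):

        # Illustrative sketch: a digital object bundling descriptive metadata with
        # one or more content files ("datastreams") and a lifecycle state from
        # ingestion through publication. Field names are assumptions, not DAR's schema.
        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class Datastream:
            label: str          # e.g. "master TIFF", "access JPEG", "OCR text"
            mime_type: str
            location: str       # storage path or URI

        @dataclass
        class DigitalObject:
            identifier: str
            metadata: Dict[str, str] = field(default_factory=dict)      # Dublin Core-like fields
            datastreams: List[Datastream] = field(default_factory=list)
            state: str = "ingested"                                     # ingested -> archived -> published

            def publish(self) -> str:
                # Publishing requires at least a title and one content file.
                if "title" in self.metadata and self.datastreams:
                    self.state = "published"
                return self.state

        obj = DigitalObject("dar:0001", metadata={"title": "Sample press clip"})
        obj.datastreams.append(Datastream("access JPEG", "image/jpeg", "/storage/dar/0001/access.jpg"))
        print(obj.publish())   # published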