Nanotechnologies in Cultural Heritage - Materials and Instruments for Diagnosis and Treatment
This chapter evaluates nanomaterials that can be used for the diagnosis, conservation, and restoration of artifacts and monuments, and that can help solve the problems arising during their weathering. Nanotechnology, as a new and revolutionary area of science, can improve the traditional methods currently used for restoration and preservation in cultural heritage, and can contribute to the creation of new, highly specialized methods for the diagnosis and treatment of artifacts and even monuments. With their smaller size, higher penetrability, and distinctive viscosity, thermal, and magnetic properties compared with traditional materials, nanomaterials can help solve problems deriving from specific phenomena that may appear during an intervention, and can help identify newly formed products in the treated materials. In this chapter, aspects of the nanomaterials used for the conservation and restoration of stone and paper artifacts are presented and discussed.
Transactional failure recovery for a distributed key-value store
With the advent of cloud computing, many applications have embraced the ensuing paradigm shift towards modern distributed key-value data stores, like HBase, in order to benefit from the elastic scalability on offer. However, many applications still hesitate to make the leap from the traditional relational database model simply because they cannot compromise on the standard transactional guarantees of atomicity, isolation, and durability. To get the best of both worlds, one option is to integrate an independent transaction management component with a distributed key-value store. In this paper, we discuss the implications of this approach for durability. In particular, if the transaction manager provides durability (e.g., through logging), then we can relax durability constraints in the key-value store. However, if a component fails (e.g., a client or a key-value server), then we need a coordinated recovery procedure to ensure that commits are persisted correctly. In our research, we integrate an independent transaction manager with HBase. Our main contribution is a failure recovery middleware for the integrated system, which tracks the progress of each commit as it is flushed down by the client and persisted within HBase, so that we can recover reliably from failures. During recovery, commits that were interrupted by the failure are replayed from the transaction management log. Importantly, the recovery process does not interrupt transaction processing on the available servers. Using a benchmark, we evaluate the impact of component failure, and subsequent recovery, on application performance.
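The recovery idea described in this abstract — the transaction manager logs each commit durably, and after a failure any commit not fully persisted in the key-value store is replayed from that log — can be sketched with a minimal in-memory model. All names here (TxnLog, KVStore, recover) are illustrative stand-ins, not the system's actual API:

```python
class TxnLog:
    """The transaction manager's durable commit log (in-memory stand-in)."""
    def __init__(self):
        self.entries = []          # (txn_id, writes) appended at commit time

    def append(self, txn_id, writes):
        self.entries.append((txn_id, writes))


class KVStore:
    """A key-value store with relaxed durability; tracks fully applied txns."""
    def __init__(self):
        self.data = {}
        self.applied = set()       # txn_ids whose writes were fully persisted

    def persist(self, txn_id, writes):
        self.data.update(writes)
        self.applied.add(txn_id)


def recover(log, store):
    """Replay any logged commit that the store never fully persisted."""
    for txn_id, writes in log.entries:
        if txn_id not in store.applied:
            store.persist(txn_id, writes)


log, store = TxnLog(), KVStore()
log.append(1, {"a": 1})
store.persist(1, {"a": 1})         # txn 1: committed and persisted normally
log.append(2, {"b": 2})            # txn 2: committed, but the client crashed
recover(log, store)                # replays txn 2 only; txn 1 is untouched
print(store.data)                  # {'a': 1, 'b': 2}
```

Note that recovery only touches commits missing from the store, which is what lets transaction processing continue on the available servers while interrupted commits are replayed.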
Arquitetura de elevada disponibilidade para bases de dados na cloud (High-availability architecture for cloud databases)
Master's dissertation in Computer Science. With the constant expansion of computational systems across application areas, the amount of data that requires durability increases exponentially. All persisted data must be replicated in order to provide high availability and fault tolerance appropriate to the target application or use case.
Currently, there are numerous replication approaches and protocols supporting different use cases. There are two prominent families of replication protocols: generic protocols, for any service, and database-specific ones. The two main techniques associated with generic replication are active and passive replication. Although these techniques are fully mature and widely used, they have inherent problems, namely performance issues from saturation of the primary replica in passive replication and the determinism required by active replication. Some of these disadvantages are mitigated by database-specific replication protocols (e.g., using multi-master), but those protocols allow no separation between replication logic and data and cannot be decoupled from the database engine. Moreover, recent strategies consider highly scalable, fault-tolerant distributed logging mechanisms, allowing newer designs based purely on logs to power replication.
To mitigate the shortcomings found in both the active and passive replication mechanisms, as well as in their variations, this dissertation presents a hybrid replication middleware, SQLware. The cornerstone of the approach lies in decoupling the logical replication layer from the data store, together with the use of a highly scalable distributed log that provides fault tolerance and high availability. We validated the prototype by conducting a benchmarking campaign to evaluate overall system performance on two distinct infrastructures, namely a private medium-class server and a private high-performance computing cluster. Throughout the evaluation, we used the TPC-C benchmark, widely employed in the evaluation of online transaction processing (OLTP) database systems. Results show that SQLware achieved 150 times more throughput than the native replication mechanism of the underlying data store considered as baseline, PostgreSQL.
This work was partially funded by FCT - Fundação para a Ciência e a Tecnologia, I.P. (Portuguese Foundation for Science and Technology) within project UID/EEA/50014/201
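The decoupling this abstract describes — a replication layer that writes only to a shared, highly available log, with each replica consuming and applying that log independently — can be illustrated with a minimal in-memory sketch. The names below (SharedLog, Replica, catch_up) are hypothetical, not SQLware's actual interfaces, and the list stands in for a real distributed log service:

```python
class SharedLog:
    """Append-only shared log; a stand-in for a distributed log service."""
    def __init__(self):
        self.records = []

    def append(self, record):
        self.records.append(record)


class Replica:
    """Applies log records in order, tracking its own consume offset."""
    def __init__(self, log):
        self.log = log
        self.offset = 0            # position of the next record to apply
        self.state = {}            # this replica's copy of the data

    def catch_up(self):
        while self.offset < len(self.log.records):
            key, value = self.log.records[self.offset]
            self.state[key] = value
            self.offset += 1


log = SharedLog()
r1, r2 = Replica(log), Replica(log)
log.append(("x", 1))               # writers touch only the log, never replicas
log.append(("y", 2))
r1.catch_up()                      # each replica pulls and applies on its own
r2.catch_up()
print(r1.state == r2.state)        # True: both converge to the same state
```

Because replicas share nothing but the log, adding a replica or recovering a failed one reduces to replaying the log from its last offset — which is the property that makes the log, rather than any primary replica, the source of availability.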
(Portuguese Foundation for Science and Technology) within project UID/EEA/50014/201
In Situ Deacidification of Vernacular Wallpaper
This thesis involved testing common proprietary deacidification sprays on vernacular wallpaper in situ. Inexpensive, mass-produced wallpaper is commonly overlooked by many conservators, whose efforts are more often directed toward the higher quality wallpapers that hang in the homes of historic luminaries. Cheaper wallpaper is just as relevant as these upscale counterparts, yet its materiality makes it more ephemeral and, therefore, in need of preservation efforts. Vernacular wallpaper was first produced in the middle of the 19th century, when wood pulp was introduced to the manufacturing process. A cheap alternative to cotton rags, wood pulp drove down the cost of production, making a traditionally expensive product available to nearly all Americans. The presence of wood pulp, however, also causes the wallpaper to deteriorate more quickly as the result of a higher acid content. Deacidification is a conservation method that was developed during the mid-20th century to preserve deteriorating library collections. By neutralizing the acids present in paper and providing an alkaline reserve to protect against future acids, deacidification is believed to prolong the lifespan of wood pulp paper. Although paper conservators usually treat wallpaper in a laboratory setting, there are cases in which its removal from the wall may be deemed inappropriate. Professional laboratory conservation may also be prohibitively expensive for smaller, low budget house museums that often include vernacular wallpaper. Proprietary deacidification products were therefore chosen for testing. Vernacular wallpaper was provided by The Lower East Side Tenement Museum in New York City, where thousands of working class immigrants lived and worked between the 1860s and 1930s. The museum founders discovered the building in the 1980s with its residential floors exactly as they had been left when the building was condemned in 1935. 
The multiple layers of deteriorating wallpaper that remained in situ have become a defining feature of the museum and are preserved as integral architectural finishes. For this reason, and because most of the wallpaper is extremely brittle, in situ treatment is preferred. Three spray products were tested on the wallpaper at the Tenement Museum. To be considered successful, these products were required to neutralize wallpaper samples without significantly altering their appearance. One product performed successfully on most samples; another achieved the highest pH measurements but also caused the most visual change; and the third was ineffective and inconsistent. Further research on the long-term effects of deacidification is necessary before any product can be recommended for use.
The Johnsonian May 4, 1956
The Johnsonian is the weekly student newspaper of Winthrop University. It is published during fall and spring semesters with the exception of university holidays and exam periods. We have proudly served the Winthrop and Rock Hill community since 1923.
Transactions and data management in NoSQL cloud databases
NoSQL databases have become the preferred option for storing and processing data in cloud computing, as they are capable of providing high data availability, scalability, and efficiency. But in order to achieve these attributes, NoSQL databases make certain trade-offs. First, NoSQL databases cannot guarantee strong consistency of data; they guarantee only a weaker consistency based on the eventual consistency model. Second, NoSQL databases adopt a simple data model, which makes it easy for data to be scaled across multiple nodes. Third, NoSQL databases do not support table joins or referential integrity, which by implication means they cannot implement complex queries. The combination of these factors implies that NoSQL databases cannot support transactions. Motivated by these crucial issues, this thesis investigates transactions and data management in NoSQL databases.
It presents a novel approach that implements transactional support for NoSQL databases in order to ensure stronger data consistency and provide an appropriate level of performance. The novelty lies in the design of a Multi-Key transaction model that guarantees the standard properties of transactions in order to ensure stronger consistency and integrity of data. The model is implemented in a novel loosely coupled architecture that separates the implementation of transactional logic from the underlying data, thus ensuring transparency and abstraction in cloud and NoSQL databases. The proposed approach is validated through the development of a prototype system using the real MongoDB system. An extended version of the standard Yahoo! Cloud Serving Benchmark (YCSB) was used to test and evaluate the proposed approach. Various experiments were conducted and sets of results generated. The results show that the proposed approach meets the research objectives: it maintains stronger consistency of cloud data as well as an appropriate level of reliability and performance.
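The shape of a multi-key transaction layer sitting above a store that natively offers only single-key operations can be sketched as follows. This is a generic optimistic-concurrency illustration, not the thesis's actual Multi-Key model or API; all class and method names are hypothetical:

```python
class Store:
    """The underlying NoSQL-style store: plain key-value data, no txns."""
    def __init__(self):
        self.data = {}


class MultiKeyTxn:
    """Buffers writes across keys and applies them atomically on commit."""
    def __init__(self, store):
        self.store = store
        self.reads = {}            # key -> value observed at read time
        self.writes = {}           # key -> new value, staged until commit

    def read(self, key):
        value = self.store.data.get(key)
        self.reads[key] = value
        return value

    def write(self, key, value):
        self.writes[key] = value   # nothing reaches the store yet

    def commit(self):
        # Optimistic validation: abort if any key we read has since changed.
        for key, seen in self.reads.items():
            if self.store.data.get(key) != seen:
                return False       # conflict: no write is applied
        self.store.data.update(self.writes)   # all writes land together
        return True


store = Store()
store.data["a"] = 1
txn = MultiKeyTxn(store)
txn.write("a", txn.read("a") + 1)  # read-modify-write on one key
txn.write("b", 10)                 # plus a write to a second key
print(txn.commit(), store.data)    # True {'a': 2, 'b': 10}
```

The point of the sketch is the loose coupling: the transactional logic lives entirely in the layer above, so the store itself needs no changes — mirroring the separation of transactional logic from the underlying data that the abstract describes.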
Special Libraries, January 1940
Volume 31, Issue 1
CREATING A $40 MILLION COMPANY BASED ON DISPERSION MODELLING
Starting in 1974, the author grew Trinity Consultants, Inc. from a one-person firm to one employing over 270 staff in 22 offices scattered across the United States, plus an additional office in the People's Republic of China. Trinity became the leading US firm in the field of air quality consulting and compliance. The foundation of the firm was in the field of dispersion modelling, and the author taught a course, which he continually revised, entitled "Fundamentals of Dispersion Modeling" more than 200 times in the United States and in more than a dozen countries on five continents. He also authored a 400-page textbook on this subject with D. Bruce Turner in 2007. This paper describes three selected aspects of this experience, which spanned a third of a century until a controlling interest in Trinity Consultants was sold to a private equity firm in November 2007. The three areas described are: a) an assessment by the author of the human qualities that foster success as an entrepreneur, b) the stages of growth of a professional services firm, and c) the process of ownership transfer and the financial engineering that was involved.