DNA-Liposome Hybrid Carriers for Triggered Cargo Release
The design of simple and versatile synthetic routes to accomplish triggered-release properties in carriers is of particular interest for drug delivery purposes. In this context, the programmability and adaptability of DNA nanoarchitectures in combination with liposomes have great potential to render biocompatible hybrid carriers for triggered cargo release. We present an approach to form a DNA mesh on large unilamellar liposomes incorporating a stimuli-responsive DNA building block. Upon incubation with a single-stranded DNA trigger sequence, a hairpin closes and the DNA building block self-contracts. We demonstrate the actuation of this building block by single-molecule Förster resonance energy transfer (FRET), fluorescence recovery after photobleaching, and fluorescence quenching measurements. By triggering this process, we demonstrate the elevated release of the dye calcein from the DNA-liposome hybrid carriers. Interestingly, the incubation of the doxorubicin-laden active hybrid carrier with HEK293T cells suggests increased cytotoxicity relative to a control carrier without the triggered-release mechanism. In the future, the trigger could be provided by peritumoral nucleic acid sequences, leading to site-selective release of encapsulated chemotherapeutics.
Concurrency control in distributed database systems
Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network. The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration, and a detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs.
XCleaner: A new method for clustering XML documents by structure
With the vastly growing data resources on the Internet, XML has become one of the most important standards for document management. Not only does it enhance document exchange and storage, but it is also helpful in a variety of information retrieval tasks. Document clustering is one of the most interesting research areas that exploit the semi-structured nature of XML. In this paper, we put forward a new XML clustering algorithm that relies solely on document structure. We propose the use of maximal frequent subtrees and an operator called Satisf/Violate to divide documents into groups. The algorithm is experimentally evaluated on real and synthetic data sets with promising results.
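The abstract gives no algorithmic detail, so the following is only a minimal sketch of the general idea of clustering XML documents by structure: it uses root-to-node tag paths as a simplified stand-in for the paper's maximal frequent subtrees, and a satisfy/violate-style test to assign documents to structural patterns. All names and the pattern representation are illustrative assumptions, not the authors' method.

# Hypothetical sketch: group XML documents by shared structural patterns.
# Tag paths stand in for the paper's maximal frequent subtrees.
import xml.etree.ElementTree as ET
from collections import defaultdict

def tag_paths(xml_text):
    """Return the set of root-to-node tag paths of an XML document."""
    root = ET.fromstring(xml_text)
    paths = set()

    def walk(node, prefix):
        path = prefix + "/" + node.tag
        paths.add(path)
        for child in node:
            walk(child, path)

    walk(root, "")
    return paths

def satisfies(doc_paths, pattern_paths):
    """A document 'satisfies' a pattern if it contains all of the pattern's paths."""
    return pattern_paths <= doc_paths

def cluster_by_pattern(docs, patterns):
    """Assign each document to the first pattern it satisfies ('violates' otherwise)."""
    clusters = defaultdict(list)
    for name, xml_text in docs.items():
        paths = tag_paths(xml_text)
        label = next((i for i, p in enumerate(patterns) if satisfies(paths, p)), "violates")
        clusters[label].append(name)
    return dict(clusters)

if __name__ == "__main__":
    docs = {
        "a.xml": "<book><title/><author/></book>",
        "b.xml": "<book><title/><price/></book>",
        "c.xml": "<article><title/><abstract/></article>",
    }
    patterns = [{"/book", "/book/title"}, {"/article", "/article/title"}]
    print(cluster_by_pattern(docs, patterns))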
Transaction mechanisms in complex business processes
The importance of systems based on the SOA architecture continues to grow. At the same time, despite the existence of many specifications that allow for the coordination of business processes in the SOA environment, there is still a lack of solutions that make system-level transaction-processing mechanisms available. Such solutions should simplify the construction of business processes without limiting their capabilities. This article presents a Transaction Coordinator environment, designed and implemented in response to these needs.
Formal model of time point-based sequential data for OLAP-like analysis
Numerous modern applications generate huge data sets whose natural feature is order, e.g., sensor installations, RFID devices, workflow systems, Website monitors, and health care applications. By analyzing such data and their order dependencies one can acquire new knowledge. However, current commercial BI technologies and research prototypes support mostly the analysis of set-oriented data, neglecting their order (sequential) dependencies. Few approaches to analyzing data of a sequential nature have been proposed so far, and all of them lack a comprehensive data model able to represent and analyze sequential dependencies. In this paper, we propose a formal model for time point-based sequential data. The main elements of this model are an event and a sequence of events. Measures are associated with events and sequences, and they are analyzed in the context set up by dimensions, in an OLAP-like manner, by means of a set of operations. The operations in our model are categorized as operations on sequences, operations on dimensions, general operations, and analytical functions.
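To make the model's ingredients concrete, here is a minimal sketch, under assumed (not paper-defined) names, of events carrying measures and dimension attributes, sequences ordering those events, one operation on sequences (restriction by a dimension value), and one analytical function (aggregation of a measure in the context of a dimension). It is an illustration of the idea, not the formal model itself.

# Hypothetical sketch of the core elements: events with measures and dimensions,
# sequences of events, and an OLAP-like aggregation over a dimension context.
from dataclasses import dataclass, field
from collections import defaultdict
from typing import Any

@dataclass
class Event:
    timestamp: int                      # time point of the event
    dimensions: dict                    # e.g. {"sensor": "S1", "location": "hall"}
    measures: dict                      # e.g. {"temperature": 21.5}

@dataclass
class Sequence:
    events: list = field(default_factory=list)

    def restrict(self, dim, value):
        """Operation on a sequence: keep only events matching a dimension value."""
        return Sequence([e for e in self.events if e.dimensions.get(dim) == value])

    def aggregate(self, measure, by_dim):
        """Analytical function: average a measure grouped by a dimension."""
        sums, counts = defaultdict(float), defaultdict(int)
        for e in self.events:
            key = e.dimensions.get(by_dim)
            sums[key] += e.measures[measure]
            counts[key] += 1
        return {k: sums[k] / counts[k] for k in sums}

if __name__ == "__main__":
    seq = Sequence([
        Event(1, {"sensor": "S1"}, {"temperature": 21.0}),
        Event(2, {"sensor": "S1"}, {"temperature": 23.0}),
        Event(3, {"sensor": "S2"}, {"temperature": 19.0}),
    ])
    print(seq.aggregate("temperature", by_dim="sensor"))  # {'S1': 22.0, 'S2': 19.0}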
Just-In-Time Data Distribution for Analytical Query Processing
Distributed processing commonly requires data to be spread across machines using a priori static or hash-based allocation. In this paper, we explore an alternative approach that starts from a master node in control of the complete database and a variable number of worker nodes for delegated query processing. Data is shipped just-in-time to the worker nodes using a need-to-know policy and is reused, if possible, in subsequent queries. A bidding mechanism among the workers yields a schedule with the most efficient reuse of previously shipped data, minimizing data transfer costs. Just-in-time data shipment allows our system to exploit locally available idle resources to boost overall performance. The system is maintenance-free, and allocation is fully transparent to users. Our experiments show that the proposed adaptive distributed architecture is a viable and flexible alternative for small-scale MapReduce-type settings.
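A minimal sketch of the bidding idea described above, under assumed names and a simplified cost model: each worker bids the volume of required data it does not yet cache, the master picks the cheapest bid, and only missing partitions are shipped, so later queries naturally reuse previously shipped data. This is an illustration of the scheduling principle, not the system's actual implementation.

# Hypothetical sketch: just-in-time shipping with a bidding mechanism.
class Worker:
    def __init__(self, name):
        self.name = name
        self.cache = {}                       # partition id -> data already shipped

    def bid(self, needed, sizes):
        """Transfer cost = total size of required partitions not yet cached."""
        return sum(sizes[p] for p in needed if p not in self.cache)

    def receive(self, partition, data):
        self.cache[partition] = data          # keep for reuse by later queries

class Master:
    def __init__(self, database, sizes, workers):
        self.database = database              # master holds the complete database
        self.sizes = sizes
        self.workers = workers

    def schedule(self, query_partitions):
        """Ship missing partitions just-in-time to the lowest-bidding worker."""
        winner = min(self.workers, key=lambda w: w.bid(query_partitions, self.sizes))
        for p in query_partitions:
            if p not in winner.cache:
                winner.receive(p, self.database[p])
        return winner.name

if __name__ == "__main__":
    db = {"p1": "...", "p2": "...", "p3": "..."}
    sizes = {"p1": 10, "p2": 40, "p3": 5}
    workers = [Worker("w1"), Worker("w2")]
    master = Master(db, sizes, workers)
    print(master.schedule({"p1", "p2"}))      # first query: caches are empty
    print(master.schedule({"p1", "p3"}))      # reuse favours the worker already caching p1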