163 research outputs found
SCOOTER: A compact and scalable dynamic labeling scheme for XML updates
Although dynamic labeling schemes for XML have been the
focus of recent research activity, significant challenges remain to be overcome. In particular, although there are labeling schemes that ensure a compact label representation when an XML document is created, when the document is subject to repeated and arbitrary deletions and insertions the labels grow rapidly, with a significant impact on query and update performance. We review the outstanding issues to date and propose SCOOTER, a new dynamic labeling scheme for XML. The new scheme completely avoids relabeling existing labels and, in particular, handles frequent skewed insertions gracefully. Theoretical analysis and experimental results confirm the scalability, compact representation, efficient growth rate and performance of SCOOTER in comparison to existing dynamic labeling schemes.
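As a toy illustration of the relabeling-free property the abstract describes, the sketch below generates order keys strictly between existing string labels, so arbitrary insertions never force neighbouring labels to change. It uses a generic fractional-midpoint encoding over decimal digits, an assumption made here for illustration; it is not SCOOTER's actual quaternary code.

```python
from fractions import Fraction

def _as_fraction(label: str) -> Fraction:
    """Interpret a digit string as the fraction 0.<digits>."""
    return Fraction(int(label), 10 ** len(label))

def between(left: str, right: str) -> str:
    """Return a label that sorts strictly between left and right."""
    lo, hi = _as_fraction(left), _as_fraction(right)
    assert lo < hi, "left must sort before right"
    mid = (lo + hi) / 2
    k = max(len(left), len(right)) + 1         # the midpoint fits in k digits
    digits = str(int(mid * 10 ** k)).rjust(k, "0")
    return digits.rstrip("0")                  # no trailing zeros => lexicographic order matches numeric order

# Skewed insertion pattern: repeatedly insert just after the smallest label.
labels = ["1", "9"]
for _ in range(5):
    labels.insert(1, between(labels[0], labels[1]))
assert labels == sorted(labels)                # order holds; no label was rewritten
```

The labels grow in length under sustained skewed insertion (the problem compact schemes like SCOOTER are designed to bound), but existing labels are never touched.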
MFPA: Mixed-Signal Field Programmable Array for Energy-Aware Compressive Signal Processing
Compressive Sensing (CS) is a signal processing technique that reduces the number of samples taken per frame in order to decrease energy, storage, and data transmission overheads, as well as the data acquisition time in time-critical applications. The tradeoff in such an approach is the increased complexity of signal reconstruction. While several algorithms have been developed for CS signal reconstruction, hardware implementation of these algorithms is still an area of active research. Prior work has sought to exploit the parallelism available in reconstruction algorithms to minimize hardware overheads; however, such approaches are limited by the underlying limitations of CMOS technology. Herein, the MFPA (Mixed-signal Field Programmable Array) approach is presented as a hybrid spin-CMOS reconfigurable fabric specifically designed for the implementation of CS data sampling and signal reconstruction. The resulting fabric consists of 1) slice-organized analog blocks providing amplifiers, transistors, capacitors, and Magnetic Tunnel Junctions (MTJs), which are configurable to perform the square/square-root operations required for calculating vector norms; 2) digital functional blocks featuring 6-input clockless lookup tables for computation of the matrix inverse; and 3) an MRAM-based nonvolatile crossbar array for carrying out low-energy matrix-vector multiplication operations. The various functional blocks are connected via a global interconnect and spin-based analog-to-digital converters. Simulation results demonstrate significant energy and area benefits compared to equivalent CMOS digital implementations for each of the functional blocks used: an 80% reduction in energy and a 97% reduction in transistor count for the nonvolatile crossbar array, an 80% standby power reduction and a 25% smaller area footprint for the clockless lookup tables, and a roughly 97% reduction in transistor count for a multiplier built from components of the analog blocks. Moreover, the proposed fabric yields a 77% energy reduction compared to CMOS when used to implement CS reconstruction, in addition to latency improvements.
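The reconstruction workload such fabrics accelerate can be sketched in software. Below is a minimal Orthogonal Matching Pursuit (OMP) implementation, one common CS recovery algorithm built from the correlation and least-squares primitives the abstract mentions; the tiny hand-built sensing matrix is an illustration, not MFPA's configuration.

```python
import numpy as np

def omp(Phi: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Greedy recovery of a k-sparse x from measurements y = Phi @ x."""
    n = Phi.shape[1]
    residual = y.astype(float)
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # column best matching the residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef          # re-fit on the current support
    x = np.zeros(n)
    x[support] = coef
    return x

# Tiny deterministic sensing matrix: four "spike" columns plus two mixed columns.
Phi = np.column_stack([np.eye(4),
                       np.array([1, 1, 1, 1]) / 2.0,
                       np.array([1, -1, 1, -1]) / 2.0])
y = np.array([2.0, 0.0, 3.0, 0.0])                     # equals 2*col0 + 3*col2
x_hat = omp(Phi, y, k=2)
assert np.allclose(x_hat, [2, 0, 3, 0, 0, 0])
```

In hardware terms, the `Phi.T @ residual` step is the matrix-vector product the crossbar array targets, and the norm-like comparisons map to the analog square/square-root blocks.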
Adaptive Brain Stimulation for Movement Disorders
Deep brain stimulation (DBS) has markedly changed how we treat movement disorders including Parkinson's disease (PD), dystonia, and essential tremor (ET). However, despite its demonstrable clinical benefit, DBS is often limited by side effects and partial efficacy. These limitations may be due in part to the fact that DBS interferes with both pathological and physiological neural activity. DBS could therefore potentially be improved were it applied selectively, only at times of enhanced pathological activity. This form of stimulation is known as closed-loop or adaptive DBS (aDBS). An aDBS approach has been shown to be superior to conventional DBS in PD, in primates using cortical neuronal spike triggering and in humans using local field potential biomarkers. Likewise, aDBS studies for essential and Parkinsonian tremor are advancing and show great promise, using either peripheral or central sensing and stimulation. aDBS has not yet been trialed in dystonia, yet exciting and promising biomarkers suggest it could be beneficial there too. In this chapter, we review the existing literature on aDBS in movement disorders and explore potential biomarkers and stimulation algorithms for applying aDBS in PD, ET, and dystonia.
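The closed-loop principle the chapter describes, stimulating only during periods of enhanced pathological activity, can be sketched as a simple threshold-and-ramp policy. Everything here (the biomarker, threshold, and ramp rate) is an illustrative placeholder, not a clinical algorithm from the literature reviewed.

```python
def adaptive_stim(biomarker, threshold, ramp=0.25):
    """Per-sample stimulation amplitude in [0, 1]: ramp up while the
    biomarker (e.g. beta-band LFP power) exceeds threshold, down otherwise."""
    amp, out = 0.0, []
    for b in biomarker:
        # ramped (rather than on/off) transitions avoid abrupt stimulation changes
        amp = min(1.0, amp + ramp) if b > threshold else max(0.0, amp - ramp)
        out.append(amp)
    return out
```

Conventional DBS corresponds to holding the amplitude constant regardless of the biomarker; the adaptive policy delivers stimulation only around epochs of elevated pathological activity.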
Secure and Efficient Models for Retrieving Data from Encrypted Databases in Cloud
Recently, database users have begun to use cloud database services to outsource their databases, owing to the high computation speed and huge storage capacity that cloud owners provide at low prices. However, despite the attractiveness of the cloud computing environment to database users, privacy issues remain a cause for concern for database owners, since data access is out of their control. Encryption is the only way of assuaging users’ fears surrounding data privacy, but executing Structured Query Language (SQL) queries over encrypted data is a challenging task, especially if the data are encrypted by a randomized encryption algorithm. Many researchers have addressed the privacy issues by encrypting the data using deterministic, onion-layer, or homomorphic encryption. Nevertheless, even with these systems the encrypted data can still be subjected to attack. In this research, we first propose an indexing scheme that encodes the original table’s tuples into bit vectors (BVs) prior to encryption. The resulting index is then used to narrow the range of encrypted records retrieved from the cloud to a small set of candidate records for the user’s query. Based on the indexing scheme, we then design three different models to execute SQL queries over the encrypted data, where the data are encrypted by a single randomized encryption algorithm, namely the Advanced Encryption Standard in CBC mode (AES-CBC). Each proposed scheme uses a different (secure) method for storing and maintaining the index values (BVs), either at the user’s side or at the cloud server, and each system is extended to support most relational algebra operators, such as select and join. Implementation and evaluation of the proposed systems reveal that they are practical and efficient at reducing both the computation and space overhead when compared with state-of-the-art systems such as CryptDB.
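The prefiltering role of such a BV index can be sketched as follows. Bucketing an attribute's domain and marking each tuple's bucket with one bit is a common range-indexing technique used here for illustration; it is an assumption, not the paper's exact encoding, and the bucket boundaries are hypothetical.

```python
# Hypothetical bucket lower bounds for a salary-in-thousands attribute.
BUCKETS = [0, 30, 60, 90]

def encode(value: int) -> int:
    """Return a one-hot bit vector (as an int) marking the value's bucket."""
    bucket = max(i for i, lo in enumerate(BUCKETS) if value >= lo)
    return 1 << bucket

def candidates(rows, lo, hi):
    """Indices of rows whose bucket overlaps the query range [lo, hi].
    These are the only encrypted records the cloud needs to return."""
    mask = 0
    for i in range(len(BUCKETS)):
        b_lo = BUCKETS[i]
        b_hi = BUCKETS[i + 1] - 1 if i + 1 < len(BUCKETS) else float("inf")
        if b_lo <= hi and b_hi >= lo:
            mask |= 1 << i
    return [i for i, bv in enumerate(rows) if bv & mask]

rows = [encode(v) for v in [12, 45, 70, 95]]   # BVs stored alongside AES-CBC ciphertexts
assert candidates(rows, 40, 65) == [1, 2]      # only two candidate rows fetched
```

The server never sees plaintext values, only coarse bucket bits; the client decrypts the candidate set and applies the exact predicate locally.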
On Implementation of Robust Autotuning of Transmission Electron Microscopes
Practice shows that current implementations of automatic tuning of transmission electron microscopes suffer from unsatisfactory robustness, which seriously limits their applicability. The paper presents a software architecture that provides a framework for the realization of a real-time automatic tuning system with improved robustness. First, transmission electron microscope tuning is characterized as a general measuring/modelling process, and the consequences of the improvement in robustness are identified in this context. It is concluded that both extending the models of image formation of the electron microscope in qualitative and heuristic directions and continuous model validation under sophisticated control are necessary for coping with these problems. A two-layer software architecture is then presented which helps satisfy the above requirements to a considerable extent: the lower layer contains the conventional and symbolic data/image processing components (with data/control interfaces), while the upper layer, using a knowledge-based approach extensively, realizes the higher-level control based on the partial results of the lower-level processing. (Hence, the upper level is responsible for robustness in a system-wide sense.) The main subsystems of the autotuning software are shown, and a short survey of the hardware background is also given. A summary closes the paper.
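A minimal sketch of the two-layer idea: the lower layer exposes its processing stages as callable steps, while the upper, knowledge-based layer validates each partial result and triggers retries when validation fails. The step names, validation rules, and retry policy are all hypothetical, used only to illustrate the division of responsibility.

```python
def run_autotune(steps, rules, max_retries=2):
    """steps: list of (name, fn) lower-layer stages, each fn taking the
    results so far; rules: name -> validator used by the upper layer."""
    results = {}
    for name, fn in steps:
        for _ in range(max_retries + 1):
            value = fn(results)                 # lower layer: conventional processing
            if rules[name](value):              # upper layer: validate partial result
                results[name] = value
                break
        else:
            raise RuntimeError(f"tuning step '{name}' failed validation")
    return results

# Illustration: a flaky measurement that only passes validation on the retry.
calls = {"n": 0}
def measure_focus(partial):
    calls["n"] += 1
    return calls["n"]                           # returns 1 first, then 2

result = run_autotune([("focus", measure_focus)], {"focus": lambda v: v >= 2})
```

Keeping validation and retry policy in the upper layer is what makes robustness a system-wide concern rather than a property of any single processing component.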
A Two-Level Dynamic Chrono-Scheduling Algorithm
We propose a dynamic instruction scheduler that does not need any kind of wakeup logic: all instructions are “programmed” at the issue stage to be executed in pre-calculated cycles. The scheduler is composed of two similar levels, each consisting of simple “stations” where the timing information is recorded. The first level is aimed at the group of instructions whose timing information cannot be calculated at issue (for example, instructions whose latency is not predictable). The second level contains simple “stations” for the instructions whose execution and write-back cycles have already been calculated. The key idea of this scheduler is to extract and record all possible information about the future execution of an instruction during its issue, so as not to look for this information again and again during wait stages at the reservation stations. An additional advantage is that time-critical parts can be identified because instruction timing information is available, so high-speed, high-frequency logic can be used only in these parts, while the rest of the scheduler can work at lower frequencies and therefore consume much less power. The lack of wakeup logic and CAM (Content Addressable Memory) means that power consumption and latencies would presumably be reduced and frequency could probably be made higher, while CPI (clock Cycles Per Instruction) would remain approximately the same.
Ministerio de Educación y Ciencia TIN2006-15617-C03-03; Junta de Andalucía P06-TIC-0229
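The core idea, pre-calculating an instruction's execution and write-back cycles at issue so that no wakeup/CAM search is needed later, can be sketched as a simple software timing model. The register names and latencies are illustrative, and the model covers only the predictable-latency case handled by the second level.

```python
def schedule(instrs, issue_cycle=0):
    """instrs: list of (dest, srcs, latency) in program order.
    Returns dest -> write-back cycle, computed once at 'issue'."""
    ready = {}                                  # register -> cycle its value is ready
    wb = {}
    for dest, srcs, latency in instrs:
        # An instruction can start once all its sources are ready; this is
        # known at issue, so no later wakeup broadcast is required.
        start = max([issue_cycle] + [ready.get(s, issue_cycle) for s in srcs])
        ready[dest] = wb[dest] = start + latency
    return wb

timing = schedule([("r1", [], 3),               # 3-cycle op producing r1
                   ("r2", ["r1"], 1),           # waits for r1
                   ("r3", ["r1", "r2"], 2)])    # waits for both
assert timing == {"r1": 3, "r2": 4, "r3": 6}
```

Instructions with unpredictable latency (cache misses, for instance) cannot be placed this way at issue, which is exactly why the proposal keeps a separate first level for them.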
Distributed databases
Module 3 of the book Database Architecture. UOC, 2012. 2022/202
Clustering and Community Detection with Imbalanced Clusters
Spectral clustering methods, which are frequently used in clustering and community detection applications, are sensitive to the specific graph construction, particularly when imbalanced clusters are present. We show that the ratio cut (RCut) and normalized cut (NCut) objectives are not tailored to imbalanced cluster sizes, since they tend to emphasize cut sizes over cut values. We propose a graph partitioning problem that seeks minimum-cut partitions under minimum size constraints on partitions to deal with imbalanced cluster sizes. Our approach parameterizes a family of graphs by adaptively modulating node degrees on a fixed node set, yielding a set of parameter-dependent cuts reflecting varying levels of imbalance. The solution to our problem is then obtained by optimizing over these parameters. We present rigorous limit-cut analysis results to justify our approach and demonstrate the superiority of our method through experiments on synthetic and real datasets for data clustering, semi-supervised learning and community detection.
Comment: Extended version of arXiv:1309.2303 with new applications. Accepted to IEEE TSIP.
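The observation that RCut emphasizes cut sizes over cut values can be seen on a toy graph: two partitions with the same cut value receive different RCut scores, and the balanced one is favored even when the small side is the natural community. The graph below is a hand-built illustration, not one of the paper's benchmarks.

```python
def cut(edges, side_a):
    """Number of edges crossing the partition (side_a, complement)."""
    return sum(1 for u, v in edges if (u in side_a) != (v in side_a))

def rcut(edges, side_a, nodes):
    """Ratio cut: cut value divided by each side's size, summed."""
    side_b = nodes - side_a
    c = cut(edges, side_a)
    return c / len(side_a) + c / len(side_b)

nodes = set(range(8))
# A small 2-node community {0, 1} attached to a denser 6-node community.
edges = [(0, 1),                                # inside the small community
         (0, 2), (1, 2), (1, 3),                # bridges to the big community
         (2, 3), (3, 4), (2, 4), (3, 5),        # big community (near the boundary)
         (4, 5), (5, 6), (6, 7), (4, 6), (5, 7), (4, 7)]

# Both partitions sever exactly 3 edges, yet RCut prefers the balanced one.
assert cut(edges, {0, 1}) == cut(edges, {0, 1, 2, 3}) == 3
assert rcut(edges, {0, 1, 2, 3}, nodes) < rcut(edges, {0, 1}, nodes)
```

The size-constrained formulation in the abstract addresses exactly this: it compares cut values only among partitions meeting a minimum size, rather than letting the size normalization dominate.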