Streamed Sampling on Dynamic data as Support for Classification Model
Data mining on dynamically changing data presents several problems, such as unknown data size and changing class distribution. Random sampling is commonly applied to extract a general synopsis from a very large database. In this research, Vitter's reservoir algorithm is used to retrieve k records from the database and place them in the sample, which is then used as input for the classification task in data mining. The sample is a backing sample, saved as a table containing id, priority and timestamp values; the priority indicates how long a record is likely to be retained in the sample. Kullback-Leibler divergence is applied to measure the similarity between the database and sample class distributions. The results show that samples can be taken randomly and continuously as transactions occur. Kullback-Leibler divergence in the interval from 0 to 0.0001 is a very good measure for maintaining a similar class distribution between database and sample, so sample results stay up to date under new transactions while preserving that distribution. A classifier built from a balanced class distribution is shown to perform better than one built from an imbalanced distribution.
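The two ingredients above can be sketched together; this is a minimal sketch using the basic Algorithm R (the abstract's Vitter algorithm is an optimized variant of it) and a hypothetical stream of labelled records, not the paper's actual data:

```python
import math
import random

def reservoir_sample(stream, k, rng=random.Random(0)):
    """Reservoir sampling (Algorithm R): keep a uniform random
    sample of k records from a stream of unknown length."""
    sample = []
    for i, record in enumerate(stream):
        if i < k:
            sample.append(record)
        else:
            j = rng.randint(0, i)   # each record survives with prob k/(i+1)
            if j < k:
                sample[j] = record
    return sample

def kl_divergence(p, q):
    """KL divergence between two discrete class distributions,
    given as dicts mapping class label to probability."""
    return sum(p[c] * math.log(p[c] / q[c]) for c in p if p[c] > 0)

def class_dist(records):
    counts = {}
    for _, label in records:
        counts[label] = counts.get(label, 0) + 1
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

# Hypothetical stream of (id, class) records with a 20/80 split.
stream = [("rec%d" % i, "pos" if i % 5 == 0 else "neg") for i in range(10000)]
sample = reservoir_sample(stream, 500)
print(kl_divergence(class_dist(stream), class_dist(sample)))
```

A small divergence here indicates the sample's class distribution tracks the database's, which is the check the paper applies after each batch of transactions.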
Compressing DNA sequence databases with coil
Background: Publicly available DNA sequence databases such as GenBank are large, and are
growing at an exponential rate. The sheer volume of data being dealt with presents serious storage
and data communications problems. Currently, sequence data is usually kept in large "flat files,"
which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which
rarely achieves good compression ratios. While much research has been done on compressing
individual DNA sequences, surprisingly little has focused on the compression of entire databases
of such sequences. In this study we introduce the sequence database compression software coil.
Results: We have designed and implemented a portable software package, coil, for compressing
and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared
towards achieving high compression ratios at the expense of execution time and memory usage
during compression – the compression time represents a "one-off investment" whose cost is
quickly amortised if the resulting compressed file is transmitted many times. Decompression
requires little memory and is extremely fast. We demonstrate a 5% improvement in compression
ratio over state-of-the-art general-purpose compression tools for a large GenBank database file
containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental
additions to a sequence database.
Conclusion: coil presents a compelling alternative to conventional compression of flat files for the
storage and distribution of DNA sequence databases having a narrow distribution of sequence
lengths, such as EST data. Increasing compression levels for databases having a wide distribution of
sequence lengths is a direction for future work.
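coil's edit-tree coding is not detailed in the abstract, but the baseline it must beat is easy to state: nucleotide text packs trivially into 2 bits per base, a bound that byte-oriented Lempel-Ziv tools often miss on FASTA-style input. A minimal sketch of that naive packing (not coil's method):

```python
def pack_2bit(seq):
    """Pack an A/C/G/T string into bytes at 2 bits per base,
    the trivial bound general-purpose compressors often miss."""
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = bytearray()
    acc, nbits = 0, 0
    for base in seq:
        acc = (acc << 2) | code[base]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:                       # flush a partial final byte
        out.append(acc << (8 - nbits))
    return bytes(out), len(seq)     # length is needed to unpack exactly

def unpack_2bit(packed, n):
    bases = "ACGT"
    seq = []
    for i in range(n):
        shift = 6 - 2 * (i % 4)
        seq.append(bases[(packed[i // 4] >> shift) & 3])
    return "".join(seq)

seq = "ACGTACGTGGCA"
packed, n = pack_2bit(seq)
assert unpack_2bit(packed, n) == seq
assert len(packed) == 3             # 12 bases * 2 bits = 24 bits
```

coil's gains come from going below this bound by encoding each sequence as a short list of edits against a similar sequence in the database, which this sketch does not attempt.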
An Enhanced CART Algorithm for Preserving Privacy of Distributed Data and Providing Access Control over Tree Data
Nowadays the use of distributed applications is increasing rapidly because such applications can serve more than one client at a time. In distributed databases, data distribution and management is a key area of interest. Owing to the privacy of their data, organizations are unwilling to participate in data mining for fear of data leakage, so data must be collected from the different parties in a secure way. This paper presents how the CART algorithm can be used by multiple parties in a vertically partitioned environment. To address the privacy and security issues, the proposed model incorporates server-side random key generation and key distribution. Finally, the performance of the proposed classification technique is evaluated in terms of memory consumption, training time, search time, accuracy and error rate.
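The abstract does not spell out the protocol, but the CART building block it secures, choosing a split by Gini impurity, can be sketched on a toy vertically partitioned table. In the real setting each party would score its own columns locally and exchange only protected statistics; this sketch omits the cryptography entirely and uses invented data:

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    probs = [labels.count(c) / n for c in set(labels)]
    return 1.0 - sum(p * p for p in probs)

def best_split(column, labels):
    """Best binary threshold on one numeric feature column,
    scored by the weighted Gini impurity of the two sides."""
    best = (None, float("inf"))
    for t in sorted(set(column)):
        left = [y for x, y in zip(column, labels) if x <= t]
        right = [y for x, y in zip(column, labels) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

# Toy vertical partition: party A holds "age", party B holds
# "income"; the shared labels are illustrative only.
labels = ["no", "no", "yes", "yes", "yes"]
age    = [22, 25, 47, 52, 46]      # party A's column
income = [30, 32, 90, 110, 85]     # party B's column
print(best_split(age, labels), best_split(income, labels))
```

In the vertically partitioned protocol, each party computes `best_split` over its own columns and only the winning split statistics cross party boundaries, under the paper's key-distribution scheme.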
Reducing Object Detection Uncertainty from RGB and Thermal Data for UAV Outdoor Surveillance
Recent advances in Unmanned Aerial Vehicles (UAVs) have resulted in their
quick adoption for a wide range of civilian applications, including precision
agriculture, biosecurity, disaster monitoring and surveillance. UAVs offer
low-cost platforms with flexible hardware configurations, as well as an
increasing number of autonomous capabilities, including take-off, landing,
object tracking and obstacle avoidance. However, little attention has been paid
to how UAVs deal with object detection uncertainties caused by false readings
from vision-based detectors, data noise, vibrations, and occlusion. In most
situations, the relevance and understanding of these detections are delegated
to human operators, as many UAVs have limited cognition power to interact
autonomously with the environment. This paper presents a framework for
autonomous navigation under uncertainty in outdoor scenarios for small UAVs
using a probabilistic-based motion planner. The framework is evaluated with
real flight tests using a sub-2 kg quadrotor UAV and illustrated in a
victim-finding Search and Rescue (SAR) case study in a forest/bushland. The navigation
problem is modelled using a Partially Observable Markov Decision Process
(POMDP), and solved in real time onboard the small UAV using Augmented Belief
Trees (ABT) and the TAPIR toolkit. Results from experiments using colour and
thermal imagery show that the proposed motion planner provides accurate victim
localisation coordinates, as the UAV has the flexibility to interact with the
environment and obtain clearer visualisations of any potential victims compared
to the baseline motion planner. Incorporating this system allows optimised UAV
surveillance operations by diminishing false positive readings from
vision-based object detectors.
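The full POMDP solver (ABT via the TAPIR toolkit) is beyond a short example, but the heart of reducing detection uncertainty is a Bayes update of the belief that a victim is present, given a detector with known true- and false-positive rates. The rates and values below are illustrative, not from the paper:

```python
def update_belief(belief, detected, p_tp=0.9, p_fp=0.2):
    """One Bayes update of P(victim present) from a binary
    detection, with assumed detector true/false positive rates."""
    if detected:
        num = p_tp * belief
        den = p_tp * belief + p_fp * (1 - belief)
    else:
        num = (1 - p_tp) * belief
        den = (1 - p_tp) * belief + (1 - p_fp) * (1 - belief)
    return num / den

# A single (possibly false) detection barely moves a low prior, but
# repeated detections from better viewpoints drive the belief up,
# which is why the planner flies closer to confirm.
b = 0.1
for detected in [True, True, True]:
    b = update_belief(b, detected)
print(b)
```

A POMDP planner goes further by choosing actions (e.g. descend, re-observe) that are expected to sharpen this belief, rather than just passively filtering detections.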
ACCAMS: Additive Co-Clustering to Approximate Matrices Succinctly
Matrix completion and approximation are popular tools to capture a user's
preferences for recommendation and to approximate missing data. Instead of
using low-rank factorization we take a drastically different approach, based on
the simple insight that an additive model of co-clusterings allows one to
approximate matrices efficiently. This allows us to build a concise model that,
per bit of model learned, significantly beats all factorization approaches to
matrix approximation. Even more surprisingly, we find that summing over small
co-clusterings is more effective in modeling matrices than classic
co-clustering, which uses just one large partitioning of the matrix.
Following Occam's razor principle suggests that the simple structure induced
by our model better captures the latent preferences and decision making
processes present in the real world than classic co-clustering or matrix
factorization. We provide an iterative minimization algorithm, a collapsed
Gibbs sampler, theoretical guarantees for matrix approximation, and excellent
empirical evidence for the efficacy of our approach. We achieve
state-of-the-art results on the Netflix problem with a fraction of the model
complexity.
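The additive model can be sketched directly: fit one small block-constant co-clustering (a "stencil") to the current residual, subtract it, and repeat. The simplified alternating fit below stands in for ACCAMS's Bayesian backfitting and collapsed Gibbs sampler, and the matrix sizes are illustrative:

```python
import numpy as np

def fit_stencil(R, k, iters=10, rng=np.random.default_rng(0)):
    """Fit one block-constant stencil: k row clusters x k column
    clusters with one value per block, by alternating reassignment.
    A simplified stand-in for ACCAMS's backfitting step."""
    m, n = R.shape
    rows = rng.integers(0, k, m)
    cols = rng.integers(0, k, n)
    for _ in range(iters):
        # Block means given the current assignments.
        B = np.zeros((k, k))
        for a in range(k):
            for b in range(k):
                block = R[np.ix_(rows == a, cols == b)]
                B[a, b] = block.mean() if block.size else 0.0
        # Reassign rows, then columns, to their best clusters.
        rows = np.array([np.argmin([((R[i] - B[a][cols]) ** 2).sum()
                                    for a in range(k)]) for i in range(m)])
        cols = np.array([np.argmin([((R[:, j] - B[rows, b]) ** 2).sum()
                                    for b in range(k)]) for j in range(n)])
    return B[np.ix_(rows, cols)]    # the block-constant approximation

# Additive model: each stencil fits the residual of the ones before it.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 20))
approx = np.zeros_like(X)
for _ in range(3):                  # a sum of three small co-clusterings
    approx += fit_stencil(X - approx, k=3)
print(np.linalg.norm(X - approx) / np.linalg.norm(X))
```

Each stencil needs only two cluster assignments and a small k-by-k table of values, which is why the summed model is so cheap per bit compared with a low-rank factorization.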
The Family of MapReduce and Large Scale Data Processing Systems
In the last two decades, the continuous increase of computational power has
produced an overwhelming flow of data which has called for a paradigm shift in
the computing architecture and large scale data processing mechanisms.
MapReduce is a simple and powerful programming model that enables easy
development of scalable parallel applications to process vast amounts of data
on large clusters of commodity machines. It isolates the application from the
details of running a distributed program such as issues on data distribution,
scheduling and fault tolerance. However, the original implementation of the
MapReduce framework had some limitations that have been tackled by many
research efforts in several followup works after its introduction. This article
provides a comprehensive survey for a family of approaches and mechanisms of
large scale data processing mechanisms that have been implemented based on the
original idea of the MapReduce framework and are currently gaining a lot of
momentum in both research and industrial communities. We also cover a set of
introduced systems that have been implemented to provide declarative
programming interfaces on top of the MapReduce framework. In addition, we
review several large scale data processing systems that resemble some of the
ideas of the MapReduce framework for different purposes and application
scenarios. Finally, we discuss some of the future research directions for
implementing the next generation of MapReduce-like solutions.
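The programming model the survey builds on can be sketched in-process, with the distribution, scheduling, and fault tolerance (the parts the framework hides from the application) omitted:

```python
from collections import defaultdict
from itertools import chain

def map_phase(records, mapper):
    """Apply the user's map function to every input record,
    yielding intermediate (key, value) pairs."""
    return chain.from_iterable(mapper(r) for r in records)

def shuffle(pairs):
    """Group intermediate pairs by key: the step the framework
    performs between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the user's reduce function to each key's values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Canonical word count: map emits (word, 1), reduce sums the ones.
docs = ["map reduce map", "reduce fault tolerance"]
counts = reduce_phase(
    shuffle(map_phase(docs, lambda d: ((w, 1) for w in d.split()))),
    lambda key, values: sum(values))
print(counts)
```

Everything the survey catalogues (data placement, speculative re-execution, declarative layers such as Pig and Hive) lives around this two-function contract without changing it.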