Self-normalized Cram\'{e}r type moderate deviations for the maximum of sums
Let $X_1, X_2, \ldots$ be independent random variables with zero means and finite variances, and let $S_n = \sum_{i=1}^n X_i$ and $V_n^2 = \sum_{i=1}^n X_i^2$. A Cram\'{e}r type moderate deviation for the maximum of the self-normalized sums $\max_{1\le k\le n} S_k / V_n$ is obtained. In particular, for identically distributed $X_1, X_2, \ldots$, it is proved that $P(\max_{1\le k\le n} S_k \ge x V_n) / (1 - \Phi(x)) \to 2$ uniformly for $0 \le x \le o(n^{1/6})$ under the optimal finite third moment condition on $X_1$.
Comment: Published at http://dx.doi.org/10.3150/12-BEJ415 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
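As a quick numerical illustration (a hypothetical Monte Carlo sketch, not part of the paper), one can check by simulation that for i.i.d. standard normal summands the tail probability of the self-normalized maximum $\max_{1\le k\le n} S_k/V_n$ is roughly twice the standard normal tail $1-\Phi(x)$ at moderate $x$; the sample sizes and the point $x = 1.5$ below are assumptions made for the example:

```python
import math
import random

# Hypothetical Monte Carlo check (not from the paper): for i.i.d. N(0,1)
# variables, estimate P(max_{k<=n} S_k >= x * V_n) and compare it with
# 2 * (1 - Phi(x)), the limit suggested by the moderate deviation result.
def tail_prob(n=200, x=1.5, trials=10000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = v2 = smax = 0.0
        for _ in range(n):
            xi = rng.gauss(0.0, 1.0)
            s += xi           # partial sum S_k
            v2 += xi * xi     # squared self-normalizer V_n^2
            smax = max(smax, s)
        if smax >= x * math.sqrt(v2):
            hits += 1
    return hits / trials

p_emp = tail_prob()
p_ref = math.erfc(1.5 / math.sqrt(2))  # equals 2 * (1 - Phi(1.5)) ~ 0.1336
print(p_emp, round(p_ref, 4))
```

For finite n the empirical tail sits slightly below the limiting value, consistent with the result holding only asymptotically.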
On non-stationary threshold autoregressive models
In this paper we study the limiting distributions of the least-squares estimators for the non-stationary first-order threshold autoregressive (TAR(1)) model. It is proved that the limiting behaviors of the TAR(1) process are very different from those of the classical unit root model and the explosive AR(1) model.
Comment: Published at http://dx.doi.org/10.3150/10-BEJ306 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
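To make the setting concrete, here is a minimal simulation sketch (a stationary illustrative case with assumed coefficients and threshold zero, not the paper's non-stationary analysis): a TAR(1) series switches its autoregressive coefficient according to the sign of the previous value, and least squares is computed separately in each regime:

```python
import random

# Hypothetical TAR(1) illustration: y_t = a*y_{t-1} if y_{t-1} <= 0,
# b*y_{t-1} if y_{t-1} > 0, plus N(0,1) noise. The coefficients a, b and
# the threshold 0 are assumptions made for this example.
def simulate_tar1(a=0.5, b=-0.5, n=5000, seed=7):
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n):
        prev = y[-1]
        coef = a if prev <= 0.0 else b
        y.append(coef * prev + rng.gauss(0.0, 1.0))
    return y

def ls_estimates(y):
    # Regime-wise least squares: sum(y_t * y_{t-1}) / sum(y_{t-1}^2),
    # restricted to observations whose lag falls in that regime.
    num_a = den_a = num_b = den_b = 0.0
    for prev, cur in zip(y, y[1:]):
        if prev <= 0.0:
            num_a += cur * prev
            den_a += prev * prev
        else:
            num_b += cur * prev
            den_b += prev * prev
    return num_a / den_a, num_b / den_b

a_hat, b_hat = ls_estimates(simulate_tar1())
print(round(a_hat, 2), round(b_hat, 2))
```

In this stationary case the estimators are consistent; the paper's point is that their limiting distributions change qualitatively once the process is non-stationary.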
Context-aware Cluster Based Device-to-Device Communication to Serve Machine Type Communications
Billions of Machine Type Communication (MTC) devices are foreseen to be deployed in the next ten years, potentially opening a new market for next-generation wireless networks. However, MTC applications have different characteristics and requirements compared with the services provided by legacy cellular networks. For instance, an MTC device sporadically needs to transmit a small data packet containing information generated by sensors. At the same time, due to the massive deployment of MTC devices, it is inefficient to charge their batteries manually, and thus a long battery life is required for MTC devices. In this sense, legacy networks designed to serve human-driven traffic in real time cannot support MTC efficiently. In order to improve the availability and battery life of MTC devices, context-aware device-to-device (D2D) communication is exploited in this paper. By applying D2D communication, some MTC users can serve as relays for other MTC users who experience bad channel conditions. Moreover, signaling schemes are designed to enable the collection of context information and to support the proposed D2D communication scheme. Last but not least, a system-level simulator is implemented to evaluate the performance of the proposed technologies, and a large performance gain is shown by the numerical results.
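A minimal sketch of the relaying idea (all names, values, and the threshold below are hypothetical illustrations, not the signaling scheme defined in the paper): given reported channel qualities, devices with a poor direct link are mapped to the best-quality neighbor, which then forwards their traffic over D2D links:

```python
# Hypothetical D2D relay selection sketch. Each MTC device reports a
# channel quality in [0, 1]; devices below the threshold are served via
# the strongest device acting as a relay. Names and values are invented
# for illustration only.
def select_relays(channel_quality, threshold):
    """Map each weak device to the strongest device as its relay."""
    strong = {d: q for d, q in channel_quality.items() if q >= threshold}
    weak = [d for d, q in channel_quality.items() if q < threshold]
    if not strong:
        return {}  # no candidate relay available
    best_relay = max(strong, key=strong.get)
    return {d: best_relay for d in weak}

quality = {"mtc1": 0.9, "mtc2": 0.2, "mtc3": 0.6, "mtc4": 0.1}
assignment = select_relays(quality, threshold=0.5)
print(assignment)  # -> {'mtc2': 'mtc1', 'mtc4': 'mtc1'}
```

A real scheme would also weigh battery levels and relay load, which is exactly the kind of context information the proposed signaling collects.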
GPU-Accelerated BWT Construction for Large Collection of Short Reads
Advances in DNA sequencing technology have stimulated the development of algorithms and tools for processing very large collections of short strings (reads). Short-read alignment and assembly are among the most well-studied problems. Many state-of-the-art aligners, at their core, use the Burrows-Wheeler transform (BWT) as a main-memory index of a reference genome (a typical example being the NCBI human genome). Recently, the BWT has also found use in string-graph assembly, for indexing the reads themselves (i.e., the raw data from DNA sequencers). In a typical data set, the volume of reads is tens of times the size of the sequenced genome and can be up to 100 Gigabases. Note that a reference genome is relatively stable and computing its index is not a frequent task; for reads, however, the index has to be computed from scratch for each given input, so efficient BWT construction becomes a much bigger concern than before. In this paper, we present a practical method called CX1 for constructing the BWT of very large string collections. CX1 is the first tool that can take advantage of the parallelism given by a graphics processing unit (GPU, a relatively cheap device providing a thousand or more primitive cores), as well as, simultaneously, the parallelism of a multi-core CPU and, more interestingly, of a cluster of GPU-enabled nodes. Using CX1, the BWT of a short-read collection of up to 100 Gigabases can be constructed in less than 2 hours using a machine equipped with a quad-core CPU and a GPU, or in about 43 minutes using a cluster of 4 such machines (the speedup is almost linear after excluding the first 16 minutes for loading the reads from the hard disk). The previously fastest tool, BRC, is measured to take 12 hours to process 100 Gigabases on one machine; it is non-trivial how BRC could be parallelized to take advantage of a cluster of machines, let alone GPUs.
Comment: 11 pages
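For reference, the transform itself is simple to state (a naive single-string sketch; CX1's contribution is computing it at the scale of huge read collections, which requires far more sophisticated parallel suffix sorting): sort all rotations of the sentinel-terminated string and read off the last column:

```python
# Naive BWT sketch for a single sentinel-terminated string. Real tools
# such as CX1 use scalable suffix sorting on GPUs/clusters; this rotation
# sort is only practical for tiny inputs.
def bwt(s):
    assert s.endswith("$") and s.count("$") == 1, "need one terminal sentinel"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana$"))  # -> annb$aa
```

For a collection of reads, each string gets its own end marker (ordered among themselves), so the multi-string BWT is the last column of the sorted rotations of all reads taken together.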