A Multiresolution Stochastic Process Model for Predicting Basketball Possession Outcomes
Basketball games evolve continuously in space and time as players constantly
interact with their teammates, the opposing team, and the ball. However,
current analyses of basketball outcomes rely on discretized summaries of the
game that reduce such interactions to tallies of points, assists, and similar
events. In this paper, we propose a framework for using optical player tracking
data to estimate, in real time, the expected number of points obtained by the
end of a possession. This quantity, called expected possession value
end of a possession. This quantity, called \textit{expected possession value}
(EPV), derives from a stochastic process model for the evolution of a
basketball possession; we model this process at multiple levels of resolution,
differentiating between continuous, infinitesimal movements of players, and
discrete events such as shot attempts and turnovers. Transition kernels are
estimated using hierarchical spatiotemporal models that share information
across players while remaining computationally tractable on very large data
sets. In addition to estimating EPV, these models reveal novel insights into
players' decision-making tendencies as a function of their spatial strategy.
Comment: 31 pages, 9 figures
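At its coarsest level of resolution, EPV reduces to an expected end-of-possession point total over a stochastic process of game states. A minimal sketch under a toy discrete Markov model (all state names, transition probabilities, and point values below are illustrative, not estimates from the paper):

```python
import numpy as np

# Toy coarsened possession model: transient states describe the ball,
# absorbing states end the possession with a point value.
# All numbers below are illustrative, not estimates from the paper.
states = ["perimeter", "post", "drive"]  # transient states
points = np.array([2.0, 3.0, 0.0, 0.0])  # made 2, made 3, turnover, miss

# P[i, j]: transition probability among transient states
P = np.array([[0.60, 0.15, 0.10],
              [0.20, 0.40, 0.10],
              [0.10, 0.10, 0.30]])
# R[i, k]: probability of moving from transient state i to absorbing end k
# (each row of [P | R] sums to 1)
R = np.array([[0.03, 0.07, 0.03, 0.02],
              [0.15, 0.00, 0.05, 0.10],
              [0.25, 0.05, 0.05, 0.15]])

# EPV satisfies v = R @ points + P @ v, i.e. (I - P) v = R @ points
v = np.linalg.solve(np.eye(len(states)) - P, R @ points)
for s, val in zip(states, v):
    print(f"EPV from {s}: {val:.3f}")
```

The paper's model replaces this toy chain with continuous player motion at the fine level and estimated transition kernels, but the expectation being computed has the same shape.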
In-Network Volumetric DDoS Victim Identification Using Programmable Commodity Switches
Volumetric distributed Denial-of-Service (DDoS) attacks have become one of
the most significant threats to modern telecommunication networks. However,
most existing defense systems require that detection software operate from a
centralized monitoring collector, leading to increased traffic load and delayed
response. The recent advent of Data Plane Programmability (DPP) enables an
alternative solution: threshold-based volumetric DDoS detection can be
performed directly in programmable switches to skim only potentially hazardous
traffic, to be analyzed in depth at the controller. In this paper, we first
introduce the BACON data structure based on sketches, to estimate
per-destination flow cardinality, and theoretically analyze it. Then we employ
it in a simple in-network DDoS victim identification strategy, INDDoS, to
detect the destination IPs for which the number of incoming connections exceeds
a pre-defined threshold. We describe its hardware implementation on a
Tofino-based programmable switch using the domain-specific P4 language, proving
that some limitations imposed by real hardware to safeguard processing speed
can be overcome to implement relatively complex packet manipulations. Finally,
we present some experimental performance measurements, showing that our
programmable switch is able to keep processing packets at line-rate while
performing volumetric DDoS detection, and also achieves a high F1 score on DDoS
victim identification.
Comment: Accepted by IEEE Transactions on Network and Service Management,
Special Issue on Latest Developments for Security Management of Networks and
Services
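The detection step described above amounts to estimating, per destination, the number of distinct incoming sources and comparing it to a threshold. A minimal software sketch using linear counting as the cardinality estimator (an illustrative stand-in, not the BACON structure or its P4 implementation):

```python
import hashlib
import math

class DestCardinalitySketch:
    """Toy per-destination distinct-source counter using linear counting.
    Illustrative only; not the BACON structure from the paper."""
    def __init__(self, rows=64, cols=256):
        self.rows, self.cols = rows, cols
        self.bitmaps = [[0] * cols for _ in range(rows)]

    def _h(self, key, salt):
        d = hashlib.blake2b(f"{salt}:{key}".encode(), digest_size=8).digest()
        return int.from_bytes(d, "big")

    def add(self, src_ip, dst_ip):
        row = self._h(dst_ip, "row") % self.rows
        col = self._h(src_ip, "col") % self.cols
        self.bitmaps[row][col] = 1

    def estimate(self, dst_ip):
        row = self._h(dst_ip, "row") % self.rows
        zeros = self.bitmaps[row].count(0)
        if zeros == 0:
            return float("inf")
        return self.cols * math.log(self.cols / zeros)  # linear counting

# Flag victims whose estimated number of distinct sources exceeds a threshold
sk = DestCardinalitySketch()
for s in range(200):
    sk.add(f"10.0.{s // 256}.{s % 256}", "192.0.2.1")  # heavily targeted dst
sk.add("10.0.0.1", "192.0.2.2")                        # lightly used dst
THRESH = 100
victims = [d for d in ("192.0.2.1", "192.0.2.2") if sk.estimate(d) > THRESH]
```

On real hardware the paper performs the equivalent bitmap updates in the switch data plane at line rate; only destinations crossing the threshold are reported for in-depth analysis at the controller.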
Discrete Wavelet Transforms
Discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is increasingly applied to more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g., lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low-bit-rate image compression, low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters consist of both tutorial and highly advanced material; the book is therefore intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
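As a concrete illustration of the octave-scale analysis the book covers, one level of the orthonormal Haar DWT, the simplest filter-bank case, splits a signal into approximation and detail coefficients with perfect reconstruction:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the orthonormal Haar DWT: returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    assert len(x) % 2 == 0, "signal length must be even"
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-pass: octave-scale average
    detail = (even - odd) / np.sqrt(2)   # high-pass: local difference
    return approx, detail

def haar_idwt_level(approx, detail):
    """Inverse of one Haar level (perfect reconstruction)."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(2 * len(approx))
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt_level(x)
assert np.allclose(haar_idwt_level(a, d), x)  # perfect reconstruction
```

Recursing `haar_dwt_level` on the approximation coefficients yields the multi-level, octave-band decomposition; the lifting and shift-invariant schemes in the book generalize this basic split.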
INFORMATION SECURITY: A STUDY ON BIOMETRIC SECURITY SOLUTIONS FOR TELECARE MEDICAL INFORMATION SYSTEMS
This exploratory study provides a means for evaluating and rating Telecare medical information systems in order to identify more effective security solutions. The analysis of existing solutions was conducted via an in-depth study of Telecare security, and current biometric technologies are proposed as a new means for secure communication of private information over public channels. Specifically, this research gives businesses a three-dimensional framework for evaluating prospective technologies so that they can make an accurate decision on any given biometric security technology. By identifying the key aspects that make a security solution most effective in minimizing the risk of exposing a patient's confidential data, we created a three-dimensional rubric that reflects not only the business view but also that of the users, such as the patients and doctors who use Telecare medical information systems every day. Finally, we also consider the implications of biometric solutions from a technological standpoint.
Anti-Neuron Watermarking: Protecting Personal Data Against Unauthorized Neural Networks
We study protecting a user's data (images in this work) against a learner's
unauthorized use in training neural networks. It is especially challenging when
the user's data is only a tiny percentage of the learner's complete training
set. We revisit the traditional watermarking under modern deep learning
settings to tackle the challenge. We show that when a user watermarks images
using a specialized linear color transformation, a neural network classifier
will be imprinted with the signature so that a third-party arbitrator can
verify the potentially unauthorized usage of the user data by inferring the
watermark signature from the neural network. We also discuss what watermarking
properties and signature spaces make the arbitrator's verification convincing.
To our best knowledge, this work is the first to protect an individual user's
data ownership from unauthorized use in training neural networks.
Comment: Accepted to ECCV 202
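The watermarking idea above can be sketched as a per-user linear mixing of color channels, where the mixing matrix plays the role of the user's signature. This is a simplified illustration; the paper's actual transformation, signature space, and verification procedure are more involved:

```python
import numpy as np

def watermark_image(img, key_matrix):
    """Apply a per-user linear color transformation as a watermark signature.
    img: H x W x 3 array with values in [0, 1]; key_matrix: 3 x 3 transform.
    Illustrative only; not the exact scheme from the paper."""
    marked = img @ key_matrix.T          # mix the RGB channels linearly
    return np.clip(marked, 0.0, 1.0)

rng = np.random.default_rng(0)
# A near-identity key keeps the image visually almost unchanged while
# imprinting a user-specific linear signature.
key = np.eye(3) + 0.02 * rng.standard_normal((3, 3))
img = rng.random((4, 4, 3))
marked = watermark_image(img, key)
print(np.max(np.abs(marked - img)))  # magnitude of the perturbation
```

Training on many images marked this way imprints the transformation on the classifier, which is what lets the arbitrator later infer the signature from the network.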
Sparse and Nonnegative Factorizations For Music Understanding
In this dissertation, we propose methods for sparse and nonnegative factorization that are specifically suited for analyzing musical signals. First, we discuss two constraints that aid factorization of musical signals: harmonic and co-occurrence constraints. We propose a novel dictionary learning method that imposes harmonic constraints upon the atoms of the learned dictionary while allowing the dictionary size to grow appropriately during the learning procedure. When there is significant spectral-temporal overlap among the musical sources, our method outperforms popular existing matrix factorization methods as measured by the recall and precision of learned dictionary atoms. We also propose co-occurrence constraints -- three simple and convenient multiplicative update rules for nonnegative matrix factorization (NMF) that enforce dependence among atoms. Using examples in music transcription, we demonstrate the ability of these updates to represent each musical note with multiple atoms and cluster the atoms for source separation purposes.
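For reference, the standard Lee-Seung multiplicative updates that the proposed co-occurrence rules build on can be sketched as follows (this is the unconstrained baseline under a Frobenius objective, not the dissertation's constrained updates):

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Standard Lee-Seung multiplicative updates for V ~ W @ H.
    Nonnegativity is preserved because every update is a ratio of
    nonnegative terms applied multiplicatively."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps      # dictionary atoms (columns)
    H = rng.random((rank, m)) + eps      # activations
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update dictionary atoms
    return W, H

# Factor a small nonnegative "spectrogram-like" matrix
V = np.random.default_rng(1).random((8, 20))
W, H = nmf_multiplicative(V, rank=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The dissertation's co-occurrence rules modify these updates so that groups of atoms activate together, allowing one note to be represented by several atoms.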
Second, we study how spectral and temporal information extracted by nonnegative factorizations can improve upon musical instrument recognition. Musical instrument recognition in melodic signals is difficult, especially for classification systems that rely entirely upon spectral information instead of temporal information. Here, we propose a simple and effective method of combining spectral and temporal information for instrument recognition. While existing classification methods use traditional features such as statistical moments, we extract novel features from spectral and temporal atoms generated by NMF using a biologically motivated multiresolution gamma filterbank. Unlike other methods that require thresholds, safeguards, and hierarchies, the proposed spectral-temporal method requires only simple filtering and a flat classifier.
Finally, we study how to perform sparse factorization when a large dictionary of musical atoms is already known. Sparse coding methods such as matching pursuit (MP) have been applied to problems in music information retrieval such as transcription and source separation with moderate success. However, when the set of dictionary atoms is large, identification of the best match in the dictionary with the residual is slow -- linear in the size of the dictionary. Here, we propose a variant called approximate matching pursuit (AMP) that is faster than MP while maintaining scalability and accuracy. Unlike MP, AMP uses an approximate nearest-neighbor (ANN) algorithm to find the closest match in a dictionary in sublinear time. One such ANN algorithm, locality-sensitive hashing (LSH), is a probabilistic hash algorithm that places similar, yet not identical, observations into the same bin. While the accuracy of AMP is comparable to similar MP methods, the computational complexity is reduced. Also, by using LSH, this method scales easily; the dictionary can be expanded without reorganizing any data structures.
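The AMP idea of replacing the linear dictionary scan with a sublinear approximate lookup can be sketched with random-hyperplane LSH: atoms are binned by the sign pattern of a few random projections, and only the residual's bucket is scanned (illustrative only; the parameters and exact ANN scheme differ from the dissertation):

```python
import numpy as np

def lsh_index(atoms, n_bits=8, seed=0):
    """Random-hyperplane LSH over unit-norm dictionary atoms (columns).
    Illustrative stand-in for the ANN step of approximate matching pursuit."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, atoms.shape[0]))
    codes = planes @ atoms > 0           # n_bits x n_atoms sign pattern
    buckets = {}
    for j in range(atoms.shape[1]):
        buckets.setdefault(codes[:, j].tobytes(), []).append(j)
    return planes, buckets

def approx_best_match(residual, atoms, planes, buckets):
    """Scan only the residual's LSH bucket instead of the full dictionary."""
    code = (planes @ residual > 0).tobytes()
    candidates = buckets.get(code, range(atoms.shape[1]))  # fall back to full scan
    return max(candidates, key=lambda j: abs(atoms[:, j] @ residual))

rng = np.random.default_rng(2)
D = rng.standard_normal((32, 500))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
x = D[:, 123] + 0.05 * rng.standard_normal(32)   # residual near atom 123
planes, buckets = lsh_index(D)
j = approx_best_match(x, D, planes, buckets)
```

Because similar vectors tend to share a sign pattern, each bucket holds only a small fraction of the dictionary, so the expected cost per pursuit iteration is sublinear in the dictionary size.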