
    Data mining based cyber-attack detection


    Randomized Dimensionality Reduction for k-means Clustering

    We study the topic of dimensionality reduction for k-means clustering. Dimensionality reduction encompasses the union of two approaches: feature selection and feature extraction. A feature selection based algorithm for k-means clustering selects a small subset of the input features and then applies k-means clustering on the selected features. A feature extraction based algorithm for k-means clustering constructs a small set of new artificial features and then applies k-means clustering on the constructed features. Despite the significance of k-means clustering, as well as the wealth of heuristic methods addressing it, provably accurate feature selection methods for k-means clustering are not known. On the other hand, two provably accurate feature extraction methods for k-means clustering are known in the literature; one is based on random projections and the other is based on the singular value decomposition (SVD). This paper makes further progress towards a better understanding of dimensionality reduction for k-means clustering. Namely, we present the first provably accurate feature selection method for k-means clustering and, in addition, we present two feature extraction methods. The first feature extraction method is based on random projections and improves upon the existing results in terms of time complexity and the number of features that need to be extracted. The second feature extraction method is based on fast approximate SVD factorizations and also improves upon the existing results in terms of time complexity. The proposed algorithms are randomized and provide constant-factor approximation guarantees with respect to the optimal k-means objective value.
    Comment: IEEE Transactions on Information Theory, to appear.
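
    As a concrete illustration of the random-projection route, here is a minimal sketch: the data are reduced to d dimensions with a Gaussian random matrix before running k-means. This is a generic instantiation under stated assumptions, not the paper's specific construction; the projection distribution, the choice of d, and the helper name random_projection_kmeans are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    def random_projection_kmeans(X, k, d):
        """Project X (n x m) to d dimensions, then cluster with k-means.

        A Gaussian sketch is one standard choice; the paper's construction
        and its provable choice of d may differ.
        """
        n, m = X.shape
        R = rng.normal(0.0, 1.0 / np.sqrt(d), size=(m, d))  # random projection matrix
        X_proj = X @ R                                       # n x d sketch of the data
        return KMeans(n_clusters=k, n_init=10).fit_predict(X_proj)

    # Example: 500 points in 1000 dimensions, reduced to 50 before clustering.
    X = rng.normal(size=(500, 1000))
    labels = random_projection_kmeans(X, k=5, d=50)
    ```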

    Energy Efficiency in Cache Enabled Small Cell Networks With Adaptive User Clustering

    Using a network of cache enabled small cells, traffic during peak hours can be reduced considerably by proactively fetching the content that is most likely to be requested. In this paper, we explore the impact of proactive caching on an important metric for future generation networks, namely energy efficiency (EE). We argue that exploiting the correlation in user content popularity profiles, together with the spatial distribution of users with comparable request patterns, can considerably improve the achievable energy efficiency of the network. The problem of optimizing EE is decoupled into two related subproblems. The first addresses content popularity modeling. While most existing works assume similar popularity profiles for all users in the network, we consider an alternative caching framework in which users are clustered according to their content popularity profiles. To assess the utility of the proposed clustering scheme, we use a statistical model selection criterion, namely the Akaike information criterion (AIC). Using stochastic geometry, we derive a closed-form expression for the achievable EE and find the optimal active small cell density vector that maximizes it. The second subproblem investigates the impact of exploiting the spatial distribution of users with comparable request patterns. Considering a snapshot of the network, we formulate a combinatorial optimization problem that optimizes content placement so as to minimize the transmission power used. Numerical results show that the clustering scheme considerably improves the cache hit probability, and consequently the EE, compared with an unclustered approach. Simulations also show that the small base station allocation algorithm improves the energy efficiency and hit probability.
    Comment: 30 pages, 5 figures, submitted to Transactions on Wireless Communications (15-Dec-2016)
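
    To make the AIC-based model-selection step concrete, here is a minimal sketch assuming scikit-learn and synthetic data: users' popularity profiles are clustered with a Gaussian mixture, and the number of clusters is picked by minimizing AIC. The synthetic profiles, the mixture model, and the range of cluster counts are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)

    # Hypothetical data: each row is one user's content-popularity profile
    # (request frequencies over a catalog of 20 items), not the paper's dataset.
    profiles = rng.dirichlet(np.ones(20), size=300)

    # Pick the number of user clusters by minimizing AIC, one common way to
    # apply the Akaike information criterion for model selection.
    best_k, best_aic, best_model = None, np.inf, None
    for k in range(1, 11):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(profiles)
        aic = gmm.aic(profiles)
        if aic < best_aic:
            best_k, best_aic, best_model = k, aic, gmm

    clusters = best_model.predict(profiles)  # per-user cluster assignments
    print(f"AIC-selected number of clusters: {best_k}")
    ```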

    An information-theoretic approach to the gravitational-wave burst detection problem

    The observational era of gravitational-wave astronomy began in the fall of 2015 with the detection of GW150914. One potential type of detectable gravitational wave is the short-duration gravitational-wave burst, whose waveform can be difficult to predict. We present the framework for a new detection algorithm for such burst events, oLIB, that can be used at low latency to identify gravitational-wave transients independently of other search algorithms. The algorithm consists of 1) an excess-power event generator based on the Q-transform (Omicron), 2) coincidence of these events across a detector network, and 3) an analysis of the coincident events using a Markov chain Monte Carlo Bayesian evidence calculator (LALInferenceBurst). These steps compress the full data streams into a set of Bayes factors for each event; through this process, we use elements of information theory to minimize the amount of information lost regarding the signal-versus-noise hypothesis. We optimally extract this information using a likelihood-ratio test to estimate a detection significance for each event. Using representative archival LIGO data, we show that the algorithm can detect gravitational-wave burst events of astrophysical strength in realistic instrumental noise across different burst waveform morphologies. We also demonstrate that combining Bayes factors by means of a likelihood-ratio test can improve the detection efficiency of a gravitational-wave burst search. Finally, we show that oLIB's performance is robust against the choice of gravitational-wave populations used to model the likelihood-ratio test likelihoods.
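
    The final ranking step can be sketched as follows, under stated assumptions: given log Bayes factors from a simulated signal population and from background noise, estimate the two likelihoods with kernel density estimates and rank new events by their likelihood ratio. The Gaussian training samples and the KDE likelihood models are stand-ins for illustration, not oLIB's actual models.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)

    # Hypothetical training samples of log Bayes factors: one set from
    # simulated signals, one from background noise (not real LIGO output).
    logB_signal = rng.normal(loc=4.0, scale=1.5, size=2000)
    logB_noise = rng.normal(loc=0.0, scale=1.0, size=2000)

    # Kernel density estimates of the two likelihoods p(logB | hypothesis).
    p_signal = gaussian_kde(logB_signal)
    p_noise = gaussian_kde(logB_noise)

    def likelihood_ratio(logB):
        """Rank statistic: larger values are more signal-like."""
        return p_signal(logB) / p_noise(logB)

    candidates = np.array([0.5, 2.0, 6.0])  # log Bayes factors of new events
    print(likelihood_ratio(candidates))
    ```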