Simple Analysis of Sparse, Sign-Consistent JL
Allen-Zhu, Gelashvili, Micali, and Shavit construct a sparse, sign-consistent Johnson-Lindenstrauss distribution, and prove that this distribution yields an essentially optimal dimension for the correct choice of sparsity. However, their analysis of the upper bound on the dimension and sparsity requires a complicated combinatorial graph-based argument similar to Kane and Nelson's analysis of sparse JL. We present a simple, combinatorics-free analysis of sparse, sign-consistent JL that yields the same dimension and sparsity upper bounds as the original analysis. Our analysis also yields dimension/sparsity tradeoffs, which were not previously known.
As with previous proofs in this area, our analysis is based on applying Markov's inequality to the pth moment of an error term that can be expressed as a quadratic form of Rademacher variables. Interestingly, we show that, unlike in previous work in the area, the traditionally used Hanson-Wright bound is not strong enough to yield our desired result. Indeed, although the Hanson-Wright bound is known to be optimal for Gaussian degree-2 chaos, it was already shown to be suboptimal for Rademachers. Surprisingly, we are able to show a simple moment bound for quadratic forms of Rademachers that is sufficiently tight to achieve our desired result, which, given the ubiquity of moment and tail bounds in theoretical computer science, is likely to be of broader interest.
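As a concrete illustration, the following is a minimal sketch of the kind of distribution being analyzed, assuming the standard sparse, sign-consistent construction: each column has exactly s nonzero entries placed in uniformly random rows, all sharing a single Rademacher sign. The function and parameter names are ours, for illustration only.

    import numpy as np

    def sparse_sign_consistent_jl(n, m, s, rng=None):
        """Sample an m x n sparse, sign-consistent JL matrix.

        Each column j has exactly s nonzero entries, all equal to
        sigma_j / sqrt(s) for a single Rademacher sign sigma_j shared
        by the whole column (the sign-consistency property).
        """
        if rng is None:
            rng = np.random.default_rng()
        A = np.zeros((m, n))
        signs = rng.choice([-1.0, 1.0], size=n)          # one sign per column
        for j in range(n):
            rows = rng.choice(m, size=s, replace=False)  # s random rows
            A[rows, j] = signs[j] / np.sqrt(s)
        return A

    # Embedding a vector: the Euclidean norm should be roughly preserved.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10_000)
    A = sparse_sign_consistent_jl(n=10_000, m=512, s=8, rng=rng)
    print(np.linalg.norm(A @ x) / np.linalg.norm(x))     # close to 1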
Machine Learning for Detecting Malware in PE Files
The increasing number of sophisticated malware samples poses a major cybersecurity threat, and portable executable (PE) files are a common vector for such malware. In this work we review and evaluate machine learning-based PE malware detection techniques. Using a large benchmark dataset, we evaluate features of PE files with the most common machine learning techniques for detecting malware.
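As an illustration of the evaluation pipeline the abstract describes, here is a minimal sketch: it stands in synthetic vectors for real features extracted from PE files (header fields, section entropies, import counts, and so on) and trains one common classifier. The feature set, labels, and model choice are assumptions, not the paper's exact setup.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for features extracted from PE files.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((2_000, 40))           # 2,000 files, 40 features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy labels: 1 = malware

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))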
Natural language processing for web browsing analytics: Challenges, lessons learned, and opportunities
In an Internet arena where search engines’ and other digital marketing firms’ revenues peak, other actors still have open opportunities to monetize their users’ data. After suitable anonymization, aggregation, and user agreement, the set of websites users visit can become exploitable data for ISPs. Uses range from assessing the scope of advertising campaigns to reinforcing user fidelity, among other marketing approaches, as well as security applications. However, sniffers based on HTTP, DNS, TLS, or flow features do not suffice for this task. Modern websites are designed to preload and prefetch some contents, in addition to embedding banners, social network links, images, and scripts from other websites. This self-triggered traffic makes it difficult to assess which websites users visited on purpose. Moreover, DNS caches prevent some queries for actively visited websites from even being sent. Given this limited input, we propose to handle such domains as words and sequences of domains as documents. This way, it is possible to identify the visited websites by translating this problem into a text classification context and applying the most promising techniques from the natural language processing and neural network fields. After applying different representation methods such as TF–IDF, Word2vec, Doc2vec, and custom neural networks in diverse scenarios and with several datasets, we can determine the websites visited on purpose with accuracy figures over 90%, with peaks close to 100%, in processes that are fully automated and free of any human parametrization.

This research has been partially funded by the Spanish State Research Agency under the project AgileMon (AEI PID2019-104451RB-C21) and by the Spanish Ministry of Science, Innovation and Universities under the program for the training of university lecturers (Grant number: FPU19/05678).
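A minimal sketch of the domains-as-words idea follows, assuming each training document is the space-joined sequence of domains observed around a browsing event and the label is the website actually visited on purpose. The example sequences, labels, and TF–IDF-plus-logistic-regression pipeline are illustrative assumptions, not the paper's exact models.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Each "document" is a sequence of observed domains; the label is the
    # website the user visited on purpose (hypothetical examples).
    docs = [
        "example.com cdn.example.com ads.tracker.net",
        "news.site static.news.site social.widget.com",
    ]
    labels = ["example.com", "news.site"]

    model = make_pipeline(
        TfidfVectorizer(token_pattern=r"\S+"),   # each domain is one token
        LogisticRegression(max_iter=1000),
    )
    model.fit(docs, labels)
    print(model.predict(["cdn.example.com example.com"]))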
Unified Embedding: Battle-Tested Feature Representations for Web-Scale ML Systems
Learning high-quality feature embeddings efficiently and effectively is
critical for the performance of web-scale machine learning systems. A typical
model ingests hundreds of features with vocabularies on the order of millions
to billions of tokens. The standard approach is to represent each feature value
as a d-dimensional embedding, introducing hundreds of billions of parameters
for extremely high-cardinality features. This bottleneck has led to substantial
progress in alternative embedding algorithms. Many of these methods, however,
make the assumption that each feature uses an independent embedding table. This
work introduces a simple yet highly effective framework, Feature Multiplexing,
where one single representation space is used across many different categorical
features. Our theoretical and empirical analysis reveals that multiplexed
embeddings can be decomposed into components from each constituent feature,
allowing models to distinguish between features. We show that multiplexed
representations lead to Pareto-optimal parameter-accuracy tradeoffs for three
public benchmark datasets. Further, we propose a highly practical approach
called Unified Embedding with three major benefits: simplified feature
configuration, strong adaptation to dynamic data distributions, and
compatibility with modern hardware. Unified Embedding gives significant improvements in offline and online metrics compared to highly competitive baselines across five web-scale search, ads, and recommender systems, where it serves billions of users across the world in industry-leading products.

Comment: NeurIPS'23 Spotlight
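To make the shared-representation idea concrete, here is a minimal sketch as we read it from the abstract: many categorical features index one shared embedding table, with a feature-specific hash salt so different features map the same raw token id to different rows. The class name, salting scheme, and sizes are illustrative assumptions, not the paper's exact construction.

    import torch
    import torch.nn as nn

    class MultiplexedEmbedding(nn.Module):
        """One shared table serving many categorical features (sketch)."""

        def __init__(self, table_size: int, dim: int, num_features: int):
            super().__init__()
            self.table = nn.Embedding(table_size, dim)  # single shared table
            self.table_size = table_size
            # Per-feature salt so features collide differently in the table.
            self.register_buffer(
                "salts", torch.randint(1, 2**31 - 1, (num_features,)))

        def forward(self, feature_idx: int, token_ids: torch.Tensor):
            # Hash this feature's raw token ids into the shared table.
            hashed = (token_ids * self.salts[feature_idx]) % self.table_size
            return self.table(hashed)

    emb = MultiplexedEmbedding(table_size=100_000, dim=32, num_features=100)
    vec = emb(feature_idx=3, token_ids=torch.tensor([42, 7_654_321]))
    print(vec.shape)  # torch.Size([2, 32])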