Keystroke Biometrics in Response to Fake News Propagation in a Global Pandemic
This work proposes and analyzes the use of keystroke biometrics for content
de-anonymization. Fake news has become a powerful tool for manipulating public
opinion, especially during major events. In particular, the massive spread of
fake news during the COVID-19 pandemic has forced governments and companies to
fight against misinformation. In this context, the ability to link multiple
accounts or profiles that spread such malicious content on the Internet while
hiding in anonymity would enable proactive identification and blacklisting.
Behavioral biometrics can be powerful tools in this fight. In this work, we
have analyzed how the latest advances in keystroke biometric recognition can
help to link behavioral typing patterns in experiments involving 100,000 users
and more than 1 million typed sequences. Our proposed system is based on
Recurrent Neural Networks adapted to the context of content de-anonymization.
Given the challenge of linking the typed content of a target user to one of a
pool of candidate profiles, our results show that keystroke recognition can
reduce the list of candidate profiles by more than 90%. In addition, when
keystroke data are combined with auxiliary data (such as location), our system
achieves Rank-1 identification performance of 52.6% and 10.9% for background
candidate lists of 1K and 100K profiles, respectively.
Comment: arXiv admin note: text overlap with arXiv:2004.0362
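The core retrieval step described in this abstract can be sketched as nearest-neighbor ranking over keystroke embeddings. The sketch below is a hypothetical simplification, not the paper's actual RNN pipeline: it assumes each typed sequence has already been mapped to a fixed-size embedding vector, and shows how candidate profiles could be ranked by cosine distance and pruned to a shortlist (the ">90% reduction" reported above corresponds to keeping roughly the top 10% of candidates).

```python
import numpy as np

def rank_candidates(query_emb, profile_embs):
    """Rank candidate profiles by cosine distance to a query keystroke
    embedding; smaller distance means more likely the same typist."""
    q = query_emb / np.linalg.norm(query_emb)
    P = profile_embs / np.linalg.norm(profile_embs, axis=1, keepdims=True)
    dists = 1.0 - P @ q               # cosine distance per candidate
    return np.argsort(dists)           # candidate indices, best match first

def shortlist(query_emb, profile_embs, keep_frac=0.1):
    """Prune the background list, keeping only the top fraction of
    candidates (keep_frac=0.1 corresponds to a 90% reduction)."""
    order = rank_candidates(query_emb, profile_embs)
    k = max(1, int(len(order) * keep_frac))
    return order[:k]
```

Rank-1 identification, as reported in the abstract, then amounts to checking whether the true profile is the first element of the ranking.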
CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines
Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective.
The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives that measure the performance of multimedia search engines.
From a socio-economic perspective, we take stock of the impact and legal consequences of these technical advances and point out future directions of research.
Joint Video and Text Parsing for Understanding Events and Answering Queries
We propose a framework for parsing video and text jointly for understanding
events and answering user queries. Our framework produces a parse graph that
represents the compositional structures of spatial information (objects and
scenes), temporal information (actions and events) and causal information
(causalities between events and fluents) in the video and text. The knowledge
representation of our framework is based on a spatial-temporal-causal And-Or
graph (S/T/C-AOG), which jointly models possible hierarchical compositions of
objects, scenes and events as well as their interactions and mutual contexts,
and specifies the prior probabilistic distribution of the parse graphs. We
present a probabilistic generative model for joint parsing that captures the
relations between the input video/text, their corresponding parse graphs and
the joint parse graph. Based on the probabilistic model, we propose a joint
parsing system consisting of three modules: video parsing, text parsing and
joint inference. Video parsing and text parsing produce two parse graphs from
the input video and text respectively. The joint inference module produces a
joint parse graph by performing matching, deduction and revision on the video
and text parse graphs. The proposed framework has the following objectives:
first, we aim at deep semantic parsing of video and text that goes beyond
traditional bag-of-words approaches; second, we perform parsing and reasoning
across the spatial, temporal and causal dimensions based on the joint S/T/C-AOG
representation; third, we show that deep joint parsing facilitates subsequent
applications such as generating narrative text descriptions and answering
queries in the form of who, what, when, where and why. We empirically
evaluated our system by comparing its output against ground truth and by
measuring query-answering accuracy, and obtained satisfactory results.
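The joint-inference module described above can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's S/T/C-AOG machinery: it assumes each modality's parse graph has already been flattened into (entity, relation, entity) triples, and shows matching (entities grounded in both modalities) and a naive union in place of full deduction and revision.

```python
def joint_parse(video_triples, text_triples):
    """Toy joint inference over two parse graphs given as sets of
    (entity, relation, entity) triples.

    Matching: entities that appear in both the video and the text graph.
    Joint graph: the union of both triple sets (conflict revision omitted).
    """
    video_entities = {e for s, _, o in video_triples for e in (s, o)}
    text_entities = {e for s, _, o in text_triples for e in (s, o)}
    matched = video_entities & text_entities       # cross-modal matching
    joint = set(video_triples) | set(text_triples)  # merged parse graph
    return matched, joint
```

In the full framework, the matched entities would anchor deduction (propagating relations from one modality to the other) and revision (resolving contradictory parses), which this sketch deliberately leaves out.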