CAPTCHaStar! A novel CAPTCHA based on interactive shape discovery
Over the last few years, most websites on which users can register (e.g., email
providers and social networks) adopted CAPTCHAs (Completely Automated Public
Turing test to tell Computers and Humans Apart) as a countermeasure against
automated attacks. The battle of wits between designers and attackers of
CAPTCHAs led to current ones being annoying and hard to solve for users, while
still being vulnerable to automated attacks.
In this paper, we propose CAPTCHaStar, a new image-based CAPTCHA that relies
on user interaction. This novel CAPTCHA leverages the innate human ability to
recognize shapes in a cluttered environment. We assess the effectiveness of our
proposal on the two key aspects of CAPTCHAs: usability and resiliency to
automated attacks. In particular, we evaluated usability through a thorough
user study, and we tested the resiliency of our proposal against several types
of automated attacks: traditional ones, attacks designed ad hoc for our
proposal, and attacks based on machine learning. Compared to the state of the
art, our proposal is more user-friendly (e.g., only about 35% of the users
prefer current solutions, such as text-based CAPTCHAs) and more resilient to
automated attacks.
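The interactive shape-discovery idea can be sketched in a few lines. In this minimal toy version (names, parameters, and the drift mechanism are hypothetical, not the paper's implementation), each star's drawn position drifts away from its true shape position in proportion to the distance between the user's cursor and a hidden target, so the shape only becomes recognizable when the cursor reaches the target:

```python
import random

def scatter_stars(shape_points, target, noise=40.0, seed=0):
    """Return a render(cursor) function: each star gets a fixed random
    drift direction, scaled by the cursor's distance to the hidden
    target, so the shape emerges only at the target position."""
    rng = random.Random(seed)
    drifts = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in shape_points]

    def render(cursor):
        dist = ((cursor[0] - target[0]) ** 2
                + (cursor[1] - target[1]) ** 2) ** 0.5
        return [(x + ux * noise * dist, y + uy * noise * dist)
                for (x, y), (ux, uy) in zip(shape_points, drifts)]

    return render

# Four stars that form a square only when the cursor hits (0.5, 0.5).
square = [(0.0, 0.0), (0.0, 10.0), (10.0, 0.0), (10.0, 10.0)]
render = scatter_stars(square, target=(0.5, 0.5))
```

Solving then amounts to the human moving the cursor until the stars visibly cohere, a judgment that is easy perceptually but costly to automate.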
Methodological Flaws in Cognitive Animat Research
At the convergence of research on autonomous machine construction and the understanding of biological systems, it is usually argued that building robots that replicate extant animals is a valuable strategy for engineering autonomous intelligent systems. In this paper we address the very issue of animat construction: the rationale behind it, its current implementations, and the value they are producing. It will be shown that current activity, as it is done today, is deeply flawed and useless as research in the science and engineering of autonomy.
Mathematical Language Models: A Survey
In recent years, there has been remarkable progress in leveraging Language
Models (LMs), encompassing Pre-trained Language Models (PLMs) and Large-scale
Language Models (LLMs), within the domain of mathematics. This paper conducts a
comprehensive survey of mathematical LMs, systematically categorizing pivotal
research endeavors from two distinct perspectives: tasks and methodologies. The
landscape reveals a large number of proposed mathematical LLMs, which are
further delineated into instruction learning, tool-based methods, fundamental
CoT techniques, and advanced CoT methodologies. In addition, our survey entails
the compilation of over 60 mathematical datasets, including training datasets,
benchmark datasets, and augmented datasets. Addressing the primary challenges
and delineating future trajectories within the field of mathematical LMs, this
survey is positioned as a valuable resource, poised to facilitate and inspire
future innovation among researchers invested in advancing this domain.
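The tool-based methods the survey categorizes typically have the model emit an arithmetic expression and delegate its evaluation to a trusted external tool. A minimal sketch of such a calculator tool (hypothetical, not any specific system from the survey) can be built safely on Python's `ast` module rather than `eval`:

```python
import ast
import operator

# Map AST operator nodes to their arithmetic functions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expr: str) -> float:
    """Safely evaluate an arithmetic expression emitted by a model,
    rejecting anything other than numbers and the four basic ops."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)
```

The division of labor is the point: the LM handles language and problem decomposition, while exact arithmetic, where LMs are unreliable, is offloaded to the tool.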
How functional programming mattered
In 1989, when functional programming was still considered a niche topic, Hughes wrote a visionary paper arguing convincingly ‘why functional programming matters’. More than two decades have passed. Has functional programming really mattered? Our answer is a resounding ‘Yes!’. Functional programming is now at the forefront of a new generation of programming technologies, and enjoys increasing popularity and influence. In this paper, we review the impact of functional programming, focusing on how it has changed the way we may construct programs, the way we may verify programs, and fundamentally the way we may think about programs.
Deep learning that scales: leveraging compute and data
Deep learning has revolutionized the field of artificial intelligence in the past decade. Although the development of these techniques spans over several years, the recent advent of deep learning is explained by an increased availability of data and compute that have unlocked the potential of deep neural networks. They have become ubiquitous in domains such as natural language processing, computer vision, speech processing, and control, where enough training data is available. Recent years have seen continuous progress driven by ever-growing neural networks that benefited from large amounts of data and computing power.
This thesis is motivated by the observation that scale is one of the key factors driving progress in deep learning research, and aims at devising deep learning methods that scale gracefully with the available data and compute. We narrow down this scope into two main research directions. The first of them is concerned with designing hardware-aware methods which can make the most of the computing resources in current high performance computing facilities. We then study bottlenecks preventing existing methods from scaling up as more data becomes available, providing solutions that contribute towards enabling training of more complex models.
This dissertation studies the aforementioned research questions for two different learning paradigms, each with its own algorithmic and computational characteristics. The first part of this thesis studies the paradigm where the model needs to learn from a collection of examples, extracting as much information as possible from the given data. The second part is concerned with training agents that learn by interacting with a simulated environment, which introduces unique challenges such as efficient exploration and simulation.
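The basic pattern behind hardware-aware scaling in the first research direction is synchronous data parallelism: each worker computes a gradient on its shard of the batch, and an all-reduce averages the gradients before a shared parameter update. A minimal sketch (hypothetical names, linear-regression stand-in for the model, `np.mean` standing in for a ring all-reduce):

```python
import numpy as np

def shard_gradients(X, y, w, n_workers):
    """Each worker computes the mean-squared-error gradient of a
    linear model on its own shard of the data."""
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    return [2 * Xs.T @ (Xs @ w - ys) / len(ys) for Xs, ys in shards]

def allreduce_mean(grads):
    # Stand-in for a ring all-reduce across workers.
    return np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

w = np.zeros(4)
for _ in range(200):
    w -= 0.1 * allreduce_mean(shard_gradients(X, y, w, n_workers=4))
# w now approximates w_true: with equal-size shards, the averaged
# shard gradients equal the full-batch gradient.
```

With equal shard sizes the result is mathematically identical to full-batch training; the gain is that the per-shard gradient computations can run on separate devices concurrently.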
A Survey of Quantum-Cognitively Inspired Sentiment Analysis Models
Quantum theory, originally proposed as a physical theory to describe the
motions of microscopic particles, has been applied to various non-physics
domains involving human cognition and decision-making that are inherently
uncertain and exhibit certain non-classical, quantum-like characteristics.
Sentiment analysis is a typical example of such domains. In the last few years,
by leveraging the modeling power of quantum probability (a non-classical
probability stemming from quantum mechanics methodology) and deep neural
networks, a range of novel quantum-cognitively inspired models for sentiment
analysis have emerged and performed well. This survey presents a timely
overview of the latest developments in this fascinating cross-disciplinary
area. We first provide a background of quantum probability and quantum
cognition at a theoretical level, analyzing their advantages over classical
theories in modeling the cognitive aspects of sentiment analysis. Then, recent
quantum-cognitively inspired models are introduced and discussed in detail,
focusing on how they approach the key challenges of the sentiment analysis
task. Finally, we discuss the limitations of the current research and highlight
future research directions.
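The core quantum-probability ingredient these models share is small enough to state concretely. In this toy sketch (the two-dimensional space and state are illustrative, not any surveyed model's architecture), a sentence is a unit state vector, a sentiment class is a projector, and the class probability follows the Born rule:

```python
import numpy as np

def born_probability(psi, projector):
    """Born rule: probability of an outcome is <psi| P |psi>."""
    return float(np.real(psi.conj() @ projector @ psi))

# Toy two-dimensional space spanned by |positive> and |negative>.
pos = np.array([1.0, 0.0])
neg = np.array([0.0, 1.0])
P_pos = np.outer(pos, pos)  # projector onto the positive subspace

# A sentence represented as a superposition leaning positive:
# amplitudes sqrt(0.8) and sqrt(0.2), so probabilities 0.8 and 0.2.
psi = np.sqrt(0.8) * pos + np.sqrt(0.2) * neg
p_positive = born_probability(psi, P_pos)
```

The non-classical behavior the survey emphasizes comes from superposition and interference: amplitudes, not probabilities, are what combine, which classical probability cannot reproduce.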