Proximal Deictic Temporal Reference with Calendar Units
Corrected version. The paper centres on deictic reference to temporal segments of the near future or past using the fundamental calendar units (days, years, weeks, months) and their divisions (days of the week, parts of the day). • The global aim of the study: to identify language-specific and cross-linguistic patterns in the linguistic use of calendar units. • A more specific goal: to determine to what extent temporal reference can be achieved through linguistic calendar expressions independently of other elements, that is, how much of the necessary information is directly encoded in them and how much is supplied by additional linguistic and extra-linguistic elements. We present initial results of ongoing research. We consider here some of the properties of three types of expressions employing linguistic calendar terms: the fundamental units (day, year, week, month), parts of the day, and the (named) days of the week. The fundamental units have been examined (to varying degrees of depth) in some 20 languages of various language families. The other units have, at this stage, only been examined in a more limited set of languages. As will be shown, the three types of expressions reflect temporal reference to different levels or different cycles, and their linguistic behaviour reveals differences in the temporal information they encode and in their ability to function independently as temporal markers
Proximal Deixis with Calendar Terms: Cross-linguistic Patterns of Temporal Reference
An analysis of deictic temporal reference using major calendar units (day, year, week, month) and their divisions (days of the week, parts of the day). Our analysis shows systematic cross-linguistic tendencies and indicates that each type of unit encodes different information, which affects its capacity to function independently as a temporal marker in the absence of additional linguistic or extra-linguistic elements
1st Annual LGBT Symposium Professor’s Gratitude
Thank-you letter given to professors who participated in the 1st LGBTQ Symposium
Sample-based distance-approximation for subsequence-freeness
In this work, we study the problem of approximating the distance to
subsequence-freeness in the sample-based distribution-free model. For a given
subsequence (word) $w = w_1 \ldots w_k$, a sequence (text) $T = t_1 \ldots t_n$
is said to contain $w$ if there exist indices $1 \leq i_1 < \ldots < i_k \leq n$
such that $t_{i_j} = w_j$ for every $1 \leq j \leq k$. Otherwise, $T$ is
$w$-free. Ron and Rosin (ACM TOCT 2022) showed that the number of samples both
necessary and sufficient for one-sided error testing of subsequence-freeness in
the sample-based distribution-free model is $\Theta(k/\epsilon)$. Denoting by
$\Delta(T, w, p)$ the distance of $T$ to $w$-freeness under a distribution $p$,
we are interested in obtaining an estimate $\widehat{\Delta}$, such that
$|\widehat{\Delta} - \Delta(T, w, p)| \leq \delta$ with probability at
least $2/3$, for a given distance parameter $\delta$. Our main result is an
algorithm whose sample complexity is $\tilde{O}(k^2/\delta^2)$. We first
present an algorithm that works when the underlying distribution $p$ is
uniform, and then show how it can be modified to work for any (unknown)
distribution $p$. We also show that a quadratic dependence on $1/\delta$ is
necessary
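To make the containment definition above concrete, here is a minimal Python sketch (illustrative only, not the paper's estimation algorithm): a greedy left-to-right scan decides whether a text contains a word as a subsequence, i.e. whether the indices required by the definition exist. The function name and the example strings are assumptions made for illustration.

def contains_subsequence(T, w):
    # Greedy scan: match the next unmatched symbol of w as early as possible in T.
    j = 0
    for symbol in T:
        if j < len(w) and symbol == w[j]:
            j += 1
    return j == len(w)   # True iff T contains w; otherwise T is w-free

# Example: "abracadabra" contains "acd" as a subsequence, but it has only
# one 'd', so it is "dd"-free.
print(contains_subsequence("abracadabra", "acd"))  # True
print(contains_subsequence("abracadabra", "dd"))   # False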
Verification of Neural Networks' Global Robustness
Neural networks are successful in various applications but are also
susceptible to adversarial attacks. To show the safety of network classifiers,
many verifiers have been introduced to reason about the local robustness of a
given input to a given perturbation. While these verifiers are successful, local
robustness guarantees do not generalize to unseen inputs. Several works analyze
global robustness properties; however, none can provide a precise guarantee about
the cases in which a network classifier does not change its classification. In this
work, we propose a new global robustness property for classifiers, aimed at finding the
minimal globally robust bound, which naturally extends the popular local
robustness property for classifiers. We introduce VHAGaR, an anytime verifier
for computing this bound. VHAGaR relies on three main ideas: encoding the
problem as a mixed-integer program, pruning the search space by identifying
dependencies stemming from the perturbation or the network's computation, and
generalizing adversarial attacks to unknown inputs. We evaluate VHAGaR on
several datasets and classifiers and show that, given a three-hour
timeout, the average gap between the lower and upper bound on the minimal
globally robust bound computed by VHAGaR is 1.9, while the gap of an existing
global robustness verifier is 154.7. Moreover, VHAGaR is 130.6x faster than
this verifier. Our results further indicate that leveraging dependencies and
adversarial attacks makes VHAGaR 78.6x faster
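As a rough illustration of the kind of mixed-integer encoding that such verifiers build on (a sketch of the general idea, not VHAGaR's actual encoding), the following Python snippet uses PuLP and the standard big-M trick to encode a tiny one-hidden-layer ReLU network and minimizes its output score over an L-infinity box around an input; the weights, input, radius, and big-M constant are all made-up assumptions.

import pulp

# Made-up toy network: 2 inputs -> 2 ReLU neurons -> 1 output score.
W1 = [[1.0, -2.0], [0.5, 1.5]]
b1 = [0.0, -0.5]
W2 = [1.0, -1.0]
b2 = 0.2
x0, eps, M = [0.3, 0.7], 0.1, 100.0   # centre input, perturbation radius, big-M

prob = pulp.LpProblem("relu_robustness_sketch", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", x0[i] - eps, x0[i] + eps) for i in range(2)]
z = [pulp.LpVariable(f"z{j}", 0) for j in range(2)]             # ReLU outputs
a = [pulp.LpVariable(f"a{j}", cat="Binary") for j in range(2)]  # ReLU phase indicators

for j in range(2):
    pre = pulp.lpSum(W1[j][i] * x[i] for i in range(2)) + b1[j]
    # Big-M encoding of z[j] = max(pre, 0); exact as long as M bounds |pre|.
    prob += z[j] >= pre
    prob += z[j] <= pre + M * (1 - a[j])
    prob += z[j] <= M * a[j]

score = pulp.lpSum(W2[j] * z[j] for j in range(2)) + b2
prob += score   # objective: minimize the score over the perturbation box
prob.solve(pulp.PULP_CBC_CMD(msg=False))

# If the minimum stays above the decision threshold (here assumed to be 0),
# no perturbation inside the box changes the classification of x0.
print("minimum score over the box:", pulp.value(score))

A global-robustness query of the kind described in the abstract additionally quantifies over the input x0 instead of fixing it, which is what makes the problem substantially harder than the local check sketched here.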
- …