Three phenomena discriminating between “raising” and “matching” relative clauses
The article discusses three phenomena (relative clause extraposition, stacking, and weak island sensitivity) that are incompatible with bona fide “raising” relatives, such as the amount relatives studied by Carlson (1977), but not with ordinary restrictive relatives. They thus appear to provide clear evidence that a “matching” derivation of relative clauses is needed in addition to the “raising” one.
Resource Splitting and Reintegration with Supplementals
In this paper we survey the various ways of expressing modality in Urdu/Hindi and show that Urdu/Hindi modals provide interesting insights into current discussions of the semantics of modality. There are very few dedicated modals in Urdu/Hindi: most modal meanings are instead arrived at constructionally, via the combination of a certain kind of verb with a certain kind of embedded verb form and a certain kind of case. Among the range of constructions yielded by such combinations, there is evidence for a two-place modal operator in addition to the one-place operator usually assumed in the literature. We also discuss instances of the Actuality Entailment, which has been shown to be sensitive to aspect, but which in Urdu/Hindi appears to be sensitive to aspect only some of the time, depending on the type of modal verb. Indeed, following recent proposals by Ramchand (2011), we end up with a purely lexical account of modality and the Actuality Entailment, rather than the structural one put forward by Hacquard (2010).
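For concreteness, the contrast between a one-place and a two-place modal operator can be sketched in a standard possible-worlds style (a generic textbook rendering under a simple accessibility relation, not necessarily the paper's own formulation):

```latex
% One-place modal operator: quantifies over accessible worlds
% with respect to a single prejacent proposition p.
\[
\llbracket \textsc{mod}_1 \rrbracket^{w} \;=\; \lambda p.\;
  \forall w' \in \mathrm{Acc}(w).\; p(w')
\]
% Two-place modal operator: relates two propositions directly,
% restricting the accessible worlds by p before evaluating q.
\[
\llbracket \textsc{mod}_2 \rrbracket^{w} \;=\; \lambda p.\,\lambda q.\;
  \forall w' \in \mathrm{Acc}(w) \cap p.\; q(w')
\]
```

On this sketch, the two-place operator takes its restrictor as a separate argument rather than recovering it from context, which is one way to cash out the constructional evidence the abstract describes.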
Typologies of agreement: some problems from Kayardild
In this paper I describe a number of agreement-type phenomena in the Australian language Kayardild and assess them against existing definitions, stating both the boundaries of what is to be considered agreement and the characteristics of prototypical agreement phenomena. Though conforming, prima facie, to definitions of agreement that stress semantically based covariance in inflections on different words, the Kayardild phenomena considered here pose a number of challenges to accepted views of agreement: the rich possibilities for stacking case-like agreement inflections emanating from different syntactic levels, the fact that inflections resulting from agreement may change the word class of their host, and the semantic categories involved, in particular tense/aspect/mood, which have been claimed not to be agreement categories on nominals. Two types of inflection in particular - 'modal case' and 'associating case' - lie somewhere between prototypical agreement and prototypical government. Like agreement, but unlike government, they are triggered by inflectional rather than lexical features of the head, and appear on more than one constituent; like government, but unlike agreement, the semantic categories on head and dependent are not isomorphic. Other types of inflection, though unusual in the categories involved, the possibility of recursion, and their effects on the host's word class, are close to prototypical in terms of how they fare in Corbett's proposed tests for canonical agreement.
Building Program Vector Representations for Deep Learning
Deep learning has made significant breakthroughs in various fields of artificial intelligence. Its advantages include the ability to capture highly complicated features with little manual feature engineering. However, it has so far been virtually impossible to apply deep learning to program analysis, since deep architectures cannot be trained effectively on programs with pure backpropagation. In this paper, we propose the "coding criterion" for building program vector representations, which are a prerequisite for deep learning on programs. Our representation learning approach makes deep learning feasible in this new field. We evaluate the learned vector representations both qualitatively and quantitatively, and conclude, based on the experiments, that the coding criterion is successful in building program representations. To evaluate whether deep learning is beneficial for program analysis, we feed the representations to deep neural networks and achieve higher accuracy on a program classification task than "shallow" methods such as logistic regression and support vector machines. This result confirms the feasibility of deep learning for program analysis and gives preliminary evidence of its promise in this new field. We believe deep learning will become an important technique for program analysis in the near future.

Comment: This paper was submitted to ICSE'1
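To illustrate the general idea of a program vector representation (a fixed-length vector derived from a program's syntactic structure), here is a minimal sketch in Python. It averages per-node-type embedding vectors over a program's AST; the embedding table is random here, purely as a placeholder, whereas the paper learns such vectors via its "coding criterion", which this sketch does not reproduce.

```python
import ast
import random

random.seed(0)
DIM = 8  # dimensionality of the representation (arbitrary choice)

def node_embedding(table, node_type):
    """Look up (or lazily create) the vector for an AST node type.
    In a real system these vectors would be learned, not random."""
    if node_type not in table:
        table[node_type] = [random.uniform(-1, 1) for _ in range(DIM)]
    return table[node_type]

def program_vector(source, table):
    """Average the embeddings of all AST node types in the program,
    yielding a fixed-length vector regardless of program size."""
    nodes = list(ast.walk(ast.parse(source)))
    vec = [0.0] * DIM
    for node in nodes:
        emb = node_embedding(table, type(node).__name__)
        vec = [v + e for v, e in zip(vec, emb)]
    return [v / len(nodes) for v in vec]

table = {}
v = program_vector("def f(x):\n    return x + 1", table)
print(len(v))  # fixed length, so the vector can feed a classifier
```

Vectors like these could then be fed to any downstream classifier, shallow or deep, which is the kind of comparison the abstract describes.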
CPL: A Core Language for Cloud Computing -- Technical Report
Running distributed applications in the cloud involves deployment, that is, the distribution and configuration of application services and middleware infrastructure. The considerable complexity of these tasks has led to the emergence of declarative, JSON-based domain-specific deployment languages for developing deployment programs. However, existing deployment programs unsafely compose artifacts written in different languages, leading to bugs that are hard to detect before run time. Furthermore, deployment languages do not provide extension points for custom implementations of existing cloud services, such as application-specific load-balancing policies.

To address these shortcomings, we propose CPL (Cloud Platform Language), a statically typed core language for programming both distributed applications and their deployment on a cloud platform. In CPL, application services and deployment programs interact through statically typed, extensible interfaces, and an application can trigger further deployment at run time. We provide a formal semantics of CPL and demonstrate that it enables type-safe, composable, and extensible libraries of service combinators, such as load balancing and fault tolerance.

Comment: Technical report accompanying the MODULARITY '16 submission
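To make the notion of a service combinator concrete, here is a loose illustration in Python (not CPL itself; the `Service` type and `round_robin` combinator are invented for this sketch and carry none of CPL's static guarantees). A combinator wraps several service instances behind the same service interface, so composed services can be used wherever a single service is expected.

```python
from typing import Callable, List

# A service maps a request to a response; a combinator maps
# services to a service, preserving the interface.
Service = Callable[[str], str]

def round_robin(services: List[Service]) -> Service:
    """Combinator: expose several service instances as one service,
    dispatching requests to each instance in turn."""
    state = {"i": 0}
    def balanced(request: str) -> str:
        svc = services[state["i"] % len(services)]
        state["i"] += 1
        return svc(request)
    return balanced

a: Service = lambda req: "A:" + req
b: Service = lambda req: "B:" + req
lb = round_robin([a, b])
print(lb("x"), lb("y"), lb("z"))  # A:x B:y A:z
```

Because `round_robin` returns a value of the same `Service` type it consumes, combinators compose: a fault-tolerance wrapper could take the load-balanced service as its input, which is the kind of library the abstract has in mind.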
Movement and interpretation of quantifiers in internally-headed relative clauses
This paper addresses the semantic typology of internally-headed relative clauses using a case study of two West African languages, Atchan (Kwa) and Bùlì (Gur). Both languages exhibit syntactically similar relatives, involving overt movement of the head. However, quantifiers on the head are interpreted differently in the two languages. In Atchan, quantifiers on the relative-clause head take the entire relative clause as their restriction; in Bùlì, quantifiers on the head take only the head noun as their restriction. I propose that the former is interpreted via NP reconstruction and Trace Conversion, the latter via DP reconstruction. The empirical difference between these two languages motivates a revision to the typology developed by Grosu (2012), which tightly links head movement and the Atchan-like quantifier interpretation pattern. This work further supports a modular view in which languages can adopt different strategies to interpret movement-involving structures.