The syntax and syntax-semantics interface of adverbial modification
This project investigated the morphological, syntactic, and functional properties of adverbs and adverbials in the framework of generative grammar, mainly on the basis of Hungarian data, aiming at an account from which the syntactic behavior, scope, and stress of the various adverbial types all follow. We have argued for a PP analysis of the various adverbial types and, concerning their placement, against the specifier-position analysis (Cinque 1999) and for an adjunction analysis (Ernst 2002), showing that deriving adverbial word order requires both left- and right-adjunction. The surface position of an adverbial is determined by the interplay of syntactic, semantic, and prosodic factors. The semantically motivated constraints discussed include a type restriction limiting which adverbials can be semantically incorporated into the verbal predicate, a scalar constraint forcing the obligatory focusing of adverbs representing negative values of bidirectional scales, and cooccurrence restrictions between verbs and adverbials with incompatible complex event structures. The order and interpretation of adverbials in the postverbal domain is shown to be affected by such phonologically motivated constraints as the Law of Growing Constituents, and by intonation-phrase restructuring, whose triggering conditions and semantic consequences we have also described. One type of locative verbal particle is analyzed as a particular phonological realization of a movement chain, namely the incorporation of a phonologically reduced copy; the shape of the resulting light-headed chain is determined by morpho-phonological requirements. The types of adverbs and adverbials analyzed include locatives, temporals, comitatives, epistemic adverbs, adverbs of degree, manner, counting, and frequency, quantificational adverbs, and adverbial participles. We have published about 60 studies on these topics; our book Adverbs and Adverbial Adjuncts at the Interfaces (489 pp.) is published in the series Interface Explorations of Mouton de Gruyter, Berlin.
A Markov growth process for Macdonald's distribution on reduced words
We give an algorithmic-bijective proof of Macdonald's reduced word identity
in the theory of Schubert polynomials, in the special case where the
permutation is dominant. Our bijection uses a novel application of David
Little's generalized bumping algorithm. We also describe a Markov growth
process for an associated probability distribution on reduced words. Our growth
process can be implemented efficiently on a computer and allows for fast
sampling of reduced words. We also discuss various partial generalizations and
links to Little's work on the RSK algorithm. Comment: 16 pages, 5 figures
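To make the central objects concrete, here is a small brute-force sketch (not the paper's Markov growth process or Little's bumping algorithm) that enumerates reduced words via the standard right-descent recursion and checks Macdonald's identity for the longest element of S_3, where the Schubert polynomial specializes to 1 at (1, ..., 1):

```python
from math import prod

def reduced_words(w):
    """Yield all reduced words (a_1, ..., a_l) with s_{a_1} ... s_{a_l} = w,
    where w is a permutation in one-line notation as a tuple.
    Recursion: a_l must be a right descent of w; strip it and recurse."""
    if all(w[i] == i + 1 for i in range(len(w))):
        yield ()
        return
    for i in range(len(w) - 1):
        if w[i] > w[i + 1]:                     # right descent at s_{i+1}
            shorter = list(w)
            shorter[i], shorter[i + 1] = shorter[i + 1], shorter[i]
            for word in reduced_words(tuple(shorter)):
                yield word + (i + 1,)

w0 = (3, 2, 1)                                  # longest element of S_3, length 3
words = sorted(reduced_words(w0))               # [(1, 2, 1), (2, 1, 2)]
# Macdonald's identity: the sum of the products of the letters over all
# reduced words of w equals l! * S_w(1, ..., 1); for w0 this is 3! * 1 = 6.
total = sum(prod(word) for word in words)       # 1*2*1 + 2*1*2 = 6
```

The enumeration is exponential in general, which is exactly why an efficient sampling scheme such as the paper's growth process is of interest.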
A delayed choice quantum eraser explained by the transactional interpretation of quantum mechanics
This paper explains the delayed choice quantum eraser of Kim et al. in terms
of the transactional interpretation of quantum mechanics by John Cramer. It is
kept deliberately mathematically simple to help explain the transactional
technique. The emphasis is on a clear understanding of how the instantaneous
"collapse" of the wave function due to a measurement at a specific time and
place may be reinterpreted as a gradual collapse over the entire path of the
photon and over the entire transit time from slit to detector. This is made
possible by the use of a retarded offer wave, which is thought to travel from
the slits (or rather the small region within the parametric crystal where
down-conversion takes place) to the detector and an advanced counter wave
traveling backward in time from the detector to the slits. The point here is to
make clear how simple the Cramer transactional picture is and how much more
intuitive the collapse of the wave function becomes if viewed in this way. Also
any confusion about possible retro-causal signaling is put to rest. A delayed
choice quantum eraser does not require any sort of backward in time
communication. This paper makes the point that it is preferable to use the
Transactional Interpretation (TI) over the usual Copenhagen Interpretation (CI)
for a more intuitive understanding of the quantum eraser delayed choice
experiment. Both methods give exactly the same end results and can be used
interchangeably. Comment: 24 pages, 4 figures, fifth draft
Validating Network Value of Influencers by means of Explanations
Recently, there has been significant interest in social influence analysis.
One of the central problems in this area is the problem of identifying
influencers, such that by convincing these users to perform a certain action
(like buying a new product), a large number of other users get influenced to
follow the action. The client of such an application is a marketer who would
target these influencers for marketing a given new product, say by providing
free samples or discounts. It is natural that before committing resources for
targeting an influencer the marketer would be interested in validating the
influence (or network value) of the returned influencers. This requires digging
deeper into analytical questions such as: who are their followers, on which
actions (or products) are they influential, etc. However, the current
approaches to identifying influencers largely work as a black box in this
respect. The goal of this paper is to open up the black box, address these
questions and provide informative and crisp explanations for validating the
network value of influencers.
We formulate the problem of providing explanations (called PROXI) as a
discrete optimization problem of feature selection. We show that PROXI is not
only NP-hard to solve exactly but also NP-hard to approximate within any
reasonable factor. Nevertheless, we show interesting properties of the
objective function and develop an intuitive greedy heuristic. We perform a
detailed experimental analysis on two real-world datasets, Twitter and
Flixster, and show that our approach is useful in generating concise and
insightful explanations of the influence distribution of users, and that our
greedy algorithm is effective and efficient with respect to several baselines.
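The abstract does not spell out PROXI's objective function, so as a purely illustrative sketch of the general pattern behind such greedy heuristics (all names and the toy coverage objective below are hypothetical, not the paper's method):

```python
def greedy_select(features, score, k):
    """Generic marginal-gain greedy for set-function feature selection:
    repeatedly add the candidate whose addition most improves `score`."""
    chosen = []
    for _ in range(k):
        best, best_gain = None, 0.0
        for f in features:
            if f in chosen:
                continue
            gain = score(chosen + [f]) - score(chosen)
            if best is None or gain > best_gain:
                best, best_gain = f, gain
        if best is None:
            break
        chosen.append(best)
    return chosen

# Toy objective: each candidate explanation "covers" a set of followers,
# and a selection's score is the number of distinct followers covered.
covered = {"f1": {1, 2, 3}, "f2": {3, 4}, "f3": {1}}
score = lambda sel: len(set().union(*[covered[f] for f in sel]))
picked = greedy_select(list(covered), score, 2)   # ["f1", "f2"]
```

Greedy selection of this shape is a standard fallback when, as here, the exact problem is hard even to approximate; the paper's contribution lies in the properties of its specific objective, which this sketch does not reproduce.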
Towards Understanding the Origin of Genetic Languages
Molecular biology is a nanotechnology that works--it has worked for billions
of years and in an amazing variety of circumstances. At its core is a system
for acquiring, processing and communicating information that is universal, from
viruses and bacteria to human beings. Advances in genetics and experience in
designing computers have taken us to a stage where we can understand the
optimisation principles at the root of this system, from the availability of
basic building blocks to the execution of tasks. The languages of DNA and
proteins are argued to be the optimal solutions to the information processing
tasks they carry out. The analysis also suggests simpler predecessors to these
languages, and provides fascinating clues about their origin. Obviously, a
comprehensive unraveling of the puzzle of life would have a lot to say about
what we may design or convert ourselves into. Comment: (v1) 33 pages, contributed chapter to "Quantum Aspects of Life", edited by D. Abbott, P. Davies and A. Pati; (v2) published version with some editing
TRX: A Formally Verified Parser Interpreter
Parsing is an important problem in computer science and yet surprisingly
little attention has been devoted to its formal verification. In this paper, we
present TRX: a parser interpreter formally developed in the proof assistant
Coq, capable of producing formally correct parsers. We are using parsing
expression grammars (PEGs), a formalism essentially representing recursive
descent parsing, which we consider an attractive alternative to context-free
grammars (CFGs). From this formalization we can extract a parser for an
arbitrary PEG grammar with the warranty of total correctness, i.e., the
resulting parser is terminating and correct with respect to its grammar and the
semantics of PEGs; both properties are formally proven in Coq. Comment: 26 pages, LMC
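The recursive-descent flavor of PEGs is easy to see with ordinary parser combinators. A minimal unverified sketch (nothing like TRX's Coq development; all names are hypothetical), where a parser maps a position to a new position or None on failure:

```python
# Minimal PEG-style combinators: each parser takes (text, pos) and
# returns the new position on success, or None on failure.

def lit(s):
    """Match the literal string s."""
    def p(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return p

def seq(*parsers):
    """Match each parser in order; fail if any fails."""
    def p(text, pos):
        for q in parsers:
            pos = q(text, pos)
            if pos is None:
                return None
        return pos
    return p

def choice(*parsers):
    """PEG ordered choice: commit to the first alternative that succeeds."""
    def p(text, pos):
        for q in parsers:
            r = q(text, pos)
            if r is not None:
                return r
        return None
    return p

def star(q):
    """Zero-or-more, greedy; a PEG never backtracks into a star."""
    def p(text, pos):
        while True:
            r = q(text, pos)
            if r is None or r == pos:
                return pos
            pos = r
    return p

# Grammar: A <- ("ab" / "a")* "c"
g = seq(star(choice(lit("ab"), lit("a"))), lit("c"))
```

The committed ordered choice and greedy repetition are exactly what distinguishes PEGs from CFGs, and also what makes termination non-obvious; proving it for every grammar is part of what TRX establishes formally.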
Detect the unexpected: a science for surveillance
Purpose – The purpose of this paper is to outline a strategy for research development focused on addressing the neglected role of visual perception in real-life tasks such as policing, surveillance, and command-and-control settings. Approach – The scale of the surveillance task in the modern control room is expanding as technology increases input capacity at an accelerating rate. The authors review recent literature highlighting the difficulties that apply to modern surveillance, giving examples of how poor detection of the unexpected can be and how surprising this deficit is. Perceptual phenomena such as change blindness are linked to the perceptual processes undertaken by law-enforcement personnel. Findings – A scientific programme is outlined for how detection deficits can best be addressed in the context of a multidisciplinary collaborative agenda between researchers and practitioners. The development of a cognitive research field specifically examining the occurrence of perceptual "failures" provides an opportunity for policing agencies to relate laboratory findings in psychology to their own fields of day-to-day enquiry. Originality/value – The paper shows, with examples, where interdisciplinary research may best be focused on evaluating practical solutions and on generating usable guidelines on procedure and practice. It also argues that these processes should be investigated in real and simulated context-specific studies to confirm the validity of the findings in these new applied scenarios.