Playing Games in the Baire Space
We solve a generalized version of Church's Synthesis Problem where a play is
given by a sequence of natural numbers rather than a sequence of bits; so a
play is an element of the Baire space rather than of the Cantor space. Two
players Input and Output choose natural numbers in alternation to generate a
play. We present a natural model of automata ("N-memory automata") equipped
with the parity acceptance condition, and we introduce also the corresponding
model of "N-memory transducers". We show that solvability of games specified by
N-memory automata (i.e., existence of a winning strategy for player Output) is
decidable, and that in this case an N-memory transducer can be constructed that
implements a winning strategy for player Output.
Comment: In Proceedings Cassting'16/SynCoP'16, arXiv:1608.0017
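As a concrete illustration of the setting (a toy sketch only; the strategies and names below are invented for this example and are not the paper's N-memory construction), a play in the Baire space can be generated by letting the two players' strategies alternate, each mapping the history so far to a natural number:

```python
# Toy sketch: generating a finite prefix of a play in the Baire space.
# Players Input and Output alternately choose natural numbers; a strategy
# is modeled as a function from the history of moves to the next number.

def play_prefix(input_strategy, output_strategy, rounds):
    """Generate 2*rounds moves of a play; Input moves at even positions."""
    history = []
    for _ in range(rounds):
        history.append(input_strategy(history))   # Input moves first
        history.append(output_strategy(history))  # Output answers
    return history

# Illustrative strategies: Input counts upward; Output echoes Input's
# last move plus one (a transducer-like response to the input so far).
inp = lambda h: len(h) // 2
out = lambda h: h[-1] + 1

print(play_prefix(inp, out, 4))  # [0, 1, 1, 2, 2, 3, 3, 4]
```

In the paper's terms, Output wins if the resulting infinite play satisfies the winning condition; a winning strategy for Output is then implementable by an N-memory transducer.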
On the Lengths of Symmetry Breaking-Preserving Games on Graphs
Given a graph, we consider a game in which two players alternately color its edges, one in red and the other in blue. Consider the maximum number of moves over which one player is able to keep the red and the blue subgraphs isomorphic while the opponent plays optimally to destroy the isomorphism. This value is a lower bound for the duration of any avoidance game on the graph under the assumption that the symmetry-breaking player plays optimally. We prove upper and lower bounds on this value when the graph is a path or a cycle of odd length; the lower bound is based on relations with Ehrenfeucht games from model theory. We also consider complete graphs and prove a corresponding bound.
Comment: 20 pages
Jointly Modeling Embedding and Translation to Bridge Video and Language
Automatically describing video content with natural language is a fundamental
challenge of multimedia. Recurrent Neural Networks (RNNs), which model sequence
dynamics, have attracted increasing attention for visual interpretation. However,
most existing approaches generate each word locally from the previous words and
the visual content, while the relationship between sentence semantics and
visual content is not holistically exploited. As a result, the generated
sentences may be contextually correct, yet the semantics (e.g., subjects, verbs
or objects) are not faithful to the video.
This paper presents a novel unified framework, named Long Short-Term Memory
with visual-semantic Embedding (LSTM-E), which can simultaneously explore the
learning of LSTM and visual-semantic embedding. The former aims to locally
maximize the probability of generating the next word given previous words and
visual content, while the latter is to create a visual-semantic embedding space
for enforcing the relationship between the semantics of the entire sentence and
visual content. Our proposed LSTM-E consists of three components: a 2-D and/or
3-D deep convolutional neural network for learning a powerful video
representation, a deep RNN for generating sentences, and a joint embedding
model for exploring the relationships between visual content and sentence
semantics. Experiments on the YouTube2Text dataset show that our proposed
LSTM-E achieves the best reported performance to date in generating natural
sentences: 45.3% and 31.0% in terms of BLEU@4 and METEOR, respectively. We also
demonstrate that LSTM-E is superior to several state-of-the-art techniques in
predicting Subject-Verb-Object (SVO) triplets.
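The joint objective the abstract describes can be sketched in miniature: a relevance term pulling a video embedding and a sentence embedding together, plus a coherence term scoring next-word generation. This is a toy sketch under assumed names and toy dimensions, not the paper's actual model or parameters:

```python
import math

# Toy sketch of the two objectives LSTM-E combines (names, dimensions,
# and the weight lam are illustrative assumptions).

def relevance_loss(video_emb, sent_emb):
    """Squared distance between a video and a sentence in the joint
    visual-semantic embedding space."""
    return sum((v - s) ** 2 for v, s in zip(video_emb, sent_emb))

def coherence_loss(word_probs, target_ids):
    """Negative log-likelihood of the ground-truth next words, where
    word_probs[t] is the model's distribution over the vocabulary at step t."""
    return -sum(math.log(word_probs[t][w]) for t, w in enumerate(target_ids))

def lstm_e_loss(video_emb, sent_emb, word_probs, target_ids, lam=0.5):
    # Joint objective: a weighted sum of the two terms, minimized together
    # over all model parameters during training.
    return (lam * relevance_loss(video_emb, sent_emb)
            + (1 - lam) * coherence_loss(word_probs, target_ids))

# Toy call: embeddings far apart and an uncertain word prediction
# both contribute to the joint loss.
loss = lstm_e_loss([1.0, 0.0], [0.0, 1.0], [[0.5, 0.5]], [0])
```

In the framework described above, both terms would be backpropagated jointly through the CNN, the RNN, and the embedding projections.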
Deep Fragment Embeddings for Bidirectional Image Sentence Mapping
We introduce a model for bidirectional retrieval of images and sentences
through a multi-modal embedding of visual and natural language data. Unlike
previous models that directly map images or sentences into a common embedding
space, our model works on a finer level and embeds fragments of images
(objects) and fragments of sentences (typed dependency tree relations) into a
common space. In addition to a ranking objective seen in previous work, this
allows us to add a new fragment alignment objective that learns to directly
associate these fragments across modalities. Extensive experimental evaluation
shows that reasoning on both the global level of images and sentences and the
finer level of their respective fragments significantly improves performance on
image-sentence retrieval tasks. Additionally, our model provides interpretable
predictions, since the inferred inter-modal fragment alignment is explicit.
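The fragment alignment idea can be sketched as follows: image fragments (object embeddings) and sentence fragments (dependency relation embeddings) share one space, and each sentence fragment is aligned to its best-matching image fragment. The scoring rule and names below are illustrative assumptions, not the paper's exact objective:

```python
# Toy sketch of a fragment alignment score: every sentence fragment picks
# its best-matching image fragment, and the image-sentence pair is scored
# by summing those best inner products.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def alignment_score(img_frags, sent_frags):
    """img_frags: list of object embeddings; sent_frags: list of
    relation embeddings, all vectors in the common space."""
    return sum(max(dot(s, i) for i in img_frags) for s in sent_frags)

# Toy example: two object embeddings, two relation embeddings.
img = [[1.0, 0.0], [0.0, 1.0]]
sent = [[1.0, 0.0], [1.0, 1.0]]
score = alignment_score(img, sent)  # 1.0 + 1.0 = 2.0
```

In the paper's setup, such a fragment-level term is trained alongside a global ranking objective over image-sentence pairs, which is what makes the inferred alignments explicit and interpretable.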
Acquired non-specific stuttering in Parkinson’s disease: a case report
Parkinson’s disease (PD) is a progressive neurodegenerative disease predominantly characterized by
tremor, bradykinesia, and rigidity. In addition to motor and non-motor manifestations of Parkinson’s
disease, there are a number of symptoms, including speech disorders and other cognitive impairments.
The most common speech symptoms are bradylalia, dysarthria, hypophonia and impaired prosody.
Cognitive changes occurring in the prodromal phase of PD include impairment of executive functions
and working memory, followed by impairment of attention and verbal fluency, before the
motor characteristics of PD become visible. The aim of this study is to present the case of a 74-year-old patient with Parkinson’s disease who has speech and language difficulties and atypical speech
disfluency. Diagnostic work-up was performed using a clinical battery of tests for speech-language
assessment and a neuropsychological assessment. The results of the speech-language assessment indicate
significantly reduced intelligibility due to non-specific speech disfluency and inaccurate articulation,
difficulty in organizing spontaneous expression and understanding grammatical structures, impaired
phonemic verbal fluency, and difficulties in receptive vocabulary. Neuropsychological assessment
indicated diffuse deterioration of the examined cognitive functions, larger than expected when
taking into consideration the age and the probably good premorbid abilities of this patient.