fMRI Evidence for Modality-Specific Processing of Conceptual Knowledge on Six Modalities
Traditional theories assume that amodal representations, such as feature lists and semantic
networks, represent conceptual knowledge about the world. According to this view, the
sensory, motor, and introspective states that arise during perception and action are irrelevant
to representing knowledge. Instead, the conceptual system lies outside modality-specific
systems and operates according to different principles. Increasingly, however, researchers
report that modality-specific systems become active during purely conceptual tasks,
suggesting that these systems play central roles in representing knowledge (for a review, see
Martin, 2001, Handbook of Functional Neuroimaging of Cognition). In particular,
researchers report that the visual system becomes active while processing visual properties,
and that the motor system becomes active while processing action properties. The present
study corroborates and extends these findings. During fMRI, subjects verified whether or not
properties could potentially be true of concepts (e.g., BLENDER-loud). Subjects received
only linguistic stimuli, and nothing was said about using imagery. Highly related false
properties were used on false trials to block word association strategies (e.g., BUFFALO-winged).
To assess the full extent of the modality-specific hypothesis, properties were
verified on each of six modalities. Examples include GEMSTONE-glittering (vision),
BLENDER-loud (audition), FAUCET-turned (motor), MARBLE-cool (touch),
CUCUMBER-bland (taste), and SOAP-perfumed (smell). Neural activity during property
verification was compared to a lexical decision baseline. For all six sets of the
modality-specific properties, significant activation was observed in the respective neural system.
Finding modality-specific processing across six modalities contributes to the growing
conclusion that knowledge is grounded in modality-specific systems of the brain.
An Abstract Approach to Stratification in Linear Logic
We study the notion of stratification, as used in subsystems of linear logic
with low complexity bounds on the cut-elimination procedure (the so-called
light logics), from an abstract point of view, introducing a logical system in
which stratification is handled by a separate modality. This modality, which is
a generalization of the paragraph modality of Girard's light linear logic,
arises from a general categorical construction applicable to all models of
linear logic. We thus learn that stratification may be formulated independently
of exponential modalities; when it is forced to be connected to exponential
modalities, it yields interesting complexity properties. In particular, from
our analysis stem three alternative reformulations of Baillot and Mazza's
linear logic by levels: one geometric, one interactive, and one semantic.
JAVANESE LANGUAGE MODALITY IN BLENCONG ARTICLES OF SUARA MERDEKA NEWSPAPER
Many languages have modalities which are usually expressed through certain modal
verbs. By analyzing the modal verbs, the varying degrees of commitment to or belief in a
proposition can be explained. Javanese also has such a modal system. This paper
examines the modality in Javanese language used in Blencong articles of Suara Merdeka
newspaper. The articles contain opinions towards many issues from the view of Javanese people
and Javanese wisdom. This paper aims to identify the modalities realized in modal verbs in
Blencong articles and to describe the functions of such modal verbs. The data were collected
from Blencong articles downloaded from the online version of Suara Merdeka newspaper. They
were then analyzed using the translational (identity) method. The result shows that the writers of
the articles use epistemic modality to show their certainty and uncertainty about any issue,
deontic modality to show their attitude towards obligation, dynamic modality to express one's
ability, and intentional modality to express the writers' wish, willingness, and hope. The modal
verbs used are mesthi, pancen, mesthine, bakal, arep, kudu, kudune, dititahke, bisa, bisaa,
mbokmenawa, and muga-muga.
Design and application of a multi-modal process tomography system
This paper presents a design and application study of an integrated multi-modal system designed to support a range of common modalities: electrical resistance, electrical capacitance and ultrasonic tomography. Such a system is designed for use with complex processes that exhibit behaviour changes over time and space, and thus demand equally diverse sensing modalities. A multi-modal process tomography system able to exploit multiple sensor modes must permit the integration of their data, probably centred upon a composite process model. The paper presents an overview of this approach, followed by an overview of the systems engineering and integrated design constraints. These include a range of hardware-oriented challenges: the complexity and specificity of the front-end electronics for each modality; the need for front-end data pre-processing and packing; and the need to integrate the data to facilitate fusion and interpretation. A range of software aspects are also reviewed: the need to support differing front-end sensors for each modality in a generic fashion; the need to communicate with front-end data pre-processing and packing systems; and the need to integrate the data so that fusion and interpretation can succeed. The review of the system concepts is illustrated with an application to the study of a complex multi-component process.
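The "generic front-end per modality plus a common fusion stage" architecture described above can be sketched as follows. This is not the paper's implementation; the `Modality` interface, the ERT/ECT names, and the pixel-wise averaging fusion rule are illustrative assumptions standing in for the composite-process-model-driven fusion the abstract mentions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Modality:
    """One sensing mode behind a uniform acquire/pre-process interface."""
    name: str
    acquire: Callable[[], List[float]]                # raw frame from front-end electronics
    preprocess: Callable[[List[float]], List[float]]  # front-end packing/normalisation

def fuse(frames: Dict[str, List[float]]) -> List[float]:
    """Naive element-wise average across modality frames of equal length
    (a placeholder for model-driven data fusion)."""
    n = len(frames)
    length = len(next(iter(frames.values())))
    return [sum(f[i] for f in frames.values()) / n for i in range(length)]

# Two hypothetical modalities producing already-registered frames.
ert = Modality("ERT", lambda: [1.0, 2.0], lambda f: f)
ect = Modality("ECT", lambda: [3.0, 4.0], lambda f: f)
frames = {m.name: m.preprocess(m.acquire()) for m in (ert, ect)}
fused = fuse(frames)  # element-wise average of the two frames
```

The point of the sketch is the separation of concerns: each modality hides its own electronics and pre-processing behind the same interface, so the fusion stage never needs modality-specific code.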
ModDrop: adaptive multi-modal gesture recognition
We present a method for gesture detection and localisation based on
multi-scale and multi-modal deep learning. Each visual modality captures
spatial information at a particular spatial scale (such as motion of the upper
body or a hand), and the whole system operates at three temporal scales. Key to
our technique is a training strategy which exploits: i) careful initialization
of individual modalities; and ii) gradual fusion involving random dropping of
separate channels (dubbed ModDrop) for learning cross-modality correlations
while preserving uniqueness of each modality-specific representation. We
present experiments on the ChaLearn 2014 Looking at People Challenge gesture
recognition track, in which we placed first out of 17 teams. Fusing multiple
modalities at several spatial and temporal scales leads to a significant
increase in recognition rates, allowing the model to compensate for errors of
the individual classifiers as well as noise in the separate channels.
Furthermore, the proposed ModDrop training technique ensures robustness of the
classifier to missing signals in one or several channels to produce meaningful
predictions from any number of available modalities. In addition, we
demonstrate the applicability of the proposed fusion scheme to modalities of
arbitrary nature by experiments on the same dataset augmented with audio.
Comment: 14 pages, 7 figures
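The core ModDrop idea, randomly dropping whole modality channels during training so the fused classifier stays meaningful with any subset of inputs, can be sketched as below. This is a minimal illustration, not the authors' implementation: the drop probability, the zeroing-out convention for a missing channel, and the concatenation fusion are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mod_drop(modalities, p_drop=0.3, training=True, generator=rng):
    """Randomly zero out entire modality channels during training,
    forcing the fused network to learn cross-modality correlations
    without depending on any single channel being present."""
    if not training:
        return list(modalities)  # at test time, use whatever is available
    out = []
    for x in modalities:
        if generator.random() < p_drop:
            out.append(np.zeros_like(x))  # simulate a missing channel
        else:
            out.append(x)
    return out

# Fuse by concatenating the (possibly dropped) per-modality features.
video, depth, audio = np.ones(8), np.ones(8), np.ones(8)
fused = np.concatenate(mod_drop([video, depth, audio]))
```

Because each channel is dropped independently per training sample, the classifier sees every subset of modalities during training, which is what lets it produce predictions from any number of available channels at test time.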