A systematic review of protocol studies on conceptual design cognition: design as search and exploration
This paper reports findings from the first systematic review of protocol studies focusing specifically on conceptual design cognition, aiming to answer the following research question: What is our current understanding of the cognitive processes involved in conceptual design tasks carried out by individual designers? We reviewed 47 studies on architectural design, engineering design and product design engineering. This paper reports 24 cognitive processes investigated in a subset of 33 studies aligning with two viewpoints on the nature of designing: (V1) design as search (10 processes, 41.7%); and (V2) design as exploration (14 processes, 58.3%). Studies on search focused on solution search and problem structuring, involving: long-term memory retrieval; working memory; operators and reasoning processes. Studies on exploration investigated: co-evolutionary design; visual reasoning; cognitive actions; and unexpected discovery and situated requirements invention. Overall, considerable conceptual and terminological differences were observed among the studies. Nonetheless, a common focus on memory, semantic, associative, visual perceptual and mental imagery processes was observed to an extent. We suggest three challenges for future research to advance the field: (i) developing general models/theories; (ii) testing protocol study findings using objective methods conducive to larger samples; and (iii) developing a shared ontology of cognitive processes in design.
AutoDIAL: Automatic DomaIn Alignment Layers
Classifiers trained on given databases perform poorly when tested on data acquired in different settings. Domain adaptation explains this through a shift between the distributions of the source and target domains. Attempts to align them have traditionally reduced the domain shift by introducing into the objective function appropriate loss terms measuring the discrepancies between source and target distributions. Here we take a different route, proposing to align the learned representations by embedding specific Domain Alignment Layers in any given network, designed to match the source and target feature distributions to a reference one. Unlike previous works, which define a priori in which layers adaptation should be performed, our method automatically learns the degree of feature alignment required at different levels of the deep network. Thorough experiments on different public benchmarks, in the unsupervised setting, confirm the power of our approach.
Comment: arXiv admin note: substantial text overlap with arXiv:1702.06332; added supplementary material.
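As a rough illustration of the idea, a domain alignment layer can be thought of as batch normalization with partially shared statistics across domains. The sketch below is a minimal numpy version of that intuition, assuming a batch-norm-style layer with a mixing weight `alpha` in [0.5, 1] (all names are illustrative; this is not the authors' implementation, in which `alpha` is learned per layer during training):

```python
import numpy as np

def dial_forward(x_source, x_target, alpha, eps=1e-5):
    """Normalize each domain's features with statistics partially shared
    across domains, controlled by alpha in [0.5, 1]:
    alpha = 1.0 -> fully domain-specific statistics,
    alpha = 0.5 -> fully shared statistics."""
    mu_s, var_s = x_source.mean(axis=0), x_source.var(axis=0)
    mu_t, var_t = x_target.mean(axis=0), x_target.var(axis=0)
    # Cross-domain mixed statistics, one pair per domain.
    mu_sa = alpha * mu_s + (1 - alpha) * mu_t
    mu_ta = alpha * mu_t + (1 - alpha) * mu_s
    var_sa = alpha * var_s + (1 - alpha) * var_t
    var_ta = alpha * var_t + (1 - alpha) * var_s
    xs = (x_source - mu_sa) / np.sqrt(var_sa + eps)
    xt = (x_target - mu_ta) / np.sqrt(var_ta + eps)
    return xs, xt
```

At `alpha = 1` each domain is normalized independently; lowering `alpha` toward 0.5 forces both domains toward a single shared reference distribution, which is the alignment knob the paper proposes to learn automatically.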
Towards a Unified Knowledge-Based Approach to Modality Choice
This paper advances a unified knowledge-based approach to the process of choosing the most appropriate modality or combination of modalities in multimodal output generation. We propose a Modality Ontology (MO) that models the knowledge needed to support the two most fundamental processes determining modality choice – modality allocation (choosing the modality or set of modalities that can best support a particular type of information) and modality combination (selecting an optimal final combination of modalities). In the proposed ontology we model the main levels which collectively determine the characteristics of each modality and the specific relationships between different modalities that are important for multimodal meaning-making. This ontology aims to support the automatic selection of modalities and combinations of modalities that are suitable to convey the meaning of the intended message.
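The two processes the abstract distinguishes can be illustrated with a toy knowledge base. The sketch below is a hypothetical miniature stand-in for the proposed Modality Ontology: every modality name, information type, support score, and compatibility pair is invented for illustration, not taken from the MO itself.

```python
# Hypothetical support scores: how well each modality conveys an info type.
MODALITY_SUPPORT = {
    "text":   {"abstract_concept": 0.9, "spatial_layout": 0.3, "alert": 0.5},
    "image":  {"abstract_concept": 0.4, "spatial_layout": 0.9, "alert": 0.2},
    "speech": {"abstract_concept": 0.8, "spatial_layout": 0.2, "alert": 0.7},
    "sound":  {"abstract_concept": 0.1, "spatial_layout": 0.1, "alert": 0.9},
}
# Hypothetical inter-modality relations: pairs that combine well.
COMPATIBLE = {frozenset({"text", "image"}), frozenset({"speech", "sound"}),
              frozenset({"image", "speech"})}

def allocate(info_type):
    """Modality allocation: the modality best supporting one info type."""
    return max(MODALITY_SUPPORT,
               key=lambda m: MODALITY_SUPPORT[m].get(info_type, 0.0))

def combine(info_types):
    """Modality combination: allocate per info type, then keep the set
    only if every pair of chosen modalities is compatible."""
    chosen = {allocate(t) for t in info_types}
    pairs = [frozenset({a, b}) for a in chosen for b in chosen if a != b]
    if all(p in COMPATIBLE for p in pairs):
        return chosen
    # Fall back to the single best modality for the first info type.
    return {allocate(info_types[0])}
```

The point of the split is visible even at this scale: allocation is a per-information-type lookup over modality characteristics, while combination is a constraint check over relations between modalities — the two kinds of knowledge the ontology models at separate levels.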
Back-translation for discovering distant protein homologies
Frameshift mutations in protein-coding DNA sequences produce a drastic change in the resulting protein sequence, which prevents classic protein alignment methods from revealing the proteins' common origin. Moreover, when a large number of substitutions are additionally involved in the divergence, homology detection becomes difficult even at the DNA level. To cope with this situation, we propose a novel method to infer distant homology relations between two proteins that accounts for frameshift and point mutations that may have affected the coding sequences. We design a dynamic programming alignment algorithm over memory-efficient graph representations of the complete set of putative DNA sequences of each protein, with the goal of determining the two putative DNA sequences which have the best-scoring alignment under a powerful scoring system designed to reflect the most probable evolutionary process. This allows us to uncover evolutionary information that is not captured by traditional alignment methods, as confirmed by biologically significant examples.
Comment: The 9th International Workshop on Algorithms in Bioinformatics (WABI), Philadelphia, United States (2009).
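The core back-translation idea can be sketched very compactly: expand each protein into the nucleotides its codons could have contained, then align at the DNA level. The sketch below uses a deliberately partial codon table (only the demo residues) and plain Needleman-Wunsch over degenerate positions; it omits the frameshift handling and the memory-efficient graph representation that are the actual contributions of the paper.

```python
# Partial reverse codon table, covering only the residues used below.
CODONS = {
    "M": ["ATG"], "W": ["TGG"],
    "F": ["TTT", "TTC"], "K": ["AAA", "AAG"],
    "D": ["GAT", "GAC"], "E": ["GAA", "GAG"],
}

def back_translate(protein):
    """Protein -> degenerate DNA: one nucleotide set per codon position."""
    out = []
    for aa in protein:
        for pos in range(3):
            out.append({codon[pos] for codon in CODONS[aa]})
    return out

def align_score(p1, p2, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch over degenerate sequences: two positions match
    if their nucleotide sets intersect."""
    a, b = back_translate(p1), back_translate(p2)
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] & b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,
                           dp[i - 1][j] + gap,
                           dp[i][j - 1] + gap)
    return dp[n][m]
```

Because gaps here are scored per nucleotide rather than per codon, a single-nucleotide gap implicitly models a frameshift-like event at the DNA level — the phenomenon that makes these homologies invisible to protein-level aligners in the first place.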
CHREST+: A simulation of how humans learn to solve problems using diagrams.
This paper describes the underlying principles of a computer model, CHREST+, which learns to solve problems using diagrammatic representations. Although earlier work has determined that experts store domain-specific information within schemata, no substantive model has been proposed for learning such representations. We describe the different strategies used by subjects in constructing a diagrammatic representation of an electric circuit known as an AVOW diagram, and explain how these strategies fit a theory for the learnt representations. Then we describe CHREST+, an extended version of an established model of human perceptual memory. The extension enables the model to relate information learnt about circuits with that about their associated AVOW diagrams, and use this information as a schema to improve its efficiency at problem solving.
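CHREST belongs to the EPAM family of discrimination-network models, in which memory grows as a tree of feature tests. The toy sketch below shows that mechanism in the spirit of such models, with a schema link attached to learnt chunks (a circuit pattern associated with an AVOW-diagram description); all class names, features, and the schema string are illustrative, and this is not the CHREST+ implementation.

```python
class ChunkNode:
    """One node of a discrimination network: test links plus an
    optionally attached schema (an associated representation)."""
    def __init__(self):
        self.children = {}   # feature -> ChunkNode (test links)
        self.schema = None   # learnt association, if any

class DiscriminationNet:
    def __init__(self):
        self.root = ChunkNode()

    def recognise(self, pattern):
        """Sort the pattern through the net, following test links as
        far as its features allow, and return the node reached."""
        node = self.root
        for feature in pattern:
            if feature not in node.children:
                break
            node = node.children[feature]
        return node

    def learn(self, pattern, schema=None):
        """Discrimination: grow one test link per novel feature, then
        optionally attach a schema to the recognised chunk."""
        node = self.root
        for feature in pattern:
            node = node.children.setdefault(feature, ChunkNode())
        if schema is not None:
            node.schema = schema
```

A usage sketch: after `net.learn(("resistor", "series"), schema="stacked AVOW boxes")`, recognising the same circuit pattern retrieves the associated diagram schema, while a partially familiar pattern reaches a chunk with no schema attached — the retrieval failure that drives further discrimination learning in this family of models.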
Biomedical ontology alignment: An approach based on representation learning
While representation learning techniques have shown great promise in application to a number of different NLP tasks, they have had little impact on the problem of ontology matching. Unlike past work that has focused on feature engineering, we present a novel representation learning approach that is tailored to the ontology matching task. Our approach is based on embedding ontological terms in a high-dimensional Euclidean space. This embedding is derived on the basis of a novel phrase retrofitting strategy through which semantic similarity information becomes inscribed onto fields of pre-trained word vectors. The resulting framework also incorporates a novel outlier detection mechanism based on a denoising autoencoder that is shown to improve performance. An ontology matching system derived using the proposed framework achieved an F-score of 94% on an alignment scenario involving the Adult Mouse Anatomical Dictionary and the Foundational Model of Anatomy ontology (FMA) as targets. This compares favorably with the best performing systems on the Ontology Alignment Evaluation Initiative anatomy challenge. We performed additional experiments on aligning FMA to NCI Thesaurus and to SNOMED CT based on a reference alignment extracted from the UMLS Metathesaurus. Our system obtained overall F-scores of 93.2% and 89.2% for these experiments, thus achieving state-of-the-art results.
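The pipeline the abstract describes — retrofit term embeddings with similarity constraints, then match by vector proximity — can be sketched in miniature. The version below uses a Faruqui-style word-level retrofitting update and cosine nearest-neighbour matching as simplified stand-ins for the paper's phrase retrofitting and full matching system; the vectors, terms, and synonym pairs in the test are invented for illustration.

```python
import numpy as np

def retrofit(vectors, synonym_pairs, alpha=1.0, beta=1.0, iters=10):
    """Faruqui-style retrofitting sketch: iteratively pull each vector
    toward its synonyms while keeping it close to its pretrained value.
    vectors: dict term -> np.ndarray; synonym_pairs: iterable of (a, b)."""
    q = {term: vec.copy() for term, vec in vectors.items()}
    neighbours = {term: [] for term in vectors}
    for a, b in synonym_pairs:
        neighbours[a].append(b)
        neighbours[b].append(a)
    for _ in range(iters):
        for term, ns in neighbours.items():
            if not ns:
                continue
            q[term] = (alpha * vectors[term] + beta * sum(q[n] for n in ns)) \
                      / (alpha + beta * len(ns))
    return q

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_match(term, source_vecs, target_vecs):
    """Match a source-ontology term to the most similar target term."""
    v = source_vecs[term]
    return max(target_vecs, key=lambda t: cosine(v, target_vecs[t]))
```

Retrofitting is what "inscribes" the synonym knowledge onto the pretrained space: terms linked as synonyms drift together, so the downstream nearest-neighbour matcher aligns them even when their surface vectors started far apart. The paper's denoising-autoencoder outlier filter, which prunes spurious matches, is omitted here.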