A computer model of chess memory
Chess research provides rich data for testing computational models of human memory. This paper presents a model which shares several concepts with an earlier attempt (Simon & Gilmartin, 1973), but features several new attributes: dynamic short-term memory, recursive chunking, more sophisticated perceptual mechanisms, and use of a retrieval structure (Chase & Ericsson, 1982). Simulations of data from three experiments are presented: 1) differential recall of random and game positions; 2) recall of several boards presented in short succession; 3) recall of positions modified by mirror-image reflection about various axes. The model fits the data reasonably well, although some empirical phenomena are not captured by it. At a theoretical level, the conceptualization of the internal representation and its relation to the retrieval structure needs further refinement.
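The chunking mechanism central to such models can be illustrated with a minimal sketch. Everything here is illustrative, not the paper's implementation: the chunk library, board encoding, and the capacity of 5 chunks are assumptions chosen to show why game positions are recalled better than random ones.

```python
# Minimal sketch of chunk-based recall (hypothetical chunk library and
# capacity; not the paper's fitted model).

STM_CAPACITY = 5  # illustrative short-term memory limit, in chunks

# A "chunk" is a familiar group of piece-on-square patterns.
chunk_library = [
    frozenset({("K", "g1"), ("R", "f1"), ("P", "g2"), ("P", "h2")}),  # castled king
    frozenset({("P", "e4"), ("P", "d4")}),                            # pawn centre
    frozenset({("N", "f3"), ("B", "c4")}),                            # developed pieces
]

def recall(position):
    """Greedily cover the position with the largest known chunks, holding
    at most STM_CAPACITY chunks; unmatched pieces fall back to one
    single-piece chunk per remaining slot."""
    remaining = set(position)
    recalled = set()
    slots = STM_CAPACITY
    for chunk in sorted(chunk_library, key=len, reverse=True):
        if slots == 0:
            break
        if chunk <= remaining:
            recalled |= chunk
            remaining -= chunk
            slots -= 1
    for piece in list(remaining)[:slots]:  # leftovers cost one slot each
        recalled.add(piece)
    return recalled

game_position = {("K", "g1"), ("R", "f1"), ("P", "g2"), ("P", "h2"),
                 ("P", "e4"), ("P", "d4"), ("Q", "d8")}
print(len(recall(game_position)))  # → 7 (all pieces: 2 chunks + 1 singleton)
```

A scrambled position matches no chunks, so each piece consumes a whole slot and only 5 of 7 pieces survive, reproducing the qualitative random-versus-game recall gap.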
Difficult forms: critical practices of design and research
As a kind of 'criticism from within', conceptual and critical design inquire into what design is about – how the market operates, what is considered 'good design', and how the design and development of technology typically work. Tracing the relations of conceptual and critical design to (post-)critical architecture and anti-design, we discuss a series of issues related to the operational and intellectual basis for 'critical practice', and how these might open up a new kind of development of the conceptual and theoretical frameworks of design. Rather than prescribing a practice on the basis of theoretical considerations, these critical practices seem to build an intellectual basis for design on its own modes of operation – a kind of theoretical development that happens through, and from within, design practice, not by means of external descriptions or analyses of its practices and products.
Innovating Pedagogy 2015: Open University Innovation Report 4
This series of reports explores new forms of teaching, learning and assessment for an interactive world, to guide teachers and policy makers in productive innovation. This fourth report proposes ten innovations that are already in currency but have not yet had a profound influence on education. To produce it, a group of academics at the Institute of Educational Technology at The Open University collaborated with researchers from the Center for Technology in Learning at SRI International. We proposed a long list of new educational terms, theories, and practices, then pared these down to ten that have the potential to provoke major shifts in educational practice, particularly in post-school education. Lastly, we drew on published and unpublished writings to compile ten sketches of new pedagogies that might transform education. These are summarised below in approximate order of immediacy and timescale to widespread implementation.
A Novel ILP Framework for Summarizing Content with High Lexical Variety
Summarizing content contributed by individuals can be challenging, because people make different lexical choices even when describing the same events. However, there remains a significant need to summarize such content. Examples include student responses to post-class reflective questions, product reviews, and news articles published by different news agencies about the same events. The high lexical diversity of these documents hinders a system's ability to effectively identify salient content and reduce summary redundancy. In this paper, we address this issue by introducing an integer linear programming-based summarization framework. It incorporates a low-rank approximation of the sentence-word co-occurrence matrix to intrinsically group semantically similar lexical items. We conduct extensive experiments on datasets of student responses, product reviews, and news documents. Our approach compares favorably to a number of extractive baselines as well as a neural abstractive summarization system. The paper finally sheds light on when and why the proposed framework is effective at summarizing content with high lexical variety.
Comment: Accepted for publication in the journal Natural Language Engineering, 201
2-D iteratively reweighted least squares lattice algorithm and its application to defect detection in textured images
In this paper, a 2-D iteratively reweighted least squares lattice algorithm, which is robust to outliers, is introduced and applied to the defect detection problem in textured images. First, the philosophy of using different optimization functions that result in a weighted least squares solution in the theory of 1-D robust regression is extended to 2-D. Then a new algorithm is derived which combines 2-D robust regression concepts with the 2-D recursive least squares lattice algorithm. With this approach, whatever the probability distribution of the prediction error may be, small weights are assigned to the outliers so that the least squares algorithm is less sensitive to them. Implementation of the proposed iteratively reweighted least squares lattice algorithm for defect detection in textured images is then considered. The performance evaluation, in terms of defect detection rate, demonstrates the importance of the proposed algorithm in reducing the effect of the outliers that generally correspond to false alarms in classifying textures as defective or nondefective.
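The 1-D robust-regression idea the paper extends to 2-D can be shown in a few lines. This is a generic IRLS line fit with Huber weights, a minimal stand-in for the paper's lattice formulation; the delta parameter, iteration count, and synthetic data are assumptions.

```python
# 1-D illustration of iteratively reweighted least squares (IRLS) with
# Huber weights; the paper builds the same reweighting into a 2-D
# recursive lattice predictor.
import numpy as np

def irls_line_fit(x, y, delta=1.0, iters=20):
    """Robust fit of y ~ a*x + b: large residuals get small weights, so
    the weighted least-squares solution is barely pulled by outliers."""
    X = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iters):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        r = y - X @ coef
        # Huber weights: 1 for small residuals, delta/|r| for large ones.
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
    return coef

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 50)
y[::10] += 15.0  # inject gross outliers (false-alarm analogue)
a, b = irls_line_fit(x, y)
print(a, b)  # close to the true slope 2 and intercept 1
```

An ordinary least-squares fit on the same data is pulled visibly toward the five corrupted points; the reweighted fit is not, which is exactly the property exploited to keep texture outliers from dominating the prediction-error statistics.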
Transformation seismology: composite soil lenses for steering surface elastic Rayleigh waves.
Metamaterials are artificially structured media that exhibit properties beyond those usually encountered in nature. Typically they are developed for electromagnetic waves at millimetric down to nanometric scales, or for acoustics at centimetre scales. By applying ideas from transformation optics we can steer Rayleigh surface waves, which are solutions of the vector Navier equations of elastodynamics. As a paradigm of the conformal geophysics we are creating, we design a square arrangement of Luneburg lenses to reroute Rayleigh waves around a building, with the dual aims of protection and minimizing the effect on the wavefront (cloaking). To show that this is practically realisable we deliberately choose material parameters that are readily available: the metalens consists of a composite soil structured with buried pillars made of a softer material. The regular lattice of inclusions is homogenized to give an effective material with a radially varying velocity profile, and hence a varying refractive index across the lens. We develop the theory and then use full 3D numerical simulations to demonstrate conclusively, at frequencies of seismological relevance (3–10 Hz) and for low-speed sedimentary soil (v_s: 300–500 m/s), that the vibration of a structure is reduced by up to 6 dB at its resonance frequency.
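The radial grading behind such a lens follows the classical Luneburg profile n(r) = sqrt(2 - (r/R)^2), with wave speed inversely proportional to index. The sketch below uses that textbook formula; the lens radius and background shear speed are illustrative assumptions (the speed is merely chosen inside the paper's 300–500 m/s sedimentary-soil range), not the paper's design values.

```python
# Luneburg-lens velocity grading (illustrative parameters).
import math

R = 10.0      # lens radius in metres (assumed)
v_bg = 400.0  # background shear-wave speed in m/s (assumed)

def refractive_index(r):
    """Classical Luneburg profile: n(r) = sqrt(2 - (r/R)^2), r <= R."""
    return math.sqrt(2.0 - (r / R) ** 2)

def wave_speed(r):
    """Softer (slower) material toward the centre bends rays inward:
    v(r) = v_bg / n(r)."""
    return v_bg / refractive_index(r)

for r in (0.0, 5.0, 10.0):
    print(f"r = {r:4.1f} m  n = {refractive_index(r):.3f}  "
          f"v = {wave_speed(r):.0f} m/s")
```

At the rim n = 1, so the lens is impedance-matched to the surrounding soil; at the centre the required speed drops by a factor of sqrt(2), which the paper approximates with a homogenized lattice of softer buried pillars rather than a continuously graded medium.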
Interaction Design: Foundations, Experiments
Interaction Design: Foundations, Experiments is the result of a series of projects, experiments and curricula aimed at investigating the foundations of interaction design in particular and design research in general.
The first part of the book - Foundations - deals with foundational theoretical issues in interaction design. An analysis of two categorical mistakes - the empirical and interactive fallacies - forms a background to a discussion of interaction design as act design and of computational technology as a material in design.
The second part of the book - Experiments - describes a range of design methods, programs and examples that have been used to probe foundational issues through systematic questioning of what is given. Based on experimental design work such as Slow Technology, Abstract Information Displays, Design for Sound Hiders, Zero Expression Fashion, and IT+Textiles, this section also explores how design experiments can play a central role in developing new design theory.
Going Deeper with Semantics: Video Activity Interpretation using Semantic Contextualization
A deeper understanding of video activities extends beyond recognition of underlying concepts such as actions and objects: constructing deep semantic representations requires reasoning about the semantic relationships among these concepts, often beyond what is directly observed in the data. To this end, we propose an energy minimization framework that leverages large-scale commonsense knowledge bases, such as ConceptNet, to provide contextual cues for establishing semantic relationships among entities directly hypothesized from the video signal. We express this mathematically in the language of Grenander's canonical pattern generator theory. We show that the use of prior encoded commonsense knowledge alleviates the need for large annotated training datasets and helps tackle training imbalance through prior knowledge. Using three publicly available datasets - Charades, the Microsoft Visual Description Corpus, and Breakfast Actions - we show that the proposed model can generate video interpretations whose quality is better than those reported by state-of-the-art approaches, which have substantial training needs. Through extensive experiments, we show that commonsense knowledge from ConceptNet allows the proposed approach to handle challenges such as training-data imbalance, weak features, and complex semantic relationships and visual scenes.
Comment: Accepted to WACV 201
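The contextualization idea - weak detector evidence overruled by strong semantic agreement - can be reduced to a toy energy function. Everything below is hypothetical: the detections, the hand-coded relatedness table standing in for ConceptNet scores, and the simple additive energy, which replaces the paper's pattern-theoretic formulation.

```python
# Toy energy-minimization sketch: commonsense context disambiguates two
# near-tied action hypotheses (hand-made numbers, not ConceptNet output).

detections = {"slicing": 0.40, "typing": 0.35}   # competing hypotheses
context = {"knife": 0.9, "cucumber": 0.8}        # confidently seen objects

# Hypothetical ConceptNet-style relatedness scores in [0, 1].
related = {
    ("slicing", "knife"): 0.90, ("slicing", "cucumber"): 0.80,
    ("typing", "knife"): 0.05, ("typing", "cucumber"): 0.05,
}

def energy(action):
    """Lower is better: a unary (detector) term plus pairwise terms
    rewarding semantic agreement with co-occurring objects."""
    e = -detections[action]                      # data term
    for obj, conf in context.items():            # context terms
        e -= conf * related[(action, obj)]
    return e

best = min(detections, key=energy)
print(best)  # → slicing
```

The detector scores alone barely separate the two actions (0.40 vs. 0.35); the pairwise terms contributed by "knife" and "cucumber" make "slicing" the clear minimum, which is the mechanism the abstract credits for coping with weak features and scarce training data.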
Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation
Image annotation aims to annotate a given image with a variable number of class labels corresponding to diverse visual concepts. In this paper, we address two main issues in large-scale image annotation: 1) how to learn a rich feature representation suitable for predicting a diverse set of visual concepts, ranging from objects and scenes to abstract concepts; 2) how to annotate an image with the optimal number of class labels. To address the first issue, we propose a novel multi-scale deep model for extracting rich and discriminative features capable of representing a wide range of visual concepts. Specifically, a novel two-branch deep neural network architecture is proposed, comprising a very deep main network branch and a companion feature-fusion branch designed to fuse the multi-scale features computed from the main branch. The deep model is also made multi-modal by taking noisy user-provided tags as model input to complement the image input. To tackle the second issue, we introduce a label-quantity prediction auxiliary task alongside the main label prediction task to explicitly estimate the optimal number of labels for a given image. Extensive experiments are carried out on two large-scale image annotation benchmark datasets, and the results show that our method significantly outperforms the state of the art.
Comment: Submitted to IEEE TI
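The second issue - choosing how many labels to emit - boils down to ranking by score and cutting at a predicted quantity instead of a global threshold. The sketch uses made-up scores and takes the quantity as given; in the paper it is produced by a learned auxiliary head, not a rule.

```python
# Sketch of quantity-aware annotation (hypothetical scores).
def annotate(label_scores, predicted_quantity):
    """Keep exactly the predicted number of labels, ranked by score,
    rather than thresholding scores globally."""
    ranked = sorted(label_scores, key=label_scores.get, reverse=True)
    return ranked[:predicted_quantity]

scores = {"beach": 0.81, "sea": 0.78, "sunset": 0.55, "dog": 0.12, "car": 0.03}

# A fixed 0.5 threshold and a predicted quantity of 2 disagree on "sunset":
print(annotate(scores, 2))                        # → ['beach', 'sea']
print([l for l, s in scores.items() if s > 0.5])  # → ['beach', 'sea', 'sunset']
```

The point of predicting the quantity per image is visible even here: a single threshold that works for a sparsely labeled image over-annotates a richly labeled one, and vice versa, whereas the per-image count adapts.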