
    Difficult forms: critical practices of design and research

    As a kind of 'criticism from within', conceptual and critical design inquire into what design is about – how the market operates, what is considered 'good design', and how the design and development of technology typically works. Tracing the relations of conceptual and critical design to (post-)critical architecture and anti-design, we discuss a series of issues related to the operational and intellectual basis for 'critical practice', and how these might open up a new kind of development of the conceptual and theoretical frameworks of design. Rather than prescribing a practice on the basis of theoretical considerations, these critical practices seem to build an intellectual basis for design from its own modes of operation: a kind of theoretical development that happens through, and from within, design practice, not by means of external descriptions or analyses of its practices and products.

    A Novel ILP Framework for Summarizing Content with High Lexical Variety

    Summarizing content contributed by individuals can be challenging, because people make different lexical choices even when describing the same events. However, there remains a significant need to summarize such content. Examples include student responses to post-class reflective questions, product reviews, and news articles published by different news agencies about the same events. The high lexical diversity of these documents hinders a system's ability to effectively identify salient content and reduce summary redundancy. In this paper, we overcome this issue by introducing an integer linear programming-based summarization framework. It incorporates a low-rank approximation to the sentence-word co-occurrence matrix to intrinsically group semantically similar lexical items. We conduct extensive experiments on datasets of student responses, product reviews, and news documents. Our approach compares favorably to a number of extractive baselines as well as a neural abstractive summarization system. The paper finally sheds light on when and why the proposed framework is effective at summarizing content with high lexical variety. Comment: Accepted for publication in the journal Natural Language Engineering, 201
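    The abstract describes the low-rank grouping idea only at a high level. Below is a minimal, purely illustrative sketch (not the paper's ILP framework): a truncated SVD of a toy sentence-word count matrix places related word forms close together in the reduced space when they share sentence context. The sentences, the choice of rank, and all names are assumptions for illustration.

```python
# Illustrative sketch: low-rank approximation of a sentence-word count matrix
# so that related word forms that never co-occur verbatim still end up close
# in the reduced space (not the paper's implementation).
import numpy as np

sentences = [
    "students found the recursion lecture confusing",
    "students found recursive functions confusing",
    "the quiz covered sorting algorithms",
]
vocab = sorted({w for s in sentences for w in s.split()})
A = np.array([[s.split().count(w) for w in vocab] for s in sentences], dtype=float)

k = 2                                              # target rank (assumed)
U, S, Vt = np.linalg.svd(A, full_matrices=False)
W = (np.diag(S[:k]) @ Vt[:k, :]).T                 # word vectors in rank-k space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

i, j = vocab.index("recursion"), vocab.index("recursive")
print(f"cosine(recursion, recursive) = {cosine(W[i], W[j]):.2f}")
```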

    2-D iteratively reweighted least squares lattice algorithm and its application to defect detection in textured images

    In this paper, a 2-D iteratively reweighted least squares lattice algorithm, which is robust to outliers, is introduced and applied to the defect detection problem in textured images. First, the philosophy of using different optimization functions that result in a weighted least squares solution in the theory of 1-D robust regression is extended to 2-D. Then a new algorithm is derived which combines 2-D robust regression concepts with the 2-D recursive least squares lattice algorithm. With this approach, whatever the probability distribution of the prediction error may be, small weights are assigned to the outliers so that the least squares algorithm is less sensitive to them. Implementation of the proposed iteratively reweighted least squares lattice algorithm for defect detection in textured images is then considered. The performance evaluation, in terms of defect detection rate, demonstrates the value of the proposed algorithm in reducing the effect of outliers, which generally correspond to false alarms when classifying textures as defective or non-defective.
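    The abstract gives only the general IRLS idea. The following is a minimal 1-D sketch (illustrative only, not the paper's 2-D lattice recursions) of iteratively reweighted least squares with Huber-style weights, where large residuals are down-weighted on each pass. The data, threshold, and iteration count are assumptions.

```python
# Minimal IRLS sketch (not the paper's 2-D lattice algorithm): fit a line
# y ~ a*x + b while down-weighting outliers with Huber weights.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, x.size)
y[::10] += 15.0                      # inject a few gross outliers

X = np.column_stack([x, np.ones_like(x)])
w = np.ones_like(y)                  # first pass is ordinary least squares
delta = 1.0                          # Huber threshold (assumed)

beta = np.zeros(2)
for _ in range(20):
    Xw = X * w[:, None]              # apply current weights to the rows of X
    # Weighted least squares solve: (X^T W X) beta = X^T W y
    beta, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)
    r = y - X @ beta
    # Huber weights: 1 inside the threshold, delta/|r| outside, so large
    # residuals (outliers) contribute less to the next fit.
    w = np.where(np.abs(r) <= delta, 1.0, delta / np.maximum(np.abs(r), 1e-12))

print("estimated slope and intercept:", np.round(beta, 2))
```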

    Transformation seismology: composite soil lenses for steering surface elastic Rayleigh waves.

    Metamaterials are artificially structured media that exhibit properties beyond those usually encountered in nature. Typically they are developed for electromagnetic waves at millimetric down to nanometric scales, or for acoustics at centimeter scales. By applying ideas from transformation optics we can steer Rayleigh surface waves that are solutions of the vector Navier equations of elastodynamics. As a paradigm of the conformal geophysics that we are creating, we design a square arrangement of Luneburg lenses to reroute Rayleigh waves around a building with the dual aims of protecting the structure and minimizing the effect on the wavefront (cloaking). To show that this is practically realisable we deliberately choose material parameters that are readily available, and this metalens consists of a composite soil structured with buried pillars made of a softer material. The regular lattice of inclusions is homogenized to give an effective material with a radially varying velocity profile, and hence a varying refractive index across the lens. We develop the theory and then use full 3D numerical simulations to demonstrate conclusively, at frequencies of seismological relevance (3–10 Hz) and for low-speed sedimentary soil (v_s: 300–500 m/s), that the vibration of a structure is reduced by up to 6 dB at its resonance frequency.
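    The abstract does not spell out the graded profile. As an illustration only, the sketch below evaluates the classic Luneburg index profile n(r) = sqrt(2 - (r/R)^2) and converts it to a target surface-wave speed v(r) = v_bg / n(r); the lens radius and background speed are assumed values, not taken from the paper.

```python
# Illustrative target velocity profile for one Luneburg lens: the classic
# index law gives a speed that is lowest at the lens centre (v_bg / sqrt(2))
# and matches the background speed at the rim.
import math

def luneburg_velocity(r, R=30.0, v_bg=500.0):
    """Luneburg profile n(r) = sqrt(2 - (r/R)^2) inside the lens (r <= R),
    n = 1 in the background outside; target speed is v(r) = v_bg / n(r).
    R (lens radius, m) and v_bg (background speed, m/s) are assumed values."""
    n = math.sqrt(2.0 - (r / R) ** 2) if r <= R else 1.0
    return v_bg / n

for r in (0.0, 15.0, 30.0):
    print(f"r = {r:5.1f} m -> v = {luneburg_velocity(r):.0f} m/s")
```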

    Interaction Design: Foundations, Experiments

    Interaction Design: Foundations, Experiments is the result of a series of projects, experiments and curricula aimed at investigating the foundations of interaction design in particular and design research in general. The first part of the book, Foundations, deals with foundational theoretical issues in interaction design. An analysis of two categorical mistakes, the empirical and interactive fallacies, forms a background to a discussion of interaction design as act design and of computational technology as material in design. The second part of the book, Experiments, describes a range of design methods, programs and examples that have been used to probe foundational issues through systematic questioning of what is given. Based on experimental design work such as Slow Technology, Abstract Information Displays, Design for Sound Hiders, Zero Expression Fashion, and IT+Textiles, this section also explores how design experiments can play a central role in developing new design theory.

    Going Deeper with Semantics: Video Activity Interpretation using Semantic Contextualization

    A deeper understanding of video activities extends beyond recognition of underlying concepts such as actions and objects: constructing deep semantic representations requires reasoning about the semantic relationships among these concepts, often beyond what is directly observed in the data. To this end, we propose an energy minimization framework that leverages large-scale commonsense knowledge bases, such as ConceptNet, to provide contextual cues for establishing semantic relationships among entities directly hypothesized from the video signal. We express this mathematically using the language of Grenander's canonical pattern generator theory. We show that the use of prior, encoded commonsense knowledge alleviates the need for large annotated training datasets and helps tackle imbalance in training through prior knowledge. Using three publicly available datasets (Charades, the Microsoft Visual Description Corpus, and Breakfast Actions), we show that the proposed model can generate video interpretations whose quality is better than those reported by state-of-the-art approaches, which have substantial training needs. Through extensive experiments, we show that the use of commonsense knowledge from ConceptNet allows the proposed approach to handle challenges such as training data imbalance, weak features, and complex semantic relationships and visual scenes. Comment: Accepted to WACV 201
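    The abstract leaves the form of the energy unspecified. The sketch below is a deliberately tiny, hypothetical example (not the paper's Grenander-style formulation) of picking the label combination that minimizes an energy built from detector confidences plus a pairwise semantic-compatibility term. The confidences and the relatedness table are made up and merely stand in for scores one might obtain from a knowledge base such as ConceptNet.

```python
# Hypothetical energy-minimization sketch: choose one object label and one
# action label so that weak detections and semantically incompatible pairs
# are penalized (illustrative only, not the paper's model).
from itertools import product

detections = {                      # detector confidences (made up)
    "object": {"knife": 0.6, "pen": 0.55},
    "action": {"cutting": 0.7, "writing": 0.5},
}
relatedness = {                     # stand-in for knowledge-base relatedness
    ("knife", "cutting"): 0.9, ("pen", "writing"): 0.8,
    ("knife", "writing"): 0.1, ("pen", "cutting"): 0.1,
}

def energy(obj, act, alpha=1.0, beta=1.0):
    # Lower energy = more confident detections + more compatible label pair.
    unary = -(detections["object"][obj] + detections["action"][act])
    pairwise = -relatedness[(obj, act)]
    return alpha * unary + beta * pairwise

best = min(product(detections["object"], detections["action"]),
           key=lambda pair: energy(*pair))
print("minimum-energy interpretation:", best)
```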

    Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation

    Image annotation aims to annotate a given image with a variable number of class labels corresponding to diverse visual concepts. In this paper, we address two main issues in large-scale image annotation: 1) how to learn a rich feature representation suitable for predicting a diverse set of visual concepts, ranging from objects and scenes to abstract concepts; 2) how to annotate an image with the optimal number of class labels. To address the first issue, we propose a novel multi-scale deep model for extracting rich and discriminative features capable of representing a wide range of visual concepts. Specifically, a novel two-branch deep neural network architecture is proposed, comprising a very deep main network branch and a companion feature-fusion branch designed to fuse the multi-scale features computed from the main branch. The deep model is also made multi-modal by taking noisy user-provided tags as model input to complement the image input. To tackle the second issue, we introduce a label quantity prediction auxiliary task alongside the main label prediction task to explicitly estimate the optimal number of labels for a given image. Extensive experiments are carried out on two large-scale image annotation benchmark datasets, and the results show that our method significantly outperforms the state of the art. Comment: Submitted to IEEE TI
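    The abstract describes the architecture only in words. Below is a minimal, hypothetical PyTorch sketch (far smaller than, and not equivalent to, the model described) showing the general shape of a two-branch network that fuses features from two scales and adds an auxiliary label-quantity head alongside the multi-label head; all layer sizes and names are assumptions.

```python
# Hypothetical sketch of a two-branch annotator with an auxiliary head that
# predicts how many labels an image should receive (illustrative only).
import torch
import torch.nn as nn

class TwoBranchAnnotator(nn.Module):
    def __init__(self, num_labels=80):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1))
        self.pool1 = nn.AdaptiveAvgPool2d(1)
        self.fusion = nn.Sequential(nn.Linear(16 + 32, 64), nn.ReLU())
        self.label_head = nn.Linear(64, num_labels)   # per-label scores
        self.count_head = nn.Linear(64, 1)            # label-quantity regression

    def forward(self, x):
        f1 = self.stage1(x)                           # earlier, finer-scale features
        f2 = self.stage2(f1)                          # later, coarser-scale features
        fused = self.fusion(torch.cat(
            [self.pool1(f1).flatten(1), f2.flatten(1)], dim=1))
        return self.label_head(fused), self.count_head(fused)

model = TwoBranchAnnotator()
scores, count = model(torch.randn(2, 3, 64, 64))
# Keep the top-k labels per image, where k comes from the count head.
k = count.round().clamp(min=1).long().squeeze(1)
print(scores.shape, k.tolist())
```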