
    A cultural, scientific and technical study of the Durham lead cloth seal assemblage.

    This thesis is an integrated, interdisciplinary study of 275 lead cloth seals dating from the mid-fourteenth to the early-nineteenth centuries. These recently discovered objects, recovered from a single submerged river-bed site in the North-East of England, were once linked to the trade, industrial regulation and taxation of commercially produced cloth. They are catalogued and illustrated here. The objects represent the largest assemblage of such material outside London and are of crucial significance for understanding the cloth trade in the late- and post-medieval periods. Owing to the unusual deposition conditions from which the objects were recovered, rare scraps of textile have survived in many of the cloth seals. A range of scientific analyses was undertaken on three cloth seals containing textiles, yielding important information. For the first time in the UK, ultra-high performance liquid chromatography (performed at The Centre for Textile Conservation and Technical Art History, Glasgow University) was successfully used to extract dye-related colourants from textile fragments preserved in lead cloth seals. This provides new insights into textile availability, trade and the consumption of cloth, mordants and dyestuffs from the late sixteenth to the early nineteenth century. Evidence from the cloth seals is combined with documentary, cartographic and archaeological sources to produce a synthesis offering new understanding of the cloth trade in Durham in the late- and post-medieval periods. The research has demonstrated not just the scale and extent of textile production in the City of Durham, but has also revealed evidence of hitherto unknown English and European trade routes.

    Usage of Physics Engines for UI Design in NexusUI

    In preparation for expanding the experimental interfaces in NexusUI widgets, the authors have been evaluating physics engines and exploring physics-based user interfaces on the web. Tying physics-simulation events, influenced by user interactions, to web audio encourages exploration of novel methods of interactivity between users and web-based instruments. Object collisions, deformation of a mesh of objects with elastic connections, and liquid simulation via particle generation were identified as systems with dynamics that may provide interesting links to audio synthesis. Two popular physics engines explored here are LiquidFun and Matter.js, with new prototype widgets taking advantage of LiquidFun's Elastic Particles and Matter.js' Cloth and Newton's Cradle composites. One of our goals is to discover methods of audio synthesis that complement the behaviors of each physical simulation.
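
    As a hedged illustration of the general idea of tying physics-simulation events to synthesis parameters, the following minimal sketch uses plain Python with a toy bouncing-ball "engine" standing in for LiquidFun or Matter.js, which the paper actually uses in a JavaScript/Web Audio context; the mass-to-pitch and impact-to-amplitude mappings are illustrative assumptions, not the authors' design.

```python
# Minimal sketch: map physics-simulation collision events to audio parameters.
# A toy 1-D bouncing-ball simulation stands in for LiquidFun / Matter.js (an
# assumption); each floor impact triggers a sine burst whose pitch follows the
# ball's mass and whose loudness follows the impact speed.
import numpy as np

SR = 44100          # audio sample rate (Hz)
DT = 1.0 / 120.0    # physics timestep (s)
G = -9.81           # gravity (m/s^2)

def sine_burst(freq, amp, dur=0.15):
    """Short decaying sine tone for one collision event."""
    t = np.arange(int(SR * dur)) / SR
    return amp * np.exp(-t * 20.0) * np.sin(2 * np.pi * freq * t)

def simulate(masses, heights, seconds=3.0):
    """Step the toy simulation and synthesize a burst at every floor impact."""
    n_steps = int(seconds / DT)
    out = np.zeros(int(SR * seconds) + SR)          # output audio buffer
    y = np.array(heights, dtype=float)              # positions (m)
    v = np.zeros_like(y)                            # velocities (m/s)
    for step in range(n_steps):
        v += G * DT
        y += v * DT
        hit = y <= 0.0                              # collision with the floor
        for i in np.where(hit)[0]:
            freq = 220.0 * (1.0 + 1.0 / masses[i])  # heavier ball -> lower pitch
            amp = min(1.0, abs(v[i]) / 10.0)        # harder impact -> louder
            if amp > 0.01:                          # ignore tiny resting contacts
                start = int(step * DT * SR)
                burst = sine_burst(freq, amp)
                out[start:start + burst.size] += burst
        v[hit] *= -0.7                              # damped, inelastic bounce
        y[hit] = 0.0
    return out / max(1.0, np.max(np.abs(out)))      # normalize

audio = simulate(masses=[0.5, 1.0, 2.0], heights=[1.0, 1.5, 2.0])
print(f"rendered {audio.size / SR:.2f} s of audio from collision events")
```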

    Visually Indicated Sounds

    Objects make distinctive sounds when they are hit or scratched. These sounds reveal aspects of an object's material properties, as well as the actions that produced them. In this paper, we propose the task of predicting what sound an object makes when struck as a way of studying physical interactions within a visual scene. We present an algorithm that synthesizes sound from silent videos of people hitting and scratching objects with a drumstick. This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a "real or fake" psychophysical experiment, and that they convey significant information about material properties and physical interactions.
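
    A hedged sketch of the pipeline described in the abstract: a recurrent network maps per-frame visual features to sound features, and an example-based stage retrieves the waveform of the training clip whose features best match the prediction. This PyTorch sketch is illustrative only; the module names, feature dimensions and the nearest-neighbour retrieval rule are assumptions, not the authors' implementation.

```python
# Hedged sketch: RNN predicts sound features from video features, then an
# example-based stage returns the closest training exemplar's waveform.
import torch
import torch.nn as nn

class SoundFeaturePredictor(nn.Module):
    def __init__(self, visual_dim=512, hidden_dim=256, sound_dim=42):
        super().__init__()
        self.rnn = nn.LSTM(visual_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, sound_dim)

    def forward(self, frame_feats):              # (batch, time, visual_dim)
        h, _ = self.rnn(frame_feats)
        return self.head(h)                      # (batch, time, sound_dim)

def example_based_synthesis(pred_feats, bank_feats, bank_waves):
    """Return the waveform of the training example whose sound features are
    closest (L2 distance on time-averaged features) to the prediction."""
    query = pred_feats.mean(dim=0)               # (sound_dim,)
    keys = bank_feats.mean(dim=1)                # (n_examples, sound_dim)
    idx = torch.cdist(query[None], keys).argmin()
    return bank_waves[idx]

# Toy usage with random tensors standing in for CNN features and exemplars.
model = SoundFeaturePredictor()
frames = torch.randn(1, 30, 512)                 # features for 30 video frames
pred = model(frames)[0]                          # (30, 42) predicted sound features
bank_feats = torch.randn(100, 30, 42)            # features of 100 training clips
bank_waves = torch.randn(100, 22050)             # their 0.5 s waveforms at 44.1 kHz
wave = example_based_synthesis(pred, bank_feats, bank_waves)
print(wave.shape)
```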

    Learning to Navigate Cloth using Haptics

    We present a controller that allows an arm-like manipulator to navigate deformable cloth garments in simulation through the use of haptic information. The main challenge for such a controller is to avoid getting tangled in, tearing, or punching through the deforming cloth. Our controller aggregates force information from a number of haptic-sensing spheres all along the manipulator for guidance. Based on haptic forces, each individual sphere updates its target location, and the conflicts that arise between this set of desired positions are resolved by solving an inverse kinematics problem with constraints. Reinforcement learning is used to train the controller for a single haptic-sensing sphere, where a training run is terminated (and thus penalized) when large forces are detected due to contact between the sphere and a simplified model of the cloth. In simulation, we demonstrate successful navigation of a robotic arm through a variety of garments, including an isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two baseline controllers: one without haptics and another that was trained on large forces between the sphere and cloth, but without early termination. Comment: supplementary video available at https://youtu.be/iHqwZPKVd4A; related publications at http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
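
    A hedged sketch of the conflict-resolution idea described above: each haptic-sensing sphere proposes a small displacement away from its sensed contact force, and a single joint-space solve finds arm motion that best satisfies all proposals at once. The damped least-squares pseudo-inverse used here is a stand-in for the paper's constrained inverse-kinematics solver, and the Jacobian and force values are placeholders.

```python
# Hedged sketch: per-sphere retreat targets from haptic forces, resolved jointly
# by a damped least-squares inverse-kinematics step (an approximation of the
# paper's constrained IK formulation, not the authors' solver).
import numpy as np

def sphere_targets(forces, step=0.01):
    """Each sphere retreats along the direction opposite its net haptic force."""
    norms = np.linalg.norm(forces, axis=1, keepdims=True)
    directions = np.where(norms > 1e-9, -forces / np.maximum(norms, 1e-9), 0.0)
    return step * directions                      # desired displacement per sphere

def resolve_joint_motion(jacobian, desired, damping=0.1):
    """Damped least-squares solve: one joint update that best matches all sphere
    displacements simultaneously, resolving conflicts between them."""
    J = jacobian                                  # (3 * n_spheres, n_joints)
    dx = desired.reshape(-1)                      # stacked desired displacements
    JJt = J @ J.T + (damping ** 2) * np.eye(J.shape[0])
    return J.T @ np.linalg.solve(JJt, dx)         # joint-angle update

# Toy usage: 4 sensing spheres on a 7-DoF arm, random haptic forces and Jacobian.
rng = np.random.default_rng(0)
forces = rng.normal(size=(4, 3))                  # sensed contact forces (N)
jacobian = rng.normal(size=(12, 7))               # placeholder kinematic Jacobian
dq = resolve_joint_motion(jacobian, sphere_targets(forces))
print(dq)                                         # (7,) joint update
```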

    Research students exhibition catalogue 2011

    The catalogue demonstrates the scope and vibrancy of current inquiries and pays tribute to the creative capacity and investment of UCA research students. It brings together contributions from students who are at different stages in their research ad/venture. Their explorations are connected by the centrality of contemporary material practices as a focal point for the reconsideration of societal values, cultural symbols and rituals and their meaning, and the trans/formation of individual, collective and national identities. The media and formats employed range from cloth, jewellery and ceramics to analogue film, the human voice and the representation of dress and fashion in virtual environments. Thematic interests span from explorations at the interface of art and medical science to an investigation of the role of art in contested spaces, or the role of metonymy in ‘how the arts think’. And whilst the projects are motivated by personal curiosity and passion, their outcomes transcend the boundaries of individual practice and offer new insights, under-standing and applications for the benefit of wider society. Prof. Kerstin Me

    Development of non-destructive methodology using ATR-FTIR with PCA to differentiate between historical Pacific barkcloth

    Barkcloths, non-woven textiles originating from the Pacific Islands, form part of many museum collections and date back to the 18th and 19th centuries. The ability to determine the different plant species which have been used for producing barkcloth is required by art historians, to help understand the origin and use of the cloths, and by conservators, for whom the species type may have an impact on textile durability, deterioration and hence conservation. However, to date the development of a non-destructive, robust analytical technique has been elusive. This article describes the use of Fourier transform infrared spectroscopy with attenuated total reflection (ATR-FTIR) and principal component analysis (PCA) to differentiate between historic barkcloths. Three distinct groups of historic cloths were identified using PCA of the FTIR region between 1200 and 1600 cm−1, where molecular vibrations associated with tannins and lignins are dominant. Analysis of contemporary cloths identified only Pipturus albidus cloth as different and highlighted the difficulties of producing a representative textile sample to mimic the historic cloths. While the methodology does not itself identify species, the use of historically well-provenanced samples allows cloths showing similarities to group together and is a significant aid to identification.
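
    A hedged sketch of the analysis step the abstract describes: restrict ATR-FTIR spectra to the 1200–1600 cm−1 window and use PCA scores to see which cloths group together. The spectra below are synthetic placeholders, and the per-channel standardization is an assumption rather than the article's exact preprocessing.

```python
# Hedged sketch: PCA on the 1200-1600 cm^-1 region of ATR-FTIR spectra, where
# tannin and lignin bands dominate. Data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder spectra: 30 cloths x 1738 wavenumber points spanning 4000-600 cm^-1.
wavenumbers = np.linspace(4000, 600, 1738)
spectra = np.random.default_rng(1).normal(size=(30, wavenumbers.size))

# Keep only the 1200-1600 cm^-1 window used for grouping in the article.
mask = (wavenumbers >= 1200) & (wavenumbers <= 1600)
region = spectra[:, mask]

# Standardize each wavenumber channel, then project onto the first two PCs.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(region))

# Cloths whose scores cluster together are candidates for the same plant species.
for i, (pc1, pc2) in enumerate(scores):
    print(f"cloth {i:2d}: PC1={pc1:7.2f}  PC2={pc2:7.2f}")
```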

    A Generative Model of People in Clothing

    We present the first image-based generative model of people in clothing for the full body. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generating process into two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest an entirely data-driven approach to people generation is possible.
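
    A hedged skeleton of the two-stage split the abstract describes: stage one generates a semantic segmentation of body and clothing from a latent code, and stage two renders an image conditioned on those segments. The PyTorch layer choices, part count, image size and latent interface below are illustrative assumptions, not the authors' architecture.

```python
# Hedged skeleton: (1) latent code -> part segmentation, (2) segmentation -> image.
# Both stages are differentiable, matching the abstract's end-to-end claim.
import torch
import torch.nn as nn

class SegmentGenerator(nn.Module):
    """Latent code (optionally pose/shape conditioned) -> soft part segmentation."""
    def __init__(self, z_dim=128, n_parts=22, size=32):
        super().__init__()
        self.size, self.n_parts = size, n_parts
        self.net = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, n_parts * size * size),
        )

    def forward(self, z):
        logits = self.net(z).view(-1, self.n_parts, self.size, self.size)
        return torch.softmax(logits, dim=1)       # per-pixel part probabilities

class ImageGenerator(nn.Module):
    """Segmentation maps -> RGB image (a conditional image-to-image model)."""
    def __init__(self, n_parts=22):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_parts, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, segments):
        return self.net(segments)

# Toy end-to-end pass: sample latent codes, generate segments, render images.
z = torch.randn(4, 128)
segments = SegmentGenerator()(z)                  # (4, 22, 32, 32)
images = ImageGenerator()(segments)               # (4, 3, 32, 32)
print(segments.shape, images.shape)
```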