
    Maintaining authenticity: transferring patina from the real world to the digital to retain narrative value

    Get PDF
    This research is concerned with utilizing new technologies to harvest existing narrative, symbolic and emotive value for use in a digital environment, enabling "emotional durability" (Chapman, 2005) in future design. The projects discussed in this paper were conducted as part of PhD research by Rosemary Wallin into 'Technology for Sustainable Luxury' at University of the Arts London, and visual effects technology research undertaken by Florian Stephens at University of West London. Jonathan Chapman describes vast consumer waste as "symptomatic of failed relationships" between consumers and the goods they buy, and suggests approaches for designing love, dependency, and even cherishability into products to give them a longer lifespan. 'Failed relationships' might also be observed in the transference of physical objects to their virtual cousins: consider the throwaway nature of digital photography when compared to the carefully preserved prints in a family album. Apple often uses a skeuomorphic (Hobbs, 2012) approach to user interface design to digitally replicate the patina and 'value' of real objects. However, true transference of physical form and texture presumably occurs when an object is scanned and a virtual 3D model is created. This paper presents three practice-based approaches to storing and transferring patina from an original object, utilizing high-resolution scanning, photogrammetry, mobile applications and 3D print technologies. The objective is not merely accuracy, but evocation of the emotive data connecting the digital and physical realms. As the human face holds experience in the lines and wrinkles of the skin, so the surface of an object holds its narrative. From the signs of the craftsman to the bumps and scratches that accumulate over an item's life, across time and generations, marks gather like evidence to be read by a familiar or a trained eye. Depending on the time and culture in which these marks are read, they will either add to or detract from the object's value. These marks can be captured via complex 3D modelling and scanning technologies, which allow detailed forms to be recreated as dense 3D wireframes, but the result is often unsatisfying. 3D greyscale surfaces can never fully capture the richness of patina. Authentic surfaces require other qualities such as colour, texture and depth, but there is also something else that is more difficult to define. Donald A. Norman expands on the idea of emotion and objects by describing three 'levels' of design: "visceral, behavioural and reflective". Visceral design is based on "look, feel and sound", behavioural design is focused on an object's use, and reflective design is concerned with its message. New technology is commonly seen in terms of its ability to increase efficiency, but this research has longer-term objectives: to repair or even rebuild Chapman's 'broken relationships' and enable 'emotionally durable' design. The PhD that forms the context for this paper examines the concept of luxury value, and how and why the value of patina has been replaced by fashion. Luxury goods are aspirational items often emulated in the bulk of mass production. If we are to alter behaviour around consumption, one approach might be to use technology to harvest patina as a way to retain emotional, symbolic and poetic value, with a view to maintaining a relationship with the things we buy.
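    The abstract does not specify a processing pipeline, so the following is only a minimal sketch of one way the fine surface detail captured by a scan could be quantified, assuming a photogrammetry mesh exported as scan.obj and treating local curvature as a rough stand-in for patina; the file name, the choice of the trimesh library, and the sampling radius are all illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: estimate how much fine surface detail ("patina") a scanned mesh
# retains by sampling local curvature. File name, radius, and the curvature-as-patina
# proxy are illustrative assumptions only.
import numpy as np
import trimesh
from trimesh.curvature import discrete_mean_curvature_measure

mesh = trimesh.load("scan.obj")        # assumed photogrammetry/scan output (single mesh)
radius = 0.5                           # neighbourhood size in mesh units (assumption)
curvature = discrete_mean_curvature_measure(mesh, mesh.vertices, radius)

# Rough summary of the surface detail present in the scan
print("vertices:", len(mesh.vertices))
print("mean |curvature|:", float(np.mean(np.abs(curvature))))
print("95th percentile |curvature|:", float(np.percentile(np.abs(curvature), 95)))
```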

    It's Not a Matter of Time: Highlights From the 2011 Competency-Based Learning Summit

    Get PDF
    Outlines discussions about the potential and challenges of competency-based learning in transforming the current time-based system, including issues of accountability, equity, personalization, and aligning policy and practice. Includes case summaries.

    Conference on Binary Optics: An Opportunity for Technical Exchange

    Get PDF
    The papers herein were presented at the Conference on Binary Optics held in Huntsville, AL, February 23-25, 1993. The papers were presented according to subject as follows: modeling and design, fabrication, and applications. Invited papers and tutorial viewgraphs presented on these subjects are included.

    Applied AI/ML for automatic customisation of medical implants

    Get PDF
    Most knee replacement surgeries are performed using 'off-the-shelf' implants, supplied in a set number of standardised sizes. X-rays are taken during pre-operative assessment and used by clinicians to estimate the best options for patients. Manual templating and implant size selection have, however, been shown to be inaccurate, and frequently the generically shaped products do not adequately fit patients' unique anatomies. Furthermore, off-the-shelf implants are typically made from solid metal and do not exhibit mechanical properties like those of native bone. Consequently, the combination of these factors often leads to poor outcomes for patients. Various solutions have been outlined in the literature for customising the size, shape, and stiffness of implants for the specific needs of individuals. Such designs can be fabricated via additive manufacturing, which enables bespoke and intricate geometries to be produced in biocompatible materials. Despite this, all customisation solutions identified required some level of manual input to segment image files, identify anatomical features, and/or drive design software. These tasks are time-consuming, expensive, and require trained personnel. Almost all currently available solutions also require CT imaging, which adds further expense, incurs high levels of potentially harmful radiation, and is not as commonly accessible as X-ray imaging. This thesis explores how various levels of knee replacement customisation can be completed automatically by applying artificial intelligence, machine learning and statistical methods. The principal output is a software application, believed to be the first true 'mass-customisation' solution. The software is compatible with both 2D X-ray and 3D CT data and enables fully automatic and accurate implant size prediction, shape customisation and stiffness matching. It is therefore seen to address the key limitations associated with current implant customisation solutions and will hopefully enable the benefits of customisation to be more widely accessible.
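    The thesis's actual models are not described in the abstract, so the sketch below only illustrates the general idea of mapping anatomical measurements taken from a calibrated X-ray to a discrete implant size; the feature names, size bands, synthetic data, and the choice of a random-forest classifier are all assumptions for illustration.

```python
# Illustrative sketch only: map hypothetical anatomical measurements from a calibrated
# knee X-ray to an implant size class. Features, size bands, and model choice are
# assumptions, not the thesis's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
ml_width = rng.normal(70, 6, n)                     # femoral medio-lateral width, mm (synthetic)
ap_depth = ml_width * 0.85 + rng.normal(0, 2, n)    # antero-posterior depth, mm (synthetic)
X = np.column_stack([ml_width, ap_depth])
# Bin the width into five notional implant sizes (0-4); real systems use vendor sizing charts.
y = np.digitize(ml_width, bins=[62, 67, 72, 77])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out size-prediction accuracy:", clf.score(X_te, y_te))
```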

    AI/ML Algorithms and Applications in VLSI Design and Technology

    Full text link
    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and are thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches previously introduced for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
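    As a concrete illustration of the class of approach such surveys cover (not any specific method from this paper), the sketch below trains a regression surrogate that predicts a path delay from process/voltage/temperature samples instead of re-running full circuit simulation; the delay model and parameter ranges are synthetic assumptions.

```python
# Toy ML surrogate for circuit characterisation: predict a path delay from PVT samples.
# The "ground truth" delay model below is synthetic and only stands in for real data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000
vth_shift = rng.normal(0.0, 0.02, n)   # threshold-voltage variation (V)
vdd = rng.uniform(0.72, 0.88, n)       # supply voltage (V)
temp = rng.uniform(-40, 125, n)        # junction temperature (C)
delay = 1.0 + 8.0 * vth_shift - 0.9 * (vdd - 0.8) + 0.001 * temp + rng.normal(0, 0.01, n)

X = np.column_stack([vth_shift, vdd, temp])
model = GradientBoostingRegressor().fit(X[:1500], delay[:1500])
pred = model.predict(X[1500:])
print("mean absolute error (ns):", np.mean(np.abs(pred - delay[1500:])))
```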

    GTTC Future of Ground Testing Meta-Analysis of 20 Documents

    Get PDF
    National research, development, test, and evaluation ground testing capabilities in the United States are at risk. There is a lack of vision and consensus on what is and will be needed, contributing to a significant threat that ground test capabilities may not be able to meet the national security and industrial needs of the future. To support future decisions, the AIAA Ground Testing Technical Committee's (GTTC) Future of Ground Test (FoGT) Working Group selected and reviewed 20 seminal documents related to the application and direction of ground testing. Each document was reviewed, with its main points collected and organized into sections in the form of a gap analysis: current state, future state, major challenges/gaps, and recommendations. This paper includes key findings and selected commentary by an editing team.

    Network-on-Chip Based H.264 Video Decoder on a Field Programmable Gate Array

    Get PDF
    This thesis develops the first fully network-on-chip (NoC) based H.264 video decoder implemented in real hardware on a field programmable gate array (FPGA). It begins with an overview of the H.264 video coding standard and an introduction to the NoC communication paradigm. Following this, a series of processing elements (PEs) are developed which implement the component algorithms making up the H.264 video decoder. These PEs, described primarily in VHDL with some Verilog and C, are then mapped to an NoC which is generated using the CONNECT NoC generation tool. To demonstrate the scalability of the proposed NoC based design, a second NoC based video decoder is implemented on a smaller FPGA using the same PEs on a more compact NoC topology. The performance of both decoders, as well as their component PEs, is evaluated on real hardware. An analysis of the performance results is conducted, and recommendations for future work are made based on the results of this analysis. Aside from the development of the proposed decoder, a major contribution of this thesis is the release of all source materials for this design as open source hardware and software. The release of these materials will allow other researchers to more easily replicate this work, as well as create derivative works in the areas of NoC based designs for FPGA, video coding and decoding, and related areas.
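    The thesis's NoC is generated by CONNECT and implemented in hardware, but as a rough illustration of the communication paradigm it relies on, the sketch below computes the path a packet takes under deterministic XY routing on a 2D mesh; the mesh size and the PE placement in the example are illustrative, not taken from the thesis or the generated network.

```python
# Minimal sketch of deterministic XY routing on a 2D-mesh NoC: travel along X first,
# then along Y. Coordinates in the example are illustrative PE placements.
def xy_route(src, dst):
    """Return the list of (x, y) routers an XY-routed packet traverses."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:                  # route along the X dimension first...
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                  # ...then along the Y dimension
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

# Example: a PE at (0, 0) sending data to a PE at (2, 1)
print(xy_route((0, 0), (2, 1)))   # [(0, 0), (1, 0), (2, 0), (2, 1)]
```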

    Measurement techniques and instruments suitable for life-prediction testing of photovoltaic arrays

    Get PDF
    Array failure modes, relevant materials property changes, and primary degradation mechanisms are discussed as a prerequisite to identifying suitable measurement techniques and instruments. Candidate techniques and instruments are identified on the basis of extensive reviews of published and unpublished information. These methods are organized into six measurement categories: chemical, electrical, optical, thermal, mechanical, and other physical. Using specified evaluation criteria, the most promising techniques and instruments for use in life-prediction tests of arrays were selected.
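    As a hedged illustration of the kind of life-prediction calculation such measurements would feed (not a method from the report), the sketch below fits a linear degradation rate to periodic maximum-power readings and extrapolates to an assumed 80%-of-initial end-of-life threshold; the data points and the threshold are synthetic.

```python
# Illustrative life-prediction calculation: fit a linear degradation rate to periodic
# maximum-power measurements and extrapolate to an assumed end-of-life threshold.
import numpy as np

years = np.array([0, 1, 2, 3, 4, 5], dtype=float)
p_max = np.array([100.0, 99.2, 98.5, 97.6, 97.0, 96.1])   # normalised module power (%), synthetic

rate, p0 = np.polyfit(years, p_max, 1)      # degradation rate (%/yr) and fitted initial power
eol_threshold = 0.8 * p0                    # assumed end-of-life at 80% of initial power
years_to_eol = (eol_threshold - p0) / rate
print(f"degradation rate: {rate:.2f} %/yr, projected life: {years_to_eol:.1f} yr")
```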

    THE DEVELOPMENT OF A NOVEL ELECTRO-MAGNETIC FORCE MICROSCOPE

    Get PDF
    This thesis describes the development of a new type of Magnetic Force Microscope (MFM) probe based on a unique electromagnetic design. In addition, the design, construction and testing of a new MFM system, complete in both hardware and software, are described. The MFM allowed initial tests on prototypes of the new probe and is intended to provide a base for the integration of future probes. The microscope uses standard MFM micro-cantilever probes in static modes of imaging. A new computer-hosted DSP control system, its software, and its various interfaces with the MFM have been integrated into the system. The system has been tested using standard probes with various specimens, and satisfactory results have been produced. A novel probe has been designed to replace the standard MFM magnetic-coated tip with a field generated about a sub-micron aperture in a conducting film. The field from the new probe is modelled and its imaging capability investigated, with iterative designs analysed in this way. The practical construction of the probe, and the potential problems therein, are also considered. Test apertures have been manufactured, and an image of the field produced during operation is provided in support of the theoretical designs. Future methods of using the new probe are also discussed, including examination of the probe as a magnetic write mechanism. This probe, integrated into the MFM, can provide a new method of microscopic magnetic imaging and, in addition, opens a new potential method of magnetic storage that will require further research.
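    For orientation only, the sketch below shows the basic static-mode conversion from a measured cantilever deflection signal to a force via Hooke's law; the detector calibration factor and spring constant are placeholder values, not parameters from the thesis.

```python
# Static-mode MFM signal to force: deflection (from an assumed detector calibration)
# times the cantilever spring constant. All numbers below are placeholders.
detector_volts = 0.012          # measured deflection signal (V)
volts_to_metres = 25e-9         # assumed detector calibration (m per V)
spring_constant = 0.1           # assumed cantilever stiffness (N/m)

deflection = detector_volts * volts_to_metres
force = spring_constant * deflection        # Hooke's law, F = k * z
print(f"deflection: {deflection * 1e9:.2f} nm, force: {force * 1e12:.2f} pN")
```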

    Hardware Acceleration Using Functional Languages

    Get PDF
    The aim of this thesis is to research how the functional paradigm can be used for hardware acceleration, with an emphasis on data-parallel tasks. The level of abstraction of the traditional hardware description languages, such as VHDL or Verilog, is becoming too low. High-level languages from the domains of software development and modeling, such as C/C++, SystemC or MATLAB, are experiencing a boom in hardware description on the algorithmic or behavioral level. Functional languages cannot match imperative languages in adoption and popularity among programmers, yet they outperform them in many respects, such as verifiability, the ability to capture inherent parallelism, and the compactness of code. Data-parallel tasks are often accelerated on FPGAs, GPUs and multicore processors. In this thesis, we take Accelerate, an existing library for general-purpose GPU programming, and extend it to produce VHDL. Accelerate can be understood as a domain-specific language embedded in Haskell with a backend for the NVIDIA CUDA platform. The extension for high-level synthesis of circuits in VHDL presented in this thesis reuses the same language and frontend and adds a new backend.