
    Reprogramming the hand: bridging the craft skills gap in 3D/digital fashion knitwear design

    Designer-makers have integrated a wide range of digital media and tools into their practices, many taking ownership of a specific technology or application and learning how to use it for themselves, often drawing on their experiential knowledge of established practices to do so. To date, there has been little discussion of how digital knitting practice has evolved within this context, possibly because of the complexity of the software, limited access to industrial machinery, and the fact that it seems divorced from the idea of 'craft'. Despite the machine manufacturers' efforts to make knitting technology and software more user-friendly, the digital interface remains a significant barrier for knitwear designer-makers and is generally accessed only via experienced technicians.

    A Parallel Tree code for large Nbody simulation: dynamic load balance and data distribution on CRAY T3D system

    N-body algorithms for long-range unscreened interactions like gravity belong to a class of highly irregular problems whose optimal solution is a challenging task for present-day massively parallel computers. In this paper we describe a strategy for optimal memory and work distribution which we have applied to our parallel implementation of the Barnes & Hut (1986) recursive tree scheme on a Cray T3D using the CRAFT programming environment. We have performed a series of tests to find an "optimal data distribution" in the T3D memory, and to identify a strategy for "Dynamic Load Balance" in order to obtain good performance when running large simulations (more than 10 million particles). The results of the tests show that the step duration depends on two main factors: data locality and T3D network contention. By increasing data locality we are able to minimize the step duration, provided the closest bodies (direct interactions) tend to be located in the same PE's local memory (contiguous block subdivision, high granularity) while the tree properties have a fine-grained distribution. In a very large simulation, network contention gives rise to an unbalanced load. To remedy this we have devised an automatic work-redistribution mechanism which provides good Dynamic Load Balance at the price of an insignificant overhead.
    Comment: 16 pages with 11 figures included (LaTeX, elsart.style). Accepted by Computer Physics Communications
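    The work-redistribution idea in the abstract can be sketched independently of the T3D/CRAFT machinery: measure each processing element's (PE's) step time, infer its effective speed, and reassign body counts proportionally. A minimal sketch, assuming a simple proportional policy; the function name and this policy are illustrative, not the paper's actual mechanism:

```python
def rebalance(counts, times):
    """Reassign per-PE body counts in proportion to measured speed.

    counts: bodies currently held by each processing element (PE)
    times:  measured step duration of each PE for those counts
    Returns a new per-PE body count with the same total.
    """
    total = sum(counts)
    speeds = [c / t for c, t in zip(counts, times)]  # bodies per second
    s = sum(speeds)
    new = [round(total * sp / s) for sp in speeds]
    new[-1] += total - sum(new)  # absorb rounding error in the last PE
    return new
```

    For example, with two PEs holding 100 bodies each where one finishes its step in half the time, the faster PE ends up with roughly twice as many bodies as the slower one.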

    Expert operator's associate: A knowledge based system for spacecraft control

    The Expert Operator's Associate (EOA) project is presented, which studies the applicability of expert systems to day-to-day space operations. A prototype expert system is developed which operates on-line with an existing spacecraft control system at the European Space Operations Centre and functions as an 'operator's assistant' in controlling satellites. The prototype is demonstrated using an existing real-time simulation model of the MARECS-B2 telecommunication satellite. By developing a prototype system, the extent to which the reliability and effectiveness of operations can be enhanced by AI-based support is examined. In addition, the study examines the questions of acquisition and representation of the 'knowledge' for such systems, and the feasibility of 'migration' of some (currently) ground-based functions into future spaceborne autonomous systems.

    On acquisition of programming knowledge

    For the evolving discipline of programming, the acquisition of programming knowledge is a difficult issue. Common knowledge results from the acceptance of proven techniques based on the results of formal inquiries into the nature of the programming process. This is a rather slow process. In addition, the vast body of common knowledge needs to be explicated to a low enough level of detail for it to be represented in machine-processable form. It is felt that this is an impediment to the progress of automatic programming. The importance of formal approaches cannot be overstated, since their contributions lead to quantum leaps in the state of the art.

    A study of the very high order natural user language (with AI capabilities) for the NASA space station common module

    The requirements are identified for a very high order natural language to be used by crew members on board the Space Station. The hardware facilities, databases, real-time processes, and software support are discussed. The operations and capabilities that will be required in both normal (routine) and abnormal (non-routine) situations are evaluated. A structure and syntax for an interface (front-end) language to satisfy the above requirements are recommended.

    A modified parallel tree code for N-body simulation of the Large Scale Structure of the Universe

    N-body codes to perform simulations of the origin and evolution of the Large Scale Structure of the Universe have improved significantly over the past decade, both in terms of the resolution achieved and of the reduction of CPU time. However, state-of-the-art N-body codes hardly allow one to deal with particle numbers larger than a few 10^7, even on the largest parallel systems. In order to allow simulations with larger resolution, we have first reconsidered the grouping strategy described in Barnes (1990) (hereafter B90) and applied it, with some modifications, to our WDSH-PT (Work and Data SHaring - Parallel Tree) code. In the first part of this paper we give a short description of the code, which adopts the Barnes and Hut (1986) algorithm (hereafter BH), and in particular of the memory and work distribution strategy applied to describe the data distribution on a CC-NUMA machine like the CRAY-T3E system. In the second part of the paper we describe the modification to the Barnes grouping strategy we have devised to improve the performance of the WDSH-PT code. We use the property that nearby particles have similar interaction lists. This idea was checked in B90, where an interaction list is built that applies everywhere within a cell C_group containing a small number of particles N_crit; B90 reuses this interaction list for each particle p ∈ C_group in the cell in turn. We assume each particle p to have the same interaction list. Thus it has been possible to reduce the CPU time and increase performance. This allows us to run simulations with a large number of particles (N ~ 10^7-10^9) in non-prohibitive times.
    Comment: 13 pages and 7 figures
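    The shared interaction-list idea can be sketched in one dimension: build a single list of accepted far cells and near bodies for a small group of particles, then reuse that same list for every particle in the group. The flat cell representation, the simplified opening criterion, and all names below are illustrative assumptions, not the WDSH-PT data structures:

```python
def interaction_list(group_center, cells, theta=0.5):
    """Build one interaction list for a whole group of particles.

    Each cell is a dict with its centre of mass 'com', total 'mass',
    side length 'size', and a list of (position, mass) 'bodies'.
    A cell well separated from the group is accepted as a single
    pseudo-particle; otherwise its bodies enter the list directly.
    Assumes no cell's centre of mass coincides with group_center.
    """
    accepted = []
    for c in cells:
        dist = abs(c["com"] - group_center)
        if c["size"] / dist < theta:            # far: use cell's COM
            accepted.append((c["com"], c["mass"]))
        else:                                   # near: direct interactions
            accepted.extend(c["bodies"])
    return accepted

def group_accel(positions, ilist, G=1.0, eps=1e-3):
    """Reuse one interaction list for every particle in the group
    (1D gravity with a softening term eps)."""
    return [sum(G * m * (p - x) / (abs(p - x) ** 3 + eps)
                for p, m in ilist if p != x)
            for x in positions]
```

    The saving comes from the tree walk: it is performed once per group of up to N_crit particles rather than once per particle, at the cost of a slightly more conservative acceptance criterion.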

    Towards participatory design of social robots
