
    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for systems designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA, increase crew productivity through the reduction of routine operations, increase Space Station autonomy, and augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Study to design and develop remote manipulator system

    Modeling of human performance in remote manipulation tasks is reported, using automated procedures in which computers analyze and count motions during a manipulation task. Performance is monitored by an on-line computer capable of measuring the joint angles of both master and slave and, in some cases, the trajectory and velocity of the hand itself. In this way the operator's strategies with different transmission delays, displays, tasks, and manipulators can be analyzed in detail for comparison. Some progress is described in obtaining a set of standard tasks and difficulty measures for evaluating manipulator performance.
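The kind of on-line trajectory and velocity monitoring described above can be illustrated with a minimal sketch. The sampling rate, the sinusoidal toy trajectory, and all variable names here are assumptions for illustration only, not the study's actual instrumentation or analysis code:

```python
import numpy as np

# Hypothetical hand positions (metres) sampled at a fixed rate, as an
# on-line monitor might log them from the slave manipulator.
dt = 0.01                                  # 100 Hz sampling interval (s)
t = np.arange(0.0, 1.0, dt)
pos = np.column_stack([t, np.sin(t), np.zeros_like(t)])  # toy 3D trajectory

# Central-difference velocity estimate at each sample, and its magnitude.
vel = np.gradient(pos, dt, axis=0)
speed = np.linalg.norm(vel, axis=1)

# Total path length travelled: one simple performance/difficulty measure
# that could be compared across transmission delays or displays.
path_length = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))
```

Measures such as `path_length` or the distribution of `speed` could then be compared across conditions (delays, displays, manipulators) for the same standard task.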

    Voice and Touch Diagrams (VATagrams): Diagrams for the Visually Impaired

    If a picture is worth a thousand words, would you rather read two pages of text or simply view the image? Most would choose to view the image; however, for the visually impaired this isn’t always an option. Diagrams assist people in visualizing relationships between objects, and most often act as a source for quickly referencing information about those relationships. Diagrams are highly visual, and as such there are few tools to support diagram creation for visually impaired individuals. To allow the visually impaired to share the same advantages in school and work as sighted colleagues, an accessible diagram tool is needed. A suitable tool for the visually impaired to create diagrams should allow these individuals to:
1. easily define the type of relationship-based diagram to be created,
2. easily create the components of a relationship-based diagram,
3. easily modify the components of a relationship-based diagram,
4. quickly understand the structure of a relationship-based diagram,
5. create a visual representation which can be used by the sighted, and
6. easily access reference points for tracking diagram components.
To this end, a series of prototypes of such a tool were developed that allow visually impaired users to read, create, modify and share relationship-based diagrams using sound and gestural touches. This was accomplished by creating a series of applications that run on an iPad using an overlay that restricts the areas in which a user can perform gestures. The prototypes were tested for usability using measures of efficiency, effectiveness and satisfaction, with visually impaired, blindfolded and sighted participants. The results of the evaluation indicate that the prototypes contain the main building blocks of a fully functioning iPad application.

    Do we enjoy what we sense and perceive? A dissociation between aesthetic appreciation and basic perception of environmental objects or events

    This integrative review rearticulates the notion of human aesthetics by critically appraising the conventional definitions, offering a new, more comprehensive definition, and identifying the fundamental components associated with it. It intends to advance a holistic understanding of the notion by differentiating aesthetic perception from basic perceptual recognition, and by characterizing these concepts from the perspective of information processing in both visual and nonvisual modalities. To this end, we analyze the dissociative nature of information processing in the brain, introducing a novel local-global integrative model that differentiates aesthetic processing from basic perceptual processing. This model builds on the current state of the art in visual aesthetics as well as newer propositions about nonvisual aesthetics. The model comprises two analytic channels: an aesthetics-only channel and a perception-to-aesthetics channel. The aesthetics-only channel primarily involves restricted local processing for quality or richness analysis (e.g., attractiveness, beauty/prettiness, elegance, sublimeness, catchiness, hedonic value), whereas the perception-to-aesthetics channel involves global/extended local processing for basic feature analysis, followed by restricted local processing for quality or richness analysis. We contend that aesthetic processing operates independently of basic perceptual processing, but not independently of cognitive processing. We further conjecture that there might be a common faculty, labeled the aesthetic cognition faculty, in the human brain for all sensory aesthetics, albeit other parts of the brain can also be activated by basic sensory processing prior to aesthetic processing, particularly during the operation of the second channel. This generalized model can account not only for simple and pure aesthetic experiences but for partial and complex aesthetic experiences as well.

    Towards an understanding of musical intelligence as a framework for learning to read and play piano notation

    The writing of this thesis was born from concern over the difficulties observed among some of the researcher’s piano students in learning to read and play notation simultaneously. It aims to define and develop an accessible and understandable framework for musical intelligence that might support piano teachers in their practice, in particular with the teaching of the simultaneous reading and playing of notation on the piano. Inspired by Gardner’s concept of multiple intelligences (2004), the thesis also argues for different musical learning strengths, and suggests that musical intelligence is underpinned by aural intelligence. A literature search was conducted to determine whether or not a definition of musical intelligence existed beyond the work of Gardner (2004), whose chapter in Frames of Mind: The Theory of Multiple Intelligences (2004) offered no suggestion of how his concept of musical intelligence could be applied to learning to play a musical instrument, in particular with notation. The next step towards defining musical intelligence was to explore the real world of a sample of piano teachers, to try to ascertain: a) their definition of musical intelligence, and b) how they taught their students to read and play notation. In terms of the latter, understanding the participants’ teaching practices might help the researcher to improve their own practice and therefore the learning outcomes of their students. In addition, this step could serve to suggest what the teachers believed to be important to a musical education, particularly in terms of learning to read and play notation, from which it could perhaps be inferred that their implicit definition of musical intelligence guided their teaching. The literature and the data were then used to inform and create a framework for musical intelligence, with a focus on learning to read and play notation simultaneously on the piano.
The work of Gardner (2004), Gordon (1993) and Dweck (2016) has provided the foundation for the theoretical framework for this study. A summary of the study’s key findings follows:
- Reading and playing notation simultaneously on the piano is complex, and is underpinned by strong proprioceptive, kinaesthetic and tactile skills, a reliable musical-spatial intelligence and, above all, a strong aural intelligence.
- All the study teachers believed, whether implicitly or explicitly, that it was important to be able to read and play from notation; this was therefore inferred to underpin part of their definition of musical intelligence.
- Musicianship was also regarded by the teacher participants as a central part of musical intelligence, and therefore the interpersonal and intrapersonal intelligences put forward by Gardner (2004) also form part of musical intelligence.
- Students appear to demonstrate different musical strengths, generally either an ability to read notation or an ability to play by ear and learn by rote; both therefore need to be equally developed during music education.
- The teachers demonstrated a lack of understanding of how some individuals are able to learn lengthy pieces of repertoire by ear, which seems to lead to a lack of confidence in introducing aural learning in piano lessons. This was also evident from their general lack of awareness of pedagogical research.
- Mental strategies for learning to read and play simultaneously were not understood or used by most of the teachers.
- Some teachers demonstrated an entity theory of intelligence.
- Many of the teachers had engaged in continuing professional development.
- A conclusive definition of musical intelligence is elusive; however, it could be argued to be underpinned by the ability to think in sound and be at one with the instrument, thus requiring solid aural, proprioceptive, kinaesthetic, tactile and musical-spatial intelligences, together with strong musicianship, as well as the interpersonal and intrapersonal elements of Gardner’s (2004) work, gathered into a deep understanding of the craft of playing an instrument, here the piano. The words that incorporate all of these elements of musical intelligence are ‘deep engagement and understanding’, in the same way that the Puluwat sailors demonstrate in their craft (see Glossary, p.379), but the ear rests at the heart of musical intelligence. A more comprehensive definition, based on a synthesis of the literature, the teachers’ beliefs and the researcher’s inferences and interpretations, can be found in Appendix 22, ‘A Framework for Musical Intelligence’.

    Multisensory mechanisms of body ownership and self-location

    Having an accurate sense of the spatial boundaries of the body is a prerequisite for interacting with the environment and is thus essential for the survival of any organism with a central nervous system. Every second, our brain receives a staggering amount of information from the body across different sensory channels, each of which features a certain degree of noise. Despite the complexity of the incoming multisensory signals, the brain manages to construct and maintain a stable representation of our own body and its spatial relationships to the external environment. This natural “in-body” experience is such a fundamental subjective feeling that most of us take it for granted. However, patients with lesions in particular brain areas can experience profound disturbances in their normal sense of ownership over their body (somatoparaphrenia) or lose the feeling of being located inside their physical body (out-of-body experiences), suggesting that our “in-body” experience depends on intact neural circuitry in the temporal, frontal, and parietal brain regions. The question at the heart of this thesis relates to how the brain combines visual, tactile, and proprioceptive signals to build an internal representation of the bodily self in space. Over the past two decades, perceptual body illusions have become an important tool for studying the mechanisms underlying our sense of body ownership and self-location. The most influential of these illusions is the rubber hand illusion, in which ownership of an artificial limb is induced via the synchronous stroking of a rubber hand and an individual’s hidden real hand. Studies of this illusion have shown that multisensory integration within the peripersonal space is a key mechanism for bodily self-attribution. 
In Study I, we showed that the default sense of ownership of one’s real hand, not just the sense of rubber hand ownership, also depends on spatial and temporal multisensory congruence principles implemented in fronto-parietal brain regions. In Studies II and III, we characterized two novel perceptual illusions that provide strong support for the notion that multisensory integration within the peripersonal space is intimately related to the sense of limb ownership, and we examined the role of vision in this process. In Study IV, we investigated a full-body version of the rubber hand illusion—the “out-of-body illusion”—and showed that it can be used to induce predictable changes in one’s sense of self-location and body ownership. Finally, in Study V, we used the out-of-body illusion to “perceptually teleport” participants during brain imaging and to identify activity patterns specific to the sense of self-location in a given position in space. Together, these findings shed light on the role of multisensory integration in building the experience of the bodily self in space and provide initial evidence for how representations of body ownership and self-location interact in the brain.

    A Framework for Tumor Localization in Robot-Assisted Minimally Invasive Surgery

    Manual palpation of tissue is frequently used in open surgery, e.g., for localization of tumors and buried vessels and for tissue characterization. The overall objective of this work is to explore how tissue palpation can be performed in Robot-Assisted Minimally Invasive Surgery (RAMIS) using laparoscopic instruments conventionally used in RAMIS. This thesis presents a framework in which a surgical tool is moved teleoperatively in a manner analogous to the repetitive pressing motion of a finger during manual palpation. Changes in parameters due to this motion, such as the applied force and the resulting indentation depth, are interpreted to accurately determine the variation in tissue stiffness. This approach requires the sensorization of the laparoscopic tool for force sensing. In our work, we used a da Vinci needle driver that was sensorized for force sensing in our lab at CSTAR using Fiber Bragg Gratings (FBG). A computer vision algorithm was developed for 3D surgical tool-tip tracking using the da Vinci's stereo endoscope, enabling measurement of the changes in surface indentation that result from pressing the needle driver on the tissue. The proposed palpation framework is based on the hypothesis that the indentation depth is inversely proportional to the tissue stiffness when a constant pressing force is applied. This was validated in a telemanipulated setup using the da Vinci surgical system with a phantom in which artificial tumors were embedded to represent areas of different stiffnesses. While a maximum force of 8 N was maintained during robot-assisted palpation, the high-stiffness regions representing tumors and the low-stiffness regions representing healthy tissue showed average indentation depths of 5.19 mm and 10.09 mm, respectively. These indentation depth variations were then distinguished using the k-means clustering algorithm to classify groups of low and high stiffness, and the results were presented in a colour-coded map.
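The classification step described above can be sketched with a minimal 1-D k-means over indentation depths. The depth values below are invented toy data chosen to resemble the reported averages (about 5 mm for stiff tumor regions, about 10 mm for softer healthy tissue); this is an illustration of the technique, not the thesis's actual implementation:

```python
import numpy as np

# Hypothetical indentation depths (mm) from repeated presses at a constant
# force: under the framework's hypothesis, depth ~ force / stiffness, so
# small depths indicate stiff (tumor-like) regions and large depths
# indicate softer healthy tissue.
depths = np.array([5.1, 5.3, 9.8, 10.2, 5.2, 10.1, 9.9, 5.0])

# Simple k-means with k=2: alternate nearest-centroid assignment and
# centroid update, starting from the extreme depth values.
centroids = np.array([depths.min(), depths.max()], dtype=float)
for _ in range(20):
    labels = np.argmin(np.abs(depths[:, None] - centroids[None, :]), axis=1)
    centroids = np.array([depths[labels == j].mean() for j in (0, 1)])

# Cluster 0 (smaller mean depth) marks high stiffness; cluster 1 marks
# low stiffness. The per-point labels could then drive a colour-coded map.
print(centroids)  # roughly [5.15, 10.0]
```

In practice a library routine such as scikit-learn's `KMeans` would serve the same purpose; the hand-rolled loop is shown only to make the assignment/update alternation explicit.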
The unique feature of this framework is its use of a conventional laparoscopic tool and minimal re-design of the existing da Vinci surgical setup. Additional work includes a vision-based algorithm for tracking the motion of the tissue surface, such as that of the lung resulting from respiratory and cardiac motion. The extracted motion information was analyzed to characterize lung tissue stiffness based on the lateral strain variations as the surface inflates and deflates.

    Summer Institute in Biomedical Engineering, 1973

    Bioengineering of medical equipment is detailed. Equipment described includes: an environmental control system for a surgical suite; surface potential mapping for an electrode system; the use of speech-modulated white noise to differentiate hearers and feelers among the profoundly deaf; the design of an automatic weight scale for an isolette; and an internal tibial torsion correction study. Graphs and charts are included with design specifications of this equipment.