292 research outputs found

    Aerospace Medicine and Biology. A continuing bibliography with indexes

    Get PDF
    This bibliography lists 244 reports, articles, and other documents introduced into the NASA scientific and technical information system in February 1981. Aerospace medicine and aerobiology topics are included, with listings for physiological factors, astronaut performance, control theory, artificial intelligence, and cybernetics

    Quantification of neural substrates of vergence system via fMRI

    Get PDF
    Vergence eye movement is one of the oculomotor systems that allow depth perception via disconjugate movement of the eyes. Neuroimaging methods such as functional magnetic resonance imaging (fMRI) measure changes in neural activity in the brain while subjects perform experimental tasks. A rich body of primate investigations on vergence is already established in the neurophysiology literature; in contrast, there are only a limited number of fMRI studies on the neural mechanisms behind the vergence system. Using a simple tracking experiment, the results demonstrated that the vergence system shares neural sources with the saccadic (rapid conjugate eye movement) system but also shows differentiation within the frontal eye fields (FEF) and the midbrain of the brainstem. Functional activity within the FEF was located anterior to the saccadic functional activity (z > 2.3; p < 0.03). Functional activity within the midbrain was observed for the vergence task, but not for the saccade data set. A novel memory-guided vergence experiment also showed a relationship between the posterior parahippocampal area and memory, with two other experiments implemented to compare memory load in this region; a significant percent change in functional activity was observed for the posterior parahippocampal area. Furthermore, Granger-causality analysis revealed increased interconnectivity for vergence tasks. When prediction was involved, the increase in the number of causal interactions was statistically significant (p < 0.05), and the comparison of the number of influences between the prediction-evoked and simple tracking vergence tasks was also statistically significant (p < 0.0001). Another result of this dissertation was the application of hierarchical independent component analysis to the fronto-parietal and cerebellar components within saccade and vergence tasks. Interestingly, the cerebellar component showed delayed latency in the group-level signal compared to the fronto-parietal group-level signals, which was evaluated to determine why segregation existed between the components obtained from independent component analysis. Lastly, region of interest (ROI) based analysis, compared with global (whole-brain) analysis, yielded more sensitive results in frontal, parietal, brainstem, and occipital areas at both individual and group levels. Overall, the purpose of this dissertation was to investigate the neural control of vergence movements by (1) spatially mapping vergence-induced functional activity and (2) applying different signal processing methods to quantify neural correlates of the vergence system at the levels of causal functional connectivity, underlying sources, and regions of interest (ROI). It was concluded that quantification of vergence movements via fMRI can build a synergy with behavioral investigations and may also shed light on neural differentiation between healthy individuals and patients with neural dysfunctions and injuries by serving as a biomarker
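    The abstract above counts causal interactions between brain regions with Granger-causality analysis. As a rough illustration of that idea only, and not the dissertation's pipeline, the sketch below runs a pairwise Granger-causality test between two simulated ROI time series using statsmodels; the ROI names, the signal model, and the lag order are assumptions.

```python
# Minimal sketch: does one ROI time series Granger-cause another?
# The "FEF" and "midbrain" signals are simulated placeholders, not real fMRI data.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n_vols = 200                        # number of fMRI volumes (assumed)
fef = rng.standard_normal(n_vols)   # placeholder FEF ROI signal
mid = np.zeros(n_vols)              # placeholder midbrain ROI signal
for t in range(1, n_vols):          # make 'mid' depend on lagged 'fef'
    mid[t] = 0.6 * fef[t - 1] + 0.2 * mid[t - 1] + rng.standard_normal()

# Tests whether the second column (fef) Granger-causes the first column (mid).
data = np.column_stack([mid, fef])
results = grangercausalitytests(data, maxlag=2)
for lag, res in results.items():
    f_stat, p_val = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f_stat:.2f}, p = {p_val:.4f}")
```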

    BrickPal: Augmented Reality-based Assembly Instructions for Brick Models

    Full text link
    The assembly instruction is a mandatory component of Lego-like brick sets. The conventional production of assembly instructions requires a considerable amount of manual fine-tuning, which is intractable for casual users and customized brick sets. Moreover, traditional paper-based instructions lack expressiveness and interactivity. To tackle these two problems, we present BrickPal, an augmented reality-based system that visualizes assembly instructions in an augmented reality head-mounted display. It utilizes Natural Language Processing (NLP) techniques to generate plausible assembly sequences and provides real-time guidance in the AR headset. Our user study demonstrates BrickPal's effectiveness at assisting users in brick assembly compared to traditional assembly methods. Additionally, the NLP algorithm-generated assembly sequences achieve the same usability as manually adapted sequences. Comment: 9 pages, 7 figures. Project URL: https://origami.dance/brickpa
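    The abstract says assembly sequences are generated with NLP techniques. Purely as an analogy, and not the paper's actual algorithm, the toy sketch below treats brick placements like word tokens: it learns bigram statistics from invented example sequences and greedily proposes an order for a new brick set.

```python
# Toy "language model" over brick IDs; all brick names and sequences are invented.
from collections import Counter, defaultdict

training_sequences = [                      # hypothetical previously built models
    ["base", "pillar", "pillar", "deck", "roof"],
    ["base", "pillar", "deck", "railing", "roof"],
]

bigram_counts = defaultdict(Counter)
for seq in training_sequences:
    for prev, nxt in zip(seq, seq[1:]):
        bigram_counts[prev][nxt] += 1

def propose_order(remaining, start="base"):
    """Greedily pick the most frequent follower among bricks still unplaced."""
    remaining = Counter(remaining)
    remaining[start] -= 1
    order, current = [start], start
    while +remaining:                       # +Counter drops zero/negative counts
        candidates = [(bigram_counts[current][b], b)
                      for b, c in remaining.items() if c > 0]
        _, current = max(candidates)        # ties broken arbitrarily by name
        order.append(current)
        remaining[current] -= 1
    return order

print(propose_order(["base", "pillar", "pillar", "deck", "roof"]))
```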

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 145

    Get PDF
    This bibliography lists 301 reports, articles, and other documents introduced into the NASA scientific and technical information system in August 1975

    Varieties of Attractiveness and their Brain Responses

    Get PDF

    Science of Facial Attractiveness

    Get PDF

    Hand eye coordination in surgery

    Get PDF
    The coordination of the hand in response to visual target selection has always been regarded as an essential quality in a range of professional activities. This quality has thus far been elusive to objective scientific measurement, and is usually engulfed in the overall performance of the individual. Parallels can be drawn to surgery, especially Minimally Invasive Surgery (MIS), where the physical constraints imposed by the arrangements of the instruments and visualisation methods require coordination skills that are unprecedented. With the current paradigm shift towards early specialisation in surgical training and shortened, focused training time, the selection process should identify trainees with the highest potential in certain specific skills. Although significant effort has been made in the objective assessment of surgical skills, it is currently only possible to measure surgeons’ abilities at the time of assessment. It has been particularly difficult to quantify specific details of hand-eye coordination and to assess innate ability for future skills development. The purpose of this thesis is to examine hand-eye coordination in laboratory-based simulations, with a particular emphasis on details that are important to MIS. In order to understand the challenges of visuomotor coordination, movement trajectory errors have been used to provide an insight into the innate coordinate mapping of the brain. In MIS, novel spatial transformations, due to a combination of distorted endoscopic image projections and the “fulcrum” effect of the instruments, accentuate movement generation errors. Obvious differences in the quality of movement trajectories have been observed between novices and experts in MIS; however, these are difficult to measure quantitatively. A Hidden Markov Model (HMM) is used in this thesis to reveal the underlying characteristic movement details of a particular MIS manoeuvre and how such features are exaggerated by the introduction of rotation in the endoscopic camera. The proposed method has demonstrated the feasibility of measuring movement trajectory quality by machine learning techniques without prior arbitrary classification of expertise. Experimental results have highlighted these changes in novice laparoscopic surgeons, even after a short period of training. How the intricate relationship between the hands and the eyes changes when learning a skilled visuomotor task has been previously studied. Reactive eye movement, when visual input is used primarily as a feedback mechanism for error correction, implies difficulties in hand-eye coordination. As the brain learns to adapt to this new coordinate map, eye movements become predictive of the action generated. The concept of measuring this spatiotemporal relationship is introduced as a measure of hand-eye coordination in MIS, by comparing the Target Distance Function (TDF) between the eye fixation and the instrument tip position on the laparoscopic screen. Further validation of this concept using high-fidelity experimental tasks is presented, where higher cognitive influence and multiple target selection increase the complexity of the data analysis. To this end, Granger-causality is presented as a measure of the predictability of the instrument movement from the eye fixation pattern. Partial Directed Coherence (PDC), a frequency-domain variation of Granger-causality, is used for the first time to measure hand-eye coordination. Experimental results are used to establish the strengths and potential pitfalls of the technique. To further enhance the accuracy of this measurement, a modified Jensen-Shannon Divergence (JSD) measure has been developed to enhance the signal matching algorithm and trajectory segmentation. The proposed framework incorporates filtering of high-frequency noise, which represents non-purposeful hand and eye movements. The accuracy of the technique has been demonstrated by quantitative measurement of multiple laparoscopic tasks performed by expert and novice surgeons. Experimental results supporting visual search behavioural theory are presented, as this underpins the target selection process immediately prior to visuomotor action generation. The effects of specialisation and experience on visual search patterns are also examined. Finally, pilot results from functional brain imaging are presented, where Posterior Parietal Cortical (PPC) activation is measured using optical spectroscopy techniques. The PPC has been shown to be involved in the calculation of coordinate transformations between the visual and motor systems, which establishes the possibility of exciting future studies in hand-eye coordination
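    Since the thesis applies Partial Directed Coherence (PDC), a frequency-domain variant of Granger-causality, to ask whether eye fixation predicts instrument movement, a minimal sketch of the PDC computation is given below. It fits a vector autoregressive model with statsmodels and applies the standard PDC normalisation; the simulated eye/tool signals, model order, and frequency grid are assumptions, not the thesis's data or parameters.

```python
# Sketch of PDC from a fitted MVAR model; the gaze/tool signals are simulated.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 1000
eye = np.zeros(n)      # hypothetical eye-fixation x-coordinate on the screen
tool = np.zeros(n)     # hypothetical instrument-tip x-coordinate on the screen
for t in range(1, n):  # simulate the eye leading the tool (predictive gaze)
    eye[t] = 0.5 * eye[t - 1] + rng.standard_normal()
    tool[t] = 0.7 * eye[t - 1] + 0.2 * tool[t - 1] + rng.standard_normal()

data = np.column_stack([eye, tool])
order = 3                               # MVAR model order, chosen arbitrarily here
coefs = VAR(data).fit(order).coefs      # shape (order, 2, 2): x_t = sum_r A_r x_{t-r}

def pdc(coefs, freqs):
    """PDC from channel j to channel i: |A_ij(f)| / sqrt(sum_k |A_kj(f)|^2)."""
    p, k, _ = coefs.shape
    out = np.empty((len(freqs), k, k))
    for fi, f in enumerate(freqs):
        A = np.eye(k, dtype=complex)
        for r in range(p):
            A -= coefs[r] * np.exp(-2j * np.pi * f * (r + 1))
        out[fi] = np.abs(A) / np.sqrt((np.abs(A) ** 2).sum(axis=0, keepdims=True))
    return out

freqs = np.linspace(0.0, 0.5, 64)       # normalised frequency (cycles/sample)
vals = pdc(coefs, freqs)
print("mean PDC eye->tool:", vals[:, 1, 0].mean())
print("mean PDC tool->eye:", vals[:, 0, 1].mean())
```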

    MultiIoT: Towards Large-scale Multisensory Learning for the Internet of Things

    Full text link
    The Internet of Things (IoT), the network integrating billions of smart physical devices embedded with sensors, software, and communication technologies for the purpose of connecting and exchanging data with other devices and systems, is a critical and rapidly expanding component of our modern world. The IoT ecosystem provides a rich source of real-world modalities such as motion, thermal, geolocation, imaging, depth, sensors, video, and audio for prediction tasks involving the pose, gaze, activities, and gestures of humans as well as the touch, contact, pose, and 3D of physical objects. Machine learning presents a rich opportunity to automatically process IoT data at scale, enabling efficient inference for impact in understanding human wellbeing, controlling physical devices, and interconnecting smart cities. To develop machine learning technologies for IoT, this paper proposes MultiIoT, the most expansive IoT benchmark to date, encompassing over 1.15 million samples from 12 modalities and 8 tasks. MultiIoT introduces unique challenges involving (1) learning from many sensory modalities, (2) fine-grained interactions across long temporal ranges, and (3) extreme heterogeneity due to unique structure and noise topologies in real-world sensors. We also release a set of strong modeling baselines, spanning modality- and task-specific methods to multisensory and multitask models, to encourage future research in multisensory representation learning for IoT
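    To make the multisensory, multitask setting concrete, here is an illustrative late-fusion model in PyTorch with one encoder per modality and one head per task. It is only a sketch of the general architecture class such a benchmark targets; the modality names, feature dimensions, and task sizes are invented, and this is not the released MultiIoT baseline code.

```python
import torch
import torch.nn as nn

class LateFusionMultitask(nn.Module):
    """One small encoder per modality, concatenated features, one head per task."""
    def __init__(self, modality_dims, task_classes, hidden=64):
        super().__init__()
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
            for name, dim in modality_dims.items()
        })
        fused_dim = hidden * len(modality_dims)
        self.heads = nn.ModuleDict({
            task: nn.Linear(fused_dim, n_cls) for task, n_cls in task_classes.items()
        })

    def forward(self, inputs):
        # encode each modality in a fixed order, concatenate, then score every task
        feats = [enc(inputs[name]) for name, enc in self.encoders.items()]
        fused = torch.cat(feats, dim=-1)
        return {task: head(fused) for task, head in self.heads.items()}

# invented modalities and tasks, for illustration only
model = LateFusionMultitask({"imu": 6, "audio": 128}, {"activity": 10, "gesture": 5})
batch = {"imu": torch.randn(4, 6), "audio": torch.randn(4, 128)}
print({task: logits.shape for task, logits in model(batch).items()})
```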

    A bird’s eye view on turbulence: Seabird foraging associations with evolving surface flow features

    Get PDF
    Lieber L, Langrock R, Nimmo-Smith WAM. A bird's-eye view on turbulence: seabird foraging associations with evolving surface flow features. Proceedings of the Royal Society B: Biological Sciences. 2021;288(1949): 20210592. Understanding physical mechanisms underlying seabird foraging is fundamental to predict responses to coastal change. For instance, turbulence in the water arising from natural or anthropogenic structures can affect foraging opportunities in tidal seas. Yet, identifying ecologically important localized turbulence features (e.g. upwellings of approximately 10–100 m) is limited by observational scale, and this knowledge gap is magnified in volant predators. Here, using a drone-based approach, we present the tracking of surface-foraging terns (143 trajectories belonging to three tern species) and dynamic turbulent surface flow features in synchrony. We thereby provide the earliest evidence that localized turbulence features can present physical foraging cues. Incorporating evolving vorticity and upwelling features within a hidden Markov model, we show that terns were more likely to actively forage as the strength of the underlying vorticity feature increased, while conspicuous upwellings ahead of the flight path presented a strong physical cue to stay in transit behaviour. This clearly encapsulates the importance of prevalent turbulence features as localized foraging cues. Our quantitative approach therefore offers the opportunity to unlock knowledge gaps in seabird sensory and foraging ecology on hitherto unobtainable scales. Finally, it lays the foundation to predict responses to coastal change to inform sustainable ocean development
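    The paper's core analysis embeds flow-feature covariates in a hidden Markov model of tern movement. As a much-reduced sketch of the behavioural-state idea only, the code below fits a plain two-state Gaussian HMM to simulated step lengths with hmmlearn, separating long "transit" steps from short "forage" steps; the covariate-dependent state dynamics the paper uses are not included, and all data are invented.

```python
# Two-state Gaussian HMM on simulated step lengths (not the paper's model or data).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)
transit = rng.normal(20.0, 3.0, size=300)   # long steps per fix (hypothetical, m)
forage = rng.normal(4.0, 1.5, size=200)     # short, tortuous steps (hypothetical, m)
steps = np.concatenate([transit, forage, transit]).reshape(-1, 1)

hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100, random_state=0)
hmm.fit(steps)
states = hmm.predict(steps)                 # most likely behavioural state per fix
print("state means:", hmm.means_.ravel())
print("decoded states (first/last 5):", states[:5], states[-5:])
```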