
    Temporal features of size constancy for perception and action in a real-world setting: A combined EEG-kinematics study

    A stable representation of object size, in spite of continuous variations in retinal input due to changes in viewing distance, is critical for perceiving and acting in a real 3D world. Indeed, our perceptual and visuo-motor systems exhibit size and grip constancies to compensate for the natural shrinkage of the retinal image with increasing distance. The neural basis of this size-distance scaling remains largely unknown, although multiple lines of evidence suggest that size-constancy operations might take place remarkably early, already at the level of the primary visual cortex. In this study, we examined for the first time the temporal dynamics of size constancy during perception and action, using combined measurement of event-related potentials (ERPs) and kinematics. Participants were asked to maintain their gaze steadily on a fixation point and to perform either a manual estimation or a grasping task towards disks of different sizes placed at different distances. Importantly, the physical size of the target was scaled with distance to yield a constant retinal angle. Meanwhile, we recorded EEG data from 64 scalp electrodes and hand movements with a motion-capture system. We focused on the first positive-going visual evoked component, peaking at approximately 90 ms after stimulus onset. We found earlier latencies and greater amplitudes in response to bigger than to smaller disks of matched retinal size, regardless of the task. In line with the ERP results, manual estimates and peak grip apertures were larger for the bigger targets. We also found task-related differences at later stages of processing in a cluster of central electrodes, whereby the mean amplitude of the P2 component was greater for manual estimation than for grasping. Taken together, these findings provide novel evidence that size constancy for real objects at real distances occurs at the earliest cortical stages and that early visual processing does not change as a function of task demands.
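    To make the component analysis concrete, here is a minimal sketch of how the early positive peak's latency and amplitude could be extracted per condition in MNE-Python. It assumes epoched EEG time-locked to stimulus onset; the file name, the "big"/"small" event labels, the occipital channel picks, and the 70-120 ms search window are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: extract the early positive (P1) peak per condition with MNE-Python.
# File name, event labels, channel picks, and window are illustrative only.
import mne

epochs = mne.read_epochs("size_constancy-epo.fif")  # hypothetical epochs file

for condition in ("big", "small"):
    # Average the trials of one condition into an evoked response
    evoked = epochs[condition].average()
    # Keep occipital channels, where early visual components are largest
    evoked.pick(["O1", "Oz", "O2"])
    # Find the positive peak in a window around ~90 ms post-stimulus
    ch, latency, amplitude = evoked.get_peak(
        tmin=0.07, tmax=0.12, mode="pos", return_amplitude=True
    )
    print(f"{condition}: peak at {latency * 1e3:.0f} ms, "
          f"{amplitude * 1e6:.2f} uV on {ch}")
```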

    Impact of Imaging and Distance Perception in VR Immersive Visual Experience

    Virtual reality (VR) headsets have evolved to offer unprecedented viewing quality. Meanwhile, they have become lightweight, wireless, and low-cost, which has opened them up to new applications and a much wider audience. VR headsets can now provide users with a greater understanding of events and accuracy of observation, making decision-making faster and more effective. However, immersive technologies have seen slow take-up, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and some scepticism towards the "VR advantage". The need therefore arises to evaluate the contribution that a VR system can make to user performance, for example in monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor. In parallel with the evolution of VR headsets, 360° cameras have also evolved, and are now capable of instantly acquiring photographs and videos in stereoscopic 3D (S3D) at very high resolutions. 360° images are innately suited to VR headsets, where the captured view can be observed and explored through natural rotation of the head. Acquired views can even be experienced and navigated from the inside as they are captured. The combination of omnidirectional images and VR headsets has opened up a new way of creating immersive visual representations. We call it: photo-based VR. This represents a new methodology that combines traditional model-based rendering with high-quality omnidirectional texture-mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels, and operator training. The presented PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user performance in today's graphical visual experience, then uses this as a reference to develop and evaluate new photo-based VR solutions. With the current literature on photo-based VR experience and associated user performance being very limited, this study builds new knowledge from the proposed assessments. We conduct five user studies on a few representative applications, examining how visual representations can be affected by system factors (camera- and display-related) and how they can influence human factors (such as realism, presence, and emotions). Particular attention is paid to realistic depth perception, in support of which we develop targeted solutions for photo-based VR. They are intended to provide users with a correct perception of space dimensions and object size. We call it: true-dimensional visualization. The presented work contributes to the unexplored fields of photo-based VR and true-dimensional visualization, offering immersive system designers a thorough understanding of the benefits, potential, and types of applications in which these new methods can make a difference. This thesis manuscript and its findings have been partly presented in scientific publications: five conference papers in Springer and IEEE symposia, [1], [2], [3], [4], [5], and one journal article in an IEEE periodical [6].
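    As an illustration of the omnidirectional texture-mapping on which photo-based VR rests, the sketch below maps a 3D viewing direction to pixel coordinates in an equirectangular 360° image, which is effectively what a headset does for every rendered pixel. This is a generic formulation under a y-up, -z-forward convention, not code from the thesis.

```python
# Sketch: map a viewing direction to pixels of an equirectangular panorama.
# Generic math illustration (y-up, -z forward); not the thesis implementation.
import numpy as np

def direction_to_equirect(d, width, height):
    """Map a 3D view direction to (u, v) pixel coordinates."""
    x, y, z = d / np.linalg.norm(d)
    lon = np.arctan2(x, -z)              # longitude in [-pi, pi]
    lat = np.arcsin(y)                   # latitude in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / np.pi) * (height - 1)
    return u, v

# Looking straight ahead lands in the centre of a 4096x2048 panorama
print(direction_to_equirect(np.array([0.0, 0.0, -1.0]), 4096, 2048))
```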

    Displacement and the Humanities: Manifestos from the Ancient to the Present

    This is the final version, available on open access from MDPI via the DOI in this record. This is a reprint of articles from the Special Issue published online in the open access journal Humanities (ISSN 2076-0787) (available at: https://www.mdpi.com/journal/humanities/special_issues/Manifestos Ancient Present). This volume brings together the work of practitioners, communities, artists and other researchers from multiple disciplines. Seeking to provoke a discourse around displacement within and beyond the field of the Humanities, it positions historical cases and debates, some reaching into the ancient past, within diverse geo-chronological contexts and current world urgencies. In adopting an innovative dialogic structure, between practitioners on the ground - from architects and urban planners to artists - and academics working across subject areas, the volume is a proposition to: remap priorities for current research agendas; open up disciplines, critically analysing their approaches; address the socio-political responsibilities that we have as scholars and practitioners; and provide an alternative site of discourse for contemporary concerns about displacement. Ultimately, this volume aims to provoke future work and collaborations - hence, manifestos - not only in the historical and literary fields, but in wider research concerned with human mobility and the challenges confronting people who are out of place, deprived of rights, protection and belonging.

    Interactive rapid prototyping combining 3D Printing and Augmented Reality

    In the development of new products by industry, a rapid prototyping stage is recommended so that an initial version of the product can be evaluated. In this way, any necessary corrections can be applied while still in the prototyping stage, preventing design errors from reaching the final product. Augmented Reality (AR) and 3D Printing are techniques that have become ubiquitous in recent years due to falling equipment costs. Several works in the area of rapid prototyping have been developed with one of these techniques in isolation; only a few works have tried to unite the two. In this work, we propose a new functional rapid prototyping process, combining 3D Printing and AR to create functional interactive prototypes. This process is accomplished by projecting AR content onto the 3D-printed prototype. The system interprets the user's gestures on the physical prototype, converting clicks and touches into actions to be executed on the AR virtual prototype, making the prototype functional and interactive; a sketch of this mapping follows below. The proposed system is evaluated by means of case studies and the application of the User Experience Questionnaire (UEQ) to users who tested the system. In this way, it is possible to evaluate the relevance of the proposed process.
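    As referenced above, here is a hedged sketch of how such a touch-to-action mapping could work: a tracked fingertip position is tested against interactive regions registered on the physical prototype, and a hit dispatches the corresponding action on the virtual prototype. The region names, geometry, and callback shape are hypothetical, not the paper's implementation.

```python
# Sketch: dispatch fingertip touches on a 3D-printed prototype to actions
# on its AR virtual counterpart. All names and values are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TouchRegion:
    name: str
    center: tuple        # region centre in prototype coordinates (mm)
    radius: float        # touch tolerance (mm)
    action: Callable     # virtual-prototype action to trigger

def dispatch_touch(fingertip, regions):
    """Trigger the action of the first region containing the fingertip."""
    x, y, z = fingertip
    for r in regions:
        cx, cy, cz = r.center
        if ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5 <= r.radius:
            r.action()
            return r.name
    return None

regions = [TouchRegion("power_button", (10.0, 42.0, 5.0), 6.0,
                       lambda: print("AR prototype: toggle power"))]
dispatch_touch((11.0, 40.5, 5.5), regions)  # within 6 mm -> triggers action
```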

    Teleoperation Methods for High-Risk, High-Latency Environments

    In-Space Servicing, Assembly, and Manufacturing (ISAM) can enable larger-scale and longer-lived infrastructure projects in space, with interest ranging from commercial entities to the US government. Servicing, in particular, has the potential to vastly increase the usable lifetimes of satellites. However, the vast majority of spacecraft in low Earth orbit today were not designed to be serviced on-orbit. As such, several of the manipulations required during servicing cannot easily be automated and instead require ground-based teleoperation. Ground-based teleoperation of on-orbit robots brings its own challenges: high-latency communication, with telemetry delays of several seconds, and difficulty visualizing the remote environment due to limited camera views. We explore teleoperation methods to alleviate these difficulties, increase task success, and reduce operator load. First, we investigate a model-based teleoperation interface intended to provide the benefits of direct teleoperation even in the presence of time delay. We evaluate the model-based teleoperation method with professional robot operators, then use feedback from that study to inform the design of a visual planning tool for this task, Interactive Planning and Supervised Execution (IPSE). We describe and evaluate the IPSE system and two interfaces, one 2D using a traditional mouse and keyboard and one 3D using an Intuitive Surgical da Vinci master console. We then describe and evaluate an alternative 3D interface using a Meta Quest head-mounted display. Finally, we describe an extension of IPSE that allows human-in-the-loop planning for a redundant robot. Overall, we find that IPSE improves task success rate and decreases operator workload compared to a conventional teleoperation interface.
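    A minimal sketch of the core idea behind model-based teleoperation under delay follows: the operator's display shows a predicted robot state obtained by replaying the commands still "in flight" on top of the last confirmed telemetry. The constant-velocity joint model and all values are illustrative assumptions, not the IPSE implementation.

```python
# Sketch: predictive display for high-latency teleoperation. The operator
# sees last telemetry plus all unconfirmed commands replayed on a simple
# constant-velocity joint model; values are illustrative, not from IPSE.
import numpy as np

class PredictiveDisplay:
    def __init__(self, n_joints):
        self.telemetry = np.zeros(n_joints)  # last confirmed joint state
        self.pending = []                    # (joint_velocity, duration) sent

    def send_command(self, joint_vel, duration):
        self.pending.append((np.asarray(joint_vel), duration))

    def receive_telemetry(self, joints, n_acked):
        """Robot confirmed execution of the first n_acked pending commands."""
        self.telemetry = np.asarray(joints)
        self.pending = self.pending[n_acked:]

    def predicted_state(self):
        state = self.telemetry.copy()
        for vel, dt in self.pending:         # replay unconfirmed commands
            state = state + vel * dt
        return state

display = PredictiveDisplay(3)
display.send_command([0.1, 0.0, -0.2], 1.0)  # still in flight for seconds
print(display.predicted_state())             # -> [ 0.1  0.  -0.2]
```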

    Breaking Virtual Barriers: Investigating Virtual Reality for Enhanced Educational Engagement

    Virtual reality (VR) is an innovative technology that has regained popularity in recent years. In the field of education, VR has been introduced as a tool to enhance learning experiences. This thesis explores how VR is used from the perspective of educators and learners. The research employed a mixed-methods approach, including surveying and interviewing educators and conducting empirical studies to examine engagement, usability, and user behaviour within VR. The results revealed that educators are interested in using VR for a wide range of scenarios, including thought exercises, virtual field trips, and simulations. However, they face several barriers to incorporating VR into their practice, such as cost, lack of training, and technical challenges. A subsequent empirical study found that virtual reality can no longer be assumed to be more engaging than desktop equivalents: engagement levels were similar in VR and non-VR environments, suggesting that the novelty effect of VR may be less pronounced than previously assumed. A study of a VR mind-mapping artifact, VERITAS, demonstrated that complex interactions are possible on low-cost VR devices, making VR accessible to educators and students. The analysis of user behaviour within this VR artifact showed that quantifiable strategies emerge, contributing to the understanding of how to design collaborative VR experiences. This thesis provides insights into how end-users in the education space perceive and use VR. The findings suggest that while educators are interested in using VR, they face barriers to adoption. The research highlights the need to design VR experiences grounded in existing pedagogy, with careful thought applied to complex interactions, particularly for collaborative experiences. This research contributes to the understanding of the potential of VR in education and provides recommendations for educators and designers seeking to enhance learning experiences using VR.

    LIQUID METAL ANTENNAS FOR WEARABLE DEVICES

    The novelty of this invention lies not in the liquid metal materials themselves but in the use of liquid metals as antennas in a stretchable substrate (e.g., the metal could be injected into a silicone band or used as the conductive core of a thread in a textile band). Liquid metals used as antennas in wearables could significantly increase antenna surface area without introducing problematic points of failure, as liquid metals are self-healing (e.g., recovering from minor puncture damage) and highly adaptable. This concept could be used in any wearable strap and may be particularly useful in watches.

    Current Challenges and Advances in Cataract Surgery

    This reprint focuses on new trials related to cataract surgery, including intraocular lens (IOL) power calculation for cataracts after refractive surgery, problems related to high myopia, and toric IOL power calculation. It also discusses intraoperative use of 3D viewing systems and OCT, studies on spectacle dependence with EDOF IOLs, IOL fixation status and visual function, and dry eye after FLACS, as well as proteomic analysis of aqueous humor proteins.

    Shared-Control Teleoperation Paradigms on a Soft Growing Robot Manipulator

    Semi-autonomous telerobotic systems allow both humans and robots to exploit their strengths, while enabling personalized execution of a task. However, for new soft robots with degrees of freedom dissimilar to those of human operators, it is unknown how control of a task should be divided between the human and the robot. This work presents a set of interaction paradigms between a human and a soft growing robot manipulator, and demonstrates them in both real and simulated scenarios. The robot can grow and retract by eversion and inversion of its tubular body, a property we exploit to implement the interaction paradigms. We implemented and tested six different paradigms of human-robot interaction, beginning with full teleoperation and gradually adding automation to various aspects of the task execution. All paradigms were demonstrated by two expert and two naive operators. Results show that humans and the soft robot manipulator can split control along degrees of freedom while acting simultaneously. In the simple pick-and-place task studied in this work, performance improves as control is gradually given to the robot, because the robot can correct certain human errors. However, human engagement and enjoyment may be maximized when the task is at least partially shared. Finally, when the human operator is assisted by haptic feedback based on soft robot position errors, the improvement in performance is highly dependent on the expertise of the human operator.
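    A hedged sketch of the degree-of-freedom split described above: the human commands some axes while the robot autonomously regulates the rest, and a haptic cue proportional to position error is rendered back to the operator. The growing-robot kinematics are abstracted into a 3D velocity command, and the proportional gains are illustrative assumptions, not the paper's controller.

```python
# Sketch: shared control splitting DOFs between human and robot, with an
# error-based haptic cue. Axis split and gains are illustrative assumptions.
import numpy as np

def shared_control(human_cmd, robot_pos, target, human_axes, k_auto=1.0):
    """Blend human and autonomous velocity commands along disjoint axes."""
    auto_cmd = k_auto * (target - robot_pos)  # simple proportional autonomy
    mask = [axis in human_axes for axis in range(3)]
    return np.where(mask, human_cmd, auto_cmd)

def haptic_feedback(robot_pos, target, k_haptic=2.0):
    """Force cue proportional to position error, rendered to the operator."""
    return k_haptic * (target - robot_pos)

pos = np.array([0.0, 0.1, 0.3])
target = np.array([0.2, 0.0, 0.5])
human = np.array([0.05, -0.02, 0.0])               # operator steers two axes
print(shared_control(human, pos, target, {0, 1}))  # robot fills in axis 2
print(haptic_feedback(pos, target))
```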

    Computer Vision-Based Hand Tracking and 3D Reconstruction as a Human-Computer Input Modality with Clinical Application

    The recent pandemic has prevented patients with hand injuries from meeting in person with their therapists. To address this challenge and improve hand telerehabilitation, we propose two computer vision-based technologies, photogrammetry and augmented reality, as affordable alternatives for visualization and remote monitoring of hand trauma without costly equipment. In this thesis, we extend the application of 3D rendering and virtual reality-based user interfaces to hand therapy. We compare the performance of four popular photogrammetry software packages in reconstructing a 3D model of a synthetic human hand from videos captured with a smartphone, comparing the visual quality, reconstruction time, and geometric accuracy of the output meshes. Reality Capture produces the best result, with an output mesh error as low as 1 mm and a total reconstruction time of 15 minutes. We developed an augmented reality app using MediaPipe algorithms that extract hand key points and finger joint coordinates and angles in real time from hand images or live-streamed media, and conducted a study to investigate its input variability and validity as a reliable tool for remote assessment of finger range of motion. The intraclass correlation coefficient between DIGITS and in-person measurement is 0.767-0.81 for finger extension and 0.958-0.857 for finger flexion. Finally, we developed and surveyed the usability of a mobile application that collects patient data, including medical history, self-reported pain levels, and 3D hand models, and transfers them to therapists. These technologies can improve hand telerehabilitation, aid clinicians in monitoring hand conditions remotely, and support decisions on appropriate therapy, medication, and hand orthoses.
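    Since the thesis builds on MediaPipe's hand-landmark output, the sketch below shows one plausible way to turn detected landmarks into a finger joint angle. The image source and the choice of the index-finger PIP joint (landmarks 5-6-7) are illustrative; this is not the DIGITS app's code.

```python
# Sketch: finger joint angle from MediaPipe hand landmarks. Computes the
# index-finger PIP angle (MCP=5, PIP=6, DIP=7); image source is illustrative.
import cv2
import mediapipe as mp
import numpy as np

def joint_angle(a, b, c):
    """Angle at landmark b (degrees) between segments b->a and b->c."""
    v1 = np.array([a.x - b.x, a.y - b.y, a.z - b.z])
    v2 = np.array([c.x - b.x, c.y - b.y, c.z - b.z])
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)
image = cv2.imread("hand.jpg")  # hypothetical input image
results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
if results.multi_hand_landmarks:
    lm = results.multi_hand_landmarks[0].landmark
    print(f"Index PIP angle: {joint_angle(lm[5], lm[6], lm[7]):.1f} deg")
```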