119 research outputs found
Humanistic Computing: WearComp as a New Framework and Application for Intelligent Signal Processing
Humanistic computing is proposed as a new signal processing framework in which the processing apparatus is inextricably intertwined with the natural capabilities of our human body and mind. Rather than trying to emulate human intelligence, humanistic computing recognizes that the human brain is perhaps the best neural network of its kind, and that there are many new signal processing applications (within the domain of personal technologies) that can make use of this excellent but often overlooked processor. The emphasis of this paper is on personal imaging applications of humanistic computing, taking a first step toward an intelligent wearable camera system that can allow us to effortlessly capture our day-to-day experiences, help us remember and see better, provide personal safety through crime reduction, and facilitate new forms of communication through collective connected humanistic computing. The author's wearable signal processing hardware, which began as a cumbersome backpack-based photographic apparatus in the 1970s and evolved into a clothing-based apparatus in the early 1980s, currently provides the computational power of a UNIX workstation concealed within ordinary-looking eyeglasses and clothing. Thus it may be worn continuously during all facets of ordinary day-to-day living, so that, through long-term adaptation, it begins to function as a true extension of the mind and body.
Review on Augmented Reality in Oral and Cranio-Maxillofacial Surgery: Toward 'Surgery-Specific' Head-Up Displays
In recent years, there has been increasing interest in augmented reality as applied to the surgical field. We conducted a systematic review of the literature classifying augmented reality applications in oral and cranio-maxillofacial surgery (OCMS), in order to pave the way to future solutions that may ease the adoption of AR guidance in surgical practice. Publications containing the terms 'augmented reality' AND 'maxillofacial surgery', and the terms 'augmented reality' AND 'oral surgery', were searched in the PubMed database. Through the selected studies, we performed a preliminary breakdown according to general aspects, such as surgical subspecialty, year of publication and country of research; then, a more specific breakdown was provided according to technical features of AR-based devices, such as virtual data source, visualization processing mode, tracking mode, registration technique and AR display type. The systematic search identified 30 eligible publications. Most studies (14) were in orthognathic surgery, the minority (2) concerned traumatology, while 6 studies were in oncology and 8 in general OCMS. In 8 of 30 studies the AR systems were based on a head-mounted approach using smart glasses or headsets. In most of these cases (7), a video-see-through mode was implemented, while only 1 study described an optical-see-through mode. In the remaining 22 studies, the AR content was displayed on 2D displays (10), full-parallax 3D displays (6) and projectors (5). In 1 case the AR display type was not specified. AR applications are of increasing interest and adoption in oral and cranio-maxillofacial surgery; however, the quality of the AR experience represents the key requisite for a successful result. Widespread use of AR systems in the operating room may be encouraged by the availability of 'surgery-specific' head-mounted devices that should guarantee the accuracy required for surgical tasks and optimal ergonomics.
Review of substitutive assistive tools and technologies for people with visual impairments: recent advancements and prospects
The development of many tools and technologies for people with visual impairment has become a major priority in the field of assistive technology research. However, many of these technological advancements have limitations in terms of the human aspects of the user experience (e.g., usability, learnability, and time to user adaptation), as well as difficulties in translating research prototypes into production. Moreover, there has been no clear distinction between the assistive aids for adults and for children, or between “partial impairment” and “total blindness”. As a result of these limitations, the produced aids have not gained much popularity, and the intended users remain hesitant to utilise them. This paper presents a comprehensive review of substitutive interventions that aid in adapting to vision loss, centred on laboratory research studies that assess user-system interaction and system validation. Depending on the primary cueing feedback signal offered to the user, these technology aids are categorized as visual, haptic, or auditory-based aids. The context of use, cueing feedback signals, and participation of visually impaired people in the evaluation are all considered while discussing these aids. Based on the findings, a set of recommendations is suggested to assist the scientific community in addressing the persisting challenges and restrictions faced by both totally blind and partially sighted people.
Switchable liquid crystal contact lenses for the correction of presbyopia
Presbyopia is an age-related disorder in which the lens of the eye hardens, so that focusing on near objects becomes increasingly difficult. This complaint affects everyone over the age of 50, and it is becoming progressively more relevant as the average age of the global population continues to rise. Bifocal or varifocal spectacles are currently the best solution for those who require near and far vision correction. However, many people prefer not to wear spectacles, and while multifocal contact lenses are available, they are not widely prescribed and can require significant adaptation by wearers. One possible solution is to use liquid crystal contact lenses that can change focal power when a small electric field is applied across the device. However, the design of these contact lenses must be carefully considered, as they must be comfortable for the user to wear and able to provide the required change in focal power (usually about +2D). Progress towards different lens designs, including lens geometry, liquid crystal choices and suitable alignment modes, is reviewed. Furthermore, we also discuss suitable electrode materials and possible power sources, and suggest some methods for switching the lenses between near and far vision correction.
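As a rough illustration of where the quoted "+2D" figure comes from (this sketch is not from the paper itself): for an eye corrected for distance, the additive power a switchable lens must supply to focus at a near working distance d is simply P = 1/d diopters.

```python
# Illustrative sketch, not the authors' code: the add power a switchable
# contact lens must contribute for a given near working distance.
def add_power_diopters(near_distance_m: float) -> float:
    """Accommodative demand for a distance-corrected eye: P = 1/d,
    with d the near working distance in metres."""
    if near_distance_m <= 0:
        raise ValueError("distance must be positive")
    return 1.0 / near_distance_m

print(add_power_diopters(0.5))   # 2.0 D -- matches the ~+2D quoted for reading
print(add_power_diopters(0.4))   # ~2.5 D for closer work
```

A 0.5 m working distance gives the roughly +2D change the abstract cites as the usual requirement.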
Haptic feedback to gaze events
Eyes are the window to the world, and most of the input from the surrounding environment is captured through the eyes. In Human-Computer Interaction too, gaze-based interactions are gaining prominence, where the user’s gaze acts as an input to the system. Of late, portable and inexpensive eye-tracking devices have made inroads into the market, opening up wider possibilities for interacting with gaze. However, research on feedback to gaze-based events is limited. This thesis proposes to study vibrotactile feedback to gaze-based interactions.
This thesis presents a study conducted to evaluate different types of vibrotactile feedback and their role in response to a gaze-based event. For this study, an experimental setup was designed wherein, when the user fixated their gaze on a functional object, vibrotactile feedback was provided either on the wrist or on the glasses. The study seeks to answer questions such as the helpfulness of vibrotactile feedback in identifying functional objects, user preference for the type of vibrotactile feedback, and user preference for the location of the feedback. The results of this study indicate that vibrotactile feedback was an important factor in identifying the functional object. The preference for the type of vibrotactile feedback was somewhat inconclusive, as there were wide variations among the users over the type of vibrotactile feedback. Personal preference largely influenced the choice of location for receiving the feedback.
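The interaction loop described above (detect a gaze fixation on a functional object, then fire vibrotactile feedback at a chosen body location) can be sketched as follows. This is a hypothetical reconstruction, not the thesis's code; the names `GazeSample`, `detect_fixation`, and the dispersion/duration thresholds are illustrative assumptions, using a standard dispersion-threshold fixation check.

```python
# Hypothetical sketch of a gaze-contingent vibrotactile feedback loop.
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float       # gaze point, screen pixels
    y: float
    t_ms: float    # timestamp in milliseconds

def detect_fixation(samples, dispersion_px=30.0, duration_ms=200.0):
    """Dispersion-threshold test: a fixation is a window lasting at least
    duration_ms whose points fit within a small dispersion box."""
    if not samples or samples[-1].t_ms - samples[0].t_ms < duration_ms:
        return False
    xs = [s.x for s in samples]
    ys = [s.y for s in samples]
    return (max(xs) - min(xs)) + (max(ys) - min(ys)) <= dispersion_px

def on_gaze(samples, on_functional_object, send_vibration):
    # Fire feedback (e.g. on the wrist or the glasses, as in the study)
    # only when a fixation lands on a functional object.
    if on_functional_object and detect_fixation(samples):
        send_vibration(location="glasses", duration_ms=100)
```

`send_vibration` stands in for whatever actuator driver the hardware provides; the study's comparison of feedback types and locations would vary its arguments.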
Ubiquitous computing and natural interfaces for environmental information
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa for the degree of Master in Environmental Engineering (Engenharia do Ambiente), profile Management and Environmental Systems (Gestão e Sistemas Ambientais).
The next computing revolution's objective is to embed every street, building, room and object with computational power. Ubiquitous computing (ubicomp) will allow every object to receive and transmit information, sense its surroundings, act accordingly, be located from anywhere in the world, and connect every person. Everyone will have the possibility to access information, regardless of their age, computer knowledge, literacy or physical impairment. It will impact the world in a profound way, empowering mankind and improving the environment, but it will also create new challenges that our society, economy, health and global environment will have to overcome. Negative impacts have to be identified and dealt with in advance. Despite these concerns, environmental studies have been mostly absent from discussions on the new paradigm.
This thesis seeks to examine ubiquitous computing and its technological emergence, raise awareness of future impacts, and explore the design of new interfaces and rich interaction modes. Environmental information is approached as an area which may greatly benefit from ubicomp as a way to gather, treat and disseminate it, simultaneously complying with the Aarhus Convention. In an educational context, new media are poised to revolutionize the way we perceive, learn and interact with environmental information. cUbiq is presented as a natural interface to access that information.
Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants
The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with diverse backgrounds, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) and “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).
Deformable Beamsplitters: Enhancing Perception with Wide Field of View, Varifocal Augmented Reality Displays
An augmented reality head-mounted display with full environmental awareness could present data in new ways and provide a new type of experience, allowing seamless transitions between real life and virtual content. However, creating a lightweight, optical see-through display providing both focus support and a wide field of view remains a challenge. This dissertation describes a new dynamic optical element, the deformable beamsplitter, and its applications for wide field of view, varifocal, augmented reality displays. Deformable beamsplitters combine a traditional deformable membrane mirror and a beamsplitter into a single element, allowing reflected light to be manipulated by the deforming membrane mirror while transmitted light remains unchanged. This research enables both single-element optical design and correct focus while maintaining a wide field of view, as demonstrated by the description and analysis of two prototype hardware display systems which incorporate deformable beamsplitters. As a user changes the depth of their gaze when looking through these displays, the focus of virtual content can quickly be altered to match the real world by simply modulating air pressure in a chamber behind the deformable beamsplitter, thus ameliorating vergence–accommodation conflict. Two user studies verify the display prototypes’ capabilities and show the potential of the display in enhancing human performance at quickly perceiving visual stimuli. This work shows that near-eye displays built with deformable beamsplitters allow for simple optical designs that enable wide field of view and comfortable viewing experiences with the potential to enhance user perception.
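The control principle the abstract describes (gaze depth in, chamber pressure out) can be sketched as below. This is an assumed reconstruction, not the dissertation's code: the optical power needed to place virtual content at the gazed depth is P = 1/d, and the linear pressure-vs-power calibration (`gain_pa_per_d`) is a hypothetical stand-in for a measured membrane curve.

```python
# Minimal sketch (assumed) of a varifocal control step for a deformable
# beamsplitter: match the virtual image distance to the user's gaze depth
# by adjusting the air pressure in the chamber behind the membrane.

def required_power_diopters(gaze_depth_m: float) -> float:
    """Optical power needed so virtual content focuses at the gazed depth."""
    return 1.0 / max(gaze_depth_m, 0.1)   # clamp demand at 10 D for near gaze

def pressure_for_power(power_d: float, gain_pa_per_d: float = 120.0,
                       offset_pa: float = 0.0) -> float:
    # Hypothetical first-order calibration: membrane curvature (hence optical
    # power) is modelled as growing linearly with chamber pressure over the
    # working range; a real system would use a measured calibration curve.
    return offset_pa + gain_pa_per_d * power_d

depth = 0.5                                   # user gazes at an object 0.5 m away
print(pressure_for_power(required_power_diopters(depth)))   # 240.0 Pa
```

In a running display this step would sit in a loop fed by a gaze-depth estimate (e.g. from vergence), which is how the prototypes could ameliorate vergence–accommodation conflict.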