51 research outputs found
Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants
The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric
vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry
researchers from Europe, the US, and Asia with diverse backgrounds, including wearable and
ubiquitous computing, computer vision, developmental psychology, optics, and human-computer
interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to
reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions,
group work, general discussions, and socialising. The key results of this seminar are 1) the
identification of key research challenges and summaries of breakout groups on multimodal eyewear
computing, egocentric vision, security and privacy issues, skill augmentation and task guidance,
eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and
research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4)
an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”,
as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing”
at the European Conference on Computer Vision (ECCV) and on “Eyewear Computing” at
the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).
EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment
Face performance capture and reenactment techniques use multiple cameras and sensors, positioned at a distance from the face or mounted on heavy wearable devices. This limits their applications in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters. Through careful adversarial training of the synthetic rendering from this parameter space, a videorealistic animation is produced. Our problem is challenging because the human visual system is sensitive to the smallest facial irregularities that could occur in the final results, and this sensitivity is even stronger for video results. Our solution is trained offline in a supervised manner, without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds, and movements, handles people of different ethnicities, and can operate in real time.
Optical Gaze Tracking with Spatially-Sparse Single-Pixel Detectors
Gaze tracking is an essential component of next generation displays for
virtual reality and augmented reality applications. Traditional camera-based
gaze trackers used in next generation displays are known to be lacking in one
or multiple of the following metrics: power consumption, cost, computational
complexity, estimation accuracy, latency, and form-factor. We propose the use
of discrete photodiodes and light-emitting diodes (LEDs) as an alternative to
traditional camera-based gaze tracking approaches while taking all of these
metrics into consideration. We begin by developing a rendering-based simulation
framework for understanding the relationship between light sources and a
virtual model eyeball. Findings from this framework are used for the placement
of LEDs and photodiodes. Our first prototype uses a neural network to obtain an
average error rate of 2.67° at 400 Hz while demanding only 16 mW. By
simplifying the implementation to using only LEDs, duplexed as light
transceivers, and a simpler machine-learning model, namely a lightweight
supervised Gaussian process regression algorithm, we show that our second
prototype is capable of an average error rate of 1.57° at 250 Hz using 800 mW.
(10 pages, 8 figures; published in the IEEE International Symposium on
Mixed and Augmented Reality (ISMAR) 202)
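As a rough sketch of the second prototype's regression step, the following reconstructs Gaussian process regression from photodiode intensities to gaze angles on synthetic data; the sensor model, dimensions, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between the rows of A (n,d) and B (m,d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-3, length_scale=1.0):
    # Posterior mean of GP regression: K(X*,X) (K(X,X) + noise*I)^-1 y.
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    return rbf_kernel(X_test, X_train, length_scale) @ np.linalg.solve(K, y_train)

# Synthetic stand-in for the sensor: 8 photodiode intensities that vary
# smoothly with gaze yaw/pitch (the real mapping comes from the hardware).
rng = np.random.default_rng(0)
gaze = rng.uniform(-15, 15, size=(200, 2))        # (yaw, pitch) in degrees
W = rng.normal(size=(2, 8))                       # hypothetical response matrix
intensities = np.tanh(gaze @ W * 0.05) + 0.01 * rng.normal(size=(200, 8))

pred = gp_predict(intensities[:150], gaze[:150], intensities[150:])
err = np.linalg.norm(pred - gaze[150:], axis=1).mean()
print(f"mean angular error on held-out samples: {err:.2f} deg")
```

A closed-form posterior mean like this is cheap enough to run per-frame, which is one reason GP regression suits a low-power tracker better than a deep network.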
Body-Area Capacitive or Electric Field Sensing for Human Activity Recognition and Human-Computer Interaction: A Comprehensive Survey
Because roughly sixty percent of the human body is water, the body is
inherently conductive: it both forms an intrinsic electric field extending
from the body to its surroundings and deforms the distribution of any
existing electric field near the body. Body-area capacitive sensing, also
called body-area electric field sensing, is therefore becoming a promising
option for wearable devices in human activity recognition and human-computer
interaction. Over the last decade, researchers have explored a wealth of
novel sensing systems based on the body-area electric field. Despite this
extensive exploration, however, no comprehensive survey exists to serve as a
guide, and the variety of hardware implementations, applied algorithms, and
target applications makes a systematic overview of the subject difficult.
This paper aims to fill this gap by comprehensively summarizing existing work
on body-area capacitive sensing, giving researchers a clearer view of the
current state of exploration. To this end, we first sorted the explored
systems into three domains according to the body configurations involved:
body-part, whole-body, and body-to-body electric fields, and enumerated the
state-of-the-art work in each domain with a detailed survey of the underlying
sensing principles and target applications. We then summarized the three
types of sensing front ends in circuit design, the most critical part of
body-area capacitive sensing, and analyzed the data-processing pipelines,
categorized into three kinds of approaches. Finally, we describe the
challenges and outlook of body-area electric field sensing.
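As a concrete example of the data-processing side such systems share, the sketch below extracts simple sliding-window features from a simulated body-capacitance trace; the signal model and feature set are illustrative assumptions, not drawn from any specific surveyed system.

```python
import numpy as np

def window_features(signal, fs, win_s=1.0, hop_s=0.5):
    """Slide a window over a 1-D capacitance trace and extract simple
    statistical features commonly used in activity recognition."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, hop):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.max() - w.min(),
                      np.abs(np.diff(w)).mean()])  # mean absolute first difference
    return np.array(feats)

# Simulated trace: a constant baseline body capacitance, plus a periodic
# disturbance starting at t = 5 s (e.g. an arm swing deforming the field),
# plus sensor noise. Units and magnitudes are arbitrary.
fs = 100  # Hz
t = np.arange(0, 10, 1 / fs)
trace = (50 + 3 * np.sin(2 * np.pi * 1.5 * t) * (t > 5)
         + 0.2 * np.random.default_rng(1).normal(size=t.size))

F = window_features(trace, fs)
print(F.shape)  # one 4-feature row per window
```

The per-window feature rows would then feed whatever classifier a given system uses; the peak-to-peak column alone already separates the quiet and active halves of this trace.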
Reconfigurable liquid crystals for wearable applications
This thesis explores the integration of liquid crystal (LC) materials into wearable devices, focusing on applications beyond Liquid Crystal Displays (LCDs). The investigation begins with a thorough review of smart contact lenses and glasses, identifying a gap in research on LC materials for vision assistance. Subsequent chapters introduce the background of LCs and summarize materials and techniques for LC cell fabrication. I then propose methods for depositing materials on patterned surfaces, which could be used in the next generation of optical devices. Subsequently, the thesis demonstrates the reconfigurability of an antenna for smart-glasses applications using LC materials, where the resonant frequency of the antenna can be tuned by the LC substrate from 3.3 to 3.8 GHz. Moreover, exploiting the selective-reflection nature of cholesteric liquid crystals (CLCs), I show how CLCs can be used in smart glasses: for example, I propose CLC-based vision-assistance modules comprising a CLC-based optical combiner (OC) and thermally controlled glass lenses for epilepsy treatment. The CLC-based OC can seamlessly switch smart glasses between augmented reality (AR), virtual reality (VR), and transparent modes via temperature variation. Furthermore, a tunable glasses lens is proposed that uses the selective reflection of CLC materials to block the range of light wavelengths that can trigger photosensitive epilepsy, showcasing the versatility of LC materials in healthcare applications. In conclusion, this thesis contributes to the advancement of reconfigurable and adaptive solutions in wearable technologies using LC materials, addressing critical aspects of Human-Machine Interaction (HMI) and healthcare needs.
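For context on how the selective reflection used by the CLC combiner and the epilepsy lens is usually quantified: the reflected band follows the standard Bragg relations for cholesterics. The snippet below computes it for illustrative material values (the indices and pitch are assumptions, not parameters taken from the thesis).

```python
# Selective reflection band of a cholesteric liquid crystal from the standard
# Bragg relations: lambda0 = n_avg * p and delta_lambda = (n_e - n_o) * p.
# The refractive indices and pitch below are typical illustrative values,
# not parameters taken from the thesis.
n_o, n_e = 1.5, 1.7           # ordinary / extraordinary refractive indices
pitch_nm = 350.0              # helical pitch in nm (hypothetical)

n_avg = (n_o + n_e) / 2.0
lambda0 = n_avg * pitch_nm            # centre of the reflection band
bandwidth = (n_e - n_o) * pitch_nm    # spectral width of the band

print(f"reflection band: {lambda0 - bandwidth / 2:.0f}-"
      f"{lambda0 + bandwidth / 2:.0f} nm (centre {lambda0:.0f} nm)")
```

Because the pitch of a CLC changes with temperature, the same relations explain how temperature variation can retune which wavelengths the combiner or lens reflects.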
Applying Augmented Reality to Outdoors Industrial Use
Augmented Reality (AR) is currently gaining popularity in multiple different fields. However, the technology for AR still requires development in both hardware and software when considering industrial use. In order to create immersive AR applications, more accurate pose estimation techniques for defining the virtual camera location are required. Pose estimation algorithms often require a lot of processing power, which makes robust pose estimation difficult on mobile devices or dedicated AR tools. The difficulties are even greater in outdoor scenarios, where the environment can vary widely and is often unprepared for AR.
This thesis aims to research different possibilities for creating AR applications for outdoor environments. Both hardware and software solutions are considered, but the focus is more on software. The majority of the thesis focuses on different visual pose estimation and tracking techniques for natural features.
During the thesis, multiple different solutions were tested for outdoor AR. One commercial AR SDK was tested, and three different custom software solutions were developed for an Android tablet. The custom software solutions were an algorithm for combining data from a magnetometer and a gyroscope, a natural feature tracker, and a tracker based on panorama images. The panorama-based tracker was implemented based on an existing scientific publication and was further developed by integrating it into Unity 3D and adding the possibility of augmenting content.
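A common way to combine gyroscope and magnetometer data of the kind described above is a complementary filter; the sketch below is an illustrative implementation on simulated data, not the thesis's actual algorithm.

```python
import math
import random

def complementary_heading(gyro_z, mag_heading, dt, alpha=0.98, init=0.0):
    # First-order complementary filter: the gyroscope gives smooth short-term
    # motion, the magnetometer correction removes long-term drift.
    h, out = init, []
    for w, m in zip(gyro_z, mag_heading):
        gyro_est = h + w * dt                     # integrate angular rate
        # Wrap the magnetometer innovation into (-pi, pi] before blending.
        err = math.atan2(math.sin(m - gyro_est), math.cos(m - gyro_est))
        h = gyro_est + (1.0 - alpha) * err
        out.append(h)
    return out

# Simulated device turning at a constant 0.5 rad/s for 10 s: the gyro has a
# constant bias, the magnetometer heading is noisy but unbiased.
dt, n = 0.01, 1000
true = [0.5 * dt * i for i in range(n)]
gyro = [0.5 + 0.05] * n                           # 0.05 rad/s gyro bias
random.seed(0)
mag = [t + random.gauss(0.0, 0.05) for t in true]

est = complementary_heading(gyro, mag, dt)
final_err = abs(est[-1] - true[-1])
print(f"final heading error: {final_err:.3f} rad")
```

Integrating the biased gyro alone would drift by about 0.5 rad over these 10 seconds; blending in the noisy magnetometer keeps the estimate near the true heading without inheriting the magnetometer's jitter.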
This thesis concludes that AR is very close to becoming a usable tool for professional use. The commercial solutions currently available are not yet ready for building professional tools, but especially for different visualization tasks some custom solutions can achieve the required robustness. The panorama tracker implemented in this thesis appears to be a promising tool for robust pose estimation in unprepared outdoor environments.
The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent and possibilities are endless, through engagement and immersive
experiences using virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when it is properly developed,
including technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ill-defined task that needs proper guidance and direction. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth, and academic and industry-oriented review is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive to date, allowing
users, scholars, and entrepreneurs to gain an in-depth understanding of the
Metaverse ecosystem and to find their opportunities for contribution.
Development of solar energy harvesting textiles
The Achilles heel of many wearable and electronic textile (E-textile) devices is their power requirement, which has been a major hurdle in the adoption of E-textiles. To keep these devices continuously powered without frequent recharging or bulky energy storage devices, many have proposed integrating energy harvesting capability into clothing. Solar energy harvesting has been one of the most investigated avenues for this due to the abundance of solar energy and maturity of photovoltaic technologies.
This research investigated a novel approach for realising solar energy harvesting with textiles by embedding miniature solar cells (SCs) within the fibres of a yarn, thus delivering a robust and consumer-friendly solution for powering wearable and mobile devices. SCs were first soldered onto fine copper wires and encapsulated inside of resin micro-pods, before being covered by a fibrous sheath, to realise solar cell embedded yarns (solar-E-yarns) that can be readily converted into fabrics with conventional fabric manufacturing processes such as weaving and knitting. Preliminary investigations conducted using miniature photodiode embedded E-yarns laid the foundation for embedding photovoltaic devices within yarns. A mathematical model was also formulated to characterise the performance of photovoltaic devices embedded in yarns and was experimentally validated using photodiodes to evaluate the effects of the resin micro-pod on photovoltaic response.
Subsequently, solar-E-yarns were fabricated using silicon SCs. The photovoltaic response of these solar-E-yarns was studied at each stage of the E-yarn fabrication process and under a range of test conditions, including different light intensities, incident light angles, ambient temperatures, and humidity levels. Solar-E-yarn performance could be further enhanced by impregnating the photoactive sides of the yarns with an optically clear resin, as well as by using bifacial SCs.
A series of fit-for-purpose tests, including wash durability tests, was conducted on the solar-E-yarns. These revealed that the solar-E-yarn embedded fabrics could undergo domestic laundering and maintained ~90% of their original power output after 15 machine-wash cycles, vastly superior to other solutions proposed in the literature.
To demonstrate the energy harvesting capability, prototype demonstrators were created by weaving solar-E-yarns. A solar fabric demonstrator with an ~25 cm² active area generated up to ~2.15 mW/cm² under one-sun illumination and maintained both the feel and aesthetics of a normal textile. The fabric demonstrator was capable of charging various electronic storage devices and powering low-power mobile devices.
The research has generated a wealth of knowledge on the fabrication, performance, and utility of the solution for regular clothing applications. These attributes will enable such solar fabrics to feature in future wearable electronics and electronic textiles, providing a continuous supply of power without compromising comfort, aesthetics, or wash durability.
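The one-sun figures quoted above suggest a simple way to estimate output under other conditions; the cosine-law sketch below is an illustrative model with an assumed resin transmittance, not the validated model formulated in the thesis.

```python
import math

def yarn_pv_power(p_mw, irradiance_frac, angle_deg, resin_transmittance=0.92):
    # Illustrative scaling model (not the thesis's validated formulation):
    # power scales with irradiance, the cosine of the incidence angle, and
    # the optical transmittance of the resin micro-pod (0.92 is an assumed value).
    cos_term = max(0.0, math.cos(math.radians(angle_deg)))
    return p_mw * irradiance_frac * cos_term * resin_transmittance

# One-sun figures from the abstract: ~25 cm^2 at ~2.15 mW/cm^2.
p_one_sun = 2.15 * 25
print(f"normal incidence, one sun:  {yarn_pv_power(p_one_sun, 1.0, 0):.1f} mW")
print(f"45 deg incidence, half sun: {yarn_pv_power(p_one_sun, 0.5, 45):.1f} mW")
```

A model of this shape makes the abstract's point about incident-angle testing concrete: output falls off smoothly with tilt, dropping to zero at grazing incidence.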