High performance wearable ultrasound as a human-machine interface for wrist and hand kinematic tracking
Objective: Non-invasive human-machine interfaces (HMIs) have high potential in medical, entertainment, and industrial applications. Traditionally, surface electromyography (sEMG) has been used to track muscular activity and infer motor intention. Ultrasound (US) has received increasing attention as an alternative to sEMG-based HMIs. Here, we developed a portable US armband system with 24 channels and a multiple-receiver approach, and compared it with existing sEMG- and US-based HMIs on movement intention decoding. Methods: US and motion capture data were recorded while participants performed wrist and hand movements spanning four degrees of freedom (DoFs) and their combinations. A linear regression model was used to predict hand kinematics offline from the US (or sEMG, for comparison) features. The method was further validated in real time on a 3-DoF target-reaching task. Results: In the offline analysis, the wearable US system achieved an average R2 of 0.94 in the prediction of four DoFs of the wrist and hand, while sEMG reached a performance of R2 = 0.06. In online control, the participants achieved an average target completion rate of 93%. Conclusion: When tailored for HMIs, the proposed A-mode US system and processing pipeline can successfully regress hand kinematics in both offline and online settings, with performance comparable to or better than previously published interfaces. Significance: Wearable US technology may provide a new generation of HMIs that use muscular deformation to estimate limb movements. The wearable US system allowed robust proportional and simultaneous control over multiple DoFs in both offline and online settings.
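The decoding step described in this abstract (linear regression from US features to joint kinematics, evaluated with R²) can be sketched as follows. This is a minimal illustration with simulated data; the channel count and DoF count are taken from the abstract, but all variable names, dimensions, and noise levels are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

# Illustrative sketch only: linear regression from 24 ultrasound channel
# features to 4 DoFs of hand kinematics, with held-out R^2 evaluation.
# Data here are simulated; the real system extracts features from A-mode US.
rng = np.random.default_rng(0)
n_samples, n_channels, n_dofs = 500, 24, 4

# Simulated feature matrix and a ground-truth linear mapping plus noise
X = rng.normal(size=(n_samples, n_channels))
W_true = rng.normal(size=(n_channels, n_dofs))
Y = X @ W_true + 0.05 * rng.normal(size=(n_samples, n_dofs))

# Fit the regression "offline" on the first half of the recording
split = n_samples // 2
W_hat, *_ = np.linalg.lstsq(X[:split], Y[:split], rcond=None)

# Coefficient of determination (R^2) on held-out data, one value per DoF
Y_pred = X[split:] @ W_hat
ss_res = ((Y[split:] - Y_pred) ** 2).sum(axis=0)
ss_tot = ((Y[split:] - Y[split:].mean(axis=0)) ** 2).sum(axis=0)
r2 = 1.0 - ss_res / ss_tot
print(r2.round(3))
```

With clean simulated data the per-DoF R² is near 1; the paper's reported 0.94 reflects real recordings, where feature noise and model mismatch lower the score.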
Real-time motion data annotation via action string
Even though motion capture data has grown explosively, efficient and reliable methods for automatically annotating all the motions in a database are still lacking. Moreover, with the growing popularity of mocap devices in home entertainment systems, real-time human motion annotation and recognition are becoming increasingly important. This paper presents a new motion annotation method that achieves both goals at once. It uses a probabilistic pose feature based on a Gaussian mixture model to represent each pose. After training a clustered pose feature model, a motion clip can be represented as an action string. A dynamic-programming-based string matching method is then introduced to compare action strings. Finally, to meet the real-time requirement, we construct a hierarchical action string structure to quickly label each given action string. The experimental results demonstrate the efficacy and efficiency of our method.
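The core comparison step in this abstract, dynamic-programming string matching between "action strings" (sequences of pose-cluster labels), can be sketched with plain Levenshtein edit distance. This is an assumption for illustration: the paper's actual cost function and matching scheme may differ.

```python
# Minimal DP string matching between two action strings, where each
# character stands for a pose-cluster label. Standard edit distance,
# computed with a rolling one-row table.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))  # distances for the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + cost))  # substitute / match
        prev = curr
    return prev[-1]

# Two action strings that differ by one substituted pose label
print(edit_distance("ABBC", "ABDC"))  # 1
```

A hierarchical index over such strings, as the paper proposes, avoids running this quadratic comparison against every clip in the database.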
Business Case and Technology Analysis for 5G Low Latency Applications
A large number of new consumer and industrial applications are likely to change classic operator business models and open a wide range of new markets. This article analyses the most relevant 5G use cases that require ultra-low latency, from both technical and business perspectives. Low-latency services pose challenging requirements on the network, and to fulfil them operators need to invest in costly changes to their infrastructure. It is therefore not clear whether such investments will be amortised by these new business models. In light of this, specific applications and requirements are described, and the potential market benefits for operators are analysed. The conclusions show that operators have clear opportunities to add value and to position themselves strongly amid the growing number of services to be provided by 5G.
Viewing the Future? Virtual Reality In Journalism
Journalism underwent a flurry of virtual reality content creation, production and distribution starting in the final months of 2015. The New York Times distributed more than 1 million cardboard virtual reality viewers and released an app showing a spherical video short about displaced refugees. The Los Angeles Times landed people next to a crater on Mars. USA TODAY took visitors on a ride-along in the "Back to the Future" car on the Universal Studios lot and on a spin through Old Havana in a bright pink '57 Ford. ABC News went to North Korea for a spherical view of a military parade and to Syria to see artifacts threatened by war. The Emblematic Group, a company that creates virtual reality content, followed a woman navigating a gauntlet of anti-abortion demonstrators at a family planning clinic and allowed people to witness a murder-suicide stemming from domestic violence. In short, the period from October 2015 through February 2016 was one of significant experimentation with virtual reality (VR) storytelling. These efforts are part of an initial foray into determining whether VR is a feasible way to present news. The year 2016 is shaping up as a period of further testing and careful monitoring of potential growth in the use of virtual reality among consumers.
Visual Localisation of Mobile Devices in an Indoor Environment under Network Delay Conditions
Recent progress in home automation and service robotics has highlighted the need for interoperability mechanisms that allow standard communication between the two kinds of system. During the development of the DHCompliant protocol, the problem of locating mobile devices in an indoor environment was investigated. Communication between a device and the location service was studied to compare the time delay of web services against that of sockets. Because real-time location systems depend on timely data, a basic interoperability tool such as web services can be ineffective in this scenario owing to the delays added by service invocation. This paper introduces a web service that resolves a coordinate request without any significant delay compared with sockets.
Virtual reality in theatre education and design practice - new developments and applications
The global use of Information and Communication Technologies (ICTs) has already established new approaches to theatre education and research, shifting traditional methods of knowledge delivery towards a more visually enhanced experience, which is especially important for teaching scenography. In this paper, I examine the role of multimedia within the field of theatre studies, with particular focus on the theory and practice of theatre design and education. I discuss various IT applications that have transformed the way we experience, learn and co-create our cultural heritage. I explore a suite of rapidly developing communication and computer-visualization techniques that enable reciprocal exchange between students, theatre performances and artefacts. Finally, I analyse novel technology-mediated teaching techniques that attempt to provide a new media platform for visually enhanced information transfer. My findings indicate that the recent developments in the personalization of knowledge delivery, as well as in student-centred study and e-learning, necessitate the transformation of learners from passive consumers of digital products into active and creative participants in the learning experience.