18,319 research outputs found
Exploring computer-generated line graphs through virtual touch
This paper describes the development and evaluation of a haptic interface designed to provide access to line graphs for blind or visually impaired people. Computer-generated line graphs can be felt by users through the sense of touch produced by a PHANToM force feedback device. Experiments have been conducted to test the effectiveness of this interface with both sighted and blind people. The results show that sighted and blind participants achieved 89.95% and 86.83% correct answers, respectively, in the experiment.
A LiDAR Point Cloud Generator: from a Virtual World to Autonomous Driving
3D LiDAR scanners are playing an increasingly important role in autonomous driving as they can generate depth information of the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation. This jeopardizes the efficient development of supervised deep learning algorithms, which are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network and to make it more robust through retraining on the falsifying examples. In addition, scene images can be captured simultaneously for sensor fusion tasks, with a proposed method for automatic calibration between the point clouds and the captured scene images. We show a significant improvement in accuracy (+9%) in point cloud segmentation by augmenting the training dataset with the generated synthesized data. Our experiments also show that, by testing and retraining the network using point clouds from user-configured scenes, the weaknesses and blind spots of the neural network can be fixed.
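The augmentation result above amounts to mixing game-generated, auto-labeled point clouds into a real training set. A minimal sketch of such a mixing step is below; the function name, the `ratio` cap, and the mixing policy are illustrative assumptions, not the paper's actual pipeline.

```python
import random

def augment_training_set(real_clouds, synthetic_clouds, ratio=1.0, seed=0):
    """Mix real and synthetic labeled point clouds into one training set.

    `ratio` caps how many synthetic samples are added per real sample.
    This is a hypothetical mixing policy for illustration; the paper only
    reports that such augmentation yielded a +9% segmentation-accuracy gain.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible split
    n_syn = min(len(synthetic_clouds), int(ratio * len(real_clouds)))
    combined = list(real_clouds) + rng.sample(list(synthetic_clouds), n_syn)
    rng.shuffle(combined)  # interleave real and synthetic samples
    return combined
```

In practice the ratio of synthetic to real data is itself a hyperparameter: too much synthetic data can shift the training distribution away from real sensor noise.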
Analysis of user behavior with different interfaces in 360-degree videos and virtual reality
[eng] Virtual reality and its related technologies are being used for many kinds of content, like virtual environments or 360-degree videos. Omnidirectional, interactive multimedia is consumed with a variety of devices, such as computers, mobile devices, or specialized virtual reality gear. Studies on user behavior with computer interfaces are an important part of the research in human-computer interaction, used in, e.g., studies on usability, user experience, or the improvement of streaming techniques. User behavior in these environments has drawn the attention of the field, but little attention has been paid to comparing behavior between different devices used to reproduce virtual environments or 360-degree videos. We introduce an interactive system that we used to create and reproduce virtual reality environments and experiences based on 360-degree videos, which is able to automatically collect the users' behavior so that we can analyze it. We studied the behavior collected in the reproduction of a virtual reality environment with this system and found significant differences in behavior between users of an interface based on the Oculus Rift and another based on a mobile VR headset similar to the Google Cardboard: different times between interactions, likely due to the need to perform a gesture in the first interface; differences in spatial exploration, as users of the first interface chose a particular area of the environment to stay in; and differences in the orientation of their heads, as Oculus users tended to look towards physical objects in the experimental setup and mobile users seemed to be influenced by the initial orientation values of their browsers.
A second study was performed with data collected with this system, which was used to play a hypervideo production made of 360-degree videos. We compared the users' behavior across four interfaces (two based on immersive devices and two based on non-immersive devices) and two categories of videos: we found significant differences in spatiotemporal exploration, in the dispersion of the users' orientations, in the movement of these orientations, and in the clustering of their trajectories, especially between different video types but also between devices; in some cases, behavior with immersive devices was similar due to similar constraints in the interface, which are not present in non-immersive devices such as a computer mouse or the touchscreen of a smartphone. Finally, we report a model based on a recurrent neural network that is able to classify these reproductions of 360-degree videos into their corresponding video type and interface with an accuracy of more than 90% from only four seconds' worth of orientation data; another deep learning model was implemented to predict orientations up to two seconds into the future from the last seconds of orientation, whose results were improved by up to 19% by a comparable model that leverages the video type and the device used to play it.
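One of the behavioral measures compared above is the dispersion of users' head orientations. A simple circular-statistics version of such a metric can be sketched as follows; the function name and the choice of "1 minus mean resultant length" are illustrative assumptions, not necessarily the exact metric used in the study.

```python
import math

def yaw_dispersion(yaws_deg):
    """Circular dispersion of a sequence of head-yaw samples in degrees.

    Returns 1 - R, where R is the mean resultant length of the unit
    vectors for each yaw sample: 0.0 means all samples point the same
    way, values near 1.0 mean orientations are spread around the circle.
    Illustrative metric, not necessarily the paper's exact definition.
    """
    n = len(yaws_deg)
    c = sum(math.cos(math.radians(y)) for y in yaws_deg) / n
    s = sum(math.sin(math.radians(y)) for y in yaws_deg) / n
    return 1.0 - math.hypot(c, s)  # hypot gives the resultant length R
```

A circular measure is needed here because naive variance on raw angles would treat 359 degrees and 1 degree as far apart, although the two head poses are nearly identical.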
Exploring User Interface Improvements for Software Developers who are Blind
Software developers who are blind and interact with the computer non-visually face unique challenges with information retrieval. We explore the use of speech and Braille combined with software to provide an improved interface to aid with challenges associated with information retrieval. We motivate our design on common tasks performed by students in a software development course using a Microprocessor without Interlocked Pipeline Stages (MIPS) architecture simulation tool. We test our interface via a single-subject longitudinal study, and we measure and show improvement in both the user's performance and the user experience.
SGD Frequency-Domain Space-Frequency Semiblind Multiuser Receiver with an Adaptive Optimal Mixing Parameter
A novel stochastic gradient descent frequency-domain (FD) space-frequency (SF) semiblind multiuser receiver with an adaptive optimal mixing parameter is proposed to improve on the performance of FD semiblind multiuser receivers with a fixed mixing parameter and to reduce the computational complexity of suboptimal FD semiblind multiuser receivers in SFBC downlink MIMO MC-CDMA systems with varying numbers of users. The receiver exploits an adaptive mixing parameter to balance the information ratio between the training-based mode and the blind-based mode. Analytical results prove that the optimal mixing-parameter value depends on the power and number of active loaded users in the system. Computer simulation results show that when the mixing parameter is adapted close to its optimal value, the receiver outperforms both existing FD SF adaptive step-size (AS) LMS semiblind receivers with a fixed mixing parameter and conventional FD SF AS-LMS training-based multiuser receivers in MSE, SER, and signal-to-interference-plus-noise ratio in both static and dynamic environments.
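The core idea above, combining a training-based error with a blind error through a mixing parameter that is itself adapted, can be illustrated with a toy real-valued LMS update. This is a minimal sketch under stated assumptions: the constant-modulus form of the blind error, the step sizes, and the rule for adapting the mixing parameter are all illustrative choices, not the receiver design from the paper.

```python
def semiblind_lms_step(w, x, d, lam, mu=0.01, mu_lam=0.001, modulus=1.0):
    """One semiblind LMS update with an adaptive mixing parameter.

    w: filter weights; x: input samples; d: training symbol;
    lam in [0, 1]: mixing parameter between training-based and blind modes.
    The blind branch uses a constant-modulus style error as an example;
    all names and step sizes here are hypothetical illustrations.
    """
    y = sum(wi * xi for wi, xi in zip(w, x))        # filter output
    e_train = d - y                                  # training-based error
    e_blind = y * (modulus - y * y)                  # constant-modulus (blind) error
    e = lam * e_train + (1.0 - lam) * e_blind        # mixed error signal
    w = [wi + mu * e * xi for wi, xi in zip(w, x)]   # LMS weight update
    # Shift lam toward whichever mode has the smaller instantaneous
    # error power (one plausible adaptation heuristic, not the paper's).
    lam += mu_lam * (e_blind * e_blind - e_train * e_train)
    lam = min(1.0, max(0.0, lam))                    # keep lam in [0, 1]
    return w, lam
```

The attraction of adapting `lam` online is that the best trade-off between the training-based and blind branches depends on operating conditions (the abstract ties it to the power and number of active users), so no single fixed value suits every load.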
Living City, A Collaborative Browser-Based Massively Multiplayer Online Game
This work presents the design and implementation of our browser-based Massively Multiplayer Online Game, Living City, a simulation game fully developed at the University of Messina. Living City is a persistent, real-time digital world, running in the Web browser and accessible to users without any client-side installation. Today, Massively Multiplayer Online Games attract the attention of computer scientists both for their architectural peculiarities and for their close interconnection with the social network phenomenon. We cover these two aspects, paying particular attention to game balancing (e.g., the algorithms behind time and money balancing); business logic (e.g., handling concurrency, cheating avoidance, and availability); and, finally, the social and psychological aspects involved in the collaboration of players, analyzing their activities and interconnections.
Overcoming barriers and increasing independence: service robots for elderly and disabled people
This paper discusses the potential for service robots to overcome barriers and increase independence of
elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly
people and advances in technology which will make new uses possible and provides suggestions for some of these new
applications. The paper also considers the design and other conditions to be met for user acceptance. It also discusses
the complementarity of assistive service robots and personal assistance and considers the types of applications and
users for which service robots are and are not suitable
Visualization of back pain data-A 3-D solution
Traditional approaches to gathering and visualizing pain data rely on two-dimensional (2-D) human body models, where different types of sensation are recorded with various monochrome symbols. We propose an alternative that uses a three-dimensional (3-D) representation of the human body, which can be marked in color to visualize and record pain data.
- …