56 research outputs found

    Mobile Devices at the Cinema Theatre

    Get PDF
    The pre-show experience is a significant part of the movie industry. Moviegoers arrive, on average, 24 minutes before the previews start. Previews have been part of the movie experience for more than a hundred years and are a culturally significant aspect of the whole experience. Over the last decade, the pre-movie in-theatre experience has grown into a $600 million industry, and this growth continues to accelerate: since 2012, the industry has increased by 150%. Consequently, there is industry-wide demand for innovation in the pre-movie area. In this paper, we describe Paths, an innovative multiplayer real-time socially engaging game that we designed, developed and evaluated. An iterative refinement application development methodology was used to create the game. The game may be played on any smartphone, and group interactions are viewed on the large theatre screen. This paper also reports on the quasi-experimental mixed-method study with repeated measures that was conducted to ascertain the effectiveness of this new game. The results show that Paths is very engaging, with elements of suspense, pleasant unpredictability, effective team building and crowd-pleasing characteristics.

    A novel parallel algorithm for surface editing and its FPGA implementation

    Get PDF
    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Surface modelling and editing is one of the most important subjects in computer graphics. Decades of research in computer graphics has been carried out on both low-level, hardware-related algorithms and high-level, abstract software. Success of computer graphics has been seen in many application areas, such as multimedia, visualisation, virtual reality and the Internet. However, the hardware realisation of the OpenGL architecture based on FPGAs (field programmable gate arrays) is beyond the scope of most computer graphics research. It is an uncultivated research area in which the OpenGL pipeline, from the hardware through the whole embedded system (ES) up to the applications, is implemented in an FPGA chip. This research proposes a hybrid approach that investigates both software and hardware methods. It aims at bridging the gap between software and hardware methods and at enhancing the overall performance of computer graphics. It consists of four parts: the construction of an FPGA-based ES, a Mesa-based OpenGL implementation for FPGA-based ESs, parallel processing, and a novel algorithm for surface modelling and editing. First, the FPGA-based ES is built. In addition to the Nios II soft processor and DDR SDRAM memory, it consists of an LCD display device, frame buffers, a video pipeline, and algorithm-specific modules to support graphics processing. Since no implementation of OpenGL ES is available for FPGA-based ESs, a specific OpenGL implementation based on Mesa is carried out. Because of the limited FPGA resources, the implementation adopts fixed-point arithmetic, which offers faster computing and lower storage than floating-point arithmetic while providing accuracy that satisfies the needs of 3D rendering. Moreover, the implementation includes Bézier-spline curve and surface algorithms to support surface modelling and editing.
Pipelined parallelism and co-processors are used to accelerate graphics processing in this research. These two parallelism methods extend traditional computational parallelism to fine-grained parallel tasks in FPGA-based ESs. The novel algorithm for surface modelling and editing, called the Progressive and Mixing Algorithm (PAMA), is proposed and implemented on FPGA-based ESs. Compared with the two main surface editing methods, subdivision and deformation, PAMA eliminates the large storage requirements and computing costs of intermediate processes. With four independent shape parameters, PAMA can be used to model and freely edit the shape of an open or closed surface while globally maintaining zero-order geometric continuity. PAMA can be applied not only to FPGA-based ESs but also to other platforms. With its parallel processing, small size, and low computing, storage and power costs, the FPGA-based ES provides an effective hybrid solution to surface modelling and editing.
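The fixed-point arithmetic mentioned above can be sketched in a few lines. This is an illustrative Q16.16 model (the thesis does not specify its exact format), shown here in Python for readability; on the FPGA the same operations would be integer shifts and multiplies. The `fx_lerp` helper is the building block of de Casteljau evaluation of the Bézier curves the implementation supports; all names are ours, not the thesis's.

```python
# Illustrative Q16.16 fixed-point arithmetic: 16 integer bits, 16
# fractional bits. Format and function names are assumptions for the
# sketch, not taken from the thesis.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 1.0 in Q16.16

def to_fixed(x: float) -> int:
    """Convert a float to Q16.16."""
    return int(round(x * ONE))

def to_float(q: int) -> float:
    """Convert Q16.16 back to a float."""
    return q / ONE

def fx_mul(a: int, b: int) -> int:
    """Multiply two Q16.16 values; the double-width product is
    shifted back down (arithmetic shift, so signs are preserved)."""
    return (a * b) >> FRAC_BITS

def fx_lerp(a: int, b: int, t: int) -> int:
    """a + t*(b - a): the linear interpolation step used repeatedly
    by de Casteljau evaluation of Bézier curves."""
    return a + fx_mul(b - a, t)

def bezier_quad(p0: int, p1: int, p2: int, t: int) -> int:
    """Evaluate one coordinate of a quadratic Bézier curve at t
    via two levels of fixed-point lerps."""
    a = fx_lerp(p0, p1, t)
    b = fx_lerp(p1, p2, t)
    return fx_lerp(a, b, t)
```

For example, a quadratic Bézier with control values 0, 1, 0 evaluates to 0.5 at t = 0.5, with no floating-point operations involved.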

    Haptic-Enabled Handheld Mobile Robots: Design and Analysis

    Get PDF
    The Cellulo robots are small tangible robots that are designed to represent virtual interactive point-like objects that reside on a plane within carefully designed learning activities. In the context of these activities, our robots not only display autonomous motion and act as tangible interfaces, but are also usable as haptic devices in order to exploit, for instance, kinesthetic learning. In this article, we present the design and analysis of the haptic interaction module of the Cellulo robots. We first detail our hardware and controller design that is low-cost and versatile. Then, we describe the task-based experimental procedure to evaluate the robot's haptic abilities. We show that our robot is usable in most of the tested tasks and extract perceptive and manipulative guidelines for the design of haptic elements to be integrated in future learning activities. We conclude with limitations of the system and future work.

    Development of a mixed reality application to perform feasibility studies on new robotic use cases

    Get PDF
    Master's dissertation in Industrial Engineering and Management (Engenharia e Gestão Industrial). Manufacturing companies are trying to affirm their position in the market by introducing new concepts and processes into their production systems. For this purpose, new technologies must be employed to ensure better performance and quality of their processes. Robotics has evolved considerably in the past years, creating new hardware and software technologies to answer the increasing demands of the markets. Collaborative robots are seen as one of the emerging and most promising technologies to answer Industry 4.0 necessities. However, the expertise needed to implement these robots is not often found in small and medium-sized enterprises, which represent a large share of existing manufacturing companies. At the same time, mixed reality represents a new and immersive way to test new processes without physically deploying them. To tackle this problem, a mixed reality application is developed from top to bottom, aiming to facilitate the research and feasibility studies of new robotic use cases in the pre-study implementation phase. This application serves as a proof of concept and is not developed for the end user. First, the application's requirements are set to answer manufacturing companies' needs, providing two testing robots, an intuitive robot placement method, a trajectory modeling and parameterization system, and a results framework. Then the development of the application's functionalities is explained, answering the requirements previously established. A collision detection system was defined and developed to perceive self- and environmental collisions. Furthermore, a novel process to configure the robot based on imitation learning was developed. In the end, a painting tool was integrated into the robot's 3D model and used for a use-case study of a painting task. Then, the results were registered, and the application was assessed according to the non-functional requirements. Finally, a qualitative analysis was made to evaluate the fields where this new concept can help manufacturing companies improve the implementation success of new robotic applications.
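The self- and environment-collision detection described in this abstract can be illustrated with a deliberately simple broad-phase check: robot links approximated by bounding spheres, with collisions reported as sphere overlaps. This is a generic sketch of the technique class, not the dissertation's actual implementation; all names and the sphere approximation are our assumptions.

```python
# Hedged sketch: sphere-based self-collision check for a robot whose
# links are approximated by bounding spheres. Adjacent links are
# skipped because they always touch at their shared joint.
import math

def spheres_collide(c1, r1, c2, r2):
    """True if two spheres (center, radius) overlap."""
    return math.dist(c1, c2) < r1 + r2

def self_collision(links):
    """links: list of (center, radius) tuples, one per link,
    ordered along the kinematic chain."""
    for i in range(len(links)):
        for j in range(i + 2, len(links)):  # skip adjacent pairs
            if spheres_collide(*links[i], *links[j]):
                return True
    return False
```

An environment check would run the same sphere test against obstacle geometry; a real system would refine positives with a narrow-phase mesh test.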

    3D Reconstruction Using High Resolution Implicit Surface Representations and Memory Management Strategies

    Get PDF
    The availability of fast and accurate 3D scanning sensors has made it possible to capture very large sets of points at the surface of different objects that convey the geometry of the objects. Applied metrology consists in the application of measurements in different fields such as quality control, inspection, product design and reverse engineering. Once the cloud of unorganized 3D points covering the entire surface of the object has been captured, a model of the surface must be built if metrological measurements are to be performed on the object. In real-time 3D reconstruction using handheld 3D scanners, a very efficient implicit surface representation is the Vector Field framework, which assumes that the surface is approximated by a plane in each voxel. The vector field contains the normal to the surface and the covariance matrix of the points falling inside a voxel. The global approach proposed in this project is based on the Vector Field framework. The main problem addressed in this project is reducing the memory consumption and improving the accuracy of the reconstructed model in the vector field. This approach performs an objective selection of the optimal voxel size in the Vector Field framework to keep memory consumption as low as possible while still achieving an accurate model of the surface. Moreover, high-order surface fitting is used to increase the accuracy of the model. Since our approach does not require any parameterization or complex calculation, and since we work with voxels in the vector field instead of with each point, it reduces the computational complexity.
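The per-voxel statistics described above can be sketched compactly: points are binned into voxels, each voxel accumulates its points, and the local plane normal is recovered as the eigenvector of the covariance matrix with the smallest eigenvalue. This is a minimal illustration of the idea, not the actual Vector Field implementation; the voxel size and all names are our assumptions.

```python
# Hedged sketch of a vector-field style surface representation:
# bin points into voxels, then estimate each voxel's plane normal
# as the smallest-eigenvalue eigenvector of the point covariance.
import numpy as np
from collections import defaultdict

VOXEL = 0.1  # voxel edge length (illustrative choice)

def voxel_key(p):
    """Integer voxel coordinates for a 3D point."""
    return tuple(np.floor(np.asarray(p) / VOXEL).astype(int))

def accumulate(points):
    """Group points by the voxel they fall into."""
    voxels = defaultdict(list)
    for p in points:
        voxels[voxel_key(p)].append(p)
    return voxels

def plane_normal(pts):
    """Best-fit plane normal for the points in one voxel: the
    direction of least variance, i.e. the eigenvector paired with
    the smallest eigenvalue of the 3x3 covariance matrix."""
    P = np.asarray(pts, dtype=float)
    cov = np.cov(P.T)
    w, v = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return v[:, 0]
```

Keeping only the covariance (and normal) per voxel, rather than the raw points, is what makes the memory footprint depend on voxel count instead of point count.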

    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Get PDF
    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. 
Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.

    3-D Interfaces for Spatial Construction

    Get PDF
    It is becoming increasingly easy to bring the body directly to digital form via stereoscopic immersive displays and tracked input devices. Is this space a viable one in which to construct 3d objects? Interfaces built upon two-dimensional displays and 2d input devices are the current standard for spatial construction, yet 3d interfaces, where the dimensionality of the interactive space matches that of the design space, have something unique to offer. This work increases the richness of 3d interfaces by bringing several new tools into the picture: the hand is used directly to trace surfaces; tangible tongs grab, stretch, and rotate shapes; a handle becomes a lightsaber and a tool for dropping simple objects; and a raygun, analogous to the mouse, is used to select distant things. With these tools, a richer 3d interface is constructed in which a variety of objects are created by novice users with relative ease. What we see is a space, not exactly like the traditional 2d computer, but rather one in which a distinct and different set of operations is easy and natural. Design studies, complemented by user studies, explore the larger space of three-dimensional input possibilities. The target applications are spatial arrangement, freeform shape construction, and molecular design. New possibilities for spatial construction develop alongside particular nuances of input devices and the interactions they support. Task-specific tangible controllers provide a cultural affordance which links input devices to deep histories of tool use, enhancing intuition and affective connection within an interface. On a more practical, but still emotional level, these input devices frame kinesthetic space, resulting in high-bandwidth interactions where large amounts of data can be comfortably and quickly communicated. A crucial issue with this interface approach is the tension between specific and generic input devices.
Generic devices are the tradition in computing -- versatile, remappable, frequently bereft of culture or relevance to the task at hand. Specific interfaces are an emerging trend -- customized and culturally rich, but to date these systems have been tightly linked to a single application, limiting their widespread use. The theoretical heart of this thesis, and its chief contribution to interface research at large, is an approach to customization. Instead of matching an application domain's data, each new input device supports a functional class. The spatial construction task is split into four types of manipulation: grabbing, pointing, holding, and rubbing. Each of these action classes spans the space of spatial construction, allowing a single tool to be used in many settings without losing the unique strengths of its specific form. Beyond 3d interfaces and beyond spatial construction, this approach strikes a balance between generic and specific suitable for many interface scenarios. In practice, these specific function groups are given versatility via a quick remapping technique which allows one physical tool to perform many digital tasks. For example, the handle can be quickly remapped from a lightsaber that cuts shapes to tools that place simple platonic solids, erase portions of objects, and draw double-helices in space. The contributions of this work lie both in a theoretical model of spatial interaction, and in input devices (combined with new interactions) which illustrate the efficacy of this philosophy. This research brings the new results of Tangible User Interface to the field of Virtual Reality. We find a space, in and around the hand, where full-fledged haptics are not necessary for users to physically connect with digital form.
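The functional-class remapping idea can be made concrete with a small data-structure sketch: each physical tool belongs to one functional class, and remapping swaps which digital task is bound within that class. This is our own illustration of the scheme described in the abstract, using the handle example; the class and method names are assumptions, not the thesis's code.

```python
# Illustrative sketch of "one physical tool, many digital tasks":
# a tool carries a functional class (grabbing, pointing, holding,
# rubbing) and a set of remappable tasks within that class.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Tool:
    name: str
    functional_class: str  # one of: grabbing, pointing, holding, rubbing
    tasks: Dict[str, Callable[[], str]] = field(default_factory=dict)
    active: str = ""

    def remap(self, task: str) -> None:
        """Quickly rebind the tool to another task in its class."""
        if task not in self.tasks:
            raise KeyError(f"{self.name} has no task {task!r}")
        self.active = task

    def actuate(self) -> str:
        """Perform the currently bound digital task."""
        return self.tasks[self.active]()

# The handle from the abstract: a "holding" tool remapped between tasks.
handle = Tool("handle", "holding", {
    "lightsaber": lambda: "cut shape",
    "solids": lambda: "place platonic solid",
    "eraser": lambda: "erase portion of object",
    "helix": lambda: "draw double-helix",
}, active="lightsaber")
```

The point of the design is that `remap` changes only the binding, never the physical form, so the tool keeps its cultural and kinesthetic specificity across tasks.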

    PolyVR - A Virtual Reality Authoring Framework for Engineering Applications

    Get PDF
    Virtual reality is a fantastic place, free of constraints and full of possibilities. For engineers it is the perfect place to experience science and technology, yet the infrastructure to make virtual reality accessible, especially for engineering applications, is missing. This work describes the creation of a software environment that enables easier development of virtual reality applications and their deployment on immersive hardware setups. Virtual engineering, the use of virtual environments for design reviews during the product development process, is used extremely rarely, especially by small and medium-sized enterprises. The main reasons are no longer the high costs of professional virtual reality hardware, but the lack of automated virtualization workflows and the high costs of maintenance and software development. An important aspect of automating virtualization is the integration of intelligence into artificial environments. Ontologies are the foundation of human understanding and intelligence. Categorizing our universe into concepts, properties and rules is a fundamental step of processes such as observation, learning or knowing. This work aims to take a step towards a broader use of virtual reality applications in all areas of science and technology. The approach is to build a virtual reality authoring tool, a software package that simplifies the creation of virtual worlds and their deployment on advanced immersive hardware environments such as distributed visualization systems. A further goal of this work is to enable the intuitive authoring of semantic elements in virtual worlds. This should revolutionize the creation of virtual content and the possibilities for interaction.
Intelligent immersive environments are the key to fostering learning and training in virtual worlds, to planning and monitoring processes, and to paving the way for entirely new interaction paradigms.

    A context-aware application offering map orientation

    Full text link
    Arcos Machancoses, A. (2010). A context-aware application offering map orientation. http://hdl.handle.net/10251/8583

    Enhanced Virtuality: Increasing the Usability and Productivity of Virtual Environments

    Get PDF
    With steadily increasing display resolution, more accurate tracking and falling prices, virtual reality (VR) systems are on the verge of establishing themselves successfully in the market. Various tools help developers create complex multi-user interactions within adaptive virtual environments. However, as VR systems spread, additional challenges arise: diverse input devices with unfamiliar shapes and button layouts prevent intuitive interaction. Moreover, the limited functionality of existing software forces users to fall back on conventional PC- or touch-based systems. Collaborating with other users at the same location also poses challenges regarding the calibration of different tracking systems and collision avoidance. In remote collaboration, interaction is additionally affected by latency and connection losses. Finally, users have different requirements for the visualization of content, e.g. size, orientation, color or contrast, within the virtual worlds. Strictly replicating real environments in VR wastes potential and will not make it possible to account for users' individual needs. To address these problems, this thesis presents solutions in the areas of input, collaboration, and augmentation of virtual worlds and users, aimed at increasing the usability and productivity of VR. First, PC-based hardware and software are transferred into the virtual world in order to preserve the familiarity and functionality of existing applications in VR. Virtual stand-ins for physical devices, e.g. keyboard and tablet, and a VR mode for applications allow users to carry real-world skills over into the virtual world.
Furthermore, an algorithm is presented that enables the calibration of multiple co-located VR devices with high accuracy, low hardware requirements and little effort. Since VR headsets block out the users' real surroundings, the relevance of a full-body avatar visualization for collision avoidance and remote collaboration is demonstrated. In addition, personalized spatial or temporal modifications are presented that increase users' usability, work performance and social presence. Discrepancies between the virtual worlds that arise from personal adaptations are compensated for by avatar redirection methods. Finally, some of the methods and findings are integrated into an exemplary application to demonstrate their practical applicability. This thesis shows that virtual environments can build on real skills and experiences to ensure familiar and easy interaction and collaboration among users. Moreover, individual augmentations of virtual content and avatars make it possible to overcome real-world constraints and enhance the experience of VR environments.
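The co-located calibration problem mentioned in this abstract reduces, at its core, to aligning two coordinate frames from corresponding point measurements. A standard way to do this is the Kabsch algorithm, sketched below; this is not the dissertation's actual method, only an illustration of the underlying rigid-alignment step, and all names are ours.

```python
# Hedged sketch: align two tracking systems' coordinate frames from
# corresponding measurements of the same physical points, using the
# Kabsch algorithm (SVD of the cross-covariance matrix).
import numpy as np

def kabsch(A, B):
    """Find rotation R and translation t with B ≈ A @ R.T + t,
    where A and B are N x 3 arrays of corresponding points."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # guard against reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```

In a VR setup, A and B would be positions of a shared tracked marker reported by the two systems; applying (R, t) to one system's coordinates expresses them in the other's frame.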