13 research outputs found

    Designing Explicit Numeric Input Interfaces for Immersive Virtual Environments

    User interfaces for explicit control of numeric values in immersive virtual environments have not been well studied. In the context of designing three-dimensional interaction techniques for the creation of multiple objects, called cloning, we have developed and tested a dynamic slider interface (D-Slider) and a virtual numeric keypad (V-Key). Our cloning interface requires precise number input because it allows users to place objects at any location in the environment with a precision of 1/10 unit. The design of the interfaces focuses on feedback, constraints, and expressiveness. Comparative usability studies showed that the newly designed user interfaces were easy to use, effective, and provided a good quality of interaction. We describe a working prototype of our cloning interface, the iterative design process for D-Slider and V-Key, and lessons learned. Our interfaces can be re-used for any virtual environment interaction task requiring explicit numeric input.
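    The abstract does not give the D-Slider's input-handling algorithm; as a minimal sketch, the 1/10-unit precision constraint it mentions could be implemented by quantizing a continuous slider position to the nearest step (function name and signature are hypothetical):

```python
def snap_to_precision(raw_value: float, precision: float = 0.1) -> float:
    """Constrain a continuous slider position to the nearest precision step.

    Hypothetical sketch of the 1/10-unit constraint described in the
    abstract; not the actual D-Slider implementation.
    """
    steps = round(raw_value / precision)
    # Re-round to suppress binary floating-point noise (e.g. 3.1000000000000001)
    return round(steps * precision, 10)

# Dragging the slider to 3.14159 yields the constrained value 3.1
print(snap_to_precision(3.14159))
```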

    Designing a geographic visual information system (GVIS) to support participation in urban planning

    The growth of the international movement to involve the public in urban planning urges us to find new ways to achieve this. Recent studies have identified information communication technologies (ICT) as a mechanism to support this movement. It has been postulated that integrating geographic information systems (GIS), virtual reality (VR) and Internet technologies will facilitate greater participation in planning activity and therefore strengthen and democratise the process. This is a growing area of research. There is, however, concern that the lack of a theoretical basis for these studies might undermine their success and hamper the widespread adoption of the GIS-VR combination (GVIS). This thesis presents a theoretical framework based on Learning System Theory (LST). ICT technologies are then assessed according to the framework. In the light of this assessment, a prototype has been designed and developed based on a local urban regeneration project in Salford, UK. The prototype was then evaluated in two phases, formative and summative, to test the feasibility of the framework. The formative evaluation focused on the functionality of the prototype system; the evaluators were experts in IT or urban planning. The summative evaluation focused on testing the value of the prototype for different stakeholder groups of the urban regeneration project, from local residents to planning officers. The findings from this research indicated that better visualization could help people understand planning issues and communicate their visions to others. The interactivity functions could further support interaction among users and the analysis of information. Moreover, the results indicated that Learning System Theory could be used as a framework for examining how GVIS could be developed to support public participation in urban planning.

    Visualization of Four-Dimensional Spacetimes

    Dokument1.pdf contains the text of the thesis; Dokument2.html is an index to the accompanying electronic videos. In this thesis, new and improved methods for the visualization of four-dimensional spacetimes are presented. The first part deals with the flat spacetime of special relativity. Issues of illumination, color vision, transformation of properties of light, and the kinematics of accelerating bodies are discussed. It is shown how relativistic effects on illumination can be included in well-known rendering techniques. Relativistic radiosity, texture-based relativistic rendering, and image-based relativistic rendering are proposed as new rendering methods. Interactive virtual environments for the exploration of special relativity are introduced, including the relativistic-vehicle-control metaphor for navigating at high velocities. The second part deals with curved four-dimensional spacetimes of general relativity. Direct visualization of what an observer would see in a general relativistic setting is achieved by means of non-linear ray tracing. Extensions to single-chart general relativistic ray tracing are proposed to incorporate the differential-geometric concept of an atlas. Furthermore, it is shown how the visualization of gravitational lensing can be included in a ray tracing system. The caustic finder is proposed as a numerical method to identify two-dimensional caustic structures induced by a gravitational lens. The inner geometry of two-dimensional spatial hypersurfaces can be visualized by isometric embedding in three-dimensional Euclidean space; a method is described which can embed surfaces of spherical topology. Finally, an algorithm for the adaptive triangulation of height fields is presented as a specific application in classical visualization.
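    One relativistic effect such rendering must account for is aberration: a fast-moving observer sees light concentrated toward the direction of motion (the "headlight" effect). As an illustration, the standard special-relativistic aberration formula cos θ′ = (cos θ + β)/(1 + β cos θ) can be evaluated directly (this is textbook physics, not code from the thesis):

```python
import math

def aberrated_angle(theta: float, beta: float) -> float:
    """Angle theta' at which an observer moving at speed beta = v/c sees a
    light ray that makes angle theta with the direction of motion in the
    rest frame: cos(theta') = (cos(theta) + beta) / (1 + beta*cos(theta)).
    """
    c = math.cos(theta)
    return math.acos((c + beta) / (1.0 + beta * c))

# A ray arriving from 90 degrees in the rest frame is shifted forward
# for an observer at 0.9c, illustrating the headlight effect.
shifted = aberrated_angle(math.pi / 2, 0.9)
```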

    Holistic Approach for Authoring Immersive and Smart Environments for the Integration in Engineering Education

    The fourth industrial revolution and rapid technological progress are challenging established educational structures and traditional teaching practices. In engineering education in particular, lifelong learning requires continuously improving one's knowledge and skills in order to remain competitive in the labour market. There is a need for a paradigm shift in education and training towards new technologies such as virtual reality and artificial intelligence. However, incorporating these technologies into an educational programme is not as simple as investing in new equipment or software. New educational programmes must be created, or old ones redesigned from the ground up. These are complex and extensive processes involving decision-making, design, and development, and they come with considerable challenges that require overcoming many obstacles. This thesis presents a methodology that addresses the challenges of using virtual reality and artificial intelligence as key technologies in engineering education. The methodology aims to guide the main stakeholders in improving the learning process and enabling novel and efficient learning experiences. Since every educational programme is unique, the methodology follows a holistic approach to support the creation of customized courses or training programmes. To this end, it considers the interactions between different aspects, grouped into three levels: education, technology, and management. The methodology emphasizes the influence of the technologies on instructional design and management processes, and provides methods for decision-making based on a comprehensive pedagogical, technological, and economic analysis. Furthermore, it supports the instructional design process through a comprehensive categorization of the advantages and disadvantages of immersive learning environments, showing which of their properties can improve the learning process. Particular emphasis is placed on the systematic design of immersive systems and the efficient creation of immersive applications using methods from the field of artificial intelligence. Four use cases with different educational programmes are presented to validate the methodology. Each educational programme has its own objectives, and in combination they cover the validation of all levels of the methodology. The methodology was iteratively refined and improved with each validation project. The results show that the methodology is reliable and transferable to many scenarios, as well as to most educational levels and domains. By applying the methods presented in this thesis, stakeholders can integrate immersive technologies effectively and efficiently into their teaching practice. Moreover, based on the proposed approaches, they can save effort, time, and costs in planning, developing, and maintaining immersive systems. The technology shifts the role of the teacher towards that of a moderator, and teachers gain the opportunity to support learners individually and to concentrate on their higher-order cognitive skills. As the main outcome, learners receive an appropriate, high-quality, and up-to-date education that makes them more qualified, successful, and satisfied.

    The Application of Mixed Reality Within Civil Nuclear Manufacturing and Operational Environments

    This thesis documents the design and application of Mixed Reality (MR) within a nuclear manufacturing cell through the creation of a Digitally Assisted Assembly Cell (DAAC). The DAAC is a proof-of-concept system combining full-body tracking within a room-sized environment with a bi-directional feedback mechanism that allows communication between users within the Virtual Environment (VE) and a manufacturing cell. This allows for training, remote assistance, delivery of work instructions, and data capture within a manufacturing cell. The research underpinning the DAAC encompasses four main areas: the nuclear industry; Virtual Reality (VR) and MR technology; MR within manufacturing; and the 4th Industrial Revolution (IR4.0). Using an array of Kinect sensors, the DAAC was designed to capture user movements within a real manufacturing cell, which can be transferred in real time to a VE, creating a digital twin of the real cell. Users can interact with each other via digital assets and laser pointers projected into the cell, accompanied by a built-in Voice over Internet Protocol (VoIP) system. This allows for the capture of implicit knowledge from operators within the real manufacturing cell, as well as the transfer of that knowledge to future operators. Additionally, users can connect to the VE from anywhere in the world. In this way, experts are able to communicate with the users in the real manufacturing cell and assist with their training. The human tracking data fills an identified gap in the IR4.0 network of Cyber Physical Systems (CPS), and could allow for future optimisations within manufacturing systems, Material Resource Planning (MRP) and Enterprise Resource Planning (ERP). This project is a demonstration of how MR could prove valuable within nuclear manufacture. The DAAC is designed to be low cost, which it is hoped will allow its use by groups who have traditionally been priced out of MR technology.
    This could help Small to Medium Enterprises (SMEs) close the double digital divide between themselves and larger global corporations. For larger corporations it offers the benefit of being low cost and is consequently easier to roll out across the value chain. Skills developed in one area can also be transferred to others across the internet, as users from one manufacturing cell can watch and communicate with those in another. However, as a proof of concept, the DAAC is at Technology Readiness Level (TRL) five or six and, prior to its wider application, further testing is required to assess and improve the technology. The work was patented in the UK (S. Reddish et al., 2017a), the US (S. Reddish et al., 2017b) and China (S. Reddish et al., 2017c). The patents are owned by Rolls-Royce and cover the methods of bi-directional feedback through which users can interact from the digital to the real and vice versa. Stephen Reddish, Mixed Mode Realities in Nuclear Manufacturing. Key words: Mixed Mode Reality, Virtual Reality, Augmented Reality, Nuclear, Manufacture, Digital Twin, Cyber Physical System.
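    The abstract does not detail how tracked skeletons are placed in the virtual cell; as a minimal sketch under assumed calibration, a joint position reported by one sensor could be mapped into the shared VE coordinate frame with a per-sensor offset and scale (all names and values here are hypothetical):

```python
def sensor_to_ve(joint, sensor_origin, scale=1.0):
    """Map a Kinect joint position (metres, sensor frame) into the virtual
    environment frame, given the sensor's calibrated origin in the cell.

    Hypothetical calibration sketch; the DAAC's real pipeline (multi-sensor
    fusion, rotation) is not described in the abstract.
    """
    return tuple(scale * (j - o) for j, o in zip(joint, sensor_origin))

# A joint 1.5 m in front of a sensor mounted 2 m back along the cell's x-axis
ve_pos = sensor_to_ve((0.0, 1.2, 1.5), sensor_origin=(-2.0, 0.0, 0.0))
```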

    Multifaceted facade textures for 3D city models

    Three-dimensional digital representations of cities are widely used today, from urban planning to navigation systems, emergency response, and energy and flood simulations. Many of these scenarios can be served by one multipurpose 3D city model that has the required depth of semantic and attribute information (besides the geometrical detail). These multipurpose models not only represent the geometrical properties, textures and materials, which would be sufficient for pure visualization of the urban space; they also model semantic entities like walls, roofs, ground, etc. All these parts, as well as the buildings, as specific, identifiable entities, can be linked to additional information and data sets from other sources. However, although these models have the required information richness and can be used beyond pure visualization, one part of them is still treated the same way as in pure visualization models: the textures. Textures in most of today's city models are still a tool to enhance the photo-realistic appearance. Their primary task is still to add the 'naturalistic' elements that are not modelled in geometry. These elements are mainly located in the façades, namely windows, doors, signs, fire escapes and many more. The presented work investigates how textures can be used for information visualization, which is more useful for the aforementioned multipurpose city models. A new texture concept is presented that is based on flexible content managed in layers. In this way it is possible to adapt the appearance of buildings (especially façades) to the actual scenario. The concept also allows the integration of additional information into the façade, enhancing the 3D city model. In this way it is possible to generate scenario-specific façade textures that integrate the relevant information into the texture content.
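    The abstract describes layered, flexible texture content but not a concrete compositing rule; a common choice, sketched here under that assumption, is back-to-front alpha compositing of the layers for each texel (the API is illustrative only):

```python
def composite_texel(layers):
    """Alpha-composite façade texture layers back-to-front for one texel.

    Each layer is ((r, g, b), alpha); e.g. a semi-transparent information
    layer drawn over the photographic base layer. Illustrative sketch; the
    thesis does not specify this exact scheme.
    """
    out = (0.0, 0.0, 0.0)
    for rgb, a in layers:
        # Blend this layer over the accumulated result
        out = tuple(c * a + o * (1.0 - a) for c, o in zip(rgb, out))
    return out

# Grey photographic base (opaque), overlaid by a half-transparent red
# information layer highlighting, say, a scenario-specific attribute
texel = composite_texel([((0.5, 0.5, 0.5), 1.0), ((1.0, 0.0, 0.0), 0.5)])
```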

    CNN Feature Map Interpretation and Key-Point Detection Using Statistics of Activation Layers

    Convolutional Neural Networks (CNNs) have evolved to be very accurate for the classification of image objects from a single image or frames in video. A major function in a CNN model is the extraction and encoding of features from training or ground-truth images, and simple CNN models are trained to identify a dominant object in an image from the feature encodings. More complex models such as RCNN and others can identify and locate multiple objects in an image. Feature maps from trained CNNs contain useful information beyond the encoding for classification or detection. By examining the maximum activation values and statistics from early-layer feature maps, it is possible to identify key-points of objects, including their location, particularly for object types that were included in the original training data set. Methods are introduced that leverage the key-points extracted from these early layers to isolate objects for more accurate classification and detection, using simpler networks compared to more complex, integrated networks. An examination of the feature extraction process provides insight into the information that is available in the various feature map layers of a CNN. While a basic CNN model does not explicitly create instances of visual or other types of information expression, it is possible to examine the feature map layers and create a framework for interpreting them. This can be valuable for a variety of goals, such as object location and size, feature statistics, and redundancy analysis. In this thesis we examine in detail the interpretation of feature maps in CNN models, and develop a method for extracting information from trained convolutional layers to locate objects belonging to a pre-trained image data set. A major contribution of this work is the analysis of statistical characteristics of early-layer feature maps and the development of a method of identifying key-points of objects without the benefit of information from deeper layers.
    A second contribution is an analysis of the accuracy of the selections as key-points of objects present in the image. A third contribution is the clustering of key-points to form partitions for cropping the original image and computing detections using the simple CNN model. This key-point detection method has the potential to greatly improve the classification capability of simple CNNs by making it possible to identify multiple objects in a complex input image at modest computational cost, while also providing localization information.
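    As a sketch of the statistical idea (the exact statistic the thesis uses is not given in the abstract), candidate key-points can be flagged wherever an early-layer activation exceeds the map's mean by some multiple of its standard deviation:

```python
import statistics

def feature_map_keypoints(fmap, k=2.0):
    """Return (row, col) positions whose activation exceeds
    mean + k * population-stdev of the feature map.

    A simple stand-in for the thesis's statistical key-point criterion,
    not its actual method.
    """
    flat = [v for row in fmap for v in row]
    threshold = statistics.fmean(flat) + k * statistics.pstdev(flat)
    return [(r, c)
            for r, row in enumerate(fmap)
            for c, v in enumerate(row)
            if v > threshold]

# A 3x3 map with one strong activation yields a single candidate key-point
pts = feature_map_keypoints([[0.1, 0.1, 0.1],
                             [0.1, 5.0, 0.1],
                             [0.1, 0.1, 0.1]])
```

    Clustering such candidates across channels, as the abstract describes, would then yield crop regions for the simple classifier.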