Augmenting the field experience: a student-led comparison of techniques and technologies
In this study we report on our experiences of creating and running a student fieldtrip exercise which allowed students to compare a range of approaches to the design of technologies for augmenting landscape scenes. The main study site is around Keswick in the English Lake District, Cumbria, UK, an attractive upland environment popular with tourists and walkers. The aim of the exercise for the students was to assess the effectiveness of various forms of geographic information in augmenting real landscape scenes, as mediated through a range of techniques and technologies. These techniques were: computer-generated acetate overlays showing annotated wireframe views from certain key points; a custom-designed application running on a PDA; a mediascape running on the mScape software on a GPS-enabled mobile phone; Google Earth on a tablet PC; and a head-mounted in-field Virtual Reality system. Each group of students had all five techniques available to them and was tasked with comparing them in the context of creating a visitor guide to the area centred on the field centre. Here we summarise their findings and reflect upon some of the broader research questions emerging from the project.
Education in the Wild: Contextual and Location-Based Mobile Learning in Action. A Report from the STELLAR Alpine Rendez-Vous Workshop Series
Introduction to location-based mobile learning
[About the book]
The report follows on from a 2-day workshop funded by the STELLAR Network of Excellence as part of their 2009 Alpine Rendez-Vous workshop series and is edited by Elizabeth Brown with a foreword by Mike Sharples. Contributors have provided examples of innovative and exciting research projects and practical applications for mobile learning in a location-sensitive setting, including the sharing of good practice and the key findings that have resulted from this work. There is also a debate about whether location-based and contextual learning results in shallower learning strategies, and a section detailing the future challenges for location-based learning.
Looking at instructional animations through the frame of virtual camera
This thesis investigates the virtual camera and the function of camera movements in expository motion graphics for the purpose of instruction. Motion graphic design is a popular video production technique often employed to create instructional animations that present educational content through the persuasive presentation styles of the entertainment media industry. Adopting animation as a learning tool raises distinct concerns and challenges compared to its use in entertainment, and combining cognitive learning and emotive design aspects requires additional design considerations for each design element. The thesis addresses how camera movement functions in supporting the narrative and aesthetic of instructional animations. It does this by investigating the virtual camera at the technical, semiotic and psychological levels, culminating in a systematic categorization of functional camera movements on the basis of a conceptual framework that describes the hybrid integration of physical, cognitive and affective design aspects, and in a creative work as a case study: a comprehensive instructional animation that demonstrates the practised camera movements. Because the conceptual framework underlying the supplementary work correlates with the techniques of effective instructional video production and conventional entertainment filmmaking, this thesis also touches on the relationship between live action and animation in terms of directing and staging, concluding that the virtual camera as a design factor can be useful for supporting a narrative, evoking emotion and directing the audience's focus while revealing, tracking and emphasizing information.
Characterizing Deformation of Buildings from Videos
We have started to explore the feasibility of extracting useful data on the deformation of buildings and structures from optical videos (Taghavi Larigani & Heaton).
We first examine the characteristics and limitations of the hardware, which comprises a high-quality digital camera and its optical imaging system capturing video footage of the structure under test, and then introduce a straightforward target-tracking algorithm that produces the time-series displacements of targets that we select on the video.
We performed preliminary measurements consisting of testing our target-tracking algorithm on high-definition videos of the structures that we wanted to test. The measurements pertain to 1) a finite-element software-generated video of the JPL/NASA principal building, 2) a YouTube video of a seismic dynamic test of a building, 3) a YouTube video of the Millennium Bridge in London (the "Wobbly Bridge"), 4) a YouTube video of a United Boeing 777, and 5) a YouTube video of NASA space shuttle rockets during launch.
So far, our tests are encouraging. If our approach proves viable, it could be transformative for earthquake engineering and structural health monitoring. Hence, we consider the prospect of using our technique for surveying buildings and other civil structures in urban agglomerations at high seismic risk.
In parallel, the same technique could be applied to real-time structural health monitoring of 1) civil structures, 2) nuclear plants, 3) oil and gas infrastructure, 4) rail and road networks, 5) aircraft, 6) spacecraft, etc., simply by analyzing the data recorded by a structure-facing camera.
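The target-tracking step described above can be illustrated with a minimal sketch, a hypothetical numpy implementation rather than the authors' algorithm: a brute-force normalized cross-correlation search locates a selected target patch in each frame, yielding its displacement time series (a production version would use an optimized routine such as OpenCV's matchTemplate).

```python
import numpy as np

def track_target(frames, template):
    """Brute-force normalized cross-correlation tracker: for each frame,
    find the (row, col) position whose patch best matches `template`,
    and return displacements relative to the first frame."""
    th, tw = template.shape
    t_centered = template - template.mean()
    t_std = template.std()
    positions = []
    for frame in frames:
        best_score, best_pos = -np.inf, (0, 0)
        H, W = frame.shape
        for r in range(H - th + 1):
            for c in range(W - tw + 1):
                patch = frame[r:r + th, c:c + tw]
                den = patch.std() * t_std * patch.size
                if den == 0:
                    continue  # flat patch: correlation undefined
                score = np.sum((patch - patch.mean()) * t_centered) / den
                if score > best_score:
                    best_score, best_pos = score, (r, c)
        positions.append(best_pos)
    r0, c0 = positions[0]
    return [(r - r0, c - c0) for r, c in positions]

# Synthetic check: a 3x3 gradient "target" drifts one pixel right per frame.
frames = []
for t in range(3):
    f = np.zeros((12, 12))
    f[4:7, 2 + t:5 + t] = np.arange(9, dtype=float).reshape(3, 3) + 1
    frames.append(f)
template = frames[0][4:7, 2:5]
print(track_target(frames, template))  # [(0, 0), (0, 1), (0, 2)]
```

Converting such pixel displacements into physical deformation would additionally require the camera's calibration and viewing geometry.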
Thirty-second Annual Symposium of Trinity College Undergraduate Research
2019 annual volume of abstracts for science research projects conducted by students at Trinity College.
Analyzing the Impact of Spatio-Temporal Sensor Resolution on Player Experience in Augmented Reality Games
Along with automating everyday tasks of human life, smartphones have become one of the most popular devices to play video games on due to their interactivity. Smartphones are embedded with various sensors, such as motion sensors or location sensors, which enable them to adopt new interaction techniques that enhance usability. However, despite their mobility and embedded sensor capacity, smartphones are limited in processing power and display area compared to desktop computers and consoles. When it comes to evaluating Player Experience (PX), players might not have as compelling an experience because the rich graphics environments that a desktop computer can provide are absent on a smartphone. A plausible alternative in this regard is substituting the virtual game world with a real-world game board, perceived through the device camera by rendering digital artifacts over the camera view. This technology is widely known as Augmented Reality (AR).
Smartphone sensors (e.g. GPS, accelerometer, gyroscope, compass) have enhanced the capability for deploying Augmented Reality technology. AR has been applied to a large number of smartphone games, including shooters, casual games, and puzzles. Because AR play environments are viewed through the camera, rendering the digital artifacts consistently and accurately is crucial: the digital characters need to move with respect to the sensed orientation, so the accelerometer and gyroscope need to provide sufficiently accurate and precise readings to make the game playable. In particular, determining the pose of the camera in space is vital, as the appropriate angle from which to view the rendered digital characters is determined by the camera pose; this defines how well players will be able to interact with the digital game characters. Depending on the Quality of Service (QoS) of these sensors, the Player Experience (PX) may vary, as the rendering of digital characters is affected by noisy sensors causing a loss of registration. Confronting such a problem while developing AR games is difficult in general, as it requires creating a wide variety of game types, narratives and input modalities, as well as user testing. Moreover, current AR game developers do not have any specific guidelines for developing AR games, and concrete guidelines outlining the tradeoffs between QoS and PX for different genres and interaction techniques are required.
My dissertation provides a complete view (a taxonomy) of the spatio-temporal sensor resolution dependency of existing AR games. Four user experiments have been conducted and one experiment is proposed to validate the taxonomy and demonstrate the differential impact of sensor noise on gameplay across different genres of AR games and different aspects of PX. This analysis is performed in the context of a novel instrumentation technology, which allows the controlled manipulation of QoS on position and orientation sensors. The experimental outcomes demonstrate how the QoS of input sensor noise impacts PX differently when playing AR games of different genres, and show that the key elements creating this differential impact are the input modality, narrative and game mechanics. Finally, concrete guidelines are derived for regulating sensor QoS as a complete set of instructions for developing different genres of AR games.
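The link between orientation-sensor QoS and loss of registration can be illustrated with a toy Monte-Carlo model (a hypothetical sketch, not the dissertation's instrumentation technology): inject zero-mean Gaussian noise into the yaw reading and estimate how far a character anchored in front of the camera drifts on the game-board plane.

```python
import numpy as np

def registration_error(noise_std_deg, distance_m=2.0, trials=2000, seed=0):
    """Toy model of AR registration error: a virtual character is anchored
    `distance_m` ahead of the camera, and the yaw reading carries zero-mean
    Gaussian noise of `noise_std_deg` degrees (worse sensor QoS = larger
    noise). Returns the RMS lateral drift of the character, in metres,
    at the game-board plane."""
    rng = np.random.default_rng(seed)
    yaw_err = np.radians(rng.normal(0.0, noise_std_deg, trials))
    lateral = distance_m * np.tan(yaw_err)  # lateral shift of the anchor
    return float(np.sqrt(np.mean(lateral ** 2)))

# Degrading orientation QoS monotonically worsens registration:
for std in (0.5, 2.0, 8.0):
    print(f"noise {std:>4} deg -> RMS drift {registration_error(std):.3f} m")
```

In a real study the tolerable drift threshold would differ by genre and input modality, which is exactly the kind of tradeoff the taxonomy and guidelines above address.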
Using film cutting in interface design
It has been suggested that computer interfaces could be made more usable if their designers utilized cinematography techniques, which have evolved to guide the viewer through a narrative despite frequent discontinuities in the presented scene (i.e., cuts between shots). Because of differences between the domains of film and interface design, it is not straightforward to understand how such techniques can be transferred. May and Barnard (1995) argued that a psychological model of watching film could support such a transference. This article presents an extended account of this model, which allows identification of the practice of collocation of objects of interest in the same screen position before and after a cut. To verify that filmmakers do, in fact, use such techniques successfully, eye movements were measured while participants watched the entirety of a commercially …