1,550 research outputs found
Digital Advertising and News: Who Advertises on News Sites and How Much Those Ads Are Targeted
Analyzes trends in advertising in twenty-two news operations, including shifts to digital advertising, use of consumer data to target ads, types of ads, and industries represented among advertisers by media type
Understanding the Participatory News Consumer
Analyzes survey findings on the impact of social media and mobile connectivity on news consumption behavior by demographics and political affiliation. Examines sources; topics; participation by sharing, commenting on, or creating news; and views on media
Front Matter
The Mount Royal Undergraduate Humanities Review (MRUHR) is an online, student-run, annual journal of undergraduate research in the Humanities. The MRUHR invites Mount Royal University students to submit essays or other kinds of intellectual work appropriate for an online journal and relevant to the subjects taught by the Mount Royal Department of Humanities (History, Philosophy, Women's Studies, Religious Studies, Indigenous Studies, Canadian Studies, or Art History)
Collimated Whole Volume Light Scattering in Homogeneous Finite Media
Crepuscular rays form when light encounters an optically thick or opaque medium that masks out portions of the visible scene. Real-time applications commonly approximate this phenomenon by connecting paths between light sources and the camera after a single scattering event. We provide a set of algorithms for integrating and sampling single-scattered collimated light in a box-shaped medium and show how they extend to multiple scattering and convex media. First, a method for exactly integrating the unoccluded single scattering in a rectilinear box-shaped medium is proposed and paired with a ratio estimator and a moment-based approximation. Compared to previous methods, it requires only a single sample in unoccluded areas to compute the whole integral solution and converges faster in the rest of the scene. Second, we derive an importance sampling scheme that accounts for the entire geometry of the medium. This sampling strategy is then incorporated into an optimized Monte Carlo integration scheme. The resulting integration yields visible noise reduction and is directly applicable to indoor scene rendering in room-scale interactive experiences. Furthermore, it extends to multiple light sources and achieves superior convergence compared to independent sampling with existing algorithms. We validate our techniques against previous methods based on ray marching and distance sampling, confirming their superior noise reduction capability
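The paper's closed-form integration is not reproduced here, but the ray-marching baseline it compares against is easy to sketch. The Python sketch below estimates unoccluded single scattering from a point light in a homogeneous medium with an isotropic phase function; all names and parameters are illustrative assumptions, and the paper's collimated-light and box-medium specifics are omitted.

```python
import math

def transmittance(sigma_t: float, distance: float) -> float:
    """Beer-Lambert attenuation through a homogeneous medium."""
    return math.exp(-sigma_t * distance)

def single_scatter_ray_march(ray_o, ray_d, t_max, light_pos, light_flux,
                             sigma_s, sigma_t, num_steps=64):
    """Estimate in-scattered radiance along a camera ray by ray marching.

    Assumes an isotropic phase function (1 / 4pi), no occlusion, and
    light_flux as the total power of an isotropic point light;
    sigma_s / sigma_t are the scattering / extinction coefficients.
    """
    dt = t_max / num_steps
    radiance = 0.0
    for i in range(num_steps):
        t = (i + 0.5) * dt                        # midpoint of the i-th segment
        p = [ray_o[k] + t * ray_d[k] for k in range(3)]
        d_light = math.dist(p, light_pos)         # distance to the point light
        falloff = light_flux / (4.0 * math.pi * d_light * d_light)
        phase = 1.0 / (4.0 * math.pi)             # isotropic phase function
        # transmittance camera->p, scattering at p, transmittance p->light
        radiance += (transmittance(sigma_t, t)
                     * sigma_s * phase
                     * transmittance(sigma_t, d_light)
                     * falloff * dt)
    return radiance
```

Estimators of this kind need many steps per ray to suppress noise, which is the cost the paper's single-sample closed-form integration avoids in unoccluded regions.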
Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis
Anamorphosis for 2D displays can provide viewer-centric perspective, enabling 3D appearance, eye contact, and engagement by adapting dynamically in real time to a single moving viewer's viewpoint, but at the cost of distorted viewing for other viewers. We present a method for constructing non-linear projections as a combination of anamorphic rendering for selected objects while reverting to normal perspective rendering for the rest of the scene. Our study defines a scene consisting of five characters, with one of these characters selectively rendered in anamorphic perspective. We conducted an evaluation experiment and demonstrate that the tracked viewer-centric imagery for the selected character improves gaze and engagement estimation. Critically, this is achieved without sacrificing the other viewers' viewing experience. In addition, we present findings on the perception of gaze direction for regularly viewed characters located off-center from the origin, where perceived gaze shifts increasingly from aligned to misaligned as the distance between viewer and character increases. Finally, we discuss different viewpoints and the spatial relationship between objects
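Viewer-centric anamorphic rendering of this kind builds on tracked off-axis projection. The sketch below is the standard generalized perspective projection (after Kooima) for a tracked eye position and a planar display; it is background for the technique, not the authors' selective per-object anamorphosis, and the function name and parameters are illustrative.

```python
import numpy as np

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Viewer-centric off-axis projection for a planar screen.

    pa, pb, pc: lower-left, lower-right, upper-left screen corners (world space);
    pe: tracked eye position. Returns the combined 4x4 projection*view matrix.
    """
    pa, pb, pc, pe = (np.asarray(p, dtype=float) for p in (pa, pb, pc, pe))
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal
    va, vb, vc = pa - pe, pb - pe, pc - pe            # eye -> corner vectors
    d = -np.dot(va, vn)                               # eye-to-screen distance
    l = np.dot(vr, va) * near / d                     # asymmetric frustum extents
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    P = np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                  [0, 2*near/(t-b), (t+b)/(t-b), 0],
                  [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0, 0, -1, 0]])
    M = np.eye(4)                                     # rotate into the screen basis
    M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn
    T = np.eye(4); T[:3, 3] = -pe                     # translate eye to the origin
    return P @ M @ T
```

Rendering only the selected character with this tracked matrix, while the rest of the scene keeps a fixed perspective, would give the selective effect the abstract describes.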
Intermediated Reality: A Framework for Communication Through Tele-Puppetry
We introduce Intermediated Reality (IR), a framework for intermediated communication enabling collaboration through remote possession of entities (e.g., toys) that come to life in mobile Mediated Reality (MR). As part of a two-way conversation, each person communicates through a toy figurine that is remotely located in front of the other participant. Each person's face is tracked through the front camera of their mobile devices and the tracking pose information is transmitted to the remote participant's device along with the synchronized captured voice audio, allowing a turn-based interactive avatar chat session, which we have called ToyMeet. By altering the camera video feed with a reconstructed appearance of the object in a deformed pose, we perform the illusion of movement in real-world objects to realize collaborative tele-present augmented reality (AR). In this turn based interaction, each participant first sees their own captured puppetry message locally with their device's front facing camera. Next, they receive a view of their counterpart's captured response locally (in AR) with seamless visual deformation of their local 3D toy seen through their device's rear facing camera. We detail optimization of the animation transmission and switching between devices with minimized latency for coherent smooth chat interaction. An evaluation of rendering performance and system latency is included. As an additional demonstration of our framework, we generate facial animation frames for 3D printed stop motion in collaborative mixed reality. This allows a reduction in printing costs since the in-between frames of key poses can be generated digitally with shared remote review
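The paper's optimized animation transmission is not specified in the abstract; as a rough sketch of the kind of per-frame payload such a system might exchange (tracked head pose plus facial blendshape weights, synchronized by timestamp), consider the hypothetical wire format below. All field and function names are assumptions for illustration, not the ToyMeet protocol.

```python
import json, time
from dataclasses import dataclass, asdict

@dataclass
class PuppetFrame:
    """One frame of a captured puppetry message (hypothetical wire format).

    head_pose: 6-DoF pose as [tx, ty, tz, rx, ry, rz];
    blendshapes: facial expression weights from the front-camera face tracker.
    """
    timestamp: float
    head_pose: list
    blendshapes: dict

def encode_frame(frame: PuppetFrame) -> bytes:
    """Serialize a frame for transmission alongside the synchronized audio."""
    return json.dumps(asdict(frame)).encode("utf-8")

def decode_frame(payload: bytes) -> PuppetFrame:
    """Reconstruct a frame on the remote participant's device."""
    return PuppetFrame(**json.loads(payload.decode("utf-8")))

# Example: one tracked frame, ready to drive the remote toy's deformation.
frame = PuppetFrame(time.time(), [0.0, 0.0, 0.0, 0.0, 0.1, 0.0],
                    {"jawOpen": 0.4, "smileLeft": 0.7})
assert decode_frame(encode_frame(frame)) == frame
```

A production system would favor a compact binary encoding and timestamp-based interpolation over JSON, which is used here only for readability.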
Empowerment and embodiment for collaborative mixed reality systems
We present several mixed-reality-based remote collaboration settings using consumer head-mounted displays. We investigated how two people are able to work together in these settings. We found that the person in the augmented reality (AR) system tends to be regarded as the "leader" (i.e., they provide a greater contribution to the collaboration), whereas no similar "leader" emerges in AR-to-AR and AR-to-VRBody settings. We also found that these special patterns of leadership emerged only for 3D interactions and not for 2D interactions. Results about the participants' experience of leadership, collaboration, embodiment, presence, and copresence shed further light on these findings
Photo-Realistic Facial Details Synthesis from Single Image
We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis. For proxy generation, we conduct emotion prediction to determine a new expression-informed proxy. For detail synthesis, we present a Deep Facial Detail Net (DFDN) based on a Conditional Generative Adversarial Net (CGAN) that employs both geometry and appearance loss functions. For geometry, we capture 366 high-quality 3D scans from 122 different subjects under 3 facial expressions. For appearance, we use an additional 20K in-the-wild face images and apply image-based rendering to accommodate lighting variations. Comprehensive experiments demonstrate that our framework can produce high-quality 3D faces with realistic details under challenging facial expressions
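A minimal sketch of how a conditional GAN objective can combine geometry and appearance terms is given below in PyTorch-style Python. The loss choices (L1 terms), weights, and names are assumptions for illustration; the paper's exact DFDN formulation may differ.

```python
import torch
import torch.nn.functional as F

def detail_generator_loss(disc, condition, fake_detail, real_detail,
                          rendered_fake, photo, w_geo=10.0, w_app=1.0):
    """Illustrative generator objective mixing adversarial, geometry,
    and appearance terms.

    disc:          conditional discriminator D(condition, detail_map)
    fake_detail:   generated facial detail (e.g., displacement) map
    real_detail:   ground-truth detail map from the 3D scans
    rendered_fake: image-based rendering of the generated details
    photo:         in-the-wild target photograph
    """
    # Adversarial term: encourage details the discriminator accepts as real.
    logits_fake = disc(condition, fake_detail)
    adv = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))
    geo = F.l1_loss(fake_detail, real_detail)   # supervised geometry term (scans)
    app = F.l1_loss(rendered_fake, photo)       # unsupervised appearance term (photos)
    return adv + w_geo * geo + w_app * app
```

In a hybrid setup like the one described, the geometry term would apply only to batches drawn from the scan data, while the appearance term covers the unlabeled in-the-wild images.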
Integrating real-time fluid simulation with a voxel engine
We present a method of adding sophisticated physical simulations to voxel-based games such as the hugely popular Minecraft (2012. http://minecraft.gamepedia.com/Liquid), thus providing a dynamic and realistic fluid simulation in a voxel environment. Existing simulators and voxel engines are assessed, and an efficient real-time method to integrate optimized fluid simulations with voxel-based rasterisation on graphics hardware is demonstrated. We compare graphics processing unit (GPU) compute implementations of a well-known incompressible fluid advection method with recent results on geometry shader-based voxel rendering. The rendering of visibility-culled voxels from fluid simulation results stored intermediately in CPU memory is compared with a novel, entirely GPU-resident algorithm
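The well-known advection method referenced is plausibly in the family of Stam-style semi-Lagrangian "stable fluids" schemes. As a CPU-side illustration (the paper's version targets the GPU, and this is a sketch under that assumption rather than its implementation), the NumPy code below advects a 2D scalar field through a velocity field by backtracing each cell and bilinearly interpolating.

```python
import numpy as np

def advect(field, u, v, dt):
    """Semi-Lagrangian advection step on a 2D grid.

    field, u, v: arrays of shape (n, m) holding the advected quantity and
    the x/y velocity components. Each cell is traced backwards along the
    velocity and the field is sampled there with bilinear interpolation,
    which keeps the step unconditionally stable.
    """
    n, m = field.shape
    ys, xs = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    # Backtrace departure points, clamped to the grid interior.
    x0 = np.clip(xs - dt * u, 0, m - 1.001)
    y0 = np.clip(ys - dt * v, 0, n - 1.001)
    i0, j0 = y0.astype(int), x0.astype(int)
    fy, fx = y0 - i0, x0 - j0
    # Bilinear interpolation of the four surrounding cells.
    return ((1 - fy) * (1 - fx) * field[i0, j0]
            + (1 - fy) * fx * field[i0, j0 + 1]
            + fy * (1 - fx) * field[i0 + 1, j0]
            + fy * fx * field[i0 + 1, j0 + 1])
```

A full incompressible solver would pair this advection kernel with a pressure-projection step to keep the velocity field divergence-free.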
- …