
    Copyright Protection of 3D Digitized Sculptures by Use of Haptic Device for Adding Local-Imperceptible Bumps

    This research aims to improve approaches for protecting digitized 3D models of cultural heritage objects, such as the approach presented in the authors' previous work on this topic. The technique can be used to protect works of art such as 3D models of sculptures, pottery, and 3D digital characters for animated film and gaming, and it can also be applied to the preservation of architectural heritage. In the research presented here, protection was added to the scanned 3D model of the original sculpture using a digital sculpting technique with a haptic device. The original 3D model and the protected model were then printed on a 3D printer, and the printed models were scanned. To measure the thickness of the added protection, the original 3D model and the protected model were compared; the two scans of the printed sculptures were also compared to quantify the amount of added material. The thickness of the added protection is up to 2 mm, whereas the largest difference detected between a scan of the original sculpture (or of the protected 3D model) and a scan of its printed version (or of the protected printed version) is about 1 mm.
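    As a rough illustration of the kind of mesh-to-mesh comparison described above (this is not the authors' measurement pipeline), the sketch below uses the trimesh library to measure, for each vertex of one scan, the distance to the surface of the other. The file names, and the assumption that both meshes are already registered in a common millimetre-scale coordinate frame, are hypothetical.

```python
# Hedged sketch: estimate how much material a "protection" pass added by
# measuring, for every vertex of the original scan, the distance to the
# protected mesh surface. Assumes both files load as single, pre-aligned
# meshes in millimetre units; file names are placeholders.
import numpy as np
import trimesh

original = trimesh.load("original_scan.stl")    # hypothetical path
protected = trimesh.load("protected_scan.stl")  # hypothetical path

# Closest point on the protected surface for each vertex of the original scan.
_, distances, _ = trimesh.proximity.closest_point(protected, original.vertices)

print(f"mean deviation: {np.mean(distances):.3f} mm")
print(f"max deviation:  {np.max(distances):.3f} mm")  # on the order of 1-2 mm in the study above
```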

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, such as hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience, or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge of this research area and to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.
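    Purely as an illustration of the bookkeeping that inter-device synchronization rests on (not an example taken from the handbook), the sketch below compares the playout positions reported by two hypothetical receivers against a shared wall clock. The PlayoutReport structure and the example numbers are assumptions made for this sketch.

```python
# Hedged sketch: the core of inter-device sync is comparing each receiver's
# playout position against a common reference clock and nudging the laggard.
from dataclasses import dataclass

@dataclass
class PlayoutReport:
    wall_clock: float   # seconds on a shared reference clock (e.g. NTP)
    media_time: float   # seconds of media already played out

def skew(a: PlayoutReport, b: PlayoutReport) -> float:
    """Positive result: receiver `a` is ahead of receiver `b`."""
    # Project both reports onto the same wall-clock instant before comparing.
    return (a.media_time - a.wall_clock) - (b.media_time - b.wall_clock)

tv = PlayoutReport(wall_clock=100.00, media_time=42.30)
tablet = PlayoutReport(wall_clock=100.05, media_time=42.18)
print(f"tablet lags the TV by {skew(tv, tablet) * 1000:.0f} ms")
```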

    Data wiping tool: ByteEditor Technique

    This wiping tool is an anti-forensic tool built to wipe data permanently from a laptop's storage, ensuring the data cannot be recovered with any recovery tool. The objective of building this wiping tool is to maintain the confidentiality and integrity of the data against unauthorized access. People tend to delete files in the normal way; however, such files remain at risk of being recovered, so the integrity and confidentiality of the deleted files cannot be protected. With wiping tools, files are overwritten with random strings so that they are no longer readable, and their integrity and confidentiality can thus be protected. Many of today's wiping tools, however, face issues such as data breaches because they are unable to delete data permanently from the device; this undermines their main function and poses a threat to their users. Hence, a new wiping tool was developed to overcome the problem. The new tool, named Data Wiping tool, applies two wiping techniques: the first is Randomized Data, while the second is an enhanced wiping technique known as ByteEditor. ByteEditor is a combination of two different techniques, byte editing and byte deletion. The tool was built following an Object-Oriented methodology consisting of analysis, design, implementation, and testing. The tool was analysed and compared with other wiping tools before its design started. Once the design was done, the implementation phase took place: the code was written in Visual Studio 2010 using the C# language, and its functionality was tested to ensure the developed tool meets the objectives of the project. This tool is believed to contribute to the development of wiping tools and to solve problems found in other wiping tools. A hedged sketch of the general overwrite-then-delete idea appears below.
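    The abstract does not include the tool's source; as an illustration of the general overwrite-then-delete idea that wiping tools such as ByteEditor build on (not the tool's actual C# code), the following Python fragment overwrites a file's contents with random bytes before removing it. The wipe function, its parameters, and the example path are illustrative assumptions.

```python
# Hedged sketch: overwrite a file's bytes with random data before deleting
# it, so that simple recovery tools only find garbage in the freed blocks.
import os

def wipe(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:          # open in place, read/write binary
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))     # overwrite contents with random bytes
            f.flush()
            os.fsync(f.fileno())          # push the overwrite to disk
    os.remove(path)                       # then drop the directory entry

# wipe("secret.docx")  # hypothetical file; chunked writes omitted for brevity
```

    On SSDs and on journaling or copy-on-write filesystems an overwrite is not guaranteed to land on the original blocks, which is one reason dedicated wipers go beyond a sketch like this.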

    Second Screen interaction in the cinema: Experimenting with transmedia narratives and commercialising user participation

    In its relatively short life, second screen interaction has evolved into a variety of forms of viewer engagement. The practice of using two screens concurrently has become common in domestic TV viewing but remains a relatively specialist and niche experience in movie theatres. For this paper, three case studies explore the motivations and challenges involved in such projects. The film Late Shift (Weber, 2016) pulls together conventional cinematic narrative techniques and combines them with the interactivity of Full Motion Video games. An earlier film, App (Boermans, 2015), innovated with the second screen as a vehicle for transmedia content to enhance an affective response within the horror genre. The release of the film Angry Birds (Rovio, 2016) involved a symbiotic second screen play-along element that began in advance of the screening and continued after the movie concluded. This study also analyses other interactive projects within this context, including Disney's short-lived 'Second Screen Live' that accompanied the release of The Little Mermaid (Disney, 1993) and commercial platforms including CiniMe and TimePlay. Mobile devices are being used as platforms for interactive gameplay, social participation and commercial opportunities. However, this landscape has implications for the culture of audience etiquette and the notion of user agency within an environment of immersive storytelling.
    http://www.participations.org/Volume%2014/Issue%202/27.pd

    Application of Machine Learning within Visual Content Production

    We are living in an era in which digital content is being produced at a dazzling pace. The heterogeneity of contents and contexts is so varied that numerous applications have been created to respond to people's and market demands. The visual content production pipeline is a generalisation of the process that allows a content editor to create and evaluate their product, such as a video, an image, or a 3D model. Such data is then displayed on one or more devices such as TVs, PC monitors, virtual reality head-mounted displays, tablets, mobiles, or even smartwatches. Content creation can be as simple as clicking a button to film a video and share it on a social network, or as complex as managing a dense user interface full of parameters with keyboard and mouse to generate a realistic 3D model for a VR game. In the second example, such sophistication results in a steep learning curve for beginner-level users, while expert users regularly need to refine their skills via expensive lessons, time-consuming tutorials, or experience. Thus, user interaction plays an essential role in the diffusion of content creation software, especially when it is targeted at untrained people. In particular, the fast spread of virtual reality devices into the consumer market has created new opportunities for designing reliable and intuitive interfaces. Such new interactions need to take a step beyond the point-and-click interaction typical of the 2D desktop environment: they need to be smart, intuitive and reliable, able to interpret 3D gestures, and therefore more accurate algorithms are needed to recognise patterns. In recent years, machine learning, and in particular deep learning, has achieved outstanding results in many branches of computer science, such as computer graphics and human-computer interfaces, outperforming algorithms that were considered state of the art; however, only fleeting efforts have been made to translate these advances into virtual reality.
    In this thesis, we seek to apply and take advantage of deep learning models in two areas of the content production pipeline: advanced methods for user interaction and visual quality assessment. First, we focus on 3D sketching to retrieve models from an extensive database of complex geometries and textures while the user is immersed in a virtual environment. We explore both 2D and 3D strokes as tools for model retrieval in VR, and we implement a novel system for improving accuracy in searching for a 3D model. We contribute an efficient method to describe models through 3D sketches via iterative descriptor generation, focusing on both accuracy and user experience, and we design a user study to compare different interactions for sketch generation. Second, we explore the combination of sketch input and vocal description to correct and fine-tune the search for 3D models in a database containing fine-grained variation; we analyse sketch and speech queries and identify a way to incorporate both into our system's interaction loop. Third, in the context of the visual content production pipeline, we present a detailed study of visual metrics and propose a novel method for detecting rendering-based artefacts in images, which exploits deep learning algorithms analogous to those used when extracting features from sketches.
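    As a loose illustration of the retrieval step common to sketch-based search systems like the one described (this is not the thesis implementation), the sketch below ranks a database of descriptor vectors by cosine similarity to a query descriptor. The encoder networks that would produce these descriptors are assumed to exist upstream; the random vectors merely stand in for their outputs.

```python
# Hedged sketch: rank precomputed model descriptors against a query sketch
# descriptor by cosine similarity, returning the most similar models first.
import numpy as np

def cosine_rank(query: np.ndarray, gallery: np.ndarray) -> np.ndarray:
    """Return gallery indices sorted from most to least similar."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(g @ q))

rng = np.random.default_rng(0)
query_descriptor = rng.normal(size=128)           # stand-in for a sketch encoder output
model_descriptors = rng.normal(size=(1000, 128))  # stand-in for the database descriptors
top10 = cosine_rank(query_descriptor, model_descriptors)[:10]
print(top10)
```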

    User multimedia preferences to receive information through mobile phone


    Perceptual quality and visual experience analysis for polygon mesh on different display devices

    Polygon mesh models have been widely used in various areas due to their high degree of verisimilitude and interactivity. Since mesh models usually undergo various stages of signal processing for the purposes of storage, simplification, transmission, and deformation, their perceptual quality and the visual experience they provide are often subject to distortion at every stage. Therefore, investigating the perceptual quality and the visual experience of mesh models has become one of the major tasks for both academia and industry. In this paper, we design two subjective experiments to investigate perceptual quality and visual experience in both a virtual reality environment and a traditional 2-D environment. Experimental results show that there is no statistically significant difference in quality perception between the two viewing conditions, independent of the model content, the distortion type, and the distortion level. In contrast, there is a significant difference in visual experience between the two viewing conditions under various factors. This paper helps researchers better understand quality perception behavior and visual experience toward polygon mesh models.
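    As a hedged illustration of the kind of statistical comparison reported above (not the paper's actual analysis script), the sketch below applies a paired t-test to made-up mean opinion scores collected for the same stimuli under the two viewing conditions; the score arrays are placeholders.

```python
# Hedged sketch: compare subjective quality scores for the same distorted
# meshes viewed in VR versus on a 2-D display, using a paired t-test.
import numpy as np
from scipy import stats

mos_vr = np.array([4.1, 3.6, 2.8, 4.4, 3.2, 2.5, 4.0, 3.7])  # mean opinion scores, VR (placeholder)
mos_2d = np.array([4.0, 3.8, 2.7, 4.3, 3.3, 2.4, 4.1, 3.6])  # same stimuli, 2-D display (placeholder)

t, p = stats.ttest_rel(mos_vr, mos_2d)
print(f"paired t = {t:.2f}, p = {p:.3f}")
if p >= 0.05:
    print("no statistically significant difference between viewing conditions")
```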

    Real Virtuality: A Code of Ethical Conduct. Recommendations for Good Scientific Practice and the Consumers of VR-Technology

    The goal of this article is to present a first list of ethical concerns that may arise from research and personal use of virtual reality (VR) and related technology, and to offer concrete recommendations for minimizing those risks. Many of the recommendations call for focused research initiatives. In the first part of the article, we discuss the relevant evidence from psychology that motivates our concerns. In Section "Plasticity in the Human Mind," we cover some of the main results suggesting that one's environment can influence one's psychological states, as well as recent work on inducing illusions of embodiment. Then, in Section "Illusions of Embodiment and Their Lasting Effect," we go on to discuss recent evidence indicating that immersion in VR can have psychological effects that last after leaving the virtual environment. In the second part of the article, we turn to the risks and recommendations. We begin, in Section "The Research Ethics of VR," with the research ethics of VR, covering six main topics: the limits of experimental environments, informed consent, clinical risks, dual-use, online research, and a general point about the limitations of a code of conduct for research. Then, in Section "Risks for Individuals and Society," we turn to the risks of VR for the general public, covering four main topics: long-term immersion, neglect of the social and physical environment, risky content, and privacy. We offer concrete recommendations for each of these 10 topics, summarized in Table 1.