Quality assessment for virtual reality technology based on real scene
Virtual reality is a new display technology that provides users with an immersive viewing experience. Most virtual reality systems display content through stereoscopic images. However, image quality is degraded during collection, storage and transmission. If the stereoscopic image quality is seriously damaged, the user will feel uncomfortable, and this can even cause health problems. In this paper, we establish an accurate and effective set of quality evaluations for virtual reality. In the preprocessing stage, we segment the original reference and distorted images into binocular regions and monocular regions. Then, the Information-weighted SSIM (IW-SSIM) or Information-weighted PSNR (IW-PSNR) values over the monocular regions are used to obtain the IW-score. At the same time, the Stereo-weighted SSIM (SW-SSIM) or Stereo-weighted PSNR (SW-PSNR) values are used to calculate the SW-score. Finally, we pool the stereoscopic image score by combining the IW-score and SW-score. Experiments show that our method is highly consistent with human subjective judgment in the evaluation of virtual reality technology
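The final pooling step described in the abstract — combining the monocular IW-score and the binocular SW-score into one stereoscopic quality score — can be sketched as follows. This is a minimal illustration only: the linear weighting and the `alpha` parameter are assumptions, as the abstract does not give the exact pooling rule.

```python
def pool_stereo_score(iw_score: float, sw_score: float, alpha: float = 0.5) -> float:
    """Pool the IW-score (monocular regions) and SW-score (binocular regions)
    into a single stereoscopic quality score.

    NOTE: the weighted-sum form and the alpha weight are illustrative
    assumptions, not the paper's actual pooling rule.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    # alpha weights the monocular (IW) contribution; the remainder goes
    # to the binocular (SW) contribution.
    return alpha * iw_score + (1.0 - alpha) * sw_score
```

With SSIM-style scores in [0, 1], equal weighting of `iw_score=0.8` and `sw_score=0.6` yields an overall score of 0.7.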
The adult life span
Distance learning materials for the B.Sc(Hons) Social work programme as part of the 'Working with Adults' module. A discussion, with critical thinking exercises, that presents a biographical approach to the study of the adult life course
Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media, including text, images, 3D graphics, audio
and video, are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, fibre, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet are confronted
with a bewildering range of media, services and applications, and with technological innovations concerning
media formats, wireless networks, and terminal types and capabilities. There is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file and over 47 million of them do so
regularly, searching more than 160 Exabytes of content. In the near future these numbers are expected
to rise exponentially. Internet content is expected to increase by at least a factor of 6, rising
to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in the near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalised
way, improving citizens’ quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, as well as
community networks and the use of peer-to-peer (P2P) overlays, are expected to generate new models of
interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and
innovative applications “on the move”, like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming and edutainment. In this context, interaction with content,
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6)
and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed to this white paper, which aims to describe the status, the state of the art, the challenges and the way
ahead in the area of content-aware media delivery platforms
Audiovisual preservation strategies, data models and value-chains
This is a report on preservation strategies, models and value-chains for digital file-based audiovisual content. The report includes: (a) current and emerging value-chains and business models for audiovisual preservation; (b) a comparison of preservation strategies for audiovisual content, including their strengths and weaknesses; and (c) a review of current preservation metadata models and requirements for extension to support audiovisual files
Quality assessment technique for ubiquitous software and middleware
The new paradigm of computing and information systems is ubiquitous computing. The technology-oriented issues of ubiquitous computing systems have led researchers to pay much attention to feasibility studies of the technologies rather than to building quality assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted and enhanced over the years; for example, the recognised standard quality model (ISO/IEC 9126) is the result of a consensus on a software quality model with three levels: characteristics, sub-characteristics and metrics. However, it is very unlikely that this scheme will be directly applicable to ubiquitous computing environments, which differ considerably from conventional software, so considerable attention is being given to reformulating existing methods and, especially, to elaborating new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as the quality target for both ubiquitous computing product evaluation processes and development processes. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, and in some cases with measures. In addition, this quality model has been applied to an industrial ubiquitous computing setting. This revealed that while the approach is sound, some parts need further development in the future
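The three-level structure the abstract borrows from ISO/IEC 9126 (characteristics, sub-characteristics, metrics) can be sketched as a nested data structure with a simple evaluation pass. The characteristic names, metric names and targets below are illustrative assumptions, not the paper's actual quality model.

```python
# Illustrative three-level quality model: characteristic -> sub-characteristic
# -> metric spec. All names and target values are hypothetical examples.
quality_model = {
    "reliability": {
        "fault_tolerance": {"metric": "failures_per_1000_ops", "target": 1.0},
    },
    "usability": {
        "learnability": {"metric": "mean_task_time_s", "target": 120.0},
    },
}

def evaluate(model, measurements):
    """Check each metric's measured value against its target.

    In this illustrative scheme, lower is better, so a sub-characteristic
    passes when its measurement is present and does not exceed the target.
    """
    results = {}
    for characteristic, subs in model.items():
        for sub, spec in subs.items():
            value = measurements.get(spec["metric"])
            results[(characteristic, sub)] = (
                value is not None and value <= spec["target"]
            )
    return results
```

For example, measurements of 0.5 failures per 1000 operations and a 150-second mean task time would pass the fault-tolerance target but fail the learnability target.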
EVALUATING THE CYBER SECURITY IN THE INTERNET OF THINGS: SMART HOME VULNERABILITIES
The need for advanced cyber security measures and strategies is attributed to the modern sophistication of cyber-attacks and the intense media attention when attacks and breaches occur. In May 2014, a congressional report suggested that Americans used approximately 500 million Internet-capable devices at home, including, but not limited to, Smartphones, tablets, and other Internet-connected devices, which run various unimpeded applications. Owing to this high level of connectivity, our home environment is not immune to the cyber-attack paradigm; rather, the home has evolved to become one of the most affected markets, where the Internet of Things presents extensive attack surfaces, attack vectors, and unanswered security concerns. Thus, the aim of the present research was to investigate behavioral heuristics of the Internet of Things by adopting an exploratory multiple case study approach. A controlled Internet of Things ecosystem was constructed, consisting of real-life data observed during a typical life cycle of initial configuration and average use. The information obtained during the course of this study involved the systematic acquisition and analysis of Smart Home ecosystem link-layer protocol data units (PDUs). The methodology employed during this study involved a recursive multiple case study evaluation of the Smart Home ecosystem data-link layer PDUs and aligned the case studies to the existing Intrusion Kill Chain design model. The proposed solution emerging from the case studies builds the appropriate data collection template while concurrently developing a Security as a Service (SECaaS) capability to evaluate collected results
A geometrical-based approach to recognise structure of complex interiors
3D modelling of building interiors has gained a lot of interest recently, specifically since the
rise of Building Information Modelling (BIM). A number of methods have been developed in
the past; however, most of them are limited to modelling non-complex interiors. 3D laser
scanners are the preferred sensor for collecting 3D data, but the cost of state-of-the-art
laser scanners is prohibitive to many. Other types of sensors could also be used to generate
the 3D data, but they have limitations, especially when dealing with clutter and occlusions.
This research has developed a platform to produce 3D models of building interiors while
adapting a low-cost, low-level laser scanner to generate the 3D interior data. The PreSuRe
algorithm developed here, which introduces a new pipeline for modelling building interiors,
combines novel methods and adapts existing approaches to produce 3D models of
various interiors, from sparse rooms to complex interiors with non-ideal geometrical structure
that are highly cluttered and occluded. This approach has successfully reconstructed the structure of
interiors with above 96% accuracy, even with a high amount of noise and clutter. The
time taken to produce the resulting model is almost real-time, compared to existing
techniques which may take hours to generate the reconstruction. The produced model is also
equipped with semantic information, which differentiates it from a regular 3D CAD
drawing, and can be used to assist professionals and experts in related fields