The camera of the fifth H.E.S.S. telescope. Part I: System description
In July 2012, as the four ground-based gamma-ray telescopes of the H.E.S.S.
(High Energy Stereoscopic System) array reached their tenth year of operation
in Khomas Highlands, Namibia, a fifth telescope took its first data as part of
the system. This new Cherenkov detector, comprising a 614.5 m^2 reflector with
a highly pixelized camera in its focal plane, improves the sensitivity of the
current array by a factor of two and extends its energy domain down to a few tens
of GeV.
The present Part I of the paper gives a detailed description of the fifth
H.E.S.S. telescope's camera, covering both the hardware and the software and
emphasizing the main improvements over previous H.E.S.S. camera technology.
Comment: 16 pages, 13 figures, accepted for publication in NIM
Visual Distortions in 360-degree Videos
Omnidirectional (or 360°) images and videos are emergent signals used in many areas, such as robotics and virtual/augmented reality. In particular, for virtual reality applications, they allow an immersive experience in which the user, wearing a head-mounted display, can interactively navigate through a scene with three degrees of freedom. Current approaches for capturing, processing, delivering, and displaying 360° content, however, present many open technical challenges and introduce several types of distortions in the visual signal. Some of these distortions are specific to the nature of 360° images and often differ from those encountered in classical visual communication frameworks. This paper provides a first comprehensive review of the most common visual distortions that alter 360° signals as they pass through the different processing elements of the visual communication pipeline. While their impact on viewers' visual perception, and on the immersive experience at large, remains an open research topic, this review proposes a taxonomy of the visual distortions that can be encountered in 360° signals and identifies their underlying causes in the end-to-end 360° content distribution pipeline. This taxonomy is essential as a basis for comparing different processing techniques, such as visual enhancement, encoding, and streaming strategies, and for the effective design of new algorithms and applications. It is also a useful resource for the design of psycho-visual studies aiming to characterize human perception of 360° content in interactive and immersive applications.
End-to-end 3D video communication over heterogeneous networks
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
Three-dimensional technology, more commonly referred to as 3D technology, has revolutionised many fields including entertainment, medicine, and communications. In addition to 3D films, games, and sports channels, 3D perception has made tele-medicine a reality. Consumer electronics manufacturers predicted that, by 2015, 30% of all HD panels in homes would be 3D enabled. Stereoscopic cameras, a comparatively mature technology among 3D systems, are now being used by ordinary citizens to produce 3D content and share it at the click of a button, just as they do with its 2D counterpart via sites like YouTube. But technical challenges still exist, including with autostereoscopic multiview displays. Because of its increased amount of data, 3D content raises many complex considerations for transmission and storage, including how to represent it and which compression format is best. Any decision must be taken in the light of the available bandwidth or storage capacity, quality, and user expectations. Free viewpoint navigation also remains partly unsolved. The most pressing issue in the way of widespread uptake of consumer 3D systems is the ability to deliver 3D content to heterogeneous consumer displays over heterogeneous networks. Optimising 3D video communication must consider the entire pipeline, from the video source through transmission to the end display. Multi-view offers the most compelling solution for 3D video, providing motion parallax and freedom from headgear. Optimising multi-view video for delivery and display could increase the demand for true 3D in the consumer market.
This thesis focuses on end-to-end quality optimisation in 3D video communication/transmission, offering solutions for optimisation at the compression, transmission, and decoder levels.
Brunel University - Isambard Research Scholarship
Cloud enabled 3D tablet design for medical applications
The prime objective of any technological innovation is to improve people's lives. Technological innovation in the field of medical devices directly touches the lives of millions of people; not just patients but doctors and other technicians as well. Serving these caregivers is serving humanity. The growth of mobile devices and cloud computing has changed the way we live and work. We try to bring the benefits of these technological innovations to the medical field via equipment that can improve the working efficiency and capabilities of medical professionals and technicians. The improvements in the camera and image processing capabilities of mobile devices, coupled with their improved processing power and the virtually unlimited processing and storage offered by cloud computing infrastructure, open up a window of opportunity to use them in specialized fields like microsurgery. To enable microsurgery, surgeons use optical microscopes to zoom into the working area for better visibility and control. However, these devices suffer from various drawbacks and are not comfortable to use. We build a tablet with a large stereoscopic screen that allows glasses-free 3D display, enabled by cameras capable of capturing 3D video and enhanced by an image processing pipeline, which greatly improves the visibility and viewing comfort of the surgeon. Moreover, using the capabilities of cloud computing, these surgeries can be recorded and streamed live for education, training, and consultation. An expert sitting in a geographically remote location can guide the surgeon performing the surgery. All vital parameters of the patient undergoing surgery can be shown as an overlay on the tablet screen so that the surgeon is alerted if any parameter goes beyond its limit. Developing this kind of complex device involves engineering skills in hardware and software and huge investments of time, resources, and money.
To accelerate the development, we make use of open source hardware and software and demonstrate how these open source resources can speed up the process.
TALON - The Telescope Alert Operation Network System: Intelligent Linking of Distributed Autonomous Robotic Telescopes
The internet has brought about great change in the astronomical community,
but this interconnectivity is just starting to be exploited for use in
instrumentation. Utilizing the internet for communicating between distributed
astronomical systems is still in its infancy, but it already shows great
potential. Here we present an example of a distributed network of telescopes
that performs more efficiently in synchronous operation than as individual
instruments. RAPid Telescopes for Optical Response (RAPTOR) is a system of
telescopes at LANL that has intelligent intercommunication, combined with
wide-field optics, temporal monitoring software, and deep-field follow-up
capability all working in closed-loop real-time operation. The Telescope ALert
Operations Network (TALON) is a network server that allows intercommunication
of alert triggers from external and internal resources and controls the
distribution of these to each of the telescopes on the network. TALON is
designed to grow, allowing any number of telescopes to be linked together and
communicate. Coupled with an intelligent alert client at each telescope, it can
analyze and respond to each distributed TALON alert based on the telescope's
needs and schedule.
Comment: Presentation at SPIE 2004, Glasgow, Scotland (UK)
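The abstract above describes TALON as a server that fans alert triggers out to registered telescope clients, each of which decides for itself whether to respond based on its own needs and schedule. As a rough illustration of that publish/subscribe pattern only (this is not the actual TALON protocol or API; all class, field, and method names here are hypothetical), a minimal in-process sketch:

```python
# Minimal sketch of TALON-style alert distribution (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Alert:
    source: str      # e.g. an external trigger ("GCN") or internal ("RAPTOR-A")
    ra: float        # right ascension, degrees
    dec: float       # declination, degrees
    priority: int    # higher value = more urgent

@dataclass
class TelescopeClient:
    """Each telescope runs a client that decides whether to act on an alert."""
    name: str
    min_priority: int
    responded: List[Alert] = field(default_factory=list)

    def handle(self, alert: Alert) -> bool:
        # A real client would also check visibility, weather, and its schedule.
        if alert.priority >= self.min_priority:
            self.responded.append(alert)
            return True
        return False

class AlertServer:
    """Central server that forwards every alert to all registered telescopes."""
    def __init__(self) -> None:
        self.clients: List[TelescopeClient] = []

    def register(self, client: TelescopeClient) -> None:
        self.clients.append(client)

    def distribute(self, alert: Alert) -> List[str]:
        # Fan the alert out; return the names of the telescopes that responded.
        return [c.name for c in self.clients if c.handle(alert)]

server = AlertServer()
server.register(TelescopeClient("wide-field", min_priority=1))
server.register(TelescopeClient("deep-follow-up", min_priority=5))

# A high-priority alert reaches both telescopes; a low-priority one
# is picked up only by the wide-field instrument.
responders = server.distribute(Alert("GCN", ra=83.6, dec=22.0, priority=7))
# responders == ["wide-field", "deep-follow-up"]
```

The key design point the abstract emphasizes is that the server only distributes alerts; the decision logic lives in the per-telescope client, which is what lets the network grow to any number of instruments.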