
    The camera of the fifth H.E.S.S. telescope. Part I: System description

    In July 2012, as the four ground-based gamma-ray telescopes of the H.E.S.S. (High Energy Stereoscopic System) array reached their tenth year of operation in the Khomas Highlands, Namibia, a fifth telescope took its first data as part of the system. This new Cherenkov detector, comprising a 614.5 m^2 reflector with a highly pixelized camera in its focal plane, improves the sensitivity of the current array by a factor of two and extends its energy domain down to a few tens of GeV. The present part I of the paper gives a detailed description of the fifth H.E.S.S. telescope's camera, presenting the details of both the hardware and the software and emphasizing the main improvements as compared to previous H.E.S.S. camera technology.
    Comment: 16 pages, 13 figures, accepted for publication in NIM

    Visual Distortions in 360-degree Videos

    Omnidirectional (or 360°) images and videos are emergent signals being used in many areas, such as robotics and virtual/augmented reality. In particular, for virtual reality applications, they allow an immersive experience in which the user can interactively navigate through a scene with three degrees of freedom, wearing a head-mounted display. Current approaches for capturing, processing, delivering, and displaying 360° content, however, present many open technical challenges and introduce several types of distortions in the visual signal. Some of these distortions are specific to the nature of 360° images and often differ from those encountered in classical visual communication frameworks. This paper provides a first comprehensive review of the most common visual distortions that alter 360° signals as they pass through the different processing elements of the visual communication pipeline. While their impact on viewers' visual perception and the immersive experience at large is still unknown (and thus an open research topic), this review serves the purpose of proposing a taxonomy of the visual distortions that can be encountered in 360° signals. Their underlying causes in the end-to-end 360° content distribution pipeline are identified. This taxonomy is essential as a basis for comparing different processing techniques, such as visual enhancement, encoding, and streaming strategies, and for allowing the effective design of new algorithms and applications. It is also a useful resource for the design of psycho-visual studies aiming to characterize human perception of 360° content in interactive and immersive applications.

    Cloud-enabled 3D tablet design for medical applications

    The prime objective of any technological innovation is to improve people's lives. Technological innovation in the field of medical devices directly touches the lives of millions of people: not just patients but doctors and other technicians as well. Serving these caregivers is serving humanity. The growth of mobile devices and cloud computing has changed the way we live and work. We try to bring the benefits of these technological innovations to the medical field via equipment that can improve the working efficiency and capabilities of medical professionals and technicians. The improvements in the camera and image-processing capabilities of mobile devices, coupled with their increased processing power and the virtually unlimited processing and storage offered by cloud computing infrastructure, open up a window of opportunity to use them in specialized fields such as microsurgery. To perform microsurgery, surgeons use an optical microscope to zoom into the working area for better visibility and control. However, these devices suffer from various drawbacks and are not comfortable to use. We build a tablet with a large stereoscopic screen that allows glasses-free 3D display, enabled by cameras capable of capturing 3D video and enhanced by an image-processing pipeline, which greatly improves the visibility and viewing comfort of the surgeon. Moreover, using the capabilities of cloud computing, these surgeries can be recorded and streamed live for education, training, and consultation. An expert sitting in a geographically remote location can guide the surgeon performing the surgery. All vital parameters of the patient undergoing surgery can be shown as an overlay on the tablet screen, so that the surgeon is alerted if any parameter goes beyond its limits. Developing this kind of complex device requires engineering skills in hardware and software and a huge investment of time, resources, and money. To accelerate the development, we make use of open-source hardware and software and demonstrate how these open-source resources speed up the process.
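    The vital-parameter overlay described above can be sketched as a simple range check applied to each monitored value before it is drawn on screen. This is a minimal illustrative sketch, not the authors' implementation; the parameter names and safe ranges below are hypothetical examples.

    ```python
    # Hypothetical sketch of the vital-parameter alerting described in the
    # abstract: each value streamed from the patient monitor is checked against
    # a safe range before being formatted as an overlay line, so out-of-range
    # values can be flagged to the surgeon. Names and limits are illustrative.

    SAFE_RANGES = {
        "heart_rate_bpm": (50, 120),
        "spo2_percent": (92, 100),
        "systolic_bp_mmhg": (90, 160),
    }

    def overlay_lines(vitals: dict) -> list:
        """Format each vital as one overlay line, flagging out-of-range values."""
        lines = []
        for name, value in vitals.items():
            lo, hi = SAFE_RANGES[name]
            flag = "" if lo <= value <= hi else "  ** ALERT **"
            lines.append(f"{name}: {value}{flag}")
        return lines
    ```

    In a real device these strings would be rendered on top of the live 3D video stream rather than printed, but the thresholding logic would be the same.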

    TALON - The Telescope Alert Operation Network System: Intelligent Linking of Distributed Autonomous Robotic Telescopes

    The internet has brought about great change in the astronomical community, but this interconnectivity is only starting to be exploited for use in instrumentation. Utilizing the internet for communicating between distributed astronomical systems is still in its infancy, but it already shows great potential. Here we present an example of a distributed network of telescopes that performs more efficiently in synchronous operation than as individual instruments. RAPid Telescopes for Optical Response (RAPTOR) is a system of telescopes at LANL that has intelligent intercommunication, combined with wide-field optics, temporal monitoring software, and deep-field follow-up capability, all working in closed-loop real-time operation. The Telescope ALert Operations Network (TALON) is a network server that allows intercommunication of alert triggers from external and internal resources and controls their distribution to each of the telescopes on the network. TALON is designed to grow, allowing any number of telescopes to be linked together and communicate. Coupled with an intelligent alert client at each telescope, it can analyze and respond to each distributed TALON alert based on the telescope's needs and schedule.
    Comment: Presentation at SPIE 2004, Glasgow, Scotland (UK)