23 research outputs found

    A simple footskate removal method for virtual reality applications

    Footskate is a common problem in interactive applications dealing with virtual character animations. It has proven difficult to fix without complex numerical methods, which require expert skills to implement, along with a fair amount of user interaction to correct a motion. On the other hand, deformable bodies are increasingly used in virtual reality (VR) applications, allowing users to customize their avatars as they wish. This introduces the need to adapt motions without any help from a designer, as an average user seldom has the skills required to drive the existing algorithms towards the right solution. In this paper, we present a simple method to remove footskate artifacts in VR applications. Unlike previous algorithms, our approach does not rely on the skeletal animation to perform the correction but rather on the skin. This ensures that the final foot planting really matches the virtual character's motion. The changes are applied to the root joint of the skeleton only, so that the resulting animation is as close as possible to the original one. Finally, thanks to the simplicity of its formulation, the method can be quickly and easily added to existing frameworks.
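    The core idea described above — keep the skinned foot contact fixed by translating only the root joint — can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the data layout (per-frame root positions and skin-derived foot contact points as 3-tuples) and the function name are assumptions.

```python
# Sketch of root-only footskate removal: over a detected foot plant,
# measure how far the skinned contact point drifts from its position at
# plant onset, and shift the root joint by the opposite amount.

def remove_footskate(root_positions, foot_contacts, plant_start, plant_end):
    """Return corrected root positions so the skin contact point stays
    fixed over frames [plant_start, plant_end] (inclusive)."""
    anchor = foot_contacts[plant_start]  # skin contact at plant onset
    corrected = list(root_positions)
    for f in range(plant_start, plant_end + 1):
        # drift of the skinned contact point relative to the anchor
        drift = tuple(c - a for c, a in zip(foot_contacts[f], anchor))
        # cancel the drift by moving the root joint only
        corrected[f] = tuple(r - d for r, d in zip(root_positions[f], drift))
    return corrected
```

    Because only the root translation changes, every other joint keeps its original animation, matching the paper's stated goal of staying as close as possible to the source motion.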

    Modern middleware for the data acquisition of the Cherenkov Telescope Array

    The data acquisition system (DAQ) of the future Cherenkov Telescope Array (CTA) must be efficient, modular and robust to be able to cope with the very large data rate of up to 550 Gbps coming from many telescopes with different characteristics. The use of modern middleware, namely ZeroMQ and Protocol Buffers, can help to achieve these goals while keeping the development effort to a reasonable level. Protocol Buffers are used as an on-line data format, while ZeroMQ is employed to communicate between processes. The DAQ will be controlled and monitored by the Alma Common Software (ACS). Protocol Buffers from Google are a way to define high-level data structures through an interface description language (IDL) and a meta-compiler. ZeroMQ is a middleware that augments the capabilities of TCP/IP sockets. It does not implement very high-level features like those found in CORBA, for example, but makes the use of sockets easier, more robust and almost as effective as raw TCP. The use of these two middlewares enabled us to rapidly develop a robust prototype of the DAQ, including data persistence to compressed FITS files.
    Comment: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands. All CTA contributions at arXiv:1508.0589
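    To make the Protocol Buffers role concrete: the IDL lets each telescope type share one wire format that the meta-compiler turns into native classes. The message below is a hypothetical sketch of what a camera-event structure could look like, not the actual CTA schema; all field names are assumptions.

```protobuf
// Illustrative proto3 definition of an on-line camera event.
// The generated classes would be serialized and sent over ZeroMQ sockets.
syntax = "proto3";

message CameraEvent {
  uint64 event_id     = 1;  // unique event counter
  uint64 timestamp_ns = 2;  // trigger time, nanoseconds
  uint32 telescope_id = 3;  // which telescope produced the event
  repeated uint32 waveforms = 4;  // flattened per-pixel samples
}
```

    Keeping the payload definition in one `.proto` file is what lets telescopes with different characteristics evolve their readout independently: proto3 ignores unknown fields, so producers and consumers can be upgraded separately.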

    The On-Site Analysis of the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) observatory will be one of the largest ground-based very high-energy gamma-ray observatories. The On-Site Analysis will be the first CTA scientific analysis of data acquired from the array of telescopes, at both the northern and southern sites. The On-Site Analysis will have two pipelines: the Level-A pipeline (also known as Real-Time Analysis, RTA) and the Level-B one. The RTA performs data quality monitoring and must be able to issue automated alerts on variable and transient astrophysical sources within 30 seconds from the last acquired Cherenkov event that contributes to the alert, with a sensitivity not worse than that achieved by the final pipeline by more than a factor of 3. The Level-B Analysis has a better sensitivity (not worse than the final one by more than a factor of 2), and its results should be available within 10 hours from the acquisition of the data; for this reason this analysis could be performed at the end of an observation or the next morning. The latency (in particular for the RTA) and sensitivity requirements are challenging because of the large data rate, a few GByte/s. The remote connection to the CTA candidate site, with a rather limited network bandwidth, makes the issue of the exported data size extremely critical and prevents any kind of real-time processing of the data outside the site of the telescopes. For these reasons the analysis will be performed on-site, with infrastructures co-located with the telescopes, limited electrical power availability and a reduced possibility of human intervention. This means, for example, that the on-site hardware infrastructure should have low power consumption. A substantial effort towards the optimization of high-throughput computing services is envisioned, to provide hardware and software solutions with high throughput and low power consumption at low cost.
    Comment: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands. All CTA contributions at arXiv:1508.0589
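    The bandwidth argument above can be checked with back-of-the-envelope arithmetic. The 3 GB/s rate is taken from the abstract's "a few GByte/s"; the 1 Gbit/s link capacity is an assumed figure for a remote candidate site, not a published CTA number.

```python
# Why off-site real-time processing is ruled out: compare data produced
# per hour of observation against what a remote link can export.

def export_backlog_hours(data_rate_gbytes_s, link_gbits_s, obs_hours=1.0):
    """Hours needed to export obs_hours of observation data
    over a link of the given capacity."""
    produced_gbits = data_rate_gbytes_s * 8 * 3600 * obs_hours  # GB/s -> Gbit
    return produced_gbits / (link_gbits_s * 3600)

# At 3 GB/s over a 1 Gbit/s link, each hour of observing needs
# 3 * 8 = 24 hours of transfer time -- hence on-site analysis.
```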

    A prototype for the real-time analysis of the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) observatory will be one of the largest ground-based very-high-energy (VHE) γ-ray observatories. CTA will achieve a factor of 10 improvement in sensitivity, from some tens of GeV to beyond 100 TeV, with respect to existing telescopes. The CTA observatory will be capable of issuing alerts on variable and transient sources to maximize the scientific return. To capture these phenomena during their evolution and for effective communication to the astrophysical community, speed is crucial. This requires a system with a reliable automated trigger that can issue alerts immediately upon detection of γ-ray flares. This will be accomplished by means of a Real-Time Analysis (RTA) pipeline, a key system of the CTA observatory. The latency and sensitivity requirements of the alarm system impose a challenge because of the anticipated large data rate, between 0.5 and 8 GB/s. As a consequence, substantial efforts toward the optimization of high-throughput computing services are envisioned. For these reasons our working group has started the development of a prototype of the Real-Time Analysis pipeline. The main goals of this prototype are to test: (i) a set of frameworks and design patterns useful for the inter-process communication between software processes running in memory; (ii) the sustainability of the foreseen CTA data rate in terms of data throughput with different hardware (e.g. accelerators) and software configurations; (iii) the reuse of non-real-time algorithms, or how much we need to simplify algorithms to be compliant with CTA requirements; (iv) interface issues between the different CTA systems. In this work we focus on goals (i) and (ii).
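    Goal (ii), measuring whether a given software configuration sustains the required data rate, boils down to timing how fast bytes move through one inter-process channel. A minimal sketch using the standard library's `multiprocessing` is shown below; the real prototype would use the frameworks under test (e.g. ZeroMQ) and camera-sized packets, so the 1 MB dummy payload and message count here are assumptions.

```python
import multiprocessing as mp
import time

# Measure sustainable throughput of one inter-process channel by pushing
# fixed-size payloads from a producer process through a Pipe.

def _producer(conn, payload, n_messages):
    for _ in range(n_messages):
        conn.send_bytes(payload)
    conn.close()

def measure_throughput(msg_size=1 << 20, n_messages=64):
    """Return achieved throughput in MB/s through a multiprocessing Pipe."""
    recv_end, send_end = mp.Pipe(duplex=False)
    payload = bytes(msg_size)
    start = time.perf_counter()
    proc = mp.Process(target=_producer, args=(send_end, payload, n_messages))
    proc.start()
    for _ in range(n_messages):
        recv_end.recv_bytes()
    proc.join()
    elapsed = time.perf_counter() - start
    return (msg_size * n_messages) / (1 << 20) / elapsed

if __name__ == "__main__":
    print(f"pipe throughput: {measure_throughput():.0f} MB/s")
```

    Swapping the transport (Pipe, ZeroMQ socket, shared memory) behind the same timing harness is what lets different hardware and software configurations be compared against the 0.5–8 GB/s target.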

    Status and plans for the Array Control and Data Acquisition System of the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) is the next-generation atmospheric Cherenkov gamma-ray observatory. CTA will consist of two installations, one in the northern and the other in the southern hemisphere, containing tens of telescopes of different sizes. The CTA performance requirements and the inherent complexity associated with the operation, control and monitoring of such a large distributed multi-telescope array lead to new challenges in the field of gamma-ray astronomy. The ACTL (array control and data acquisition) system will consist of the hardware and software necessary to control and monitor the CTA arrays, as well as to time-stamp, read out, filter and store, at aggregated rates of a few GB/s, the scientific data. The ACTL system must be flexible enough to permit the simultaneous automatic operation of multiple sub-arrays of telescopes with minimal personnel effort on site. One of the challenges of the system is to provide a reliable integration of the control of a large and heterogeneous set of devices. Moreover, the system is required to be ready to adapt the observation schedule, on timescales of a few tens of seconds, to account for changing environmental conditions or to prioritize incoming scientific alerts from time-critical transient phenomena such as gamma-ray bursts. This contribution provides a summary of the main design choices and plans for building the ACTL system.
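    The schedule-adaptation requirement can be pictured as a priority queue that an incoming alert preempts. The sketch below is purely illustrative (not the ACTL design); class and method names are assumptions.

```python
import heapq

# Illustrative scheduler: pending observation blocks sit in a priority
# heap, and a science alert (e.g. a gamma-ray burst) pushes a target with
# top priority so it is picked up within the next scheduling cycle.

class Scheduler:
    def __init__(self):
        self._heap = []   # (priority, seq, target); lower priority = sooner
        self._seq = 0     # tie-breaker preserving insertion order

    def submit(self, target, priority):
        heapq.heappush(self._heap, (priority, self._seq, target))
        self._seq += 1

    def on_alert(self, target):
        # time-critical transient: jump the queue with highest priority
        self.submit(target, priority=0)

    def next_block(self):
        return heapq.heappop(self._heap)[2]

sched = Scheduler()
sched.submit("survey-field-A", priority=5)
sched.submit("survey-field-B", priority=5)
sched.on_alert("GRB-candidate")
# the GRB candidate is now the first block returned by next_block()
```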

    Motion Adaptation Based on Character Shape

    Motion is an important part of virtual environments. Indeed, the movements of virtual humans must be realistic to trigger a sense of immersion, and producing such animations is not an easy task: it requires hours of manual work by skilled animators. To overcome this issue, motion-captured clips tend to replace traditional hand animation because they offer a very high level of realism with minimal manual work. These clips, however, have one major drawback, namely that they can be applied only to a body with a shape and size similar to those of the person who was captured…


    GAMAS - A Generic And Multipurpose Archive System

    There exist many distributed storage systems that are mature and reliable. However, these systems are not fully adapted to the needs of an open astroparticle community, mainly because they do not follow the Open Archival Information System (OAIS) standard. Moreover, they require adaptations by each of the data centres (DCs) at which they are installed. We introduce GAMAS, a novel distributed OAIS that tackles the problem of the different technologies used at the various DCs. Instead of imposing the requirements of GAMAS on the DCs, we allow them to simply provide a Python interface (or plugin) to their storage. This allows GAMAS to be easily deployed on top of different architectures and technologies, and to transparently allow users to retrieve data for processing, wherever they are. A metadata browsing system is incorporated within GAMAS and allows DCs as well as anonymous users to retrieve datasets based on high-level queries. GAMAS' central database can be reconstructed from the archived data, which makes the system robust against corruption. We will expose the current status, architecture and functionalities of GAMAS and also detail the current test case, which stores about 0.6 PB of data from the FACT experiment at separate data centres.
    ISSN: 1824-803
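    The plugin idea described above — each data centre wraps its local storage behind one small Python interface that GAMAS talks to — can be sketched as an abstract base class. Method names here are assumptions for illustration, not the actual GAMAS API.

```python
from abc import ABC, abstractmethod

# Hypothetical storage-plugin contract: GAMAS depends only on this
# interface, so each data centre can back it with whatever technology
# it already runs (tape, object store, distributed filesystem, ...).

class StoragePlugin(ABC):
    @abstractmethod
    def put(self, dataset_id: str, data: bytes) -> None:
        """Archive a dataset under the given identifier."""

    @abstractmethod
    def get(self, dataset_id: str) -> bytes:
        """Retrieve a previously archived dataset."""

class InMemoryPlugin(StoragePlugin):
    """Trivial backend standing in for a data centre's real storage."""

    def __init__(self):
        self._store = {}

    def put(self, dataset_id, data):
        self._store[dataset_id] = data

    def get(self, dataset_id):
        return self._store[dataset_id]
```

    Because the archive core never touches a concrete storage technology directly, deploying GAMAS at a new data centre reduces to implementing these two methods.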