7,654 research outputs found

    The UTMOST: A hybrid digital signal processor transforms the MOST

    The Molonglo Observatory Synthesis Telescope (MOST) is an 18,000 square meter radio telescope situated some 40 km from the city of Canberra, Australia. Its operating band (820-850 MHz) is now partly allocated to mobile phone communications, making radio astronomy challenging. We describe how the deployment of new digital receivers (RX boxes), Field Programmable Gate Array (FPGA) based filterbanks and server-class computers equipped with 43 GPUs (Graphics Processing Units) has transformed MOST into a versatile new instrument (the UTMOST) for studying the dynamic radio sky on millisecond timescales, ideal for work on pulsars and Fast Radio Bursts (FRBs). The filterbanks, servers and their high-speed, low-latency network form part of a hybrid solution to the observatory's signal processing requirements. The emphasis on software and commodity off-the-shelf hardware has enabled rapid deployment through the re-use of proven 'software backends' for its signal processing. The new receivers have ten times the bandwidth of the original MOST and double the sampling of the line feed, which doubles the field of view. The UTMOST can simultaneously excise interference, make maps, coherently dedisperse pulsars, and perform real-time searches of coherent fan beams for dispersed single pulses. Although system performance is still sub-optimal, a pulsar timing and FRB search programme has commenced and the first UTMOST maps have been made. The telescope operates as a robotic facility, deciding how to efficiently target pulsars and how long to stay on source, via feedback from real-time pulsar folding. The regular timing of over 300 pulsars has resulted in the discovery of 7 pulsar glitches and 3 FRBs. The UTMOST demonstrates that if sufficient signal processing can be applied to the voltage streams it is possible to perform innovative radio science in hostile radio frequency environments.
    Comment: 12 pages, 6 figures
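
    The abstract mentions real-time searching of coherent fan beams for dispersed single pulses. The central operation in any such search is dedispersion of a channelised filterbank: undoing the frequency-dependent cold-plasma delay before collapsing over frequency. Below is a minimal NumPy sketch of generic incoherent dedispersion; it is not UTMOST's GPU pipeline, and the channel count, sampling interval and trial dispersion measure are illustrative assumptions.

```python
import numpy as np

# Cold-plasma dispersion constant: delay [s] = K_DM * DM * f[MHz]^-2
K_DM = 4.148808e3  # MHz^2 s per (pc cm^-3)

def dedisperse(filterbank, freqs_mhz, dm, tsamp_s):
    """Undo the dispersive delay per channel, then collapse over frequency.

    filterbank : (nchan, nsamp) array of detected power
    freqs_mhz  : centre frequency of each channel, in MHz
    dm         : trial dispersion measure, in pc cm^-3
    tsamp_s    : sampling interval, in seconds
    """
    f_ref = freqs_mhz.max()  # reference delays to the top of the band
    delays_s = K_DM * dm * (freqs_mhz ** -2 - f_ref ** -2)
    shifts = np.round(delays_s / tsamp_s).astype(int)
    out = np.empty_like(filterbank)
    for i, s in enumerate(shifts):
        # Negative roll advances delayed channels; the wrapped edge
        # samples would be discarded in a real search.
        out[i] = np.roll(filterbank[i], -s)
    return out.sum(axis=0)  # time series to threshold for single pulses

# Hypothetical 820-850 MHz band split into 320 channels
freqs = np.linspace(850.0, 820.0, 320)
data = np.random.randn(320, 4096)
timeseries = dedisperse(data, freqs, dm=57.0, tsamp_s=655.36e-6)
```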

    Short-time Fourier transform laser Doppler holography

    We report a demonstration of laser Doppler holography at a sustained acquisition rate of 250 Hz on a 1 Megapixel complementary metal-oxide-semiconductor (CMOS) sensor array and image display at a 10 Hz frame rate. The holograms are optically acquired in an off-axis configuration, with a frequency-shifted reference beam. Wide-field imaging of optical fluctuations in a 250 Hz frequency band is achieved by transforming the time-domain samples to the frequency domain via short-time temporal Fourier transformation. The measurement band can be positioned freely within the low radio-frequency spectrum by tuning the frequency of the reference beam in real time. Video-rate image rendering is achieved by streamlined image processing on commodity computer graphics hardware. This experimental scheme is validated by a non-contact vibrometry experiment.
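
    The band-selective wide-field imaging described above, a per-pixel short-time Fourier transform over a stack of acquired frames followed by integration of the power in the chosen band, can be sketched compactly in NumPy. The function below is a generic illustration, not the authors' streamlined GPU implementation; the window length, hop size and band edges are assumptions.

```python
import numpy as np

def stft_band_power(frames, fps, f_lo, f_hi, win=32, hop=16):
    """Per-pixel short-time Fourier transform of a frame stack,
    returning the power integrated over [f_lo, f_hi] per window.

    frames : (nt, ny, nx) real or complex array of acquired frames
    fps    : acquisition rate in Hz
    """
    window = np.hanning(win)[:, None, None]          # taper along time
    freqs = np.fft.fftfreq(win, d=1.0 / fps)
    band = (np.abs(freqs) >= f_lo) & (np.abs(freqs) <= f_hi)
    maps = []
    for start in range(0, frames.shape[0] - win + 1, hop):
        seg = frames[start:start + win] * window
        spec = np.fft.fft(seg, axis=0)               # temporal FFT per pixel
        maps.append((np.abs(spec[band]) ** 2).sum(axis=0))
    return np.stack(maps)                            # (n_windows, ny, nx)

# Example: 250 frames/s, image the 10-50 Hz fluctuation band
movie = np.random.randn(256, 64, 64)
power_maps = stft_band_power(movie, fps=250.0, f_lo=10.0, f_hi=50.0)
```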

    Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis

    We introduce a data-driven approach to complete partial 3D shapes through a combination of volumetric deep neural networks and 3D shape synthesis. From a partially-scanned input shape, our method first infers a low-resolution -- but complete -- output. To this end, we introduce a 3D-Encoder-Predictor Network (3D-EPN) which is composed of 3D convolutional layers. The network is trained to predict and fill in missing data, and operates on an implicit surface representation that encodes both known and unknown space. This allows us to predict global structure in unknown areas with high accuracy. We then correlate these intermediary results with 3D geometry from a shape database at test time. In a final pass, we propose a patch-based 3D shape synthesis method that imposes the 3D geometry from these retrieved shapes as constraints on the coarsely-completed mesh. This synthesis process enables us to reconstruct fine-scale detail and generate high-resolution output while respecting the global mesh structure obtained by the 3D-EPN. Although our 3D-EPN outperforms state-of-the-art completion methods, the main contribution in our work lies in the combination of a data-driven shape predictor and analytic 3D shape synthesis. In our results, we show extensive evaluations on a newly-introduced shape completion benchmark for both real-world and synthetic data.
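
    As a rough illustration of the encoder-predictor idea, 3D convolutions that compress a partial volumetric grid followed by transposed 3D convolutions that predict a completed grid, here is a toy PyTorch sketch. The layer counts, channel widths and the two-channel known/unknown input encoding are illustrative assumptions, not the 3D-EPN architecture from the paper.

```python
import torch
import torch.nn as nn

class TinyEPN(nn.Module):
    """Toy 3D encoder-predictor: partial grid in, completed grid out."""
    def __init__(self, ch=8):
        super().__init__()
        self.encoder = nn.Sequential(
            # input: 2 channels (distance values + known/unknown mask)
            nn.Conv3d(2, ch, kernel_size=4, stride=2, padding=1),        # 32 -> 16
            nn.ReLU(inplace=True),
            nn.Conv3d(ch, 2 * ch, kernel_size=4, stride=2, padding=1),   # 16 -> 8
            nn.ReLU(inplace=True),
        )
        self.predictor = nn.Sequential(
            nn.ConvTranspose3d(2 * ch, ch, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(ch, 1, kernel_size=4, stride=2, padding=1),       # 16 -> 32
        )

    def forward(self, x):
        return self.predictor(self.encoder(x))

model = TinyEPN()
partial = torch.randn(1, 2, 32, 32, 32)  # batch of one 32^3 partial scan
completed = model(partial)               # (1, 1, 32, 32, 32) predicted volume
```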

    Future Directions in Astronomy Visualisation

    Despite the large budgets spent annually on astronomical research equipment such as telescopes, instruments and supercomputers, the general trend is to analyse and view the resulting datasets using small, two-dimensional displays. We report here on alternative advanced image displays, with an emphasis on displays that we have constructed, including stereoscopic projection, multiple-projector tiled displays and a digital dome. These displays can provide astronomers with new ways of exploring the terabyte and petabyte datasets that are now regularly being produced from all-sky surveys, high-resolution computer simulations, and Virtual Observatory projects. We also present a summary of the Advanced Image Displays for Astronomy (AIDA) survey, which we conducted from March-May 2005, in order to raise some issues pertinent to the current and future level of use of advanced image displays.
    Comment: 13 pages, 2 figures, accepted for publication in PAS

    Musica ex machina: a history of video game music

    The history of video game music is a subject area that has received little attention from musicologists, and yet the form presents fascinating case studies both of musical minimalism and of the role of technology in influencing and shaping musical form and aesthetics. This presentation shows how video game music evolved from simple tones, co-opted from the sync circuits of early hardware, into a sophisticated form of adaptive expression.

    ScanNet++: A High-Fidelity Dataset of 3D Indoor Scenes

    We present ScanNet++, a large-scale dataset that couples together capture of high-quality and commodity-level geometry and color of indoor scenes. Each scene is captured with a high-end laser scanner at sub-millimeter resolution, along with registered 33-megapixel images from a DSLR camera and RGB-D streams from an iPhone. Scene reconstructions are further annotated with an open vocabulary of semantics, with label-ambiguous scenarios explicitly annotated for comprehensive semantic understanding. ScanNet++ enables a new real-world benchmark for novel view synthesis, both from high-quality RGB capture and, importantly, also from commodity-level images, in addition to a new benchmark for 3D semantic scene understanding that comprehensively encapsulates diverse and ambiguous semantic labeling scenarios. Currently, ScanNet++ contains 460 scenes, 280,000 captured DSLR images, and over 3.7M iPhone RGB-D frames.
    Comment: ICCV 2023. Video: https://youtu.be/E6P9e2r6M8I , Project page: https://cy94.github.io/scannetpp