
    Robotic Telescopes and Networks: New Tools for Education and Science

    Nowadays many telescopes around the world are automated, and several networks of robotic telescopes are active or planned, as shown by the lists we draw up. Such equipment could be used for the training of students and for science in the Universities of Developing Countries and of New Astronomical Countries, by sending them observational data via the Internet or through remotely controlled telescopes. It seems timely to open a discussion with the UN and ESA organizations, and also with the IAU, on how to implement links between robotic telescopes and such Universities applying for collaboration. Many scientific fields could thus become accessible to them, for example stellar variability, near-earth object follow-up, gamma-ray burst counterpart tracking, and so on.
    Comment: 18 pages, review presented at the Eighth UN/ESA Workshop on Basic Space Science: Scientific Exploration from Space, held in Mafraq (Jordan), 13-17 March 1999; to be published in Astrophysics and Space Science (Kluwer)

    Real-time image streaming over a low-bandwidth wireless camera network

    In this paper we describe the recent development of a low-bandwidth wireless camera sensor network. We propose a simple, yet effective, network architecture which allows multiple cameras to connect to the network and synchronize their communication schedules. Image compression of greater than 90% is performed at each node on a local DSP coprocessor, so that nodes use one eighth of the energy required to stream uncompressed images. We briefly introduce the Fleck wireless node and the DSP/camera sensor, and then outline the network architecture and compression algorithm. The system is able to stream color QVGA images over the network to a base station at up to 2 frames per second. © 2007 IEEE
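    As a rough illustration of the figures quoted in this abstract, the sketch below estimates the streamed data rate for color QVGA frames at 2 fps with >90% compression; the 16-bit-per-pixel color depth is an assumption made for the example, not a detail taken from the paper.

        # Rough bandwidth estimate for the numbers quoted in the abstract.
        # Assumption (not from the paper): 16-bit color (e.g. RGB565) per pixel.
        width, height = 320, 240          # QVGA resolution
        bytes_per_pixel = 2               # assumed 16-bit color
        fps = 2                           # reported streaming rate
        compression = 0.90                # ">90%" size reduction at the node

        raw_frame_bytes = width * height * bytes_per_pixel
        compressed_frame_bytes = raw_frame_bytes * (1 - compression)
        stream_kbit_s = compressed_frame_bytes * fps * 8 / 1000

        print(f"raw frame:        {raw_frame_bytes / 1024:.0f} KiB")
        print(f"compressed frame: {compressed_frame_bytes / 1024:.1f} KiB")
        print(f"stream rate:      ~{stream_kbit_s:.0f} kbit/s at {fps} fps")
        # -> roughly 246 kbit/s, within reach of a low-bandwidth sensor-network radio

    Under these assumptions the compressed stream stays in the low hundreds of kbit/s, which is consistent with the low-bandwidth radios typical of camera sensor networks.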

    A Software Retina for Egocentric & Robotic Vision Applications on Mobile Platforms

    We present work in progress to develop a low-cost, highly integrated camera sensor for egocentric and robotic vision. Our underlying approach is to address current limitations of image analysis by Deep Convolutional Neural Networks, such as the requirement to learn simple scale and rotation transformations, which contributes to the large computational demands of training and to the opaqueness of the learned structure, by applying structural constraints based on known properties of the human visual system. We propose to apply a version of the retino-cortical transform to reduce the dimensionality of the input image space by a factor of roughly 100, and to map the image spatially so that rotations and scale changes become spatial shifts. By reducing the input image size, and therefore the learning requirements, accordingly, we aim to develop a compact and lightweight egocentric and robot vision sensor using a smartphone as the target platform.
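    To illustrate how a retino-cortical (log-polar-style) mapping turns rotations and scale changes into translations, here is a minimal NumPy sketch; the grid sizes, function name, and uniform ring spacing are illustrative assumptions, not the authors' implementation, and the reduction factor shown is smaller than the ~100x quoted above.

        import numpy as np

        def log_polar_sample(img, n_rings=64, n_wedges=128):
            """Resample a square grayscale image onto a log-polar grid.

            Scaling the input about its centre shifts the output along the
            ring (radial) axis; rotating it shifts the output along the wedge
            (angular) axis -- the property the retino-cortical transform exploits.
            """
            h, w = img.shape
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            r_max = min(cy, cx)
            # Logarithmically spaced radii, evenly spaced angles.
            radii = np.exp(np.linspace(0.0, np.log(r_max), n_rings))
            angles = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
            rr, aa = np.meshgrid(radii, angles, indexing="ij")
            ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
            xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
            return img[ys, xs]  # shape (n_rings, n_wedges)

        # A 256x256 input (65,536 pixels) becomes 64x128 = 8,192 samples here,
        # i.e. about an 8x reduction; coarser peripheral sampling pushes this
        # towards the ~100x reduction described in the abstract.
        img = np.random.rand(256, 256)
        cortical = log_polar_sample(img)
        print(cortical.shape)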

    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully-autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI) based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence capabilities. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones with a size of a few cm². In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end, DNN-based visual navigation. To achieve this goal we developed a complete methodology for parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source Crazyflie 2.0 nano-quadrotor. As part of our general methodology we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on-board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft.
    Comment: 15 pages, 13 figures, 5 tables, 2 listings, accepted for publication in the IEEE Internet of Things Journal (IEEE IOTJ)
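    To put the quoted operating point in perspective, the following back-of-the-envelope sketch converts the 64 mW average power and 6 fps constraint into processing energy per frame; only those two figures come from the abstract, and the peak-corner power is not stated, so it is left out.

        # Energy per frame at the operating point quoted in the abstract.
        avg_power_w = 0.064      # 64 mW average processing power
        fps_nominal = 6          # real-time constraint from the abstract

        energy_per_frame_mj = avg_power_w / fps_nominal * 1e3
        print(f"~{energy_per_frame_mj:.1f} mJ of processing energy per frame")
        # -> roughly 10.7 mJ per frame at 6 fps and 64 mW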

    Telescience Testbed Pilot Program

    The Telescience Testbed Pilot Program is developing initial recommendations for requirements and design approaches for the information systems of the Space Station era. During this quarter, drafting of the final reports of the various participants was initiated. Several drafts are included in this report as the University technical reports