10 research outputs found

    The Maunakea Spectroscopic Explorer Book 2018

    Full text link
    (Abridged) This is the Maunakea Spectroscopic Explorer 2018 book. It is intended as a concise reference guide to all aspects of the scientific and technical design of MSE, for the international astronomy and engineering communities and related agencies. The current version is a status report of MSE's science goals and their practical implementation, following the System Conceptual Design Review held in January 2018. MSE is a planned 10-m class, wide-field, optical and near-infrared facility, designed to enable transformative science while filling a critical missing gap in the emerging international network of large-scale astronomical facilities. MSE is completely dedicated to multi-object spectroscopy of samples of between thousands and millions of astrophysical objects. It will lead the world in this arena due to its unique design capabilities: it will boast a large (11.25 m) aperture and wide (1.52 sq. degree) field of view; it will have the capability to observe at a wide range of spectral resolutions, from R = 2,500 to R = 40,000, with massive multiplexing (4332 spectra per exposure, with all spectral resolutions available at all times) and an on-target observing efficiency of more than 80%. MSE will unveil the composition and dynamics of the faint Universe and is designed to excel at precision studies of faint astrophysical phenomena. It will also provide critical follow-up for multi-wavelength imaging surveys, such as those of the Large Synoptic Survey Telescope, Gaia, Euclid, the Wide Field Infrared Survey Telescope, the Square Kilometre Array, and the Next Generation Very Large Array. Comment: 5 chapters, 160 pages, 107 figures.

    Aircraft Attitude Estimation Using Panoramic Images

    Full text link
    This thesis investigates the problem of reliably estimating attitude from panoramic imagery in cluttered environments. Accurate attitude is an essential input to the stabilisation systems of autonomous aerial vehicles. A new camera system which combines a CCD camera, ultraviolet (UV) filters and a panoramic mirror-lens is designed. Drawing on biological inspiration from the ocelli organ possessed by certain insects, UV-filtered images are used to enhance the contrast between the sky and the ground and to mitigate the effect of the sun. A novel method for real-time horizon-based attitude estimation using panoramic images is developed, capable of estimating an aircraft's pitch and roll at low altitude in the presence of the sun, clouds and occluding features such as trees and buildings. In addition, a new method for panoramic sky/ground thresholding is proposed, consisting of a horizon-tracking and a sun-tracking system that works effectively even when the horizon line is difficult to detect by normal thresholding methods because of flares and other effects caused by the presence of the sun in the image. An algorithm for estimating the attitude from a three-dimensional mapping of the horizon projected onto a 3D plane is developed. The use of optic flow to determine pitch and roll rates is investigated using the panoramic image and the image interpolation algorithm (I2A). Two sensor-fusion techniques, an Extended Kalman Filter (EKF) and Artificial Neural Networks (ANNs), are used to fuse unfiltered measurements from the inertial sensors and the vision system. The EKF estimates the gyroscope biases as well as the attitude, while the ANN fuses the optic-flow and horizon-based attitude estimates to provide smooth attitude estimation. The results obtained from the different parts of the research are tested and validated through simulations and real flight tests.
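
    As a rough, self-contained illustration of the horizon-based attitude idea (not the thesis's actual algorithm), the Python sketch below fits a plane to view rays that have been unprojected from pixels classified as horizon and reads roll and pitch off the plane normal; the axis convention, the plane-fitting step and the toy data are assumptions made purely for the example.

```python
import numpy as np

def attitude_from_horizon(horizon_rays):
    """Estimate roll and pitch from unit view rays that hit the horizon.

    horizon_rays: (N, 3) unit vectors in the camera frame, each pointing at
    a pixel classified as horizon. If the horizon is close to a great
    circle, the rays lie near a plane whose normal is the local vertical
    expressed in the camera frame.
    """
    rays = np.asarray(horizon_rays, dtype=float)
    # Best-fit plane through the origin: its normal is the right singular
    # vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(rays, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:                      # make the normal point "up"
        normal = -normal
    # Assumed convention: x forward, y right, z up in the camera frame.
    roll = np.arctan2(normal[1], normal[2])
    pitch = -np.arctan2(normal[0], np.hypot(normal[1], normal[2]))
    return roll, pitch

# Toy check: horizon rays as seen by a level camera give ~(0, 0) degrees.
angles = np.linspace(0, 2 * np.pi, 90, endpoint=False)
rays = np.stack([np.cos(angles), np.sin(angles), np.zeros_like(angles)], axis=1)
print(np.degrees(attitude_from_horizon(rays)))
```

    In the thesis itself the horizon comes from UV-filtered panoramic imagery and the result is further fused with inertial data; the sketch only covers the geometric step.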

    Geometric and Radiometric Calibration of Video Infrared Imagers for Photogrammetric Applications

    Get PDF
    This thesis is concerned with the geometric and radiometric calibration of infrared imagers with a view to their use in close-range and airborne photogrammetric applications. From the geometric point of view, three quite different types of infrared imager can be distinguished - these comprise (i) the pyroelectric vidicon camera; (ii) the CCD camera based on the use of an areal array of solid-state detectors; and (iii) the thermal video frame scanner (TVFS). The special optics and the detector technologies that are used in these imagers to generate images in the middle and thermal bands of the infrared spectrum, together with the underlying video technology, are first reviewed and discussed in some detail with an emphasis on their fundamental geometric and radiometric characteristics and properties. On this basis, the design and construction of a special target plate has been undertaken that allows all these different types of imager to be calibrated both geometrically and radiometrically. After describing this target plate, the actual experimental set-up and procedures and the subsequent data processing and analysis are outlined, including the method devised and used for the automatic measurement of the positions of all the target crosses on the calibration plate employing image matching techniques. The results obtained from the successful calibration of a representative sample of CCD cameras and thermal video frame scanners are presented and discussed in detail. They provide much new and accurate information on the geometric characteristics of these types of infrared imager that will be invaluable to those undertaking photogrammetric measurements on the infrared images that are being acquired and used in military, medical, industrial and environmental applications. For the radiometric calibration of each imager, measurements of the grey level values were made over the whole of the image covering the target radiation source for a range of temperatures. Thus, much original and valuable information on the radiometric characteristics of the imagers has been obtained from the work undertaken during this research project, especially at lower operational temperatures. However, the techniques used gave poorer results at higher temperatures and will need to be modified if more useful results are to be obtained. Suggestions are made for the further development of the calibration technique, in particular for its use with low-resolution imagers such as the pyroelectric vidicon camera, which could not be calibrated in this research project due to time and financial limitations.
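
    For the radiometric side, the procedure described above (recording grey levels over the target area at a series of known source temperatures) amounts to fitting and then inverting a response curve. The short Python sketch below shows one minimal way to do that; the numbers and the quadratic model are made up for illustration and are not taken from the thesis.

```python
import numpy as np

# Hypothetical radiometric calibration data: mean grey level measured over
# the target area at a series of known source temperatures (deg C).
temps_c = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
grey_levels = np.array([41.0, 68.0, 97.0, 128.0, 161.0, 196.0])

# Fit grey level as a low-order polynomial in temperature ...
coeffs = np.polyfit(temps_c, grey_levels, deg=2)

# ... and invert it numerically to map a measured grey level back to an
# apparent temperature, which is what the radiometric calibration delivers.
def grey_to_temperature(grey, coeffs, t_range=(0.0, 100.0)):
    ts = np.linspace(*t_range, 2001)
    gs = np.polyval(coeffs, ts)
    return np.interp(grey, gs, ts)      # assumes a monotonic response

print(round(grey_to_temperature(112.0, coeffs), 1))
```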

    Geometric Accuracy Testing, Evaluation and Applicability of Space Imagery to the Small Scale Topographic Mapping of the Sudan

    Get PDF
    The geometric accuracy, interpretability and applicability of using space imagery for the production of small-scale topographic maps of the Sudan have been assessed. Two test areas were selected. The first test area was in the central Sudan, including the area between the Blue Nile and the White Nile and extending to Atbara in the Nile Province. The second test area was in the Red Sea Hills area, which has modern 1:100,000 scale topographic map coverage and has been covered by six types of images: Landsat MSS, TM and RBV; MOMS; Metric Camera (MC); and Large Format Camera (LFC). Geometric accuracy testing was carried out using a test field of well-defined control points whose terrain coordinates were obtained from the existing maps. The same points were measured on each of the images in a Zeiss Jena stereocomparator (Stecometer C II) and transformed into the terrain coordinate system using polynomial transformations in the case of the scanner and RBV images, and space resection/intersection, relative/absolute orientation and bundle adjustment in the case of the MC and LFC photographs. The two sets of coordinates were then compared. The planimetric accuracies (root mean square errors) obtained for the scanner and RBV images were: Landsat MSS +/-80 m; TM +/-45 m; RBV +/-40 m; and MOMS +/-28 m. The accuracies of the 3-dimensional coordinates obtained from the photographs were: MC: X = +/-16 m, Y = +/-16 m, Z = +/-30 m; and LFC: X = +/-14 m, Y = +/-14 m, Z = +/-20 m. The planimetric accuracy figures are compatible with the specifications for topographic maps at scales of 1:250,000 in the case of MSS, 1:125,000 in the case of TM and RBV, and 1:100,000 in the case of MOMS. The planimetric accuracies (vector = +/-20 m) achieved with the two space cameras are compatible with topographic mapping at 1:60,000 to 1:70,000 scale. However, the spot height accuracies of +/-20 to +/-30 m - equivalent to a contour interval of 50 to 60 m - fall short of the heighting accuracies required for 1:60,000 to 1:100,000 scale mapping. The interpretation tests carried out on the MSS, TM and RBV images showed that, while the main terrain features (hills, ridges, wadis, etc.) could be mapped reasonably well, there was an almost complete failure to pick up the cultural features - towns, villages, roads, railways, etc. - present in the test areas. The high-resolution MOMS images and the space photographs were much more satisfactory in this respect, although the cultural features were still difficult to pick up because the buildings and roads are built of local materials and exhibit little contrast on the images.
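
    The accuracy testing described above boils down to transforming the measured image coordinates of the control points into the terrain system and comparing them with the map-derived values. As a hedged illustration only, the Python sketch below fits a first-order (affine) polynomial transformation by least squares and computes per-axis and combined root mean square errors; the actual work also used higher-order polynomials and, for the MC and LFC photographs, full photogrammetric orientation and bundle adjustment, which are not reproduced here.

```python
import numpy as np

def affine_fit(img_xy, terr_xy):
    """Least-squares first-order (affine) polynomial transformation from
    comparator/image coordinates to terrain (E, N) coordinates."""
    img_xy, terr_xy = np.asarray(img_xy, float), np.asarray(terr_xy, float)
    A = np.column_stack([img_xy[:, 0], img_xy[:, 1], np.ones(len(img_xy))])
    coeffs_e, *_ = np.linalg.lstsq(A, terr_xy[:, 0], rcond=None)
    coeffs_n, *_ = np.linalg.lstsq(A, terr_xy[:, 1], rcond=None)
    return coeffs_e, coeffs_n

def planimetric_rmse(img_xy, terr_xy, coeffs_e, coeffs_n):
    """Per-axis RMSE of the transformed control points, plus the combined
    (vector) value of the kind quoted in the accuracy figures above."""
    img_xy, terr_xy = np.asarray(img_xy, float), np.asarray(terr_xy, float)
    A = np.column_stack([img_xy[:, 0], img_xy[:, 1], np.ones(len(img_xy))])
    predicted = np.column_stack([A @ coeffs_e, A @ coeffs_n])
    rmse_en = np.sqrt(np.mean((terr_xy - predicted) ** 2, axis=0))
    return rmse_en, float(np.hypot(*rmse_en))
```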

    Multi-sensor human action recognition with particular application to tennis event-based indexing

    Get PDF
    The ability to automatically classify human actions and activities using visual sensors or by analysing body-worn sensor data has been an active research area for many years. Only recently, with advancements in both fields and the ubiquitous nature of low-cost sensors in our everyday lives, has automatic human action recognition become a reality. While traditional sports coaching systems rely on manual indexing of events from a single modality, such as visual or inertial sensors, this thesis investigates the possibility of capturing and automatically indexing events from multimodal sensor streams. In this work, we detail a novel approach to infer human actions by fusing multimodal sensors to improve recognition accuracy. State-of-the-art visual action recognition approaches are also investigated. Firstly, we apply these action recognition detectors to basic human actions in a non-sporting context. We then perform action recognition to infer tennis events in a tennis court instrumented with cameras and inertial sensing infrastructure. The system proposed in this thesis can use either visual or inertial sensors to automatically recognise the main tennis events during play. A complete event retrieval system is also presented to allow coaches to build advanced queries, which existing sports coaching solutions cannot facilitate without an inordinate amount of manual indexing. The event retrieval interface is evaluated against a leading commercial sports coaching tool in terms of both usability and efficiency.

    Aeronautical Engineering: A continuing bibliography with indexes

    Get PDF
    This bibliography lists 382 reports, articles and other documents introduced into the NASA scientific and technical information system in June 1982.

    Calibration of Central Catadioptric Camera with One-Dimensional Object Undertaking General Motions

    No full text
    A 1D object is a segment with several markers at known distances, and calibration methods using 1D objects are more flexible than those using 2D/3D objects. Under the pinhole camera model, it has been proved that calibration with a free-moving 1D object is not possible. For a central catadioptric camera, can the camera be calibrated by a 1D object under general motions? In this paper, we prove that a central catadioptric camera can indeed be calibrated, and we propose a catadioptric camera calibration method using 1D objects undertaking general motions. The proposed method consists of two steps. First, the principal point is calculated from geometric invariants under the catadioptric camera model; second, images of the 1D object are used to calibrate the focal lengths, skew factor and mirror parameter. The method needs neither prior knowledge of the catadioptric parameters nor conic fitting, and it is linear, which makes it easy to implement. Experiments demonstrate its usefulness and stability. © 2011 IEEE.
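
    The abstract does not reproduce the actual constraint equations, so the sketch below only illustrates the generic pattern that such linear calibration methods reduce to: each image of the 1D object contributes rows that are linear in the unknown parameters, the rows are stacked into a matrix, and the parameter vector is recovered as the right singular vector with the smallest singular value. The constraint rows here are synthetic placeholders, not the paper's equations.

```python
import numpy as np

def solve_homogeneous(A):
    """Least-squares solution of A @ p = 0 for a unit-norm p: the right
    singular vector of A with the smallest singular value."""
    _, _, vt = np.linalg.svd(A)
    return vt[-1]

# Synthetic stand-in for the stacked per-image constraints: random rows
# forced to be orthogonal to a known unit parameter vector p_true.
rng = np.random.default_rng(0)
p_true = rng.normal(size=5)
p_true /= np.linalg.norm(p_true)
A = rng.normal(size=(40, 5))
A -= np.outer(A @ p_true, p_true)          # enforce A @ p_true = 0 exactly

p_est = solve_homogeneous(A)
print(np.allclose(np.abs(p_est @ p_true), 1.0))   # recovered up to sign
```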

    Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics

    Get PDF
    This paper proposes a method to enhance video object detection for indoor environments in robotics. Concretely, it exploits knowledge about the camera motion between frames to propagate previously detected objects to successive frames. The proposal is rooted in the concepts of planar homography, to propose regions of interest where objects may be found, and recursive Bayesian filtering, to integrate observations over time. The proposal is evaluated on six virtual indoor environments, accounting for the detection of nine object classes over a total of ∼7k frames. Results show that our proposal improves the recall and the F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction (58.8%) of the object categorization entropy when compared to a two-stage video object detection method used as a baseline, at the cost of a small time overhead (120 ms) and a loss in precision (0.92).
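
    The two ingredients named above, planar-homography region proposals and recursive Bayesian integration of detections, can be sketched in a few lines. The Python fragment below is only an illustrative reading of the abstract, with a made-up inter-frame homography and detector scores; it is not the authors' implementation.

```python
import numpy as np

def propagate_box(H, box):
    """Warp an axis-aligned box (x1, y1, x2, y2) from the previous frame
    into the current one with a planar homography H, and re-enclose it."""
    x1, y1, x2, y2 = box
    corners = np.array([[x1, y1, 1], [x2, y1, 1],
                        [x2, y2, 1], [x1, y2, 1]], float).T
    warped = H @ corners
    warped = warped[:2] / warped[2]
    return (*warped.min(axis=1), *warped.max(axis=1))

def bayes_update(prior, likelihood):
    """Recursive Bayesian update of per-class probabilities: combine the
    propagated belief with the detector's scores for the region."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

H = np.array([[1.0, 0.0, 4.0],     # hypothetical inter-frame homography:
              [0.0, 1.0, -2.0],    # a small translation for the example
              [0.0, 0.0, 1.0]])
roi = propagate_box(H, (100, 80, 180, 160))
belief = bayes_update(np.array([0.6, 0.3, 0.1]),   # prior over 3 classes
                      np.array([0.5, 0.4, 0.1]))   # detector scores in the ROI
print(roi, belief)
```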

    CAPACITANCE METROLOGY OF CURVED SURFACES: STUDY AND CHARACTERIZATION OF A NOVEL PROBE DESIGN

    Get PDF
    Capacitive sensors are frequently applied to curved target surfaces for precision displacement measurements. In most cases, these sensors have not been recalibrated to take the curvature of the target into consideration. This recalibration becomes more critical as the target surface becomes smaller in comparison with the sensor. Calibration data are presented for a variety of capacitance probe sizes with widely varying geometries. One target surface that is particularly difficult to characterize is the inner surface of small holes, less than one millimeter in diameter. Although contact probes can successfully measure the inner surface of a hole, these probes are often fragile and require additional sensors to determine when contact occurs. Probes may adhere to the wall of the hole, and only a small number of data points are collected. Direct capacitance measurement of small holes requires a completely new capacitance probe geometry and method of operation. A curved, elongated surface minimizes the gap between the sensor surface and the inner surface of the hole. Reduction in the size of the sensing area is weighed against electronics limitations. The performance of a particular probe geometry is studied using computer simulations to determine the optimal probe design. Multiple, overlapping passes are deconvolved to reveal finer features on the surface of the hole. A prototype sub-millimeter capacitance probe is machined from tungsten carbide, with four additional material layers added using e-beam deposition. Several techniques are studied to remove these layers and create a sensing area along one side of the probe. Both mechanical processes and photolithography are employed.
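
    As a point of reference for the displacement measurement principle discussed above, the sketch below evaluates the ideal flat-target (parallel-plate) relation C = Ξ΅β‚€A/d, which is the baseline that curvature-dependent recalibration corrects; the probe dimensions and gap are illustrative values, not figures from the thesis.

```python
import numpy as np

EPS0 = 8.854e-12          # vacuum permittivity, F/m

def gap_parallel_plate(capacitance_f, area_m2):
    """Ideal flat-target approximation: C = eps0 * A / d, so the
    probe-to-target gap is d = eps0 * A / C. Curved targets deviate
    from this relation, which is why recalibration is needed."""
    return EPS0 * area_m2 / capacitance_f

# Illustrative numbers: a 2 mm diameter circular sensing area at a
# nominal 50 micrometre gap gives roughly 0.56 pF.
area = np.pi * (1.0e-3) ** 2
c_nominal = EPS0 * area / 50e-6
print(round(gap_parallel_plate(c_nominal, area) * 1e6, 1), "um")
```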

    Applied Physics: A Ukrainian-Russian-English Explanatory Dictionary. In 4 Volumes. Vol. 2: Π— – Н

    Get PDF
    The dictionary covers about 30,000 terms from applied physics and related fields of knowledge, with definitions in three languages (Ukrainian, Russian and English). Many of the terms and definitions given in the dictionary, which are in use in the corresponding fields of knowledge, have not previously appeared in any specialized dictionary. The dictionary is intended for lecturers, researchers, engineers, postgraduate students, students of higher educational institutions, and translators working in the natural sciences and technical disciplines.