
    Deep-sea image processing

    High-resolution seafloor mapping often requires optical methods of sensing to confirm interpretations made from sonar data. Optical digital imagery of seafloor sites can now provide very high resolution and also provides additional cues, such as color information for sediments, biota, and diverse rock types. During cruise AT11-7 of the Woods Hole Oceanographic Institution (WHOI) vessel R/V Atlantis (February 2004, East Pacific Rise), visual imagery was acquired from three sources: (1) a digital still down-looking camera mounted on the submersible Alvin, (2) observer-operated 1- and 3-chip video cameras with tilt and pan capabilities mounted on the front of Alvin, and (3) a digital still camera on the WHOI TowCam (Fornari, 2003). Imagery from the first source, collected on a previous cruise (AT7-13) to the Galapagos Rift at 86°W, was successfully processed and mosaicked post-cruise, resulting in a single image covering an area of about 2000 sq. m at a resolution of 3 mm per pixel (Rzhanov et al., 2003). This paper addresses the issues of optimal acquisition of visual imagery in deep-sea conditions and the requirements for on-board processing. Shipboard processing of digital imagery allows for reviewing collected imagery immediately after the dive, evaluating its importance, optimizing acquisition parameters, and augmenting acquisition of data over specific sites on subsequent dives. Images from the Deep Sea Power and Light (DSPL) digital camera offer the best resolution (3.3 megapixels) and are taken at an interval of 10 seconds (determined by the strobe's recharge rate). This makes the images suitable for mosaicking only when Alvin moves slowly (≪1/4 kt), which is not always possible on time-critical missions. The video cameras provided a source of imagery more suitable for mosaicking, despite their inferior resolution. We discuss the required pre-processing and image-enhancement techniques and their influence on the interpretation of mosaic content. An algorithm for determining camera tilt parameters from the acquired imagery is proposed, and its robustness conditions are discussed
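The slow-speed constraint quoted in the abstract is simple kinematics: at a 10 s strobe interval, forward speed fixes the spacing between frames, which the image footprint must exceed by enough margin to leave overlap for mosaicking. A minimal sketch of that arithmetic (the 50% overlap target is an assumed example, not a figure from the paper):

```python
KNOT_M_S = 0.5144  # one knot in metres per second

def frame_spacing_m(speed_kt: float, interval_s: float) -> float:
    """Along-track distance travelled between consecutive still frames."""
    return speed_kt * KNOT_M_S * interval_s

def min_footprint_m(speed_kt: float, interval_s: float, overlap: float) -> float:
    """Minimum along-track image footprint needed to keep a given
    fractional overlap between consecutive frames."""
    return frame_spacing_m(speed_kt, interval_s) / (1.0 - overlap)

# At the 1/4 kt ceiling with the 10 s strobe interval:
spacing = frame_spacing_m(0.25, 10.0)         # ~1.29 m between frames
footprint = min_footprint_m(0.25, 10.0, 0.5)  # ~2.57 m needed for 50% overlap
```

Any faster transit stretches the frame spacing proportionally, which is why video, despite its lower resolution, becomes the more mosaickable source.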

    Instruments on large optical telescopes -- A case study

    In the distant past, telescopes were known, first and foremost, for the sizes of their apertures. Advances in technology are now enabling astronomers to build extremely powerful instruments, to the extent that instruments have achieved importance comparable to, or even exceeding, the usual importance accorded to the apertures of the telescopes. However, the cost of successive generations of instruments has risen at a rate noticeably above the rate of inflation. Here, given the vast sums of money now being expended on optical telescopes and their instrumentation, I argue that astronomers must undertake "cost-benefit" analysis for future planning. I use the scientific output of the first two decades of the W. M. Keck Observatory as a laboratory for this purpose. I find, in the absence of upgrades, that the time to reach peak paper production for an instrument is about six years. The prime lifetime of instruments (sans upgrades), as measured by citation returns, is about a decade. Well thought out and timely upgrades increase and sometimes even double the useful lifetime. I investigate how well instrument builders are rewarded; I find acknowledgements ranging from almost 100% to as low as 60%. Next, given the increasing cost of operating optical telescopes, the managements of existing observatories continue to seek new partnerships. This naturally raises the question, "What is the cost of a single night of telescope time?" I provide a rational basis to compute this quantity. I then end the paper with some thoughts on the future of large ground-based optical telescopes, bearing in mind the explosion of synoptic precision photometric, astrometric, and imaging surveys across the electromagnetic spectrum, the increasing cost of instrumentation, and the rise of mega-instruments. Comment: Revised from previous submission (typos fixed, table 6 was garbled). Submitted to PAS
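The paper's own accounting for a night of telescope time is not reproduced in the abstract, but a common amortization scheme gives the flavour of such a computation. A sketch only, with every figure below hypothetical rather than taken from the paper:

```python
def cost_per_night(capital_cost: float, amortization_years: float,
                   annual_operations: float,
                   usable_nights_per_year: int = 300) -> float:
    """Amortized capital plus yearly operations, spread over usable nights.
    An illustrative basis only; the paper derives its own, more careful one."""
    annual_capital = capital_cost / amortization_years
    return (annual_capital + annual_operations) / usable_nights_per_year

# Hypothetical example: a $100M facility amortized over 20 years,
# $10M/yr operations, 300 usable nights per year.
example = cost_per_night(100e6, 20, 10e6)  # $50,000 per night
```

The point of such a figure is comparative: it lets a prospective partner weigh a night of access against the citation return an instrument is likely to deliver over its roughly decade-long prime lifetime.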

    An ultrahigh-speed digitizer for the Harvard College Observatory astronomical plates

    A machine capable of digitizing two 8 inch by 10 inch (203 mm by 254 mm) glass astrophotographic plates, or a single 14 inch by 17 inch (356 mm by 432 mm) plate, at a resolution of 11 microns per pixel, or 2309 dots per inch (dpi), in 92 seconds is described. The purpose of the machine is to digitize the ~500,000-plate collection of the Harvard College Observatory in a five-year time frame. The digitization must meet the requirements for scientific work in astrometry, photometry, and archival preservation of the plates. This paper describes the requirements for, and the design of, the subsystems of the machine that was developed specifically for this task. Comment: 12 pages, 9 figures, 1 table; presented at SPIE (July, 2006) and published in Proceeding
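The five-year target implies a demanding duty cycle, which a quick feasibility check makes concrete. This sketch counts pure scanning time only; plate handling, setup, and downtime are deliberately ignored here:

```python
def scan_days_per_year(total_plates: int, years: float,
                       seconds_per_scan: float) -> float:
    """Days per year of pure scanning time needed to finish on schedule
    (handling, setup, and downtime not included)."""
    plates_per_year = total_plates / years
    return plates_per_year * seconds_per_scan / 86400.0

# 500,000 plates in 5 years at 92 s per scan:
days = scan_days_per_year(500_000, 5, 92)  # ~106 days/yr of scanning alone
```

Over 100 days per year of uninterrupted scanning, before any handling overhead, is why the 92-second cycle time is the machine's headline specification.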

    Low-Cost Motility Tracking System (LOCOMOTIS) for time-lapse microscopy applications and cell visualisation

    This article has been made available through the Brunel Open Access Publishing Fund. Direct visualisation of cells for the purpose of studying their motility has typically required expensive microscopy equipment. However, recent advances in digital sensors mean that it is now possible to image cells for a fraction of the price of a standard microscope. Along with low-cost imaging, there has also been a large increase in the availability of high-quality, open-source analysis programs. In this study we describe the development and performance of an expandable cell motility system employing inexpensive, commercially available digital USB microscopes to image various cell types using time-lapse imaging and to perform tracking assays in proof-of-concept experiments. With this system we were able to measure and record three separate assays simultaneously on one personal computer using identical microscopes, and obtained tracking results comparable in quality to those from other studies that used standard, more expensive equipment. The microscopes used in our system were capable of a maximum magnification of 413.6x. Although resolution was lower than that of a standard inverted microscope, we found this difference to be indistinguishable at the magnification chosen for cell tracking experiments (206.8x). In preliminary cell culture experiments using our system, velocities (mean μm/min ± SE) of 0.81±0.01 (Biomphalaria glabrata hemocytes on uncoated plates), 1.17±0.004 (MDA-MB-231 breast cancer cells), 1.24±0.006 (SC5 mouse Sertoli cells), and 2.21±0.01 (B. glabrata hemocytes on Poly-L-Lysine coated plates) were measured and are consistent with previous reports. We believe that this system, coupled with open-source analysis software, demonstrates that higher-throughput time-lapse imaging of cells for the purpose of studying motility can be an affordable option for all researchers. © 2014 Lynch et al
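Velocity figures of the kind reported above follow from tracked positions in a standard way: path length per track divided by elapsed time, then a mean and standard error across tracks. A minimal sketch, where the calibrated units and frame interval are placeholders rather than values from the study:

```python
import math

def track_velocities(tracks, frame_interval_min: float):
    """Mean speed (calibrated distance per minute) for each cell track,
    where a track is a list of (x, y) positions, one per frame."""
    speeds = []
    for pts in tracks:
        dist = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
        speeds.append(dist / ((len(pts) - 1) * frame_interval_min))
    return speeds

def mean_and_se(values):
    """Sample mean and standard error, as in the reported 'mean ± SE'."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, math.sqrt(var / n)
```

Open-source trackers export exactly this kind of per-frame position list, which is what makes the low-cost hardware and free analysis software a complete pipeline.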

    Television image compression and small animal remote monitoring

    It was shown that a subject can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, these discriminations are significantly influenced by whether the TV camera is stable or moving and whether the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4 MHz. Since this video rate was judged to be acceptable by 27 of the 34 subjects (79 percent) for monitoring the general health and status of small animals within their illuminated (lights-on) cages (regardless of whether the camera was stable or moved), it suggests that an immediate Space Station Freedom-to-ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality
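The 18-percent and roughly-80-percent figures follow from simple ratios, taking the quoted 8.4 MHz "normal TV" figure as the equivalent full rate (that equivalence is the abstract's own convention, not derived here):

```python
FULL_RATE = 8.4  # the abstract's "normal TV resolution" figure, as Mbit/s

def fraction_of_full_rate(rate_mbit_s: float, full_rate: float = FULL_RATE) -> float:
    """Compressed video rate as a fraction of the quoted full rate."""
    return rate_mbit_s / full_rate

def bandwidth_reduction_pct(rate_mbit_s: float, full_rate: float = FULL_RATE) -> float:
    """Percent bandwidth reduction relative to the full rate."""
    return 100.0 * (1.0 - rate_mbit_s / full_rate)

frac = fraction_of_full_rate(1.54)      # ~0.18, the "about 18 percent"
saving = bandwidth_reduction_pct(1.54)  # ~82%, the "about 80 percent" saving
```

Since the highest rate tested (1.54 Mbit/s) was already judged acceptable by most subjects, the implied tolerable reduction is at least this large.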

    Sensor System for Rescue Robots

    A majority of rescue worker fatalities are a result of on-scene responses. Existing technologies help first responders in no-light scenarios, and there even exist robots that can navigate radioactive areas. However, none are able both to be quickly deployable and to enter hard-to-reach or unsafe areas in an emergency event, such as an earthquake or storm that damages a structure. In this project we created a sensor platform system to augment existing robotic solutions so that rescue workers can search for people in danger while avoiding preventable injury or death and saving time and resources. Our results showed that we were able to produce a 2D map of the room, with updates for robot motion, on a display while also showing a live thermal image from the front of the system. The system is also capable of taking a digital picture upon a triggering event and then displaying it on the computer screen. We discovered that data transfer plays a huge role in making different programs like Arduino and Processing interact with each other; consequently, this needs to be accounted for when improving our project. In particular, our project is wired right now but should deliver data wirelessly to be of any practical use. Furthermore, we dipped our feet into SLAM technologies, and if our project were to become autonomous, more research into the algorithms would make this autonomy feasible

    Investigation of a new method for improving image resolution for camera tracking applications

    Camera-based systems have been a preferred choice in many motion tracking applications due to their ease of installation and ability to work in unprepared environments. The concept of these systems is based on extracting image information (colour and shape properties) to detect the object location. However, the resolution of the image and the camera field-of-view (FOV) are two main factors that can restrict the tracking applications for which these systems can be used. Resolution can be addressed partially by using higher-resolution cameras, but this may not always be possible or cost-effective. This research paper investigates a new method utilising averaging of offset images to improve the effective resolution achievable with a standard camera. The initial results show that the minimum detectable position change of a tracked object could be improved by up to 4 times
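The abstract does not spell out the averaging scheme, but one plausible sketch of how frame averaging sharpens a sub-pixel position estimate is an intensity-weighted centroid over averaged frames. Everything below is illustrative, not the authors' algorithm:

```python
import numpy as np

def centroid(profile: np.ndarray) -> float:
    """Intensity-weighted centroid of a 1-D profile: a sub-pixel
    position estimate for a tracked feature."""
    idx = np.arange(profile.size)
    return float((idx * profile).sum() / profile.sum())

def averaged_centroid(frames) -> float:
    """Average several frames of the same (static) target before
    locating it; uncorrelated pixel noise shrinks roughly as sqrt(N),
    tightening the minimum detectable position change."""
    return centroid(np.mean(frames, axis=0))

# A symmetric blob centred on pixel 2 is recovered exactly:
blob = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
pos = averaged_centroid([blob, blob, blob])  # 2.0
```

Under that reading, a 4x improvement in minimum detectable position change would correspond to averaging on the order of 16 frames, though the paper's offset-image scheme may achieve it differently.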