
    MScMS-II: an innovative IR-based indoor coordinate measuring system for large-scale metrology applications

    Owing to the great current interest in large-scale metrology applications across many fields of the manufacturing industry, technologies and techniques for dimensional measurement have recently shown substantial improvement. Ease of use, logistic and economic issues, as well as metrological performance, are assuming an increasingly important role among system requirements. This paper describes the architecture and working principles of a novel infrared (IR) optical-based system, designed to perform low-cost and easy indoor coordinate measurements of large-size objects. The system consists of a distributed network-based layout, whose modularity allows it to fit working volumes of different sizes and shapes by adequately increasing the number of sensing units. Unlike existing spatially distributed metrological instruments, the remote sensor devices are intended to provide embedded data-processing capabilities, in order to share the overall computational load. The overall system functionalities, including distributed layout configuration, network self-calibration, 3D point localization, and measurement data processing, are discussed. A preliminary metrological characterization of system performance, based on experimental testing, is also presented.
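The abstract does not detail the 3D point localization step, but the general idea of locating a point from multiple distributed sensing units can be illustrated with a minimal 2D bearing-triangulation sketch. The sensor positions, bearing angles, and intersection formula below are illustrative assumptions, not the system's actual algorithm:

```python
import math

def triangulate_2d(p1, bearing1, p2, bearing2):
    """Locate a point from two bearing observations (angles from the +x axis,
    in radians) taken at known sensor positions p1 and p2."""
    x1, y1 = p1
    x2, y2 = p2
    t1, t2 = math.tan(bearing1), math.tan(bearing2)
    # Rays: y - y1 = t1*(x - x1) and y - y2 = t2*(x - x2); intersect them.
    x = (y2 - y1 + t1 * x1 - t2 * x2) / (t1 - t2)
    y = y1 + t1 * (x - x1)
    return x, y

# Two sensors on a 10 m baseline both sighting a target at (5, 5):
px, py = triangulate_2d((0.0, 0.0), math.atan2(5, 5), (10.0, 0.0), math.atan2(5, -5))
```

With more than two sensing units, the same geometry is typically solved as an over-determined least-squares problem, which is what makes a modular, growing network useful.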

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
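As a concrete illustration of the event stream described above, the sketch below (a generic representation, not tied to any particular camera or library) accumulates per-pixel event polarities into a 2D frame, one of the simplest event representations used in this field:

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float  # timestamp in seconds (microsecond-scale resolution)
    x: int    # pixel column
    y: int    # pixel row
    p: int    # polarity: +1 brightness increase, -1 decrease

def accumulate(events, width, height):
    """Sum event polarities per pixel to form a simple 2D frame."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        frame[e.y][e.x] += e.p
    return frame

# Two positive events at pixel (2, 1) and one negative event at (0, 0):
evts = [Event(1e-6, 2, 1, +1), Event(2e-6, 2, 1, +1), Event(3e-6, 0, 0, -1)]
img = accumulate(evts, 4, 2)
# img[1][2] == 2, img[0][0] == -1
```

Richer representations (time surfaces, voxel grids, learned embeddings) build on the same (t, x, y, p) tuple.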

    A high speed Tri-Vision system for automotive applications

    Purpose: Cameras are excellent ways of non-invasively monitoring the interior and exterior of vehicles. In particular, high speed stereovision and multivision systems are important for transport applications such as driver eye tracking or collision avoidance. This paper addresses the synchronisation problem which arises when multivision camera systems are used to capture the high speed motion common in such applications. Methods: An experimental, high-speed tri-vision camera system intended for real-time driver eye-blink and saccade measurement was designed, developed, implemented and tested using prototype, ultra-high dynamic range, automotive-grade image sensors specifically developed by E2V (formerly Atmel) Grenoble SA as part of the European FP6 project SENSATION (advanced sensor development for attention stress, vigilance and sleep/wakefulness monitoring). Results: The developed system can sustain frame rates of 59.8 Hz at the full stereovision resolution of 1280 × 480, but this can reach 750 Hz when a 10 kpixel Region of Interest (ROI) is used, with a maximum global shutter speed of 1/48000 s and a shutter efficiency of 99.7%. The data can be reliably transmitted uncompressed over 5 metres of standard copper Camera-Link® cables. The synchronisation error between the left and right stereo images is less than 100 ps, and this has been verified both electrically and optically. Synchronisation is automatically established at boot-up and maintained during resolution changes. A third camera in the set can be configured independently. The dynamic range of the 10-bit sensors exceeds 123 dB, with a spectral sensitivity extending well into the infra-red range. Conclusion: The system was subjected to a comprehensive testing protocol, which confirms that the salient requirements for the driver monitoring application are adequately met and in some respects exceeded. The synchronisation technique presented may also benefit several other automotive stereovision applications, including near- and far-field obstacle detection and collision avoidance, road condition monitoring and others. Partially funded by the EU FP6 through the IST-507231 SENSATION project. Peer reviewed.
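For context on the quoted 123 dB figure: image-sensor dynamic range is conventionally 20·log10 of the max-to-min signal ratio, so a purely linear 10-bit output alone spans only about 60 dB; the extra range implies a compressive (e.g., multi-slope) sensor response. A quick check of the arithmetic:

```python
import math

def dynamic_range_db(ratio):
    """Dynamic range in dB for a given max/min signal ratio (20*log10 convention)."""
    return 20.0 * math.log10(ratio)

linear_10bit = dynamic_range_db(2 ** 10)  # ~60.2 dB for a purely linear 10-bit code span
# 123 dB corresponds to a signal ratio above 1.4 million, far beyond
# what a linear 10-bit readout alone can represent:
ratio_123db = 10 ** (123 / 20.0)
```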

    Low cost underwater acoustic localization

    Over the course of the last decade, the cost of marine robotic platforms has decreased significantly. In part, this has lowered the barriers to entry for exploring and monitoring larger areas of the earth's oceans. However, these advances have mostly focused on autonomous surface vehicles (ASVs) or shallow water autonomous underwater vehicles (AUVs). One of the main drivers of high cost in the deep water domain is the challenge of localizing such vehicles using acoustics. A low cost one-way travel time underwater ranging system is proposed to assist in localizing deep water submersibles. The system consists of location-aware anchor buoys at the surface and underwater nodes. This paper presents a comparison of methods, together with details on the physical implementation to allow its integration into a deep sea micro AUV currently in development. Additional simulation results show error reductions by a factor of three. Comment: 73rd Meeting of the Acoustical Society of America.
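The core of one-way travel time ranging is simple: with synchronized clocks, range is sound speed times transit time, and several such ranges from location-aware surface buoys constrain the node's position. The sketch below uses a nominal sound speed and a closed-form 2D solve as illustrative simplifications; the paper's actual method and sound-speed model may differ:

```python
import math

SPEED_OF_SOUND = 1500.0  # m/s, nominal seawater value; varies with depth/temp/salinity

def one_way_range(t_transmit, t_receive):
    """Range from one-way travel time, assuming synchronized clocks."""
    return SPEED_OF_SOUND * (t_receive - t_transmit)

def trilaterate_2d(anchors, ranges):
    """Closed-form 2D position from three anchor positions and measured ranges,
    obtained by subtracting the first range equation from the other two."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# A ping sent at t=0 s and heard 0.8 s later implies a 1200 m slant range:
r = one_way_range(0.0, 0.8)

# Recover a node at (300, 400) m from three buoys with exact ranges:
anchors = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
ranges = [math.hypot(300.0 - ax, 400.0 - ay) for ax, ay in anchors]
est = trilaterate_2d(anchors, ranges)
```

In practice depth is usually taken from a pressure sensor, reducing the 3D problem to the 2D one sketched here.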

    DOES: A Deep Learning-based approach to estimate roll and pitch at sea

    The use of Attitude and Heading Reference Systems (AHRS) for orientation estimation is now common practice in a wide range of applications, e.g., robotics and human motion tracking, aerial vehicles and aerospace, gaming and virtual reality, indoor pedestrian navigation and maritime navigation. The integration of the high-rate measurements can provide very accurate estimates, but these can suffer from error accumulation due to sensor drift over longer time scales. To overcome this issue, inertial sensors are typically combined with additional sensors and techniques. As an example, camera-based solutions have drawn considerable attention from the community, thanks to their low cost and easy hardware setup; moreover, impressive results have been demonstrated in the context of Deep Learning. This work presents the preliminary results obtained by DOES, a supportive Deep Learning method specifically designed for maritime navigation, which aims at improving the roll and pitch estimations obtained by common AHRS. DOES recovers these estimations through the analysis of the frames acquired by a low-cost camera pointing at the horizon at sea. The training has been performed on the novel ROPIS dataset, presented in the context of this work and acquired using the FrameWO application developed for this purpose. Promising results encourage testing other network backbones and further expanding the dataset, improving the accuracy of the results and extending the range of applications of the method as a valid support to visual-based odometry techniques.
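DOES itself is a learned method, but the underlying geometry, i.e. how a horizon line constrains roll and pitch, can be sketched for a simple pinhole camera. The horizon-endpoint inputs, principal point and focal length below are hypothetical parameters for illustration, not part of the paper:

```python
import math

def roll_pitch_from_horizon(x1, y1, x2, y2, cy, focal_px):
    """Roll from the horizon's image slope; pitch from its vertical offset
    from the principal point (pinhole model, small-angle pitch)."""
    roll = math.atan2(y2 - y1, x2 - x1)        # tilted horizon -> camera roll
    y_mid = (y1 + y2) / 2.0
    pitch = math.atan2(y_mid - cy, focal_px)   # horizon above/below centre -> pitch
    return math.degrees(roll), math.degrees(pitch)

# A level horizon passing through the image centre gives zero roll and pitch:
roll, pitch = roll_pitch_from_horizon(0, 240, 640, 240, 240, 800.0)  # -> (0.0, 0.0)
```

A learned approach like DOES effectively replaces this brittle line-detection geometry with a network that maps raw frames to the same two angles.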

    Elasticity mapping for breast cancer diagnosis using tactile imaging and auxiliary sensor fusion

    Tactile Imaging (TI) is a technology utilising capacitive pressure sensors to image elasticity distributions within soft tissues such as the breast for cancer screening. TI aims to solve critical problems in the cancer screening pathway, particularly: low sensitivity of manual palpation, patient discomfort during X-ray mammography, and the poor quality of breast cancer referral forms between primary and secondary care facilities. TI is effective in identifying ‘non-palpable’, early-stage tumours, with basic differential ability that reduced unnecessary biopsies by 21% in repeated clinical studies. TI has its limitations, particularly: the measured hardness of a lesion is relative to the background hardness, and lesion location estimates are subjective and prone to operator error. TI can achieve more than simple visualisation of lesions and can act as an accurate differentiator and material analysis tool with further metric development and acknowledgement of error sensitivities when transferring from phantom to clinical trials. This thesis explores and develops two methods, specifically inertial measurement and IR vein imaging, for determining the breast background elasticity, and registering tactile maps for lesion localisation, based on fusion of tactile and auxiliary sensors. These sensors enhance the capabilities of TI, with background tissue elasticity determined with MAE < 4% over tissues in the range 9 kPa – 90 kPa and probe trajectory across the breast measured with an error ratio < 0.3%, independent of applied load, validated on silicone phantoms. A basic TI error model is also proposed, maintaining tactile sensor stability and accuracy with 1% settling times < 1.5s over a range of realistic operating conditions. These developments are designed to be easily implemented into commercial systems, through appropriate design, to maximise impact, providing a stable platform for accurate tissue measurements. 
This will allow clinical TI to further reduce benign referral rates in a cost-effective manner, by elasticity differentiation and lesion classification in future works.

    Review of heliostat calibration and tracking control methods

    Large scale central receiver systems typically deploy from thousands to more than a hundred thousand heliostats. During solar operation, each heliostat is aligned individually in such a way that its surface normal bisects the angle between the direction to the sun and the direction to the aim point coordinate on the receiver. Due to various tracking error sources, achieving accurate alignment (≤1 mrad) for all heliostats with respect to the aim points on the receiver without a calibration system can be regarded as unrealistic. Therefore, a calibration system is necessary not only to improve the aiming accuracy for achieving desired flux distributions but also to reduce or eliminate spillage. An overview of current larger-scale central receiver systems (CRS), tracking error sources and the basic requirements of an ideal calibration system is presented. Leading up to the main topic, a description of general and specific terms related to heliostat calibration and tracking control clarifies the terminology used in this work. Several figures illustrate the signal flow along typical components, as well as the corresponding monitoring or measuring devices that indicate or measure along the signal (or effect) chain. The numerous calibration systems are described in detail and classified into groups. Two tables juxtapose the calibration methods for easier comparison. Finally, an assessment presents the advantages and disadvantages of the individual calibration methods.
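The alignment rule stated above, that the mirror normal bisects the angle between the sun direction and the aim-point direction, reduces to normalizing the sum of the two unit vectors. A minimal sketch, assuming both direction vectors point away from the heliostat:

```python
import math

def unit(v):
    """Normalize a 3D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def heliostat_normal(sun_dir, aim_dir):
    """Required mirror normal: the bisector of the sun and aim directions,
    i.e. the normalized sum of the two unit vectors."""
    s, a = unit(sun_dir), unit(aim_dir)
    return unit(tuple(si + ai for si, ai in zip(s, a)))

# Sun due east, receiver due north (both horizontal): the normal points north-east.
n = heliostat_normal((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))  # -> (0.707..., 0.707..., 0.0)
```

The tracking errors the review surveys (encoder offsets, pedestal tilt, mirror misalignment) all perturb the achieved normal away from this ideal bisector, which is why sub-milliradian accuracy needs calibration.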

    A high-resolution full-field range imaging system

    There exist a number of applications where the range to all objects in a field of view needs to be obtained. Specific examples include obstacle avoidance for autonomous mobile robots, process automation in assembly factories, surface profiling for shape analysis, and surveying. Ranging systems can typically be characterized as either laser scanning systems, where a laser point is sequentially scanned over a scene, or full-field acquisition systems, where the range to every point in the image is obtained simultaneously. The former offer advantages in terms of range resolution, while the latter tend to be faster and involve no moving parts. We present a system for determining the range to any object within a camera's field of view, at the speed of a full-field system and with a range resolution approaching that of some point laser scanners. Initial results achieve centimeter range resolution for a 10 second acquisition time. Modifications to the existing system are discussed that should provide faster results with submillimeter resolution.
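Full-field range cameras commonly recover distance indirectly from the phase shift of amplitude-modulated illumination. Whether this particular system uses exactly that scheme is not stated in the abstract, so the relations below are the generic AMCW (amplitude-modulated continuous-wave) formulas, not the paper's specification:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_phase(phase_rad, f_mod_hz):
    """AMCW indirect time-of-flight: distance from the measured phase shift of
    the modulated illumination (valid within one ambiguity interval)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def ambiguity_distance(f_mod_hz):
    """Maximum unambiguous range: half the modulation wavelength."""
    return C / (2.0 * f_mod_hz)

# At 10 MHz modulation the unambiguous range is about 15 m:
d_max = ambiguity_distance(10e6)
```

Raising the modulation frequency improves range resolution but shrinks the unambiguous interval, a trade-off that any modification toward submillimeter resolution has to manage.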