
    Fourth Conference on Artificial Intelligence for Space Applications

    Proceedings of a conference held in Huntsville, Alabama, on November 15-16, 1988. The Fourth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work to help those who employ AI methods in space applications identify common goals and address issues of general interest to the AI community. Topics include: space applications of expert systems in fault diagnostics, telemetry monitoring and data collection, design and systems integration, and planning and scheduling; knowledge representation, capture, verification, and management; robotics and vision; adaptive learning; and automatic programming.

    SHINE: Deep Learning-Based Accessible Parking Management System

    The ongoing expansion of urban areas, facilitated by advancements in science and technology, has resulted in a considerable increase in the number of privately owned vehicles worldwide, including in South Korea. This steady increase in the number of vehicles has inevitably led to parking-related issues, including the abuse of disabled parking spaces (hereafter referred to as accessible parking spaces) designated for individuals with disabilities. Traditional license plate recognition (LPR) systems have proven inefficient at addressing this problem in real time due to the high frame rate of surveillance cameras, the presence of natural and artificial noise, and variations in lighting and weather conditions that impede detection and recognition. With the growing concept of Parking 4.0, many sensor-, IoT-, and deep learning-based approaches have been applied to automatic LPR and parking management systems. Nonetheless, existing studies show the need for a robust and efficient model for managing accessible parking spaces in South Korea. To address this, we propose a novel system called SHINE, which uses a deep learning-based object detection algorithm to detect vehicles, license plates, and disability badges (referred to as cards, badges, or access badges hereafter) and verifies the driver's right to use accessible parking spaces by coordinating with a central server. Our model achieves a mean average precision of 92.16% and is expected to address the abuse of accessible parking spaces while contributing significantly to efficient and effective parking management in urban environments.
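
    The abstract above describes the overall flow (detect the vehicle, plate, and badge, then verify against a central server) without implementation details, so the following Python sketch only illustrates that flow. The detector interface, class labels, and verification endpoint are assumptions for illustration, not the paper's actual code.

        # Hedged sketch of a SHINE-style verification step. The detector object,
        # its output format, and the server endpoint are hypothetical.
        import requests  # standard HTTP client; endpoint URL below is hypothetical

        VERIFY_URL = "https://parking-server.example/api/verify"  # hypothetical endpoint

        def verify_parking_event(frame, detector):
            """Run detection on one surveillance frame and query the central server."""
            detections = detector(frame)  # assumed to return a list of dicts with "label"
            plate = next((d for d in detections if d["label"] == "license_plate"), None)
            badge = next((d for d in detections if d["label"] == "access_badge"), None)
            if plate is None:
                return "no_vehicle_plate_detected"
            payload = {
                "plate_text": plate.get("text", ""),  # OCR result, if the detector provides one
                "badge_present": badge is not None,
            }
            # The central server is assumed to hold the registry of authorized plates/badges.
            response = requests.post(VERIFY_URL, json=payload, timeout=5)
            return "authorized" if response.json().get("authorized") else "violation"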

    HoloHDR: Multi-color Holograms Improve Dynamic Range

    Holographic displays generate three-dimensional (3D) images by displaying single-color holograms time-sequentially, each lit by a single-color light source. However, representing each color one by one limits peak brightness and dynamic range in holographic displays. This paper introduces a new driving scheme, HoloHDR, for realizing higher-dynamic-range images in holographic displays. Unlike the conventional driving scheme, in HoloHDR three light sources illuminate each displayed hologram simultaneously at various brightness levels. In this way, HoloHDR reconstructs a multiplanar 3D target scene using consecutive multi-color holograms and persistence of vision. We co-optimize the multi-color holograms and the required brightness levels of each light source using a gradient descent-based optimizer with a combination of application-specific loss terms. We experimentally demonstrate that HoloHDR can increase the brightness levels in holographic displays up to three times with support for a broader dynamic range, unlocking new potential for perceptual realism in holographic displays.
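
    As a rough illustration of the co-optimization idea described above (jointly optimizing multi-color holograms and per-source brightness levels with a gradient descent-based optimizer), the Python sketch below uses PyTorch with a toy single-FFT propagation model; HoloHDR's actual forward model, loss terms, and parameters are not reproduced here.

        # Hedged sketch: jointly optimize three hologram phase patterns and a 3x3
        # matrix of per-source brightness levels so their combined reconstructions
        # match a target scene. The propagate() function is a stand-in, not the
        # paper's propagation model.
        import torch

        def propagate(phase):
            """Toy forward model: far-field intensity of a phase-only hologram."""
            field = torch.exp(1j * phase)
            return torch.abs(torch.fft.fft2(field)) ** 2

        def optimize_holohdr_like(target, steps=200, lr=0.1):
            # target: (3, H, W) tensor, one channel per light source color
            phases = torch.randn(3, *target.shape[1:], requires_grad=True)  # three holograms
            brightness = torch.ones(3, 3, requires_grad=True)               # source levels per hologram
            opt = torch.optim.Adam([phases, brightness], lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                # Each displayed hologram is lit by all three sources at learned levels;
                # persistence of vision sums the consecutive frames per color channel.
                recon = torch.stack([
                    sum(brightness[h, c] * propagate(phases[h]) for h in range(3))
                    for c in range(3)
                ])
                loss = torch.nn.functional.mse_loss(recon, target)
                loss.backward()
                opt.step()
            return phases.detach(), brightness.detach()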

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications into daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigate the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interaction and collaboration; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    Helmet-Mounted Display System Based on IoT

    Many people enjoy motorcycle riding, yet thousands of riders lose their lives in road accidents, largely because emergency assistance reaches victims too late. The proposed helmet-mounted display system based on the Internet of Things (IoT) aims to reduce accidents and to notify the rider's contacts in an emergency. The helmet module contains sensors that measure the rider's pulse rate, alcohol level, and vibration intensity; the pulse rate sensor also confirms that the rider is wearing the helmet before the trip begins. We implemented a prototype that uses the IoT to connect all devices and help the rider avoid road accidents by displaying all the information they need on the helmet screen. In our implementation, several subsystems are connected to a Raspberry Pi 4: a Global Positioning System (GPS) application, a camera system, and the sensors, all of which produce output data in the background. These data are transmitted from the Raspberry Pi 4 to a Raspberry Pi 3 over the User Datagram Protocol (UDP), and the Raspberry Pi 3 drives a Digital Light Processing (DLP) projector that displays the data as a hologram to the rider, keeping them safe on the road without distraction.
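
    The data path described above (sensor, GPS, and camera outputs gathered on a Raspberry Pi 4 and sent to a Raspberry Pi 3 over UDP for display) can be sketched in Python roughly as follows; the field names, address, and port are illustrative assumptions.

        # Hedged sketch of the UDP link between the two boards described above.
        import json
        import socket

        PI3_ADDRESS = ("192.168.1.50", 5005)  # hypothetical Raspberry Pi 3 address/port

        def send_telemetry(sock, pulse_bpm, alcohol_level, vibration, gps_fix):
            """Pi 4 side: serialize one telemetry sample and send it as a UDP datagram."""
            packet = json.dumps({
                "pulse_bpm": pulse_bpm,
                "alcohol": alcohol_level,
                "vibration": vibration,
                "gps": gps_fix,  # e.g. {"lat": ..., "lon": ...}
            }).encode("utf-8")
            sock.sendto(packet, PI3_ADDRESS)

        def receive_loop(port=5005):
            """Pi 3 side: receive datagrams and hand them to the display stage."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind(("", port))
            while True:
                data, _ = sock.recvfrom(4096)
                sample = json.loads(data)
                # The DLP projector stage would render `sample` on the helmet display here.
                print(sample)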

    LiDAR-derived digital holograms for automotive head-up displays.

    A holographic automotive head-up display was developed to project 2D and 3D ultra-high-definition (UHD) images derived from LiDAR data into the driver's field of view. The LiDAR data were collected with a 3D terrestrial laser scanner and converted into computer-generated holograms (CGHs). The reconstructions were obtained with a HeNe laser and a UHD spatial light modulator with a panel resolution of 3840×2160 px for replay field projections. By decreasing the focal distance of the CGHs, the zero-order spot was diffused into the holographic replay field image. 3D holograms were observed floating as a ghost image at a variable focal distance by incorporating a digital Fresnel lens into the CGH together with a concave lens. This project was funded by the EPSRC Centre for Doctoral Training in Connected Electronic and Photonic Systems (CEPS) (EP/S022139/1), Project Reference: 2249444.
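
    As a rough illustration of the point-cloud-to-CGH step mentioned above, the Python sketch below sums spherical wavefronts from LiDAR points and applies a digital Fresnel lens phase; the abstract does not specify the CGH algorithm, and the wavelength, pixel pitch, and lens parameters here are illustrative assumptions rather than the values used in the project.

        # Hedged sketch: a generic point-source CGH with a digital Fresnel lens term.
        import numpy as np

        WAVELENGTH = 633e-9          # HeNe laser, metres
        PITCH = 3.74e-6              # assumed SLM pixel pitch
        RES_X, RES_Y = 3840, 2160    # UHD SLM panel resolution

        def point_cloud_to_cgh(points, lens_focal=0.5):
            """points: iterable of (x, y, z) coordinates in metres relative to the hologram plane."""
            ys, xs = np.mgrid[0:RES_Y, 0:RES_X]
            X = (xs - RES_X / 2) * PITCH
            Y = (ys - RES_Y / 2) * PITCH
            k = 2 * np.pi / WAVELENGTH
            field = np.zeros((RES_Y, RES_X), dtype=np.complex128)
            for px, py, pz in points:
                r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
                field += np.exp(1j * k * r) / r  # spherical wave from each LiDAR point
            # Digital Fresnel lens: quadratic phase that shifts the replay focal distance.
            field *= np.exp(-1j * k * (X ** 2 + Y ** 2) / (2 * lens_focal))
            return np.angle(field)  # phase-only hologram for the SLM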

    Augmented Reality and Its Application

    Augmented Reality (AR) is a discipline that encompasses the interactive experience of a real-world environment in which real-world objects and elements are enhanced with computer-generated perceptual information. It has many potential applications in education, medicine, and engineering, among other fields. This book explores these potential uses, presenting case studies and investigations of AR for vocational training, emergency response, interior design, architecture, and much more.

    High-throughput label-free cell detection and counting from diffraction patterns with deep fully convolutional neural networks

    SIGNIFICANCE: Digital holographic microscopy (DHM) is a promising technique for studying semitransparent biological specimens such as red blood cells (RBCs). Detecting and counting biological cells at the single-cell level in biomedical images is important for biomarker discovery and disease diagnostics. However, cell analysis based on the phase information of images is inefficient because of the complexity of the numerical phase reconstruction algorithms applied to raw hologram images, so new cell study methods that work directly on diffraction patterns are desirable. AIM: Deep fully convolutional networks (FCNs) were developed to operate directly on raw hologram images for high-throughput, label-free cell detection and counting, to assist future biological cell analysis. APPROACH: The raw diffraction patterns of RBCs were recorded using DHM. Ground-truth mask images were labeled based on phase images reconstructed from the RBC holograms using a numerical reconstruction algorithm. A deep FCN, U-Net, was trained on the diffraction pattern images to achieve label-free cell detection and counting. RESULTS: The implemented deep FCNs provide a promising route to high-throughput, label-free counting of RBCs, with a counting accuracy of 99% at a throughput rate of more than 288 cells per second over a 200 μm × 200 μm field of view at the single-cell level. Compared with convolutional neural networks, the FCNs achieve much better results in terms of accuracy and throughput rate. CONCLUSIONS: High-throughput, label-free cell detection and counting were successfully achieved from diffraction patterns with deep FCNs, a promising approach for biological specimen analysis based directly on raw holograms.
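
    As a rough illustration of the counting step described above (a trained U-Net-style FCN predicting a cell mask from a raw diffraction pattern, followed by counting), the Python sketch below assumes a Keras-style model object and a simple threshold; it does not reproduce the paper's trained network or settings.

        # Hedged sketch: segment a raw diffraction pattern with an assumed trained FCN
        # and count cells as connected components of the predicted mask.
        import numpy as np
        from scipy import ndimage

        def count_cells(hologram, model, threshold=0.5):
            """hologram: 2D array (raw diffraction pattern); model: trained FCN with a
            Keras-style predict() returning a per-pixel cell probability map."""
            prob_map = model.predict(hologram[np.newaxis, ..., np.newaxis])[0, ..., 0]
            mask = prob_map > threshold                 # binarize the segmentation
            labeled, n_cells = ndimage.label(mask)      # one label per connected region
            return n_cells, labeled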