604 research outputs found

    Mixed Reality on a Virtual Globe


    Requirements for digitized aircraft spotting (Ouija) board for use on U.S. Navy aircraft carriers

    This thesis will evaluate system and process elements to initiate the requirements modeling necessary for the next-generation Digitized Aircraft Spotting (Ouija) Board for use on U.S. Navy aircraft carriers to track and plan aircraft movement. The research will examine and evaluate the feasibility and suitability of transforming the existing two-dimensional static board into an electronic, dynamic display that will enhance situational awareness by using sensors and system information from various sources to present a comprehensive operational picture of the current flight and hangar decks aboard aircraft carriers. The authors will evaluate the current processes and make recommendations on the elements the new system should display: what information is shown, which external systems feed information to the display, and how intelligent agents could be used to transform the static display into a powerful decision support tool. Optimally, the Aircraft Handler will use this system to effectively manage the flight and hangar decks to support the projection of air power from U.S. aircraft carriers.
    http://archive.org/details/requirementsford109454447
    Lieutenant Commander, United States Navy; Lieutenant Commander, United States Navy Reserve
    Approved for public release; distribution is unlimited.
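    As a purely illustrative sketch of the decision-support idea above (not from the thesis): an intelligent agent watching the digitized spotting data could, for example, flag deck cells assigned to more than one aircraft. The data layout, grid naming, and conflict_agent function are all hypothetical.

```python
# Hypothetical sketch: an agent that flags spotting conflicts on a
# digitized Ouija board. Data layout and names are illustrative only.
from dataclasses import dataclass

@dataclass
class Spot:
    aircraft: str   # side number
    location: str   # deck grid cell, e.g. "FD-E4" (flight deck, cell E4)
    status: str     # "parked", "taxiing", or "planned"

def conflict_agent(spots):
    """Flag any deck cell assigned to more than one aircraft."""
    by_cell = {}
    for s in spots:
        by_cell.setdefault(s.location, []).append(s.aircraft)
    return {cell: acft for cell, acft in by_cell.items() if len(acft) > 1}

board = [Spot("102", "FD-E4", "parked"),
         Spot("305", "FD-E4", "planned"),   # planned move onto an occupied cell
         Spot("411", "HD-B2", "parked")]
print(conflict_agent(board))  # -> {'FD-E4': ['102', '305']}
```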

    Third-Person Autonomous Control using Deep RC

    The ability to fly a remote control (RC) aircraft from a third-person perspective is a skill that many hobbyists and enthusiasts enjoy. With a little practice, an RC pilot can sense the aircraft’s orientation and apply the correct inputs for it to orbit, achieve specific mission objectives, or fly to line-of-sight waypoints. The work done in [1] proved that third-person sensing of an aircraft’s attitude is possible. This work seeks to improve the deep learning methods, increase hardware capabilities through the addition of a turret and motion-capture validation, and add control algorithms for complete autonomous flight. While the results are not comprehensive, this report documents the work accomplished thus far. Overall, this report shows that substantial progress has been made toward third-person autonomous control.
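    A minimal sketch of the control side described above, under stated assumptions: a proportional loop that turns a third-person attitude estimate into RC stick commands. estimate_attitude and send_rc_commands are hypothetical stand-ins for the report's deep-learning estimator and transmitter link, and the gains are arbitrary.

```python
# Sketch only: proportional attitude control from a third-person estimate.
import numpy as np

def estimate_attitude(frame):
    # Placeholder: a real system would run the vision model on the frame.
    return 0.0, 0.0, 0.0  # roll, pitch, yaw in degrees

def send_rc_commands(aileron, elevator, rudder):
    # Placeholder: a real system would write these to the RC transmitter.
    print(f"ail={aileron:+.2f} ele={elevator:+.2f} rud={rudder:+.2f}")

KP = np.array([0.8, 0.8, 0.5])  # assumed gains for roll, pitch, yaw

def control_step(frame, target):
    """One autonomy-loop iteration: sense attitude, command the aircraft."""
    err = np.asarray(target, float) - np.asarray(estimate_attitude(frame))
    err[2] = (err[2] + 180.0) % 360.0 - 180.0  # wrap yaw error to [-180, 180)
    ail, ele, rud = np.clip(KP * err, -1.0, 1.0)  # normalized stick deflections
    send_rc_commands(ail, ele, rud)

control_step(frame=None, target=(10.0, 0.0, 90.0))  # bank 10 deg, heading 090
```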

    Improving Indoor Security Surveillance by Fusing Data from BIM, UWB and Video

    Indoor physical security, as a perpetual and multi-layered concern, is a time-intensive and labor-consuming task. Various technologies have been leveraged to develop automatic access control, intrusion detection, and video monitoring systems. Video surveillance has been significantly enhanced by the advent of Pan-Tilt-Zoom (PTZ) cameras and advanced video processing, which together enable effective monitoring and recording. The development of ubiquitous object identification and tracking technologies provides the opportunity to accomplish automatic access control and tracking. Intrusion detection has also become possible by deploying networks of motion sensors that alert on abnormal behaviors. However, each of the above-mentioned technologies has its own limitations. This thesis presents a fully automated indoor security solution that leverages an Ultra-wideband (UWB) Real-Time Locating System (RTLS), PTZ surveillance cameras, and a Building Information Model (BIM) as three sources of environmental data. By equipping authorized persons with UWB tags, intruders can be identified from the mismatch between the detected tag owners and the persons detected in the video, at which point an intrusion alert is generated. PTZ cameras allow for wide-area monitoring and motion-based recording. Furthermore, the BIM is used for space modeling and for mapping the locations of intruders in the building. Fusing UWB tracking, video, and spatial data can automate the entire security procedure, from access control to intrusion alerting and behavior monitoring. Other benefits of the proposed method include more complex query processing and interoperability with other BIM-based solutions. A prototype system is implemented that demonstrates the feasibility of the proposed method.
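    A minimal sketch of the tag/vision mismatch rule described above, under assumed data shapes: each UWB tag and each video detection is reduced to a floor-plan (x, y) position in meters. The 1.0 m association gate and all names are illustrative, not from the thesis.

```python
# Sketch only: flag video detections with no authorized UWB tag nearby.
import math

MATCH_RADIUS_M = 1.0  # assumed gate for associating a tag with a detection

def find_intruders(tag_positions, video_positions):
    """Return video-detected persons that no authorized tag accounts for."""
    intruders = []
    unmatched_tags = list(tag_positions)
    for px, py in video_positions:
        nearest = min(unmatched_tags,
                      key=lambda t: math.hypot(t[0] - px, t[1] - py),
                      default=None)
        if nearest and math.hypot(nearest[0] - px, nearest[1] - py) <= MATCH_RADIUS_M:
            unmatched_tags.remove(nearest)   # this person is a tag owner
        else:
            intruders.append((px, py))       # mismatch: raise an intrusion alert
    return intruders

# Two people in the video, one authorized tag: the second person is flagged.
print(find_intruders([(2.0, 3.0)], [(2.2, 3.1), (8.0, 1.5)]))
```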

    Detecting and Tracking Vulnerable Road Users' Trajectories Using Different Types of Sensors Fusion

    Vulnerable road user (VRU) detection and tracking has been a key challenge in transportation research. Different types of sensors, such as cameras, LiDAR, and inertial measurement units (IMUs), have been used for this purpose. For detection and tracking with a camera, it is necessary to perform calibration to obtain correct GPS trajectories. This process is often tedious and requires accurate ground-truth data. Moreover, if the camera performs any pan-tilt-zoom function, it is usually necessary to recalibrate it. In this thesis, we propose camera calibration using an auxiliary sensor: ultra-wideband (UWB). UWB sensors are capable of tracking a road user with ten-centimeter-level accuracy. Once a VRU carrying a UWB tag traverses the camera view, the UWB GPS data is fused with the camera data to perform real-time calibration. As the experimental results in this thesis show, the camera outputs better trajectories after calibration. The use of UWB is expected to be needed only once to fuse the data and determine the correct trajectories for a given intersection and camera location; all other trajectories collected by the camera can be corrected using the same adjustment. In addition, data analysis was conducted to evaluate the performance of the UWB sensors. This study also predicted pedestrian trajectories using data fused from the UWB and smartphone sensors. UWB GPS coordinates are very accurate, although UWB lacks other sensor measurements such as accelerometer and gyroscope readings. The smartphone data were used in this scenario to augment the UWB data. The two datasets were merged on the basis of the closest timestamp. The resulting dataset has precise latitude and longitude from UWB as well as accelerometer, gyroscope, and speed data from smartphones, making the fused dataset accurate and rich in parameters. The fused dataset was then used to predict the GPS coordinates of pedestrians and scooters using an LSTM.
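    One plausible reading of the calibration step, sketched with OpenCV: (pixel, UWB ground-position) pairs collected while a tagged road user crosses the view fit a homography, which then maps every later pixel trajectory from the same camera pose to world coordinates. The sample points are invented, and a local metric frame stands in for GPS.

```python
# Sketch only: fit a pixel-to-ground homography from UWB correspondences.
import numpy as np
import cv2

# Paired samples gathered while the UWB-tagged pedestrian was in view:
pixels = np.float32([[320, 420], [450, 410], [580, 405],
                     [600, 300], [470, 290], [340, 295]])
world  = np.float32([[10.0, 5.0], [13.0, 5.0], [16.0, 5.0],
                     [16.0, 9.0], [13.0, 9.0], [10.0, 9.0]])  # meters

H, _ = cv2.findHomography(pixels, world)  # least-squares fit over all pairs

def pixel_to_world(track_px):
    """Map an (N, 2) pixel trajectory to ground-plane coordinates."""
    pts = np.float32(track_px).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Any other trajectory from the same camera pose reuses the same H:
print(pixel_to_world([[360, 415], [550, 300]]))
```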

    WATCHING PEOPLE: ALGORITHMS TO STUDY HUMAN MOTION AND ACTIVITIES

    Nowadays, human motion analysis is one of the most active research topics in Computer Vision, and it is receiving increasing attention from both the industrial and scientific communities. The growing interest in human motion analysis is motivated by an increasing number of promising applications, ranging from surveillance, human–computer interaction, and virtual reality to healthcare, sports, computer games, and video conferencing, just to name a few. The aim of this thesis is to give an overview of the various tasks involved in visual motion analysis of the human body and to present the issues and possible solutions related to it. In this thesis, visual motion analysis is categorized into three major areas related to the interpretation of human motion: tracking of human motion using a virtual pan-tilt-zoom (vPTZ) camera, recognition of human actions, and segmentation of human behaviors. In the field of human motion tracking, a virtual environment for PTZ cameras (vPTZ) is presented to overcome the mechanical limitations of PTZ cameras. The vPTZ is built on equirectangular images acquired by 360° cameras, and it allows not only the development of pedestrian tracking algorithms but also the comparison of their performance. On the basis of this virtual environment, three novel pedestrian tracking algorithms for 360° cameras were developed, two of which adopt a tracking-by-detection approach while the third adopts a Bayesian approach. The action recognition problem is addressed by an algorithm that represents actions in terms of multinomial distributions over frequent sequential patterns of different lengths. Frequent sequential patterns are series of data descriptors that occur many times in the data. The proposed method learns a codebook of frequent sequential patterns by means of an apriori-like algorithm; an action is then represented with a Bag-of-Frequent-Sequential-Patterns approach. In the last part of this thesis, a methodology to semi-automatically annotate behavioral data given a small set of manually annotated data is presented. The resulting methodology is not only effective in the semi-automated annotation task but can also be used in the presence of abnormal behaviors, as demonstrated empirically by testing the system on data collected from children affected by neurodevelopmental disorders.
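    A minimal sketch of the Bag-of-Frequent-Sequential-Patterns representation described above: given a learned codebook, an action (a series of quantized descriptors) becomes a normalized histogram, i.e., a multinomial, over pattern occurrences. Matching patterns as contiguous subsequences is a simplification here, and the codebook itself would come from the apriori-like mining step.

```python
# Sketch only: represent an action as a multinomial over codebook patterns.
import numpy as np

codebook = [("a", "b"), ("c", "c", "a"), ("c", "c")]  # illustrative patterns

def count_occurrences(sequence, pattern):
    """Count contiguous occurrences of pattern in sequence."""
    n, m = len(sequence), len(pattern)
    return sum(1 for i in range(n - m + 1) if tuple(sequence[i:i + m]) == pattern)

def bag_of_patterns(sequence):
    """Normalized histogram of codebook-pattern occurrences."""
    counts = np.array([count_occurrences(sequence, p) for p in codebook], float)
    total = counts.sum()
    return counts / total if total else counts

print(bag_of_patterns(list("abccabcca")))  # each pattern occurs twice -> thirds
```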

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which emerged several decades earlier, and AR complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, although it is not free of human factors and other restrictions. AR applications also demand less time and effort, since there is no need to construct the entire virtual scene and environment. In this book, several new and emerging application areas of AR are presented and divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medicine, biology, and the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    A Survey of Smart Classroom Literature

    Recently, there has been a substantial amount of research on smart classrooms, encompassing a number of areas, including Information and Communication Technology, Machine Learning, Sensor Networks, Cloud Computing, and Hardware. Smart classroom research has been quickly implemented to enhance education systems, resulting in higher engagement and empowerment of students, educators, and administrators. Despite decades of using emerging technology to improve teaching practices, critics often point out that these methods lack adequate theoretical and technical foundations. As a result, there have been a number of conflicting reviews on different perspectives of smart classrooms. A piecemeal implementation is insufficient for a realistic smart classroom approach. This survey contributes to the current literature by presenting a comprehensive analysis of the various disciplines using a standard terminology and taxonomy. This multi-field study reveals new research possibilities and problems that must be tackled in order to integrate interdisciplinary works in a synergic manner. Our analysis shows that the smart classroom is a rapidly developing research area that complements a number of emerging technologies. Moreover, this paper describes the co-occurrence network of technological keywords using VOSviewer for an in-depth analysis.
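    A small sketch of the keyword co-occurrence counting that underlies a VOSviewer-style network: each paper contributes one edge per pair of keywords it lists, and edge weights are occurrence counts. The keyword sets are illustrative, not drawn from the survey's corpus.

```python
# Sketch only: count keyword co-occurrences across a set of papers.
from itertools import combinations
from collections import Counter

papers = [
    {"iot", "cloud computing", "smart classroom"},
    {"machine learning", "smart classroom", "sensor networks"},
    {"iot", "sensor networks", "smart classroom"},
]

cooccurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(keywords), 2):
        cooccurrence[(a, b)] += 1  # undirected edge weight

for (a, b), weight in cooccurrence.most_common(3):
    print(f"{a} -- {b}: {weight}")
```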

    Self-localization in ubiquitous computing using sensor fusion

    The widespread availability of small and inexpensive mobile computing devices and the desire to connect them at any time in any place has driven the need to develop an accurate means of self-localization. Devices that typically operate outdoors use GPS for localization. However, most mobile computing devices operate not only outdoors but also indoors, where GPS is typically unavailable; therefore, other localization techniques must be used. Currently, there are several commercially available indoor localization systems. However, most of these systems rely on specialized hardware which must be installed in the mobile device as well as in the building of operation, and the deployment of this additional infrastructure may be unfeasible or costly. This work addresses the problem of indoor self-localization of mobile devices without the use of specialized infrastructure; we aim to leverage existing assets rather than deploy new infrastructure. The problem of self-localization utilizing single- and dual-sensor systems has been well studied. Typically, dual-sensor systems are used when the limitations of a single sensor prevent it from functioning with the required level of performance and accuracy; a second sensor is often used to complement and improve the measurements of the first. Sometimes it is better to use more than two sensors. In this work, the use of three sensors with complementary characteristics was explored. The three-sensor system that was developed included a positional sensor, an inertial sensor, and a visual sensor: positional information was obtained via radio localization, acceleration information was obtained via an accelerometer, and visual object identification was performed with a video camera. This system was selected as representative of typical ubiquitous computing devices that will be capable of developing an awareness of their environment in order to provide users with contextually relevant information. As part of this research, a prototype system consisting of a video camera, an accelerometer, and an 802.11g receiver was built. The specific sensors were chosen for their low cost and ubiquitous nature and for their ability to complement each other in a self-localization task using existing infrastructure. A discrete Kalman filter was designed to fuse the sensor information in an effort to obtain the best possible estimate of the system's position. Experimental results showed that the system could, when provided with a reasonable initial position estimate, determine its position with an average error of 8.26 meters.
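    A minimal sketch of the fusion role the discrete Kalman filter plays above, reduced to one dimension: the accelerometer drives the prediction step and the radio position fix is the measurement. All noise covariances and readings are assumed values, not parameters from the thesis.

```python
# Sketch only: 1-D discrete Kalman filter fusing inertial and radio data.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (position, velocity)
B = np.array([[0.5 * dt**2], [dt]])      # control input model for acceleration
H = np.array([[1.0, 0.0]])               # we measure position only
Q = 0.05 * np.eye(2)                     # assumed process noise covariance
R = np.array([[9.0]])                    # assumed radio noise (std ~3 m)

x = np.zeros((2, 1))                     # initial state estimate
P = 10.0 * np.eye(2)                     # initial estimate covariance

def kalman_step(x, P, accel, z):
    """Predict with the inertial input, then correct with the radio fix."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

for accel, z in [(0.2, 0.4), (0.2, 0.9), (0.0, 1.6)]:   # toy sensor readings
    x, P = kalman_step(x, P, accel, z)
print(x.ravel())  # fused position and velocity estimate
```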