7,669 research outputs found

    Global Positioning from a Single Image of a Rectangle in Conical Perspective

    This article presents a method to obtain the global position of a camera's focal point from an image that includes a rectangle in a fixed reference frame with known position and dimensions. The technique uses basic principles of descriptive geometry introduced in engineering courses. The article first shows how to obtain the dihedral projections of a rectangle after three rotations and one translation. Second, we obtain the image of the rotated rectangle in conical perspective, taking the elevation plane as the drawing plane and a specific point in space as the viewpoint, represented in the dihedral system. Third, we apply the inverse perspective transformation, presenting a method to obtain the spatial coordinates of the rectangle from its image. Finally, we verify the method experimentally by taking an image of the rectangle with a camera for which the coordinates in the drawing plane (the center of the image) are the only available position information; from this, the 3D position and orientation of the camera are obtained.
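    The abstract above describes an inverse perspective construction built on descriptive geometry. As a rough computational analogue, not the authors' method, the same pose-from-a-known-rectangle problem can be attacked with a planar PnP solver; the rectangle dimensions, corner pixels, and camera intrinsics below are illustrative assumptions.

```python
# Sketch: recover camera position/orientation from an image of a rectangle
# of known size. Illustrative analogue of the paper's inverse perspective
# step, using OpenCV's planar PnP solver rather than descriptive geometry.
import numpy as np
import cv2

# Known rectangle in the world frame (metres), corners in a fixed order (assumed size).
W, H = 0.40, 0.30
object_pts = np.array([[0, 0, 0],
                       [W, 0, 0],
                       [W, H, 0],
                       [0, H, 0]], dtype=np.float64)

# Corresponding corner pixels measured in the image (assumed values).
image_pts = np.array([[312, 410],
                      [705, 398],
                      [690, 122],
                      [330, 140]], dtype=np.float64)

# Assumed pinhole intrinsics (focal length and principal point in pixels).
K = np.array([[800, 0, 512],
              [0, 800, 384],
              [0,   0,   1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_IPPE)  # planar-target solver
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec            # camera centre in world coordinates
print("camera position:", camera_position.ravel())
```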

    Evaluation of automated decision making methodologies and development of an integrated robotic system simulation: Study results

    The implementation of a generic computer simulation for manipulator systems (ROBSIM) is described. The program is written in FORTRAN and allows the user to: (1) interactively define a manipulator system consisting of multiple arms, load objects, targets, and an environment; (2) request graphic display or replay of manipulator motion; (3) investigate and simulate various control methods, including manual force/torque and active compliance control; and (4) perform kinematic analysis, requirements analysis, and response simulation of manipulator motion. Previous reports have described the algorithms and procedures for using ROBSIM. Those reports are superseded here, and the additional features that were added are described: (1) the ability to define motion profiles and compute loads on a common base to which manipulator arms are attached; (2) the capability to accept data describing manipulator geometry from a Computer Aided Design database using the Initial Graphics Exchange Specification (IGES) format; (3) a manipulator control algorithm derived from processing the TV image of known reference points on a target; and (4) a vocabulary of simple high-level task commands which can be used to define task scenarios.
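    ROBSIM itself is a FORTRAN system, so the sketch below is only a language-neutral illustration of the kind of kinematic analysis such a simulation performs: chaining per-link homogeneous transforms for a planar serial arm mounted on a common base. The link lengths and joint angles are made up.

```python
# Sketch of forward kinematics for a planar serial arm: each revolute joint
# rotates the frame, then the link translates it along the rotated x-axis.
# All numeric values are illustrative, not taken from ROBSIM.
import numpy as np

def link_transform(theta, length):
    """Homogeneous 2D transform for one revolute joint followed by a link."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, length * c],
                     [s,  c, length * s],
                     [0,  0, 1.0]])

def forward_kinematics(joint_angles, link_lengths):
    """End-effector position and heading of a planar arm on a common base."""
    T = np.eye(3)
    for theta, L in zip(joint_angles, link_lengths):
        T = T @ link_transform(theta, L)
    return T[:2, 2], np.arctan2(T[1, 0], T[0, 0])

pos, heading = forward_kinematics([0.3, -0.5, 0.2], [0.5, 0.4, 0.2])
print("end-effector position:", pos, "orientation:", heading)
```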

    An Accessible Approach to Exploring Space through Augmented Reality

    Physically engaging with space is often difficult for people who struggle with mobility. Elderly people and people with disabilities in particular may find it challenging to walk for long periods of time over various terrain in order to explore their environment. This project is designed to provide an alternative way to physically engage with spaces without requiring the user to walk, and I focus specifically on the accessibility of Bard’s campus. My project involves a map of the college that users can tour in an augmented reality environment. Through the use of a projector-camera system, the program projects a map and tracks objects placed on that map, telling the user information about the space based on each object’s location. Users are meant to collaboratively trace the map and label buildings as they explore them. Finally, users highlight their favorite locations with colored markers and take a screenshot of the completed map. The colors used are associated with different subjective experiences of the campus and are projected back onto the table in the final step of the project. This experience is meant to operate as an alternative to traditional physical tours while maintaining the communal experience that Bard tours provide.
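    As a hedged illustration of the projector-camera tracking described above, and not the project's actual code, the sketch below finds a colored marker placed on the imaged map and reports the labeled region it falls in; the color range and region rectangles are invented placeholders.

```python
# Sketch: locate a coloured marker on a projected map and look up which
# labelled region it lies in. Colour range and region names are made up.
import cv2
import numpy as np

REGIONS = {                               # map regions in camera pixels (assumed)
    "Library": (100, 100, 300, 250),
    "Dining Hall": (320, 100, 520, 250),
}

def locate_marker(frame_bgr):
    """Return the centroid of the largest red blob in the frame, if any."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))   # red-ish marker
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

def region_for(point):
    """Name the map region containing the marker, if any."""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= point[0] <= x1 and y0 <= point[1] <= y1:
            return name
    return None
```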

    Biometric fusion methods for adaptive face recognition in computer vision

    PhD Thesis. Face recognition is a biometric method that uses different techniques to identify individuals based on facial information extracted from digital image data. Face recognition systems are widely used for security purposes but face challenging problems, and solutions to some of the most important challenges are proposed in this study. The aim of this thesis is to investigate the face recognition across pose problem based on the image parameters of camera calibration. Three novel methods have been derived to address the challenges of face recognition and to infer the camera parameters from images using a geometric approach based on perspective projection. The following techniques were used: a camera measurement technique (CMT) for camera calibration and Face Quadtree Decomposition (FQD), which are combined to develop the Face Camera Measurement Technique (FCMT) for human facial recognition. A feature extraction and identity-matching algorithm for facial information has been created. The success and efficacy of the proposed algorithm are analysed in terms of robustness to noise, accuracy of distance measurement, and face recognition. To recover the intrinsic and extrinsic camera calibration parameters, a novel technique has been developed based on perspective projection, which uses different geometrical shapes to calibrate the camera. The parameters obtained by the novel measurement technique CMT enable the system to infer real distances for regular and irregular objects from 2-D images. The output of CMT feeds into FQD to measure the distances between facial points. Quadtree decomposition enhances the representation of edges and other singularities along curves of the face, and thus improves directional features for face detection across pose. The proposed FCMT system is the new combination of CMT and FQD to recognise faces in various poses. The theoretical foundation of the proposed solutions has been thoroughly developed and discussed in detail. The results show that the proposed algorithms outperform existing algorithms in face recognition, with a 2.5% improvement in the recognition error rate compared with recent studies.
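    The thesis builds FQD on quadtree decomposition. As a generic illustration of that underlying technique rather than the thesis algorithm itself, the sketch below recursively splits a grayscale image block whenever its intensity variance is high, so edges and facial curves end up covered by small blocks; the thresholds are arbitrary.

```python
# Sketch of variance-driven quadtree decomposition of a greyscale image:
# blocks with high intensity variance (edges, curved features) are split
# into quadrants; smooth blocks stay large. Thresholds are illustrative.
import numpy as np

def quadtree(img, x, y, size, min_size=4, var_thresh=150.0, leaves=None):
    """Return leaf blocks (x, y, size) of a variance-driven quadtree."""
    if leaves is None:
        leaves = []
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_thresh:
        leaves.append((x, y, size))
        return leaves
    half = size // 2
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        quadtree(img, x + dx, y + dy, half, min_size, var_thresh, leaves)
    return leaves

# Example on a random 128x128 "image"; a real use would pass a face crop.
face = (np.random.rand(128, 128) * 255).astype(np.uint8)
blocks = quadtree(face, 0, 0, 128)
print(len(blocks), "leaf blocks")
```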

    Three-dimensional graphics

    Three-dimensional graphics is the area of computer graphics that deals with producing two-dimensional representations, or images, of three-dimensional synthetic scenes, as seen from a given viewing configuration. The level of sophistication of these images may vary from simple wire-frame representations, where objects are depicted as a set of line segments with no data on surfaces and volumes, to photorealistic rendering, where illumination effects are computed using the physical laws of light propagation. All the different approaches are based on the metaphor of a virtual camera positioned in 3D space and looking at the scene. Hence, independently of the rendering algorithm used, producing an image of the scene always requires solving the following problems: 1. modeling geometric relationships among scene objects, and in particular efficiently representing the placement of objects and virtual cameras in 3D space; 2. culling and clipping, i.e. efficiently determining which objects are visible from the virtual camera; 3. projecting visible objects onto the film plane of the virtual camera in order to render them. This chapter provides an introduction to the field by presenting the standard approaches for solving the aforementioned problems.
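    As a minimal sketch of the third problem listed above, the code below projects a world-space point onto the film plane of a virtual camera using a simple pinhole model, with no culling or clipping; the camera placement and focal length are illustrative.

```python
# Sketch: perspective projection of a world-space point through a virtual
# camera (pinhole model). Camera placement and focal length are made up.
import numpy as np

def look_at(eye, target, up=(0, 1, 0)):
    """World-to-camera rotation and translation for a camera looking at target."""
    f = np.asarray(target, float) - np.asarray(eye, float)
    f /= np.linalg.norm(f)
    s = np.cross(f, up); s /= np.linalg.norm(s)
    u = np.cross(s, f)
    R = np.stack([s, u, -f])               # rows: right, up, -forward
    return R, -R @ np.asarray(eye, float)

def project(point, R, t, focal=1.0):
    """Perspective-project a 3D point onto the image plane at distance focal."""
    pc = R @ np.asarray(point, float) + t  # point in camera coordinates
    return -focal * pc[0] / pc[2], -focal * pc[1] / pc[2]

R, t = look_at(eye=(0, 0, 5), target=(0, 0, 0))
print(project((1.0, 0.5, 0.0), R, t))      # image-plane coordinates
```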

    2D and 3D Pointing Device Based on a Passive Lights Detection Operation Method Using One Camera

    Systems for surface-free pointing and/or command input include a computing device operably linked to an imaging device. The imaging device can be any suitable video recording device, including a conventional webcam. At least one pointing/input device is provided, including first, second, and third sets of actuable light sources, wherein at least the first and second sets emit differently colored light. The imaging device captures one or more sequential image frames, each including a view of a scene containing the activated light sources. One or more computer program products calculate a two-dimensional or three-dimensional position and/or a motion and/or an orientation of the pointing/input device in the captured image frames by identifying a two-dimensional or three-dimensional position of the activated light sources of the first, second, and/or third sets of light sources. Certain activation patterns of light sources are mapped to particular pointing and/or input commands.
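    As a hedged sketch of the idea in this abstract, and not the patented implementation, the code below detects which colored light sets are lit in a captured frame, computes their 2D centroids, and maps the activation pattern to a command; the HSV ranges and command table are assumptions.

```python
# Sketch: identify lit colour sets in a frame, return their 2D centroids,
# and map the activation pattern to a command. Ranges and commands are assumed.
import cv2
import numpy as np

COLOR_RANGES = {                            # assumed HSV ranges per LED set
    "set1_red":   ((0, 150, 150), (10, 255, 255)),
    "set2_green": ((50, 150, 150), (70, 255, 255)),
}
COMMANDS = {                                # assumed mapping of lit sets to commands
    frozenset({"set1_red"}):               "move_pointer",
    frozenset({"set1_red", "set2_green"}): "click",
}

def detect_lights(frame_bgr):
    """Return {set_name: (x, y)} centroids of each colour set detected as lit."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    found = {}
    for name, (lo, hi) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, lo, hi)
        ys, xs = np.nonzero(mask)
        if len(xs) > 20:                    # enough bright pixels to count as "on"
            found[name] = (int(xs.mean()), int(ys.mean()))
    return found

def interpret(frame_bgr):
    """Return (command or None, detected light positions) for one frame."""
    lights = detect_lights(frame_bgr)
    return COMMANDS.get(frozenset(lights)), lights
```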
