Scene Projection by Non-Linear Transforms to a Geo-Referenced Map for Situational Awareness

Abstract

Many transportation and surveillance cameras currently deployed in major cities are mounted close to the ground and capture scenes from a perspective point of view. Because these cameras have differing orientations, it can be difficult to follow an object of interest across multiple cameras in the same area, especially when compared with wide area aerial surveillance (WAAS). To address this problem, this research provides a method to non-linearly transform existing perspective camera views into real-world coordinates that can be placed on a map. Using a perspective transformation, each perspective view is warped into an approximate WAAS view and placed on the map. All images then lie on the same plane, allowing a user to follow an object of interest across several camera views on a map. While these transformed images will not fit every feature of the map as true WAAS imagery would, the most important elements of a scene (e.g., roads, cars, people, sidewalks) remain accurate enough to give the user situational awareness. Our algorithm performed successfully when tested on cameras from the downtown area of Dayton, Ohio.
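The core operation the abstract describes, mapping a ground-level perspective view onto a map plane, can be modeled as a planar perspective transformation (homography). The paper does not publish code, so the sketch below is only an illustration of the general technique: it estimates a 3×3 homography from four hypothetical pixel-to-map correspondences via the direct linear transform (DLT) and then projects an image point into map coordinates. All point values and function names are invented for the example.

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Estimate a 3x3 planar homography H mapping src_pts -> dst_pts
    via the direct linear transform (DLT), given >= 4 correspondences."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two linear constraints on h.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of A with the
    # smallest singular value (null-space direction of A h = 0).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

def project(H, pt):
    """Apply the homography to one (x, y) pixel; divide out the
    homogeneous scale to get map-plane coordinates."""
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return u / w, v / w

# Hypothetical calibration: four pixel corners of a road patch in one
# camera view, and the map coordinates (e.g., local meters) where
# those corners should land.
pixels = [(100, 400), (540, 400), (620, 220), (60, 220)]
map_xy = [(0, 0), (10, 0), (10, 30), (0, 30)]

H = fit_homography(pixels, map_xy)
print(project(H, (100, 400)))  # lands at approximately (0.0, 0.0)
```

In practice each camera would be calibrated once against the map using landmarks visible in both (road corners, crosswalks), after which the same homography warps every frame onto the shared map plane.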
