2 research outputs found

    3D visualization of in-flight recorded data.

    Humans acquire information more easily by seeing an object than by reading a description of it. The brain stores the images the eyes perceive, and through mental mapping people can later analyze that information in the imagination. This is why visualization is important and powerful: it helps people remember the scene later. Visualization transforms the symbolic into the geometric, enabling researchers to observe their simulations and computations (Flurchick, 2001). Consequently, many computer scientists and programmers devote their time to building better visualizations of data for users. Flight data from an aircraft is better understood through 3D computer graphics than through mere numbers. The flight data consist of several fields, such as elapsed time, latitude, longitude, altitude, ground speed, roll angle, pitch angle, heading, and wind speed. With these variables, filtering is the first step of the visualization process, gathering the important information. The collection of processed data is then transformed into 3D graphics for rendering by generating Keyhole Markup Language (KML) files in the system. KML is an XML grammar and file format for modeling and storing geographic features such as points, lines, images, polygons, and models for display in Google Earth or Google Maps. Like HTML, KML has a tag-based structure with names and attributes used for specific display purposes. In the present work, new approaches to visualizing flight using Google Earth are developed. Because of the limitations of the Google Earth API, the great-circle distance calculation and trigonometric functions are implemented to handle the position, the roll and pitch angles, and a range of camera positions that generate several points of view. Currently, visual representation of flight data relies on 2D graphics even though an aircraft flies in 3D space.
The graphical interface allows flight analysts to create ground traces in 2D, and flight ribbons and flight paths with altitude in 3D. Additionally, by incorporating weather information, fog and clouds can be generated as part of the animation effects. With a 3D stereoscopic technique, a realistic visual representation of the flights is realized.
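The two building blocks named in the abstract, the great-circle distance calculation and KML generation for a flight path, can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names, the haversine formulation, and the single-Placemark KML layout are all assumptions for the example.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius (assumed constant)

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def flight_path_kml(points):
    """Emit a minimal KML LineString from (lon, lat, alt_m) tuples.

    KML coordinates are longitude,latitude,altitude; altitudeMode
    'absolute' interprets altitude as meters above sea level, which is
    how a flight path with altitude would display in Google Earth.
    """
    coords = " ".join(f"{lon},{lat},{alt}" for lon, lat, alt in points)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        "  <Placemark>\n"
        "    <LineString>\n"
        "      <altitudeMode>absolute</altitudeMode>\n"
        f"      <coordinates>{coords}</coordinates>\n"
        "    </LineString>\n"
        "  </Placemark>\n"
        "</kml>\n"
    )
```

A 2D ground trace would use the same KML with the altitude component dropped (or `altitudeMode` set to `clampToGround`); the camera viewpoints described in the abstract would add `LookAt` elements computed with the same trigonometry.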

    Face recognition using statistical adapted local binary patterns.

    Biometrics is the study of methods of recognizing humans based on their behavioral and physical characteristics or traits. Face recognition is one of the biometric modalities that has received a great amount of attention from many researchers during the past few decades because of its potential applications in a variety of security domains. Face recognition, however, is not only concerned with recognizing human faces, but also with recognizing faces of non-biological entities, or avatars. The need for secure and affordable virtual worlds is attracting the attention of many researchers who seek fast, automatic, and reliable ways to identify virtual-world avatars. In this work, I propose new techniques for recognizing avatar faces, which can also be applied to recognizing human faces. The proposed methods are based mainly on a well-known and efficient local texture descriptor, the Local Binary Pattern (LBP). I apply different versions of LBP, such as Hierarchical Multi-scale Local Binary Patterns and Adaptive Local Binary Pattern with Directional Statistical Features, in the wavelet space and discuss the effect of this application on the performance of each LBP version. In addition, I use a new version of LBP called the Local Difference Pattern (LDP), together with other well-known descriptors and classifiers, to differentiate between human and avatar face images. The original LBP achieves a high recognition rate when the tested images are clean, but its performance degrades when the images are corrupted by noise. To deal with this problem, I propose a new definition of the original LBP in which the descriptor does not threshold all the neighborhood pixels against the central pixel value. Instead, a weight is computed for each pixel in the neighborhood, a new value is calculated for each pixel, and simple statistical operations are then used to compute a new threshold, which changes automatically based on the pixel values.
This threshold can be applied with the original LBP or any other version of LBP, and can be extended to work with the Local Ternary Pattern (LTP) or any of its versions, producing different LTP variants for recognizing noisy avatar and human face images.
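The core LBP operation described above can be sketched for a single 3x3 neighborhood as follows. The abstract does not give the exact weighting scheme, so this example shows only the general idea: the original LBP thresholds neighbors against the center pixel, while passing a statistic of the neighborhood (here, the mean) illustrates how an automatically computed threshold could replace the center value. The function name and patch layout are illustrative assumptions.

```python
import numpy as np

def lbp_code(patch, threshold=None):
    """8-neighbor LBP code for a 3x3 patch.

    With threshold=None this is the original LBP: each neighbor is
    compared against the center pixel. Passing a statistic of the
    neighborhood (e.g. its mean) instead of relying on the center
    illustrates the adaptive, statistically derived threshold idea;
    the actual method in the work computes per-pixel weights first.
    """
    assert patch.shape == (3, 3)
    t = patch[1, 1] if threshold is None else threshold
    # neighbors read clockwise starting from the top-left corner
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= t:          # binary test against the chosen threshold
            code |= 1 << bit
    return code

patch = np.array([[10, 20, 30],
                  [40, 25, 60],
                  [70, 80, 90]])
original = lbp_code(patch)                           # center-thresholded
adaptive = lbp_code(patch, threshold=patch.mean())   # statistical threshold
```

Because the statistical threshold depends on the whole neighborhood rather than a single (possibly noise-corrupted) center pixel, the resulting code is less sensitive to isolated noisy values, which is the motivation stated in the abstract.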