
    Object reconstruction using close-range all-round digital photogrammetry for applications in industry

    Bibliography: p. 66-68.
    Photogrammetry has many inherent advantages in engineering and industrial applications, including the ability to obtain accurate, non-contact measurements from data rapidly acquired with the object in situ. Digital photogrammetry adds the potential to automate or semi-automate many of the conventional photogrammetric procedures, leading to real-time or near real-time measurement capabilities. All-round surface measurement of an object, however, usually benefits less from these advantages: to obtain the necessary imagery from all sides of the object, real-time processing is nearly impossible, and it becomes difficult to avoid moving the object, thus precluding in situ measurement. Even so, all-round digital photogrammetry, and the procedure presented here in particular, still offers advantages over other methods of full surface measurement, including rapid, non-contact data acquisition and the ability to store and reprocess data at a later date. Conventional or topographic photogrammetry is well established as a tool for mapping simple terrain surfaces and for acquiring accurate 3-D point data, but the complexities of all-round photogrammetry make many of the standard photogrammetric methods all but redundant. The work presented in this thesis was aimed at the development of a reliable method of obtaining complete surface data of an object with non-topographic, all-round, close-range digital photogrammetry. A method was developed to improve the integrity of the data, and possibilities for the presentation and visualisation of the data were explored. The potential for automation was considered important, as was the need to keep the overall time required to a minimum.
    A measurement system was developed to take an object as input and produce as output an accurate, representative point cloud, allowing for the reconstruction of the surface. This system included the following procedures:
    - a novel technique for achieving high-accuracy system pre-calibration using a cubic control frame and fixed camera stations,
    - separate image capture for the control frame and the object,
    - surface sub-division and all-round step-wise image matching to produce a comprehensive 3-D data set,
    - point cloud refinement, and
    - surface reconstruction by separate surface generation.
    The development and reliability of these new approaches are discussed and investigated, and the results of various test procedures are presented. The technique of system pre-calibration involved the use of a mechanical device - a rotary table - to impart precisely repeatable rotations to the control frame and, separately, to the object. The repeatability was tested and excellent results achieved, with standard deviations for the resected camera station coordinates of between 0.05 and 0.5 mm. In a detailed test case, actual rotations differed from the desired rotations by an average of 0.7" with a standard deviation of less than 2'. The image matching for the test case, from a set of forty-eight images, achieved a satisfactory final accuracy, comparable to that achieved in other similar work. The meaningful reconstruction of surfaces presented problems, although an acceptable rendering was achieved, and a thorough survey of current commercially available software failed to produce a package capable of all-round modelling from random 3-D data.
    The final analysis of the results indicated that digital photogrammetry, and this method in particular, are highly suited to accurate all-round surface measurement. The method offers great potential for automation - and therefore for near real-time results - in the stages of image acquisition and processing, calibration, image matching and data visualisation, and thus lends itself to industrial applications. However, a robust and rapid method of surface reconstruction is still needed.
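
    A minimal sketch of the kind of repeatability check described above: given repeated resections of the same nominal camera station, compute the per-axis standard deviation of its coordinates. The data layout and values are invented for illustration; the thesis does not prescribe any particular implementation.

    // Sketch: per-axis repeatability statistics for resected camera stations.
    // Hypothetical data layout and values; for illustration only.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Station { double x, y, z; };  // resected camera station coordinates (mm)

    // Sample standard deviation of one coordinate axis across repeated rotations.
    double stddev(const std::vector<double>& v) {
        double mean = 0.0;
        for (double s : v) mean += s;
        mean /= v.size();
        double ss = 0.0;
        for (double s : v) ss += (s - mean) * (s - mean);
        return std::sqrt(ss / (v.size() - 1));
    }

    int main() {
        // Repeated resections of the same nominal station (hypothetical values).
        std::vector<Station> reps = {
            {1000.12, 250.08, 499.95},
            {1000.09, 250.11, 500.02},
            {1000.15, 250.05, 499.98},
        };
        std::vector<double> xs, ys, zs;
        for (const auto& s : reps) { xs.push_back(s.x); ys.push_back(s.y); zs.push_back(s.z); }
        std::printf("sd(x)=%.3f sd(y)=%.3f sd(z)=%.3f mm\n",
                    stddev(xs), stddev(ys), stddev(zs));
    }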

    eCulture: examining and quantifying cultural differences in user acceptance between Chinese and British web site users

    A thesis submitted for the degree of Doctor of Philosophy of the University of Luton.
    The World Wide Web (WWW) has become an important medium for communication between people all over the world. It is regarded as a global system and is associated with a wide diversity of users and social systems. The effects of differing user groups and their associated cultures on user acceptance of web sites can be significant; as a result, understanding the behaviour of web users in various cultures is becoming a significant concern. The eCulture research project builds on classical theories and previous research in culture. It applies a factorial experimental design strategy (the Taguchi method) to cross-cultural usability / acceptability, together with other approaches such as semiotic analysis and card sorting. Two types of analysis, top-down and bottom-up, were implemented to investigate differences in web site usability and acceptability between users from Mainland China and the United Kingdom. From experiments on web sites investigating the relationship between cultural issues and usability / acceptability aspects for Chinese and British web users, several issues emerged, including cultural factors, cognitive abilities and social semiotic differences. One of the goals has been to develop 'cultural fingerprints' for both web sites and users in different cultures. By comparing cultural and site fingerprints, the usability and acceptability of web sites can be diagrammatically matched to the target culture. Experiments investigating qualitative factors, together with quantitative data collection and analysis based on the Taguchi method, have led to the successful development of two versions of 'cultural fingerprint' for both web sites and target cultures in the UK and China. It has been possible to relate these studies to a wider body of knowledge, and to suggest ways in which the work may be extended in the future.
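
    For readers unfamiliar with the Taguchi method mentioned above, the sketch below shows the core of its analysis on a small L4(2^3) orthogonal array: each run's response is converted to a "larger-is-better" signal-to-noise ratio, and the S/N is averaged per factor level to rank factor influence. The factors and scores are hypothetical; the thesis's actual experimental design and data are not reproduced here.

    // Sketch: "larger-is-better" signal-to-noise analysis for a Taguchi
    // L4(2^3) orthogonal array. Factor names and scores are invented.
    #include <cmath>
    #include <cstdio>

    int main() {
        // L4 orthogonal array: 4 runs x 3 two-level factors (0/1),
        // e.g. layout, colour scheme, navigation style (hypothetical).
        const int L4[4][3] = { {0,0,0}, {0,1,1}, {1,0,1}, {1,1,0} };
        // Mean acceptance score per run (hypothetical, higher is better).
        const double y[4] = { 6.2, 7.1, 5.4, 6.8 };

        // Larger-is-better S/N per run: -10 log10((1/n) sum 1/y^2),
        // which with one observation per run reduces to -10 log10(1/y^2).
        double sn[4];
        for (int r = 0; r < 4; ++r) sn[r] = -10.0 * std::log10(1.0 / (y[r] * y[r]));

        // Average S/N per factor level; the gap between levels ranks influence.
        for (int f = 0; f < 3; ++f) {
            double lvl[2] = {0.0, 0.0};
            int cnt[2] = {0, 0};
            for (int r = 0; r < 4; ++r) { lvl[L4[r][f]] += sn[r]; ++cnt[L4[r][f]]; }
            std::printf("factor %d: level0 S/N=%.2f dB, level1 S/N=%.2f dB\n",
                        f, lvl[0] / cnt[0], lvl[1] / cnt[1]);
        }
    }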

    A generative graph template toolkit (GRATe-TK) for C++

    This study provides an investigation and discussion of the construction of a graph toolkit using generative programming techniques. It defines and analyses directed graph representations and identifies techniques that may be used to discover new representations. Each representation is classified according to a unique classification system, which makes it possible to identify a particular type of graph representation unambiguously. A naming convention, a direct by-product of the classification system, is used when identifying each graph representation. Details of how to implement the graph toolkit are presented and analysed, and the toolkit is critically compared with other major toolkits currently available, such as the Boost and LEDA graph toolkits.
    Dissertation (MSc)--University of Pretoria, 2010. Computer Science.
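
    A sketch of the generative idea behind such a toolkit, assuming a simple policy-based design: the directed-graph representation is selected at compile time through a template parameter. This illustrates the technique only and is not GRATe-TK's actual interface.

    // Sketch: compile-time selection of a directed-graph representation
    // via a template (policy) parameter. Interface is hypothetical.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct AdjacencyList {           // space-efficient for sparse graphs
        std::vector<std::vector<std::size_t>> out;
        explicit AdjacencyList(std::size_t n) : out(n) {}
        void add_edge(std::size_t u, std::size_t v) { out[u].push_back(v); }
        bool has_edge(std::size_t u, std::size_t v) const {
            for (std::size_t w : out[u]) if (w == v) return true;
            return false;
        }
    };

    struct AdjacencyMatrix {         // O(1) edge tests for dense graphs
        std::vector<std::vector<bool>> m;
        explicit AdjacencyMatrix(std::size_t n) : m(n, std::vector<bool>(n, false)) {}
        void add_edge(std::size_t u, std::size_t v) { m[u][v] = true; }
        bool has_edge(std::size_t u, std::size_t v) const { return m[u][v]; }
    };

    template <typename Rep>          // the representation is a generative parameter
    class Digraph {
        Rep rep_;
    public:
        explicit Digraph(std::size_t n) : rep_(n) {}
        void add_edge(std::size_t u, std::size_t v) { rep_.add_edge(u, v); }
        bool has_edge(std::size_t u, std::size_t v) const { return rep_.has_edge(u, v); }
    };

    int main() {
        Digraph<AdjacencyList>   sparse(4);   // same interface,
        Digraph<AdjacencyMatrix> dense(4);    // different generated representation
        sparse.add_edge(0, 2);
        dense.add_edge(0, 2);
        std::cout << sparse.has_edge(0, 2) << ' ' << dense.has_edge(0, 2) << '\n';
    }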

    Segmentation of brain x-ray CT images using seeded region growing

    Includes bibliographical references.
    Three problems are addressed in this dissertation: intracranial volume extraction, noise suppression and automated segmentation of X-ray Computerized Tomography (CT) images. The segmentation scheme is based on a seeded region growing algorithm. The intracranial volume extraction is based on image symmetry, and the noise suppression filter is based on the Gaussian nature of the tissue distribution; both are essential in achieving good segmentation results. Simulated phantoms and real medical images were used in the testing and development of the algorithms. The testing was done over a wide range of noise values, object sizes and mean object grey levels. All the methods were implemented first in two and then in three dimensions; the 3-D implementation also included an investigation into volume formation and the advantages of 3-D processing. The results of the intracranial extraction showed that 9% of the data in the relevant grey-level range consisted of unwanted scalp (the scalp is spatially not part of the intracranial volume, but has the same grey-level values), which justified extracting the intracranial volume before further processing. For phantom objects larger than 741.51 mm³ (voxel resolution 0.48 mm x 0.48 mm x 2 mm) and having a mean grey-level distance of 10 from any other object, a maximum segmentation volume error of 15% was achieved.
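
    A minimal sketch of seeded region growing in two dimensions, assuming 4-connectivity and a fixed grey-level tolerance around the seed value; the dissertation's symmetry-based intracranial extraction and Gaussian noise filtering are omitted here.

    // Sketch: 2-D seeded region growing with 4-connectivity and a fixed
    // grey-level tolerance. Parameters and data are illustrative only.
    #include <cstdio>
    #include <cstdlib>
    #include <queue>
    #include <utility>
    #include <vector>

    std::vector<std::vector<bool>> grow(const std::vector<std::vector<int>>& img,
                                        int sr, int sc, int tol) {
        const int rows = img.size(), cols = img[0].size();
        std::vector<std::vector<bool>> in(rows, std::vector<bool>(cols, false));
        std::queue<std::pair<int, int>> q;
        q.push({sr, sc});
        in[sr][sc] = true;
        const int seed = img[sr][sc];               // region membership criterion
        const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
        while (!q.empty()) {
            auto [r, c] = q.front(); q.pop();
            for (int k = 0; k < 4; ++k) {           // visit 4-connected neighbours
                int nr = r + dr[k], nc = c + dc[k];
                if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                if (in[nr][nc] || std::abs(img[nr][nc] - seed) > tol) continue;
                in[nr][nc] = true;                  // pixel joins the region
                q.push({nr, nc});
            }
        }
        return in;
    }

    int main() {
        // Tiny test image: low-grey "tissue" region next to a brighter one.
        std::vector<std::vector<int>> img = {
            {10, 11, 50, 52},
            {12, 10, 51, 50},
            { 9, 11, 10, 49},
        };
        auto region = grow(img, 0, 0, 5);           // seed in the low-grey region
        for (const auto& row : region) {
            for (bool b : row) std::printf("%d", b);
            std::printf("\n");
        }
    }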

    Classic data structures in C++

    xviii, 543 p. : ill.