
    Fast OBJ file importing and parsing in CUDA

    Alias-Wavefront OBJ meshes are a common text file format for transferring 3D mesh data between applications from different vendors. However, as meshes grow more complex and dense, the files become larger and slower to import. This paper explores the use of GPUs to accelerate the importing and parsing of OBJ files by studying file read time, runtime, and load resistance. We propose a new method of reading and parsing that circumvents GPU architecture limitations and improves performance: the new GPU method outperforms CPU methods with a 6x-8x speedup. When running on a heavily loaded system, the new method received only an 80% performance hit, compared to the 160% hit that the CPU methods received. The loaded GPU speedup was 3.5x compared to unloaded CPU methods and 8x compared to loaded CPU methods. These results demonstrate that the time is right for further research into data-parallel GPU acceleration beyond computer graphics and high-performance computing.
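
    The abstract does not detail the GPU method itself; a common data-parallel structure for this problem is a two-pass scheme that first indexes record boundaries and then parses every record independently. The sketch below illustrates that idea in Python/NumPy on the CPU, standing in for a CUDA kernel with one thread per line; it is illustrative only, not the paper's implementation.

    ```python
    import numpy as np

    # Two-pass, data-parallel OBJ parsing sketch (illustrative only):
    # pass 1 finds record boundaries, pass 2 parses each record
    # independently -- the structure a CUDA kernel would use with one
    # thread per line.
    def parse_obj_vertices(path):
        buf = np.fromfile(path, dtype=np.uint8)
        # Pass 1: index newline positions (trivially parallel on a GPU).
        newlines = np.flatnonzero(buf == ord("\n"))
        starts = np.concatenate(([0], newlines + 1))
        ends = np.append(newlines, len(buf))
        # Pass 2: every line is now an independent parsing task.
        verts = []
        for s, e in zip(starts, ends):
            line = buf[s:e].tobytes().decode("ascii", "replace")
            if line.startswith("v "):  # geometric vertex record
                verts.append([float(x) for x in line.split()[1:4]])
        return np.asarray(verts, dtype=np.float32)
    ```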

    Research on generic interactive deformable 3D models: focus on the human inguinal region

    The goal of this project is to research real-time approximate methods of physically based animation, applied to static polygonal meshes, with the aim of deforming them and simulating elastic behaviour. To this end, a software suite has been developed that is capable of performing many tasks, each drawn from a different computer graphics research field, making for a versatile project.
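
    Since the abstract does not name the deformation model, the sketch below shows one common real-time approximation for elastic meshes, a mass-spring system advanced with explicit Euler integration. All names and constants are hypothetical; this is not necessarily the thesis's method.

    ```python
    import numpy as np

    # Hypothetical mass-spring sketch: a common real-time approximation
    # for elastic deformation of a polygonal mesh.
    def step(pos, vel, edges, rest_len, k=50.0, mass=1.0, damping=0.98, dt=1e-3):
        """Advance the mass-spring system by one explicit-Euler step."""
        force = np.zeros_like(pos)
        for (i, j), L0 in zip(edges, rest_len):
            d = pos[j] - pos[i]
            L = np.linalg.norm(d)
            if L > 1e-12:
                f = k * (L - L0) * (d / L)  # Hooke's law along the edge
                force[i] += f
                force[j] -= f
        vel = damping * (vel + dt * force / mass)
        return pos + dt * vel, vel

    # Tiny example: a two-spring chain stretched out of rest.
    pos = np.array([[0.0, 0.0], [1.2, 0.0], [2.0, 0.0]])
    vel = np.zeros_like(pos)
    for _ in range(1000):
        pos, vel = step(pos, vel, edges=[(0, 1), (1, 2)], rest_len=[1.0, 1.0])
    ```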

    Just-in-time Analytics Over Heterogeneous Data and Hardware

    Industry and academia are continuously becoming more data-driven and data-intensive, relying on the analysis of a wide variety of datasets to gain insights. At the same time, data variety increases continuously across multiple axes. First, data comes in multiple formats, such as the binary tabular data of a DBMS, raw textual files, and domain-specific formats. Second, different datasets follow different data models, such as the relational and the hierarchical model. Data location also varies: some datasets reside in a central "data lake", whereas others lie in remote data sources. In addition, users execute widely different analysis tasks over all these data types. Finally, the process of gathering and integrating diverse datasets introduces several inconsistencies and redundancies in the data, such as duplicate entries for the same real-world concept. In summary, heterogeneity significantly affects the way data analysis is performed. In this thesis, we aim for data virtualization: abstracting data out of its original form and manipulating it regardless of the way it is stored or structured, without a performance penalty. To achieve data virtualization, we design and implement systems that i) mask heterogeneity through the use of heterogeneity-aware, high-level building blocks and ii) offer fast responses through on-demand adaptation techniques. Regarding the high-level building blocks, we use a query language and algebra to handle multiple collection types, such as relations and hierarchies, to express transformations between these collection types, and to express complex data cleaning tasks over them. In addition, we design a location-aware compiler and optimizer that masks away the complexity of accessing multiple remote data sources. Regarding on-demand adaptation, we present a design that produces a new system per query. The design uses customization mechanisms that trigger runtime code generation to mimic the system most appropriate for answering a query fast: query operators are created based on the query workload and the underlying data models, and the data access layer is created based on the underlying data formats. In addition, we exploit emerging hardware by customizing the system implementation based on the available heterogeneous processors (CPUs and GPGPUs), pairing each workload with its ideal processor type. The end result is a just-in-time database system that is specific to the query, data, workload, and hardware instance. This thesis redesigns the data management stack to natively cater for data heterogeneity and exploit hardware heterogeneity. Instead of centralizing all relevant datasets, converting them to a single representation, and loading them into a monolithic, static, suboptimal system, our design embraces heterogeneity. Overall, our design decouples the type of analysis performed from the original data layout; users can perform their analysis across data stores, data models, and data formats, while experiencing the performance of a custom system built on demand to serve their specific use case.
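
    The "new system per query" idea can be illustrated with a toy operator generator: the predicate and the data access path are inlined into freshly generated source code, which is then compiled and executed. This is a hypothetical sketch that uses Python's exec in place of the low-level code generation the thesis describes.

    ```python
    # Toy on-demand operator generation: specialize a scan for one query,
    # compile it at runtime, and run it. Hypothetical sketch only; the
    # thesis generates low-level code, not Python.
    def generate_scan(column, predicate_src):
        src = f"""
def scan(rows):
    out = []
    for row in rows:
        value = float(row[{column!r}])  # access layer baked in for dict rows
        if {predicate_src}:             # predicate inlined, no interpretation
            out.append(row)
    return out
"""
        namespace = {}
        exec(compile(src, "<generated>", "exec"), namespace)
        return namespace["scan"]

    rows = [{"price": "9.5"}, {"price": "12.0"}, {"price": "3.2"}]
    scan = generate_scan("price", "value > 5.0")  # query-specific operator
    print(scan(rows))  # [{'price': '9.5'}, {'price': '12.0'}]
    ```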

    Abstraction Raising in General-Purpose Compilers


    Development of an object detection and mask generation software for dynamic beam projection in automotive pixel lighting applications

    Nowadays there are many contributions to the automotive industry and the field is developing fast. This work can be used for some real-time autonomous driving applications. The goal was to add advanced functionality to a standard light source in collaboration with electronic systems. Including advanced features may result in safer and more pleasant driving. The application fields of the work include glare-free light sources, orientation and lane lights, marking lights, and symbol projection. Object detection and classification with a confidence score is implemented on a real-time source. The best model was obtained by training with varying parameters: the most accurate result, an mAP of 0.572, was reached with a learning rate of 0.2 over 300 epochs. Moreover, a basic implementation of a glare-free light source was developed to keep drivers from being blinded by the illumination of the beams. Car-shaped and rectangle-shaped masks were generated as image files and sent as CSV files to the pixel light source device. As a result, the rectangle-shaped mask functions more precisely than the car-shaped one.
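
    A minimal sketch of the masking step described above (detected boxes dimmed out of a pixel grid, then exported as CSV for the light source) could look as follows; the grid resolution, box format, and file layout are assumptions, since the abstract does not specify them.

    ```python
    import csv
    import numpy as np

    # Hypothetical glare-free masking sketch: dim the pixels covering each
    # detected vehicle, then export the mask grid as CSV for the
    # pixel-light device. Grid size and box format are assumed.
    GRID_H, GRID_W = 64, 256  # assumed resolution of the pixel light source

    def boxes_to_mask(boxes, grid_h=GRID_H, grid_w=GRID_W):
        """boxes: (x0, y0, x1, y1) tuples in normalized [0, 1] coordinates."""
        mask = np.ones((grid_h, grid_w), dtype=np.uint8)  # 1 = lit
        for x0, y0, x1, y1 in boxes:
            r0, r1 = int(y0 * grid_h), int(np.ceil(y1 * grid_h))
            c0, c1 = int(x0 * grid_w), int(np.ceil(x1 * grid_w))
            mask[r0:r1, c0:c1] = 0  # 0 = dimmed (glare-free region)
        return mask

    def write_mask_csv(mask, path):
        with open(path, "w", newline="") as f:
            csv.writer(f).writerows(mask.tolist())

    # One detected car near the centre of the field of view.
    write_mask_csv(boxes_to_mask([(0.4, 0.3, 0.6, 0.8)]), "mask.csv")
    ```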

    Establishing beneficial roles: integrating community members into archaeological practices in Atlantic Canada

    This master’s research seeks to understand how working with undocumented collections, private artifact collectors and avocational archaeologists benefits the field of archaeology. Governmental policies related to private artifact collecting and avocational archaeology in Atlantic Canada are examined. By revisiting these policies and legislation, archaeologists and non-archaeologists can begin discussing what roles private collectors and avocational archaeologists have to play in professional archaeological methodology and interpretation. Case studies are presented from Canada, England, Taiwan, the United States of America and Wales to demonstrate the significant contribution that responsible private collectors, avocational archaeologists and community museums make to the global archaeological record. The need for collaboration between professional archaeologists and non-archaeologists is heavily emphasized, based on the need to improve the discipline methodologically, theoretically and ethically. Fieldwork and museum work were completed in North West River, Labrador, to produce 3D models of Tshiashinnu artifacts and to demonstrate what role community museums can play in collaborating with archaeologists. The results of this research demonstrate that the most effective 3D scanning or photogrammetry tools that a non-archaeologist can use to produce 3D models are user-friendly, affordable Android or iOS applications. Statistical data was collected from an online survey to gather information on different stakeholder positions regarding the involvement of private collectors and avocational archaeologists in the documentation of cultural material and heritage sites. The survey questionnaire comprised 14 multiple-choice questions and drew 171 participants. Over 94% of survey respondents answered that they were in favour of collaborating with private artifact collectors, demonstrating that far more contemporary archaeologists favour such collaboration than oppose it. By collaborating with non-archaeologists such as responsible private artifact collectors and institutions like private museums, and by using 3D scanning technologies, archaeologists can help digitize and document archaeological collections held in private collections or community museums. We can then make these collections more accessible and share them with wider audiences. This benefits archaeology as it becomes more inclusive to those who are not trained in archaeological methodology or theory.

    Proceedings, MSVSCC 2018

    Proceedings of the 12th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 19, 2018 at VMASC in Suffolk, Virginia. 155 pp.

    Towards A Practical High-Assurance Systems Programming Language

    Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity to the task. It requires considerable expertise in both systems programming and formal verification, and the development can be extremely costly due to the sheer complexity of the systems and the nuances in them, if not assisted by appropriate tools that provide abstraction and automation. Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code. To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users with a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers with the verification process.
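
    The property-based testing idea mentioned above can be illustrated with an off-the-shelf PBT library: generate random inputs and check that a low-level implementation agrees with its purely functional specification. The sketch below uses Python's hypothesis rather than Cogent's actual tooling, and both functions are stand-ins.

    ```python
    from hypothesis import given, strategies as st

    # Illustration of the refinement property that Cogent-style PBT checks:
    # the implementation must agree with its functional specification on
    # every generated input. Both functions are stand-ins.
    def spec_sum(xs):
        """Purely functional specification."""
        return sum(xs)

    def impl_sum(xs):
        """'Low-level' implementation with explicit state, as C code has."""
        acc, i = 0, 0
        while i < len(xs):
            acc += xs[i]
            i += 1
        return acc

    @given(st.lists(st.integers()))
    def test_impl_refines_spec(xs):
        assert impl_sum(xs) == spec_sum(xs)  # run with pytest
    ```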