    A novel parallel algorithm for surface editing and its FPGA implementation

    Get PDF
    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.

    Surface modelling and editing is one of the important subjects in computer graphics. Decades of computer graphics research have addressed both low-level, hardware-related algorithms and high-level, abstract software. Computer graphics has succeeded in many application areas, such as multimedia, visualisation, virtual reality and the Internet. However, the hardware realisation of the OpenGL architecture on an FPGA (field programmable gate array) is beyond the scope of most computer graphics research. It is an uncultivated research area in which the OpenGL pipeline, from the hardware through the whole embedded system (ES) up to the applications, is implemented in an FPGA chip. This research proposes a hybrid approach that investigates both software and hardware methods. It aims to bridge the gap between software and hardware methods and to enhance the overall performance of computer graphics. It consists of four parts: the construction of an FPGA-based ES, a Mesa-based OpenGL implementation for FPGA-based ESs, parallel processing, and a novel algorithm for surface modelling and editing. First, the FPGA-based ES is built. In addition to the Nios II soft processor and DDR SDRAM memory, it comprises an LCD display device, frame buffers, a video pipeline, and algorithm-specific modules that support graphics processing. Since no OpenGL ES implementation is available for FPGA-based ESs, a specific OpenGL implementation based on Mesa is carried out. Because of the limited FPGA resources, the implementation adopts fixed-point arithmetic, which offers faster computation and lower storage than floating-point arithmetic while retaining accuracy that satisfies the needs of 3D rendering. Moreover, the implementation includes Bézier-spline curve and surface algorithms to support surface modelling and editing. Pipelined parallelism and co-processors are used to accelerate graphics processing in this research. These two parallelism methods extend traditional computational parallelism to fine-grained parallel tasks in FPGA-based ESs. The novel algorithm for surface modelling and editing, called the Progressive and Mixing Algorithm (PAMA), is proposed and implemented on FPGA-based ESs. Compared with the two main surface editing methods, subdivision and deformation, PAMA eliminates the large storage requirement and computing cost of intermediate processes. With four independent shape parameters, PAMA can freely model and edit the shape of an open or closed surface while globally maintaining zero-order geometric continuity. PAMA can be applied not only to FPGA-based ESs but also to other platforms. With its parallel processing, small size, and low computing, storage and power costs, the FPGA-based ES provides an effective hybrid solution to surface modelling and editing.
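    Two techniques named in this abstract lend themselves to a small illustration: fixed-point arithmetic as a cheaper substitute for floating point on a resource-limited FPGA soft processor, and Bézier curve evaluation for surface modelling. The C sketch below is not the thesis code; the Q16.16 format, the function names, and the use of de Casteljau's algorithm are assumptions made purely for illustration.

```c
/* Minimal sketch (not the thesis implementation): Q16.16 fixed-point
 * arithmetic plus cubic Bezier evaluation via de Casteljau's algorithm.
 * All names and the Q16.16 format choice are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

typedef int32_t fx;                    /* Q16.16 fixed-point value */
#define FX_ONE (1 << 16)               /* 1.0 in Q16.16            */

static fx fx_from_int(int v)        { return (fx)(v << 16); }
static fx fx_mul(fx a, fx b)        { return (fx)(((int64_t)a * b) >> 16); }
static fx fx_lerp(fx a, fx b, fx t) { return a + fx_mul(b - a, t); }

/* Evaluate one coordinate of a cubic Bezier curve at parameter t
 * (Q16.16, in [0,1]) by repeated linear interpolation: each step
 * needs only adds and a single multiply, which keeps the datapath
 * small on an FPGA. */
static fx bezier3(fx p0, fx p1, fx p2, fx p3, fx t) {
    fx a = fx_lerp(p0, p1, t), b = fx_lerp(p1, p2, t), c = fx_lerp(p2, p3, t);
    fx d = fx_lerp(a, b, t),   e = fx_lerp(b, c, t);
    return fx_lerp(d, e, t);
}

int main(void) {
    /* One coordinate of four control points, evaluated at t = 0.5 */
    fx y = bezier3(fx_from_int(0), fx_from_int(4),
                   fx_from_int(8), fx_from_int(0), FX_ONE / 2);
    printf("y(0.5) = %f\n", y / 65536.0);  /* expect 4.5 */
    return 0;
}
```

    Each interpolation step uses one 32x32-bit multiply and a shift, which suggests why a fixed-point formulation of this kind maps naturally onto the limited multiplier blocks of an FPGA, as the abstract argues.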

    Recent advances in describing and driving crystal nucleation using machine learning and artificial intelligence

    Full text link
    With the advent of faster computer processors and especially graphics processing units (GPUs) over the last few decades, the use of data-intensive machine learning (ML) and artificial intelligence (AI) has increased greatly, and the study of crystal nucleation has been one of the beneficiaries. In this review, we outline how ML and AI have been applied to address four outstanding difficulties of crystal nucleation: discovering better reaction coordinates (RCs) for accurately describing non-classical nucleation situations; developing more accurate force fields for describing the nucleation of multiple polymorphs or phases of a single system; providing more robust identification methods for determining crystal phases and structures; and yielding improved coarse-grained models for studying nucleation.

    Comment: 15 pages; 1 figure

    Four Decades of Computing in Subnuclear Physics - from Bubble Chamber to LHC

    Full text link
    This manuscript addresses selected aspects of computing for the reconstruction and simulation of particle interactions in subnuclear physics. Based on personal experience with experiments at DESY and at CERN, I cover the evolution of computing hardware and software from the era of track chambers, where interactions were recorded on photographic film, up to the LHC experiments with their millions of electronic channels.

    The importance of being accessible: The graphics calculator in mathematics education

    Get PDF
    The first decade of the availability of graphics calculators in secondary schools has just concluded, although evidence for this is easier to find in some countries and schools than in others, since there are gross socio-economic differences in both cases. It is now almost the end of the second decade since the invention of microcomputers and their appearance in mathematics educational settings. Most of the interest in technology for mathematics education has been concerned with microcomputers, but there has been a steady increase in interest in graphics calculators among students, teachers, curriculum developers and examination authorities, in growing recognition that accessibility of technology at the level of the individual student is the key factor in responding appropriately to technological change. The experience of the last decade suggests very strongly that mathematics teachers are well advised to pay more attention to graphics calculators than to microcomputers. There are clear signs that the commercial marketplace, especially in the United States, is acutely aware of this trend. It was recently reported that current US sales of graphics calculators are around six million units per year, and rising. There are now four major corporations developing products aimed directly at the high school market, all four producing graphics calculators of high quality and beginning to understand the educational needs of students and their teachers. To gauge this interest, I scanned a recent issue (April 1995) of The Mathematics Teacher, the NCTM journal focussed on high school mathematics. The evidence was very strong: of almost 20 full pages devoted to paid advertising, nine featured graphics calculators, only two featured computer products, and two more featured both computers and graphics calculators. The main purposes of this paper are to explain and justify this heightened level of interest in graphics calculators at the secondary school level, and to identify some of the resulting implications for mathematics education, both generally and in the South-East Asian region.

    Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    Full text link
    General purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard for reducing the risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute-force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks, and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.

    Comment: 13 pages, 5 figures, accepted for publication in PAS
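    The abstract's two central recommendations, the portable OpenCL standard over a vendor-specific environment and brute-force kernels over clever ones, can be made concrete with a short host-plus-kernel sketch in C. This is not code from the paper; the kernel, the buffer size, and the trivial scaling operation are illustrative assumptions, and error checking is omitted for brevity.

```c
/* Minimal OpenCL 1.x-style sketch of the "brute force" GPGPU approach:
 * one work-item per array element, no tiling or local-memory tricks.
 * Header is <OpenCL/opencl.h> on macOS; error checks are elided. */
#include <stdio.h>
#include <CL/cl.h>

/* Kernel source as a plain string, compiled at run time -- the
 * portability mechanism that makes OpenCL vendor-neutral. */
static const char *src =
    "__kernel void scale(__global float *x, const float a, const int n) {\n"
    "    int i = get_global_id(0);\n"
    "    if (i < n) x[i] *= a;\n"
    "}\n";

int main(void) {
    enum { N = 1 << 20 };
    static float data[N];
    for (int i = 0; i < N; ++i) data[i] = (float)i;

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof data, data, NULL);
    float a = 2.0f; int n = N;
    clSetKernelArg(k, 0, sizeof buf, &buf);
    clSetKernelArg(k, 1, sizeof a, &a);
    clSetKernelArg(k, 2, sizeof n, &n);

    size_t gsz = N;  /* one work-item per element: brute force */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &gsz, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);

    printf("data[3] = %f\n", data[3]);  /* expect 6.0 */
    return 0;
}
```

    The kernel assigns one work-item per element and relies on the GPU's raw throughput rather than algorithmic cleverness, which is the style the authors suggest as a low-risk starting point for early adopters; profiling and optimisation tools can then guide refinement where it matters.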