70 research outputs found

    Parallel algorithms for lattice QCD

    SIGLE record: available from the British Library Document Supply Centre (DSC:D81971 / BLDSC), United Kingdom

    Solution of partial differential equations on vector and parallel computers

    The present status of numerical methods for partial differential equations on vector and parallel computers was reviewed. The relevant aspects of these computers are discussed and a brief review of their development is included, with particular attention paid to those characteristics that influence algorithm selection. Both direct and iterative methods are given for elliptic equations as well as explicit and implicit methods for initial boundary value problems. The intent is to point out attractive methods as well as areas where this class of computer architecture cannot be fully utilized because of either hardware restrictions or the lack of adequate algorithms. Application areas utilizing these computers are briefly discussed
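
For context, relaxation schemes such as Jacobi iteration are among the iterative elliptic solvers that map well onto vector and parallel hardware, because every grid point is updated independently from the previous iterate. The following is a minimal, illustrative Python sketch for the 2D Poisson equation; it is not taken from the review, and the function name and grid setup are assumptions.

import numpy as np

def jacobi_step(u, f, h):
    # One Jacobi sweep for -laplace(u) = f on a square grid with spacing h.
    # Every interior point depends only on the previous iterate, so the
    # update vectorises and parallelises trivially.
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] +
                                h * h * f[1:-1, 1:-1])
    return u_new

# Example: 200 sweeps on a 65 x 65 grid with zero Dirichlet boundaries.
n, h = 65, 1.0 / 64
u, f = np.zeros((n, n)), np.ones((n, n))
for _ in range(200):
    u = jacobi_step(u, f, h)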

    Implementation and Evaluation of Algorithmic Skeletons: Parallelisation of Computer Algebra Algorithms

    This thesis presents design and implementation approaches for parallel algorithms of computer algebra. We use algorithmic skeletons as well as further approaches, such as data parallel arithmetic and actors. We have implemented skeletons for divide and conquer algorithms and for some special parallel loops that we call ‘repeated computation with a possibility of premature termination’. We introduce in this thesis a rational data parallel arithmetic. We focus on parallel symbolic computation algorithms; for these algorithms our arithmetic provides a generic parallelisation approach. The implementation is carried out in Eden, a parallel functional programming language based on Haskell. This choice enables us to encode both the skeletons and the programs in the same language. Moreover, it allows us to refrain from using two different languages, one for the implementation and one for the interface, in our implementation of computer algebra algorithms. Further, this thesis presents methods for evaluation and estimation of parallel execution times. We partition the parallel execution time into two components. One of them accounts for the quality of the parallelisation; we call it the ‘parallel penalty’. The other is the sequential execution time. For the estimation, we predict both components separately, using statistical methods. This enables very confident estimations, although using drastically fewer measurement points than other methods. We have applied both our evaluation and estimation approaches to the parallel programs presented in this thesis. We have also used existing estimation methods. We developed divide and conquer skeletons for the implementation of fast parallel multiplication. We have implemented the Karatsuba algorithm, Strassen’s matrix multiplication algorithm and the fast Fourier transform. The latter was used to implement polynomial convolution, which leads to a further fast multiplication algorithm. Specifically for our implementation of Strassen’s algorithm we have designed and implemented a divide and conquer skeleton based on actors. For the parallel fast Fourier transform we not only used new divide and conquer skeletons but also developed a map-and-transpose skeleton, which enables good parallelisation of the Fourier transform. The parallelisation of Karatsuba multiplication shows very good performance. We have analysed the parallel penalty of our programs and compared it to the serial fraction, an approach known from the literature. We also performed execution time estimations of our divide and conquer programs. This thesis presents a parallel map+reduce skeleton scheme. It allows us to combine the usual parallel map skeletons, such as parMap, farm and workpool, with a premature termination property. We use this to implement the so-called ‘parallel repeated computation’, a special form of a speculative parallel loop. We have implemented two probabilistic primality tests: the Rabin–Miller test and the Jacobi sum test. We parallelised both with our approach. We analysed the task distribution and determined fitting configurations for the Jacobi sum test. We have shown formally that the Jacobi sum test can be implemented in parallel. Subsequently, we parallelised it, analysed the load balancing issues, and produced an optimisation. The latter enabled a good implementation, as verified using the parallel penalty. We have also estimated the performance of the tests for further input sizes and numbers of processing elements.
Parallelisation of the Jacobi sum test and our generic parallelisation scheme for the repeated computation are our original contributions. The data parallel arithmetic was defined not only for integers, which is already known, but also for rationals. We handled the common factors of the numerator or denominator of the fraction with the modulus in a novel manner. This is required to obtain a true multiple-residue arithmetic, a novel result of our research. Using these mathematical advances, we have parallelised the determinant computation using Gaussian elimination. As always, we have performed task distribution analysis and estimation of the parallel execution time of our implementation. A similar computation in Maple emphasised the potential of our approach. Data parallel arithmetic enables parallelisation of entire classes of computer algebra algorithms. Summarising, this thesis presents and thoroughly evaluates new and existing design decisions for high-level parallelisations of computer algebra algorithms.
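
As an aside, the divide and conquer shape that such skeletons parallelise can be seen in Karatsuba multiplication. The following is only a sequential Python sketch of the recursion; the thesis itself implements this in Eden/Haskell, and the splitting threshold below is an arbitrary assumption.

def karatsuba(x: int, y: int) -> int:
    # Multiply non-negative integers with three recursive products instead of four.
    if x < 2**32 or y < 2**32:            # small operands: use the built-in multiply
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = divmod(x, 1 << m)        # x = hi_x * 2^m + lo_x
    hi_y, lo_y = divmod(y, 1 << m)
    a = karatsuba(hi_x, hi_y)
    b = karatsuba(lo_x, lo_y)
    c = karatsuba(hi_x + lo_x, hi_y + lo_y)
    return (a << (2 * m)) + ((c - a - b) << m) + b

# The three recursive calls are independent, which is exactly what a
# divide and conquer skeleton can evaluate in parallel.
assert karatsuba(2**200 + 12345, 2**180 + 67890) == (2**200 + 12345) * (2**180 + 67890)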

    A Parallel Processing Development System to Perform Automatic Correlation of Subsurface Petrophysical Logs. (Volumes I and II).

    A parallel processing framework for the investigation of three-dimensional subsurface geological problems is developed. The Floating Point Systems T-20 hypercube parallel computer system is used in the automatic correlation of petrophysical well logs. Parallel processing issues such as processor management, interprocessor communications management, memory management and algorithm development are discussed. Techniques are discussed for using spontaneous potential petrophysical logs to segregate subsurface strata into sand and shale segments. These segments are then used to generate sand/shale ratio logs for use in an automatic subsurface stratigraphic analysis system on a parallel machine. Phase one prepares petrophysical logs for stratigraphic analysis. The petrophysical logs are filtered and cleaned to remove any salinity effects and improper tool responses. The spontaneous potential log is used to segregate each log into individual sand and shale beds, and these beds are used to create sand and shale databases from which sand/shale ratio logs are created for later analysis. Phase two creates and solves a two-dimensional matching probability matrix. The theoretical foundation is developed and several implementation issues are discussed. These issues include: (1) how to select data points, (2) how to compare data values from two petrophysical logs, (3) how to view the data, and (4) how to find the optimal set of strata observing several constraints imposed by the nature of the subsurface. Phase three develops parallel processing techniques to handle a multi-dimensional matching probability matrix. Each petrophysical log is treated as an axis of a multi-dimensional matrix, with all points from one log being compared with all points from all other logs. A theoretical foundation is developed followed by implementation details. The data storage space required for this study was in excess of 10^56 bytes.
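
For illustration only, the sand/shale segregation of phase one can be imagined roughly as in the sketch below: classify spontaneous-potential (SP) samples against a cutoff between a shale baseline and a sand line, then report a sand/shale ratio per depth interval. The cutoff rule, names and numbers are assumptions for the sketch, not the dissertation's actual procedure.

def sand_shale_ratio(sp_log, shale_baseline, sand_line, interval):
    # Midpoint cutoff between the shale baseline and the sand line (an assumption).
    cutoff = (shale_baseline + sand_line) / 2.0
    ratios = []
    for start in range(0, len(sp_log), interval):
        window = sp_log[start:start + interval]
        # SP deflects away from the shale baseline opposite permeable (sand) beds.
        sand = sum(1 for v in window
                   if abs(v - shale_baseline) > abs(cutoff - shale_baseline))
        shale = len(window) - sand
        ratios.append(sand / shale if shale else float("inf"))
    return ratios

# Toy SP readings (millivolts) over eight depth samples, two intervals of four.
print(sand_shale_ratio([0, -5, -60, -70, -65, -2, -80, -75],
                       shale_baseline=0, sand_line=-80, interval=4))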

    Various Applications of Methods and Elements of Adaptive Optics

    This volume is focused on a wide range of topics, including adaptive optic components and tools, wavefront sensing, different control algorithms, astronomy, and propagation through turbulent and turbid media

    Design and development of a system for vario-scale maps

    Nowadays, there are many geo-information data sources available, such as maps on the Internet, in-car navigation devices and mobile apps. All datasets used in these applications are the same in principle, and face the same issues, namely: Maps of different scales are stored separately. With many separate fixed levels, a lot of information is the same, but still needs to be included, which leads to duplication. With redundant data throughout the scales, features are represented again and again, which may lead to inconsistency. Currently available maps contain significantly more levels of detail (twenty map scales on average) than in the past. These levels must be created, but the optimal strategy to do so is not known. For every user's data request, a significant part of the data remains the same, but still needs to be included. This leads to more data transfer and slower response. The interactive Internet environment is not used to its full potential for user navigation. It is common to observe lagging, popping features or flickering of a newly retrieved map scale while using the map. This research develops principles of variable-scale (vario-scale) maps to address these issues. The vario-scale approach is an alternative for obtaining and maintaining geographical data sets at different map scales. It is based on a specific topological structure called tGAP (topological Generalized Area Partitioning), which addresses the main open issues of current solutions for managing spatial data sets of different scales, such as redundant data, inconsistency between map scales and dynamic transfer. The objective of this thesis is to design, develop and extend the variable-scale data structures, expressed as the following research question: How to design and develop a system for vario-scale maps? To address this research question, the work has been conducted using the following outline: 1) Investigate the state of the art in map generalization. 2) Study the development of the vario-scale structure done so far. 3) Propose techniques for generating better vario-scale map content. 4) Implement strategies to process really massive datasets. 5) Research smooth representation of map features and their impact on user interaction. Results of our research led to new functionality, were addressed in prototype developments and were tested against real-world data sets. Throughout this research we have made the following main contributions to the design and development of a system for vario-scale maps. We have: studied vario-scale development in the past and identified the most urgent needs of the research; designed the concept of granularity and presented our strategy where changes in map content should be as small and as gradual as possible (e.g. use groups, maintain the road network, support line feature representation); introduced line features in the solution and presented a fully automated generalization process that preserves road network features throughout all scales; proposed an approach to create a vario-scale data structure from massive datasets; demonstrated a method to generate an explicit 3D representation from the structure, which can provide a smoother user experience; developed a software prototype where a 3D vario-scale dataset can be used to its full potential; and conducted an initial usability test. All aspects together with already developed functionality provide a more complex and more unified solution for vario-scale mapping.
Based on our research, the design and development of a system for vario-scale maps should be clearer now. In addition, it is easier to identify the necessary steps that need to be taken towards an optimal solution. Our recommendations for future work are: One of the contributions has been the integration of road features in the structure and their automated generalization throughout the process; integrating more map features besides roads deserves attention. We have investigated how to deal with massive datasets which do not fit in the main memory of the computer. Our experiments used a dataset of one province or state, with records in the order of millions. To verify our findings, it will be interesting to process even bigger datasets, with records in the order of billions (a whole continent). We have introduced a representation where map content changes as gradually as possible. It is based on a process where: 1) explicit 3D geometry is generated from the structure; 2) a slice of the geometry is calculated; 3) the final map is constructed based on the slice. Investigating how to integrate this into a server-client pipeline on the Internet is another point of further research. Our research focus has been mainly on one specific aspect of the concept at a time. Bringing all aspects together, where integration, tuning and orchestration play an important role, is another interesting research direction that deserves attention. More user testing should also be carried out, including: 1) maps of sufficient cartographic quality, 2) a large testing region, and 3) the finest version of the visualization prototype.
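
To give a feel for the tGAP idea that underlies the structure, a heavily simplified sketch follows: repeatedly merge the least important area into its most compatible neighbour and record every merge, so that any intermediate level of detail can be reconstructed later. The importance and compatibility measures, and all names, are illustrative assumptions rather than the thesis's actual implementation.

def build_tgap(areas, neighbours, compatibility):
    # areas: {face_id: importance}; neighbours: {face_id: set of adjacent face_ids}.
    # Assumes a connected area partition; returns the merge history.
    history = []
    while len(areas) > 1:
        victim = min(areas, key=areas.get)                        # least important face
        target = max(neighbours[victim], key=lambda n: compatibility(victim, n))
        areas[target] += areas.pop(victim)                        # neighbour absorbs it
        for n in neighbours.pop(victim):                          # rewire adjacency
            neighbours[n].discard(victim)
            if n != target:
                neighbours[n].add(target)
                neighbours[target].add(n)
        history.append((victim, target))                          # one generalisation step
    return history

# Tiny example with three faces and a trivial compatibility measure.
faces = {"pond": 1.0, "meadow": 5.0, "forest": 8.0}
adj = {"pond": {"meadow"}, "meadow": {"pond", "forest"}, "forest": {"meadow"}}
print(build_tgap(faces, adj, lambda a, b: 1.0))   # [('pond', 'meadow'), ('meadow', 'forest')]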

    Supercomputing in Aerospace

    Topics addressed include: numerical aerodynamic simulation; computational mechanics; supercomputers; aerospace propulsion systems; computational modeling in ballistics; turbulence modeling; computational chemistry; computational fluid dynamics; and computational astrophysics

    Hydrolysis inhibition of complex biowaste

    The increasing demand for renewable energy sources and for the reuse of wastes challenges our society to find better technological solutions for energy production. Co-digestion of agricultural biowaste, such as animal manure and plant residues, offers an interesting contribution to renewable energy strategies. Biogas plants, where complex substrates such as agricultural biowaste are converted into biogas, can produce electricity and heat, which can be used on the farm and delivered to the main electricity grid. Moreover, due to its decentralised nature, the implementation of small-scale biogas plants can supply renewable energy to people without the need for large-scale infrastructural networks such as electricity grids, thereby solving part of the population's energy demands. The production of biogas from complex biowaste is rate-limited by the hydrolysis step of the anaerobic digestion process. However, the hydrolysis step has been poorly described and is not well understood, resulting in non-optimised anaerobic digester volumes. This thesis therefore presents a review of the anaerobic hydrolysis step, together with ways to accelerate hydrolysis, either by mitigating the identified inhibiting compounds, by pre-treating substrates that are difficult to hydrolyse, or, as is nowadays also applied, by adding hydrolytic enzymes to full-scale biogas co-digestion plants. In this thesis two compounds were studied in terms of their inhibiting effect on hydrolysis: ammonia nitrogen and humic matter (HM). Ammonia nitrogen did not show an inhibiting effect on anaerobic hydrolysis. On the other hand, humic acid-like (HAL) and fulvic acid-like (FAL) substances extracted from fresh cow manure and silage maize, extensively described in this thesis in terms of their chemical characteristics, showed a strong inhibiting effect on the hydrolysis step. Plant matter is high in lignocellulosic biomass, which consists of lignin (resistant to anaerobic degradation), cellulose and hemicelluloses. Pre-treatment of plant material is particularly important in order to increase biogas production during co-digestion of manure. Calcium hydroxide pre-treatment was shown, in this thesis, to improve the biodegradability of lignocellulosic biomass, especially for substrates with high lignin content. Maleic acid generated the highest percentage of dissolved COD during pre-treatment; however, its high market price makes it less attractive than calcium hydroxide. Enzyme addition to accelerate hydrolysis has recently gained the attention of biogas plant operators, but further research is needed.