
    comparisons between different interpolation techniques

    Digital terrain models are key tools in land analysis and management, as they can be used directly in GIS systems and in specific applications such as hydraulic modelling, geotechnical analysis, road planning, and telecommunications. TIN generation from different kinds of measurement techniques is governed by specific regulations. Interpolation techniques for computing a regular grid from a TIN, however, still lack such regulation: no unified, shared methodology has yet been made compulsory for the cartographic production of digital models. This ambiguity leads to non-unique results and can affect precision, which in turn can produce divergent analyses of the same territory. In the present study, different algorithms are analysed in order to identify an optimal interpolation methodology. The availability of the recent digital model produced by the Regione Piemonte with airborne LIDAR, together with test sections acquired at higher resolution and independent digital models of the same territory, makes it possible to set up a series of analyses and determine the best interpolation methodologies. The analysis of the residuals on the test sites yields descriptive statistics of the computed values: all the algorithms gave interesting results, and, notably for dense models, the IDW (Inverse Distance Weighting) algorithm gave the best results in this case study. Moreover, a comparative analysis was carried out by interpolating data at different input point densities, with the purpose of highlighting density thresholds that may drive the quality reduction of the final output in the interpolation phase.
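Since the abstract singles out IDW, a minimal sketch of the estimator may help. The function and sample values below are illustrative, not taken from the study:

```python
import math

def idw(samples, query, power=2.0):
    """Inverse Distance Weighting: estimate the value at `query` from
    scattered (x, y, z) samples, weighting each sample by 1/distance**power.
    A sample coinciding with the query point wins outright."""
    num = den = 0.0
    for x, y, z in samples:
        d = math.hypot(x - query[0], y - query[1])
        if d == 0.0:
            return z                    # exact hit: return the sample value
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

# Four samples on a unit square; the centre is equidistant from all of
# them, so IDW reduces to the plain mean there.
grid = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 14.0), (1, 1, 16.0)]
print(idw(grid, (0.5, 0.5)))  # → 13.0
```

Raising `power` makes the estimate more local (nearby samples dominate), which is one of the knobs the study's density-threshold comparison would exercise.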

    Hybrid performance modeling and prediction of large-scale computing systems

    Performance is a key feature of large-scale computing systems. However, the performance achieved when a given program is executed is significantly lower than the maximal theoretical performance of the system. Model-based performance evaluation can support performance-oriented program development for large-scale computing systems. In this paper we present a hybrid approach to performance modeling and prediction of parallel and distributed computing systems that combines mathematical modeling and discrete-event simulation. We use mathematical modeling to develop parameterized performance models for components of the system. Thereafter, we use discrete-event simulation to describe the structure of the system and the interaction among its components. As a result, we obtain a high-level performance model that combines the evaluation speed of mathematical models with the structure awareness and fidelity of simulation models. We evaluate our approach empirically with a real-world materials science program comprising more than 15,000 lines of code. Peer Reviewed. Postprint (published version).
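The hybrid idea (analytical cost models per component, discrete-event simulation for their interaction) can be sketched in a few lines. The cost coefficients and task sizes here are hypothetical placeholders, not values from the paper:

```python
import heapq

def service_time(n, alpha=1e-6, beta=5e-9):
    """Analytical (mathematical) model of one component: a fixed startup
    cost alpha plus a per-element cost beta. The coefficients are
    hypothetical; in practice they would be fitted per component."""
    return alpha + beta * n

def simulate(task_sizes, workers):
    """Discrete-event view: each task is dispatched to the earliest-free
    worker, and its duration comes from the analytical model above.
    Returns the predicted makespan."""
    free_at = [0.0] * workers           # event list: when each worker frees up
    heapq.heapify(free_at)
    finish = 0.0
    for n in task_sizes:
        start = heapq.heappop(free_at)  # earliest available worker
        end = start + service_time(n)
        finish = max(finish, end)
        heapq.heappush(free_at, end)
    return finish

# Eight equal tasks on four workers run in two waves.
print(simulate([10**6] * 8, workers=4))  # → 2 * service_time(10**6)
```

The split mirrors the paper's claim: the per-task cost is evaluated analytically (fast), while the scheduling structure is captured by the event loop (faithful).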

    Grid and P2P middleware for scientific computing systems

    Grid and P2P systems have achieved notable success in the domain of scientific and engineering applications, which commonly demand considerable amounts of computational resources. However, Grid and P2P systems remain difficult for domain scientists and engineers to use, owing to the inherent complexity of the corresponding middleware and the lack of adequate documentation. In this paper we survey recent developments in Grid and P2P middleware in the context of scientific computing systems. The differences between the approaches taken for Grid and P2P middleware, as well as the points the two paradigms have in common, are highlighted. In addition, we discuss the corresponding programming models, languages, and applications. Peer Reviewed. Postprint (published version).

    An Optimized Architecture for CGA Operations and Its Application to a Simulated Robotic Arm

    Conformal geometric algebra (CGA) is a new geometric computation tool that is attracting growing attention in many research fields, such as computer graphics, robotics, and computer vision. In robotic applications, new CGA-based approaches have been proposed to efficiently solve problems such as the inverse kinematics and grasping of a robotic arm. Hardware acceleration of CGA operations is required to meet real-time performance requirements on embedded robotic platforms. In this paper, we present a novel embedded coprocessor for accelerating CGA operations in robotic tasks. Two robotic algorithms, namely, inverse kinematics and grasping of a human-arm-like kinematic chain, are used to demonstrate the effectiveness of the proposed approach. The coprocessor natively supports the entire set of CGA operations, including both basic operations (products, sums/differences, and unary operations) and complex operations such as rigid-body motions (reflections, rotations, translations, and dilations). The coprocessor prototype is implemented on the Xilinx ML510 development platform as a complete system-on-chip (SoC), integrating a PowerPC processing core and a CGA coprocessing core on the same Xilinx Virtex-5 FPGA chip. Experimental results show speedups of 78x and 246x for the inverse kinematics and grasping algorithms, respectively, with respect to execution on the PowerPC processor.
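The paper's coprocessor operates in the 5-D conformal algebra; as a flavour of the basis-blade products such hardware evaluates, here is a minimal geometric product in the ordinary plane algebra G(2). The blade encoding is our own illustrative choice, not the paper's:

```python
def gp_blades(a, b):
    """Geometric product of two basis blades (tuples of basis-vector
    indices); returns (sign, blade). Swapping distinct neighbouring basis
    vectors flips the sign; e_k e_k contracts to +1 (Euclidean metric)."""
    lst = list(a) + list(b)
    sign = 1
    i = 0
    while i < len(lst) - 1:            # bubble indices into ascending order
        if lst[i] > lst[i + 1]:
            lst[i], lst[i + 1] = lst[i + 1], lst[i]
            sign = -sign
            i = 0
        else:
            i += 1
    out, i = [], 0
    while i < len(lst):                # cancel equal neighbours (e_k e_k = 1)
        if i + 1 < len(lst) and lst[i] == lst[i + 1]:
            i += 2
        else:
            out.append(lst[i])
            i += 1
    return sign, tuple(out)

def gp(A, B):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    C = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = gp_blades(ba, bb)
            C[blade] = C.get(blade, 0) + s * ca * cb
    return {k: v for k, v in C.items() if v != 0}

e1, e2 = {(1,): 1}, {(2,): 1}
e12 = gp(e1, e2)
print(e12)           # → {(1, 2): 1}
print(gp(e12, e12))  # → {(): -1}, i.e. (e1 e2)**2 = -1
```

In 5-D CGA the same product table has 32 basis blades per operand, which is exactly the parallelism a dedicated FPGA datapath can exploit.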


    Supporting effective monitoring and knowledge building in online collaborative learning systems

    This paper reports on an experience of using an innovative groupware tool to support real, collaborative learning. We base the success of online collaborative learning on extracting relevant knowledge from interaction data analysis in order to provide learners and instructors with effective awareness, feedback, and monitoring of individual and group performance and collaboration. Monitoring is especially important for online instructors, since they can use this valuable information as a metacognitive tool for regulating the collaborative learning process and providing adequate support when needed. In addition, learning and knowledge building may be greatly enhanced by presenting selected knowledge to learners about the particular skills they exhibit during interaction, such as the impact and effectiveness of their contributions. Indeed, making learners aware of both their own and others' progress in the process of knowledge building may promote participation and boost group performance. The ultimate goal of this paper is to provide a model that achieves more effective support and assessment of the collaborative process while enhancing and improving the learning experience. To validate this study, a real online learning environment is employed to support asynchronous collaborative activities. Peer Reviewed. Postprint (author's final draft).

    Information Security Risk Management: In Which Security Solutions Is It Worth Investing?

    As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them whether their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. The work covers the conception, design, and implementation of the methodology in a software solution. The results of two qualitative case studies show the advantages of this methodology over established methodologies.
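The abstract does not state the article's decision model, but a common baseline for questions (1) and (2) — what an investment returns and whether it is efficient — is annual loss expectancy (ALE) combined with return on security investment (ROSI). The figures below are hypothetical:

```python
def ale(annual_rate, impact):
    """Annual Loss Expectancy: expected yearly loss from one threat,
    i.e. how often it strikes per year times the loss per occurrence."""
    return annual_rate * impact

def rosi(ale_before, ale_after, control_cost):
    """Return on Security Investment: net risk reduction per unit of cost.
    Positive means the control saves more than it costs."""
    return (ale_before - ale_after - control_cost) / control_cost

# Hypothetical threat: strikes 0.4 times/year at a 50k loss; a 10k control
# cuts the rate to 0.05/year.
before = ale(0.4, 50_000)    # 20_000 expected loss per year
after = ale(0.05, 50_000)    #  2_500 expected loss per year
print(rosi(before, after, 10_000))  # → 0.75: the control pays for itself
```

A negative ROSI flags an inefficient control, which is precisely the comparison the article says decision makers currently lack tools for.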