
    Design and motion control of a 6-UPS fully parallel robot for long bone fracture reduction : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Mechatronics at Massey University

    The incidence of long bone fractures in New Zealand is approximately 1 in 10,000. Long bones such as the tibia and femur have complicated anatomic structures, making the realignment of these fractures reliant on the skill of the surgeon. The drawbacks of current practice include prolonged exposure to radiation, slow recovery and possible morbidity. A semi-automated long bone fracture reduction system based on a 6-DOF parallel robot platform has been in development since 2004. The developed 6-DOF parallel robot platform comprises six linear actuators with rotary incremental encoders. To implement the realignment of long bone fractures, a framework for the 6-DOF parallel robot has been developed. The inverse kinematics and singularities of the 6-DOF parallel robot have been studied to obtain the actuator actions and Jacobian matrices. For motion control, a multi-axis motion controller and amplifiers were used for the 6-DOF parallel robot. PID tuning algorithms were developed based on a combination of the general tuning result and the contour control principle. The PID parameters have been validated by a number of experiments. The practical realignment of bone fractures requires a "Pull-Rotate-Push" action implemented by the 6-DOF parallel robot. After calibration, the reduction trajectories were generated accurately. Trials on artificial fractures have shown that the developed robot is capable of performing the required reduction motion.
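    As a rough illustration of the inverse-kinematics step mentioned above (a minimal sketch of a generic 6-UPS platform, not the thesis implementation), the following code computes the six actuator lengths from a desired pose of the moving platform; the joint layout, dimensions and pose in the usage example are hypothetical.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """ZYX Euler rotation matrix describing the moving-platform orientation."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def inverse_kinematics(base_pts, platform_pts, translation, rpy):
    """Return the six leg (actuator) lengths for a desired platform pose.

    base_pts, platform_pts: (6, 3) joint locations in the base and platform
    frames; translation: (3,) platform origin expressed in the base frame;
    rpy: (roll, pitch, yaw) of the platform.
    """
    R = rotation_matrix(*rpy)
    # Leg vector: platform joint expressed in the base frame minus base joint.
    legs = translation + platform_pts @ R.T - base_pts
    return np.linalg.norm(legs, axis=1)

# Hypothetical hexagonal joint layout, purely for illustration.
angles = np.deg2rad([0, 60, 120, 180, 240, 300])
base = np.c_[0.3 * np.cos(angles), 0.3 * np.sin(angles), np.zeros(6)]
plat = np.c_[0.2 * np.cos(angles), 0.2 * np.sin(angles), np.zeros(6)]
print(inverse_kinematics(base, plat, np.array([0, 0, 0.5]), (0.0, 0.05, 0.0)))
```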

    Grid Infrastructure for Domain Decomposition Methods in Computational ElectroMagnetics

    The accurate and efficient solution of Maxwell's equations is the problem addressed by the scientific discipline called Computational ElectroMagnetics (CEM). Many macroscopic phenomena in a great number of fields are governed by this set of differential equations: electronics, geophysics, medical and biomedical technologies, and virtual EM prototyping, besides the traditional antenna and propagation applications. Therefore, many efforts are focused on the development of new and more efficient approaches to solving Maxwell's equations. Interest in CEM applications keeps growing. Several problems that were hard to tackle a few years ago can now be easily addressed thanks to the reliability and flexibility of new technologies, together with the increased computational power. This technological evolution makes it possible to address large and complex tasks. Many of these applications aim to simulate electromagnetic behavior, for example in terms of input impedance and radiation pattern in antenna problems, or Radar Cross Section for scattering applications. Problems whose solution requires high accuracy instead need full-wave analysis techniques, e.g., in the virtual prototyping context, where the objective is to obtain reliable simulations in order to minimize the number of measurements and, as a consequence, their cost. Moreover, other tasks require the analysis of complete structures (including a high number of details) by directly simulating a CAD model. This approach relieves the researcher of the burden of removing useless details, while maintaining the original complexity and taking all details into account. Unfortunately, this choice implies (a) high computational effort, due to the increased number of degrees of freedom, and (b) a worsening of the spectral properties of the linear system during complex analyses. The above considerations underline the need to identify appropriate information technologies that ease the solution process and speed up the required computations. The authors' analysis and expertise suggest that Grid Computing techniques can be very useful for these purposes. Grids appear mainly in high-performance computing environments. In this context, hundreds of off-the-shelf nodes are linked together and work in parallel to solve problems that previously could only be addressed sequentially or by using supercomputers. Grid Computing is a technique developed to process enormous amounts of data; it enables large-scale resource sharing to solve problems in distributed scenarios. The main advantage of the Grid comes from parallel computing: if a problem can be split into smaller tasks that can be executed independently, its solution can be computed considerably faster. To exploit this advantage, it is necessary to identify a technique able to split the original electromagnetic task into a set of smaller subproblems. The Domain Decomposition (DD) technique, based on the block generation algorithm introduced in Matekovits et al. (2007) and Francavilla et al. (2011), perfectly addresses our requirements (see Section 3.4 for details). In this chapter, a Grid Computing infrastructure is presented. This architecture allows parallel block execution by distributing tasks to the nodes that belong to the Grid. The set of nodes is composed of physical machines and virtualized ones. This feature enables great flexibility and increases the available computational power. Furthermore, the presence of virtual nodes allows full and efficient Grid usage; indeed, the presented architecture can be used by different users running different applications.
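    As a minimal, local stand-in for the execution model described above (the actual infrastructure relies on grid middleware and physical/virtual nodes, which are not reproduced here), the sketch below distributes independent domain-decomposition blocks to worker processes and collects their solutions; the small dense linear systems merely stand in for the per-block electromagnetic solves.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def solve_block(block):
    """Solve one domain-decomposition block independently.

    Each block here is a toy dense linear system standing in for the
    sub-problem assigned to one grid node.
    """
    A, b = block
    return np.linalg.solve(A, b)

def run_on_pool(blocks, max_workers=4):
    """Distribute independent blocks to workers, as a grid scheduler would."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(solve_block, blocks))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Eight independent, well-conditioned toy blocks.
    blocks = [(rng.standard_normal((50, 50)) + 50 * np.eye(50),
               rng.standard_normal(50)) for _ in range(8)]
    solutions = run_on_pool(blocks)
    print(len(solutions), "blocks solved")
```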

    Strategies for maximizing sugarcane yield with limited water in the Bundaberg district

    Between 1995 and 2003, sugarcane farmers in Bundaberg had access to limited irrigation water. Over this time, water allocations were effectively a quarter of the requirement for a fully irrigated crop. In response to this problem, irrigation strategies were developed to assist farmers. Field investigations focused on the performance of water winch and furrow irrigation systems, which make up 91 percent of the irrigated area in the district. As most of these application systems have insufficient capacity to meet crop demands, opportunities to schedule irrigations were limited to start-up after rain. Improvements in irrigation system performance were found to provide the greatest potential to increase sugarcane yield under conditions of limited water. Investigations identified that irrigation performance could be significantly improved through relatively minor adjustments. The timing of irrigation start-up after rain influenced how much water could be applied to the field. Even with relatively low allocations, delayed start-up strategies could lead to a situation where water was left over at the end of the season.
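    The trade-off in the last sentence can be illustrated with a simple water-budget calculation; the allocation, delivery capacity and season length below are made-up figures for illustration only, not values from the study.

```python
def water_applied(allocation_ml, capacity_ml_per_week, weeks_remaining):
    """Water that can actually be applied given system capacity and time left.

    allocation_ml: remaining seasonal allocation (ML);
    capacity_ml_per_week: delivery capacity of the winch/furrow system;
    weeks_remaining: irrigation weeks left after start-up.
    All figures are hypothetical and only illustrate the arithmetic.
    """
    return min(allocation_ml, capacity_ml_per_week * weeks_remaining)

# Hypothetical example: 100 ML allocation, 5 ML/week capacity, 20-week window.
for start_delay in (0, 4, 8, 12):           # weeks of delay after rain
    weeks_left = 20 - start_delay
    applied = water_applied(100, 5, weeks_left)
    print(f"delay {start_delay:2d} wk: applied {applied} ML, "
          f"left over {100 - applied} ML")
```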

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones. Comment: 44 pages; 1 of the USQCD whitepapers.

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase-ordering (choosing the order in which to apply optimizations). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase-ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, the fine-grained classification of the different approaches and, finally, the influential papers in the field. Comment: version 5.0 (updated September 2018); preprint of the accepted ACM CSUR 2018 article (42 pages); updated quarterly. History: Received November 2016; Revised August 2017; Revised February 2018; Accepted March 2018.
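    As a toy illustration of the phase-ordering problem described above (not any specific system from the surveyed literature), the sketch below searches over orderings of a handful of placeholder passes and scores each ordering with a stand-in cost model; real autotuners replace the cost model with measurements or a learned performance predictor.

```python
import random

# Placeholder pass names; real pass sets come from the compiler in use.
PASSES = ["inline", "unroll", "vectorize", "licm", "gvn"]

def estimated_runtime(ordering):
    """Hypothetical cost model rewarding a few plausible pass interactions."""
    cost = 100.0
    if ordering.index("unroll") < ordering.index("vectorize"):
        cost -= 15            # vectorization benefits from prior unrolling
    if ordering.index("inline") == 0:
        cost -= 10            # inlining first exposes more opportunities
    cost += 0.5 * ordering.index("licm")   # arbitrary interaction term
    return cost

def random_search(n_trials=200, seed=0):
    """Random search over pass orderings, keeping the best-scoring one."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_trials):
        ordering = PASSES[:]
        rng.shuffle(ordering)
        cost = estimated_runtime(ordering)
        if cost < best_cost:
            best, best_cost = ordering, cost
    return best, best_cost

if __name__ == "__main__":
    order, cost = random_search()
    print("best ordering:", order, "estimated cost:", cost)
```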

    Pervasive Parallel And Distributed Computing In A Liberal Arts College Curriculum

    We present a model for incorporating parallel and distributed computing (PDC) throughout an undergraduate CS curriculum. Our curriculum is designed to introduce students early to parallel and distributed computing topics and to expose them to these topics repeatedly in the context of a wide variety of CS courses. The key to our approach is the development of a required intermediate-level course that serves as an introduction to computer systems and parallel computing. It is required for every CS major and minor and is a prerequisite to upper-level courses that expand on parallel and distributed computing topics in different contexts. With the addition of this new course, we can easily make room in upper-level courses to add and expand parallel and distributed computing topics. The goal of our curricular design is to ensure that every graduating CS major has exposure to parallel and distributed computing, with both breadth and depth of coverage. Our curriculum is designed particularly for the constraints of a small liberal arts college; however, many of its ideas and much of its design are applicable to any undergraduate CS curriculum.