35,990 research outputs found

    Path planning algorithm for a car-like robot based on cell decomposition method

    This project proposes an obstacle-avoiding path planning algorithm for a car-like robot based on the cell decomposition method. Using cell decomposition, the free space of the robot is exactly partitioned into cells; a connectivity graph is then built over the cells, and Dijkstra's algorithm computes the shortest path through it. The project also accounts for the robot's kinematic constraints, such as the minimum turning radius, so kinematic modeling and Bezier curves are used to obtain a feasible path. Taking the cell decomposition as a reference skeleton, the algorithm produces a curvature-bounded path of near-optimal curve length. The configuration-space (C-space) concept is applied: obstacles on the map are expanded according to the size of the car-like robot, so that the robot can be treated as a point and planning reduces to point coordinates on the expanded map. Simulation and experimental results show that the algorithm obtains a collision-free path that satisfies the curvature constraint and approaches the minimal curve length for a car-like robot.
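
    A minimal Python sketch of the planning core described above: Dijkstra over a cell-connectivity graph, followed by Bezier sampling for path smoothing. The cell names, edge costs, and control-point handling are hypothetical illustrations, not taken from the project.

        import heapq

        def dijkstra(graph, start, goal):
            """Shortest path over a cell-connectivity graph.
            graph: dict mapping cell id -> list of (neighbour id, edge cost)."""
            dist, prev = {start: 0.0}, {}
            pq = [(0.0, start)]
            while pq:
                d, u = heapq.heappop(pq)
                if u == goal:
                    break
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(pq, (nd, v))
            if goal != start and goal not in prev:
                return None                      # goal unreachable
            path, node = [goal], goal
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]

        def cubic_bezier(p0, p1, p2, p3, n=20):
            """Sample a cubic Bezier segment used to smooth the skeleton path
            so the curvature stays within the robot's turning limit."""
            pts = []
            for i in range(n + 1):
                t = i / n
                c = ((1 - t) ** 3, 3 * (1 - t) ** 2 * t, 3 * (1 - t) * t ** 2, t ** 3)
                pts.append(tuple(sum(w * p[k] for w, p in zip(c, (p0, p1, p2, p3)))
                                 for k in range(2)))
            return pts

        # toy example: four free-space cells A..D
        graph = {"A": [("B", 1.0), ("C", 2.5)], "B": [("D", 1.2)],
                 "C": [("D", 0.8)], "D": []}
        print(dijkstra(graph, "A", "D"))         # ['A', 'B', 'D']

    In the actual algorithm the graph nodes would be the cells of the exact decomposition, and the Bezier control points would be placed so that the sampled curve respects the minimum turning radius.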

    Parametric optimization of the femoropopliteal artery stent design based on numerical analysis

    High failure rates have been reported for Peripheral Arterial Disease (PAD) stenting because certain stent strut configurations cannot accommodate the severe biomechanical environment of the Femoro-Popliteal Artery (FPA), which bends, twists, and axially compresses during limb flexion. This unique mechanical deformation environment is considered one of the main factors limiting the durability and service life of FPA stents, and various optimization techniques have consequently been developed to improve their mechanical performance. In the present work, two of twelve FPA-like stent models were selected using the Pictorial Selection Method, with net scores of 3.65 (Model I) and 3.55 (Model II). A parameterization-based Finite Element Method (FEM) optimization study was then conducted on the stent strut dimensions, and the stents were compared in terms of their force-stress behavior. A Multi-Criteria Decision Making (MCDM) method was used to identify the best combination of strut dimensions. The strut thickness parameterization gave the relation T ∝ 1/σ (T is strut thickness) for both models under all mechanical loading modes, and the strut width parameterization likewise gave W ∝ 1/σ (W is strut width) for both models under all loading modes. The strut length parameterization (L is strut length) gave L ∝ σ for Model I and L ∝ 1/σ for Model II under axial loads; L ∝ σ for both models under three-point bending and torsion; and L ∝ 1/σ for Model I and L ∝ σ for Model II under radial compression. The best strut thickness was t4 = 230 µm for both models; the best strut widths were w3 = 0.180 mm and w4 = 0.250 mm for Model I and Model II, respectively; and the best strut lengths were l2 = 1.40 mm and l2 = 1.75 mm for Model I and Model II, respectively. In conclusion, a consistent mathematical selection approach based on MCDM was proposed, and the mechanical performance of the parameterized stent models was improved.
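
    The abstract does not specify which MCDM scheme produced the net scores; as an illustration, here is a minimal simple-additive-weighting (SAW) ranking sketch in Python. The criteria, values, and weights below are entirely hypothetical, not results from the study.

        def saw_rank(alternatives, weights, benefit):
            """Simple additive weighting: min-max normalise each criterion,
            then rank alternatives by weighted net score.
            alternatives: dict name -> list of raw criterion values
            weights:      per-criterion weights (should sum to 1)
            benefit:      True where higher is better, False where lower is better"""
            names = list(alternatives)
            scores = dict.fromkeys(names, 0.0)
            for j, (wj, higher_better) in enumerate(zip(weights, benefit)):
                col = [alternatives[n][j] for n in names]
                lo, hi = min(col), max(col)
                span = (hi - lo) or 1.0
                for n in names:
                    norm = (alternatives[n][j] - lo) / span
                    scores[n] += wj * (norm if higher_better else 1.0 - norm)
            return sorted(scores.items(), key=lambda kv: -kv[1])

        # hypothetical criteria: [peak stress (MPa), radial stiffness (N/mm), mass (mg)]
        candidates = {"Model I": [310.0, 2.1, 45.0], "Model II": [295.0, 1.8, 52.0]}
        print(saw_rank(candidates, weights=[0.4, 0.35, 0.25],
                       benefit=[False, True, False]))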

    A multi-criteria decision making approach for food engineering

    The objective of this study was to propose a decision making approach and tools (software packages) for solving the multi-criteria decision making problems that arise in food engineering. The proposed approach applies two decision making methods simultaneously to a given set of Pareto-optimal solutions: 1) the well-known Analytic Hierarchy Process (AHP) and 2) the Tabular Method. Using the Tabular Method allows the AHP to be applied in a straightforward manner, which avoids information overload and makes the decision making process easier. The aggregating functions approach, an adaptive random search algorithm coupled with penalty functions, and the finite difference method with cubic spline approximation were used to compute the initial set of Pareto-optimal solutions. The decision making software packages "MPRIORITY" and "T-CHOICE", based on the AHP and the Tabular Method respectively, were used to choose the best alternative among the obtained set of Pareto-optimal solutions. The approach and tools proposed in this study were successfully tested on the multi-objective optimization problem of thermal processing of packaged food, and are useful for food scientists (research and education) and engineers (evaluation and optimization of real thermal food processes).
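
    A minimal sketch of the AHP priority computation, using the row geometric-mean approximation of the principal eigenvector together with Saaty's consistency check. The 3x3 pairwise-comparison matrix below is hypothetical, not taken from the study.

        import math

        SAATY_RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

        def ahp_weights(A):
            """Priority vector for a reciprocal pairwise-comparison matrix A,
            via the row geometric-mean approximation, plus Saaty's
            consistency ratio (CR < 0.1 is conventionally acceptable)."""
            n = len(A)
            gm = [math.prod(row) ** (1.0 / n) for row in A]
            s = sum(gm)
            w = [g / s for g in gm]
            Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
            lam_max = sum(Aw[i] / w[i] for i in range(n)) / n
            ci = (lam_max - n) / (n - 1) if n > 1 else 0.0
            cr = ci / SAATY_RI[n] if SAATY_RI.get(n) else 0.0
            return w, cr

        # hypothetical criteria comparison, e.g. quality vs. cost vs. energy use
        A = [[1.0,   3.0,   5.0],
             [1/3.0, 1.0,   2.0],
             [1/5.0, 1/2.0, 1.0]]
        weights, cr = ahp_weights(A)
        print(weights, cr)   # weights ~ [0.65, 0.23, 0.12], CR well below 0.1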

    Optimal, scalable forward models for computing gravity anomalies

    We describe three approaches for computing a gravity signal from a density anomaly. The first approach is the classical "summation" technique, whilst the remaining two methods solve the Poisson problem for the gravitational potential using either a Finite Element (FE) discretization employing a multilevel preconditioner, or a Green's function evaluated with the Fast Multipole Method (FMM). The methods utilizing the PDE formulation described here differ from previously published approaches used in gravity modeling in that they are optimal, meaning that both the memory and the computational time required scale linearly with respect to the number of unknowns in the potential field. Additionally, all of the implementations presented here are developed such that the computations can be performed in a massively parallel, distributed memory computing environment. Through numerical experiments, we compare the methods on the basis of their discretization error, CPU time and parallel scalability, and demonstrate the parallel scalability of all these techniques by running forward models with up to 10^8 voxels on thousands of cores.
    Comment: 38 pages, 13 figures; accepted by Geophysical Journal International
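
    A sketch of the classical summation approach, approximating each voxel as a point mass at its centre. This is a simplification of the prism formulae often used in practice, and its cost is O(sources x observation points), which is precisely the scaling the paper's FE and FMM formulations are designed to beat.

        G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

        def gz_summation(obs, voxels):
            """Vertical gravity anomaly at observation point 'obs' by direct
            summation over density-anomaly voxels, each treated as a point
            mass at its centre (z taken positive toward the sources).
            obs:    (x, y, z) in metres
            voxels: iterable of (x, y, z, volume_m3, delta_rho_kg_per_m3)"""
            gz = 0.0
            for x, y, z, vol, drho in voxels:
                dx, dy, dz = x - obs[0], y - obs[1], z - obs[2]
                r2 = dx * dx + dy * dy + dz * dz
                gz += G * drho * vol * dz / r2 ** 1.5
            return gz

        # toy model: one 100 m cube with a +300 kg/m^3 anomaly, 500 m below obs
        print(gz_summation((0.0, 0.0, 0.0),
                           [(0.0, 0.0, 500.0, 100.0 ** 3, 300.0)]))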

    An empirical learning-based validation procedure for simulation workflow

    Simulation workflow is a top-level model for the design and control of a simulation process: it connects multiple simulation components, with time and interaction restrictions, to form a complete simulation system. Before the component models are constructed and evaluated, validating the upper-layer simulation workflow is of the greatest importance for a simulation system. However, methods specifically for validating simulation workflows are very limited, and many existing validation techniques are domain-dependent, with cumbersome questionnaire design and expert scoring. This paper therefore presents an empirical learning-based validation procedure that implements a semi-automated evaluation of simulation workflows. First, representative features of general simulation workflows and their relations to validation indices are proposed. The calculation of workflow credibility based on the Analytic Hierarchy Process (AHP) is then introduced. To make full use of historical data and implement more efficient validation, four learning algorithms, back propagation neural network (BPNN), extreme learning machine (ELM), evolving neo-fuzzy neuron (eNFN) and fast incremental Gaussian mixture model (FIGMN), are introduced to construct the empirical relation between workflow credibility and workflow features. A case study on a landing-process simulation workflow is established to test the feasibility of the proposed procedure. The experimental results also provide a useful overview of the state-of-the-art learning algorithms on the credibility evaluation of simulation models.
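
    A sketch of the learning step, using scikit-learn's MLPRegressor as a stand-in for the paper's BPNN, with synthetic data in place of the historical feature/credibility pairs; the feature names and the generating relation below are hypothetical.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # synthetic history: workflow features (e.g. component count, interaction
        # density, timing slack, all scaled to [0, 1]) -> AHP credibility score
        X = rng.random((200, 3))
        y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
        model.fit(X, y)

        # credibility prediction for a new, unseen workflow's feature vector
        new_workflow = np.array([[0.8, 0.4, 0.6]])
        print("predicted credibility:", model.predict(new_workflow)[0])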