
    A Partially Randomized Approach to Trajectory Planning and Optimization for Mobile Robots with Flat Dynamics

    Motion planning problems are characterized by huge search spaces and complex obstacle structures with no concise mathematical expression. The fixed-wing airplane application considered in this thesis adds differential constraints and point-wise bounds, i.e. an infinite number of equality and inequality constraints. An optimal trajectory planning approach is presented, based on the randomized Rapidly-exploring Random Trees framework (RRT*). The local planner relies on differential flatness of the equations of motion to obtain tree branch candidates that automatically satisfy the differential constraints. Flat output trajectories, in this case equivalent to the airplane's flight path, are designed using Bézier curves. Segment feasibility with respect to the point-wise inequality constraints is tested by an indicator integral, which is evaluated alongside the segment cost functional. Although RRT* guarantees optimality in the limit of infinite planning time, it is argued by intuition and experimentation that convergence is not approached at a practically useful rate. Therefore, the randomized planner is augmented by a deterministic variational optimization technique. To this end, the optimal planning task is formulated as a semi-infinite optimization problem, using the intermediate result of the RRT(*) as an initial guess. The proposed optimization algorithm follows the feasible flavor of the primal-dual interior-point paradigm. Discretization of the functional (infinite) constraints is deferred to the linear subproblems, where it is realized implicitly by numeric quadrature. An inherent numerical ill-conditioning of the method is circumvented by a reduction-like approach, which tracks active constraint locations by introducing new problem variables. Obstacle avoidance is achieved by extending the line search procedure and dynamically adding obstacle-awareness constraints to the problem formulation. Experimental evaluation confirms that the hybrid approach is practically feasible and does indeed outperform RRT*'s built-in optimization mechanism, but the computational burden remains significant.
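
A minimal sketch of the segment test described above: a cubic Bézier flat-output segment whose cost functional (here simply arc length) and constraint-violation indicator integral are evaluated on the same quadrature grid. The constraint interface, the altitude bound, and the sampling resolution are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def bezier(p, t):
    """Evaluate a cubic Bezier curve with control points p[0..3] at
    parameters t (shape (n,)); returns points of shape (n, dim)."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p[0] + 3 * (1 - t) ** 2 * t * p[1]
            + 3 * (1 - t) * t ** 2 * p[2] + t ** 3 * p[3])

def cost_and_indicator(p, g, n=64):
    """Evaluate the segment cost (here: arc length) and the indicator
    integral of the point-wise constraint g(x) <= 0 on one quadrature
    grid; a (numerically) zero indicator means the segment is feasible."""
    t = np.linspace(0.0, 1.0, n)
    x = bezier(p, t)
    cost = np.sum(np.linalg.norm(np.diff(x, axis=0), axis=1))
    viol = np.maximum(g(x), 0.0)                       # point-wise violations
    indicator = np.sum(0.5 * (viol[1:] + viol[:-1]) * np.diff(t))
    return cost, indicator

# Hypothetical constraint: keep the flight path above 50 m altitude.
p = np.array([[0, 0, 60], [40, 10, 80], [80, 30, 70], [120, 40, 65]], float)
cost, infeasibility = cost_and_indicator(p, lambda x: 50.0 - x[:, 2])
print(cost, infeasibility)  # feasible iff infeasibility == 0 (up to sampling)
```

A segment is accepted only if the indicator integral is (numerically) zero, so the feasibility test comes almost for free alongside the cost evaluation.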

    Accurate Real-Time Framework for Complex Pre-defined Cuts in Finite Element Modeling

    PhD thesis. Achieving detailed pre-defined cuts on deformable materials is pivotal for many commercial applications, such as cutting scenes in games and vandalism effects in virtual movies. In these types of applications, the majority of resources is allocated to achieving high-fidelity representations of the materials and the virtual environment. With limited computing resources, it is challenging to achieve a convincing cutting effect. A considerable amount of research has been carried out that sacrifices either realism or computational cost, but no solution that reconciles both has yet been identified. This doctoral dissertation is dedicated to developing a unique framework for representing pre-defined cuts of deformable surface models, one that achieves real-time, detailed cutting while maintaining realistic physical behaviour. To achieve this goal, we have made in-depth explorations from geometric and numerical perspectives. From the geometric perspective, we propose a robust subdivision mechanism that allows users to make arbitrary pre-defined cuts on elastic surface models based on the finite element method (FEM). Specifically, after the user separates the elements in an arbitrary manner (i.e., linear or non-linear) on the topological mesh, we optimise the resulting mesh by regenerating the triangulation within each element based on the Delaunay triangulation principle. The optimisation of the regenerated triangles, a process of refining ill-shaped elements with a small aspect ratio, greatly improves the realism of the physical behaviour while keeping the refinement process balanced against real-time requirements. This subdivision mechanism improves the visual effect of cutting, but it neglects the fact that elements cannot be perfectly cut along arbitrary pre-defined trajectories. The number of ill-shaped elements generated has a significant impact on the optimisation time: a large number of ill-shaped elements renders the cutting slow or even makes it collapse. Our idea is based on the core observation that the production of ill-shaped elements is largely governed by the condition number of the global stiffness matrix. Practically, a large condition number means the stiffness matrix is almost singular, and computing its inverse or solving the associated system of linear equations is prone to large numerical errors and is time-consuming. This motivates us to alleviate the impact of the condition number of the global stiffness matrix from the numerical side. Specifically, we address this issue in a novel manner by converting the global stiffness matrix into the form of a covariance matrix, whose condition number can be reduced by applying an appropriate normalisation to its eigenvalues. Furthermore, we investigated the efficiency of two different scenarios: an exact square-root normalisation and its approximation based on the Newton-Schulz iteration. Experimental tests of the proposed framework demonstrate that it successfully reproduces competitive visuals of detailed pre-defined cuts compared with the state-of-the-art method (Manteaux et al. 2015) while obtaining a significant improvement in FPS, increasing by up to 46.49 FPS and 21.93 FPS during and after the cuts, respectively.
Also, the new refinement method stably maintains the average aspect ratio of the model mesh after the cuts below 3 and the average area ratio at around 3%. Moreover, the two proposed matrix normalisation strategies, ES-CGM and AS-CGM, show superior time efficiency compared with the baseline method (Xin et al. 2018): the ES-CGM and AS-CGM methods achieved frame rates 5 FPS and 10 FPS higher than the baseline, respectively. These experimental results strongly support our conclusion that the new framework provides significant benefits when used to achieve detailed pre-defined cuts at a real-time rate.
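
The Newton-Schulz iteration mentioned above has a standard coupled form that converges to the matrix square root and its inverse for suitably scaled symmetric positive definite input. The sketch below is the generic textbook iteration, not the ES-CGM/AS-CGM implementation; scaling by the Frobenius norm and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def newton_schulz_sqrt(a, iters=15):
    """Coupled Newton-Schulz iteration: y -> A^{1/2}, z -> A^{-1/2}.
    The input is scaled by its Frobenius norm so that the iteration
    converges, and the result is rescaled at the end."""
    norm = np.linalg.norm(a)                 # Frobenius norm
    y = a / norm
    z = np.eye(a.shape[0])
    eye3 = 3.0 * np.eye(a.shape[0])
    for _ in range(iters):
        t = 0.5 * (eye3 - z @ y)
        y = y @ t                            # converges to (A/norm)^{1/2}
        z = t @ z                            # converges to (A/norm)^{-1/2}
    return y * np.sqrt(norm), z / np.sqrt(norm)

# Hypothetical usage on a small SPD stiffness-like matrix:
k = np.array([[4.0, 1.0], [1.0, 3.0]])
s, s_inv = newton_schulz_sqrt(k)
print(np.allclose(s @ s, k, atol=1e-6))      # True: s approximates K^{1/2}
```

The appeal over an exact eigendecomposition is that the loop consists purely of matrix products, which suits real-time settings far better than repeated factorizations.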

    Ray tracing geometric models and parametric surfaces

    Ankara: Department of Computer Engineering and Information Sciences and Institute of Engineering and Science, Bilkent University, 1989. Thesis (Master's) -- Bilkent University, 1989. Includes bibliographical references (leaves 42-46). In many computer graphics applications such as CAD, realistic displays have a very important and positive effect on designers using the system. There are several techniques for generating realistic images with a computer. Ray tracing gives the most effective results by simulating the interaction of light with its environment. Furthermore, this technique is easily adapted to many physical phenomena, such as reflection, refraction, and shadows, by which the interaction of many different objects with each other can be realistically simulated. However, it may require an excessive amount of time to generate an image. In this thesis, we studied the ray tracing algorithm and the speed problem associated with it, together with several methods developed to overcome this problem. We also implemented a ray tracer system that can be used to model a three-dimensional scene and find out the lighting effects on the objects. İşler, Veysi. M.S.
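
The recursive structure that lets ray tracing absorb reflection, refraction, and shadows can be sketched in a few lines. Everything here is assumed scaffolding rather than the thesis's system: `scene.intersect`, the material interface, and the recursion depth are placeholders.

```python
import numpy as np

def reflect(d, n):
    """Mirror direction d about the surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

def trace(origin, direction, scene, depth=0, max_depth=3):
    """Shade the nearest hit and recurse once per reflective bounce.
    `scene.intersect` is a placeholder returning (point, normal,
    material) for the closest hit, or None on a miss."""
    if depth > max_depth:
        return np.zeros(3)                        # cut off the recursion
    hit = scene.intersect(origin, direction)
    if hit is None:
        return scene.background
    point, normal, material = hit
    color = material.shade(point, normal, scene)  # local shading + shadow rays
    if material.reflectivity > 0.0:               # secondary (reflected) ray
        r = reflect(direction, normal)
        color = color + material.reflectivity * trace(point + 1e-4 * r, r,
                                                      scene, depth + 1)
    return color
```

The speed problem the thesis addresses is visible here: every pixel spawns at least one `scene.intersect` call, so acceleration hinges on making that intersection test cheap.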

    Fast Volume Rendering and Deformation Algorithms

    Volume rendering is a technique for the simultaneous visualization of surfaces and inner structures of objects. However, the huge number of volume primitives (voxels) in a volume leads to high computational cost. In this dissertation I developed two algorithms for the acceleration of volume rendering and volume deformation. The first algorithm accelerates the ray casting of volumes. Previous ray casting acceleration techniques like space-leaping and early ray termination are only efficient when most voxels in a volume are either opaque or transparent. When many voxels are semi-transparent, the rendering time increases considerably. Our new algorithm improves the performance of ray casting of semi-transparently mapped volumes by exploiting the opacity coherency in object space, leading to a speedup factor between 1.90 and 3.49 when rendering semi-transparent volumes. The acceleration is realized with the help of pre-computed coherency distances. We developed an efficient algorithm to encode the coherency information, which requires less than 12 seconds for data sets with about 8 million voxels. The second algorithm is for volume deformation. Unlike traditional methods, our method incorporates the two stages of volume deformation, i.e. deformation and rendering, into a unified process. Instead of deforming each voxel to generate an intermediate deformed volume, the algorithm follows inversely deformed rays to generate the desired deformation. The calculations and memory for generating the intermediate volume are thus saved. Deformation continuity is achieved by adaptive ray division, which matches the amplitude of the local deformation. We propose approaches for shading and opacity adjustment which guarantee the visual plausibility of the deformation results. We achieve an additional deformation speedup factor of 2.34 to 6.58 by incorporating early ray termination, space-leaping, and the coherency acceleration technique in the new deformation algorithm.
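
A minimal sketch of how precomputed coherency distances might slot into a front-to-back ray caster next to early ray termination. The `sample` and `coherency` callbacks and the closed-form compositing of a coherent run are illustrative assumptions about the dissertation's scheme, not its actual data layout.

```python
import numpy as np

def cast_ray(pos, step, n_steps, sample, coherency, cutoff=0.99):
    """Front-to-back compositing along one ray. `sample(p)` returns
    (color, alpha) at point p; `coherency(p)` returns the number of
    following steps over which the sample is known to stay (nearly)
    constant, so a whole run can be composited in closed form."""
    color = np.zeros(3)
    alpha = 0.0
    i = 0
    while i < n_steps and alpha < cutoff:       # early ray termination
        c, a = sample(pos + i * step)
        k = min(max(1, coherency(pos + i * step)), n_steps - i)
        run_alpha = 1.0 - (1.0 - a) ** k        # k identical samples at once
        color += (1.0 - alpha) * run_alpha * np.asarray(c)
        alpha += (1.0 - alpha) * run_alpha
        i += k                                  # leap over the coherent run
    return color, alpha
```

For a transparent run (alpha of 0) this degenerates to classic space leaping, while semi-transparent runs still advance several steps per loop iteration, which is where the reported speedup comes from.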

    A Parametrization-Based Surface Reconstruction System for Triangular Mesh Simplification with Application to Large Scale Scenes

    Laser scanners are nowadays widely used to capture the geometry of art, animation maquettes, or large architectural, industrial, and landform models, and they pose specific problems depending on the model scale. This thesis provides a solution for the simplification of triangulated data and for surface reconstruction of large data sets where feature edges provide an obvious segmentation structure. It also explores a new method for model segmentation, with the goal of applying multiresolution techniques to data sets characterized by curvy areas and the lack of clear demarcation features. The preliminary stage of surface segmentation, which takes as input single or multiple scan data files, generates surface patches which are processed independently. The surface components are mapped onto a two-dimensional domain with boundary constraints, using a novel parametrization weight coefficient. This stage generates valid parameter domain points, which can be fed as arguments to parametric modeling functions or surface approximation schemes. On this domain, our approach explores two types of remeshing. First, we generate points in a regular grid pattern, achieving multiresolution through a flexible grid step, which is nevertheless designed to produce a globally uniform resampling aspect. In this case, for reconstruction, we attempt to solve the open problem of border reconciliation across adjacent domains by retriangulating the border gap between the grid and the fixed irregular border. Alternatively, we straighten the domain borders in the parameter domain and coarsely triangulate the resulting simplified polygons, resampling the base domain triangles in a 1-4 subdivision pattern, achieving multiresolution through the number of subdivision steps. For mesh reconstruction, we use a linear interpolation method based on the original mesh triangles as control points on local planes, using a saved triangle correspondence between the original mesh and the parametric domain. We also use a region-wide approximation method, applied to the parameter grid points, which first generates data-trained control points and then uses them to obtain the reconstruction values at the resamples. In the grid resampling scheme, due to the border constraints, the reassembly of the segmented, sequentially processed data sets is seamless. In the subdivision scheme, we align adjacent border fragments in the parameter space and use a region-to-fragment map to achieve the same border reconstruction across two neighboring components. We successfully process data sets of up to 1,000,000 points in one pass of our program and are capable of assembling larger scenes from sequential runs. Our program consists of a single run, without intermediate storage. Where we process large input data files, we fragment the input using a nested application of our segmentation algorithm to reduce the size of the input scenes, and our pipeline reassembles the reconstruction output from multiple data files into a unique view.
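
The 1-4 subdivision pattern used for the multiresolution resampling splits every triangle at its edge midpoints into four children; shared edges must reuse a single midpoint so neighboring triangles stay stitched together. A minimal sketch under that reading (the mesh representation is an illustrative choice):

```python
def subdivide_1_4(vertices, triangles):
    """One 1-4 subdivision step: split each triangle at its edge
    midpoints. `vertices` is a list of (x, y, z) tuples, `triangles`
    a list of vertex-index triples."""
    verts = list(vertices)
    midpoint = {}                        # edge (i, j) -> midpoint index

    def mid(i, j):
        key = (min(i, j), max(i, j))     # shared edges reuse one midpoint
        if key not in midpoint:
            vi, vj = verts[i], verts[j]
            verts.append(tuple((a + b) / 2.0 for a, b in zip(vi, vj)))
            midpoint[key] = len(verts) - 1
        return midpoint[key]

    tris = []
    for a, b, c in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, tris

# One step on a single triangle: 3 vertices, 1 face -> 6 vertices, 4 faces.
verts, tris = subdivide_1_4([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```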

    Let the agents do the talking: On the influence of vocal tract anatomy on speech during ontogeny


    Dynamic data structures and saliency-influenced rendering

    With the increasing heterogeneity of modern hardware, different requirements for 3d applications arise. Although real-time rendering of photo-realistic images is possible with today's graphics cards, it still requires large computational effort, and smartphones or computers with older, less powerful graphics cards may not be able to reproduce these results. To retain interactive rendering, the detail of a scene is usually reduced so that less data needs to be processed. This removal of data, however, may introduce errors, so-called artifacts, which can distract a human spectator gazing at the display and thus reduce the visual quality of the presented scene. This is counteracted by identifying features of an object that can be removed without introducing artifacts. Most methods utilize geometrical properties, such as distance or shape, to rate the quality of the performed reduction. This information is used to generate so-called Levels Of Detail (LODs), which are made available to the rendering system; the system then reduces the detail of an object using the precalculated LODs, e.g. when it is moved into the back of the scene. The appropriate LOD is selected using a metric and replaces the currently displayed version. This exchange must be made smoothly, requiring both LOD versions to be drawn simultaneously during a transition; otherwise the exchange introduces discontinuities, which are easily discovered by a human spectator. After completion of the transition, only the newly introduced LOD version is drawn and the previous overhead is removed. These LOD methods usually operate with discrete levels and exploit limitations of both the display and the spectator: the human. Human vision is limited, ranging from the inability to distinguish colors under varying illumination to being able to focus on only one location at a time. Researchers have developed many applications that exploit these limitations to increase the quality of an applied compression; popular vision-based compression methods include MPEG and JPEG. For example, JPEG compression exploits the reduced sensitivity of humans to color and encodes colors at a lower resolution. Other fields, such as auditory perception, also allow human limitations to be exploited: MP3 compression, for example, reduces the quality of stored frequencies if they are masked by other frequencies. Various computer models exist for representing perception. In our rendering scenario, a model is advantageous that cannot be influenced by the human spectator, such as visual salience, or saliency. Saliency is a notion from psychophysics that determines how much an object "pops out" of its surroundings. These outstanding objects (or features) are important for human vision and are directly evaluated by our Human Visual System (HVS). Saliency combines multiple parts of the HVS and allows an identification of regions where humans are likely to look. In applications, saliency-based methods have been used to control recursive or progressive rendering methods. Especially expensive display methods, such as path tracing or global illumination calculations, benefit from a perceptual representation, as recursions or calculations can be aborted where only small or unperceivable errors are expected. Yet saliency is commonly applied to 2d images, and an extension towards 3d objects has only partially been presented.
Some issues need to be addressed to accomplish a complete transfer. In this work, we present a smart rendering system that not only utilizes a 3d visual salience model but also applies the reduction in detail directly during rendering. As opposed to normal LOD methods, this detail reduction is not limited to a predefined set of levels; rather, a dynamic and continuous LOD is created. Furthermore, to apply this reduction in a human-oriented way, a universal function to compute the saliency of a 3d object is presented. The definition of this function allows object-related visual salience information to be precalculated and stored. This stored data is applicable in any illumination scenario and allows regions of interest on the surface of a 3d object to be identified. Unlike preprocessing methods, which generate a view-independent LOD, this identification also includes information about the scene. Thus, we are able to define a perception-based, view-specific LOD. Performance measurements of a prototypical implementation on computers with modern graphics cards achieved interactive frame rates, and several tests have proven the validity of the reduction. The adaptation of an object is performed with a dynamic data structure, the TreeCut. It is designed to operate on hierarchical representations, which define a multi-resolution object. In such a hierarchy, the leaf nodes contain the highest detail, while inner nodes are approximations of their respective subtrees. As opposed to classical hierarchical rendering methods, a cut is stored, and re-traversal of the tree during rendering is avoided. Due to the explicit cut representation, the TreeCut can be altered using only two core operations: refine and coarse. The refine operation increases detail by replacing a node of the tree with its children, while the coarse operation removes a node along with its siblings and replaces them with their parent node. These operations do not rely on external information and can be performed locally, requiring only direct successor or predecessor information. Different strategies to evolve the TreeCut are presented, which adapt the representation using only information given by the current cut. They evaluate the cut by assigning either a priority or a target level (or bucket) to each cut node. The former is modelled as an optimization problem that increases the average priority of the cut while being restricted in some way, e.g. in size. The latter evolves the cut to match a certain distribution and is applied in cases where a prioritization of nodes is not feasible. Both evaluation strategies operate with linear time complexity in the size of the current TreeCut. The data layout separates rendering data from the hierarchy to enable multi-threaded evaluation and display: the object is adapted over multiple frames while rendering continues uninterrupted by the evaluation strategy in use. Due to this design, the overhead imposed by the TreeCut data structure does not influence rendering performance, and linear time complexity for rendering is retained. The TreeCut is not limited to altering the geometrical detail of an object: it has also been applied successfully to create a non-photo-realistic stippling display, which draws the object with equally sized points of varying density.
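
For concreteness, the two core cut operations admit a compact sketch, with the cut held as a set of nodes. The node layout is an illustrative assumption; as described above, the thesis keeps the hierarchy separate from the rendering data.

```python
class Node:
    """Hierarchy node: leaves carry the full detail, inner nodes
    approximate their subtree."""
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []

def refine(cut, node):
    """Increase detail: replace a cut node by its children."""
    if node in cut and node.children:
        cut.remove(node)
        cut.update(node.children)

def coarse(cut, node):
    """Decrease detail: replace a node and all of its siblings by
    their parent; legal only if every sibling lies on the cut."""
    parent = node.parent
    if parent is not None and all(c in cut for c in parent.children):
        cut.difference_update(parent.children)
        cut.add(parent)
```

Both operations touch only a node and its direct neighbours in the hierarchy, which is what makes the local, re-traversal-free cut evaluation described above possible.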
For the stippling display, the bucket-based evaluation strategy is utilized, which determines the distribution of the cut from local illumination information. As an alternative application, an attention-drawing mechanism is proposed, which applies the TreeCut evaluation strategies to define the display style of a notification icon; a combination of external priorities is used to derive the appropriate icon version. One application for this mechanism is a messaging system that accounts for the current user situation. When optimizing an object or scene, perceptual methods make it possible to account for, or exploit, human limitations. Visual salience approaches therefore derive a saliency map, which encodes regions of interest in a 2d map. Rendering algorithms extract importance from such a map and adapt the rendering accordingly, e.g. aborting a recursion when the current location is unsalient. Visual salience depends on multiple factors, including the view and the illumination of the scene. We extend the existing definition of 2d saliency and propose a universal function for 3d visual salience: the Bidirectional Saliency Weight Distribution Function (BSWDF). Instead of extracting saliency from a 2d image and approximating the 3d information, we compute it directly from the 3d data. We derive a list of equivalent features for the 3d scenario and add them to the BSWDF. As the BSWDF is universal, 2d images are covered as well, and the important regions within images can be calculated. To extract the individual features that contribute to visual salience, the capabilities of modern graphics cards are utilized in combination with an accumulation method for rendering. Inspired by point-based rendering methods, local features are summed up in a single surface element (surfel) and compared with their surround to determine whether they "pop out". These operations are performed by a shader program that is executed on the Graphics Processing Unit (GPU) and has direct access to the 3d data, which increases processing speed because no transfer of the data is required. After computation, these object-specific features can be combined to derive a saliency map for the object. Surface-specific information, e.g. color or curvature, can be preprocessed and stored on disk. We define a sampling scheme to determine the views that need to be evaluated for each object. With these schemes, the features can be interpolated for any view that occurs during rendering, and the corresponding surface data is reconstructed. The sampling schemes compose a set of images in the form of a lookup table, similar to existing rendering techniques that extract illumination information from a lookup. The size of the lookup table grows only with the number of samples or with the image size used for creation, as the images are of equal size; thus the quality of the saliency data is independent of the object's geometrical complexity. The computation of a BSWDF can be performed either on a Central Processing Unit (CPU) or on a GPU, and an implementation requires only a few instructions when using a shader program. If the surface features have been stored during a preprocess, a reprojection of the data is performed and combined with the current information about the object. Once the data is available, the saliency values are computed using a specialized illumination model, and a priority for each primitive is extracted. If the GPU is used, the calculated data has to be transferred back from the graphics card.
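
The surfel-level "pop out" test lends itself to a compact sketch: contrast each surfel's feature value against a distance-weighted mean of its surround, in the center-surround spirit described above. The Gaussian kernel, the radius, and the single scalar feature are illustrative assumptions; the thesis computes several features in parallel on the GPU.

```python
import numpy as np

def surfel_saliency(positions, features, radius=0.1):
    """Per-surfel contrast of a scalar feature against its local
    surround: a surfel 'pops out' when its feature differs strongly
    from the Gaussian-weighted mean of its neighbours."""
    n = len(positions)
    saliency = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(positions - positions[i], axis=1)
        w = np.exp(-(d / radius) ** 2)
        w[i] = 0.0                               # exclude the center surfel
        if w.sum() > 0.0:
            surround = (w @ features) / w.sum()  # weighted surround mean
            saliency[i] = abs(features[i] - surround)
    peak = saliency.max()
    return saliency / peak if peak > 0.0 else saliency
```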
We therefore use the "transform feedback" capabilities, which allow high transfer rates and preserve the order of the processed primitives. In this way, an identification of regions of interest based on the currently used primitives is achieved. The TreeCut evaluation strategies are then able to optimize the representation in a perception-based manner. As the adaptation utilizes information from the current scene, each change to an object can result in new visual salience information. Thus a self-optimizing system is defined: the Feedback System. The output generated by this system converges towards a perception-optimized solution. To prove the usefulness of the saliency information, user tests have been performed on the results generated by the proposed Feedback System. We compared a saliency-enhanced object compression to a purely geometrical approach, as is common for LOD generation. One result of the tests is that saliency information allows compression to be increased even further than is possible with purely geometrical methods. The participants were not able to distinguish between objects even when the saliency-based compression had only 60% of the size of the geometrically reduced object. At larger size ratios, the saliency-based compression is rated, on average, with a higher score, and these results have high significance under statistical tests. The Feedback System extends a 3d object with the capability of self-optimization. Not only geometrical detail but also other properties can be limited and optimized using the TreeCut in combination with a BSWDF. We present a dynamic animation which utilizes a Software Development Kit (SDK) for physical simulations. This was chosen, on the one hand, to show the universal applicability of the proposed system and, on the other hand, to focus on the connection between the TreeCut and the SDK. We adapt the existing framework and include the SDK within our design. In this case, the TreeCut operations alter not only geometrical but also simulation detail. This increases calculation performance because both the rendering and the SDK operate on less data after the reduction has been completed. The selected simulation type is a soft-body simulation. Soft bodies are deformable to a certain degree but retain their internal connections. An example is a piece of cloth that smoothly fits the underlying surface without tearing apart. Other types are rigid bodies, i.e. idealized objects that cannot be deformed, and fluids or gaseous materials, which are well suited for point-based simulations. Any of these simulations scales with the number of simulation nodes used, and a reduction of detail increases performance significantly. We define a specialized BSWDF to evaluate simulation-specific features, such as motion. The Feedback System then increases detail in highly salient regions, e.g. those with large motion, and saves computation time by reducing detail in the static parts of the simulation. Thus, the detail of the simulation is preserved while fewer nodes are simulated. The incorporation of perception in real-time rendering is an important part of recent research. Today, the HVS is well understood, and valid computer models have been derived. These models are frequently used in commercial and free software, e.g. JPEG compression. Within this thesis, the TreeCut is presented, which changes the LOD of an object in a dynamic and continuous manner. No advance definition of the individual levels is required, and the transitions are performed locally.
Furthermore, in combination with the identification of important regions by the BSWDF, a perceptual evaluation of a 3d object is achieved. As opposed to existing methods, which approximate the data from 2d images, the perceptual information is acquired directly from the 3d data. Some of this data can be preprocessed if necessary, to defer additional computations away from rendering. The Feedback System, created by the TreeCut and the BSWDF, optimizes the representation and is not limited to visual data alone. We have shown with our prototype that interactive frame rates can be achieved with modern hardware, and we have proven the validity of the reductions by performing several user tests. However, the presented system focuses only on specific aspects, and more research is required to capture even more of the capabilities that a perception-based rendering system can provide.
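
Putting the pieces together, the Feedback System's loop can be caricatured in a few lines by reusing the refine/coarse sketch above: score the current cut, add detail where saliency is high, and reclaim the budget where it is low. The budget policy and the `saliency_of` callback are placeholders, not the thesis's scheduler.

```python
def feedback_loop(cut, budget, saliency_of, frames=100):
    """Per frame: refine the most salient cut node, then coarsen the
    least salient nodes until the primitive budget is met again;
    rendering (not shown) runs concurrently on the same cut."""
    for _ in range(frames):
        best = max(cut, key=saliency_of)
        refine(cut, best)                          # add detail where salient
        for node in sorted(cut, key=saliency_of):  # then enforce the budget
            if len(cut) <= budget:
                break
            coarse(cut, node)                      # drop unsalient detail
```

Because each adaptation changes the rendered object, the saliency scores change with it, and the loop converges towards the perception-optimized representation described above.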