
    Accessibility for Line-Cutting in Freeform Surfaces

    Manufacturing techniques such as hot-wire cutting, wire-EDM, wire-saw cutting, and flank CNC machining all belong to a class of processes called line-cutting, in which the cutting tool moves tangentially along the reference geometry. From a geometric point of view, line-cutting poses a unique set of challenges in guaranteeing that the process is collision-free. In this work, given a set of cut-paths on a freeform geometry as input, we propose a conservative algorithm for finding collision-free tangential cutting directions. These directions, if they exist, are guaranteed to be globally accessible for fabricating the geometry by line-cutting. We then demonstrate how this information can be used to generate globally collision-free cut-paths. We apply our algorithm to freeform models of varying complexity.
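
    As an illustration of the kind of check this abstract describes, the following is a much-simplified, hedged sketch (not the paper's algorithm): candidate cutting directions in the tangent plane of each cut-path sample are tested against a user-supplied collision predicate, and only the collision-free ones are kept. The names line_collides, points, and normals are assumptions made for illustration; the paper's conservative, globally accessible construction is more involved.

```python
# Hedged sketch of per-point accessibility testing for line-cutting.
# Assumes the caller provides cut-path sample points with surface normals and a
# predicate line_collides(p, d) that reports whether the infinite cutting line
# through p along direction d hits the part outside a small band around p.
import numpy as np

def tangent_frame(normal):
    """Build two orthonormal tangent vectors for a unit surface normal."""
    a = np.array([1.0, 0.0, 0.0])
    if abs(normal @ a) > 0.9:            # avoid a near-parallel reference axis
        a = np.array([0.0, 1.0, 0.0])
    t1 = np.cross(normal, a)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(normal, t1)
    return t1, t2

def accessible_tangential_directions(points, normals, line_collides, n_angles=90):
    """For each cut-path sample, return the sampled tangential directions
    that the predicate reports as collision-free."""
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)   # undirected lines
    result = []
    for p, n in zip(points, normals):
        n = np.asarray(n, dtype=float)
        t1, t2 = tangent_frame(n / np.linalg.norm(n))
        free = []
        for a in angles:
            d = np.cos(a) * t1 + np.sin(a) * t2    # direction in the tangent plane
            if not line_collides(np.asarray(p, dtype=float), d):
                free.append(d)
        result.append(np.array(free))
    return result
```

    Intersecting the surviving directions across all samples of a cut-path gives a (sampled, hence approximate) estimate of directions that remain accessible along the whole path.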

    CAD/CAM integration based on machining features for prismatic parts

    The development of CAD and CAM technology has significantly increased efficiency in each individual area. Their independent development, however, has greatly restrained the improvement of overall efficiency from design to manufacturing. Only simple integration between CAD and CAM systems has been achieved: current integrated CAD/CAM systems can share the same geometry model of a product in a neutral or proprietary format, but the process-plan information produced by CAPP systems cannot serve as a starting point for CAM systems to generate tool paths and NC programs. The user still needs to manually create the machining operations and define the geometry, cutting tool, and various parameters for each operation. Features play an important role in recent research on CAD/CAM integration. This thesis investigates the integration of CAD/CAM systems based on machining features. The focus of the research is to connect CAPP and CAM systems through machining features, to reduce unnecessary user interaction, and to automate the preparation of tool paths. Machining features are used to define machining geometries and to eliminate the need for user intervention in UG. A prototype is developed to demonstrate CAD/CAM integration based on machining features for prismatic parts. The prototype integration layer is implemented in conjunction with an existing CAPP system, FBMach, and a commercial CAD/CAM system, Unigraphics. Not only the geometry information of the product but also the process-plan and machining-feature information are directly available to the CAM system, and tool paths can be generated automatically from solid models and process plans.
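
    To make the idea concrete, here is a hypothetical, hedged sketch of the kind of data flow the abstract describes: a machining feature carrying both geometry references and process-plan data is translated directly into the parameter set a CAM operation needs, so nothing has to be re-entered by hand. The class and function names are illustrative only and are not the FBMach or Unigraphics (UG) API.

```python
# Illustrative sketch: machining features as the bridge between CAPP and CAM.
from dataclasses import dataclass, field

@dataclass
class MachiningFeature:
    feature_type: str                  # e.g. "pocket", "hole", "slot"
    geometry_refs: list                # ids of faces/edges in the solid model
    depth_mm: float
    tool: str                          # cutting tool chosen by the process plan
    params: dict = field(default_factory=dict)   # speeds, feeds, stock, etc.

def to_cam_operation(feature: MachiningFeature) -> dict:
    """Translate a machining feature plus its process-plan data into the
    parameter set a CAM system would need to create an operation."""
    op_type = {"pocket": "POCKET_MILLING",
               "hole": "DRILLING",
               "slot": "SLOT_MILLING"}.get(feature.feature_type, "GENERIC_MILLING")
    return {
        "operation": op_type,
        "geometry": feature.geometry_refs,
        "tool": feature.tool,
        "depth_mm": feature.depth_mm,
        **feature.params,
    }

# Example: a pocket feature recognized by CAPP becomes a pocket-milling operation.
pocket = MachiningFeature("pocket", ["face_12", "face_13"], 8.0, "EM_10mm",
                          {"spindle_rpm": 6000, "feed_mm_min": 800})
print(to_cam_operation(pocket))
```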

    Global optimisation of multiple gravity assist trajectories

    Multiple gravity assist (MGA) trajectories represent a particular class of space trajectories in which a spacecraft exploits the encounter with one or more celestial bodies to change its velocity vector; they have been essential for reaching high Delta-v targets with low propellant consumption. The search for optimal transfer trajectories can be formulated as a mixed combinatorial-continuous global optimisation problem; however, the problem is known to be difficult to solve, especially if deep space manoeuvres (DSM) are considered. This thesis addresses the automatic design of MGA trajectories through global search techniques, in response to the requirement, during the preliminary design phase, of having a large number of mission options available in a short time. Two different approaches are presented. The first is a two-level approach: a number of feasible planetary sequences are initially generated; then, for each one, families of MGA trajectories are built incrementally. The whole transfer is decomposed into sub-problems of smaller dimension and complexity, and the trajectory is progressively composed by solving one problem after the other. At each incremental step, a stochastic search identifies sets of feasible solutions: this region is preserved, while the rest of the search space is pruned out. The process iterates by adding one planet-to-planet leg at a time and pruning the unfeasible portion of the solution space. Therefore, when another leg is added to the trajectory, only the feasible set for the previous leg is considered and the search space is reduced. It is shown, through comparative tests, how the proposed incremental search performs an effective pruning of the search space, providing families of optimal solutions at a lower computational cost than a non-incremental approach. Known deterministic and stochastic methods are used for the comparison. The algorithm is applied to real MGA case studies, including the ESA missions BepiColombo and Laplace. The second approach performs an integrated search for the planetary sequence and the associated trajectories. The complete design of an MGA trajectory is formulated as an autonomous planning and scheduling problem. The resulting scheduled plan provides the planetary sequence for an MGA trajectory and a good estimation of the optimality of the associated trajectories. For each departure date, a full tree of possible transfers from departure to destination is generated. An algorithm inspired by Ant Colony Optimization (ACO) is devised to explore the space of possible plans. The ants explore the tree from departure to destination, adding one node at a time and using a probability function to select one of the feasible directions. Unlike standard ACO, a taboo-based heuristic prevents ants from re-exploring the same solutions. This approach is applied to the design of optimal transfers to Saturn (inspired by Cassini) and to Mercury, and it is shown to be very competitive against known stochastic population-based techniques.
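
    The following is a hedged, much-simplified sketch of the second approach's ACO-inspired tree search with a taboo mechanism (a generic re-implementation, not the thesis code): ants walk the transfer tree from departure to destination, choosing among feasible children with probabilities weighted by an assumed heuristic score, and complete paths that have already been explored are stored in a taboo set so they are not evaluated again. The callbacks children, score, and is_goal are placeholders for the trajectory model.

```python
# Simplified ACO-style tree search with a taboo set over complete paths.
# Nodes are assumed hashable; score(node) returns a non-negative value where
# higher is better (e.g. an inverse of the leg's Delta-v cost).
import random

def ant_tree_search(root, children, score, is_goal, n_ants=200, seed=0):
    """children(node) -> feasible child nodes; is_goal(node) -> bool.
    Returns the best goal path found and its accumulated score."""
    rng = random.Random(seed)
    taboo = set()                         # complete paths already explored
    best_path, best_score = None, float("-inf")
    for _ in range(n_ants):
        node, path, total = root, [root], 0.0
        while not is_goal(node):
            options = children(node)
            if not options:               # dead end: abandon this ant
                path = None
                break
            weights = [max(score(c), 1e-9) for c in options]
            node = rng.choices(options, weights=weights)[0]
            path.append(node)
            total += score(node)
        if path is None or tuple(path) in taboo:
            continue                      # crude taboo: skip duplicate solutions
        taboo.add(tuple(path))
        if total > best_score:
            best_path, best_score = path, total
    return best_path, best_score
```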

    Automatic video segmentation employing object/camera modeling techniques

    Video compression and storage techniques established in practice still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not reflected in the technical system, making it difficult to manipulate the video at the object level. The realization of object-based manipulation will introduce many new possibilities for working with videos, such as composing new scenes from pre-existing video objects or enabling user interaction with the scene. Moreover, object-based video compression, as defined in the MPEG-4 standard, can provide high compression ratios because the foreground objects can be sent independently from the background. If the scene background is static, the background views can even be combined into a large panoramic sprite image, from which the current camera view is extracted. This results in a higher compression ratio since the sprite image for each scene only has to be sent once. A prerequisite for employing object-based video processing is automatic (or at least user-assisted semi-automatic) segmentation of the input video into semantic units, the video objects. This segmentation is a difficult problem because the computer does not have the vast amount of pre-knowledge that humans subconsciously use for object detection. Thus, even simply defining the desired output of a segmentation system is difficult. The subject of this thesis is to provide algorithms for segmentation that are applicable to common video material and that are computationally efficient. The thesis is conceptually separated into three parts. In Part I, an automatic segmentation system for general video content is described in detail. Part II introduces object models as a tool to incorporate user-defined knowledge about the objects to be extracted into the segmentation process. Part III concentrates on the modeling of camera motion in order to relate the observed camera motion to real-world camera parameters. The segmentation system described in Part I is based on a background-subtraction technique. The pure background image that is required for this technique is synthesized from the input video itself. Sequences that contain rotational camera motion can also be processed since the camera motion is estimated and the input images are aligned into a panoramic scene background. This approach is fully compatible with the MPEG-4 video-encoding framework, such that the segmentation system can be easily combined with an object-based MPEG-4 video codec. After an introduction to the theory of projective geometry in Chapter 2, which is required for the derivation of camera-motion models, the estimation of camera motion is discussed in Chapters 3 and 4. It is important that the camera-motion estimation is not influenced by foreground object motion. At the same time, the estimation should provide accurate motion parameters such that all input frames can be combined seamlessly into a background image. The core motion estimation is based on a feature-based approach in which the motion parameters are determined with a robust estimation algorithm (RANSAC) in order to distinguish the camera motion from simultaneously visible object motion. Our experiments showed that the robustness of the original RANSAC algorithm in practice does not reach the theoretically predicted performance.
An analysis of the problem has revealed that this is caused by numerical instabilities that can be significantly reduced by a modification that we describe in Chapter 4. The synthesis of static background images is discussed in Chapter 5. In particular, we present a new algorithm for removing the foreground objects from the background image such that a pure scene background remains. The proposed algorithm is optimized to synthesize the background even for difficult scenes in which the background is only visible for short periods of time. The problem is solved by clustering the image content for each region over time, such that each cluster comprises static content. Furthermore, it is exploited that the times at which foreground objects appear in an image region are similar to those of neighboring image areas. The reconstructed background could be used directly as the sprite image in an MPEG-4 video coder. However, we have discovered that the counterintuitive approach of splitting the background into several independent parts can reduce the overall amount of data. In the case of general camera motion, the construction of a single sprite image is even impossible. In Chapter 6, a multi-sprite partitioning algorithm is presented, which separates the video sequence into a number of segments, for which independent sprites are synthesized. The partitioning is computed in such a way that the total area of the resulting sprites is minimized, while simultaneously satisfying additional constraints. These include a limited sprite-buffer size at the decoder, and the restriction that the image resolution in the sprite should never fall below the input-image resolution. The described multi-sprite approach is fully compatible with the MPEG-4 standard, but provides three advantages. First, any arbitrary rotational camera motion can be processed. Second, the coding cost for transmitting the sprite images is lower. Finally, the quality of the decoded sprite images is better than with previously proposed sprite-generation algorithms. Segmentation masks for the foreground objects are computed with a change-detection algorithm that compares the pure background image with the input images. A particular problem that occurs in the change detection is image misregistration. Since the change detection compares co-located image pixels in the camera-motion compensated images, a small error in the motion estimation can introduce segmentation errors because non-corresponding pixels are compared. We approach this problem in Chapter 7 by integrating risk maps into the segmentation algorithm, which identify pixels for which misregistration would probably result in errors. For these image areas, the change-detection algorithm is modified to disregard the difference values for the pixels marked in the risk map. This modification significantly reduces the number of false object detections in fine-textured image areas. The algorithmic building blocks described above can be combined into a segmentation system in various ways, depending on whether camera motion has to be considered or whether real-time execution is required. These different systems and example applications are discussed in Chapter 8. Part II of the thesis extends the described segmentation system to consider object models in the analysis. Object models allow the user to specify which objects should be extracted from the video.
In Chapters 9 and 10, a graph-based object model is presented in which the features of the main object regions are summarized in the graph nodes, and the spatial relations between these regions are expressed with the graph edges. The segmentation algorithm is extended by an object-detection algorithm that searches the input image for the user-defined object model. We provide two object-detection algorithms. The first is specific to cartoon sequences and uses an efficient sub-graph matching algorithm, whereas the second processes natural video sequences. With the object-model extension, the segmentation system can be controlled to extract individual objects, even if the input sequence comprises many objects. Chapter 11 proposes an alternative approach to incorporating object models into a segmentation algorithm. The chapter describes a semi-automatic segmentation algorithm, in which the user coarsely marks the object and the computer refines this to the exact object boundary. Afterwards, the object is tracked automatically through the sequence. In this algorithm, the object model is defined as the texture along the object contour. This texture is extracted in the first frame and then used during the object tracking to localize the original object. The core of the algorithm uses a graph representation of the image and a newly developed algorithm for computing shortest circular paths in planar graphs. The proposed algorithm is faster than the currently known algorithms for this problem, and it can also be applied to many related problems, such as shape matching. Part III of the thesis elaborates on different techniques to derive information about the physical 3-D world from the camera motion. In the segmentation system, we employ camera-motion estimation, but the obtained parameters have no direct physical meaning. Chapter 12 discusses an extension to the camera-motion estimation to factorize the motion parameters into physically meaningful parameters (rotation angles, focal length) using camera autocalibration techniques. The speciality of the algorithm is that it can process camera motion that spans several sprites by employing the above multi-sprite technique. Consequently, the algorithm can be applied to arbitrary rotational camera motion. For the analysis of video sequences, it is often required to determine and follow the position of the objects. Clearly, the object position in image coordinates provides little information if the viewing direction of the camera is not known. Chapter 13 provides a new algorithm to deduce the transformation between the image coordinates and the real-world coordinates for the special application of sport-video analysis. In sport videos, the camera view can be derived from markings on the playing field. For this reason, we employ a model of the playing field that describes the arrangement of lines. After detecting significant lines in the input image, a combinatorial search is carried out to establish correspondences between lines in the input image and lines in the model. The algorithm requires no information about the specific color of the playing field and is very robust to occlusions and poor lighting conditions. Moreover, the algorithm is generic in the sense that it can be applied to any type of sport by simply exchanging the model of the playing field. In Chapter 14, we again consider panoramic background images and focus in particular on their visualization.
Apart from the planar background sprites discussed previously, a frequently used visualization technique for panoramic images is the projection onto a cylinder surface, which is unwrapped into a rectangular image. However, the disadvantage of this approach is that the viewer has no good orientation in the panoramic image because it presents all viewing directions at the same time. In order to provide a more intuitive presentation of wide-angle views, we have developed a visualization technique specialized for the case of indoor environments. We present an algorithm to determine the 3-D shape of the room in which the image was captured, or, more generally, to compute a complete floor plan if several panoramic images captured in each of the rooms are provided. Based on the obtained 3-D geometry, a graphical model of the rooms is constructed, where the walls are displayed with textures that are extracted from the panoramic images. This representation enables virtual walk-throughs in the reconstructed rooms and therefore provides better orientation for the user. Summarizing, we can conclude that all segmentation techniques employ some definition of foreground objects. These definitions are either explicit, using object models as in Part II of this thesis, or implicit, as in the background synthesis of Part I. The results of this thesis show that implicit descriptions, which extract their definition from video content, work well when the sequence is long enough to extract this information reliably. However, high-level semantics are difficult to integrate into segmentation approaches that are based on implicit models. Instead, those semantics should be added in post-processing steps. On the other hand, explicit object models apply semantic pre-knowledge at early stages of the segmentation. Moreover, they can be applied to short video sequences or even still pictures since no background model has to be extracted from the video. The definition of a general object-modeling technique that is widely applicable and that also enables an accurate segmentation remains an important yet challenging problem for further research.
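
    As a concrete illustration of the feature-based, robust camera-motion estimation discussed in Chapters 3 and 4 of this thesis, here is a hedged sketch built from standard OpenCV blocks (ORB features, a ratio test, and plain RANSAC homography fitting). It is a stand-in, not the thesis implementation: the thesis additionally modifies RANSAC to reduce the numerical instabilities mentioned above.

```python
# Standard feature-based camera-motion (homography) estimation with RANSAC.
import cv2
import numpy as np

def estimate_camera_motion(frame_a, frame_b, ratio=0.75):
    """Estimate the projective camera motion between two grayscale frames;
    RANSAC rejects matches on independently moving foreground objects."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des_a, des_b, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]  # Lowe ratio test
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inlier_mask    # inliers mostly lie on the static background
```

    Accumulating such homographies over a shot and warping the frames accordingly is, in spirit, how the panoramic background used by the subtraction step can be assembled.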

    Feature-based hybrid inspection planning for complex mechanical parts

    Globalization and emerging new powers in the manufacturing world are among the many challenges that major manufacturing enterprises are facing. This has resulted in an increasing number of alternatives for satisfying customers' growing needs regarding products' aesthetic and functional requirements. The complexity of part designs and the engineering specifications required to satisfy such needs often demand better use of advanced and more accurate tools to achieve good quality. Inspection is a crucial manufacturing function that should be further improved to cope with such challenges. Intelligent planning for the inspection of parts with complex geometric shapes and free-form surfaces using contact or non-contact devices is still a major challenge. Research in segmentation and localization techniques should also enable inspection systems to utilize modern measurement technologies capable of collecting huge numbers of measured points. Advanced digitization tools can be classified as contact or non-contact sensors. The purpose of this thesis is to develop a hybrid inspection planning system that benefits from the advantages of both techniques. Moreover, minimizing the deviation of the measured part from the original CAD model is not the only characteristic that should be considered when implementing the localization process in order to accept or reject the part; geometric tolerances must also be considered. A segmentation technique that deals directly with the individual points is a necessary step in the developed inspection system, where the output is the actual measured points, not a tessellated model as commonly produced by current segmentation tools. The contribution of this work is threefold. First, a knowledge-based system was developed for selecting the most suitable sensor, using an inspection-specific feature taxonomy in the form of a 3D matrix in which each cell includes the corresponding knowledge rules and generates inspection tasks; a Travelling Salesperson Problem (TSP) formulation is applied to sequence these hybrid inspection tasks. Second, a novel region-based segmentation algorithm was developed that deals directly with the measured point cloud and generates sub-point clouds, each of which represents a feature to be inspected and includes the original measured points. Finally, a new tolerance-based localization algorithm was developed to verify the functional requirements; it was applied and tested using form tolerance specifications. This research enhances existing inspection planning systems for complex mechanical parts with a hybrid inspection planning model. The main benefit of the developed segmentation and tolerance-based localization algorithms is the improvement of inspection decisions, so that good parts are not rejected because of misleading results from currently available localization techniques. The better and more accurate inspection decisions achieved will lead to less scrap, which, in turn, will reduce product cost and improve the company's potential in the market.
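
    For the task-sequencing step, the thesis states that a TSP formulation is used but does not tie it to a particular solver; the hedged sketch below is a generic stand-in, sequencing inspection task locations with a nearest-neighbour construction followed by 2-opt improvement.

```python
# Generic TSP-style sequencing of inspection task locations (illustrative only).
import math

def tour_length(points, order):
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(points, start=0):
    """Greedy construction: always visit the closest unvisited task next."""
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def two_opt(points, order):
    """Local improvement: reverse segments while the tour keeps getting shorter."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 1):
            for j in range(i + 1, len(order)):
                candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if tour_length(points, candidate) < tour_length(points, order):
                    order, improved = candidate, True
    return order

# Example: sequence five hypothetical inspection task locations (x, y, z in mm).
tasks = [(0, 0, 0), (50, 10, 5), (20, 40, 0), (80, 80, 10), (10, 70, 5)]
print(two_opt(tasks, nearest_neighbour(tasks)))
```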

    Automated Process Planning for Five-Axis Point Milling of Sculptured Surfaces

    Ph.D. (Doctor of Philosophy)

    LV volume quantification via spatiotemporal analysis of real-time 3-D echocardiography

    This paper presents a method of four-dimensional (4-D) (3-D + time) space-frequency analysis for directional denoising and enhancement of real-time three-dimensional (RT3D) ultrasound and for quantitative measures in diagnostic cardiac ultrasound. Expansion of echocardiographic volumes is performed with complex exponential wavelet-like basis functions called brushlets. These functions offer good localization in time and frequency and decompose a signal into distinct patterns of oriented harmonics, which are invariant to intensity and contrast range. Deformable-model segmentation is carried out on denoised data after thresholding of the transform coefficients. This process attenuates speckle noise while preserving the location of cardiac structures. The superiority of 4-D over 3-D analysis for decorrelating additive white noise and multiplicative speckle noise is demonstrated on a 4-D phantom volume expanding in time. Quantitative validation, computed for contours and volumes, is performed on in vitro balloon phantoms. Clinical applications of this spatiotemporal analysis tool are reported for six patient cases, providing measures of left ventricular volumes and ejection fraction. Index terms: echocardiography, LV volume, spatiotemporal analysis, speckle denoising.
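
    The denoising step thresholds brushlet coefficients; since brushlets are local Fourier-like bases, the hedged sketch below uses plain 3-D FFT hard thresholding as a simplified stand-in (illustrative only, not the paper's expansion), keeping only the largest-magnitude spectral coefficients before reconstruction.

```python
# Simplified FFT-domain hard thresholding as a stand-in for brushlet denoising.
import numpy as np

def fft_hard_threshold(volume, keep_fraction=0.05):
    """Keep only the largest-magnitude spectral coefficients and reconstruct."""
    coeffs = np.fft.fftn(volume)
    mags = np.abs(coeffs).ravel()
    cutoff = np.quantile(mags, 1.0 - keep_fraction)   # magnitude threshold
    coeffs[np.abs(coeffs) < cutoff] = 0.0
    return np.real(np.fft.ifftn(coeffs))

# Example: denoise a noisy synthetic 3-D volume with a crude block "phantom".
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.2, (32, 32, 32))
vol[8:24, 8:24, 8:24] += 1.0
denoised = fft_hard_threshold(vol, keep_fraction=0.02)
```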

    Coronal loop detection from solar images and extraction of salient contour groups from cluttered images

    This dissertation addresses two different problems: 1) coronal loop detection from solar images; and 2) salient contour group extraction from cluttered images. In the first part, we propose two different solutions to the coronal loop detection problem. The first solution is a block-based coronal loop mining method that detects coronal loops in solar images by dividing the solar image into fixed-size blocks, labeling the blocks as 'Loop' or 'Non-Loop', extracting features from the labeled blocks, and finally training classifiers to generate learning models that can classify new image blocks. The block-based approach achieves 64% accuracy in 10-fold cross-validation experiments. To improve accuracy and scalability, we propose a contour-based coronal loop detection method that extracts contours from cluttered regions, labels the contours as 'Loop' or 'Non-Loop', and extracts geometric features from the labeled contours. The contour-based approach achieves 85% accuracy in 10-fold cross-validation experiments, an increase of about 20 percentage points over the block-based approach. In the second part, we propose a method to extract semi-elliptical open curves from cluttered regions. Our method consists of the following steps: obtaining individual smooth contours along with their saliency measures; then, starting from the most salient contour, searching for possible grouping options for each contour; and continuing the grouping until an optimum solution is reached. Our work involved the design and development of a complete system for coronal loop mining in solar images, which required the formulation of new Gestalt perceptual rules and a systematic methodology to select and combine them in a fully automated, judicious manner using machine learning techniques that eliminate the need to manually set various weight and threshold values to define an effective cost function. After finding salient contour groups, we close the gaps within the contours in each group and perform B-spline fitting to obtain smooth curves. Our methods were successfully applied to cluttered solar images from TRACE and STEREO/SECCHI to discern coronal loops. Aerial road images were also used to demonstrate the applicability of our grouping techniques to other contour types in other real applications.
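
    A hedged sketch of the block-based pipeline follows: blocks are cut from solar images, simple intensity and edge statistics serve as features (the dissertation's actual features and classifier may differ), and a classifier is scored with 10-fold cross-validation. The labels here are random placeholders standing in for the manual Loop / Non-Loop annotations.

```python
# Block-based image classification with 10-fold cross-validation (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def block_features(block):
    """Simple intensity/texture statistics for one image block."""
    gy, gx = np.gradient(block.astype(float))
    edge_strength = np.hypot(gx, gy)
    return [block.mean(), block.std(), edge_strength.mean(), edge_strength.max()]

def blocks_from_image(image, size=32):
    h, w = image.shape
    return [image[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

# `images` and labels would come from annotated solar data; synthetic placeholders here.
rng = np.random.default_rng(1)
images = [rng.random((256, 256)) for _ in range(4)]
blocks = [b for img in images for b in blocks_from_image(img)]
X = np.array([block_features(b) for b in blocks])
y = rng.integers(0, 2, len(X))          # placeholder Loop / Non-Loop labels

scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=10)
print("10-fold accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```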