    Geometric and form feature recognition tools applied to a design for assembly methodology

    The paper presents geometric tools for an automated Design for Assembly (DFA) assessment system. For each component in an assembly, a two-step feature search is performed: firstly (using the minimal bounding box), mass, dimensions and symmetries are identified, allowing the part to be classified, according to DFA convention, as either rotational or prismatic; secondly, form features are extracted, allowing an effective method of mechanised orientation to be determined. Together these algorithms support the fuzzy decision support system of an assembly-orientated CAD system known as FuzzyDFA.
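
    A minimal sketch of the first step described above: classifying a part as rotational or prismatic from its minimal bounding box. This is not the paper's implementation; the tolerance `rel_tol`, the helper names and the symmetry flag are assumptions.

```python
# Illustrative sketch of a DFA-style rotational/prismatic classification
# from minimal-bounding-box dimensions; thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    dx: float  # minimal bounding-box edge lengths
    dy: float
    dz: float

def classify_part(box: BoundingBox, has_rotational_symmetry: bool,
                  rel_tol: float = 0.05) -> str:
    """Return 'rotational' or 'prismatic': a part is treated as rotational
    when it has an axis of symmetry and two box edges are nearly equal,
    hinting at a cylinder-like envelope."""
    dims = sorted([box.dx, box.dy, box.dz])
    near_equal = any(abs(a - b) <= rel_tol * max(a, b)
                     for a, b in [(dims[0], dims[1]), (dims[1], dims[2])])
    if has_rotational_symmetry and near_equal:
        return "rotational"
    return "prismatic"

# Example: a shaft-like part 10 x 10 x 80 mm with an axis of symmetry
print(classify_part(BoundingBox(10.0, 10.0, 80.0), True))   # -> rotational
print(classify_part(BoundingBox(12.0, 30.0, 80.0), False))  # -> prismatic
```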

    Accurate detection of dysmorphic nuclei using dynamic programming and supervised classification

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows
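
    A generic sketch of the dynamic-programming step described above: given a cost map sampled along an initial nuclear contour (rows = radial offsets, columns = positions along the contour), find the minimum-cost path that picks one row per column with limited jumps between neighbouring columns. This is not BleND itself; the cost map and the jump limit are assumptions.

```python
# Minimum-cost path through a contour cost map via dynamic programming.
import numpy as np

def optimal_contour_path(cost: np.ndarray, max_jump: int = 1) -> np.ndarray:
    """cost: (n_offsets, n_samples) array; returns one row index per column."""
    n_rows, n_cols = cost.shape
    acc = np.full_like(cost, np.inf, dtype=float)   # accumulated cost
    back = np.zeros((n_rows, n_cols), dtype=int)    # backpointers
    acc[:, 0] = cost[:, 0]
    for j in range(1, n_cols):
        for i in range(n_rows):
            lo, hi = max(0, i - max_jump), min(n_rows, i + max_jump + 1)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    # backtrack from the cheapest end point
    path = np.empty(n_cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for j in range(n_cols - 1, 0, -1):
        path[j - 1] = back[path[j], j]
    return path

# Toy cost map: a low-cost "edge" along row 5 that the path should follow
rng = np.random.default_rng(0)
cost = rng.random((11, 60))
cost[5, :] *= 0.1
print(optimal_contour_path(cost)[:10])
```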

    Blend recognition from CAD mesh models using pattern matching

    This paper reports a unique, platform-independent approach for blend recognition from CAD mesh models using pattern matching. On average, blend features account for about 60% of the total facets in a CAD mesh model, so extracting these features is essential for seamless CAD/CAM integration. The facets of the same region have similar patterns. The focus of this paper is to recognize blends using hybrid mesh segmentation based on pattern matching. Blend recognition is carried out in three phases: preprocessing, pattern-matching hybrid mesh segmentation, and blend feature identification. In preprocessing, the adjacency relationship between the facets of the CAD mesh model is established, and an Artificial Neural Network based threshold prediction is employed for the hybrid mesh segmentation. In the second phase, pattern-matching hybrid mesh segmentation clusters the facets into patches based on distinct geometrical properties. After segmentation, each facet group is subjected to several conformal tests to identify the type of analytical surface, such as a cylinder, cone, sphere, or torus. In the blend feature recognition phase, rule-based reasoning is used for blend feature extraction. The proposed method has been implemented in VC++ and extensively tested on benchmark test cases for prismatic surfaces. The proposed algorithm extracts the features with coverage of more than 95%. The innovation lies in the “Facet Area” based pattern-matching hybrid mesh segmentation and the blend recognition rules. The extracted feature information can be utilized for downstream applications such as tool path generation, computer-aided process planning, FEA, reverse engineering, and additive manufacturing.
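
    An illustrative sketch (not the authors' VC++ implementation) of the core idea of grouping adjacent facets whose facet areas follow a similar pattern. The mesh representation, the adjacency map and the area-ratio threshold are assumptions.

```python
# Region-growing segmentation of mesh facets by facet-area similarity.
from collections import deque

def segment_by_facet_area(areas, adjacency, area_ratio_tol=0.2):
    """areas: list of facet areas; adjacency: dict facet -> neighbour facets.
    Returns patches (lists of facet indices) grown over facets whose areas
    differ from the seed facet by at most `area_ratio_tol`."""
    visited = set()
    patches = []
    for seed in range(len(areas)):
        if seed in visited:
            continue
        patch, queue = [], deque([seed])
        visited.add(seed)
        while queue:
            f = queue.popleft()
            patch.append(f)
            for nb in adjacency.get(f, ()):
                if nb in visited:
                    continue
                ratio = abs(areas[nb] - areas[seed]) / max(areas[seed], 1e-12)
                if ratio <= area_ratio_tol:
                    visited.add(nb)
                    queue.append(nb)
        patches.append(patch)
    return patches

# Toy mesh: facets 0-2 have similar areas, facet 3 is much larger
areas = [1.0, 1.05, 0.95, 4.0]
adjacency = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
print(segment_by_facet_area(areas, adjacency))  # -> [[0, 1, 2], [3]]
```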

    Multimodal Convolutional Neural Networks for Matching Image and Sentence

    In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence. Our m-CNN provides an end-to-end framework with convolutional architectures to exploit image representation, word composition, and the matching relations between the two modalities. More specifically, it consists of one image CNN encoding the image content and one matching CNN learning the joint representation of image and sentence. The matching CNN composes words into different semantic fragments and learns the inter-modal relations between the image and the composed fragments at different levels, thus fully exploiting the matching relations between image and sentence. Experimental results on benchmark databases for bidirectional image and sentence retrieval demonstrate that the proposed m-CNNs can effectively capture the information necessary for image and sentence matching. Specifically, our proposed m-CNNs achieve state-of-the-art performance for bidirectional image and sentence retrieval on the Flickr30K and Microsoft COCO databases. (Accepted by ICCV 2015.)
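
    A compact PyTorch sketch of the kind of architecture the abstract describes: an image CNN feature fused with a matching CNN that composes word embeddings into fragments and scores the image-sentence pair. The layer sizes, the fusion scheme and all parameter names are assumptions, not the paper's configuration.

```python
# Hypothetical image-sentence matching network in the spirit of m-CNN.
import torch
import torch.nn as nn

class MatchingCNN(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, img_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # compose neighbouring words into phrase-level fragments
        self.word_conv = nn.Sequential(
            nn.Conv1d(embed_dim, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1))
        # fuse sentence fragments with the image feature into a match score
        self.score = nn.Sequential(
            nn.Linear(256 + img_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, image_feat, word_ids):
        # image_feat: (B, img_dim) from any image CNN; word_ids: (B, T)
        w = self.embed(word_ids).transpose(1, 2)        # (B, embed_dim, T)
        s = self.word_conv(w).squeeze(-1)               # (B, 256)
        return self.score(torch.cat([image_feat, s], dim=1))

# Toy forward pass
model = MatchingCNN()
score = model(torch.randn(2, 256), torch.randint(0, 10000, (2, 12)))
print(score.shape)  # torch.Size([2, 1])
```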

    Idealized models for FEA derived from generative modeling processes based on extrusion primitives

    Shape idealization transformations are very common when adapting a CAD component to FEA requirements. Here, an idealization approach is proposed that is based on the generative shape processes, i.e. extrusion processes, used to decompose an initial B-Rep object. The corresponding primitives form the basis of candidate sub-domains for idealization, and their connections, conveyed through the generative processes they belong to, bring robustness when setting up the appropriate connections between idealized sub-domains. Taking advantage of an existing construction tree, as available in CAD software, does not help much because it may be complicated to use for idealization purposes. Using generative processes attached to an object, which are no longer reduced to a single construction tree but to a graph containing all non-trivial construction trees, is more useful for the engineer when evaluating idealization variants. From this automated decomposition, each primitive is analyzed to decide whether it can be idealized or not. Subsequently, geometric interfaces between primitives are taken into account to determine more precisely the idealizable sub-domains and their contours when primitives are incrementally merged to recover the initial object.
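
    A minimal sketch of a "can this primitive be idealized?" test of the kind suggested above: an extrusion primitive is a candidate for a mid-surface (shell) or mid-line (beam) idealization when it is slender in one or two directions. The slenderness threshold of 10 is a common FEA rule of thumb, not the paper's criterion, and the data structure is an assumption.

```python
# Hypothetical slenderness-based idealization test for an extrusion primitive.
from dataclasses import dataclass

@dataclass
class ExtrusionPrimitive:
    section_width: float    # in-plane extents of the extruded contour
    section_height: float
    extrusion_length: float

def idealization_candidate(p: ExtrusionPrimitive, slenderness: float = 10.0) -> str:
    thickness, mid, largest = sorted(
        [p.section_width, p.section_height, p.extrusion_length])
    if largest / thickness >= slenderness and mid / thickness >= slenderness:
        return "shell (mid-surface) candidate"   # thin in one direction
    if largest / mid >= slenderness:
        return "beam (mid-line) candidate"       # slender in two directions
    return "keep as solid"

print(idealization_candidate(ExtrusionPrimitive(100.0, 80.0, 2.0)))   # shell
print(idealization_candidate(ExtrusionPrimitive(5.0, 4.0, 120.0)))    # beam
print(idealization_candidate(ExtrusionPrimitive(30.0, 25.0, 20.0)))   # solid
```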

    Efficient and effective human action recognition in video through motion boundary description with a compact set of trajectories

    Human action recognition (HAR) is at the core of human-computer interaction and video scene understanding. However, achieving effective HAR in an unconstrained environment is still a challenging task. To that end, trajectory-based video representations are currently widely used. Despite the promising levels of effectiveness achieved by these approaches, problems regarding computational complexity and the presence of redundant trajectories still need to be addressed in a satisfactory way. In this paper, we propose a method for trajectory rejection, reducing the number of redundant trajectories without degrading the effectiveness of HAR. Furthermore, to realize efficient optical flow estimation prior to trajectory extraction, we integrate a method for dynamic frame skipping. Experiments with four publicly available human action datasets show that the proposed approach outperforms state-of-the-art HAR approaches in terms of effectiveness, while simultaneously mitigating the computational complexity
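
    An illustrative sketch of trajectory rejection in the spirit of the approach above: trajectories whose accumulated motion is too small carry little motion-boundary information and can be dropped before descriptor computation. The rejection criterion and both thresholds are assumptions, not the authors' exact method.

```python
# Drop nearly static (redundant) trajectories before feature extraction.
import numpy as np

def reject_redundant(trajectories, min_total_motion=3.0, min_max_step=0.5):
    """trajectories: iterable of (L, 2) arrays of tracked (x, y) positions.
    Keeps a trajectory only if it moves enough overall and in at least one
    frame-to-frame step (both thresholds in pixels)."""
    kept = []
    for traj in trajectories:
        steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
        if steps.sum() >= min_total_motion and steps.max() >= min_max_step:
            kept.append(traj)
    return kept

# Toy example: one nearly static trajectory, one moving trajectory
static = np.tile([[10.0, 10.0]], (15, 1)) + 0.01 * np.random.randn(15, 2)
moving = np.cumsum(np.ones((15, 2)), axis=0)    # moves ~1.4 px per frame
print(len(reject_redundant([static, moving])))  # -> 1
```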