
    Integrated Reverse Modeling Techniques for the Survey of Complex Shapes in Industrial Design

    This chapter deals with three-dimensional survey methods applied to the geometric acquisition of Industrial Design products. 3D acquisition techniques rely on well-known procedures that are nowadays applied in many fields, from Mechanics to Aerospace, from Robotics to Cultural Heritage. In recent years, the impressive evolution of hardware and software technology has brought 3D instrumentation to a high level of maturity. At the same time, many experiments and research efforts have aimed to establish a well-structured reverse modeling pipeline, supporting the process from the real object to its digital model. Over the last decade, 3D acquisition and modeling techniques have been applied to Industrial Design production, but their role in each process step is not yet systematically codified, owing to bottlenecks in the design process and in product knowledge; these are discussed in the chapter in relation to the state of the art of current technology. In addition, the geometric complexity of a specific product often exposes the limitations of a single 3D laser scanning technology, which cannot achieve good results for all typologies of Industrial Design products. These factors are critically framed, leading to a definition of object complexity that guides the choice of survey methods and technologies for each condition. The actual limits of single 3D acquisition systems are identified and compared with those of integrated ones (i.e., systems in which different complementary instruments are used together) applied in different fields, from Car Design to Product Restyling, from Nautical Analysis to Design in Cultural Heritage. The aim of the contribution is to demonstrate the intrinsic limits of applying a single 3D instrument and the necessity of multi-resolution systems or sensor fusion to solve most problems in the 3D acquisition of complex shapes.

    Scale Stain: Multi-Resolution Feature Enhancement in Pathology Visualization

    Digital whole-slide images of pathological tissue samples have recently become feasible for use within routine diagnostic practice. These gigapixel-sized images enable pathologists to perform reviews using computer workstations instead of microscopes. Existing workstations visualize scanned images by providing a zoomable image space that reproduces the capabilities of the microscope. This paper presents a novel visualization approach that enables filtering of the scale-space according to color preference. The visualization method reveals diagnostically important patterns that are otherwise not visible. The paper demonstrates how this approach has been implemented into a fully functional prototype that lets the user navigate the visualization parameter space in real time. The prototype was evaluated for two common clinical tasks with eight pathologists in a within-subjects study. The data reveal that task efficiency increased by 15% using the prototype, with maintained accuracy. By analyzing behavioral strategies, it was possible to conclude that the efficiency gain was caused by a reduction of the panning needed to perform a systematic search of the images. The prototype system was well received by the pathologists, who did not detect any risks that would hinder use in clinical routine.

    Turbidity weakens selection for assortment in body size in groups

    Prey animals commonly associate with similar-looking individuals to reduce predation risk, via a reduction in predator targeting accuracy (the confusion effect) and preferential targeting of distinct individuals (the oddity effect). These effects are mediated by body size, as predators often preferentially select large-bodied individuals, which are therefore at an increased risk within a group. The selection pressure to avoid oddity by associating with similar-sized group mates is stronger for large individuals than for small ones. This selection depends on the ability of both predators and prey to accurately assess body size and respond accordingly. In aquatic systems, turbidity degrades the visual environment and negatively impacts the ability of predators to detect (and consume) prey. We assessed the effect of algal turbidity on predator–prey interactions in the context of the oddity effect from the perspective of both predator and prey. From a predator's perspective, we find that 9-spined sticklebacks preferentially target larger Daphnia in mixed swarms in clear water, but not in turbid water, although the difference in attack rates is not statistically significant. When making shoaling decisions, large sticklebacks preferentially associate with size-matched individuals in clear water but not in turbid water, whereas small individuals showed no social preference in either condition. We suggest that a reduced ability or motivation to discriminate between prey in turbid water relaxes the predation pressure on larger prey individuals, allowing greater flexibility in shoaling decisions. Thus, turbidity may play a significant role in predator–prey interactions by relaxing size-based selection.

    Bottleneck Management through Strategic Sequencing in Smart Manufacturing Systems

    Nowadays, industries put a significant emphasis on finding the optimum order for carrying out jobs in sequence. This is a crucial element in determining net productivity. Depending on the demand criterion, all production systems, including flexible manufacturing systems, follow a predefined sequence of job-based machine operations. The complexity of the problem increases with the number of machines and jobs to sequence, demanding the use of an appropriate sequencing technique. The major contribution of this work is to modify an existing algorithm for a very unusual machine setup and find the optimal sequence that minimizes the makespan. This custom machine setup completes all tasks while maintaining precedence and satisfying all other constraints. This thesis concentrates on identifying the most effective sequencing technique, validated in both a lab environment and a simulated environment. It illustrates key methods for addressing a circular non-permutation flow shop sequencing problem with additional constraints. Additionally, comparisons among various heuristic algorithms are presented based on different sequencing criteria. The optimum sequence is provided as input to a real-life machine setup and a simulated environment to select the best-performing algorithm, which is the basic goal of this research. To achieve this goal, a Python program was first written to find an optimum sequence. The results show that the makespan increases with the number of jobs, but the additional pallet constraint shows that adding more pallets helps to reduce the makespan for both flow shops and job shops. Although the sequences obtained from the two algorithms differ, for flow shops the makespan remains the same in both cases, whereas in the job shop scenario the Nawaz, Enscore and Ham (NEH) algorithm always performs better than the Campbell, Dudek and Smith (CDS) algorithm. For job shops with different job combinations, the makespan decreases most when the maximum percentage of easy-category jobs is combined with equal percentages of medium- and complex-category jobs.
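The NEH heuristic compared in this work can be sketched in a few lines. This is a minimal illustration for a plain permutation flow shop only; the thesis's actual setup (circular non-permutation flow, pallet and precedence constraints) is not modeled here, and the job/machine data below are invented for demonstration.

```python
# Sketch of the NEH insertion heuristic for a permutation flow shop.
# Assumption: p[j][k] is the processing time of job j on machine k.

def makespan(seq, p):
    """Completion time of the last job on the last machine."""
    m = len(p[0])
    c = [0] * m  # c[k] = completion time of the latest job on machine k
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            # A job starts on machine k when both the machine is free
            # and the job has finished on machine k-1.
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    """Order jobs by decreasing total processing time, then insert each
    job at the position in the partial sequence minimizing makespan."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [jobs[0]]
    for j in jobs[1:]:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

# Hypothetical instance: 3 jobs on 2 machines
times = [[3, 4], [2, 5], [5, 1]]
order, cmax = neh(times)
```

The insertion step is what distinguishes NEH from constructive heuristics such as CDS, which build the sequence from Johnson's rule on machine groupings rather than by trial insertion.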

    Machine Learning and Deep Learning for the Built Heritage Analysis: Laser Scanning and UAV-Based Surveying Applications on a Complex Spatial Grid Structure

    The reconstruction of 3D geometries starting from reality-based data is challenging and time-consuming due to the difficulties involved in modeling existing structures and the complex nature of built heritage. This paper presents a methodological approach for the automated segmentation and classification of surveying outputs to improve the interpretation and building information modeling from laser scanning and photogrammetric data. The research focused on the surveying of reticular, space grid structures of the late 19th to 21st centuries, as part of our architectural heritage, which may require monitoring and maintenance activities, and relied on artificial intelligence (machine learning and deep learning) for: (i) the classification of 3D architectural components at multiple levels of detail and (ii) automated masking in standard photogrammetric processing. Focusing on the case study of the steel grid structure named La Vela in Bologna, the work raises many critical issues in space grid structures in terms of data accuracy, geometric and spatial complexity, semantic classification, and component recognition.

    Digitization of industrial quality control procedures applied to visual and geometrical inspections

    Double-degree Master's programme with UTFPR - Universidade Tecnológica Federal do Paraná. Industrial quality control procedures are usually dependent on gauge inspection tools, which are used to inspect conformity with visual and geometrical tolerances. Operators are guided during an inspection by paper tutorials that assist them in performing their tasks and registering the result of the performed analysis. This traditional method of registering information may be misleading, lowering the effectiveness of quality control by providing inaccurate and error-prone inspection results. This work implements a system that uses emergent technologies (e.g., Human-Machine Interfaces, Virtual Reality, Distributed Systems, Cloud Computing, and Internet of Things (IoT)) to propose a cost-effective solution that supports operators and quality control managers in the realization and data collection of gauge inspection control procedures. The final system was deployed in an industrial production plant, with the delivered results showing its efficiency and robustness and drawing highly positive feedback from the operators and managers. The software may offer a quicker and more efficient execution of analysis tasks, significantly decreasing the setup time required to change the inspected product reference.

    Aerial Field Robotics

    Aerial field robotics research represents the domain of study that aims to equip unmanned aerial vehicles - and, as it pertains to this chapter, specifically Micro Aerial Vehicles (MAVs) - with the ability to operate in real-life environments that present challenges to safe navigation. We present the key elements of autonomy for MAVs that are resilient to collisions and sensing degradation while operating under constrained computational resources. We overview aspects of the state of the art, outline bottlenecks to resilient navigation autonomy, and overview the field-readiness of MAVs. We conclude with notable contributions and discuss considerations for future research that are essential for resilience in aerial robotics. Comment: Accepted in the Encyclopedia of Robotics, Springer

    The development and validation of the Virtual Tissue Matrix, a software application that facilitates the review of tissue microarrays on line

    BACKGROUND: The Tissue Microarray (TMA) facilitates high-throughput analysis of hundreds of tissue specimens simultaneously. However, bottlenecks in the storage and manipulation of the data generated from TMA reviews have become apparent. A number of software applications have been developed to assist in image and data management; however, no solution currently facilitates the easy online review, scoring and subsequent storage of images and data associated with TMA experimentation. RESULTS: This paper describes the design, development and validation of the Virtual Tissue Matrix (VTM). Through an intuitive HTML-driven user interface, the VTM provides digital/virtual slide based images of each TMA core and a means to record observations on each TMA spot. Data generated from a TMA review are stored in an associated relational database, which facilitates the use of flexible scoring forms. The system allows multiple users to record their interpretation of each TMA spot for any parameters assessed. Images generated for the VTM were captured using a standard background lighting intensity, and corrective algorithms were applied to each image to eliminate any background lighting hue inconsistencies or vignetting. Validation of the VTM involved examination of inter- and intra-observer variability between microscope and digital TMA reviews. Six bladder TMAs were immunohistochemically stained for E-Cadherin, β-Catenin and PhosphoMet and were assessed by two reviewers for the amount of core and tumour present and the amount and intensity of membrane, cytoplasmic and nuclear staining. CONCLUSION: Results show that digital VTM images are representative of the original tissue viewed with a microscope. There were equivalent levels of inter- and intra-observer agreement for five out of the eight parameters assessed. Results also suggest that digital reviews may correct potential problems experienced when reviewing TMAs using a microscope, for example, removal of background lighting variance and tint, and potential disorientation of the reviewer, which may have resulted in the discrepancies evident in the remaining three parameters.