228 research outputs found

    A state-of-the-art study of cloud manufacturing

    Get PDF
    The manufacturing field is undergoing a considerable shift towards shared industrial standards. This is driving the adaptable use of various globally shared resources by combining advanced manufacturing models and emerging technologies such as the Internet of Things, cloud computing, service-oriented technologies, big data and, most recently, cloud manufacturing. Cloud manufacturing is an integrated paradigm that enables manufacturers to share resources, contribute manufacturing services, and support interoperable cooperation. Economically significant resources, such as manufacturing software tools, expertise, knowledge, manufacturing capacity, and material, thereby become available to prospective users on a global basis. Cloud manufacturing, a key new technology for the manufacturing model, aims to achieve full participation and the free, easy utilization of various resources and manufacturing capabilities in the form of manufacturing services. This paper contributes an overview of cloud manufacturing and analyzes the principal benefits and difficulties of achieving it.

    Optimal control of wave energy converters

    Get PDF
    Wave Energy Converters (WECs) are devices designed to absorb energy from ocean waves. The particular type of Wave Energy Converter (WEC) considered in this thesis is an oscillating body; energy conversion is carried out by means of a structure immersed in water which oscillates under the forces exerted by waves. This thesis addresses the control of oscillating body WECs, where the objective of the control system is to find the motion of the devices that maximises the energy absorption. In particular, this thesis presents the formulation of the optimal control problem for WECs in the framework of direct transcription methods, known as spectral and pseudospectral optimal control. Direct transcription methods transform continuous time optimal control problems into Non Linear Programming (NLP) problems, for which the literature (and the market) offers a large number of standard algorithms (and software packages). It is shown, in this thesis, that direct transcription makes it possible to formulate complex control problems in which realistic scenarios can be taken into account, such as physical limitations and nonlinearities in the behaviour of the devices. Additionally, by means of spectral and pseudospectral methods, it is possible to find an approximation of the optimal solution directly from sampled frequency and impulse response models of the radiation forces, obviating the need for finite order approximate models. By implementing a spectral method, convexity of the NLP problem associated with the optimal control problem for a single body WEC described by a linear model is demonstrated analytically. The solution to a nonlinear optimal control problem is approximated by means of pseudospectral optimal control. In the nonlinear case, simulation results show a significant difference in the optimal behaviour of the device, both in the motion and in the energy absorption, when the quadratic term describing the viscous forces is dominant, compared to the linear case. This thesis also considers the comparison of two control strategies for arrays of WECs. A Global Control strategy computes the optimal motion by taking into account the complete model of the array, and it provides the global optimum for the absorbed energy. In contrast, an Independent Control strategy implements a control system on each device which is independent of all the other devices. The final part of the thesis illustrates an approach for the study of the effects of constraints on the total absorbed energy. The procedure allows the feasibility of the constrained energy maximisation problem to be studied, and it provides an intuitive framework for the design of WECs in relation to the power take-off operating envelope, thanks to the geometrical interpretation of the functions describing both the total absorbed energy and the constraints.
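
    As an illustration of the direct transcription idea described above, the sketch below discretises the energy-maximisation problem for a single heaving body into an NLP and hands it to an off-the-shelf solver. It uses simple trapezoidal collocation rather than the spectral/pseudospectral bases of the thesis, and the mass-spring-damper model, parameter values, excitation force, and PTO force limit are all illustrative assumptions, not values from the work.

    ```python
    # A minimal sketch of direct transcription for WEC energy maximisation,
    # using trapezoidal collocation; all constants below are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    m, b, k = 1.0e5, 5.0e4, 2.0e5      # mass [kg], damping, hydrostatic stiffness
    T, N = 10.0, 50                    # horizon [s], number of collocation intervals
    dt = T / N
    t = np.linspace(0.0, T, N + 1)
    f_ex = 1.0e5 * np.sin(2.0 * np.pi * t / 8.0)   # assumed wave excitation force
    u_max = 8.0e4                      # PTO force limit (a physical constraint)

    def unpack(z):
        # decision vector: position x, velocity v and PTO force u at each node
        n = N + 1
        return z[:n], z[n:2*n], z[2*n:]

    def neg_energy(z):
        _, v, u = unpack(z)
        # absorbed energy is minus the integral of u*v dt (the PTO opposes
        # the motion), so the NLP minimises the integral of u*v dt
        return np.trapz(u * v, t)

    def defects(z):
        # trapezoidal collocation of  m*dv/dt + b*v + k*x = f_ex + u,  dx/dt = v
        x, v, u = unpack(z)
        a = (f_ex + u - b * v - k * x) / m
        dx = x[1:] - x[:-1] - 0.5 * dt * (v[1:] + v[:-1])
        dv = v[1:] - v[:-1] - 0.5 * dt * (a[1:] + a[:-1])
        return np.concatenate([dx, dv])

    z0 = np.zeros(3 * (N + 1))
    bounds = [(None, None)] * (2 * (N + 1)) + [(-u_max, u_max)] * (N + 1)
    res = minimize(neg_energy, z0, method='SLSQP', bounds=bounds,
                   constraints={'type': 'eq', 'fun': defects})
    print('absorbed energy [J]:', -res.fun)
    ```

    Note how the force limit enters the NLP directly as bounds on the decision variables; this direct handling of physical constraints is one of the practical advantages of direct transcription noted above.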

    Efficient and Accurate Segmentation of Defects in Industrial CT Scans

    Get PDF
    Industrial computed tomography (CT) is an essential tool for the non-destructive inspection of cast light-metal or plastic parts. Comprehensive testing not only helps to ensure the stability and durability of a part; it also allows the rejection rate to be reduced by supporting the optimization of the casting process, and material (and weight) to be saved by producing equivalent but more delicate structures. With a CT scan it is theoretically possible to locate any defect in the part under examination and to determine its exact shape, which in turn helps to draw conclusions about its harmfulness. However, most of the time the data quality is not good enough to allow segmenting the defects with simple filter-based methods which directly operate on the gray-values—especially when the inspection is expanded to the entire production. In such in-line inspection scenarios the tight cycle times further limit the available time for the acquisition of the CT scan, which renders the scans noisy and prone to various artifacts. In recent years, dramatic advances in deep learning (and convolutional neural networks in particular) have made even the reliable detection of small objects in cluttered scenes possible. These methods are a promising approach to quickly yield a reliable and accurate defect segmentation even in unfavorable CT scans. The huge drawback: a lot of precisely labeled training data is required, which is utterly challenging to obtain—particularly in the case of the detection of tiny defects in huge, highly artifact-afflicted, three-dimensional voxel data sets. Hence, a significant part of this work deals with the acquisition of precisely labeled training data. Firstly, we consider facilitating the manual labeling process: our experts annotate on high-quality CT scans with a high spatial resolution and a high contrast resolution, and we then transfer these labels to an aligned ``normal'' CT scan of the same part, which holds all the challenging aspects we expect in production use. Nonetheless, due to the indecisiveness of the labeling experts about what to annotate as defective, the labels remain fuzzy. Thus, we additionally explore different approaches to generate artificial training data, for which a precise ground truth can be computed. We find an accurate labeling to be crucial for a proper training. We evaluate (i) domain randomization, which simulates a super-set of reality with simple transformations, (ii) generative models, which are trained to produce samples of the real-world data distribution, and (iii) realistic simulations, which capture the essential aspects of real CT scans. Here, we develop a fully automated simulation pipeline which provides us with an arbitrary amount of precisely labeled training data. First, we procedurally generate virtual cast parts in which we place reasonable artificial casting defects. Then, we realistically simulate CT scans which include typical CT artifacts like scatter, noise, cupping, and ring artifacts. Finally, we compute a precise ground truth by determining, for each voxel, the overlap with the defect mesh. To determine whether our realistically simulated CT data is eligible to serve as training data for machine learning methods, we compare the prediction performance of learning-based and non-learning-based defect recognition algorithms on the simulated data and on real CT scans. In an extensive evaluation, we compare our novel deep learning method to a baseline of image processing and traditional machine learning algorithms.
    This evaluation shows how much defect detection benefits from learning-based approaches. In particular, we compare (i) a filter-based anomaly detection method which finds defect indications by subtracting the original CT data from a generated ``defect-free'' version, (ii) a pixel-classification method which, based on densely extracted hand-designed features, lets a random forest decide whether an image element is part of a defect or not, and (iii) a novel deep learning method which combines a U-Net-like encoder-decoder-pair of three-dimensional convolutions with an additional refinement step. The encoder-decoder-pair yields a high recall, which allows us to detect even very small defect instances. The refinement step yields a high precision by sorting out the false positive responses. We extensively evaluate these models on our realistically simulated CT scans as well as on real CT scans in terms of their probability of detection, which tells us at which probability a defect of a given size can be found in a CT scan of a given quality, and their intersection over union, which gives us information about how precise our segmentation mask is in general. While the learning-based methods clearly outperform the image processing method, the deep learning method in particular stands out for its inference speed and its prediction performance on challenging CT scans—as they, for example, occur in in-line scenarios. Finally, we further explore the possibilities and the limitations of the combination of our fully automated simulation pipeline and our deep learning model. With the deep learning method yielding reliable results for CT scans of low data quality, we examine by how much we can reduce the scan time while still maintaining proper segmentation results. Then, we examine the transferability of the promising results to CT scans of parts made of different materials and by different manufacturing techniques, including plastic injection molding, iron casting, additive manufacturing, and composed multi-material parts. Each of these tasks comes with its own challenges, such as an increased artifact level or different types of defects which are occasionally hard to detect even for the human eye. We tackle these challenges by employing our simulation pipeline to produce virtual counterparts that capture the tricky aspects and by fine-tuning the deep learning method on this additional training data. In this way, we can tailor our approach to specific tasks, achieving reliable and robust segmentation results even for challenging data. Lastly, we examine whether the deep learning method, based on our realistically simulated training data, can be trained to distinguish between different types of defects—the reason why we require a precise segmentation in the first place—and whether the deep learning method can detect out-of-distribution data where its predictions become less trustworthy, i.e. provide an uncertainty estimation.
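
    To make the architecture sketched above concrete, here is a minimal PyTorch version of a U-Net-like encoder-decoder-pair of three-dimensional convolutions. It is a sketch only: the channel counts and depth are illustrative assumptions, and the additional refinement step described in the abstract is omitted.

    ```python
    # A minimal 3D U-Net-style encoder-decoder sketch (one down/up level);
    # channel counts, depth and the omitted refinement step are assumptions.
    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        # two 3x3x3 convolutions with ReLU, the basic U-Net building block
        return nn.Sequential(
            nn.Conv3d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet3D(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc1 = conv_block(1, 16)
            self.enc2 = conv_block(16, 32)
            self.pool = nn.MaxPool3d(2)
            self.up = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
            self.dec1 = conv_block(32, 16)               # 16 skip + 16 upsampled
            self.head = nn.Conv3d(16, 1, kernel_size=1)  # per-voxel defect logit

        def forward(self, x):
            s1 = self.enc1(x)                  # full-resolution features (skip)
            s2 = self.enc2(self.pool(s1))      # half-resolution features
            d1 = self.dec1(torch.cat([self.up(s2), s1], dim=1))
            return self.head(d1)               # high-recall segmentation logits

    # usage on a toy CT volume: batch x channel x depth x height x width
    logits = TinyUNet3D()(torch.randn(1, 1, 32, 32, 32))
    print(logits.shape)  # torch.Size([1, 1, 32, 32, 32])
    ```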

    Extents and limits of radioscopic detection of nuclear materials in cargo containers with two megavoltage energy barriers

    Full text link
    Megavoltage X-ray technology is used for detecting nuclear materials in cargo containers. An interlaced response is obtained by switching rapidly between 6 MeV and 9 MeV beams. It is known that the ratio of the penetration levels of cargo contents taken at the two energies provides information about the atomic numbers of the materials, and thus can also indicate the threat group. However, the identification is not straightforward if combinations of materials are present, which can lead to misdetections. It is imperative to know the extents and limits of the currently employed technology, and how to carry out the inspection in real time by balancing human involvement and computer assistance. We have performed experiments with a Linatron K9, analyzed the data, and drawn conclusions about an efficient system configuration. The following are addressed: (a) visualizing the contents to produce an image suitable for visual analysis, and (b) prompting the customs personnel about the presence and the location of suspicious objects.
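
    As a hedged illustration of the dual-energy principle described above, the sketch below computes the ratio of log-attenuations at the two beam energies and thresholds it to flag candidate high-Z regions. The function names, the threshold value, and the direction of the inequality are illustrative assumptions, not the algorithm from the Linatron K9 experiments.

    ```python
    # Illustrative sketch of dual-energy material discrimination: for high-Z
    # materials, pair production raises the attenuation of the 9 MeV beam
    # relative to the 6 MeV beam, shifting the ratio of log-attenuations.
    # Threshold, names and the inequality direction are assumptions.
    import numpy as np

    def dual_energy_ratio(i6, i9, i0_6=1.0, i0_9=1.0):
        # log-attenuation at each energy; i0_* are the unattenuated intensities
        mu6 = -np.log(np.clip(i6 / i0_6, 1e-6, 1.0))
        mu9 = -np.log(np.clip(i9 / i0_9, 1e-6, 1.0))
        return mu6 / np.maximum(mu9, 1e-6)

    def flag_high_z(ratio, threshold=1.05):
        # a lower 6/9 ratio suggests relatively stronger high-energy
        # attenuation, i.e. a higher effective atomic number
        return ratio < threshold

    # toy usage on synthetic pixel intensities (illustrative numbers only)
    ratio = dual_energy_ratio(np.array([0.20, 0.20]), np.array([0.25, 0.15]))
    print(ratio, flag_high_z(ratio))   # second pixel is flagged as high-Z
    ```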

    Electrical and Computer Engineering Annual Report 2015

    Get PDF
    Faculty Directory Faculty Awards Google ATAP—Michigan Tech MURA The Sound Beneath the Surface Advancing Microgrid Deployment Clearing the Air Power in Their Hands Faculty Publications Graduate Student Highlights Staff Profile—Chito Kendrick New ECE Concentrations SLAM Systems Senior Design and Enterprise External Advisory Committee Contracts and Grants Departmental Statistics Lind Memorial Endowed Fellowship

    Computer Aided Optimal Robotic Assembly Sequence Generation

    Get PDF
    Robots are widely used for assembly operations across manufacturing industries to attain high productivity through automation. An appropriate robotic assembly sequence further minimizes the total production lead time and overall cost by minimizing the number of assembly direction changes, gripper changes, and the assembly energy; the selection of a valid, optimal robotic assembly sequence is therefore essential to achieve an economical manufacturing process. An optimal assembly sequence must comply with various assembly requirements in order to ensure that the sequence of assembly operations is functionally feasible in the physical environment. In order to test an assembly sequence for its practical feasibility, the necessary assembly information must be collected accurately from the product. Obtaining such assembly information manually from product drawings or Computer Aided Design (CAD) models involves considerable complexity and requires high-level skills to ensure correctness. While retrieving such information from products with a small number of parts is simple and quick, for products composed of a large number of parts it is very complicated and time consuming. Beyond retrieving the assembly information, using it to validate an assembly sequence further raises the complexity of the Assembly Sequence Generation (ASG) problem. To perform optimal feasible assembly sequence generation efficiently, an effective computer-aided automated method is developed and executed in two phases. The first phase of the research focuses on representing the assembly information in a streamlined manner, considering all possible states of assembly configurations for ease of computerization, and on developing efficient methods to extract the assembly information automatically from the CAD environment through Computer Aided Automation (CAA). These methods are based on assembly contact analysis, part transformations, and the laws of equilibrium and balancing of rigid bodies. From the existing ASG methods, it is observed that most researchers ignored some of the assembly information, such as assembly stability data and mechanical feasibility data, due to the complexity of retrieving it from the CAD environment....
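
    The validation and optimisation step described above can be pictured with a small sketch: given assembly information already extracted from CAD (precedence relations, assembly directions, grippers per part), enumerate candidate sequences, keep the feasible ones, and pick the sequence minimising direction and gripper changes. All part data below is illustrative; a real ASG system would use the contact-analysis and stability data discussed in the abstract and a search strategy that scales beyond brute-force enumeration.

    ```python
    # A minimal sketch of feasibility-checked assembly sequence selection;
    # the parts, precedence relations, directions and grippers are invented.
    from itertools import permutations

    parts = ['base', 'shaft', 'gear', 'cover']
    precedence = {('base', 'shaft'), ('shaft', 'gear'), ('base', 'cover')}  # a before b
    direction = {'base': '+z', 'shaft': '+z', 'gear': '+x', 'cover': '+z'}
    gripper = {'base': 'G1', 'shaft': 'G1', 'gear': 'G2', 'cover': 'G1'}

    def feasible(seq):
        # a sequence is feasible if every precedence relation is respected
        order = {p: i for i, p in enumerate(seq)}
        return all(order[a] < order[b] for a, b in precedence)

    def cost(seq):
        # count direction and gripper changes, the criteria named in the abstract
        d_changes = sum(direction[a] != direction[b] for a, b in zip(seq, seq[1:]))
        g_changes = sum(gripper[a] != gripper[b] for a, b in zip(seq, seq[1:]))
        return d_changes + g_changes

    best = min((s for s in permutations(parts) if feasible(s)), key=cost)
    print(best, cost(best))
    ```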

    Prognostic-based Life Extension Methodology with Application to Power Generation Systems

    Get PDF
    Practicable life extension of engineering systems would be a remarkable application of prognostics. This research proposes a framework for prognostic-based life extension and investigates the use of prognostic data to mobilize the potential residual life. The obstacles to performing life extension include lack of knowledge, lack of tools, lack of data, and lack of time. This research primarily considers using acoustic emission (AE) technology for quick-response diagnostics. Specifically, an important feature of the AE data was statistically modeled to provide a quick, robust, and intuitive diagnostic capability. The proposed model successfully detected the out-of-control situation when data from a faulty bearing were applied. This research also highlights the importance of self-healing materials. One main component of the proposed life extension framework is the trend analysis module, which analyzes the pattern of the time-ordered degradation measures. The trend analysis is helpful not only for early fault detection but also for tracking improvements in the degradation rate. This research considered trend analysis methods for prognostic parameters, degradation waveforms, and multivariate data. In this respect, graphical methods were found appropriate for trend detection in signal features. The Hilbert-Huang Transform was applied to analyze trends in waveforms. For multivariate data, it was found that PCA is able to indicate trends in the data if accompanied by proper data processing. In addition, two algorithms are introduced to address non-monotonic trends; both appear to have the potential to treat the non-monotonicity in degradation data. Although considerable research has been devoted to developing prognostic algorithms, rather less attention has been paid to post-prognostic issues such as maintenance decision making. A multi-objective optimization model is therefore presented for a power generation unit. This model demonstrates the ability of prognostic models to balance power generation against life extension. In this research, the competing objective functions were defined as maximizing profit and maximizing service life, and the decision variables include the shaft speed and the duration of maintenance actions. The results of the optimization model showed clearly that maximizing the service life requires a lower shaft speed and a longer maintenance time.
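
    The profit-versus-life trade-off described in the last paragraph can be sketched as a weighted-sum scalarisation of the two objectives over the two decision variables. The functional forms below, relating shaft speed and maintenance duration to profit and remaining life, are illustrative assumptions only; they merely reproduce the qualitative finding that weighting service life more heavily drives the optimum toward a lower shaft speed and a longer maintenance time.

    ```python
    # A hedged sketch of the multi-objective maintenance decision problem:
    # maximise a weighted sum of profit and service life over shaft speed and
    # maintenance duration. All functional forms and constants are invented.
    from scipy.optimize import minimize

    def objectives(z):
        speed, maint = z                      # shaft speed [rpm], maintenance [h]
        profit = 10.0 * speed - 50.0 * maint  # revenue grows with speed; downtime costs
        life = 1.0e7 / speed + 20.0 * maint   # wear grows with speed; maintenance helps
        return profit, life

    def neg_weighted(z, w):
        p, l = objectives(z)
        return -(w * p + (1.0 - w) * l)       # minimise the negated weighted sum

    for w in (0.8, 0.2):                      # weight on profit
        res = minimize(neg_weighted, x0=[1000.0, 10.0], args=(w,),
                       bounds=[(500.0, 3000.0), (1.0, 48.0)])
        print(f"w={w}: speed={res.x[0]:.0f} rpm, maintenance={res.x[1]:.0f} h")
    # w=0.8 favours profit (high speed, short maintenance); w=0.2 favours
    # service life (low speed, long maintenance), matching the abstract.
    ```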

    Usage of daylight in the built environment: impact on health

    Get PDF
    Abstract only