    A feature-based reverse engineering system using artificial neural networks

    Reverse Engineering (RE) is the process of reconstructing CAD models from scanned data of a physical part acquired using 3D scanners. RE has attracted a great deal of research interest over the last decade. However, a review of the literature reveals that most research work has focused on the creation of free-form surfaces from point cloud data. Representing geometry in terms of surface patches is adequate for capturing positional information, but cannot capture any of the higher-level structure of the part. Reconstructing solid models is important because the resulting models can be imported directly into commercial solid modellers for manufacturing activities such as process planning, integral property computation, assembly analysis, and other applications. This research presents a novel methodology for extracting geometric features directly from a data set of 3D scanned points using artificial neural networks (ANNs). To design and develop a generic feature-based RE system for prismatic parts, five main tasks were investigated: (1) point data processing algorithms; (2) edge detection strategies; (3) a feature recogniser using ANNs; (4) a feature extraction module; and (5) a CAD model exchanger into other CAD/CAM systems via IGES. A key feature of this research is the incorporation of ANNs in feature recognition. The ANN approach has enabled the development of a flexible feature-based RE methodology that can be trained to deal with new features. ANNs require parallel input patterns; in this research, four geometric attributes extracted from a point set are input to the ANN module for feature recognition: chain codes, convex/concave, circular/rectangular, and open/closed attributes. Recognising each feature requires determining these attributes, and new, robust algorithms were developed for doing so for each of the features. The feature-based approach currently focuses on recognising 2.5D features such as block pocket, step, slot, hole, and boss, which are common and crucial in mechanical engineering products. The approach is validated using a set of industrial components, and the test results show that the feature recognition strategy is reliable.
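
    The abstract describes feeding four geometric attributes into a neural network that maps them to a 2.5D feature class. The following is a minimal sketch of that idea only, not the thesis implementation: the attribute encodings, training samples, and class labels are invented for illustration.

```python
# Minimal sketch (not the thesis system): a small neural network that maps the
# four geometric attributes described above to a 2.5D feature class.
# Attribute encodings, training data, and class labels are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each sample: [chain_code_signature, convexity, circularity, openness]
# convexity: 1 = convex, 0 = concave; circularity: 1 = circular, 0 = rectangular;
# openness: 1 = open, 0 = closed; chain_code_signature: a scalar summary (assumed).
X_train = np.array([
    [0.2, 0, 1, 0],   # closed circular concave loop     -> "hole"
    [0.7, 0, 0, 0],   # closed rectangular concave loop  -> "pocket"
    [0.5, 0, 0, 1],   # open rectangular concave profile -> "slot"
    [0.9, 1, 0, 0],   # closed rectangular convex profile -> "boss"
])
y_train = ["hole", "pocket", "slot", "boss"]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

print(clf.predict([[0.21, 0, 1, 0]]))  # expected: ['hole']
```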

    Development of a manufacturing feature-based design system

    Traditional CAD systems are based on a serial approach to the product development cycle: the design process is not integrated with other activities and thus cannot provide information for subsequent phases of product development. To eliminate this problem, many modern CAD systems allow designs to be composed from building blocks of a higher level of abstraction called features. Although the features used in current systems tend to be named after manufacturing processes, they do not, in reality, provide valuable manufacturing data. Apart from the obvious disadvantage that process engineers need to re-evaluate the design and capture the intent of the designer, this approach also prevents early detection of possible manufacturing problems. This research attempts to bring the design and manufacturing phases together by implementing manufacturing features. A design is composed entirely in a bottom-up manner using manufacturable entities, in the same way as they would be produced during the manufacturing phase. Each feature consists of parameterised geometry, manufacturing information (including machine tool, cutting tools, cutting conditions, fixtures, and relative cost information), design limitations, functionality rules, and design-for-manufacture rules. The designer selects features from a hierarchical feature library. Upon insertion of a feature, the system ensures that no functionality or manufacturing rules are violated. If a feature is modified, the system validates it by making sure that it remains consistent with its original functionality, and design-for-manufacture rules are re-applied. The system also allows designs that were not composed using features to be analysed from a manufacturing point of view. To reduce the complexity of the system, design functionality and design-for-manufacture rules are organised into a hierarchy and linked to the appropriate entries of the feature hierarchy. The system makes it possible to avoid costly designs by eliminating possible manufacturing problems early in the product development cycle; it also makes computer-aided process planning feasible. The system is developed as an extension of a commercially available CAD/CAM system (Pro/Engineer) and at its current stage deals only with machining features. However, using the same principles, it can be expanded to cover other kinds of manufacturing processes.
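
    As an illustration of the general idea of features that carry parameterised geometry, manufacturing information, and attached rules validated on insertion or edit, here is a hedged data-structure sketch. The class name, rule signatures, and the example depth-to-diameter limit are assumptions; the actual system extends Pro/Engineer and is not reproduced here.

```python
# Illustrative sketch only: one way to represent a manufacturing feature with
# parameterised geometry and attached rules, validated on insertion or edit.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Rule = Callable[[Dict[str, float]], bool]

@dataclass
class ManufacturingFeature:
    name: str
    parameters: Dict[str, float]            # parameterised geometry
    manufacturing_info: Dict[str, str]      # machine tool, cutting tool, ...
    functionality_rules: List[Rule] = field(default_factory=list)
    dfm_rules: List[Rule] = field(default_factory=list)

    def validate(self) -> List[str]:
        """Return names of violated rules (empty list if the feature is valid)."""
        violated = []
        for rule in self.functionality_rules + self.dfm_rules:
            if not rule(self.parameters):
                violated.append(rule.__name__)
        return violated

# Hypothetical design-for-manufacture rule: a drilled hole should not be too slender.
def max_depth_to_diameter(params: Dict[str, float]) -> bool:
    return params["depth"] / params["diameter"] <= 5.0

hole = ManufacturingFeature(
    name="hole",
    parameters={"diameter": 6.0, "depth": 40.0},
    manufacturing_info={"process": "drilling", "machine": "3-axis mill"},
    dfm_rules=[max_depth_to_diameter],
)
print(hole.validate())  # ['max_depth_to_diameter'], since 40/6 ~ 6.7 > 5
```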

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High-Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High-Performance Computing with Modelling and Simulation is therefore required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Applying machine learning techniques to an imperfect information game

    The game of poker presents a challenge to Artificial Intelligence researchers because it is a complex asymmetric information game. In such games, a player can improve his performance by inferring the private information held by the other players from their prior actions. A novel connectionist structure was designed to play a version of poker (multi-player limit Hold'em). This allows simple reinforcement learning techniques to be used that had not previously been considered for multi-player Hold'em. A related hidden Markov model was designed to be fitted to records of poker play without using any private information. Belief vectors generated by this model provide a more convenient and flexible representation of an opponent's action history than alternative approaches. The structure was tested in two settings. Firstly, self-play simulation was used to generate an approximation to a Nash equilibrium strategy; a related, but slower, rollout strategy using Monte Carlo samples was used to evaluate its performance. Secondly, the structure was used to model, and hence exploit, a population of opponents within a relatively small number of games. When and how to adapt quickly to new opponents are open questions in poker AI research. An opponent model with a small number of discrete types is used to identify the largest differences in strategy between members of the population. A commercial software package (Poker Academy) was used to provide a population of sophisticated opponents to test against. A series of experiments was conducted to compare adaptive and static systems. All systems showed positive results, but surprisingly the adaptive systems did not show a significant improvement over similar static systems; the possible reasons for this result are discussed. This work formed the basis of a series of entries to the computer poker competition hosted at the annual conferences of the Association for the Advancement of Artificial Intelligence (AAAI). Its best rankings were 3rd in the 2006 6-player limit Hold'em competition and 2nd in the 2008 3-player limit Hold'em competition.
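
    The abstract describes maintaining belief vectors over opponent types from observed actions via a hidden Markov model. Below is a minimal sketch of that general idea only, not the thesis model: the opponent types and the transition/emission probabilities are made up.

```python
# Minimal sketch: maintain a belief vector over a small set of discrete opponent
# types by Bayesian (HMM forward) updating on observed public actions.
import numpy as np

types = ["tight", "loose"]
emission = np.array([
    [0.6, 0.3, 0.1],   # tight: P(fold), P(call), P(raise) -- assumed values
    [0.2, 0.4, 0.4],   # loose
])
transition = np.array([
    [0.95, 0.05],      # types are fairly persistent between hands
    [0.05, 0.95],
])
actions = {"fold": 0, "call": 1, "raise": 2}

belief = np.array([0.5, 0.5])                    # uniform prior over types
for a in ["raise", "raise", "call"]:             # observed public actions only
    belief = belief @ transition                 # predict step (HMM transition)
    belief = belief * emission[:, actions[a]]    # update step (HMM emission)
    belief = belief / belief.sum()               # renormalise to a distribution
    print(dict(zip(types, belief.round(3))))
```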

    Sustainable Agriculture and Advances of Remote Sensing (Volume 1)

    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase global food production, reduce biodiversity loss, and preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering that lead to sustainable agriculture practices. Earth observation data, together with in situ and proxy remote sensing data, are the main sources of information for monitoring and analysing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing the results, among other topics.

    Methods for monitoring the human circadian rhythm in free-living

    Our internal clock, the circadian clock, determines at which times we have our best cognitive abilities, when we are physically strongest, and when we are tired. Circadian clock phase is influenced primarily through exposure to light: a direct pathway from the eyes to the suprachiasmatic nucleus, where the circadian clock resides, synchronises the circadian clock to external light-dark cycles. In modern society, with the ability to work anywhere at any time and a full social agenda, many struggle to keep their internal and external clocks synchronised. Living against our circadian clock makes us less efficient and poses serious health risks, especially over long periods, e.g. in shift workers. Assessing circadian clock phase is a cumbersome and uncomfortable task. A common method, dim light melatonin onset testing, requires a series of eight saliva samples taken at hourly intervals while the subject stays in dim light from 5 hours before until 2 hours past their habitual bedtime. At the same time, sensor-rich smartphones have become widely available and wearable computing is on the rise. The hypothesis of this thesis is that smartphones and wearables can be used to record sensor data to monitor human circadian rhythms in free-living conditions. To test this hypothesis, we conducted research on specialised wearable hardware and smartphones to record relevant data, and developed algorithms to monitor circadian clock phase in free-living. We first introduce our smart eyeglasses concept, which can be personalised to the wearer's head and 3D-printed. Hardware was integrated into the eyewear to recognise typical activities of daily living (ADLs), and a light sensor integrated into the eyeglasses bridge was used to detect screen use. In addition to wearables, we also investigate whether sleep-wake patterns can be revealed from smartphone context information. We introduce novel methods to detect sleep opportunity that incorporate expert knowledge to filter and fuse classifier outputs. Furthermore, we estimate light exposure from smartphone sensor and weather information. We applied the Kronauer model to compare the phase shift resulting from head light measurements, wrist measurements, and smartphone estimations. We found it was possible to monitor circadian phase shift from light estimation based on smartphone sensor and weather information with a weekly error of 32±17 min, which outperformed wrist measurements in 11 out of 12 participants. Sleep could be detected from smartphone use with an onset error of 40±48 min and a wake error of 42±57 min. Screen use could be detected with the smart eyeglasses with a ROC AUC of 0.9 for ambient light intensities below 200 lux. Nine clusters of ADLs were distinguished using Gaussian mixture models with an average accuracy of 77%. In conclusion, a combination of the proposed smartphone and smart eyeglasses applications could support users in synchronising their circadian clock to external clocks, and thus in living a healthier lifestyle.
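
    The abstract mentions distinguishing nine ADL clusters with Gaussian mixture models. The following is a hedged sketch of that clustering step only; the synthetic data and feature names are invented, and the choice of nine components simply mirrors the abstract rather than reproducing the actual pipeline.

```python
# Sketch of GMM-based clustering of wearable sensor features into ADL clusters.
# Data and features are synthetic placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical feature vectors, e.g. [mean head motion, light level, time of day].
features = rng.normal(size=(500, 3))

gmm = GaussianMixture(n_components=9, covariance_type="full", random_state=0)
labels = gmm.fit_predict(features)

print(np.bincount(labels))   # number of samples assigned to each ADL cluster
```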

    Advances on Mechanics, Design Engineering and Manufacturing III

    This open access book gathers contributions presented at the International Joint Conference on Mechanics, Design Engineering and Advanced Manufacturing (JCM 2020), held as a web conference on June 2–4, 2020. It reports on cutting-edge topics in product design and manufacturing, such as industrial methods for integrated product and process design, innovative design, and computer-aided design. Further topics covered include virtual simulation and reverse engineering; additive manufacturing; product manufacturing; engineering methods in medicine and education; representation techniques; and nautical, aeronautical and aerospace design and modelling. The book is organized into four main parts, reflecting the focus and primary themes of the conference. The contributions presented here not only provide researchers, engineers and experts in a range of industrial engineering subfields with extensive information to support their daily work; they are also intended to stimulate new research directions, advanced applications of the methods discussed, and future interdisciplinary collaborations.

    Accounting for variance and hyperparameter optimization in machine learning benchmarks

    The recent revolution in machine learning has relied heavily on the use of standardized benchmarks. Providing clear target metrics and undeniable measures of improvement for learning algorithms, benchmarks are at the centre of the scientific methodology in machine learning. They do not, however, ensure the validity of results, so some scientific conclusions about advances in artificial intelligence based on flawed methodology may prove to be wrong. In this thesis we address this question by first raising the issue (Chapter 5), then studying it to find solutions and recommendations (Chapter 6), and finally building a tool to help improve the methodology of researchers (Chapter 7). In the first article, Chapter 5, we demonstrate the issue of reproducibility in stable and consensual benchmarks, implying that these issues are endemic to a large set of machine learning applications that are possibly less stable or less consensual. We highlight the important impact of stochasticity even in stable benchmarks such as image classification, and contend that solutions for reproducible benchmarks should account for this stochasticity. In the second article, Chapter 6, we study the different sources of variation that are typical in machine learning benchmarks, measure their effect on the methods used to compare algorithms, and provide recommendations based on our results. One important contribution of this work is a measure of the reliability of a cheaper but biased estimator of the average performance of algorithms. As explained in the article, an ideal estimator involves multiple rounds of hyperparameter optimization and is therefore too computationally expensive; most researchers must resort to the biased alternative, but it has been unknown until now how serious a degradation of the quality of estimation this leads to. Based on our results, we provide recommendations for comparing algorithms on benchmarks with limited computational budgets. First, as many sources of variation as possible should be randomized. Second, this randomization should include the partitioning of data into training, validation, and test sets, which turns out to be the most important source of variation. Third, statistical tests such as the variant of the Mann-Whitney U-test presented in our article should be used instead of ad hoc comparisons of averages, so that the uncertainty of performance estimation can be accounted for when comparing machine learning algorithms. In Chapter 7, we present a hyperparameter optimization framework developed with the main goal of encouraging best practices for hyperparameter optimization. The framework is designed to favour a simple and intuitive interface adapted to the workflow of machine learning researchers. It includes a new version control system for experiments to help researchers organize their rounds of experimentation and leverage prior results for more efficient hyperparameter optimization. Hyperparameter optimization plays an important role in benchmarking, with hyperparameters being a significant confounding factor; providing researchers with an instrument to properly control this confounding factor is complementary to the guidelines for accounting for sources of variation in Chapter 6. Our recommendations, together with our tool for hyperparameter optimization, provide a solid basis for a reliable and robust methodology in machine learning benchmarks.
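
    As a hedged illustration of the recommended comparison procedure, the sketch below compares two algorithms over many runs with randomised seeds and data splits using a Mann-Whitney U test rather than a bare difference of means. The scores are simulated, and the thesis uses its own variant of the test rather than the plain SciPy implementation shown here.

```python
# Compare two algorithms across randomised runs with a Mann-Whitney U test
# instead of a simple comparison of average scores.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
# Hypothetical test accuracies over 30 runs with randomised splits and seeds.
scores_a = rng.normal(loc=0.81, scale=0.02, size=30)
scores_b = rng.normal(loc=0.82, scale=0.02, size=30)

stat, p_value = mannwhitneyu(scores_a, scores_b, alternative="two-sided")
print(f"mean A = {scores_a.mean():.3f}, mean B = {scores_b.mean():.3f}")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
# Only claim B improves on A if the p-value falls below your chosen threshold.
```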

    An intelligent knowledge based cost modelling system for innovative product development

    This research aims to develop an intelligent knowledge-based system for product cost modelling and design for automation at an early design stage of the product development cycle, enabling designers and manufacturing planners to make more accurate estimates of product cost and, consequently, to respond more quickly to customers' expectations. The main objectives of the research are to: (1) develop a prototype system that assists an inexperienced designer in estimating the manufacturing cost of a product; (2) advise designers on how to eliminate design- and manufacturing-related conflicts that may arise during the product development process; (3) recommend the most economic assembly technique for the product so that it can be considered during the design process, and provide design improvement suggestions to simplify the assembly operations (i.e. give designers an opportunity to design for assembly (DFA)); (4) apply a fuzzy logic approach to certain cases; and (5) evaluate the developed prototype system through five case studies. The developed cost modelling system comprises a CAD solid modelling system, a material selection module, a knowledge-based system (KBS), a process optimisation module, a design for assembly module, a cost estimation technique module, and a user interface. In addition, the system encompasses two types of databases, permanent (static) and temporary (dynamic), organised into five separate groups: feature, material, machinability, machine, and mould databases. The system development process passed through four major steps: firstly, constructing the knowledge-based and process optimisation system; secondly, developing a design for assembly module; thirdly, integrating the KBS with both the material selection database and a CAD system; and finally, developing and implementing a fuzzy logic approach to generate reliable cost estimates and to handle the uncertainty in the cost estimation model that cannot be addressed by traditional analytical methods. Besides estimating the total cost of a product, the developed system can: (1) select a material as well as the machining processes, their sequence, and machining parameters based on a set of design and production parameters provided by the user; and (2) recommend the most economic assembly technique for a product and provide design improvement suggestions, in the early stages of the design process, based on a design feasibility technique. It provides recommendations when a design cannot be manufactured with the available manufacturing resources and capabilities. In addition, a feature-by-feature cost estimation report is generated by the system to highlight the features of high manufacturing cost. The system can be applied without detailed design information, so it can be used at an early design stage, and consequently costly redesign and longer lead times can be avoided. One of the tangible advantages of this system is that it warns users of features that are costly and difficult to manufacture. In addition, the system is developed in such a way that users can modify the product design at any stage of the design process. This research dealt with cost modelling of both machined components and injection moulded components. The developed cost-effective design environment was evaluated on real products, including a scientific calculator, a telephone handset, and two machined components. Conclusions drawn from the evaluation indicated that the developed prototype system could help companies reduce product cost and lead time by estimating the total product cost, including assembly cost, throughout the entire product development cycle. The case studies demonstrated that designing a product using the developed system is more cost effective than using traditional systems: the cost estimated for a number of products used in the case studies was almost 10 to 15% less than the cost estimated by the traditional system, since the latter does not take into consideration process optimisation, design alternatives, or design-for-assembly issues.
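
    The abstract names a fuzzy logic approach for handling uncertainty in cost estimation. The sketch below illustrates that general idea only, not the developed system: it maps an uncertain machining-time input to a cost estimate using triangular membership functions and weighted-average defuzzification, with all membership ranges and cost values invented for the example.

```python
# Toy fuzzy cost estimate: triangular membership functions over machining time,
# blended by weighted-average defuzzification. All numbers are hypothetical.
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_cost(machining_minutes):
    # Degrees of membership in three assumed machining-time categories.
    weights = {
        "short": tri(machining_minutes, 0, 5, 15),
        "medium": tri(machining_minutes, 10, 20, 35),
        "long": tri(machining_minutes, 30, 50, 80),
    }
    # Representative cost (currency units) associated with each category.
    costs = {"short": 12.0, "medium": 35.0, "long": 90.0}
    total_weight = sum(weights.values()) or 1.0
    return sum(weights[k] * costs[k] for k in costs) / total_weight

print(estimate_cost(12))   # blends the "short" and "medium" cost estimates
```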