
    Advances of Machine Learning in Materials Science: Ideas and Techniques

    In this big data era, the use of large datasets in conjunction with machine learning (ML) has become increasingly popular in both industry and academia. The field of materials science is also undergoing a big data revolution, with large databases and repositories appearing everywhere. Traditionally, materials science has been a trial-and-error field, in both its computational and experimental branches. With the advent of ML-based techniques, there has been a paradigm shift: materials can now be screened quickly using ML models and even generated based on materials with similar properties; ML has also quietly infiltrated many sub-disciplines of materials science. However, ML remains relatively new to the field and is expanding its reach quickly. There is a plethora of readily available big data architectures and an abundance of ML models and software; the call to integrate all these elements into a comprehensive research procedure is becoming an important direction of materials science research. In this review, we attempt to provide an introduction to and reference on ML for materials scientists, covering as many of the commonly used methods and applications as possible, and discussing future possibilities. Comment: 80 pages; 22 figures. To be published in Frontiers of Physics, 18, xxxxx (2023).
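    The screening workflow the review describes can be sketched minimally: fit a model on descriptors of known materials, then rank unseen candidates by predicted property before committing to expensive simulation or experiment. The descriptors, property values, and threshold below are hypothetical illustrations, and a tiny nearest-neighbour regressor stands in for the far more sophisticated models actually used in the field.

```python
import math

def knn_predict(train, query, k=2):
    """Predict a property for `query` descriptors by averaging the
    k nearest training materials (Euclidean distance in feature space)."""
    dists = sorted((math.dist(x, query), y) for x, y in train)
    nearest = dists[:k]
    return sum(y for _, y in nearest) / k

def screen(candidates, train, threshold, k=2):
    """Keep only candidates whose predicted property meets `threshold` --
    the 'quick screening' step that replaces exhaustive trial and error."""
    return [c for c in candidates if knn_predict(train, c, k) >= threshold]

# Hypothetical data: 2-D descriptors -> a target property (e.g. a band gap).
train = [((1.0, 1.0), 2.0), ((1.1, 1.0), 2.2),
         ((3.0, 3.0), 0.5), ((3.1, 3.0), 0.3)]
shortlist = screen([(1.05, 1.0), (3.05, 3.0)], train, threshold=1.0)
```

In practice the descriptors would come from a materials database and the model from a standard ML library; the point of the sketch is only the screen-then-verify loop.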

    MAPiS 2019 - First MAP-i Seminar: proceedings

    This book contains a selection of Informatics papers accepted for presentation and discussion at “MAPiS 2019 - First MAP-i Seminar”, held in Aveiro, Portugal, on January 31, 2019. MAPiS is the first conference organized by the MAP-i first-year students, in the context of the Seminar course. The MAP-i Doctoral Programme in Computer Science is a joint doctoral programme of the University of Minho, the University of Aveiro, and the University of Porto. This programme aims to train highly qualified professionals, fostering their capacity for and knowledge of research. The conference was organized by the first-year students attending the Seminar course, whose aim was to introduce concepts that are complementary to scientific and technological education but fundamental both to completing a PhD successfully and to pursuing a career in scientific research. The students experienced the typical procedures and difficulties of organizing and participating in such a complex event, being in charge of all aspects of its organization and management, such as the accommodation of participants and the revision of the papers. The works presented at the conference and the papers submitted were also developed by these students, fostering their enthusiasm for research in the Informatics area.

    Implementing machine ethics: using machine learning to raise ethical machines

    As more decisions and tasks are delegated to the artificially intelligent machines of the 21st century, we must ensure that these machines are, on their own, able to engage in ethical decision-making and behaviour. This dissertation makes the case that bottom-up reinforcement learning methods are the best suited for implementing machine ethics by raising ethical machines. This is the first of three main theses in the dissertation: that we must seriously consider how machines themselves, as moral agents that can impact human well-being and flourishing, might make ethically preferable decisions and take ethically preferable actions. The second thesis is that artificially intelligent machines are different in kind from all previous machines. The conjunction of autonomy and intelligence, among other unique features such as the ability to learn and their general-purpose nature, is what sets artificially intelligent machines apart from all previous machines and tools. The third thesis concerns the limitations of artificially intelligent machines. As impressive as these machines are, their abilities are still derived from humans and as such lack the sort of normative commitments humans have. In short, we ought to care deeply about artificially intelligent machines, especially those used in times and places where considered human judgment is required, because otherwise we risk lapsing into a state of moral complacency.

    The Labor of Play: the Political Economy of Computer Game Culture

    This dissertation questions the relationship between computer game culture and the ideologies of neoliberalism and financialization. It asks what role computer games play in cultivating neoliberal practices and how the industry develops games and systems that make play and work indistinguishable activities. Chapter 1 examines how computer games inculcate players into neoliberal practice through play. Chapter 2 shows how Blizzard Entertainment systematically redevelops its games to encourage perpetual play aimed at increasing the consumption of digital commodities and currencies. Chapter 3 considers the role of esports, or professional competitive computer game play, in dispersing neoliberal ideologies among nonprofessional players. Chapter 4 examines the streaming platform Twitch and the transformation of computer gameplay into a consumable commodity, focusing on Twitch’s systems designed to make production and consumption inseparable practices. The dissertation concludes by examining the economic, conceptual, and theoretical collapses threatening game culture and the field of game studies.

    A model for automated support of recognition, extraction, customization, and reconstruction of static charts

    Data charts are widely used in our daily lives, appearing in regular media such as newspapers, magazines, web pages, books, and many others. A well-constructed data chart leads to an intuitive understanding of its underlying data; conversely, when a chart embodies poor design choices, a redesign of the representation may be needed. However, in most cases these charts are shown as static images, which means that the original data are not usually available. Automatic methods could therefore be applied to extract the underlying data from the chart images to allow such changes. The task of recognizing charts and extracting data from them is complex, largely due to the variety of chart types and their visual characteristics. Computer Vision techniques for image classification and object detection are widely used for chart recognition, but mostly on images free of disturbance. Features of real-world images that can make this task difficult, such as photographic distortion, noise, and misalignment, are absent from most works in the literature. Two computer vision techniques that can assist this task, and that have been little explored in this context, are perspective detection and correction. These methods transform a distorted, noisy chart into a clean one, ready for data extraction or other uses. Reconstruction itself is straightforward: as long as the data are available, the visualization can be rebuilt. Reconstructing it in the same context, however, is complex. A Visualization Grammar is a key component for this scenario, as such grammars usually offer extensions for interaction, chart layers, and multiple views without extra development effort. This work presents a model for automated support of custom recognition and reconstruction of charts in images.
The model automatically performs the process steps, such as reverse engineering a static chart back into its data table for later reconstruction, while allowing the user to make modifications in case of uncertainty. This work also features a model-based architecture along with prototypes for various use cases. Validation is performed step by step, with methods inspired by the literature. Three use cases provide proof of concept and validation of the model: the first applies chart recognition methods to documents in the real world; the second focuses on the vocalization of charts, using a visualization grammar to reconstruct a chart in audio form; and the third presents an Augmented Reality application that recognizes and reconstructs charts in their original context (a piece of paper), overlaying the new chart and interaction widgets. The results showed that, with slight changes, chart recognition and reconstruction methods are now ready for real-world charts when time, accuracy, and precision are taken into consideration.
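    To make the visualization-grammar step concrete: once a data table has been recovered from a static image, reconstruction amounts to emitting a grammar specification that a renderer, or a non-visual backend such as the vocalization use case, can consume. The sketch below targets the Vega-Lite grammar; the field names and the bar-mark choice are illustrative assumptions, not the thesis's actual pipeline.

```python
def reconstruct_bar_chart(table):
    """Turn an extracted (label, value) table back into a Vega-Lite spec,
    so the chart can be re-rendered, restyled, layered, or vocalized.
    The 'label'/'value' field names are hypothetical choices."""
    return {
        "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
        "data": {"values": [{"label": l, "value": v} for l, v in table]},
        "mark": "bar",
        "encoding": {
            "x": {"field": "label", "type": "nominal"},
            "y": {"field": "value", "type": "quantitative"},
        },
    }

# A table recovered by the extraction step becomes a renderable spec.
spec = reconstruct_bar_chart([("A", 3), ("B", 5)])
```

Because the spec is plain data, the same extracted table can be handed to different backends, which is exactly why a grammar is described as a key component for reconstructing a chart in context.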

    Advances in Computational Intelligence Applications in the Mining Industry

    This book captures advancements in the application of computational intelligence (artificial intelligence, machine learning, etc.) to problems in the mineral and mining industries. The papers present the state of the art in four broad categories: mine operations, mine planning, mine safety, and advances in the sciences, primarily in image processing applications. The authors include both researchers and industry practitioners.

    Rule-based Machine Learning Algorithms for Smart Automatic Quadrilateral Mesh Generation System

    Mesh generation, one of the six basic research directions identified in NASA Vision 2030, is an important area of computational geometry and plays a fundamental role in numerical simulation for finite element analysis (FEA) and computational fluid dynamics (CFD). With the rapid progress of high-performance computing hardware, mesh generation methods are required to handle geometric domains with more complex shapes and higher resolution, reliably and quickly. Yet existing methods suffer from high computational complexity, low mesh quality in complex geometries, and speed limitations, and remain the bottleneck in such simulation tasks. This thesis addresses the quadrilateral mesh generation problem from three aspects, element extraction, sequential decision making, and data generation, and their combinations. First, a self-learning element-extraction system, FreeMesh-S, is investigated. Element extraction is a major mesh generation method owing to its ability to generate high-quality meshes around the domain boundary, and it can be formulated as a sequential decision-making process. Three kinds of primitive element-extraction rules are conceptually identified. FreeMesh-S then learns the rules by 1) sampling the element generation rules with a reinforcement learning (RL) algorithm, 2) extracting high-quality samples, and 3) training the final rules with a feedforward neural network (FNN). Comprehensive experiments demonstrate the effectiveness of the meshing rules self-learned by FreeMesh-S. Second, an RL-based computational framework for automatic mesh generation is proposed to further improve automation. A state-of-the-art RL algorithm, soft actor-critic (SAC), is used to learn the mesh generator's policy from trials. It achieves fully automatic mesh generation without human intervention or the extra clean-up operations typically needed in current commercial software.
The reward function is carefully designed to balance the tension between the quality of the element extracted at each step and the quality of the remaining boundary, in order to achieve an overall high-quality mesh. Experiments show performance competitive with two representative meshing methods with respect to generalizability, robustness, and effectiveness. The potential of mesh generation as a benchmark problem for RL is also identified. Last, a quality-function-based data generation method is devised to increase learning efficiency and algorithm performance. For any data-driven algorithm, high-quality, balanced data are essential and decisive for performance. This method samples the input-output pairs of the three rules according to their feature spaces; selects high-quality samples with a quality function that evaluates whether the output is an appropriate solution for the input; and trains an FNN model to approximate the mapping from the obtained data. Experiments show that the learning time is greatly reduced while the model performs competitively with other meshing methods. To conclude, this thesis combines artificial intelligence techniques, rule-based systems, neural networks, and RL, to automate quadrilateral mesh generation while significantly reducing the time and expertise needed to create high-quality mesh generation algorithms. All the techniques generalize directly to 3D mesh generation.
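    The reward design above hinges on scoring individual elements. As a hedged illustration, not the thesis's actual quality function, a common angle-based metric for a quadrilateral scores 1.0 for a perfect rectangle and decays toward 0 as corner angles deviate from 90°:

```python
import math

def quad_quality(pts):
    """Angle-based quality for a quadrilateral given as four (x, y)
    corners in order: 1.0 when every interior angle is 90 degrees,
    approaching 0.0 as the worst corner degenerates.
    One common metric; the thesis's exact function may differ."""
    worst = 0.0
    for i in range(4):
        a, b, c = pts[i - 1], pts[i], pts[(i + 1) % 4]
        v1 = (a[0] - b[0], a[1] - b[1])   # edge b -> previous corner
        v2 = (c[0] - b[0], c[1] - b[1])   # edge b -> next corner
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        # Clamp to acos's domain to guard against floating-point drift.
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        worst = max(worst, abs(angle - 90.0))
    return max(0.0, 1.0 - worst / 90.0)

# A unit square is ideal; a sheared quad scores lower.
square = quad_quality([(0, 0), (1, 0), (1, 1), (0, 1)])   # 1.0
sheared = quad_quality([(0, 0), (1, 0), (2, 1), (0, 1)])  # < 1.0
```

A per-element score like this supplies the "instant element quality" half of the reward; the other half, the remaining-boundary quality, would be computed over the yet-unmeshed front.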