877 research outputs found

    It Takes Two to Negotiate: Modeling Social Exchange in Online Multiplayer Games

    Online games are dynamic environments where players interact with each other, offering a rich setting for understanding how players negotiate their way through the game to an ultimate victory. This work studies online player interactions during the turn-based strategy game Diplomacy. We annotated a dataset of over 10,000 chat messages for different negotiation strategies and empirically examined their importance in predicting long- and short-term game outcomes. Although negotiation strategies can be predicted reasonably accurately through linguistic modeling of the chat messages, more is needed for predicting short-term outcomes such as trustworthiness. On the other hand, they are essential in graph-aware reinforcement learning approaches to predicting long-term outcomes, such as a player's success, based on their prior negotiation history. We close with a discussion of the implications and impact of our work. The dataset is available at https://github.com/kj2013/claff-diplomacy.
    Comment: 28 pages, 11 figures. Accepted to CSCW '24 and forthcoming in the Proceedings of ACM HCI '2
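    The linguistic modeling of chat messages described above can be sketched as a standard text-classification pipeline. The following is a minimal illustration assuming scikit-learn; the messages and the strategy label set (promise / threat / info) are invented for illustration, not taken from the claff-diplomacy annotations:

```python
# Minimal sketch: classify chat messages into negotiation strategies with
# TF-IDF features and logistic regression. Messages and the label set
# (promise / threat / info) are hypothetical, not from claff-diplomacy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "I promise to support your move into Munich.",
    "If you attack Vienna, I will retaliate next turn.",
    "Let me tell you what England is planning this spring.",
    "Support me here and I will owe you a favor.",
    "Cross me again and our alliance is over.",
    "France just told me their fleet is heading north.",
]
labels = ["promise", "threat", "info", "promise", "threat", "info"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(messages, labels)

pred = clf.predict(["I swear I will defend your border."])[0]
print(pred)
```

    With the real dataset, the same pipeline would be trained on the 10,000+ annotated messages and evaluated with cross-validation rather than on a handful of toy examples.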

    Machine learning methods for the prediction of non-metallic inclusions in steel wires for tire reinforcement

    ABSTRACT: Non-metallic inclusions are unavoidably produced during steel casting, resulting in lower mechanical strength and other detrimental effects. This study aimed to develop a reliable machine learning algorithm to classify castings of steel for tire reinforcement depending on the number and properties of inclusions, determined experimentally. 855 observations, obtained from the quality control of the steel, were available for training, validating and testing the algorithms. 140 parameters are monitored during fabrication, which serve as the features of the analysis; the output is 1 or 0 depending on whether the casting is rejected or not. The following algorithms were employed: Logistic Regression, K-Nearest Neighbors, Support Vector Classifier (linear and RBF kernels), Random Forests, AdaBoost, Gradient Boosting and Artificial Neural Networks. The low rejection rate implies that classification must be carried out on an imbalanced dataset, so resampling methods and scores suited to imbalanced datasets (Recall, Precision and AUC rather than Accuracy) were used. Random Forest was the most successful method, providing an AUC of 0.85 on the test set. No significant improvements were detected after resampling. The improvement derived from implementing this algorithm in the sampling procedure for quality control during steelmaking has been quantified: the tool allows the samples with a higher probability of being rejected to be selected, thus improving the effectiveness of the quality control. In addition, the optimized Random Forest has enabled the identification of the most important features, which have been satisfactorily interpreted on a metallurgical basis.
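    The imbalanced-classification setup described above can be sketched with a Random Forest scored by ROC AUC rather than accuracy. The data below is synthetic, standing in for the real 855 castings and 140 monitored process parameters:

```python
# Sketch of imbalanced classification with a Random Forest, scored with
# ROC AUC. Synthetic data: ~10% of castings rejected, label depending
# weakly on the first two of 20 stand-in features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 855, 20
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=2.0, size=n) > 3.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# AUC uses the predicted rejection probability, so it is informative even
# when the rejected class is rare and raw accuracy would be misleading.
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.2f}")
```

    Feature importances (`rf.feature_importances_`) would then support the metallurgical interpretation step described in the abstract.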

    A Framework to improve Turbulence Models using Full-field Inversion and Machine Learning

    Accurate prediction of turbulent flows remains a barrier to the widespread use of computational fluid dynamics in analysis and design. Since practical wall-bounded turbulent flows involve a very wide range of length and time scales, it is intractable to resolve all relevant scales, due to limitations in computational power. The usual tools for prediction, in order of their accuracy, include direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes (RANS) based models. DNS and LES will continue to be prohibitively expensive for analysis of high Reynolds number wall-bounded flows for at least two more decades, and for much longer for design applications. At the same time, the high-quality data generated by such simulations provide detailed information about turbulence physics in problems of affordable scale. Experimental measurements can offer limited data in more practical regimes. However, data from simulations and experiments are mostly used for validation, not directly in model improvement. This thesis presents a generalized framework of data-augmented modeling, referred to as field inversion and machine learning (FIML). FIML is used to develop augmentations to RANS-based models using data from DNS, LES or experiments. The framework involves the solution of multiple inverse problems to infer spatial discrepancies in a baseline turbulence model by minimizing the misfit between data and predictions. Solving the inverse problem for the spatial discrepancy field allows the use of data of a wide variety and fidelity, and connects the data and the turbulence model in a manner consistent with the underlying assumptions of the baseline model. Several such discrepancy fields are then used as inputs to a machine learning procedure, which reconstructs corrective functional forms in terms of local flow quantities.
    The machine-learned discrepancy is then embedded within existing turbulence closures, resulting in a partial differential equation/machine learning hybrid, and utilized for prediction. The FIML framework is applied to augment the Spalart-Allmaras (SA) and Wilcox k-ω models for flows involving curvature, adverse pressure gradients, and separation. The value of the framework is demonstrated by augmenting the SA model for massively separated flows over airfoils using lift data for just one airfoil. The augmented SA model accurately predicts the surface pressure, the point of separation and the maximum lift, even for Reynolds numbers and airfoil shapes not used for training the model. The portability of the augmented model is demonstrated by developing augmentations with an in-house finite-volume flow solver and embedding them in a commercial finite-element solver; the ML-augmented model can thus be used in much the same fashion as present-day turbulence models. While the results presented in this thesis are limited to turbulence modeling, the FIML framework represents a general physics-constrained data-driven paradigm that can be applied to augment models governed by partial differential equations.
    PhD thesis, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144034/1/anandps_1.pd
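    The two FIML steps can be illustrated on a toy 1-D problem: field inversion infers a multiplicative discrepancy β(x) that reconciles a baseline model with data, and a regressor then learns β from local features so the correction generalizes. The "model" and "data" below are invented for illustration, not actual RANS quantities:

```python
# Toy FIML sketch. Step 1 (inversion): infer a pointwise multiplicative
# discrepancy beta(x) by gradient descent on a least-squares misfit.
# Step 2 (learning): fit beta against local features (here, a polynomial
# in x) so the correction can be evaluated in new predictions.
import numpy as np

x = np.linspace(0.0, 1.0, 50)
baseline = 1.0 + 0.5 * x                                   # baseline model
truth = baseline * (1.0 + 0.3 * np.sin(2 * np.pi * x))     # "data"

# Step 1: minimize sum((beta*baseline - truth)^2) over beta(x).
beta = np.ones_like(x)
for _ in range(500):
    grad = 2.0 * (beta * baseline - truth) * baseline      # d(misfit)/d(beta)
    beta -= 0.05 * grad

# Step 2: learn beta as a function of features (a degree-5 polynomial here
# stands in for the ML model over local flow quantities).
coeffs = np.polyfit(x, beta, deg=5)
beta_ml = np.polyval(coeffs, x)

augmented = beta_ml * baseline
print(f"max error baseline:  {np.max(np.abs(baseline - truth)):.3f}")
print(f"max error augmented: {np.max(np.abs(augmented - truth)):.3f}")
```

    In the actual framework the inversion is a PDE-constrained optimization with adjoint gradients, and the regressor maps local flow quantities, not coordinates, to the discrepancy.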

    Physical layer authentication using ensemble learning technique in wireless communications

    Cyber-physical wireless systems have surfaced as an important data communication and networking research area. They form an emerging discipline that allows effective monitoring and efficient real-time communication between the cyber and physical worlds by embedding computer software and integrating communication and networking technologies. Due to their high reliability, sensitivity and connectivity, their security requirements are comparable to those of the Internet, as they are prone to various security threats such as eavesdropping, spoofing, botnets, man-in-the-middle attacks, denial of service (DoS), distributed denial of service (DDoS) and impersonation. Existing methods use physical layer authentication (PLA), the most promising solution for detecting such attacks, but cyber-physical systems (CPS) have relatively large computational requirements and need more communication resources, making a low-latency target impossible to achieve; these methods also perform well only in stationary scenarios. To improve the computational time required for data processing while considering mobile scenarios, we extract the relevant features from the channel matrices using the discrete wavelet transform. The features are fed to ensemble learning algorithms, such as AdaBoost, LogitBoost and GentleBoost, to classify the data. Authentication of the received signal is treated as a binary classification problem: transmitted data is labeled as legitimate information, and spoofing data as illegitimate information. This paper therefore proposes a threshold-free PLA approach that uses machine learning algorithms to protect critical data from spoofing attacks. It detects malicious data packets in stationary scenarios and detects them with high accuracy when receivers are mobile. The proposed model achieves better performance than existing approaches in terms of accuracy and computational time by decreasing the processing time.
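    The pipeline above can be sketched as wavelet feature extraction followed by ensemble classification. The sketch below uses a hand-rolled one-level Haar DWT and synthetic channel observations in place of measured channel matrices:

```python
# Sketch of threshold-free PLA: compress channel observations with a
# one-level Haar wavelet transform, then classify legitimate vs. spoofed
# transmissions with AdaBoost. Channel data is synthetic.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

def haar_dwt(signal):
    """One-level Haar DWT: approximation and detail coefficients."""
    pairs = signal.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return np.concatenate([approx, detail])

rng = np.random.default_rng(1)
n = 400
# Legitimate channels centred on one response, spoofed on a shifted one.
legit = rng.normal(loc=1.0, scale=0.3, size=(n // 2, 16))
spoof = rng.normal(loc=0.4, scale=0.3, size=(n // 2, 16))
X = np.vstack([legit, spoof])
y = np.array([1] * (n // 2) + [0] * (n // 2))   # 1 = legitimate, 0 = spoofed

X_w = np.array([haar_dwt(row) for row in X])    # wavelet features
X_tr, X_te, y_tr, y_te = train_test_split(X_w, y, stratify=y, random_state=0)
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

    A real deployment would replace the synthetic rows with per-packet channel estimates and add the mobile-receiver scenarios the paper targets.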

    Wind Power Forecasting Methods Based on Deep Learning: A Survey

    Accurate wind power forecasting for a wind farm can effectively reduce the enormous impact on grid operation safety when a high-permeability intermittent power supply is connected to the power grid. Aiming to provide reference strategies for relevant researchers as well as practical applications, this paper surveys and analyzes methods of deep learning, reinforcement learning and transfer learning in wind speed and wind power forecasting modeling. Wind speed and wind power forecasting around a wind farm usually requires predicting the next state from the current one, based on the state of the atmosphere, which encompasses nearby atmospheric pressure, temperature, surface roughness, and obstacles. As an effective method of high-dimensional feature extraction, deep neural networks can in theory handle arbitrary nonlinear transformations through proper structural design, for example by adding noise to outputs, using evolutionary learning to optimize hidden-layer weights, or optimizing the objective function so as to retain information that improves output accuracy while filtering out irrelevant or weakly related information. The establishment of high-precision wind speed and wind power forecasting models remains a challenge due to the randomness, instantaneity and seasonal characteristics of wind.
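    The "predict the next state from the current one" formulation above can be sketched as sliding-window regression. The series below is synthetic (a diurnal cycle plus noise) and the small network stands in for the deeper architectures the survey covers; real models would add pressure, temperature, roughness and other atmospheric inputs as features:

```python
# Sketch of state-based one-step wind-speed forecasting: predict the next
# value from a sliding window of recent values using a small neural net.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
t = np.arange(2000)
# Synthetic wind speed (m/s): mean level, diurnal cycle, noise.
wind = 8.0 + 3.0 * np.sin(2 * np.pi * t / 144) + rng.normal(scale=0.5, size=t.size)

window = 12                                   # last 12 readings as the "state"
X = np.array([wind[i:i + window] for i in range(len(wind) - window)])
y = wind[window:]
split = int(0.8 * len(X))                     # chronological train/test split

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X[:split], y[:split])
rmse = float(np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2)))
print(f"one-step RMSE: {rmse:.2f} m/s")
```

    The chronological split matters: shuffling a time series before splitting would leak future information into training.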

    A Learning-based Approach to Exploiting Sensing Diversity in Performance Critical Sensor Networks

    Wireless sensor networks for human health monitoring, military surveillance, and disaster warning all have stringent accuracy requirements for detecting and classifying events while maximizing system lifetime. To meet high accuracy requirements and maximize system lifetime, we must address sensing diversity: differences in sensing capability among both heterogeneous and homogeneous sensors in a specific deployment. Existing approaches either ignore sensing diversity entirely and assume all sensors have similar capabilities, or attempt to overcome sensing diversity through calibration. Instead, we use machine learning to take advantage of sensing differences among heterogeneous sensors to provide high accuracy and energy savings for performance-critical applications. In this dissertation, we provide five major contributions that exploit the nuances of specific sensor deployments to increase application performance. First, we demonstrate that by using machine learning for event detection, we can explore the sensing capability of a specific deployment and use only the most capable sensors to meet user accuracy requirements. Second, we expand our diversity-exploiting approach to detect multiple events in a distributed manner. Third, we address sensing diversity in body sensor networks, providing a practical, user-friendly solution for activity recognition. Fourth, we further increase accuracy and energy savings in body sensor networks by sharing sensing resources among neighboring body sensor networks. Lastly, we provide a learning-based approach for forwarding event detection decisions to data sinks in an environment with mobile sensor nodes.
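    The "use only the most capable sensors" idea can be sketched as scoring each sensor's individual detection ability on labelled events and activating the best performers. Sensor readings below are synthetic, with deliberately unequal noise levels to mimic sensing diversity:

```python
# Sketch of diversity-exploiting sensor selection: rank sensors by their
# own cross-validated detection accuracy, then keep only the top ones.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_events, n_sensors = 300, 6
y = rng.integers(0, 2, size=n_events)          # event present / absent
# Each sensor observes the event through a different noise level.
noise = np.array([0.3, 0.5, 0.8, 1.2, 2.0, 3.0])
readings = y[:, None] + rng.normal(scale=noise, size=(n_events, n_sensors))

# Score each sensor individually via cross-validated detection accuracy.
scores = [
    cross_val_score(LogisticRegression(), readings[:, [s]], y, cv=5).mean()
    for s in range(n_sensors)
]
ranked = np.argsort(scores)[::-1]              # most capable sensors first
top_k = ranked[:2]                             # keep the best two, save energy
print(f"selected sensors: {sorted(top_k.tolist())}")
```

    Leaving the noisier sensors asleep is where the energy savings come from: accuracy requirements are met by the capable subset while the rest stay idle.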

    Artificial intelligence in the cyber domain: Offense and defense

    Artificial intelligence techniques have grown rapidly in recent years, and their applications in practice can be seen in many fields, ranging from facial recognition to image analysis. In the cybersecurity domain, AI-based techniques can provide better cyber defense tools and also help adversaries improve their methods of attack; malicious actors are aware of these new prospects and will probably attempt to use them for nefarious purposes. This survey paper provides an overview of how artificial intelligence can be used in the context of cybersecurity in both offense and defense.

    Trends and Prospects in Geotechnics

    This Special Issue book presents works considered innovative in the field of geotechnics whose practical application may occur in the near future. The collection of twelve papers, beyond their scientific merit, addresses some of the current and future challenges in geotechnics. The published papers cover a wide range of emerging topics with a specific focus on the research, design, construction, and performance of geotechnical works. These works are expected to inspire the development of geotechnics, contributing to the future construction of more resilient and sustainable geotechnical structures.