
    Player Behavior Modeling In Video Games

    Get PDF
    In this research, we study players' interactions in video games to understand player behavior. The first part of the research concerns predicting the winner of a game, which we apply to StarCraft and Destiny. We manage to build models for these games with reasonable to high accuracy. We also investigate which features of a game are strong predictors: economic features and micro commands for StarCraft, and key shooter performance metrics for Destiny, though the features differ between match types. The second part of the research concerns distinguishing the playing styles of StarCraft and Destiny players. We find that we can indeed recognize different styles of playing in these games, related to different match types. We relate these playing styles to the chance of winning, but find no significant differences between the effects of different playing styles on winning; they do, however, have an effect on the length of matches. In Destiny, we also investigate which player types are distinguished when we apply Archetypal Analysis to playing-style features related to change in performance, and find that the archetypes correspond to different ways of learning. In the final part of the research, we investigate to what extent playing styles are related to demographics, in particular to national cultures. We investigate this for four popular massively multiplayer online games, namely Battlefield 4, Counter-Strike, Dota 2, and Destiny. We find that playing styles are related to nationality and cultural dimensions, and that there are clear similarities between the playing styles of similar cultures. In particular, the Hofstede dimension Individualism explained most of the variance in playing styles between national cultures for the games that we examined.
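
    As a hedged illustration of the winner-prediction setup described above (not the authors' actual pipeline), the sketch below trains a gradient-boosting classifier on invented per-match features loosely modeled on the StarCraft predictors named in the abstract; every feature name and the synthetic data are assumptions.

    ```python
    # Hypothetical sketch: predicting the winner of a match from aggregate
    # match features, in the spirit of the study above. Feature names and
    # data are invented; the original models and features differ.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 2000
    # Assumed features (player 1 minus player 2): resources gathered,
    # units produced, micro commands issued per minute.
    X = rng.normal(size=(n, 3))
    # Synthetic ground truth: economy and micro both contribute to winning.
    y = (0.8 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n)) > 0

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    print("feature importances:", clf.feature_importances_)
    ```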

    Dynamic Batch Norm Statistics Update for Natural Robustness

    Full text link
    DNNs trained on natural clean samples have been shown to perform poorly on corrupted samples, such as noisy or blurry images. Various data augmentation methods have recently been proposed to improve the robustness of DNNs against common corruptions. Despite their success, they require computationally expensive training and cannot be applied to off-the-shelf trained models. Recently, it has been shown that updating the BatchNorm (BN) statistics of an off-the-shelf model on a single corruption significantly improves its accuracy on that corruption. However, adopting this idea at inference time, when the type of corruption is unknown and changing, decreases the method's effectiveness. In this paper, we harness the Fourier domain to detect the corruption type, a challenging task in the image domain. We propose a unified framework, consisting of a corruption-detection model and a BN statistics update, that improves the corruption accuracy of any off-the-shelf trained model. We benchmark our framework on different models and datasets. Our results demonstrate accuracy improvements of about 8% and 4% on CIFAR10-C and ImageNet-C, respectively. Furthermore, our framework can further improve the accuracy of state-of-the-art robust models, such as AugMix and DeepAug.
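
    A minimal PyTorch sketch of the BN-statistics-update idea the abstract builds on: forwarding corrupted batches in train mode (gradients disabled) lets BN layers re-estimate their running statistics while leaving the weights untouched. The `corrupted_loader` and the choice of model are assumptions for illustration; this is not the paper's full framework (it omits the Fourier-domain corruption detector).

    ```python
    # Sketch of the BatchNorm-statistics-update idea: forward corrupted
    # batches in train mode with gradients disabled, so BN layers update
    # their running mean/variance on the corruption; weights never change.
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    def update_bn_stats(model, corrupted_loader, device="cpu", n_batches=10):
        # corrupted_loader is an assumed DataLoader of corrupted images.
        model.to(device).train()      # train mode: BN updates batch statistics
        with torch.no_grad():         # no parameter updates, only BN buffers
            for i, (images, _) in enumerate(corrupted_loader):
                if i >= n_batches:
                    break
                model(images.to(device))
        model.eval()                  # inference now uses adapted statistics
        return model
    ```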

    Charge carrier and phonon transport in nanostructured thermoelectrics

    Get PDF
    There is currently no quantum mechanical transport model for charge (or phonon) transport in multiphase nano-crystalline structures. Due to the absence of periodicity, one cannot apply any of the elegant theorems, such as Bloch's theorem, that are implicit in the basic theory of crystalline solids. Atomistic methods such as the Kubo formalism and NEGF assume accurate knowledge of the interatomic potentials; however, calculations for real 3D random multi-phase systems require such large computational times that they are practically impossible. In a multi-phase nano-crystalline material, grains and interfacial microstructures may be of three distinct types. In such a material, the physical processes in each individual grain no longer follow the well-described classical continuum linear transport theory. Therefore, a proper model for the coupled transport of charge carriers and phonons that takes into account the effect of their non-equilibrium energy distribution is highly desirable. Two new theories and associated codes based on the Coherent Potential Approximation (CPA), one for electron transport and one for phonon transport, are developed. The codes calculate the charge and phonon transport parameters in nanocomposite structures, which can be nano-crystalline (symmetric case) or a material with embedded nano-particles (dispersion case). CPA specifically accounts for multiple-scattering effects that cannot be captured by other semi-classical methods such as partial-wave analysis or Fermi's golden rule. To our knowledge, this is the first CPA code developed to study both charge and phonon transport in nanocomposite structures. The codes can be extended to different types of nano-crystalline materials, taking into account the average grain size, the grain size distribution, and the volume fraction of the different constituents. This is a strong tool that can describe more complex systems, such as nano-crystals with randomly oriented grains, with predictive power for the electrical and thermal properties of disordered nano-crystalline electronic materials.
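
    For background, a sketch of the standard single-site CPA self-consistency condition in its textbook form; the codes described above may solve a generalized version of this, so the notation here is an assumption, not taken from the work itself.

    ```latex
    % Standard single-site CPA condition: the effective-medium self-energy
    % \Sigma is chosen so the configuration-averaged single-site scattering
    % T-matrix vanishes.
    \[
      \sum_i c_i \, t_i = 0,
      \qquad
      t_i = \frac{\varepsilon_i - \Sigma}
                 {1 - (\varepsilon_i - \Sigma)\,\bar{G}(\Sigma)},
    \]
    % where c_i and \varepsilon_i are the concentration and on-site energy
    % of constituent i, and \bar{G}(\Sigma) is the site-diagonal Green's
    % function of the effective medium.
    ```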

    Multifractal Analysis on the Return Series of Stock Markets Using MF-DFA Method

    Get PDF
    Analyzing the daily returns of the NASDAQ Composite Index using the MF-DFA method shows that the return series does not fit a normal distribution, and its leptokurtosis indicates that a single scaling exponent is insufficient to describe the stock price fluctuations. Furthermore, long-term memory characteristics are found to be a main source of multifractality in the time series. Based on this main cause of multifractality, the original return series is contrasted with a reordered (shuffled) return series to examine the stock price index fluctuations, suggesting that both return series exhibit multifractality. In addition, the empirical results verify the validity of the measures, illustrating that the stock market fails to reach weak-form efficiency.
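
    A sketch of the central MF-DFA quantities, using the standard definitions (Kantelhardt et al. 2002) rather than this paper's own notation, which the abstract does not give:

    ```latex
    % Standard MF-DFA construction. Profile of the mean-subtracted return
    % series x_k:
    \[
      Y(i) = \sum_{k=1}^{i} \left( x_k - \langle x \rangle \right).
    \]
    % Divide Y into N_s segments of length s (from both ends, giving 2N_s),
    % detrend each segment v with a polynomial fit, and compute the variance
    % F^2(s, v). The q-th order fluctuation function then scales with a
    % generalized Hurst exponent h(q):
    \[
      F_q(s) = \left\{ \frac{1}{2N_s} \sum_{v=1}^{2N_s}
               \left[ F^2(s, v) \right]^{q/2} \right\}^{1/q}
             \sim s^{\,h(q)}.
    \]
    % A q-dependent h(q) signals multifractality, as reported above for the
    % NASDAQ return series.
    ```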

    Revisiting Image Classifier Training for Improved Certified Robust Defense against Adversarial Patches

    Full text link
    Certifiably robust defenses against adversarial patches for image classifiers ensure correct prediction against any changes to a constrained neighborhood of pixels. PatchCleanser (arXiv:2108.09135 [cs.CV]), the state-of-the-art certified defense, uses a double-masking strategy for robust classification. The success of this strategy relies heavily on the model's invariance to image pixel masking. In this paper, we take a closer look at model training schemes to improve this invariance. Instead of using Random Cutout (arXiv:1708.04552v2 [cs.CV]) augmentations like PatchCleanser, we introduce the notion of worst-case masking, i.e., selecting masked images that maximize the classification loss. However, finding worst-case masks requires an exhaustive search, which might be prohibitively expensive to do on-the-fly during training. To solve this problem, we propose a two-round greedy masking strategy (Greedy Cutout) that finds an approximate worst-case mask location with much less compute. We show that models trained with our Greedy Cutout improve certified robust accuracy over Random Cutout in PatchCleanser across a range of datasets and architectures. Certified robust accuracy on ImageNet with a ViT-B16-224 model increases from 58.1% to 62.3% against a 3% square patch applied anywhere on the image.
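
    A hedged sketch of a two-round greedy mask search in the spirit of Greedy Cutout: round one scans a coarse grid of mask locations, round two refines around the coarse winner. The grid sizes, mask size, and model interface are assumptions for illustration, not the paper's exact procedure.

    ```python
    # Two-round greedy search for an approximately worst-case mask location:
    # coarse grid first, then a finer grid around the best coarse location.
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def loss_with_mask(model, image, label, top, left, size):
        masked = image.clone()                       # image: (C, H, W)
        masked[:, top:top + size, left:left + size] = 0.0  # zero a square patch
        logits = model(masked.unsqueeze(0))
        return F.cross_entropy(logits, label.unsqueeze(0)).item()

    def greedy_cutout(model, image, label, size=64, coarse=4, fine=8):
        _, H, W = image.shape

        def best_on_grid(tops, lefts):
            return max(((int(t), int(l)) for t in tops for l in lefts),
                       key=lambda tl: loss_with_mask(model, image, label,
                                                     tl[0], tl[1], size))

        # Round 1: coarse grid over the whole image.
        t0, l0 = best_on_grid(torch.linspace(0, H - size, coarse),
                              torch.linspace(0, W - size, coarse))
        # Round 2: finer grid in a local window around the coarse winner.
        return best_on_grid(
            torch.linspace(max(0, t0 - size), min(H - size, t0 + size), fine),
            torch.linspace(max(0, l0 - size), min(W - size, l0 + size), fine))
    ```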

    Uncertainty in the Fluctuations of the Price of Stocks

    Full text link
    We report on a study of the Tehran Price Index (TEPIX) from 2001 to 2006, as an emerging market that has been affected by several political crises in recent years, and analyze the non-Gaussian probability density function (PDF) of the log returns of the stocks' prices. We show that while the average of the index did not fall much over the period of the study, its day-to-day fluctuations increased strongly due to the crises. Using an approach based on multiplicative processes with a detrending procedure, we study the scale dependence of the non-Gaussian PDFs, and show that the temporal dependence of their tails indicates a gradual and systematic increase in the probability of the appearance of large increments in the returns on approaching distinct critical time scales over which the TEPIX has exhibited maximum uncertainty.
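
    A small illustration of probing scale-dependent non-Gaussianity in returns, using excess kurtosis (zero for a Gaussian) as a crude tail measure; synthetic heavy-tailed prices stand in for TEPIX data, and this is not the paper's multiplicative-process analysis.

    ```python
    # Compute log returns at several time scales and track excess kurtosis;
    # heavy tails at short scales that fade at longer scales are the
    # hallmark of scale-dependent non-Gaussian return PDFs.
    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(1)
    # Synthetic log-price series with heavy-tailed (Student-t) daily shocks.
    log_price = np.cumsum(0.01 * rng.standard_t(df=3, size=5000))

    for scale in (1, 4, 16, 64):
        r = log_price[scale:] - log_price[:-scale]   # log returns at this scale
        print(f"scale {scale:3d}: excess kurtosis = {kurtosis(r):.2f}")
    ```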

    A deep active learning system for species identification and counting in camera trap images

    Get PDF
    1. A typical camera trap survey may produce millions of images that require slow, expensive manual review. Consequently, critical conservation questions may be answered too slowly to support decision-making. Recent studies demonstrated the potential for computer vision to dramatically increase efficiency in image-based biodiversity surveys; however, the literature has focused on projects with a large set of labeled training images, and hence many projects with a smaller set of labeled images cannot benefit from existing machine learning techniques. Furthermore, even sizable projects have struggled to adopt computer vision methods because classification models overfit to specific image backgrounds (i.e., camera locations). 2. In this paper, we combine the power of machine intelligence and human intelligence via a novel active learning system to minimize the manual work required to train a computer vision model. Furthermore, we utilize object detection models and transfer learning to prevent overfitting to camera locations. To our knowledge, this is the first work to apply an active learning approach to camera trap images. 3. Our proposed scheme can match state-of-the-art accuracy on a 3.2 million image dataset with as few as 14,100 manual labels, which means decreasing manual labeling effort by over 99.5%. Our trained models are also less dependent on background pixels, since they operate only on cropped regions around animals. 4. The proposed active deep learning scheme can significantly reduce the manual labor required to extract information from camera trap images. Automation of information extraction will not only benefit existing camera trap projects, but can also catalyze the deployment of larger camera trap arrays.
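
    A hedged sketch of an uncertainty-based active-learning loop in the spirit of the system above: a model proposes labels, the least-confident images go to human review, and the model is retrained. The feature pool (assumed to be detector-cropped animal regions), the oracle labels, and the classifier choice are all assumptions, not the authors' pipeline.

    ```python
    # Uncertainty-sampling active-learning loop: label a seed batch, train,
    # then repeatedly query the images the model is least confident about.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def active_learning_loop(X_pool, oracle_labels, n_rounds=5, batch=100):
        """X_pool: feature vectors of detector-cropped animal regions (assumed)."""
        rng = np.random.default_rng(0)
        labeled = list(rng.choice(len(X_pool), size=batch, replace=False))
        for _ in range(n_rounds):
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X_pool[labeled], oracle_labels[labeled])
            proba = clf.predict_proba(X_pool)
            # Uncertainty sampling: query the least-confident predictions.
            uncertainty = 1.0 - proba.max(axis=1)
            uncertainty[labeled] = -1.0        # never re-query labeled images
            labeled += list(np.argsort(-uncertainty)[:batch])
        return clf, labeled
    ```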