Player Behavior Modeling in Video Games
In this research, we study players' interactions in video games to understand player behavior. The first part of the research concerns predicting the winner of a game, which we apply to StarCraft and Destiny. We build models for these games with reasonable to high accuracy. We also investigate which features of a game are strong predictors: economic features and micro commands for StarCraft, and key shooter performance metrics for Destiny, though the features differ between match types. The second part of the research concerns distinguishing the playing styles of StarCraft and Destiny players. We find that we can indeed recognize different styles of playing in these games, related to different match types. We relate these playing styles to the chance of winning, but find no significant differences between the effects of different playing styles on winning; however, they do have an effect on the length of matches. In Destiny, we also investigate which player types are distinguished when we use Archetype Analysis on playing-style features related to change in performance, and find that the archetypes correspond to different ways of learning. In the final part of the research, we investigate to what extent playing styles are related to demographics, in particular to national cultures. We investigate this for four popular massively multiplayer online games, namely Battlefield 4, Counter-Strike, Dota 2, and Destiny. We find that playing styles are related to nationality and cultural dimensions, and that there are clear similarities between the playing styles of similar cultures. In particular, the Hofstede dimension Individualism explained most of the variance in playing styles between national cultures for the games that we examined.
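As an illustration of the win-prediction part, a minimal sketch in Python with scikit-learn is shown below; the feature names and synthetic data are placeholders, not the actual StarCraft or Destiny features used in the research.

```python
# Minimal sketch of a match-outcome classifier on hand-crafted features; the
# feature names and synthetic data are placeholders, not the StarCraft or Destiny
# features analyzed in the research.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_matches = 1000
# Hypothetical per-match features: economy score, micro-command rate, APM, kill/death ratio
X = rng.normal(size=(n_matches, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_matches) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))

clf.fit(X, y)
# Feature importances indicate which features are the strongest predictors of winning
print("importances:", clf.feature_importances_.round(3))
```

Inspecting feature importances in this way mirrors the question of which game features comprise strong predictors of the winner.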
Dynamic Batch Norm Statistics Update for Natural Robustness
DNNs trained on natural clean samples have been shown to perform poorly on
corrupted samples, such as noisy or blurry images. Various data augmentation
methods have recently been proposed to improve DNNs' robustness against common
corruptions. Despite their success, they require computationally expensive
training and cannot be applied to off-the-shelf trained models. Recently, it
has been shown that updating BatchNorm (BN) statistics of an off-the-shelf
model on a single corruption improves its accuracy on that corruption
significantly. However, adopting the idea at inference time, when the type of
corruption is unknown and changing, decreases the effectiveness of this method.
In this paper, we harness the Fourier domain to detect the corruption type, a
challenging task in the image domain. We propose a unified framework consisting
of a corruption-detection model and BN statistics update that improves the
corruption accuracy of any off-the-shelf trained model. We benchmark our
framework on different models and datasets. Our results demonstrate about 8%
and 4% accuracy improvement on CIFAR10-C and ImageNet-C, respectively.
Furthermore, our framework can further improve the accuracy of state-of-the-art
robust models, such as AugMix and DeepAug.
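The BN-statistics update step itself is simple to sketch. Below is a minimal PyTorch version, assuming an off-the-shelf model and a DataLoader of corrupted images (`corrupted_loader` is a placeholder name); the Fourier-domain corruption detector that decides which statistics to apply is omitted.

```python
# Minimal sketch: refresh BatchNorm running statistics on corrupted data while
# keeping all weights frozen. `corrupted_loader` is a placeholder DataLoader,
# not part of the paper's released code.
import torch
import torch.nn as nn

def adapt_bn_stats(model: nn.Module, corrupted_loader, momentum: float = 0.1, device: str = "cpu"):
    model.to(device).eval()
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.train()              # only BN layers update running_mean / running_var
            m.momentum = momentum
    with torch.no_grad():          # forward passes only; no gradient updates
        for images, _ in corrupted_loader:
            model(images.to(device))
    return model.eval()
```

In the framework described above, one such set of adapted statistics would be kept per detected corruption type and selected at inference time.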
Charge carrier and phonon transport in nanostructured thermoelectrics
There is currently no quantum mechanical transport model for charge (or phonon) transport in multiphase nano-crystalline structures. Due to the absence of periodicity, one cannot apply any of the elegant theorems, such as Bloch's theorem, that are implicit in the basic theory of crystalline solids. Atomistic models such as Kubo and NEGF may assume an accurate knowledge of the interatomic potentials; however, calculations for real 3D random multi-phase systems require such large computational times that they are practically impossible. In a multi-phase nano-crystalline material, grains and interfacial microstructures may be of three distinct types. In such a material, the physical processes in each individual grain no longer follow the well-described classical continuum linear transport theory. Therefore, a proper model for the coupled transport of charge carriers and phonons that takes into account the effect of their non-equilibrium energy distribution is highly desirable. Two new theories and associated codes based on the Coherent Potential Approximation (CPA), one for electron transport and one for phonon transport, are developed. The codes calculate the charge and phonon transport parameters in nanocomposite structures, which can be nano-crystalline (symmetric case) or materials with embedded nano-particles (dispersion case). CPA specifically considers multiple-scattering effects that cannot be explained with other semi-classical methods such as partial waves or Fermi's golden rule. To our knowledge, these are the first CPA codes developed to study both charge and phonon transport in nanocomposite structures. The codes can be extended to different types of nano-crystalline materials, taking into account the average grain size, the grain size distribution, and the volume fraction of the different constituents in the materials. This is a strong tool that can describe more complex systems, such as nano-crystals with randomly oriented grains, with predictive power for the electrical and thermal properties of disordered nano-crystalline electronic materials.
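The abstract gives no equations, but the single-site CPA self-consistency condition that such codes typically solve can be written as follows (textbook form, shown only for orientation; the multiphase, grain-size-dependent generalization developed in this work may differ):

```latex
% Standard single-site CPA self-consistency condition (textbook form), shown for
% illustration; the multiphase/nanograin generalization in this work may differ.
\sum_i c_i \, t_i(E) = 0, \qquad
t_i(E) = \frac{\varepsilon_i - \Sigma(E)}
              {1 - \left[\varepsilon_i - \Sigma(E)\right]\,\bar{G}(E)}
```

Here c_i and ε_i are the concentration and on-site energy of constituent i, Σ(E) is the self-energy of the effective medium, and Ḡ(E) is its site-diagonal Green's function; the effective medium is chosen so that scattering off any single site vanishes on average, which is what distinguishes CPA from single-scattering treatments such as partial waves or Fermi's golden rule.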
Multifractal Analysis on the Return Series of Stock Markets Using MF-DFA Method
Analyzing the daily returns of the NASDAQ Composite Index using the MF-DFA method shows that the return series does not fit the normal distribution, and its leptokurtosis indicates that a single-scale index is insufficient to describe stock price fluctuations. Furthermore, long-term memory characteristics are found to be a main source of multifractality in the time series. Based on this main cause of multifractality, the original return series is contrasted with a reordered (shuffled) return series to examine the stock price index fluctuations, suggesting that both return series exhibit multifractality. In addition, the empirical results verify the validity of the measures, which illustrate that the stock market fails to reach weak-form efficiency.
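A minimal MF-DFA sketch in Python is given below to make the procedure concrete: build the profile, detrend fixed-size segments, form the q-th order fluctuation function, and estimate the generalized Hurst exponent h(q) from its scaling. It is a simplified illustration (forward segments only, q ≠ 0), not the code used in the study.

```python
# Minimal MF-DFA sketch (forward segments only, q != 0); an illustration of the
# procedure, not the code used in the study.
import numpy as np

def mfdfa(x, scales, q_list, order=1):
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())                 # 1. profile of the series
    Fq = np.zeros((len(q_list), len(scales)))
    for si, s in enumerate(scales):
        n_seg = len(profile) // s
        t = np.arange(s)
        F2 = np.empty(n_seg)
        for v in range(n_seg):                        # 2.-3. detrend each segment
            seg = profile[v * s:(v + 1) * s]
            fit = np.polyval(np.polyfit(t, seg, order), t)
            F2[v] = np.mean((seg - fit) ** 2)
        for qi, q in enumerate(q_list):               # 4. q-th order fluctuation function
            Fq[qi, si] = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)
    # 5. generalized Hurst exponents h(q) from the scaling of Fq(s)
    return {q: np.polyfit(np.log(scales), np.log(Fq[qi]), 1)[0]
            for qi, q in enumerate(q_list)}

returns = np.random.default_rng(0).standard_t(df=3, size=5000)  # leptokurtic toy series
print(mfdfa(returns, scales=[16, 32, 64, 128, 256], q_list=[-3, -1, 1, 3]))
```

A q-dependent h(q) signals multifractality; comparing h(q) of the original series with that of a shuffled (reordered) series indicates how much of the multifractality stems from long-term memory rather than from the fat-tailed distribution.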
Identifying and Prioritizing Dimensions of Information Technology Development in Human Resource Management with an Organizational Agility Approach: A Fuzzy Approach
Objectives: Research on the information technology resources and capabilities that enable organizations to be agile in the face of change is emerging. Creating agility through information technology is a long-term process and a difficult task for any organization. In this regard, information technology capabilities support the organization's agility and play a fundamental role in understanding and reacting to the environment. Today, the use of information technology in organizations has had a tremendous impact on human resource management and has increased its productivity. Based on this, the present research was conducted for the first time with the aim of identifying the dimensions of information technology development in human resource management with an organizational agility approach in the police command headquarters of Ardabil province.
Methods: This research is descriptive-exploratory in design. The statistical population consisted of 20 employees of the police command headquarters of Ardabil province who worked in the informatics and technology department and were selected purposefully through chain-referral (snowball) sampling. The data collection tool was in-depth, semi-structured interviews with experts, conducted using the Delphi method in two rounds. Cronbach's alpha was used to check the reliability of the questionnaire, and the Fornell and Larcker method was used to check its validity. After determining the dimensions, priorities were determined using the hierarchical approach and the Expert Choice software. Kendall's coefficient of concordance was used to determine the degree of agreement among the experts' opinions. The research was conducted between January 2022 and July 2022.
Results: The findings from the interviews with experts show that the criteria of intelligence and speed of human resources, competence of technology and human resources, knowledge sharing and responsibility, and flexibility in information technology were identified as the dimensions of information technology development with an organizational agility approach in the police command headquarters of Ardabil province.
Conclusions: The results show that the competence of technology and human resources, with a value of 0.319, was ranked first among the dimensions of information technology development with an organizational agility approach. It was followed by the intelligence and speed of human resources (0.255), knowledge sharing and responsibility (0.236), and flexibility in information technology (0.186) as the most effective dimensions of information technology development with an organizational agility approach in the police command headquarters of Ardabil province. Finally, the consistency (compatibility) rate in this study was 0.004.
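The reported priorities and the very low consistency rate are typical outputs of a pairwise-comparison prioritization such as the one Expert Choice performs. A minimal sketch of that computation is shown below; the pairwise comparison matrix is invented for illustration and does not reproduce the experts' actual judgments or the exact weights reported above.

```python
# Illustrative AHP-style priority computation (principal-eigenvector method),
# analogous to what Expert Choice performs; the pairwise comparison matrix is
# invented and does not reproduce the experts' judgments or the reported weights.
import numpy as np

A = np.array([
    [1.0, 2.0, 2.0, 2.0],   # competence of technology and human resources
    [0.5, 1.0, 1.0, 2.0],   # intelligence and speed of human resources
    [0.5, 1.0, 1.0, 1.0],   # knowledge sharing and responsibility
    [0.5, 0.5, 1.0, 1.0],   # flexibility in information technology
])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority weights, summing to 1

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
ri = 0.90                                      # Saaty's random index for n = 4
print("weights:", w.round(3), " consistency ratio:", round(ci / ri, 3))
```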
Revisiting Image Classifier Training for Improved Certified Robust Defense against Adversarial Patches
Certifiably robust defenses against adversarial patches for image classifiers
ensure correct prediction against any changes to a constrained neighborhood of
pixels. PatchCleanser (arXiv:2108.09135 [cs.CV]), the state-of-the-art certified
defense, uses a double-masking strategy for robust classification. The success
of this strategy relies heavily on the model's invariance to image pixel
masking. In this paper, we take a closer look at model training schemes to
improve this invariance. Instead of using Random Cutout (arXiv:1708.04552v2
[cs.CV]) augmentations like PatchCleanser, we introduce the notion of worst-case
masking, i.e., selecting masked images which maximize classification loss.
However, finding worst-case masks requires an exhaustive search, which might be
prohibitively expensive to do on-the-fly during training. To solve this
problem, we propose a two-round greedy masking strategy (Greedy Cutout) which
finds an approximate worst-case mask location with much less compute. We show
that models trained with our Greedy Cutout improve certified robust
accuracy over Random Cutout in PatchCleanser across a range of datasets and
architectures. Certified robust accuracy on ImageNet with a ViT-B16-224 model
increases from 58.1% to 62.3% against a 3% square patch applied anywhere on
the image.
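A minimal sketch of the two-round greedy mask search described above is shown below, assuming a PyTorch image classifier; the grid sizes, mask size, and helper names (`apply_mask`, `greedy_cutout`) are illustrative choices, not the authors' implementation.

```python
# Sketch of a two-round greedy worst-case mask search (Greedy Cutout-style),
# assuming a PyTorch classifier; sizes and helper names are illustrative only.
import torch
import torch.nn.functional as F

def apply_mask(img, top, left, size):
    """Zero out a square patch of an image tensor of shape (C, H, W)."""
    out = img.clone()
    out[..., top:top + size, left:left + size] = 0.0
    return out

def greedy_cutout(model, img, label, mask_size=56, coarse=4, stride=8):
    """Two-round greedy search for the mask location that maximizes the loss."""
    _, H, W = img.shape
    target = torch.tensor([label])

    def worst(positions):
        best_pos, best_loss = positions[0], -1.0
        with torch.no_grad():
            for top, left in positions:
                logits = model(apply_mask(img, top, left, mask_size).unsqueeze(0))
                loss = F.cross_entropy(logits, target).item()
                if loss > best_loss:
                    best_pos, best_loss = (top, left), loss
        return best_pos

    # Round 1: coarse grid over the whole image
    tops = torch.linspace(0, H - mask_size, coarse).long().tolist()
    lefts = torch.linspace(0, W - mask_size, coarse).long().tolist()
    t0, l0 = worst([(t, l) for t in tops for l in lefts])

    # Round 2: finer grid around the coarse winner, clamped to image bounds
    offsets = (-stride, 0, stride)
    local = [(min(max(t0 + dt, 0), H - mask_size), min(max(l0 + dl, 0), W - mask_size))
             for dt in offsets for dl in offsets]
    return worst(local)  # approximate worst-case mask location for training
```

The returned location can then be used to mask the training image in place of a random Cutout position, which is the invariance-improving step the abstract describes.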
A deep active learning system for species identification and counting in camera trap images
1. A typical camera trap survey may produce millions of images that require slow, expensive manual review. Consequently, critical conservation questions may be answered too slowly to support decision-making. Recent studies demonstrated the potential for computer vision to dramatically increase efficiency in image-based biodiversity surveys; however, the literature has focused on projects with a large set of labeled training images, and hence many projects with a smaller set of labeled images cannot benefit from existing machine learning techniques. Furthermore, even sizable projects have struggled to adopt computer vision methods because classification models overfit to specific image backgrounds (i.e., camera locations).
2. In this paper, we combine the power of machine intelligence and human intelligence via a novel active learning system to minimize the manual work required to train a computer vision model. Furthermore, we utilize object detection models and transfer learning to prevent overfitting to camera locations. To our knowledge, this is the first work to apply an active learning approach to camera trap images.
3. Our proposed scheme can match state-of-the-art accuracy on a 3.2 million image dataset with as few as 14,100 manual labels, which means decreasing manual labeling effort by over 99.5%. Our trained models are also less dependent on background pixels, since they operate only on cropped regions around animals.
4. The proposed active deep learning scheme can significantly reduce the manual labor required to extract information from camera trap images. Automation of information extraction will not only benefit existing camera trap projects, but can also catalyze the deployment of larger camera trap arrays.
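As an illustration of the core idea, here is a minimal pool-based active learning loop with least-confidence sampling; it is a toy sketch using a scikit-learn classifier on synthetic features and omits the paper's object detector, transfer learning, and camera-trap specifics.

```python
# Minimal pool-based active learning loop with least-confidence sampling; a toy
# illustration, not the system described in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_pool, y_pool, n_rounds=5, batch=100, seed=0):
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=batch, replace=False))
    model = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        model.fit(X_pool[labeled], y_pool[labeled])      # train on labels obtained so far
        probs = model.predict_proba(X_pool)
        uncertainty = 1.0 - probs.max(axis=1)            # least-confident sampling
        uncertainty[labeled] = -1.0                       # never re-query labeled items
        query = np.argsort(uncertainty)[-batch:]          # send these to human annotators
        labeled.extend(query.tolist())
    return model, labeled

# Toy demonstration: synthetic "features" standing in for detector crops
X = np.random.default_rng(1).normal(size=(5000, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model, labeled = active_learning_loop(X, y)
print("labels used:", len(labeled), " accuracy:", round(model.score(X, y), 3))
```

Querying only the images the model is least sure about is what lets such a system approach full-supervision accuracy with a small fraction of the manual labels.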
- …