
    English language readability task performance in a mobile setting - the effect of gender

    Mobile computing has become commonplace amid today's rapid technological change. People are expected to become increasingly mobile, and tasks now performed in stationary environments will also be undertaken on the move. With road traffic and population growing rapidly, future generations will spend much of their time in mobile environments, so assessing operator performance in such settings will become increasingly important. Mobile environments subject all tasks to vehicular vibration. The present study explored the English language readability performance of a target group. Fourteen subjects (seven male, seven female) from an English language teaching institute were selected. A baseline reading speed, recorded as the number of words read per minute (NWRPM), was obtained from a reading task performed in a stationary environment; the same subjects then read in a vibratory environment, and the difference in performance was measured. In both conditions the stimulus was presented on a laptop, and vibration was assessed according to the ISO 2631-1 (1997) guideline. The data were analyzed with ANOVA. The results indicated that operator performance was significantly affected by the presence of vibration and by text/background color.
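    The abstract does not include the authors' analysis script; the following is a minimal sketch of how reading-speed data from a study of this shape could be analyzed with a factorial ANOVA in Python. The column names, factor levels, and numbers are illustrative assumptions, and a repeated-measures design (same subjects in both environments) would normally call for a repeated-measures variant rather than this simple two-factor model.

```python
# Illustrative only: synthetic NWRPM data for 14 subjects (7 male, 7 female),
# each reading in a stationary and a vibratory environment.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
records = []
for subject in range(14):
    gender = "male" if subject < 7 else "female"
    for environment in ("stationary", "vibration"):
        base = 180 if environment == "stationary" else 160  # made-up group means
        records.append({
            "subject": subject,
            "gender": gender,
            "environment": environment,
            "nwrpm": base + rng.normal(0, 10),
        })
df = pd.DataFrame(records)

# Two-factor ANOVA: environment (stationary vs. vibration) x gender.
model = ols("nwrpm ~ C(environment) * C(gender)", data=df).fit()
print(anova_lm(model, typ=2))
```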

    Generative Adversarial Networks for Mitigating Biases in Machine Learning Systems

    In this paper, we propose a new framework for mitigating biases in machine learning systems. The problem with existing mitigation approaches is that they are model-oriented: they focus on tuning the training algorithms to produce fair results while overlooking the fact that the training data themselves can be the main source of biased outcomes. Technically speaking, such model-based approaches suffer from two essential limitations: 1) mitigation cannot be achieved without degrading the accuracy of the machine learning models, and 2) when the training data are largely biased, training time automatically increases in the search for learning parameters that yield fair results. To address these shortcomings, we propose a new framework that largely mitigates biases and discrimination in machine learning systems while simultaneously enhancing their prediction accuracy. The proposed framework is based on conditional Generative Adversarial Networks (cGANs), which are used to generate new synthetic fair data with selective properties from the original data. We also propose a framework for analyzing data biases, which is important for understanding the amount and type of data that need to be synthetically sampled and labeled for each population group. Experimental results show that the proposed solution can efficiently mitigate different types of biases while at the same time enhancing the prediction accuracy of the underlying machine learning model.
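    The paper's reference implementation is not reproduced in the abstract; below is a minimal PyTorch sketch of the general idea of a conditional GAN whose generator and discriminator are both conditioned on a protected-group label, so that synthetic records can be sampled per group to rebalance a training set. The architectures, dimensions, and training-loop details are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

NOISE_DIM, FEATURE_DIM, NUM_GROUPS = 32, 16, 2  # illustrative sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_GROUPS, 64), nn.ReLU(),
            nn.Linear(64, FEATURE_DIM),
        )

    def forward(self, z, group_onehot):
        # Concatenate noise with the group condition and map to a synthetic record.
        return self.net(torch.cat([z, group_onehot], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM + NUM_GROUPS, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x, group_onehot):
        # Score how plausible a (record, group) pair looks.
        return self.net(torch.cat([x, group_onehot], dim=1))

def train_step(gen, disc, real_x, group_onehot, opt_g, opt_d):
    bce = nn.BCEWithLogitsLoss()
    batch = real_x.size(0)
    # Discriminator update: real pairs labeled 1, generated pairs labeled 0.
    z = torch.randn(batch, NOISE_DIM)
    fake_x = gen(z, group_onehot).detach()
    d_loss = bce(disc(real_x, group_onehot), torch.ones(batch, 1)) + \
             bce(disc(fake_x, group_onehot), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: try to make generated pairs look real.
    z = torch.randn(batch, NOISE_DIM)
    g_loss = bce(disc(gen(z, group_onehot), group_onehot), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage sketch: one step on placeholder data, conditioned on one population group.
gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
real_x = torch.randn(8, FEATURE_DIM)  # stand-in for real records of that group
group = nn.functional.one_hot(torch.ones(8, dtype=torch.long), NUM_GROUPS).float()
train_step(gen, disc, real_x, group, opt_g, opt_d)
```

Once trained, such a generator could be sampled with the label of an under-represented group to produce additional synthetic records for that group before retraining the downstream model.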

    Fourteenth Biennial Status Report: March 2017 - February 2019
