
    A Zero-Inflated Box-Cox Normal Unipolar Item Response Model for Measuring Constructs of Psychopathology

    This research introduces a latent class item response theory (IRT) approach for modeling item response data from zero-inflated, positively skewed, and arguably unipolar constructs of psychopathology. As motivating data, the authors use 4,925 responses to the Patient Health Questionnaire (PHQ-9), a nine-item Likert-type depression screener that asks about a variety of depressive symptoms. First, Lucke’s log-logistic unipolar item response model is extended to accommodate polytomous responses. Then, the nontrivial proportion of individuals who do not endorse any of the symptoms is accounted for by including a nonpathological class representing those who may be absent from, or at some floor level of, the latent variable measured by the PHQ-9. To enhance flexibility, a Box-Cox normal distribution is used to empirically estimate a transformation parameter that characterizes the degree of skewness in the latent variable density. A model comparison approach is used to test the necessity of each feature of the proposed model. Results suggest that (a) the Box-Cox normal transformation provides empirical support for using a log-normal population density, and (b) model fit improves substantially when a nonpathological latent class is included. The parameter estimates from the latent class IRT model are used to interpret the psychometric properties of the PHQ-9, and a method of computing IRT scale scores that reflect unipolar constructs is described, with a focus on how these scores may be used in clinical contexts.
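    To make the model's two key ingredients concrete, here is a minimal Python sketch, with made-up parameters, of (i) the Box-Cox normal latent density and (ii) a zero-inflated marginal likelihood that mixes a nonpathological class with an IRT class. A binary log-logistic-style item curve stands in for the polytomous extension; none of this is the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def box_cox(theta, lam):
    """Box-Cox transform; tends to log(theta) as lam -> 0."""
    return np.log(theta) if abs(lam) < 1e-8 else (theta**lam - 1.0) / lam

def box_cox_normal_pdf(theta, lam, mu=0.0, sigma=1.0):
    """Density of a positive latent trait whose Box-Cox transform is
    N(mu, sigma); theta**(lam - 1) is the Jacobian of the transform."""
    return norm.pdf(box_cox(theta, lam), mu, sigma) * theta**(lam - 1.0)

def item_prob(theta, a, b):
    """Log-logistic-style endorsement probability for a binary item
    (a polytomous version would stack several such curves)."""
    return (b * theta**a) / (1.0 + b * theta**a)

def zi_pattern_likelihood(resp, pi0, lam, items, grid=None):
    """Marginal likelihood of one response pattern under the two-class
    mixture: a nonpathological class that endorses nothing (prob. pi0),
    and an IRT class integrated over the Box-Cox normal density."""
    grid = np.linspace(1e-3, 10.0, 301) if grid is None else grid
    w = box_cox_normal_pdf(grid, lam)
    w /= w.sum()                              # quadrature weights
    p = np.ones_like(grid)
    for r, (a, b) in zip(resp, items):        # local independence
        pe = item_prob(grid, a, b)
        p *= pe if r == 1 else (1.0 - pe)
    irt_part = (1.0 - pi0) * np.sum(w * p)
    return (pi0 if not any(resp) else 0.0) + irt_part

# Illustrative call with made-up parameters for three binary items
items = [(1.2, 0.8), (0.9, 0.5), (1.5, 1.1)]
print(zi_pattern_likelihood([0, 0, 0], pi0=0.3, lam=0.0, items=items))
print(zi_pattern_likelihood([1, 0, 1], pi0=0.3, lam=0.0, items=items))
```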

    Analysis and evaluation of fragment size distributions in rock blasting at the Erdenet Mine

    Master's Project (M.S.), University of Alaska Fairbanks, 2015. Rock blasting is one of the most important operations in mining. It significantly affects the subsequent comminution processes and is therefore critical to successful mining production. In this study, to evaluate blasting performance at the Erdenet Mine, we analyzed rock fragment size distributions using digital image processing. The uniformities of the rock fragments and the mean fragment sizes were determined and applied in the Kuz-Ram model. Statistical prediction models were also developed based on field-measured parameters, and their results were compared with the Kuz-Ram model predictions and the digital image processing measurements. A total of twenty-eight images from eleven blasting patterns were processed, and rock size distributions were determined with the Split-Desktop program. Based on the rock mass and explosive properties and the blasting parameters, the rock fragment size distributions were also determined with the Kuz-Ram model and compared with the digital image processing measurements. Furthermore, to improve the prediction of rock fragment size distributions at the mine, regression analyses were conducted and statistical models were developed for estimating the uniformity index and characteristic size. The results indicated discrepancies between the digital image measurements and the Kuz-Ram model estimates: the uniformity indices from image processing varied from 0.76 to 1.90, while those estimated by the Kuz-Ram model ranged from 1.07 to 1.13, and the mean fragment size predicted by the Kuz-Ram model was 97.59% greater than that measured by image processing. The multivariate nonlinear regression analyses indicated that rock uniaxial compressive strength and elastic modulus, explosive energy input, the bench height to burden ratio, and the blast area per hole were significant predictors of the fragment characteristic size and the uniformity index. The regression models built on these predictors showed much closer agreement with the measurements.
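    For reference, a compact Python sketch of the Kuz-Ram calculation as it is commonly stated (Kuznetsov's mean-size equation plus a Rosin-Rammler size distribution). The constants follow the textbook form of the model as best recalled here, and the inputs are illustrative, not Erdenet design values.

```python
import numpy as np

def kuznetsov_x50(A, V0, Q, E):
    """Kuznetsov mean fragment size (cm) as used in the Kuz-Ram model:
    A = rock factor, V0 = rock volume broken per blasthole (m^3),
    Q = explosive mass per hole (kg), E = relative weight strength
    of the explosive (ANFO = 100)."""
    return A * (V0 / Q) ** 0.8 * Q ** (1.0 / 6.0) * (115.0 / E) ** (19.0 / 30.0)

def rosin_rammler_passing(x, x50, n):
    """Cumulative fraction passing screen size x for a Rosin-Rammler
    distribution pinned so that P(x50) = 50%; n is the uniformity index."""
    return 1.0 - np.exp(-0.693 * (x / x50) ** n)

# Illustrative pattern: 6.5 m burden x 7.5 m spacing x 15 m bench, 450 kg/hole
x50 = kuznetsov_x50(A=7.0, V0=6.5 * 7.5 * 15.0, Q=450.0, E=100.0)
for size_cm in (10.0, 25.0, 50.0, 100.0):
    print(size_cm, rosin_rammler_passing(size_cm, x50, n=1.0))
```

    The uniformity index n is normally estimated from Cunningham's formula in terms of burden, spacing, hole diameter, drilling accuracy, and bench geometry, which is consistent with the geometric predictors the regression models above found significant.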

    Forecasting Player Behavioral Data and Simulating in-Game Events

    Understanding player behavior is fundamental in game data science. Video games evolve as players interact with them, so being able to foresee player experience helps ensure successful game development. In particular, game developers need to evaluate the impact of in-game events beforehand, and simulation-based optimization of these events is crucial to increase player engagement and maximize monetization. We present an experimental analysis of several methods for forecasting game-related variables, with two main aims: to obtain accurate predictions of in-app purchases and playtime in an operational production environment, and to perform simulations of in-game events in order to maximize sales and playtime. Our ultimate purpose is to take a step towards the data-driven development of games. The results suggest that, even though traditional approaches such as ARIMA still perform better, the outcomes of state-of-the-art techniques like deep learning are promising. Deep learning emerges as a well-suited general model that could be used to forecast a variety of time series with different dynamic behaviors.
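    As an illustration of the ARIMA baseline referred to above, here is a minimal sketch using statsmodels on a synthetic daily playtime series; the series, the (2, 1, 1) order, and the seven-day horizon are placeholder assumptions, not the paper's setup.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily playtime (hours) standing in for a real player time series
idx = pd.date_range("2024-01-01", periods=120, freq="D")
rng = np.random.default_rng(0)
y = pd.Series(5.0 + 0.5 * np.sin(np.arange(120) / 7.0) + rng.normal(0, 0.2, 120),
              index=idx)

# Fit a simple ARIMA baseline; in practice the (p, d, q) order would be
# selected by AIC or out-of-sample validation rather than fixed a priori
result = ARIMA(y, order=(2, 1, 1)).fit()
print(result.forecast(steps=7))   # one-week-ahead playtime forecast
```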

    Improved classification for compositional data using the α-transformation

    In compositional data analysis an observation is a vector of non-negative values, only the relative sizes of which are of interest. Without loss of generality, a compositional vector can be taken to be a vector of proportions that sum to one. Data of this type arise in many areas, including geology, archaeology, biology, economics, and political science. In this paper we investigate methods for classifying compositional data. Our approach centres on using the α-transformation to transform the data and then classifying the transformed data via regularised discriminant analysis and the k-nearest neighbours algorithm. The α-transformation generalises two rival approaches in compositional data analysis: one (when α = 1) that treats the data as though they were Euclidean, ignoring the compositional constraint, and another (when α = 0) that employs Aitchison's centred log-ratio transformation. A numerical study with several real datasets shows that whether α = 1 or α = 0 gives better classification performance depends on the dataset, and moreover that an intermediate value of α can sometimes outperform either. Accepted for publication in the Journal of Classification.
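    As a concrete reference, below is a minimal Python sketch of the α-transformation in the form given by Tsagris and co-authors, verifying numerically that it interpolates between the raw (Euclidean-like) treatment at α = 1 and the centred log-ratio at α = 0. The published definition additionally projects with a Helmert sub-matrix to remove the unit-sum constraint; that step, and the downstream classifier call, are omitted or only indicated here.

```python
import numpy as np

def alpha_transform(x, alpha):
    """alpha-transformation of a composition x (positive parts, summing to 1).
    For alpha != 0: z = (D * x**alpha / sum(x**alpha) - 1) / alpha.
    As alpha -> 0 this tends to the centred log-ratio (clr) transform."""
    x = np.asarray(x, dtype=float)
    D = x.size
    if abs(alpha) < 1e-10:
        logx = np.log(x)
        return logx - logx.mean()              # clr limit
    u = x ** alpha / np.sum(x ** alpha)        # power-transformed composition
    return (D * u - 1.0) / alpha

comp = np.array([0.2, 0.5, 0.3])
for a in (1.0, 0.5, 0.0):
    print(a, alpha_transform(comp, a))

# Classification on transformed data would then proceed as usual, e.g.:
#   Z = np.apply_along_axis(alpha_transform, 1, X, 0.5)
#   from sklearn.neighbors import KNeighborsClassifier
#   KNeighborsClassifier(n_neighbors=5).fit(Z, y)
```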

    Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries

    With advanced image journaling tools, one can easily alter the semantic meaning of an image through manipulation techniques such as copy-clone, object splicing, and removal, misleading viewers. Identifying these manipulations is a challenging task because manipulated regions are not visually apparent. This paper proposes a high-confidence manipulation localization architecture that utilizes resampling features, Long Short-Term Memory (LSTM) cells, and an encoder-decoder network to segment manipulated regions from non-manipulated ones. Resampling features are used to capture artifacts such as JPEG quality loss, upsampling, downsampling, rotation, and shearing. The proposed network exploits larger receptive fields (spatial maps) and frequency-domain correlation to analyze the discriminative characteristics between manipulated and non-manipulated regions by combining the encoder and LSTM networks. Finally, the decoder network learns the mapping from low-resolution feature maps to pixel-wise predictions for image tamper localization. With the predicted mask provided by the final (softmax) layer of the architecture, end-to-end training is performed to learn the network parameters through back-propagation using ground-truth masks. Furthermore, a large image-splicing dataset is introduced to guide the training process. The proposed method localizes image manipulations at the pixel level with high precision, as demonstrated through rigorous experimentation on three diverse datasets.
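    A skeletal PyTorch sketch of the hybrid idea follows: a convolutional encoder, an LSTM run over the resulting sequence of patch features to capture correlations between regions, and an upsampling decoder that produces per-pixel two-class logits. All layer sizes are placeholders, resampling-feature extraction is omitted, and this is not the authors' released code.

```python
import torch
import torch.nn as nn

class HybridForgeryNet(nn.Module):
    """Encoder -> patch-sequence LSTM -> decoder with per-pixel class logits."""
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(                 # 3x256x256 -> 64x16x16
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)  # over 16x16 patches
        self.decoder = nn.Sequential(                 # hidden x16x16 -> 2x256x256
            nn.ConvTranspose2d(hidden, 32, 4, stride=4), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=4),   # 2 classes: pristine/tampered
        )

    def forward(self, img):
        f = self.encoder(img)                          # (B, 64, 16, 16)
        B, C, H, W = f.shape
        seq = f.flatten(2).transpose(1, 2)             # (B, H*W, C) patch sequence
        out, _ = self.lstm(seq)                        # correlations across patches
        f = out.transpose(1, 2).reshape(B, -1, H, W)   # back to a spatial map
        return self.decoder(f)                         # per-pixel class logits

# Trained end-to-end with cross-entropy against ground-truth masks
logits = HybridForgeryNet()(torch.randn(1, 3, 256, 256))
mask = logits.softmax(dim=1).argmax(dim=1)             # predicted tamper mask
```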