733 research outputs found

    CES-D measurement stability across individualistic and collectivistic cultures

    The purpose of this study was to analyze the cross-cultural and cross-time measurement invariance of the Center for Epidemiological Studies Depression Scale (CES-D). The data, collected in 1994-1995 and 1996 respectively, came from the National Longitudinal Study of Adolescent Health (Add Health). Measurement invariance was analyzed in two steps. First, Exploratory Factor Analysis (EFA) was used to test dimension invariance (factor structure) for Asian American and European American adolescents across the two waves; a three-factor model was established for both groups across both waves. Second, Confirmatory Factor Analysis (CFA) was used to test the factor loading and threshold invariance of the items that were dimension invariant. The results showed that the 13-item, three-factor CES-D was measurement invariant across groups and across time. Nevertheless, a new sample should be collected before the 13-item version is implemented as a depression screening method across groups.
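
    As a rough illustration of the first (EFA) step, the sketch below fits a three-factor model to two groups and compares their loading patterns with Tucker's congruence coefficient. The data, group names, and the use of scikit-learn's FactorAnalysis are assumptions for illustration; the study's actual estimator for ordinal item responses would differ.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        # Hypothetical stand-ins for the two groups' 13-item response matrices.
        rng = np.random.default_rng(0)
        group_a = rng.normal(size=(500, 13))
        group_b = rng.normal(size=(500, 13))

        def loadings(x, n_factors=3):
            """Item-by-factor loading matrix from an EFA fit."""
            return FactorAnalysis(n_components=n_factors).fit(x).components_.T

        def tucker_phi(l1, l2):
            """Factor-wise congruence; values near 1 suggest the same structure.
            (In practice the factors must first be rotated and matched.)"""
            num = (l1 * l2).sum(axis=0)
            den = np.sqrt((l1 ** 2).sum(axis=0) * (l2 ** 2).sum(axis=0))
            return num / den

        print(np.round(tucker_phi(loadings(group_a), loadings(group_b)), 3))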

    Pattern of genomic loci controlling morphological responses to UV-B radiation in maize (Zea mays L.)

    The sessile nature of plants means that they must tolerate rather than escape environmental changes. Studying plant responses to ultraviolet (UV) radiation is therefore important for understanding how plants respond to environmental challenges. Although numerous UV responses have been reported, little is known about the genetics controlling quantitative natural variation in those responses. To address this question, I examined morphological UV responses in maize (Zea mays). First, dose-response and reciprocity experiments were conducted to establish a standard experimental UV dose of six hours per day for four days. Second, a subset of 84 of the 94 mapping lines from the intermated B73 × Mo17 (IBM) recombinant inbred population was planted in a greenhouse in a completely randomized design. Maize UV responses, including the ratio of leaf rolling, plant height, dry weight of the second and third leaves, and dry weight of roots, were compared between "control" and "UV" environments. A composite interval mapping (CIM) analysis detected 12 significant quantitative trait loci (QTL) affecting at least one of the five traits. A total of 8 significant QTL were identified by multitrait composite interval mapping (MCIM); only two QTL were detected by both CIM and MCIM. The allelic sensitivity model was supported most often. Genome-wide QTL mapping is an efficient way to build a more complete understanding of the genetic basis of plant responses to UV irradiation.
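
    As a simplified illustration of QTL detection, the sketch below runs a single-marker regression scan on hypothetical genotype and phenotype arrays; CIM additionally conditions on background markers, so this is a stand-in for the method actually used, not a reimplementation of it.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_lines, n_markers = 84, 200   # hypothetical: 84 RILs, 200 markers
        genotypes = rng.integers(0, 2, size=(n_lines, n_markers))  # 0/1 parental alleles
        trait = rng.normal(size=n_lines)  # placeholder phenotype (UV minus control)

        def marker_scan(genos, y, alpha=0.05):
            """Indices of markers associated with y, using simple regression
            p-values with a Bonferroni correction across markers."""
            hits = []
            for j in range(genos.shape[1]):
                res = stats.linregress(genos[:, j], y)
                if res.pvalue < alpha / genos.shape[1]:
                    hits.append(j)
            return hits

        print("candidate QTL markers:", marker_scan(genotypes, trait))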

    Effect of hemodialysis on intra-abdominal pressure

    OBJECTIVE: To study the effect of hemodialysis on intra-abdominal pressure. METHODS: Five patients admitted between July and November 2003 were evaluated in the intensive care unit of the Nephrology Service of Hospital das Clínicas de São Paulo. Intra-abdominal pressure was measured before and after hemodialysis, with the ventilatory parameters maintained except for PEEP (positive end-expiratory pressure). RESULTS: Hemodialysis significantly reduced intra-abdominal pressure in all five patients. CONCLUSION: Hemodialysis significantly reduced intra-abdominal pressure in these five patients, an effect that could influence other organ systems. The reduction is related to the weight variation before and after hemodialysis, as well as to the volume loss caused by the procedure.
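
    For illustration only, a minimal sketch of a paired before/after comparison of the kind reported here; the pressure values below are invented, not the study's measurements, and the abstract does not state which statistical test was used.

        from scipy import stats

        # Hypothetical intra-abdominal pressures (mmHg); NOT the study's data.
        iap_before = [22.0, 18.0, 25.0, 20.0, 19.0]
        iap_after  = [15.0, 13.0, 17.0, 14.0, 12.0]

        # Paired t-test on the per-patient before/after differences.
        t, p = stats.ttest_rel(iap_before, iap_after)
        print(f"paired t = {t:.2f}, p = {p:.4f}")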

    Evaluating holistic aggregators efficiently for very large datasets

    In data warehousing applications, numerous OLAP queries involve the processing of holistic aggregators such as computing the "top n," median, quantiles, etc. In this paper, we present a novel approach called dynamic bucketing to efficiently evaluate these aggregators. We partition data into equi-width buckets and further partition dense buckets into sub-buckets as needed, by allocating and reclaiming memory space. The bucketing process dynamically adapts to the order and distribution of the input datasets. The histograms of the buckets and sub-buckets are stored in our new data structure called structure trees. A recent selection algorithm based on regular sampling is generalized and its analysis extended. We have also compared our new algorithms with this generalized algorithm and several other recent algorithms. Experimental results show that our new algorithms significantly outperform prior ones not only in runtime but also in accuracy.
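
    A minimal sketch of the bucketing idea applied to approximate quantiles: equi-width counting buckets whose dense members are refined into sub-buckets. The two-level refinement, split threshold, and midpoint interpolation here are illustrative simplifications, not the paper's structure-tree design.

        import random

        class BucketHistogram:
            """Equi-width counting buckets over [lo, hi); a bucket that turns
            dense is refined with a child histogram (values seen before the
            split stay only in the parent count, so answers are approximate)."""

            def __init__(self, lo, hi, n_buckets=16, split_at=1000):
                self.lo, self.width = lo, (hi - lo) / n_buckets
                self.counts = [0] * n_buckets
                self.children = [None] * n_buckets
                self.split_at = split_at
                self.total = 0

            def insert(self, x):
                i = min(int((x - self.lo) / self.width), len(self.counts) - 1)
                self.counts[i] += 1
                self.total += 1
                if self.counts[i] >= self.split_at and self.children[i] is None:
                    b_lo = self.lo + i * self.width      # refine a dense bucket
                    self.children[i] = BucketHistogram(b_lo, b_lo + self.width)
                if self.children[i] is not None:
                    self.children[i].insert(x)

            def quantile(self, q):
                rank = q * self.total
                for i, c in enumerate(self.counts):
                    if rank <= c and c > 0:
                        if self.children[i] is not None:
                            return self.children[i].quantile(rank / c)
                        return self.lo + (i + 0.5) * self.width  # bucket midpoint
                    rank -= c
                return self.lo + len(self.counts) * self.width

        h = BucketHistogram(0.0, 1.0)
        for _ in range(50_000):
            h.insert(random.random())
        print("approximate median:", round(h.quantile(0.5), 3))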

    Handheld Computing and Programming for Mobile Commerce.

    Using Internet-enabled mobile handheld devices to access the World Wide Web is a promising addition to the Web and traditional e-commerce. Mobile handheld devices give mobile users convenient, portable access to the vast amount of information on the Internet from anywhere, at any time. However, mobile commerce has not yet enjoyed the same level of success as e-commerce, because mobile Web content is scarce and mostly awkward to browse. The major reason for these problems is that most software engineers are unfamiliar with handheld devices, let alone programming for them. To help software engineers better understand this subject, this article gives a comprehensive study of handheld computing and programming for mobile commerce. It covers five major topics: (i) mobile commerce systems, (ii) mobile handheld devices, (iii) handheld computing, (iv) server-side handheld computing and programming, and (v) client-side handheld computing and programming. The most popular server-side handheld applications function mostly through mobile Web content, which is constructed using only a few technologies and languages. On the other hand, various environments/languages are available for client-side handheld computing and programming. Five of the most popular are (i) BREW, (ii) J2ME, (iii) Palm OS, (iv) Symbian OS, and (v) Windows Mobile; they use either C/C++ or Java as their programming language. This article explains J2ME, a micro version of Java, and Palm OS programming using C, by giving step-by-step procedures for J2ME and Palm application development.

    Comparisons of subscoring methods in computerized adaptive testing: a simulation study

    Given the increasing demand for subscore reports, various subscoring methods and augmentation techniques have been developed to improve subscore estimates, but few studies have systematically compared these methods within the framework of computerized adaptive testing (CAT). This research conducts a simulation study comparing five subscoring methods on score estimation under varied simulated CAT conditions. Among the five, IND-UCAT scoring ignores the correlations among subtests, whereas the four correlation-based scoring methods (SEQ-CAT, PC-MCAT, reSEQ-CAT, and AUG-CAT) capitalize on correlation information in the scoring procedure. By manipulating subtest lengths, correlation structures, and item selection algorithms, comparable, pragmatic, and systematic testing scenarios are created for comparison purposes. In addition, to make fuller use of the information underlying the assessment, the study proposes a successive scoring procedure based on the structure of the higher-order IRT model, in which an examinee's total test score is calculated after the subscore estimation procedure is conducted; through this procedure, the subscores and the total score of an examinee are sequentially derived from a single test. The results indicate that in the low-correlation structure, the original IND-UCAT is suggested for subscore estimation given its ease of implementation in practice, while the proposed total score estimation procedure is not recommended because of large divergences from the true total scores. For the mixed correlation structure with two moderate correlations and one strong correlation, the original SEQ-CAT or the combination of SEQ-CAT item selection and PC-MCAT scoring should be considered for both subscore estimation and total score estimation. If post-hoc estimation is allowed, the original SEQ-CAT and reSEQ-CAT scoring can be jointly conducted for the best score estimates. In the high-correlation structure, the original PC-MCAT and the combination of PC-MCAT scoring and SEQ-CAT item selection are suggested for both subscore estimation and total score estimation; for post-hoc score estimation, reSEQ-CAT scoring in conjunction with the original SEQ-CAT is strongly recommended, and if implementation complexity is an issue in practice, reSEQ-CAT scoring jointly conducted with the original IND-UCAT yields reasonable score estimates. Additionally, to compensate for the constrained use of item pools in PC-MCAT, PC-MCAT with adaptively sequenced subtests (SEQ-MCAT) is proposed for future investigation. Simplifications of the item and/or subtest selection criteria in a simple-structure MCAT, PC-MCAT, and SEQ-MCAT are also pointed out for the convenience of their application in practice. Finally, the limitations of the study are discussed and directions for future research are provided.
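
    As a rough illustration of the successive scoring idea (deriving a total ability estimate from already-estimated subscores under a higher-order structure), here is a minimal sketch; the loadings, the GLS-style combination rule, and all variable names are assumptions for illustration, not the dissertation's actual estimator.

        import numpy as np

        # Higher-order model: theta_k = lam_k * theta_g + e_k, with
        # Var(e_k) = 1 - lam_k**2 for standardized thetas.
        lam = np.array([0.8, 0.7, 0.9])        # assumed higher-order loadings
        theta_sub = np.array([0.5, 1.1, 0.3])  # estimated subscale thetas (one examinee)

        resid_var = 1.0 - lam ** 2
        # Inverse-variance (GLS-style) combination of the subscale estimates.
        theta_g = (lam / resid_var @ theta_sub) / (lam ** 2 / resid_var).sum()
        print(f"general-ability (total) estimate: {theta_g:.3f}")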

    Comparison of general diagnostic classification model for multiple-choice and dichotomous diagnostic classification model

    A submodel of the general diagnostic classification model for multiple choice (GDCM-MC), the excluding-guessing-from-the-correct-answer (EGCA) model, was first introduced because this submodel, with the extended reparameterized unified model (ERUM) as its kernel, can be compared directly to the dichotomous reduced reparameterized unified model (RRUM) without model-induced bias. A simulation study was used to demonstrate the equivalence of the EGCA parameters of the correct options and the RRUM item parameters, to demonstrate the equivalence of the two models when no skills or misconceptions are measured by the incorrect options, and to show the improvement in EGCA estimation when distractors are created to provide additional information. The results confirmed the equivalence of the EGCA parameters of the correct options and the RRUM item parameters. The results also showed that the correct classification rates (CCRs) and the test-level cognitive diagnostic index (CDI) were the same for the two models when there was no informative distractor. Additionally, with weakly informative distractors the EGCA achieved higher CCRs and CDI than the RRUM, and with strongly informative distractors the EGCA had much higher CCRs and CDI. The studies also showed that CCRs and CDI increased when the sample size, test length, and item quality increased, as well as when the number of skills and misconceptions measured by the test decreased. A real-world example was used to compare the classification differences between the two models, and how well each model's classifications predicted option selection, in a distractor-driven assessment. The results show that the profile classification agreement between the models was 48%, and that classifications based on the EGCA correlated more strongly with students' selection of the correct or misconception-embedded options than classifications based on the RRUM, indicating that the EGCA provides more realistic classification than the RRUM. The results of both the simulation and the real-data studies suggest that polytomous diagnostic classification models (DCMs), rather than dichotomous DCMs, should be used when multiple-choice items have informative distractors.
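
    For reference, a minimal sketch of the dichotomous RRUM item response function that the EGCA is compared against; the parameter values below are invented for illustration.

        import numpy as np

        # RRUM: P(X = 1 | alpha) = pi * prod_k r_k ** (q_k * (1 - alpha_k))
        #   pi  : probability of success when all required skills are mastered
        #   r_k : penalty in (0, 1) for each required-but-unmastered skill
        def rrum_prob(alpha, q, pi, r):
            """Correct-response probability for one examinee on one item."""
            missing = q * (1 - alpha)        # required skills the examinee lacks
            return pi * np.prod(np.power(r, missing))

        q    = np.array([1, 1, 0])           # item requires skills 1 and 2
        pi_j = 0.9
        r_j  = np.array([0.4, 0.5, 1.0])     # r for an unrequired skill is irrelevant
        for alpha in ([1, 1, 0], [1, 0, 0], [0, 0, 0]):
            p = rrum_prob(np.array(alpha), q, pi_j, r_j)
            print(alpha, f"P(correct) = {p:.3f}")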

    CubiST++: Evaluating Ad-Hoc CUBE Queries Using Statistics Trees

    We report on a new, efficient encoding for the data cube, which results in a drastic speed-up of OLAP queries that aggregate along any combination of dimensions over numerical and categorical attributes. We focus on a class of queries called cube queries, which return aggregated values rather than sets of tuples. Our approach, termed CubiST++ (Cubing with Statistics Trees Plus Families), represents a drastic departure from existing relational (ROLAP) and multi-dimensional (MOLAP) approaches in that it does not use the view lattice to compute and materialize new views from existing views in some heuristic fashion. Instead, CubiST++ encodes all possible aggregate views in the leaves of a new data structure called a statistics tree (ST) during a one-time scan of the detailed data. To optimize queries involving constraints on hierarchy levels of the underlying dimensions, we select and materialize a family of candidate trees, which represent superviews over the different hierarchical levels of the dimensions. Given a query, our query evaluation algorithm selects the smallest tree in the family that can provide the answer. Extensive evaluations of our prototype implementation have demonstrated its superior run-time performance and scalability when compared with existing MOLAP and ROLAP systems.
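
    A highly simplified sketch of the statistics-tree idea: each level corresponds to a dimension, and every node fans out one child per dimension value plus a star child that aggregates over that dimension, so a single scan populates all group-bys and a cube query becomes one root-to-leaf walk. The dictionary representation and count-only measure are illustrative assumptions, not the paper's encoding.

        # Toy statistics tree (ST) over records of discrete dimension values.
        STAR = "*"

        def st_insert(node, record, n_dims, depth=0):
            """Insert a record, updating both the value path and the star path."""
            if depth == n_dims:
                node["count"] = node.get("count", 0) + 1
                return
            for key in (record[depth], STAR):
                child = node.setdefault(key, {})
                st_insert(child, record, n_dims, depth + 1)

        def st_query(node, constraint, depth=0):
            """constraint[d] is a concrete value or STAR; returns the count."""
            if depth == len(constraint):
                return node.get("count", 0)
            child = node.get(constraint[depth])
            return 0 if child is None else st_query(child, constraint, depth + 1)

        records = [("red", "small"), ("red", "large"), ("blue", "small")]
        root = {}
        for rec in records:                    # the one-time scan
            st_insert(root, rec, n_dims=2)

        print(st_query(root, ("red", STAR)))   # -> 2 (all sizes, color = red)
        print(st_query(root, (STAR, STAR)))    # -> 3 (grand total)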

    Efficient Evaluation of Sparse Data Cubes

    Computing data cubes requires the aggregation of measures over arbitrary combinations of dimensions in a data set. Efficient data cube evaluation remains challenging because of the potentially very large sizes of input datasets (e.g., in the data warehousing context), the well-known curse of dimensionality, and the complexity of the queries that must be supported. This paper proposes a new dynamic data structure called SST (Sparse Statistics Tree) and a novel, interactive, fast cube evaluation algorithm called CUPS (Cubing by Pruning SST), which is especially well suited to computing aggregates in cubes whose data sets are sparse. SST stores only the aggregations of non-empty cube cells instead of the detailed records. Furthermore, it retains in memory the dense cubes (a.k.a. iceberg cubes) whose aggregate values are above a threshold, while sparse cubes are stored on disk. This allows fast, accurate approximations for queries; if users desire more refined answers, the related sparse cubes are aggregated. SST is incrementally maintainable, which makes CUPS suitable for data warehousing and for the analysis of streaming data. Experimental results demonstrate the excellent performance and good scalability of our approach.
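
    A minimal sketch of the sparse idea: materialize only the non-empty (value-or-star) cells and keep an in-memory iceberg part above a count threshold, spilling the rest to disk. The dictionary representation and threshold rule are illustrative assumptions, not SST's actual design.

        from collections import Counter
        from itertools import product

        STAR = "*"

        def build_sparse_cube(records):
            """Count only the cells that actually receive data."""
            cells = Counter()
            for rec in records:
                # Each record contributes to every generalization of itself.
                for cell in product(*((v, STAR) for v in rec)):
                    cells[cell] += 1
            return cells

        records = [("red", "small"), ("red", "large"), ("blue", "small")]
        cells = build_sparse_cube(records)

        threshold = 2
        iceberg  = {c: n for c, n in cells.items() if n >= threshold}  # keep in memory
        overflow = {c: n for c, n in cells.items() if n < threshold}   # spill to disk

        print(iceberg.get(("red", STAR)))       # -> 2, answered from memory
        print(overflow.get(("blue", "small")))  # -> 1, would need a disk read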

    CubiST: A New Algorithm for Improving the Performance of Ad-hoc OLAP Queries

    Being able to efficiently answer arbitrary OLAP queries that aggregate along any combination of dimensions over numerical and categorical attributes has been a continuing, major concern in data warehousing. In this paper, we introduce a new data structure, called the Statistics Tree (ST), together with an efficient algorithm called CubiST, for evaluating ad-hoc OLAP queries on top of a relational data warehouse. We focus on a class of queries called cube queries, which generalize the data cube operator. CubiST represents a drastic departure from existing relational (ROLAP) and multi-dimensional (MOLAP) approaches in that it does not use the familiar view lattice to compute and materialize new views from existing views in some heuristic fashion. CubiST is the first OLAP algorithm that needs only one scan over the detailed data set and can efficiently answer any cube query without additional I/O when the ST fits into memory. We have implemented CubiST, and our experiments have demonstrated significant improvements in performance and scalability over existing ROLAP/MOLAP approaches.