
    Hermitian Maass lift for General Level

    For an imaginary quadratic field $K$ of discriminant $-D$, let $\chi = \chi_K$ be the associated quadratic character. We show that the space of special hermitian Jacobi forms of level $N$ is isomorphic to the space of plus forms of level $DN$ and nebentypus $\chi$ (the hermitian analogue of Kohnen's plus space) for any integer $N$ prime to $D$. This generalizes the results of Krieg from $N = 1$ to arbitrary level. Combining this isomorphism with the recent work of Berger and Klosin and a modification of Ikeda's construction, we prove the existence of a lift from the space of elliptic modular forms to the space of hermitian modular forms of level $N$, which can be viewed as a generalization of the classical hermitian Maass lift to arbitrary level.
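    Schematically, the main isomorphism can be displayed as follows (a LaTeX sketch; the symbols $J^{\mathrm{special}}$ and $M^{+}$ and the suppressed weight bookkeeping are illustrative notation, not taken verbatim from the abstract):

        % Special hermitian Jacobi forms of level N vs. the plus space of level DN
        % (notation schematic; weight indices suppressed; gcd(N, D) = 1 throughout)
        \[
          J^{\mathrm{special}}(N) \;\cong\; M^{+}(DN, \chi_K),
          \qquad \gcd(N, D) = 1,
        \]
        % with Krieg's original case recovered at N = 1.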

    Distributed Data Summarization in Well-Connected Networks

    We study distributed algorithms for some fundamental problems in data summarization. Given a communication graph $G$ of $n$ nodes, each of which may hold a value initially, we focus on computing $\sum_{i=1}^{N} g(f_i)$, where $f_i$ is the number of occurrences of value $i$ and $g$ is some fixed function. This includes important statistics such as the number of distinct elements, frequency moments, and the empirical entropy of the data. In the CONGEST model, a simple adaptation of streaming lower bounds shows that computing some of these statistics exactly requires $\tilde{\Omega}(D + \sqrt{n})$ rounds, where $D$ is the diameter of the graph. However, these lower bounds do not hold for graphs that are well-connected. We give an algorithm that computes $\sum_{i=1}^{N} g(f_i)$ exactly in $\tau_G \cdot 2^{O(\sqrt{\log n})}$ rounds, where $\tau_G$ is the mixing time of $G$. This also has applications to computing the top-$k$ most frequent elements. We demonstrate a strong similarity between the GOSSIP model and the CONGEST model in well-connected graphs. In particular, we show that each round of the GOSSIP model can be simulated almost perfectly in $\tilde{O}(\tau_G)$ rounds of the CONGEST model. To this end, we develop a new algorithm for the GOSSIP model that $(1 \pm \epsilon)$-approximates the $p$-th frequency moment $F_p = \sum_{i=1}^{N} f_i^p$ in $\tilde{O}(\epsilon^{-2} n^{1-k/p})$ rounds for $p \geq 2$, when the number of distinct elements $F_0$ is at most $O(n^{1/(k-1)})$. This result can be translated back to the CONGEST model with an $\tilde{O}(\tau_G)$ blow-up in the number of rounds.
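    To make the target statistic concrete, here is a minimal centralized Python sketch (not the distributed algorithm itself) showing how different choices of the fixed function $g$ recover the statistics named above; the helper name summarize is ours, introduced for illustration.

        import math
        from collections import Counter

        def summarize(values, g):
            """Return the sum over distinct values i of g(f_i), where f_i is the count of i."""
            counts = Counter(values)
            return sum(g(f) for f in counts.values())

        data = [1, 1, 2, 3, 3, 3]
        m = len(data)

        distinct = summarize(data, lambda f: 1)        # F_0: number of distinct elements
        f2 = summarize(data, lambda f: f ** 2)         # F_2: second frequency moment
        entropy = summarize(data, lambda f: -(f / m) * math.log(f / m))  # empirical entropy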

    COMPUTATION FOR THE DELAMINATION IN THE LAMINATE COMPOSITE MATERIAL USING A COHESIVE ZONE MODEL BY ABAQUS

    In this paper, a damage model using a cohesive damage zone for the simulation of progressive delamination under variable mode loading is presented. The constitutive relations, based on a linear softening law, are used to formulate delamination onset and propagation. The implementation of the cohesive elements is described, along with instructions on how to incorporate the elements into a finite element mesh. The model is implemented in a finite element formulation in ABAQUS. The numerical results given by the model are compared with experimental data.
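    For reference, the following Python sketch implements a generic bilinear (linear-softening) traction-separation law for a single mode; the function name and parameter values are illustrative assumptions, not the paper's calibration.

        def traction(delta, K0=1.0e5, delta0=0.01, delta_f=0.1):
            """Bilinear cohesive law: linear elastic up to delta0, linear softening to delta_f.

            K0      -- initial (penalty) stiffness         (illustrative value)
            delta0  -- separation at damage onset          (illustrative value)
            delta_f -- separation at complete decohesion   (illustrative value)
            """
            if delta <= delta0:
                return K0 * delta              # undamaged, linear-elastic branch
            if delta >= delta_f:
                return 0.0                     # fully delaminated, no load transfer
            # Damage variable d grows from 0 at delta0 to 1 at delta_f
            d = delta_f * (delta - delta0) / (delta * (delta_f - delta0))
            return (1.0 - d) * K0 * delta      # linear softening branch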

    Predictive modeling of human placement decisions in an English Writing Placement Test

    Writing is an important component in standardized tests that are used for admission decisions, class placement, and academic or professional development. Placement results of the EPT Writing Test at the undergraduate level are used to determine whether international students meet English requirements for writing skills (i.e., Pass) and to direct students to appropriate ESL writing classes (i.e., 101B or 101C). Practical constraints in the evaluation process of the English Writing Placement Test (the EPT Writing Test) at Iowa State University, such as rater disagreement, rater turnover, and heavy administrative workload, have demonstrated the need to develop valid scoring models for an automated writing evaluation tool. Statistical algorithms for the scoring engines were essential to predict human raters' quality judgments of future EPT essays. Furthermore, in measuring L2 writing performance, previous research has focused heavily on writer-oriented text features in students' writing, rather than on the reader-oriented linguistic features that influence human raters' quality judgments. To address the practical concerns of the EPT Writing Test and this gap in the literature, the current project aimed at developing a predictive model that best captures human placement decisions in the EPT Writing Test. A two-phase, multistage mixed-methods design was adopted, comprising a model-specification phase and a model-construction phase. In the model-specification phase, results of a Multifaceted Rasch Measurement (MFRM) analysis allowed for the selection of five EPT expert raters representing a range of rating severity levels. Concurrent think-aloud protocols provided by the five participants while evaluating EPT sample essays were analyzed qualitatively to identify the text features to which raters attended. Based on the qualitative findings, 52 evaluative variables and metrics were generated, of which 36 were chosen for analysis in the whole EPT essay corpus. A corpus-based analysis of 297 EPT essays was then conducted to obtain quantitative data on the 36 variables in the model-construction phase. Principal Component Analysis (PCA) helped extract seven principal components (PCs). Results of MANOVA and one-way ANOVA tests revealed 17 original variables and six PCs that significantly differentiated the three EPT placement levels (i.e., 101B, 101C, and Pass). A profile analysis suggested that the lowest level (101B) and the highest level (Pass) have distinct profiles in terms of text features, while test takers placed in 101C classes were characterized as an average group: like 101B students, 101C students appeared to have some linguistic problems, but, like students who passed the test, they demonstrated an ability to develop an essay. In the model-construction phase, random forests (Breiman, 2001) were deployed as a data mining technique to define predictive models of human raters' placement decisions for different task types. Results of the random forests indicated that fragments, part-of-speech-related errors, and PC2 (clear organization but limited paragraph development) were significant predictors of the 101B level, and PC6 (academic word use) of the Pass level.
The generic classifier on the 17 original variables was seemingly the best model: it perfectly predicted the training data set (0% error) and successfully forecast the test set (8% error). Differences in prediction performance between the generic and task-specific models were negligible. Results of this project provided little evidence of the generalizability of the predictive models in classifying new EPT essays. However, within-class examinations showed that the best classifier could recognize the highest- and lowest-level essays, although crossover cases existed at adjacent levels. Implications of the project for placement assessment, pedagogical practices in ESL writing courses, and automated essay scoring (AES) development for the EPT Writing Test are discussed.
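    A minimal sketch of the model-construction step using scikit-learn's random forest implementation (Breiman's algorithm); the feature matrix, labels, and train/test split below are placeholders, not the study's data.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Placeholder stand-in for the 297-essay x 17-feature matrix (not real EPT data)
        rng = np.random.default_rng(0)
        X = rng.normal(size=(297, 17))
        y = rng.choice(["101B", "101C", "Pass"], size=297)   # three placement levels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
        print("training error:", 1 - clf.score(X_tr, y_tr))
        print("test error:   ", 1 - clf.score(X_te, y_te))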

    Water Security in the Mekong River Basin Challenges, Causes and Solutions

    Water, an essential element in sustaining life, is of special importance to society and economic development. Although renewable, water resources are not infinite. Equitable and sustainable water resource management in the context of climate change is a challenge that all Mekong countries have been facing. The challenge grows in the context of water scarcity, as the total amount of water decreases sharply and water quality declines, failing to meet domestic and industrial needs. Based on a study of the current situation of water security in the Mekong River, the author points out the basic challenges that threaten water security in the river basin and discusses the reasons leading to this situation. Building on this analysis of the challenges and their underlying causes, the author proposes a number of solutions to cope with current and upcoming challenges. These solutions, including legal, diplomatic, economic, and political measures, should be implemented in a synchronized and long-term manner.

    Finding Subcube Heavy Hitters in Analytics Data Streams

    Data streams typically have items with a large number of dimensions. We study the fundamental heavy-hitters problem in this setting. Formally, the data stream consists of $d$-dimensional items $x_1, \ldots, x_m \in [n]^d$. A $k$-dimensional subcube $T$ is a subset of $k$ distinct coordinates $\{T_1, \cdots, T_k\} \subseteq [d]$. A subcube heavy hitter query ${\rm Query}(T, v)$, $v \in [n]^k$, outputs YES if $f_T(v) \geq \gamma$ and NO if $f_T(v) < \gamma/4$, where $f_T(v)$ is the fraction of stream items whose coordinates $T$ have joint values $v$. The all subcube heavy hitters query ${\rm AllQuery}(T)$ outputs all joint values $v$ that return YES to ${\rm Query}(T, v)$. The one-dimensional version of this problem, where $d = 1$, has been heavily studied in data stream theory, databases, networking, and signal processing; the subcube heavy hitters problem is applicable in all these settings. We present a simple reservoir-sampling-based one-pass streaming algorithm that solves the subcube heavy hitters problem in $\tilde{O}(kd/\gamma)$ space. This is optimal up to poly-logarithmic factors given the established lower bound. In the worst case, this is $\Theta(d^2/\gamma)$, which is prohibitive for large $d$, and our goal is to circumvent this quadratic bottleneck. Our main contribution is a model-based approach to the subcube heavy hitters problem. In particular, we assume that the dimensions are related to each other via the Naive Bayes model, with or without a latent dimension. Under this assumption, we present a new two-pass, $\tilde{O}(d/\gamma)$-space algorithm for our problem, and a fast algorithm for answering ${\rm AllQuery}(T)$ in $O(k/\gamma^2)$ time. Our work develops the direction of model-based data stream analysis, with much that remains to be explored. Comment: To appear in WWW 201
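    For intuition about the sampling-based result, here is a hedged Python sketch: keep a uniform reservoir sample of the stream and answer Query(T, v) from empirical frequencies in the sample. The sample size and decision threshold below are placeholders; the paper's $\tilde{O}(kd/\gamma)$ space analysis and error guarantees are not reproduced here.

        import random

        def reservoir_sample(stream, s):
            """Maintain a uniform sample of s items over a one-pass stream."""
            sample = []
            for t, item in enumerate(stream):
                if t < s:
                    sample.append(item)
                else:
                    j = random.randrange(t + 1)
                    if j < s:
                        sample[j] = item       # replace with probability s/(t+1)
            return sample

        def query(sample, T, v, gamma):
            """Estimate f_T(v) from the sample; T holds coordinate indices, v the values."""
            hits = sum(1 for x in sample if tuple(x[i] for i in T) == tuple(v))
            # Threshold inside the promise gap [gamma/4, gamma)
            return "YES" if hits / len(sample) >= gamma / 2 else "NO"

        stream = [(1, 2, 3), (1, 2, 4), (1, 5, 3), (1, 2, 3)]
        sample = reservoir_sample(stream, s=3)
        print(query(sample, T=(0, 1), v=(1, 2), gamma=0.5))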