Adjusted empirical likelihood with high-order precision
Empirical likelihood is a popular nonparametric or semiparametric statistical method with many attractive statistical properties. Yet when the sample size is small, or the dimension of the accompanying estimating function is high, the application of the empirical likelihood method can be hindered by the low precision of the chi-square approximation and by the nonexistence of solutions to the estimating equations. In this paper, we show that the adjusted empirical likelihood is effective at addressing both problems. With a specific level of adjustment, the adjusted empirical likelihood achieves the high-order precision of the Bartlett correction, in addition to the advantage of a guaranteed solution to the estimating equations. Simulation results indicate that the confidence regions constructed by the adjusted empirical likelihood have coverage probabilities comparable to, or substantially more accurate than, those of the original empirical likelihood enhanced by the Bartlett correction.
Comment: Published at http://dx.doi.org/10.1214/09-AOS750 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
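To make the construction concrete, here is a minimal sketch of an adjusted empirical likelihood ratio for an estimating function, using the standard device of appending one pseudo-observation proportional to the negative sample mean of the estimating function. The adjustment level `a` is left as a tunable parameter (the paper's specific Bartlett-matching level is not reproduced), and the damped Newton solver for the dual problem is a generic, assumed implementation rather than the authors' code.

```python
import numpy as np

def adjusted_el_logratio(g, a=None):
    """Adjusted empirical log-likelihood ratio statistic for E[g(X, theta)] = 0.

    g : (n, p) array of estimating-function values g(X_i, theta).
    a : adjustment level; max(1, log(n)/2) is a commonly cited default
        (the Bartlett-matching level from the paper is not reproduced here).
    Returns -2 * log adjusted-EL ratio, asymptotically chi-square(p).
    """
    n, p = g.shape
    if a is None:
        a = max(1.0, np.log(n) / 2.0)
    gbar = g.mean(axis=0)
    g_adj = np.vstack([g, -a * gbar])   # pseudo-observation guarantees a solution

    # Dual problem: maximize sum_i log(1 + lambda' g_i), a concave problem.
    lam = np.zeros(p)
    for _ in range(100):
        denom = 1.0 + g_adj @ lam
        grad = (g_adj / denom[:, None]).sum(axis=0)
        hess = -(g_adj / denom[:, None] ** 2).T @ g_adj
        step = np.linalg.solve(hess, -grad)
        t = 1.0
        while np.any(1.0 + g_adj @ (lam + t * step) <= 1e-10):
            t *= 0.5                    # backtrack to keep all weights positive
        lam = lam + t * step
        if np.linalg.norm(t * step) < 1e-10:
            break
    return 2.0 * np.sum(np.log(1.0 + g_adj @ lam))

# Toy usage: test H0: mean = 0 with a small sample, where plain EL is weakest.
rng = np.random.default_rng(0)
x = rng.normal(size=(15, 2))            # g(X_i, theta) = X_i - theta at theta = 0
print(adjusted_el_logratio(x))          # compare against a chi-square(2) quantile
```

Because the pseudo-observation pulls the convex hull of the adjusted points over the origin, the statistic is defined for every sample, which is exactly the guaranteed-solution property the abstract refers to.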
The Hydro-Modeling Platform (HydroMP) - Enabling Cloud-Based Environmental Modeling Using Software-As-A-Service (SaaS) Cloud Computing
Hydrological models have become important tools for water resources management. As demands for simulation precision and for the speed of decision support grow, models designed for a single sectoral application are becoming outmoded, and the original mode of running massive batches of schemes sequentially cannot meet real-time requirements, especially as the computational load increases with finer discretization granularity and broader research scope. Water management organizations are increasingly looking for a new generation of tools that allow integration across domains and provide extensible computing resources to assist their decision-making processes. In response to this need, a hydro-modeling platform (HydroMP) based on cloud computing is designed and implemented. It can be deployed on distributed HPC clusters and a central HPC cluster, using a resource balancer to manage load balancing. The platform dynamically integrates multiple models and computing resources (e.g., blade servers) to ensure that the models integrated into the platform obtain extensible computing capacity. A server hosting the HydroMP Web Service and its interfaces is connected to both the HPC cluster and the Internet, constituting the gateway for registered users. Any terminal (e.g., a decision-making system) can reference the HydroMP library and Web Service in its own system. Massive numbers of modeling schemes can be submitted by different users simultaneously, and terminals can retrieve simulation results from HydroMP in real time. Several key approaches and techniques are employed: i) a standard model-component wrapper that communicates with the platform through named pipes has been developed, and OpenMI-compliant model components can be integrated into this wrapper; ii) the API and event-handler interface provided by the HPC server, together with a task scheduler and a calculation management table, are employed to dispatch computing resources while controlling multiple concurrent scheme submissions; iii) an interface array (i.e., SchemesSubmit, StatusInquiry, GetResult) in the Web Service is supplied so that terminals can communicate with the platform; iv) an Oracle database is used to manage the massive volumes of model data, results, and model components. This paper describes the details of the design and implementation and presents a case study of the platform in application.
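As an illustration of the submit-poll-retrieve workflow implied by the SchemesSubmit, StatusInquiry, and GetResult interface array, the following is a hypothetical client sketch. The endpoint URL, payload fields, and status strings are all assumptions, since the abstract does not specify the wire format, and the actual service may be SOAP-based rather than JSON-over-HTTP.

```python
import time
import requests  # generic HTTP client; the real transport is an assumption

BASE = "http://hydromp.example/ws"  # hypothetical endpoint, not from the paper

def run_scheme(scheme):
    """Submit one modeling scheme and block until its results are available.

    Mirrors the SchemesSubmit / StatusInquiry / GetResult interface array
    named in the paper; field names and status values are assumptions.
    """
    r = requests.post(f"{BASE}/SchemesSubmit", json={"scheme": scheme})
    r.raise_for_status()
    scheme_id = r.json()["schemeId"]    # assumed response field

    while True:                         # poll until the task scheduler finishes
        status = requests.get(f"{BASE}/StatusInquiry",
                              params={"schemeId": scheme_id}).json()["status"]
        if status == "Finished":        # assumed terminal status value
            break
        time.sleep(5)

    return requests.get(f"{BASE}/GetResult",
                        params={"schemeId": scheme_id}).json()

if __name__ == "__main__":
    result = run_scheme({"model": "river-1d", "granularity": "fine"})
    print(result)
```

A terminal that embeds this pattern can submit many schemes concurrently and let the platform's scheduler and resource balancer handle dispatch, which is the division of labor the abstract describes.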
A Quadrillion Standard Models from F-theory
We present an explicit construction of globally consistent string compactifications that realize the exact chiral spectrum of the Standard Model of particle physics with gauge coupling unification in the context of F-theory. Utilizing the power of algebraic geometry, all global consistency conditions can be reduced to a single criterion on the base of the underlying elliptically fibered Calabi--Yau fourfolds. For toric bases, this criterion only depends on an associated polytope and is satisfied for at least a quadrillion bases, each of which defines a distinct compactification.
Comment: 7 pages, double column; v3: improved and expanded discussion, technical details deferred to an added appendix
A Knowledge-Driven Approach to Classifying Object and Attribute Coreferences in Opinion Mining
Classifying and resolving coreferences of objects (e.g., product names) and attributes (e.g., product aspects) in opinionated reviews is crucial for improving opinion mining performance. However, the task is challenging, as one often needs to consider domain-specific knowledge (e.g., that iPad is a tablet and that resolution is one of its aspects) to identify coreferences in opinionated reviews. Moreover, compiling a handcrafted and curated domain-specific knowledge base for each domain is very time-consuming and arduous. This paper proposes an approach to automatically mine and leverage domain-specific knowledge for classifying object and attribute coreferences. The approach extracts domain-specific knowledge from unlabeled review data and trains a knowledge-aware neural coreference classification model to leverage (useful) domain knowledge together with general commonsense knowledge for the task. Experimental evaluation on real-world datasets covering five domains (product types) shows the effectiveness of the approach.
Comment: Accepted to Proceedings of EMNLP 2020 (Findings)
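The knowledge-mining step can be illustrated with a small, deliberately simplified sketch: the snippet below scores object-attribute pairs by pointwise mutual information (PMI) over sentence-level co-occurrence in unlabeled reviews. This is a hypothetical stand-in for the paper's extraction pipeline; the actual mining method and the knowledge-aware neural classifier are not reproduced here.

```python
import math
from collections import Counter
from itertools import product

def mine_domain_knowledge(sentences, objects, attributes, min_count=2):
    """Mine simple (object, attribute) associations from unlabeled reviews.

    Scores pairs by PMI over sentence co-occurrence; a positive score
    suggests the attribute is a plausible aspect of the object in this
    domain. A simplified, assumed stand-in for the paper's mining step.
    """
    obj_counts, attr_counts, pair_counts = Counter(), Counter(), Counter()
    for sent in sentences:
        tokens = set(sent.lower().split())
        objs = [o for o in objects if o in tokens]
        attrs = [a for a in attributes if a in tokens]
        obj_counts.update(objs)
        attr_counts.update(attrs)
        pair_counts.update(product(objs, attrs))

    n = len(sentences)
    knowledge = {}
    for (o, a), c in pair_counts.items():
        if c >= min_count:  # drop rare, unreliable pairs
            pmi = math.log((c / n) / ((obj_counts[o] / n) * (attr_counts[a] / n)))
            knowledge[(o, a)] = pmi
    return knowledge

# Toy usage: does "resolution" plausibly belong to "ipad" in this domain?
reviews = [
    "the ipad has a great resolution",
    "ipad resolution is sharp",
    "this ipad battery drains fast",
    "the phone camera is poor",
    "phone battery life is short",
    "love this phone screen",
]
kb = mine_domain_knowledge(reviews, objects={"ipad", "phone"},
                           attributes={"resolution", "battery", "camera"})
print(kb.get(("ipad", "resolution")))  # positive PMI => plausible aspect link
```

Features of this kind, looked up for a candidate mention pair, are one simple way such mined knowledge could be fed to a coreference classifier alongside general commonsense knowledge.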