Viterbi Training for PCFGs: Hardness Results and Competitiveness of Uniform Initialization
We consider the search for a maximum likelihood assignment of hidden derivations and grammar weights for a probabilistic context-free grammar, the problem approximately solved by "Viterbi training." We show that solving and even approximating Viterbi training for PCFGs is NP-hard. We motivate the use of uniform-at-random initialization for Viterbi EM as an optimal initializer in the absence of further information about the correct model parameters, providing an approximate bound on the log-likelihood.
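To make the objective concrete, here is a hedged sketch of the Viterbi training problem as described above; the notation (sentences x_i, derivations d_i, rule probabilities θ) is ours rather than the paper's.

```latex
% Viterbi (hard-EM) training: jointly choose grammar weights \theta and one
% derivation d_i per sentence x_i so as to maximize the joint likelihood.
\max_{\theta} \; \max_{d_1, \ldots, d_n} \; \prod_{i=1}^{n} p_\theta(x_i, d_i)
\;=\; \max_{\theta} \; \prod_{i=1}^{n} \max_{d_i} \, p_\theta(x_i, d_i)
```

Viterbi EM approximates this by alternating between selecting the highest-scoring derivation for each sentence under the current weights and re-estimating the weights from those derivations; the hardness result concerns exact (and even approximate) solution of this joint maximization.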
Empirical Risk Minimization for Probabilistic Grammars: Sample Complexity and Hardness of Learning
Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. They are used ubiquitously in computational linguistics. We present a framework, reminiscent of structural risk minimization, for empirical risk minimization of probabilistic grammars using the log-loss. We derive sample complexity bounds in this framework that apply to both the supervised setting and the unsupervised setting. By making assumptions about the underlying distribution that are appropriate for natural language scenarios, we are able to derive distribution-dependent sample complexity bounds for probabilistic grammars. We also give simple algorithms for carrying out empirical risk minimization using this framework in both the supervised and unsupervised settings. In the unsupervised case, we show that the problem of minimizing empirical risk is NP-hard. We therefore suggest an approximate algorithm, similar to expectation-maximization, to minimize the empirical risk. Learning from data is central to contemporary computational linguistics. It is common in such learning to estimate a model in a parametric family using the maximum likelihood principle. This principle applies in the supervised case (i.e., using annotated data).
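As background for the framework described above, a hedged sketch of the log-loss empirical risks in the two settings; the notation is ours, not the paper's.

```latex
% Supervised empirical risk: sentences x_i are paired with derivations z_i.
\widehat{R}_{\mathrm{sup}}(\theta) \;=\; \frac{1}{n} \sum_{i=1}^{n} -\log p_\theta(x_i, z_i)

% Unsupervised empirical risk: derivations are hidden and marginalized out.
\widehat{R}_{\mathrm{unsup}}(\theta) \;=\; \frac{1}{n} \sum_{i=1}^{n} -\log \sum_{z} p_\theta(x_i, z)
```

Empirical risk minimization selects grammar parameters minimizing these quantities over the permitted parameter set; it is the unsupervised objective whose minimization is shown to be NP-hard, motivating the EM-like approximate algorithm mentioned above.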
Detached from their homeland: the Latter-day Saints of Chihuahua, Mexico
Over the past few decades, the homeland concept has received an ever-increasing amount of attention by cultural geographers. While the debate surrounding the necessity and applicability of the concept continues, it is more than apparent that no other geographic term (including culture areas or culture regions) captures the essence of peoples' attachment to place better than homeland. The literature, however, provides few examples of the deep-seated loyalty people have for a homeland despite being physically detached from that space. Employing land use mapping and informal interviews, this paper seeks to help fill that gap by exemplifying how the daily lives of Mormons living in Chihuahua, Mexico, reflect their connection to the United States and the Mormon Homeland. Our research revealed that, among other things, the Anglo residents perpetuate their cultural identity through their unique self-reference, exhibit territoriality links reflected in their built environment, and demonstrate unconditional bonding to their homeland through certain holiday celebrations. It is clear to us, as the Anglo-Mormon experience illustrates, that the homeland concept deserves a place within the geographic lexicon.
A review of the Yorkshire and Humber regional waste strategy
Managing waste has become a primary issue for regional planners. This article reports on the institutional process underpinning the region's strategy and the stages in its production. It emphasises that there has been a watering down of the target for household waste production without appropriate explanation.
Empirical Risk Minimization with Approximations of Probabilistic Grammars
Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. We present a framework, reminiscent of structural risk minimization, for empirical risk minimization of the parameters of a fixed probabilistic grammar using the log-loss. We derive sample complexity bounds in this framework that apply both to the supervised setting and the unsupervised setting.
Joint Morphological and Syntactic Disambiguation
In morphologically rich languages, should morphological and syntactic disambiguation be treated sequentially or as a single problem? We describe several efficient, probabilistically interpretable ways to apply joint inference to morphological and syntactic disambiguation using lattice parsing. Joint inference is shown to compare favorably to pipeline parsing methods across a variety of component models. State-of-the-art performance on Hebrew Treebank parsing is demonstrated using the new method. The benefits of joint inference are modest with the current component models, but appear to increase as components themselves improve.
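As a concrete illustration of the lattice-parsing idea (not the authors' component models), here is a minimal sketch of CKY over a morphological lattice with a toy weighted grammar; because every candidate segmentation enters the same chart, the segmentation and the parse are chosen jointly.

```python
# Hypothetical illustration only: CKY over a morphological lattice with a toy
# weighted grammar in Chomsky normal form. The grammar, lattice, and scores are
# invented for this example.
from collections import defaultdict
from math import log, inf

LEXICAL = {                     # tag -> {lattice-edge analysis: log-prob}
    "DT": {"ha": log(1.0)},
    "NN": {"bayit": log(0.6), "habayit": log(0.4)},
}
BINARY = {                      # parent -> [(left child, right child, log-prob)]
    "NP": [("DT", "NN", log(1.0))],
}
START = "NP"


def lattice_cky(edges, n_nodes, start=START):
    """Best log-score of a parse spanning the lattice from node 0 to node n_nodes - 1.

    `edges` holds (i, j, analysis) triples over topologically ordered nodes,
    one triple per candidate morphological analysis.
    """
    chart = defaultdict(lambda: defaultdict(lambda: -inf))  # chart[(i, j)][label]

    # Lexical step: all candidate analyses compete inside the same chart, so the
    # morphological and syntactic choices are disambiguated jointly.
    for i, j, analysis in edges:
        for tag, rewrites in LEXICAL.items():
            if analysis in rewrites:
                chart[(i, j)][tag] = max(chart[(i, j)][tag], rewrites[analysis])

    # Binary step: combine adjacent lattice spans exactly as in string CKY.
    for width in range(2, n_nodes):
        for i in range(n_nodes - width):
            j = i + width
            for k in range(i + 1, j):
                for parent, rules in BINARY.items():
                    for left, right, weight in rules:
                        score = chart[(i, k)][left] + chart[(k, j)][right] + weight
                        if score > chart[(i, j)][parent]:
                            chart[(i, j)][parent] = score

    return chart[(0, n_nodes - 1)][start]


# "habayit" may stay unsegmented or split into "ha" + "bayit"; the chart scores both paths.
edges = [(0, 2, "habayit"), (0, 1, "ha"), (1, 2, "bayit")]
print(lattice_cky(edges, n_nodes=3))  # the best NP here uses the segmented path
```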
The ethics of sociocultural risk research
In socio-cultural risk research, an epistemological tension often follows if real hazards in the world are juxtaposed against the essentially socially constructed nature of all risk. In this editorial, we consider how this paradox is manifest at a practical level in a number of ethical dilemmas for the risk researcher. (1) In terms of strategies for seeking informed consent, and for addressing the power inequalities involved in interpretative and analytical work, researchers can find themselves pushing at the boundaries of standard understandings of ethical practices and ways of engaging informants in their studies. (2) Impact on participants is another key area of concern, since the subject matter on which data are collected in risk research may be a source of uncertainty, anxiety or unwanted self-knowledge. (3) Risk researchers also face the possibility of institutional repercussions from raising risk issues with people who usually normalize the risks, thereby stimulating distrust in the institutions or organizations with formal responsibilities for risk management. There are no simple formulae to guide the researcher in dealing with such ethical issues and paradoxes. It is important, though, to recognize their specificity in risk studies, including the ambiguous status of questions about vulnerability, since judgements about 'who is vulnerable' and 'in what ways' are themselves influenced by the situational framings and understandings of participants and researchers.
Generalized enthalpy model of a high pressure shift freezing process
High-pressure freezing processes are a novel emerging technology in food processing, offering significant improvements to the quality of frozen foods. To be able to simulate plateau times and thermal history under different conditions, in this work we present a generalized enthalpy model of the high-pressure shift freezing process. The model includes the effects of pressure on conservation of enthalpy and incorporates the freezing point depression of non-dilute food samples. In addition, the significant heat transfer effects of convection in the pressurizing medium are accounted for by solving the two-dimensional Navier-Stokes equations. We run the model for several numerical tests where the food sample is agar gel, and find good agreement with experimental data from the literature.
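To indicate the type of formulation involved (a generic sketch in our own notation, not the paper's exact equations), an enthalpy model with convective transport in the pressurizing medium might be written as:

```latex
% Generic enthalpy formulation (sketch): pressure P enters through the
% enthalpy-temperature relationship and the freezing point.
\rho \frac{\partial H}{\partial t} + \rho\, \mathbf{u} \cdot \nabla H
    \;=\; \nabla \cdot \left( k \, \nabla T \right),
\qquad H = H(T, P), \qquad T_f = T_f(P)
```

where H is the specific enthalpy including latent heat, u is the velocity field of the pressurizing medium obtained from the two-dimensional Navier-Stokes equations (the convective term is active only in the fluid), and T_f(P) is the pressure-dependent freezing point capturing freezing point depression.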
Trends in long-term prescribing of dependence forming medicines
Using patient-level primary care data to estimate the extent to which antidepressant medicines are prescribed to people continuously for long periods of time.
Aim
This descriptive research used patient-level primary care data to estimate the extent to which antidepressant medicines are prescribed to people continuously for long periods of time. The study also drew on survey data and data on the number of prescriptions dispensed.
Findings
- The number of antidepressant prescriptions dispensed each year in England doubled between 2008 and 2018.
- Survey data show that the proportion of adults reporting use of antidepressants in the past year increased in the 1990s, and again between 2007 and 2014.
- The average length of time for which antidepressants are continuously prescribed has increased over time.
- Some types of antidepressants (for example, tricyclics and "other" antidepressants) tend to be prescribed for longer periods than other types (such as SSRIs).
- In 2014, one in twelve prescribing periods for tricyclics and "other" antidepressants lasted for three years or more.
Methods
The analyses in this report are descriptive and show the overall prevalence of long-term prescribing in each year.
We used a sample of around 50,000 patients prescribed at least one antidepressant medicine between 2000 and 2017. This was drawn from the Clinical Practice Research Datalink (CPRD). The CPRD contains data about prescriptions issued by GPs (including the length and size of each prescription) and characteristics of the patients prescribed to (such as their age, sex, and the area where they live). Medicines were grouped for analysis into tricyclics, selective serotonin reuptake inhibitors (SSRIs), and other antidepressant medicines (ADMs). The lengths of individual prescriptions and of continuous prescribing periods were derived using information on consultation dates, the quantity of tablets prescribed, and the numeric daily dose.
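As an illustration of the kind of derivation the Methods paragraph describes (not the report's actual code), a short pandas sketch that computes prescription lengths from quantity and daily dose and merges prescriptions into continuous prescribing periods; the file name, column names, and 30-day gap tolerance are assumptions.

```python
import pandas as pd

# Hypothetical input: one row per prescription with columns patient_id, drug_class,
# issue_date, quantity (tablets), and daily_dose (tablets per day).
rx = pd.read_csv("prescriptions.csv", parse_dates=["issue_date"])
rx["length_days"] = rx["quantity"] / rx["daily_dose"]          # prescription length in days
rx["end_date"] = rx["issue_date"] + pd.to_timedelta(rx["length_days"], unit="D")

MAX_GAP_DAYS = 30  # assumed maximum gap still counted as continuous prescribing


def continuous_periods(group: pd.DataFrame) -> pd.DataFrame:
    """Merge one patient's prescriptions for one drug class into continuous periods."""
    group = group.sort_values("issue_date")
    # A new period starts when the gap since the latest end date so far exceeds the tolerance.
    latest_end = group["end_date"].cummax().shift()
    new_period = (group["issue_date"] - latest_end) > pd.Timedelta(days=MAX_GAP_DAYS)
    return group.groupby(new_period.cumsum()).agg(
        period_start=("issue_date", "min"),
        period_end=("end_date", "max"),
    )


periods = (
    rx.groupby(["patient_id", "drug_class"], group_keys=True)
      .apply(continuous_periods)
)
periods["period_length_days"] = (periods["period_end"] - periods["period_start"]).dt.days
```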
- ā¦