
    The constrained median: a way to incorporate side information in the assessment of food samples

    A classical problem in the field of food science concerns the consensus evaluation of food samples. Typically, several panelists are asked to provide scores describing the perceived quality of the samples, and subsequently, the overall (consensus) scores are determined. Unfortunately, gathering a large number of panelists is a challenging and very expensive way of collecting information. Interestingly, side information about the samples is often available. This paper describes a method that exploits such information with the aim of improving the assessment of the quality of multiple samples. The proposed method is illustrated by discussing an experiment on raw Atlantic salmon (Salmo salar), where the evolution of the overall score of each salmon sample is studied. The influence of incorporating knowledge of storage days, results of a clustering analysis, and information from additionally performed sensory evaluation tests is discussed. We provide guidelines for incorporating different types of information and discuss their benefits and potential risks.
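    The abstract does not spell out the algorithm, but the core idea, namely forcing per-sample consensus medians to respect an ordering implied by side information such as storage days, can be sketched with a pool-adjacent-violators pass over the medians. This is a standard isotonic-regression device, not necessarily the paper's construction; all names below are illustrative.

```python
import statistics

def constrained_medians(scores_per_sample, decreasing=True):
    # Consensus median per sample, then a pool-adjacent-violators (PAVA)
    # pass so the result respects a known ordering (e.g. quality should
    # not rise with storage time). Illustrative sketch only.
    meds = [statistics.median(s) for s in scores_per_sample]
    vals = [-m for m in meds] if decreasing else list(meds)
    blocks = []  # (running sum, count) for each pooled block
    for v in vals:
        blocks.append((v, 1))
        # Merge adjacent blocks while the fitted values would decrease.
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s2, c2 = blocks.pop()
            s1, c1 = blocks.pop()
            blocks.append((s1 + s2, c1 + c2))
    fit = [s / c for s, c in blocks for _ in range(c)]
    return [-f for f in fit] if decreasing else fit
```

    For panel scores [[8], [6], [7], [5]] ordered by storage day, the raw medians 8, 6, 7, 5 violate the expected decline, so the middle two samples are pooled to 6.5 each.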

    Directing diarrhoeal disease research towards disease-burden reduction

    Despite gains in controlling mortality relating to diarrhoeal disease, the burden of disease remains unacceptably high. To refocus health research to target disease-burden reduction as the goal of research in child health, the Child Health and Nutrition Research Initiative developed a systematic strategy to rank health research options. This priority-setting exercise included listing of 46 competitive research options in diarrhoeal disease and their critical and quantitative appraisal by 10 experts based on five criteria that reflect the ability of the research to be translated into interventions and to achieve disease-burden reduction. These criteria included the answerability of the research questions; the efficacy and effectiveness of the intervention resulting from the research; the maximal potential for disease-burden reduction of the interventions derived from the research; the affordability, deliverability, and sustainability of the intervention supported by the research; and the overall effect of the research-derived intervention on equity. Experts scored each research option independently to delineate the best investments for diarrhoeal disease control in the developing world to reduce the burden of disease by 2015. Health policy and systems research options obtained eight of the top 10 rankings in overall scores, indicating that current investments in health research differ significantly from those estimated to be the most effective in reducing the global burden of diarrhoeal disease by 2015.

    Combining absolute and relative information in studies on food quality

    A common problem in food science concerns the assessment of the quality of food samples. Typically, a group of panellists is trained exhaustively on how to identify different quality indicators in order to provide absolute information, in the form of scores, for each given food sample. Unfortunately, this training is expensive and time-consuming. For this very reason, it is quite common to search for additional information provided by untrained panellists. However, untrained panellists usually provide relative information, in the form of rankings, for the food samples. In this paper, we discuss how both scores and rankings can be combined in order to improve the quality of the assessment.
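    One naive baseline for fusing the two kinds of information, which is not the method proposed in the paper, is to map each untrained ranking onto evenly spaced pseudo-scores on the trained panel's scale and take a weighted mean. The function name and the weight `w` are invented for illustration.

```python
def combine(scores, rankings, w=0.5):
    # scores: one list of absolute scores per trained panellist
    # rankings: one list per untrained panellist, r[i] = rank of sample i
    #           (1 = best). Naive fusion sketch, not the paper's method.
    n = len(scores[0])
    lo = min(min(s) for s in scores)
    hi = max(max(s) for s in scores)
    abs_part = [sum(col) / len(scores) for col in zip(*scores)]
    rank_part = [0.0] * n
    for r in rankings:
        for i, rank in enumerate(r):
            # Map rank 1..n linearly onto the observed score range.
            rank_part[i] += hi - (rank - 1) * (hi - lo) / (n - 1)
    rank_part = [v / len(rankings) for v in rank_part]
    return [w * a + (1 - w) * b for a, b in zip(abs_part, rank_part)]
```

    When the untrained ranking agrees with the trained scores, the fused result simply reproduces the score ordering; disagreement pulls the consensus toward the middle.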

    Using productivity and susceptibility indices to assess the vulnerability of United States fish stocks to overfishing

    Assessing the vulnerability of stocks to fishing practices in U.S. federal waters was recently highlighted by the National Marine Fisheries Service (NMFS), National Oceanic and Atmospheric Administration, as an important factor to consider when 1) identifying stocks that should be managed and protected under a fishery management plan; 2) grouping data-poor stocks into relevant management complexes; and 3) developing precautionary harvest control rules. To assist the regional fishery management councils in determining vulnerability, NMFS elected to use a modified version of a productivity and susceptibility analysis (PSA) because it can be based on qualitative data, has a history of use in other fisheries, and is recommended by several organizations as a reasonable approach for evaluating risk. A number of productivity and susceptibility attributes for a stock are used in a PSA, and from these attributes, index scores and measures of uncertainty are computed and graphically displayed. To demonstrate the utility of the resulting vulnerability evaluation, we evaluated six U.S. fisheries targeting 162 stocks that exhibited varying degrees of productivity and susceptibility, and for which data quality varied. Overall, the PSA was capable of differentiating the vulnerability of stocks along the gradient of susceptibility and productivity indices, although fixed thresholds separating low-, moderate-, and highly vulnerable species were not observed. The PSA can be used as a flexible tool that can incorporate regional-specific information on fishery and management activity.
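    A PSA-style index score can be sketched as follows: attribute scores (conventionally 1 to 3) are averaged into a productivity index and a susceptibility index, and vulnerability is taken as the Euclidean distance from the lowest-risk corner (high productivity, low susceptibility). The NMFS analysis involves attribute weighting and uncertainty measures not shown here; this is an unweighted illustration, not the agency's exact formula.

```python
import math

def psa_vulnerability(prod_scores, susc_scores):
    # Average 1..3 attribute scores into the two PSA indices, then
    # measure distance from the least-vulnerable corner (p=3, s=1).
    # Unweighted illustration of the general PSA idea.
    p = sum(prod_scores) / len(prod_scores)
    s = sum(susc_scores) / len(susc_scores)
    return math.sqrt((3 - p) ** 2 + (s - 1) ** 2)
```

    A highly productive, barely susceptible stock scores 0; a low-productivity, highly susceptible stock approaches the maximum distance of sqrt(8).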

    The development of a measure of social care outcome for older people. Funded/commissioned by: Department of Health

    An essential element of identifying Best Value and monitoring cost-effective care is to be able to identify the outcomes of care. In the field of health services, use of utility-based health-related quality of life measures has become widespread, indeed even required. If, in the new era of partnerships, social care outcomes are to be valued and included, we need to develop measures that reflect utility or welfare gain from social care interventions. This paper reports on a study, commissioned as part of the Department of Health’s Outcomes of Social Care for Adults Initiative, that developed an instrument and associated utility indexes that provide a tool for evaluating social care interventions in both a research and a service setting. Discrete choice conjoint analysis used to derive utility weights provided us with new insights into the relative importance of the core domains of social care to older people. Whilst discrete choice conjoint analysis is being increasingly used in health economics, this is the first study that has attempted to use it to derive a measure of outcome.

    Measuring the Consistency of Phytosanitary Measures

    The paper presents a model for quantifying quarantine-related phytosanitary measures by combining the two basic components of pest risk assessment, probability of establishment and economic effects, into a single management framework, Iso-Risk. The model provides a systematic and objective basis for defining and measuring acceptable risk and for justifying quarantine actions relative to acceptable risk. This can then be used to measure consistency of phytosanitary measures. The Iso-Risk framework is applied using a database of USDA phytosanitary risk assessments. The results show that the USDA risk assessment system produces assessments that are not consistent across a range of intermediate values for consequence or likelihood of occurrence.
    Keywords: Iso-Risk, phytosanitary risk assessment, pest risk assessment, Agricultural and Food Policy, Crop Production/Industries, Environmental Economics and Policy, Farm Management, Food Consumption/Nutrition/Food Safety, Land Economics/Use

    Extremely cold and hot temperatures increase the risk of ischaemic heart disease mortality: epidemiological evidence from China.

    OBJECTIVE: To examine the effects of extremely cold and hot temperatures on ischaemic heart disease (IHD) mortality in five cities (Beijing, Tianjin, Shanghai, Wuhan and Guangzhou) in China; and to examine the time relationships between cold and hot temperatures and IHD mortality for each city. DESIGN: A negative binomial regression model combined with a distributed lag non-linear model was used to examine city-specific temperature effects on IHD mortality up to 20 lag days. A meta-analysis was used to pool the cold effects and hot effects across the five cities. PATIENTS: 16 559 IHD deaths were monitored by a sentinel surveillance system in five cities during 2004-2008. RESULTS: The relationships between temperature and IHD mortality were non-linear in all five cities. The minimum-mortality temperatures in northern cities were lower than in southern cities. In Beijing, Tianjin and Guangzhou, the effects of extremely cold temperatures were delayed, while Shanghai and Wuhan had immediate cold effects. The effects of extremely hot temperatures appeared immediately in all the cities except Wuhan. Meta-analysis showed that IHD mortality increased 48% at the 1st percentile of temperature (extremely cold temperature) compared with the 10th percentile, while IHD mortality increased 18% at the 99th percentile of temperature (extremely hot temperature) compared with the 90th percentile. CONCLUSIONS: Results indicate that both extremely cold and hot temperatures increase IHD mortality in China. Each city has its own characteristics of heat effects on IHD mortality. Policies for responding to climate change should consider local climate-IHD mortality relationships.
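    The "extreme" temperatures compared in the meta-analysis are simple percentiles of each city's daily temperature series (1st vs 10th for cold, 99th vs 90th for hot). A minimal helper for extracting those thresholds from a series might look like the following; the function names are invented, and the actual study fits these percentiles within a distributed lag non-linear model rather than comparing them directly.

```python
def percentile(xs, q):
    # Linear-interpolation percentile of a list (illustrative helper).
    xs = sorted(xs)
    k = (len(xs) - 1) * q / 100
    f, c = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[f] + (xs[c] - xs[f]) * (k - f)

def extreme_thresholds(daily_temps):
    # Temperatures at the percentiles used in the meta-analysis:
    # cold effect = 1st vs 10th percentile, hot = 99th vs 90th.
    return {q: percentile(daily_temps, q) for q in (1, 10, 90, 99)}
```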

    Visual Landmark Recognition from Internet Photo Collections: A Large-Scale Evaluation

    The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world's landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings, or sculptures.
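    The naming step of the pipeline, determining object names from user-provided tags, can be illustrated with a toy majority vote over a cluster's tags after dropping generic terms. The paper evaluates far more careful approaches; the stoplist below is invented for the example.

```python
from collections import Counter

# Invented stoplist of generic tags; a real system would learn or
# curate this rather than hard-code it.
GENERIC = frozenset({"paris", "france", "travel"})

def cluster_name(tag_lists, generic=GENERIC):
    # tag_lists: one list of user tags per image in the cluster.
    # Majority vote over lowercased tags, ignoring generic terms.
    votes = Counter(t.lower() for tags in tag_lists for t in tags
                    if t.lower() not in generic)
    return votes.most_common(1)[0][0] if votes else None
```

    For a cluster of Louvre photos tagged ["Louvre", "Paris"], ["louvre", "museum"], ["Louvre"], the vote returns "louvre"; real open-set mining must additionally handle clusters whose tags carry no usable name at all.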