
    Domestic Support Policies for Agriculture in Ecuador and the U.S.-Andean Countries Free Trade Agreement: An Applied General Equilibrium Assessment

    For the past two years the United States and Colombia, Peru, and Ecuador have been negotiating a Free Trade Agreement (FTA). One of the main concerns of Ecuador's farmers is the asymmetry between the U.S. and Ecuadorian agricultural sectors. U.S. agriculture is highly subsidized in products such as rice, corn, and soybeans, which are important export and subsistence crops for Ecuadorian farmers. To reduce any negative effect the FTA may have, Ecuador's government is studying land-based payments for rice, corn, soybean, and livestock producers. This program would offer direct initial support to farmers' incomes after the FTA enters into full effect. The objectives of this paper are twofold: first, to estimate the effects of the FTA on the Ecuadorian economy, and especially on Ecuador's agriculture; and second, to study the viability of the domestic support program for agriculture proposed by the Ecuadorian government, as well as some alternative domestic support policies. We use a modified version of the GTAP global general equilibrium model tailored to agricultural support, called GTAP-AGR. The results show that trade liberalization will negatively affect all agricultural sectors in Ecuador except the exporting sectors (bananas, coffee, cocoa, and flowers). Government subsidies are estimated to disproportionately help rice and soybean producers, but they will not be enough for corn and livestock producers. We conclude that government subsidies should be extended to other sectors such as sugar cane and cotton.

    Report of MIRACLE team for Geographical IR in CLEF 2006

    The main objective of the designed experiments is to test the effects of geographical information retrieval from documents that contain geographical tags. In these experiments we try to isolate geographical retrieval from textual retrieval by replacing all geo-entity textual references in the topics with associated tags and splitting the retrieval process into two phases: textual retrieval from the textual part of the topic without geo-entity references, and geographical retrieval from the tagged text generated by the topic tagger. Textual and geographical results are combined by applying different techniques: union, intersection, difference, and external-join-based combination. Our geographic information retrieval system consists of a set of basic components organized in two categories: (i) linguistic tools oriented to textual analysis and retrieval, and (ii) resources and tools oriented to geographical analysis. These tools are combined to carry out the different phases of the system: (i) document and topic analysis, (ii) relevant document retrieval, and (iii) result combination. If we compare the results achieved with the last campaign's results, we can assert that mean average precision gets worse when the textual geo-entity references are replaced with geographical tags. Part of this worsening is due to the fact that our experiments return zero relevant documents if no documents satisfy the geographical sub-query. But if we only analyze the results of queries that satisfied both textual and geographical terms, we observe that the designed experiments retrieve relevant documents quickly, improving R-Precision values. We conclude that the developed geographical information retrieval system is very sensitive to textual georeferences, and that it is therefore necessary to improve the named entity recognition module.
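    The result-combination step described above can be sketched in a few lines. This is an illustrative reconstruction, not the MIRACLE system's code: the function name and document IDs are made up, and the "external join" variant is omitted.

```python
# Hypothetical sketch of combining the two retrieval phases: textual and
# geographical retrieval each return a ranked list of document IDs, which
# are then merged with set-style operators (union, intersection, difference).

def combine(textual, geographical, mode="intersection"):
    """Merge two ranked result lists of document IDs, keeping textual order."""
    geo = set(geographical)
    txt = set(textual)
    if mode == "union":
        # textual ranking first, then append geo-only hits
        return textual + [d for d in geographical if d not in txt]
    if mode == "intersection":
        # only documents that satisfy both sub-queries
        return [d for d in textual if d in geo]
    if mode == "difference":
        # textual hits that fail the geographical sub-query
        return [d for d in textual if d not in geo]
    raise ValueError(f"unknown mode: {mode}")

textual = ["d1", "d2", "d3", "d4"]
geographical = ["d3", "d2", "d9"]
print(combine(textual, geographical, "intersection"))  # ['d2', 'd3']
```

Note that intersection is exactly the variant that returns zero documents when nothing satisfies the geographical sub-query, which matches the worsening of mean average precision reported above.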

    Searching for nothing: placing zero on the temporal continuum

    Generalization allows responses acquired in one situation to be transferred to similar situations. For temporal stimuli, a discontinuity has been found between zero and non-zero durations: responses in trials with no (or 0-s) stimuli and in trials with very short stimuli differ more than would be expected by generalization. This discontinuity may happen because 0-s durations do not belong to the same continuum as non-zero durations. Alternatively, the discontinuity may be due to generalization decrement effects: a 0-s stimulus differs from a short stimulus not only in duration but also in its presence, thus leading to greater differences in performance. Aiming to reduce differences between trials with and without a stimulus, we used two procedures to test whether a potential reduction in generalization decrement would bring performance following zero and non-zero durations closer. In both procedures, there was a reduction in the discontinuity between 0-s and short durations, supporting the hypothesis that 0-s durations are integrated in the subjective temporal continuum. Open access funding provided by FCT | FCCN (b-on). The present work was conducted at the Psychology Research Centre (PSI/01662), School of Psychology, University of Minho, supported by the Foundation for Science and Technology (FCT) through the Portuguese State Budget (Ref.: UIDB/PSI/01662/2020).

    The three stage assembly permutation flowshop scheduling problem

    [ENG] The assembly flowshop scheduling problem has been studied recently due to its applicability to real-life scheduling problems. It arises when various fabrication operations are performed concurrently in one stage. It was first introduced by Lee et al. (1993) in a flowshop environment. Later, Potts et al. (1995) considered the two-stage assembly flowshop problem with m concurrent operations in the first stage and an assembly operation in the second stage under the makespan objective; they showed that this problem is NP-hard in the strong sense even when the number of machines in the first stage is equal to two. Allahverdi et al. (2007) and Al-Anzi et al. (2009) considered two bicriteria two-stage assembly flowshop scheduling problems and proposed some metaheuristics. Previously, Al-Anzi et al. (2007) had considered the two-stage assembly flowshop scheduling problem with setup times separate from processing times, minimizing maximum lateness as the objective function. Koulamas et al. (2007) extended the two-stage assembly flowshop to the three-stage assembly flowshop scheduling problem with the objective of minimizing the makespan. The first stage performs various fabrication operations concurrently; the second collects and transports the parts to the final assembly stage, where an assembly operation takes place. They analyzed the worst-case ratio bound for several heuristics for this problem, as well as the worst-case absolute bound for a heuristic based on compact vector summation techniques. In this paper we consider the three-stage assembly flowshop problem with sequence-dependent setup times (SDST) on the first and third stages, with the objective of minimizing total completion time. The problem is described in detail in the next section, a mathematical model is proposed and tested in Section 3, and a summary of the work is presented in Section 4.
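    The objective described above can be illustrated by evaluating one job permutation. This is a minimal sketch under assumed conventions, not the paper's model: stage 1 has m concurrent fabrication machines, stage 2 is a single collection/transport machine, stage 3 is a single assembly machine, and setup matrices carry an extra first row for the initial setups.

```python
# Illustrative evaluation of a permutation in a three-stage assembly flowshop
# with sequence-dependent setup times (SDST) on stages 1 and 3.

def total_completion_time(seq, p1, t2, p3, s1, s3):
    """Total completion time of job permutation `seq` over jobs 0..n-1.
    p1[i][j]  - stage-1 processing time of job j on fabrication machine i
    t2[j]     - stage-2 collection/transport time of job j
    p3[j]     - stage-3 assembly time of job j
    s1[i][k][j], s3[k][j] - setup before job j when the previous job is k-1
                            (row k = 0 holds the initial setups)."""
    m = len(p1)
    c1 = [0.0] * m   # finish time of each stage-1 machine
    c2 = 0.0         # finish time of the transport machine
    c3 = 0.0         # finish time of the assembly machine
    prev = 0         # setup-matrix row (0 means "no previous job")
    total = 0.0
    for j in seq:
        for i in range(m):                 # m fabrication operations run concurrently
            c1[i] += s1[i][prev][j] + p1[i][j]
        ready = max(c1)                    # job j is ready once all its parts are done
        c2 = max(ready, c2) + t2[j]        # collect and transport
        c3 = max(c2, c3) + s3[prev][j] + p3[j]   # SDST setup + assembly
        total += c3
        prev = j + 1                       # next setup lookup uses row j+1
    return total

# Two fabrication machines, two jobs, all setups zero for brevity.
p1 = [[2, 1], [1, 3]]
t2 = [1, 1]
p3 = [2, 2]
zero = [[0, 0]] * 3                        # (n+1) x n matrix of zeros
print(total_completion_time([0, 1], p1, t2, p3, [zero, zero], zero))  # 12.0
```

A search procedure (exact or metaheuristic) would call such an evaluator once per candidate permutation, so keeping it O(n·m) matters in practice.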

    MIRACLE at ImageCLEFanot 2007: Machine Learning Experiments on Medical Image Annotation

    This paper describes the participation of the MIRACLE research consortium in the Medical Image Annotation task of ImageCLEF 2007. Our areas of expertise do not include image analysis, so we approach this task as a machine-learning problem, regardless of the domain. FIRE is used as a black-box algorithm to extract different groups of image features that are later used to train different classifiers to predict the IRMA code. Three types of classifiers are built. The first type is a single classifier that predicts the complete IRMA code. The second type is a two-level classifier composed of four classifiers that individually predict each axis of the IRMA code. The third type is similar to the second but predicts combined pairs of axes. The main idea behind the definition of our experiments is to evaluate whether an axis-by-axis prediction is better than a prediction by pairs of axes or of the complete code, or vice versa. We submitted 30 experiments to be evaluated, and the results are disappointing compared to those of other groups. However, the main conclusion that can be drawn from the experiments is that, irrespective of the selected image features, axis-by-axis prediction achieves more accurate results than the prediction of combined pairs of axes and, in turn, than the prediction of the complete IRMA code. In addition, data normalization seems to improve the predictions, and vector-based features are preferred over histogram-based ones.
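    The mechanics of the axis-by-axis strategy can be sketched as follows: four independent per-axis predictions are assembled into a complete code and scored against full-code prediction. All codes and predictions below are made-up examples (not ImageCLEF data), and the helper names are hypothetical.

```python
# Hypothetical illustration of assembling per-axis IRMA predictions
# into full codes and comparing them with a single full-code classifier.

def assemble(axis_preds):
    """axis_preds: four lists (one per IRMA axis) of per-image predictions;
    returns the assembled full codes, one per image."""
    return ["-".join(parts) for parts in zip(*axis_preds)]

def accuracy(pred, true):
    """Fraction of images whose predicted full code matches exactly."""
    return sum(p == t for p, t in zip(pred, true)) / len(true)

true_codes = ["1121-127-700-500", "1121-120-800-700"]
axis_preds = [["1121", "1121"],   # technical axis
              ["127", "120"],     # directional axis
              ["700", "800"],     # anatomical axis
              ["500", "400"]]     # biological axis

print(assemble(axis_preds))                        # ['1121-127-700-500', '1121-120-800-400']
print(accuracy(assemble(axis_preds), true_codes))  # 0.5
```

In the task's actual hierarchical error measure a partially correct assembled code scores better than an exact-match accuracy suggests, which is one reason the per-axis decomposition can pay off.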

    Report of MIRACLE team for the Ad-Hoc track in CLEF 2007

    This paper presents the MIRACLE team's 2007 approach to the Ad-Hoc Information Retrieval track. The work carried out for this campaign was limited to monolingual experiments in the standard and robust tracks. No new approaches were attempted in this campaign; we followed the procedures established in our participation in previous campaigns. Runs were submitted for the following languages and tracks: - Monolingual: Bulgarian, Hungarian, and Czech. - Robust monolingual: French, English, and Portuguese. There is still some room for improvement in multilingual named entity recognition.