
    Characterization of Smallholder Beef Cattle Production Systems in Central Vietnam – Revealing Performance, Trends, Constraints, and Future Development

    The objective of this study was to evaluate the characteristics of smallholder beef cattle production in Central Vietnam. A total of 360 households were interviewed using a semi-structured questionnaire, and 606 beef cows were investigated to evaluate calving interval (CI). Thirty-two fattening cattle were monitored to estimate diet structure. Results showed that the cattle herd size was 4.32-4.45 cattle/household. In North Central (NC), 55% of surveyed farmers kept local cattle, 45% kept crossbreeds, and none kept exotic breeds. In South Central (SC), 63% of surveyed farmers kept crossbred cattle, 32% kept local cattle, and 5% kept exotic breeds. Regarding breeding method, in SC 70% of surveyed farmers used artificial insemination (AI), 20% used natural mating (NM), and only 10% used both AI and NM, whereas in NC 40% of farmers used AI, 40% used NM, and 20% used both. A variety of feedstuffs, including roughages and concentrates, were fed to cattle. The concentrate proportion in fattening diets was 25%-35%, the protein level was 11%-13%, and the average daily gain was 0.51-0.63 kg/day. The CI of cows was 12-13 months in SC and 13-14 months in NC. Constraints to cattle production in surveyed households included diseases, lack of good-quality feed sources, breeds, knowledge, and lack of capital. In conclusion, cattle production in Central Vietnam is small-scale and still largely extensive. These constraints must be addressed to improve livestock systems in the near future, especially when shifting towards semi-intensive and/or intensive cattle production systems.

    Modeling Stable Matching Problems with Answer Set Programming

    The Stable Marriage Problem (SMP) is a well-known matching problem first introduced and solved by Gale and Shapley (1962). Several variants and extensions of this problem have since been investigated to cover a wider set of applications. Each time a new variant is considered, however, a new algorithm needs to be developed and implemented. As an alternative, in this paper we propose an encoding of the SMP using Answer Set Programming (ASP). Our encoding can easily be extended and adapted to the needs of specific applications. As an illustration, we show how stable matchings can be found when individuals may designate unacceptable partners and ties between preferences are allowed. Subsequently, we show how our ASP-based encoding naturally allows us to select specific stable matchings which are optimal according to a given criterion. Each time, we can rely on generic and efficient off-the-shelf answer set solvers to find (optimal) stable matchings.
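    As background, the original deferred-acceptance procedure of Gale and Shapley can be sketched in a few lines of Python. This is a minimal illustration of the classical algorithm only, not the ASP encoding proposed in the paper; the names and preference lists are invented for the example:

```python
def gale_shapley(men_prefs, women_prefs):
    """Classical deferred-acceptance algorithm (Gale & Shapley, 1962).

    men_prefs / women_prefs map each person to a preference-ordered list
    of potential partners. Returns a stable matching {man: woman}.
    """
    # rank[w][m]: position of m in w's list (lower index = more preferred)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)                # men without a partner
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}                          # woman -> man

    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]  # m's best not-yet-tried woman
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m                # w was free: accept provisionally
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])       # w trades up; old partner is free
            engaged[w] = m
        else:
            free.append(m)                # w rejects m; he proposes again later
    return {m: w for w, m in engaged.items()}

men = {"adam": ["xena", "yara"], "bob": ["yara", "xena"]}
women = {"xena": ["adam", "bob"], "yara": ["bob", "adam"]}
matching = gale_shapley(men, women)  # here everyone gets their first choice
```

    Each variant the abstract mentions (unacceptable partners, ties, optimality criteria) would require modifying this procedure, which is precisely the per-variant duplication the ASP encoding avoids.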

    Multi-variate analyses of flood loss in Can Tho city, Mekong delta

    Floods in the Mekong delta are recurring events and cause substantial losses to the economy. Sea level rise and increasing precipitation during the wet season result in more frequent floods. For effective flood risk management, reliable loss and risk analyses are necessary. However, knowledge about damaging processes and robust assessments of flood losses in the Mekong delta are scarce. In order to fill this gap, we identify and quantify the effects of the most important variables determining flood losses in Can Tho city through multi-variate statistical analyses. Our analysis is limited to losses of residential buildings and contents. Results reveal that under the specific flooding characteristics of the Mekong delta, with relatively well-adapted households, long inundation durations, and shallow water depths, inundation duration is more important than water depth for the resulting loss. Building and content values, floor space, and building quality are also important loss-determining variables. Human activities such as undertaking precautionary measures likewise influence flood losses. The results are important for improving flood loss modelling and, consequently, flood risk assessments in the Mekong delta.

    Flood loss models and risk analysis for private households in Can Tho City, Vietnam

    Vietnam has a long history of and experience with floods. Flood risk is expected to increase further due to climatic, land use and other global changes. Can Tho City, the cultural and economic center of the Mekong delta in Vietnam, is at high risk of flooding. To improve flood risk analyses for Vietnam, this study presents novel multi-variable flood loss models for residential buildings and contents and demonstrates their application in a flood risk assessment for the inner city of Can Tho. Cross-validation reveals that decision tree based loss models using the three input variables water depth, flood duration and floor space of building are more appropriate for estimating building and contents loss than depth-damage functions. The flood risk assessment reveals a median expected annual flood damage to private households of US$3.34 million for the inner city of Can Tho. This is approximately 2.5% of the total annual income of households in the study area. To reduce damage, improved flood risk management based on reliable damage and risk analyses is required for the Mekong Delta.
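    An expected annual damage figure such as the one reported above is typically obtained by integrating event losses over their annual exceedance probabilities. A minimal sketch of that standard calculation in Python, using made-up probabilities and losses rather than the Can Tho study data:

```python
def expected_annual_damage(exceedance_probs, damages):
    """Integrate the loss-probability curve with the trapezoidal rule.

    exceedance_probs: annual exceedance probabilities, descending order.
    damages: estimated loss for the event at each probability.
    """
    ead = 0.0
    for i in range(len(exceedance_probs) - 1):
        width = exceedance_probs[i] - exceedance_probs[i + 1]
        ead += width * (damages[i] + damages[i + 1]) / 2.0
    return ead

# Hypothetical loss estimates (US$) for 2-, 5-, 10- and 50-year floods;
# these numbers are illustrative, not results from the study.
probs = [0.5, 0.2, 0.1, 0.02]
losses = [0.0, 1.0e6, 2.5e6, 6.0e6]
print(round(expected_annual_damage(probs, losses)))  # 665000
```

    In practice the loss at each probability would itself come from a loss model (such as the decision-tree models above) applied to an inundation scenario.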

    Linear Query Approximation Algorithms for Non-monotone Submodular Maximization under Knapsack Constraint

    This work, for the first time, introduces two constant-factor approximation algorithms with linear query complexity for non-monotone submodular maximization over a ground set of size $n$ subject to a knapsack constraint, $\mathsf{DLA}$ and $\mathsf{RLA}$. $\mathsf{DLA}$ is a deterministic algorithm that provides an approximation factor of $6+\epsilon$, while $\mathsf{RLA}$ is a randomized algorithm with an approximation factor of $4+\epsilon$. Both run in $O(n \log(1/\epsilon)/\epsilon)$ query complexity. The key idea for obtaining a constant approximation ratio with linear queries lies in: (1) dividing the ground set into two appropriate subsets to find a near-optimal solution over these subsets with linear queries, and (2) combining a threshold greedy with properties of two disjoint sets, or a random selection process, to improve solution quality. In addition to the theoretical analysis, we have evaluated our proposed solutions with three applications: Revenue Maximization, Image Summarization, and Maximum Weighted Cut, showing that our algorithms not only return comparable results to state-of-the-art algorithms but also require significantly fewer queries.
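    To give a feel for the threshold-greedy ingredient, here is a simplified density-threshold greedy for a knapsack-constrained submodular function in Python. It illustrates only the general idea of geometrically lowering a marginal-gain-per-cost threshold; it is not the paper's $\mathsf{DLA}$ or $\mathsf{RLA}$ algorithm, and the coverage function and costs are invented for the example:

```python
def threshold_greedy(ground_set, f, cost, budget, eps=0.1):
    """Add items whose marginal gain per unit cost meets a threshold
    that is lowered geometrically by a factor (1 - eps)."""
    selected, spent = set(), 0.0
    tau = max(f({e}) / cost[e] for e in ground_set)  # best single-item density
    floor = eps * tau / len(ground_set)              # stopping threshold
    while tau >= floor:
        for e in sorted(ground_set - selected):      # sorted for determinism
            gain = f(selected | {e}) - f(selected)
            if spent + cost[e] <= budget and gain / cost[e] >= tau:
                selected.add(e)
                spent += cost[e]
        tau *= 1.0 - eps
    return selected

# Toy coverage function: f(S) = number of distinct elements covered by S.
covers = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
f = lambda S: len(set().union(*(covers[e] for e in S))) if S else 0
result = threshold_greedy(set(covers), f, {"a": 1, "b": 1, "c": 1}, budget=2)
print(sorted(result))  # ['a', 'b']
```

    Each item is examined against a decreasing density threshold, so the number of oracle queries stays linear in the ground set size per threshold level, which is the property the paper's algorithms exploit.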

    Pairing via Index theorem

    This work is motivated by a specific point of view: at short distances and high energies the undoped and underdoped cuprates resemble the $\pi$-flux phase of the t-J model. The purpose of this paper is to present a mechanism by which pairing grows out of the doped $\pi$-flux phase. According to this mechanism, pairing symmetry is determined by a parameter controlling the quantum tunneling of gauge flux quanta. For zero tunneling the symmetry is $d_{x^2-y^2}+id_{xy}$, while for large tunneling it is $d_{x^2-y^2}$. A zero-temperature critical point separates these two limits.

    A Duality Between Unidirectional Charge Density Wave Order and Superconductivity

    This paper shows the existence of a duality between a unidirectional charge density wave order and a superconducting order. This duality predicts the existence of a charge density wave near a superconducting vortex, and the existence of superconductivity near a charge density wave dislocation. Comment: Main results are the same, but the presentation is significantly modified. To appear in Physical Review Letters.

    Entanglement, local measurements, and symmetry

    A definition of entanglement in terms of local measurements is discussed. Namely, maximum entanglement corresponds to the states that cause the highest level of quantum fluctuations in all local measurements determined by the dynamic symmetry group of the system. A number of examples illustrating this definition are considered. Comment: 10 pages. To be published in Journal of Optics.

    Mass-Dependent $\alpha_S$ Evolution and the Light Gluino Existence

    There is an intriguing discrepancy between $\alpha_s(M_Z)$ values measured directly at the CERN $Z_0$-factory and low-energy (at a few GeV) measurements transformed to $Q=M_{Z_0}$ by a massless QCD $\alpha_s(Q)$ evolution relation. There exists an attempt to reconcile this discrepancy by introducing a light gluino in the MSSM. We study in detail the influence of heavy thresholds on $\alpha_s(Q)$ evolution. First, we construct the "exact" explicit solution to the mass-dependent two-loop RG equation for the running $\alpha_s(Q)$. This solution describes heavy thresholds smoothly. Second, we use this solution to recalculate the $\alpha_s(M_Z)$ values corresponding to "low-energy" input data. Our analysis demonstrates that using a mass-dependent RG procedure generally produces corrections of two types: an asymptotic correction due to an effective shift of the threshold position, and a local threshold correction only when the input experiment lies in the close vicinity of a heavy particle threshold, $Q_{expt} \simeq M_h$. Both effects result in an effective shift of the $\alpha_s(M_Z)$ values of the order of $10^{-3}$. However, the second one could be enhanced when the gluino mass is close to a heavy quark mass. In such a case the combined effect could be important for the discussion of the light gluino existence, as it further changes the inferred gluino mass. Comment: 13 pages, LaTeX.

    Casimir-Polder forces: A non-perturbative approach

    Within the framework of macroscopic QED in linear, causal media, we study the radiation force of Casimir-Polder type acting on an atom that is positioned near dispersing and absorbing magnetodielectric bodies and initially prepared in an arbitrary electronic state. It is shown that minimal and multipolar coupling lead to essentially the same lowest-order perturbative result for the force acting on an atom in an energy eigenstate. To go beyond perturbation theory, the calculations are based on the exact center-of-mass equation of motion. For a nondriven atom in the weak-coupling regime, the force as a function of time is a superposition of force components that are related to the electronic density-matrix elements at a chosen time. Even the force component associated with the ground state is not derivable from a potential in the usual way, because of the position dependence of the atomic polarizability. Further, when the atom is initially prepared in a coherent superposition of energy eigenstates, temporally oscillating force components are observed, which are due to the interaction of the atom with both electric and magnetic fields. Comment: 23 pages, 3 figures, additional misprints corrected.