
    Rights Based Fisheries Management in Canada

    The conflict between efficiency and maximization of employment colours all aspects of fisheries management in Canada, including the implementation of rights-based fisheries management regimes. Even though rights-based systems rest strongly on considerations of efficiency, sometimes at the expense of maximizing employment, a number of such regimes have been put in place in recent years. These are generally little known and little analyzed. This paper attempts to address this gap by surveying such schemes. For a number of reasons outlined in the paper, rights-based regimes in Canada have not usually involved transferability or divisibility of quotas. Nonetheless, efficiency gains have been made where such schemes have been implemented. These are illustrated in case studies of the Atlantic offshore groundfish fishery and the Atlantic purse-seine herring fishery.

    Keywords: fisheries management, property rights, quota values, limits on transferability of rights, rationalization of fisheries, Environmental Economics and Policy, Resource/Energy Economics and Policy

    Thermoelectric power in one-dimensional Hubbard model

    The thermoelectric power S is studied within the one-dimensional Hubbard model using linear response theory and the numerical exact-diagonalization method for small systems. While both the diagonal and off-diagonal dynamical correlation functions of the particle and energy currents are singular within the model even at temperature T > 0, S behaves regularly as a function of frequency ω and of T. The dependence on the electron density n below half-filling reveals a change of sign of S at n₀ = 0.73 ± 0.07 due to strong correlations, over the whole T range considered. Approaching half-filling, S is hole-like and can become large for U ≫ t, although it decreases with T.

    Comment: 6 pages, 4 figures
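
    As a concrete illustration of the numerical backbone such a study relies on, here is a minimal sketch (not the authors' code) of exact diagonalization for a small one-dimensional Hubbard chain: build the Hamiltonian in a fixed-filling sector, diagonalize it fully, and take a finite-temperature average over the spectrum. The chain length, filling, interaction strength, and the double-occupancy observable are illustrative assumptions; the dynamical current-current correlations and the Kubo formula behind S itself are not reproduced here.

```python
# Minimal sketch (not the paper's code): exact diagonalization of a small
# 1D Hubbard chain, the numerical backbone of studies like the one above.
# L, t, U, the fillings, and the observable are illustrative choices.
import itertools
import numpy as np

L, t, U = 4, 1.0, 4.0          # small open chain, hedged parameters
n_up, n_dn = 2, 2              # fix particle numbers (one density sector)

def masks(n_sites, n_part):
    """All bitmask configurations of n_part fermions on n_sites sites."""
    out = []
    for occ in itertools.combinations(range(n_sites), n_part):
        m = 0
        for i in occ:
            m |= 1 << i
        out.append(m)
    return out

basis = [(u, d) for u in masks(L, n_up) for d in masks(L, n_dn)]
index = {s: k for k, s in enumerate(basis)}
H = np.zeros((len(basis), len(basis)))

for k, (u, d) in enumerate(basis):
    # On-site interaction: U per doubly occupied site.
    H[k, k] += U * bin(u & d).count("1")
    # Nearest-neighbour hopping; adjacent hops on an open chain carry no
    # extra fermionic sign in this site-ordered convention.
    for i in range(L - 1):
        for spin, m in (("up", u), ("dn", d)):
            for a, b in ((i, i + 1), (i + 1, i)):
                if (m >> a) & 1 and not (m >> b) & 1:
                    m2 = m ^ (1 << a) ^ (1 << b)
                    s2 = (m2, d) if spin == "up" else (u, m2)
                    H[index[s2], k] += -t

evals, evecs = np.linalg.eigh(H)
T = 0.5                         # example temperature in units of t
w = np.exp(-(evals - evals[0]) / T)
w /= w.sum()
docc = np.array([bin(u & d).count("1") for (u, d) in basis])
# Thermal average of double occupancy from the full spectrum.
d_avg = float(w @ (np.abs(evecs) ** 2).T @ docc)
print(f"E0 = {evals[0]:.4f}, <double occ> at T={T}: {d_avg:.4f}")
```

    Fixing the particle numbers keeps the Hilbert space to 36 states in this example, which is what makes full diagonalization, and hence exact finite-temperature averages, tractable.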

    Pangenome reconstruction of Lactobacillaceae metabolism predicts species-specific metabolic traits.

    Strains across the Lactobacillaceae family form the basis for a trillion-dollar industry. Our understanding of the genomic basis for their key traits is fragmented, however, including the metabolism that is foundational to their industrial uses. Pangenome analysis of publicly available Lactobacillaceae genomes allowed us to generate genome-scale metabolic network reconstructions for 26 species of industrial importance. Their manual curation led to more than 75,000 gene-protein-reaction associations that were deployed to generate 2,446 genome-scale metabolic models. Cross-referencing genomes and known metabolic traits allowed for manual metabolic network curation and validation of the metabolic models. As a result, we provide the first pangenomic basis for metabolism in the Lactobacillaceae family and a collection of predictive computational metabolic models that enable a variety of practical uses.

    IMPORTANCE: Lactobacillaceae, a bacterial family foundational to a trillion-dollar industry, is increasingly relevant to biosustainability initiatives. Our study, leveraging approximately 2,400 genome sequences, provides a pangenomic analysis of Lactobacillaceae metabolism, creating over 2,400 curated and validated genome-scale models (GEMs). These GEMs successfully predict (i) unique, species-specific metabolic reactions; (ii) niche-enriched reactions that increase organism fitness; (iii) essential media components, offering insights into the global amino acid essentiality of Lactobacillaceae; and (iv) fermentation capabilities across the family, shedding light on the metabolic basis of Lactobacillaceae-based commercial products. This quantitative understanding of Lactobacillaceae metabolic properties and their genomic basis will have profound implications for the food industry and biosustainability, offering new insights and tools for strain selection and manipulation.
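
    To make the "practical uses" concrete, here is a hedged sketch of how one such genome-scale model might be interrogated with COBRApy: compute baseline growth by flux balance analysis, then close a single amino-acid exchange reaction to probe media essentiality, one of the prediction types listed above. The SBML filename and the exchange-reaction ID are hypothetical placeholders, not identifiers from the published collection.

```python
# Illustrative sketch only: querying a genome-scale metabolic model (GEM)
# with COBRApy, in the spirit of the predictions described above. The SBML
# filename and exchange-reaction ID are hypothetical placeholders.
import cobra

model = cobra.io.read_sbml_model("Lactobacillus_example_GEM.xml")  # hypothetical

# Baseline growth on the model's default medium via flux balance analysis.
base = model.slim_optimize()
print(f"baseline growth rate: {base:.4f}")

# Crude in-silico essentiality test for one medium component: shut its
# exchange reaction and check whether predicted growth collapses.
exch_id = "EX_his__L_e"   # hypothetical exchange ID for L-histidine uptake
with model:               # the context manager reverts the change on exit
    model.reactions.get_by_id(exch_id).lower_bound = 0.0
    knocked = model.slim_optimize(error_value=0.0)
print(f"growth without histidine uptake: {knocked:.4f}")
print("histidine appears essential" if knocked < 1e-6 else "histidine is dispensable")
```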

    Pressure-induced referred pain is expanded by persistent soreness


    Quantifying the connectivity of a network: The network correlation function method

    Networks are useful for describing systems of interacting objects, where the nodes represent the objects and the edges represent the interactions between them. Applications include chemical and metabolic systems, food webs, and social networks. Recently, it was found that many of these networks display common topological features, such as high clustering, a small average path length (small-world networks), and a power-law degree distribution (scale-free networks). The topological features of a network are commonly related to the network's functionality. However, the topology alone does not account for the nature of the interactions in the network or their strength. Here we introduce a method for evaluating the correlations between pairs of nodes in the network. These correlations depend both on the topology and on the functionality of the network. A network with high connectivity displays strong correlations between its interacting nodes and thus features small-world functionality. We quantify the correlations between all pairs of nodes in the network and express them as matrix elements in the correlation matrix. From this information one can plot the correlation function for the network and extract the correlation length. The connectivity of a network is then defined as the ratio between this correlation length and the average path length of the network. Using this method we distinguish between a topological small world and a functional small world, where the latter is characterized by long-range correlations and high connectivity. Networks which share the same topology may thus have different connectivities, depending on the nature and strength of their interactions. The method is demonstrated on metabolic networks, but can be readily generalized to other types of networks.

    Comment: 10 figures
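
    A hedged sketch of the pipeline this abstract describes: compute pairwise node correlations, average them as a function of shortest-path distance, fit a correlation length ξ, and divide by the average path length to get the connectivity. Since the abstract does not spell out how correlations are derived from the interaction dynamics, a heat-kernel matrix exp(-βL) over the graph Laplacian stands in here as an assumed, generic correlation model; the Watts-Strogatz example graph and β are also assumptions, so the numbers are illustrative only.

```python
# Hedged sketch of the pipeline: pairwise node correlations ->
# correlation function vs. distance -> correlation length -> connectivity.
import networkx as nx
import numpy as np
from scipy.linalg import expm

G = nx.watts_strogatz_graph(n=100, k=4, p=0.1, seed=1)  # example network
Lap = nx.laplacian_matrix(G).toarray().astype(float)
C = expm(-0.5 * Lap)                            # stand-in correlation matrix
C /= np.sqrt(np.outer(np.diag(C), np.diag(C)))  # normalize to [-1, 1]

# Correlation function: mean |C_ij| over pairs at each shortest-path distance.
dist = dict(nx.all_pairs_shortest_path_length(G))
by_d = {}
for i in G:
    for j, d in dist[i].items():
        if j > i:
            by_d.setdefault(d, []).append(abs(C[i, j]))
ds = np.array(sorted(by_d))
cf = np.array([np.mean(by_d[d]) for d in ds])

# Correlation length xi from an exponential fit C(d) ~ exp(-d / xi).
xi = -1.0 / np.polyfit(ds, np.log(cf + 1e-12), 1)[0]
avg_path = nx.average_shortest_path_length(G)
print(f"xi = {xi:.2f}, <path> = {avg_path:.2f}, "
      f"connectivity = xi / <path> = {xi / avg_path:.2f}")
```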

    Functional States of the Genome-Scale Escherichia Coli Transcriptional Regulatory System

    A transcriptional regulatory network (TRN) constitutes the collection of regulatory rules that link environmental cues to the transcription state of a cell's genome. We recently proposed a matrix formalism that quantitatively represents a system of such rules (a transcriptional regulatory system [TRS]) and allows systemic characterization of TRS properties. The matrix formalism allows not only the computation of the transcription state of the genome but also a fundamental characterization of the input-output mapping that it represents. Furthermore, a key advantage of this “pseudo-stoichiometric” matrix formalism is its ability to integrate easily with existing stoichiometric matrix representations of signaling and metabolic networks. Here we demonstrate for the first time how this matrix formalism can be extended to large-scale systems by applying it to the genome-scale Escherichia coli TRS. We analyze the fundamental subspaces of the regulatory network matrix (R) to describe intrinsic properties of the TRS. We further use Monte Carlo sampling to evaluate the E. coli transcription state across a subset of all possible environments, comparing our results to published gene expression data as validation. Finally, we present novel in silico findings for the E. coli TRS, including (1) a gene expression correlation matrix delineating functional motifs; (2) sets of gene ontologies for which regulatory rules governing gene transcription are poorly understood and which may direct further experimental characterization; and (3) the appearance of a distributed TRN structure, which is in stark contrast to the more hierarchical organization of metabolic networks.
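
    As a toy illustration of the idea behind such a matrix formalism (not the published genome-scale matrix R), a few Boolean regulatory rules can be encoded as rows of a matrix acting on a vector of environmental cues and their complements, and the resulting transcription state evaluated for sampled environments. All genes, cues, and rules below are invented for the example.

```python
# Toy illustration (not the published genome-scale matrix R): encoding a
# few Boolean regulatory rules as a matrix that maps environment states
# to gene transcription states. Genes, cues, and rules are invented.
import numpy as np

cues = ["glucose", "oxygen"]           # hypothetical environmental inputs
genes = ["geneA", "geneB", "geneC"]    # hypothetical target genes

def transcription_state(env):
    # Input vector holds each cue and its complement: [glc, o2, !glc, !o2].
    x = np.array([env[c] for c in cues], dtype=float)
    inputs = np.concatenate([x, 1.0 - x])
    # Rules (invented): geneA ON if glucose; geneB ON if NOT glucose;
    # geneC ON if glucose AND oxygen.
    R = np.array([
        [1, 0, 0, 0],    # geneA <- glucose
        [0, 0, 1, 0],    # geneB <- not glucose
        [1, 1, 0, 0],    # geneC <- glucose AND oxygen
    ], dtype=float)
    need = R.sum(axis=1)          # how many inputs each rule requires
    on = (R @ inputs) >= need     # a rule fires iff all its inputs are present
    return dict(zip(genes, on))

# Sample a few environments, echoing the paper's Monte Carlo exploration.
for env in ({"glucose": 1, "oxygen": 1}, {"glucose": 0, "oxygen": 1}):
    print(env, "->", transcription_state(env))
```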

    Using weak values to experimentally determine "negative probabilities" in a two-photon state with Bell correlations

    Bipartite quantum entangled systems can exhibit measurement correlations that violate Bell inequalities, revealing the profoundly counter-intuitive nature of the physical universe. These correlations reflect the impossibility of constructing a joint probability distribution for all values of all the different properties observed in Bell inequality tests. Physically, the impossibility of measuring such a distribution experimentally, as a set of relative frequencies, is due to the quantum back-action of projective measurements. Weakly coupling to a quantum probe, however, produces minimal back-action, and so enables a weak measurement of the projector of one observable, followed by a projective measurement of a non-commuting observable. By this technique it is possible to empirically measure weak-valued probabilities for all of the values of the observables relevant to a Bell test. The marginals of this joint distribution, which we experimentally determine, reproduce all of the observable quantum statistics, including a violation of the Bell inequality, which we independently measure. This is possible because our distribution, like the weak values for projectors on which it is built, is not constrained to the interval [0, 1]. It was first pointed out by Feynman that, for explaining singlet-state correlations within "a [local] hidden variable view of nature ... everything works fine if we permit negative probabilities". However, there are infinitely many such theories. Our method, involving "weak-valued probabilities", singles out a unique set of probabilities, and moreover does so empirically.

    Comment: 9 pages, 3 figures
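
    A numerical sketch of the kind of quasi-distribution described here, under the assumption that the weak-valued joint probability takes the standard Kirkwood-Dirac-type form Re⟨ψ|(Π_a'Π_a)⊗(Π_b'Π_b)|ψ⟩ over the four Bell-test observables; the angles below are typical CHSH choices, not necessarily the experiment's. The 16 entries sum to one and their pair marginals reproduce the quantum correlators, so, because those correlators violate the Bell inequality, some entries must be negative.

```python
# Numerical sketch (not the experiment's analysis code): weak-valued joint
# quasi-probabilities for non-commuting spin projectors on a two-qubit
# singlet, evaluated at typical CHSH-type angles. Negative entries appear,
# echoing Feynman's "negative probabilities" remark quoted above.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def proj(theta, outcome):
    """Projector onto spin outcome (+1/-1) along angle theta in the x-z plane."""
    s = np.sin(theta) * sx + np.cos(theta) * sz
    return 0.5 * (np.eye(2) + outcome * s)

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4  # CHSH-type angles

def quasi_prob(oa, oa2, ob, ob2):
    """Re <psi| (P_a' P_a) (x) (P_b' P_b) |psi>: a weak-valued joint
    'probability' over all four observables, not confined to [0, 1]."""
    PA = proj(a2, oa2) @ proj(a, oa)    # non-commuting product on Alice's side
    PB = proj(b2, ob2) @ proj(b, ob)    # likewise on Bob's side
    return float(np.real(singlet.conj() @ np.kron(PA, PB) @ singlet))

table = {(oa, oa2, ob, ob2): quasi_prob(oa, oa2, ob, ob2)
         for oa in (1, -1) for oa2 in (1, -1)
         for ob in (1, -1) for ob2 in (1, -1)}
print("sum of all 16 entries:", round(sum(table.values()), 6))   # -> 1.0
print("most negative entry:  ", min(table.values()))             # < 0
```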

    Individualised risk assessment for diabetic retinopathy and optimisation of screening intervals: a scientific approach to reducing healthcare costs.

    To validate a mathematical algorithm that calculates the risk of diabetic retinopathy progression in a diabetic population with UK staging (R0-3; M1) of diabetic retinopathy, and to establish the utility of the algorithm in reducing screening frequency in this cohort while maintaining safety standards.

    A cohort of 9690 diabetic individuals in England was followed for 2 years. The algorithm calculated individual risk for development of preproliferative retinopathy (R2), active proliferative retinopathy (R3A) and diabetic maculopathy (M1) based on clinical data. Screening intervals were determined such that the increase in risk of developing certain stages of retinopathy between screenings was the same for all patients and identical to the mean risk under fixed annual screening. Receiver operating characteristic curves were drawn and the area under the curve calculated to estimate prediction capability.

    The algorithm predicts the occurrence of the given diabetic retinopathy stages with an area under the curve of 80% for patients with type II diabetes (CI 0.78 to 0.81). Of the cohort, 64% is at less than 5% risk of progression to R2, R3A or M1 within 2 years. By applying a 2-year ceiling to the screening interval, patients with type II diabetes are screened on average every 20 months, a 40% reduction in frequency compared with annual screening.

    The algorithm reliably identifies patients at high risk of developing advanced stages of diabetic retinopathy, including preproliferative R2, active proliferative R3A and maculopathy M1. The majority of patients have less than 5% risk of progression between stages within a year, and a small high-risk group is identified. Screening visit frequency, and presumably costs, in a diabetic retinopathy screening system can be reduced by 40% by using a 2-year ceiling. Individualised risk assessment with a 2-year ceiling on screening intervals may be a pragmatic next step in diabetic retinopathy screening in the UK, in that safety is maximised and cost reduced by about 40%.

    Icelandic Research Council
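
    The interval-setting rule described above (equal between-screening risk for every patient, matched to the mean risk of fixed annual screening, with a 2-year ceiling) can be sketched in a few lines. The constant-hazard model and all numbers below are assumptions for illustration, not the published algorithm's coefficients.

```python
# Sketch of the interval-setting principle under assumed numbers: recall
# each patient when their cumulative progression risk reaches the cohort's
# mean annual risk, capped at a 24-month ceiling. Illustrative only.
import math

MEAN_ANNUAL_RISK = 0.05   # assumed mean 1-year progression risk of the cohort
CEILING_MONTHS = 24       # the 2-year cap discussed in the abstract

def screening_interval_months(annual_risk):
    """Months until cumulative risk equals MEAN_ANNUAL_RISK, assuming a
    constant hazard h with 1 - exp(-h * years) = annual_risk at 1 year."""
    h = -math.log(1.0 - annual_risk)                   # per-year hazard
    target_years = -math.log(1.0 - MEAN_ANNUAL_RISK) / h
    return min(12.0 * target_years, CEILING_MONTHS)

for risk in (0.01, 0.05, 0.20):   # hypothetical individual annual risks
    print(f"annual risk {risk:>4.0%} -> recall in "
          f"{screening_interval_months(risk):.0f} months")
```

    Under these assumed numbers, a 1% annual-risk patient would hit the 24-month ceiling, a mean-risk patient is recalled at 12 months, and a 20% annual-risk patient at about 3 months, which is how low-risk patients drive the average interval up while the high-risk group is seen more often.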