
    Minimum entropy restoration using FPGAs and high-level techniques

    One of the greatest perceived barriers to the widespread use of FPGAs in image processing is the difficulty that application specialists face in developing algorithms on reconfigurable hardware. Minimum entropy deconvolution (MED) techniques have been shown to be effective in the restoration of star-field images. This paper reports on an attempt to implement a MED algorithm using simulated annealing, first on a microprocessor and then on an FPGA. The FPGA implementation uses DIME-C, a C-to-gates compiler, coupled with a low-level core library to simplify the design task. Analysis of the C code and of the output from the DIME-C compiler guided the code optimisation. The paper reports on the design effort that this entailed and the resultant performance improvements.
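
    For readers unfamiliar with the approach, the following is a minimal sketch of the generic simulated-annealing loop for entropy minimisation described above, not the DIME-C/FPGA design reported in the paper. The entropy measure, the single-pixel proposal step, the geometric cooling schedule and the omission of the deconvolution (point-spread-function) constraint are all simplifying assumptions.

```python
import numpy as np

def entropy(image, eps=1e-12):
    """Shannon entropy of the normalised pixel intensities (an assumed measure)."""
    p = image.ravel() / (image.sum() + eps)
    p = p[p > eps]
    return -np.sum(p * np.log(p))

def anneal_restore(observed, n_iters=10000, t0=1.0, cooling=0.999, rng=None):
    """Sketch of minimum-entropy restoration by simulated annealing.

    Perturbs one pixel of the current estimate at a time and accepts the move
    with the Metropolis criterion, so the estimate drifts towards a low-entropy
    (sharp, star-like) image. A real MED restoration would also enforce
    consistency with the observed image through the point-spread function.
    """
    rng = np.random.default_rng() if rng is None else rng
    est = observed.astype(float).copy()
    best, best_e = est.copy(), entropy(est)
    e, t = best_e, t0
    for _ in range(n_iters):
        # Propose a small random change to one pixel.
        trial = est.copy()
        y, x = rng.integers(0, est.shape[0]), rng.integers(0, est.shape[1])
        trial[y, x] = max(0.0, trial[y, x] + rng.normal(scale=0.1 * observed.max()))
        e_trial = entropy(trial)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_trial < e or rng.random() < np.exp((e - e_trial) / t):
            est, e = trial, e_trial
            if e < best_e:
                best, best_e = est.copy(), e
        t *= cooling  # geometric cooling schedule (an assumption)
    return best
```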

    Restoration of star-field images using high-level languages and core libraries

    Research into the use of FPGAs in image processing began in earnest at the beginning of the 1990s. Since then, many thousands of publications have pointed to the computational capabilities of FPGAs. During this time, FPGAs have seen the application space to which they are applicable grow in tandem with their logic densities. When investigating a particular application, researchers compare FPGAs with alternative technologies such as Digital Signal Processors (DSPs), Application-Specific Integrated Circuits (ASICs), microprocessors and vector processors. The metrics for comparison depend on the needs of the application and include raw performance, power consumption, unit cost, board footprint, non-recurring engineering cost, design time and design cost. The key metrics for a particular application may also include ratios of these metrics, e.g. power/performance or performance/unit cost. The work detailed in this paper compares a 90nm-process commodity microprocessor with a platform based around a 90nm-process FPGA, focussing on design time and raw performance. The application chosen for implementation was a minimum entropy restoration of star-field images (see [1] for an introduction), with simulated annealing used to converge towards the globally optimum solution. This application was not chosen in the belief that it would particularly suit one technology over another, but was instead selected as being representative of a computationally intense image-processing application.

    Extraordinarily high leaf selenium to sulfur ratios define ‘Se-accumulator’ plants

    Background and Aims: Selenium (Se) and sulfur (S) exhibit similar chemical properties. In flowering plants (angiosperms) selenate and sulfate are acquired and assimilated by common transport and metabolic pathways. It is hypothesized that most angiosperm species show little or no discrimination in the accumulation of Se and S in leaves when their roots are supplied with a mixture of selenate and sulfate, but some, termed Se-accumulator plants, selectively accumulate Se in preference to S under these conditions. Methods: This paper surveys Se and S accumulation in the leaves of 39 angiosperm species, chosen to represent the range of plant Se accumulation phenotypes, grown hydroponically under identical conditions. Results: The data show that, when supplied with a mixture of selenate and sulfate: (1) plant species differ in both their leaf Se ([Se]leaf) and leaf S ([S]leaf) concentrations; (2) most angiosperms show little discrimination for the accumulation of Se and S in their leaves and, in non-accumulator plants, [Se]leaf and [S]leaf are highly correlated; (3) [Se]leaf in Se-accumulator plants is significantly greater than in other angiosperms, but [S]leaf, although high, is within the range expected for angiosperms in general; and (4) the Se/S quotient in leaves of Se-accumulator plants is significantly higher than in leaves of other angiosperms. Conclusion: The traits of extraordinarily high [Se]leaf and leaf Se/S quotients define the distinct elemental composition of Se-accumulator plants.
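
    As a worked illustration of the quantities used above, the snippet below computes leaf Se/S quotients and the across-species correlation of [Se]leaf and [S]leaf. The numbers are hypothetical and merely stand in for the 39-species dataset, which is not reproduced here.

```python
import numpy as np

# Hypothetical leaf concentrations (mg per kg dry weight); illustrative only.
se_leaf = np.array([12.0, 8.5, 950.0, 15.0, 1200.0])     # [Se]leaf per species
s_leaf = np.array([4200.0, 3900.0, 5100.0, 4500.0, 5600.0])  # [S]leaf per species

# The Se/S quotient used to flag putative Se-accumulators.
se_s_quotient = se_leaf / s_leaf

# Correlation of [Se]leaf and [S]leaf across species.
r = np.corrcoef(se_leaf, s_leaf)[0, 1]

print("Se/S quotients:", np.round(se_s_quotient, 4))
print(f"correlation of [Se]leaf and [S]leaf: {r:.2f}")
```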

    Job Search and Frictional Unemployment: Some Empirical Evidence

    The author reports the results of a survey designed to provide additional information about frictional unemployment. He first defines this type of unemployment algebraically, then distinguishes three forms of it: transitional unemployment resulting from layoffs, voluntary unemployment when workers quit their jobs, and entry unemployment among people entering or re-entering the labour market. He then turns to a further element of this form of unemployment, namely the length of time workers spend searching for a job. Since frictional unemployment is supposed to occur when job vacancies equal or exceed the number of job seekers, it matters why workers are not placed immediately. Four main reasons are generally offered to explain frictional unemployment: (1) workers may search for jobs at random; (2) they may be insufficiently informed about vacancies; (3) they may deliberately take time to gather fuller information; and (4) they may deliberately pass over certain vacancies. Like many other economists, the author raises a number of questions about frictional unemployment. Are unemployed workers insufficiently informed about labour-market conditions? While out of work, do they try to learn more about existing vacancies? On what criteria do they decide to apply for a job? Do they take working conditions and wage rates into account? The author proceeded as follows. For the survey he chose the area of Edmonton, Alberta where a large number of firms are concentrated, commonly known as the central business district, and he restricted the survey to a single occupation, typists. Respondents were asked how long they had been without work before obtaining their current position, whether they felt sufficiently informed when they became unemployed, what their sources of information were during their period of unemployment, and on what criteria they chose the firms to which they applied. Finally, they were asked whether they had refused any job offer in the meantime and whether the wage rates paid by employers had played a part. Of the respondents, 68% obtained their current job in less than four weeks, while 12 of the remaining 21 did not begin looking for a new position as soon as they became unemployed. Overall, the average duration of job search was 5.7 weeks. As for the information needed to find the new job, 42 employees, only 12%, said they were not sufficiently informed, whereas 30, or 45%, considered their information good or excellent. Respondents also did not regard registering with manpower centres or private employment agencies as information gathering, viewing these bodies simply as extensions of companies' personnel offices. Finally, the author asked what criteria the employees used in deciding whether to accept a position. Surprisingly, 32 typists had rejected a job offer and 12 had refused more than one; the main reason given was that the wage offered was unacceptable. From the survey the author draws some general conclusions: the hypothesis that workers lack information when they become unemployed is not supported by the results. The purpose of this paper is to report on some recent empirical research undertaken in order to provide additional information concerning the frictionally unemployed.

    Selection of patients with cystic fibrosis for lung transplantation

    Lung transplantation is the most aggressive therapy available for end-stage lung disease from cystic fibrosis (CF). A new predictive survival model of CF uses demographic, FEV1, nutritional, microbiologic, and acute exacerbation data to produce precise estimates of 5-year survival.
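
    The abstract does not specify the model's functional form, so the sketch below is only a generic illustration of a 5-year survival predictor built from the listed covariate categories. The file name and column names are assumptions, not the registry variables used in the paper, and categorical covariates are assumed to be pre-coded as 0/1.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical patient-level table; names are illustrative only.
df = pd.read_csv("cf_registry.csv")  # assumed file

# Covariates mirroring the categories named in the abstract:
# demographic, FEV1, nutritional, microbiologic, acute exacerbations.
X = df[["age", "sex", "fev1_pct_predicted", "weight_for_age_zscore",
        "b_cepacia_infection", "exacerbations_per_year"]]
y = df["survived_5_years"]  # 1 = alive at 5 years, 0 = deceased

model = LogisticRegression(max_iter=1000).fit(X, y)

# Predicted probability of 5-year survival for each patient.
five_year_survival_prob = model.predict_proba(X)[:, 1]
```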

    Bayesian fitting of probabilistic maturation reaction norms to population-level data

    Probabilistic maturation reaction norms (PMRNs) are an important tool for studying fisheries-induced evolution and environmental effects on life history. To date there has been no way to fit a PMRN to population-level fisheries data; instead, individual-level data must be used. This limits the stocks and time periods that can be studied. We introduce a Bayesian method for fitting PMRNs to population-level data. The method is verified against both an existing result and simulated data, and applied to historical Barents Sea cod data that combine observations of population-level variation in age, size and maturity status from Russia and Norway. The method shows a clear and rapid trend towards greater probability of maturation at smaller lengths in the Barents Sea cod. The new model-fitting algorithm allows us to study historic changes in life history despite the lack of individual-level data in much long-term data. Access to more data will aid the study of evolutionary hypotheses in a wide range of organisms.
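
    As an illustration of the underlying idea, the sketch below fits a logistic maturation ogive to hypothetical population-level counts of mature and immature fish in a single age class, using a plain random-walk Metropolis sampler. It is a deliberately simplified stand-in for the paper's Bayesian PMRN estimation; the data, flat priors and proposal scales are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population-level data: fish sampled and number mature per
# length bin (cm). The real Barents Sea cod data are not reproduced here.
length = np.array([50.0, 60.0, 70.0, 80.0, 90.0, 100.0])
n = np.array([200, 180, 150, 120, 80, 40])
mature = np.array([5, 20, 60, 85, 70, 38])

def log_post(theta):
    """Binomial log-likelihood of a logistic maturation ogive (flat priors)."""
    a, b = theta
    p = 1.0 / (1.0 + np.exp(-(a + b * length)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return np.sum(mature * np.log(p) + (n - mature) * np.log(1 - p))

# Random-walk Metropolis sampler over the two ogive parameters.
theta = np.array([-10.0, 0.15])
samples, lp = [], log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.5, 0.01])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])  # discard burn-in

a_hat, b_hat = samples.mean(axis=0)
l50 = -a_hat / b_hat  # length at 50% maturation probability
print(f"posterior mean L50 is roughly {l50:.1f} cm")
```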

    Impact of Burkholderia Infection on Lung Transplantation in Cystic Fibrosis

    Rationale: Lung transplantation offers the only survival option for patients with cystic fibrosis (CF) with end-stage pulmonary disease. Infection with Burkholderia species is typically considered a contraindication to transplantation in CF. However, the risks posed by different Burkholderia species on transplantation outcomes are poorly defined. Objectives: To quantify the risks of infection with Burkholderia species on survival before and after lung transplantation in patients with CF. Methods: Multivariate Cox survival models assessed hazard ratios of infection with Burkholderia species in 1,026 lung transplant candidates and 528 lung transplant recipients. Lung allocation scores, incorporating Burkholderia infection status, were calculated for transplant candidates. Measurements and Main Results: Transplant candidates infected with different Burkholderia species did not have statistically different mortality rates. Among transplant recipients infected with B. cenocepacia, only those infected with nonepidemic strains had significantly greater post-transplant mortality compared with uninfected patients (hazard ratio [HR], 2.52; 95% confidence interval [CI], 1.04–6.12; P = 0.04). Hazards were similar between uninfected transplant recipients and those infected with B. multivorans (HR, 0.66; 95% CI, 0.27–1.56; P = 0.34). Transplant recipients infected with B. gladioli had significantly greater post-transplant mortality than uninfected patients (HR, 2.23; 95% CI, 1.05–4.74; P = 0.04). Once hazards for species/strain were included, lung allocation scores of B. multivorans–infected transplant candidates were comparable to uninfected candidate scores, whereas those of candidates infected with nonepidemic B. cenocepacia or B. gladioli were lower. Conclusions: Post-transplant mortality among patients with CF infected with Burkholderia varies by infecting species. This variability should be taken into account in evaluating lung transplantation candidates.
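
    Below is a minimal sketch of a multivariate Cox model of post-transplant survival of the kind described above, assuming a hypothetical recipient-level table; the column names are illustrative, and the paper's models adjusted for covariates not shown here. The hazard ratios correspond to the exponentiated coefficients reported by the fit.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical recipient-level table; names are illustrative only.
df = pd.read_csv("cf_transplant_recipients.csv")

# Survival time, event indicator and infection-status covariates (0/1 flags).
data = df[["time_to_death_or_censor", "died",
           "b_cenocepacia_nonepidemic", "b_multivorans", "b_gladioli",
           "age_at_transplant"]]

cph = CoxPHFitter()
cph.fit(data, duration_col="time_to_death_or_censor", event_col="died")
cph.print_summary()  # exp(coef) column gives the hazard ratios with 95% CIs
```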

    Glomerular matrix accumulation is linked to inhibition of the plasmin protease system

    Glomerular matrix accumulation is linked to inhibition of the plasmin protease system. TGF-β plays a pivotal role in the pathological accumulation of extracellular matrix in experimental glomerulonephritis. Increased TGF-β expression leads to increased synthesis and deposition of extracellular matrix components while administration of antiserum to TGF-β suppresses the major manifestations of the disease. We hypothesized that TGF-β might also enhance matrix accumulation by decreasing matrix turnover via effects on protease/protease inhibitor balance. Plasmin is a potent protease capable of degrading a variety of matrix molecules. Plasmin generation from plasminogen is regulated by plasminogen activator(s) (PA) and plasminogen activator inhibitor(s) (PAI). In this study PA activity was markedly reduced and PAI-1 synthesis dramatically increased when TGF-β was added to normal glomeruli. Diseased glomeruli also showed decreased PA activity, increased PAI-1 synthesis and increased PAI-1 deposition into matrix. Administration of anti-TGF-β serum to glomerulonephritic rats blocked the expected increase in glomerular PAI-1 deposition. Thus changes in the PA/PAI balance favoring accumulation of matrix are induced by TGF-β in normal glomeruli and are present in nephritic glomeruli when endogenous TGF-β production is high. Our findings implicate the plasmin protease system in tissue repair following acute glomerular injury and suggest another mechanism by which TGF-β enhances the matrix accumulation characteristic of many glomerular diseases.