
    Anisotropic smoothness classes: from finite element approximation to image models

    We propose and study quantitative measures of smoothness which are adapted to anisotropic features such as edges in images or shocks in PDEs. These quantities govern the rate of approximation by adaptive finite elements when no constraint is imposed on the aspect ratio of the triangles; the simplest examples of such quantities are based on the determinant of the Hessian of the function to be approximated. Since they are not semi-norms, these quantities cannot be used to define linear function spaces. We show that they can be well defined by mollification when the function to be approximated has jump discontinuities along piecewise smooth curves. This motivates using them in image processing as an alternative to the frequently used total variation semi-norm, which does not account for the geometric smoothness of the edges. Comment: 24 pages, 2 figures
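    The abstract bases its smoothness measures on the determinant of the Hessian of the function to be approximated. A minimal illustrative sketch of such a quantity for a discretized image, using finite differences and a hypothetical exponent p (not the paper's exact definition or normalization):

        import numpy as np

        def hessian_determinant_measure(f, p=1.0, h=1.0):
            """Illustrative Hessian-determinant smoothness measure on a grid.

            Hypothetical sketch: estimates the Hessian of f by finite differences
            and aggregates sqrt(|det d^2 f|) with an L^p-type sum. The exponent p
            and normalization are arbitrary choices for illustration.
            """
            # Second-order central differences for the Hessian entries.
            fxx = (np.roll(f, -1, 0) - 2 * f + np.roll(f, 1, 0)) / h**2
            fyy = (np.roll(f, -1, 1) - 2 * f + np.roll(f, 1, 1)) / h**2
            fxy = (np.roll(np.roll(f, -1, 0), -1, 1)
                   - np.roll(np.roll(f, -1, 0), 1, 1)
                   - np.roll(np.roll(f, 1, 0), -1, 1)
                   + np.roll(np.roll(f, 1, 0), 1, 1)) / (4 * h**2)
            det_hess = np.abs(fxx * fyy - fxy**2)   # |det d^2 f| pointwise
            # L^p-type aggregation over the domain (h**2 is the cell area).
            return (np.sum(np.sqrt(det_hess) ** p) * h**2) ** (1.0 / p)

        # Usage: a smooth quadratic bump on [-1, 1]^2.
        x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
        smooth = x**2 + y**2
        print(hessian_determinant_measure(smooth, p=1.0, h=2 / 255))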

    Financial Structure and Economic Welfare: Applied General Equilibrium Development Economics

    This review provides a common framework for researchers thinking about the next generation of micro-founded macro models of growth, inequality, and financial deepening, as well as direction for policy makers targeting microfinance programs to alleviate poverty. Topics include the treatment of financial structure in general equilibrium models: testing for as-if-complete markets or other financial underpinnings; examining dual-sector models with both a perfectly intermediated sector and a sector in financial autarky, as well as a second generation of these models that embeds information problems and other obstacles to trade; designing surveys to capture measures of income, investment/savings, and flow of funds; and aggregating individuals and households to the level of network, village, or national economy. The review concludes with new directions that overcome conceptual and computational limitations. Funding: National Science Foundation (U.S.); National Institutes of Health (U.S.); Templeton Foundation; Bill & Melinda Gates Foundation

    Development of methods for the preparation of radiopure 82Se sources for the SuperNEMO neutrinoless double-beta decay experiment

    A radiochemical method for producing 82Se sources with an ultra-low level of contamination by natural radionuclides (40K and the decay products of 232Th and 238U) has been developed, based on cation-exchange chromatographic purification with reverse removal of impurities. It includes chromatographic separation (purification), reduction, conditioning (decantation, centrifugation, washing, grinding, and drying), and 82Se foil production. The conditioning stage, during which highly dispersed elemental selenium is obtained by the reduction of purified selenious acid (H2SeO3) with sulfur dioxide (SO2), is the crucial step in the preparation of radiopure 82Se samples. The procedure was first applied to natural selenium (600 g) in order to refine the method. The technique developed was then used to produce 2.5 kg of radiopure enriched selenium (82Se). The produced 82Se samples were wrapped in polyethylene (12 μm thick) and the radionuclides present in the samples were analyzed with the BiPo-3 detector. The radiopurity of the plastic materials (chromatographic column material and polypropylene chemical vessels) used at all stages was determined by instrumental neutron activation analysis. The radiopurity of the 82Se foils was checked by measurements with the BiPo-3 spectrometer, which confirmed the high purity of the final product. The measured contamination level for 208Tl was 8-54 μBq/kg, and for 214Bi a detection limit of 600 μBq/kg was reached.

    Credit Supply: Identifying Balance-Sheet Channels with Loan Applications and Granted Loans

    To identify credit availability, we analyze the extensive and intensive margins of lending using loan applications and all loans granted in Spain. We find that, during the period analyzed, both worse economic and tighter monetary conditions reduce loan granting, especially to firms, or from banks, with lower capital or liquidity ratios. Moreover, responding to applications for the same loan, weak banks are less likely to grant the loan. Our results suggest that firms cannot offset the resultant credit restriction by turning to other banks. Importantly, the bank-lending channel is notably stronger when we account for unobserved time-varying firm heterogeneity in loan demand and quality.
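    The last sentence refers to absorbing unobserved time-varying firm heterogeneity in loan demand and quality. A common way to do this, when the same firm applies to several banks in the same period, is to regress loan granting on bank balance-sheet variables with firm-by-time fixed effects. A minimal sketch with simulated data and hypothetical variable names (not the paper's exact specification):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated loan-application data (hypothetical variables): one row per
        # firm-bank application in a given quarter.
        rng = np.random.default_rng(0)
        n = 2000
        df = pd.DataFrame({
            "firm": rng.integers(0, 50, n),             # firm identifier
            "quarter": rng.integers(0, 4, n),           # time period
            "bank_capital": rng.normal(0.08, 0.02, n),  # capital ratio of the applied-to bank
            "rate_change": rng.normal(0.0, 0.5, n),     # monetary tightening shock
        })
        df["granted"] = (0.5 + 2.0 * df["bank_capital"]
                         - 0.1 * df["rate_change"] * (df["bank_capital"] < 0.08)
                         + rng.normal(0.0, 0.3, n) > 0.55).astype(float)

        # Firm-by-quarter fixed effects absorb time-varying loan demand/quality,
        # so the bank coefficients are identified from the same firm applying to
        # banks with different balance sheets in the same quarter.
        df["firm_time"] = df["firm"].astype(str) + "_" + df["quarter"].astype(str)
        model = smf.ols("granted ~ bank_capital + bank_capital:rate_change"
                        " + C(firm_time)", data=df).fit()
        print(model.params.filter(like="bank_capital"))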

    First demonstration of 30 eVee ionization energy resolution with Ricochet germanium cryogenic bolometers

    The future Ricochet experiment aims to search for new physics in the electroweak sector by measuring the Coherent Elastic Neutrino-Nucleus Scattering process from reactor antineutrinos with high precision down to the sub-100 eV nuclear recoil energy range. While the Ricochet collaboration is currently building the experimental setup at the reactor site, it is also finalizing the cryogenic detector arrays that will be integrated into the cryostat at the Institut Laue Langevin in early 2024. In this paper, we report on recent progress from the Ge cryogenic detector technology, called the CryoCube. More specifically, we present the first demonstration of a 30 eVee (electron equivalent) baseline ionization resolution (RMS) achieved with an early design of the detector assembly and its dedicated High Electron Mobility Transistor (HEMT) based front-end electronics. This represents an order of magnitude improvement over the best ionization resolutions obtained on similar heat-and-ionization germanium cryogenic detectors from the EDELWEISS and SuperCDMS dark matter experiments, and a factor of three improvement compared to the first fully-cryogenic HEMT-based preamplifier coupled to a CDMS-II germanium detector. Additionally, we discuss the implications of these results in the context of the future Ricochet experiment and its expected background mitigation performance. Comment: 10 pages, 5 figures, 1 table
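    The quoted figure is a baseline ionization resolution expressed as an RMS in eV electron-equivalent (eVee). A minimal sketch of how such a baseline figure is typically quoted, assuming hypothetical noise amplitudes and a hypothetical calibration constant (not the collaboration's analysis pipeline):

        import numpy as np

        # Illustrative sketch: the baseline resolution is the RMS of energies
        # reconstructed from noise-only ("zero energy") traces, converted to eVee
        # with a calibration constant. All numbers below are made up.
        rng = np.random.default_rng(1)

        calib_eVee_per_adu = 0.5                   # hypothetical calibration constant
        n_noise_events = 5000
        # Hypothetical reconstructed amplitudes of randomly triggered noise traces (ADU).
        noise_amplitudes = rng.normal(0.0, 60.0, n_noise_events)

        baseline_rms_eVee = np.std(noise_amplitudes) * calib_eVee_per_adu
        print(f"baseline ionization resolution ~ {baseline_rms_eVee:.1f} eVee (RMS)")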

    Fast neutron background characterization of the future Ricochet experiment at the ILL research nuclear reactor

    The future Ricochet experiment aims at searching for new physics in the electroweak sector by providing a high precision measurement of the Coherent Elastic Neutrino-Nucleus Scattering (CENNS) process down to the sub-100 eV nuclear recoil energy range. The experiment will deploy a kg-scale low-energy-threshold detector array combining Ge and Zn target crystals 8.8 meters away from the 58 MW research nuclear reactor core of the Institut Laue Langevin (ILL) in Grenoble, France. Currently, the Ricochet collaboration is characterizing the backgrounds at its future experimental site in order to optimize the experiment's shielding design. The most threatening background component, which cannot be actively rejected by particle identification, consists of keV-scale neutron-induced nuclear recoils. These initial fast neutrons are generated by the reactor core and surrounding experiments (reactogenics), and by cosmic rays producing primary neutrons and muon-induced neutrons in the surrounding materials. In this paper, we present the Ricochet neutron background characterization using 3He proportional counters, which exhibit a high sensitivity to thermal, epithermal, and fast neutrons. We compare these measurements to the Ricochet Geant4 simulations to validate our reactogenic and cosmogenic neutron background estimations. Finally, we present our estimated neutron background for the future Ricochet experiment and the resulting CENNS detection significance. Comment: 14 pages, 14 figures, 1 table

    Cancer Biomarker Discovery: The Entropic Hallmark

    Background: It is a commonly accepted belief that cancer cells modify their transcriptional state during the progression of the disease. We propose that the progression of cancer cells towards malignant phenotypes can be efficiently tracked using high-throughput technologies that follow the gradual changes observed in the gene expression profiles, by employing Shannon's mathematical theory of communication. Methods based on Information Theory can then quantify the divergence of cancer cells' transcriptional profiles from those of normally appearing cells of the originating tissues. The relevance of the proposed methods can be evaluated using microarray datasets available in the public domain, but the method is in principle applicable to other high-throughput methods. Methodology/Principal Findings: Using melanoma and prostate cancer datasets, we illustrate how Shannon Entropy and the Jensen-Shannon divergence can be employed to trace the transcriptional changes during the progression of the disease. We establish how the variations of these two measures correlate with established biomarkers of cancer progression. The Information Theory measures allow us to identify novel biomarkers for both progressive and relatively more sudden transcriptional changes leading to malignant phenotypes. At the same time, the methodology was able to validate a large number of genes and processes that seem to be implicated in the progression of melanoma and prostate cancer. Conclusions/Significance: We thus present a quantitative guiding rule, a new unifying hallmark of cancer: the cancer cell's transcriptome changes lead to measurable observed transitions of Normalized Shannon Entropy values (as measured by high-throughput technologies). At the same time, tumor cells increase their divergence from the normal tissue profile, increasing their disorder via the creation of states that we might not directly measure. This unifying hallmark allows us, via the Jensen-Shannon divergence, to identify the arrow of time of the processes from the gene expression profiles, and helps to map the phenotypical and molecular hallmarks of specific cancer subtypes. The deep mathematical basis of the approach allows us to suggest that this principle is, hopefully, of general applicability to other diseases.
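    The two measures named in the abstract, Normalized Shannon Entropy and the Jensen-Shannon divergence, can be written down compactly. A minimal sketch for gene expression profiles, using hypothetical toy profiles and none of the paper's preprocessing or normalization choices:

        import numpy as np

        def normalized_shannon_entropy(expr):
            """Normalized Shannon entropy of an expression profile.

            Illustrative sketch: the profile is treated as a probability
            distribution and the entropy is divided by its maximum, log2(N).
            """
            p = np.asarray(expr, dtype=float)
            p = p / p.sum()
            p = p[p > 0]
            h = -np.sum(p * np.log2(p))
            return h / np.log2(len(expr))

        def jensen_shannon_divergence(expr_a, expr_b):
            """Jensen-Shannon divergence between two profiles (base 2, bounded by 1)."""
            p = np.asarray(expr_a, dtype=float); p = p / p.sum()
            q = np.asarray(expr_b, dtype=float); q = q / q.sum()
            m = 0.5 * (p + q)
            def kl(a, b):
                mask = a > 0
                return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
            return 0.5 * kl(p, m) + 0.5 * kl(q, m)

        # Usage with hypothetical profiles: a "normal" tissue profile and a tumour
        # profile whose expression is concentrated on fewer genes.
        normal = np.array([10.0, 9.0, 11.0, 10.5, 9.5])
        tumour = np.array([30.0, 1.0, 2.0, 1.5, 15.5])
        print(normalized_shannon_entropy(normal), normalized_shannon_entropy(tumour))
        print(jensen_shannon_divergence(normal, tumour))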

    Lawra – linear algebra with recursive algorithms

    Recursion leads to automatic variable blocking for dense linear-algebra algorithms. The recursive way of programming algorithms eliminates the use of level-2 BLAS during the factorization steps. For this and other reasons, recursion usually speeds up the algorithms. The Cholesky factorization algorithm for positive definite matrices and the LU factorization for general matrices are formulated. Different storage data formats and recursive BLAS are explained in this paper. Performance graphs of packed and recursive Cholesky algorithms are presented. Lawra - recursive linear algebra algorithms. Summary: Recursive algorithms allow the block size to be chosen automatically when implementing dense linear algebra algorithms. Recursive programming avoids the use of level-2 BLAS routines during the factorization loop. For this and other reasons, recursive algorithms are usually faster than standard linear algebra algorithms. The paper presents recursive Cholesky and LU decomposition algorithms. Different recursive data storage formats are defined, and a new BLAS library project is described. Performance results for the new recursive Cholesky decomposition algorithm, obtained on various types of computers, are presented. First Published Online: 14 Oct 201
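    The abstract describes recursive blocking for the Cholesky factorization. A minimal sketch of the idea in Python/NumPy (illustrative only; the LAWRA work targets Fortran/BLAS and packed storage formats, and the block size below is an arbitrary choice):

        import numpy as np

        def recursive_cholesky(a, block=64):
            """Recursive Cholesky factorization of a symmetric positive definite
            matrix. Returns a lower-triangular L with A = L @ L.T. Sketch of the
            recursive blocking idea only, not the LAWRA implementation."""
            a = np.array(a, dtype=float)
            n = a.shape[0]
            if n <= block:                       # small block: fall back to LAPACK
                return np.linalg.cholesky(a)
            k = n // 2                           # split A into a 2x2 block form
            l11 = recursive_cholesky(a[:k, :k], block)
            # Solve L21 * L11^T = A21 (a triangular, level-3 style operation).
            l21 = np.linalg.solve(l11, a[k:, :k].T).T
            # Recurse on the Schur complement of the trailing block.
            l22 = recursive_cholesky(a[k:, k:] - l21 @ l21.T, block)
            l = np.zeros_like(a)
            l[:k, :k] = l11
            l[k:, :k] = l21
            l[k:, k:] = l22
            return l

        # Usage: verify the factorization on a random SPD matrix.
        m = np.random.default_rng(0).normal(size=(300, 300))
        spd = m @ m.T + 300 * np.eye(300)
        L = recursive_cholesky(spd)
        print(np.allclose(L @ L.T, spd))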