
    The Role of the Public Municipality in Urban Regeneration: the Case of Genoa

    The conditions under which urban regeneration processes can develop in modern-day cities have changed enormously over the last decade. Unlike the recent past, when the reuse of former industrial areas for urban purposes was based solely on maximising the amount of built space, profit margins for operators have shrunk since the bursting of the housing bubble in 2008 and the pandemic crisis, and today operators face a sharp contraction in demand and a surplus of supply. Consequently, the framework within which investment decisions are made is increasingly complex and characterised by a potential conflict between two forces: on the one hand, the public administration, which seeks to take full advantage of urban transformation processes to improve the quality of life of citizens; on the other, the private entity, which aims to maximise the profits obtainable from the intervention and to minimise business risk. Therefore, to ensure the overall feasibility of an intervention, urban viability must correspond to economic and financial sustainability. The paper analyses the role of public strategies in urban regeneration interventions through a case study in the city of Genoa. Several urban transformation interventions are currently being implemented in the city; most of them (and the most relevant) are aligned along the border between the city and the port. The role of the public administration is not limited to regulation: the local municipality also acts as a financier (of public works) and as owner of the areas (which it makes available in concession). In this way, an attempt is made to make the city more competitive in the international real estate market, and reducing the risk and cost factors borne by the private investor is essential. The question then arises of how to evaluate the potential public benefits of these transformation operations.

    New Epicenters for Production Development in Port Cities: The Digital Innovation Hub in Genoa

    In the framework of the infrastructural upgrading that the port city of Genoa has been undergoing for at least two decades, the Erzelli Science and Technology Park represents a unicum in terms of geographic location, functional programme, implementation process, and actors involved. Located on the hill of the same name, the Park hosts the Liguria Digital Innovation Hub, responding to a need for the delocalisation and territorial aggregation of large activities related to technology, production, the service sector and scientific research. The contribution explores how the realization of the Park addresses critical issues related to the accessibility and attractiveness of the territories, articulating the theme of development epicenters from a technological, productive and tertiary point of view.

    Towards measuring variations of Casimir energy by a superconducting cavity

    We consider a Casimir cavity, one plate of which is a thin superconducting film. We show that when the cavity is cooled below the critical temperature for the onset of superconductivity, the sharp variation (in the far infrared) of the reflection coefficient of the film engenders a variation in the value of the Casimir energy. Even though the relative variation in the Casimir energy is very small, its magnitude can be comparable to the condensation energy of the superconducting film, and this gives rise to a number of testable effects, including a significant increase in the value of the critical magnetic field required to destroy the superconductivity of the film. The theoretical ground is therefore prepared for the first experiment ever aimed at measuring variations of the Casimir energy itself.
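
    For orientation only, and not as the paper's film-specific calculation, the scale of the effect can be sketched with the textbook Casimir energy of two ideal mirrors of area A at separation d, together with its variation across the superconducting transition:

        % Illustrative sketch: ideal-mirror Casimir energy, not the film-specific result of the paper
        E_C(d) = -\frac{\pi^{2}\hbar c\,A}{720\,d^{3}},
        \qquad
        \delta E_C = E_C^{(T<T_c)} - E_C^{(T>T_c)}.

    The point made in the abstract is that, although the relative variation |\delta E_C| / |E_C| is tiny, |\delta E_C| can be of the same order as the condensation energy of the film, which is why the critical magnetic field of the film can change measurably.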

    How well can we guess theoretical uncertainties?

    The problem of estimating the effect of missing higher orders in perturbation theory is analyzed, with emphasis on the application to Higgs production in gluon-gluon fusion. Well-known mathematical methods for an approximate completion of the perturbative series are applied with the goal of not truncating the series but completing it in a well-defined way, so as to increase the accuracy, if not the precision, of theoretical predictions. The uncertainty arising from the use of the completion procedure is discussed, and a recipe for constructing a corresponding probability distribution function is proposed.
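
    As a purely illustrative sketch of the kind of estimate being formalised here (the geometric continuation below is a common heuristic, not the completion procedure proposed in the paper, and all numerical values are hypothetical):

        # Hypothetical sketch: guess the missing higher orders of a truncated
        # perturbative series sum_n c_n * a**n by continuing the known
        # coefficients geometrically. Not the paper's method; coefficients
        # and coupling value are invented for illustration.

        def estimate_remainder(coeffs, a, extra_orders=10):
            """Return (sum of known terms, geometric guess for the missing tail)."""
            known = sum(c * a**n for n, c in enumerate(coeffs))
            ratio = coeffs[-1] / coeffs[-2]   # crude growth estimate from the last two coefficients
            tail, c = 0.0, coeffs[-1]
            for n in range(len(coeffs), len(coeffs) + extra_orders):
                c *= ratio
                tail += c * a**n
            return known, tail

        if __name__ == "__main__":
            coeffs = [1.0, 9.9, 57.0]   # hypothetical LO, NLO, NNLO coefficients
            a = 0.11                    # hypothetical expansion parameter
            known, tail = estimate_remainder(coeffs, a)
            print(f"truncated sum = {known:.4f}, estimated missing tail = {tail:.4f}")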

    Through precision straits to next standard model heights

    After the LHC Run 1, the standard model (SM) of particle physics has been completed. Yet, despite its successes, the SM has shortcomings vis-à-vis cosmological and other observations. At the same time, while the LHC restarts for Run 2 at 13 TeV, there is presently a lack of direct evidence for new physics phenomena at the accelerator energy frontier. From this state of affairs arises the need for a consistent theoretical framework in which deviations from the SM predictions can be calculated and compared to precision measurements. Such a framework should be able to make comprehensive use of all measurements in all sectors of particle physics, including LHC Higgs measurements, past electroweak precision data, electric dipole moments, g-2, penguins and flavor physics, neutrino scattering, deep inelastic scattering, low-energy e+e- scattering, mass measurements, and any search for physics beyond the SM. By simultaneously describing all existing measurements, this framework then becomes an intermediate step, pointing us toward the next SM and hopefully revealing the underlying symmetries. We review the role that the standard model effective field theory (SMEFT) could play in this context, as a consistent, complete, and calculable generalization of the SM in the absence of light new physics. We discuss the relationship of the SMEFT with the existing kappa-framework for Higgs boson coupling characterization and the use of pseudo-observables, which insulate experimental results from refinements due to ever-improving calculations. The LHC context, as well as that of previous and future accelerators and experiments, is also addressed.
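
    For readers less familiar with the notation, the SMEFT referred to here is conventionally written as the SM Lagrangian extended by higher-dimensional operators suppressed by powers of a new-physics scale \Lambda (a standard schematic expression, not specific to this review):

        % Standard schematic SMEFT expansion
        \mathcal{L}_{\mathrm{SMEFT}}
          = \mathcal{L}_{\mathrm{SM}}
          + \sum_i \frac{c_i^{(5)}}{\Lambda}\,\mathcal{O}_i^{(5)}
          + \sum_i \frac{c_i^{(6)}}{\Lambda^{2}}\,\mathcal{O}_i^{(6)}
          + \mathcal{O}\!\left(\Lambda^{-3}\right).

    The Wilson coefficients c_i are the quantities to be constrained simultaneously by the Higgs, electroweak, flavor and low-energy measurements listed above; the kappa-framework can then be viewed, roughly, as a restricted parametrization of the Higgs-coupling modifications generated by the dimension-six operators.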

    Low energy behaviour of standard model extensions

    The integration of heavy scalar fields is discussed in a class of BSM models containing more than one scalar representation and with mixing. The interplay between integrating out heavy scalars and the Standard Model decoupling limit is examined. In general, the latter cannot be obtained in terms of only one large scale and can only be achieved by imposing further assumptions on the couplings. Systematic low-energy expansions are derived in the more general, non-decoupling scenario, including mixed tree-loop and mixed heavy-light generated operators. The number of local operators is larger than the one usually reported in the literature.
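
    As a generic, textbook-level illustration of what integrating out a heavy scalar produces (not the specific multi-representation models with mixing analysed in the paper), tree-level exchange of a heavy field \Phi of mass M coupled linearly to a light-field current J generates a local operator suppressed by 1/M^2:

        % Textbook tree-level matching sketch, not the paper's models
        \mathcal{L} \supset -\tfrac{1}{2}\,\Phi\left(\Box + M^{2}\right)\Phi + g\,\Phi\,J
        \quad\xrightarrow{\;E \ll M\;}\quad
        \mathcal{L}_{\mathrm{eff}} \supset \frac{g^{2}}{2M^{2}}\,J^{2}
          + \mathcal{O}\!\left(\frac{1}{M^{4}}\right).

    Decoupling corresponds to such contributions vanishing as M grows; the non-decoupling scenarios discussed in the abstract arise when couplings and mixings do not allow this suppression to be controlled by a single large scale.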

    Effect of pre-weaning solid feed and milk intake on caecal content characteristics and performance of rabbits around weaning

    The aim of this study was to determine the effect of different solid feed and milk intakes during suckling on performance around weaning and on caecal content characteristics at weaning. In order to obtain different intakes of milk and solid feed, 13 litters of pregnant females (PF), inseminated the day after delivery, and 14 litters of non-pregnant females (NPF) were compared. At birth the litters were equalized at eight pups, and during lactation dead pups were replaced by pups of the same age from nursing does. Compared to the PF group, rabbits in the NPF group had a higher milk intake (26.0 versus 21.4 g/day; P < 0.01) and a lower solid feed intake (9.1 versus 11.5 g/day; P < 0.01) between 20 and 28 days of age. No significant difference was observed between the two groups in weight gain before or after weaning (28-49 days). At weaning, the rabbits in the PF group showed higher values for caecal content (26.3 versus 22.6 g; P < 0.05) and volatile fatty acids (52.2 versus 43.6 mmol/l; P < 0.01), and lower values for empty caecal weight (7.18 versus 7.78 g; P < 0.05), C3 (6.4 versus 9.3%; P < 0.01) and the C3/C4 ratio (0.39 versus 0.63; P < 0.01), than the NPF group. On the basis of the above results, it may be concluded that the quantity of solid feed and milk ingested before weaning influenced the characteristics of the caecal content, but not the performance of rabbits around weaning.

    Slope and distance from buildings are easy-to-retrieve proxies for estimating livestock site-use intensity in alpine summer pastures

    Regardless of the issue investigated, most research carried out on the summer pastures of the European Alps has had to consider the effects of grazing management, as it is an intrinsic component of the alpine environment. The management intensity of grazing livestock is measured in terms of livestock stocking rate, but a direct measure of it is not always easily retrievable. Therefore, the aim of the research was to test the reliability of proxies easily retrievable from open data sources (i.e. slope and distance from buildings) in approximating pastoral site-use intensity. To test the proxies’ effectiveness, two different approaches were used. With the first, the proxies’ reliability was assessed in a case study conducted at farm scale using the number of positions gathered with GPS collars, which provide a reliable measure of livestock site-use intensity. With the second, the proxies’ reliability was assessed by means of five Vegetation Ecological Groups (VEGs), used as a tool for the indirect quantification of livestock site-use intensity at regional scale (thirty-two alpine valleys of the Western Italian Alps, Piedmont Region, Italy). At farm scale, distance from buildings and slope were both reliable predictors of the number of GPS locations, as assessed with a Generalized Additive Model. Results of Generalized Linear Models at the regional scale showed that the values of both slope and distance from buildings separated the VEGs along the same site-use intensity gradient assessed by modelling the number of GPS locations at farm scale. By testing the proxies’ reliability with both a direct (i.e. GPS collar positions) and an indirect (i.e. VEGs) measurement of livestock site-use intensity, the results indicated that slope and distance from buildings can be considered effective surrogates of the site-use intensity gradient in alpine grasslands managed under livestock grazing. Therefore, when the level of site-use intensity is not directly available in research carried out on alpine summer pastures, a reliable solution is to use terrain slope and distance from buildings, which are easily retrievable from open data sources or can be computed from them.
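
    As an illustration of the type of model referred to in the abstract, the sketch below fits a Poisson GLM of simulated GPS-fix counts on slope and distance from buildings; all data, variable names and coefficients are invented, and the authors' actual specifications (a Generalized Additive Model at farm scale, Generalized Linear Models at regional scale) are not reproduced here.

        # Hypothetical sketch: Poisson GLM of GPS-fix counts on slope and
        # distance from buildings. Simulated data for illustration only.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "slope_deg": rng.uniform(0, 45, n),          # terrain slope (degrees)
            "dist_buildings_m": rng.uniform(0, 2000, n), # distance from buildings (m)
        })
        # simulate counts that decrease with slope and distance (illustration only)
        lam = np.exp(2.0 - 0.04 * df["slope_deg"] - 0.001 * df["dist_buildings_m"])
        df["gps_fixes"] = rng.poisson(lam)

        X = sm.add_constant(df[["slope_deg", "dist_buildings_m"]])
        model = sm.GLM(df["gps_fixes"], X, family=sm.families.Poisson()).fit()
        print(model.summary())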

    A semiparametric bivariate probit model for joint modeling of outcomes in STEMI patients

    In this work we analyse the relationship between in-hospital mortality and a treatment-effectiveness outcome in patients affected by ST-elevation myocardial infarction. The main idea is to jointly model the two outcomes by applying a semiparametric bivariate probit model to data arising from a clinical registry called the STEMI Archive. A realistic quantification of the relationship between the outcomes can be problematic for several reasons. First, latent factors associated with hospital organization can affect treatment efficacy and/or interact with the patient’s condition at admission; moreover, they can also directly influence the mortality outcome. Such factors are hardly measurable, so the use of classical estimation methods will clearly result in inconsistent or biased parameter estimates. Second, covariate-outcome relationships can exhibit nonlinear patterns. Provided that proper statistical methods for model fitting in such a framework are available, it is possible to employ a simultaneous estimation approach to account for unobservable confounders. Such a framework can also accommodate flexible covariate structures and model the whole conditional distribution of the response.
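
    For context, a bivariate probit model for two binary outcomes with correlated latent errors (written here schematically with smooth covariate effects, not as the exact specification used in the paper) takes the form:

        % Schematic bivariate probit with smooth covariate effects
        y_{1i}^{*} = \textstyle\sum_{k} s_{1k}(x_{ki}) + \varepsilon_{1i}, \qquad
        y_{2i}^{*} = \textstyle\sum_{k} s_{2k}(x_{ki}) + \varepsilon_{2i}, \qquad
        y_{ji} = \mathbf{1}\{\, y_{ji}^{*} > 0 \,\},
        \qquad
        (\varepsilon_{1i}, \varepsilon_{2i})^{\top} \sim \mathcal{N}\!\left(\mathbf{0},
        \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\right).

    The correlation \rho between the latent errors is what absorbs unobserved confounders (such as hospital-level factors) affecting both outcomes, while the smooth functions s_{jk} accommodate nonlinear covariate effects; fitting the two equations simultaneously is the simultaneous estimation approach mentioned in the abstract.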