35 research outputs found
General equilibrium in infinite dimensions
This work is dedicated to the study of the theory of general equilibrium in exchange economies of infinite dimension. To this end, we characterize the main difficulties that arise in this context and then present the theorems that constitute the core of the theory: the First Welfare Theorem, the Second Welfare Theorem, an equilibrium existence theorem, and a study of the existence of Pareto optima. Finally, we exhibit examples of economies in which the existence of equilibrium cannot be proved.
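As a point of reference for the theorems listed above, the central object of the theory can be stated as follows; this is the standard textbook definition of a Walrasian equilibrium in an exchange economy, not a result specific to this work.

```latex
% Exchange economy with n consumers, consumption sets X_i,
% preferences \succsim_i and endowments \omega_i (standard definition).
A pair $\bigl(p^*, (x_i^*)_{i=1}^n\bigr)$ is a \emph{Walrasian equilibrium} if
\begin{enumerate}
  \item for each consumer $i$, $x_i^*$ is $\succsim_i$-maximal in the budget set
        $\{x \in X_i : p^* \cdot x \le p^* \cdot \omega_i\}$;
  \item markets clear: $\sum_{i=1}^n x_i^* = \sum_{i=1}^n \omega_i$.
\end{enumerate}
```

In infinite dimension the price $p^*$ lives in the topological dual of the commodity space, which is one source of the difficulties the work characterizes.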
Health impact of the emissions from a refinery: case-control study on the adult population living in two municipalities in Lomellina, Italy
Background: In the municipalities of Sannazzaro de’ Burgondi and Ferrera Erbognone (District of Lomellina, Pavia, Lombardy, Italy), an oil refinery has been operating since 1963. In 2008, the company running the plant (eni S.p.A.) asked the competent bodies for permission to build a new facility (“EST”). The present work aims to evaluate the ante-operam health impact of the existing refinery.
Methods: A case-control study design was implemented. Cases were subjects admitted to hospital in 2002-2014 due to acute respiratory, cardiovascular or gastrointestinal conditions. Controls were selected among those who had not been hospitalised in that timespan. Cases and controls had to be alive at enrolment, aged 20-64 years, and were frequency-matched by age, gender and municipality. Data were extracted from the health insurance registry and from Hospital Discharge Records (ATS Pavia). Enrolled subjects were asked to complete a mailed survey. Environmental exposure was the fallout of refinery emissions (PM10) at participants’ homes, as predicted by an AERMOD model.
Results: 541 respondents (125 cases, 416 controls) were included in the analyses. Response bias was excluded. Individual PM10 exposure did not differ significantly between cases and controls, while it was significantly associated with municipality (being higher in Sannazzaro). The crude effect estimate of PM10 on case/control status indicated a non-significant excess of hospitalisation with increasing PM10 exposure. Multivariate analyses confirmed these results.
Conclusion: Findings indicate a possible excess of hospitalisation risk in the most exposed people, but the effect is not statistically significant and may be affected by bias.
A cohort study to evaluate persistence of hepatitis B immunogenicity after administration of hexavalent vaccines
Background: In 2001, two hexavalent vaccines were licensed in Italy (Hexavac®, Infanrix Hexa®), and from 2002 they were extensively used for primary immunization in the first year of life (at 3, 5, and 11/12 months of age). In 2005, the marketing authorization of Hexavac® was suspended by the EMEA as a precaution, because of doubts about long-term protection against hepatitis B virus. The objectives of this study were to evaluate the persistence of anti-HBs antibodies in children in the third year of life, and to investigate the response to a booster dose of hepatitis B vaccine.
Methods: Children were enrolled concomitantly with the offer of the anti-polio booster dose, in the third year of life. Anti-HBs titers were determined on capillary blood samples. A booster dose of hepatitis B vaccine was administered to children with anti-HBs titers < 10 mIU/ml, using the monovalent precursor product of the previously received hexavalent vaccine. Anti-HBs titers were tested again one month after the booster.
Results: Sera from 113 children previously vaccinated with Hexavac® and from 124 vaccinated with Infanrix Hexa® were tested for anti-HBs. Titers were ≥ 10 mIU/ml in 69% and 96% (p < 0.0001), respectively. The proportion of children with titers ≥ 100 mIU/ml also differed significantly between groups (27% and 78%; p < 0.0001). Post-booster, 93% of children achieved titers ≥ 10 mIU/ml, with no significant difference by vaccine group.
Discussion: Fifteen months after administration of the third dose, a significant difference in anti-HBs titers was noted between the two vaccine groups. Monovalent hepatitis B vaccine administration in 3-year-old children induced a proper booster response, confirming that immunologic memory persists in children with anti-HBs titers < 10 mIU/ml. However, long-term persistence of protection against HBV after hexavalent vaccine administration should be further evaluated over time.
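The seroprotection comparison above (69% vs 96%, p < 0.0001) can be reproduced approximately with a standard two-proportion z-test. The counts below are derived from the reported percentages and group sizes, so they are approximations, and this is an illustration of the comparison, not the authors' exact analysis.

```python
import math

def two_proportion_z_test(k1, n1, k2, n2):
    """Two-sided z-test for the difference of two proportions."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)              # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Approximate counts: ~69% of 113 Hexavac children vs ~96% of 124 Infanrix Hexa children
z, p = two_proportion_z_test(78, 113, 119, 124)
print(f"z = {z:.2f}, p = {p:.2g}")
```

As expected, the difference is strongly significant (p well below 0.0001), matching the abstract's reported result.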
Cholangiocarcinoma and occupational exposure to asbestos: insights from the Italian pooled cohort study
Background: Recent studies supported the association between occupational exposure to asbestos and risk of cholangiocarcinoma (CC). The aim of the present study is to investigate this association using an update of mortality data from the Italian pooled asbestos cohort study, and to test record linkage to Cancer Registries to distinguish between hepatocellular carcinoma (HCC) and intrahepatic/extrahepatic forms of CC. Methods: The update of a large cohort study pooling 52 Italian industrial cohorts of workers formerly exposed to asbestos was carried out. Causes of death were coded according to the ICD. For subjects who died of liver or bile duct cancer, linkage was carried out with data on histological subtype provided by Cancer Registries. Results: 47 cohorts took part in the study (57,227 subjects). We identified 639 deaths from liver and bile duct cancer in the 44 cohorts covered by a Cancer Registry. Of these 639, 240 cases were linked to a Cancer Registry, namely 14 CC, 83 HCC, 117 cases with unspecified histology, 25 other carcinomas, and one case of cirrhosis (a likely precancerous condition). Of the 14 CC, 12 occurred in 2010-2019, two in 2000-2009, and none before 2000. Conclusion: Further studies are needed to explore the association between occupational exposure to asbestos and CC. Record linkage was hampered by incomplete coverage of the study areas and periods by Cancer Registries. The identification of CC among cases with unspecified histology is fundamental to establishing more effective and targeted liver cancer screening strategies.
Acute Delta Hepatitis in Italy spanning three decades (1991–2019): Evidence for the effectiveness of the hepatitis B vaccination campaign
Updated incidence data on acute hepatitis Delta virus (HDV) infection are lacking worldwide. Our aim was to evaluate the incidence of and risk factors for acute HDV in Italy after the introduction of compulsory vaccination against hepatitis B virus (HBV) in 1991. Data were obtained from the National Surveillance System of acute viral hepatitis (SEIEVA). Independent predictors of HDV were assessed by logistic regression analysis. The incidence of acute HDV per 1 million population declined from 3.2 cases in 1987 to 0.04 in 2019, in parallel with that of acute HBV per 100,000, which fell from 10.0 to 0.39 cases over the same period. The median age of cases increased from 27 years in the decade 1991-1999 to 44 years in the decade 2010-2019 (p < .001). Over the same period, the male/female ratio decreased from 3.8 to 2.1, the proportion of coinfections increased from 55% to 75% (p = .003), and the proportion of HBsAg-positive acute hepatitis cases tested for IgM anti-HDV decreased linearly from 50.1% to 34.1% (p < .001). People born abroad accounted for 24.6% of cases in 2004-2010 and 32.1% in 2011-2019. In the period 2010-2019, risky sexual behaviour (O.R. 4.2; 95% CI: 1.4-12.8) was the sole independent predictor of acute HDV; conversely, intravenous drug use was no longer associated with acute HDV (O.R. 1.25; 95% CI: 0.15-10.22). In conclusion, HBV vaccination was an effective measure to control acute HDV. Intravenous drug use is no longer an efficient mode of HDV spread. The declining rate of testing for IgM anti-HDV is a grey area requiring attention. Acute HDV in people born abroad should be monitored in the years to come.
Association of kidney disease measures with risk of renal function worsening in patients with type 1 diabetes
Background: Albuminuria has classically been considered a marker of kidney damage progression in diabetic patients and is routinely assessed to monitor kidney function. However, the role of a mild GFR reduction in the development of stage ≥3 CKD has been less explored in type 1 diabetes mellitus (T1DM) patients. The aim of the present study was to evaluate the prognostic role of kidney disease measures, namely albuminuria and reduced GFR, on the development of stage ≥3 CKD in a large cohort of patients affected by T1DM. Methods: A total of 4284 patients affected by T1DM followed up at 76 diabetes centers participating in the Italian Association of Clinical Diabetologists (Associazione Medici Diabetologi, AMD) initiative constituted the study population. Urinary albumin excretion (ACR) and estimated GFR (eGFR) were retrieved and analyzed. The incidence of stage ≥3 CKD (eGFR < 60 mL/min/1.73 m2) or an eGFR reduction > 30% from baseline was evaluated. Results: The mean estimated GFR was 98 ± 17 mL/min/1.73 m2 and the proportion of patients with albuminuria was 15.3% (n = 654) at baseline. About 8% (n = 337) of patients developed one of the two renal endpoints during the 4-year follow-up period. Age, albuminuria (micro or macro) and baseline eGFR < 90 mL/min/1.73 m2 were independent risk factors for stage ≥3 CKD and renal function worsening. When compared to patients with eGFR > 90 mL/min/1.73 m2 and normoalbuminuria, those with albuminuria at baseline had a 1.69-fold greater risk of reaching stage 3 CKD, while patients with mild eGFR reduction (i.e. eGFR between 90 and 60 mL/min/1.73 m2) showed a 3.81-fold greater risk, which rose to 8.24-fold for patients with both albuminuria and mild eGFR reduction at baseline. Conclusions: Albuminuria and eGFR reduction represent independent risk factors for incident stage ≥3 CKD in T1DM patients. The simultaneous presence of reduced eGFR and albuminuria has a synergistic effect on renal function worsening.
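The composite renal endpoint used in this study (eGFR < 60 mL/min/1.73 m2, or an eGFR drop of more than 30% from baseline) is mechanical enough to sketch in code. The function below is an illustrative reading of that definition, not the study's analysis code.

```python
def reached_renal_endpoint(baseline_egfr, followup_egfr):
    """True if either component of the composite endpoint is met:
    stage >=3 CKD (eGFR < 60 mL/min/1.73 m2) or a >30% drop from baseline."""
    stage3 = followup_egfr < 60
    relative_drop = (baseline_egfr - followup_egfr) / baseline_egfr
    return stage3 or relative_drop > 0.30

print(reached_renal_endpoint(98, 55))   # True: eGFR below 60
print(reached_renal_endpoint(98, 70))   # False: drop ~29%, eGFR still >= 60
print(reached_renal_endpoint(100, 65))  # True: 35% drop despite eGFR >= 60
```

Note that the two components overlap only partially: a patient starting near the cohort mean of 98 can lose 30% of function while remaining above 60, which is why the study tracks both.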
Finishing the euchromatic sequence of the human genome
The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.
The transformations of noise in digital images and their application to image forensics
Images serve as potent information vectors, conveying a wealth of data and insights through visual representations. Their importance in various domains cannot be overstated, as they offer unique advantages for communication, understanding, and documentation. In an era characterized by the pervasive influence of digital imagery, image forensics is a vital discipline that addresses the pressing need to uphold the veracity and trustworthiness of digital visual content. Images are naturally endowed with a fingerprint, embedded during the image formation process. Indeed, the creation of a digital image, from its acquisition at the camera sensor to its final storage, imprints distinct artifacts that serve as a unique signature. The goal of this thesis is to retrieve this fingerprint through noise analysis. Along the camera processing pipeline, the initial Poisson noise is transformed by multiple operations tailored to each image formation process, leading to the final compressed image. As a consequence, noise residuals can yield significant forensic insights. Such cues allow forgery detection. Indeed, though modern manipulations can achieve a high degree of visual fidelity, they concurrently introduce alterations to the intrinsic structure of the image. Such disruptions of the inherent fingerprint are exploited by most forgery detection methods to spot tampered regions. The first part of this thesis focuses on this problem. Here, we propose two methods based on the detection of local inconsistencies of the noise model with respect to a background model. In particular, the Noisesniffer method adopts an a contrario validation step, aiming to control the expected number of false detections. We then explore the possibility of learning the forensic traces by means of deep convolutional networks instead of using hand-crafted features.
Finally, this part ends with the evaluation of forgery detection methods themselves. We propose a methodology and a dataset to study the sensitivity of detection tools to specific traces, as well as their ability to perform detection without semantic cues in the image. Source camera forensics tasks, such as source camera model identification or source device certification, can also be achieved using this fingerprint. Indeed, some of the forensic traces embedded during the image acquisition process are unique to a camera model or device. By isolating such signals, information about the source device can be obtained. The second part of this thesis focuses on these tasks. Here, we explore learning approaches to determine whether a pair of images contains the same forensic traces. In addition, we propose a new statistical approach for source camera certification based on PRNU traces. This approach relies on two hypothesis tests based on local correlations which do not require computing empirical distributions. Still, nothing prevents forgers from hiding the image fingerprint. This is why we devote the final part of this thesis to the analysis of different counter-forensics attacks. Highlighting the limitations of current forensic methods is important so that one knows how much trust can be put in an image, and to encourage the exploration of alternative authentication methods. To this end, we analyze a novel approach recently introduced in the literature for camera trace erasing. This approach relies on an innovative hybrid loss for network training, defined as a combination of three different losses: the embedded similarity loss, the truncated fidelity loss and the cross-identity loss. In addition, we propose a new counter-forensic attack based on diffusion models.
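PRNU-based camera attribution, which the second part of the thesis builds on, comes down to correlating a noise residual extracted from an image with an estimate of a camera's fingerprint. The sketch below shows only the classic global zero-mean normalized correlation with a fixed threshold; the thesis's actual contribution (two hypothesis tests on local correlations) is more refined, and the threshold and toy signals here are arbitrary illustrative values.

```python
import math

def normalized_correlation(residual, fingerprint):
    """Zero-mean normalized cross-correlation between a noise residual
    and a PRNU fingerprint estimate (both flattened to 1-D sequences)."""
    n = len(residual)
    mr = sum(residual) / n
    mf = sum(fingerprint) / n
    num = sum((r - mr) * (f - mf) for r, f in zip(residual, fingerprint))
    den = math.sqrt(sum((r - mr) ** 2 for r in residual) *
                    sum((f - mf) ** 2 for f in fingerprint))
    return num / den

def same_camera(residual, fingerprint, tau=0.05):
    """Illustrative decision rule: declare a match when the correlation
    exceeds an (arbitrary) threshold tau."""
    return normalized_correlation(residual, fingerprint) > tau

fingerprint = [0.2, -0.1, 0.4, -0.3, 0.1, -0.2]        # toy fingerprint estimate
matching_residual = [f + 0.01 for f in fingerprint]    # same pattern, offset
print(same_camera(matching_residual, fingerprint))     # prints True
```

In practice the threshold is calibrated from the distribution of correlations on mismatched pairs; part of the appeal of the thesis's local-correlation tests is precisely avoiding that empirical calibration.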
