
    Consensus-Based Optimization on Hypersurfaces

    This thesis presents a Consensus-Based algorithm for optimization constrained to hypersurfaces. The method is a metaheuristic optimization technique in which a set of interacting particles moves according to a mechanism combining deterministic and stochastic movements to build consensus around a region of the domain where a minimum of the function is located. The dynamics is governed by a system of SDEs and is studied through the formalism of kinetic theory for interacting particle models. First, the system is shown to be well posed and its mean-field limit is formally derived. The consensus mechanism is then studied analytically and computationally, with particular attention to the difficulties that enforcing the constraint entails. Finally, experiments are carried out on classical test functions
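    The core of such a constrained consensus-based scheme can be sketched in a few lines. The following is a minimal illustration only, assuming the unit sphere as the hypersurface and a simple projection step to enforce the constraint; the function names and parameter values are hypothetical, not those used in the thesis:

```python
import numpy as np

def cbo_sphere_step(X, f, alpha=30.0, lam=1.0, sigma=0.3, dt=0.05, rng=None):
    """One step of a consensus-based optimization scheme constrained to the
    unit sphere. X: (N, d) array of particle positions on the sphere."""
    rng = rng or np.random.default_rng(0)
    # Gibbs-type weights concentrate mass on low objective values.
    w = np.exp(-alpha * f(X))
    v = (w[:, None] * X).sum(axis=0) / w.sum()     # consensus point
    # Deterministic drift toward consensus plus scaled isotropic noise.
    drift = lam * (v - X) * dt
    noise = (sigma * np.linalg.norm(X - v, axis=1, keepdims=True)
             * np.sqrt(dt) * rng.standard_normal(X.shape))
    Xn = X + drift + noise
    # Enforce the hypersurface constraint by projecting back onto the sphere.
    return Xn / np.linalg.norm(Xn, axis=1, keepdims=True)

# Toy usage: minimize f(x) = x_0 on the 2-sphere (minimum at (-1, 0, 0)).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
f = lambda X: X[:, 0]
for _ in range(400):
    X = cbo_sphere_step(X, f, rng=rng)
```

    The projection stands in for the more careful constraint-preserving dynamics analyzed in the work; it keeps every particle exactly on the sphere after each step.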

    Repulsion dynamics for uniform Pareto front approximation in multi-objective optimization problems

    Scalarization makes it possible to solve a multi-objective optimization problem by solving many single-objective sub-problems, each uniquely determined by some parameters. In this work, we propose several adaptive strategies to select such parameters in order to obtain a uniform approximation of the Pareto front. This is done by introducing a heuristic dynamics where the parameters interact through a binary repulsive potential. The approach aims to minimize the associated energy potential, which is used to quantify the diversity of the computed solutions. A stochastic component is also added to overcome non-optimal energy configurations. Numerical experiments show the validity of the proposed approach for bi- and tri-objective problems with different Pareto front geometries
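    A minimal sketch of the repulsion idea, assuming scalarization parameters living in [0, 1] (the bi-objective case), an exponential binary potential, and a small stochastic kick; all names and constants here are illustrative, not taken from the paper:

```python
import numpy as np

def repulsion_step(p, step=1e-3, ell=0.05, noise=1e-3, rng=None):
    """One descent step on the pairwise repulsive energy
    E = sum_{i<j} exp(-|p_i - p_j| / ell) over scalarization parameters
    p in [0, 1]. A small stochastic kick helps escape poor configurations."""
    rng = rng or np.random.default_rng(0)
    d = p[:, None] - p[None, :]                 # pairwise differences
    r = np.abs(d) + 1e-12
    # Gradient of E with respect to p_i (diagonal terms vanish: sign(0)=0).
    g = (-np.exp(-r / ell) / ell * np.sign(d)).sum(axis=1)
    p = p - step * g + noise * np.sqrt(step) * rng.standard_normal(p.shape)
    return np.clip(p, 0.0, 1.0)                 # keep parameters admissible

rng = np.random.default_rng(2)
p = rng.uniform(0.4, 0.6, size=20)              # clustered initial parameters
for _ in range(5000):
    p = repulsion_step(p, rng=rng)
```

    Starting from a cluster, the mutual repulsion spreads the parameters out over the whole interval, which is exactly the diversity effect the energy is meant to encode.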

    An adaptive consensus based method for multi-objective optimization with uniform Pareto front approximation

    In this work we are interested in stochastic particle methods for multi-objective optimization. The problem is formulated via parametrized, single-objective sub-problems which are solved simultaneously. To this end, a consensus-based multi-objective optimization method on the search space, combined with an additional heuristic strategy to adapt the parameters during the computation, is proposed. The adaptive strategy aims to distribute the particles uniformly over the image space by using energy-based measures to quantify the diversity of the system. The resulting metaheuristic algorithm is mathematically analyzed using a mean-field approximation, and convergence guarantees towards optimal points are rigorously proven. In addition, a gradient flow structure in the parameter space for the adaptive method is revealed and analyzed. Several numerical experiments show the validity of the proposed stochastic particle dynamics and illustrate the theoretical findings
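    The parametrized sub-problem formulation can be illustrated with weighted-sum scalarization. In this sketch a crude grid search stands in for the consensus dynamics, and the bi-objective function is a toy example, not one from the paper:

```python
import numpy as np

def scalarized(F, w):
    """Weighted-sum scalarization of a vector objective F: R -> R^m."""
    return lambda x: w @ F(x)

# Hypothetical bi-objective problem with minima at x = 0 and x = 1.
F = lambda x: np.array([(x - 0.0) ** 2, (x - 1.0) ** 2])

# Each parameter vector w_k defines one single-objective sub-problem;
# solving all of them yields points spread along the Pareto front.
weights = [np.array([w, 1.0 - w]) for w in np.linspace(0.0, 1.0, 5)]
minima = []
for w in weights:
    g = scalarized(F, w)
    xs = np.linspace(-0.5, 1.5, 2001)           # grid search stand-in
    minima.append(xs[np.argmin([g(x) for x in xs])])
```

    For this quadratic pair the sub-problem minimizer is x* = 1 - w, so the five sub-problems already trace the efficient set uniformly; the adaptive strategy in the paper chooses the weights so that the *image-space* spread, not the parameter spread, becomes uniform.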

    Consensus based optimization with memory effects: random selection and applications

    In this work we extend the class of Consensus-Based Optimization (CBO) metaheuristic methods by considering memory effects and a random selection strategy. The proposed algorithm iteratively updates a population of particles according to a consensus dynamics inspired by social interactions among individuals. The consensus point is computed taking into account the past positions of all particles. While sharing features with the popular Particle Swarm Optimization (PSO) method, the exploratory behavior is fundamentally different and allows better control over the convergence of the particle system. We discuss some implementation aspects which lead to increased efficiency while preserving the success rate of the optimization process. In particular, we show how employing a random selection strategy to discard particles during the computation improves the overall performance. Several benchmark problems and applications to image segmentation and neural network training are used to validate and test the proposed method. A theoretical analysis allows us to recover convergence guarantees under mild assumptions on the objective function. This is done by first approximating the particles' evolution with a continuous-in-time dynamics, and then by taking the mean-field limit of that dynamics. Convergence to a global minimizer is finally proved at the mean-field level
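    A compact sketch of the two ingredients, memory and random selection, with illustrative parameter values and componentwise (anisotropic) noise; this is a toy rendition of the idea, not the authors' implementation:

```python
import numpy as np

def cbo_memory_step(X, P, f, alpha=50.0, lam=1.0, sigma=0.8, dt=0.02, rng=None):
    """CBO step with memory: the consensus point is built from the best
    positions P visited so far, not from the current positions X."""
    rng = rng or np.random.default_rng(0)
    w = np.exp(-alpha * (f(P) - f(P).min()))        # stabilized Gibbs weights
    v = (w[:, None] * P).sum(axis=0) / w.sum()      # consensus of past bests
    X = (X + lam * (v - X) * dt
         + sigma * np.sqrt(dt) * (X - v) * rng.standard_normal(X.shape))
    better = f(X) < f(P)                            # update each particle's memory
    P[better] = X[better]
    return X, P

def random_selection(X, P, frac=0.1, rng=None):
    """Discard a random fraction of particles to cut cost per iteration."""
    rng = rng or np.random.default_rng(0)
    keep = rng.permutation(len(X))[: int(len(X) * (1 - frac))]
    return X[keep], P[keep]

rng = np.random.default_rng(3)
f = lambda X: (X ** 2).sum(axis=1)                  # toy objective, minimum at 0
X = rng.uniform(-3, 3, size=(300, 2))
P = X.copy()
for it in range(200):
    X, P = cbo_memory_step(X, P, f, rng=rng)
    if it % 50 == 49:                               # periodically shrink the swarm
        X, P = random_selection(X, P, rng=rng)
```

    Because the memory P only ever improves, discarding particles does not lose the best solutions found so far, which is what makes the random selection cheap rather than destructive.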

    Abstract models of computation: pointer machines

    We present the class of abstract models of computation known as pointer machines, relating them to the classical computational models. In particular, we ask whether they belong to the class described by the Invariance Thesis. To this end, we illustrate the simulation of a Turing Machine by a Schönhage Machine and present some theorems concerning space usage. These results highlight how pointer machines can be considered more powerful than Turing Machines and RAMs. We then present Kolmogorov-Uspensky Machines, the first pointer machines ever devised, highlighting the line of thought that led to their conception. Finally, we present a biological implementation of them based on Physarum polycephalum, with the aim of showing how this model, thanks to its particular structure, lends itself to being a useful tool for understanding physical systems

    Smart working vs. office work: how does personal exposure to different air pollutants change?

    The COVID-19 pandemic is raging all over the world, with possible structural effects on work: the role of smart working (WFH, Working From Home) is emphasized by the possibility that it could become a standard way of working in many sectors. Several scientific papers have recently analyzed the WFH phenomenon from different perspectives, but no scientific studies have yet been conducted on the differences between WFH and WFO (Working From Office) in terms of personal exposure to selected airborne pollutants. This study therefore aims to evaluate, using portable monitors, the differences in personal exposure to selected airborne pollutants under different working conditions (WFO vs. WFH) over long periods of time (from days to weeks), extending the results to even longer periods (years), in keeping with the approach proposed by the concept of the exposome. The preliminary results of this study refer to three separate phases of the work: (i) re-analysis of literature data via Monte Carlo simulation, and assessment of personal exposure to different air pollutants under different working conditions during (ii) a "long term" and (iii) a "short term" monitoring campaign. During the two measurement campaigns, portable instrumentation was used because of the ability of such instruments to provide data with high spatial and temporal resolution. The re-analysis of the data obtained from the literature shows that, under different conditions, exposure concentrations to different PM fractions are statistically lower under WFH working conditions than under WFO conditions. These results are in contrast with the preliminary results obtained from the exploratory monitoring (both the "long term" and the "short term" campaigns).
    The results obtained from this exploratory monitoring show that the WFH condition has a greater impact on the daily exposure of the monitored subjects than the WFO condition

    Monitor and sensors 2.0 for exposure assessment to airborne pollutants

    In recent years, the issue of exposure assessment to airborne pollutants has become increasingly prominent, both in the occupational and in the environmental field. Increasingly stringent national and international air quality standards, exposure limit values for indoor environments, and occupational exposure limit values have been developed with the aim of protecting the health of the general population and of workers. On the other hand, this requires considerable and continuous development of the technologies used to monitor pollutant concentrations, to ensure the reliability of exposure assessment studies. In this regard, one of the most interesting aspects is certainly the development of "new generation" instrumentation for monitoring airborne pollutants ("Next Generation Monitors and Sensors", NGMS). The main purpose of this work is to analyze the state of the art of the aforementioned instrumentation, in order to investigate practical applications within exposure assessment studies. To this end, a systematic review of the scientific literature was carried out using three different databases (Scopus, PubMed and Web of Knowledge), and the results were analyzed in terms of the objectives set out above. What emerged is that the use of NGMSs for exposure assessment studies in occupational and environmental contexts is growing steadily within the scientific community. The investigated studies emphasized that NGMSs cannot be considered equal, in terms of the reliability of the results, to the reference measurement tools and techniques (i.e., those defined in recognized methods used for regulatory purposes), but they can certainly be integrated into exposure assessment studies to improve their spatial-temporal resolution.
    These tools have the potential to be easily adapted to different types of studies, and are characterized by a small size, which allows them to be worn comfortably without affecting the normal activities of workers or citizens, and by a relatively low cost. Despite this, there is certainly a gap with respect to the reference instrumentation regarding measurement performance and the quality of the data provided; the objective, however, is not to replace traditional instrumentation with NGMSs but to integrate and combine the two types of instruments to benefit from the strengths of both; the desirable future developments in this sense have been discussed in this work

    ERAS program adherence-institutionalization, major morbidity and anastomotic leakage after elective colorectal surgery: the iCral2 multicenter prospective study

    Background Enhanced recovery after surgery (ERAS) programs influence morbidity rates and length of stay after colorectal surgery (CRS), and may also impact major complication and anastomotic leakage rates. A prospective multicenter observational study was carried out to investigate the interactions between adherence to an ERAS program and early outcomes after elective CRS. Methods Patients undergoing elective CRS with anastomosis were prospectively enrolled over 18 months. Adherence to 21 items of the ERAS program was measured against explicit criteria in every case. After univariate analysis, independent predictors of the primary endpoints [major morbidity (MM) and anastomotic leakage (AL) rates] were identified through logistic regression analyses including all significant variables, reported as odds ratios (OR). Results An institutional ERAS protocol was declared by 27 of 38 (71.0%) participating centers. Median overall adherence to the ERAS program items was 71.4%. Among the 3830 patients included in the study, MM and AL rates were 4.7% and 4.2%, respectively. MM rates were independently influenced by intra- and/or postoperative blood transfusions (OR 7.79, 95% CI 5.46-11.10; p < 0.0001) and a standard anesthesia protocol (OR 0.68, 95% CI 0.48-0.96; p = 0.028). AL rates were independently influenced by male gender (OR 1.48, 95% CI 1.06-2.07; p = 0.021), intra- and/or postoperative blood transfusions (OR 4.29, 95% CI 2.93-6.50; p < 0.0001) and non-standard resections (OR 1.49, 95% CI 1.01-2.22; p = 0.049). Conclusions This study disclosed wide room for improvement in compliance with several ERAS program items. It failed to detect any significant association of institutionalization and/or adherence rates to the ERAS program with the primary endpoints. These outcomes were independently influenced by gender, intra- and postoperative blood transfusions, non-standard resections, and the standard anesthesia protocol

    Simultaneous transcranial electrical and magnetic stimulation boost gamma oscillations in the dorsolateral prefrontal cortex

    Neural oscillations in the gamma frequency band have been identified as a foundation of synaptic plasticity dynamics, and their alterations are central in various psychiatric and neurological conditions. Transcranial magnetic stimulation (TMS) and transcranial alternating current stimulation (tACS) may have a strong therapeutic potential by promoting the expression of gamma oscillations and plasticity. Here we applied intermittent theta-burst stimulation (iTBS), an established TMS protocol known to induce LTP-like cortical plasticity, simultaneously with tACS at either theta (theta tACS) or gamma (gamma tACS) frequency on the dorsolateral prefrontal cortex (DLPFC). We used TMS in combination with electroencephalography (EEG) to evaluate changes in cortical activity over both the left/right DLPFC and the vertex. We found that simultaneous iTBS with gamma tACS, but not with theta tACS, resulted in an enhancement of spectral gamma power, a trend toward a shift of the individual peak frequency towards faster oscillations, and an increase of local connectivity in the gamma band. Furthermore, the response to the neuromodulatory protocol, in terms of gamma oscillations and connectivity, was directly correlated with the initial level of cortical excitability. These results were specific to the DLPFC and confined locally to the site of stimulation, not being detectable in the contralateral DLPFC. We argue that these results could lead to a new and effective method for inducing long-lasting changes in brain plasticity that could be clinically applied to several psychiatric and neurological conditions

    Characterization of timing and spatial resolution of novel TI-LGAD structures before and after irradiation

    The characterization of the spatial and timing resolution of the novel Trench Isolated LGAD (TI-LGAD) technology is presented. This technology has been developed at FBK with the goal of achieving 4D pixels, where accurate position resolution is combined in a single device with precise timing determination for Minimum Ionizing Particles (MIPs). In the TI-LGAD technology, the pixelated LGAD pads are separated by physical trenches etched in the silicon. This technology can reduce the interpixel dead area, mitigating the fill-factor problem. The TI-RD50 production studied in this work is the first production of pixelated TI-LGADs. The characterization was performed using a scanning TCT setup with an infrared laser and a ⁹⁰Sr source setup