
    Hamiltonian linearization of the rest-frame instant form of tetrad gravity in a completely fixed 3-orthogonal gauge: a radiation gauge for background-independent gravitational waves in a post-Minkowskian Einstein spacetime

    In the framework of the rest-frame instant form of tetrad gravity, where the Hamiltonian is the weak ADM energy $\hat{E}_{ADM}$, we define a special completely fixed 3-orthogonal Hamiltonian gauge, corresponding to a choice of non-harmonic 4-coordinates, in which the independent degrees of freedom of the gravitational field are described by two pairs of canonically conjugate Dirac observables (DO) $r_{\bar a}(\tau, \vec\sigma)$, $\pi_{\bar a}(\tau, \vec\sigma)$, $\bar a = 1, 2$. We define a Hamiltonian linearization of the theory, i.e. gravitational waves, without introducing any background 4-metric, by retaining only the linear terms in the DO's in the super-hamiltonian constraint (the Lichnerowicz equation for the conformal factor of the 3-metric) and the quadratic terms in the DO's in $\hat{E}_{ADM}$. We solve all the constraints of the linearized theory: this amounts to working in a well-defined post-Minkowskian Christodoulou-Klainerman space-time. The Hamilton equations imply the wave equation for the DO's $r_{\bar a}(\tau, \vec\sigma)$, which replace the two polarizations of the TT harmonic gauge, and that the linearized Einstein equations are satisfied. Finally we study the geodesic equation, both for time-like and null geodesics, and the geodesic deviation equation. Comment: LaTeX (RevTeX3), 94 pages, 4 figures
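    The claim that the Hamilton equations reduce to a wave equation for the Dirac observables can be sketched schematically; the flat d'Alembertian form below is an assumption consistent with the abstract (the precise coefficients are in the paper itself):

    ```latex
    % Hamilton equations for the canonically conjugate Dirac observables
    % (schematic sketch, not the paper's exact expressions)
    \partial_\tau r_{\bar a}(\tau,\vec\sigma) = \pi_{\bar a}(\tau,\vec\sigma),
    \qquad
    \partial_\tau \pi_{\bar a}(\tau,\vec\sigma) = \triangle\, r_{\bar a}(\tau,\vec\sigma),
    \qquad \bar a = 1,2,
    % which combine into the flat wave equation that replaces
    % the two TT-gauge polarizations
    \Box\, r_{\bar a}(\tau,\vec\sigma)
      \equiv \bigl(\partial_\tau^{\,2} - \triangle\bigr)\, r_{\bar a}(\tau,\vec\sigma) = 0 .
    ```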

    Different biological and prognostic breast cancer populations identified by FDG-PET in sentinel node-positive patients: Results and clinical implications after eight-years follow-up

    Abstract

    Background: Sentinel node (SN) biopsy is the standard method to evaluate axillary node involvement in breast cancer (BC). Positron emission tomography with 2-(fluorine-18)-fluoro-2-deoxy-D-glucose (FDG-PET) provides a non-invasive tool to evaluate regional nodes in BC in a metabolic-dependent, biomolecular-related way. In 1999, we initiated a prospective non-randomized study to compare these two methods and to test the hypothesis that FDG-PET results reflect biomolecular characteristics of the primary tumor, thereby yielding valuable prognostic information.

    Patients and methods: A total of 145 cT1N0 BC patients, aged 24–70 years, underwent FDG-PET and lymphoscintigraphy before surgery. SN biopsy was followed in all cases by complete axillary dissection. Pathologic evaluation of tissue sections for involvement of the SN and other non-SN nodes served as the basis of the comparison between FDG-PET imaging and SN biopsy.

    Results: FDG-PET and SN biopsy sensitivity was 72.6% and 88.7%, respectively, and negative predictive values were 80.5% and 92.2%, respectively. A subgroup of more aggressive tumors (ER-GIII, Her2+) was found mainly among FDG-PET true-positive (FDG-PET+) patients, whereas Luminal A, low Mib1-rate BCs went significantly undetected (p = 0.009) in FDG-PET false-negative (FDG-PET−) patients. Kaplan–Meier estimates after a median follow-up of more than 8 years showed significantly worse overall survival for node-positive (N+)/FDG-PET+ patients than for N+/FDG-PET− patients (p = 0.035), whose survival curve overlapped with those of N− patients, whether FDG-PET+ or FDG-PET−.

    Conclusions: Our findings suggest that FDG-PET results reflect intrinsic biologic features of primary BC tumors and have prognostic value with respect to nodal metastases. FDG-PET false-negative cases appear to identify less aggressive, indolent metastases. The possibility of identifying a subgroup of N+ BC patients with an outcome comparable to that of N− BC patients could reduce surgical and adjuvant therapeutic intervention.
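    The diagnostic measures compared in the abstract above (sensitivity and negative predictive value) follow from the standard confusion-matrix definitions; a minimal sketch, with confusion-matrix counts that are purely illustrative and NOT the study's data:

    ```python
    # Sensitivity and negative predictive value (NPV), the two measures
    # used to compare FDG-PET with sentinel-node biopsy.
    # The counts below are hypothetical, for illustration only.

    def sensitivity(tp: int, fn: int) -> float:
        """Fraction of truly node-positive patients the test detects."""
        return tp / (tp + fn)

    def npv(tn: int, fn: int) -> float:
        """Fraction of negative test results that are truly node-negative."""
        return tn / (tn + fn)

    if __name__ == "__main__":
        tp, fn, tn = 45, 17, 70  # hypothetical counts
        print(f"sensitivity = {sensitivity(tp, fn):.1%}")
        print(f"NPV         = {npv(tn, fn):.1%}")
    ```
    
    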

    Confidence-based Optimization for the Newsvendor Problem

    We introduce a novel strategy to address the issue of demand estimation in single-item, single-period stochastic inventory optimisation problems. Our strategy analytically combines confidence interval analysis and inventory optimisation. We assume that the decision maker is given a set of past demand samples, and we employ confidence interval analysis to identify a range of candidate order quantities that, with prescribed confidence probability, includes the real optimal order quantity for the underlying stochastic demand process with unknown stationary parameter(s). In addition, for each candidate order quantity identified, our approach can produce an upper and a lower bound for the associated cost. We apply our novel approach to three demand distributions in the exponential family: binomial, Poisson, and exponential. For two of these distributions we also discuss the extension to the case of unobserved lost sales. Numerical examples are presented in which we show how our approach complements existing frequentist (e.g. based on maximum likelihood estimators) or Bayesian strategies. Comment: Working draft
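    The idea of mapping a confidence interval for an unknown demand parameter into a range of candidate order quantities can be sketched for the exponential case (one of the three distributions mentioned). The interface and the normal-approximation interval below are our own illustrative choices, not the paper's exact construction:

    ```python
    # Confidence-based newsvendor sketch for exponential demand.
    # cu/co are the unit underage/overage costs; the critical fractile
    # beta = cu / (cu + co) gives the optimal quantile of demand.
    import math
    from statistics import NormalDist, mean

    def candidate_order_quantities(samples, cu, co, confidence=0.95):
        """Map a confidence interval for the unknown exponential mean into
        a range of candidate order quantities bracketing the true optimum."""
        beta = cu / (cu + co)              # critical fractile
        mu_hat = mean(samples)             # MLE of the exponential mean
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
        half = z * mu_hat / math.sqrt(len(samples))  # approx. CI half-width
        mu_lo, mu_hi = max(mu_hat - half, 1e-9), mu_hat + half
        # Exponential quantile: q*(mu) = mu * ln(1 / (1 - beta))
        q = lambda mu: mu * math.log(1.0 / (1.0 - beta))
        return q(mu_lo), q(mu_hat), q(mu_hi)  # lower, point estimate, upper

    lo, point, hi = candidate_order_quantities(
        samples=[8.2, 11.5, 9.7, 14.1, 7.3, 10.8], cu=3.0, co=1.0)
    print(f"candidate order quantities: [{lo:.2f}, {hi:.2f}], point {point:.2f}")
    ```

    With more samples the interval tightens around the point estimate, mirroring the paper's observation that the candidate range shrinks as the demand parameter becomes better identified.
    
    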

    Confidence-based Reasoning in Stochastic Constraint Programming

    In this work we introduce a novel approach, based on sampling, for finding assignments that are likely to be solutions to stochastic constraint satisfaction and constraint optimisation problems. Our approach reduces the size of the original problem being analysed; by solving this reduced problem, with a given confidence probability, we obtain assignments that satisfy the chance constraints in the original model within prescribed error tolerance thresholds. To achieve this, we blend concepts from stochastic constraint programming and statistics. We discuss both exact and approximate variants of our method. The framework we introduce can be immediately employed in concert with existing approaches for solving stochastic constraint programs. A thorough computational study on a number of stochastic combinatorial optimisation problems demonstrates the effectiveness of our approach. Comment: 53 pages, working draft
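    A minimal sketch of the sampling idea described above: verify a chance constraint by Monte Carlo over sampled scenarios, with the sample size chosen via the Hoeffding bound so the estimate is within a prescribed tolerance at a prescribed confidence. The scenario model (uniform random demand) and thresholds are illustrative assumptions, not the paper's method:

    ```python
    # Sample-based chance-constraint check with a Hoeffding sample size.
    import math
    import random

    def scenarios_needed(epsilon: float, delta: float) -> int:
        """Hoeffding: n >= ln(2/delta) / (2 epsilon^2) scenarios bound the
        estimation error of a Bernoulli mean by epsilon w.p. 1 - delta."""
        return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

    def satisfies_chance_constraint(order_qty, alpha, epsilon=0.02,
                                    delta=0.05, rng=None):
        """Check P(demand <= order_qty) >= alpha within tolerance epsilon,
        assuming demand ~ Uniform(0, 100) as a toy scenario generator."""
        rng = rng or random.Random(42)  # fixed seed for reproducibility
        n = scenarios_needed(epsilon, delta)
        hits = sum(rng.uniform(0, 100) <= order_qty for _ in range(n))
        return hits / n >= alpha - epsilon

    print(scenarios_needed(0.02, 0.05))          # sampled scenarios required
    print(satisfies_chance_constraint(90, 0.8))  # true prob here is 0.9
    ```
    
    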

    Recurrence and mortality according to Estrogen Receptor status for breast cancer patients undergoing conservative surgery. Ipsilateral breast tumour recurrence dynamics provides clues for tumour biology within the residual breast

    BACKGROUND: The study was designed to determine how tumour hormone receptor status affects the subsequent pattern over time (dynamics) of breast cancer recurrence and death following conservative primary breast cancer resection. METHODS: The time spans from primary resection until both first recurrence and death were considered among 2825 patients undergoing conservative surgery with or without breast radiotherapy. The hazard rates for ipsilateral breast tumour recurrence (IBTR), distant metastasis (DM) and mortality throughout 10 years of follow-up were assessed. RESULTS: DM dynamics displays the same bimodal pattern (first early peak at about 24 months, second late peak at the sixth-seventh year) for both estrogen receptor (ER) positive (P) and negative (N) tumours and for all local treatments and metastatic sites. The hazard rates for IBTR maintain the bimodal pattern for ERP and ERN tumours; however, each IBTR recurrence peak for ERP tumours is delayed in comparison to the corresponding timing of recurrence peaks for ERN tumours. Mortality dynamics is markedly different for ERP and ERN tumours, with more early deaths among patients with ERN than among patients with ERP primary tumours. CONCLUSION: DM dynamics is not influenced by the extent of conservative primary tumour resection and is similar for both ER phenotypes across different metastatic sites, suggesting similar mechanisms for tumour development at distant sites despite apparently different microenvironments. The IBTR risk peak delay observed in ERP tumours is an exception to the common recurrence risk rhythm. This suggests that the microenvironment within the residual breast tissue may enforce more stringent constraints upon ERP breast tumour cell growth than other tissues, prolonging the latency of IBTR. This local environment is, however, apparently less constraining to ERN cells, as IBTR dynamics is similar to the corresponding recurrence dynamics among other distant tissues.
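    The hazard rates analysed in the abstract above are, in discrete (life-table) form, per-interval conditional event probabilities. A minimal sketch; the event and at-risk counts below are invented solely to illustrate a bimodal shape and are NOT the study's data:

    ```python
    # Discrete hazard: h_i = events_i / at_risk_i for each follow-up interval,
    # i.e. the conditional probability of recurrence in interval i given the
    # patient was recurrence-free at its start.

    def hazard_rates(events, at_risk):
        return [e / n for e, n in zip(events, at_risk)]

    # Hypothetical 6-month intervals over the follow-up period.
    events  = [10, 30, 18, 8, 5, 4, 6, 12, 20, 9]
    at_risk = [1000, 980, 940, 915, 905, 898, 892, 884, 868, 840]
    rates = hazard_rates(events, at_risk)
    # The two highest-hazard intervals correspond to the two modes
    # of a bimodal recurrence pattern.
    peak_intervals = sorted(sorted(range(len(rates)), key=rates.__getitem__)[-2:])
    print([f"{h:.4f}" for h in rates])
    print(peak_intervals)
    ```
    
    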

    Intelligenza artificiale e sicurezza: opportunità, rischi e raccomandazioni (Artificial intelligence and security: opportunities, risks, and recommendations)

    AI (artificial intelligence) is a discipline that has expanded rapidly in recent years and will continue to do so in the near future. Yet AI has studied the emulation of intelligence by machines, understood as software and in some cases hardware, since 1956. AI arose from the idea of building machines that, inspired by processes related to human intelligence, are able to solve complex problems for which some kind of intelligent reasoning is usually believed to be required. The main current area of AI research and application is machine learning (algorithms that learn and adapt based on the data they receive), which has found wide application in recent years thanks to neural networks (mathematical models composed of artificial neurons), which in turn have enabled the emergence of deep learning (neural networks of greater complexity). The AI world also includes expert systems, computer vision, speech recognition, natural language processing, advanced robotics, and some cybersecurity solutions.
    When it comes to AI, some are enthusiastic about the opportunities, while others are concerned, fearing futuristic technologies of a world in which robots will replace humans, take away their jobs, and make decisions for them. In reality, AI is already widely used in many fields, for example in cell phones, smart objects (IoT), Industry 4.0, smart cities, cybersecurity systems, autonomous driving systems (driving or parking assistants), and chatbots on various websites; these are just a few examples, all based on typical artificial intelligence algorithms. Thanks to AI, companies can gain a variety of advantages in providing advanced, personalized services, predicting trends, anticipating user choices, and so on.
    But not all that glitters is gold: there are sometimes technical problems, ethical questions, security risks, and standards and legislation that are not entirely clear. Organizations already adopting AI-based solutions, or those planning to do so, could benefit from this publication to learn more about the opportunities, the risks, and the related countermeasures. Clusit's Community for Security hopes that this publication will provide readers with a useful overview of a technology, artificial intelligence, that will increasingly accompany us in our personal, social, and working lives.