    Impact of response evaluation for resectable esophageal adenocarcinoma – A retrospective cohort study

    Introduction: The standard treatment concept for patients with locally advanced adenocarcinoma of the esophagogastric junction is neoadjuvant chemotherapy followed by tumor resection with curative intent. Response evaluation of neoadjuvant chemotherapy using the histopathological tumor regression grade (TRG) has been shown to be a prognostic factor in patients with esophageal cancer. Methods: We assessed the impact of the various methods of response evaluation and their value relative to established prognostic factors in a cohort of patients with adenocarcinoma of the gastroesophageal junction treated with neoadjuvant chemotherapy. Results: After neoadjuvant chemotherapy, 56 consecutive patients with locally advanced (T2/3/4 and/or N0/N1) esophageal adenocarcinoma underwent oncologic tumor resection with curative intent. Median follow-up was 44 months. Histopathological tumor stages were stage 0 in 10.7%, stage I in 17.9%, stage II in 21.4%, stage III in 41.1%, and stage IV in 8.9%. The 3-year overall survival (OS) rate was 30.3%. In univariate analysis, ypN status, histopathological tumor stage, and tumor regression grade correlated significantly with overall survival (p = 0.022, p = 0.001, and p = 0.035, respectively). Clinical response evaluation could not predict response or overall survival (p = 0.556 and p = 0.254, respectively). Conclusion: After preoperative chemotherapy, outcomes of esophageal carcinoma are best predicted using pathological tumor stage and histologic tumor regression. Clinical response assessment was not useful for guiding treatment.
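
    As an illustrative aside (not part of the study), the kind of univariate survival analysis reported above, Kaplan-Meier estimates stratified by tumor regression grade compared with a log-rank test, can be sketched in Python with the third-party lifelines package; the column names and toy data below are assumptions, not the study's dataset.

        # Hypothetical sketch: Kaplan-Meier curves per tumor regression grade (TRG)
        # plus a log-rank test, analogous to the univariate analysis in the abstract.
        # Data are made up for illustration only.
        import pandas as pd
        from lifelines import KaplanMeierFitter
        from lifelines.statistics import multivariate_logrank_test

        df = pd.DataFrame({
            "months": [12, 44, 30, 8, 60, 25, 40, 15],  # follow-up time (months)
            "event":  [1, 0, 1, 1, 0, 1, 0, 1],         # 1 = death observed, 0 = censored
            "trg":    [1, 1, 2, 3, 1, 3, 2, 3],         # tumor regression grade (assumed coding)
        })

        kmf = KaplanMeierFitter()
        for grade, group in df.groupby("trg"):
            kmf.fit(group["months"], group["event"], label=f"TRG {grade}")
            print(f"TRG {grade}: median survival = {kmf.median_survival_time_} months")

        # Log-rank test across the TRG strata (cf. the reported p = 0.035 for TRG)
        result = multivariate_logrank_test(df["months"], df["trg"], df["event"])
        print(f"log-rank p-value: {result.p_value:.3f}")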

    Modeling the Effects of Introducing Low Impact Development in a Tropical City: a Case Study from Joinville, Brazil

    In tropical countries like Brazil, fast and uncontrolled urbanization, together with high rainfall intensities, makes flooding a frequent event. The implementation of decentralized stormwater controls is a promising strategy aiming to reduce surface runoff and pollution through retention, infiltration, filtration, and evapotranspiration of stormwater. Although the application of such controls has increased in recent years in developed countries, they are still not a common approach in developing countries such as Brazil. In this paper we evaluate to what extent different low impact development (LID) techniques are able to reduce the flood risk in an area of high rainfall intensities in a coastal region of southern Brazil. Feasible scenarios of placing LID units throughout the catchment were developed, analyzed with a hydrodynamic solver, and compared against the baseline scenario to evaluate the potential for flood mitigation. Results show that the performance improvements of the different LID scenarios are highly dependent on the rainfall events. On average, a total flood volume reduction between 30% and 75% could be achieved for the seven LID scenarios. For this case study, the best results were obtained when using a combination of central and decentralized LID units, namely detention ponds, infiltration trenches, and rain gardens.
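
    As a hedged illustration of the scenario comparison described above, the percentage flood-volume reduction of each LID scenario relative to the baseline can be computed as follows; the scenario names and volumes are placeholders, not results from the paper.

        # Hypothetical sketch of the baseline-vs-scenario comparison: total flood
        # volume per LID scenario is compared against the baseline to obtain the
        # percentage reduction. All numbers are placeholders, not study results.
        baseline_flood_m3 = 10_000.0

        scenario_flood_m3 = {
            "rain gardens only": 6_800.0,
            "infiltration trenches only": 6_000.0,
            "detention ponds only": 4_500.0,
            "combined central + decentralized": 2_500.0,
        }

        for name, volume in scenario_flood_m3.items():
            reduction = 100.0 * (baseline_flood_m3 - volume) / baseline_flood_m3
            print(f"{name}: {reduction:.0f}% flood volume reduction vs. baseline")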

    The influence of task difficulty on engagement, performance and self-efficacy

    My research examined the impact of a person’s belief about their own capabilities and how this influences their performance. In order to examine this I needed a task that was both relatively enjoyable, so that participants would engage with it in their own free time without pressure to do so, and not heavily linked to a particular subject, as this would influence performance. That is the line of thinking that led to a PhD examining self-efficacy theory by getting hundreds of children to play Pacman, a popular arcade game.

    State of the Art on Neural Rendering

    Efficient rendering of photo-realistic virtual worlds is a long-standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. This state-of-the-art report is focused on the many important use cases for the described algorithms, such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.
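
    To make the central idea concrete, the following toy sketch (an illustrative assumption, not an algorithm from the report) optimizes a scene parameter by backpropagating an image-space loss through a differentiable renderer, here a soft disc rendered with PyTorch.

        # Toy differentiable rendering: recover the centre of a soft disc by
        # backpropagating an image-space loss through the renderer.
        import torch

        H = W = 64
        ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                                torch.linspace(0, 1, W), indexing="ij")

        def render(center, radius=0.25, sharpness=25.0):
            """Differentiable 'renderer': soft silhouette of a disc, values in [0, 1]."""
            d = torch.sqrt((xs - center[0]) ** 2 + (ys - center[1]) ** 2 + 1e-8)
            return torch.sigmoid(sharpness * (radius - d))

        target = render(torch.tensor([0.6, 0.4])).detach()     # the "photo" to match
        center = torch.tensor([0.4, 0.6], requires_grad=True)  # initial scene estimate

        opt = torch.optim.Adam([center], lr=0.05)
        for _ in range(300):
            opt.zero_grad()
            loss = torch.mean((render(center) - target) ** 2)  # image-space loss
            loss.backward()                                    # gradients flow through the renderer
            opt.step()

        print(center.detach())  # approaches the target centre [0.6, 0.4]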

    Challenges in QCD matter physics - The Compressed Baryonic Matter experiment at FAIR

    Substantial experimental and theoretical efforts worldwide are devoted to exploring the phase diagram of strongly interacting matter. At LHC and top RHIC energies, QCD matter is studied at very high temperatures and nearly vanishing net-baryon densities. There is evidence that a Quark-Gluon Plasma (QGP) was created in experiments at RHIC and the LHC. The transition from the QGP back to the hadron gas is found to be a smooth crossover. For larger net-baryon densities and lower temperatures, it is expected that the QCD phase diagram exhibits a rich structure, such as a first-order phase transition between hadronic and partonic matter that terminates in a critical point, or exotic phases like quarkyonic matter. The discovery of these landmarks would be a breakthrough in our understanding of the strong interaction and is therefore a focus of various high-energy heavy-ion research programs. The Compressed Baryonic Matter (CBM) experiment at FAIR will play a unique role in the exploration of the QCD phase diagram in the region of high net-baryon densities, because it is designed to run at unprecedented interaction rates. High-rate operation is the key prerequisite for high-precision measurements of multi-differential observables and of rare diagnostic probes which are sensitive to the dense phase of the nuclear fireball. The goal of the CBM experiment at SIS100 (sqrt(s_NN) = 2.7 - 4.9 GeV) is to discover fundamental properties of QCD matter: the phase structure at large baryon chemical potentials (mu_B > 500 MeV), effects of chiral symmetry, and the equation of state at the high densities expected to occur in the cores of neutron stars. In this article, we review the motivation for and the physics programme of CBM, including activities before the start of data taking in 2022, in the context of the worldwide efforts to explore high-density QCD matter.
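
    As a back-of-the-envelope illustration (not taken from the article), the quoted collision energies can be related to fixed-target beam energies per nucleon via the standard relation s_NN = 2*m_N^2 + 2*m_N*E_lab:

        # Fixed-target kinematics: s_NN = 2*m_N**2 + 2*m_N*E_lab, so the quoted
        # sqrt(s_NN) range maps to a beam kinetic energy per nucleon T = E_lab - m_N.
        # Illustrative calculation only; not taken from the article.
        m_N = 0.938  # nucleon mass in GeV

        def beam_kinetic_energy(sqrt_s_nn):
            s = sqrt_s_nn ** 2
            e_lab = (s - 2 * m_N ** 2) / (2 * m_N)  # total beam energy per nucleon (GeV)
            return e_lab - m_N                      # kinetic energy per nucleon (GeV)

        for sqrt_s in (2.7, 4.9):
            print(f"sqrt(s_NN) = {sqrt_s} GeV  ->  T_beam ~ {beam_kinetic_energy(sqrt_s):.1f} GeV per nucleon")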

    The learning styles neuromyth: when the same term means different things to different teachers

    Although learning styles (LS) have been recognised as a neuromyth, they remain a virtual truism within education. A point of concern is that the term LS has been used within theories that describe them using completely different notions and categorisations. This is the first empirical study to investigate education professionals’ conceptualisation of LS, as well as their means of identifying and implementing LS in the classroom. A sample of 123 education professionals were administered a questionnaire consisting of both closed- and open-ended questions. Responses were analysed using thematic analysis. LS were found to be conceptualised mainly within the Visual-Auditory-(Reading)-Kinaesthetic (VAK/VARK) framework, as well as Gardner’s multiple intelligences. Moreover, many education professionals confused theories of learning (e.g., behavioural or cognitive theories) with LS. In terms of identifying LS, educators reported using a variety of methods, ranging from observation and everyday contact to the use of tests. The ways LS were implemented in the classroom were numerous, comprising various teaching aids, participatory techniques and motor activities. Overall, we argue that the widespread use of the term LS gives the illusion of a consensus amongst educators, when a closer examination reveals that the term is conceptualised, identified and implemented idiosyncratically by different individuals. This study aims to be of use to pre-service and in-service teacher educators in their effort to debunk the neuromyth of LS and replace it with evidence-based practices.