    A Case of Definitive Therapy for Localised Prostate Cancer: Report of a Urological Nightmare

    Radical prostatectomy, external beam radiotherapy and permanent brachytherapy are the most common treatment options for nonmetastatic localised adenocarcinoma of the prostate (PCa). Accurate pretherapeutic clinical staging is difficult: the number of positive cores after biopsy does not necessarily reflect the extent of the cancer, and postoperative upgrading of the Gleason score is frequently observed. Even in a localised setting, a proportion of patients with organ-confined PCa will develop biochemical progression. In case of a rising PSA level after radiation, the majority of patients will receive androgen deprivation therapy, which must be considered palliative. If local or systemic disease progression is associated with evolving neuroendocrine differentiation, hormonal manipulation becomes increasingly ineffective; radiotherapy and systemic chemotherapy with a platinum agent and etoposide are recommended. In case of local progression, complications such as pelvic pain, gross haematuria, infravesical obstruction, and rectal invasion with obstruction and consequent ileus may occur. In this situation palliative radical surgery is a therapeutic option, especially in the absence of distant metastases. A case with local and later systemic progression after permanent brachytherapy is presented here.

    Granularity in Large-Scale Parallel Functional Programming

    This thesis demonstrates how to reduce the runtime of large non-strict functional programs using parallel evaluation. The parallelisation of several programs shows the importance of granularity, i.e. the computation costs of program expressions. The aspect of granularity is studied both on a practical level, by presenting and measuring runtime granularity-improvement mechanisms, and at a more formal level, by devising a static granularity analysis. By parallelising several large functional programs this thesis demonstrates for the first time the advantages of combining lazy and parallel evaluation on a large scale: laziness aids modularity, while parallelism reduces runtime. One of the parallel programs is the Lolita system which, with more than 47,000 lines of code, is the largest existing parallel non-strict functional program. A new mechanism for parallel programming, evaluation strategies, to which this thesis contributes, is shown to be useful in this parallelisation. Evaluation strategies simplify parallel programming by separating algorithmic code from code specifying dynamic behaviour. For large programs the abstraction provided by functions is maintained by using a data-oriented style of parallelism, which defines parallelism over intermediate data structures rather than inside the functions. A highly parameterised simulator, GRANSIM, has been constructed collaboratively and is discussed in detail in this thesis. GRANSIM is a tool for architecture-independent parallelisation and a testbed for implementing runtime-system features of the parallel graph reduction model. By providing an idealised as well as an accurate model of the underlying parallel machine, GRANSIM has proven to be an essential part of an integrated parallel software engineering environment. Several parallel runtime-system features, such as granularity-improvement mechanisms, have been tested via GRANSIM. It is publicly available and in active use at several universities worldwide. In order to provide granularity information this thesis presents an inference-based static granularity analysis. This analysis combines two existing analyses, one for cost and one for size information. It determines an upper bound for the computation costs of evaluating an expression in a simple strict higher-order language. By exposing recurrences during cost reconstruction and using a library of recurrences and their closed forms, it is possible to infer the costs of some recursive functions. The possible performance improvements are assessed by measuring the parallel performance of a hand-analysed and annotated program.
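    As a rough illustration of the idea (a minimal sketch using the modern Control.Parallel.Strategies library rather than the exact formulation in the thesis), evaluation strategies leave the algorithmic code untouched while a separate combinator specifies the dynamic behaviour:

        import Control.Parallel.Strategies (parList, rseq, using)

        -- The algorithm (map f) is unchanged; `using` attaches a strategy
        -- that sparks each list element for parallel evaluation to weak
        -- head normal form, keeping behaviour and algorithm separate.
        parMap' :: (a -> b) -> [a] -> [b]
        parMap' f xs = map f xs `using` parList rseq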

    The Jomini Engine: a historical MMORPG framework

    This short paper discusses the design of the JominiEngine, a serious-game engine for massively multiplayer online role-playing games (MMORPGs), designed as an educational tool for learning history. The main design principles of the game engine are accuracy of the historical model; flexibility in the scope of content modelling, so as to cover a wide range of historical periods; cooperative team-play embedded in a competitive game, in order to reflect the historical context; and high security in the interaction with the underlying game engine.

    PAEAN: portable and scalable runtime support for parallel Haskell dialects

    Over time, several competing approaches to parallel Haskell programming have emerged. Different approaches support parallelism at widely different scales, ranging from small multicores to massively parallel high-performance computing systems. They also provide varying degrees of control, ranging from completely implicit approaches to ones providing full programmer control. Most current designs assume a shared memory model at the programmer, implementation and hardware levels. This is, however, becoming increasingly divorced from the reality at the hardware level. It also imposes significant unwanted runtime overheads in the form of garbage collection, synchronisation, etc. What is needed is an easy way to abstract over the implementation and hardware levels, while presenting a simple parallelism model to the programmer. The PArallEl shAred Nothing (PAEAN) runtime system design aims to provide a portable and high-level shared-nothing implementation platform for parallel Haskell dialects. It abstracts over major issues such as work distribution and data serialisation, consolidating existing, successful designs into a single framework. It also provides an optional virtual shared-memory programming abstraction for (possibly) shared-nothing parallel machines, such as modern multicore/manycore architectures or cluster/cloud computing systems. It builds on, unifies and extends existing well-developed support for shared-memory parallelism that is provided by the widely used GHC Haskell compiler. This paper summarises the state of the art in shared-nothing parallel Haskell implementations, introduces the PArallEl shAred Nothing abstractions, shows how they can be used to implement three distinct parallel Haskell dialects, and demonstrates that good scalability can be obtained on recent parallel machines.
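    For reference, the abstract notes that PAEAN builds on GHC's existing shared-memory parallelism support. The sketch below shows GHC's standard Control.Parallel sparking primitives (not the PAEAN interface itself), i.e. the baseline that a shared-nothing runtime must generalise across machines without shared memory:

        import Control.Parallel (par, pseq)

        -- `par` sparks its first argument for possible parallel evaluation
        -- by an idle capability; `pseq` forces its first argument before
        -- returning the second, fixing the evaluation order.
        parFib :: Int -> Int
        parFib n
          | n < 2     = n
          | otherwise = x `par` (y `pseq` (x + y))
          where
            x = parFib (n - 1)
            y = parFib (n - 2)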

    Brand identity and market coverage of Corporación Quirúrgica Oncológica SAC, Surquillo 2019

    The general objective of this research was to determine the relationship between brand identity and the market coverage of Corporación Quirúrgica Oncológica SAC – 2019. Regarding brand identity, Aaker (2014) holds that it is a unique set of associations that the strategist aspires to create or maintain; these associations represent the brand's reason for being and imply a promise from the members of the organisation to its customers. Likewise, regarding market coverage, Munuera and Rodríguez (2015) state that it is the key to avoiding the risk of commercial myopia; that is, the company must include in its strategic reflection all substitute products, all groups of buyers, and the generic need that is satisfied. The research followed a quantitative approach with a non-experimental, basic, descriptive, cross-sectional and correlational design, using the survey as its technique and the questionnaire as its instrument, which helped give validity and reliability to the research; SPSS 22 was also used. The results obtained after processing and analysing the data indicate that brand identity is related to the market coverage of Corporación Quirúrgica Oncológica – Lima 2019.

    Dielectric Characterization of a Nonlinear Optical Material

    Batisite was reported to be a nonlinear optical material showing second-harmonic generation. Using dielectric spectroscopy and polarization measurements, we provide a thorough investigation of the dielectric and charge-transport properties of this material. Batisite shows the typical characteristics of a linear lossy dielectric. No evidence for ferro- or antiferroelectric polarization is found. As the second-harmonic generation observed in batisite points to a non-centrosymmetric structure, this material is piezoelectric, but most likely not ferroelectric. In addition, we found evidence for hopping transport of localized charge carriers and a relaxational process at low temperatures.

    HPC-GAP: engineering a 21st-century high-performance computer algebra system

    Symbolic computation has underpinned a number of key advances in Mathematics and Computer Science. Applications are typically large and potentially highly parallel, making them good candidates for parallel execution at a variety of scales, from multi-core to high-performance computing systems. However, much existing work on parallel computing is based around numeric rather than symbolic computations. In particular, symbolic computing presents specific problems in terms of varying granularity and irregular task sizes that do not match conventional approaches to parallelisation. It also presents problems in terms of the structure of the algorithms and data. This paper describes a new implementation of the free open-source GAP computational algebra system that places parallelism at the heart of the design, dealing with the key scalability and cross-platform portability problems. We provide three system layers that deal with the three most important classes of hardware: individual shared-memory multi-core nodes, mid-scale distributed clusters of (multi-core) nodes, and full-blown HPC systems comprising large-scale tightly-connected networks of multi-core nodes. This requires us to develop new cross-layer programming abstractions in the form of new domain-specific skeletons that allow us to seamlessly target different hardware levels. Our results show that, using our approach, we can achieve good scalability and speedups for two realistic exemplars, on high-performance systems comprising up to 32,000 cores, as well as on ubiquitous multi-core systems and distributed clusters. The work reported here paves the way towards full-scale exploitation of symbolic computation by high-performance computing systems, and we demonstrate the potential with two major case studies.
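    As an illustrative sketch only (the HPC-GAP skeletons are exposed in the GAP language, and the names below are hypothetical), a domain-specific skeleton of the kind the abstract describes can be pictured as a higher-order divide-and-conquer function that hides where and how subproblems are evaluated:

        import Control.Parallel.Strategies (parList, rseq, runEval)

        -- Callers supply only the algebra of their problem; the skeleton
        -- owns the parallelism, so the same call could in principle be
        -- retargeted at multicore, cluster or HPC back ends.
        divConq :: (p -> Bool)   -- is the problem trivially solvable?
                -> (p -> r)      -- solve a trivial problem directly
                -> (p -> [p])    -- split into independent subproblems
                -> ([r] -> r)    -- combine the subresults
                -> p -> r
        divConq trivial solve split combine = go
          where
            go p
              | trivial p = solve p
              | otherwise = combine (runEval (parList rseq (map go (split p))))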