
    Estimation of Organ and Effective Dose due to Compton Backscatter Security Scans

    Purpose: To estimate organ and effective radiation doses due to backscatter security scanners using Monte Carlo simulations and a voxelized phantom set. Methods: Voxelized phantoms of male and female adults and children were used with the GEANT4 toolkit to simulate a backscatter security scan. The backscatter system was modeled based on specifications available in the literature. The simulations modeled a 50 kVp spectrum with 1.0 mm aluminum-equivalent filtration and a previously measured exposure of approximately 4.6 μR at 30 cm from the source. Photons and secondary particles were tracked from the source until they reached zero kinetic energy or exited the simulation boundaries. The energy deposited in the phantoms’ respective organs was tallied and used to calculate total organ dose and total effective dose for frontal, rear, and full scans with subjects located 30 and 75 cm from the source. Results: For a full screen, all phantoms’ total effective doses were below the established 0.25 μSv standard, with an estimated maximum total effective dose of 0.07 μSv for a full screen of the male child phantom. The estimated maximum organ dose due to a full screen was 1.03 μGy, deposited in the adipose tissue of the male child phantom located 30 cm from the source. All organ dose estimates had a coefficient of variation of less than 3% for a frontal scan and less than 11% for a rear scan. Conclusions: Backscatter security scanners deposit dose in organs beyond the skin. The effective dose is below the recommended standards set by the Health Physics Society (HPS) and the American National Standards Institute (ANSI), assuming the system delivers a maximum exposure of approximately 4.6 μR at 30 cm.
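
    The effective-dose figures quoted above follow the standard weighted sum over organ doses, E = Σ_T w_T H_T. A minimal Python sketch of that bookkeeping is shown below; the tissue weighting factors are the ICRP 103 values (which may differ from the weighting scheme the authors actually used), the remainder tissues are lumped into a single entry for brevity, and the organ doses are placeholder numbers, not results from the paper.

```python
# Sketch: combine per-organ equivalent doses (Sv) into an effective dose
# using ICRP 103 tissue weighting factors. The organ doses below are
# illustrative placeholders, not values from the simulations above.

ICRP103_WEIGHTS = {
    "red_bone_marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
    "breast": 0.12, "remainder": 0.12,   # remainder lumped into one entry here
    "gonads": 0.08,
    "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
    "bone_surface": 0.01, "brain": 0.01, "salivary_glands": 0.01, "skin": 0.01,
}

def effective_dose(organ_dose_sv: dict) -> float:
    """Effective dose E = sum_T w_T * H_T over the tissues provided."""
    return sum(w * organ_dose_sv.get(tissue, 0.0)
               for tissue, w in ICRP103_WEIGHTS.items())

# Example with made-up equivalent doses (Sv) for a single frontal scan:
doses = {"skin": 8e-7, "breast": 3e-8, "lung": 2e-8, "stomach": 1e-8}
print(f"Effective dose: {effective_dose(doses):.2e} Sv")
```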

    Reducing the Radiation Dose to Women Receiving Cardiac CT Scans

    This thesis aims to quantify the reduction in radiation dose deposited in glandular breast tissue achieved by using tilted gantry acquisition during cardiac CT scans. Previous work by Halpern et al. suggested using tilted acquisition parallel to the long axis of the patient’s heart. However, for a large portion of the population this is not feasible given the design of current scanners, which are limited to a maximum tilt angle of 30 degrees. This study investigated the potential dose reduction and image quality effects at commercially available tilt angles between 0 and 30 degrees through simulation and experimental studies. Upon IRB approval, datasets from 10 female patients from Froedtert Hospital (Milwaukee, WI) were used to create voxelized phantom models for the computer simulation. Experimental measurements were performed with an anthropomorphic phantom on a clinical CT scanner (Discovery CT750HD, GE Healthcare, Chalfont St. Giles, England). For both the simulation and experimental studies, radiation dose to the breast and reconstructed image signal-to-noise ratio (SNR) were quantified for tilt angles from 0 to 30 degrees in five-degree increments. The simulated and experimental results demonstrate that tilted gantry acquisition reduces the glandular breast dose from cardiac CT scans when compared to conventional (non-tilted) axial scans. The maximum reductions, 33%-81% (mean 55%), were achieved with a 30-degree gantry tilt. However, the simulated results show a decrease in image quality of approximately 15% compared to non-tilted images. Image quality is found to remain equivalent, on average, up to a 15-degree tilt.
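
    As a rough illustration of the two quantities compared across tilt angles, the sketch below computes percent dose reduction relative to the non-tilted scan and an ROI-based SNR (mean over standard deviation). The dose values, image, and ROI are placeholders chosen for the example, not the study’s data or measurement protocol.

```python
import numpy as np

# Placeholder breast-dose values (mGy) per gantry tilt angle; illustrative only.
tilt_deg = np.arange(0, 35, 5)
breast_dose_mgy = np.array([12.0, 11.2, 10.1, 8.9, 7.6, 6.4, 5.4])

# Percent dose reduction relative to the conventional (0-degree) acquisition.
reduction_pct = 100.0 * (breast_dose_mgy[0] - breast_dose_mgy) / breast_dose_mgy[0]
for angle, red in zip(tilt_deg, reduction_pct):
    print(f"{angle:2d} deg tilt: {red:5.1f}% breast-dose reduction")

def roi_snr(image: np.ndarray, roi) -> float:
    """SNR of a nominally uniform region of interest: mean over standard deviation."""
    patch = image[roi]
    return float(patch.mean() / patch.std())

# Example on a synthetic noisy patch standing in for a reconstructed slice.
rng = np.random.default_rng(1)
image = 100.0 + 5.0 * rng.standard_normal((128, 128))
print(f"ROI SNR: {roi_snr(image, (slice(32, 96), slice(32, 96))):.1f}")
```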

    Municipal Composting and Organic Waste Diversion: The Case of Fayetteville, Arkansas

    It is estimated that 40% of food is wasted in the United States, representing $165 billion in wasted resources. The vast majority of that wasted food ultimately ends up in landfills, where it decomposes and releases harmful greenhouse gases (GHGs). In fact, food waste alone is responsible for 23% of annual methane emissions in the US, which has a large impact on global climate change given the potency of methane as a greenhouse gas. Currently, only 5% of the food waste produced nationwide is recovered. Source reduction would be the best way to cut this food waste; however, large-scale source reduction is not feasible within the current US food market, which is why many cities are beginning to adopt municipal composting programs as a way to divert more of their waste from landfills. The goal of this research was to review how Fayetteville, Arkansas currently handles waste throughout the city, particularly food waste and other compostable organics, and to show the impacts of that system. After this snapshot of Fayetteville’s current situation, a model created by the EPA was used to assess the carbon footprint of the city’s current waste management system. A proposed system in which all organic waste is diverted was then created and analyzed within the model to compare the impact the city could have on its carbon footprint. The proposed system reduced the carbon footprint of the city’s waste management system by over 200%: the total impact from waste management would go from a net positive impact of 4,796 MTCO2E per year to a net negative impact of 4,989 MTCO2E per year. These results indicate that moving toward composting all organics would decrease the environmental impact of the third largest city in Arkansas.
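
    The "over 200%" figure follows directly from the swing from a positive to a negative net footprint; a quick check of that arithmetic using the reported MTCO2E values:

```python
# Quick check of the reported change in net carbon footprint (MTCO2E per year).
baseline = 4796.0    # current system: net positive (emitting) impact
proposed = -4989.0   # proposed all-organics-diverted system: net negative impact

reduction_pct = 100.0 * (baseline - proposed) / baseline
print(f"Reduction relative to baseline: {reduction_pct:.0f}%")  # ~204%, i.e. over 200%
```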

    Mission unfinished : evaluating the impact of the Global Outreach Seminar in the local church


    Representing Edge Flows on Graphs via Sparse Cell Complexes

    Obtaining sparse, interpretable representations of observable data is crucial in many machine learning and signal processing tasks. For data representing flows along the edges of a graph, an intuitively interpretable way to obtain such representations is to lift the graph structure to a simplicial complex: the eigenvectors of the associated Hodge Laplacian, or equivalently the incidence matrices of the corresponding simplicial complex, then induce a Hodge decomposition, which can be used to represent the observed data in terms of gradient, curl, and harmonic flows. In this paper, we generalize this approach to cellular complexes and introduce the cell inference optimization problem, i.e., the problem of augmenting the observed graph by a set of cells such that the eigenvectors of the associated Hodge Laplacian provide a sparse, interpretable representation of the observed edge flows on the graph. We show that this problem is NP-hard and introduce an efficient approximation algorithm for its solution. Experiments on real-world and synthetic data demonstrate that our algorithm outperforms current state-of-the-art methods while being computationally efficient.
    Comment: 9 pages, 6 figures (plus appendix). For evaluation code, see https://anonymous.4open.science/r/edge-flow-repr-cell-complexes-11C
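
    As a concrete illustration of the Hodge decomposition the abstract refers to (not the paper’s cell inference algorithm), the sketch below builds node-to-edge and edge-to-cell incidence matrices for a toy graph with a single 2-cell and splits a made-up edge flow into gradient, curl, and harmonic parts by least-squares projection onto the images of B1ᵀ and B2.

```python
import numpy as np

# Toy graph: 4 nodes, 5 oriented edges (tail -> head).
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
n, m = 4, len(edges)

# Node-to-edge incidence matrix B1: -1 at the tail, +1 at the head of each edge.
B1 = np.zeros((n, m))
for j, (u, v) in enumerate(edges):
    B1[u, j], B1[v, j] = -1.0, 1.0

# One 2-cell glued onto the cycle 0 -> 1 -> 2 -> 0 (edges e0, e1, and e4 reversed).
B2 = np.array([[1.0], [1.0], [0.0], [0.0], [-1.0]])
assert np.allclose(B1 @ B2, 0.0)  # boundary of a boundary is zero

f = np.array([2.0, 1.5, 0.5, -1.0, 0.8])  # made-up edge flow

# Hodge decomposition: project f onto im(B1^T) (gradient) and im(B2) (curl).
p, *_ = np.linalg.lstsq(B1.T, f, rcond=None)
c, *_ = np.linalg.lstsq(B2, f, rcond=None)
f_grad, f_curl = B1.T @ p, B2 @ c
f_harm = f - f_grad - f_curl  # what remains is the harmonic component

print("gradient:", np.round(f_grad, 3))
print("curl:    ", np.round(f_curl, 3))
print("harmonic:", np.round(f_harm, 3))
```

    Inferring which 2-cells to add so that this representation becomes sparse is exactly the (NP-hard) cell inference problem the paper addresses with an approximation algorithm.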

    Integrated Pointing and Signal Detector for Optical Receiver

    A design concept for the receiver portion of a proposed free-space optical-communication terminal calls for integration of its communication and pointing detectors. As explained below, this would entail a departure from prior designs, in which pointing and communication detectors have been separate. As used here, communication detector denotes a single high-speed photodetector used for reception of a laser beam that has been modulated to convey information, while pointing detector denotes an array of photodetectors (typically, a quad-cell detector or a charge-coupled device) used to sense the pointing error (the error in the aim of the receiver telescope relative to the laser-beam axis). The pointing detector of this or any free-space optical-communication receiver is necessary for proper acquisition and tracking of the received laser beam. The suitably processed output of the pointing detector is fed back to a fine-steering mirror to reduce any pointing error and thereby maintain optimum reception. Heretofore, it has been common practice to pass the incoming laser beam through a beam splitter that sends about 10 percent of the beam power to a pointing detector and the rest to a separate communication detector, as illustrated in the upper part of the figure. One disadvantage of this is that because only 10 percent of the received signal power is available for use by the pointing detector, the signal-to-noise ratio (SNR) at the pointing detector is lower than it otherwise would be. The performance of the pointing detector is correspondingly limited. Another disadvantage is that the alignment between the communication and pointing detectors is critical and must be ensured by means of a calibration procedure. According to the proposal, there would be no beam splitter. The communication and pointing detectors would be positioned coaxially in the same focal plane, as shown in the lower part of the figure: the communication detector would occupy the central part of the focal plane, while the pointing detector would occupy the surrounding area. This arrangement would inherently ensure the proper alignment of the detectors with each other.
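
    To make the pointing-detector role concrete, the sketch below shows the usual quad-cell error estimate (normalized differences of opposing quadrant pairs) and a simple proportional correction to a steering mirror. The quadrant signals and the gain are illustrative assumptions; the article does not specify the proposed design at this level of detail.

```python
# Sketch: pointing-error estimate from a quad-cell detector and a simple
# proportional nudge to a fine-steering mirror. Signals and gain are made up.

def quad_cell_error(a: float, b: float, c: float, d: float):
    """Normalized x/y offsets from quadrant powers (A top-left, B top-right,
    C bottom-left, D bottom-right). Zero means the spot is centered."""
    total = a + b + c + d
    ex = ((b + d) - (a + c)) / total   # positive = spot displaced to the right
    ey = ((a + b) - (c + d)) / total   # positive = spot displaced upward
    return ex, ey

def steer(mirror_xy, err_xy, gain: float = 0.5):
    """Proportional feedback: move the mirror to drive the measured error toward zero."""
    return (mirror_xy[0] - gain * err_xy[0], mirror_xy[1] - gain * err_xy[1])

# Example: spot slightly high and to the right of center.
ex, ey = quad_cell_error(a=0.9, b=1.3, c=0.8, d=1.0)
print("error:", (round(ex, 3), round(ey, 3)), "new mirror command:", steer((0.0, 0.0), (ex, ey)))
```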

    Everything counts! - Why small municipalities are the winners of the 2011 census

    The 2011 population and housing census was an EU-wide census conducted in all EU member states. In Germany, it was based on a largely register-based method. In this paper, it is shown that municipalities with fewer than 10,000 inhabitants have significantly smaller relative losses in the number of inhabitants than municipalities with more than 10,000 inhabitants.
    Comment: in German

    An exploration of error-driven learning in simple two-layer networks from a discriminative learning perspective

    Error-driven learning algorithms, which iteratively adjust expectations based on prediction error, are the basis for a vast array of computational models in the brain and cognitive sciences that often differ widely in their precise form and application: they range from simple models in psychology and cybernetics to the complex deep learning models currently dominating discussions in machine learning and artificial intelligence. However, despite the ubiquity of this mechanism, detailed analyses of its basic workings uninfluenced by existing theories or specific research goals are rare in the literature. To address this, we present an exposition of error-driven learning, focusing on its simplest form for clarity, and relate it to the historical development of error-driven learning models in the cognitive sciences. Although error-driven models have historically been thought of as associative, with learning combining preexisting elemental representations, our analysis highlights the discriminative nature of learning in these models and its implications for how learning is conceptualized. We complement our theoretical introduction to error-driven learning with a practical guide to the application of simple error-driven learning models, in which we discuss a number of example simulations that are also presented in detail in an accompanying tutorial.
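
    As a minimal concrete instance of the kind of simple error-driven model discussed (a single layer of weights updated in proportion to prediction error, in the spirit of the delta rule), the Python sketch below is not the accompanying tutorial’s code; the cue/outcome data, learning rate, and network size are made-up assumptions for illustration.

```python
import numpy as np

# Minimal error-driven (delta-rule) learning in a two-layer network:
# outcomes are predicted as a weighted sum of input cues, and weights are
# adjusted in proportion to the prediction error.

rng = np.random.default_rng(0)
n_cues, n_outcomes, eta = 4, 2, 0.1
W = np.zeros((n_cues, n_outcomes))          # cue -> outcome association weights

# Toy training set: binary cue vectors; outcome j is present whenever cue j is.
cues = rng.integers(0, 2, size=(200, n_cues)).astype(float)
outcomes = cues[:, :n_outcomes].copy()

for x, t in zip(cues, outcomes):
    y = x @ W                       # predicted outcome activations
    error = t - y                   # prediction error
    W += eta * np.outer(x, error)   # delta rule: shift weights to reduce the error

# Cues 0 and 1 end up (roughly) predictive of outcomes 0 and 1; cues 2 and 3 do not.
print(np.round(W, 2))
```

    The discriminative character the paper emphasizes is visible even here: a cue gains weight only to the extent that it reduces prediction error beyond what the other co-present cues already explain, rather than by simple co-occurrence counting.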