    Medical Image Data and Datasets in the Era of Machine Learning-Whitepaper from the 2016 C-MIMI Meeting Dataset Session.

    At the first annual Conference on Machine Intelligence in Medical Imaging (C-MIMI), held in September 2016, a conference session on medical image data and datasets for machine learning identified multiple issues. The common theme from attendees was that everyone participating in medical image evaluation with machine learning is data starved. There is an urgent need to find better ways to collect, annotate, and reuse medical imaging data. Unique domain issues with medical image datasets require further study, development, and dissemination of best practices and standards, together with a coordinated effort among medical imaging domain experts, medical imaging informaticists, government and industry data scientists, and interested commercial, academic, and government entities. High-level attributes of reusable medical image datasets suitable to train, test, validate, verify, and regulate ML products should be better described. NIH and other government agencies should promote and, where applicable, enforce access to medical image datasets. We should improve communication among medical imaging domain experts, medical imaging informaticists, academic clinical and basic science researchers, government and industry data scientists, and interested commercial entities.

    Local Operations and Completely Positive Maps in Algebraic Quantum Field Theory

    Einstein introduced the locality principle, which states that a physical effect in some finite space-time region does not influence a space-like separated finite region. Recently, in algebraic quantum field theory, Rédei captured the idea of the locality principle by the notion of operational separability. The operation in operational separability is performed in some finite space-time region and leaves the state in a space-like separated finite space-time region unchanged. This operation is defined by a completely positive map. In the present paper, we justify using a completely positive map as a local operation in algebraic quantum field theory, and show that this local operation can be approximately written with Kraus operators under the funnel property.
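
    For orientation, the Kraus form invoked at the end of the abstract is the standard operator-sum representation of a (normal) completely positive map, a textbook fact rather than something specific to this paper. In the Heisenberg picture used in algebraic quantum field theory, such a map $T$ on a type I factor can be written as

        $T(A) = \sum_k K_k^{\dagger} A K_k, \qquad \sum_k K_k^{\dagger} K_k \le I$ (with equality when the operation is non-selective).

    Roughly speaking, the funnel property (for suitably nested regions $O_1 \subset O_2$ there is a type I factor $N$ with $\mathcal{A}(O_1) \subset N \subset \mathcal{A}(O_2)$) is what makes such a representation available, at least approximately, for operations localized in a bounded region.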

    Resolution of Nearly Mass Degenerate Higgs Bosons and Production of Black Hole Systems of Known Mass at a Muon Collider

    The direct s-channel coupling to Higgs bosons is 40000 times greater for muons than for electrons; the coupling goes as mass squared. High precision scanning of the lighter $h^0$ and the higher mass $H^0$ and $A^0$ is thus possible with a muon collider. The $H^0$ and $A^0$ are expected to be nearly mass degenerate and to be CP even and odd, respectively. A muon collider could resolve the mass degeneracy and make CP measurements. The origin of CP violation in the $K^0$ and $B^0$ meson systems might lie in the $H^0/A^0$ Higgs bosons. If large extra dimensions exist, black holes with lifetimes of $\sim 10^{-26}$ seconds could be created and observed via Hawking radiation at the LHC. Unlike proton or electron colliders, muon colliders can produce black hole systems of known mass. This opens the possibilities of measuring quantum remnants, gravitons as missing energy, and scanning production turn on. Proton colliders are hampered by parton distributions and CLIC by beamstrahlung. The ILC lacks the energy reach. Comment: Latex, 5 pages, 2 figures, proceedings to the DPF 2004: Annual Meeting of the Division of Particles and Fields of APS, 26 August-31 August 2004, Riverside, CA, US
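
    As a sanity check on the quoted enhancement factor (using the standard lepton masses, which are not stated in the abstract):

        $\left(\frac{m_\mu}{m_e}\right)^2 \approx \left(\frac{105.7\ \mathrm{MeV}}{0.511\ \mathrm{MeV}}\right)^2 \approx (207)^2 \approx 4.3 \times 10^4,$

    consistent with the factor of roughly 40000 quoted for the s-channel coupling.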

    Multi-Terabyte EIDE Disk Arrays running Linux RAID5

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important. Comment: Talk from the 2004 Computing in High Energy and Nuclear Physics (CHEP04), Interlaken, Switzerland, 27th September - 1st October 2004, 4 pages, LaTeX, uses CHEP2004.cls. ID 47, Poster Session 2, Track
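
    The single-disk-failure protection described above rests on XOR parity. The sketch below is a minimal Python illustration of that idea only; it is not from the paper, and a real RAID-5 array stripes data in fixed-size blocks and rotates the parity block across the disks.

        # Minimal sketch of RAID-5 style XOR parity (illustrative only).
        def xor_blocks(blocks):
            """XOR equal-length byte blocks together."""
            out = bytearray(len(blocks[0]))
            for block in blocks:
                for i, byte in enumerate(block):
                    out[i] ^= byte
            return bytes(out)

        # One stripe spread across three data disks.
        data_disks = [b"disk0data", b"disk1data", b"disk2data"]
        parity = xor_blocks(data_disks)  # stored on the parity disk for this stripe

        # Lose disk 1: XOR of the surviving data blocks and the parity
        # reconstructs the missing block.
        recovered = xor_blocks([data_disks[0], data_disks[2], parity])
        assert recovered == data_disks[1]

    With two simultaneous failures the XOR equations no longer determine the missing blocks, which is the multiple-disk-failure limitation noted above.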

    Self-Assembly of Arbitrary Shapes Using RNAse Enzymes: Meeting the Kolmogorov Bound with Small Scale Factor (extended abstract)

    We consider a model of algorithmic self-assembly of geometric shapes out of square Wang tiles studied in SODA 2010, in which there are two types of tiles (e.g., constructed out of DNA and RNA material) and one operation that destroys all tiles of a particular type (e.g., an RNAse enzyme destroys all RNA tiles). We show that a single use of this destruction operation enables much more efficient construction of arbitrary shapes. In particular, an arbitrary shape can be constructed using an asymptotically optimal number of distinct tile types (related to the shape's Kolmogorov complexity), after scaling the shape by only a logarithmic factor. By contrast, without the destruction operation, the best such result has a scale factor at least linear in the size of the shape, and is connected only by a spanning tree of the scaled tiles. We also characterize a large collection of shapes that can be constructed efficiently without any scaling.
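
    For context, the Kolmogorov bound in the title refers to the standard tile-complexity result for scaled shapes (a known result in the tile self-assembly literature, not restated in this abstract): the minimum number of distinct tile types that can self-assemble some scaling of a shape $S$ is

        $\Theta\!\left(\frac{K(S)}{\log K(S)}\right),$

    where $K(S)$ is the Kolmogorov complexity of $S$. "Meeting the bound with small scale factor" then means achieving this asymptotically optimal tile count while scaling $S$ by only a logarithmic factor, as stated above.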

    Galaxy Dynamics in Clusters

    We use high resolution simulations to study the formation and distribution of galaxies within a cluster which forms hierarchically. We follow both dark matter and baryonic gas which is subject to thermal pressure, shocks and radiative cooling. Galaxy formation is identified with the dissipative collapse of the gas into cold, compact knots. We examine two extreme representations of galaxies during subsequent cluster evolution: one purely gaseous and the other purely stellar. The results are quite sensitive to this choice. Gas-galaxies merge efficiently with a dominant central object while star-galaxies merge less frequently. Thus, simulations in which galaxies remain gaseous appear to suffer an "overmerging" problem, but this problem is much less severe if the gas is allowed to turn into stars. We compare the kinematics of the galaxy population in these two representations to that of dark halos and of the underlying dark matter distribution. Galaxies in the stellar representation are positively biased (i.e., over-represented in the cluster) both by number and by mass fraction. Both representations predict the galaxies to be more centrally concentrated than the dark matter, whereas the dark halo population is more extended. A modest velocity bias also exists in both representations, with the largest effect, $\sigma_{gal}/\sigma_{DM} \simeq 0.7$, found for the more massive star-galaxies. Phase diagrams show that the galaxy population has a substantial net inflow in the gas representation, while in the stellar case it is roughly in hydrostatic equilibrium. Virial mass estimators can underestimate the true cluster mass by up to a factor of 5. The discrepancy is largest if only the most massive galaxies are used, reflecting significant mass segregation. Comment: 30 pages, self-unpacking (via uufiles) postscript file without figures. Eighteen figures (and slick color version of figure 3) and entire paper available at ftp://oahu.physics.lsa.umich.edu/groups/astro/fews Total size of paper with figures is ~9.0 Mb uncompressed. Submitted to Ap.J
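
    A back-of-envelope way to see how the biases above feed into the quoted mass underestimate (a simple scaling argument, not a calculation from the paper): virial estimators scale roughly as $M_{\mathrm{vir}} \sim \sigma^2 R / G$, so the velocity bias alone gives

        $\frac{M_{\mathrm{est}}}{M_{\mathrm{true}}} \sim \left(\frac{\sigma_{gal}}{\sigma_{DM}}\right)^2 \simeq (0.7)^2 \approx 0.5,$

    and mass segregation plus the restriction to the most massive galaxies pushes the estimate further down, toward the factor-of-5 discrepancy reported.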

    Nonlocal hydrodynamic influence on the dynamic contact angle: Slip models versus experiment

    Experiments reported by Blake et al. [Phys. Fluids 11, 1995 (1999)] suggest that the dynamic contact angle formed between the free surface of a liquid and a moving solid boundary at a fixed contact-line speed depends on the flow field/geometry near the moving contact line. The present paper examines quantitatively whether or not it is possible to attribute this effect to bending of the free surface due to hydrodynamic stresses acting upon it and hence interpret the results in terms of the so-called "apparent" contact angle. It is shown that this is not the case. Numerical analysis of the problem demonstrates that, at the spatial resolution reported in the experiments, the variations of the "apparent" contact angle (defined in two different ways) caused by variations in the flow field, at a fixed contact-line speed, are too small to account for the observed effect. The results clearly indicate that the actual (macroscopic) dynamic contact angle, i.e., the one used in fluid mechanics as a boundary condition for the equation determining the free surface shape, must be regarded as dependent not only on the contact-line speed but also on the flow field/geometry in the vicinity of the moving contact line.

    Remarks on Causality in Relativistic Quantum Field Theory

    It is shown that the correlations predicted by relativistic quantum field theory in locally normal states between projections in local von Neumann algebras $\mathcal{A}(V_1)$, $\mathcal{A}(V_2)$ associated with spacelike separated spacetime regions $V_1$, $V_2$ have a (Reichenbachian) common cause located in the union of the backward light cones of $V_1$ and $V_2$. Further comments on causality and independence in quantum field theory are made. Comment: 10 pages, Latex, Quantum Structures 2002 Conference Proceedings submission. Minor revision of the order of definitions on p.
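
    For readers who have not met the term, Reichenbach's common cause conditions for a positive correlation between events $A$ and $B$ are the standard classical ones below (in the paper, projections in the local von Neumann algebras and a locally normal state play the roles of events and probabilities):

        $p(A \wedge B \mid C) = p(A \mid C)\,p(B \mid C), \qquad p(A \wedge B \mid \neg C) = p(A \mid \neg C)\,p(B \mid \neg C),$
        $p(A \mid C) > p(A \mid \neg C), \qquad p(B \mid C) > p(B \mid \neg C).$

    An event $C$ satisfying these screens off the correlation; the result quoted above locates such a $C$ in the union of the backward light cones of $V_1$ and $V_2$.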

    A 233 km Tunnel for Lepton and Hadron Colliders

    A decade ago, a cost analysis was conducted to bore a 233 km circumference Very Large Hadron Collider (VLHC) tunnel passing through Fermilab. Here we outline implementations of $e^+e^-$, $p\bar{p}$, and $\mu^+\mu^-$ collider rings in this tunnel using recent technological innovations. The 240 and 500 GeV $e^+e^-$ colliders employ Crab Waist Crossings, ultra low emittance damped bunches, short vertical IP focal lengths, superconducting RF, and low coercivity, grain oriented silicon steel/concrete dipoles. Some details are also provided for a high luminosity 240 GeV $e^+e^-$ collider and 1.75 TeV muon accelerator in a Fermilab site filler tunnel. The 40 TeV $p\bar{p}$ collider uses the high intensity Fermilab $\bar{p}$ source, exploits high cross sections for $p\bar{p}$ production of high mass states, and uses 2 Tesla ultra low carbon steel/YBCO superconducting magnets run with liquid neon. The 35 TeV muon ring ramps the 2 Tesla superconducting magnets at 9 Hz every 0.4 seconds, uses 250 GV of superconducting RF to accelerate muons from 1.75 to 17.5 TeV in 63 orbits with 71% survival, and mitigates neutrino radiation with phase shifting, roller coaster motion in a FODO lattice. Comment: LaTex, 6 pages, 1 figure, Advanced Accelerator Concepts Workshop, Austin, TX, 10-15 June 201
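
    A quick arithmetic check on the quoted muon-ring acceleration parameters:

        $\frac{(17.5 - 1.75)\ \mathrm{TeV}}{63\ \mathrm{orbits}} = 0.25\ \mathrm{TeV\ per\ orbit},$

    which matches the stated 250 GV of superconducting RF per turn.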

    Redundant Arrays of IDE Drives

    The next generation of high-energy physics experiments is expected to gather prodigious amounts of data. New methods must be developed to handle this data and make analysis at universities possible. We examine some techniques that use recent developments in commodity hardware. We test redundant arrays of integrated drive electronics (IDE) disk drives for use in offline high-energy physics data analysis. The price per terabyte of IDE redundant array of inexpensive disks (RAID) systems now equals that of million-dollar tape robots. The arrays can be scaled to sizes affordable to institutions without robots and used when fast random access at low cost is important. We also explore three methods of moving data between sites: internet transfers, hot pluggable IDE disks in FireWire cases, and writable digital video disks (DVD-R). Comment: Submitted to IEEE Transactions On Nuclear Science, for the 2001 IEEE Nuclear Science Symposium and Medical Imaging Conference, 8 pages, 1 figure, uses IEEEtran.cls. Revised March 19, 2002 and published August 200