
    An inequality of Kostka numbers and Galois groups of Schubert problems

    We show that the Galois group of any Schubert problem involving lines in projective space contains the alternating group. Using a criterion of Vakil and a special position argument due to Schubert, this follows from a particular inequality among Kostka numbers of two-rowed tableaux. In most cases, an easy combinatorial injection proves the inequality. For the remaining cases, we use the fact that these Kostka numbers appear in tensor product decompositions of sl_2(C)-modules. By interpreting the tensor product as the action of certain commuting Toeplitz matrices and applying spectral analysis and Fourier series, we rewrite the inequality as the positivity of an integral. We establish the inequality by estimating this integral. Comment: Extended abstract for FPSAC 201
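
    As background for the tensor-product step, a hedged sketch of the standard representation-theoretic facts involved (stated here as general context, not as the paper's argument): the Clebsch-Gordan rule for sl_2(C) and the identification of multiplicities in iterated tensor products with Kostka numbers of two-rowed shapes.

```latex
% Clebsch--Gordan rule for the irreducible sl_2(C)-module V_a of highest weight a:
\[
  V_a \otimes V_b \;\cong\; \bigoplus_{j=0}^{\min(a,b)} V_{a+b-2j}.
\]
% More generally, for mu = (mu_1, ..., mu_n), the multiplicity of V_m in
% V_{mu_1} \otimes \cdots \otimes V_{mu_n} is the Kostka number K_{\lambda\mu}
% of the two-rowed partition
\[
  \lambda \;=\; \Bigl( \tfrac{|\mu| + m}{2},\; \tfrac{|\mu| - m}{2} \Bigr),
\]
% so an inequality among Kostka numbers of two-rowed tableaux is an inequality
% among these tensor-product multiplicities.
```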

    Quantification of heterogeneity observed in medical images

    BACKGROUND: There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging modalities. METHODS: In this work, we motivate and derive a statistical measure of image heterogeneity. This statistic measures the distance-dependent average deviation from the smoothest intensity gradation feasible. We show how this statistic may be used to automatically rank images of in vivo human tumors in order of increasing heterogeneity. We test this method against the current practice of ranking images via expert visual inspection. RESULTS: We find that this statistic provides a means of heterogeneity quantification beyond that given by other statistics traditionally used for the same purpose. We demonstrate the effect of tumor shape upon our ranking method and find the method applicable to a wide variety of clinically relevant tumor images. We find that the automated heterogeneity rankings agree very closely with those performed visually by experts. CONCLUSIONS: These results indicate that our automated method may be used reliably to rank tumor images in order of increasing heterogeneity, whether or not object shape is considered to contribute to that heterogeneity. Automated heterogeneity ranking yields objective results that are more consistent than visual rankings. Reducing variability in image interpretation will enable more researchers to better study potential clinical implications of observed tumor heterogeneity.

    FDG uptake heterogeneity in FIGO IIb cervical carcinoma does not predict pelvic lymph node involvement

    TRANSLATIONAL RELEVANCE: Many types of cancer are located and assessed via positron emission tomography (PET) using the 18F-fluorodeoxyglucose (FDG) radiotracer of glucose uptake. There is rapidly increasing interest in exploiting the intra-tumor heterogeneity observed in these FDG-PET images as an indicator of disease outcome. If this image heterogeneity is of genuine prognostic value, then it either correlates with known prognostic factors, such as tumor stage, or it indicates some as yet unknown tumor quality. Therefore, the first step in demonstrating the clinical usefulness of image heterogeneity is to explore the dependence of image heterogeneity metrics upon established prognostic indicators and other clinically interesting factors. If it is shown that image heterogeneity is merely a surrogate for other important tumor properties or variations in patient populations, then the theoretical value of quantified biological heterogeneity may not yet translate into the clinic given current imaging technology. PURPOSE: We explore the relation between pelvic lymph node status at diagnosis and the visually evident uptake heterogeneity often observed in 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) images of cervical carcinomas. EXPERIMENTAL DESIGN: We retrospectively studied the FDG-PET images of 47 node-negative and 38 node-positive patients, each having FIGO stage IIb tumors with squamous cell histology. Imaged tumors were segmented using 40% of the maximum tumor uptake as the tumor-defining threshold and then converted into sets of three-dimensional coordinates. We employed the sphericity, extent, Shannon entropy (S), and the accrued deviation from smoothest gradients (ζ) as image heterogeneity metrics. We analyzed these metrics within tumor volume strata via the Kolmogorov-Smirnov test, principal component analysis, and contingency tables. RESULTS: We found no statistically significant difference between the positive and negative lymph node groups for any one metric or plausible combinations thereof. Additionally, we observed that S is strongly dependent upon tumor volume and that ζ moderately correlates with mean FDG uptake. CONCLUSIONS: FDG uptake heterogeneity did not indicate patients with differing prognoses. Apparent heterogeneity differences between clinical groups may be an artifact arising either from the dependence of some image metrics upon other factors, such as tumor volume, or from underlying variations in the patient populations compared.
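
    As an illustration of how two of the metrics and tests named above can be computed, here is a minimal sketch, assuming a 3-D uptake array per tumor segmented at 40% of maximum (as in the abstract); the histogram binning, synthetic data, and variable names are illustrative assumptions, and the abstract's ζ metric is not reproduced here.

```python
import numpy as np
from scipy.stats import ks_2samp

def shannon_entropy(uptake, threshold_frac=0.4, bins=64):
    """Shannon entropy (bits) of voxel values inside a thresholded tumor.

    `uptake` is a 3-D array of FDG-PET values; voxels below
    threshold_frac * max are treated as background, mirroring the
    40%-of-maximum segmentation described above.
    """
    tumor = uptake[uptake >= threshold_frac * uptake.max()]
    counts, _ = np.histogram(tumor, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Synthetic stand-ins for the two patient groups (illustrative only).
rng = np.random.default_rng(0)
node_negative = [shannon_entropy(rng.gamma(2.0, 1.0, size=(16, 16, 16))) for _ in range(47)]
node_positive = [shannon_entropy(rng.gamma(2.0, 1.0, size=(16, 16, 16))) for _ in range(38)]

# Two-sample Kolmogorov-Smirnov test for a difference in metric distributions.
result = ks_2samp(node_negative, node_positive)
print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
```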

    What to expect when you’re prospecting: how new information changes our estimate of the chance of success of a prospect

    There is a common belief that we can expect to add value to a prospect or prospect portfolio by improving the prospect's chance of success (Pg) as a consequence of acquiring information and doing work. Established laws of probability dictate that this is incorrect. We do expect new information to add value to the exploration cycle, but not through an expectation of improving the prospect risk. New information may result in an increase or a decrease of Pg, but the expected result (the average of all possible outcomes) is zero change. Moreover, for a typical exploration prospect (Pg < 0.5), we expect new information to downgrade the Pg of more prospects than it upgrades. Real-world prospect data are neither suitable nor publicly available to study this. Instead, the concept is explored using an analogous process (prenatal prediction of fetal gender) for which good statistics exist, and by creating a synthetic prospect that can be analyzed in a repeatable way. The results support the predictions made above.
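
    The zero-expected-change claim is the law of total expectation applied to Bayesian updating; a minimal numerical sketch (the prior and the test reliabilities are illustrative assumptions, not values from the paper):

```python
# A prospect has prior chance of success Pg. New information is modeled as a
# binary test that reports "encouraging" or "discouraging" depending on whether
# the prospect is actually a success. Bayes' rule moves Pg up or down, but the
# probability-weighted average of the possible updates equals the prior.

prior_pg = 0.3               # typical exploration prospect, Pg < 0.5
p_enc_given_success = 0.8    # illustrative test reliabilities (assumptions)
p_enc_given_failure = 0.2

# Probability that the new information turns out encouraging.
p_enc = prior_pg * p_enc_given_success + (1 - prior_pg) * p_enc_given_failure

# Posterior Pg for each possible outcome of the information.
pg_if_encouraging = prior_pg * p_enc_given_success / p_enc
pg_if_discouraging = prior_pg * (1 - p_enc_given_success) / (1 - p_enc)

expected_pg = p_enc * pg_if_encouraging + (1 - p_enc) * pg_if_discouraging
print(f"prior Pg            = {prior_pg:.3f}")
print(f"upgraded Pg         = {pg_if_encouraging:.3f}  (probability {p_enc:.3f})")
print(f"downgraded Pg       = {pg_if_discouraging:.3f}  (probability {1 - p_enc:.3f})")
print(f"expected posterior  = {expected_pg:.3f}  (equals the prior)")
```

    With Pg < 0.5 and a symmetric test, the downgrade outcome is also the more probable one, consistent with the expectation stated above.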

    A practical guide to the use of success versus failure statistics in the estimation of prospect risk

    Statistical data documenting past exploration success and failure can be used to inform the estimate of future chance of success, but this is not appropriate to every situation. Even where appropriate, past frequency is not numerically equivalent to future expectation unless the sample size is very large. Using the Rule of Succession of Laplace (1774), we calculate the appropriate predicted chance of future success that can be used for the smaller sample numbers typical of exploration data sets, which include both successes and failures. The results, presented as a simple look-up table, show that the error that would result from using simple frequency instead of the appropriately calculated value is particularly severe for small samples (>10% error arising if n < 9). This error is least if the past success rate is close to 0.5, but it increases markedly if the past data consist of mostly failure or mostly success. We review the conditions in which past frequency can be used as a guide, and the circumstances in which it does not reflect future chance. Past success frequency should only be used as a guide to future chance if the past tests and future opportunities belong to the same play and are similar as far as the available data allow. It should not be used if the historical tests have selectively sampled the “cream” of the pool of opportunities.
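
    For reference, the Rule of Succession predicts the chance that the next trial succeeds, after k successes in n trials, as (k+1)/(n+2); a minimal sketch comparing it with the raw frequency k/n (the example counts are illustrative):

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's Rule of Succession: predicted chance that the next trial succeeds."""
    return (successes + 1) / (trials + 2)

# Compare the raw past frequency with the Laplace estimate; the gap is largest
# for small, skewed samples and negligible near 0.5 or for large n.
examples = [(0, 4), (1, 4), (4, 8), (7, 8), (50, 100)]
for k, n in examples:
    freq = k / n
    laplace = rule_of_succession(k, n)
    print(f"k={k:3d}, n={n:3d}: frequency={freq:.3f}, rule of succession={laplace:.3f}, "
          f"difference={abs(freq - laplace):.3f}")
```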

    Occurrence, Growth, and Food Habits of the Spotted Hake, Urophycis regia, in the Cape Fear Estuary and Adjacent Atlantic Ocean, North Carolina

    From 1973 to 1978, 62,867 Urophycis regia were collected from the Cape Fear Estuary, North Carolina, and the adjacent Atlantic Ocean. Most fish were young-of-the-year (25-225 mm SL), but a few age-1 individuals (230-295 mm) were present in the estuary from January to June. They moved offshore or northward when water temperatures warmed above 22°C. Average monthly growth increments varied from 12 to 26 mm SL; the greatest increase in length was 92 mm from January to June 1977. Length-weight regressions for the 6-year study period were similar. Important food items were crustaceans (largely mysid shrimp and decapods) and fishes (clupeid and sciaenid larvae). The abundance of U. regia in inshore waters and the relatively large size it reaches suggest that marketing needs to be explored.
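
    The length-weight regressions mentioned above are conventionally fit as W = aL^b by linear regression on log-transformed data; a minimal sketch with made-up measurements (the data and fitted coefficients are illustrative, not from the study):

```python
import numpy as np

# Standard length-weight relationship W = a * L**b, fit as
# log(W) = log(a) + b * log(L) by least squares.

# Illustrative measurements (mm standard length, g weight); not study data.
length_mm = np.array([40, 60, 90, 120, 150, 180, 210])
weight_g = np.array([0.6, 2.1, 7.4, 18.0, 36.0, 63.0, 101.0])

b, log_a = np.polyfit(np.log(length_mm), np.log(weight_g), deg=1)
a = np.exp(log_a)
print(f"W = {a:.3e} * L^{b:.2f}")

# Predicted weight for a 100 mm SL fish under the fitted relationship.
print(f"predicted weight at 100 mm SL: {a * 100**b:.1f} g")
```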

    Investigating the robustness of a learning-based method for quantitative phase retrieval from propagation-based x-ray phase contrast measurements under laboratory conditions

    Quantitative phase retrieval (QPR) in propagation-based x-ray phase contrast imaging of heterogeneous and structurally complicated objects is challenging under laboratory conditions due to partial spatial coherence and polychromaticity. A learning-based method (LBM) provides a non-linear approach to this problem while not being constrained by restrictive assumptions about object properties and beam coherence. In this work, an LBM was assessed for its applicability under practical scenarios by evaluating its robustness and generalizability under typical experimental variations. Towards this end, an end-to-end LBM was employed for QPR under laboratory conditions and its robustness was investigated across various system and object conditions. The robustness of the method was tested by varying the propagation distance, and its generalizability with respect to object structure and experimental data was also assessed. Although the LBM was stable under the studied variations, its successful deployment was found to be affected by choices pertaining to data pre-processing, network training considerations, and system modeling. To our knowledge, we demonstrated for the first time the potential applicability of an end-to-end learning-based quantitative phase retrieval method, trained on simulated data, to experimental propagation-based x-ray phase contrast measurements acquired under laboratory conditions. We considered conditions of polychromaticity, partial spatial coherence, and high noise levels typical of laboratory conditions. This work further explored the robustness of this method to practical variations in propagation distance and object structure with the goal of assessing its potential for experimental use. Such an exploration of any LBM (irrespective of its network architecture) before practical deployment provides an understanding of its potential behavior under experimental settings. Comment: Under review as a journal submission. An early version with partial results has been accepted for poster presentation at SPIE-MI 202
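
    For context, propagation-based phase contrast forms intensity images by free-space propagation of the object's exit wavefield, and phase retrieval inverts this mapping; a minimal monochromatic forward-model sketch (the learning-based retrieval itself is not reproduced here, and the wavelength, pixel size, and distance values are illustrative assumptions):

```python
import numpy as np

def fresnel_intensity(phase, absorption, wavelength, pixel_size, distance):
    """Monochromatic propagation-based phase contrast forward model.

    The exit wave exp(-absorption + 1j*phase) is propagated a given distance
    with the Fresnel transfer function, and the detected intensity is returned.
    """
    exit_wave = np.exp(-absorption + 1j * phase)
    ny, nx = exit_wave.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    transfer = np.exp(-1j * np.pi * wavelength * distance * (FX**2 + FY**2))
    propagated = np.fft.ifft2(np.fft.fft2(exit_wave) * transfer)
    return np.abs(propagated) ** 2

# Illustrative parameters (assumptions): ~20 keV x-rays (0.62 Angstrom),
# 10 micron pixels, 0.5 m propagation distance, weak random phase object.
rng = np.random.default_rng(1)
phase = rng.normal(scale=0.1, size=(256, 256))
absorption = np.zeros((256, 256))
image = fresnel_intensity(phase, absorption, 0.62e-10, 10e-6, 0.5)
print(image.shape, image.mean())
```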

    Risk-Seeking versus Risk-Avoiding Investments in Noisy Periodic Environments

    We study the performance of various agent strategies in an artificial investment scenario. Agents are equipped with a budget, x(t), and at each time step invest a particular fraction, q(t), of their budget. The return on investment (RoI), r(t), is characterized by a periodic function with different types and levels of noise. Risk-avoiding agents choose their fraction q(t) proportional to the expected positive RoI, while risk-seeking agents always choose a maximum value q_max if they predict the RoI to be positive ("everything on red"). In addition to these different strategies, agents have different capabilities to predict the future r(t), dependent on their internal complexity. Here, we compare 'zero-intelligent' agents using technical analysis (such as moving least squares) with agents using reinforcement learning or genetic algorithms to predict r(t). The performance of agents is measured by their average budget growth after a certain number of time steps. We present results of extensive computer simulations, which show that, for our given artificial environment, (i) the risk-seeking strategy outperforms the risk-avoiding one, and (ii) the genetic algorithm was able to find this optimal strategy itself, and thus outperforms other prediction approaches considered. Comment: 27 pp. v2 with minor corrections. See http://www.sg.ethz.ch for more inf
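
    A minimal simulation sketch of the setup described above; the multiplicative budget update, the sinusoidal RoI with Gaussian noise, and the use of a perfect one-step forecast are assumptions for illustration, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 1000
q_max = 1.0

# Periodic return on investment with additive Gaussian noise (illustrative).
t = np.arange(T)
roi = 0.05 * np.sin(2 * np.pi * t / 50) + rng.normal(scale=0.01, size=T)

def run(strategy):
    """Evolve a budget x(t+1) = x(t) * (1 + q(t) * r(t)), where the strategy
    maps a predicted RoI to the invested fraction q(t)."""
    x = 1.0
    for r in roi:
        r_pred = r  # perfect one-step prediction, for illustration only
        q = strategy(r_pred)
        x *= 1.0 + q * r
    return x

# Risk-avoiding: invest proportionally to the expected positive RoI (capped at q_max).
risk_avoiding = lambda r_pred: min(q_max, max(0.0, r_pred) / 0.05)
# Risk-seeking: invest q_max whenever the predicted RoI is positive.
risk_seeking = lambda r_pred: q_max if r_pred > 0 else 0.0

print("risk-avoiding final budget:", run(risk_avoiding))
print("risk-seeking  final budget:", run(risk_seeking))
```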