    Independent predictors of breast malignancy in screen-detected microcalcifications: biopsy results in 2545 cases

    Background: Mammographic microcalcifications are associated with many benign lesions, ductal carcinoma in situ (DCIS) and invasive cancer. Careful assessment criteria are required to minimise benign biopsies while optimising cancer diagnosis. We wished to evaluate the assessment outcomes of microcalcifications biopsied in the setting of population-based breast cancer screening. Methods: Between January 1992 and December 2007, cases biopsied in which microcalcifications were the only imaging abnormality were included. Patient demographics, imaging features and final histology were subjected to statistical analysis to determine independent predictors of malignancy. Results: In all, 2545 lesions, with a mean diameter of 21.8 mm (s.d. 23.8 mm) and observed in patients with a mean age of 57.7 years (s.d. 8.4 years), were included. Using the grading system adopted by the RANZCR, the grade was 3 in 47.7%, 4 in 28.3% and 5 in 24.0% of cases. After assessment, 1220 lesions (47.9%) were malignant (809 DCIS only, 411 DCIS with invasive cancer) and 1325 (52.1%) were non-malignant, including 122 (4.8%) premalignant lesions (lobular carcinoma in situ, atypical lobular hyperplasia and atypical ductal hyperplasia). Only 30.9% of the DCIS was of low grade. Mammographic extent of microcalcifications >15 mm, imaging grade, pattern of distribution, presence of a palpable mass and detection after the first screening episode showed significant univariate associations with malignancy. On multivariate modelling, imaging grade, mammographic extent of microcalcifications >15 mm, palpable mass and screening episode were retained as independent predictors of malignancy. Radiological grade had the largest effect, with lesions of grade 4 and 5 being 2.2 and 3.3 times more likely to be malignant, respectively, than grade 3 lesions. Conclusion: The radiological grading scheme used throughout Australia and parts of Europe is validated as a useful system for stratifying microcalcifications into groups with significantly different risks of malignancy. Biopsy assessment of appropriately selected microcalcifications is an effective method of detecting invasive breast cancer and DCIS, particularly of non-low-grade subtypes.
    G Farshid, T Sullivan, P Downey, P G Gill, and S Pieters
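
    The multivariate modelling described above is, in spirit, a logistic-regression analysis in which each predictor's independent effect is reported as an odds ratio. Below is a minimal sketch of how such an analysis might be set up in Python with statsmodels, on entirely simulated data; the predictor names, prevalences and effect sizes are assumptions for illustration, not the authors' code or data.

        # Hypothetical sketch of a multivariate analysis of malignancy predictors.
        # All data are simulated; only the case count and grade mix echo the abstract.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 2545  # same case count as the study; the records themselves are synthetic

        grade = rng.choice([3, 4, 5], size=n, p=[0.477, 0.283, 0.240])
        grade4 = (grade == 4).astype(int)   # indicator coding, grade 3 as reference
        grade5 = (grade == 5).astype(int)
        extent_gt_15mm = rng.integers(0, 2, size=n)
        palpable_mass = rng.integers(0, 2, size=n)
        later_episode = rng.integers(0, 2, size=n)

        # Assumed log-odds effects, chosen only so the example yields plausible output
        logit = -1.5 + np.log(2.2) * grade4 + np.log(3.3) * grade5 \
                + 0.5 * extent_gt_15mm + 0.7 * palpable_mass + 0.3 * later_episode
        malignant = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X = sm.add_constant(np.column_stack(
            [grade4, grade5, extent_gt_15mm, palpable_mass, later_episode]))
        fit = sm.Logit(malignant, X).fit(disp=False)
        print(np.exp(fit.params[1:]))  # fitted odds ratios; cf. the reported 2.2x and 3.3x

    Indicator coding of grade against the grade 3 reference is what makes the fitted coefficients directly comparable to the per-grade odds ratios quoted above.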

    Parameterized Complexity of the k-anonymity Problem

    The problem of publishing personal data without giving up privacy is becoming increasingly important. An interesting formalization that has recently been proposed is k-anonymity. This approach requires that the rows of a table be partitioned into clusters of size at least k and that all the rows in a cluster become the same tuple, after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is known to be APX-hard even when the record values are over a binary alphabet and k=3, and when the records have length at most 8 and k=4. In this paper we study how the complexity of the problem is influenced by different parameters, first showing that the problem is W[1]-hard when parameterized by the size of the solution (and the value k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the size of the alphabet and the number of columns. Finally, we investigate the computational (and approximation) complexity of the k-anonymity problem when restricting the instance to records of length bounded by 3 and k=3. We show that such a restriction is APX-hard.
    Comment: 22 pages, 2 figures
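
    To make the objective concrete, here is a minimal sketch of the cost model just described: within each cluster of size at least k, every column on which the rows disagree must be suppressed in all rows of that cluster, and the optimization problem minimizes the total number of suppressions. The function merely evaluates a given clustering; finding the cost-minimizing clustering is the APX-hard part, and this is not one of the paper's algorithms.

        # Evaluate the k-anonymity suppression cost of a given clustering of rows.
        from typing import List, Sequence

        def suppression_cost(rows: List[Sequence[str]], clusters: List[List[int]], k: int) -> int:
            assert all(len(c) >= k for c in clusters), "each cluster needs at least k rows"
            cost = 0
            for cluster in clusters:
                for col in range(len(rows[0])):
                    values = {rows[i][col] for i in cluster}
                    if len(values) > 1:  # rows disagree here: suppress the entry in all of them
                        cost += len(cluster)
            return cost

        # Example over a binary alphabet with k = 2: one column per cluster disagrees.
        table = [("0", "1", "1"), ("0", "0", "1"), ("1", "0", "0"), ("1", "0", "1")]
        print(suppression_cost(table, [[0, 1], [2, 3]], k=2))  # -> 4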

    Degree spectra for transcendence in fields

    We show that for both the unary relation of transcendence and the finitary relation of algebraic independence on a field, the degree spectra of these relations may consist of any single computably enumerable Turing degree, or of those c.e. degrees above an arbitrary fixed $\Delta^0_2$ degree. In other cases, these spectra may be characterized by the ability to enumerate an arbitrary $\Sigma^0_2$ set. This is the first proof that a computable field can fail to have a computable copy with a computable transcendence basis.

    Reflections on a coaching pilot project in healthcare settings

    This paper draws on personal reflections of coaching experiences and learning as a coach to consider the relevance of these approaches in a management context, working with a group of four healthcare staff who participated in a pilot coaching project. It explores their understanding of coaching techniques applied in management settings, via their reflections on using coaching approaches as healthcare managers. Coaching approaches can enhance a manager's skill portfolio and offer potential benefits in terms of successful goal achievement, growth, and mutual learning and development, both for managers themselves and for the staff they work with in task-focused scenarios.

    The zero exemplar distance problem

    Given two genomes with duplicate genes, \textsc{Zero Exemplar Distance} is the problem of deciding whether the two genomes can be reduced to the same genome without duplicate genes by deleting all but one copy of each gene in each genome. Blin, Fertin, Sikora, and Vialette recently proved that \textsc{Zero Exemplar Distance} for monochromosomal genomes is NP-hard even if each gene appears at most twice in each genome, thereby settling an important open question on genome rearrangement in the exemplar model. In this paper, we give a very simple alternative proof of this result. We also study the problem \textsc{Zero Exemplar Distance} for multichromosomal genomes without gene order, and prove the analogous result that it is also NP-hard even if each gene appears at most twice in each genome. In the positive direction, we show that both variants of \textsc{Zero Exemplar Distance} admit polynomial-time algorithms if each gene appears exactly once in one genome and at least once in the other genome. In addition, we present a polynomial-time algorithm for the related problem \textsc{Exemplar Longest Common Subsequence} in the special case that each mandatory symbol appears exactly once in one input sequence and at least once in the other input sequence. This answers an open question of Bonizzoni et al. We also show that \textsc{Zero Exemplar Distance} for multichromosomal genomes without gene order is fixed-parameter tractable if the parameter is the maximum number of chromosomes in each genome.
    Comment: Strengthened and reorganized
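
    For intuition, the following brute-force sketch decides \textsc{Zero Exemplar Distance} for tiny monochromosomal genomes by enumerating every exemplar reduction of each genome (keep exactly one copy of each gene, preserving order) and testing for a common result. Since the problem is NP-hard even with at most two copies per gene, the exponential enumeration is purely didactic.

        # Enumerate all exemplar reductions of a genome (a sequence of genes).
        from itertools import product

        def exemplar_reductions(genome):
            positions = {}
            for i, gene in enumerate(genome):
                positions.setdefault(gene, []).append(i)
            # Pick one occurrence per gene, keep the chosen indices in genome order.
            for choice in product(*positions.values()):
                yield tuple(genome[i] for i in sorted(choice))

        def zero_exemplar_distance(g1, g2):
            return bool(set(exemplar_reductions(g1)) & set(exemplar_reductions(g2)))

        print(zero_exemplar_distance("abab", "ba"))  # True: both reduce to "ba"
        print(zero_exemplar_distance("ab", "ba"))    # False: no common reduction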

    Computable randomness is about more than probabilities

    We introduce a notion of computable randomness for infinite sequences that generalises the classical version in two important ways. First, our definition of computable randomness is associated with imprecise probability models, in the sense that we consider lower expectations (or sets of probabilities) instead of classical 'precise' probabilities. Secondly, instead of binary sequences, we consider sequences whose elements take values in some finite sample space. Interestingly, we find that every sequence is computably random with respect to at least one lower expectation, and that lower expectations that are more informative have fewer computably random sequences. This leads to the intriguing question of whether every sequence is computably random with respect to a unique most informative lower expectation. We study this question in some detail and provide a partial answer.

    Reconfiguration of Dominating Sets

    We explore a reconfiguration version of the dominating set problem, where a dominating set in a graph $G$ is a set $S$ of vertices such that each vertex is either in $S$ or has a neighbour in $S$. In a reconfiguration problem, the goal is to determine whether there exists a sequence of feasible solutions connecting given feasible solutions $s$ and $t$ such that each pair of consecutive solutions is adjacent according to a specified adjacency relation. Two dominating sets are adjacent if one can be formed from the other by the addition or deletion of a single vertex. For various values of $k$, we consider properties of $D_k(G)$, the graph consisting of a vertex for each dominating set of size at most $k$ and edges specified by the adjacency relation. Addressing an open question posed by Haas and Seyffarth, we demonstrate that $D_{\Gamma(G)+1}(G)$ is not necessarily connected, for $\Gamma(G)$ the maximum cardinality of a minimal dominating set in $G$. The result holds even when graphs are constrained to be planar, of bounded tree-width, or $b$-partite for $b \ge 3$. Moreover, we construct an infinite family of graphs such that $D_{\gamma(G)+1}(G)$ has exponential diameter, for $\gamma(G)$ the minimum size of a dominating set. On the positive side, we show that $D_{n-m}(G)$ is connected and of linear diameter for any graph $G$ on $n$ vertices having at least $m+1$ independent edges.
    Comment: 12 pages, 4 figures
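
    The reconfiguration graph $D_k(G)$ can be materialized directly for small instances. The sketch below enumerates the dominating sets of size at most k of a graph given as an adjacency map and joins two sets when they differ by the addition or deletion of a single vertex; it is exponential in the size of the graph and intended only to illustrate the definitions.

        # Build D_k(G) explicitly for a small graph (didactic, exponential time).
        from itertools import combinations

        def is_dominating(adj, s):
            return all(v in s or adj[v] & s for v in adj)

        def reconfiguration_graph(adj, k):
            nodes = [frozenset(c) for r in range(1, k + 1)
                     for c in combinations(adj, r) if is_dominating(adj, set(c))]
            edges = [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]
                     if len(a ^ b) == 1]  # differ by one added or deleted vertex
            return nodes, edges

        # The 4-cycle 0-1-2-3 with k = 3: 10 dominating sets joined by 12 edges.
        adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
        nodes, edges = reconfiguration_graph(adj, k=3)
        print(len(nodes), "dominating sets;", len(edges), "reconfiguration edges")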

    Tree Compression with Top Trees Revisited

    We revisit tree compression with top trees (Bille et al., ICALP'13) and present several improvements to the compressor and its analysis. By significantly reducing the amount of information stored and guiding the compression step using a RePair-inspired heuristic, we obtain a fast compressor achieving good compression ratios, addressing an open problem posed by Bille et al. We show how, with relatively small overhead, the compressed file can be converted into an in-memory representation that supports basic navigation operations in worst-case logarithmic time without decompression. We also show a much improved worst-case bound on the size of the output of top-tree compression (answering an open question posed in a talk on this algorithm by Weimann in 2012).
    Comment: SEA 2015
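
    The RePair-inspired heuristic refers to the classic grammar-compression idea of repeatedly replacing the most frequent adjacent pair of symbols with a fresh nonterminal. The string version below is meant only to convey that guiding idea; the paper adapts the frequency-guided pairing to choose the merge order of top-tree clusters, which is not shown here.

        # RePair on a plain sequence: replace the most frequent adjacent pair until
        # no pair occurs twice. Returns the compressed sequence and the grammar rules.
        from collections import Counter
        from itertools import count

        def repair(seq):
            seq, rules, fresh = list(seq), {}, count(0)
            while True:
                pairs = Counter(zip(seq, seq[1:]))
                if not pairs or pairs.most_common(1)[0][1] < 2:
                    break
                pair = pairs.most_common(1)[0][0]
                sym = f"R{next(fresh)}"
                rules[sym] = pair
                out, i = [], 0
                while i < len(seq):  # greedy left-to-right replacement
                    if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                        out.append(sym); i += 2
                    else:
                        out.append(seq[i]); i += 1
                seq = out
            return seq, rules

        print(repair("abababc"))  # -> (['R1', 'R0', 'c'], {'R0': ('a', 'b'), 'R1': ('R0', 'R0')})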

    Long term microparticle impact fluxes on LDEF determined from optical survey of Interplanetary Dust Experiment (IDE) sensors

    Many of the IDE metal-oxide-silicon (MOS) capacitor-discharge impact sensors remained active during the entire Long Duration Exposure Facility (LDEF) mission. An optical survey of impact sites on the active surfaces of these sensors has been extended to include all sensors from the low-flux sides of LDEF (i.e. the west or trailing side, the earth end, and the space end) and 5-7 active sensors from each of LDEF's high-flux sides (i.e. the east or leading side, the south side, and the north side). This survey was facilitated by the presence of a relatively large (greater than 50 micron diameter) optical signature associated with each impact site on the active sensor surfaces. Of the approximately 4700 impacts in the optical survey data set, 84% were from particles in the 0.5 to 3 micron size range. An estimate of the total number of hypervelocity impacts on LDEF from particles greater than 0.5 micron in diameter yields a value of approximately 7 x 10^6. Impact feature dimensions for several dozen large craters on MOS sensors and germanium witness plates are also presented. Impact fluxes calculated from the IDE survey data closely matched surveys of similar-size impacts (greater than or equal to 3 micron diameter craters in Al, or marginal penetrations of a 2.4 micron thick Al foil) by other LDEF investigators. Since the first-year IDE data were electronically recorded, the flux data could be divided into three long-term time periods: the first year, the entire 5.8 year mission, and the intervening 4.8 years (by difference). The IDE data show that there was an order of magnitude decrease in the long-term microparticle impact flux on the trailing side of LDEF, from 1.01 to 0.098 x 10^-4 m^-2 s^-1, between the first year in orbit and years 2-6. The long-term flux on the leading edge showed an increase from 8.6 to 11.2 x 10^-4 m^-2 s^-1 over this same time period. (Short-term flux increases up to 10,000 times the background rate were recorded on the leading side during LDEF's first year in orbit.) The overall east/west ratio was 44, but during LDEF's first year in orbit the ratio was 8.5, and during years 2-6 the ratio was 114. Long-term microparticle impact fluxes on the space end decreased from 1.12 to 0.55 x 10^-4 m^-2 s^-1 from the first year in orbit to years 2-6. The earth end showed the opposite trend, with an increase from 0.16 to 0.38 x 10^-4 m^-2 s^-1. Fluxes on rows 6 and 12 decreased from 6.1 to 3.4 and from 6.7 to 3.7 x 10^-4 m^-2 s^-1, respectively, over the same time periods. This resulted in space/earth microparticle impact flux ratios of 7.1 during the first year and 1.5 during years 2-6, while the south/north, space/north and space/south ratios remained constant at 1.1, 0.16 and 0.17, respectively, during the entire mission. This information indicates the possible identification of long-term changes in the contributions of discrete orbital debris components to the total impact flux experienced by LDEF. A dramatic decrease in the debris population capable of striking the trailing side was detected, which could possibly be attributed to the hiatus in western launch activity from 1986-1989. A significant increase in the debris population that preferentially struck the leading side was also observed and could possibly be attributed to a single breakup event that occurred in September 1986.
    A substantial increase in the microparticle debris population that struck the earth end of LDEF, but not the space end, was also detected and could possibly be the result of a single breakup event at low altitude. These results point to the importance of including changes in the contributions of discrete orbital debris components in flux models in order to achieve accurate predictions of the microparticle environment that a particular spacecraft will experience in earth orbit. The only reliable, verified empirical measurements of these changes are reported in this paper. Further time-resolved in-situ measurements of these debris populations are needed to accurately assess model predictions and mitigation practices.
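
    As a quick check on the quoted figures: a flux here is impacts per square metre of sensor area per second, and the directional ratios are simple quotients of such fluxes. The helper below is a hypothetical illustration (the paper's fluxes come from sensor areas and exposure times not reproduced here); the ratio computation uses the years 2-6 values quoted above.

        # Flux = impacts / (area x exposure time); ratios compare two sides of LDEF.
        SECONDS_PER_YEAR = 3.156e7

        def flux(impacts, area_m2, years):
            """Impacts per m^2 per second; all arguments below are hypothetical."""
            return impacts / (area_m2 * years * SECONDS_PER_YEAR)

        print(flux(impacts=100, area_m2=0.1, years=5.8))  # made-up sensor, for shape only

        # Years 2-6 leading/trailing fluxes quoted in the text, in units of 1e-4 m^-2 s^-1:
        east, west = 11.2, 0.098
        print(f"east/west ratio, years 2-6: {east / west:.0f}")  # ~114, as reported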

    Parameterized complexity of the MINCCA problem on graphs of bounded decomposability

    In an edge-colored graph, the cost incurred at a vertex on a path when two incident edges with different colors are traversed is called the reload or changeover cost. The "Minimum Changeover Cost Arborescence" (MINCCA) problem consists of finding an arborescence with a given root vertex such that the total changeover cost of the internal vertices is minimized. It has recently been proved by Gözüpek et al. [TCS 2016] that the problem is FPT when parameterized by the treewidth and the maximum degree of the input graph. In this article we present the following results for the MINCCA problem:
    - it is W[1]-hard parameterized by the treedepth of the input graph, even on graphs of average degree at most 8; in particular, it is W[1]-hard parameterized by the treewidth of the input graph, which answers the main open problem of Gözüpek et al. [TCS 2016];
    - it is W[1]-hard on multigraphs parameterized by the tree-cutwidth of the input multigraph;
    - it is FPT parameterized by the star tree-cutwidth of the input graph, a slightly restricted version of tree-cutwidth; this result strictly generalizes the FPT result of Gözüpek et al. [TCS 2016];
    - it remains NP-hard on planar graphs even when restricted to instances with at most 6 colors and 0/1 symmetric costs, or to instances with at most 8 colors, maximum degree bounded by 4, and 0/1 symmetric costs.
    Comment: 25 pages, 11 figures
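
    A short sketch of the objective being minimized: given an arborescence (encoded here as a child-to-parent map, an assumed representation), an edge coloring, and a color-to-color cost matrix, the total changeover cost sums cost(incoming color, outgoing color) over every internal vertex. Evaluating a fixed arborescence is straightforward; the hardness results above concern finding the cheapest one.

        # Total changeover cost of the internal vertices of an edge-colored arborescence.
        def changeover_cost(parent, color, cost):
            """parent maps child -> parent; color maps directed edge (u, v) -> color."""
            total = 0
            for v, p in parent.items():          # edge p -> v enters v ...
                for w, q in parent.items():      # ... and edge v -> w leaves v
                    if q == v:
                        total += cost[color[(p, v)]][color[(v, w)]]
            return total

        # Root r with children a and b; a has child c. Two colors, 0/1 symmetric costs.
        parent = {"a": "r", "b": "r", "c": "a"}
        color = {("r", "a"): "red", ("r", "b"): "blue", ("a", "c"): "blue"}
        cost = {"red": {"red": 0, "blue": 1}, "blue": {"red": 1, "blue": 0}}
        print(changeover_cost(parent, color, cost))  # -> 1: vertex a switches red -> blue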