17,355 research outputs found

    Coordinating visualizations of polysemous action: Values added for grounding proportion

    We contribute to research on visualization as an epistemic learning tool by inquiring into the didactical potential of having students visualize one phenomenon in accord with two different partial meanings of the same concept. Twenty-two Grade 4-6 students participated in a design study that investigated the emergence of proportional-equivalence notions from mediated perceptuomotor schemas. Working as individuals or in pairs in tutorial clinical interviews, students solved non-symbolic interaction problems that utilized remote-sensing technology. Next, they used symbolic artifacts interpolated into the problem space as semiotic means to objectify, in the mathematical register, a variety of both additive and multiplicative solution strategies. Finally, they reflected on tensions between these competing visualizations of the space. Micro-ethnographic analyses of episodes from three paradigmatic case studies suggest that students reconciled semiotic conflicts by generating heuristic logico-mathematical inferences that integrated competing meanings into cohesive conceptual networks. These inferences hinged on revisualizing additive elements multiplicatively. Implications are drawn for rethinking didactical design for proportions. © 2013 FIZ Karlsruhe

    Summarizing First-Person Videos from Third Persons' Points of Views

    Video highlighting and summarization are topics of broad interest in computer vision, benefiting a variety of applications such as viewing, searching, and storage. However, most existing studies rely on training data of third-person videos and do not generalize easily to highlighting first-person ones. With the goal of deriving an effective model for summarizing first-person videos, we propose a novel deep neural network architecture for describing and discriminating vital spatiotemporal information across videos with different points of view. Our proposed model is realized in a semi-supervised setting, in which fully annotated third-person videos, unlabeled first-person videos, and a small number of annotated first-person ones are presented during training. In our experiments, qualitative and quantitative evaluations on both benchmarks and our collected first-person video datasets are presented.
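    The abstract sketches the training regime but not the architecture. As a loose illustration only, the following PyTorch sketch shows one common way to realize such a semi-supervised, cross-view setup: a shared encoder, a highlight scorer supervised on the labeled clips, and a view discriminator over encoder features. All module names, dimensions, and losses here are assumptions, not the authors' model.

```python
# Hypothetical sketch of a semi-supervised, cross-view highlight scorer.
# Names, dimensions, and losses are illustrative assumptions, not the
# architecture proposed in the paper.
import torch
import torch.nn as nn

class CrossViewSummarizer(nn.Module):
    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)  # shared segment encoder
        self.scorer = nn.Linear(hidden, 1)                         # per-segment highlight score
        self.view_disc = nn.Sequential(                            # first- vs third-person discriminator
            nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):                  # x: (batch, time, feat_dim) segment features
        h, _ = self.encoder(x)             # h: (batch, time, hidden)
        return self.scorer(h).squeeze(-1), self.view_disc(h).squeeze(-1)

bce = nn.BCEWithLogitsLoss()

def losses(model, third_x, third_y, first_x, first_y=None):
    """Highlight loss on annotated clips plus a view-discrimination loss on
    all clips. A full adversarial setup would also update the encoder to
    fool the discriminator (e.g. via gradient reversal); omitted here."""
    s3, d3 = model(third_x)
    s1, d1 = model(first_x)
    loss_highlight = bce(s3, third_y)               # fully annotated third-person videos
    if first_y is not None:                         # the few annotated first-person clips
        loss_highlight = loss_highlight + bce(s1, first_y)
    loss_view = bce(d3, torch.ones_like(d3)) + bce(d1, torch.zeros_like(d1))
    return loss_highlight, loss_view
```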

    Does a novel X-ray imaging technology provide a substantial radiation dose reduction for patients in trans-catheter aortic valve implantation procedures?

    Purpose: Modern interventional X-ray equipment employs image processing to permit a reduction in radiation dose whilst retaining sufficient image quality. The aim of this study was to investigate whether our recently installed system (AlluraClarity, Philips Healthcare), which combines advanced real-time image noise reduction algorithms with anatomy-specific X-ray optimization (beam filtering, grid switch, pulse width, spot size, detector and image processing engine), affected patient procedure dose and overall procedure duration in routine trans-catheter aortic valve implantation (TAVI) procedures. Methods: Patient dose for 42 TAVI patients from the AlluraClarity cardiac catheterisation lab and from a reference system (Axiom Artis, Siemens Healthcare) in the same cardiology department was recorded. Median values from the two X-ray systems were compared using the Wilcoxon statistical test. Results: Median total patient procedure doses were 4016 and 7088 cGy·cm² for the AlluraClarity and reference systems respectively. AlluraClarity median patient doses were 3405 cGy·cm² from fluoroscopy and 783.5 cGy·cm² from digital image acquisition; the corresponding reference medians were 4928 cGy·cm² and 2511 cGy·cm². All differences in patient dose were significant at the 5% level. Median total fluoroscopy times [min:sec] were 19:57 and 20:20 for the AlluraClarity and reference systems respectively. Conclusion: The AlluraClarity cardiac catheterisation lab had a 43% lower total patient procedure dose for TAVI patients than the reference lab; fluoroscopy and digital image acquisition doses were 31% and 69% lower respectively. There was no statistically significant difference in total fluoroscopy time between the two labs.
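    The percentage reductions quoted in the conclusion follow directly from the reported medians, as this quick arithmetic check shows:

```python
# Sanity check: dose reductions implied by the reported median doses (cGy·cm²).
total = (7088 - 4016) / 7088      # ≈ 0.43 → 43% lower total procedure dose
fluoro = (4928 - 3405) / 4928     # ≈ 0.31 → 31% lower fluoroscopy dose
acq = (2511 - 783.5) / 2511       # ≈ 0.69 → 69% lower acquisition dose
print(f"total {total:.0%}, fluoroscopy {fluoro:.0%}, acquisition {acq:.0%}")
```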

    Image quality based x-ray dose control in cardiac imaging

    An automated closed-loop dose control system balances the radiation dose delivered to patients against the quality of images produced in cardiac x-ray imaging systems. Using computer simulations, this study compared two designs of automatic x-ray dose control in terms of the radiation dose and quality of images produced. The first design, common in x-ray systems today, maintained a constant dose rate at the image receptor. The second design maintained constant image quality in the output images. A computer model represented patients as a polymethylmethacrylate phantom (which has similar x-ray attenuation to soft tissue) containing a detail representative of an artery filled with contrast medium. The model predicted the entrance surface dose to the phantom and the contrast-to-noise ratio of the detail as an index of image quality. Results showed that for the constant dose control system, phantom dose increased substantially with phantom size (a fivefold increase between 20 cm and 30 cm phantom thickness), yet image quality decreased by 43% over the same range. For the constant quality control system, phantom dose increased at a greater rate with phantom thickness (more than a tenfold increase between the 20 cm and 30 cm phantoms). Image quality based dose control could tailor the x-ray output to just achieve the quality required, which would reduce dose to patients where the current dose control produces images of unnecessarily high quality. However, maintaining higher levels of image quality for large patients would result in a significant dose increase over current practice.
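    The contrast-to-noise ratio used here as the image quality index is, in its generic form, the difference between the mean signal in the detail and in the surrounding background, divided by the background noise. A minimal sketch of that generic definition (the study's exact formulation may differ):

```python
import numpy as np

def cnr(image, detail_mask, background_mask):
    """Generic contrast-to-noise ratio of a detail against its background:
    |mean(detail) - mean(background)| / std(background).
    The study's exact image-quality index may be defined differently."""
    detail_mean = image[detail_mask].mean()
    background_mean = image[background_mask].mean()
    noise = image[background_mask].std()
    return abs(detail_mean - background_mean) / noise
```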

    An Extremal Chiral Primary Three-Point Function at Two-loops in ABJ(M)

    arXiv:1411.0626 [hep-th]; report number: QMUL-PH-14-23

    Survey on geographic visual display techniques in epidemiology: Taxonomy and characterization

    Much work has been done on the topic of Geographic Visual Display (GVD), with differing objectives and approaches. Some studies compare traditional cartography techniques (GVD without Human-Computer Interaction) with modern GIS, also known as geo-visualization; other literature differentiates, and highlights the commonalities among, the features and architectures of different GVD tools (from layers and clusters to dots, colour and more). Furthermore, despite the existence of more advanced tools that support data exploration, little work has evaluated how those tools handle complex, multivariate spatio-temporal data. Several studies test the usability and interactivity of tools against users' needs or preferences, some develop frameworks that address users' concerns across a wide array of tasks, and others show how these tools can stimulate visual thinking and support decision making or event prediction among decision-makers. This paper surveys and categorizes these research articles into two categories: Traditional Cartography (TC) and Geo-visualization (G). Each category is classified by the techniques and tasks that contribute to meaningful data representation in GVD, perspectives on each area are developed, and trends in GVD techniques are evaluated. Suggestions and ideas on mechanisms to improve and diversify GVD techniques are provided at the end of this survey.

    Peer review and citation data in predicting university rankings, a large-scale analysis

    Most Performance-based Research Funding Systems (PRFS) draw on peer review and bibliometric indicators, two different methodologies which are sometimes combined. A common argument against the use of indicators in such research evaluation exercises is their low correlation with peer review judgments at the article level. In this study, we analyse 191,000 papers from 154 higher education institutes which were peer reviewed in a national research evaluation exercise. We combine these data with 6.95 million citations to the original papers. We show that when citation-based indicators are applied at the institutional or departmental level, rather than at the level of individual papers, surprisingly large correlations with peer review judgments can be observed, up to r = 0.802 (n = 37, p < 0.001) for some disciplines. In our evaluation of ranking prediction performance based on citation data, we show we can reduce the mean rank prediction error by 25% compared to previous work. This suggests that citation-based indicators are sufficiently aligned with peer review results at the institutional level to be used to lessen the overall burden of peer review on national evaluation exercises, leading to considerable cost savings.
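    The key move is aggregation before correlation: per-paper citation counts and review grades are combined per institution, and the correlation is then computed across institutions rather than across papers. A toy illustration with invented column names and data (the study's exact aggregation and correlation measure may differ):

```python
# Hypothetical illustration of institution-level aggregation before
# correlating citations with peer-review scores; data are invented.
import pandas as pd
from scipy.stats import spearmanr

papers = pd.DataFrame({
    "institution": ["A", "A", "B", "B", "C", "C"],
    "citations":   [12, 30, 5, 8, 50, 44],
    "peer_score":  [3, 4, 2, 2, 4, 4],   # per-paper review grade
})

by_inst = papers.groupby("institution").mean(numeric_only=True)
rho, p = spearmanr(by_inst["citations"], by_inst["peer_score"])
print(f"institution-level Spearman rho = {rho:.3f} (p = {p:.3f})")
```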

    Coulomb Explosion Dynamics of Chlorocarbonylsulfenyl Chloride

    The Coulomb explosion dynamics following strong field ionization of chlorocarbonylsulfenyl chloride were studied using multimass coincidence detection and covariance imaging analysis, supported by density functional theory calculations. These results show evidence of multiple dissociation channels from various charge states. Double ionization to low-lying electronic states leads to a dominant C-S cleavage channel, while higher states can alternatively correlate to the loss of Cl+. Triple ionization leads to a double dissociation channel, the observation of which is confirmed via three-body covariance analysis, while further ionization leads primarily to atomic or diatomic fragments whose relative momenta depend strongly on the starting structure of the molecule.
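    Covariance imaging rests on the shot-to-shot identity cov(X, Y) = <XY> - <X><Y>: fragments produced in the same dissociation event fluctuate together across laser shots, so their covariance map reveals correlated momenta. A minimal sketch of a two-fragment covariance map over binned ion signals (a synthetic setup, not the paper's analysis pipeline):

```python
import numpy as np

def covariance_map(a, b):
    """Shot-resolved covariance map cov(A_i, B_j) over laser shots.
    `a` and `b` are (n_shots, n_bins) histograms of two fragment ions
    (e.g. binned by momentum or time of flight); the returned map has
    shape (n_bins_a, n_bins_b)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n_shots = a.shape[0]
    mean_ab = a.T @ b / n_shots                       # <A_i * B_j>
    return mean_ab - np.outer(a.mean(axis=0), b.mean(axis=0))
```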

    Machine vision image quality measurement in cardiac x-ray imaging

    The purpose of this work is to report on a machine vision approach for the automated measurement of x-ray image contrast of coronary arteries filled with iodine contrast media during interventional cardiac procedures. A machine vision algorithm was developed that creates a binary mask of the principal vessels of the coronary artery tree by thresholding a standard deviation map of the direction image of the cardiac scene derived using a Frangi filter. Using the mask, average contrast is calculated by fitting a Gaussian model to the greyscale profile orthogonal to the vessel centre line at a number of points along the vessel. The algorithm was applied to sections of single image frames from 30 left and 30 right coronary artery image sequences from different patients. Manual measurements of average contrast were also performed on the same images. A Bland-Altman analysis indicates good agreement between the two methods, with 95% limits of agreement of -0.046 to +0.048 and a mean bias of 0.001. The machine vision algorithm has the potential to provide real-time, context-sensitive information so that radiographic imaging control parameters could be adjusted on the basis of clinically relevant image content.
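    As a rough sketch of the pipeline described above (vesselness-based masking followed by Gaussian fits to profiles taken across the vessel), the following uses scikit-image's Frangi filter and SciPy curve fitting; the threshold, profile handling, and Gaussian parameterization are illustrative assumptions rather than the authors' exact algorithm:

```python
# Illustrative sketch only: threshold, profile extraction, and the Gaussian
# model are assumptions, not the authors' exact algorithm.
import numpy as np
from scipy.optimize import curve_fit
from skimage.filters import frangi

def vessel_mask(frame, threshold=0.05):
    """Binary mask of the principal vessels from a Frangi vesselness map."""
    return frangi(frame) > threshold

def gaussian(x, amplitude, centre, sigma, offset):
    return amplitude * np.exp(-((x - centre) ** 2) / (2 * sigma ** 2)) + offset

def profile_contrast(profile):
    """Fit a Gaussian to a greyscale profile taken orthogonal to the vessel
    centre line; the fitted amplitude acts as the local contrast estimate
    (negative for an iodine-filled vessel, which appears dark)."""
    x = np.arange(profile.size, dtype=float)
    p0 = [profile.min() - profile.max(), profile.size / 2, 2.0, profile.max()]
    params, _ = curve_fit(gaussian, x, profile, p0=p0)
    return params[0]

# Average contrast = mean of profile_contrast over sampled points along the vessel.
```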

    Acute Cerebral Infarction Masked by a Brain Tumor

    We report on an 81-year-old man who presented with left-sided limb weakness and was brought to the emergency room, where brain computed tomography revealed a tumor in the right parasellar region. The patient was admitted to the neurosurgery department, and the symptoms were initially attributed to the tumor's mass effect. Following angiography and magnetic resonance imaging, the final diagnosis proved to be acute ischemic infarction, with the brain tumor an incidental finding.