
    Visualization-Based Mapping of Language Function in the Brain

    Cortical language maps, obtained through intraoperative electrical stimulation studies, provide a rich source of information for research on language organization. Previous studies have shown interesting correlations between the distribution of essential language sites and such behavioral indicators as verbal IQ, and have provided suggestive evidence for regarding human language cortex as an organization of multiple distributed systems. Noninvasive studies using ECoG, PET, and functional MR lend support to this model; however, there are as yet no studies that integrate these two forms of information. In this paper we describe a method for mapping the stimulation data onto a 3-D MRI-based neuroanatomic model of the individual patient. The mapping is done by comparing an intraoperative photograph of the exposed cortical surface with a computer-based MR visualization of the surface, interactively indicating corresponding stimulation sites, and recording the 3-D MR machine coordinates of the indicated sites. Repeatability studies were performed to validate the accuracy of the mapping technique. Six observers (a neurosurgeon, a radiologist, and four computer scientists) independently mapped 218 stimulation sites from 12 patients. The mean distance of a mapping from the mean location of each site was 2.07 mm, with a standard deviation of 1.5 mm, or within 5.07 mm with 95% confidence. Since the surgical sites are accurate to within approximately 1 cm, these results show that the visualization-based approach is accurate within the limits of the stimulation maps. When incorporated within the kind of information system envisioned by the Human Brain Project, this anatomically based method will not only provide a key link between noninvasive and invasive approaches to understanding language organization, but will also provide the basis for studying the relationship between language function and anatomical variability.
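    The repeatability figures quoted in the abstract fit together arithmetically: the 5.07 mm 95% bound equals the 2.07 mm mean error plus two standard deviations of 1.5 mm. A minimal sketch, assuming the authors used the two-sigma rule (the numbers themselves are from the abstract; the rule is an assumption):

    ```python
    # Numbers from the abstract; the two-sigma 95% rule is an assumption.
    mean_error_mm = 2.07   # mean distance of a mapping from each site's mean location
    std_dev_mm = 1.5       # standard deviation of those distances

    bound_95_mm = mean_error_mm + 2 * std_dev_mm
    print(f"95% bound: {bound_95_mm:.2f} mm")  # prints "95% bound: 5.07 mm"

    # The stimulation maps themselves are only accurate to ~10 mm (1 cm),
    # so the mapping error is well within that limit.
    assert bound_95_mm < 10.0
    ```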

    Development of visual programming techniques to integrate theoretical modeling into the scientific planning and instrument operations environment of ISTP

    The goal of this project is to investigate the use of visualization software based on the visual programming and data-flow paradigms to meet the needs of the SPOF and, through it, the International Solar Terrestrial Physics (ISTP) science community. Specific needs we address include science planning, data interpretation, comparisons of data with simulation and model results, and data acquisition. Our accomplishments during the twelve-month grant period are discussed below.
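    The data-flow paradigm the abstract invokes can be sketched in miniature: processing stages are nodes, and results are pulled through directed connections between them. This is an illustrative toy, not part of any ISTP/SPOF software; all names here are made up for the example.

    ```python
    # Toy pull-based data-flow graph: each node applies a function to the
    # results of its upstream nodes. Names are illustrative only.

    class Node:
        """A data-flow node: applies a function to its upstream inputs."""
        def __init__(self, func, *inputs):
            self.func = func
            self.inputs = inputs

        def evaluate(self):
            # Pull-based evaluation: recursively evaluate upstream nodes first.
            return self.func(*(n.evaluate() for n in self.inputs))

    class Source(Node):
        """A node that simply emits a constant dataset."""
        def __init__(self, value):
            super().__init__(lambda: value)

    # Example pipeline: compare observed data with a model result.
    data = Source([1.0, 2.0, 3.0])
    model = Source([1.1, 1.9, 3.2])
    residual = Node(lambda xs, ms: [x - m for x, m in zip(xs, ms)], data, model)
    rms = Node(lambda rs: (sum(r * r for r in rs) / len(rs)) ** 0.5, residual)

    print(rms.evaluate())
    ```

    A visual-programming front end would let a scientist wire such nodes together graphically; evaluation of the resulting graph works the same way.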

    Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    This report describes our project activities for the period Sep. 1991 - Oct. 1992. Our activities included stabilizing the STAR software system, porting STAR to IDL/widgets (for an improved user interface), targeting new visualization techniques for multi-dimensional data visualization (emphasizing 3D visualization), and exploring leading-edge 3D interface devices. During the past project year we emphasized high-end visualization techniques: exploring new tools offered by state-of-the-art visualization software (such as AVS and IDL/widgets), experimenting with tools still under research at the Department of Computer Science (e.g., the use of glyphs for multidimensional data visualization), and researching current 3D input/output devices as they could be used to explore 3D astrophysical data. As always, our project activity is driven by the need to interpret astrophysical data more effectively.

    Adversarial Robustness: Softmax versus Openmax

    Deep neural networks (DNNs) provide state-of-the-art results on various tasks and are widely used in real-world applications. However, it was discovered that machine learning models, including the best-performing DNNs, suffer from a fundamental problem: they can unexpectedly and confidently misclassify examples formed by slightly perturbing otherwise correctly recognized inputs. Various approaches have been developed for efficiently generating these so-called adversarial examples, but they mostly rely on ascending the gradient of the loss. In this paper, we introduce the novel logits optimized targeting system (LOTS) to directly manipulate deep features captured at the penultimate layer. Using LOTS, we analyze and compare the adversarial robustness of DNNs using the traditional Softmax layer with Openmax, which was designed to provide open set recognition by defining classes derived from deep representations, and is claimed to be more robust to adversarial perturbations. We demonstrate that Openmax provides less vulnerable systems than Softmax to traditional attacks; however, we show that it can be equally susceptible to more sophisticated adversarial generation techniques that work directly on deep representations. Comment: Accepted to British Machine Vision Conference (BMVC) 201
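    The core idea attributed to LOTS above, perturbing an input so that its *deep features* move toward a chosen target representation rather than ascending a classification loss, can be sketched with a toy linear feature extractor. This is a hedged illustration of that feature-targeting idea, not the paper's actual algorithm: the linear extractor, the step size, and all variable names are assumptions made for self-containment.

    ```python
    import numpy as np

    # Toy "penultimate layer": features f(x) = W @ x. In LOTS-style attacks
    # the gradient would come from backpropagation through a real network;
    # here the linear model gives the gradient in closed form.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 8))        # feature extractor (assumption)
    x = rng.standard_normal(8)             # original input
    target = W @ rng.standard_normal(8)    # feature vector of a target example

    step = 0.05
    for _ in range(2000):
        residual = W @ x - target          # feature-space error
        grad = W.T @ residual              # gradient of 0.5*||W x - t||^2 w.r.t. x
        x -= step * grad                   # descend toward the target features

    # The perturbed input's deep features now nearly match the target's,
    # which is what makes representation-level attacks hard to detect.
    print(np.linalg.norm(W @ x - target))
    ```

    A system that classifies from these deep representations (as Openmax does) can then be fooled by exactly this kind of feature-space match.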

    Slisp: A Flexible Software Toolkit for Hybrid, Embedded and Distributed Applications

    We describe Slisp (pronounced ‘Ess-Lisp’), a hybrid Lisp–C programming toolkit for the development of scriptable and distributed applications. Computationally expensive operations implemented as separate C-coded modules are selectively compiled into a small Xlisp interpreter, then called as Lisp functions in a Lisp-coded program. The resulting hybrid program may run in several modes: as a stand-alone executable, embedded in a different C program, as a networked server accessed from another Slisp client, or as a networked server accessed from a C-coded client. Five years of experience with Slisp, as well as experience with other scripting languages such as Tcl and Perl, are summarized. These experiences suggest that Slisp will be most useful for mid-sized applications in which the kinds of scripting and embeddability features provided by Tcl and Perl can be extended in an efficient manner to larger applications, while maintaining a well-defined standard (Common Lisp) for these extensions. In addition, the generality of Lisp makes it a good candidate for an application-level communication language in distributed environments.
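    The hybrid pattern the abstract describes, fast native modules registered into a small interpreter and called by name from scripts, can be sketched in Python. This is an illustrative analogue, not Slisp itself: plain Python functions stand in for the C-coded modules, and every name below is invented for the example.

    ```python
    # Illustrative analogue of the Slisp pattern: expensive operations are
    # registered into an interpreter's function table and invoked by name
    # from the script layer. No names here come from the actual toolkit.

    NATIVE_OPS = {}

    def native(name):
        """Register a function under a name callable from the script layer."""
        def wrap(func):
            NATIVE_OPS[name] = func
            return func
        return wrap

    @native("dot")
    def dot(xs, ys):
        # Stand-in for a computationally expensive C-coded module.
        return sum(x * y for x, y in zip(xs, ys))

    def call(name, *args):
        """The 'interpreter' dispatches script-level calls to native code."""
        return NATIVE_OPS[name](*args)

    # Stand-alone mode: call the registered operation directly.
    print(call("dot", [1, 2, 3], [4, 5, 6]))  # prints 32
    ```

    The same dispatch table could sit behind an embedded entry point or a socket server, which is how one program can serve the several run modes listed above.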

    Do you see what I mean?

    Visualizers, like logicians, have long been concerned with meaning. Generalizing from MacEachren's overview of cartography, visualizers have to think about how people extract meaning from pictures (psychophysics), what people understand from a picture (cognition), how pictures are imbued with meaning (semiotics), and how in some cases that meaning arises within a social and/or cultural context. If we think of the communication acts carried out in the visualization process, further levels of meaning are suggested. Visualization begins when someone has data that they wish to explore and interpret; the data are encoded as input to a visualization system, which may in its turn interact with other systems to produce a representation. This is communicated back to the user(s), who have to assess it against their goals and knowledge, possibly leading to further cycles of activity. Each phase of this process involves communication between two parties. For this to succeed, those parties must share a common language with an agreed meaning. We offer the following three steps, in increasing order of formality: terminology (jargon), taxonomy (vocabulary), and ontology. Our argument in this article is that it is time to begin synthesizing the fragments and views into a level-3 model, an ontology of visualization. We also address why this should happen, what is already in place, how such an ontology might be constructed, and why now.

    Toward a first-principles integrated simulation of tokamak edge plasmas

    The performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes, XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP, and NIMROD for the study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.