SB17-11/12: Endorsing Power Shift
This resolution passed during the November 9, 2011 meeting of the Associated Students of the University of Montana (ASUM).
SB45-11/12: Socially Responsible Apparel
This resolution passed during the March 21, 2012 meeting of the Associated Students of the University of Montana (ASUM).
DOECGF 2008 Site Report
The Data group provides data analysis and visualization support to its customers. This consists primarily of the development and support of VisIt, a data analysis and visualization tool. Support ranges from answering questions about the tool to providing classes on how to use it and performing data analysis and visualization for customers. The Information Management and Graphics (IMG) Group supports and develops tools that enhance our ability to access, display, and understand large, complex data sets. Activities include applying visualization software for terascale data exploration; running two video production labs; supporting graphics libraries and tools for end users; maintaining PowerWalls and assorted other displays; and developing software for searching, managing, and browsing scientific data. Researchers in the Center for Applied Scientific Computing (CASC) work on various projects, including the development of visualization techniques for terascale data exploration, which are funded by the ASC program, among others. The researchers also have LDRD projects and collaborations with other lab researchers, academia, and industry. During the past year we have completed our visualization cluster strategy of converting to Opteron/IB clusters. We support a 128-node Opteron/IB cluster providing a visualization production server for our unclassified systems and a 256-node Opteron/IB cluster for the classified systems, as well as several smaller clusters to drive the PowerWalls. We are in the process of updating projectors for one of the PowerWalls and acquiring new fiber modems for another. We deployed a 150 TB NFS server to provide dedicated storage for data analysis and visualization on our unclassified visualization server. The IMG group is located in the Terascale Simulation Facility, home to BGL, Purple, and Atlas, which includes both classified and unclassified visualization theaters, a visualization computer floor and deployment workshop, and video production labs. We continued to provide traditional graphics-group consulting and video production support, and we maintained five PowerWalls and a host of other displays.
SimTracker - Using the Web to track computer simulation results
Large-scale computer simulations, a hallmark of computing at Lawrence Livermore National Laboratory (LLNL), often take days to run and can produce massive amounts of output. The typical environment of many LLNL scientists includes multiple hardware platforms, a large collection of eclectic software applications, data stored on many devices in many formats, and little standard metadata (accessible documentation about the data). The exploration of simulation results typically proceeds as a laborious process requiring knowledge of this complex environment and many application programs. We have addressed this problem by developing a web-based approach for exploring simulation results via the automatic generation of metadata summaries, which provide convenient access to the data sets and associated analysis tools. In this paper we will describe the SimTracker tool for automatically generating metadata that serves as a quick overview and index to the archived results of simulations. The SimTracker application consists of two parts: a generation component and a viewing component. The generation component captures and generates calculation metadata from a simulation. These metadata include graphical snapshots from various stages of the run, pointers to the input and output files from the simulation, and assorted annotations describing the run. SimTracker generation can be done either during a simulation or afterwards. When integrated with a code system, SimTracker does its work on the fly, allowing the user to monitor a calculation while it is running. The viewing component of SimTracker provides a web-based mechanism for both quick perusal and careful analysis of simulation results. HTML is created on the fly from a series of Perl CGI scripts and metadata extracted from a database. A variety of views are provided, ranging from a high-level table of contents showing all of one's simulations to an in-depth results page from which numeric values can be extracted and analysis tools can easily be launched. Annotations can be associated with a calculation at any time, allowing an end user to customize the summary pages with, for example, titles, abstracts, and pointers to related information. In this paper, we will present an overview of the design, implementation, and operational aspects of the SimTracker application. We will also discuss how it is being deployed in the environment of the Accelerated Strategic Computing Initiative [1]. SimTracker was designed as an extensible application that we are now adapting for use with several simulation codes.
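For illustration, the following is a minimal sketch in Python of the generation step described above. It is not the paper's implementation (SimTracker uses Perl CGI scripts backed by a database); the file patterns, field names, and the simtracker_summary.json output file are assumptions made for the example.

    import glob
    import json
    import os
    import time

    def capture_run_metadata(run_dir, notes=None):
        """Collect a SimTracker-style metadata record for one simulation run.

        Illustrative sketch only: all file patterns and field names here are
        hypothetical. The record stores pointers to the data, not the data itself.
        """
        record = {
            "run_directory": os.path.abspath(run_dir),
            "captured_at": time.strftime("%Y-%m-%d %H:%M:%S"),
            # Graphical snapshots taken at various stages of the run.
            "snapshots": sorted(glob.glob(os.path.join(run_dir, "*.png"))),
            # Pointers to the input and output files from the simulation.
            "input_files": sorted(glob.glob(os.path.join(run_dir, "*.in"))),
            "output_files": sorted(glob.glob(os.path.join(run_dir, "*.out"))),
            # Free-form annotations (title, abstract, related links) added by the user.
            "annotations": notes or {},
        }
        with open(os.path.join(run_dir, "simtracker_summary.json"), "w") as f:
            json.dump(record, f, indent=2)
        return record

A viewing layer in this style would read such records and render HTML summary pages, from a table of contents over all runs down to a per-run results page with links to the snapshots and data files.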
Mining scientific data archives through metadata generation
Data analysis and management tools typically have not supported the documenting of data, so scientists must manually maintain all information pertaining to the context and history of their work. This metadata is critical to effective retrieval and use of the masses of archived data, yet little of it exists on-line or in an accessible format. Exploration of archived legacy data typically proceeds as a laborious process, using commands to navigate through file structures on several machines. This file-at-a-time approach needs to be replaced with a model that represents data as collections of interrelated objects. The tools that support this model must focus attention on the data while hiding the complexity of the computational environment. This problem was addressed by developing a tool for exploring large amounts of data in UNIX directories via automatic generation of metadata summaries. This paper describes the model for metadata summaries of collections and the Data Miner tool for interactively traversing directories and automatically generating metadata that serves as a quick overview and index to the archived data. The summaries include thumbnail images as well as links to the data, related directories, and other metadata. Users may personalize the metadata by adding a title and abstract to the summary, which is presented as an HTML page viewed with a World Wide Web browser. We have designed summaries for three types of collections of data: contents of a single directory; virtual directories that represent relations between scattered files; and groups of related calculation files. By focusing on the scientists' view of the data mining task, we have developed techniques that assist in the "detective work" of mining without requiring knowledge of mundane details about formats and commands. Experiences in working with scientists to design these tools are recounted.
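A toy sketch of the single-directory summary idea follows, in Python. The names and output file are assumptions for the example; the actual Data Miner also covers virtual directories and groups of related calculation files and embeds thumbnail images, which are omitted here.

    import html
    import os

    def write_directory_summary(directory, title="Untitled collection", abstract=""):
        """Generate a simple HTML overview page for one directory of archived data.

        Illustrative sketch only: emits a title, a user-supplied abstract, and
        links to the directory's contents, standing in for a metadata summary.
        """
        rows = []
        for name in sorted(os.listdir(directory)):
            path = os.path.join(directory, name)
            kind = "directory" if os.path.isdir(path) else "file"
            size = os.path.getsize(path) if os.path.isfile(path) else ""
            rows.append(
                f"<li><a href='{html.escape(name)}'>{html.escape(name)}</a> ({kind} {size})</li>"
            )
        page = (
            f"<html><head><title>{html.escape(title)}</title></head><body>"
            f"<h1>{html.escape(title)}</h1><p>{html.escape(abstract)}</p>"
            f"<ul>{''.join(rows)}</ul></body></html>"
        )
        with open(os.path.join(directory, "index.html"), "w") as f:
            f.write(page)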
From Petascale to Exascale: Eight Focus Areas of R&D Challenges for HPC Simulation Environments
Programming models bridge the gap between the underlying hardware architecture and the supporting layers of software available to applications. Programming models are different from both programming languages and application programming interfaces (APIs). Specifically, a programming model is an abstraction of the underlying computer system that allows for the expression of both algorithms and data structures. In comparison, languages and APIs provide implementations of these abstractions and allow the algorithms and data structures to be put into practice; a programming model exists independently of the choice of both the programming language and the supporting APIs. Programming models are typically focused on achieving increased developer productivity, performance, and portability to other system designs. The rapidly changing nature of processor architectures and the complexity of designing an exascale platform pose significant challenges for these goals. Several other factors are likely to impact the design of future programming models. In particular, the representation and management of increasing levels of parallelism, concurrency, and memory hierarchies, combined with the ability to maintain a progressive level of interoperability with today's applications, are of significant concern. Overall, the design of a programming model is inherently tied not only to the underlying hardware architecture but also to the requirements of applications and libraries, including data analysis, visualization, and uncertainty quantification. Furthermore, the successful implementation of a programming model depends on exposed features of the runtime software layers and features of the operating system. Successful use of a programming model also requires effective presentation to the software developer within the context of traditional and new software development tools. Consideration must also be given to the impact of programming models on both languages and the associated compiler infrastructure. Exascale programming models must reflect several, often competing, design goals. These goals include desirable features such as abstraction and separation of concerns, but some aspects are unique to large-scale computing: for example, interoperability and composability with existing implementations will prove critical, and performance is the essential underlying goal for large-scale systems. A key evaluation metric for exascale models will be the extent to which they support these goals rather than merely enable them.
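As a toy illustration of the model-versus-API distinction drawn above (not taken from the report): the data-parallel "map" abstraction below is one programming model, while Python's multiprocessing and concurrent.futures modules are two interchangeable API realizations of it; the example and its function names are purely illustrative.

    from concurrent.futures import ProcessPoolExecutor
    from multiprocessing import Pool

    def kinetic_energy(v):
        # Per-element work, expressed once and independent of how it is parallelized.
        return 0.5 * v * v

    if __name__ == "__main__":
        velocities = [float(i) for i in range(1, 1001)]

        # Realization 1: the multiprocessing Pool API.
        with Pool() as pool:
            energies_a = pool.map(kinetic_energy, velocities)

        # Realization 2: the concurrent.futures API.
        with ProcessPoolExecutor() as executor:
            energies_b = list(executor.map(kinetic_energy, velocities))

        # Same data-parallel model, same result, two different API choices.
        assert energies_a == energies_b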
Viewing Visual Analytics as Model Building
To complement the currently existing definitions and conceptual frameworks of visual analytics, which focus mainly on the activities performed by analysts and the types of techniques they use, we attempt to define the expected results of these activities. We argue that the main goal of doing visual analytics is to build a mental and/or formal model of a certain piece of reality reflected in data. The purpose of the model may be to understand, to forecast, or to control this piece of reality. Based on this model-building perspective, we propose a detailed conceptual framework in which the visual analytics process is considered as a goal-oriented workflow producing a model as a result. We demonstrate how this framework can be used for performing an analytical survey of the visual analytics research field and identifying the directions and areas where further research is needed.
No. 498 Bob Springmeyer
Transcript (29 and 15 pages) of two interviews by Erik Solberg with Bob Springmeyer on June 14 and 28, 2007. Springmeyer (b. 1943) was raised in Salt Lake City, Utah, where he was introduced to the outdoors through family fishing and camping trips. He was also involved in scouting programs through the L.D.S. Church. He describes his education in Salt Lake City, an illegal fraternity in high school, the Alpenbock Club, climbing friends, experiences with improvements in technology, and clean climbing (recovery of equipment on the way down). He also shares his favorite climbs, both in the Jackson, Wyoming/Grand Tetons area and in the Salt Lake City canyons. He also talks about attempting the Gannett climb (also in Wyoming) but backing off when he felt his safety was threatened by weather and equipment issues. Springmeyer concludes with a discussion of fellow climbers from the Alpenbock Club and making 'clean' climbs easier. The interview is part of the Outdoor Recreation Oral History Project. Interviewer: Erik Solberg.