210 research outputs found

    On the Super-computational Background of the Research Centre Jülich

    KFA Jülich is one of the largest big-science research centres in Europe; its scientific and engineering activities range from fundamental research to applied science and technology. KFA's Central Institute for Applied Mathematics (ZAM) runs the large-scale computing facilities and network systems at KFA and provides communication services as well as general-purpose and supercomputer capacity, also for the HLRZ ("Höchstleistungsrechenzentrum"), established in 1987 to further enhance and promote computational science in Germany. Thus, at KFA, and in particular through the efforts of ZAM, supercomputing has received high priority for more than ten years. What particle accelerators mean to experimental physics, supercomputers mean to computational science and engineering: supercomputers are the accelerators of theory.

    Workshop proceedings: Information Systems for Space Astrophysics in the 21st Century, volume 1

    The Astrophysical Information Systems Workshop was one of three Integrated Technology Planning workshops. Its objectives were to develop an understanding of future mission requirements for information systems, the potential role of technology in meeting these requirements, and the areas in which NASA investment might have the greatest impact. Workshop participants were briefed on the astrophysical mission set, with an emphasis on those missions that drive information systems technology, the existing NASA space-science operations infrastructure, and the ongoing and planned NASA information systems technology programs. Program plans and recommendations were prepared in five technical areas: Mission Planning and Operations; Space-Borne Data Processing; Space-to-Earth Communications; Science Data Systems; and Data Analysis, Integration, and Visualization.

    Mizzou engineer, volume 12, number 1


    On the engineering of crucial software

    The various aspects of the conventional software development cycle are examined. This cycle was the basis of the augmented approach contained in the original grant proposal, but it was found inadequate for crucial software development; the justification for this conclusion is presented. Several possible enhancements to the conventional software cycle are discussed. Software fault tolerance, a possible enhancement of major importance, is discussed separately, as is formal verification using mathematical proof. Automatic programming, a radical alternative to the conventional cycle, is also discussed. Recommendations for a comprehensive approach are presented, and various experiments that could be conducted in AIRLAB are described.

    Army-NASA aircrew/aircraft integration program (A3I) software detailed design document, phase 3

    The capabilities and design approach of the MIDAS (Man-machine Integration Design and Analysis System) computer-aided engineering (CAE) workstation under development by the Army-NASA Aircrew/Aircraft Integration Program are detailed. This workstation uses graphic, symbolic, and numeric prototyping tools and human performance models as part of an integrated design/analysis environment for crewstation human engineering. The workstation is being developed incrementally; the requirements and design for Phase 3 (Dec. 1987 to Jun. 1989) are described here. Software tools and models developed or significantly modified during this phase included: an interactive 3-D graphic cockpit design editor; multiple-perspective graphic views for observing simulation scenarios; symbolic methods to model the mission decomposition, equipment functions, and pilot tasking and loading, as well as to control the simulation; a 3-D dynamic anthropometric model; an intermachine communications package; and a training assessment component. These components were successfully used during Phase 3 to demonstrate the complex interactions and human engineering findings involved with a proposed cockpit communications design change in a simulated AH-64A Apache helicopter and mission that maps to empirical data from a similar study and an AH-1 Cobra flight test.

    ACUTA Journal of Telecommunications in Higher Education

    In This Issue: Strategic Planning in the College and University Ecosystem; Outlook 2012: Chickens or Eggs?; IT Trends on Campus: 2012; Best Practices in Deploying a Successful University SAN; Beyond Convergence: How Advanced Networking Will Erase Campus Boundaries; Distributed Computing: The Path to the Power?; Cell Phones on the University Campus: Adversary or Ally?; Institutional Excellence Award Honorable Mention: Wake Forest University; Interview; President's Message; From the Executive Director; Here's My Advice.

    NASA Tech Briefs, January 1999

    Topics include: special coverage sections on sensors and data acquisition, as well as sections on electronic components and circuits, software, materials, mechanics, bio-medical and physical sciences, books and reports, and a special section of Photonics Tech Briefs.

    Critical Team Composition Issues for Long-Distance and Long-Duration Space Exploration: A Literature Review, an Operational Assessment, and Recommendations for Practice and Research

    Prevailing team effectiveness models suggest that teams are best positioned for success when certain enabling conditions are in place (Hackman, 1987; Hackman, 2012; Mathieu, Maynard, Rapp, & Gilson, 2008; Wageman, Hackman, & Lehman, 2005). Team composition, or the configuration of member attributes, is an enabling structure key to fostering competent teamwork (Hackman, 2002; Wageman et al., 2005). A vast body of research supports the importance of team composition in team design (Bell, 2007). For example, team composition is empirically linked to outcomes such as cooperation (Eby & Dobbins, 1997), social integration (Harrison, Price, Gavin, & Florey, 2002), shared cognition (Fisher, Bell, Dierdorff, & Belohlav, 2012), information sharing (Randall, Resick, & DeChurch, 2011), adaptability (LePine, 2005), and team performance (e.g., Bell, 2007). As such, NASA has identified team composition as a potentially powerful means for mitigating the risk of performance decrements due to inadequate crew cooperation, coordination, communication, and psychosocial adaptation in future space exploration missions. Much of what is known about effective team composition is drawn from research conducted in conventional workplaces (e.g., corporate offices, production plants). Quantitative reviews of the team composition literature (e.g., Bell, 2007; Bell, Villado, Lukasik, Belau, & Briggs, 2011) are based primarily on traditional teams. Less is known about how composition affects teams operating in extreme environments such as those that will be experienced by crews of future space exploration missions. For example, long-distance and long-duration space exploration (LDSE) crews are expected to live and work in isolated and confined environments (ICEs) for up to 30 months. Crews will also experience communication time delays from mission control, which will require crews to work more autonomously (see Appendix A for more detailed information regarding the LDSE context). 
    Given the unique context within which LDSE crews will operate, NASA identified both a gap in knowledge related to the effective composition of autonomous LDSE crews and the need to identify psychological and psychosocial factors, measures, and combinations thereof that can be used to compose highly effective crews (Team Gap 8). As an initial step to address Team Gap 8, we conducted a focused literature review and operational assessment related to team composition issues for LDSE. The objectives of our research were to: (1) identify critical team composition issues and their effects on team functioning in LDSE-analogous environments, with a focus on key composition factors that will most likely have the strongest influence on team performance and well-being, and (2) identify and evaluate methods used to compose teams, with a focus on methods used in analogous environments. The remainder of the report includes the following components: (a) literature review methodology, (b) review of team composition theory and research, (c) methods for composing teams, (d) operational assessment results, and (e) recommendations.

    Big Data Management Using Scientific Workflows

    Humanity is rapidly approaching a new era in which every sphere of activity will be informed by an ever-increasing amount of data. Making use of big data has the potential to improve numerous avenues of human activity, including scientific research, healthcare, energy, education, transportation, environmental science, and urban planning, to name a few. However, making such progress requires managing terabytes and even petabytes of data, generated by billions of devices, products, and events, often in real time, in different protocols, formats, and types. The volume, velocity, and variety of big data, known as the "3 Vs," present formidable challenges unmet by traditional data management approaches. Traditionally, many data analyses have been performed using scientific workflows, tools for formalizing and structuring complex computational processes. While scientific workflows have been used extensively in structuring complex scientific data analysis processes, little work has been done to enable scientific workflows to cope with the three big data challenges on the one hand, and to leverage the dynamic resource provisioning capability of cloud computing to analyze big data on the other. In this dissertation, to facilitate efficient composition, verification, and execution of distributed large-scale scientific workflows, we first propose a formal approach to scientific workflow verification, including a workflow model and the notion of a well-typed workflow. Our approach translates a scientific workflow into an equivalent typed lambda expression and typechecks the workflow. We then propose a type-theoretic approach to the shimming problem in scientific workflows, which arises when connecting related but incompatible components. We reduce the shimming problem to a runtime coercion problem in the theory of type systems and propose a fully automated and transparent solution: our technique algorithmically inserts invisible shims into the workflow specification, thereby resolving the shimming problem for any well-typed workflow. Next, we identify a set of important challenges for running big data workflows in the cloud and propose a generic, implementation-independent system architecture that addresses many of them. Finally, we develop a cloud-enabled big data workflow management system, called DATAVIEW, that delivers a specific implementation of our proposed architecture. To further validate the architecture, we conduct a case study in which we design and run a big data workflow from the automotive domain in the Amazon EC2 cloud environment.
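    The shim-insertion idea described in this abstract can be illustrated with a minimal sketch: a toy typechecker that walks a linear pipeline and splices in a coercion task wherever adjacent tasks have related but incompatible types. This is not the dissertation's actual DATAVIEW implementation; the task names, type names, and coercion table below are invented for illustration.

```python
# Toy sketch of type-directed shim insertion in a linear workflow.
# All names here are hypothetical, not from DATAVIEW.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    in_type: str
    out_type: str

# Hypothetical coercion table: (from_type, to_type) -> shim task that
# converts between related but incompatible types.
COERCIONS = {
    ("csv", "table"): Task("csv_to_table", "csv", "table"),
    ("int", "float"): Task("int_to_float", "int", "float"),
}

def typecheck(pipeline):
    """Return a well-typed pipeline, inserting shims where needed."""
    result = [pipeline[0]]
    for nxt in pipeline[1:]:
        cur = result[-1]
        if cur.out_type != nxt.in_type:
            shim = COERCIONS.get((cur.out_type, nxt.in_type))
            if shim is None:  # no coercion exists: reject the workflow
                raise TypeError(
                    f"{cur.name} -> {nxt.name}: "
                    f"{cur.out_type} incompatible with {nxt.in_type}")
            result.append(shim)  # "invisible" shim resolves the mismatch
        result.append(nxt)
    return result

wf = [Task("extract", "path", "csv"), Task("analyze", "table", "stats")]
print([t.name for t in typecheck(wf)])
# -> ['extract', 'csv_to_table', 'analyze']
```

    The point of the sketch is that the user writes only `extract` and `analyze`; the checker either proves the composition well-typed (possibly by inserting a shim) or rejects it with a type error, mirroring the reduction of shimming to runtime coercion described above.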

    Structured editing of literate programs
