
    Identifying and addressing adaptability and information system requirements for tactical management


    Workshop proceedings: Information Systems for Space Astrophysics in the 21st Century, volume 1

    The Astrophysical Information Systems Workshop was one of the three Integrated Technology Planning workshops. Its objectives were to develop an understanding of future mission requirements for information systems, the potential role of technology in meeting these requirements, and the areas in which NASA investment might have the greatest impact. Workshop participants were briefed on the astrophysical mission set, with an emphasis on those missions that drive information systems technology, on the existing NASA space-science operations infrastructure, and on the ongoing and planned NASA information systems technology programs. Program plans and recommendations were prepared in five technical areas: Mission Planning and Operations; Space-Borne Data Processing; Space-to-Earth Communications; Science Data Systems; and Data Analysis, Integration, and Visualization.

    Software (re)modularization: Fight against the structure erosion and migration preparation

    Software systems, and in particular object-oriented systems, are models of the real world that manipulate representations of its entities through models of its processes. The real world is not static: new laws are created, competitors offer new functionalities, users have renewed expectations of what a computer should offer them, memory constraints are added, etc. As a result, software systems must be continuously updated or face the risk of becoming gradually outdated and irrelevant [34]. In the meantime, details and multiple abstraction levels result in a high level of complexity, and completely analyzing real software systems is impractical. For example, the Windows operating system consists of more than 60 million lines of code (500,000 pages printed double-sided, about 16 times the Encyclopedia Universalis). Maintaining such large applications is a trade-off between having to change a model that nobody can understand in detail and limiting the impact of possible changes. Beyond maintenance, a good structure gives software systems good qualities for migration toward modern paradigms such as web services or components, and the problem of architecture extraction is very close to the classical remodularization problem.
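
    The abstract does not spell out an algorithm, but one classical way to cast remodularization is as clustering of the class dependency graph: group classes so that dependencies fall mostly inside modules rather than across them. The Python sketch below is purely illustrative; the class names, the edge list, and the merge penalty alpha are invented for this example and do not come from the paper.

        # Toy remodularization as dependency-graph clustering (illustrative only).
        # Hypothetical class-level dependency edges: (client, provider).
        deps = [("OrderUI", "OrderService"), ("OrderService", "OrderRepo"),
                ("ReportUI", "ReportService"), ("ReportService", "OrderRepo")]

        def internal_edges(module, edges):
            # Dependencies that start and end inside the same candidate module.
            return sum(1 for a, b in edges if a in module and b in module)

        def greedy_remodularize(edges, alpha=0.4):
            # Start with one singleton module per class, then repeatedly merge
            # the pair of modules with the best trade-off between internal
            # edges gained and a size penalty (alpha is a tunable assumption).
            modules = [{c} for c in {c for e in edges for c in e}]
            while True:
                best_gain, best_pair = 0.0, None
                for i in range(len(modules)):
                    for j in range(i + 1, len(modules)):
                        union = modules[i] | modules[j]
                        gain = (internal_edges(union, edges)
                                - internal_edges(modules[i], edges)
                                - internal_edges(modules[j], edges)
                                - alpha * len(modules[i]) * len(modules[j]))
                        if gain > best_gain:
                            best_gain, best_pair = gain, (i, j)
                if best_pair is None:
                    return modules
                i, j = best_pair
                modules[i] |= modules.pop(j)

        print(greedy_remodularize(deps))
        # e.g. [{'OrderUI', 'OrderService', 'OrderRepo'}, {'ReportUI', 'ReportService'}]

    Without the size penalty, a pure internal-edge objective would greedily merge everything into one module; the penalty makes the search stop once a merge buys fewer internal dependencies than it costs in module size.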

    The Structured Process Modeling Theory (SPMT): a cognitive view on why and how modelers benefit from structuring the process of process modeling

    After observing various inexperienced modelers constructing a business process model based on the same textual case description, it was noted that great differences existed in the quality of the produced models. The impression arose that certain quality issues originated from cognitive failures during the modeling process. Therefore, we developed an explanatory theory that describes the cognitive mechanisms that affect the effectiveness and efficiency of process model construction: the Structured Process Modeling Theory (SPMT). This theory states that modeling accuracy and speed are higher when the modeler adopts an (i) individually fitting, (ii) structured, and (iii) serialized process modeling approach. The SPMT is evaluated against six theory quality criteria.

    Interaction-aware development environments: recording, mining, and leveraging IDE interactions to analyze and support the development flow

    Nowadays, software development is largely carried out using Integrated Development Environments, or IDEs. An IDE is a collection of tools and facilities to support the most diverse software engineering activities, such as writing code, debugging, and program understanding. The fact that they are integrated enables developers to find all the tools needed for development in the same place. Each activity is composed of many basic events, such as clicking on a menu item in the IDE, opening a new user interface to browse the source code of a method, or adding a new statement in the body of a method. While working, developers generate thousands of these interactions, which we call fine-grained IDE interaction data. We believe this data is a valuable source of information that can be leveraged to enable better analyses and to offer novel support to developers. However, this data is largely neglected by modern IDEs. In this dissertation we propose the concept of "Interaction-Aware Development Environments": IDEs that collect, mine, and leverage the interactions of developers to support and simplify their workflow. We formulate our thesis as follows: Interaction-Aware Development Environments enable novel and in-depth analyses of the behavior of software developers and set the ground to provide developers with effective and actionable support for their activities inside the IDE. For example, by monitoring how developers navigate source code, the IDE could suggest the program entities that are potentially relevant for a particular task. Our research focuses on three main directions:

    1. Modeling and Persisting Interaction Data. The first step to make IDEs aware of interaction data is to overcome its ephemeral nature. To do so, we have to model this new source of data and persist it, making it available for further use.

    2. Interpreting Interaction Data. One of the biggest challenges of our research is making sense of the millions of interactions generated by developers. We propose several models to interpret this data, for example by reconstructing high-level development activities from interaction histories or by measuring the navigation efficiency of developers.

    3. Supporting Developers with Interaction Data. Novel IDEs can use the potential of interaction data to support software development. For example, they can identify the UI components that are potentially unnecessary in the future and suggest that developers close them, reducing the visual clutter of the IDE.
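
    As a concrete illustration of the first two directions, the Python sketch below models a single interaction event, persists events as JSON lines so they survive the IDE session, and computes one toy interpretation metric. The event schema, the event kinds, and the navigation-ratio measure are hypothetical simplifications invented for this example, not the dissertation's actual model.

        import json
        import time
        from dataclasses import dataclass, asdict

        @dataclass
        class InteractionEvent:
            # One fine-grained IDE interaction (hypothetical schema).
            timestamp: float  # seconds since the epoch
            kind: str         # e.g. "open-editor", "menu-click", "edit-statement"
            target: str       # the UI element or program entity involved

        class InteractionRecorder:
            # Appends events to a JSON-lines log so interaction data is persisted
            # rather than lost when the IDE session ends.
            def __init__(self, path):
                self.path = path

            def record(self, kind, target):
                event = InteractionEvent(time.time(), kind, target)
                with open(self.path, "a") as log:
                    log.write(json.dumps(asdict(event)) + "\n")

        def navigation_ratio(path):
            # Toy interpretation step: the fraction of recorded events that are
            # navigation rather than edits -- one conceivable proxy for how much
            # of a session is spent looking for code instead of changing it.
            with open(path) as log:
                events = [json.loads(line) for line in log]
            nav = sum(1 for e in events if e["kind"].startswith("open-"))
            return nav / len(events) if events else 0.0

        # Simulate a short session and inspect it.
        recorder = InteractionRecorder("session.jsonl")
        recorder.record("open-editor", "Parser.parseExpression")
        recorder.record("edit-statement", "Parser.parseExpression")
        print(f"navigation ratio: {navigation_ratio('session.jsonl'):.2f}")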

    National Center for Biomedical Ontology: Advancing biomedicine through structured organization of scientific knowledge

    The National Center for Biomedical Ontology is a consortium that comprises leading informaticians, biologists, clinicians, and ontologists, funded by the National Institutes of Health (NIH) Roadmap, to develop innovative technology and methods that allow scientists to record, manage, and disseminate biomedical information and knowledge in machine-processable form. The goals of the Center are (1) to help unify the divergent and isolated efforts in ontology development by promoting high-quality, open-source, standards-based tools to create, manage, and use ontologies, (2) to create new software tools so that scientists can use ontologies to annotate and analyze biomedical data, (3) to provide a national resource for the ongoing evaluation, integration, and evolution of biomedical ontologies and associated tools and theories in the context of driving biomedical projects (DBPs), and (4) to disseminate the tools and resources of the Center and to identify, evaluate, and communicate best practices of ontology development to the biomedical community. Through the research activities within the Center, collaborations with the DBPs, and interactions with the biomedical community, our goal is to help scientists work more effectively in the e-science paradigm, enhancing experiment design, experiment execution, data analysis, information synthesis, hypothesis generation and testing, and the understanding of human disease.

    CAiSE Radar 2016

    The CAiSE Radar is an experimental format, established for CAiSE 2016, to make CAiSE workshops livelier and more exciting, to stimulate discussion, and to attract additional active participants by establishing an environment where not only well-established and validated research is reported, but also research in its infancy, new ideas, and potentially interesting research projects can be presented and discussed. Similarly to a radar, the idea is to enable researchers to look into the future of the field and identify upcoming trends early. The aim of this effort is, on the one hand, to contribute to the building of research communities and promote the integration of young researchers into the community, and, on the other hand, to provide opportunities to discuss ideas early and to receive additional opinions on planned research.