14,339 research outputs found

    WESTT (Workload, Error, Situational Awareness, Time and Teamwork): An analytical prototyping system for command and control

    Modern developments in the use of information technology within command and control allow unprecedented scope for flexibility in the way teams deal with tasks. These developments, together with the increased recognition of the importance of knowledge management within teams, present difficulties for the analyst in evaluating the impacts of changes to task composition or team membership. In this paper, an approach to this problem is presented that represents team behaviour in terms of three linked networks (representing task, social network structure and knowledge) within the integrative WESTT software tool. In addition, by automating analyses of workload and error based on the same data that generate the networks, WESTT allows the user to engage in rapid and iterative “analytical prototyping”. For purposes of illustration, an example of the use of this technique on a simple tactical vignette is presented.
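    To make the three-network idea concrete, here is a minimal sketch of task, social and knowledge networks linked over shared data, assuming the networkx library; the node names and the degree-based workload proxy are illustrative inventions, not WESTT's actual schema or metrics.

```python
# Minimal sketch of WESTT-style linked networks (illustrative names, not WESTT's schema).
import networkx as nx

task = nx.DiGraph()                     # task network: ordering of subtasks
task.add_edges_from([("detect", "classify"), ("classify", "engage")])

social = nx.Graph()                     # social network: who communicates with whom
social.add_edges_from([("observer", "commander"), ("commander", "gunner")])

knowledge = nx.Graph()                  # knowledge network: related information items
knowledge.add_edge("target position", "target type")

# Cross-network links tie agents to the tasks they perform and the knowledge
# those tasks require, so all three views are generated from one data set.
links = nx.Graph()
links.add_edges_from([
    ("observer", "detect"), ("commander", "classify"), ("gunner", "engage"),
    ("detect", "target position"), ("classify", "target type"),
])

# A crude workload proxy computed from the same data: link count per agent.
workload = {agent: links.degree(agent) for agent in social.nodes}
print(workload)
```

    Because the workload figures derive from the same links that define the networks, changing task composition or team membership immediately updates both the network views and the analysis, which is the essence of the analytical-prototyping loop described above.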

    Parametric t-Distributed Stochastic Exemplar-centered Embedding

    Parametric embedding methods such as parametric t-SNE (pt-SNE) have been widely adopted for data visualization and out-of-sample data embedding without further computationally expensive optimization or approximation. However, the performance of pt-SNE is highly sensitive to the batch-size hyper-parameter due to conflicting optimization goals, and it often produces dramatically different embeddings for different choices of user-defined perplexity. To effectively solve these issues, we present parametric t-distributed stochastic exemplar-centered embedding methods. Our strategy learns embedding parameters by comparing given data only with precomputed exemplars, resulting in a cost function with linear computational and memory complexity, which is further reduced by noise-contrastive samples. Moreover, we propose a shallow embedding network with high-order feature interactions for data visualization, which is much easier to tune yet produces performance comparable to the deep neural network employed by pt-SNE. We empirically demonstrate, using several benchmark datasets, that our proposed methods significantly outperform pt-SNE in terms of robustness, visual effects, and quantitative evaluations.
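    As an illustration of the exemplar-centered idea (a sketch of the general approach, not the authors' implementation), the snippet below compares each point only with k precomputed exemplars, so the cost is linear in the number of points rather than quadratic; the fixed Gaussian bandwidth and the k-means exemplars are simplifying assumptions.

```python
# Sketch of an exemplar-centered t-distributed embedding objective:
# points are compared only with k exemplars, giving O(n*k) cost.
import numpy as np
from sklearn.cluster import KMeans

def exemplar_tsne_loss(X, Z, E_hi, E_lo, sigma=1.0):
    """KL-style loss between point-to-exemplar similarities in both spaces.

    X: (n, d) data; Z: (n, m) embeddings; E_hi: (k, d) exemplars;
    E_lo: (k, m) exemplar embeddings. All names here are illustrative.
    """
    # High-dimensional similarities: Gaussian kernel to exemplars, row-normalized.
    d_hi = ((X[:, None, :] - E_hi[None, :, :]) ** 2).sum(-1)
    P = np.exp(-d_hi / (2 * sigma ** 2))
    P /= P.sum(axis=1, keepdims=True)

    # Low-dimensional similarities: heavy-tailed Student-t kernel, as in t-SNE.
    d_lo = ((Z[:, None, :] - E_lo[None, :, :]) ** 2).sum(-1)
    Q = 1.0 / (1.0 + d_lo)
    Q /= Q.sum(axis=1, keepdims=True)

    return np.sum(P * np.log((P + 1e-12) / (Q + 1e-12)))  # KL(P || Q)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
E_hi = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X).cluster_centers_
# Z and E_lo would be produced by the embedding network; random placeholders here.
Z, E_lo = rng.normal(size=(500, 2)), rng.normal(size=(30, 2))
print(exemplar_tsne_loss(X, Z, E_hi, E_lo))
```

    In a full method the loss would be minimized with respect to the network that produces Z and E_lo; the point of the sketch is only the shape of the cost, which never forms the n-by-n pairwise similarity matrix.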

    A Conceptual UX-aware Model of Requirements

    User eXperience (UX) is becoming increasingly important for the success of software products. Yet many companies still face various challenges in their work with UX. Part of these challenges relates to inadequate knowledge and awareness of UX, and to the fact that current UX models are often neither practical nor well integrated into existing Software Engineering (SE) models and concepts. We therefore present a conceptual UX-aware model of requirements for software development practitioners. This layered model shows the interrelation between UX and functional and quality requirements, and is developed based on current models of UX and of software quality characteristics. Through the model we highlight the main differences between various requirement types, in particular between essentially subjective and accidentally subjective quality requirements. We also present the results of an initial validation of the model through interviews with 12 practitioners and researchers. Our results show that the model can raise practitioners' knowledge and awareness of UX, in particular in relation to requirements and testing activities. It can also facilitate UX-related communication among stakeholders with different backgrounds. (Comment: 6th International Working Conference on Human-Centred Software Engineering)
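    One way to picture the distinction the model draws is as a small taxonomy of requirement types; the sketch below is our reading of the abstract, with invented example requirements, not the paper's own notation.

```python
# Illustrative taxonomy of the requirement types the model distinguishes
# (our reading of the abstract, not the paper's notation or terminology).
from dataclasses import dataclass
from enum import Enum, auto

class Kind(Enum):
    FUNCTIONAL = auto()                       # what the system does
    QUALITY_ACCIDENTALLY_SUBJECTIVE = auto()  # objectively testable in principle
    QUALITY_ESSENTIALLY_SUBJECTIVE = auto()   # tied to user perception

@dataclass
class Requirement:
    text: str
    kind: Kind

reqs = [
    Requirement("The app shall export reports as PDF", Kind.FUNCTIONAL),
    Requirement("Search results appear within 200 ms", Kind.QUALITY_ACCIDENTALLY_SUBJECTIVE),
    Requirement("First-time users find onboarding enjoyable", Kind.QUALITY_ESSENTIALLY_SUBJECTIVE),
]

# The testing implication: essentially subjective requirements need user
# studies, not just automated checks.
for r in reqs:
    needs_user_study = r.kind is Kind.QUALITY_ESSENTIALLY_SUBJECTIVE
    print(f"{r.text!r}: user study needed = {needs_user_study}")
```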

    Hypercube matrix computation task

    A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, provide a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized, including both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).
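    As a rough illustration of the time-domain finite-difference family mentioned above (not JPL's code), the following minimal 1D FDTD loop advances Maxwell's equations on a staggered grid; the grid size, Courant number and source are arbitrary choices.

```python
# Minimal 1D FDTD (Yee-style) update for Maxwell's equations in vacuum --
# an illustration of the time-domain finite-difference family, not JPL's code.
import numpy as np

n, steps = 200, 400
Ez = np.zeros(n)        # electric field samples
Hy = np.zeros(n - 1)    # magnetic field, staggered half a cell
c = 0.5                 # Courant number (c*dt/dx); must be <= 1 for stability

for t in range(steps):
    Hy += c * np.diff(Ez)                        # update H from the curl of E
    Ez[1:-1] += c * np.diff(Hy)                  # update E from the curl of H
    Ez[n // 2] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source pulse

print("peak |Ez| after propagation:", np.abs(Ez).max())
```

    The appeal of this scheme for hypercube machines is that each cell updates from its immediate neighbours only, so the grid partitions naturally across nodes with communication limited to subdomain boundaries.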

    PuLSE-I: Deriving instances from a product line infrastructure

    Reusing assets during application engineering promises to improve the efficiency of systems development. However, in order to benefit from reusable assets, application engineering processes must specify when and how to use them during single-system development, and this in turn depends on what types of reusable assets have been created. Product line engineering approaches produce a reusable infrastructure for a set of products. In this paper, we present the application engineering process associated with the PuLSE product line software engineering method: PuLSE-I. PuLSE-I details how single systems can be built efficiently from the reusable product line infrastructure built during the other PuLSE activities.

    Hypercube matrix computation task

    The Hypercube Matrix Computation (Year 1986-1987) task investigated the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Two existing electromagnetic scattering codes were selected for conversion to the Mark III Hypercube concurrent computing environment. They were selected so that their underlying numerical algorithms would be different, thereby providing a more thorough evaluation of the appropriateness of the parallel environment for these types of problems. The first code was a frequency domain method of moments solution, NEC-2, developed at Lawrence Livermore National Laboratory. The second code was a time domain finite difference solution of Maxwell's equations for the scattered fields. Once the codes were implemented on the hypercube and verified to obtain correct solutions, by comparing the results with those from sequential runs, several measures were used to evaluate their performance. First, the problem size possible on the hypercube (128 megabytes of memory in a 32-node configuration) was compared with that available in a typical sequential user environment of 4 to 8 megabytes. Then, the performance of the codes was analyzed for the computational speedup attained by the parallel architecture.
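    The speedup measure used in evaluations of this kind is simply the ratio of sequential to parallel run time, with parallel efficiency dividing that ratio by the node count; the timings below are hypothetical placeholders, not figures from the report.

```python
# Standard speedup and parallel-efficiency metrics used in such evaluations.
# The timings are made-up placeholders, not JPL's measured results.
def speedup(t_seq: float, t_par: float) -> float:
    return t_seq / t_par

def efficiency(t_seq: float, t_par: float, nodes: int) -> float:
    return speedup(t_seq, t_par) / nodes

t_seq, t_par, nodes = 3600.0, 150.0, 32   # hypothetical seconds and node count
print(f"speedup = {speedup(t_seq, t_par):.1f}x, "
      f"efficiency = {efficiency(t_seq, t_par, nodes):.0%}")
```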

    What Permits Small Firms to Compete in High-Tech Industries? Inter-Organizational Knowledge Creation in the Taiwanese Computer Industry

    This paper addresses a puzzle related to firm size and competition. Since Stephen Hymer's pioneering contribution (Hymer, 1960/1976), theories of the firm have implicitly assumed that only large, diversified multinational enterprises can compete in industries that combine high capital intensity, high knowledge intensity and a high degree of internationalization. Small firms, by definition, have limited resources and capabilities and are unlikely to possess substantial ownership advantages. They also have a limited capacity to influence and shape the development of markets, market structure and technological change. One would thus expect them to be ill-equipped to compete in a knowledge-intensive industry that is highly globalized. Taiwan's experience in the computer industry tells a different story: despite the dominance of small- and medium-sized enterprises (SMEs), Taiwan successfully competes in the international market for PC-related products, key components and knowledge-intensive services. The paper inquires into how this was possible. It is argued that organizational innovations related to the creation of knowledge are of critical importance. Taiwanese computer firms were able to develop their own distinctive approach: owing to their initially very narrow knowledge base, access to external sources of knowledge has been an essential prerequisite for their knowledge creation. Such “inter-organizational knowledge creation” (Nonaka and Takeuchi, 1995) was facilitated by two factors: active, yet selective and continuously adjusted industrial development policies; and a variety of linkages with large Taiwanese business groups, foreign sales and manufacturing affiliates, and an early participation in international production networks established by foreign electronics companies. A novel contribution of this paper is its focus on inter-organizational knowledge creation. I first describe Taiwan's achievements in the computer industry. The dominance of SMEs and their role as a source of flexibility is documented in part II. Part III describes some policy innovations that have shaped the process of knowledge creation. The rest of the paper inquires into how inter-organizational knowledge creation has benefited from a variety of linkages with large domestic and foreign firms; I also address some industrial upgrading requirements that result from this peculiar type of knowledge creation.
    Keywords: knowledge creation; learning; small firms; networks; firm strategy; industrial policies

    The factory approach to software development : a strategic overview

    "October 1989."Includes bibliographical references (leaves 36-38).by Michael Cusumano