
    Multiple Choice Systems for Decision Support

    Humans are able to think, to feel, and to sense. We are also able to compute, but not very well. In contrast, computers are giants at computing, yet they cannot do anything besides compute. Appropriate combinations of the different gifts and strengths of humans and computers may result in impressive performances. In the 3-Hirn approach, one human and two computers are involved, with different programs running on the computers. The human starts the machines and inspects the solutions they propose. He compares these candidate solutions and finally decides on one of the alternatives; the human thus makes the final choice from a small number of computer proposals. In performance-oriented chess, 3-Hirn combinations consisting of an amateur player and commercial software have reached world-class level. 3-Hirn is a Decision Support System with a Multiple Choice Structure. Such Multiple Choice Systems will be exhibited and discussed.
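The 3-Hirn workflow described above can be sketched as a small program. This is a minimal illustration only: the engine and chooser functions are hypothetical stand-ins, not the actual chess software used in the described experiments.

```python
def three_hirn_choice(position, engines, human_select):
    """3-Hirn style decision: gather one proposal per engine,
    then let the human arbiter make the final choice among them."""
    proposals = [engine(position) for engine in engines]
    # Deduplicate while preserving order; identical proposals need no choice.
    unique = list(dict.fromkeys(proposals))
    if len(unique) == 1:
        return unique[0]
    return human_select(position, unique)

# Hypothetical engines with different "styles" (stand-ins, not real chess programs).
engine_a = lambda pos: "e4" if pos == "start" else "Nf3"
engine_b = lambda pos: "d4" if pos == "start" else "Nf3"

# The human arbiter is modeled as a callback; here it simply picks the first option.
pick_first = lambda pos, options: options[0]

print(three_hirn_choice("start", [engine_a, engine_b], pick_first))  # e4 (human chose)
print(three_hirn_choice("mid", [engine_a, engine_b], pick_first))    # Nf3 (engines agreed)
```

The key structural point is that the human never searches the game tree; the search is delegated to the engines, and human judgment is applied only to a small candidate set.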

    Spartan Daily, September 14, 1981

    Volume 77, Issue 9

    Toward an optimal foundation architecture for optoelectronic computing. Part II. Physical construction and application platforms

    Various issues pertaining to the physical construction of systems based on regularly interconnected device planes, such as heat removal and the extensibility of the optical interconnections to larger systems, are discussed. Regularly interconnected device planes constitute a foundation architecture that is reasonably close to the best possible as defined by physical limitations. Three application platforms based on the described foundation architecture are offered. © 1997 Optical Society of America.

    Creative Practice for Classical String Players with Live Looping

    In recent years, string pedagogy discussions have highlighted a greater need for creative practice among classical string players. Since the second half of the nineteenth century, string methods have shifted toward a limited scope of improvisatory techniques, paralleling the decline of improvisation in Western classical music performance practice. This thesis explores live looping as a practice tool to facilitate learning concepts and to help string players develop musicianship skills, including improvisation, participate in non-classical genres, and explore their creative voices. Examining the results of string educators who incorporate live looping into their own teaching reveals the tool’s effectiveness in bridging curricular standards with opportunities for creativity and open-ended experimentation. Ultimately, live looping can help string players learn a concept more deeply, employing scaffolding techniques to practice abstract models and thus relying less on any specific example, such as learning from sheet music. This encourages a broader musical foundation, enabling classical string players to feel more equipped in areas beyond their comfort zones and to participate in and enjoy a wider range of musically fulfilling experiences.

    Improvement of Decision on Coding Unit Split Mode and Intra-Picture Prediction by Machine Learning

    High Efficiency Video Coding (HEVC) has been established as the newest video coding standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The reference software (i.e., HM) includes implementations of the guidelines in compliance with the new standard, providing both encoder and decoder functionality. Machine learning (ML) works with data and processes it to discover patterns that can later be used to analyze new trends. ML can play a key role in a wide range of critical applications, such as data mining, natural language processing, image recognition, and expert systems. In this research project, in compliance with the H.265 standard, we focus on improving encode/decode performance by optimizing the partitioning of prediction blocks in coding units with the help of supervised machine learning. We used the Keras library as the main tool to implement the experiments. Key parameters were tuned for the model in our convolutional neural network. The coding tree unit mode decision time produced by the model was compared with that produced by the HM software and was shown to improve significantly. The intra-picture prediction mode decision was also investigated with a modified model and yielded satisfactory results.
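The decision the learned model replaces can be pictured as a recursive quadtree partition of a coding tree unit, driven by a pluggable split classifier. The sketch below is an abstraction, not the thesis implementation: the variance-based toy classifier merely stands in for the trained CNN, and the block sizes are illustrative.

```python
def cu_partition(block, size, x, y, classifier, min_size=8):
    """Recursively decide the split mode for a square coding unit.
    `classifier` maps a block region to True (split into four
    sub-units) or False (keep as one coding unit)."""
    if size <= min_size or not classifier(block, x, y, size):
        return [(x, y, size)]  # leaf coding unit
    half = size // 2
    units = []
    for dy in (0, half):
        for dx in (0, half):
            units += cu_partition(block, half, x + dx, y + dy, classifier, min_size)
    return units

# Toy classifier: split whenever the region's pixel range exceeds a threshold.
# A trained CNN would replace this heuristic with a learned decision.
def range_split(block, x, y, size, threshold=40):
    pixels = [block[y + j][x + i] for j in range(size) for i in range(size)]
    return max(pixels) - min(pixels) > threshold

# A 16x16 block: flat except for a bright 8x8 corner.
block = [[200 if (i < 8 and j < 8) else 50 for i in range(16)] for j in range(16)]
print(cu_partition(block, 16, 0, 0, range_split))  # four 8x8 leaves
```

Speeding up this recursion is where the learned model pays off: each classifier call replaces an exhaustive rate-distortion comparison of split versus no-split.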

    TME Volume 5, Numbers 2 and 3


    Serious Game Design Using MDA and Bloom’s Taxonomy

    The field of Serious Games (SG) studies the use of games as learning tools and has existed for over forty years. During this period the primary focus of the field has been designing systems to evaluate the educational efficacy of existing games. This translates to a lack of systems designed to aid in the creation of serious games, but this does not have to remain an issue. The rise in popularity of games means there is no shortage of ideas on how to methodically create them for commercial production, and these ideas can just as easily be applied to SG creation. However, showing a clear linkage between a game’s components and its learning objectives remains a primary difficulty. Created by Hunicke, LeBlanc, and Zubek, the Mechanics Dynamics Aesthetics (MDA) methodology is an understandable and robust construct for creating commercial games, using mechanics to produce an intended level of aesthetic appreciation in players. An educational Serious Game, however, must not only be fun; through the play experience it must also convey the intended learning objectives to its players. This thesis explores using the MDA methodology, together with Bloom’s taxonomy, to create and evaluate a game that meets two learning objectives for a cyber-focused class. The created game, CyComEx, was designed to teach cyber students to identify tradeoffs between security and mission execution and to explain how policies can impact cyber mission areas. The game was evaluated as having conveyed these objectives during a playthrough and as being sufficiently enjoyable to the students participating in this case study.

    Machine Learning Operations (MLOps) Architecture Considerations for Deep Learning with a Passive Acoustic Vector Sensor

    As machine-learning-augmented decision-making becomes more prevalent, defense applications of these techniques are needed to avoid being outpaced by peer adversaries. One area with significant potential is applying deep learning to classify passive sonar acoustic signatures, which would accelerate tactical, operational, and strategic decision-making in one of the most contested and difficult warfare domains. Convolutional neural networks have achieved some of the greatest success at this task; however, the lack of a full production pipeline to continually train, deploy, and evaluate acoustic deep learning models throughout their lifecycle in a realistic architecture is a barrier to further and more rapid success in this field of research. The two main contributions of this thesis are a proposed production architecture for model lifecycle management using Machine Learning Operations (MLOps) and an evaluation of that architecture on a live passive sonar stream. Using the proposed production architecture, this work evaluates model performance differences in a production setting and explores methods to improve model performance in production. By documenting considerations for creating a platform and architecture to continuously train, deploy, and evaluate various deep learning acoustic classification models, this study aims to create a framework and recommendations that accelerate progress in acoustic deep learning classification research.
    Los Alamos National Lab
    Lieutenant, United States Navy
    Approved for public release. Distribution is unlimited.
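The lifecycle stages such a pipeline manages can be sketched in miniature. All names, thresholds, and the toy models below are illustrative assumptions, not the components of the thesis's actual architecture.

```python
class ModelRegistry:
    """Minimal sketch of MLOps model lifecycle management:
    register trained model versions, then promote the best to production."""
    def __init__(self):
        self.versions = []      # (version, model_fn, validation_accuracy)
        self.production = None  # currently deployed (version, model_fn, accuracy)

    def register(self, model_fn, val_accuracy):
        version = len(self.versions) + 1
        self.versions.append((version, model_fn, val_accuracy))
        return version

    def promote_best(self):
        self.production = max(self.versions, key=lambda v: v[2])
        return self.production[0]

def monitor(model_fn, stream, retrain, accuracy_floor=0.8):
    """Evaluate the deployed model on a labeled window of the stream;
    trigger retraining when accuracy drops below the floor."""
    correct = sum(model_fn(x) == y for x, y in stream)
    if correct / len(stream) < accuracy_floor:
        return retrain()
    return None

registry = ModelRegistry()
registry.register(lambda x: "ship", 0.72)  # v1: trivially predicts one class
registry.register(lambda x: x[0], 0.91)    # v2: uses the input features
print(registry.promote_best())             # 2

# A labeled evaluation window from the live stream (toy data).
stream = [(("whale",), "whale"), (("ship",), "ship")]
_, model, _ = registry.production
print(monitor(model, stream, retrain=lambda: "retrain-job-queued"))  # None
```

The point of the sketch is the closed loop: evaluation on live data feeds back into training, rather than models being deployed once and left to drift.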

    Low latency fast data computation scheme for MapReduce based clusters

    MapReduce-based clusters are an emerging paradigm for big data analytics, scaling up and speeding up the classification, investigation, and processing of huge, massive, and complex data sets. One fundamental issue in processing data on MapReduce clusters is dealing with resource heterogeneity, especially when there is data interdependency among the tasks. Second, MapReduce runs a job in many phases; intermediate data traffic and its migration time become a major bottleneck for jobs that produce large intermediate data in the shuffle phase. Further, accounting for the factors behind straggling is necessary because straggling introduces unnecessary delays and poses a serious constraint on the overall performance of the system. Thus, this research aims to provide a low-latency fast data computation scheme comprising three algorithms: handling interdependent task computation among heterogeneous resources, reducing intermediate data traffic and its migration time, and monitoring and modelling job-straggling factors. This research developed a Low Latency and Computational Cost based Task Scheduling (LLCC-TS) algorithm for interdependent tasks on heterogeneous resources that accounts for priority to provide cost-effective resource utilization and reduced makespan. Furthermore, an Aggregation and Partition based Accelerated Intermediate Data Migration (AP-AIDM) algorithm is presented to reduce intermediate data traffic and data migration time in the shuffle phase using aggregators and a custom partitioner. Moreover, a MapReduce Total Execution Time Prediction (MTETP) scheme for MapReduce job computation, incorporating the factors that affect job computation time, was produced using a machine learning technique (linear regression) to monitor job straggling and minimize latency.
The LLCC-TS algorithm achieves 66.13%, 22.23%, 43.53%, and 44.74% improvements in makespan for scheduling interdependent tasks over the FIFO, improved max-min, SJF, and MOS algorithms, respectively. The AP-AIDM algorithm scored 66.62% and 48.4% improvements in reducing data migration time over the hash-basic and conventional aggregation algorithms, respectively. Moreover, the MTETP technique predicts total job execution time with 20.42% better accuracy than the improved HP technique. Thus, the combination of these three algorithms provides a low-latency fast data computation scheme for MapReduce-based clusters.
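The core problem the scheduling algorithm addresses can be illustrated with a greatly simplified greedy list scheduler: interdependent tasks are processed in dependency order, and each task is assigned to whichever heterogeneous worker can finish it earliest. This is a generic stand-in for the idea, not the LLCC-TS algorithm itself; task costs and worker speeds are invented for the example.

```python
from collections import deque

def topo_order(tasks, deps):
    """Order tasks so every task appears after its dependencies."""
    indeg = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for t, parents in deps.items():
        for p in parents:
            indeg[t] += 1
            children[p].append(t)
    queue = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return order

def makespan(costs, deps, speeds):
    """Greedy list scheduling of interdependent tasks on heterogeneous
    workers: each task runs on the worker that can finish it earliest."""
    finish, free_at = {}, [0.0] * len(speeds)
    for t in topo_order(list(costs), deps):
        # A task becomes ready only once all of its dependencies finish.
        ready = max((finish[d] for d in deps.get(t, [])), default=0.0)
        best = min(range(len(speeds)),
                   key=lambda w: max(free_at[w], ready) + costs[t] / speeds[w])
        start = max(free_at[best], ready)
        finish[t] = start + costs[t] / speeds[best]
        free_at[best] = finish[t]
    return max(finish.values())

# Diamond DAG: A feeds B and C, which both feed D; two workers (fast, slow).
costs = {"A": 4, "B": 2, "C": 2, "D": 4}
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(makespan(costs, deps, speeds=[2.0, 1.0]))  # 6.0
```

Even this toy version shows why heterogeneity matters: blindly balancing load across workers of unequal speed can lengthen the critical path, whereas choosing per-task earliest finish keeps the makespan down.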