
    Vcluster: A Portable Virtual Computing Library For Cluster Computing

    Message passing has been the dominant parallel programming model in cluster computing, and libraries like the Message Passing Interface (MPI) and Parallel Virtual Machine (PVM) have proven their utility and efficiency through numerous applications in diverse areas. However, as clusters of Symmetric Multi-Processor (SMP) and heterogeneous machines become popular, conventional message passing models must be adapted to support this new kind of cluster efficiently. In addition, the Java programming language, with its object-oriented architecture, platform-independent bytecode, and native support for multithreading, is an attractive alternative for cluster computing. This research presents a new parallel programming model and a library called VCluster that implements this model on top of a Java Virtual Machine (JVM). The programming model is based on virtual migrating threads to support clusters of heterogeneous SMP machines efficiently. VCluster is implemented in 100% Java, exploiting the portability of Java to address the problems posed by heterogeneous machines. VCluster virtualizes computational and communication resources such as threads, computation states, and communication channels across multiple separate JVMs, which makes mobile threads possible. Equipped with virtual migrating threads, it becomes feasible to balance the load on computing resources dynamically. Several large-scale parallel applications have been developed using VCluster to compare its performance and usability with other libraries. The results of the experiments show that VCluster makes it easier to develop multithreaded parallel applications than conventional libraries like MPI. At the same time, the performance of VCluster is comparable to MPICH, a widely used MPI implementation, combined with popular threading libraries such as POSIX Threads and OpenMP.
In the next phase of our work, we implemented thread groups and thread migration to demonstrate the feasibility of dynamic load balancing in VCluster. We carried out experiments showing that load can be rebalanced dynamically in VCluster, resulting in better performance. Thread groups also make it possible to implement collective communication operations between threads, which have proven useful in process-based libraries.
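The migrating-thread idea above can be illustrated with a minimal sketch: a virtual thread carries its computation state explicitly, so the runtime can serialize that state mid-computation and resume it elsewhere. This is an illustrative Python analogy, not the VCluster API; class and method names are invented for the example.

```python
import pickle

class VirtualThread:
    """A unit of computation whose state can be captured and resumed,
    mimicking the virtual-migrating-thread idea (illustrative names,
    not the VCluster API)."""
    def __init__(self, total_steps):
        self.step = 0
        self.total_steps = total_steps
        self.partial_sum = 0

    def run(self, max_steps):
        # Run at most max_steps iterations, then yield control so the
        # runtime could migrate this thread to a less-loaded node.
        limit = min(self.step + max_steps, self.total_steps)
        while self.step < limit:
            self.partial_sum += self.step
            self.step += 1
        return self.step == self.total_steps  # True when finished

def migrate(thread):
    # "Migration": serialize the computation state and rebuild it, as
    # would happen when shipping the state to another node's JVM.
    return pickle.loads(pickle.dumps(thread))

t = VirtualThread(total_steps=10)
t.run(max_steps=4)            # do some work on node A
t2 = migrate(t)               # move the captured state to node B
done = t2.run(max_steps=100)  # resume exactly where it left off
print(t2.partial_sum, done)   # → 45 True
```

Because the state travels with the thread, a load balancer can move work between nodes at any yield point, which is what makes the dynamic load balancing described above possible.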

    Idaho National Laboratory Cultural Resource Management Plan

    As a federal agency, the U.S. Department of Energy has been directed by Congress, the U.S. president, and the American public to provide leadership in the preservation of prehistoric, historic, and other cultural resources on the lands it administers. This mandate to preserve cultural resources in a spirit of stewardship for the future is outlined in various federal preservation laws, regulations, and guidelines such as the National Historic Preservation Act, the Archaeological Resources Protection Act, and the National Environmental Policy Act. The purpose of this Cultural Resource Management Plan is to describe how the Department of Energy, Idaho Operations Office will meet these responsibilities at the Idaho National Laboratory. This Laboratory, which is located in southeastern Idaho, is home to a wide variety of important cultural resources representing at least 13,500 years of human occupation in the southeastern Idaho area. These resources are nonrenewable; bear valuable physical and intangible legacies; and yield important information about the past, present, and perhaps the future. There are special challenges associated with balancing the preservation of these sites with the management and ongoing operation of an active scientific laboratory. The Department of Energy, Idaho Operations Office is committed to a cultural resource management program that accepts these challenges in a manner reflecting both the spirit and intent of the legislative mandates. This document is designed for multiple uses and is intended to be flexible and responsive to future changes in law or mission. Document flexibility and responsiveness will be assured through annual reviews and as-needed updates. Document content includes summaries of Laboratory cultural resource philosophy and overall Department of Energy policy; brief contextual overviews of Laboratory missions, environment, and cultural history; and an overview of cultural resource management practices. 
A series of appendices provides important details that support the main text.

    Computer Science 2019 APR Self-Study & Documents

    UNM Computer Science APR self-study report and review team report for Spring 2019, fulfilling requirements of the Higher Learning Commission

    Annual Report of the University, 1999-2000, Volumes 1-4

    The Robert O. Anderson School and Graduate School of Management at The University of New Mexico. Period of Report: July 1, 1999 to June 30, 2000. Submitted by Howard L. Smith, Dean. The Anderson Schools of Management is divided into four distinct divisions: the Department of Accounting; the Department of Finance, International and Technology Management; the Department of Marketing, Information and Decision Sciences; and the Department of Organizational Studies. This structure provides an opportunity for The Anderson Schools to develop four distinct areas of excellence, as demonstrated by the results reported here.
    I. Significant Developments During the Academic Year
    The Anderson Schools of Management
    • As a result of the multi-year gift from the Ford Motor Company, completed renovation of The Schools' Advisement and Placement Center, as well as all student organization offices.
    • The Ford gift also provided $100,000 to support faculty research, case studies, and course development.
    • The Schools revised the MBA curriculum to meet the changing needs of professional, advanced business education.
    • The Schools updated computer laboratory facilities with the addition of a 45-unit cluster for teaching and student work.
    • The faculty and staff of The Schools furthered outreach in economic development activities by participating directly as committee members and leaders in the cluster workgroups of the Next Generation Economy Initiative.
    • The faculty, staff, and students of The Schools contributed to the development of the Ethics in Business Awards; particularly exciting was the fact that all nominee packages were developed by student teams from The Anderson Schools.
    • The Schools continue to generate more credit hours per faculty member than any other division of the UNM community.
    The Accounting Department
    • Prepared and presented a progress report to the accrediting body, the AACSB.
    The Department of Finance, International and Technology Management
    • The Department continued to focus on expansion of the Management of Technology program as a strategic strength of The Schools.
    The Department of Marketing, Information and Decision Sciences
    • Generated 9,022 credit hours, with a student enrollment of 3,070.
    The Department of Organizational Studies
    • Coordinated the 9th UNM Universidad de Guanajuato (UG) Mexico Student Exchange.

    Easier Parallel Programming with Provably-Efficient Runtime Schedulers

    Over the past decade processor manufacturers have pivoted from increasing uniprocessor performance to multicore architectures. However, utilizing this computational power has proved challenging for software developers. Many concurrency platforms and languages have emerged to address parallel programming challenges, yet writing correct and performant parallel code remains one of the hardest tasks a programmer can undertake. This dissertation studies how runtime scheduling systems can be used to make parallel programming easier. We address the difficulty of writing parallel data structures, automatically finding shared-memory bugs, and reproducing non-deterministic synchronization bugs. Each of the systems presented depends on a novel runtime system that provides strong theoretical performance guarantees and performs well in practice.
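A common provably-efficient runtime scheduling strategy in this area is randomized work stealing: each worker operates on its own task deque and steals from a random victim when idle. The sketch below is a sequential simulation of that idea for illustration only; real schedulers (e.g. Cilk-style runtimes) run workers in parallel over carefully synchronized deques, and nothing here is taken from the dissertation itself.

```python
import random
from collections import deque

def run_work_stealing(tasks, num_workers, seed=0):
    # Sequential toy model of randomized work stealing: busy workers
    # pop from the tail of their own deque; idle workers steal from
    # the head of a random victim's deque.
    rng = random.Random(seed)
    deques = [deque() for _ in range(num_workers)]
    for i, task in enumerate(tasks):        # distribute initial tasks
        deques[i % num_workers].append(task)
    completed = []
    while any(deques):
        for w in range(num_workers):
            if deques[w]:
                completed.append(deques[w].pop())    # own work first
            else:
                victim = rng.randrange(num_workers)  # random victim
                if victim != w and deques[victim]:
                    deques[w].append(deques[victim].popleft())
    return completed

done = run_work_stealing(list(range(10)), num_workers=3)
print(sorted(done))  # every task completes exactly once
```

The head/tail split (owners pop the tail, thieves steal the head) is what keeps contention low in real implementations and underlies the classic expected-time bounds for work stealing.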

    Flexible Application-Layer Multicast in Heterogeneous Networks

    This work develops a set of peer-to-peer-based protocols and extensions to provide Internet-wide group communication. The focus is on how different access technologies can be integrated to address the growing traffic load problem. To this end, protocols are developed that allow autonomous adaptation to the current network situation on the one hand, and the integration of WiFi domains where applicable on the other.

    Autonomous grid scheduling using probabilistic job runtime scheduling

    Computational Grids are evolving into a global, service-oriented architecture – a universal platform for delivering future computational services to a range of applications of varying complexity and resource requirements. The thesis focuses on developing a new scheduling model for general-purpose, utility clusters based on the concept of user-requested job completion deadlines. In such a system, a user would be able to request each job to finish by a certain deadline, possibly at a certain monetary cost. Implementing deadline scheduling depends on the ability to predict the execution time of each queued job, and on an adaptive scheduling algorithm able to use those predictions to maximise deadline adherence. The thesis proposes novel solutions to these two problems and documents their implementation in a largely autonomous and self-managing way. The starting point of the work is an extensive analysis of a representative Grid workload, revealing consistent workflow patterns, usage cycles, and correlations between the execution times of jobs and the properties commonly collected by the Grid middleware for accounting purposes. An automated approach is proposed to identify these dependencies and use them to partition the highly variable workload into subsets of more consistent and predictable behaviour. A range of time-series forecasting models, applied in this context for the first time, were used to model the job execution times as a function of their historical behaviour and associated properties. Based on the resulting predictions of job runtimes, a novel scheduling algorithm estimates the latest job start time necessary to meet the requested deadline and sorts the queue accordingly to minimise the amount of deadline overrun. The testing of the proposed approach was done using an actual job trace collected from a production Grid facility.
The best performing execution time predictor (the auto-regressive moving average method), coupled to workload partitioning based on three simultaneous job properties, returned a median absolute percentage error centroid of only 4.75%. This level of prediction accuracy enabled the proposed deadline scheduling method to reduce the average deadline overrun time ten-fold compared to the benchmark batch scheduler. Overall, the thesis demonstrates that deadline scheduling of computational jobs on the Grid is achievable using statistical forecasting of job execution times based on historical information. The proposed approach is easily implementable, substantially self-managing, and better matched to the human workflow, making it well suited for implementation in the utility Grids of the future.
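The core queue-ordering step described above can be sketched directly: estimate each job's latest feasible start time from its predicted runtime and requested deadline, then sort the queue by that value. This is a minimal illustration of the idea, not the thesis implementation; the job fields and function names are assumptions.

```python
def order_by_latest_start(jobs, now=0.0):
    # jobs: list of (name, predicted_runtime, deadline) tuples.
    # A job's latest feasible start time is deadline - predicted runtime;
    # sorting by it ascending prioritises the most deadline-critical jobs.
    def latest_start(job):
        _, runtime, deadline = job
        return deadline - runtime

    ranked = sorted(jobs, key=latest_start)
    # Jobs whose latest start is already in the past cannot meet
    # their deadline even if started immediately.
    at_risk = [name for name, *_ in ranked
               if latest_start((name, *_)) < now]
    return [name for name, *_ in ranked], at_risk

queue = [("a", 30, 100), ("b", 10, 25), ("c", 50, 60)]
order, at_risk = order_by_latest_start(queue)
print(order, at_risk)  # → ['c', 'b', 'a'] []
```

Here job "c" is scheduled first because its latest start (60 − 50 = 10) leaves the least slack, even though "b" has the earliest deadline; this is exactly why the approach needs accurate runtime predictions.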

    Proceedings, MSVSCC 2014

    Proceedings of the 8th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 17, 2014 at VMASC in Suffolk, Virginia

    Deep Model for Improved Operator Function State Assessment

    A deep learning framework is presented for engagement assessment using EEG signals. Deep learning is a recently developed machine learning technique that has been applied in many domains. In this paper, we propose a deep learning strategy for operator function state (OFS) assessment. Fifteen pilots participated in a flight simulation from Seattle to Chicago. During the four-hour simulation, EEG signals were recorded for each pilot. We labeled 20 minutes of data as engaged and disengaged to fine-tune the deep network, and used the remaining vast amount of unlabeled data to initialize the network. The trained deep network was then used to assess whether a pilot was engaged during the four-hour simulation.
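The preprocessing step implied above, carving a long recording into fixed-length segments so that a small labeled portion can be separated from the unlabeled remainder, can be sketched as follows. The sampling rate and array layout here are toy assumptions for illustration, not the paper's values.

```python
def segment(signal, fs_hz, window_minutes):
    # Split one channel of a long recording into non-overlapping
    # fixed-length windows (e.g. the 20-minute blocks used for labels).
    window = fs_hz * window_minutes * 60   # samples per window
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, window)]

fs = 4                                   # toy sampling rate in Hz
four_hours = list(range(fs * 4 * 3600))  # stand-in for one EEG channel
windows = segment(four_hours, fs, window_minutes=20)
print(len(windows))  # 4 h / 20 min → 12 windows
```

A few labeled windows would then fine-tune the network while the rest serve as unlabeled data for unsupervised initialization, matching the semi-supervised split the abstract describes.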