
    The Locus Algorithm III: A Grid Computing system to generate catalogues of optimised pointings for Differential Photometry

    This paper discusses the hardware and software components of the Grid Computing system used to implement the Locus Algorithm, which identifies optimum pointings for differential photometry of 61,662,376 stars and 23,799 quasars. The scale of the data, together with initial operational assessments, demanded a High Performance Computing (HPC) system to complete the data analysis. Grid computing was chosen because it was the optimum HPC solution available within this project. The physical and logical structure of the National Grid Computing Infrastructure informed the approach taken: a layered separation of the different project components to enable maximum flexibility and extensibility.
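
    As a rough illustration of the data-parallel decomposition such a grid system depends on, the sketch below splits a large target catalogue into independent work units that can be submitted to grid worker nodes and merged afterwards; the function, field names, and chunk size are hypothetical and are not taken from the paper.

    # Hypothetical sketch: chunk a large target catalogue into independent grid jobs.
    # Names and chunk size are illustrative, not the Locus Algorithm pipeline itself.
    def make_work_units(catalogue, chunk_size=10_000):
        """Yield (job_id, targets) pairs that a grid worker can process independently;
        the per-job results are later merged into the final catalogue of pointings."""
        for job_id, start in enumerate(range(0, len(catalogue), chunk_size)):
            yield job_id, catalogue[start:start + chunk_size]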

    Conversations on a probable future: interview with Beatrice Fazi

    No description supplied.

    Situational Awareness Support to Enhance Teamwork in Collaborative Environments

    Modern collaborative environments often provide an overwhelming amount of visual information on multiple displays. The multitude of personal and shared interaction devices leads to a lack of awareness among team members of ongoing activities and of who is in control of shared artefacts. This research addresses the situational awareness (SA) support of multidisciplinary teams in co-located collaborative environments. This work aims to gain insights into the design and evaluation of large-display systems that afford SA and effective teamwork.

    Genetic Algorithm-based Mapper to Support Multiple Concurrent Users on Wireless Testbeds

    Communication and networking research introduces new protocols and standards, with an increasing number of researchers relying on real experiments rather than simulations to evaluate the performance of their new protocols. A number of testbeds are currently available for this purpose and a growing number of users are requesting access to those testbeds. This motivates the need for better utilization of the testbeds by allowing concurrent experimentation. In this work, we introduce a novel mapping algorithm that aims to maximize wireless testbed utilization using frequency slicing of the spectrum resources. The mapper employs a genetic algorithm to find the best combination of requests that can be served concurrently, after obtaining all possible mappings of each request via an induced sub-graph isomorphism stage. The proposed mapper is tested on grid testbeds and randomly generated topologies. The solution of our mapper is compared to the optimal one, obtained through a brute-force search, and serves the same number of requests in 82.96% of the test scenarios. Furthermore, we show the effect of careful testbed topology design on enhancing testbed utilization by applying our mapper to a carefully positioned 8-node testbed. In addition, our proposed approach for testbed slicing and request mapping improves the total number of served requests by about a factor of five compared to a simple allocation policy with no slicing. Comment: IEEE Wireless Communications and Networking Conference (WCNC) 201
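
    To make the selection stage concrete, the sketch below shows one common way a genetic algorithm can pick a conflict-free combination of requests once every request has its list of candidate mappings (here standing in for the output of the sub-graph isomorphism stage). The chromosome encoding, names, and GA parameters are assumptions for illustration, not the authors' implementation.

    import random

    def conflicts(mapping_a, mapping_b):
        # Two candidate mappings conflict if they share a (node, channel) resource.
        return bool(set(mapping_a) & set(mapping_b))

    def fitness(chromosome, candidates):
        # One gene per request: -1 means "not scheduled", otherwise an index into
        # that request's candidate mappings. Fitness is the number of served
        # requests; any resource conflict makes the whole combination infeasible.
        chosen, served = [], 0
        for req, gene in enumerate(chromosome):
            if gene < 0:
                continue
            mapping = candidates[req][gene]
            if any(conflicts(mapping, other) for other in chosen):
                return 0
            chosen.append(mapping)
            served += 1
        return served

    def evolve(candidates, pop_size=50, generations=200, mutation_rate=0.1):
        # Simple generational GA: keep the fitter half, refill with one-point
        # crossover children, and occasionally mutate a gene.
        n = len(candidates)
        def random_chromosome():
            return [random.randint(-1, len(candidates[i]) - 1) for i in range(n)]
        population = [random_chromosome() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=lambda c: fitness(c, candidates), reverse=True)
            survivors = population[:pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randint(1, max(1, n - 1))
                child = a[:cut] + b[cut:]
                if random.random() < mutation_rate:
                    i = random.randrange(n)
                    child[i] = random.randint(-1, len(candidates[i]) - 1)
                children.append(child)
            population = survivors + children
        return max(population, key=lambda c: fitness(c, candidates))

    Each candidate mapping can be represented as a set of (node, channel) pairs, so maximising fitness corresponds to serving as many requests as possible under the frequency-slicing constraint.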

    Managing Copyright and Licensing Information in Software Projects: Streamlining Specifications and Standards for the Next Generation Internet

    This article analyses how providing clear specifications to software projects and communicating legal information regarding licenses and copyright helps streamline management policies for digital commons, especially Free Software. In particular, this article deals with the compliance issues software projects may face during their implementation of Free Software licenses. Establishing a compliance program for the different types of Free Software licenses involves operational, logistical, and legal efforts. A licensing compliance program depends on the size of the Free Software project, as it could be tailored for small teams of contributors working on a small program or for a large corporation implementing Free Software in their production systems. A successful compliance program can be beneficial not only to the project but to the whole ecosystem, as it enables a project or organization to effectively use and profit from digital commons and contribute back to the community. As a case study, this paper focuses on how a compliance workflow was proposed to the Next Generation Internet (NGI), a European Commission (EC) initiative for Europe’s digital transformation. The NGI aims to shape the development of the “Internet of Humans”, in response to fundamental rights and principles including trust, security, and inclusion. The NGI fosters diversity and decentralisation for a sustainable open ecosystem based on open technologies, such as Free Software. The paper illustrates the workflow proposed for software projects covered by the NGI, addressing topics such as compliance processes, code release, guidelines for external contribution, and baseline requirements for contracts and trademarks. In particular, the article refers to “REUSE Software”, a set of best practices for declaring copyright and licensing in an unambiguous, human- and machine-readable way. The REUSE specifications facilitate management policies for digital commons. They improve data and metadata communication for individuals, communities, governments, and businesses. The article concludes with the lessons learned from the experience gathered in the three years of implementation of the compliance workflow established for the NGI initiative. This effort actively contributed to 295 software projects.
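
    For reference, the REUSE practice mentioned above expects every file to carry SPDX copyright and licence tags in a comment header, as in the minimal illustrative Python file below (the name, year, and licence are placeholders); the reuse command-line tool can then check a whole repository for compliance.

    # SPDX-FileCopyrightText: 2023 Jane Doe <jane.doe@example.org>
    #
    # SPDX-License-Identifier: GPL-3.0-or-later

    """Example module: the two SPDX tags above make the file's copyright and
    licensing information unambiguous and machine-readable, as REUSE requires."""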

    Satellite Networks: Architectures, Applications, and Technologies

    Since global satellite networks are moving to the forefront in enhancing the national and global information infrastructures, due to communication satellites' unique networking characteristics, a workshop was organized to assess the progress made to date and chart the future. This workshop provided a forum to assess the current state of the art, identify key issues, and highlight the emerging trends in next-generation architectures, data protocol development, communication interoperability, and applications. Presentations covering an overview, the state of the art in research, development, deployment, and applications, and future trends in satellite networks are assembled.

    RIACS

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year co-operative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. The primary mission of RIACS is to carry out research and development in computer science, devoted in the main to tasks that are strategically enabling with respect to NASA's bold mission in space exploration and aeronautics. There are three foci for this work: (1) Automated Reasoning, (2) Human-Centered Computing, and (3) High Performance Computing and Networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission and Super-Resolution Surface Modeling.

    Practical Experiences With Torque Meta-Scheduling In The Czech National Grid

    The Czech National Grid Infrastructure went through a complex transition in the last year. The production environment was switched from the commercial batch system PBSPro to an open source alternative, the Torque batch system. This paper concentrates on two aspects of this transition. First, we present our practical experience with Torque used as a production-ready batch system. Our modified version of Torque, with all the necessary PBSPro-exclusive features re-implemented and further extended with new features such as cloud-like behaviour, was deployed across the entire production environment, covering the entire Czech Republic, for almost a full year. In the second part, we present our work on meta-scheduling, which involves distributed architecture and cloud-grid convergence. The distributed architecture was designed to overcome the limitations of the central server setup originally used, which exhibited stability and performance issues. While this paper does not discuss the inclusion of cloud interfaces into grids, it does present the dynamic infrastructure, which is a prerequisite for sharing the grid infrastructure between a batch system and a cloud gateway. We also invite everyone to try out our fork of the Torque batch system, which is now publicly available.