1,007 research outputs found

    Information Outlook, September 2005

    Get PDF
    Volume 9, Issue 9

    Planning the Future of U.S. Particle Physics (Snowmass 2013): Chapter 1: Summary

    Full text link
    These reports present the results of the 2013 Community Summer Study of the APS Division of Particles and Fields ("Snowmass 2013") on the future program of particle physics in the U.S. Chapter 1 contains the Executive Summary and the summaries of the reports of the nine working groups. Comment: 51 pages

    Proceedings of the 2005 IJCAI Workshop on AI and Autonomic Communications

    Get PDF

    Distributed computing and farm management with application to the search for heavy gauge bosons using the ATLAS experiment at the LHC (CERN)

    Get PDF
    The Standard Model of particle physics describes the strong, weak, and electromagnetic forces between the fundamental particles of ordinary matter. However, it presents several problems and leaves some questions unanswered, so it cannot be considered a complete theory of fundamental interactions. Many extensions have been proposed to address these problems. Some important recent extensions are the Extra Dimensions theories. In the context of some models with Extra Dimensions of size about 1 TeV^{-1}, in particular in the ADD model with only fermions confined to a D-brane, heavy Kaluza-Klein excitations are expected, with the same properties as SM gauge bosons but more massive. In this work, three hadronic decay modes of such massive gauge bosons, Z* and W*, are investigated using the ATLAS experiment at the Large Hadron Collider (LHC), presently under construction at CERN. These hadronic modes are more difficult to detect than the leptonic ones, but they should allow a measurement of the couplings between heavy gauge bosons and quarks. The events were generated using the ATLAS fast simulation and reconstruction MC program Atlfast coupled to the Monte Carlo generator PYTHIA. We found that for an integrated luminosity of 3 × 10^{5} pb^{-1} and a heavy gauge boson mass of 2 TeV, the channels Z*->bb and Z*->tt would be difficult to detect because the signal would be very small compared with the expected background, although the significance in the case of Z*->tt is larger. In the channel W*->tb, the decay might yield a signal separable from the background and a significance larger than 5, so we conclude that it would be possible to detect this particular mode at the LHC. The analysis was also performed for masses of 1 TeV and we conclude that the observability decreases with the mass. In particular, a significance higher than 5 may be achieved below approximately 1.4, 1.9 and 2.2 TeV for Z*->bb, Z*->tt and W*->tb respectively.
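    The observability criterion used above (a significance larger than 5) can be illustrated with the usual counting-experiment estimate S/√B. This is a minimal sketch; the event counts below are made-up placeholders, not values from the thesis.

```python
import math

def significance(n_signal: float, n_background: float) -> float:
    """Counting-experiment significance estimate S / sqrt(B)."""
    return n_signal / math.sqrt(n_background)

# Hypothetical event counts after selection cuts (illustrative only).
s, b = 120.0, 400.0
print(significance(s, b))  # 6.0 -> above the 5-sigma observability threshold
```

    A channel whose expected signal yields S/√B above 5 for the assumed luminosity would count as observable in this sense.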
    The LHC will start to operate in 2008 and collect data in 2009. It will produce roughly 15 Petabytes of data per year. Access to this experimental data has to be provided for some 5,000 scientists working in 500 research institutes and universities. In addition, all data need to remain available over the estimated 15-year lifetime of the LHC. The analysis of the data, including comparison with theoretical simulations, requires enormous computing power. The computing challenges that scientists have to face are the huge amounts of data, calculations, and collaborators. The Grid has been proposed as a solution to those challenges. The LHC Computing Grid project (LCG) is the Grid used by ATLAS and the other LHC experiments, and it is analysed in depth with the aim of studying its possible complementary use with another Grid project: the Berkeley Open Infrastructure for Network Computing (BOINC) middleware, developed for the SETI@home project, a Grid specialised in high-CPU tasks and in using volunteer computing resources. Several important packages of physics software used by ATLAS and other LHC experiments have been successfully adapted/ported to this platform with the aim of integrating them into the LHC@home project at CERN: Atlfast, PYTHIA, Geant4 and Garfield. The events used in our physics analysis with Atlfast were reproduced using BOINC, obtaining exactly the same results. The LCG software, in particular SEAL, ROOT and the external software, was also ported to the Solaris/sparc platform to study its portability in general. A testbed was run including a large amount of heterogeneous hardware and software, involving a farm of 100 computers at CERN's computing center (lxboinc) together with 30 PCs from CIEMAT and 45 from schools in Extremadura (Spain).
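    The scale of the data challenge quoted above can be checked with a rough back-of-the-envelope calculation using the figures in the abstract (15 PB/year, a 15-year lifetime, about 5,000 scientists).

```python
# Rough arithmetic sketch using the figures quoted in the abstract.
PB_PER_YEAR = 15       # LHC data volume per year, in petabytes
YEARS = 15             # estimated experiment lifetime
SCIENTISTS = 5_000     # scientists needing access

total_pb = PB_PER_YEAR * YEARS
print(total_pb)                         # 225 PB over the full lifetime
print(PB_PER_YEAR * 1e6 / SCIENTISTS)   # ~3,000 GB of new data per scientist per year
```

    Even divided evenly among all collaborators, each scientist's annual share of new data is in the terabyte range, which motivates distributed Grid storage and processing rather than local copies.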
    That required a preliminary study, development and creation of components of the Quattor software and configuration management tool to install and manage the lxboinc farm, and it also involved setting up a collaboration between the Spanish research centers and government and CERN. The testbed was successful and 26,597 Grid jobs were delivered, executed and received successfully. We conclude that BOINC and LCG are complementary and useful kinds of Grid that can be used by ATLAS and the other LHC experiments. LCG has very good data distribution, management and storage capabilities that BOINC does not have. On the other hand, BOINC does not need high bandwidth or Internet speed, and it can provide a huge and inexpensive amount of computing power coming from volunteers. In addition, it is possible to send jobs from LCG to BOINC and vice versa. Possible complementary cases are therefore to use volunteer BOINC nodes when the LCG nodes have too many jobs to do, or to use BOINC for high-CPU tasks like event generation or reconstruction while reserving LCG for data analysis.
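    The complementary-use idea described above can be sketched as a simple routing rule: CPU-bound jobs (event generation, reconstruction) go to BOINC volunteers, data-heavy analysis stays on LCG, and overflow from a busy LCG queue spills over to BOINC. The function name, job labels, and queue threshold below are illustrative assumptions, not part of the actual LHC@home or LCG schedulers.

```python
def route_job(kind: str, lcg_queue_depth: int, lcg_limit: int = 100) -> str:
    """Pick a Grid ('boinc' or 'lcg') for a job, per the complementarity idea."""
    if kind in ("event_generation", "reconstruction"):
        return "boinc"   # high-CPU, low-I/O work suits volunteer resources
    if lcg_queue_depth > lcg_limit:
        return "boinc"   # LCG overloaded: spill over to volunteers
    return "lcg"         # analysis needs LCG's data distribution and storage

print(route_job("event_generation", 10))  # boinc
print(route_job("analysis", 250))         # boinc (overflow case)
print(route_job("analysis", 10))          # lcg
```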

    A descriptive dissertation on trust development between pastor and parish

    Get PDF

    Information Outlook, September 2007

    Get PDF
    Volume 11, Issue 9

    From diaspora to disciple : training mainland Chinese Christians to live a Christlike life

    Get PDF

    The DUNE Far Detector Interim Design Report Volume 1: Physics, Technology & Strategies Deep Underground Neutrino Experiment (DUNE)

    Get PDF
    The Deep Underground Neutrino Experiment (DUNE) will be a world-class neutrino observatory and nucleon decay detector designed to answer fundamental questions about the nature of elementary particles and their role in the universe. The international DUNE experiment, hosted by the U.S. Department of Energy’s Fermilab, will consist of a far detector to be located about 1.5 km underground at the Sanford Underground Research Facility (SURF) in South Dakota, USA, at a distance of 1300 km from Fermilab, and a near detector to be located at Fermilab in Illinois. The far detector will be a very large, modular liquid argon time-projection chamber (LArTPC) with a 40 kt (40 Gg) fiducial mass. This LAr technology will make it possible to reconstruct neutrino interactions with image-like precision and unprecedented resolution

    OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond

    Full text link
    An introduction to the OpenCog Hyperon framework for Artificial General Intelligence is presented. Hyperon is a new, mostly from-the-ground-up rewrite/redesign of the OpenCog AGI framework, based on similar conceptual and cognitive principles to the previous OpenCog version, but incorporating a variety of new ideas at the mathematical, software architecture and AI-algorithm level. This review lightly summarizes: 1) some of the history behind OpenCog and Hyperon, 2) the core structures and processes underlying Hyperon as a software system, 3) the integration of this software system with the SingularityNET ecosystem's decentralized infrastructure, 4) the cognitive model(s) being experimentally pursued within Hyperon on the hopeful path to advanced AGI, 5) the prospects seen for advanced aspects like reflective self-modification and self-improvement of the codebase, 6) the tentative development roadmap and various challenges expected to be faced, 7) the thinking of the Hyperon team regarding how to guide this sort of work in a beneficial direction ... and gives links and references for readers who wish to delve further into any of these aspects