
    The colours of the Higgs boson: a study in creativity and science motivation among high-school students in Italy

    With the increasing shift from STEM to STEAM education, arts-based approaches to science teaching and learning are considered promising for aligning school science curricula with the development of twenty-first-century skills, including creativity. Yet the impact of STEAM practices on student creativity, and specifically the question of how creativity is associated with science learning outcomes, has thus far received scarce empirical support. This paper contributes to this line of research by reporting on a two-wave quantitative study that examines the effect of a long-term STEAM intervention on two cognitive processes associated with creativity (act, flow) and their interrelationships with intrinsic and extrinsic components of science motivation. Using pre- and post-survey data from 175 high-school students in Italy, results show an overall positive effect of the intervention on both the act subscale of creativity and science career motivation, whereas a negative effect is found on self-efficacy. Gender differences in these effects are also observed. Further, results provide support for the mediating role of self-efficacy in the relationship between creativity and science career motivation. Implications for the design of STEAM learning environments are discussed.
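
    As an illustration of the mediation analysis mentioned in the abstract, a minimal Baron-Kenny-style sketch in Python could look like the following; the column names and synthetic data are hypothetical, not the study's actual dataset or model.

```python
# Minimal sketch of a Baron-Kenny-style mediation check; synthetic data and
# hypothetical column names, not the study's actual dataset or model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 175  # sample size matching the study
act = rng.normal(size=n)                           # creativity (act) score
self_eff = 0.5 * act + rng.normal(size=n)          # mediator: self-efficacy
career = 0.3 * act + 0.4 * self_eff + rng.normal(size=n)
df = pd.DataFrame({"creativity_act": act, "self_efficacy": self_eff,
                   "career_motivation": career})

# c path: total effect of creativity on science career motivation.
total = smf.ols("career_motivation ~ creativity_act", data=df).fit()
# a path: effect of creativity on the mediator.
a_path = smf.ols("self_efficacy ~ creativity_act", data=df).fit()
# b and c' paths: a shrunken creativity coefficient alongside a significant
# self-efficacy term is consistent with (partial) mediation.
direct = smf.ols("career_motivation ~ creativity_act + self_efficacy", data=df).fit()

print(total.params["creativity_act"], direct.params["creativity_act"])
```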

    Analisi della evoluzione delle tecnologie hardware per il calcolo scientifico

    In the coming decade, HEP experiments will need computing resources of enormous power, which current technologies will not be able to provide. This work analyses the evolution of CPU, storage, and network technologies in order to assess some of the indicators (performance, power consumption) on which the design of a computing centre valid for the next 10 years could be based. Technology forecasts over such a time span are easily subject to evaluation errors, even macroscopic ones: roadmaps valid for only three to four years, unexpected technological leaps, and unpredictable shifts in market trends can easily change or even overturn predictions projected so far into the future, and invalidate even the most careful extrapolations. Nevertheless, an analysis of the current situation and of what can be said today about the evolution of these technologies is a starting point without which no hypothesis could be formulated at all.
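
    As a purely illustrative aside on such extrapolations, the following is a minimal sketch of fitting and projecting an exponential (Moore's-law-style) performance trend; the sample figures are invented, not taken from the report.

```python
# Illustrative sketch only: extrapolating an exponential performance trend
# from historical points; the numbers are invented, not figures from the report.
import numpy as np

years = np.array([2010, 2012, 2014, 2016, 2018, 2020])
perf = np.array([1.0, 1.9, 3.5, 6.0, 9.5, 14.0])  # relative CPU performance

# Fit log2(perf) linearly in time, i.e. perf ~ 2**(slope * year + intercept).
slope, intercept = np.polyfit(years, np.log2(perf), 1)
doubling_time = 1.0 / slope

target = 2030
projection = 2 ** (slope * target + intercept)
print(f"doubling time ~{doubling_time:.1f} y, "
      f"projected relative performance in {target}: {projection:.0f}x")
```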

    HEPScore: A new CPU benchmark for the WLCG

    HEPScore is a new CPU benchmark created to replace the HEPSPEC06 benchmark that is currently used by the WLCG for procurement, computing resource pledges, and performance studies. The development of the new benchmark, based on HEP applications, or workloads, has involved many contributions from software developers, data analysts, experts of the experiments, and representatives of several WLCG computing centres, as well as the WLCG HEPScore Deployment Task Force. In this contribution, we review the selection of workloads and the validation of the new HEPScore benchmark.
    Comment: Paper submitted to the proceedings of the Computing in HEP Conference 2023, Norfolk.
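
    As a hedged illustration of how a composite benchmark score can be derived from per-workload scores, here is a minimal sketch using a geometric mean, the style of aggregation used by HEPScore-like benchmarks; the workload names and values are invented, not the official configuration.

```python
# Minimal sketch: composite benchmark score as the geometric mean of
# per-workload scores. Workload names and numbers are illustrative only.
from math import prod

workload_scores = {
    "atlas-gen": 1.12,
    "cms-reco": 0.95,
    "lhcb-sim": 1.03,
}

def composite_score(scores: dict[str, float]) -> float:
    """Unweighted geometric mean of the per-workload scores."""
    values = list(scores.values())
    return prod(values) ** (1.0 / len(values))

print(f"composite score: {composite_score(workload_scores):.3f}")
```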

    Implementation and use of a highly available and innovative IaaS solution: the Cloud Area Padovana

    While in the business world the cloud paradigm is typically implemented by purchasing resources and services from third-party providers (e.g. Amazon), in the scientific environment there is usually a need for on-premises IaaS infrastructures that allow efficient usage of hardware distributed among (and owned by) different scientific administrative domains. In addition, the requirement of open-source adoption has led many organizations to choose products like OpenStack. We describe a use case of the Italian National Institute for Nuclear Physics (INFN) which resulted in the implementation of a unique cloud service, called ‘Cloud Area Padovana’, which encompasses resources spread over two different sites: the INFN Legnaro National Laboratories and the INFN Padova division. We describe how this IaaS has been implemented, which technologies have been adopted, and how services have been configured in high-availability (HA) mode. We also discuss how identity and authorization management were implemented, adopting a widely accepted standard architecture based on SAML2 and OpenID: by leveraging the versatility of those standards, the integration with authentication federations like IDEM was implemented. We also discuss other innovative developments, such as a pluggable scheduler, implemented as an extension of the native OpenStack scheduler, which allocates resources according to a fair-share model and provides a persistent queuing mechanism for handling user requests that cannot be served immediately. The tools, technologies, and procedures used to install, configure, monitor, and operate this cloud service are also discussed. Finally, we present some examples that show how this IaaS infrastructure is being used.
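
    A minimal sketch of the kind of fair-share priority rule such a scheduler could apply follows; the project fields and the share-over-usage formula are assumptions for illustration, not the actual Cloud Area Padovana implementation.

```python
# Illustrative fair-share priority rule: projects that have consumed less
# than their entitled share are boosted; heavy recent users are pushed back.
# Field names and the formula are assumptions, not the real implementation.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    share: float         # fraction of the cloud the project is entitled to
    recent_usage: float  # normalised usage over the accounting window

def fair_share_priority(p: Project) -> float:
    # Guard against division by zero for projects with no recent usage.
    return p.share / max(p.recent_usage, 1e-6)

queue = [Project("cms", 0.5, 0.7),
         Project("alice", 0.3, 0.1),
         Project("belle2", 0.2, 0.2)]
for p in sorted(queue, key=fair_share_priority, reverse=True):
    print(f"{p.name}: priority {fair_share_priority(p):.2f}")
```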

    Autonomic Management of Large Clusters and Their Integration into the Grid

    We present a framework for the coordinated, autonomic management of multiple clusters in a compute center and their integration into a Grid environment. Site autonomy and the automation of administrative tasks are prime aspects of this framework. The system behavior is continuously monitored in a steering cycle, and appropriate actions are taken to resolve any problems. All presented components have been implemented in the course of the EU project DataGrid: the Lemon monitoring components, the FT fault-tolerance mechanism, the quattor system for software installation and configuration, the RMS job and resource management system, and the Gridification scheme that integrates clusters into the Grid.
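
    A minimal sketch of the monitor-diagnose-act steering cycle described above, with invented checks and recovery actions standing in for the Lemon sensors and FT actuators:

```python
# Minimal sketch of a steering cycle: collect metrics, diagnose problems,
# trigger recovery actions. Checks and actions are invented placeholders;
# the DataGrid components implement far richer versions of each step.
import time

def collect_metrics() -> dict:
    # Placeholder for a Lemon-style sensor read.
    return {"load": 0.4, "disk_free_gb": 120, "daemon_ok": True}

def diagnose(metrics: dict) -> list[str]:
    problems = []
    if not metrics["daemon_ok"]:
        problems.append("restart_daemon")
    if metrics["disk_free_gb"] < 10:
        problems.append("clean_scratch")
    return problems

def steering_cycle(interval_s: float = 60.0, max_cycles: int = 3) -> None:
    for _ in range(max_cycles):
        for action in diagnose(collect_metrics()):
            print(f"recovery action: {action}")  # FT-style actuator would run here
        time.sleep(interval_s)

steering_cycle(interval_s=0.1)
```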

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
    Peer reviewed.

    Trends in computing technologies and markets: The HEPiX TechWatch WG

    Driven by the need to carefully plan and optimise the resources for the next data-taking periods of Big Science projects, such as CERN’s Large Hadron Collider, sites started a common activity, the HEPiX Technology Watch Working Group, tasked with tracking the evolution of the technologies and markets of concern to the data centres. The talk will give an overview of general and semiconductor markets, server markets, CPUs and accelerators, memories, storage, and networks; it will highlight important areas of uncertainty and risk.