29 research outputs found

    Smallholder Agriculture and Land Reform in South Africa

    Get PDF
    How can land reform contribute to a revitalisation of smallholder agriculture in Southern Africa? This question remains important despite negative perceptions of land reform as a result of the impact of Zimbabwe’s “fast-track” resettlement programme on agricultural production. This article focuses mainly on South Africa, where a highly unequal distribution of land coexists with deep rural poverty, but dominant narratives of the efficiency of large-scale agriculture exert a stranglehold on rural policy (cf. Toulmin and Guèye, this IDS Bulletin, for West Africa)

    Graphene-doped photo-patternable ionogels: tuning of conductivity and mechanical stability of 3D microstructures

    Get PDF
    This work reports for the first time the development of enhanced-conductivity, graphene-doped photo-patternable hybrid organic-inorganic ionogels and the effect of the subsequent materials condensation on the conductivity and mechanical stability of three-dimensional microstructures fabricated by multi-photon polymerisation (MPP). Ionogels were based on photocurable silicon/zirconium hybrid sol-gel materials and a phosphonium ionic liquid (IL), trihexyltetradecylphosphonium dicyanamide [P6,6,6,14][DCA]. To optimise the dispersion of graphene within the ionogel matrices, aqueous solutions of graphene were prepared, as opposed to the conventional graphene powder approach, and employed as catalysts of the hydrolysis and condensation reactions occurring in the sol-gel process. Ionogels were prepared via a two-step process by varying the hydrolysis degree from 25 to 50%, the IL content between 0 and 50 w/w%, and the inorganic modifier (zirconate complex) concentration from 30 to 60 mol.% against the photocurable ormosil, and they were characterised via Raman spectroscopy, electrochemical impedance spectroscopy and transmission electron microscopy. MPP was performed on the hybrid ionogels, resulting in three-dimensional microstructures that were characterised using scanning electron microscopy. It is clearly demonstrated that the molecular formulation of the ionogels, including the concentration of graphene and the zirconate network modifier, plays a critical role in the conductivity of the ionogels and influences the resulting mechanical stability of the fabricated three-dimensional microstructures. This work aims to establish for the first time the relationship between the molecular design and condensation of the materials and the physico-chemistry and dynamics of ionogels

    CMS distributed computing workflow experience

    Get PDF
    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which include re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources and the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation
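    The tiered division of labour described above can be sketched as a simple routing rule. This is a hypothetical illustration only: the tier names and workflow types come from the abstract, but the function itself is not part of any CMS software.

```python
# Hypothetical sketch of the CMS tiered workflow routing described above:
# prompt processing at CERN (Tier-0), central re-reconstruction/skimming/
# reprocessing at Tier-1, and MC production plus user analysis at Tier-2.

def route_workflow(workflow: str) -> str:
    """Return the computing tier responsible for a given workflow type."""
    tier1_workflows = {"re-reconstruction", "skimming", "mc-reprocessing"}
    tier2_workflows = {"mc-production", "user-analysis"}
    if workflow == "prompt-processing":
        return "Tier-0"
    if workflow in tier1_workflows:
        return "Tier-1"
    if workflow in tier2_workflows:
        return "Tier-2"
    raise ValueError(f"unknown workflow type: {workflow}")

print(route_workflow("skimming"))       # Tier-1
print(route_workflow("mc-production"))  # Tier-2
```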

    Visualization for epidemiological modelling: challenges, solutions, reflections and recommendations.

    Get PDF
    From Europe PMC via Jisc Publications Router. History: epub 2022-08-15, ppub 2022-10-01. Publication status: Published. Funder: UK Research and Innovation; Grant(s): ST/V006126/1, EP/V054236/1, EP/V033670/1.
    We report on an ongoing collaboration between epidemiological modellers and visualization researchers by documenting and reflecting upon knowledge constructs (a series of ideas, approaches and methods taken from existing visualization research and practice) deployed and developed to support modelling of the COVID-19 pandemic. Structured independent commentary on these efforts is synthesized through iterative reflection to develop: evidence of the effectiveness and value of visualization in this context; open problems upon which the research communities may focus; guidance for future activity of this type and recommendations to safeguard the achievements and promote, advance, secure and prepare for future collaborations of this kind. In describing and comparing a series of related projects that were undertaken in unprecedented conditions, our hope is that this unique report, and its rich interactive supplementary materials, will guide the scientific community in embracing visualization in its observation, analysis and modelling of data as well as in disseminating findings. Equally we hope to encourage the visualization community to engage with impactful science in addressing its emerging data challenges. If we are successful, this showcase of activity may stimulate mutually beneficial engagement between communities with complementary expertise to address problems of significance in epidemiology and beyond. See https://ramp-vis.github.io/RAMPVIS-PhilTransA-Supplement/. This article is part of the theme issue 'Technical challenges of modelling real-life epidemics and examples of overcoming these'

    FAIR Data Pipeline: provenance-driven data management for traceable scientific workflows

    Get PDF
    Modern epidemiological analyses to understand and combat the spread of disease depend critically on access to, and use of, data. Rapidly evolving data, such as data streams changing during a disease outbreak, are particularly challenging. Data management is further complicated by data being imprecisely identified when used. Public trust in policy decisions resulting from such analyses is easily damaged and is often low, with cynicism arising where claims of "following the science" are made without accompanying evidence. Tracing the provenance of such decisions back through open software to primary data would clarify this evidence, enhancing the transparency of the decision-making process. Here, we demonstrate a Findable, Accessible, Interoperable and Reusable (FAIR) data pipeline developed during the COVID-19 pandemic that allows easy annotation of data as they are consumed by analyses, while tracing the provenance of scientific outputs back through the analytical source code to data sources. Such a tool provides a mechanism for the public, and fellow scientists, to better assess the trust that should be placed in scientific evidence, while allowing scientists to support policy-makers in openly justifying their decisions. We believe that tools such as this should be promoted for use across all areas of policy-facing research
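    The core provenance idea described above can be sketched minimally: each run of an analysis script records the data it consumed and produced, so any output can be traced back to primary data sources. The structures and names below are hypothetical illustrations, not the actual FAIR Data Pipeline API.

```python
from dataclasses import dataclass, field

@dataclass
class CodeRun:
    """One execution of an analysis script, with the data it touched."""
    script: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

class ProvenanceRegistry:
    """Records which run produced each output, for later back-tracing."""
    def __init__(self):
        self.produced_by = {}  # output name -> CodeRun that produced it

    def register(self, run: CodeRun):
        for out in run.outputs:
            self.produced_by[out] = run

    def trace(self, output: str) -> list:
        """Walk back from an output to the primary data sources."""
        run = self.produced_by.get(output)
        if run is None:
            return [output]  # nothing upstream: this is primary data
        sources = []
        for inp in run.inputs:
            sources.extend(self.trace(inp))
        return sources

# Hypothetical two-step analysis: clean raw counts, then fit a model.
registry = ProvenanceRegistry()
registry.register(CodeRun("clean.py", ["case_counts.csv"], ["cleaned.parquet"]))
registry.register(CodeRun("model.py", ["cleaned.parquet"], ["r_estimate.json"]))
print(registry.trace("r_estimate.json"))  # ['case_counts.csv']
```

The key design point, mirroring the abstract, is that annotation happens as data are consumed, so the provenance chain is a by-product of running the analysis rather than a separate documentation step.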

    IRIS – providing a nationally accessible infrastructure for UK science

    No full text
    In many countries around the world, national infrastructures for science have either been implemented or are under serious consideration by governments and funding bodies. Current examples include ARDC in Australia, CANARIE in Canada and MTA Cloud in Hungary. These infrastructures provide access to compute and storage for a wide swathe of user communities and represent a collaboration between users, providers and, in some cases, industry to maximise the impact of the investments made. The UK has embarked on a project called IRIS to develop a sustainable e-infrastructure based on the needs of a diverse set of communities. Building on the success of the UK component of the WLCG and the innovations made there, a number of research institutes and universities are working with several research groups to co-design an infrastructure, including support services, that takes this to a level applicable to a wider user base. We present the preparatory work leading to the definition of this infrastructure, showing the wide variety of use cases that need to be supported. This leads us to a definition of the hardware and interface requirements needed to meet this diverse set of criteria, and of the support posts identified to make best use of this facility and sustain it into the future

    Producing controlled grid patterns of nanotube arrays for strengthening polymer composites

    No full text
    To maximise the effect of carbon nanotube (CNT) reinforcement on a polymer thin film, while minimising nanotube content, a controllable way of varying the volume fraction of CNTs within the composite is needed. Here we describe the fabrication of controllable CNT grid patterns on a silicon oxide substrate by chemical vapour deposition (CVD). By varying the grid separations we can manipulate the amount of CNTs present on the substrates. These as-grown nanotube arrays can be easily incorporated into a free-standing polymer thin film, as demonstrated recently [1]. Embedded nanotubes mechanically strengthen a polymer and also provide a network of conduction pathways through an insulating polymer matrix. Mechanical reinforcement and the electrical and thermal conductivities of the composite material depend on the location and concentration of these conduction channels. Soft lithography patterning of the catalyst used during nanotube production allows for selective positioning of CNT arrays. Multi-walled carbon nanotubes were grown by the decomposition of acetylene in a CVD chamber

    Running HTC and HPC applications opportunistically across private, academic and public clouds

    No full text
    The Fusion Science Demonstrator in the European Open Science Cloud for Research Pilot Project aimed to demonstrate that the fusion community can make use of distributed cloud resources. We developed a platform, Prominence, which enables users to transparently exploit idle cloud resources for running scientific workloads. In addition to standard HTC jobs, HPC jobs such as multi-node MPI are supported. All jobs are run in containers to ensure they will reliably run anywhere and are reproducible. Cloud infrastructure is invisible to users, as all provisioning, including extensive failure handling, is completely automated. On-premises cloud resources can be utilised, and at times of peak demand jobs can burst onto external clouds. In addition to the traditional “cloud-bursting” onto a single cloud, Prominence allows for bursting across many clouds in a hierarchical manner. Job requirements are taken into account, so jobs with special requirements, e.g. high memory or access to GPUs, are sent only to appropriate clouds. Here we describe Prominence, its architecture, the challenges of using many clouds opportunistically and report on our experiences with several fusion use cases
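    The requirement-aware hierarchical bursting described above can be sketched as a filter over available clouds, trying on-premises resources before external ones. The data model and values below are illustrative assumptions, not the actual Prominence implementation.

```python
# Hypothetical sketch of requirement-aware cloud selection: jobs with
# special needs (high memory, GPUs) are sent only to clouds that can
# satisfy them, and lower "tiers" (on-premises first) are preferred.

CLOUDS = [
    # name, bursting tier (lower = preferred), capacity -- illustrative only
    {"name": "on-prem",  "tier": 0, "memory_gb": 64,  "gpus": False},
    {"name": "academic", "tier": 1, "memory_gb": 256, "gpus": False},
    {"name": "public",   "tier": 2, "memory_gb": 512, "gpus": True},
]

def select_cloud(memory_gb: int, needs_gpu: bool = False):
    """Pick the most-preferred cloud that meets the job's requirements."""
    candidates = [
        c for c in CLOUDS
        if c["memory_gb"] >= memory_gb and (c["gpus"] or not needs_gpu)
    ]
    if not candidates:
        return None  # no cloud can run this job
    return min(candidates, key=lambda c: c["tier"])["name"]

print(select_cloud(memory_gb=32))                  # on-prem
print(select_cloud(memory_gb=128))                 # academic (bursts out)
print(select_cloud(memory_gb=32, needs_gpu=True))  # public (only GPU cloud)
```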
