
    The role of users’ emotions and associated quality goals on appropriation of systems: two case studies

    In this paper, we examine the role of emotions and associated system qualities in encouraging the adoption and effective use of systems. In two different contexts, we examine the use of a learning management system in an educational setting and a personal emergency alarm system in an aged care setting. This study reveals that technology appropriation is driven by different emotions depending on whether users are in the adoption decision-making stage or using the system as part of their everyday routine. Findings from this study suggest that social factors influence people's emotions in the decision to adopt a system. However, as people use a system, it is the non-functional system qualities, based on personal experiences with its look, feel, functionality, and features, that trigger positive and negative emotional responses. Our findings therefore suggest that these emotional responses should be considered during system design and implementation to encourage appropriation and avoid rejection of systems.

    Elevating zero dimensional global scaling predictions to self-consistent theory-based simulations

    We have developed an innovative workflow, STEP-0D, within the OMFIT integrated modelling framework. Through systematic validation against the International Tokamak Physics Activity (ITPA) global H-mode confinement database, we demonstrate that STEP-0D predicts the energy confinement time with a mean relative error (MRE) of less than 19%. Moreover, the workflow shows promising potential in predicting plasmas for proposed fusion reactors such as ARC, EU-DEMO, and CFETR, indicating moderate H-factors between 0.9 and 1.2. STEP-0D allows theory-based prediction of tokamak scenarios, beginning with zero-dimensional (0D) quantities. The workflow initiates with the PRO-create module, which generates physically consistent plasma profiles and an equilibrium from the same 0D quantities as the IPB98(y,2) confinement scaling. This sets the starting point for the STEP (Stability, Transport, Equilibrium, and Pedestal) module, which iterates between theory-based physics models of equilibrium, core transport, and pedestal to yield a self-consistent solution. Given these attributes, STEP-0D not only improves the accuracy of predicting plasma performance but also provides a path towards a novel fusion power plant (FPP) design workflow. When integrated with engineering and costing models within an optimization loop, this approach could eliminate the iterative reconciliation between plasma models of varying fidelity. This potential for a more efficient design process underpins STEP-0D's significant contribution to future fusion power plant development.
    Comment: 12 pages, 13 figures, accepted by Physics of Plasmas 202
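The STEP module's cycling between equilibrium, core transport, and pedestal models is, at its core, a fixed-point search. A minimal sketch of that pattern, with made-up module names and coefficients (none taken from the actual OMFIT/STEP workflow):

```python
# Illustrative sketch only: the update rules below are stand-ins for the
# real physics modules, chosen so the iteration contracts to a fixed point.

def iterate_to_self_consistency(state, modules, tol=1e-6, max_iter=100):
    """Apply each module's update in turn until no quantity changes by more
    than `tol` (relative), i.e. until a self-consistent solution is reached."""
    for _ in range(max_iter):
        previous = dict(state)
        for module in modules:
            state = module(state)
        if all(abs(state[k] - previous[k]) <= tol * max(1.0, abs(previous[k]))
               for k in state):
            return state
    raise RuntimeError("no self-consistent solution within max_iter")

# Each "module" relaxes one quantity toward a value consistent with the
# others (coefficients are invented for the example).
def equilibrium(s):
    return {**s, "beta": 0.5 * s["beta"] + 0.5 * (0.01 + 0.02 * s["pped"])}

def transport(s):
    return {**s, "te0": 0.5 * s["te0"] + 0.5 * (1.0 + 50.0 * s["beta"])}

def pedestal(s):
    return {**s, "pped": 0.5 * s["pped"] + 0.5 * (0.2 * s["te0"])}

state = iterate_to_self_consistency(
    {"beta": 0.02, "te0": 1.0, "pped": 0.5},
    [equilibrium, transport, pedestal],
)
```

The loop structure, not the numbers, is the point: each pass feeds each model's output into the next until the whole state stops moving.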

    Research Priorities for Achieving Healthy Marine Ecosystems and Human Communities in a Changing Climate

    ABSTRACT: The health of coastal human communities and marine ecosystems are at risk from a host of anthropogenic stressors, in particular, climate change. Because ecological health and human well-being are inextricably connected, effective and positive responses to current risks require multidisciplinary solutions. Yet, the complexity of coupled social-ecological systems has left many potential solutions unidentified or insufficiently explored. The urgent need to achieve positive social and ecological outcomes across local and global scales necessitates rapid and targeted multidisciplinary research to identify solutions that have the greatest chance of promoting benefits for both people and nature. To address these challenges, we conducted a forecasting exercise with a diverse, multidisciplinary team to identify priority research questions needed to promote sustainable and just marine social-ecological systems now and into the future, within the context of climate change and population growth. In contrast to the traditional reactive cycle of science and management, we aimed to generate questions that focus on what we need to know, before we need to know it. Participants were presented with the question, "If we were managing oceans in 2050 and looking back, what research, primary or synthetic, would we wish we had invested in today?" We first identified major social and ecological events over the past 60 years that shaped current human relationships with coasts and oceans. We then used a modified Delphi approach to identify nine priority research areas and 46 questions focused on increasing sustainability and well-being in marine social-ecological systems. The research areas we identified include relationships between ecological and human health, access to resources, equity, governance, economics, resilience, and technology. Most questions require increased collaboration across traditionally distinct disciplines and sectors for successful study and implementation. 
    By identifying these questions, we hope to facilitate the discourse, research, and policies needed to rapidly promote healthy marine ecosystems and the human communities that depend upon them.

    A genealogy of hacking

    Hacking is now a widely discussed and well-known phenomenon, but it remains difficult to define and empirically identify because it has come to refer to many different, sometimes incompatible, material practices. This paper proposes genealogy as a framework for understanding hacking by briefly revisiting Foucault's concept of genealogy and interpreting its perspectival stance through the feminist materialist concept of the situated observer. Using genealogy as a theoretical frame, a history of hacking is proposed in four phases. The first phase is the ‘pre-history’ of hacking, in which four core practices were developed. The second phase is the ‘golden age of cracking’, in which hacking becomes a self-conscious identity and community and is, for many, identified with breaking into computers, even while non-cracking practices such as free software mature. The third phase sees hacking divide into a number of new practices even while old practices continue, including the rise of serious cybercrime, hacktivism, the division between Open Source and Free Software, and hacking as an ethic of business and work. The final phase sees broad consciousness of state-sponsored hacking, the re-rise of hardware hacking in maker labs and hack spaces, and the diffusion of hacking into a broad ‘clever’ practice. In conclusion, it is argued that, across all the practices surveyed, hacking consists of an interrogation of the rationality of information techno-cultures: each hacker practice situates itself within a particular techno-culture and then uses that techno-culture to change it, both by changing the potential actions that can be taken and by changing the nature of the techno-culture itself.

    Long-term Mortality in HIV-Positive Individuals Virally Suppressed for >3 Years With Incomplete CD4 Recovery

    Virally suppressed HIV-positive individuals on combination antiretroviral therapy who do not achieve a CD4 count >200 cells/µL have substantially increased long-term mortality. The increased mortality was seen across different patient groups and for all causes of death.

    The sustainable materials roadmap

    Over the past 150 years, our ability to produce and transform engineered materials has been responsible for our current high standards of living, especially in developed economies. However, we must carefully consider the effects that our addiction to creating and using materials at this fast rate will have on future generations. The way we currently make and use materials detrimentally affects planet Earth, creating many severe environmental problems, and endangers the economy, energy supply, and climate for the next generations. We are at the point where something must drastically change, and it must change now. We must create more sustainable materials alternatives using natural raw materials and inspiration from nature, while making sure not to deplete important resources, e.g. those in competition with the food supply chain. We must use less material, eliminate the use of toxic materials, and create a circular materials economy in which reuse and recycling are priorities. We must develop sustainable methods for materials recycling and encourage design for disassembly. We must look across the whole materials life cycle, from raw resources to end of life, and apply thorough life cycle assessments (LCAs) based on reliable and relevant data to quantify sustainability. We need to start thinking seriously about where our future materials will come from and how we could track them, given that we are confronted with resource scarcity and geographical constraints. This is particularly important for the development of new and sustainable energy technologies, which are key to our transition to net zero. Currently, 'critical materials' are central components of sustainable energy systems because they are the best performing. Examples include the permanent magnets based on rare earth metals (Dy, Nd, Pr) used in wind turbines, Li and Co in Li-ion batteries, Pt and Ir in fuel cells and electrolysers, and Si in solar cells. 
    These materials are classified as 'critical' by the European Union and the US Department of Energy. Beyond sustainable energy, materials are also key components in the packaging, construction, and textile industries, along with many other industrial sectors. This roadmap, authored by prominent researchers working across disciplines in the very important field of sustainable materials, is intended to highlight the outstanding issues that must be addressed and to provide insight into the pathways towards solving them adopted by the sustainable materials community. In compiling this roadmap, we hope to aid the development of the wider sustainable materials research community, providing a guide for academia, industry, government, and funding agencies in this critically important and rapidly developing research space, which is key to future sustainability.

    CD4:CD8 ratio and CD8 count as prognostic markers for mortality in HIV-positive patients on ART: Antiretroviral Therapy Cohort Collaboration

    We investigated whether CD4:CD8 ratio and CD8 count were prognostic for all-cause, AIDS, and non-AIDS mortality in virologically suppressed patients with high CD4 count. We used data from 13 European and North American cohorts of human immunodeficiency virus-infected, antiretroviral therapy (ART)-naive adults who started ART during 1996-2010 and who were followed from the date they had a CD4 count ≥350 cells/μL and were virologically suppressed (baseline). We used stratified Cox models to estimate unadjusted and adjusted (for sex, injection drug use, ART initiation year, and baseline age, CD4 count, AIDS status, and duration of ART) all-cause and cause-specific mortality hazard ratios for tertiles of CD4:CD8 ratio (0-0.40, 0.41-0.64 [reference], >0.64) and CD8 count (0-760, 761-1138 [reference], >1138 cells/μL), and we examined the shape of associations using cubic splines. During 276,526 person-years, 1834 of 49,865 patients died (249 AIDS-related, 1076 non-AIDS-defining, and 509 unknown/unclassifiable deaths). There was little evidence that CD4:CD8 ratio was prognostic for all-cause mortality after adjustment for other factors: the adjusted hazard ratio (aHR) for the lower vs middle tertile was 1.11 (95% confidence interval [CI], 1.00-1.25). The association of CD8 count with all-cause mortality was U-shaped: the aHR for the higher vs middle tertile was 1.13 (95% CI, 1.01-1.26). AIDS-related mortality declined with increasing CD4:CD8 ratio and decreasing CD8 count. There was little evidence that CD4:CD8 ratio or CD8 count was prognostic for non-AIDS mortality. In this large cohort collaboration, the magnitude of the adjusted associations of CD4:CD8 ratio or CD8 count with mortality was too small for them to be useful as independent prognostic markers in virally suppressed patients on ART.
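The tertile groupings used in the analysis can be expressed as a simple binning function. The cut-points below are those quoted in the abstract; the function and label names are ours:

```python
# Tertile cut-points quoted in the abstract; the middle tertile is the
# reference group in the Cox models.
RATIO_CUTS = (0.40, 0.64)   # CD4:CD8 ratio: 0-0.40, 0.41-0.64 [ref], >0.64
CD8_CUTS = (760, 1138)      # CD8 count (cells/uL): 0-760, 761-1138 [ref], >1138

def tertile(value, cuts):
    """Return which tertile a value falls in:
    'lower', 'middle' (reference), or 'higher'."""
    lower_cut, upper_cut = cuts
    if value <= lower_cut:
        return "lower"
    return "middle" if value <= upper_cut else "higher"
```

For example, a patient with CD4:CD8 = 0.35 falls in the lower ratio tertile, the group with aHR 1.11 (95% CI, 1.00-1.25) versus the middle tertile.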

    Readout of a quantum processor with high dynamic range Josephson parametric amplifiers

    We demonstrate a high dynamic range Josephson parametric amplifier (JPA) in which the active nonlinear element is implemented using an array of rf-SQUIDs. The device is matched to the 50 Ω environment with a Klopfenstein-taper impedance transformer and achieves a bandwidth of 250-300 MHz, with input saturation powers up to -95 dBm at 20 dB gain. A 54-qubit Sycamore processor was used to benchmark these devices, providing a calibration for readout power, an estimate of amplifier added noise, and a platform for comparison against standard impedance-matched parametric amplifiers with a single dc-SQUID. We find that the high-power rf-SQUID array design has no adverse effect on system noise, readout fidelity, or qubit dephasing, and we estimate an upper bound on amplifier added noise of 1.6 times the quantum limit. Lastly, amplifiers with this design show no degradation in readout fidelity due to gain compression, which can occur in multi-tone multiplexed readout with traditional JPAs.
    Comment: 9 pages, 8 figures
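To put the quoted -95 dBm saturation power in context, it can be converted to watts and to an approximate photon flux. The 5 GHz carrier frequency below is our assumption for illustration, not a figure from the paper:

```python
PLANCK = 6.62607015e-34  # Planck constant, J*s

def dbm_to_watts(p_dbm):
    """dBm -> watts: P[W] = 10 ** ((P[dBm] - 30) / 10)."""
    return 10.0 ** ((p_dbm - 30.0) / 10.0)

def photon_flux(p_watts, freq_hz):
    """Photons per second carried by a signal of power P at frequency f."""
    return p_watts / (PLANCK * freq_hz)

p_sat = dbm_to_watts(-95)        # ~3.2e-13 W
flux = photon_flux(p_sat, 5e9)   # ~1e11 photons/s at an assumed 5 GHz
```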

    Measurement-Induced State Transitions in a Superconducting Qubit: Within the Rotating Wave Approximation

    Superconducting qubits typically use a dispersive readout scheme, where a resonator is coupled to a qubit such that its frequency is qubit-state dependent. Measurement is performed by driving the resonator, where the transmitted resonator field yields information about the resonator frequency and thus the qubit state. Ideally, we could use arbitrarily strong resonator drives to achieve a target signal-to-noise ratio in the shortest possible time. However, experiments have shown that when the average resonator photon number exceeds a certain threshold, the qubit is excited out of its computational subspace, which we refer to as a measurement-induced state transition. These transitions degrade readout fidelity and constitute leakage, which precludes further operation of the qubit in, for example, error correction. Here we study these transitions using a transmon qubit by experimentally measuring their dependence on qubit frequency, average photon number, and qubit state, in the regime where the resonator frequency is lower than the qubit frequency. We observe signatures of resonant transitions between levels in the coupled qubit-resonator system that exhibit noisy behavior when measured repeatedly in time. We provide a semi-classical model of these transitions based on the rotating wave approximation and use it to predict the onset of state transitions in our experiments. Our results suggest the transmon is excited to levels near the top of its cosine potential following a state transition, where the charge dispersion of higher transmon levels explains the observed noisy behavior of state transitions. Moreover, occupation in these higher energy levels poses a major challenge for fast qubit reset.
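The qubit-state-dependent resonator frequency described above can be illustrated with the simplest two-level dispersive approximation, χ = g²/Δ (the paper's analysis goes well beyond this; the coupling and detuning values below are illustrative, not from the experiment):

```python
def dispersive_shift(g_hz, delta_hz):
    """Two-level dispersive approximation: chi = g**2 / Delta, where g is the
    qubit-resonator coupling and Delta = f_q - f_r is the detuning. The
    resonator is pulled to f_r +/- chi depending on the qubit state."""
    return g_hz ** 2 / delta_hz

# Illustrative numbers: 100 MHz coupling, qubit 1 GHz above the resonator
# (the regime described above, with the resonator below the qubit).
chi = dispersive_shift(g_hz=100e6, delta_hz=1e9)   # 10 MHz state-dependent pull
```

Measuring on which side of f_r the transmitted tone responds then distinguishes the two qubit states.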

    Overcoming leakage in scalable quantum error correction

    Leakage of quantum information out of computational states into higher energy states represents a major challenge in the pursuit of quantum error correction (QEC). In a QEC circuit, leakage builds over time and spreads through multi-qubit interactions. This leads to correlated errors that degrade the exponential suppression of logical error with scale, challenging the feasibility of QEC as a path towards fault-tolerant quantum computation. Here, we demonstrate the execution of a distance-3 surface code and distance-21 bit-flip code on a Sycamore quantum processor where leakage is removed from all qubits in each cycle. This shortens the lifetime of leakage and curtails its ability to spread and induce correlated errors. We report a ten-fold reduction in steady-state leakage population on the data qubits encoding the logical state and an average leakage population of less than 1×10⁻³ throughout the entire device. The leakage removal process itself efficiently returns leakage population back to the computational basis, and adding it to a code circuit prevents leakage from inducing correlated error across cycles, restoring a fundamental assumption of QEC. With this demonstration that leakage can be contained, we resolve a key challenge for practical QEC at scale.
    Comment: Main text: 7 pages, 5 figures
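The effect of per-cycle leakage removal on the steady-state population can be captured by a toy recursion; the injection rate and removal efficiency below are illustrative, not measured values from the experiment:

```python
def steady_state_leakage(inject, remove, cycles=200, p=0.0):
    """Each cycle injects leakage probability `inject`, then the removal step
    returns a fraction `remove` of the leaked population to the computational
    basis: p -> (1 - remove) * (p + inject).
    The fixed point is p* = inject * (1 - remove) / remove."""
    for _ in range(cycles):
        p = (1.0 - remove) * (p + inject)
    return p

# With 1% leakage injected per cycle and 90% of it removed each cycle,
# the population settles near 1.1e-3 instead of accumulating without bound.
p_star = steady_state_leakage(inject=0.01, remove=0.9)
```

Without the removal step (remove = 0), the same recursion grows linearly with the number of cycles, which is the accumulation problem the paper addresses.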