444 research outputs found

    Creation and Revelation: Two Edges of Contact Between Science and Religion

    The author, who is a physicist, engages in theological conjecture suggested by some of the concepts of his discipline, demonstrating the fruitfulness of creative appropriation of ideas across disciplinary boundaries. Two futuristic scenes contrast possible developments of the Church within a technological society.

    A Community-Focused Health & Work Service (HWS)

    We recommend establishing a community-focused Health & Work Service (HWS) dedicated to responding rapidly when working people begin a health-related work absence due to a potentially disabling condition. The first days and weeks after onset are an especially critical period: simple things that do or do not happen during that interval influence, favorably or unfavorably, the likelihood of a good long-term outcome. This interval is the optimal window of opportunity to improve outcomes by attending to the worker's basic needs and concerns while coordinating the medical, functional restoration, and occupational aspects of the situation.

    HelioSwarm: The Swarm is the Observatory

    The HelioSwarm Mission will transform our understanding of space plasma turbulence as the first-of-its-kind simultaneous, multiscale observatory comprising multiple spacecraft. HelioSwarm was competitively selected under the Heliophysics Explorers Program 2019 Medium-Class Explorer (MIDEX) Announcement of Opportunity. The central powered-ESPA Hub spacecraft is co-orbited by eight SmallSat Node spacecraft, together moving through a High Earth Orbit to obtain data in various solar wind regimes. The mission uses a hub-and-spoke architecture, with the larger Hub serving as a communications relay between the Nodes and the DSN. Mission operations, management, and technical oversight are provided by NASA Ames Research Center; the spacecraft are provided by Northrop Grumman and BCT. The instrument suite includes foreign-contributed and US payloads, all under the oversight of the University of New Hampshire, which is also the Principal Investigator's home institution and the Science Operations Center. The mission timeline from launch through the conclusion of the one-year science phase is provided along with a summarized concept of operations, with particular emphasis on placing the Nodes in their proper relative orbit loops to form the geometry needed for science collection at apogee. A combination of legacy tools and custom swarm analysis tools is used to design the swarm and to sort and visualize the collected science data and telemetry in context. Finally, an exploration of the pathfinding nature of HelioSwarm and some implications for future large scientific swarms is offered.

    The Scalable Commutativity Rule: Designing Scalable Software for Multicore Processors

    What fundamental opportunities for scalability are latent in interfaces, such as system call APIs? Can scalability opportunities be identified even before any implementation exists, simply by considering interface specifications? To answer these questions, this paper introduces the following rule: whenever interface operations commute, they can be implemented in a way that scales. This rule aids developers in building more scalable software starting from interface design and carrying on through implementation, testing, and evaluation. To help developers apply the rule, a new tool named Commuter accepts high-level interface models and generates tests of operations that commute and hence could scale. Using these tests, Commuter can evaluate the scalability of an implementation. We apply Commuter to 18 POSIX calls and use the results to guide the implementation of a new research operating system kernel called sv6. Linux scales for 68% of the 13,664 tests generated by Commuter for these calls, and Commuter finds many problems that have been observed to limit application scalability. sv6 scales for 99% of the tests.
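
    As a concrete, deliberately tiny illustration of the rule, the C sketch below runs a pair of operations in both orders from the same starting state and compares per-call results and final states, which is roughly the check Commuter automates over symbolic interface models. The fd-table model and the names op_open and opens_commute are invented for the example; the lowest-available-descriptor behavior it encodes is the paper's example of a specification detail that defeats commutativity.

    /*
     * Minimal commutativity check (illustrative only): a 4-entry fd table
     * where "open" must return the lowest free descriptor, as POSIX
     * specifies.  Two opens are run in both orders from the same state;
     * if any per-call result or the final state differs, they do not
     * commute.
     */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define NFD 4

    struct state { bool used[NFD]; };

    /* open(): allocate the lowest free descriptor, or return -1. */
    static int op_open(struct state *s) {
        for (int fd = 0; fd < NFD; fd++)
            if (!s->used[fd]) { s->used[fd] = true; return fd; }
        return -1;
    }

    /* Run two opens in both orders and compare results and final states. */
    static bool opens_commute(struct state start) {
        struct state a = start, b = start;
        int r1a = op_open(&a), r2a = op_open(&a);   /* call 1, then call 2 */
        int r2b = op_open(&b), r1b = op_open(&b);   /* call 2, then call 1 */
        return r1a == r1b && r2a == r2b &&
               memcmp(a.used, b.used, sizeof a.used) == 0;
    }

    int main(void) {
        struct state s = { .used = { false } };     /* all descriptors free */
        printf("two opens commute: %s\n", opens_commute(s) ? "yes" : "no");
        return 0;
    }

    Because the two orderings hand different descriptors to the same call, the pair does not commute, so some shared state must serialize concurrent opens; relaxing the lowest-fd requirement restores commutativity and, by the rule, admits a conflict-free, scalable implementation.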

    Tolerating Malicious Device Drivers in Linux

    This paper presents SUD, a system for running existing Linux device drivers as untrusted user-space processes. Even if the device driver is controlled by a malicious adversary, it cannot compromise the rest of the system. One significant challenge of fully isolating a driver is to confine the actions of its hardware device. SUD relies on IOMMU hardware, PCI Express bridges, and message-signaled interrupts to confine hardware devices. SUD runs unmodified Linux device drivers by emulating a Linux kernel environment in user-space. A prototype of SUD runs drivers for Gigabit Ethernet, 802.11 wireless, sound cards, USB host controllers, and USB devices, and it is easy to add a new device class. SUD achieves the same performance as an in-kernel driver on networking benchmarks, and can saturate a Gigabit Ethernet link. SUD incurs a CPU overhead comparable to existing runtime driver isolation techniques, while providing much stronger isolation guarantees for untrusted drivers. Finally, SUD requires minimal changes to the kernel (just two kernel modules comprising 4,000 lines of code), which may at last allow the adoption of these ideas in practice.
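
    As a rough illustration of the isolation boundary only (not of SUD's actual mechanisms), the sketch below, assuming a POSIX system, runs a toy "driver" in a separate process and has the caller talk to it over a socketpair, so a crash or compromise of the driver process cannot take the caller down. The driver_loop function and the message format are invented for the example; real SUD additionally confines the device itself with the IOMMU, PCI Express bridges, and message-signaled interrupts, which this toy omits.

    /*
     * Process-isolation sketch (POSIX): the untrusted "driver" runs in its
     * own process and exchanges messages with the caller over a socketpair.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Untrusted driver: read requests and answer them.  A bug here kills
       only this process, never the caller. */
    static void driver_loop(int fd) {
        char req[64];
        ssize_t n;
        while ((n = read(fd, req, sizeof req - 1)) > 0) {
            req[n] = '\0';
            char reply[96];
            snprintf(reply, sizeof reply, "handled: %s", req);
            write(fd, reply, strlen(reply));
        }
        _exit(0);
    }

    int main(void) {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) return 1;

        pid_t pid = fork();
        if (pid == 0) {                 /* child: the sandboxed driver */
            close(sv[0]);
            driver_loop(sv[1]);
        }
        close(sv[1]);

        /* Caller side: forward one request and wait for the reply. */
        const char *req = "transmit packet";
        write(sv[0], req, strlen(req));

        char reply[96];
        ssize_t n = read(sv[0], reply, sizeof reply - 1);
        if (n > 0) { reply[n] = '\0'; printf("driver said: %s\n", reply); }

        close(sv[0]);
        waitpid(pid, NULL, 0);
        return 0;
    }

    The value of the boundary is that the caller's failure handling reduces to noticing a dead process; confining the device's DMA, which this sketch does not model, is what makes such a boundary meaningful for real drivers.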

    Alcohol Interventions for Trauma Patients Treated in Emergency Departments and Hospitals: A Cost Benefit Analysis

    Summarizes a study of whether screening for problem drinking and interventions to reduce alcohol intake in hospital trauma centers reduce the direct cost of injury-related health care. Compares the costs of injury recidivism with and without intervention.

    Understanding How Components of Black Racial Identity and Racial Realities May Impact Healthcare Utilization: A Randomized Study

    Purpose: Studies have suggested that even when minority groups have potential access to healthcare, they may have inadequate utilization (realized access). This study explores the application of a theory from the social psychology and political science literatures concerning how racial centrality and racial realities, specifically among Blacks, may influence patients' healthcare utilization preferences. Methods: We created a survey with two (pseudo) randomized, controlled experimental treatments designed to assess whether racialized hospital and physician characteristics elicited a preference from Black or White respondents, as well as questions aimed at understanding participants' different beliefs and levels of knowledge about past and current racial health disparities. The survey was distributed online by Qualtrics to paid Black (n=225) and White (n=75) participants. Data were analyzed using bivariable statistics. Results: Black respondents preferred a hospital with an advertisement featuring Black healthcare workers (p<.01), an association that was correlated with higher levels of Black centrality (p<.01), beliefs that the Tuskegee Syphilis Experiment could happen today (p<.05), and a lack of trust in the healthcare system (p<.01). No such association was observed for White respondents. Neither White nor Black respondents showed any significant associations concerning preference for a physician with a racialized name. Black respondents were significantly more likely to answer questions concerning the existence of health disparities correctly; however, there was no difference in the number of healthcare-related discriminatory experiences or in general trust of healthcare organizations observed between respondents of the two races. Conclusions: Black subjects appeared to prefer health institutions that give the outward appearance of being diverse. This choice was associated with racial group centrality and knowledge of certain racial realities. As more equal access is legislated, the role racial identity plays in affecting utilization patterns should be better understood in order to inform future health care programs and policies.

    Reinventing Scheduling for Multicore Systems

    High performance on multicore processors requires that schedulers be reinvented. Traditional schedulers focus on keeping execution units busy by assigning each core a thread to run. Schedulers ought to focus, however, on high utilization of on-chip memory, rather than of execution cores, to reduce the impact of expensive DRAM and remote cache accesses. A challenge in achieving good use of on-chip memory is that the memory is split up among the cores in the form of many small caches. This paper argues for a form of scheduling that assigns each object and its operations to a specific core, moving a thread among the cores as it uses different objects.
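
    To make the proposal concrete, here is a minimal sketch, assuming a Linux system, of the core mechanism the abstract describes: give each object a home core and migrate the thread onto that core before operating on the object, so the object's cache lines stay hot in a single on-chip cache. The struct counter type, the home_core field, and counter_add are hypothetical names for illustration; the paper's scheduler is considerably more sophisticated.

    /*
     * Object-affinity sketch (Linux-specific, uses sched_setaffinity):
     * each object records a home core, and a thread moves itself onto
     * that core before touching the object.
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    struct counter {
        int  home_core;   /* core whose cache should hold this object */
        long value;
    };

    /* Migrate the calling thread to the object's home core, then operate. */
    static void counter_add(struct counter *c, long n) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(c->home_core, &set);
        sched_setaffinity(0, sizeof(set), &set);  /* pid 0 = calling thread */
        c->value += n;                            /* now runs on the home core */
    }

    int main(void) {
        long ncores = sysconf(_SC_NPROCESSORS_ONLN);
        struct counter a = { .home_core = 0, .value = 0 };
        struct counter b = { .home_core = ncores > 1 ? 1 : 0, .value = 0 };

        counter_add(&a, 5);   /* thread hops to core 0 */
        counter_add(&b, 7);   /* then to core 1, if present */
        counter_add(&a, 1);   /* and back to core 0 */

        printf("a=%ld (home core %d), b=%ld (home core %d)\n",
               a.value, a.home_core, b.value, b.home_core);
        return 0;
    }

    Migrating on every operation would be far too expensive in practice; the sketch only illustrates the shift in perspective the paper argues for, in which threads follow data rather than data following threads.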

    A Software Approach to Unifying Multicore Caches

    Multicore chips will have large amounts of fast on-chip cache memory, along with relatively slow DRAM interfaces. The on-chip cache memory, however, will be fragmented and spread over the chip; this distributed arrangement is hard for certain kinds of applications to exploit efficiently, and can lead to needless slow DRAM accesses. First, data accessed from many cores may be duplicated in many caches, reducing the amount of distinct data cached. Second, data in a cache distant from the accessing core may be slow to fetch via the cache coherence protocol. Third, software on each core can only allocate space in the small fraction of total cache memory that is local to that core. A new approach called software cache unification (SCU) addresses these challenges for applications that would be better served by a large shared cache. SCU chooses the on-chip cache in which to cache each item of data. As an application thread reads data items, SCU moves the thread to the core whose on-chip cache contains each item. This allows the thread to read the data quickly if it is already on-chip; if it is not, moving the thread causes the data to be loaded into the chosen on-chip cache. A new file cache for Linux, called MFC, uses SCU to improve the performance of file-intensive applications, such as Unix file utilities. An evaluation on a 16-core AMD Opteron machine shows that MFC improves the throughput of file utilities by a factor of 1.6. Experiments with a platform that emulates future machines with less DRAM throughput per core show that MFC will provide benefit to a growing range of applications. This material is based upon work supported by the National Science Foundation under grant number 0915164.
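
    A minimal sketch, assuming a Linux system, of the thread-migration idea underlying SCU: pick a home core for each block of data (here by hashing the block index) and move the reading thread there before touching the block, so each block ends up in, and is later re-read from, one chosen on-chip cache. The names home_core and read_block are illustrative; MFC applies the same idea inside the Linux file cache rather than in application code.

    /*
     * SCU-style sketch (Linux-specific): spread blocks across cores and
     * migrate before each access, so a block always lands in the same
     * on-chip cache instead of being duplicated everywhere.
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BLOCK 4096

    static long ncores;

    /* Choose the core whose cache should hold a given block. */
    static int home_core(size_t block_index) {
        return (int)(block_index % (size_t)ncores);
    }

    /* Migrate to the block's home core, then do the (stand-in) work. */
    static long read_block(const unsigned char *buf, size_t block_index) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(home_core(block_index), &set);
        sched_setaffinity(0, sizeof(set), &set);

        long sum = 0;
        for (size_t i = 0; i < BLOCK; i++)
            sum += buf[block_index * BLOCK + i];
        return sum;
    }

    int main(void) {
        ncores = sysconf(_SC_NPROCESSORS_ONLN);
        if (ncores < 1) ncores = 1;

        size_t nblocks = 8;
        unsigned char *data = malloc(nblocks * BLOCK);
        if (!data) return 1;
        memset(data, 1, nblocks * BLOCK);

        long total = 0;
        for (size_t b = 0; b < nblocks; b++)
            total += read_block(data, b);   /* thread hops per block */

        printf("total = %ld over %zu blocks\n", total, nblocks);
        free(data);
        return 0;
    }

    The migration primitive is the same one used in the scheduling sketch above; what changes is that the placement decision is made per data item rather than per object.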