A Modern Application of Hedonics for Valuing Irrigation
Bindings, Blades, and Bottlenecks: Finding Equilibrium in an In-House Digitization Project
Lightning Talk given at ACRL New England Conference 2021. Starting in fall 2019, we embarked on an in-house pilot project to digitize our entire thesis and dissertation collection and ingest these works into the institutional repository. Undergraduate student workers do all of the scanning: we remove the binding from one copy of each thesis/dissertation and digitize the resulting loose pages on a sheet-feeder scanner. Surprisingly little has been written about this approach to digitization in libraries, perhaps because destroying physical copies of books is not often desirable or feasible. But in certain situations, this workflow can provide a very low-cost and relatively fast option for digitizing a collection. The best information we were able to find about taking books apart came from YouTube; for advice about heavy-duty paper cutters (guillotines) we relied on book arts listservs. As a result, we are learning on the job and adjusting as we go. For example, the new paper cutter blade went dull after merely two months because of the unexpectedly high number of books students were able to get through. Also because of the high volume of scanning, staff working on other pieces of the project – such as uploading the files to the repository, editing the catalog records, and even discarding the large volume of empty bindings – encountered a large bottleneck. After recalibrating and redistributing the workloads amongst the collaborative team, we are finding a balance where the various elements of the project are keeping pace. Given the limited amount of information available about in-house “destructive” digitization projects, our talk will be aimed at sharing valuable lessons learned from the experience.
Lessons Learned from a Decade of Providing Interactive, On-Demand High Performance Computing to Scientists and Engineers
For decades, the use of HPC systems was limited to those in the physical
sciences who had mastered their domain in conjunction with a deep understanding
of HPC architectures and algorithms. During these same decades, consumer
computing device advances produced tablets and smartphones that allow millions
of children to interactively develop and share code projects across the globe.
As the HPC community faces the challenges associated with guiding researchers
from disciplines using high productivity interactive tools to effective use of
HPC systems, it seems appropriate to revisit the assumptions surrounding the
necessary skills required for access to large computational systems. For over a
decade, MIT Lincoln Laboratory has been supporting interactive, on-demand high
performance computing by seamlessly integrating familiar high productivity
tools to provide users with an increased number of design turns, rapid
prototyping capability, and faster time to insight. In this paper, we discuss
the lessons learned while supporting interactive, on-demand high performance
computing from the perspectives of the users and the team supporting the users
and the system. Building on these lessons, we present an overview of current
needs and the technical solutions we are building to lower the barrier to entry
for new users from the humanities, social, and biological sciences.

Comment: 15 pages, 3 figures, First Workshop on Interactive High Performance Computing (WIHPC) 2018, held in conjunction with ISC High Performance 2018 in Frankfurt, Germany
Supercomputing in plain English : teaching high performance computing to inexperienced programmers
Oct 24, 2002 (paper), The University of Oklahoma
A direct comparison of decision rules for early discharge of suspected acute coronary syndromes in the era of high sensitivity troponin
Background: We tested the hypothesis that a single high sensitivity troponin at the limit of detection (LOD HSTnT) (<5 ng/l), combined with a non-ischaemic presentation electrocardiogram, is superior to the low-risk Global Registry of Acute Coronary Events (GRACE) (<75), Thrombolysis in Myocardial Infarction (TIMI) (≤1) and History, ECG, Age, Risk factors and Troponin (HEART) (≤3) scores as an aid to early, safe discharge for suspected acute coronary syndrome.

Methods: In a prospective cohort study, risk scores were computed in consecutive patients with suspected acute coronary syndrome presenting to the Emergency Room of a large English hospital. Adjudication of myocardial infarction, as per the third universal definition, involved a two-physician, blinded, independent review of all biomarker-positive chest pain re-presentations to any national hospital. The primary and secondary outcomes were a composite of type 1 myocardial infarction, unplanned coronary revascularisation and all-cause death (MACE) at six weeks and one year, respectively.

Results: Of 3054 consecutive presentations with chest pain, 1642 had suspected acute coronary syndrome (52% male, median age 59 years, 14% diabetic, 20% previous myocardial infarction). Median time from chest pain to presentation was 9.7 h. Re-presentations occurred in eight hospitals, with 100% follow-up achieved. Two hundred and eleven patients (12.9%) and 279 (17%) were adjudicated to suffer MACE at six weeks and one year, respectively. Only HEART ≤3 (negative predictive value for MACE 99.4%, sensitivity 97.6%, discharge rate 53.4%) and the LOD HSTnT strategy (negative predictive value for MACE 99.8%, sensitivity 99.5%, discharge rate 36.9%) achieved the pre-specified negative predictive value of >99% for MACE at six weeks. For type 1 myocardial infarction alone, the negative predictive values at six weeks and one year were identical for both HEART ≤3 and LOD HSTnT, at 99.8% and 99.5% respectively.

Conclusion: A HEART ≤3 or LOD HSTnT strategy rules out short- and medium-term myocardial infarction with ≥99.5% certainty, and short-term MACE with >99% certainty, allowing early discharge of 53.4% and 36.9% respectively of patients with suspected acute coronary syndrome. Adoption of either strategy has the potential to greatly reduce Emergency Room pressures and minimise follow-up investigations. Very early presenters (<3 h) are, due to limited numbers, excluded from these conclusions.
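The two rule-out strategies that met the pre-specified negative predictive value reduce to simple threshold checks. The sketch below is illustrative only – the function and variable names are our own, and the thresholds are taken from the abstract:

```python
def lod_hstnt_rule_out(hs_troponin_ng_l: float, ischaemic_ecg: bool) -> bool:
    """LOD HSTnT strategy: high sensitivity troponin below the limit of
    detection (<5 ng/l) plus a non-ischaemic presentation ECG."""
    return hs_troponin_ng_l < 5 and not ischaemic_ecg

def heart_rule_out(heart_score: int) -> bool:
    """Low-risk HEART strategy: HEART score of 3 or less."""
    return heart_score <= 3

# A patient with undetectable troponin and a non-ischaemic ECG is ruled out
# under the LOD HSTnT strategy; a HEART score of 4 is not low risk.
print(lod_hstnt_rule_out(3.0, False))  # True
print(heart_rule_out(4))               # False
```

As the abstract notes, the HEART ≤3 rule discharges more patients (53.4% vs 36.9%) at a slightly lower negative predictive value, which is the trade-off the study quantifies.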
Interactive Supercomputing on 40,000 Cores for Machine Learning and Data Analysis
Interactive massively parallel computations are critical for machine learning
and data analysis. These computations are a staple of the MIT Lincoln
Laboratory Supercomputing Center (LLSC) and have required the LLSC to develop
unique interactive supercomputing capabilities. Scaling interactive machine
learning frameworks, such as TensorFlow, and data analysis environments, such
as MATLAB/Octave, to tens of thousands of cores presents many technical
challenges - in particular, rapidly dispatching many tasks through a scheduler,
such as Slurm, and starting many instances of applications with thousands of
dependencies. Careful tuning of launches and prepositioning of applications
overcome these challenges and allow the launching of thousands of tasks in
seconds on a 40,000-core supercomputer. Specifically, this work demonstrates
launching 32,000 TensorFlow processes in 4 seconds and launching 262,000 Octave
processes in 40 seconds. These capabilities allow researchers to rapidly
explore novel machine learning architectures and data analysis algorithms.

Comment: 6 pages, 7 figures, IEEE High Performance Extreme Computing Conference 2018
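The launch pattern the abstract describes – dispatching many tasks through a single scheduler call rather than tens of thousands of separate submissions – can be sketched as below. This is a minimal illustration assuming a Slurm installation; the helper name is our own, while `srun` and its `--ntasks`/`--partition` options are standard Slurm:

```python
def build_srun_command(ntasks, app, app_args=(), partition=None):
    """Build a single srun invocation that dispatches `ntasks` copies of
    `app` in one scheduler call, rather than ntasks separate submissions."""
    cmd = ["srun", f"--ntasks={ntasks}"]
    if partition is not None:
        cmd.append(f"--partition={partition}")
    cmd.append(app)
    cmd.extend(app_args)
    return cmd

# One scheduler call asks for 32,000 worker processes at once.
cmd = build_srun_command(32000, "python", ("train.py",), partition="gpu")
print(" ".join(cmd))  # srun --ntasks=32000 --partition=gpu python train.py
```

In practice the resulting command would be handed to the scheduler (e.g. via `subprocess.run(cmd)`); the tuning and application prepositioning described in the abstract sit on top of this basic dispatch step.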
The Impact of Membrane Lipid Composition on Macrophage Activation in the Immune Defense against Rhodococcus equi and Pseudomonas aeruginosa
Nutritional fatty acids are known to have an impact on the membrane lipid composition of body cells, including cells of the immune system, thus providing a link between dietary fatty acid uptake, inflammation and immunity. In this study we reveal the significance of macrophage membrane lipid composition for gene expression and cytokine synthesis, thereby highlighting signal transduction processes, macrophage activation and macrophage defense mechanisms. Using RAW264.7 macrophages as a model system, we identified polyunsaturated fatty acids (PUFA) of both the n-3 and the n-6 family to down-regulate the synthesis of: (i) the pro-inflammatory cytokines IL-1β, IL-6 and TNF-α; (ii) the co-stimulatory molecule CD86; as well as (iii) the antimicrobial polypeptide lysozyme. The action of the fatty acids partially depended on the activation status of the macrophages. It is particularly important to note that the anti-inflammatory action of the PUFA could also be seen upon infection of RAW264.7 cells with viable R. equi and P. aeruginosa microorganisms. In summary, our data provide strong evidence that PUFA from both the n-3 and the n-6 family down-regulate inflammation processes in the context of chronic infections caused by persistent pathogens.