
    A principled approach to programming with nested types in Haskell

    Initial algebra semantics is one of the cornerstones of the theory of modern functional programming languages. For each inductive data type, it provides a Church encoding for that type, a build combinator which constructs data of that type, a fold combinator which encapsulates structured recursion over data of that type, and a fold/build rule which optimises modular programs by eliminating from them data that is constructed using the build combinator and immediately consumed using the fold combinator for that type. It has long been thought that initial algebra semantics is not expressive enough to provide a similar foundation for programming with nested types in Haskell. Specifically, the standard folds derived from initial algebra semantics have been considered too weak to capture commonly occurring patterns of recursion over data of nested types in Haskell, and no build combinators or fold/build rules have until now been defined for nested types. This paper shows that standard folds are, in fact, sufficiently expressive for programming with nested types in Haskell. It also defines build combinators and fold/build fusion rules for nested types. It thus shows how initial algebra semantics provides a principled, expressive, and elegant foundation for programming with nested types in Haskell.
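    To make the terminology concrete, the sketch below shows what the standard fold and a build combinator might look like for a classic nested type. This is a minimal illustration under our own naming (Nest, foldNest, buildNest, size), not the paper's definitions; the key point it demonstrates is that the algebra arguments of a fold over a nested type must be polymorphic in the element type.

        {-# LANGUAGE RankNTypes, ScopedTypeVariables #-}

        import Data.Functor.Const (Const (..))

        -- A classic nested type: the element type changes to pairs at
        -- each level of recursion.
        data Nest a = NilN | ConsN a (Nest (a, a))

        -- The standard fold: since the recursive occurrence is at type
        -- (a, a), the algebra arguments are natural transformations.
        foldNest :: forall f a.
                    (forall b. f b)                   -- nil algebra
                 -> (forall b. b -> f (b, b) -> f b)  -- cons algebra
                 -> Nest a -> f a
        foldNest nil cons = go
          where
            go :: forall b. Nest b -> f b
            go NilN         = nil
            go (ConsN x xs) = cons x (go xs)

        -- A build combinator: a producer parametric in the algebra.
        buildNest :: (forall f. (forall b. f b)
                             -> (forall b. b -> f (b, b) -> f b)
                             -> f a)
                  -> Nest a
        buildNest g = g NilN ConsN

        -- The corresponding fold/build fusion rule reads:
        --   foldNest nil cons (buildNest g) = g nil cons

        -- Example fold: counting stored elements; a value at the nested
        -- level counts double because it is a pair.
        size :: Nest a -> Int
        size = getConst . foldNest (Const 0) (\_ (Const n) -> Const (1 + 2 * n))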

    An Evaluation of Low-Cost Terrestrial Lidar Sensors for Assessing Hydrogeomorphic Change

    © 2024 The Author(s). Accurate topographic data acquired at appropriate spatio-temporal resolution is often the cornerstone of geomorphic research. Recent decades have seen advances in our ability to generate highly accurate topographic data, primarily through the application of remote sensing techniques. Structure from Motion-Multi View Stereo (SfM-MVS) and lidar have revolutionised the spatial resolution of surveys across large spatial extents. Technological developments have led to commercialisation of small form factor (SFF) 3D lidar sensors that are suited to deployment both on mobile platforms (e.g., uncrewed aerial systems) and in fixed semi-permanent installations. Whilst the former has been adopted, the potential for the latter to generate data suitable for geomorphic investigations has yet to be assessed. We address this gap here in the context of a 3-month deployment where channel change is assessed in an adjusting fluvial system. We find that SFF 3D lidar sensors generate change detection products comparable to those generated using a conventional lidar system. Areas of no geomorphic change are characterised as such (mean 3D change of 0.014 m, compared with 0.0014 m for the Riegl VZ-4000), with differences in median change in eroding sections of between 0.02 and 0.04 m. We illustrate that these data enable: (a) accurate characterisation of river channel adjustments through extraction of bank long-profiles; (b) the assessment of bank retreat patterns which help elucidate failure mechanics; and (c) the extraction of water surface elevations. The deployment of this technology will enable a better understanding of processes across a variety of geomorphic systems, as data can be captured in 4D with near real-time processing.

    Improved hepatic arterial fraction estimation using cardiac output correction of arterial input functions for liver DCE MRI

    Liver dynamic contrast enhanced (DCE) MRI pharmacokinetic modelling could be useful in the assessment of diffuse liver disease and focal liver lesions, but is compromised by errors in arterial input function (AIF) sampling. In this study, we apply cardiac output correction to arterial input functions (AIFs) for liver DCE MRI and investigate the effect on dual-input single-compartment hepatic perfusion parameter estimation and reproducibility. Thirteen healthy volunteers (28.7±1.94 years, seven males) underwent liver DCE MRI and cardiac output measurement using aortic root phase contrast MRI (PCMRI), with reproducibility (n=9) measured at seven days. Cardiac output AIF correction was undertaken by constraining the first-pass AIF enhancement curve using the indicator-dilution principle. Hepatic perfusion parameters with and without cardiac output AIF correction were compared and seven-day reproducibility assessed. Differences between cardiac output corrected and uncorrected liver DCE MRI portal venous (PV) perfusion (p=0.066), total liver blood flow (TLBF) (p=0.101), hepatic arterial (HA) fraction (p=0.895), mean transit time (MTT) (p=0.646), and distribution volume (DV) (p=0.890) were not significantly different. Seven-day corrected HA fraction reproducibility was improved (mean difference 0.3%, Bland-Altman 95% Limits-of-Agreement (BA95%LoA) ±27.9%, Coefficient of Variation (CoV) 61.4% vs 9.3%, ±35.5%, 81.7% respectively without correction). Seven-day uncorrected PV perfusion reproducibility was also better (mean difference 9.3 ml/min/100g, BA95%LoA ±506.1 ml/min/100g, CoV 64.1% vs 0.9 ml/min/100g, ±562.8 ml/min/100g, 65.1% respectively with correction), as was uncorrected TLBF (mean difference 43.8 ml/min/100g, BA95%LoA ±586.7 ml/min/100g, CoV 58.3% vs 13.3 ml/min/100g, ±661.5 ml/min/100g, 60.9% respectively with correction). Reproducibility of MTT was similar (mean difference 2.4s, BA95%LoA ±26.7s, CoV 60.8% uncorrected vs 3.7s, ±27.8s, 62.0% respectively with correction), as was DV (mean difference 14.1%, BA95%LoA ±48.2%, CoV 24.7% uncorrected vs 10.3%, ±46.0%, 23.9% respectively with correction). Cardiac output AIF correction does not significantly affect the estimation of hepatic perfusion parameters, but demonstrates improved normal-volunteer seven-day HA fraction reproducibility and deterioration in PV perfusion and TLBF reproducibility. Improved HA fraction reproducibility may be important, as arterialisation of liver perfusion is increased in chronic liver disease and within malignant liver lesions.
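    For reference, the indicator-dilution (Stewart-Hamilton) relation that underlies this kind of correction can be written as below. This is a standard textbook formulation with our own symbols, not a transcription of the study's exact implementation:

        \[
        \mathrm{CO} = \frac{D}{\int_{\mathrm{FP}} C_a(t)\,\mathrm{d}t}
        \qquad\Longrightarrow\qquad
        C_a^{\mathrm{corr}}(t) = \frac{D/\mathrm{CO}}{\int_{\mathrm{FP}} C_a^{\mathrm{meas}}(t)\,\mathrm{d}t}\,C_a^{\mathrm{meas}}(t),
        \]

    where D is the injected contrast dose, CO is the PCMRI-measured cardiac output, C_a is the arterial contrast concentration, and the integrals run over the first pass (FP): the measured AIF is rescaled so that its first-pass area is consistent with the independently measured cardiac output.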

    Data acquisition software for the CMS strip tracker

    The CMS silicon strip tracker, providing a sensitive area of approximately 200 m² and comprising 10 million readout channels, has recently been completed at the tracker integration facility at CERN. The strip tracker community is currently working to develop and integrate the online and offline software frameworks, known as XDAQ and CMSSW respectively, for the purposes of data acquisition and detector commissioning and monitoring. Recent developments have seen the integration of many new services and tools within the online data acquisition system, such as event building, online distributed analysis, an online monitoring framework, and data storage management. We review the various software components that comprise the strip tracker data acquisition system and the software architectures used for stand-alone and global data-taking modes. Our experiences in commissioning and operating one of the largest ever silicon micro-strip tracking systems are also reviewed.

    Education then and now: making the case for ecol-agogy

    The processes, settings and outcomes of human education have a distinctive impact on the human and non-human world: this paper sets out to discuss what may have motivated the initiation of human education, how it has been maintained, and why the outcome has a wide-ranging, and often negative, planetary impact. The analysis offers a multi-disciplinary account of education, from pre-history to the present, noting that humans, past and present, are born into an ‘open world’ that requires world building, or niche construction. As a result, cultural and genetic evolution are out of synchronisation, instigating an existential threat and the anxious experience of ‘adaptive-lag’, which motivates continued niche construction. Education is presented as a particular type of niche construction requiring teachers and the use of symbolic verbal language to help learners move from simplistic ‘split’ thinking to the more mature position where the needs of self and others can be met.

    Attention and automation: New perspectives on mental underload and performance

    There is considerable evidence in the ergonomics literature that automation can significantly reduce operator mental workload. Yet reducing mental workload is not necessarily a good thing, particularly in cases where the level is already manageable. This raises the issue of mental underload, which can be at least as detrimental to performance as overload. However, although it is widely recognized that mental underload is detrimental to performance, there have been very few attempts to explain why this may be the case. It is argued in this paper that, until the need for a human operator is completely eliminated, automation has psychological implications relevant to both theoretical and applied domains. The present paper reviews theories of attention, as well as the literature on mental workload and automation, to synthesize a new explanation for the effects of mental underload on performance. Malleable attentional resources theory proposes that attentional capacity shrinks to accommodate reductions in mental workload, and that this shrinkage is responsible for the underload effect. The theory is discussed with respect to the applied implications for ergonomics research.

    Evaluating research impact: The development of a research for impact tool

    © 2016 Tsey, Lawson, Kinchin, Bainbridge, McCalman, Watkin, Cadet-James and Rossetto. Introduction: This paper examines the process of developing a Research for Impact Tool in the contexts of general fiscal constraint, increased competition for funding, perennial concerns about the over-researching of Aboriginal and Torres Strait Islander issues without demonstrable benefits, and the conceptual and methodological difficulties of evaluating research impact. The aim is to highlight the challenges and opportunities involved in evaluating research impact, to serve as a resource for potential users of the Research for Impact Tool and others interested in assessing the impact of research. Materials and methods: A combination of literature reviews, workshops with researchers, and reflections by project team members and partners using participatory snowball techniques. Results: Assessing research impact is perceived to be difficult, akin to the so-called "wicked problem," but not impossible. A heuristic and collaborative approach to research that takes into account the expectations of research users, research participants, and the funders of research offers a pragmatic solution to evaluating research impact. The logic of the proposed Research for Impact Tool is based on the understanding that the value of research is to create evidence and/or products that support smarter decisions so as to improve the human condition. Research is, therefore, of limited value unless the evidence created is used to make smarter decisions for the betterment of society. A practical way of approaching research impact is, therefore, to start with the decisions confronting decision makers, whether they are government policymakers, industry, professional practitioners, or households, the extent to which the research supports them to make smarter policy and practice decisions, and the knock-on consequences of doing so. Embedded at each step in the impact planning and tracking process is the need for an appropriate mix of expertise, capacity enhancement, and collaborative participatory learning-by-doing approaches. Discussion: The tool was developed in the context of Aboriginal and Torres Strait Islander research, but the basic idea that the way to assess research impact is to start upfront with the information needs of decision makers is equally applicable to research in other settings, both applied (horizontal) and basic (vertical). The tool will be further tested and evaluated with researchers over the next 2 years (2016/17). The decision by the Australian Government to include 'industry engagement' and 'impact' as additions to the Excellence in Research for Australia (ERA) quality measures from 2018 makes the Research for Impact Tool a timely development. The wider challenge is to engage with major Australian research funding agencies to ensure consistent alignment and approaches across research users, communities, and funders in evaluating impact.

    Monitoring the CMS strip tracker readout system

    The CMS Silicon Strip Tracker at the LHC comprises a sensitive area of approximately 200 m² and 10 million readout channels. Its data acquisition system is based around a custom analogue front-end chip. Both the control and the readout of the front-end electronics are performed by off-detector VME boards in the counting room, which digitise the raw event data and perform zero-suppression and formatting. The data acquisition system uses the CMS online software framework to configure, control and monitor the hardware components and steer the data acquisition. The first data analysis is performed online within the official CMS reconstruction framework, which provides many services, such as distributed analysis, access to geometry and conditions data, and a Data Quality Monitoring tool based on the online physics reconstruction. The data acquisition monitoring of the Strip Tracker uses both the data acquisition and the reconstruction software frameworks in order to provide real-time feedback to shifters on the operational state of the detector, to archive data for later analysis, and possibly to trigger automatic recovery actions in case of errors. Here we review the proposed architecture of the monitoring system and describe its software components, which are already in place, the various monitoring streams available, and our experiences of operating and monitoring a large-scale system.
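    As a schematic illustration of the zero-suppression step performed by the off-detector boards, the sketch below keeps only channels whose pedestal-subtracted ADC value exceeds a threshold. This is purely illustrative and not CMS code; the flat channel numbering, per-channel pedestal list, and single fixed threshold are simplifying assumptions.

        -- Schematic zero-suppression: subtract a per-channel pedestal and
        -- keep only (channel, value) pairs above threshold.
        -- (Illustrative sketch only, not CMS code.)
        zeroSuppress :: Int -> [Int] -> [Int] -> [(Int, Int)]
        zeroSuppress threshold pedestals samples =
          [ (ch, v)
          | (ch, s, p) <- zip3 [0 ..] samples pedestals
          , let v = s - p
          , v > threshold
          ]

        -- e.g. zeroSuppress 5 [100, 100, 100] [103, 120, 100] == [(1, 20)]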