Comparative numerical analysis for cost and embodied carbon optimisation of steel building structures
The study investigates an area of sustainable structural design that is often overlooked in practical engineering applications. Specifically, this paper explores a novel method to optimise the cost and embodied carbon performance of steel building structures simultaneously. To achieve this, a parametric design model was developed to rapidly analyse code-compliant structural configurations based on project-specific constraints, with rigorous testing of multiple steel beams (UB sections), floor construction typologies (precast or composite) and column layouts that could not be performed manually by engineering practitioners. Detailed objective functions are embedded in the model to compute the cost and life-cycle carbon emissions of the different material types used in the structure. Results from a comparative numerical analysis of a real case study illustrate that the proposed optimisation approach can guide structural engineers towards areas of the solution space with realistic design configurations, enabling them to evaluate cost and carbon trade-offs effectively. This contribution implies that the optimisation model could reduce the time required for the design and analysis of multiple structural configurations, especially during the early stages of a project. Overall, the paper suggests that the deployment of automated design procedures can enhance both the quality and the efficiency of the optimisation analysis. The research described in this paper was financially supported by Innovate UK through the 'Innovative engineering approach for material, carbon and cost efficiency of steel buildings' project with reference number 10247
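The bi-objective evaluation described in this abstract can be illustrated with a minimal sketch. The unit rates, carbon factors, option names and quantities below are illustrative assumptions, not data from the paper; the Pareto-filtering step stands in for the paper's exploration of cost/carbon trade-offs:

```python
from dataclasses import dataclass

@dataclass
class Option:
    # A candidate structural configuration (hypothetical material quantities)
    name: str
    steel_tonnes: float
    concrete_m3: float

# Assumed unit rates: cost in GBP, embodied carbon in kgCO2e (illustrative only)
STEEL_COST, STEEL_CARBON = 2500.0, 1550.0   # per tonne of steel
CONC_COST, CONC_CARBON = 150.0, 250.0       # per m3 of concrete

def objectives(o: Option) -> tuple[float, float]:
    # Evaluate both objective functions for one configuration
    cost = o.steel_tonnes * STEEL_COST + o.concrete_m3 * CONC_COST
    carbon = o.steel_tonnes * STEEL_CARBON + o.concrete_m3 * CONC_CARBON
    return cost, carbon

def pareto_front(options):
    # Keep only configurations that no other option beats on both cost and carbon
    scored = [(o, objectives(o)) for o in options]
    return [o for o, (c, e) in scored
            if not any(c2 <= c and e2 <= e and (c2 < c or e2 < e)
                       for _, (c2, e2) in scored)]

candidates = [
    Option("steel-led", 100.0, 0.0),      # lower carbon, higher cost
    Option("concrete-led", 50.0, 600.0),  # lower cost, higher carbon
    Option("dominated", 110.0, 700.0),    # worse on both objectives
]
for o in pareto_front(candidates):
    print(o.name, objectives(o))
```

With these assumed rates, the steel-led and concrete-led options survive as a trade-off pair while the dominated option is filtered out, mirroring how such a model can steer engineers towards the useful region of the solution space.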
Using Regular Languages to Explore the Representational Capacity of Recurrent Neural Architectures
The presence of Long Distance Dependencies (LDDs) in sequential data poses significant challenges for computational models, and various recurrent neural architectures have been designed to mitigate this issue. To test these state-of-the-art architectures, there is a growing need for rich benchmarking datasets. However, one of the drawbacks of existing datasets is the lack of experimental control with regard to the presence and/or degree of LDDs, which limits the analysis of model performance in relation to the specific challenge posed by LDDs. One way to address this is to use synthetic data having the properties of subregular languages. The degree of LDDs within the generated data can be controlled through the k parameter, the length of the generated strings, and the choice of appropriate forbidden strings. In this paper, we explore the capacity of different RNN extensions to model LDDs by evaluating these models on a sequence of synthesised SPk datasets, where each subsequent dataset exhibits a greater degree of LDD. Even though SPk are simple languages, the presence of LDDs has a significant impact on the performance of recurrent neural architectures, making them prime candidates for benchmarking tasks. Comment: International Conference on Artificial Neural Networks (ICANN) 201
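A strictly piecewise (SPk) language is defined by a set of forbidden subsequences of length at most k: a string belongs to the language iff none of them occurs as a (possibly non-contiguous) subsequence. A minimal Python sketch of such a membership check follows; the alphabet and the forbidden set are illustrative assumptions, not the datasets used in the paper:

```python
def contains_subsequence(pattern, string):
    # True if pattern occurs in string as a (possibly non-contiguous) subsequence
    it = iter(string)
    return all(ch in it for ch in pattern)

def in_sp_language(string, forbidden):
    # A string belongs to an SPk language iff it contains none of the
    # forbidden subsequences (each of length <= k)
    return not any(contains_subsequence(f, string) for f in forbidden)

# Illustrative SP2 language over {a, b} forbidding the subsequence "ab":
# no 'a' may ever be followed, anywhere later, by a 'b'
print(in_sp_language("bbaa", ["ab"]))  # True
print(in_sp_language("aab", ["ab"]))   # False
```

Increasing k (longer forbidden subsequences) forces a model to track dependencies over longer spans, which is how the degree of LDD in the generated data is controlled.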
Improving the cost-effectiveness of visual devices for the control of Riverine tsetse flies, the major vectors of Human African Trypanosomiasis
Control of the Riverine (Palpalis) group of tsetse flies is normally achieved with stationary artificial devices such as traps or insecticide-treated targets. The efficiency of biconical traps (the standard control device), 1 × 1 m black targets and small 25 × 25 cm targets with flanking nets was compared using electrocuting sampling methods. The work was done on Glossina tachinoides and G. palpalis gambiensis (Burkina Faso), G. fuscipes quanzensis (Democratic Republic of Congo), G. f. martinii (Tanzania) and G. f. fuscipes (Kenya). The killing effectiveness (measured as the catch per m² of cloth) for small targets plus flanking nets is 5.5–15× greater than for 1 m² targets and 8.6–37.5× greater than for biconical traps. This has important implications for the costs of control of the Riverine group of tsetse vectors of sleeping sickness
Clinical and economic comparison of an individualised immunoglobulin protocol vs. standard dosing for chronic inflammatory demyelinating polyneuropathy
Background The clinical and economic implications of an individualised intravenous immunoglobulin (IVIg) protocol for chronic inflammatory demyelinating polyneuropathy (CIDP) are unknown. Comparison with standard dosing regimens has not been performed. Methods We retrospectively studied 47 IVIg-treated subjects with CIDP over 4 years under an individualised, outcome-measured, dose-modifying protocol. We evaluated responder and remission rates, clinical improvement levels and dose requirements. We compared clinical benefits and costs with those reported for standard dosing at 1 g/kg every 3 weeks. Results The IVIg-responder rate was 83% and the 4-year remission rate was 25.6%. The mean IVIg dose requirement was 22.06 g/week (SD: 15.29) in patients on ongoing therapy, with a wide dose range (5.83–80 g/week). Mean infusion frequency was every 4.34 weeks (SD: 1.70) and mean infusion duration was 2.79 days (SD: 1.15). In IVIg-responders, the mean Overall Neuropathy Limitation Scale improvement was 2.54 (SD: 1.89) and the mean MRC sum score improvement was 12.23 (SD: 7.17). Compared to the IVIg-treated arm of the ICE Study, the mean modified INCAT (Inflammatory Neuropathy Cause and Treatment) score improvement in our cohort was similar (p = 0.47) and the mean MRC sum score improvement was greater (p < 0.001). Mean drug costs were GBP 37,660/patient/year (€43,309) and mean infusion-related costs were GBP 17,115/patient/year (€19,682), totalling GBP 54,775/patient/year (€62,991). Compared to standard dosing using recorded weight, mean savings were GBP 13,506/patient/year (€15,532); using dosing weight, savings were GBP 6,506/patient/year (€7,482). Conclusion Our results indicate that an individualised IVIg treatment protocol is clinically non-inferior to, and 10–25% more cost-effective than, standard dosing regimens in CIDP
Bisphosphonate drugs have actions in the lung and inhibit the mevalonate pathway in alveolar macrophages.
Bisphosphonate drugs target the skeleton and are used globally for the treatment of common bone disorders. Nitrogen-containing bisphosphonates act by inhibiting the mevalonate pathway in bone-resorbing osteoclasts but, surprisingly, also appear to reduce the risk of death from pneumonia. We overturn the long-held belief that these drugs act only in the skeleton and show that a fluorescently labelled bisphosphonate is internalised by alveolar macrophages and large peritoneal macrophages in vivo. Furthermore, a single dose of a nitrogen-containing bisphosphonate (zoledronic acid) in mice was sufficient to inhibit the mevalonate pathway in tissue-resident macrophages, causing the build-up of a mevalonate metabolite and preventing protein prenylation. Importantly, one dose of bisphosphonate enhanced the immune response to bacterial endotoxin in the lung and increased the level of cytokines and chemokines in bronchoalveolar fluid. These studies suggest that bisphosphonates, as well as preventing bone loss, may boost immune responses to infection in the lung, and provide a mechanistic basis to fully examine the potential of bisphosphonates to help combat respiratory infections that cause pneumonia
From theory to 'measurement' in complex interventions: methodological lessons from the development of an e-health normalisation instrument
Background Although empirical and theoretical understanding of processes of implementation in health care is advancing, the translation of theory into structured measures that capture the complex interplay between interventions, individuals and context remains limited. This paper aimed to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring implementation processes relating to e-health interventions; and (2) identify key issues and methodological challenges for advancing work in this field.
Methods A 30-item instrument (Technology Adoption Readiness Scale (TARS)) for measuring normalisation processes in the context of e-health service interventions was developed on the basis of Normalization Process Theory (NPT). NPT focuses on how new practices become routinely embedded within social contexts. The instrument was pre-tested in two health care settings in which e-health (electronic facilitation of healthcare decision-making and practice) was used by health care professionals.
Results The developed instrument was pre-tested in two professional samples (N = 46; N = 231). Ratings of items representing normalisation 'processes' were significantly related to staff members' perceptions of whether or not e-health had become 'routine'. Key methodological challenges are discussed in relation to: translating multi-component theoretical constructs into simple questions; developing and choosing appropriate outcome measures; conducting multiple-stakeholder assessments; instrument and question framing; and more general issues for instrument development in practice contexts.
Conclusions To develop theory-derived measures of implementation process for progressing research in this field, four key recommendations are made, relating to (1) greater attention to underlying theoretical assumptions and the extent of translation work required; (2) the need for appropriate but flexible approaches to outcome measurement; (3) representation of the multiple perspectives and collaborative nature of the work; and (4) emphasis on generic measurement approaches that can be flexibly tailored to particular contexts of study
The Feasibility of Using Ultrasound and Video Laryngoscopy in a Mobile Telemedicine Consult