Loneliness, social relations and health and wellbeing in deprived communities
There is growing policy concern about the extent of loneliness in advanced societies, and its
prevalence among various social groups. This study looks at loneliness among people living in
deprived communities, where there may be additional barriers to social engagement including low
incomes, fear of crime, poor services and transient populations. The aim was to examine the
prevalence of loneliness, and also its associations with different types of social contacts and forms of
social support, and its links to self-reported health and wellbeing in the population group. The
method involved a cross-sectional survey of 4,302 adults across 15 communities, with the data
analysed using multinomial logistic regression controlling for sociodemographics, then for all other
predictors within each domain of interest. Frequent feelings of loneliness were more common
among those who: had contact with family monthly or less; had contact with neighbours weekly or
less; rarely talked to people in the neighbourhood; and who had no available sources of practical or
emotional support. Feelings of loneliness were most strongly associated with poor mental health,
but were also associated with long-term problems of stress, anxiety and depression, and with low
mental wellbeing, though to a lesser degree. The findings are consistent with a view that situational
loneliness may be the product of residential structures and resources in deprived areas. The findings
also show that neighbourly behaviours of different kinds are important for protecting against
loneliness in deprived communities. Familiarity within the neighbourhood, as active acquaintance
rather than merely recognition, is also important. The findings are indicative of several mechanisms
that may link loneliness to health and wellbeing in our study group: loneliness itself as a stressor;
lonely people not responding well to the many other stressors in deprived areas; and loneliness as
the product of weak social buffering to protect against stressors.
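The modelling step described above (multinomial logistic regression on a multi-level loneliness outcome, controlling for sociodemographics) can be sketched on synthetic data. Every variable name, coefficient and sample size below is illustrative and not taken from the survey.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the survey: a three-level loneliness outcome
# (0 = rarely, 1 = sometimes, 2 = frequently) and illustrative predictors.
rng = np.random.default_rng(0)
n = 500
age = rng.integers(18, 85, n)
low_family_contact = rng.integers(0, 2, n)        # contact monthly or less
talks_to_neighbours = rng.integers(0, 2, n)
X = np.column_stack([age, low_family_contact, talks_to_neighbours])

# Simulated association: less contact -> higher odds of loneliness
logit = 0.9 * low_family_contact - 0.7 * talks_to_neighbours - 0.5
p = 1 / (1 + np.exp(-logit))
y = rng.binomial(1, p) + rng.binomial(1, 0.3, n)  # values in {0, 1, 2}

# The lbfgs solver fits a multinomial model for a multi-class target;
# including age in X is the "controlling for sociodemographics" step.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.coef_.shape)  # one row of coefficients per outcome level
```

Fitting "then for all other predictors within each domain of interest" corresponds to refitting the same model with different column subsets of X.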
Failure analysis of parameter-induced simulation crashes in climate models
Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models can fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at particular combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
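The classification step can be illustrated with a minimal "committee of SVMs" on synthetic data. The 18-parameter layout mirrors the abstract, but the failure rule, sample sizes and every numeric choice below are invented for the sketch.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic ensemble: 18 scaled "parameters" per run; crashes depend on a few
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(1000, 18))
crash_score = 3 * X[:, 0] + 2 * X[:, 3] - 2.5 + 0.3 * rng.standard_normal(1000)
y = (crash_score > 0).astype(int)  # 1 = simulation failed

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Committee: SVMs trained on bootstrap resamples; average their probabilities
committee = []
for seed in range(5):
    idx = np.random.default_rng(seed).integers(0, len(X_tr), len(X_tr))
    committee.append(SVC(probability=True, random_state=seed).fit(X_tr[idx], y_tr[idx]))

p_fail = np.mean([m.predict_proba(X_val)[:, 1] for m in committee], axis=0)
print(f"validation AUC = {roc_auc_score(y_val, p_fail):.3f}")
```

Averaging the committee's probabilities, then scoring with ROC AUC on a held-out ensemble, mirrors the validation protocol the abstract describes.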
Petascale Simulation Initiative Tech Base: FY2007 Final Report
The Petascale Simulation Initiative began as an LDRD project in the middle of Fiscal Year 2004. The goal of the project was to develop techniques to allow large-scale scientific simulation applications to better exploit the massive parallelism that will come with computers running at petaflops per second. One of the major products of this work was the design and prototype implementation of a programming model and a runtime system that lets applications extend data-parallel applications to use task parallelism. By adopting task parallelism, applications can use processing resources more flexibly, exploit multiple forms of parallelism, and support more sophisticated multiscale and multiphysics models. Our programming model was originally called the Symponents Architecture but is now known as Cooperative Parallelism, and the runtime software that supports it is called Coop. (However, we sometimes refer to the programming model as Coop for brevity.) We have documented the programming model and runtime system in a submitted conference paper [1]. This report focuses on the specific accomplishments of the Cooperative Parallelism project (as we now call it) under Tech Base funding in FY2007. Development and implementation of the model under LDRD funding alone proceeded to the point of demonstrating a large-scale materials modeling application using Coop on more than 1300 processors by the end of FY2006. Beginning in FY2007, the project received funding from both LDRD and the Computation Directorate Tech Base program. Later in the year, after the three-year term of the LDRD funding ended, the ASC program supported the project with additional funds. The goal of the Tech Base effort was to bring Coop from a prototype to a production-ready system that a variety of LLNL users could work with. 
Specifically, the major tasks that we planned for the project were: (1) Port SARS [former name of the Coop runtime system] to another LLNL platform, probably Thunder or Peloton (depending on when Peloton becomes available); (2) Improve SARS's robustness and ease-of-use, and develop user documentation; and (3) Work with LLNL code teams to help them determine how Symponents could benefit their applications. The original funding request was 252,000. The remainder of this report describes our efforts and accomplishments for each of the goals listed above
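Coop's actual API is not reproduced here; as a generic illustration of the underlying idea, the sketch below treats independent subdomain computations as schedulable tasks rather than one rigid lockstep data-parallel sweep. A real runtime like Coop distributes processes across nodes; threads are used only to keep the sketch self-contained.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_subdomain(region_id: int) -> int:
    # Stand-in for a data-parallel kernel over one subdomain
    return sum(i * i for i in range(1000 + region_id))

def run_ensemble(n_regions: int) -> dict:
    # Task parallelism: each subdomain is an independent task that the
    # runtime can schedule flexibly across available processing resources.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(simulate_subdomain, range(n_regions)))
    return dict(enumerate(results))

print(len(run_ensemble(8)))
```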
Financial incentives for smoking cessation in pregnancy: Randomised controlled trial
Objective: To assess the efficacy of a financial incentive added to routine specialist pregnancy stop smoking services versus routine care to help pregnant smokers quit.
Design: Phase II therapeutic exploratory single centre, individually randomised controlled parallel group superiority trial.
Setting: One large health board area with a materially deprived, inner city population in the west of Scotland, United Kingdom.
Participants: 612 self reported pregnant smokers in NHS Greater Glasgow and Clyde who were English speaking, at least 16 years of age, less than 24 weeks pregnant, and had an exhaled carbon monoxide breath test result of 7 ppm or more. 306 women were randomised to incentives and 306 to control.
Interventions: The control group received routine care, which was the offer of a face to face appointment to discuss smoking and cessation and, for those who attended and set a quit date, the offer of free nicotine replacement therapy for 10 weeks provided by pharmacy services, and four, weekly support phone calls. The intervention group received routine care plus the offer of up to £400 of shopping vouchers: £50 for attending a face to face appointment and setting a quit date; then another £50 if at four weeks’ post-quit date exhaled carbon monoxide confirmed quitting; a further £100 was provided for continued validated abstinence of exhaled carbon monoxide after 12 weeks; a final £200 voucher was provided for validated abstinence of exhaled carbon monoxide at 34-38 weeks’ gestation.
Main outcome measure: The primary outcome was cotinine verified cessation at 34-38 weeks’ gestation through saliva (<14.2 ng/mL) or urine (<44.7 ng/mL). Secondary outcomes included birth weight, engagement, and self reported quit at four weeks.
Results: Recruitment was extended from 12 to 15 months to achieve the target sample size. Follow-up continued until September 2013. Of the 612 women randomised, three controls opted out soon after enrolment; these women did not want their data to be used, leaving 306 intervention and 303 control group participants in the intention to treat analysis. No harms of financial incentives were documented. Significantly more smokers in the incentives group than in the control group stopped smoking: 69 (22.5%) versus 26 (8.6%). The relative risk of not smoking at the end of pregnancy was 2.63 (95% confidence interval 1.73 to 4.01), P<0.001. The absolute risk difference was 14.0% (95% confidence interval 8.2% to 19.7%). The number needed to treat (the number of women to whom financial incentives must be offered to achieve one extra quitter in late pregnancy) was 7.2 (95% confidence interval 5.1 to 12.2). The mean birth weight was 3140 g (SD 600 g) in the incentives group and 3120 g (SD 590 g) in the control group (P=0.67).
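The headline point estimates follow directly from the reported quit counts; the confidence intervals require further modelling and are not reproduced here.

```python
# Quit counts reported in the trial
quit_incentives, n_incentives = 69, 306
quit_control, n_control = 26, 303

p_i = quit_incentives / n_incentives   # 22.5%
p_c = quit_control / n_control         # 8.6%

rr = p_i / p_c        # relative risk of quitting
ard = p_i - p_c       # absolute risk difference
nnt = 1 / ard         # number needed to treat

print(round(rr, 2), f"{ard:.1%}", round(nnt, 1))  # -> 2.63 14.0% 7.2
```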
Conclusion: This phase II randomised controlled trial provides substantial evidence for the efficacy of incentives for smoking cessation in pregnancy; as this was only a single centre trial, incentives should now be tested in different types of pregnancy cessation services and in different parts of the United Kingdom
Service Oriented Big Data Management for Transport
The increasing power of computer hardware and the sophistication of computer software have brought many new possibilities to the information world. On one side, the ability to analyse massive data sets has brought new insight, knowledge and information. On the other, it has enabled massively distributed computing and has opened the way to a new programming paradigm, Service Oriented Computing, which is particularly well adapted to cloud computing. Applying these new technologies to the transport industry can bring new understanding to town transport infrastructures. The objective of our work is to manage and aggregate cloud services for managing big data and to assist decision making for transport systems. This paper therefore presents our approach: a service oriented architecture for big data analytics for transport systems based on the cloud. Proposing big data management strategies for data produced by transport infrastructures, whilst maintaining cost effective systems deployed on the cloud, is a promising approach. We present our progress in developing the data acquisition service and the information extraction and cleaning service, as well as the analysis behind choosing a sharding strategy
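As a minimal illustration of the sharding question, the sketch below routes records to shards by hashing a key, one of the simplest strategies such an analysis would compare. The record layout, key names and shard count are hypothetical.

```python
import hashlib

def shard_for(key: str, n_shards: int) -> int:
    # Deterministic hash sharding: the same key always routes to the same shard
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_shards

# Hypothetical transport records keyed by sensor id
readings = [("sensor-17", 42.0), ("sensor-3", 17.5), ("sensor-17", 43.1)]
shards: dict[int, list] = {}
for sensor_id, value in readings:
    shards.setdefault(shard_for(sensor_id, 4), []).append((sensor_id, value))

# All readings from the same sensor land on one shard
print(sorted(len(v) for v in shards.values()))
```

Hash sharding balances load but scatters range queries across shards; a range-based strategy inverts that trade-off. Weighing such trade-offs against the access patterns of transport data is the kind of comparison a sharding analysis performs.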
Activation-induced cytidine deaminase localizes to G-quadruplex motifs at mutation hotspots in lymphoma.
Diffuse large B-cell lymphoma (DLBCL) is a molecularly heterogeneous group of malignancies with frequent genetic abnormalities. G-quadruplex (G4) DNA structures may facilitate this genomic instability through association with activation-induced cytidine deaminase (AID), an antibody diversification enzyme implicated in mutation of oncogenes in B-cell lymphomas. Chromatin immunoprecipitation sequencing analyses in this study revealed that AID hotspots in both activated B cells and lymphoma cells in vitro were highly enriched for G4 elements. A representative set of these targeted sequences was validated for characteristic, stable G4 structure formation including previously unknown G4s in lymphoma-associated genes, CBFA2T3, SPIB, BCL6, HLA-DRB5 and MEF2C, along with the established BCL2 and MYC structures. Frequent genome-wide G4 formation was also detected for the first time in DLBCL patient-derived tissues using BG4, a structure-specific G4 antibody. Tumors with greater staining were more likely to have concurrent BCL2 and MYC oncogene amplification and BCL2 mutations. Ninety-seven percent of the BCL2 mutations occurred within G4 sites that overlapped with AID binding. G4 localization at sites of mutation, and within aggressive DLBCL tumors harboring amplified BCL2 and MYC, supports a role for G4 structures in events that lead to a loss of genomic integrity, a critical step in B-cell lymphomagenesis
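The canonical G4 motif (four guanine tracts of three or more Gs separated by short loops) is commonly located with a simple regular-expression scan; the sketch below is a generic version of such a scan, not the pipeline used in this study.

```python
import re

# Canonical G-quadruplex motif: four runs of >=3 Gs with 1-7 nt loops
G4_PATTERN = re.compile(r"(?:G{3,}\w{1,7}){3}G{3,}")

def find_g4(seq: str):
    """Return (start, matched sequence) for each canonical G4 motif."""
    return [(m.start(), m.group()) for m in G4_PATTERN.finditer(seq.upper())]

# Toy sequence containing one canonical motif
print(find_g4("ATCGGGAGGGTTGGGCAGGGTA"))  # -> [(3, 'GGGAGGGTTGGGCAGGG')]
```

Scanning the reverse complement as well, and relaxing tract or loop lengths, catches the non-canonical structures that sequence scans alone miss, which is why the study pairs motif prediction with structure-specific validation.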
Androgen receptor phosphorylation at serine 515 by Cdk1 predicts biochemical relapse in prostate cancer patients
Background: Prostate cancer cell growth is dependent upon androgen receptor (AR) activation, which is regulated by specific kinases. The aim of the current study is to establish if AR phosphorylation by Cdk1 or ERK1/2 is of prognostic significance.
Methods: Scansite 2.0 was utilised to predict which AR sites are phosphorylated by Cdk1 and ERK1/2. Immunohistochemistry for these sites was then performed on 90 hormone-naive prostate cancer specimens. The interaction between Cdk1/ERK1/2 and AR phosphorylation was investigated in vitro using LNCaP cells.
Results: Phosphorylation of AR at serine 515 (pAR(S515)) and PSA at diagnosis were independently associated with decreased time to biochemical relapse. Cdk1 and pCdk1(161), but not ERK1/2, correlated with pAR(S515). High expression of pAR(S515) in patients with a PSA at diagnosis of ≤20 ng/ml was associated with shorter time to biochemical relapse (P=0.019). This translated into a reduction in disease-specific survival (10-year survival, 38.1% vs 100%, P<0.001). In vitro studies demonstrated that treatment with roscovitine (a Cdk inhibitor) caused a reduction in pCdk1(161) expression, pAR(S515) expression and cellular proliferation.
Conclusion: In prostate cancer patients with a PSA at diagnosis of ≤20 ng/ml, phosphorylation of AR at serine 515 by Cdk1 may be an independent prognostic marker.
TIGIT can inhibit T cell activation via ligation-induced nanoclusters, independent of CD226 co-stimulation
TIGIT is an inhibitory receptor expressed on lymphocytes and can inhibit T cells by preventing CD226 co-stimulation through interactions in cis or through competition of shared ligands. Whether TIGIT directly delivers cell-intrinsic inhibitory signals in T cells remains unclear. Here we show, by analysing lymphocytes from matched human tumour and peripheral blood samples, that TIGIT and CD226 co-expression is rare on tumour-infiltrating lymphocytes. Using super-resolution microscopy and other techniques, we demonstrate that ligation with CD155 causes TIGIT to reorganise into dense nanoclusters, which coalesce with T cell receptor (TCR)-rich clusters at immune synapses. Functionally, this reduces cytokine secretion in a manner dependent on TIGIT’s intracellular ITT-like signalling motif. Thus, we provide evidence that TIGIT directly inhibits lymphocyte activation, acting independently of CD226, requiring intracellular signalling that is proximal to the TCR. Within the subset of tumours where TIGIT-expressing cells do not commonly co-express CD226, this will likely be the dominant mechanism of action
A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model
Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide
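A simple version of the parameter-ranking step (screening which inputs matter for an output metric) can be sketched as below. The ensemble, the output rule and the two "influential" parameters are entirely synthetic; real UQ studies layer variance-based sensitivity analysis and Bayesian constraints on top of such a screen.

```python
import numpy as np

# Synthetic perturbed-parameter ensemble: 28 inputs, 3000 runs, and an
# output metric that really depends on only two of the inputs.
rng = np.random.default_rng(2)
n_runs, n_params = 3000, 28
X = rng.uniform(0, 1, size=(n_runs, n_params))
y = 4 * X[:, 2] + 2 * X[:, 10] ** 2 + 0.1 * rng.standard_normal(n_runs)

# Cheap sensitivity screen: rank parameters by |correlation| with the output
scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_params)])
ranking = np.argsort(scores)[::-1]
print(ranking[:2])  # the two influential inputs should rank first
```

Parameters near the bottom of such a ranking are candidates for exclusion from the search, which is how screening makes a 21+ dimensional parameter space tractable.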
Experiences Marketing: A Cultural Philosophy for Contemporary Hospitality Marketing Studies
This article explores the landscape of contemporary hospitality marketing. It is argued that the teaching and academic discussions that surround the subject area adopt a predominantly positivistic approach which, although important, does not adequately reflect the nature of the industry or the products offered. Such a metrics-oriented position, although significant in the formulation of marketing strategy, does not capture the complex experiential, nontangible nature of the hospitality product. This article presents a culturally located philosophy that reflects the multifaceted nature of the industry. The philosophy is underpinned by three precepts that draw from a multidisciplinary theoretical framework to create a more subject-specific approach to marketing, one that, when woven with traditional approaches, can produce a more effective and informed contemporary practice