
    The erosion of nongambling spheres by smartphone gambling: a qualitative study on workplace and domestic disordered gambling

    The potential dangers of internet-based gambling, as compared with more traditional land-based gambling, have been increasingly investigated over the past decade. The general consensus appears to be that although internet gambling might not be a more dangerous medium for gambling per se, the 24/7 availability it affords problem gamblers is. Because smartphones have become the most common way of gambling online, internet gambling should be further subcategorized according to the device by which it is accessed. This study examines the issue by exploring the views of smartphone gamblers undergoing treatment for gambling disorder in focus group settings (N=35). Using thematic analysis, the paper shows that smartphone gambling has colonized spaces previously regarded as nongambling spheres. The workplace, especially in male-dominated contexts, emerged as an accommodator and stimulator of gambling behavior, raising issues of productivity rather than criminality. Domestic gambling was mostly characterized by an invasion of bathroom and bedtime spheres of intimacy. The study discusses the implications for prevention and treatment, focusing on the minimization of exposure to gambling stimuli, the erosion of intimacy that recovering gamblers must endure, and the necessity of embracing a broader definition of gambling-related harm.

    Charting a Dynamic DNA Methylation Landscape of the Human Genome

    DNA methylation is a defining feature of mammalian cellular identity and essential for normal development(1,2). Most cell types, except germ cells and pre-implantation embryos(3–5), display relatively stable DNA methylation patterns, with 70–80% of all CpGs being methylated(6). Despite recent advances, our understanding of when, where and how many CpGs participate in genomic regulation remains limited. Here we report an in-depth analysis of 42 whole-genome bisulfite sequencing (WGBS) data sets across 30 diverse human cell and tissue types. We observe dynamic regulation for only 21.8% of autosomal CpGs within a normal developmental context, the majority of which are distal to transcription start sites. These dynamic CpGs co-localize with gene regulatory elements, particularly enhancers and transcription factor binding sites (TFBS), allowing identification of key lineage-specific regulators. In addition, differentially methylated regions (DMRs) often harbor SNPs associated with cell-type-related diseases as determined by GWAS. The results also highlight the general inefficiency of WGBS, as 70–80% of the sequencing reads across these data sets provided little or no relevant information regarding CpG methylation. To further demonstrate the utility of our DMR set, we use it to classify unknown samples and to identify representative signature regions that recapitulate major DNA methylation dynamics. In summary, although in theory every CpG can change its methylation state, our results suggest that only a fraction does so as part of coordinated regulatory programs. Therefore, our selected DMRs can serve as a starting point to guide novel, more effective reduced representation approaches that capture the most informative fraction of CpGs and further pinpoint putative regulatory elements.
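
    As a hedged illustration of how a DMR set of this kind could be used to classify unknown samples (this is not the authors' code; the cell-type labels, signature matrix and sample vector below are made-up placeholders), the sketch scores a sample's methylation levels over a shared set of DMRs against per-cell-type reference signatures by Pearson correlation and reports the best match.

```python
# Minimal sketch: nearest-signature classification of a sample from its
# methylation levels over a shared DMR set. Illustrative only; signatures
# and the "unknown" sample are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_dmrs = 500

# Hypothetical reference signatures: mean methylation level (0..1) per DMR
# for a few cell types, e.g. derived from WGBS reference data sets.
signatures = {
    "liver":  rng.uniform(0, 1, n_dmrs),
    "brain":  rng.uniform(0, 1, n_dmrs),
    "t_cell": rng.uniform(0, 1, n_dmrs),
}

# Hypothetical unknown sample: here simulated as a noisy copy of "brain".
sample = np.clip(signatures["brain"] + rng.normal(0, 0.1, n_dmrs), 0, 1)

def classify(sample, signatures):
    """Return (best_label, score) using Pearson correlation over DMRs."""
    scores = {
        label: np.corrcoef(sample, sig)[0, 1]
        for label, sig in signatures.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

label, score = classify(sample, signatures)
print(f"predicted cell type: {label} (r = {score:.2f})")
```

    In practice one would use measured methylation values averaged per DMR and handle DMRs with insufficient coverage; the correlation-based matching shown here is only one simple choice of classifier.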

    The family business, adversity and change: A dynamic capabilities and knowledge-based approach

    While the growth of family business research is undisputable, knowledge gaps have been recognised, notably the lack of a strategic management theory and a predominance of quantitative over qualitative methods when researchers examine family businesses. This study seeks to address these research gaps. First, it proposes a framework based on the knowledge-based view and the dynamic capabilities approach to examine adaptation to adversity and to a changing business environment through the case of Hawkshead Relish Company, a family firm operating in the United Kingdom. Second, it employs a qualitative approach. Face-to-face interviews, on-site observations, and archival information of the firm helped reveal the association between dynamic capabilities, knowledge acquisition, networking, and innovation. Sensing, seizing, and transforming were manifested within and through the organisation’s strategy and practice. Overall, the framework emphasises how these associations apply to family firms when adapting to adversity and change.

    11th German Conference on Chemoinformatics (GCC 2015): Fulda, Germany, 8–10 November 2015.


    Managing the Impact of Differences in National Culture on Social Capital in Multinational IT Project Teams – A German Perspective

    How can management handle relationship problems arising from cultural differences in multinational IT project teams? This paper uses a social capital lens to better understand the negative impact of cultural differences in IT project teams. In contrast to many previous works, we do not treat cultural differences as a whole but explore the role of individual national culture dimensions. This allows a more detailed view of cultural differences in a team context and thus contributes to a better understanding of which dimensions of national culture drive relationship problems and which management measures can help dampen the negative effects. Based on several exploratory cases (six multinational IT projects in four companies headquartered in Germany), we identify three patterns showing typical problems in team social relationships that arise from differences in particular dimensions of national culture. We also identify pattern-specific as well as general management measures employed to address these culture-driven negative effects.

    LC-MSsim – a simulation software for liquid chromatography mass spectrometry data

    Background: Mass spectrometry coupled to liquid chromatography (LC-MS) is commonly used to analyze the protein content of biological samples in large-scale studies. The data resulting from an LC-MS experiment are large, highly complex and noisy. Accordingly, they have sparked new developments in bioinformatics, especially in the fields of algorithm development, statistics and software engineering. In a quantitative label-free mass spectrometry experiment, crucial steps are the detection of peptide features in the mass spectra and the alignment of samples by correcting for shifts in retention time. At the moment, it is difficult to compare the plethora of algorithms for these tasks. So far, curated benchmark data exist only for peptide identification algorithms, but no data set represents a ground truth for the evaluation of feature detection, alignment and filtering algorithms.
    Results: We present LC-MSsim, a simulation software for LC-ESI-MS experiments. It simulates ESI spectra at the MS level. It reads a list of proteins from a FASTA file and digests the protein mixture using a user-defined enzyme. The software creates an LC-MS data set using a predictor for the retention time of the peptides and a model for peak shapes and elution profiles of the mass spectral peaks. It also offers the possibility to add contaminants and to change the background noise level, and includes a model for the detectability of peptides in mass spectra. After the simulation, LC-MSsim writes the simulated data to mzData, a public XML format. The software also stores the positions (monoisotopic m/z and retention time) and ion counts of the simulated ions in separate files.
    Conclusion: LC-MSsim generates simulated LC-MS data sets and incorporates models for peak shapes and contaminations. Algorithm developers can match the results of feature detection and alignment algorithms against the simulated ion lists, and meaningful error rates can be computed. We anticipate that LC-MSsim will be useful to the wider community for performing benchmark studies and comparisons between computational tools.
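
    To make the simulation pipeline described above more concrete, the following sketch mimics its main stages in a simplified form: in silico tryptic digestion, a crude retention-time predictor, and Gaussian elution profiles. It is not LC-MSsim code; the cleavage rule, hydrophobicity scale, constants and example sequence are illustrative assumptions.

```python
# Minimal sketch of the kind of pipeline a simulator like LC-MSsim builds:
# digest proteins, predict retention times, and model elution profiles.
# All constants and the example sequence are placeholders.
import re
import numpy as np

def tryptic_digest(protein):
    """Cleave after K or R, except before P (simplified trypsin rule)."""
    peptides = re.split(r"(?<=[KR])(?!P)", protein)
    return [p for p in peptides if p]

# Very rough hydrophobicity values (Kyte-Doolittle-like, arbitrary subset).
HYDROPHOBICITY = {"A": 1.8, "L": 3.8, "K": -3.9, "R": -4.5, "G": -0.4,
                  "S": -0.8, "E": -3.5, "V": 4.2, "P": -1.6, "T": -0.7}

def predict_rt(peptide, gradient_minutes=60.0):
    """Map a crude hydrophobicity score into the gradient window."""
    score = sum(HYDROPHOBICITY.get(aa, 0.0) for aa in peptide)
    return gradient_minutes / (1.0 + np.exp(-score / 10.0))

def elution_profile(rt, width=0.2, points=50):
    """Gaussian elution profile centred on the predicted retention time."""
    t = np.linspace(rt - 3 * width, rt + 3 * width, points)
    intensity = np.exp(-0.5 * ((t - rt) / width) ** 2)
    return t, intensity

protein = "MKWVTFISLLLLFSSAYSRGVFRRDTHKSEIAHRFKDLGE"  # placeholder sequence
for pep in tryptic_digest(protein):
    rt = predict_rt(pep)
    t, intensity = elution_profile(rt)
    print(f"{pep:<20s} RT ~ {rt:5.1f} min, apex intensity {intensity.max():.2f}")
```

    A full simulator would additionally compute isotope patterns and m/z values, add contaminants and background noise, and serialize the result to mzData, as the abstract describes.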

    Statistical quality assessment and outlier detection for liquid chromatography-mass spectrometry experiments

    Background: Quality assessment methods that are commonplace in engineering and industrial production are not widespread in large-scale proteomics experiments. Yet modern technologies such as multi-dimensional liquid chromatography coupled to mass spectrometry (LC-MS) produce large quantities of proteomic data that are prone to measurement errors and reproducibility problems, so automated quality assessment and control become increasingly important.
    Results: We propose a methodology to assess the quality and reproducibility of data generated in quantitative LC-MS experiments. We introduce quality descriptors that capture different aspects of the quality and reproducibility of LC-MS data sets. Our method is based on the Mahalanobis distance and a robust principal component analysis.
    Conclusion: We evaluate our approach on several data sets of different complexities and show that we are able to precisely detect LC-MS runs of poor signal quality in large-scale studies.
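
    As a rough stand-in for the outlier-detection idea described above, the sketch below computes robust Mahalanobis distances over per-run quality descriptors and flags runs exceeding a chi-squared cutoff. It substitutes scikit-learn's MinCovDet robust covariance estimate for the paper's exact robust-PCA formulation, and the descriptor matrix is synthetic.

```python
# Simplified stand-in: flag LC-MS runs whose robust Mahalanobis distance
# over quality descriptors exceeds a chi-squared cutoff. The descriptors
# and their values are synthetic placeholders.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)

# Hypothetical quality descriptors per run (columns), e.g. number of
# detected features, median peak width, total ion current, RT drift.
good_runs = rng.normal(loc=[5000, 0.30, 1e9, 0.5],
                       scale=[300, 0.02, 5e7, 0.1], size=(40, 4))
bad_runs = rng.normal(loc=[2500, 0.60, 4e8, 2.0],
                      scale=[300, 0.05, 5e7, 0.3], size=(3, 4))
X = np.vstack([good_runs, bad_runs])

# Standardize so descriptors on different scales contribute comparably.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

mcd = MinCovDet(random_state=0).fit(Xz)
d2 = mcd.mahalanobis(Xz)                   # squared robust distances
cutoff = chi2.ppf(0.975, df=Xz.shape[1])   # 97.5% chi-squared quantile

outliers = np.where(d2 > cutoff)[0]
print("flagged runs:", outliers)
```

    The chi-squared cutoff is a common heuristic for squared Mahalanobis distances of approximately Gaussian data; the paper's own descriptors and robust projection would replace the synthetic inputs used here.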