2,073 research outputs found

    Missing in Action? Electronic Gaming Machines in Gambling Studies Research

    In the past thirty years, casinos across the world have become dominated by the rise of “electronic gaming machines” (EGMs). Expanding with tremendous speed, this technology has arguably become the dominant form of non-online gambling around the world at the time of writing (DeMichele, 2017; Schwartz, 2018). EGMs are also noted as being one of the most harmful forms of gambling, with significant numbers of players betting beyond their financial limits (MacLaren et al., 2012; Stewart & Wohl, 2013), spending a disproportionate amount of time playing (Cummings, 1999; Ballon, 2005; Schüll, 2012; cf. Dickerson, 1996), becoming disconnected from the world outside of the “zone” (Schüll, 2012) of gambling play, and even becoming bankrupt or otherwise financially crippled as a result of their use (Petry, 2003; Scarf et al., 2011). Using metadata from the Web of Science and Scopus databases, we analysed peer-reviewed gambling research produced in Australia, New Zealand, North America and the UK and published between 1996 and 2016. Surprisingly, we found that the overwhelming majority of articles do not specifically address EGMs as the most popular and pervasive gambling technology available. Our paper teases out some concerning implications of this finding for the interdisciplinary field of gambling studies.
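
    A minimal sketch of the kind of keyword filter such a metadata analysis might start from, assuming a CSV export of Web of Science/Scopus records. The file name, the "Title"/"Abstract"/"Year" column names, and the term list are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch (not the authors' pipeline): flag records in a Web of Science /
# Scopus metadata export whose title or abstract mentions EGM-related terms.
# The file name and the "Title", "Abstract", "Year" column names are assumptions.
import re

import pandas as pd

EGM_TERMS = re.compile(
    r"\b(electronic gaming machine|EGM|slot machine|poker machine|pokies?|"
    r"video lottery terminal|VLT|fruit machine)\b",
    flags=re.IGNORECASE,
)

def flag_egm(records: pd.DataFrame) -> pd.DataFrame:
    """Add a boolean column marking records whose title or abstract mentions EGMs."""
    text = records["Title"].fillna("") + " " + records["Abstract"].fillna("")
    out = records.copy()
    out["mentions_egm"] = text.str.contains(EGM_TERMS)
    return out

if __name__ == "__main__":
    records = pd.read_csv("gambling_studies_1996_2016.csv")  # hypothetical export
    flagged = flag_egm(records)
    print(flagged.groupby("Year")["mentions_egm"].mean())    # yearly EGM share
```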

    The relation between the diagonal entries and the eigenvalues of a symmetric matrix, based upon the sign pattern of its off-diagonal entries

    It is known that majorization is a complete description of the relationships between the eigenvalues and diagonal entries of real symmetric matrices. However, for large subclasses of such matrices, the diagonal entries impose much greater restrictions on the eigenvalues. Motivated by previous results about Laplacian eigenvalues, we study here the additional restrictions that come from the off-diagonal sign-pattern classes of real symmetric matrices. Each class imposes additional restrictions. Several results are given for the all-nonpositive and all-nonnegative classes and for the third class that appears when n = 4. Complete descriptions of the possible relationships are given in low dimensions.
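
    As a concrete illustration of the majorization baseline this abstract starts from, the following sketch (not from the paper) numerically checks that the eigenvalues of a real symmetric matrix majorize its diagonal entries, both for a generic symmetric matrix and for one forced into the all-nonpositive off-diagonal sign class.

```python
# Numerical illustration (not from the paper) of the Schur-Horn baseline: the
# eigenvalues of a real symmetric matrix majorize its diagonal entries, checked
# here for a generic symmetric matrix and for one with all off-diagonal entries
# forced to be nonpositive.
import numpy as np

def majorizes(x, y, tol=1e-9):
    """True if vector x majorizes y: equal sums and dominating partial sums."""
    x, y = np.sort(x)[::-1], np.sort(y)[::-1]
    return bool(np.all(np.cumsum(x) >= np.cumsum(y) - tol)
                and abs(x.sum() - y.sum()) < tol)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
S = (A + A.T) / 2                                   # generic real symmetric matrix
off = S - np.diag(np.diag(S))
S_nonpos = -np.abs(off) + np.diag(np.diag(S))       # all off-diagonal entries <= 0

for M in (S, S_nonpos):
    print(majorizes(np.linalg.eigvalsh(M), np.diag(M)))   # True in both cases
```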

    Nonpositive Eigenvalues of the Adjacency Matrix and Lower Bounds for Laplacian Eigenvalues

    Let NPO(k) be the smallest number n such that the adjacency matrix of any undirected graph with n vertices or more has at least k nonpositive eigenvalues. We show that NPO(k) is well-defined and prove that the values of NPO(k) for k = 1, 2, 3, 4, 5 are 1, 3, 6, 10, 16 respectively. In addition, we prove that for all k ≥ 5, R(k, k+1) ≥ NPO(k) > T_k, in which R(k, k+1) is the Ramsey number for k and k+1, and T_k is the k-th triangular number. This implies new lower bounds for eigenvalues of Laplacian matrices: the k-th largest eigenvalue is bounded from below by the NPO(k)-th largest degree, which generalizes some prior results. Comment: 23 pages, 12 figures.
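
    A one-example numerical check of the stated Laplacian bound, assuming networkx and numpy; it only exercises the inequality on a single random graph with at least NPO(5) = 16 vertices and is not part of the proof.

```python
# One-example numerical check of the bound stated above: for a graph with at
# least NPO(k) vertices, the k-th largest Laplacian eigenvalue is at least the
# NPO(k)-th largest vertex degree, using the proved values NPO(1..5) = 1,3,6,10,16.
import networkx as nx
import numpy as np

NPO = {1: 1, 2: 3, 3: 6, 4: 10, 5: 16}

G = nx.gnp_random_graph(20, 0.3, seed=1)            # 20 >= NPO(5) = 16 vertices
L = nx.laplacian_matrix(G).toarray().astype(float)
lap_eigs = np.sort(np.linalg.eigvalsh(L))[::-1]     # descending Laplacian spectrum
degrees = np.sort([d for _, d in G.degree()])[::-1] # descending degree sequence

for k, npo_k in NPO.items():
    holds = lap_eigs[k - 1] >= degrees[npo_k - 1] - 1e-9
    print(f"k={k}: lambda_{k} = {lap_eigs[k - 1]:.3f} "
          f">= d_({npo_k}) = {degrees[npo_k - 1]} -> {holds}")
```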

    Variability in the analysis of a single neuroimaging dataset by many teams

    Data analysis workflows in many scientific domains have become increasingly complex and flexible. To assess the impact of this flexibility on functional magnetic resonance imaging (fMRI) results, the same dataset was independently analyzed by 70 teams, testing nine ex ante hypotheses. The flexibility of analytic approaches is exemplified by the fact that no two teams chose identical workflows to analyze the data. This flexibility resulted in sizeable variation in hypothesis test results, even for teams whose statistical maps were highly correlated at intermediate stages of their analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Importantly, a meta-analytic approach that aggregated information across teams yielded significant consensus in activated regions across teams. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. These findings show that analytic flexibility can have substantial effects on scientific conclusions, and demonstrate factors possibly related to variability in fMRI results. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for multiple analyses of the same data. Potential approaches to mitigate issues related to analytical variability are discussed.
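
    To make the aggregation idea concrete, here is a hedged sketch of a simple Stouffer combination of per-team z-values at a single region; the study's actual meta-analysis is image-based and accounts for between-team correlation, and the z-values below are made up for illustration.

```python
# Hedged sketch of the aggregation idea only: a Stouffer combination of per-team
# z-values at one region. The study's actual meta-analysis is image-based and
# more involved; the team z-values here are invented.
import numpy as np
from scipy import stats

team_z = np.array([1.1, 2.3, 0.4, 1.8, 2.9, 1.5, 0.9, 2.1])   # hypothetical per-team z
z_consensus = team_z.sum() / np.sqrt(team_z.size)              # Stouffer's method
p_consensus = stats.norm.sf(z_consensus)                       # one-sided p-value
print(f"consensus z = {z_consensus:.2f}, one-sided p = {p_consensus:.4f}")
```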

    COX-2 expression mediated by calcium-TonEBP signaling axis under hyperosmotic conditions serves osmoprotective function in nucleus pulposus cells.

    The nucleus pulposus (NP) of intervertebral discs experiences dynamic changes in tissue osmolarity because of diurnal loading of the spine. TonEBP/NFAT5 is a transcription factor that is critical in osmoregulation as well as survival of NP cells in the hyperosmotic milieu. The goal of this study was to investigate whether cyclooxygenase-2 (COX-2) expression is osmoresponsive and dependent on TonEBP, and whether it serves an osmoprotective role. NP cells up-regulated COX-2 expression in hyperosmotic media. The induction of COX-2 depended on elevation of intracellular calcium levels and the p38 MAPK pathway, but was independent of calcineurin signaling as well as the MEK/ERK and JNK pathways. Under hyperosmotic conditions, both COX-2 mRNA stability and its proximal promoter activity were increased. The proximal COX-2 promoter (-1840/+123 bp) contained predicted binding sites for TonEBP, AP-1, NF-κB, and C/EBP-β. While COX-2 promoter activity was positively regulated by both AP-1 and NF-κB, AP-1 had no effect and NF-κB negatively regulated COX-2 protein levels under hyperosmotic conditions. On the other hand, TonEBP was necessary for both COX-2 promoter activity and protein up-regulation in response to hyperosmotic stimuli.

    A Context-based Approach to Robot-human Interaction

    CARIL (Context-Augmented Robotic Interaction Layer) is a human-robot interaction system that leverages cognitive representations of shared context as a basis for a fundamentally new approach to human-robot interaction. CARIL gives a robot a human-like representation of context and an ability to reason about context in order to adapt its behavior to that of the humans around it. This capability is “action compliance.” A prototype CARIL implementation focuses on a fundamental form of action compliance called non-interference: “not being underfoot or in a human's way”. Non-interference is key for the safety of human co-workers, and is also foundational to more complex interactive and teamwork skills. CARIL is tested via simulation in a space-exploration use case. The live CARIL prototype directs a single simulated robot in a simulated space station where four simulated astronauts are engaging in a variety of tightly scheduled work activities. The robot is scheduled to perform background tasks away from the astronauts, but must quickly adapt and not be underfoot as astronaut activities diverge from plan and encroach on the robot's space. The robot, driven by CARIL, demonstrates non-interference action compliance in three benchmark situations, demonstrating the viability of the CARIL technology and concept.
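
    A hypothetical sketch (not the CARIL implementation) of the non-interference idea: the robot yields whenever a human activity zone overlaps the area it plans to occupy next. The zone geometry and action names are invented for illustration.

```python
# Hypothetical non-interference check (not the CARIL implementation): hold and
# replan if the robot's planned work area overlaps any human activity zone.
from dataclasses import dataclass

@dataclass(frozen=True)
class Zone:
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def overlaps(self, other: "Zone") -> bool:
        return not (self.x_max < other.x_min or other.x_max < self.x_min or
                    self.y_max < other.y_min or other.y_max < self.y_min)

def next_action(robot_target: Zone, human_zones: list[Zone]) -> str:
    """Yield to humans: hold and replan if the planned zone overlaps any human zone."""
    if any(robot_target.overlaps(z) for z in human_zones):
        return "hold_and_replan"
    return "proceed"

# Example: one astronaut's activity drifts into the robot's planned work area.
robot_target = Zone(0.0, 1.0, 0.0, 1.0)
astronaut_zones = [Zone(0.8, 2.0, 0.5, 1.5), Zone(4.0, 5.0, 4.0, 5.0)]
print(next_action(robot_target, astronaut_zones))   # -> hold_and_replan
```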

    Convolutional neural networks can decode eye movement data: A black box approach to predicting task from eye movements

    Previous attempts to classify task from eye movement data have relied on model architectures designed to emulate theoretically defined cognitive processes and/or data that have been processed into aggregate (e.g., fixations, saccades) or statistical (e.g., fixation density) features. Black box convolutional neural networks (CNNs) are capable of identifying relevant features in raw and minimally processed data and images, but difficulty interpreting these model architectures has contributed to challenges in generalizing lab-trained CNNs to applied contexts. In the current study, a CNN classifier was used to classify task from two eye movement datasets (Exploratory and Confirmatory) in which participants searched, memorized, or rated indoor and outdoor scene images. The Exploratory dataset was used to tune the hyperparameters of the model, and the resulting model architecture was retrained, validated, and tested on the Confirmatory dataset. The data were formatted into timelines (i.e., x-coordinate, y-coordinate, pupil size) and minimally processed images. To further understand the informational value of each component of the eye movement data, the timeline and image datasets were broken down into subsets with one or more components systematically removed. Classification of the timeline data consistently outperformed the image data. The Memorize condition was most often confused with Search and Rate. Pupil size was the least uniquely informative component when compared with the x- and y-coordinates. The general pattern of results for the Exploratory dataset was replicated in the Confirmatory dataset. Overall, the present study provides a practical and reliable black box solution to classifying task from eye movement data.
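
    A hedged sketch of the kind of 1-D convolutional classifier such timeline data could feed, written in PyTorch; the layer sizes, sequence length, and channel ordering are illustrative assumptions rather than the architecture tuned in the study.

```python
# Illustrative 1-D CNN over raw gaze timelines (x, y, pupil size) for 3-way task
# classification (search / memorize / rate). Sizes are assumptions, not the
# study's tuned hyperparameters.
import torch
import torch.nn as nn

class GazeCNN(nn.Module):
    def __init__(self, n_channels: int = 3, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # collapse the time dimension
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels=3, time)
        return self.classifier(self.features(x).squeeze(-1))

model = GazeCNN()
dummy = torch.randn(8, 3, 1000)                 # 8 trials, 3 channels, 1000 samples
print(model(dummy).shape)                        # torch.Size([8, 3]) -> class logits
```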

    A protocol to evaluate RNA sequencing normalization methods

    Background: RNA sequencing technologies have allowed researchers to gain a better understanding of how the transcriptome affects disease. However, sequencing technologies often unintentionally introduce experimental error into RNA sequencing data. To counteract this, normalization methods are standardly applied with the intent of reducing the non-biologically derived variability inherent in transcriptomic measurements. However, the comparative efficacy of the various normalization techniques has not been tested in a standardized manner. Here we propose tests that evaluate numerous normalization techniques and apply them to a large-scale standard data set. These tests comprise a protocol that allows researchers to measure the amount of non-biological variability which is present in any data set after normalization has been performed, a crucial step in assessing the biological validity of data following normalization. Results: In this study we present two tests to assess the validity of normalization methods applied to a large-scale data set collected for systematic evaluation purposes. We tested various RNASeq normalization procedures and concluded that transcripts per million (TPM) was the best-performing normalization method based on its preservation of biological signal as compared to the other methods tested. Conclusion: Normalization is of vital importance to accurately interpret the results of genomic and transcriptomic experiments. More work, however, needs to be performed to optimize normalization methods for RNASeq data. The present effort helps pave the way for more systematic evaluations of normalization methods across different platforms. With our proposed schema, researchers can evaluate their own or future normalization methods to further improve the field of RNASeq normalization.
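
    For readers unfamiliar with the best-performing method, a minimal sketch of the TPM computation: counts are divided by transcript length in kilobases, then each sample is rescaled so the length-normalized values sum to one million. The toy counts and lengths are made up.

```python
# Minimal TPM sketch: reads per kilobase, then per-sample scaling to one million.
# Toy counts and transcript lengths are invented for illustration.
import numpy as np

def tpm(counts: np.ndarray, lengths_bp: np.ndarray) -> np.ndarray:
    """counts: genes x samples raw read counts; lengths_bp: per-gene length in bp."""
    rpk = counts / (lengths_bp[:, None] / 1_000)          # reads per kilobase
    return rpk / rpk.sum(axis=0, keepdims=True) * 1e6     # scale each sample to 1e6

counts = np.array([[100, 200], [300, 50], [600, 750]], dtype=float)
lengths = np.array([1_000, 2_000, 3_000], dtype=float)
print(tpm(counts, lengths))   # each column (sample) sums to 1,000,000
```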

    Graph isomorphism for (H1,H2)-free graphs: an almost complete dichotomy.

    We almost completely resolve the computational complexity of Graph Isomorphism for classes of graphs characterized by two forbidden induced subgraphs H1 and H2. Schweitzer settled the complexity of this problem restricted to (H1,H2)-free graphs for all but a finite number of pairs (H1,H2), but without explicitly giving the number of open cases. Grohe and Schweitzer proved that Graph Isomorphism is polynomial-time solvable on graph classes of bounded clique-width. By combining known results with a number of new results, we reduce the number of open cases to seven. By exploiting the strong relationship between Graph Isomorphism and clique-width, we simultaneously reduce the number of open cases for boundedness of clique-width for (H1,H2)-free graphs to five.
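
    Operationally, "(H1,H2)-free" means the graph contains no node-induced copy of H1 or H2. The small sketch below illustrates that membership test using networkx's induced-subgraph matcher; the example pair (P4, C4) is arbitrary.

```python
# Class-membership test behind "(H1,H2)-free": reject any graph containing H1 or
# H2 as a node-induced subgraph. networkx's GraphMatcher uses induced semantics.
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def is_h_free(G: nx.Graph, forbidden) -> bool:
    """True if G contains none of the forbidden graphs as an induced subgraph."""
    return not any(GraphMatcher(G, H).subgraph_is_isomorphic() for H in forbidden)

H1, H2 = nx.path_graph(4), nx.cycle_graph(4)      # example pair: P4 and C4
print(is_h_free(nx.complete_graph(5), [H1, H2]))  # True: K5 has no induced P4 or C4
print(is_h_free(nx.path_graph(5), [H1, H2]))      # False: P5 contains an induced P4
```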

    Characterizing the Cool KOIs II. The M Dwarf KOI-254 and its Hot Jupiter

    We report the confirmation and characterization of a transiting gas giant planet orbiting the M dwarf KOI-254 every 2.455239 days, which was originally discovered by the Kepler mission. We use radial velocity measurements, adaptive optics imaging and near-infrared spectroscopy to confirm the planetary nature of the transit events. KOI-254b is the first hot Jupiter discovered around an M-type dwarf star. We also present a new model-independent method of using broadband photometry to estimate the mass and metallicity of an M dwarf without relying on a direct distance measurement. Included in this methodology is a new photometric metallicity calibration based on J-K colors. We use this technique to measure the physical properties of KOI-254 and its planet. We measure a planet mass of Mp = 0.505 Mjup, radius Rp = 0.96 Rjup and semimajor axis a = 0.03 AU, based on our measured stellar mass Mstar = 0.59 Msun and radius Rstar = 0.55 Rsun. We also find that the host star is metal-rich, which is consistent with the sample of M-type stars known to harbor giant planets. Comment: AJ accepted (in press).
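
    As a back-of-envelope consistency check (not part of the paper's modeling), Kepler's third law with the quoted period and stellar mass reproduces the reported semimajor axis.

```python
# Consistency check only: Kepler's third law, a = (G*M_star*P^2 / (4*pi^2))^(1/3),
# with the quoted period and stellar mass gives the reported a of about 0.03 AU.
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
AU = 1.496e11            # astronomical unit, m

P = 2.455239 * 86_400    # orbital period, s
M_star = 0.59 * M_SUN    # reported stellar mass, kg

a = (G * M_star * P**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"a = {a / AU:.3f} AU")   # approximately 0.030 AU
```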