
    Internet source evaluation: The role of implicit associations and psychophysiological self-regulation

    This study focused on middle school students' source evaluation skills as a key component of digital literacy. Specifically, it examined the role of two unexplored individual factors that may affect the evaluation of sources providing information about the controversial topic of the health risks associated with mobile phone use. The factors were the implicit association of the mobile phone with health or no health, and psychophysiological self-regulation as reflected in basal heart rate variability (HRV). Seventy-two seventh graders read six webpages that provided contrasting information on the unsettled topic of the potential health risks related to mobile phone use. They were then asked to rank-order the six websites along the dimension of reliability (source evaluation). Findings revealed that students were able to discriminate between the most and least reliable websites, justifying their ranking in light of different criteria. Overall, however, they were not very accurate in rank-ordering all six Internet sources. Both implicit associations and HRV correlated with source evaluation. The interaction between the two individual variables was a significant predictor of participants' performance in rank-ordering the websites for reliability. A slope analysis revealed that for students with average psychophysiological self-regulation, the stronger their association of the mobile phone with health, the better their performance on source evaluation. The theoretical and educational significance of the study is discussed.
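
    The slope analysis described above can be illustrated with an ordinary interaction (moderation) regression followed by a simple-slopes probe. The sketch below uses simulated data and hypothetical variable names (implicit_assoc, hrv); it is not the study's actual analysis pipeline.

```python
# Hypothetical sketch of the interaction (moderation) model
# source_eval ~ implicit_assoc * hrv, followed by a simple-slopes probe.
# Data are simulated; variable names are assumptions, not the study's.
import numpy as np

rng = np.random.default_rng(0)
n = 72                                   # same sample size as the study
implicit_assoc = rng.normal(size=n)      # implicit association strength (standardized)
hrv = rng.normal(size=n)                 # basal heart rate variability (standardized)
source_eval = (0.2 * implicit_assoc + 0.1 * hrv
               + 0.3 * implicit_assoc * hrv + rng.normal(size=n))

# Design matrix: intercept, both predictors, and their product term.
X = np.column_stack([np.ones(n), implicit_assoc, hrv, implicit_assoc * hrv])
beta, *_ = np.linalg.lstsq(X, source_eval, rcond=None)
b0, b_assoc, b_hrv, b_inter = beta

# Simple slopes: effect of the implicit association at low/average/high HRV.
for level in (-1.0, 0.0, 1.0):           # -1 SD, mean, +1 SD
    print(f"slope of implicit_assoc at HRV {level:+.0f} SD:",
          round(b_assoc + b_inter * level, 3))
```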

    Measuring economic inequality and risk: a unifying approach based on personal gambles, societal preferences and references

    The idea underlying the construction of indices of economic inequality is to measure deviations of various portions of low incomes from certain references or benchmarks, which could be point measures such as the population mean or median, or curves such as the hypotenuse of the right triangle into which every Lorenz curve falls. In this paper we argue that by appropriately choosing population-based references, called societal references, and random distributions of personal positions, called gambles, we can meaningfully unify classical and contemporary indices of economic inequality, as well as various measures of risk. To illustrate the proposed approach, we put forward and explore a risk measure that takes into account the relativity of large risks with respect to small ones.
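
    The reference-based view can be made concrete with the classical Gini index, which measures the area between the Lorenz curve and the egalitarian benchmark, the hypotenuse mentioned above. A minimal sketch on made-up incomes (not an implementation of the paper's unified measure):

```python
# Sketch: the Gini index as the deviation of the Lorenz curve from the
# egalitarian 45-degree reference line (illustrative data, not the paper's).
import numpy as np

def lorenz_curve(incomes):
    x = np.sort(np.asarray(incomes, dtype=float))
    cum = np.cumsum(x) / x.sum()        # cumulative income share
    return np.insert(cum, 0, 0.0)       # curve starts at (0, 0)

def gini(incomes):
    L = lorenz_curve(incomes)
    n = len(L) - 1
    p = np.linspace(0.0, 1.0, n + 1)    # cumulative population share
    d = p - L                           # gap to the egalitarian reference
    # Twice the area between the reference line and the Lorenz curve
    # (trapezoidal rule).
    return float(2.0 * np.sum((d[1:] + d[:-1]) / 2.0 * np.diff(p)))

incomes = [10, 20, 30, 40, 100]
print(f"Gini = {gini(incomes):.3f}")    # 0.400 for this toy distribution
```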

    Towards a new generation of transport services adapted to multimedia applications

    A partial-order and partial-reliability connection (POC, partial order connection) is a transport connection that is allowed to lose some objects and also to deliver them in an order that may differ from the order in which they were sent. The POC approach establishes a conceptual link between best-effort connectionless protocols and reliable connection-oriented protocols. The POC concept is motivated by the fact that in heterogeneous connectionless networks such as the Internet, transmitted packets may be lost or arrive out of order, which degrades the performance of the usual protocols. Moreover, it is shown that such a protocol, applied to the transport of a multimedia stream, yields a very substantial reduction in the use of communication and storage resources as well as a decrease in the average transit time. In this article, a timed extension of POC, named TPOC (timed POC), is introduced. It provides a conceptual framework for taking into account the quality-of-service requirements of distributed multimedia applications. An architecture offering a TPOC service is also introduced and evaluated in the context of MPEG video transport. It is thus demonstrated that POC connections not only bridge the conceptual gap between connectionless and connection-oriented protocols, but also outperform the latter when multimedia data (such as MPEG video) are transported.
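
    As a rough illustration of the delivery rule such a connection applies at the receiver, the toy sketch below buffers arriving objects and delivers each one as soon as all of its predecessors in the partial order have been delivered or declared lost. The class and its API are hypothetical simplifications, not the paper's protocol machinery:

```python
# Toy receiver-side sketch of partial-order, partially-reliable delivery:
# a packet is deliverable once every predecessor in the partial order has
# either been delivered or been declared lost (hypothetical simplification).
from typing import Dict, Set

class POCReceiver:
    def __init__(self, predecessors: Dict[int, Set[int]]):
        self.pred = predecessors          # partial order: pkt -> required pkts
        self.done: Set[int] = set()       # delivered or declared lost
        self.buffer: Set[int] = set()     # arrived but not yet deliverable

    def _flush(self):
        progress = True
        while progress:
            ready = {p for p in self.buffer if self.pred[p] <= self.done}
            progress = bool(ready)
            for p in sorted(ready):
                print(f"deliver {p}")
                self.done.add(p)
                self.buffer.discard(p)

    def arrive(self, pkt: int):
        self.buffer.add(pkt)
        self._flush()

    def lost(self, pkt: int):             # partial reliability: give up on pkt
        self.done.add(pkt)
        self._flush()

# Example: 2 and 3 may be delivered in any order, but both require 1.
rx = POCReceiver({1: set(), 2: {1}, 3: {1}, 4: {2, 3}})
for event, pkt in [("arrive", 3), ("arrive", 2), ("lost", 1), ("arrive", 4)]:
    getattr(rx, event)(pkt)               # delivers 2, 3, then 4
```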

    Feature placement algorithms for high-variability applications in cloud environments

    While the use of cloud computing is on the rise, many obstacles to its adoption remain. One of the weaknesses of current cloud offerings is the difficulty of developing highly customizable applications while retaining the increased scalability and lower cost offered by the multi-tenant nature of cloud applications. In this paper we describe a Software Product Line Engineering (SPLE) approach to the modelling and deployment of customizable Software as a Service (SaaS) applications. Afterwards, we define a formal feature placement problem to manage these applications, and compare several heuristic approaches to solving it. The scalability and performance of the algorithms are investigated in detail. Our experiments show that the heuristics scale and perform well for systems with a reasonable load.
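
    As an illustration of the kind of heuristic such a comparison might include, the sketch below implements a simple first-fit-decreasing placement of feature instances onto servers. It is a generic bin-packing heuristic under assumed single-resource constraints, not one of the paper's algorithms:

```python
# Illustrative first-fit-decreasing heuristic for placing feature instances
# (with CPU demands) onto servers of fixed capacity. The paper's formal
# feature placement problem has richer constraints; this only shows the idea.
def place_features(demands, capacity):
    """demands: {feature: cpu_demand}; returns {server_index: [features]}."""
    servers = []                        # remaining capacity per opened server
    placement = {}
    for feat, need in sorted(demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if need <= free:            # first open server that still fits
                servers[i] -= need
                placement.setdefault(i, []).append(feat)
                break
        else:                           # no open server fits: open a new one
            servers.append(capacity - need)
            placement.setdefault(len(servers) - 1, []).append(feat)
    return placement

demo = {"auth": 30, "billing": 50, "reporting": 40, "search": 20, "ui": 60}
print(place_features(demo, capacity=100))
# {0: ['ui', 'reporting'], 1: ['billing', 'auth', 'search']}
```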

    Fast, scalable, Bayesian spike identification for multi-electrode arrays

    We present an algorithm to identify individual neural spikes observed on high-density multi-electrode arrays (MEAs). Our method can distinguish large numbers of distinct neural units, even when spikes overlap, and accounts for intrinsic variability of spikes from each unit. As MEAs grow larger, it is important to find spike-identification methods that are scalable, that is, whose computational cost scales well with the number of units observed. Our algorithm accomplishes this goal, and is fast because it exploits the spatial locality of each unit and the basic biophysics of extracellular signal propagation. Human intervention is minimized and streamlined via a graphical interface. We illustrate our method on data from a mammalian retina preparation and document its performance on simulated data consisting of spikes added to experimentally measured background noise. The algorithm is highly accurate.
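
    A highly simplified stand-in for this kind of spike identification is greedy template matching: repeatedly find the unit template that best explains the residual trace and peel it off. The sketch below (synthetic single-channel data) omits the Bayesian machinery, amplitude variability, and spatial locality that the actual method relies on:

```python
# Simplified sketch of greedy template matching for spike identification:
# repeatedly find the unit template with the highest correlation against the
# residual, subtract it, and record (unit, time). Illustrative only.
import numpy as np

def greedy_spike_id(trace, templates, threshold):
    residual = trace.copy()
    found = []
    while True:
        best = None
        for unit, tmpl in enumerate(templates):
            scores = np.correlate(residual, tmpl, mode="valid")
            t = int(np.argmax(scores))
            if best is None or scores[t] > best[0]:
                best = (scores[t], unit, t)
        score, unit, t = best
        if score < threshold:           # nothing left above noise level
            break
        residual[t:t + len(templates[unit])] -= templates[unit]
        found.append((unit, t))
    return found, residual

rng = np.random.default_rng(1)
templates = [np.array([0., 2., -1., 0.]), np.array([0., 1., 1., 0.])]
trace = rng.normal(scale=0.1, size=50)
trace[10:14] += templates[0]            # inject a unit-0 spike at t=10
spikes, _ = greedy_spike_id(trace, templates, threshold=3.0)
print(spikes)                            # expect roughly [(0, 10)]
```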

    Cellular neural networks, Navier-Stokes equation and microarray image reconstruction

    Although the last decade has witnessed a great deal of improvement in microarray technology, major developments are still needed in all the main stages of this technology, including image processing. Some hardware implementations of microarray image processing have been proposed in the literature and proved to be promising alternatives to the currently available software systems. However, the main drawback of those approaches is that they do not address the quantification of the gene spot in a realistic way, i.e., without any assumption about the image surface. Our aim in this paper is to present a new image-reconstruction algorithm using a cellular neural network that solves the Navier–Stokes equation. This algorithm offers a robust method for estimating the background signal within the gene-spot region. The MATCNN toolbox for MATLAB is used to test the proposed method. Quantitative comparisons, in terms of objective criteria, are carried out between our approach and some other available methods. It is shown that the proposed algorithm gives highly accurate and realistic measurements in a fully automated manner and within a remarkably efficient time.
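
    The core idea, estimating the background inside the spot region by propagating the surrounding signal inward, can be sketched with a plain heat-equation (diffusion) inpainting loop. The paper instead solves the Navier–Stokes equation on a cellular neural network; the sketch below is only the simplest PDE analogue:

```python
# Sketch of PDE-style background estimation: iteratively diffuse intensities
# from outside a masked gene-spot region into it (plain heat-equation
# inpainting; the paper uses a Navier-Stokes formulation on a CNN).
import numpy as np

def diffuse_background(image, mask, iters=500):
    """mask is True inside the spot region to be reconstructed."""
    est = image.copy().astype(float)
    est[mask] = image[~mask].mean()     # crude initialisation
    for _ in range(iters):
        # 4-neighbour average (Jacobi step for Laplace's equation)
        nb = (np.roll(est, 1, 0) + np.roll(est, -1, 0)
              + np.roll(est, 1, 1) + np.roll(est, -1, 1)) / 4.0
        est[mask] = nb[mask]            # update only inside the region
    return est

img = np.outer(np.linspace(0, 1, 32), np.ones(32))  # toy gradient background
spot = np.zeros_like(img, dtype=bool)
spot[12:20, 12:20] = True                            # toy spot region
img2 = img.copy()
img2[spot] += 5.0                                    # bright "gene spot"
bg = diffuse_background(img2, spot)
print(float(abs(bg[spot] - img[spot]).mean()))       # small reconstruction error
```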

    Stability and aggregation of ranked gene lists

    Ranked gene lists are highly unstable in the sense that similar measures of differential gene expression may yield very different rankings, and that a small change in the data set usually affects the resulting gene list considerably. Stability issues have long received little consideration in the literature, but they have grown into a hot topic in recent years, perhaps as a consequence of increasing skepticism about the reproducibility and clinical applicability of molecular research findings. In this article, we review existing approaches for assessing the stability of ranked gene lists and the related problem of aggregation, give some practical recommendations, and warn against potential misuse of these methods. This overview is illustrated through an application to a recent leukemia data set using the freely available Bioconductor package GeneSelector.
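
    Two of the reviewed ideas, assessing the stability of a ranked list under data perturbation and aggregating several lists into one, can be sketched in a few lines. This is an illustrative sketch on simulated data, not the GeneSelector API:

```python
# Sketch: (1) stability of a ranked gene list as top-k overlap across
# bootstrap-perturbed datasets; (2) aggregation of the rankings by
# Borda count (average rank position). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_samples, k = 100, 20, 10
X = rng.normal(size=(n_genes, n_samples))
labels = np.array([0] * 10 + [1] * 10)
X[:5, labels == 1] += 2.0               # genes 0-4 are truly differential

def rank_genes(X, labels):
    diff = X[:, labels == 1].mean(1) - X[:, labels == 0].mean(1)
    return np.argsort(-np.abs(diff))    # genes ordered by effect size

ref_topk = set(rank_genes(X, labels)[:k])
lists, overlaps = [], []
for _ in range(50):                      # bootstrap the samples
    idx = rng.choice(n_samples, n_samples, replace=True)
    r = rank_genes(X[:, idx], labels[idx])
    lists.append(r)
    overlaps.append(len(ref_topk & set(r[:k])) / k)
print("mean top-k overlap (stability):", np.mean(overlaps))

# Borda aggregation: average rank position across the bootstrap lists.
positions = np.zeros(n_genes)
for r in lists:
    positions[r] += np.arange(n_genes)   # r[i] sits at position i
print("aggregated top 5:", np.argsort(positions)[:5])
```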

    Methods for Ordinal Peer Grading

    MOOCs have the potential to revolutionize higher education with their wide outreach and accessibility, but they require instructors to come up with scalable alternatives to traditional student evaluation. Peer grading -- having students assess each other -- is a promising approach to tackling the problem of evaluation at scale, since the number of "graders" naturally scales with the number of students. However, students are not trained in grading, which means that one cannot expect the same level of grading skill as in traditional settings. Drawing on broad evidence that ordinal feedback is easier to provide and more reliable than cardinal feedback, it is therefore desirable to allow peer graders to make ordinal statements (e.g. "project X is better than project Y") rather than require them to make cardinal statements (e.g. "project X is a B-"). Thus, in this paper we study the problem of automatically inferring student grades from ordinal peer feedback, as opposed to existing methods that require cardinal peer feedback. We formulate the ordinal peer grading problem as a type of rank aggregation problem, and explore several probabilistic models under which to estimate student grades and grader reliability. We study the applicability of these methods using peer grading data collected from a real class -- with instructor and TA grades as a baseline -- and demonstrate the efficacy of ordinal feedback techniques in comparison to existing cardinal peer grading methods. Finally, we compare these peer-grading techniques to traditional evaluation techniques.
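
    One of the simplest probabilistic models for aggregating such ordinal comparisons is Bradley–Terry, in which each item has a latent score s_i and P(i beats j) = s_i / (s_i + s_j). The sketch below fits such a model with Hunter's MM updates on made-up comparisons; whether this matches the paper's exact model family is an assumption:

```python
# Minimal Bradley-Terry fit (MM algorithm, Hunter 2004) from ordinal
# "i beat j" comparisons -- one family of models for rank aggregation.
# The comparison data below are made up for illustration.
import numpy as np

def bradley_terry(wins, n, iters=200):
    """wins: list of (winner, loser) pairs over items 0..n-1."""
    s = np.ones(n)
    w = np.zeros(n)                     # total wins per item
    for i, _ in wins:
        w[i] += 1
    for _ in range(iters):
        denom = np.zeros(n)
        for i, j in wins:               # each comparison of i and j
            denom[i] += 1.0 / (s[i] + s[j])
            denom[j] += 1.0 / (s[i] + s[j])
        s = w / denom                   # MM update
        s /= s.sum()                    # fix the overall scale
    return s

# Hypothetical peer comparisons among 4 projects (0 is judged best overall).
comparisons = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (0, 1), (2, 1)]
scores = bradley_terry(comparisons, n=4)
print("inferred ranking:", np.argsort(-scores))
```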