
    Experiencing Poverty in an Online Simulation: Effects on Players’ Beliefs, Attitudes and Behaviors about Poverty

    Digital simulations are increasingly used to educate about the causes and effects of poverty and to inspire action to alleviate it. Drawing on research about attributions of poverty, subjective well-being, and relative income, this experimental study assesses the effects of an online poverty simulation (entitled Spent) on participants’ beliefs, attitudes, and actions. Results show that, compared with a control group, Spent players donated marginally more money to a charity serving the poor and expressed higher support for policies benefiting the poor, but were less likely to take immediate political action by signing an online petition to support a higher minimum wage. Spent players also expressed greater subjective well-being than the control group, but this was not associated with increased policy support or donations. Spent players who experienced greater presence (perceived realism of the simulation) had higher levels of empathy, which contributed to attributing poverty to structural causes and to supporting anti-poverty policies. We draw conclusions for theory about the psychological experience of playing online poverty simulations, and for how they could be designed to stimulate charity and support for anti-poverty policies.
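    The presence → empathy → attribution/policy-support pathway reported above is a mediation analysis. As a rough sketch only, here is a minimal Baron–Kenny-style check in Python; the column names (presence, empathy, policy_support) are hypothetical, not taken from the study's instruments, and a published analysis would typically bootstrap the indirect effect.

```python
# Hypothetical mediation sketch for the pathway described in the abstract.
# Column names are illustrative assumptions, not the study's variables.
import pandas as pd
import statsmodels.formula.api as smf

def mediation_check(df: pd.DataFrame) -> None:
    # Total effect: does presence predict policy support on its own?
    total = smf.ols("policy_support ~ presence", data=df).fit()
    # a path: does presence predict the mediator, empathy?
    a_path = smf.ols("empathy ~ presence", data=df).fit()
    # b path: does empathy predict policy support, controlling for presence?
    b_path = smf.ols("policy_support ~ presence + empathy", data=df).fit()
    a = a_path.params["presence"]
    b = b_path.params["empathy"]
    print("total effect:", total.params["presence"])
    print("indirect effect (a*b):", a * b)  # significance via bootstrap
```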

    Software trace cache

    We explore the use of compiler optimizations that improve the layout of instructions in memory. The goal is to enable the code to make better use of the underlying hardware resources, regardless of the specific details of the processor/architecture, in order to increase fetch performance. The Software Trace Cache (STC) is a code layout algorithm with a broader target than previous layout optimizations. We target not only an improvement in the instruction cache hit rate, but also an increase in the effective fetch width of the fetch engine. The STC algorithm organizes basic blocks into chains, trying to make sequentially executed basic blocks reside in consecutive memory positions, and then maps the basic block chains in memory to minimize conflict misses in the important sections of the program. We evaluate and analyze in detail the impact of the STC, and of code layout optimizations in general, on the three main aspects of fetch performance: the instruction cache hit rate, the effective fetch width, and the branch prediction accuracy. Our results show that layout-optimized codes have some special characteristics that make them more amenable to high-performance instruction fetch. They have a very high rate of not-taken branches and execute long chains of sequential instructions; they also make very effective use of instruction cache lines, mapping only useful instructions that will execute close in time, increasing both spatial and temporal locality.
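    As a concrete illustration of the chaining step, the sketch below greedily merges basic blocks along their hottest control-flow edges so that frequently executed successors become fall-throughs in memory. This is a simplified stand-in for the STC layout algorithm, not the authors' exact implementation; the profile format is assumed.

```python
# Greedy basic-block chaining from edge profiles (simplified sketch,
# not the exact Software Trace Cache algorithm).
def build_chains(edges):
    """edges: list of (src_block, dst_block, exec_count) from a profile."""
    chain_of = {}   # block -> the chain (list) it currently belongs to
    chains = []

    def chain(block):
        if block not in chain_of:
            c = [block]
            chains.append(c)
            chain_of[block] = c
        return chain_of[block]

    # Visit edges hottest-first so the most frequent successor pairs
    # end up in consecutive memory positions.
    for src, dst, _count in sorted(edges, key=lambda e: -e[2]):
        cs, cd = chain(src), chain(dst)
        # Merge only when src is a chain tail and dst a chain head,
        # so each block keeps exactly one layout position.
        if cs is not cd and cs[-1] == src and cd[0] == dst:
            cs.extend(cd)
            for b in cd:
                chain_of[b] = cs
            chains.remove(cd)
    return chains

# Hot loop A -> B with a cold exit A -> C:
print(build_chains([("A", "B", 90), ("B", "A", 90), ("A", "C", 10)]))
# [['A', 'B'], ['C']] -- A and B become sequential; C is laid out elsewhere.
```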

    Computational fluid dynamics modeling of symptomatic intracranial atherosclerosis may predict risk of stroke recurrence.

    Background: Patients with symptomatic intracranial atherosclerosis (ICAS) of ≥70% luminal stenosis are at high risk of stroke recurrence. We aimed to evaluate the relationships between the hemodynamics of ICAS revealed by computational fluid dynamics (CFD) models and the risk of stroke recurrence in this patient subset. Methods: Patients with a symptomatic ICAS lesion of 70-99% luminal stenosis were screened and enrolled in this study. CFD models were reconstructed based on baseline computed tomographic angiography (CTA) source images to reveal the hemodynamics of the qualifying symptomatic ICAS lesions. The change of pressure across a lesion was represented by the ratio of post- and pre-stenotic pressures. The change of shear strain rate (SSR) across a lesion was represented by the ratio of SSRs at the stenotic throat and the proximal normal vessel segment, and similarly for the change of flow velocity. Patients were followed up for 1 year. Results: Overall, 32 patients (median age 65; 59.4% males) were recruited. The median pressure, SSR, and velocity ratios for the ICAS lesions were 0.40 (−2.46 to 0.79), 4.5 (2.2-20.6), and 7.4 (5.2-12.5), respectively. The SSR ratio (hazard ratio [HR] 1.027; 95% confidence interval [CI], 1.004-1.051; P = 0.023) and the velocity ratio (HR 1.029; 95% CI, 1.002-1.056; P = 0.035) were significantly related to recurrent territorial ischemic stroke within 1 year by univariate Cox regression, with c-statistics of 0.776 (95% CI, 0.594-0.903; P = 0.014) and 0.776 (95% CI, 0.594-0.903; P = 0.002), respectively, in receiver operating characteristic analysis. Conclusions: The hemodynamics of ICAS on CFD models reconstructed from routinely obtained CTA images may predict subsequent stroke recurrence in patients with a symptomatic ICAS lesion of 70-99% luminal stenosis.
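    For illustration, the three translesional metrics can be computed from CFD probe values as simple ratios, as in the sketch below. The variable names and sampled locations are assumptions; in the study these quantities came from CFD models built on CTA source images.

```python
# Illustrative computation of the three translesional ratios described
# in the abstract; units and probe locations are assumptions.
from dataclasses import dataclass

@dataclass
class LesionHemodynamics:
    pressure_pre: float     # pressure proximal to the stenosis
    pressure_post: float    # pressure distal to the stenosis
    ssr_throat: float       # shear strain rate at the stenotic throat
    ssr_normal: float       # SSR at the proximal normal vessel segment
    velocity_throat: float  # flow velocity at the throat
    velocity_normal: float  # velocity at the proximal normal segment

    def pressure_ratio(self) -> float:
        # Change of pressure across the lesion: post- over pre-stenotic.
        return self.pressure_post / self.pressure_pre

    def ssr_ratio(self) -> float:
        # Change of SSR: stenotic throat over proximal normal segment.
        return self.ssr_throat / self.ssr_normal

    def velocity_ratio(self) -> float:
        return self.velocity_throat / self.velocity_normal
```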

    Enlarging instruction streams

    The stream fetch engine is a high-performance fetch architecture based on the concept of an instruction stream. We call a sequence of instructions from the target of a taken branch to the next taken branch, potentially containing multiple basic blocks, a stream. The long length of instruction streams makes it possible for the stream fetch engine to provide high fetch bandwidth and to hide the branch predictor access latency, leading to performance results close to a trace cache at a lower implementation cost and complexity. Therefore, enlarging instruction streams is an excellent way to improve the stream fetch engine. In this paper, we present several hardware and software mechanisms focused on enlarging those streams that terminate at particular branch types. However, our results show that focusing on particular branch types is not a good strategy, due to Amdahl's law. Consequently, we propose the multiple-stream predictor, a novel mechanism that deals with all branch types by combining single streams into long virtual streams. This proposal tolerates the prediction-table access latency without requiring the complexity caused by additional hardware mechanisms like prediction overriding. Moreover, it provides high-performance results comparable to state-of-the-art fetch architectures, but with a simpler design that consumes less energy.
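    The stream definition above is easy to make concrete: scanning a dynamic trace, a stream ends at each taken branch and the next stream begins at its target. A minimal sketch, assuming a simple (pc, taken) trace format:

```python
# Split a dynamic instruction trace into streams: each stream runs from
# the target of a taken branch to the next taken branch (inclusive).
# The trace format is an assumed simplification.
def split_into_streams(trace):
    """trace: iterable of (pc, is_taken_branch) in dynamic order."""
    streams, current = [], []
    for pc, taken in trace:
        current.append(pc)
        if taken:
            # A taken branch closes the current stream; the next one
            # starts at the branch target (the next pc in the trace).
            streams.append(current)
            current = []
    if current:
        streams.append(current)
    return streams

# Two iterations of a three-instruction loop ending in a taken branch:
print(split_into_streams(
    [(0, False), (4, False), (8, True), (0, False), (4, False), (8, True)]))
# [[0, 4, 8], [0, 4, 8]]
```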

    Control speculation for energy-efficient next-generation superscalar processors

    Conventional front-end designs attempt to maximize the number of "in-flight" instructions in the pipeline. However, branch mispredictions cause the processor to fetch useless instructions that are eventually squashed, increasing front-end energy and issue-queue utilization and thus wasting around 30 percent of the power dissipated by a processor. Furthermore, processor design trends lead to increasing clock frequencies by lengthening the pipeline, which puts more pressure on the branch prediction engine, since branches take longer to be resolved. As next-generation high-performance processors become deeply pipelined, the amount of energy wasted on misspeculated instructions will go up. The aim of this work is to reduce the energy consumption of misspeculated instructions. We propose selective throttling, which triggers different power-aware techniques (fetch throttling, decode throttling, or disabling the selection logic) depending on the branch prediction confidence level. Results show that combining fetch-bandwidth reduction with select-logic disabling provides the best overall energy reduction and energy-delay product improvement (14 percent and 10 percent, respectively, for a processor with a 22-stage pipeline, and 16 percent and 13 percent, respectively, for a processor with a 42-stage pipeline).
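    The core decision logic is a mapping from the confidence estimate for an in-flight branch to a power-saving action. The thresholds and action names below are illustrative only; the paper's finding is that combining fetch-bandwidth reduction with select-logic disabling works best.

```python
# Hedged sketch of selective throttling: pick a power-saving technique
# from a branch confidence estimate. Thresholds are illustrative.
def throttle_decision(confidence: float) -> str:
    """confidence in [0.0, 1.0] from a branch confidence estimator."""
    if confidence > 0.9:
        return "full_speed"       # prediction very likely correct
    elif confidence > 0.6:
        return "throttle_fetch"   # reduce fetch bandwidth
    elif confidence > 0.3:
        return "throttle_decode"  # also slow down decode
    else:
        return "disable_select"   # gate the issue-queue selection logic
```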

    The influence of cognition and emotion on Nigerian undergraduates' frustration during e-Registration

    This study was designed to investigate the relative and combined contributions of cognition and emotion to Nigerian undergraduates’ level of computer frustration in online environments. The 1,972 students who participated in the study were randomly selected from the two state-owned universities in Ogun State, Nigeria. The data for the study were collected through the use of the Students’ Cognition Scale, Students’ Emotion Scale, and Students’ Computer Frustration Scale. Data analysis involved the use of the mean and standard deviation as descriptive statistics, as well as the Pearson product-moment correlation and regression analysis as inferential statistics. The research findings revealed that students encountered various frustrating experiences during e-Registration and that a combination of the predictor variables, cognition and emotion, significantly accounted for 2.5% of the variance in the students’ level of frustration. Cognition was found to be the more potent contributor to this frustration. The results of this study further indicated that there was a statistically significant difference in the level of computer frustration among students at the two universities, potentially due to the relative differences in the schools’ technology facilities. Recommendations are made at the end of this paper in accordance with the findings of the study.
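    The headline figure, the two predictors jointly explaining 2.5% of the variance, corresponds to the R² of a two-predictor regression. A minimal sketch, with hypothetical column names standing in for the study's scale scores:

```python
# Sketch of the reported multiple regression; column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

def frustration_r2(df: pd.DataFrame) -> float:
    model = smf.ols("frustration ~ cognition + emotion", data=df).fit()
    return model.rsquared  # the abstract reports roughly 0.025 (2.5%)
```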

    Empowering a helper cluster through data-width aware instruction selection policies

    Narrow values, which can be represented with fewer bits than the full machine width, occur very frequently in programs. On the other hand, clustering mechanisms enable cost- and performance-effective scaling of processor back-end features. These attributes can be combined synergistically to design special clusters operating on narrow values (a.k.a. a helper cluster), potentially providing performance benefits. We complement a 32-bit monolithic processor with a low-complexity 8-bit helper cluster. Then, as our main focus, we propose various ideas to select suitable instructions to execute in the data-width-based clusters. We add data-width information as another instruction steering decision metric and introduce new data-width-based selection algorithms that also consider dependency, inter-cluster communication, and load imbalance. Utilizing these techniques, the performance of a wide range of workloads is substantially increased; the helper cluster achieves an average speedup of 11% across a wide range of 412 applications. When focusing on integer applications, the speedup can be as high as 22% on average.
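    A steering policy of this kind weighs predicted operand width against dependences and load balance. The heuristic below is a hypothetical sketch in that spirit, not the authors' exact selection algorithm.

```python
# Illustrative data-width-aware steering heuristic (assumed, simplified):
# send an instruction to the 8-bit helper cluster only when its result is
# predicted narrow, most producers already live there (less inter-cluster
# communication), and the helper cluster is not overloaded.
def steer(instr, cluster_load, producer_cluster, helper_capacity=32):
    """instr: dict with 'predicted_width' (bits) and 'srcs' (producer ids);
    producer_cluster maps producer id -> 'main' or 'helper'."""
    narrow = instr["predicted_width"] <= 8
    in_helper = sum(1 for s in instr["srcs"]
                    if producer_cluster.get(s) == "helper")
    balanced = cluster_load["helper"] < helper_capacity
    # Steer narrow instructions whose dependences favor the helper cluster.
    if narrow and balanced and in_helper >= len(instr["srcs"]) / 2:
        return "helper"
    return "main"
```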