
    Parrondo's Paradox for Games with Three Players

    Parrondo's paradox in game theory asserts that playing two losing games, say A and B, randomly or periodically may nevertheless yield a winning expectation. In the original paradox, the strategy of game B was capital-dependent. Several extensions of the original Parrondo's game have been introduced, such as history-dependent and cooperative Parrondo's games. In all of these variants, the games are played by two players. In this paper, we introduce a generalized version of the paradox with three players. In our extension, two games are played among three players by throwing a three-sided die, and each player occupies one of three places in the game. We establish conditions on the parameters under which player one finishes in third place in both games A and B, yet finishes in first place when the two games are combined periodically or chaotically; this is the paradoxical property. A mathematical analysis of the generalized strategy is presented, and the results are also supported by computer simulations.
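    For context, the original two-player, capital-dependent paradox that this paper generalizes can be sketched with a short simulation. The parameter values below are the standard textbook choices, not taken from this paper:

```python
import random

def play(strategy, steps=100_000, eps=0.005, seed=1):
    """Simulate the classic capital-dependent Parrondo games.

    strategy: 'A', 'B', or 'mix' (pick A or B uniformly at random
    each round). Returns the final capital, starting from 0.
    """
    rng = random.Random(seed)
    capital = 0
    for _ in range(steps):
        game = rng.choice('AB') if strategy == 'mix' else strategy
        if game == 'A':
            p = 0.5 - eps                      # slightly losing coin
        else:                                  # game B: capital-dependent
            p = (0.1 - eps) if capital % 3 == 0 else (0.75 - eps)
        capital += 1 if rng.random() < p else -1
    return capital
```

    With these standard parameters, `play('A')` and `play('B')` tend to end with negative capital while `play('mix')` tends to end positive, although any single seeded run is of course random.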

    Energy efficient SEU-tolerance in DVS-enabled real-time systems through information redundancy


    A Survey of Fault-Tolerance Techniques for Embedded Systems from the Perspective of Power, Energy, and Thermal Issues

    Relentless technology scaling has provided a significant increase in processor performance, but on the other hand it has had adverse impacts on system reliability. In particular, technology scaling increases the processor's susceptibility to radiation-induced transient faults. Moreover, technology scaling after the end of Dennard scaling increases power densities, and thereby temperatures, on the chip. High temperature, in turn, accelerates transistor aging mechanisms, which may ultimately lead to permanent faults on the chip. To assure reliable system operation despite these potential reliability concerns, fault-tolerance techniques have emerged. Specifically, fault-tolerance techniques employ some form of redundancy to satisfy specific reliability requirements. However, the integration of fault-tolerance techniques into real-time embedded systems complicates preserving timing constraints. As a remedy, many task mapping/scheduling policies have been proposed to integrate fault-tolerance techniques and enforce both timing and reliability guarantees for real-time embedded systems. More advanced techniques additionally aim to minimize power and energy while satisfying timing and reliability constraints. Recently, some scheduling techniques have started to tackle a new challenge: the temperature increase induced by employing fault-tolerance techniques. These emerging techniques aim to satisfy temperature constraints besides timing and reliability constraints. This paper provides an in-depth survey of the emerging research efforts that exploit fault-tolerance techniques while considering timing, power/energy, and temperature from the real-time embedded systems' design perspective. In particular, the task mapping/scheduling policies for fault-tolerant real-time embedded systems are reviewed and classified according to their considered goals and constraints. Moreover, the employed fault-tolerance techniques, application models, and hardware models are considered as additional dimensions of the presented classification. Lastly, this survey gives deep insights into the main achievements and shortcomings of the existing approaches and highlights the most promising ones.

    Laboratories risks evaluation of Persian Gulf and Oman Sea Ecological Research Center

    In this study, to assess and classify risks associated with working in the laboratories of the Persian Gulf and Oman Sea Ecological Research Center, the method of "Failure Mode and Effects Analysis" (FMEA) as well as some statistical methods were used. The results of the risk assessment in the 11 affiliated laboratories showed that the risk levels in all cases, except for the benthos laboratory, could be evaluated as moderate or high, and therefore appropriate corrective actions must be implemented. Based on the results of the Kruskal-Wallis tests both before and after the corrective actions, there were significant differences between the laboratories in terms of the risk priority number (RPN). The post hoc tests showed the lowest risk levels for the benthos and histology laboratories, while the highest risk was identified for the instrumental analysis laboratory. The results of the classification of the laboratories using cluster analysis are largely similar to those of the post hoc tests. According to the Mann-Whitney U test, only in the case of the sample preparation laboratory could significant differences between the values of the RPN before and after the corrective actions be observed (p < 0.05); however, the risk levels still remained high. In general, it can be concluded that FMEA is an effective method for risk assessment in research laboratories, and appropriate statistical methods can also be used for complementary analysis.
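    The RPN used to rank risks in an FMEA is simply the product of the severity, occurrence, and detection ratings, each scored on a 1-10 scale. A minimal sketch follows; the failure modes and ratings below are invented for illustration and are not from this study:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of the three 1-10 FMEA ratings."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings must lie in 1..10")
    return severity * occurrence * detection

# Hypothetical failure modes, rated (severity, occurrence, detection):
modes = {
    "chemical spill": (8, 4, 3),
    "glassware breakage": (5, 6, 2),
}

# Rank failure modes by RPN, highest (most urgent) first.
ranked = sorted(modes, key=lambda m: rpn(*modes[m]), reverse=True)
```

    Modes with the highest RPN are prioritized for corrective action; after the action is taken, the ratings are re-scored and the RPN recomputed, which is the before/after comparison the study tests statistically.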

    NIKA2 observations of dust grain evolution from star-forming filament to T-Tauri disk: Preliminary results from NIKA2 observations of the Taurus B211/B213 filament

    To understand the evolution of dust properties in molecular clouds in the course of the star formation process, we constrain the changes in the dust emissivity index β from star-forming filaments to prestellar and protostellar cores to T Tauri stars. Using the NIKA2 continuum camera on the IRAM 30 m telescope, we observed the Taurus B211/B213 filament at 1.2 mm and 2 mm with unprecedented sensitivity and used the resulting maps to derive the dust emissivity index β. Our sample of 105 objects detected in the β map of the B211/B213 filament indicates that, overall, β decreases from the filament and prestellar cores (β ≈ 2 ± 0.5) to protostellar cores (β ≈ 1.2 ± 0.2) to T Tauri protoplanetary disks (β < 1). The averaged dust emissivity index β across the B211/B213 filament exhibits a flat profile (β ≈ 2 ± 0.3). This may imply that dust grain sizes are rather homogeneous in the filament and start to grow significantly only after the onset of the gravitational contraction/collapse of prestellar cores to protostars, reaching large sizes in T Tauri protoplanetary disks. This evolution from the parent filament to T Tauri disks happens on a timescale of about 1-2 Myr.
    Comment: to appear in Proc. of the mm Universe 2023 conference, Grenoble (France), June 2023, published by F. Mayet et al. (Eds), EPJ Web of Conferences, EDP Sciences
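    As a rough illustration of how two millimetre bands constrain β: in the Rayleigh-Jeans limit a modified blackbody scales as S_ν ∝ ν^(2+β), so the ratio of two flux densities gives β directly. This simple ratio estimate ignores temperature effects and is not the fitting procedure used in the paper:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def beta_rj(s1, s2, lam1_mm, lam2_mm):
    """Dust emissivity index from two flux densities s1, s2 measured at
    wavelengths lam1_mm, lam2_mm, assuming the Rayleigh-Jeans regime
    where S_nu scales as nu**(2 + beta)."""
    nu1 = C / (lam1_mm * 1e-3)  # wavelength in mm -> frequency in Hz
    nu2 = C / (lam2_mm * 1e-3)
    return math.log(s1 / s2) / math.log(nu1 / nu2) - 2.0
```

    For example, a source with β = 2 observed at 1.2 mm and 2 mm has a flux ratio of (2/1.2)⁴ ≈ 7.7 between the two bands, which this function inverts back to β = 2.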

    IAS/CEA Evolution of Dust in Nearby Galaxies (ICED): the spatially-resolved dust properties of NGC4254

    We present the first preliminary results of the project ICED, focusing on the face-on galaxy NGC4254. We use the millimetre maps observed with NIKA2 at the IRAM 30 m telescope, as part of the IMEGIN Guaranteed Time Large Program, together with a wide collection of publicly available ancillary data (multi-wavelength photometry and gas-phase spectral lines). We derive the global and local properties of interstellar dust grains through infrared-to-radio spectral energy distribution fitting, using the hierarchical Bayesian code HerBIE, which includes the grain properties of the state-of-the-art dust model THEMIS. Our method allows us to obtain the following dust parameters: dust mass, average interstellar radiation field, and fraction of small grains. It is also effective in retrieving the intrinsic correlations between dust parameters and interstellar medium properties. We find a clear anti-correlation between the interstellar radiation field and the fraction of small grains in the centre of NGC4254, meaning that, at strong radiation field intensities, very small amorphous carbon grains are efficiently destroyed by the ultraviolet photons coming from newly formed stars, through photo-desorption and sublimation. We observe a flattening of the anti-correlation at larger radial distances, which may be driven by the steep metallicity gradient measured in NGC4254.
    Comment: to appear in Proc. of the mm Universe 2023 conference, Grenoble (France), June 2023, published by F. Mayet et al. (Eds), EPJ Web of Conferences, EDP Sciences

    Exploring the interstellar medium of NGC 891 at millimeter wavelengths using the NIKA2 camera

    In the framework of the IMEGIN Large Program, we used the NIKA2 camera on the IRAM 30-m telescope to observe the edge-on galaxy NGC 891 at 1.15 mm and 2 mm, with a FWHM of 11.1" and 17.6", respectively. Multiwavelength data, enriched with the new NIKA2 observations and fitted by the HerBIE SED code (coupled with the THEMIS dust model), were used to constrain the physical properties of the ISM. Emission originating from the diffuse dust disk is detected at all wavelengths from the mid-IR to the mm, while mid-IR observations reveal warm dust emission from compact HII regions. Indications of mm excess emission have also been found in the outer parts of the galactic disk. Furthermore, our SED fitting analysis constrained the mass fraction of the small (< 15 Angstrom) dust grains. We found that small grains constitute 9.5% of the total dust mass in the galactic plane, but this fraction increases up to ~20% at large distances (|z| > 3 kpc) from the galactic plane.
    Comment: to appear in Proc. of the mm Universe 2023 conference, Grenoble (France), June 2023, published by F. Mayet et al. (Eds), EPJ Web of Conferences, EDP Sciences