61 research outputs found
The Dungeon Variations Problem Using Constraint Programming
The video games industry generates billions of dollars in sales every year. Thanks to ever larger teams of developers and artists, video games can offer increasingly complex gaming experiences, with gigantic (but consistent) open worlds. In this paper, we propose a constraint-based approach for procedural dungeon generation in an open world/universe context, in order to provide players with consistent open worlds and high-quality storytelling. Starting from a global description capturing all the possible rooms and situations of a given dungeon, our approach enumerates variations of this global pattern, which can then be presented to the player for more diversity. We formalise this problem in constraint programming by exploiting a graph abstraction of the dungeon pattern structure: every path of the graph represents a possible variation matching a given set of constraints. We introduce a new propagator extending the "connected" graph constraint, which makes it possible to handle directed graphs with cycles. We show that, thanks to this model and the proposed propagator, it is possible to handle scenarios at the forefront of the game industry (AAA+ games), and we demonstrate that our approach outperforms non-specialised solutions that keep only the relevant solutions through a posteriori filtering. We conclude with several perspectives opened up by this approach to the Dungeon Variations Problem.
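As a rough illustration of the underlying idea, the minimal Python sketch below (not the paper's constraint model or propagator; the rooms, the pattern graph and the key-before-door rule are invented for the example) enumerates entrance-to-exit paths of a small directed dungeon pattern graph and keeps only those satisfying a storytelling constraint. This brute-force enumerate-then-filter loop corresponds to the kind of non-specialised a-posteriori filtering baseline that the paper reports outperforming with dedicated constraint propagation.

    # Minimal illustration (not the paper's CP model): dungeon "variations" as
    # entrance-to-exit paths of a directed pattern graph, filtered a posteriori.
    # The graph, room names and the key-before-door rule are invented here.

    PATTERN = {                       # directed dungeon pattern graph (may contain cycles)
        "entrance": ["hall", "crypt"],
        "hall": ["key_room", "locked_door"],
        "crypt": ["key_room", "hall"],
        "key_room": ["locked_door", "hall"],
        "locked_door": ["boss"],
        "boss": [],
    }

    def variations(graph, start, goal):
        """Yield every simple path from start to goal (cycles are cut by the path check)."""
        stack = [(start, [start])]
        while stack:
            node, path = stack.pop()
            if node == goal:
                yield path
                continue
            for nxt in graph[node]:
                if nxt not in path:          # keep paths simple
                    stack.append((nxt, path + [nxt]))

    def consistent(path):
        """Hypothetical storytelling constraint: the key must be found before the locked door."""
        return "locked_door" not in path or (
            "key_room" in path and path.index("key_room") < path.index("locked_door"))

    if __name__ == "__main__":
        for p in variations(PATTERN, "entrance", "boss"):
            if consistent(p):
                print(" -> ".join(p))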
Toward High-Performance Implementation of 5G SCMA Algorithms
The recent evolution of mobile communication systems toward 5G networks is associated with the search for new types of non-orthogonal modulations such as Sparse Code Multiple Access (SCMA). Such modulations are proposed in response to the demand for an increasing number of connected users. SCMA is a non-orthogonal multiple access technique that offers better Bit Error Rate (BER) performance and higher spectral efficiency than comparable techniques, but these improvements come at the cost of complex decoders. There are many challenges in designing near-optimum, high-throughput SCMA decoders. This paper explores means to enhance the performance of SCMA decoders. To achieve this goal, various improvements to the Message Passing Algorithm (MPA) are proposed. They notably aim at adapting SCMA decoding to the Single Instruction Multiple Data (SIMD) paradigm. An approximate modeling of noise is performed to reduce the complexity of floating-point calculations. The effects of Forward Error Correction (FEC) schemes such as polar, turbo and LDPC codes, as well as different ways of accessing memory and improving the power efficiency of the modified MPAs, are investigated. The results show that the throughput of an SCMA decoder can be increased by 3.1 to 21 times compared to the original MPA on different computing platforms using the suggested improvements.
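One classic way such message-passing decoders cut floating-point cost, shown in the short NumPy sketch below, is to replace the exact log-sum-exp combination of candidate metrics with a simple maximum (a "max-log" style approximation). Whether this matches the paper's exact noise approximation is not claimed here; the sketch only illustrates why this family of approximations is cheaper and maps well to SIMD (one vectorised max instead of exp/log calls).

    # Generic illustration of a common simplification in message-passing decoders:
    # replace exact log-sum-exp by a max. Not claimed to be the paper's scheme.
    import numpy as np

    def logsumexp(metrics, axis=-1):
        """Exact combination: log(sum(exp(m))) computed in a numerically stable way."""
        m = np.max(metrics, axis=axis, keepdims=True)
        return (m + np.log(np.sum(np.exp(metrics - m), axis=axis, keepdims=True))).squeeze(axis)

    def maxlog(metrics, axis=-1):
        """Max-log approximation: keep only the best candidate metric."""
        return np.max(metrics, axis=axis)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        metrics = rng.normal(size=(1000, 4))   # toy metrics: 1000 resources x 4 hypotheses
        exact, approx = logsumexp(metrics), maxlog(metrics)
        print("mean absolute gap:", np.abs(exact - approx).mean())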
Fast and Flexible Software Polar List Decoders
Flexibility is a mandatory aspect of channel coding in modern wireless communication systems. Among other things, the channel decoder has to support several code lengths and code rates. This need for flexibility applies to polar codes, which are considered for the control channels of the future 5G standard. This paper presents a new generic and flexible implementation of a software Successive Cancellation List (SCL) decoder. A large set of parameters can be fine-tuned dynamically without recompiling the software source code: the code length, the code rate, the frozen bits set, the puncturing patterns, the cyclic redundancy check, the list size, the type of decoding algorithm, the tree-pruning strategy and the data quantization. This generic and flexible SCL decoder makes it possible to explore trade-offs between throughput, latency and decoding performance. Several optimizations are proposed to achieve a competitive decoding speed despite the constraints induced by genericity and flexibility. The resulting polar list decoder is about 4 times faster than a generic software decoder and only 2 times slower than a non-flexible unrolled decoder. Thanks to this flexibility, the fully adaptive SCL algorithm can easily be implemented and achieves a higher throughput than any other similar decoder in the literature (up to 425 Mb/s on a single processor core for N = 2048 and K = 1723 at 4.5 dB).
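The "fully adaptive SCL" mentioned above usually follows the control flow sketched below: decode with the cheapest configuration first and escalate the list size only when the CRC check fails, so most frames exit after the fastest pass. The decode and CRC functions are stand-in stubs for illustration, not the paper's implementation.

    # Sketch of the usual adaptive SCL control flow; the two stubs below stand in
    # for a real SCL decoder and a real CRC check.

    def scl_decode(llrs, list_size):
        """Stub for a real Successive Cancellation List decoder (placeholder output)."""
        return [0] * len(llrs)

    def crc_ok(codeword):
        """Stub for a real CRC check on the decoded candidate."""
        return True

    def adaptive_scl(llrs, list_sizes=(1, 2, 8, 32)):
        """Try increasing list sizes, stop at the first candidate that passes the CRC."""
        for L in list_sizes:
            candidate = scl_decode(llrs, L)
            if crc_ok(candidate):
                return candidate, L          # most frames exit here with the smallest L
        return candidate, list_sizes[-1]     # CRC never matched: return best effort

    if __name__ == "__main__":
        decoded, used_list = adaptive_scl([0.7, -1.2, 0.3, 2.1])
        print("decoded with list size", used_list)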
Glutamine-Expanded Ataxin-7 Alters TFTC/STAGA Recruitment and Chromatin Structure Leading to Photoreceptor Dysfunction
Spinocerebellar ataxia type 7 (SCA7) is one of several inherited neurodegenerative disorders caused by a polyglutamine (polyQ) expansion, but it is the only one in which the retina is affected. Increasing evidence suggests that transcriptional alterations contribute to polyQ pathogenesis, although the mechanism is unclear. We previously demonstrated that the SCA7 gene product, ataxin-7 (ATXN7), is a subunit of the GCN5 histone acetyltransferase-containing coactivator complexes TFTC/STAGA. We show here that TFTC/STAGA complexes purified from SCA7 mice have normal TRRAP, GCN5, TAF12, and SPT3 levels and that their histone or nucleosomal acetylation activities are unaffected. However, rod photoreceptors from SCA7 mouse models showed severe chromatin decondensation. In agreement, polyQ-expanded ataxin-7 induced histone H3 hyperacetylation, resulting from an increased recruitment of TFTC/STAGA to specific promoters. Surprisingly, hyperacetylated genes were transcriptionally down-regulated, and expression analysis revealed that nearly all rod-specific genes were affected, leading to visual impairment in SCA7 mice. In conclusion, we describe here a set of events accounting for SCA7 pathogenesis in the retina, in which polyQ-expanded ATXN7 deregulates TFTC/STAGA recruitment to a subset of genes specifically expressed in rod photoreceptors, leading to chromatin alterations and the consequent progressive loss of rod photoreceptor function.
Les provinces de la musique (Professional practices, career trajectories and relationships to the profession among classical instrumentalists from Limoges)
LIMOGES-BU Lettres (870852107) / Sudoc
Selective Fiber Degeneration in the Peripheral Nerve of a Patient With Severe Complex Regional Pain Syndrome
Aims: Complex regional pain syndrome (CRPS) is characterized by chronic debilitating pain disproportionate to the inciting event and accompanied by motor, sensory, and autonomic disturbances. The pathophysiology of CRPS remains elusive. An exceptional case of severe CRPS leading to forearm amputation provided the opportunity to examine the histopathological features of the peripheral nerves. Methods: A 35-year-old female developed CRPS secondary to a low-voltage electrical injury. The CRPS was refractory to medical therapy and led to functional loss of the upper limb and repeated cutaneous wound infections requiring hospitalization. Specifically, the patient had exhausted a targeted conservative pain management programme prior to forearm amputation. Radial, median, and ulnar nerve specimens were obtained from the amputated limb and analyzed by light and transmission electron microscopy (TEM). Results: All samples showed features of selective myelinated nerve fiber degeneration (47–58% of fibers) on electron microscopy. Degenerating myelinated fibers were significantly larger than healthy fibers (p < 0.05) and corresponded to the larger Aα fibers (motor/proprioception), whilst the smaller Aδ (pain/temperature) fibers were spared. Groups of small unmyelinated C fibers (Remak bundles) also showed evidence of degeneration in all samples. Conclusions: We are the first to show large-fiber degeneration in CRPS using TEM. Degeneration of Aα fibers may lead to an imbalance in nerve signaling, inappropriately triggering the smaller healthy Aδ fibers, which transmit pain and temperature. These findings suggest that peripheral nerve degeneration may play a key role in CRPS. Improved knowledge of the pathogenesis will help develop more targeted treatments.
Generalizing sampling-based multilingual alignment
Sub-sentential alignment is the process by which multi-word translation units are extracted from sentence-aligned multilingual parallel texts. This process is required, for instance, in the course of training statistical machine translation systems. Standard approaches typically rely on the estimation of several probabilistic models of increasing complexity and on the use of various heuristics that make it possible to align, first, isolated words and then, by extension, groups of words. In this paper, we explore an alternative approach which relies on a much simpler principle: the comparison of occurrence profiles in sub-corpora obtained by sampling. After analyzing the strengths and weaknesses of this approach, we show how to improve the detection of multi-word translation units and evaluate these improvements on machine translation tasks.
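A heavily simplified sketch of this sampling principle could look as follows: draw random sub-corpora from a sentence-aligned bitext, record for every source and target word the set of samples it occurs in, and propose as translation candidates the word pairs whose occurrence profiles coincide. The toy bitext and the exact-profile matching below are invented for the example; the real method relies on many more samples and proper association scoring, so with such a tiny corpus several words inevitably share a profile and spurious pairs appear.

    # Simplified sketch of occurrence-profile comparison over sampled sub-corpora
    # (toy data; exact profile matching instead of the real method's scoring).
    import random
    from collections import defaultdict

    BITEXT = [
        ("the cat sleeps", "le chat dort"),
        ("the dog sleeps", "le chien dort"),
        ("a black cat", "un chat noir"),
        ("a black dog", "un chien noir"),
    ]

    def occurrence_profiles(bitext, n_samples=50, sample_size=2, seed=0):
        """For each word, record the set of random sub-corpora (sample indices) it occurs in."""
        rng = random.Random(seed)
        src_prof, tgt_prof = defaultdict(set), defaultdict(set)
        for i in range(n_samples):
            for src, tgt in rng.sample(bitext, sample_size):
                for w in src.split():
                    src_prof[w].add(i)
                for w in tgt.split():
                    tgt_prof[w].add(i)
        return src_prof, tgt_prof

    def aligned_pairs(src_prof, tgt_prof):
        """Pair words whose occurrence profiles are identical across the samples."""
        by_profile = defaultdict(list)
        for w, prof in tgt_prof.items():
            by_profile[frozenset(prof)].append(w)
        return [(s, t) for s, prof in src_prof.items() for t in by_profile.get(frozenset(prof), [])]

    if __name__ == "__main__":
        src_prof, tgt_prof = occurrence_profiles(BITEXT)
        for s, t in sorted(aligned_pairs(src_prof, tgt_prof)):
            print(s, "<->", t)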
Generalizing sub-sentential alignment through sampling (Généralisation de l'alignement sous-phrastique par échantillonnage)
Sub-sentential alignment is the process by which multi-word translation units are extracted from sentence-aligned multilingual parallel texts. Such an alignment is necessary, for instance, to train statistical machine translation systems. The standard approach to this task relies on the successive estimation of several probabilistic models of increasing complexity and on the use of heuristics that make it possible to align, first, isolated words and then, by extension, groups of words. In this paper, we consider an alternative approach, originally proposed in (Lardilleux & Lepage, 2008), that relies on a much simpler principle: the comparison of occurrence profiles in sub-corpora obtained by sampling. After analyzing the strengths and weaknesses of this approach, we show how to improve the detection of long translation units, and we evaluate these improvements on machine translation tasks.
- âŠ