
    Anatomical study of the posterior crossroads formed by the tendon of the flexor hallucis longus muscle and the retrotalar pulley

    ABSTRACT INTRODUCTION The tendon of the flexor hallucis longus (FHL) follows a complex course between its musculotendinous junction and its distal insertion at the base of the distal phalanx of the hallux. It can be restrained at three sites: at the knot of Henry, at the level of the sesamoids, and at the retrotalar pulley. This can have consequences for the lower limb, such as limited mobility or forefoot deformity. For the retrotalar pulley, which remains a poorly described structure in the literature, there is a conservative treatment (the "vacuum-cleaner cord" manoeuvre) or, if it fails, a surgical treatment (tenolysis). RATIONALE AND OBJECTIVES Only a few studies have examined the importance of the FHL, the consequences of functional hallux limitus, or the origins of hallux valgus. The aim of this study is therefore to explore in depth the anatomical region surrounding this retrotalar pulley, which has received little or no attention in anatomical work. METHODOLOGY This study includes eleven anatomical specimens (seven with hallux valgus and four without). The specimens will undergo a CT examination in order to obtain serial sections and to measure various parameters: muscle volume, pulley thickness/structure, ... The region studied will be 1.5 cm thick, extending from the talocalcaneal joint towards the tibia. The dissection will focus on describing the pulley and the articular structures on the medial side of the subtalar joint. Anatomical validation will then be carried out by preparing these specimens in the dissection room (four specimens examined by CT and then dissected, documented, and their synovial sheath identified, in feet with and without hallux).
EXPECTED RESULTS We expect to demonstrate that this retrotalar pulley, as well as the posterolateral talar tubercle, restrain the free gliding of the FHL in the groove formed between the two tubercles, and that they therefore have a deleterious effect on proper gait.

    Improving Multi-Objective Test Case Selection by Injecting Diversity in Genetic Algorithms

    A way to reduce the cost of regression testing consists of selecting or prioritizing subsets of test cases from a test suite according to some criteria. Besides greedy algorithms, cost-cognizant additional greedy algorithms, multi-objective optimization algorithms, and Multi-Objective Genetic Algorithms (MOGAs) have also been proposed to tackle this problem. However, previous studies have shown that there is no clear winner between greedy algorithms and MOGAs, and that their combination does not necessarily produce better results. In this paper we show that the optimality of MOGAs can be significantly improved by diversifying the solutions (subsets of the test suite) generated during the search process. Specifically, we introduce a new MOGA, named DIV-GA (DIversity-based Genetic Algorithm), based on the mechanisms of orthogonal design and orthogonal evolution, which increase diversity by injecting new orthogonal individuals during the search process. Results of an empirical study conducted on eleven programs show that DIV-GA outperforms both greedy algorithms and traditional MOGAs from the optimality point of view. Moreover, the solutions (subsets of the test suite) provided by DIV-GA are able to detect more faults than those of the other algorithms, while keeping the same test execution cost.
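
The selection problem the abstract describes can be stated as a bi-objective search over subsets of the test suite. A minimal sketch, assuming statement coverage and execution cost as the two objectives (the helper names and data layout are illustrative, not taken from the paper):

```python
def objectives(subset, coverage, cost):
    """Return (statements covered, total execution cost) of a test subset."""
    covered = set()
    for t in subset:
        covered |= coverage[t]
    return len(covered), sum(cost[t] for t in subset)

def dominates(a, b):
    """Pareto dominance: a is no worse in both objectives and strictly
    better in at least one (higher coverage, lower cost)."""
    cov_a, cost_a = a
    cov_b, cost_b = b
    return cov_a >= cov_b and cost_a <= cost_b and (cov_a > cov_b or cost_a < cost_b)
```

A MOGA such as DIV-GA evolves a population of such subsets and returns the non-dominated front under exactly this kind of dominance relation.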

    Computational complexity analysis of genetic programming

    Genetic programming (GP) is an evolutionary computation technique for solving problems in an automated, domain-independent way. Rather than identifying the optimum of a function, as in more traditional evolutionary optimization, the aim of GP is to evolve computer programs with a given functionality. While many GP applications have produced human-competitive results, the theoretical understanding of which problem characteristics and algorithm properties allow GP to be effective is comparatively limited. Compared with traditional evolutionary algorithms for function optimization, GP applications are further complicated by two additional factors: the variable-length representation of candidate programs, and the difficulty of evaluating their quality efficiently. Such difficulties considerably impact the runtime analysis of GP, where space complexity also comes into play. As a result, initial complexity analyses of GP have focused on restricted settings such as the evolution of trees with given structures or the estimation of solution quality using only a small polynomial number of input/output examples. However, the first computational complexity analyses of GP for evolving proper functions with defined input/output behavior have recently appeared. In this chapter, we present an overview of the state of the art.

    On the time and space complexity of genetic programming for evolving Boolean conjunctions

    Genetic programming (GP) is a general-purpose bio-inspired meta-heuristic for the evolution of computer programs. In contrast to its several successful applications, there is little understanding of the working principles behind GP. In this paper we present a performance analysis that sheds light on the behaviour of simple GP systems for evolving conjunctions of n variables (ANDn). The analysis of a random local search GP system with minimal terminal and function sets reveals the relationship between the number of iterations and the progress the GP makes toward finding the target function. Afterwards we consider a more realistic GP system equipped with a global mutation operator and prove that it can efficiently solve ANDn by producing programs of linear size that fit a training set to optimality and, with high probability, generalise well. Additionally, we consider more general problems which extend the terminal set with undesired variables or negated variables. In the presence of undesired variables, we prove that, if non-strict selection is used, the algorithm fits the complete training set efficiently, while the strict selection algorithm may fail with high probability unless the substitution operator is switched off. If negations are allowed, we show that while the algorithms fail to fit the complete training set, the constructed solutions generalise well. Finally, from a problem hardness perspective, we reveal the existence of small training sets that allow the evolution of the exact conjunctions even with access to negations or undesired variables.
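
A random local search GP system of the kind analysed can be sketched as follows. The representation (a list of variable indices read as their conjunction), the insert/delete mutation, and the parameter values are illustrative assumptions, not the paper's exact setup:

```python
import random

def eval_conj(prog, x):
    """A program is a list of variable indices, read as their conjunction."""
    return all(x[i] for i in prog)

def fitness(prog, train):
    """Number of labelled examples (x, AND_n(x)) classified correctly."""
    return sum(eval_conj(prog, x) == y for x, y in train)

def rls_gp(n, train, iters=5000, seed=1):
    """Random local search GP: insert or delete one leaf, keep the child
    if its training fitness does not decrease (non-strict selection)."""
    rng = random.Random(seed)
    prog, best = [], fitness([], train)
    for _ in range(iters):
        child = list(prog)
        if child and rng.random() < 0.5:
            child.pop(rng.randrange(len(child)))   # delete a random leaf
        else:
            child.append(rng.randrange(n))         # insert a random leaf
        f = fitness(child, train)
        if f >= best:
            prog, best = child, f
    return prog, best
```

On the complete training set for small n, this process quickly accumulates all n variables, mirroring the progress argument in the analysis.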

    On the Runtime Analysis of the Clearing Diversity-Preserving Mechanism

    Clearing is a niching method inspired by the principle of assigning the available resources of a niche to a single individual. The clearing procedure supplies these resources only to the best individual of each niche: the winner. So far, its analysis has been limited to experimental approaches, which have shown that clearing is a powerful diversity-preserving mechanism. Using rigorous runtime analysis to explain how and why it is a powerful method, we prove that a mutation-based evolutionary algorithm with a large enough population size and a phenotypic distance function always succeeds in optimising all functions of unitation for small niches in polynomial time, while a genotypic distance function requires exponential time. Finally, we prove that with both phenotypic and genotypic distances, clearing is able to find both optima of TwoMax and of several general classes of bimodal functions in polynomial expected time. We use empirical analysis to highlight some of the characteristics that make it a useful mechanism and to support the theoretical results.
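
The clearing procedure itself is compact. A sketch with niche capacity one, using TwoMax and a phenotypic distance (difference in the number of one-bits) as in the setting described above; details such as tie-breaking are illustrative:

```python
def clearing(population, fitness, dist, sigma):
    """Clear each niche (radius sigma) down to its single best individual:
    the winner keeps its fitness, everyone else in the niche is cleared to 0."""
    winners, cleared = [], {}
    for ind in sorted(population, key=fitness, reverse=True):
        if any(dist(ind, w) < sigma for w in winners):
            cleared[ind] = 0           # resident of an already-occupied niche
        else:
            winners.append(ind)        # niche winner keeps its resources
            cleared[ind] = fitness(ind)
    return cleared

def twomax(x):
    """TwoMax: maximise either the number of ones or the number of zeros."""
    return max(sum(x), len(x) - sum(x))

def pheno_dist(a, b):
    """Phenotypic distance: difference in the number of one-bits."""
    return abs(sum(a) - sum(b))
```

Selection then acts on the cleared fitness values, so individuals near both peaks of TwoMax survive as niche winners.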

    Simple hyper-heuristics control the neighbourhood size of randomised local search optimally for LeadingOnes

    Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this paper we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the 'simple' Random Gradient HH so that success is measured over a fixed period of time τ, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to k low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the k heuristics.
Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). Experimental analyses confirm these results for different problem sizes (up to n = 10^8) and shed some light on the best choices for the parameter τ in various situations.
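
The generalised mechanism can be sketched as follows for LeadingOnes. The period handling and parameter values below are illustrative simplifications of the analysed algorithm, not its exact definition:

```python
import random

def leading_ones(x):
    """Number of one-bits before the first zero."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def grg(n, sizes=(1, 2), tau=50, seed=3):
    """Generalised Random Gradient: keep the current neighbourhood size k
    for another period of tau steps whenever the last period produced an
    improvement; otherwise pick a new k uniformly at random."""
    rng = random.Random(seed)
    x = [0] * n
    k = rng.choice(sizes)
    while leading_ones(x) < n:
        improved = False
        for _ in range(tau):
            y = list(x)
            for i in rng.sample(range(n), k):   # RLS_k: flip k distinct bits
                y[i] = 1 - y[i]
            if leading_ones(y) > leading_ones(x):
                x, improved = y, True
        if not improved:
            k = rng.choice(sizes)
    return x
```

Near the optimum, large neighbourhood sizes stop producing improvements within a period, so the mechanism drifts back to small k, which is the adaptive behaviour the proofs quantify.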

    On steady-state evolutionary algorithms and selective pressure: Why inverse rank-based allocation of reproductive trials is best

    We analyse the impact of the selective pressure on the global optimisation capabilities of steady-state evolutionary algorithms (EAs). For the standard bimodal benchmark function TwoMax, we rigorously prove that using uniform parent selection leads, with high probability, to exponential runtimes for locating both optima for the standard (μ+1) EA and (μ+1) RLS with any polynomial population size. However, we prove that selecting the worst individual as parent leads to efficient global optimisation with overwhelming probability for reasonable population sizes. Since always selecting the worst individual may have detrimental effects on escaping from local optima, we consider the performance of stochastic parent selection operators with low selective pressure for a function class called TruncatedTwoMax, where one slope is shorter than the other. An experimental analysis shows that EAs equipped with inverse tournament selection (where the loser of the tournament is selected for reproduction) and small tournament sizes globally optimise TwoMax efficiently and effectively escape from the local optima of TruncatedTwoMax with high probability. Thus, they identify both optima efficiently while uniform (or stronger) selection fails in theory and in practice. We then show the power of inverse selection on function classes from the literature where populations are essential, by providing rigorous proofs or experimental evidence that it outperforms uniform selection with or without a restart strategy. We conclude the article by confirming our theoretical insights with an empirical analysis of the different selective pressures on standard benchmarks of the classical MaxSat and multidimensional knapsack problems.
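
The inverse tournament operator at the core of the study is simple to state. A sketch (the generic helper below is an illustration, not the paper's exact implementation):

```python
import random

def inverse_tournament(population, fitness, size, rng):
    """Inverse tournament: sample `size` individuals uniformly at random
    and return the LOSER (lowest fitness) as the parent for reproduction."""
    contestants = rng.sample(population, size)
    return min(contestants, key=fitness)
```

With small tournament sizes this yields a weak pressure biased towards the worst individuals, which is what lets the population keep climbing both slopes of TwoMax instead of collapsing onto one.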

    Discovery and Preliminary Characterization of Translational Modulators that Impair the Binding of eIF6 to 60S Ribosomal Subunits

    Eukaryotic initiation factor 6 (eIF6) is necessary for the nucleolar biogenesis of 60S ribosomes. However, most eIF6 resides in the cytoplasm, where it acts as an initiation factor. eIF6 is necessary for maximal protein synthesis downstream of growth factor stimulation. eIF6 is an anti-association factor that binds 60S subunits, in turn preventing premature 40S joining and thus the formation of inactive 80S subunits. It is widely thought that eIF6 anti-association activity is critical for its function. Here, we exploited and improved our assay for eIF6 binding to ribosomes (iRIA) in order to screen for modulators of eIF6 binding to the 60S. Three compounds, eIFsixty-1 (clofazimine), eIFsixty-4, and eIFsixty-6, were identified and characterized. All three inhibit the binding of eIF6 to the 60S in the micromolar range. eIFsixty-4 robustly inhibits cell growth, whereas eIFsixty-1 and eIFsixty-6 may have dose- and cell-specific effects. Puromycin labeling shows that eIFsixty-4 is a strong global translational inhibitor, whereas the other two are mild modulators. Polysome profiling and RT-qPCR show that all three inhibitors reduce the specific translation of well-known eIF6 targets. In contrast, none of them affects the nucleolar localization of eIF6. These data provide proof of principle that the generation of eIF6 translational modulators is feasible.

    Towards a Runtime Comparison of Natural and Artificial Evolution

    Evolutionary algorithms (EAs) form a popular optimisation paradigm inspired by natural evolution. In recent years the field of evolutionary computation has developed a rigorous analytical theory to analyse the runtimes of EAs on many illustrative problems. Here we apply this theory to a simple model of natural evolution. In the Strong Selection Weak Mutation (SSWM) evolutionary regime, the time between occurrences of new mutations is much longer than the time it takes for a mutated genotype to take over the population. In this situation, the population only contains copies of one genotype, and evolution can be modelled as a stochastic process evolving one genotype by means of mutation and selection between the resident and the mutated genotype. The probability of accepting the mutated genotype then depends on the change in fitness. We study this process, SSWM, from an algorithmic perspective, quantifying its expected optimisation time for various parameters and investigating differences from a similar evolutionary algorithm, the well-known (1+1) EA. We show that SSWM can have a moderate advantage over the (1+1) EA at crossing fitness valleys, and study an example where SSWM outperforms the (1+1) EA by taking advantage of information on the fitness gradient.
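
The acceptance rule that distinguishes SSWM from the (1+1) EA can be made concrete. A sketch assuming the classical fixation probability with selection strength β and population size N (the parameter names and defaults are illustrative):

```python
import math

def p_fix(delta_f, beta=1.0, pop_size=10):
    """Probability that SSWM accepts a mutant whose fitness change is
    delta_f: the classical fixation probability, with the limit 1/N
    taken for a neutral mutation (delta_f == 0)."""
    if delta_f == 0:
        return 1.0 / pop_size
    num = 1 - math.exp(-2 * beta * delta_f)
    den = 1 - math.exp(-2 * pop_size * beta * delta_f)
    return num / den
```

Unlike the (1+1) EA, which accepts every non-worsening offspring with probability 1, SSWM accepts improvements only with probability less than 1 and can also accept fitness decreases with small probability, which is what allows it to cross fitness valleys.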

    Genetic diversity and its impact on disease severity in respiratory syncytial virus subtype-A and -B bronchiolitis before and after pandemic restrictions in Rome

    Objectives: To scrutinize whether the high circulation of respiratory syncytial virus (RSV) observed in 2021-2022 and 2022-2023 was due to viral diversity, we characterized RSV-A and -B strains causing bronchiolitis in Rome, before and after the COVID-19 pandemic. Methods: RSV-positive samples, prospectively collected from infants hospitalized for bronchiolitis from 2017-2018 to 2022-2023, were sequenced in the G gene; phylogenetic results and amino acid substitutions were analyzed. Subtype-specific data were compared among seasons. Results: Predominance of RSV-A and -B alternated in the pre-pandemic seasons; RSV-A dominated in 2021-2022, whereas RSV-B was predominant in 2022-2023. RSV-A sequences were of the ON1 genotype but quite distant from the ancestor; two divergent clades included sequences from pre- and post-pandemic seasons. Nearly all RSV-B were of the BA10 genotype; a divergent clade included only strains from 2021-2022 and 2022-2023. RSV-A cases had a lower need for O2 therapy and intensive care during 2021-2022 than in all other seasons. RSV-B-infected infants were more frequently admitted to intensive care units and needed O2 in 2022-2023. Conclusions: The intense RSV peak in 2021-2022, driven by RSV-A phylogenetically related to pre-pandemic strains, is attributable to the immune debt created by pandemic restrictions. The RSV-B genetic divergence observed in post-pandemic strains may have increased the RSV-B-specific immune debt, being a possible contributor to bronchiolitis severity in 2022-2023.