
    Contemporary Anterior Cruciate Ligament Reconstruction


    Does combined posterior cruciate ligament and posterolateral corner reconstruction for chronic posterior and posterolateral instability restore normal knee function?

    Introduction: Posterior cruciate ligament (PCL) injuries are frequently associated with posterolateral corner (PLC) damage. These complex lesions are most often poorly tolerated clinically. Sound biomechanical treatment of these lesions entails obtaining a functional PCL and reconstructing sufficient posterolateral stability.
    Hypothesis: Surgical treatment of postero-posterolateral laxity (PPLL) re-establishes sufficient anatomical integrity to provide stability and satisfactory knee function.
    Material and methods: In this retrospective, continuous, single-operator study, 21 patients operated on for chronic PPLL with combined reconstruction of the PCL and PLC were reviewed at a minimum 1-year follow-up. Clinical and subjective outcomes were evaluated using the IKDC score. Surgical correction of posterior laxity was quantified clinically and radiologically on dynamic posterior drawer images (posterior Telos™ stress test and hamstring-contraction lateral view).
    Results: The mean subjective IKDC score was 62.8 at the last follow-up versus 54.5 preoperatively (NS). Preoperatively, all patients were classified in groups C and D. Postoperatively, 13 of 21 patients were classified in groups A and B according to the overall clinical IKDC score. The radiological gain in laxity was 51% on the hamstring-contraction films and 67% on the posterior Telos™ images (p < 0.05).
    Discussion: The objective of surgical treatment is to re-establish anatomical integrity to the greatest possible extent. The clinical and radiological laxity results fall short of that objective but agree with the literature. The subjective evaluation showed that this operation can provide sufficient function for standard daily activities but not for sports activities.
    Level of evidence: Level IV retrospective study.

    Measuring the influence of concept detection on video retrieval

    There is an increasing emphasis on including semantic concept detection as part of video retrieval. This represents a modality for retrieval quite different from metadata-based and keyframe-similarity-based approaches. One of the premises on which its success is based is that good-quality detection is available to guarantee retrieval quality. But how good does the feature detection actually need to be? Is it possible to achieve good retrieval quality even with poor-quality concept detection, and if so, what is the 'tipping point' below which detection accuracy ceases to be beneficial? In this paper we explore this question using a collection of rushes video in which we artificially vary the quality of detection of semantic features and study the impact on the resulting retrieval. Our results show that improving or degrading the performance of concept detectors is not directly reflected in retrieval performance, which raises interesting questions about how accurate concept detection really needs to be.
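The degradation experiment described in this abstract can be sketched as a toy simulation (not the authors' actual TRECVid setup): detector scores are modelled as ground truth plus Gaussian noise of varying strength, shots are ranked by score, and average precision is computed at each noise level. All numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def average_precision(ranked_relevance):
    """Average precision over a ranked list of 0/1 relevance labels."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

n_shots = 1000
truth = rng.random(n_shots) < 0.1              # ~10% of shots contain the concept

aps = {}
for noise in (0.1, 0.5, 1.0, 2.0):             # low noise = good detector
    scores = truth.astype(float) + rng.normal(0.0, noise, n_shots)
    ranking = np.argsort(-scores)              # rank shots by detector confidence
    aps[noise] = average_precision(truth[ranking])
    print(f"detector noise {noise:.1f} -> AP {aps[noise]:.3f}")
```

Running this shows retrieval quality degrading gradually rather than in lockstep with detector quality, which is the kind of non-linear relationship the paper investigates.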

    High-level feature detection from video in TRECVid: a 5-year retrospective of achievements

    Successful and effective content-based access to digital video requires fast, accurate and scalable methods to determine the video content automatically. A variety of contemporary approaches to this rely on text taken from speech within the video, or on matching one video frame against others using low-level characteristics like colour, texture, or shapes, or on determining and matching objects appearing within the video. Possibly the most important technique, however, is one which determines the presence or absence of a high-level or semantic feature within a video clip or shot. By utilizing dozens, hundreds or even thousands of such semantic features we can support many kinds of content-based video navigation. Critically, however, this depends on being able to determine whether each feature is or is not present in a video clip. The last 5 years have seen much progress in the development of techniques to determine the presence of semantic features within video. This progress can be tracked in the annual TRECVid benchmarking activity, where dozens of research groups measure the effectiveness of their techniques on common data using an open, metrics-based approach. In this chapter we summarise the work done on the TRECVid high-level feature task, showing the progress made year-on-year. This provides a fairly comprehensive statement of the state of the art regarding this important task, not just for one research group or one approach, but across the spectrum. We then use this past and ongoing work as a basis for highlighting the trends emerging in this area, and the questions which remain to be addressed before we can achieve large-scale, fast and reliable high-level feature detection on video.

    Burial and exhumation in a subduction wedge: mutual constraints from thermo-mechanical modeling and natural P-T-t data (Sch. Lustrés, W. Alps)

    The dynamic processes leading to synconvergent exhumation of high-pressure low-temperature (HP-LT) rocks at oceanic accretionary margins, as well as the mechanisms maintaining a nearly steady-state regime in most accretionary prisms, remain poorly understood. The present study aims to better constrain the rheology, thermal conductivity, and chemical properties of the sediments in subduction zones. To that end, oceanic subduction is modeled using a forward visco-elasto-plastic thermomechanical code (PARA(O)VOZ-FLAC algorithm), and synthetic pressure-temperature-time (P-T-t) paths predicted by the numerical experiments are compared with natural P-T-t paths. The study focuses on the well-constrained Schistes Lustrés complex (SL; western Alps), which is thought to represent the fossil accretionary wedge of the Liguro-Piemontese Ocean. For convergence rates comparable to Alpine subduction rates (∼3 cm yr−1), the best-fitting results are obtained for high-viscosity, low-density wedge sediments and/or a strong lower continental crust. After a transition period of 3-5 Ma, the modeled accretionary wedges reach a steady state which lasts over 20 Ma. Over that time span a significant proportion (∼35%) of the sediments entering the wedge undergoes P-T conditions typical of the SL complex (∼15-20 kbar; 350-450°C), with similar P-T loops. Computed exhumation rates (<6 mm yr−1) are in agreement with observations (1-5 mm yr−1). In the presence of a serpentinite layer below the oceanic crust, exhumation of oceanic material takes place at rates approaching 3 mm yr−1. In all experiments the total pressure in the accretionary wedge never deviated by more than ±10% from the lithostatic component.
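The ∼35% statistic above amounts to bookkeeping: checking the peak P-T conditions reached by tracked sediment particles against the Schistes Lustrés window (∼15-20 kbar, 350-450 °C). A minimal sketch of that check, with invented peak conditions rather than actual model output:

```python
# Schistes Lustrés P-T window quoted in the study: ~15-20 kbar, 350-450 °C
P_MIN, P_MAX = 15.0, 20.0      # pressure bounds, kbar
T_MIN, T_MAX = 350.0, 450.0    # temperature bounds, °C

# peak (P, T) reached by tracked sediment particles -- illustrative values only
peaks = [(17.0, 400.0), (8.0, 300.0), (19.5, 440.0), (25.0, 500.0), (16.0, 360.0)]

in_window = [P_MIN <= p <= P_MAX and T_MIN <= t <= T_MAX for p, t in peaks]
fraction = sum(in_window) / len(peaks)
print(f"{fraction:.0%} of tracked particles reach SL-type peak conditions")
```

In the actual experiments this test would be applied to every Lagrangian particle entering the wedge, over the full steady-state interval.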

    How do surface properties influence mineral dust emissions in the Sahelian region? A modelling case study during AMMA

    Tropical mesoscale convective systems (MCSs) are a prominent feature of African meteorology. Continuous monitoring of aeolian activity at an experimental site in Niger shows that such events are responsible for the major part of annual local wind erosion, i.e. for most Sahelian dust emissions [Rajot, 2001]. However, the net effect of these MCSs on the mineral dust budget has yet to be estimated: on the one hand, these systems produce extremely high surface wind velocities leading to intense dust uptake; on the other hand, the rainfall associated with them can efficiently remove the emitted dust from the atmosphere. High-resolution modelling appears to be a relevant approach to correctly reproduce the surface meteorology associated with such systems [Bouet et al., submitted]. The question that now arises concerns the reliability of the surface characteristics available for the Sahelian region, especially soil texture and surface roughness, which are critical parameters for dust emission. Contrary to arid regions, which are now well documented, data are still missing to correctly characterise semi-arid regions like the Sahel, in particular because of the well-pronounced annual cycles of precipitation and vegetation in these regions and the impact of land use on surface properties. This study focuses on a case of dust emission associated with the passage of an MCS observed during one of the Special Observing Periods of the international African Monsoon Multidisciplinary Analysis (AMMA – SOPs 1-2) program. The simulations were made using the Regional Atmospheric Modeling System (RAMS, Cotton et al. [2003]) coupled online with the dust production model developed by Marticorena and Bergametti [1995] and recently improved by Laurent et al. [2008] for Africa.
    The sensitivity of the simulated dust emission to surface features is investigated using different data sets of surface properties (Harmonized World Soil Database, HWSD) and land use (GLOBCOVER). In-situ measurements of dust concentrations (both ground-based and airborne) and of dust emission flux are used to validate the simulations.
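The dust production scheme cited above (Marticorena and Bergametti [1995]) builds on a White (1979)-type saltation flux, which is zero below an erosion threshold and rises roughly as the cube of friction velocity above it; this is why the extreme MCS winds matter so much. A hedged sketch of that relationship (the constant K and the threshold value are assumptions, not the scheme's calibrated parameters):

```python
RHO_A = 1.227e-3     # air density, g cm^-3 (assumed near-surface value)
G = 981.0            # gravitational acceleration, cm s^-2
K = 1.0              # dimensionless tuning constant (assumed, not calibrated)

def saltation_flux(u_star, u_star_t):
    """White (1979)-type horizontal saltation flux, g cm^-1 s^-1.

    u_star: wind friction velocity (cm/s); u_star_t: erosion threshold (cm/s).
    Flux is zero below threshold and grows roughly as u_star**3 above it.
    """
    if u_star <= u_star_t:
        return 0.0
    r = u_star_t / u_star
    return K * RHO_A / G * u_star**3 * (1.0 + r) * (1.0 - r**2)

for u in (20.0, 40.0, 60.0):
    print(f"u* = {u:.0f} cm/s -> flux = {saltation_flux(u, 35.0):.4f} g cm^-1 s^-1")
```

The threshold u_star_t is where the surface properties discussed in the abstract (soil texture, roughness, vegetation) enter: they control both the threshold itself and the partition of wind stress over the erodible surface.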

    Personalisation and recommender systems in digital libraries

    Widespread use of the Internet has resulted in digital libraries that are increasingly used by diverse communities of users for diverse purposes, and in which sharing and collaboration have become important social elements. As such libraries become commonplace, as their contents and services become more varied, and as their patrons become more experienced with computer technology, users will expect more sophisticated services from these libraries. A simple search function, normally an integral part of any digital library, increasingly leads to user frustration as user needs become more complex and as the volume of managed information increases. Proactive digital libraries, in which the library evolves beyond passive, untailored services, offer great potential for addressing these issues through techniques such as personalisation and recommender systems. In this paper, following on from the DELOS/NSF Working Group on Personalisation and Recommender Systems for Digital Libraries, which met and reported during 2003, we present some background material on the scope of personalisation and recommender systems in digital libraries. We then outline the working group’s vision for the evolution of digital libraries and the role that personalisation and recommender systems will play, and we present a series of research challenges, specific recommendations and research priorities for the field.

    Climate-induced changes in river flow regimes will alter future bird distributions

    Anthropogenic forcing of the climate is causing an intensification of the global water cycle, leading to an increase in the frequency and magnitude of floods and droughts. River flow shapes the ecology of riverine ecosystems, and climate-driven changes in river flows are predicted to have severe consequences for riverine species across all levels of trophic organization. However, understanding of species' responses to variation in flow is limited by a lack of quantitative modelling of hydroecological interactions. Here, we construct a Bioclimatic Envelope Model (BEM) ensemble based on a suite of plausible future flow scenarios to show how predicted alterations in flow regimes may alter the distribution of a predatory riverine species, the White-throated Dipper (Cinclus cinclus). Models predicted a gradual decline in the probability of dipper occurrence between the present day and 2098. This decline was most rapid in western areas of Great Britain and was principally driven by a projected decrease in flow magnitude and in variability around low flows. Climate-induced changes in river flow may, therefore, represent a previously unidentified mechanism by which climate change may mediate range shifts in birds and other riverine biota.
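A bioclimatic envelope model of the kind described relates occurrence probability to environmental covariates, here flow magnitude and variability around low flows, commonly through a logistic link. A minimal sketch with invented coefficients and covariate values (the real BEM ensemble is fitted to survey data and driven by the flow scenarios):

```python
import math

def occurrence_probability(flow_magnitude, flow_variability, coefs=(-1.0, 0.8, 0.5)):
    """Toy logistic envelope model: P(occurrence) from standardised flow covariates.

    coefs = (intercept, magnitude coefficient, variability coefficient); invented.
    """
    b0, b1, b2 = coefs
    z = b0 + b1 * flow_magnitude + b2 * flow_variability
    return 1.0 / (1.0 + math.exp(-z))           # logistic link

# present-day vs projected 2098 conditions (standardised, invented values)
p_now = occurrence_probability(1.0, 1.0)
p_2098 = occurrence_probability(0.3, 0.4)       # reduced magnitude and low-flow variability
print(f"P(now) = {p_now:.2f}, P(2098) = {p_2098:.2f}")
```

With positive coefficients on both covariates, the projected reduction in flow magnitude and low-flow variability lowers the modelled occurrence probability, mirroring the decline the abstract reports.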

    Evaluating the performance of hydrological models via cross-spectral analysis: case study of the Thames Basin, United Kingdom

    Nine distributed hydrological models, forced with common meteorological inputs, simulated naturalised daily discharge from the Thames Basin for 1963-2001. While model-dependent evaporative losses are critical for modelling mean discharge, multiple physical processes at many time scales influence the variability and timing of discharge. Here we advocate the use of cross-spectral analysis to measure how the average amplitude, and independently the average phase, of modelled discharge differ from observed discharge at daily to decadal time scales. Simulation of the spectral properties of the model discharge via numerical manipulation of precipitation confirms that the modelled transformation involves runoff generation and routing that amplify the annual cycle, while subsurface storage and routing of runoff between grid boxes introduce most of the autocorrelation and delays. Too much or too little modelled evaporation affects discharge variability, as do the capacity and time constants of the modelled stores. Additionally, the performance of specific models would improve if four issues were tackled: a) non-sinusoidal annual variations in model discharge (prolonged low baseflow and shortened high baseflow, 3 models); b) excessive attenuation of high-frequency variability (3 models); c) excessive short-term variability in winter half-years but too little variability in summer half-years (2 models); and d) introduction of phase delays at the annual scale only during runoff generation (3 models) or only during routing (1 model). Cross-spectral analysis reveals how re-runs of one model using alternative methods of runoff generation, designed to improve performance at weekly to monthly time scales, degraded performance at the annual scale. The cross-spectral approach facilitates hydrological model diagnosis and development.
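The cross-spectral diagnostics advocated here, separate estimates of average amplitude (gain) and average phase at each time scale, can be sketched with a raw FFT cross-spectrum on synthetic daily discharge. In practice one would average over segments (e.g. Welch's method) to reduce noise; the 1.4x annual amplification and 20-day delay below are invented, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(4000)                                  # ~11 years of daily values
observed = np.sin(2 * np.pi * t / 365.25) + 0.3 * rng.normal(size=t.size)
# hypothetical model run: annual cycle amplified 1.4x and delayed by 20 days
modelled = 1.4 * np.sin(2 * np.pi * (t - 20) / 365.25) + 0.2 * rng.normal(size=t.size)

X = np.fft.rfft(observed)
Y = np.fft.rfft(modelled)
freqs = np.fft.rfftfreq(t.size, d=1.0)               # cycles per day

Pxy = np.conj(X) * Y                                 # raw cross-spectrum
gain = np.abs(Pxy) / np.abs(X) ** 2                  # amplitude ratio, modelled/observed
phase = np.angle(Pxy)                                # phase shift, radians (negative = delay)

i = np.argmin(np.abs(freqs - 1 / 365.25))            # bin nearest the annual frequency
print(f"annual gain ~ {gain[i]:.2f}, annual phase ~ {phase[i]:.2f} rad")
```

At the annual bin the gain recovers the amplification (>1) and the phase is negative, recovering the delay; applied bin by bin across the spectrum, this separates amplitude errors from timing errors exactly as the abstract describes.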