Refinement of clinical X-ray computed tomography (CT) scans containing metal implants
X-ray computed tomography (CT) data contains artefacts from many sources, with sufficient prominence to affect diagnostic utility when metal is present in the scans. These artefacts can be reduced, usually by the removal and in-filling of any sinogram data which has been affected by metal, and several such techniques have been proposed. Most of them are prone to introducing new artefacts into the CT data or may take a long time to correct the data. It is the purpose of this paper to introduce a new technique which is fast, yet can effectively remove most artefacts without introducing significant new ones. The new metal artefact reduction technique (RMAR) consists of an iterative refinement of the CT data by alternately forward- and back-projecting the part of the reconstruction near to metal. The forward-projection is corrected by making use of a prior derived from the reconstructed data which is independently estimated for each projection angle, and smoothed using a newly developed Bitonic filter. The new technique is compared with previously published (LI, NMAR, MDT) and commercial (O-MAR, IMAR) alternatives, quantitatively on phantom data, and qualitatively on a selection of clinical scans, mostly of the hip. The phantom data is from two recently published studies, enabling direct comparison with previous results. The results show an increased reduction of artefacts on the four phantom data sets tested. On two of the phantom data sets, RMAR is significantly better (p < 0.001) than all other techniques; on one it is as good as any other technique, and on the last it is only beaten by the Metal Deletion Technique (p < 0.001), which is significantly slower. On the clinical data sets, RMAR shows visually similar performance to MDT, with better preservation of bony features close to metal implants, but perhaps slightly reduced homogeneity in the far field. For typical CT data, RMAR can correct each image in 3–8 s, which is more than one hundred times faster than MDT. 
The new technique is demonstrated to have performance at least as good as MDT, with both out-performing other approaches. However, it is much faster than MDT, and shows better preservation of data very close to metal.
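The sinogram in-filling that these techniques share can be sketched with the simplest of the compared baselines, linear interpolation (LI): for each projection angle, detector bins flagged as metal-affected are replaced by values interpolated from unaffected neighbours. A minimal numpy sketch on toy data, illustrative only and not the authors' RMAR implementation:

```python
import numpy as np

def li_inpaint(sinogram, metal_mask):
    """Replace metal-affected bins in each projection row by linear
    interpolation from the nearest unaffected neighbours (LI baseline)."""
    out = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):   # one row per projection angle
        bad = metal_mask[i]
        if bad.any() and not bad.all():
            out[i, bad] = np.interp(bins[bad], bins[~bad], out[i, ~bad])
    return out

# Toy example: a one-row "sinogram" with a metal trace in bins 2-3
sino = np.array([[1.0, 2.0, 9.0, 9.0, 5.0, 6.0]])
mask = np.array([[False, False, True, True, False, False]])
print(li_inpaint(sino, mask))  # bins 2-3 become 3.0 and 4.0
```

RMAR replaces this plain interpolation with a smoothed, per-angle prior derived from the reconstruction itself, which is what reduces the new artefacts that LI tends to introduce.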
Novel three-dimensional bone ‘mapping’ software can help assess progression of osseous metastases from routine CT
Imaging of bone metastasis response to therapy is a research priority. Stradwin is a new software tool with demonstrated sub-voxel accuracy in assessing cortical bone properties from routine CT. We applied this technology to osseous metastases, with particular focus on disease progression, using prostate cancer as a model. 3D-rendered ‘bone-maps’ were produced for 20 men with advanced prostate cancer, including a sub-cohort of 9 who had undergone serial scans. Agreement between modalities was assessed for both baseline interpretation and assessments of progression. Bone-maps took significantly less time to interpret than CT bone windows (P < 0.001). Initial bone-mapping, without adjustment, demonstrated sensitivity and specificity for suspicious areas on CT of 70.7% and 73.1% respectively. Evaluating disease over time, concordance between bone-maps and current practice using RECIST outcomes was 100%.
This study demonstrates the feasibility and potential use of this free post-processing software in the serial assessment of osseous metastases.
Quantitative 3D imaging parameters improve prediction of hip osteoarthritis outcome
Abstract: Osteoarthritis is an increasingly important health problem for which the main treatment remains joint replacement. Therapy developments have been hampered by a lack of biomarkers that can reliably predict disease, while 2D radiographs interpreted by human observers are still the gold standard for clinical trial imaging assessment. We propose a 3D approach using computed tomography—a fast, readily available clinical technique—that can be applied in the assessment of osteoarthritis using a new quantitative 3D analysis technique called joint space mapping (JSM). We demonstrate the application of JSM at the hip in 263 healthy older adults from the AGES-Reykjavík cohort, examining relationships between 3D joint space width, 3D joint shape, and future joint replacement. Using JSM, statistical shape modelling, and statistical parametric mapping, we show an 18% improvement in prediction of joint replacement using 3D metrics combined with radiographic Kellgren & Lawrence grade (AUC 0.86) over the existing 2D FDA-approved gold standard of minimum 2D joint space width (AUC 0.73). We also show that assessment of joint asymmetry can reveal significant differences between individuals destined for joint replacement versus controls at regions of the joint that are not captured by radiographs. This technique is immediately implementable with standard imaging technologies
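The AUC figures quoted above (0.73 for the 2D standard vs 0.86 for the combined 3D metrics) measure how often a predictor ranks a future joint-replacement case above a control. A minimal pure-Python sketch of the rank-based (Mann-Whitney) AUC, using hypothetical scores rather than the study's data:

```python
def auc(scores_pos, scores_neg):
    """Rank-based (Mann-Whitney) AUC: the probability that a case destined
    for joint replacement scores higher than a control (ties count 0.5)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical risk scores: a 2D-only metric vs a combined 2D + 3D metric
pos_2d, neg_2d = [0.8, 0.6, 0.7], [0.5, 0.7, 0.4]   # cases vs controls
pos_3d, neg_3d = [0.9, 0.8, 0.7], [0.5, 0.6, 0.4]
print(round(auc(pos_2d, neg_2d), 3))  # 0.833: partial separation
print(round(auc(pos_3d, neg_3d), 3))  # 1.0: perfect separation in this toy
```

An AUC of 0.5 means the predictor is no better than chance, so the jump from 0.73 to 0.86 represents the reported 18% improvement in discrimination.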
An Analysis by Synthesis Approach for Automatic Vertebral Shape Identification in Clinical QCT
Quantitative computed tomography (QCT) is a widely used tool for osteoporosis diagnosis and monitoring. The assessment of cortical markers like cortical bone mineral density (BMD) and thickness is a demanding task, mainly because of the limited spatial resolution of QCT. We propose a direct model-based method to automatically identify the surface through the center of the cortex of the human vertebra. We develop a statistical bone model and analyze its probability distribution after the imaging process. Using an as-rigid-as-possible deformation, we find the cortical surface that maximizes the likelihood of our model given the input volume. Using the European Spine Phantom (ESP) and a high-resolution μCT scan of a cadaveric vertebra, we show that the proposed method is able to accurately identify the real center of the cortex ex vivo. To demonstrate the in-vivo applicability of our method, we use manually obtained surfaces for comparison. Comment: Presented at the German Conference on Pattern Recognition (GCPR) 2018 in Stuttgart.
Poor reproducibility of compression elastography in the Achilles tendon: same day and consecutive day measurements.
OBJECTIVE
To determine the reproducibility of compression elastography (CE) when measuring strain data, a measure of stiffness of the human Achilles tendon in vivo, over consecutive measures, consecutive days and when using different foot positions.
MATERIALS AND METHODS
Eight participants (4 males, 4 females; mean age 25.5 ± 2.51 years, range 21-30 years; height 173.6 ± 11.7 cm, range 156-189 cm) had five consecutive CE measurements taken on one day and a further five CE measures taken, one per day, at the same time of day, every day for a consecutive 5-day period. These 80 measurements were used to assess both the repeatability and reproducibility of the technique. Means, standard deviations, coefficient of variation (CV), Pearson correlation analysis (R) and intra-class correlation coefficients (ICC) were calculated.
RESULTS
For CE data, all CVs were above 53%, R values indicated no-to-weak correlations between measures at best (range 0.01-0.25), and ICC values were all classified in the poor category (range 0.00-0.11). CVs for length and diameter measures were acceptably low indicating a high level of reliability.
CONCLUSIONS
Given the wide variation obtained in the CE results, it was concluded that CE using this specific system has a low level of reproducibility for measuring the stiffness of the human Achilles tendon in vivo over consecutive days, consecutive measures, and in different foot positions.
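The reliability statistics reported above can be sketched in a few lines of stdlib Python. The ICC below is the one-way random, single-measure form, ICC(1,1), which may differ from the exact variant the authors computed; the data are synthetic:

```python
import statistics

def cv_percent(xs):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    return 100.0 * statistics.stdev(xs) / statistics.mean(xs)

def icc_1_1(table):
    """One-way random, single-measure ICC(1,1) for a subjects x repeats table."""
    n, k = len(table), len(table[0])
    grand = statistics.mean(x for row in table for x in row)
    # Between-subject and within-subject mean squares (one-way ANOVA)
    msb = k * sum((statistics.mean(row) - grand) ** 2 for row in table) / (n - 1)
    msw = sum((x - statistics.mean(row)) ** 2
              for row in table for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Repeats that track each subject closely give an ICC near 1;
# the study's CE measures fell in the "poor" band (0.00-0.11) instead.
repeats = [[1.0, 1.1], [5.0, 5.1], [9.0, 9.2]]
print(round(icc_1_1(repeats), 3))  # 0.999
```

With CVs above 53% and ICCs below 0.11, the CE strain measures sit far from this well-reproduced example, which is the basis of the authors' conclusion.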
A classification of diabetic foot infections using ICD-9-CM codes: application to a large computerized medical database
Background
Diabetic foot infections are common, serious, and varied. Diagnostic and treatment strategies are correspondingly diverse. It is unclear how patients are managed in actual practice and how outcomes might be improved. Clarification will require study of large numbers of patients, such as are available in medical databases. We have developed and evaluated a system for identifying and classifying diabetic foot infections that can be used for this purpose.
Methods
We used the Veterans Affairs (VA) Diabetes Epidemiology Cohorts (DEpiC) database to conduct a retrospective observational study of patients with diabetic foot infections. DEpiC contains computerized VA and Medicare patient-level data for patients with diabetes since 1998. We determined which ICD-9-CM codes served to identify patients with different types of diabetic foot infections and ranked them in declining order of severity: Gangrene, Osteomyelitis, Ulcer, Foot cellulitis/abscess, Toe cellulitis/abscess, Paronychia. We evaluated our classification by examining its relationship to patient characteristics, diagnostic procedures, treatments given, and medical outcomes.
Results
There were 61,007 patients with foot infections, of whom 42,063 were classifiable into one of our predefined groups. The different types of infection were related to expected patient characteristics, diagnostic procedures, treatments, and outcomes. Our severity ranking showed a monotonic relationship to hospital length of stay, amputation rate, transition to long-term care, and mortality.
Conclusions
We have developed a classification system for patients with diabetic foot infections that is expressly designed for use with large, computerized, ICD-9-CM coded administrative medical databases. It provides a framework that can be used to conduct observational studies of large numbers of patients in order to examine treatment variation and patient outcomes, including the effect of new management strategies, implementation of practice guidelines, and quality improvement initiatives.
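The classification logic described above, assigning each patient the most severe category matched by any of their codes, can be sketched as a prefix lookup. The ICD-9-CM prefixes below are illustrative stand-ins, not the study's validated code list:

```python
# Severity ranking from the paper (most to least severe); the ICD-9-CM
# prefixes are hypothetical examples, not the study's validated mapping.
SEVERITY = ["Gangrene", "Osteomyelitis", "Ulcer",
            "Foot cellulitis/abscess", "Toe cellulitis/abscess", "Paronychia"]
CODE_PREFIXES = {
    "785.4":  "Gangrene",
    "730":    "Osteomyelitis",
    "707.1":  "Ulcer",
    "682.7":  "Foot cellulitis/abscess",
    "681.10": "Toe cellulitis/abscess",
    "681.11": "Paronychia",
}

def classify(codes):
    """Assign a patient the most severe category matched by any code."""
    hits = {cat for code in codes
            for prefix, cat in CODE_PREFIXES.items()
            if code.startswith(prefix)}
    for cat in SEVERITY:              # scan from most to least severe
        if cat in hits:
            return cat
    return None                       # not classifiable (18,944 patients here)

print(classify(["707.15", "682.7"]))  # Ulcer outranks foot cellulitis
```

Taking the single most severe matching category per patient is what makes the ranking comparable against the monotonic outcomes (length of stay, amputation, mortality) reported above.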
What's in a Sign? Trademark Law and Economic Theory
Abstract: The aim of this paper is to summarise the extant theory as it relates to the economics of trademark, and to give some suggestions for further research with reference to distinct streams of literature. The proposed line of study inevitably looks at the complex relationship between signs and economics. Trademark is a sign introduced to remedy a market failure. It facilitates purchase decisions by indicating the provenance of the goods, so that consumers can identify specific quality attributes deriving from their own, or others', past experience. Trademark holders, for their part, have an incentive to invest in quality because they will be able to reap the benefits in terms of reputation. In other words, trademark law becomes an economic device which, appropriately designed, can produce incentives for maximising market efficiency. This role must, of course, be recognised, as a vast body of literature has done, together with its many positive economic consequences. Nevertheless, trademark appears to have additional economic effects that should be properly recognised: it can promote market power and encourage the emergence of rent-seeking behaviours. It gives rise to an idiosyncratic economics of signs in which very strong protection tends to be assured, even though the welfare effects are as yet poorly understood. In this domain much remains to be done, and the challenge to researchers is open.
A volumetric technique for fossil body mass estimation applied to Australopithecus afarensis
Fossil body mass estimation is a well-established practice within the field of physical anthropology. Previous studies have relied upon traditional allometric approaches, in which the relationship between one or several skeletal dimensions and body mass in a range of modern taxa is used in a predictive capacity. The lack of relatively complete skeletons has thus far limited the potential application of alternative mass estimation techniques, such as volumetric reconstruction, to fossil hominins. Yet across vertebrate paleontology more broadly, novel volumetric approaches are yielding predicted values for fossil body mass very different from those estimated by traditional allometry. Here we present a new digital reconstruction of Australopithecus afarensis (A.L. 288-1; ‘Lucy’) and a convex hull-based volumetric estimate of body mass. The technique relies upon identifying a predictable relationship between the ‘shrink-wrapped’ volume of the skeleton and known body mass in a range of modern taxa, and subsequent application to an articulated model of the fossil taxon of interest. Our calibration dataset comprises whole-body computed tomography (CT) scans of 15 species of modern primate. The resulting predictive model is characterized by a high correlation coefficient (r² = 0.988) and a percentage standard error of 20%, and performs well when applied to modern individuals of known body mass. Application of the convex hull technique to A. afarensis results in a relatively low body mass estimate of 20.4 kg (95% prediction interval 13.5–30.9 kg). A sensitivity analysis on the articulation of the chest region highlights the sensitivity of our approach to the reconstruction of the trunk, and the incomplete nature of the preserved ribcage may explain the low values for predicted body mass here.
We suggest that the heaviest of previous estimates would require the thorax to be expanded to an unlikely extent, yet this can only be properly tested when more complete fossils are available.
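The calibration step described above, relating ‘shrink-wrapped’ skeletal volume to known body mass across modern taxa, is conventionally fitted as a power law in log-log space. A minimal sketch on synthetic data; the coefficients are illustrative, not the paper's calibration:

```python
import math

def fit_power_law(volumes, masses):
    """Least-squares fit of mass = a * volume**b in log-log space."""
    xs = [math.log(v) for v in volumes]
    ys = [math.log(m) for m in masses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope b and intercept exp(my - b*mx) of the log-log regression line
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic calibration data generated from mass = 1.2 * volume**0.95
vols = [5.0, 10.0, 20.0, 40.0]
masses = [1.2 * v ** 0.95 for v in vols]
a, b = fit_power_law(vols, masses)
print(round(a, 3), round(b, 3))  # recovers 1.2 and 0.95
```

In the study itself the fit is over convex-hull volumes from whole-body CT of 15 primate species, and the prediction interval (13.5–30.9 kg for A.L. 288-1) comes from the regression's standard error rather than the point estimate alone.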