    From 3D Point Clouds to Pose-Normalised Depth Maps

    We consider the problem of generating either pairwise-aligned or pose-normalised depth maps from noisy 3D point clouds in relatively unrestricted poses. Our system is deployed in a 3D face alignment application and consists of the following four stages: (i) data filtering, (ii) nose tip identification and sub-vertex localisation, (iii) computation of the (relative) face orientation, and (iv) generation of either a pose-aligned or a pose-normalised depth map. We generate an implicit radial basis function (RBF) model of the facial surface, and this is employed within all four stages of the process. For example, in stage (ii), construction of novel invariant features is based on sampling this RBF over a set of concentric spheres to give a spherically-sampled RBF (SSR) shape histogram. In stage (iii), a second novel descriptor, called an isoradius contour curvature signal, is defined, which allows rotational alignment to be determined using a simple process of 1D correlation. We test our system on both the University of York (UoY) 3D face dataset and the Face Recognition Grand Challenge (FRGC) 3D data. For the more challenging UoY data, our SSR descriptors significantly outperform three variants of spin images, successfully identifying nose vertices at a rate of 99.6%. Nose localisation performance on the higher-quality FRGC data, which has only small pose variations, is 99.9%. Our best system successfully normalises the pose of 3D faces at rates of 99.1% (UoY data) and 99.6% (FRGC data).
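
    The 1D-correlation alignment step lends itself to a compact illustration. The sketch below is not the authors' code: it assumes the isoradius contour curvature signal has already been extracted and resampled at uniform angular steps, and recovers the in-plane rotation between two such periodic signals by circular cross-correlation (computed via the FFT).

        import numpy as np

        def rotational_offset(sig_a, sig_b):
            """Rotation (radians) aligning sig_b to sig_a; both signals are
            assumed periodic and sampled at the same n uniform angular steps."""
            n = len(sig_a)
            a = sig_a - sig_a.mean()   # zero-mean so the peak reflects shape,
            b = sig_b - sig_b.mean()   # not a constant offset
            # Circular cross-correlation via FFT: O(n log n) rather than O(n^2).
            corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
            shift = int(np.argmax(corr))       # sample shift with best agreement
            return 2.0 * np.pi * shift / n     # convert to an angle

        # Toy check: a signal rotated by 40 of 360 samples is recovered exactly.
        theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
        sig = np.cos(3 * theta) + 0.2 * np.sin(7 * theta)
        print(rotational_offset(np.roll(sig, 40), sig))   # ~2*pi*40/360 ~= 0.698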

    Improving the cost effectiveness equation of cascade testing for Familial Hypercholesterolaemia (FH)

    Purpose of Review: Many international recommendations for the management of Familial Hypercholesterolaemia (FH) propose the use of Cascade Testing (CT) using the family mutation to unambiguously identify affected relatives. In the current economic climate, DNA information is often regarded as too expensive. Here we review the literature and suggest strategies to improve the cost effectiveness of CT. Recent Findings: Advances in next-generation sequencing have both shortened the time taken to reach a genetic diagnosis and reduced costs. It is also now clear that, in the majority of patients with a clinical diagnosis of FH in whom no mutation can be found, the most likely cause of their elevated LDL-cholesterol (LDL-C) is that they have inherited a greater-than-average number of common LDL-C-raising variants in many different genes. The major cost driver for CT is not DNA testing but the treatment of the identified relative over their remaining lifetime. With potent statins now off-patent, the overall cost has fallen considerably; combining these three factors, the cost of an FH service based around DNA-CT is now less than 25% of that estimated by NICE in 2009. Summary: While all patients with a clinical diagnosis of FH need to have their LDL-C lowered, CT should be focused on those with the monogenic form and not the polygenic form.

    Mining developer communication data streams

    This paper explores the concept of modelling a software development project as a process that results in the creation of a continuous stream of data. In the Jazz repository used in this research, one aspect of that stream is developer communication. Such data can be used to create an evolving social network characterized by a range of metrics. This paper presents the application of data stream mining techniques to identify the metrics most useful for predicting build outcomes. Results are presented from applying the Hoeffding Tree classification method in conjunction with the Adaptive Sliding Window (ADWIN) method for detecting concept drift. The results indicate that only a small number of the available metrics have any significance for predicting the outcome of a build.
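
    As a rough illustration of the test-then-train loop the paper describes, the sketch below pairs a Hoeffding tree with ADWIN using the Python river library. This is an assumption about tooling (the paper does not name river), and the metric names and synthetic build stream are invented for the example.

        # Requires: pip install river
        import random
        from river import tree, drift

        model = tree.HoeffdingTreeClassifier()   # incremental decision tree
        adwin = drift.ADWIN()                    # monitors the error rate for change

        def build_stream(n=2000, seed=0):
            """Stand-in for the Jazz build stream: (metrics, build_ok) pairs.
            The feature names are illustrative, not the paper's metrics."""
            rng = random.Random(seed)
            for i in range(n):
                x = {"out_degree": rng.random(), "betweenness": rng.random()}
                # Synthetic concept drift: the boundary moves halfway through.
                y = x["out_degree"] > (0.5 if i < n // 2 else 0.2)
                yield x, y

        for x, y in build_stream():
            y_pred = model.predict_one(x)        # test first ...
            model.learn_one(x, y)                # ... then train (prequential evaluation)
            if y_pred is not None:
                adwin.update(int(y_pred != y))   # feed the 0/1 error signal to ADWIN
                if adwin.drift_detected:
                    model = tree.HoeffdingTreeClassifier()   # reset the learner on drift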

    Effects of sterilization and vacuum exposure on potential heat shield materials for unmanned Mars mission

    Sterilization and vacuum exposure effects on potential heat shield materials for an unmanned Mars mission.

    The permeabilities of three porous graphites Final report

    Permeability of three porous graphites measured from room temperature to 1000 deg F in gaseous nitrogen and helium.

    Integration of Data Mining and Data Warehousing: a practical methodology

    The ever-growing repositories of data in all fields pose new challenges to modern analytical systems. Real-world datasets, with mixed numeric and nominal variables, are difficult to analyze and require effective visual exploration that conveys the semantic relationships in the data. Traditional data mining techniques such as clustering handle only the numeric data, and little research has been carried out on clustering high-cardinality nominal variables to gain better insight into the underlying dataset. Several works in the literature have demonstrated the feasibility of integrating data mining with warehousing to discover knowledge from data. For seamless integration, the mined data has to be modeled in the form of a data warehouse schema. Schema generation is a complex manual task that requires familiarity with both the domain and warehousing, so automated techniques are needed to generate warehouse schemas and remove these dependencies. To fulfill the growing analytical needs and to overcome the existing limitations, we propose in this paper a novel methodology that permits efficient analysis of mixed numeric and nominal data, effective visual data exploration, automatic warehouse schema generation, and integration of data mining and warehousing. The proposed methodology is evaluated through a case study on a real-world dataset. Results show that multidimensional analysis can be performed in an easier and more flexible way to discover meaningful knowledge from large datasets.
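
    The abstract's starting point, clustering records that mix numeric and nominal variables, can be illustrated with the standard k-prototypes algorithm. The sketch below uses the third-party kmodes package and made-up columns; it is an example of the general technique, not the methodology the paper proposes.

        # Requires: pip install kmodes numpy
        import numpy as np
        from kmodes.kprototypes import KPrototypes

        # Mixed-type records: [annual_spend (numeric), region (nominal), channel (nominal)]
        X = np.array([
            [1200.0, "north", "retail"],
            [1150.0, "north", "retail"],
            [8300.0, "south", "wholesale"],
            [7900.0, "east",  "wholesale"],
            [ 450.0, "west",  "retail"],
        ], dtype=object)

        kp = KPrototypes(n_clusters=2, init="Cao", random_state=0)
        labels = kp.fit_predict(X, categorical=[1, 2])   # indices of the nominal columns
        print(labels)   # cluster assignment per record, e.g. the two wholesale
                        # rows grouped apart from the three retail rows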

    Workshop summary management and science of fish spawning aggregations in the Great Barrier Reef Marine Park, 12-13 July, 2007

    Fish spawning aggregations (FSAs) are a key issue for management of the Great Barrier Reef Marine Park. A workshop was held by the Great Barrier Reef Marine Park Authority in July 2007 to bring together an expert group including reef fish scientists, managers and fishers to discuss the current status of fish spawning aggregations in the Marine Park and to prioritise a strategic approach to management and science needs. Knowing that fish species that form spawning aggregations are, or are potentially, vulnerable to overexploitation, the 28 workshop participants developed a list of research priorities and management considerations. The following research priorities were identified for the next five years (2007-2012):
    1. The Queensland Government's long-term monitoring programme for the Coral Reef Fin Fish Fishery should include collection of reproductive samples for key target species.
    2. Continue and expand the long-term dataset from the Scott Reef and Elford Reef coral trout spawning site monitoring project offshore from Cairns, with replication in the north and south of the Marine Park.
    3. Priority species for research are large mouth nannygai, black jewfish and grunter.
    4. Implement a Marine Park-wide interview survey to compile historic and current information on spawning aggregations for all key species.
    5. Survey Old Reef to determine actual aggregation sites and timing for key species.
    6. Investigate the impacts of fishing disturbances on aggregations, specifically for Spanish mackerel, grey mackerel, flowery cod, camouflage cod and coral trout species.

    An investigation of the mechanisms of heat transfer in low-density phenolic-nylon chars

    Conductive heat transfer mechanisms in low-density phenolic-nylon char.

    The reduced cost of providing a nationally recognised service for familial hypercholesterolaemia

    OBJECTIVE: Familial hypercholesterolaemia (FH) affects 1 in 500 people in the UK population and is associated with premature morbidity and mortality from coronary heart disease. In 2008, the National Institute for Health and Care Excellence (NICE) recommended genetic testing of potential FH index cases and cascade testing of their relatives. Commissioners have been slow to respond, although there is strong evidence of cost and clinical effectiveness. Our study quantifies the recently reduced cost of providing an FH service using generic atorvastatin and compares NICE costing estimates with three suggested alternative models of care (a specialist-led service, a dual model in which general practitioners (GPs) can access specialist advice, and a GP-led service).
    METHODS: Revision of the existing 3-year costing template provided by NICE for FH services, and prediction of the costs of running a programme over 10 years. Costs were modelled for the first population-based FH service in England, which covers Southampton, Hampshire, Isle of Wight and Portsmouth (SHIP); population 1.95 million.
    RESULTS: With expiry of the Lipitor (Pfizer atorvastatin) patent, the cost of providing a 10-year FH service in SHIP falls by 42.5% (£4.88 million on patent vs £2.80 million off patent). Further cost reductions are possible as a result of the reduced cost of DNA testing, more management in general practice, and lower referral rates to specialists. For instance, a dual-care model with GP management of patients supported by specialist advice when required costs £1.89 million.
    CONCLUSIONS: The three alternative models of care are now <50% of the cost of the original estimates undertaken by NICE.
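
    As a quick sanity check on the headline figures, the back-of-envelope sketch below uses only the numbers reported in the abstract (all in £ millions over 10 years); no other data is assumed.

        on_patent  = 4.88   # SHIP service cost with Lipitor on patent
        off_patent = 2.80   # same service with generic atorvastatin
        dual_model = 1.89   # GP management with specialist advice on demand

        saving = (on_patent - off_patent) / on_patent
        print(f"off-patent saving: {saving:.1%}")   # ~42.6% from these rounded
                                                    # figures; reported as 42.5%
        print(f"dual model vs original estimate: {dual_model / on_patent:.1%}")
        # ~38.7%, consistent with the "<50%" claim in the conclusions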