
    Change in Working Length at Different Stages of Instrumentation as a Function of Canal Curvature

    The aim of this study was to determine the change in working length (∆WL) before and after coronal flaring and after complete rotary instrumentation, as a function of canal curvature. One mesiobuccal or mesiolingual canal from each of 43 extracted molars underwent coronal standardization and access preparation. Once access was completed, canals were prepared using Gates Glidden drills for coronal flaring and EndoSequence files for rotary instrumentation. WLs were obtained at 3 time points: pre-instrumentation (unflared), mid-instrumentation (flared), and post-instrumentation (concluded). Measurements were made via direct visualization (DV) and the CanalPro apex locator (EM), in triplicate, by a single operator blinded across the time points. Root curvature was measured using Schneider’s technique. The change in working length was assessed using repeated-measures ANCOVA. The direct-visualization measurements were statistically larger than the electronic measurements (paired t-test difference = 0.20 mm, SE = 0.037, P < .0001), although a difference of this size may not be clinically important. Overall, a greater change in working length was observed in straight canals than in curved canals; this unexpected finding was attributed to a limitation of the study, specifically the confounding factor of root length. The trend was more pronounced when measured electronically than via direct visualization, and more pronounced after complete instrumentation than after coronal flaring. The overall change in working length after complete instrumentation was clinically insignificant in this study, so only a limited change in working length should be expected prior to obturation.
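For readers who want to reproduce this kind of paired DV-versus-EM comparison, the sketch below runs a paired t-test on simulated working-length data in Python. The sample size mirrors the study's 43 canals, but the simulated values, means, and variable names are illustrative assumptions, not the study's measurements.

```python
# Minimal sketch of a paired comparison between two measurement methods.
# All numbers below are simulated placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_canals = 43
em = rng.normal(loc=19.0, scale=1.5, size=n_canals)        # electronic WL (mm), simulated
dv = em + rng.normal(loc=0.20, scale=0.25, size=n_canals)  # direct-visualization WL (mm)

t_stat, p_value = stats.ttest_rel(dv, em)  # paired t-test, DV vs. EM
diff = dv - em
print(f"mean difference = {diff.mean():.2f} mm, "
      f"SE = {stats.sem(diff):.3f}, p = {p_value:.4g}")
```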

    Automatic landmark annotation and dense correspondence registration for 3D human facial images

    Dense surface registration of three-dimensional (3D) human facial images holds great potential for studies of human trait diversity, disease genetics, and forensics. Non-rigid registration is particularly useful for establishing dense anatomical correspondences between faces. Here we describe a novel non-rigid registration method for fully automatic 3D facial image mapping. The method comprises two steps: first, seventeen facial landmarks are automatically annotated, mainly via PCA-based feature recognition following a 3D-to-2D data transformation; second, an efficient thin-plate spline (TPS) protocol establishes dense anatomical correspondence between facial images, guided by the predefined landmarks. We demonstrate that the method is robust and highly accurate, even across ethnicities, and we calculate the average face for individuals of Han Chinese and Uyghur origin. Because it is fully automatic and computationally efficient, the method enables high-throughput analysis of human facial feature variation.
    Comment: 33 pages, 6 figures, 1 table
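The second, landmark-guided step can be sketched with off-the-shelf tools. The following is a minimal illustration under stated assumptions, not the authors' implementation: it presumes the 17 landmarks have already been annotated on a source face and a reference template (the file names are hypothetical) and uses SciPy's thin-plate-spline interpolator to warp every source vertex toward the template.

```python
# Landmark-guided thin-plate-spline warping: a sketch, not the paper's code.
import numpy as np
from scipy.interpolate import RBFInterpolator

# 17 annotated landmarks on a source face and the corresponding landmarks
# on a reference template, each of shape (17, 3).  File names are hypothetical.
src_landmarks = np.loadtxt("source_landmarks.txt")
ref_landmarks = np.loadtxt("reference_landmarks.txt")

# Fit a TPS map that carries source landmarks onto the reference landmarks,
# then apply it to every vertex of the source mesh to get a dense warp.
tps = RBFInterpolator(src_landmarks, ref_landmarks, kernel="thin_plate_spline")
src_vertices = np.loadtxt("source_vertices.txt")  # (N, 3) mesh points
warped_vertices = tps(src_vertices)               # (N, 3) dense correspondence
```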

    PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks

    Unsupervised text embedding methods, such as Skip-gram and Paragraph Vector, have attracted increasing attention for their simplicity, scalability, and effectiveness. However, compared to sophisticated deep learning architectures such as convolutional neural networks, these methods usually yield inferior results on particular machine learning tasks. One possible reason is that they learn text representations in a fully unsupervised way, without leveraging the labeled information available for the task. Although the low-dimensional representations they learn are applicable to many different tasks, they are not specifically tuned for any of them. In this paper, we fill this gap by proposing a semi-supervised representation learning method for text data, which we call predictive text embedding (PTE). Predictive text embedding utilizes both labeled and unlabeled data to learn the embedding of text. The labeled information and different levels of word co-occurrence information are first represented as a large-scale heterogeneous text network, which is then embedded into a low-dimensional space through a principled and efficient algorithm. The resulting low-dimensional embedding not only preserves the semantic closeness of words and documents but also has strong predictive power for the given task. Compared to recent supervised approaches based on convolutional neural networks, predictive text embedding is comparable to or more effective than them, much more efficient, and has fewer parameters to tune.
    Comment: KDD 2015
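To make the embedding step concrete, here is a toy sketch of the kind of edge-sampling update used by such heterogeneous-network embedding methods. The dimensions, learning rate, and uniform negative sampling are assumptions for illustration; the paper itself samples edges in proportion to their weights and draws negatives from a noise distribution over context nodes.

```python
# Toy edge-sampling update (skip-gram-with-negative-sampling style);
# hyperparameters and sampling details are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_ctx, dim, lr, K = 10_000, 1_000, 100, 0.025, 5
W = rng.normal(scale=0.01, size=(n_words, dim))  # word vectors
C = rng.normal(scale=0.01, size=(n_ctx, dim))    # context vectors (words, docs, or labels)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(w, c):
    """One update on an observed edge (w, c) with K uniform negative samples."""
    grad_w = np.zeros(dim)
    pairs = [(c, 1.0)] + [(int(rng.integers(n_ctx)), 0.0) for _ in range(K)]
    for c_j, label in pairs:
        g = lr * (label - sigmoid(W[w] @ C[c_j]))  # gradient scale for this pair
        grad_w += g * C[c_j]
        C[c_j] += g * W[w]
    W[w] += grad_w
```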

    End-to-end Learning for Short Text Expansion

    Effectively making sense of short texts is a critical task for many real-world applications such as search engines, social media services, and recommender systems. The task is particularly challenging because a short text contains very sparse information, often too sparse for a machine learning algorithm to pick up useful signals. A common practice for analyzing short text is therefore to first expand it with external information, usually harvested from a large collection of longer texts. In the literature, short text expansion has been done with all kinds of heuristics. We propose an end-to-end solution that automatically learns how to expand short text to optimize a given learning task. A novel deep memory network is proposed that automatically finds relevant information in a collection of longer documents and reformulates the short text through a gating mechanism. Using short text classification as a demonstration task, we show through comprehensive experiments on real-world data sets that the deep memory network significantly outperforms classical text expansion methods.
    Comment: KDD'2017
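The retrieve-then-gate idea can be illustrated with a small, untrained sketch. The function names, shapes, and single memory hop below are assumptions for exposition, not the paper's exact architecture.

```python
# Toy sketch: attend over a memory of long documents, then gate the
# retrieved vector into the short-text embedding.  Untrained weights.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def expand_short_text(q, M, W_g, b_g):
    """q: (d,) short-text embedding; M: (n, d) embeddings of long documents."""
    attn = softmax(M @ q)             # relevance of each memory slot to the query
    m = attn @ M                      # retrieved expansion vector, shape (d,)
    g = sigmoid(W_g @ np.concatenate([q, m]) + b_g)  # per-dimension gate
    return g * q + (1.0 - g) * m      # gated reformulation of the short text

d, n = 64, 500
rng = np.random.default_rng(0)
expanded = expand_short_text(rng.normal(size=d), rng.normal(size=(n, d)),
                             0.1 * rng.normal(size=(d, 2 * d)), np.zeros(d))
```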

    Why does the US dominate university league tables?

    According to the Academic Ranking of World Universities, the world’s top 500 universities are hosted by only 38 countries, with the US alone accounting for 157 of them. This paper investigates the socioeconomic determinants of the wide performance gap between countries, and whether the US’s dominance of the league table is largely due to its economic power or to something else. It is found that a large amount of the cross-country variation in university performance can be explained by just four socioeconomic factors: income, population size, R&D spending, and the national language. It is also found that, conditional on the resources it has, the US is actually underperforming by about 4 to 10 percent.
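As a rough illustration of the kind of cross-country regression this describes, the sketch below fits an OLS model with the four named factors. The file name and column names are hypothetical placeholders, not the paper's data or specification.

```python
# Hedged sketch of a cross-country regression on the four factors above;
# "country_rankings.csv" and its columns are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("country_rankings.csv")  # one row per country (hypothetical)
# top500: count of ARWU top-500 universities; english: national-language indicator
model = smf.ols(
    "np.log1p(top500) ~ np.log(gdp) + np.log(population)"
    " + np.log(rd_spending) + english",
    data=df,
).fit()
print(model.summary())
```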