
    Fully automated landmarking and facial segmentation on 3D photographs

    Three-dimensional facial stereophotogrammetry provides a detailed representation of craniofacial soft tissue without the use of ionizing radiation. While manual annotation of landmarks serves as the current gold standard for cephalometric analysis, it is a time-consuming process and is prone to human error. The aim of this study was to develop and evaluate an automated cephalometric annotation method using a deep learning-based approach. Ten landmarks were manually annotated on 2897 3D facial photographs by a single observer. The automated landmarking workflow involved two successive DiffusionNet models and additional algorithms for facial segmentation. The dataset was randomly divided into a training and a test dataset. The training dataset was used to train the deep learning networks, whereas the test dataset was used to evaluate the performance of the automated workflow. The precision of the workflow was evaluated by calculating the Euclidean distances between the automated and manual landmarks and comparing them to the intra-observer and inter-observer variability of manual annotation and to a semi-automated landmarking method. The workflow was successful in 98.6% of all test cases. The deep learning-based landmarking method achieved precise and consistent landmark annotation. The mean precision of 1.69 (+/- 1.15) mm was comparable to the inter-observer variability (1.31 +/- 0.91 mm) of manual annotation. The Euclidean distance between the automated and manual landmarks was within 2 mm in 69% of cases. Automated landmark annotation on 3D photographs was achieved with the DiffusionNet-based approach. The proposed method allows quantitative analysis of large datasets and may be used in diagnosis, follow-up, and virtual surgical planning. Comment: 13 pages, 4 figures, 7 tables; repository: https://github.com/rumc3dlab/3dlandmarkdetection
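    The precision evaluation described above reduces to per-landmark Euclidean distances between automated and manual annotations. A minimal Python sketch of that step, using simulated coordinates rather than the study's data (the function name and values below are illustrative assumptions, not part of the published workflow):

        import numpy as np

        def landmark_precision(auto_pts, manual_pts):
            # Per-landmark Euclidean distances; both arrays are (n_landmarks, 3), in mm.
            auto_pts = np.asarray(auto_pts, dtype=float)
            manual_pts = np.asarray(manual_pts, dtype=float)
            return np.linalg.norm(auto_pts - manual_pts, axis=1)

        # Hypothetical example: 10 landmarks on one 3D photograph.
        rng = np.random.default_rng(0)
        manual = rng.uniform(-60, 60, size=(10, 3))        # reference annotations (mm)
        auto = manual + rng.normal(0, 1.2, size=(10, 3))   # simulated automated output

        dists = landmark_precision(auto, manual)
        print(f"mean precision: {dists.mean():.2f} mm (+/- {dists.std():.2f})")
        print(f"within 2 mm: {100 * np.mean(dists < 2):.0f}% of landmarks")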

    The EHA Research Roadmap: Normal Hematopoiesis.

    In 2016, the European Hematology Association (EHA) published the EHA Roadmap for European Hematology Research, aiming to highlight achievements in the diagnostics and treatment of blood disorders and to better inform European policy makers and other stakeholders about the urgent clinical and scientific needs and priorities in the field of hematology. Each section was coordinated by 1–2 section editors who were leading international experts in the field. In the 5 years that have followed, advances in the field of hematology have been plentiful. As such, EHA is pleased to present an updated Research Roadmap, now comprising 11 sections, each of which will be published separately. The updated EHA Research Roadmap identifies the most urgent priorities in hematology research and clinical science, thereby supporting a more informed, focused, and ideally better funded future for European hematology research. The 11 EHA Research Roadmap sections are Normal Hematopoiesis; Malignant Lymphoid Diseases; Malignant Myeloid Diseases; Anemias and Related Diseases; Platelet Disorders; Blood Coagulation and Hemostatic Disorders; Transfusion Medicine; Infections in Hematology; Hematopoietic Stem Cell Transplantation; CAR-T and Other Cell-based Immune Therapies; and Gene Therapy.

    A meta-analysis of the investment-uncertainty relationship

    In this article we use meta-analysis to investigate the investment-uncertainty relationship. We focus on the direction and statistical significance of empirical estimates. Specifically, we estimate an ordered probit model and transform the estimated coefficients into marginal effects to reflect the changes in the probability of finding a significantly negative estimate, an insignificant estimate, or a significantly positive estimate. Exploratory data analysis shows that there is little empirical evidence for a positive relationship. The regression results suggest that the source of uncertainty, the level of data aggregation, the underlying model specification, and differences between short- and long-run effects are important sources of variation in study outcomes. These findings are, by and large, robust to the introduction of a trend variable to capture publication trends in the literature. The probability of finding a significantly negative relationship is higher in more recently published studies. JEL Classification: D21, D80, E22
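    The modelling step described here, an ordered probit over the sign and significance of reported estimates with coefficients converted to marginal effects, can be sketched in Python with statsmodels. The covariates and data below are simulated placeholders, not the article's dataset, and a finite-difference effect stands in for the authors' analytical transformation:

        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(1)
        n = 500

        # Simulated study-level covariates (placeholders, not the article's data).
        X = pd.DataFrame({
            "firm_level": rng.integers(0, 2, n),   # uncertainty measured at the firm level
            "pub_year": rng.normal(0, 1, n),       # publication year, standardised
            "short_run": rng.integers(0, 2, n),    # short-run vs long-run effect
        })

        # Ordered outcome: sign/significance of the reported investment-uncertainty estimate.
        latent = -0.6 * X["firm_level"] + 0.3 * X["pub_year"] + rng.normal(0, 1, n)
        outcome = pd.cut(latent, bins=[-np.inf, -0.5, 0.5, np.inf],
                         labels=["sig. negative", "insignificant", "sig. positive"])

        res = OrderedModel(outcome, X, distr="probit").fit(method="bfgs", disp=False)

        # Finite-difference "marginal effect": change in the predicted category
        # probabilities when the firm-level dummy switches from 0 to 1.
        X0, X1 = X.assign(firm_level=0), X.assign(firm_level=1)
        effect = res.predict(X1).mean(axis=0) - res.predict(X0).mean(axis=0)
        print("change in P(negative, insignificant, positive):", np.round(effect, 3))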

    Reliability of the automatic procedures for locating earthquakes in southwestern Alps and northern Apennines (Italy)

    A reliable automatic procedure for locating earthquakes in quasi-real time is strongly needed for seismic warning systems, earthquake preparedness, and the production of shaking maps. The reliability of an automatic location algorithm is influenced by several factors such as errors in picking seismic phases, network geometry, and velocity model uncertainties. The main purpose of this work is to investigate the performance of different automatic procedures in order to choose the most suitable one for quasi-real-time earthquake location in northwestern Italy. The reliability of two automatic picking algorithms (one based on characteristic function (CF) analysis, the CF picker, and the other on the Akaike information criterion, the AIC picker) and two location methods (the “Hypoellipse” and “NonLinLoc” codes) is analysed by comparing the automatically determined hypocentral coordinates with reference ones. Reference locations are computed by the “Hypoellipse” code using manually revised data and are validated against quarry blasts. The comparison is made on a dataset of 575 seismic events recorded by the Regional Seismic Network of Northwestern Italy between 2000 and 2007. For P phases, the AIC and CF pickers give similar results, both in the number of detected picks and in the size of the travel-time differences with respect to manual picks; for S phases, by contrast, the AIC picker provides a significantly greater number of readings than the CF picker. Furthermore, the “NonLinLoc” software (applied to a 3D velocity model) proves more reliable than the “Hypoellipse” code (applied to layered 1D velocity models), leading to more reliable automatic locations even when outliers (wrong picks) are present.
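    A common formulation of an AIC picker (not necessarily the network's implementation) takes the onset at the minimum of AIC(k) = k*log(var(x[1..k])) + (N-k-1)*log(var(x[k+1..N])) along a single trace. A small Python sketch on a synthetic trace illustrates the idea:

        import numpy as np

        def aic_pick(trace, eps=1e-12):
            # Onset at the minimum of AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])).
            x = np.asarray(trace, dtype=float)
            n = len(x)
            aic = np.full(n, np.inf)
            for k in range(2, n - 2):
                aic[k] = (k * np.log(np.var(x[:k]) + eps)
                          + (n - k - 1) * np.log(np.var(x[k:]) + eps))
            return int(np.argmin(aic))

        # Synthetic trace: background noise followed by a stronger arrival at sample 400.
        rng = np.random.default_rng(2)
        trace = np.concatenate([rng.normal(0, 1, 400), rng.normal(0, 5, 200)])
        print("picked onset at sample:", aic_pick(trace))   # expected near 400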

    DPHL: A DIA Pan-human Protein Mass Spectrometry Library for Robust Biomarker Discovery

    To address the increasing need for detecting and validating protein biomarkers in clinical specimens, mass spectrometry (MS)-based targeted proteomic techniques have been developed, including selected reaction monitoring (SRM), parallel reaction monitoring (PRM), and massively parallel data-independent acquisition (DIA). For optimal performance, they require the fragment ion spectra of targeted peptides as prior knowledge. In this report, we describe an MS pipeline and spectral resource to support targeted proteomics studies of human tissue samples. To build the spectral resource, we integrated common open-source MS computational tools into a freely accessible, Docker-based computational workflow. We then applied the workflow to generate DPHL, a comprehensive DIA pan-human library, from 1096 data-dependent acquisition (DDA) MS raw files covering 16 types of cancer samples. This spectral resource was first applied to a proteomic study of 17 prostate cancer (PCa) patients. Thereafter, PRM validation was applied to a larger cohort of 57 PCa patients, and the differential expression of three proteins in prostate tumors was validated. As a second application, the DPHL spectral resource was applied to a study of plasma samples from 19 diffuse large B cell lymphoma (DLBCL) patients and 18 healthy control subjects. Differentially expressed proteins between DLBCL patients and healthy controls were detected by DIA-MS and confirmed by PRM. These data demonstrate that DPHL supports DIA and PRM MS pipelines for robust protein biomarker discovery. DPHL is freely accessible at https://www.iprox.org/page/project.html?id=IPX0001400000
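    As a rough illustration of the final step in such a study, flagging differentially expressed proteins between two groups of samples, the Python sketch below runs a two-sample t-test with Benjamini-Hochberg correction on simulated intensities; it is not the DPHL pipeline, and all numbers are placeholders:

        import numpy as np
        from scipy import stats
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(3)
        n_proteins, n_dlbcl, n_ctrl = 200, 19, 18

        # Hypothetical log2 protein intensities; the first 10 proteins are up-regulated.
        dlbcl = rng.normal(20, 1, size=(n_proteins, n_dlbcl))
        ctrl = rng.normal(20, 1, size=(n_proteins, n_ctrl))
        dlbcl[:10] += 1.5

        t, p = stats.ttest_ind(dlbcl, ctrl, axis=1)
        reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
        log2_fc = dlbcl.mean(axis=1) - ctrl.mean(axis=1)
        print(f"{reject.sum()} proteins pass FDR < 0.05; max |log2 FC| = {np.abs(log2_fc).max():.2f}")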

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields against which newly developed literature search techniques can be compared, improved, and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH (RELISH) consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180,000 PubMed-listed articles with regard to their respective seed (input) articles. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields, or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency, and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles. The established database server at https://relishdb.ict.griffith.edu.au is freely available for downloading annotation data and for blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new, powerful title- and title/abstract-based search engines for relevant articles in biomedical research.
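    Two of the baselines mentioned (Okapi BM25 and TF-IDF) are standard bag-of-words similarity scores. A minimal TF-IDF variant in Python with scikit-learn, using made-up abstracts instead of the RELISH data:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Hypothetical seed abstract and candidate abstracts; a real benchmark run would
        # instead score every annotated seed/candidate pair from the RELISH database.
        seed = "deep learning methods for biomedical literature recommendation"
        candidates = [
            "a neural network approach to recommending biomedical articles",
            "crystal structure of a bacterial membrane transporter",
            "benchmarking document similarity for literature search engines",
        ]

        vec = TfidfVectorizer(stop_words="english")
        tfidf = vec.fit_transform([seed] + candidates)
        scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()

        for text, score in sorted(zip(candidates, scores), key=lambda t: -t[1]):
            print(f"{score:.3f}  {text}")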

    Initial Public Offerings and the Firm Location

    A firm's geographic location matters in IPOs because investors have a strong preference for newly issued local stocks and provide abnormal demand in local offerings. Using equity holdings data for more than 53,000 households, we show that the probability of participating in the stock market and the proportion of equity wealth increase abnormally with the volume of IPOs in the investor's region. Using nearly the universe of 167,515 domestic manufacturing firms that went public or stayed private, we provide consistent evidence that isolated private firms have a higher probability of going public, a larger cross-sectional average and volatility of IPO underpricing, and less pronounced long-run underperformance. Similar but opposite evidence holds for the local concentration of investor wealth. These effects are economically relevant and robust to local delistings, IPO market timing, agglomeration economies, firm location endogeneity, self-selection bias, and information asymmetries, among other factors. The findings suggest that IPO waves have a strong geographic component, highlight that underwriters significantly underestimate the local demand component, thus leaving unexpected money on the table, and support a state-contingent but constant investor propensity for risk.