67 research outputs found

    Deep learning approach for breast cancer diagnosis

    Breast cancer is one of the leading fatal diseases worldwide, with a high chance of successful treatment if discovered early. The conventional method for breast screening is x-ray mammography, which is known to be challenging for early detection of cancer lesions. The dense breast structure produced by the compression process during imaging makes it difficult to recognize small abnormalities. In addition, inter- and intra-subject variations of breast tissue make it difficult to achieve high diagnostic accuracy using hand-crafted features. Deep learning is an emerging machine learning technology that requires relatively high computational power, yet it has proved very effective in several difficult tasks that require decision making at the level of human intelligence. In this paper, we develop a new network architecture, inspired by the U-net structure, that can be used for effective and early detection of breast cancer. Results indicate high sensitivity and specificity, suggesting the potential usefulness of the proposed approach in clinical use.
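
    The abstract does not give the network's exact configuration; the following is a minimal PyTorch sketch of the general U-net idea it builds on (an encoder-decoder with a skip connection), where all layer sizes are purely illustrative assumptions:

    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc1 = conv_block(1, 16)               # full-resolution encoder features
            self.enc2 = conv_block(16, 32)              # downsampled encoder features
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec1 = conv_block(32, 16)              # 16 upsampled + 16 skip channels
            self.head = nn.Conv2d(16, 1, 1)             # per-pixel logit (e.g. a lesion map)

        def forward(self, x):                           # x: (N, 1, H, W), H and W even
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
            return self.head(d1)

    The skip connection lets the decoder combine coarse context with fine spatial detail, which is why U-net-style architectures are a natural fit for localizing small abnormalities.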

    Time series of ground reaction forces following a single leg drop jump landing in elite youth soccer players consist of four distinct phases

    The single leg drop jump landing test may assess dynamic and static balance abilities in different phases of the landing. However, objective definitions of the different phases following landing, and their associated reliability, are lacking. Therefore, we determined the existence of possible distinct phases of single leg drop jump landing on a force plate in 82 elite youth soccer players. Three outcome measures were calculated over moving windows of five sizes: center of pressure (COP) speed, COP sway and horizontal ground reaction force (GRF). For each outcome measure, a factor analysis was employed with all windows as input variables. It showed that four factors (patterns of variance) largely (>75%) explained the variance across subjects/trials along the 12 s time series. Each factor was highly associated with a distinct phase of the time series signal: dynamic (0.4-2.7 s), late dynamic (2.5-5.0 s), static 1 (5.0-8.3 s) and static 2 (8.1-11.7 s). Intra-class correlations (ICCs) between trials were lower for the dynamic phases (0.45-0.68) than for the static phases (0.60-0.86). COP speed showed higher ICCs (0.63-0.86) than COP sway (0.45-0.61) and GRF (0.57-0.71) for all four phases. In conclusion, following a drop jump landing, unique information is available in four distinct phases. COP speed is the most reliable measure, with higher reliability in the static phases than in the dynamic phases. Future studies should assess the sensitivity of information from the dynamic, late dynamic and static phases.
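
    As an illustration of the windowing-plus-factor-analysis procedure described above, here is a minimal Python sketch; the sampling rate, window size and stepping, the COP speed definition, and the synthetic data are all assumptions for demonstration, not the paper's protocol:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    fs = 1000                                       # assumed force-plate sampling rate (Hz)
    rng = np.random.default_rng(0)

    def cop_speed(window):
        """Mean COP speed in a window: COP path length divided by window duration."""
        steps = np.linalg.norm(np.diff(window, axis=0), axis=1)
        return steps.sum() * fs / len(window)

    win = int(0.5 * fs)                             # illustrative 0.5 s window
    trials = []
    for _ in range(20):                             # stand-ins for 20 subject/trial recordings
        cop = rng.standard_normal((12 * fs, 2)).cumsum(axis=0)  # fake 12 s COP trajectory
        trials.append([cop_speed(cop[i:i + win])
                       for i in range(0, len(cop) - win, win)])

    X = np.asarray(trials)                          # rows: trials, columns: time windows
    fa = FactorAnalysis(n_components=4).fit(X)
    print(fa.components_.shape)                     # (4, n_windows): factor loadings per window

    Windows whose loadings cluster on the same factor would then be interpreted as belonging to the same phase of the landing.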

    Adam: A Method for Stochastic Optimization

    We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions. The method is straightforward to implement and is based on adaptive estimates of lower-order moments of the gradients. The method is computationally efficient, has low memory requirements and is well suited for problems that are large in terms of data and/or parameters. It is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The method is invariant to diagonal rescaling of the gradients, adapting to the geometry of the objective function. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. We demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods.
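
    For reference, a minimal NumPy sketch of the Adam update rule, using the default hyper-parameter values suggested in the paper (the variable names and the toy objective are ours):

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        """One Adam update; m and v are the running moment estimates, t the 1-based step."""
        m = beta1 * m + (1 - beta1) * grad        # biased estimate of the first moment
        v = beta2 * v + (1 - beta2) * grad ** 2   # biased estimate of the second raw moment
        m_hat = m / (1 - beta1 ** t)              # bias correction for zero initialization
        v_hat = v / (1 - beta2 ** t)
        return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

    # Example: minimizing f(theta) = theta^2, whose gradient is 2*theta.
    theta, m, v = 5.0, 0.0, 0.0
    for t in range(1, 201):
        theta, m, v = adam_step(theta, 2.0 * theta, m, v, t, lr=0.1)
    print(theta)                                  # close to 0

    The per-coordinate division by sqrt(v_hat) is what makes the update invariant to diagonal rescaling of the gradients.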

    Efficient Gradient-Based Inference through Transformations between Bayes Nets and Neural Nets

    Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models. We show that either type of model can often be transformed into an instance of the other by switching between centered and differentiable non-centered parameterizations of the latent variables. The choice of parameterization greatly influences the efficiency of gradient-based posterior inference; we show that the two parameterizations are often complementary to each other, clarify when each is preferred, and show how inference can be made robust. In the non-centered form, a simple Monte Carlo estimator of the marginal likelihood can be used for learning the parameters. Theoretical results are supported by experiments.
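
    The switch between parameterizations can be illustrated for a Gaussian latent variable; the following sketch shows only this one-line idea, not the paper's full inference procedure:

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 1.5, 0.8

    # Centered parameterization: the latent is sampled directly from its conditional,
    # so the sample is not a differentiable function of (mu, sigma).
    z_centered = rng.normal(mu, sigma)

    # Non-centered (differentiable) parameterization: the randomness is moved into an
    # auxiliary standard-normal variable, and the latent becomes a deterministic,
    # differentiable function of (mu, sigma), so gradients can flow through it.
    eps = rng.normal()
    z_noncentered = mu + sigma * eps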