Identification and characterization of learning weakness from drawing analysis at the pre-literacy stage
Handwriting learning delays should be addressed early to prevent their exacerbation and long-lasting consequences on children's whole lives. Ideally, proper training should start even before children learn how to write. This work presents a novel method to disclose potential handwriting problems at the pre-literacy stage, based on the analysis of drawings rather than word production. Two hundred forty-one kindergartners drew on a tablet, and we computed, from their symbol drawings, features known to be distinctive of poor handwriting. We verified that abnormal feature patterns reflected abnormal drawings and found correspondence with experts' evaluation of the potential risk of developing a learning delay in the graphical sphere. A machine learning model was able to discriminate children at risk with 0.75 sensitivity and 0.76 specificity. Finally, we explained why children were considered at risk by the algorithm, to inform teachers about the specific weaknesses that need training. Thanks to this system, early intervention targeting specific learning delays will finally be possible.
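
As an illustration of the feature-based approach described above, here is a minimal sketch assuming stroke data recorded as (x, y, timestamp) triples; the specific features and the random-forest classifier are illustrative assumptions, not the authors' exact pipeline.

```python
# Illustrative sketch only: features and model are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def drawing_features(stroke):
    """stroke: array of shape (n_points, 3) with columns (x, y, timestamp)."""
    xy, t = stroke[:, :2], stroke[:, 2]
    seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # segment lengths
    dt = np.maximum(np.diff(t), 1e-6)                   # time between samples
    speed = seg / dt
    return np.array([
        speed.mean(),   # average drawing speed
        speed.std(),    # speed variability (a fluency proxy)
        seg.sum(),      # total trace length
        t[-1] - t[0],   # total drawing time
    ])

# Toy data: one stroke per child, expert label 1 = at risk (randomly generated here).
rng = np.random.default_rng(0)
strokes = [np.column_stack([rng.random((50, 2)), np.sort(rng.random(50))]) for _ in range(40)]
X = np.stack([drawing_features(s) for s in strokes])
y = rng.integers(0, 2, size=len(strokes))

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.feature_importances_)   # which features drive the at-risk prediction
```

In such a setup, per-feature importances (or any other attribution method) could be reported back to teachers to indicate which aspect of the drawing, for example speed variability or trace length, flagged a child as at risk.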
Pareto-Optimal Progressive Neural Architecture Search
Neural Architecture Search (NAS) is the process of automating architecture engineering, searching for the best deep learning configuration. One of the main NAS approaches proposed in the literature, Progressive Neural Architecture Search (PNAS), searches for architectures with a sequential model-based optimization strategy: it defines a common recursive structure to generate the networks, whose number of building blocks grows across iterations. However, NAS algorithms are generally designed for an ideal setting, without considering the needs and technical constraints imposed by practical applications. In this paper, we propose a new architecture search method named Pareto-Optimal Progressive Neural Architecture Search (POPNAS), which combines the benefits of PNAS with a time-accuracy Pareto optimization. POPNAS adds a new time predictor to the existing approach to carry out a joint prediction of training time and accuracy for each candidate neural network, and searches along the Pareto front. This allows us to reach a trade-off between accuracy and training time, identifying neural network architectures with competitive accuracy at a drastically reduced training time.
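
To illustrate the Pareto-front selection underlying this kind of search, the sketch below assumes each candidate architecture comes with a predicted accuracy (to maximize) and a predicted training time (to minimize); it is a generic non-dominated filter, not the POPNAS implementation.

```python
# Generic Pareto-front filter over (predicted accuracy, predicted training time).
# Assumed interface: candidates is a list of (architecture_id, accuracy, time) tuples.
def pareto_front(candidates):
    front = []
    for arch, acc, time in candidates:
        dominated = any(
            other_acc >= acc and other_time <= time
            and (other_acc > acc or other_time < time)
            for _, other_acc, other_time in candidates
        )
        if not dominated:
            front.append((arch, acc, time))
    return front

# Example: only candidates not dominated in both objectives survive.
print(pareto_front([("A", 0.92, 120.0), ("B", 0.90, 60.0), ("C", 0.88, 90.0)]))
# -> [('A', 0.92, 120.0), ('B', 0.9, 60.0)]   ('C' is dominated by 'B')
```

Restricting the expansion of candidates to this front is what lets a time-aware search avoid training architectures that are both slower and less accurate than alternatives already on the front.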
Heterogeneous Datasets for Federated Survival Analysis Simulation
Survival analysis studies time-to-event modeling techniques for an event of interest occurring in a population. It has found widespread application in healthcare, engineering, and the social sciences. However, the data needed to train survival models are often distributed, incomplete, censored, and confidential. In this context, federated learning can be exploited to greatly improve the quality of models trained on distributed data while preserving user privacy. However, federated survival analysis is still in its early stages of development, and there is no common benchmarking dataset for testing federated survival models. This work provides a novel technique for constructing realistic heterogeneous datasets, starting from existing non-federated datasets, in a reproducible way. Specifically, we propose two dataset-splitting algorithms based on the Dirichlet distribution that assign each data sample to a carefully chosen client: quantity-skewed splitting and label-skewed splitting. Furthermore, these algorithms allow different levels of heterogeneity to be obtained by changing a single hyperparameter. Finally, numerical experiments provide a quantitative evaluation of the heterogeneity level using log-rank tests and a qualitative analysis of the generated splits. The implementation of the proposed methods is publicly available to foster reproducibility and to encourage common practices for simulating federated environments for survival analysis.
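
A minimal sketch of the label-skewed splitting idea follows, assuming per-client class proportions are drawn from a Dirichlet distribution whose concentration parameter `alpha` controls the heterogeneity level; the function name and defaults are assumptions, not the published implementation.

```python
# Assumed sketch of label-skewed splitting: small alpha -> highly heterogeneous
# clients, large alpha -> nearly uniform splits.
import numpy as np

def label_skewed_split(labels, n_clients, alpha, seed=0):
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])          # samples of class c
        proportions = rng.dirichlet(alpha * np.ones(n_clients))  # per-client share of class c
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

# Example: split 1000 samples with two event labels across 5 clients.
labels = np.random.default_rng(1).integers(0, 2, size=1000)
splits = label_skewed_split(labels, n_clients=5, alpha=0.5)
print([len(s) for s in splits])
```

Because a single hyperparameter governs the Dirichlet concentration, sweeping `alpha` yields federated scenarios ranging from nearly identical clients to strongly skewed ones, which is the property the heterogeneity evaluation above relies on.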