
    Distributed learning: Developing a predictive model based on data from multiple hospitals without data leaving the hospital – A real life proof of concept

    Purpose: One of the major hurdles in enabling personalized medicine is obtaining sufficient patient data to feed into predictive models. Combining data originating from multiple hospitals is difficult because of the ethical, legal, political, and administrative barriers associated with data sharing. In order to avoid these issues, a distributed learning approach can be used. Distributed learning is defined as learning from data without the data leaving the hospital. Patients and methods: Clinical data from 287 lung cancer patients, treated with curative intent with chemoradiation (CRT) or radiotherapy (RT) alone, were collected and stored at 5 different medical institutes (123 patients at MAASTRO (Netherlands, Dutch), 24 at Jessa (Belgium, Dutch), 34 at Liege (Belgium, Dutch and French), 48 at Aachen (Germany, German) and 58 at Eindhoven (Netherlands, Dutch)). A Bayesian network model was adapted for distributed learning (watch the animation: http://youtu.be/nQpqMIuHyOk). The model predicts dyspnea, a common side effect after radiotherapy treatment of lung cancer. Results: We show that it is possible to use the distributed learning approach to train a Bayesian network model on patient data originating from multiple hospitals without these data leaving the individual hospitals. The AUC of the model is 0.61 (95% CI, 0.51–0.70) on 5-fold cross-validation and ranges from 0.59 to 0.71 on external validation sets. Conclusion: Distributed learning allows predictive models to be learned from data originating from multiple hospitals while avoiding many of the data sharing barriers. Furthermore, the distributed learning approach can be used to extract and employ knowledge from routine patient data from multiple hospitals while remaining compliant with the various national and European privacy laws.
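    The privacy-preserving principle described above — only aggregated statistics, never patient-level records, leave a hospital — can be illustrated with a minimal sketch. The variable names, the single conditional probability table, and the count-sharing scheme below are illustrative assumptions for one maximum-likelihood update; the study's actual Bayesian network structure and learning infrastructure are not described in the abstract.

```python
# Minimal sketch of distributed maximum-likelihood estimation for one
# conditional probability table (CPT) of a Bayesian network.
# Hypothetical variables: 'dyspnea' (outcome) with parent 'chemo' (CRT vs RT).
# Only aggregated counts leave each site, never patient-level rows.
from collections import Counter

def local_counts(records, parent, child):
    """Run inside a hospital: count (parent_value, child_value) pairs."""
    counts = Counter()
    for row in records:
        counts[(row[parent], row[child])] += 1
    return counts  # aggregate statistics only

def combine_and_estimate(all_counts):
    """Run at the coordinating site: merge counts and normalise into a CPT."""
    total = Counter()
    for c in all_counts:
        total.update(c)
    cpt = {}
    for p in {pp for (pp, _) in total}:
        norm = sum(v for (pp, _), v in total.items() if pp == p)
        for (pp, child), v in total.items():
            if pp == p:
                cpt[(child, p)] = v / norm  # P(child | parent = p)
    return cpt

# Toy data standing in for two sites (no real patient data).
site_a = [{"chemo": 1, "dyspnea": 1}, {"chemo": 1, "dyspnea": 0}]
site_b = [{"chemo": 0, "dyspnea": 0}, {"chemo": 0, "dyspnea": 0}]
counts = [local_counts(s, "chemo", "dyspnea") for s in (site_a, site_b)]
print(combine_and_estimate(counts))
```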

    Improving the Rules of the DPA Contest

    A DPA contest was launched at CHES 2008. The goal of this initiative is to make it possible for researchers to compare different side-channel attacks in an objective manner. For this purpose, a set of 80,000 traces corresponding to the encryption of 80,000 different plaintexts with the Data Encryption Standard and a fixed key has been made available. In this short note, we discuss the rules that the contest uses to rate the effectiveness of different distinguishers. We first describe practical examples of attacks in which these rules can be misleading. Then, we suggest an improved set of rules that can be implemented easily in order to obtain a better interpretation of the comparisons performed.
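    As background for the rule discussion above, one commonly used way to rate a distinguisher is the number of traces after which the correct key hypothesis is ranked first and stays first. The sketch below illustrates that stability criterion on hypothetical score data; it is not the contest's official evaluation code, and the scores would normally come from a distinguisher such as a correlation or difference-of-means attack.

```python
# Minimal sketch of a "stability" metric for a side-channel distinguisher:
# the number of traces after which the correct key is ranked first and
# stays first. The scores array is hypothetical.
import numpy as np

def traces_to_stable_rank_one(scores, correct_key):
    """
    scores: array of shape (n_traces, n_keys); scores[t, k] is the
            distinguisher value for key hypothesis k using the first t+1 traces.
    Returns the smallest trace count after which the correct key is always
    ranked first, or None if it never stabilises.
    """
    best = np.argmax(scores, axis=1)   # best-ranked key at each trace count
    stable_from = None
    for t, k in enumerate(best):
        if k == correct_key:
            if stable_from is None:
                stable_from = t + 1
        else:
            stable_from = None          # correct key lost rank 1: reset
    return stable_from

# Toy example: 3 key hypotheses, correct key is index 2.
toy = np.array([[0.2, 0.5, 0.4],
                [0.3, 0.4, 0.6],
                [0.2, 0.3, 0.7],
                [0.1, 0.2, 0.8]])
print(traces_to_stable_rank_one(toy, correct_key=2))  # -> 2
```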

    Mathematical and physical concerns regarding cryptographic key length

    Every security design involves choosing adequate parameters. When dealing with cryptography, one of the most important choices is the length of the key to be used. Such emphasis comes from Kerckhoffs' principle, which states that "the security must only rely on the key" while the system itself should be considered public knowledge. This is indeed the case for our public standards: anyone can access their specifications. However, those standards give no information about how to choose the key length given an application's security requirements. Using overwhelmingly large keys in order to stay on the safe side is not a viable approach, as it crushes performance, especially for asymmetric schemes. A commonly used solution is to follow recommendations from well-known organizations such as NIST or ECRYPT. Another option is to use the mathematical model proposed by Lenstra and Verheul to derive a key length corresponding to a desired security margin. Questions related to the appropriate selection of keys are at the core of the work contained in this dissertation. First, we made the proposal of Lenstra and Verheul available on www.keylength.com, together with other recommendations. Afterwards, we pointed out that Lenstra and Verheul's analysis is based on software data points only. Therefore, we decided to introduce a new data point for hardware in order to compare it with software. For this purpose, we mounted an attack against an elliptic curve discrete logarithm problem using a cluster of low-cost FPGAs. Then, we noted that only mathematical attacks were considered, that is, without taking into account any implementation defect of which an attacker could take advantage. Such defects have led to the now widely studied side-channel and fault attacks. We explored the potential threat fault attacks really represent and dealt with the most pernicious kinds of fault attacks, which aim at disturbing the public elements used in discrete-logarithm-based schemes. Finally, we studied how keys can be generated, but from a novel perspective: fuzzy extractors. This recently introduced tool allows building reproducible keys usable in cryptography from physical measurements, which are error-prone. Using them, we built a system to strongly link information to its medium, using secure paper and low-cost hardware. This shows that physics can not only be used to break security systems but also to build them.
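    To make the key-length question concrete, the sketch below extrapolates a symmetric-equivalent security level from a Moore's-law style assumption: computing power doubling every 18 months, with a 56-bit DES key taken as adequate in 1982. These constants are commonly cited simplifications, not the full Lenstra–Verheul model or the dissertation's exact parameters.

```python
# Simplified sketch of a Moore's-law style key-length extrapolation, in the
# spirit of (but much cruder than) the Lenstra-Verheul model discussed above.
# Assumption: a 56-bit symmetric key was adequate in 1982, and an attacker's
# computing power doubles every 18 months, so roughly 2/3 of a bit must be
# added per year to keep the same security margin.
def symmetric_bits_needed(year, base_year=1982, base_bits=56, doubling_months=18):
    extra_bits = 12.0 * (year - base_year) / doubling_months
    return base_bits + extra_bits

for y in (2010, 2020, 2030):
    print(y, round(symmetric_bits_needed(y), 1))
```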

    How to strongly link data and its medium: the paper case

    Establishing a strong link between the paper medium and the data represented on it is an interesting alternative for defeating unauthorised copying and content modification attempts. Many applications would benefit from it, such as show tickets, contracts, banknotes or medical prescriptions. In this study, the authors present a low-cost solution that establishes such a link by combining digital signatures, physically unclonable functions and fuzzy extractors. The proposed protocol provides two levels of security that can be used according to the time available for verifying the signature and the trust placed in the paper holder. In practice, this solution uses ultra-violet fibres that are poured into the paper mixture. Fuzzy extractors are then used to build identifiers for each sheet of paper, and a digital signature is applied to the combination of these identifiers and the data to be protected from copying and modification. The authors additionally provide a careful statistical analysis of the robustness and amount of randomness reached by the extractors. The authors conclude that identifiers of 72 bits can be derived, which is assumed to be sufficient for the proposed application. However, more randomness, robustness and unclonability could be obtained at the cost of a more expensive process, keeping exactly the same methodology.
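    The binding principle — sign the combination of a sheet's extracted identifier and the protected data, then verify both together — can be sketched as follows. For brevity an HMAC stands in for the asymmetric digital signature the scheme would actually use, and the identifiers and data are toy values rather than fuzzy-extractor outputs.

```python
# Minimal sketch of binding data to a specific sheet of paper: a tag is
# computed over (paper identifier || data) at issuance and re-checked at
# verification. HMAC is a stand-in for the real digital signature scheme.
import hmac, hashlib

SIGNING_KEY = b"issuer-secret"   # placeholder for the issuer's private key

def issue(paper_id: bytes, data: bytes) -> bytes:
    """At issuance: bind the document data to this specific sheet of paper."""
    return hmac.new(SIGNING_KEY, paper_id + b"|" + data, hashlib.sha256).digest()

def verify(paper_id: bytes, data: bytes, tag: bytes) -> bool:
    """At verification: re-derive the identifier from the sheet and check the tag."""
    expected = hmac.new(SIGNING_KEY, paper_id + b"|" + data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tag = issue(b"sheet-72-bit-id", b"ticket: seat 12A")
print(verify(b"sheet-72-bit-id", b"ticket: seat 12A", tag))   # True
print(verify(b"other-sheet-id", b"ticket: seat 12A", tag))    # False: wrong medium
print(verify(b"sheet-72-bit-id", b"ticket: seat 1A", tag))    # False: modified data
```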

    PET imaging in adaptive radiotherapy of gastrointestinal tumors

    INTRODUCTION: Radiotherapy is a cornerstone in the multimodality treatment of several gastrointestinal (GI) tumors. Positron-emission tomography (PET) has an established role in the diagnosis, response assessment and (re-)staging of these tumors. Nevertheless, the value of PET in adaptive radiotherapy remains unclear. This review focuses on the role of PET in adaptive radiotherapy, i.e. during the treatment course and in the delineation process. EVIDENCE ACQUISITION: The MEDLINE database was searched for the terms ("Radiotherapy"[Mesh] AND "Positron-Emission Tomography"[Mesh] AND one of the site-specific keywords), yielding a total of 1710 articles. After abstract selection, 27 papers were identified for esophageal neoplasms, 1 for gastric neoplasms, 9 for pancreatic neoplasms, 6 for liver neoplasms, 1 for biliary tract neoplasms, none for colonic neoplasms, 15 for rectal neoplasms and 12 for anus neoplasms. EVIDENCE SYNTHESIS: The use of PET for truly adaptive radiotherapy during treatment for GI tumors has barely been investigated, in contrast to the potential of the PET-defined metabolic tumor volume for optimization of the target volume. The optimized target definition seems useful for treatment individualization, such as focal boosting strategies in esophageal, pancreatic and anorectal cancer. Nevertheless, for all GI tumors, further investigation is needed. CONCLUSIONS: In general, too few data are available to draw conclusions on the role of PET imaging during radiotherapy for adaptive radiotherapy (ART) strategies in GI cancer. On the other hand, based on the available evidence, the use of biological imaging for target volume adaptation seems promising and could pave the way towards individualized treatment strategies.

    Optimal use of limb mechanics distributes control during bimanual tasks.

    Bimanual tasks involve the coordination of both arms, which often offers redundancy in the ways a task can be completed. The distribution of control across limbs is often considered from the perspective of handedness. In this context, although there are differences between dominant and nondominant arms during reaching control (Sainburg 2002), previous studies have shown that the brain tends to favor the dominant arm when performing bimanual tasks (Salimpour and Shadmehr 2014). However, biomechanical factors known to influence planning and control in unimanual tasks may also generate limb asymmetries in force generation, but their influence on bimanual control has remained unexplored. We investigated this issue in a series of experiments in which participants were instructed to generate a 20-N force with both arms, with or without perturbation of the target force during the trial. We modeled the task in the framework of optimal feedback control of a two-link model with six human-like muscle groups. The biomechanical model predicted a differential contribution of each arm dependent on the orientation of the target force and the joint configuration, which was quantitatively matched by the participants' behavior, regardless of handedness. Responses to visual perturbations were strongly influenced by the perturbation direction, such that online corrections also reflected an optimal use of limb biomechanics. These results show that the nervous system takes biomechanical constraints into account when optimizing the distribution of forces generated across limbs during both movement planning and feedback control of a bimanual task. NEW & NOTEWORTHY Here, we studied a bimanual force production task to examine the effects of biomechanical constraints on the distribution of control across limbs. Our findings show that the central nervous system optimizes the distribution of force across the two arms according to the joint configuration of the upper limbs. We further show that the underlying mechanisms influence both movement planning and online corrective responses to sudden changes in the target force.
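    The kind of optimization invoked above can be illustrated with a toy quadratic-effort model: two arms must jointly produce a 20-N target force, and arm-specific cost weights (standing in for joint-configuration-dependent biomechanics) determine how the load is shared. This is only a sketch of the general principle, not the study's six-muscle optimal feedback controller.

```python
# Minimal sketch of effort-optimal force sharing between two arms.
# Each arm contributes a planar force; together they must sum to the 20 N
# target. Effort is a quadratic cost whose weights (W_left, W_right) stand in
# for arm-specific biomechanical costs set by joint configuration. The
# weighted minimum-norm solution then splits the load unevenly. Toy values.
import numpy as np

def share_force(target, W_left, W_right):
    A = np.hstack([np.eye(2), np.eye(2)])            # constraint: f_left + f_right = target
    W = np.block([[W_left, np.zeros((2, 2))],
                  [np.zeros((2, 2)), W_right]])
    W_inv = np.linalg.inv(W)
    # Minimise x^T W x subject to A x = target (closed-form weighted minimum norm).
    x = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, target)
    return x[:2], x[2:]                              # (f_left, f_right)

target = np.array([0.0, 20.0])                       # 20 N straight ahead
W_left = np.diag([1.0, 1.5])                         # left arm: forward force is "costlier"
W_right = np.diag([1.0, 1.0])
f_left, f_right = share_force(target, W_left, W_right)
print(f_left, f_right)                               # unequal forward contributions
```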

    Simple and reliable method to estimate the fingertip static coefficient of friction in precision grip

    The static coefficient of friction (μstatic) plays an important role in dexterous object manipulation. The minimal normal force (i.e. grip force) needed to avoid dropping an object is determined by the tangential force at the fingertip-object contact and the frictional properties of the skin-object contact. Although frequently assumed to be constant for all levels of normal force (NF, the force normal to the contact), μstatic actually varies nonlinearly with NF and increases at low NF levels. No method is currently available to measure the relationship between μstatic and NF easily. Therefore, we propose a new method allowing the simple and reliable measurement of the fingertip μstatic at different NF levels, as well as an algorithm for determining μstatic from measured forces and torques. Our method is based on active, back-and-forth movements of a subject's finger on the surface of a fixed six-axis force and torque sensor. μstatic is computed as the ratio of the tangential to the normal force at slip onset. A negative power law captures the relationship between μstatic and NF. Our method allows the continuous estimation of μstatic as a function of NF during dexterous manipulation, based on the relationship between μstatic and NF measured before manipulation.
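    The estimation principle — μstatic as the tangential-to-normal force ratio at slip onset, followed by a negative power-law fit against NF — can be sketched as follows. Slip-onset detection from the raw force/torque signals is simplified here to the peak of the force ratio, and the data are illustrative rather than recorded fingertip forces.

```python
# Minimal sketch: estimate mu_static per rubbing pass as the peak TF/NF
# ratio, then fit a negative power law mu = k * NF**(-n) across NF levels.
import numpy as np

def mu_at_slip_onset(normal_force, tangential_force):
    """Estimate mu_static for one rubbing pass as the peak TF/NF ratio."""
    ratio = tangential_force / normal_force
    return float(np.max(ratio))

def fit_power_law(nf_levels, mu_values):
    """Fit mu = k * NF**(-n) by linear regression in log-log space."""
    slope, intercept = np.polyfit(np.log(nf_levels), np.log(mu_values), 1)
    return np.exp(intercept), -slope          # (k, n)

# One toy rubbing pass: the ratio peaks just before the finger slips.
nf_trace = np.array([1.0, 1.0, 1.0, 1.0])     # N
tf_trace = np.array([0.4, 0.9, 1.25, 1.1])    # N
print(mu_at_slip_onset(nf_trace, tf_trace))   # ~1.25

# Toy mu estimates at several NF levels: mu_static decreases as NF grows.
nf = np.array([0.5, 1.0, 2.0, 4.0])           # N
mu = np.array([1.6, 1.3, 1.05, 0.85])
k, n = fit_power_law(nf, mu)
print(f"mu_static ~ {k:.2f} * NF^(-{n:.2f})")
```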