The Business Model of Free Software: Legal Myths and Real Benefits
Free Software is the term coined by Richard Stallman in 1983 to denote programs whose sources are available to whoever receives a copy of the software, and which come with the freedom to run, copy, distribute, study, change, and improve the software. As Richard Stallman's concept grew in popularity, and with the subsequent advent of GNU/Linux, Free Software has received a great deal of attention and media publicity. With attention and publicity came expectations, as well as a number of legal myths and confusions. The objective of this article is to clarify some of these legal misunderstandings while explaining how the legal fundamentals on which Free Software is based allow for a long-lasting business model, based on a special kind of expertise-based support, that benefits customers and guarantees the creation of a local pool of expertise. This article is based on our experience with GNAT Pro, the Free Software development environment for the Ada 95 programming language. It comprises a compiler that is part of GCC (the GNU Compiler Collection), a toolset, a graphical Integrated Development Environment, and a set of supporting libraries. Developing, maintaining, and marketing GNAT Pro for ten years have provided significant experience with both the technical and non-technical aspects of Free Software. This article summarizes the principal legal and business lessons learned.
Free Software and Leveraged Service Organizations
In this work we examine where for-profit pure FLOSS business organizations in the embedded real-time software space draw their revenue. The business model of these ventures rests on an original concept: the Leveraged Service Organization (LSO), which, thanks to its subscription-based model, is capable of generating a stable cash flow that can be invested in innovation and used to reward employees and investors alike. The leverage, in an LSO, comes from the concentration of know-how and expertise around the Free Software package(s) marketed. Thanks to this expertise, an LSO can offer a service of extremely high value to its customers.
System to Software Integrity: A Case Study
It is widely acknowledged that the main source of cost in developing high-integrity software systems is their verification. A significant portion of this verification cost is spent demonstrating that the software complies with its requirements.
Over the years, several methods have been developed to address this issue, in particular testing, peer review, formal verification, and automatic code generation. Increasingly, these verification strategies are mixed within the same system, so as to adopt the most appropriate one for each component. This complicates the integration phase, which must cope with multiple formalisms and with different development and verification methods.
Our goal is to propose a pragmatic process for integrating components developed using different methods into a single system, and to demonstrate that properties already verified for each component in isolation are preserved in their composition. This process uses AADL as a pivotal modeling language for system specification and relies on specific verifications between the AADL model and the components developed using heterogeneous modeling and programming languages, namely Simulink for the computation-intensive parts and Ada/SPARK 2014 for the other components.
Our paper proceeds as follows. First we provide a high-level overview of our approach and survey the current methods for addressing the property-preservation problem. We then illustrate the approach in practice on the Nose Gear Challenge problem, a simplified yet complete example of a high-integrity real-time system, and conclude by comparing our approach to the state of the art.
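As a toy illustration of the property-preservation idea (this is not the paper's AADL-based tooling; the component names and predicates below are invented for illustration), each component can be viewed as carrying an assumption on its input and a guarantee on its output, and integration can check that every upstream guarantee implies the downstream assumption:

```python
import random

class Component:
    """A component with an assumption on its input and a guarantee on its output."""
    def __init__(self, name, assumes, guarantees):
        self.name = name
        self.assumes = assumes        # predicate over the component's input
        self.guarantees = guarantees  # predicate over the component's output

def check_chain(pipeline, samples=10_000):
    """Sampling-based check (a cheap stand-in for formal verification) that
    each stage's guarantee implies the next stage's assumption."""
    for up, down in zip(pipeline, pipeline[1:]):
        for _ in range(samples):
            v = random.uniform(-1e3, 1e3)
            if up.guarantees(v) and not down.assumes(v):
                return f"{up.name} -> {down.name}: assumption can be violated at {v}"
    return "no counterexample found"

# Hypothetical pipeline: a Simulink-style filter feeding a SPARK-style monitor.
filt = Component("velocity_filter",
                 assumes=lambda x: True,
                 guarantees=lambda y: 0.0 <= y <= 250.0)   # bounded output
monitor = Component("overspeed_monitor",
                    assumes=lambda y: y >= 0.0,            # expects non-negative input
                    guarantees=lambda z: True)
print(check_chain([filt, monitor]))
```

In the paper's setting this obligation would be discharged formally (e.g., by SPARK proof or model-level verification) rather than by sampling; the sketch only shows the shape of the check.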
A Deep Learning Approach Validates Genetic Risk Factors for Late Toxicity After Prostate Cancer Radiotherapy in a REQUITE Multi-National Cohort.
Background: REQUITE (validating pREdictive models and biomarkers of radiotherapy toxicity to reduce side effects and improve QUalITy of lifE in cancer survivors) is an international prospective cohort study. The purpose of this project was to analyse a cohort of patients recruited into REQUITE using a deep learning algorithm to identify patient-specific features associated with the development of toxicity, and to test the approach by attempting to validate previously published genetic risk factors. Methods: The study involved REQUITE prostate cancer patients treated with external beam radiotherapy who had complete 2-year follow-up. We used five separate late toxicity endpoints: ≥ grade 1 late rectal bleeding, ≥ grade 2 urinary frequency, ≥ grade 1 haematuria, ≥ grade 2 nocturia, and ≥ grade 1 decreased urinary stream. Forty-three single nucleotide polymorphisms (SNPs) already reported in the literature to be associated with these toxicity endpoints were included in the analysis. No SNP had been studied before in the REQUITE cohort. Deep Sparse AutoEncoders (DSAE) were trained to recognize features (SNPs) identifying patients with no toxicity, and tested on an independent mixed population including patients with and without toxicity. Results: In total, 1,401 patients were included; toxicity rates were: rectal bleeding 11.7%, urinary frequency 4%, haematuria 5.5%, nocturia 7.8%, and decreased urinary stream 17.1%. Twenty-four of the 43 SNPs were validated as identifying patients with toxicity. Twenty of these 24 SNPs were associated with the same toxicity endpoint as reported in the literature: 9 SNPs for urinary symptoms and 11 SNPs for overall toxicity. The other four SNPs were associated with a different endpoint. Conclusion: Deep learning algorithms can validate SNPs associated with toxicity after radiotherapy for prostate cancer. The method should be studied further to identify polygenic SNP risk signatures for radiotherapy toxicity. Such signatures could then be included in integrated normal tissue complication probability models and tested for their ability to personalize radiotherapy treatment planning.
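For readers unfamiliar with the architecture, the sketch below shows a minimal sparse autoencoder in PyTorch. It illustrates the general technique only, not the paper's actual DSAE configuration: the layer sizes, the L1 sparsity penalty, and the synthetic genotype data are all assumptions.

```python
# Minimal sparse autoencoder sketch (illustrative only, not the paper's DSAE).
import torch
import torch.nn as nn

class SparseAutoEncoder(nn.Module):
    def __init__(self, n_snps=43, n_hidden=16):
        super().__init__()
        self.encoder = nn.Linear(n_snps, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_snps)

    def forward(self, x):
        code = torch.relu(self.encoder(x))   # sparse hidden representation
        return self.decoder(code), code

def train_step(model, x, optimiser, sparsity_weight=1e-3):
    """One step: reconstruction loss plus an L1 penalty pushing the code toward sparsity."""
    recon, code = model(x)
    loss = nn.functional.mse_loss(recon, x) + sparsity_weight * code.abs().mean()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

# Toy usage on random genotype-like data (0/1/2 allele counts, rescaled).
model = SparseAutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randint(0, 3, (128, 43)).float() / 2.0
for _ in range(10):
    train_step(model, x, opt)
```

Training such a model only on no-toxicity patients, as the abstract describes, lets reconstruction error or code activations flag patients whose SNP profiles deviate from the no-toxicity pattern.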
Efficient Algorithms for Cyclic Scheduling
This work addresses the problem of non-preemptively scheduling a cyclic set of interdependent operations, representing for instance a program loop, when p identical processors are available. For p = 1 we give a simple, efficient, polynomial-time algorithm producing optimum results. When p > 1 the problem becomes NP-hard, and a slight modification of our algorithm generates provably close-to-optimum results. 1 Introduction: With advances in hardware technology, most of today's high-performance computers offer some degree of parallelism. To take advantage of this concurrency, parallel extensions to sequential programming languages have been designed. Such extensions have mostly proved inadequate, as they are usually tailored to some underlying parallel architecture and are consequently not portable. On the other hand, there is a large number of sequential applications that users would like to run on these high-performance computers. The high cost of machine-specific software developm..
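Since the abstract is cut short, the sketch below shows a generic greedy list scheduler for one iteration of a dependence graph on p identical processors. It is a sketch of the problem setting only, not the paper's algorithm, and it ignores the cross-iteration (cyclic) dependences the paper actually handles.

```python
def list_schedule(duration, succs, preds_count, p):
    """Greedy list scheduling of a DAG of operations on p identical processors.
    duration: op -> execution time; succs: op -> dependent ops;
    preds_count: op -> number of not-yet-finished predecessors (mutated)."""
    ready = [op for op, c in preds_count.items() if c == 0]
    free_at = [0] * p                           # next free time of each processor
    earliest = {op: 0 for op in duration}       # time all of an op's inputs are available
    finish = {}
    while ready:
        op = ready.pop(0)                       # FIFO among ready operations
        i = min(range(p), key=lambda k: free_at[k])  # pick the earliest-free processor
        start = max(free_at[i], earliest[op])
        finish[op] = start + duration[op]
        free_at[i] = finish[op]
        for s in succs.get(op, []):
            earliest[s] = max(earliest[s], finish[op])
            preds_count[s] -= 1
            if preds_count[s] == 0:
                ready.append(s)
    return finish                               # makespan = max(finish.values())

# Toy loop body: c depends on a and b; two processors.
dur = {"a": 2, "b": 3, "c": 1}
succ = {"a": ["c"], "b": ["c"]}
pc = {"a": 0, "b": 0, "c": 2}
print(list_schedule(dur, succ, pc, p=2))   # {'a': 2, 'b': 3, 'c': 4}
```

The cyclic version of the problem additionally overlaps successive loop iterations, which is what makes the p > 1 case NP-hard.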
High-Integrity Systems Development for Integrated Modular Avionics using VxWorks and GNAT
This paper presents recent trends in avionics systems development, from bespoke systems through COTS to emerging Integrated Modular Avionics architectures. Advances in Ada and RTOS technologies are explained, and the impact of RTCA/DO-178B and EUROCAE/ED-12B certification requirements, together with achievements to date, is presented in the context of the GNAT and VxWorks technologies.