    Do We Run How We Say We Run? Formalization and Practice of Governance in OSS Communities

    Open Source Software (OSS) communities often resist the regulation typical of traditional organizations. Yet formal governance systems are increasingly being adopted by communities, particularly through non-profit mentor foundations. Our study looks at the Apache Software Foundation Incubator program and the 208 projects it supports. We assemble a scalable, semantic pipeline to discover and analyze the governance behavior of projects from their mailing lists. We then investigate how communities receive formal policies, through their own governance priorities and their internalization of the policies. Our findings indicate that while communities observe formal requirements and policies as extensively as they are defined, their day-to-day governance focus does not dwell on the topics that see the most formal policy-making. Moreover, formalization, be it dedicating governance focus or adopting policy, has limited association with project sustenance.
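
    As a rough illustration of one step such a semantic pipeline might contain, the sketch below matches mailing-list messages to governance topics by embedding similarity. The model name, the topic inventory, and the label_message helper are illustrative assumptions, not the study's actual implementation.

        # Minimal sketch of a semantic labelling step for mailing-list messages,
        # assuming an off-the-shelf embedding model and a hand-written topic
        # inventory; the paper's real pipeline and topic set may differ.
        from sentence_transformers import SentenceTransformer, util

        # Hypothetical governance topics; the study's actual inventory is not given.
        TOPICS = {
            "releases": "preparing, voting on, and publishing a software release",
            "community": "onboarding contributors, electing committers, mentoring",
            "legal": "licensing, trademarks, intellectual property clearance",
            "infrastructure": "build systems, repositories, continuous integration",
        }

        model = SentenceTransformer("all-MiniLM-L6-v2")
        topic_names = list(TOPICS)
        topic_vecs = model.encode(list(TOPICS.values()), convert_to_tensor=True)

        def label_message(body: str) -> str:
            # Assign a message to its closest governance topic by cosine similarity.
            vec = model.encode(body, convert_to_tensor=True)
            scores = util.cos_sim(vec, topic_vecs)[0]
            return topic_names[int(scores.argmax())]

        print(label_message("Calling a vote on the 2.1.0 release candidate"))

    Aggregating such per-message labels over time would give the kind of per-project governance focus profile the abstract describes.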

    Messenger Visual, a pictogram-based instant messaging service for individuals with cognitive disability

    Throughout history, disabled individuals have suffered from social exclusion due to the limitations posed by their condition. For instance, deaf people cannot watch television programs because of their sensory limitation. Although this situation has improved thanks to efforts to adapt the different services (today the majority of television programs offer subtitles or simultaneous translation to sign language), the arrival of the Internet, along with the rest of the information and communication technologies, poses new risks to the inclusion of disabled individuals. Taking into account the present digital exclusion of disabled individuals, this project presents Messenger Visual, an Instant Messaging service based on pictograms for individuals with cognitive disability. Messenger Visual is composed of two parts. On the one hand, the Instant Messaging service has been designed considering the requirements of communication based on pictograms. On the other hand, the Instant Messaging client has been designed taking into account the user interface usability requirements of individuals with cognitive disability. Finally, the project presents the methodology we have used to evaluate Messenger Visual with a group of individuals with cognitive disability, as well as the results we have obtained. The evaluation process lasted six months, with one-hour fortnightly sessions held with two groups of individuals from Fundació El Maresme with different cognitive disability profiles. These sessions have allowed us to gain a better understanding of the user interface accessibility requirements and to learn how individuals with cognitive disability communicate using pictograms.
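
    As a loose illustration of how a pictogram-based chat message might be represented on the wire, the sketch below serializes a sequence of catalogue pictograms to JSON. Messenger Visual's real protocol and catalogue are not described in the abstract, so the Pictogram type, the example entries, and encode_message are all assumptions.

        # Illustrative sketch of a pictogram message payload; every name and
        # URL here is hypothetical, not Messenger Visual's actual format.
        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class Pictogram:
            pid: str        # catalogue identifier, e.g. "eat"
            image_url: str  # where the client fetches the drawing
            gloss: str      # text label shown or read aloud alongside the image

        # A tiny stand-in catalogue; a real one would hold thousands of entries.
        CATALOGUE = {
            "i": Pictogram("i", "https://example.org/picto/i.png", "I"),
            "want": Pictogram("want", "https://example.org/picto/want.png", "want"),
            "eat": Pictogram("eat", "https://example.org/picto/eat.png", "eat"),
        }

        def encode_message(sender: str, pids: list[str]) -> str:
            # Serialize a sequence of pictogram ids into a JSON chat payload.
            return json.dumps({
                "from": sender,
                "pictograms": [asdict(CATALOGUE[p]) for p in pids],
            })

        print(encode_message("alice", ["i", "want", "eat"]))

    Carrying the gloss with each pictogram lets the client render images for the cognitively disabled user while keeping a readable fallback.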

    Sample efficiency, transfer learning and interpretability for deep reinforcement learning

    Deep learning has revolutionised artificial intelligence, where the application of increased compute to train neural networks on large datasets has resulted in improvements in real-world applications such as object detection, text-to-speech synthesis and machine translation. Deep reinforcement learning (DRL) has similarly shown impressive results in board and video games, but less so in real-world applications such as robotic control. To address this, I have investigated three factors prohibiting further deployment of DRL: sample efficiency, transfer learning, and interpretability. To decrease the amount of data needed to train DRL systems, I have explored various storage strategies and exploration policies for episodic control (EC) algorithms, resulting in the application of online clustering to improve the memory efficiency of EC algorithms, and the maximum entropy mellowmax policy for improving the sample efficiency and final performance of the same EC algorithms. To improve performance during transfer learning, I have shown that a multi-headed neural network architecture trained using hierarchical reinforcement learning can retain the benefits of positive transfer between tasks while mitigating the interference effects of negative transfer. I additionally investigated the use of multi-headed architectures to reduce catastrophic forgetting in the continual learning setting. While the use of multiple heads worked well within a simple environment, it was of limited use within a more complex domain, indicating that this strategy does not scale well. Finally, I applied a wide range of quantitative and qualitative techniques to better interpret trained DRL agents. In particular, I compared the effects of training DRL agents both with and without visual domain randomisation (DR), a popular technique to achieve simulation-to-real transfer, providing a series of tests that can be applied before real-world deployment. One of the major findings is that DR produces more entangled representations within trained DRL agents, indicating quantitatively that they are invariant to nuisance factors associated with the DR process. Additionally, while my environment allowed agents trained without DR to succeed without requiring complex recurrent processing, all agents trained with DR appear to integrate information over time, as evidenced through ablations on the recurrent state.
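
    The mellowmax operator mentioned above has a standard published form (Asadi and Littman, 2017), sketched below together with the maximum-entropy policy derived from it. The omega value, the toy Q-values, and the root-finding bracket are illustrative assumptions rather than the thesis's exact setup.

        # Hedged sketch of the mellowmax operator and the maximum-entropy
        # mellowmax policy; omega and the Q-values are illustrative only.
        import numpy as np
        from scipy.optimize import brentq
        from scipy.special import logsumexp, softmax

        def mellowmax(q: np.ndarray, omega: float) -> float:
            # mm_w(q) = log(mean(exp(w * q))) / w, a soft maximum over Q-values.
            return (logsumexp(omega * q) - np.log(len(q))) / omega

        def mellowmax_policy(q: np.ndarray, omega: float) -> np.ndarray:
            # Maximum-entropy distribution whose expected Q equals mellowmax(q).
            adv = q - mellowmax(q, omega)
            if np.allclose(adv, 0.0):
                return np.full(len(q), 1.0 / len(q))  # all actions equal: uniform
            f = lambda beta: np.sum(np.exp(beta * adv) * adv)
            hi = 1.0
            while f(hi) < 0.0:  # expand until the root is bracketed
                hi *= 2.0
            beta = brentq(f, 0.0, hi)  # inverse temperature with E_pi[Q] = mm_w(q)
            return softmax(beta * q)

        q_values = np.array([1.0, 2.0, 3.0])  # toy Q-values for a single state
        print(mellowmax_policy(q_values, omega=5.0))

    Because omega interpolates between mean (omega -> 0) and max (omega -> infinity), the resulting policy trades exploration against exploitation with a single parameter.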

    Current Challenges in the Application of Algorithms in Multi-institutional Clinical Settings

    The coronavirus disease pandemic has highlighted the importance of artificial intelligence in multi-institutional clinical settings. Particularly in situations where the healthcare system is overloaded and large amounts of data are generated, artificial intelligence has great potential to provide automated solutions and to unlock the untapped value of acquired data. This includes the areas of care, logistics, and diagnosis. For example, automated decision support applications could tremendously help physicians in their daily clinical routine. Especially in radiology and oncology, the exponential growth of imaging data, triggered by a rising number of patients, leads to a permanent overload of the healthcare system, making the use of artificial intelligence inevitable. However, the efficient and advantageous application of artificial intelligence in multi-institutional clinical settings faces several challenges, such as accountability and regulation hurdles, implementation challenges, and fairness considerations. This work focuses on the implementation challenges, which revolve around the following questions: how can well-curated and standardized data be ensured, how do algorithms from other domains perform on multi-institutional medical datasets, and how can more robust and generalizable models be trained? Questions of how to interpret results, and whether correlations exist between the performance of the models and the characteristics of the underlying data, are also part of the work. Therefore, besides presenting a technical solution for manual data annotation and tagging of medical images, a real-world federated learning implementation for image segmentation is introduced. Experiments on a multi-institutional prostate magnetic resonance imaging dataset show that models trained by federated learning can achieve performance similar to training on pooled data. Furthermore, natural language processing algorithms for the tasks of semantic textual similarity, text classification, and text summarization are applied to multi-institutional, structured and free-text oncology reports. The results show that performance gains are achieved by customizing state-of-the-art algorithms to the peculiarities of the medical datasets, such as the occurrence of medications, numbers, or dates. In addition, performance influences are observed depending on the characteristics of the data, such as lexical complexity. The generated results, human baselines, and retrospective human evaluations demonstrate that artificial intelligence algorithms have great potential for use in clinical settings. However, due to the difficulty of processing domain-specific data, there still exists a performance gap between the algorithms and the medical experts. In the future, it is therefore essential to improve the interoperability and standardization of data, as well as to continue working on algorithms that perform well on possibly domain-shifted medical data from multiple clinical centers.
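
    The federated learning result above rests on an aggregation step that combines models without pooling patient data. Below is a minimal sketch of federated averaging (FedAvg), the rule commonly used for this; the function name, per-site weight lists, and toy data counts are assumptions, not the work's actual implementation.

        # Minimal FedAvg sketch: each site trains locally, then the server
        # averages per-layer weights, weighting each site by its data size.
        # All names and numbers here are illustrative.
        import numpy as np

        def federated_average(client_weights, client_sizes):
            # client_weights: one list of per-layer arrays per site.
            total = sum(client_sizes)
            n_layers = len(client_weights[0])
            return [
                sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
                for i in range(n_layers)
            ]

        # Toy example: three hospitals, a one-layer "model", unequal data sizes.
        site_a = [np.array([1.0, 1.0])]
        site_b = [np.array([2.0, 2.0])]
        site_c = [np.array([4.0, 4.0])]
        global_model = federated_average([site_a, site_b, site_c], [100, 200, 700])
        print(global_model)  # only parameters leave each site, never images

    Repeating local training and this averaging step over many rounds is what lets the federated model approach the performance of training on pooled data.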

    Understanding the complexity of the corneal endothelium for regenerative medicine

    Endothelial keratoplasty is the current therapy for corneal endothelial disease. Advances in surgical procedures are improving the reproducibility of and accessibility to corneal transplantation, causing an increase in the number of corneal transplantations globally. Unfortunately, there is currently a worldwide donor cornea shortage, aggravated by the increasing number of transplantations. It has been estimated that only one out of seventy patients in need has access to a donor cornea, and 12.7 million people in the world are awaiting treatment. The development of regenerative medicine approaches to treat corneal endothelial disease is necessary to tackle the increasing demand for donor corneal tissue and to provide treatment to those in need. The work described in this thesis addresses this need, with our main goal being to contribute to the development of such innovative therapies. We review the current and developing approaches for the regeneration of the corneal endothelium, presenting their pros and cons, but also providing a social perspective and a regulatory guide for the approval of such treatments in the European Union. We show experimentally that corneal endothelial tissue can be delivered to the operating theater preloaded in an injection cannula without affecting its quality.