A flexible multivariate random effects proportional odds model with application to adverse effects during radiation therapy
Radiation therapy in patients with head and neck cancer has a toxic effect on mucosa, the soft tissue in and around the mouth. Mucositis, a condition characterized by pain and inflammation of the mucosal surface, is therefore a serious and common side effect. Although the mucosa recovers during breaks in and following the radiotherapy course, the recovery depends on the type of tissue involved and on its location. We present a novel flexible multivariate random effects proportional odds model which takes account of the longitudinal course of oral mucositis at different mouth sites and of the radiation dosage (in terms of cumulative dose). The model is an extension of the proportional odds model, which is used for ordinal response variables. Our model includes the ordinal multivariate response of the mucositis score by location, random intercepts for individuals, and a non-linear function of cumulative radiation dose. The model allows us to test whether sensitivity differs by mouth site after adjusting for site-specific cumulative radiation dose, and to check whether and how the (non-linear) effect of site-specific dose differs by site. We fit the model to longitudinal patient data from a prospective observational study and find that, after adjusting for cumulative dose, the upper lip, lower lip, and mouth floor are associated with the lowest mucositis scores, while the hard and soft palate are associated with the highest. This implies the possibility that tissues at different mouth locations differ in their sensitivity to the toxic effect of radiation. We also find that cumulative dose, followed by mouth site, are the strongest predictors of mucositis, and that the effects of age and gender are negligible.
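To sketch the structure of such a model (an illustrative parameterization consistent with the description above, not the authors' exact formulation): for patient $i$, mouth site $j$, visit $t$, and mucositis grade $k$, a cumulative logit model with a per-patient random intercept and site-specific smooth dose effects can be written as

    \[
    \operatorname{logit} P(Y_{ijt} \le k) = \theta_k - \bigl(\alpha_j + f_j(d_{ijt}) + b_i\bigr),
    \qquad b_i \sim N(0, \sigma_b^2),
    \]

where $\theta_k$ are ordered cut-points shared across sites (the proportional odds assumption), $\alpha_j$ is the effect of site $j$, $f_j$ is a non-linear function of the site-specific cumulative dose $d_{ijt}$, and $b_i$ is the random intercept for patient $i$. Testing whether sensitivity differs by site then amounts to testing $\alpha_1 = \dots = \alpha_J$ after adjusting for dose, and testing whether the dose effect differs by site amounts to testing $f_1 = \dots = f_J$.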
A Review on Joint Models in Biometrical Research
In some fields of biometrical research, joint modelling of longitudinal measures and event time data has become very popular. This article reviews recent fruitful research in that area by classifying approaches to joint models into three categories: approaches focusing on serial trends, approaches focusing on event time data, and approaches with equal focus on both outcomes. Typically, longitudinal measures and event time data are modelled jointly by introducing shared random effects or by considering conditional distributions together with marginal distributions. We present the approaches in a uniform nomenclature, comment on the sub-models applied to the longitudinal and event time outcomes individually, and exemplify applications in biometrical research.
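To illustrate the shared random effects construction mentioned above (a generic formulation of the kind reviewed, not a specific model from the article): a longitudinal submodel and an event time submodel are linked by common random effects $b_i$,

    \[
    y_i(t) = x_i(t)^\top \beta + z_i(t)^\top b_i + \varepsilon_i(t),
    \qquad
    \lambda_i(t) = \lambda_0(t)\exp\bigl(w_i^\top \gamma + \alpha\, m_i(t)\bigr),
    \]

where $m_i(t) = x_i(t)^\top \beta + z_i(t)^\top b_i$ is the current true value of the longitudinal marker, $\lambda_0$ is a baseline hazard, and $\alpha$ quantifies the association between the marker trajectory and the event hazard; $\alpha = 0$ decouples the two outcomes.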
Revised classification of kinases based on bioactivity data: the importance of data density and choice of visualization
Linking Development Interventions to Conservation: Perspectives From Partners in the International Gorilla Conservation Programme
Insecticide resistance profile of Anopheles gambiae from a phase II field station in Cové, southern Benin: implications for the evaluation of novel vector control products.
BACKGROUND: Novel indoor residual spraying (IRS) and long-lasting insecticidal net (LLIN) products aimed at improving the control of pyrethroid-resistant malaria vectors have to be evaluated in Phase II semi-field experimental studies against highly pyrethroid-resistant mosquitoes. To better understand their performance, it is necessary to fully characterize the species composition, resistance status and resistance mechanisms of the vector populations in the experimental hut sites. METHODS: Bioassays were performed to assess phenotypic insecticide resistance in the malaria vector population at a newly constructed experimental hut site in Cové, a rice growing area in southern Benin, being used for WHOPES Phase II evaluation of newly developed LLIN and IRS products. The efficacy of standard WHOPES-approved pyrethroid LLIN and IRS products was also assessed in the experimental huts. Diagnostic genotyping techniques and microarray studies were performed to investigate the genetic basis of pyrethroid resistance in the Cové Anopheles gambiae population. RESULTS: The vector population at the Cové experimental hut site consisted of a mixture of Anopheles coluzzii and An. gambiae s.s., with the latter occurring at lower frequencies (23 %) and only in samples collected in the dry season. There was a high prevalence of resistance to pyrethroids and DDT (>90 % bioassay survival), with pyrethroid resistance intensity reaching 200-fold compared to the laboratory susceptible An. gambiae Kisumu strain. Standard WHOPES-approved pyrethroid IRS and LLIN products were ineffective in the experimental huts against this vector population (8-29 % mortality). The L1014F allele frequency was 89 %. CYP6P3, a cytochrome P450 validated as an efficient metabolizer of pyrethroids, was over-expressed. CONCLUSION: Characterizing pyrethroid resistance at Phase II field sites is crucial to the accurate interpretation of the performance of novel vector control products. The strong levels of pyrethroid resistance at the Cové experimental hut station make it a suitable site for Phase II experimental hut evaluations of novel vector control products, which aim for improved efficacy against pyrethroid-resistant malaria vectors to WHOPES standards. The resistance genes identified can be used as markers for further studies investigating the resistance management potential of novel mixture LLIN and IRS products tested at the site.
Improving Simulations of MPI Applications Using A Hybrid Network Model with Topology and Contention Support
Proper modeling of collective communications is essential for understanding the behavior of medium-to-large scale parallel applications, and even minor deviations in implementation can adversely affect the prediction of real-world performance. We propose a hybrid network model extending LogP-based approaches to account for topology and contention in high-speed TCP networks. This model is validated within SMPI, an MPI implementation provided by the SimGrid simulation toolkit. With SMPI, standard MPI applications can be compiled and run in a simulated network environment, and traces can be captured without incurring errors from tracing overheads or poor clock synchronization, as in physical experiments. SMPI provides features for simulating applications that require large amounts of time or resources, including selective execution, RAM folding, and off-line replay of execution traces. We validate our model by comparing traces produced by SMPI with those from other simulation platforms, as well as from real-world environments.
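A minimal sketch of the flavor of model being extended (a generic LogGP-style point-to-point cost function with a crude per-link contention term; the parameter names and the contention rule are illustrative assumptions, not the paper's calibrated model):

    # Sketch of a LogGP-style point-to-point cost model with naive contention.
    # Parameters (illustrative): L = latency, o = CPU overhead, G = per-byte gap.

    def transfer_time(msg_bytes, L, o, G, concurrent_flows=1):
        """Estimated time to send one message of msg_bytes bytes.

        concurrent_flows models contention crudely: flows sharing a link
        divide its bandwidth, so the effective per-byte cost grows linearly.
        """
        return L + 2 * o + msg_bytes * G * concurrent_flows

    # Example: 1 MiB message on a link with 50 us latency, 5 us overhead,
    # 1 ns/byte gap, shared by 4 flows.
    t = transfer_time(1 << 20, L=50e-6, o=5e-6, G=1e-9, concurrent_flows=4)
    print(f"estimated transfer time: {t*1e3:.2f} ms")

Hybrid models of the kind described refine this picture by making the cost depend on message size regimes, network topology, and which flows actually share links, rather than a single global contention factor.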
Dependability for declarative mechanisms: neural networks in autonomous vehicle decision making.
Although introduced in 1958, neural networks have spread into numerous applications across different fields only in the last decade. This change was made possible by the reduced cost of the computing power required for deep neural networks and by the growing availability of data providing examples for training sets. The 2012 ImageNet image classification competition is often used as an example of how neural networks became good candidates for applications: during that competition, a neural network based solution won for the first time, and in all following editions the winning solutions were based on neural networks. Since then, neural networks have shown great results in several non-critical applications (image recognition, sound recognition, text analysis, etc.). There is growing interest in using them in critical applications, as their ability to generalize makes them good candidates for applications such as autonomous vehicles, but standards do not allow that yet.
Autonomous driving functions are currently being researched by industry with the final objective of producing, in the near future, fully autonomous vehicles as defined by the fifth level of the SAE International (Society of Automotive Engineers) classification. The autonomous driving process is usually decomposed into four parts: the perception, where sensors get information from the environment; the fusion, where the data from the different sensors is merged into one representation of the environment; the decision, which uses the representation of the environment to determine the vehicle's behavior and the commands to send to the actuators; and finally the control part, which implements these commands. In this thesis, following the interest of the company Stellantis, we focus on the decision part of this process, considering neural network based solutions.
Since automotive is a safety-critical application domain, the dependability of its systems must be implemented and ensured, and this is why the use of neural networks is not allowed at the moment: their lack of safety guarantees forbids their use in such applications. Dependability methods for classical software systems are well known, but neural networks do not yet have similar mechanisms to guarantee that they can be trusted. This is due to several reasons, among them the difficulty of testing applications with a quasi-infinite operational domain and whose functions are hard to define exhaustively in the specifications. Here lies the motivation of this thesis: how can we ensure the dependability of neural networks in the context of decision making for autonomous vehicles?
Research is now being conducted on the dependability and safety of neural networks, with several approaches under consideration, and our work is motivated by their great potential in the safety-critical applications mentioned above. In this thesis, we focus on one category of methods that seems to be a good candidate for ensuring the dependability of neural networks by solving some of the problems of testing: formal verification for neural networks. These methods aim to prove that a neural network respects a safety property over an entire range of its input and output domains. Formal verification is already used in other domains and is seen as a trusted way to gain confidence in a system, but for neural networks it remains a research topic, with currently no industrial applications.
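To make the idea concrete, here is a minimal sketch of interval bound propagation, one simple technique used by neural network verifiers (a toy illustration under our own assumptions, not the tool used in the thesis): it pushes an input interval through a small ReLU network and checks that the output provably stays inside a safe range.

    import numpy as np

    def interval_affine(lo, hi, W, b):
        """Propagate an input box [lo, hi] through x -> W @ x + b."""
        center, radius = (lo + hi) / 2, (hi - lo) / 2
        c = W @ center + b
        r = np.abs(W) @ radius  # worst-case spread of the box
        return c - r, c + r

    def interval_relu(lo, hi):
        return np.maximum(lo, 0), np.maximum(hi, 0)

    # Toy 2-layer ReLU network (weights are arbitrary, for illustration only).
    W1, b1 = np.array([[1.0, -2.0], [0.5, 1.0]]), np.array([0.1, -0.2])
    W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

    # Property: for all inputs in [-0.1, 0.1]^2, the output stays below 1.0.
    lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
    lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
    lo, hi = interval_affine(lo, hi, W2, b2)
    print("output bounds:", lo, hi)
    print("property holds:", bool(hi[0] < 1.0))  # sound but possibly conservative

If the bound check fails, the property is not necessarily violated: interval arithmetic over-approximates, which is one reason tighter and complete verifiers remain an active research topic.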
The main contributions of this thesis are the following: a characterization of neural networks from a software development perspective, together with a corresponding classification of their faults, errors, and failures; the identification of a potential threat to the use of formal verification, namely the erroneous neural network model problem, which may lead to trusting a formally validated safety property that does not hold in real life; and an experiment implementing formal verification of a neural network in an autonomous driving application that is, to the best of our knowledge, the closest to industrial use. For this application, we chose to work with an ACC (Adaptive Cruise Control) function, an autonomous driving function that performs the longitudinal control of a vehicle; the experiment is conducted with a simulator and a neural network formal verification tool. The other contributions of the thesis are: a theoretical example of the erroneous neural network model problem and a practical example of it in our autonomous driving experiment; a proposal of detection and recovery mechanisms as a solution to the erroneous model problem; an implementation of these mechanisms in our autonomous driving experiment; and a discussion, developed during our experiments, of the difficulties and possible processes involved in implementing formal verification for neural networks.
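As an illustration of what such detection and recovery mechanisms could look like at runtime (a hypothetical sketch of ours, not the thesis implementation): a monitor checks that the current situation lies inside the domain where the safety property was actually verified before trusting the network's ACC command, and otherwise recovers with a conservative fallback.

    # Hypothetical runtime monitor for a neural ACC controller.
    # The verified property is assumed to hold only inside the ranges below;
    # outside them (one way an erroneous model can manifest), we recover
    # with a conservative fallback command.

    VERIFIED_GAP = (5.0, 150.0)     # distance to lead vehicle, meters (assumed)
    VERIFIED_SPEED = (0.0, 36.0)    # ego speed, m/s (assumed)

    def in_range(x, bounds):
        return bounds[0] <= x <= bounds[1]

    def acc_command(gap_m, speed_mps, network_accel):
        """Return an acceleration command, trusting the network only where
        its safety property was formally verified."""
        if in_range(gap_m, VERIFIED_GAP) and in_range(speed_mps, VERIFIED_SPEED):
            return network_accel      # detection passed: inside verified domain
        return -2.0                   # recovery: gentle braking fallback

    print(acc_command(gap_m=40.0, speed_mps=25.0, network_accel=0.8))  # trusted
    print(acc_command(gap_m=2.0,  speed_mps=25.0, network_accel=0.8))  # fallback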
Clinical risk factors and atherosclerotic plaque extent to define risk for major events in patients without obstructive coronary artery disease: the long-term coronary computed tomography angiography CONFIRM registry.
Aims: In patients without obstructive coronary artery disease (CAD), we examined the prognostic value of risk factors and atherosclerotic extent. Methods and results: Patients from the long-term CONFIRM registry without prior CAD and without obstructive (≥50%) stenosis were included. Within the groups of normal coronary computed tomography angiography (CCTA) (N = 1849) and non-obstructive CAD (N = 1698), the prognostic value of traditional clinical risk factors and atherosclerotic extent (segment involvement score, SIS) was assessed with Cox models. Major adverse cardiac events (MACE) were defined as all-cause mortality, non-fatal myocardial infarction, or late revascularization. In total, 3547 patients were included (age 57.9 ± 12.1 years, 57.8% male), experiencing 460 MACE during 5.4 years of follow-up. Age, body mass index, hypertension, and diabetes were the clinical variables associated with increased MACE risk, but the magnitude of risk was higher for CCTA-defined atherosclerotic extent; the adjusted hazard ratio (HR) for SIS >5 was 3.4 (95% confidence interval [CI] 2.3-4.9), while the HRs for diabetes and hypertension were 1.7 (95% CI 1.3-2.2) and 1.4 (95% CI 1.1-1.7), respectively. Exclusion of revascularization as an endpoint did not modify the results. In normal CCTA, the presence of ≥1 traditional risk factors did not worsen prognosis (log-rank P = 0.248), while it did in non-obstructive CAD (log-rank P = 0.025). Adjusted for SIS, hypertension and diabetes predicted MACE risk in non-obstructive CAD, while diabetes did not increase risk in the absence of CAD (P-interaction = 0.004). Conclusion: Among patients without obstructive CAD, the extent of CAD provides more prognostic information for MACE than traditional cardiovascular risk factors. An interaction was observed between risk factors and CAD burden, suggesting synergistic effects of both.
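A minimal sketch of the kind of Cox model behind such adjusted hazard ratios (the column names and input file are hypothetical, and lifelines is just one common choice, not necessarily the registry's software):

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical analysis dataset: one row per patient.
    df = pd.read_csv("confirm_subset.csv")           # assumed file
    df["sis_gt5"] = (df["sis"] > 5).astype(int)      # segment involvement score > 5

    cph = CoxPHFitter()
    cph.fit(
        df[["followup_years", "mace", "age", "bmi",
            "hypertension", "diabetes", "sis_gt5"]],
        duration_col="followup_years",               # time to MACE or censoring
        event_col="mace",                            # 1 = MACE occurred
    )
    cph.print_summary()  # exp(coef) column gives adjusted hazard ratios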
FIREBall-2: advancing TRL while doing proof-of-concept astrophysics on a suborbital platform
Here we discuss advances in UV technology over the last decade, with an emphasis on photon counting, low noise, high efficiency detectors in sub-orbital programs. We focus on the use of innovative UV detectors in a NASA astrophysics balloon telescope, FIREBall-2, which successfully flew in the Fall of 2018. The FIREBall-2 telescope is designed to make observations of distant galaxies to understand more about how they evolve by looking for diffuse hydrogen in the galactic halo. The payload utilizes a 1.0-meter class telescope with an ultraviolet multi-object spectrograph and is a joint collaboration between Caltech, JPL, LAM, CNES, Columbia, the University of Arizona, and NASA. The improved detector technology that was tested on FIREBall-2 can be applied to any UV mission. We discuss the results of the flight and detector performance. We will also discuss the utility of sub-orbital platforms (both balloon payloads and rockets) for testing new technologies and proof-of-concept scientific ideas.
Comment: Submitted to the Proceedings of SPIE, Defense + Commercial Sensing (SI19)
