
    Perceptions of Court Appointed Special Advocates on Volunteer Turnover

    There is a high turnover rate among court-appointed special advocates (CASA) in the United States. The purpose of this qualitative case study was to explore perceptions of the retention of CASA volunteers. Maslach's burnout theory and Greene's theory of resilience provided the framework for the study. A sample of 9 active and 5 inactive CASA volunteers, 1 CASA volunteer recruiter, 3 program supervisors, and 1 administrator were interviewed. The data were organized and coded manually to facilitate auto-coding with qualitative data analysis software. All responses to each question were compiled in one place, allowing for thematic analysis based on the frequency of terms and concepts occurring during the interviews. According to the study findings, lengthy and complicated processes, restrictive laws and regulations, limited impact on outcomes for the children, and unrealistic expectations of the CASA volunteers were the main reasons for the high turnover rate. Support and preparedness were crucial in the CASA volunteers' decision to serve longer. The study findings would be available for decision makers to review and revise policies in order to improve the experience and adjust the expectations imposed on CASA volunteers through recruitment and training messaging. Increasing the CASA volunteer retention rate would change the trajectory of more children in foster care by improving their chances of achieving positive outcomes.
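    The frequency-based thematic analysis described above can be sketched in a few lines of Python; the interview excerpts and stop-word list below are invented for illustration and are not the study's data.

```python
from collections import Counter
import re

# Toy interview excerpts (invented for illustration, not actual study data).
RESPONSES = [
    "the process was lengthy and complicated",
    "a lengthy process with little impact for the children",
    "expectations were unrealistic and the process complicated",
]

STOPWORDS = frozenset({"the", "a", "was", "and", "with", "for", "were"})

def term_frequencies(responses, stopwords=STOPWORDS):
    """Count how often content terms recur across all compiled responses."""
    words = []
    for r in responses:
        words += [w for w in re.findall(r"[a-z]+", r.lower()) if w not in stopwords]
    return Counter(words)
```

    Sorting the resulting counter by frequency surfaces candidate themes ("process", "lengthy", "complicated") in the way the abstract describes.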

    Implementation and Evaluation of an Indexing Model of Teaching and Learning Resources

    With the advent of teaching and learning resources (TLR), indexing becomes essential to ensure their identification, adaptation, reuse, and sharing. Several indexing models for TLR have emerged. The problem is to settle on a coherent model that ensures interoperability between stakeholders in the learning domain (designers, developers, teachers, resource centres) and between the systems that manage these resources. There are several standards for TLR indexing (DUBLIN CORE, LOM, SCORM, IMS-LD, etc.), as well as models that represent the semantic context of the content, and combinations of both. In our previous contributions we designed a model named MIMTLR (El Guemmat et al., 2013a), a multi-indexing model of teaching and learning resources that aims to enhance the limited indexing of the LOM standard with ontology-based semantic content indexing. The purpose of this paper is to implement and evaluate MIMTLR and to validate it in a programming language, to ensure that it best meets the constraints expected of a powerful TLR indexing model. We present a simulation and the advantages of this model, which meets needs identified in the development of teaching and learning resources and will be useful to those involved in information and communication technology for teaching and learning (ICTTL), especially e-learning.
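    The combination of descriptive (LOM-style) metadata with ontology-based semantic indexing can be sketched as follows; the field names, toy ontology, and sample record are illustrative assumptions, not the MIMTLR schema itself.

```python
# Minimal sketch of multi-level indexing: descriptive metadata plus
# ontology-expanded content concepts (all names here are assumptions).
LOM_RECORD = {                      # LOM-like descriptive fields
    "general.title": "Intro to sorting algorithms",
    "general.language": "en",
    "educational.context": "higher education",
}

ONTOLOGY = {                        # toy domain ontology: concept -> broader concept
    "quicksort": "sorting",
    "mergesort": "sorting",
    "sorting": "algorithms",
}

def semantic_index(record, content_concepts, ontology):
    """Attach content concepts, expanded with their ontology ancestors, to a record."""
    expanded = set(content_concepts)
    for c in content_concepts:
        parent = ontology.get(c)
        while parent:
            expanded.add(parent)
            parent = ontology.get(parent)
    return {**record, "semantic.concepts": sorted(expanded)}
```

    A resource tagged only with "quicksort" thus also becomes retrievable under "sorting" and "algorithms", which is the interoperability gain semantic indexing adds over flat LOM fields.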

    Contribution to the specification and verification of component-based software: enriching the Kmelia data language and verifying contracts

    With Model Driven Engineering, models are at the heart of software development. These models evolve through transformations. In this thesis our interest was the validation of these model transformations by testing, and more precisely the test oracles. We propose two approaches to assist the tester in creating these oracles. With the first approach this assistance is passive: we provide the tester with a new oracle function. The test oracles created with this new oracle function control only part of the model produced by the transformation under test. We defined the notion of partial verdict, described the situations where having a partial verdict is beneficial for the tester, and showed how to test a transformation in this context. We developed a tool implementing this proposal and ran experiments with it. With the second approach, we provide more active assistance concerning the quality of test oracles. We study the quality of a set of model transformation test oracles, considering that the quality of a set of oracles is linked to its ability to detect faults in the transformation under test. We show the limits of mutation analysis, which is used for this purpose, then propose a new approach that corrects some of its drawbacks. We measure the coverage of the output meta-model by the set of oracles under consideration. Our approach does not depend on the language used to implement the transformation under test, and it provides the tester with hints on how to improve her oracles. We defined a process to evaluate meta-model coverage and qualify test oracles, and developed a tool implementing our approach in order to validate it through experiments.

    The growing use of software components and services in various sectors (telecommunications, transport, energy, finance, health, etc.) demands rigorous means (models, methods, tools, etc.) to control their production and evaluate their quality. In particular, it is crucial to be able to guarantee their correct behaviour ahead of deployment, during the modular development of software systems. Kmelia is a multi-service component model developed with the aim of building software components and assemblies that are proven correct. This thesis pursues three main objectives. The first is to enrich the expressive power of the Kmelia model with a data language, in order to satisfy the dual needs of specification and verification. The second is to elaborate a development framework founded on the notion of multi-level contracts. The value of such contracts is to control the progressive construction of component-based systems and to automate their verification process. In this thesis we focus on the verification of functional contracts using the B method. The third objective is to implement our approach in the COSTO/Kmelia platform. We built a prototype that connects COSTO to the various tools associated with the B method. This prototype constructs B machines from Kmelia specifications according to the properties to be verified. We show that proving the generated B specifications guarantees the consistency of the original Kmelia specifications. Illustrations based on the CoCoME example support our proposals.
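    The idea of an executable functional contract attached to an operation can be sketched generically in Python; this is not Kmelia or B syntax, and the `withdraw` operation is a hypothetical example used only to illustrate pre/postcondition checking.

```python
# Minimal sketch of a functional contract check (assumed, generic Python):
# a precondition is asserted before the operation runs, a postcondition after.
def contract(pre, post):
    """Wrap a function with executable pre- and postcondition checks."""
    def wrap(f):
        def inner(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda balance, amount: 0 < amount <= balance,
          post=lambda result, balance, amount: result == balance - amount)
def withdraw(balance, amount):
    # Hypothetical service operation; the contract guards its use.
    return balance - amount
```

    Calling `withdraw(10, 50)` fails the precondition, which is the runtime analogue of a contract violation that proof-based approaches such as the B method would detect statically.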

    A systemic and organisational method for Preliminary Risk Analysis based on a generic ontology

    Preliminary Risk Analysis (PRA) was developed in the early 1960s in the aeronautics and military domains. It is now the cornerstone of the Safety Management System (SMS) in many industries. Almost five decades on, however, PRA practice still suffers from poor understanding. A survey conducted by INRS among 220 dependability experts revealed that 81% of them report using PRA, yet only 9% consider that they have mastered it. This surprising finding is explained by the many methodological, terminological, technical, and organisational difficulties we were able to identify. Moreover, PRA is still not the subject of any standardisation effort, which leaves the door open to all kinds of divergence. To address the difficulties observed, we propose the Preliminary Risk Management (MPR) method, based on a generic accident process that channels the mechanisms for capitalising and exploiting knowledge about accident scenarios (causality, entities, situations, events, etc.). The MPR method attaches to the Safety Management System at an essential anchor point: the management of technical and organisational processes. Our goal is threefold: first, to present the 10 major difficulties we observed in risk management; second, to present the MPR method we propose to resolve them, with the aim of effectively exploiting the exchange of risk-management know-how across different systems and even different domains; and finally, to give a concise overview of the SIGAR tool (Système Informatique Générique d'Analyse de Risque, a generic risk-analysis software system) dedicated to this method. Keywords: PRA; risk management; ontology; systemic approach; risk analysis.
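    A generic accident-scenario record of the kind such an ontology capitalises on might be sketched as follows; the field names and example values are assumptions for illustration, not the MPR ontology itself.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a generic accident-scenario record in the spirit of
# the causality chain (entity, event, situation, consequences) the abstract
# mentions; all field names are assumptions.
@dataclass
class AccidentScenario:
    hazardous_entity: str
    initiating_event: str
    dangerous_situation: str
    consequences: list = field(default_factory=list)

    def summary(self):
        """One-line causal chain, useful when comparing scenarios across domains."""
        return f"{self.initiating_event} -> {self.dangerous_situation}"
```

    Storing scenarios in such a uniform shape is what makes knowledge captured in one domain reusable in another, which is the stated aim of the MPR method.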

    Mechanisms of Nuclear Export in Cancer and Resistance to Chemotherapy

    Tumour suppressor proteins, such as p53, BRCA1, and ABC, play key roles in preventing the development of a malignant phenotype, but those that function as transcriptional regulators need to enter the nucleus in order to function. The export of proteins between the nucleus and cytoplasm is complex. It occurs through nuclear pores, and exported proteins need a nuclear export signal (NES) to bind to nuclear exportin proteins, including CRM1 (Chromosomal Region Maintenance protein 1); the energy for this process is provided by the RanGTP/RanGDP gradient. Due to the loss of DNA repair and cell cycle checkpoints, drug resistance is a major problem in cancer treatment, and often an initially successful treatment will fail because resistance develops. An important mechanism underlying resistance is nuclear export, and a number of strategies that prevent nuclear export may reverse resistance. Examples include inhibitors of CRM1, antibodies to the nuclear export signal, and alteration of nuclear pore structure. Each of these is considered in this review.

    Machine learning for internet of things classification using network traffic parameters

    With the growth of internet of things (IoT) smart objects, managing these objects becomes a very important challenge: knowing the total number of interconnected objects on a heterogeneous network and whether they are functioning correctly. The use of IoT objects can bring advantages in terms of comfort, efficiency, and cost. In this context, the identification of IoT objects is the first step in helping owners manage them and ensure the security of their IoT environments, such as smart homes, smart buildings, or smart cities. In this paper, to meet the need for IoT object identification, we deployed an intelligent environment to collect network traffic traces from a diverse list of IoT devices under real-time conditions. In the exploratory phase of this traffic analysis, we developed learning models capable of identifying and classifying the connected IoT objects in our environment. We applied six supervised machine learning algorithms: support vector machine, decision tree (DT), random forest (RF), k-nearest neighbors, naive Bayes, and the stochastic gradient descent classifier. The experimental results indicate that the DT and RF models proved the most effective, demonstrating an accuracy of 97.72% on network traffic data, and in particular on the information contained in network protocols. Most IoT objects are identified and classified with an accuracy of 99.21%.
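    One of the six classifiers mentioned, k-nearest neighbors, can be sketched on traffic-style features in pure Python; the feature vectors, device classes, and values below are invented for illustration and are not the paper's dataset.

```python
from collections import Counter
import math

# Hypothetical per-device traffic features: (mean packet size in bytes,
# packets per minute, distinct destination ports).  Labels are illustrative
# IoT device classes, not the paper's ground truth.
TRAIN = [
    ((60.0, 4.0, 1.0), "sensor"),
    ((64.0, 5.0, 1.0), "sensor"),
    ((900.0, 300.0, 3.0), "camera"),
    ((880.0, 280.0, 4.0), "camera"),
    ((200.0, 30.0, 2.0), "plug"),
    ((210.0, 25.0, 2.0), "plug"),
]

def knn_classify(x, train=TRAIN, k=3):
    """Classify a traffic-feature vector by majority vote of its k nearest neighbours."""
    nearest = sorted(train, key=lambda t: math.dist(x, t[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

    In practice the features would be scaled before computing distances; tree ensembles such as the paper's best-performing DT and RF models avoid that sensitivity to feature scale.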

    Automatic classification of breast tissue density

    Breast cancer is an international public health concern. Medical imaging is one of the key elements in diagnosis; however, the quality of the interpretation of mammograms remains variable. One of the important characteristics of breast anatomy and physiology is breast tissue density, which matters for two main reasons: first, increased breast density is associated with decreased mammographic sensitivity for the detection of breast cancer (Schetter, 2014); second, breast density is one of the strongest known risk factors for breast cancer (Prevrhal et al., 2002; Boyd et al., 1995). For these reasons, automatic tissue density classification is an important step in diagnosis. Moreover, the BI-RADS (Breast Imaging-Reporting And Data System) classification system identifies four levels of breast density, whereas the mini-MIAS (Mammographic Image Analysis Society) database is divided into three density categories. In this article we describe a method for overall breast density classification using artificial neural networks. This approach has the advantages of not requiring a preprocessing step and of adapting to different mammography databases. The validity of our method is demonstrated using 240 mammograms from the DDSM database and 180 mammograms from the mini-MIAS database, with correct classification rates of 87.50% and 86.11%, respectively.
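    The core of a neural density classifier can be illustrated with a minimal single-neuron (perceptron) trainer in pure Python; the features, labels, and two-class simplification below are invented for illustration and do not reproduce the paper's networks or the DDSM/mini-MIAS data.

```python
# Toy features per mammogram region: (mean grey level, local contrast), in [0, 1].
# Labels: 1 = dense tissue, 0 = fatty tissue.  Purely illustrative values.
DATA = [
    ((0.80, 0.30), 1), ((0.75, 0.35), 1), ((0.90, 0.20), 1),
    ((0.20, 0.60), 0), ((0.30, 0.55), 0), ((0.25, 0.70), 0),
]

def train_perceptron(data, epochs=20, lr=0.1):
    """Fit a single linear neuron with the classic perceptron update rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                     # 0 when correct; +/-1 otherwise
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(x, w, b):
    """Threshold the neuron's weighted sum into a density class."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

    A real system would use a multi-layer network over many image-derived features and the three or four density classes the databases define; this sketch only shows the weighted-sum-and-threshold mechanism such networks are built from.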