Stakeholder legitimacy in urban development: an ethics of the conduct of urban projects
Urban projects - also referred to as urban development projects or development operations - are largely dependent on the increased involvement of a multitude of stakeholders, whose assets for, and demands on, the project may be complementary and/or contradictory. In this context, how much priority should be given to each stakeholder? To answer this question, project leaders are called upon to assess the legitimacy of these stakeholders. To examine this concept, the study draws on texts from theoretical corpora other than the classic corpus of urban planning and urban studies: stakeholder theory in the fields of organizational management and business ethics; the model of the "cités", anchored in pragmatic sociology; and theories of justice.
It then confronts the lessons of these theories with the empirical analysis of a virtual practical case created for the purposes of this research. The back-and-forth between theory and concrete situations, which often brings out tensions between competing legitimacies, makes it possible to refine the ethical principles underlying these legitimacies, following a method inspired by reflective equilibrium. The analysis shows that the legitimacy of a stakeholder results from reasoning based on either an efficiency norm or an ethical norm, ideally compatible within an ethics of responsibility. In the first (efficiency) approach, the project leader attends mainly to stakeholders' capacities for action on the project, including representativeness, legal capacities (titles, rights, contractual links), knowledge and know-how (expertise, skills), creativity, material and financial resources, and the ability to engage other stakeholders in the project. In the second (ethical) approach, the project leader attends mainly to stakeholders' legitimate demands on the project: demands for overall well-being, capabilities (autonomy, freedoms, opportunities for individuals, socio-economic benefits), community ties (values and traditions), the well-being of others, and environmental claims. The research then brings out six ethical principles for assessing legitimacy: overall utility, respect for freedom, equity, solidarity (and tolerance), care, and environmental responsibility.
While theories of planning and urban studies - one of whose main objects is the critical analysis of the processes by which cities are made - have paid little attention to the normative principles underlying these processes, this research proposes a conceptual framework useful to researchers and professionals for judging, particularly in ethical terms, decisions concerning the consideration of stakeholders in urban projects. In doing so, it helps lay the foundations for an ethics of the conduct of urban projects.
Rationale and design of a multicenter Chronic Kidney Disease (CKD) and at-risk for CKD electronic health records-based registry: CURE-CKD.
BACKGROUND: Chronic kidney disease (CKD) is a global public health problem, exhibiting sharp increases in incidence, prevalence, and attributable morbidity and mortality. There is a critical need to better understand the demographics, clinical characteristics, and key risk factors for CKD, and to develop platforms for testing novel interventions to improve modifiable risk factors, particularly for CKD patients with a rapid decline in kidney function.
METHODS: We describe a novel collaboration between two large healthcare systems (Providence St. Joseph Health and University of California, Los Angeles Health) supported by leadership from both institutions, which was created to develop harmonized cohorts of patients with CKD or those at increased risk for CKD (hypertension/HTN, diabetes/DM, pre-diabetes) from electronic health record data.
RESULTS: The combined repository of candidate records included more than 3.3 million patients with at least a single qualifying measure for CKD and/or at-risk for CKD. The CURE-CKD registry includes over 2.6 million patients with and/or at-risk for CKD identified by stricter guideline-based criteria using a combination of administrative encounter codes, physical examinations, laboratory values, and medication use. Notably, data on race/ethnicity and geography will, in part, enable robust analyses to study traditionally disadvantaged or marginalized patients not typically included in clinical trials.
DISCUSSION: The CURE-CKD project is a unique multidisciplinary collaboration between nephrologists, endocrinologists, primary care physicians with health services research skills, health economists, and those with expertise in statistics, bioinformatics, and machine learning. The CURE-CKD registry uses curated observations from real-world settings across two large healthcare systems and has great potential to provide important contributions for healthcare and for improving clinical outcomes in patients with and at-risk for CKD.
An automated lung segmentation approach using bidirectional chain codes to improve nodule detection accuracy
Computer-aided detection and diagnosis (CAD) has been widely investigated to improve radiologists' diagnostic accuracy in detecting and characterizing lung disease, as well as to assist with the processing of increasingly sizable volumes of imaging. Lung segmentation is a requisite preprocessing step for most CAD schemes. This paper proposes a parameter-free lung segmentation algorithm with the aim of improving lung nodule detection accuracy, focusing on juxtapleural nodules. A bidirectional chain coding method combined with a support vector machine (SVM) classifier is used to selectively smooth the lung border while minimizing the over-segmentation of adjacent regions. This automated method was tested on 233 computed tomography (CT) studies from the Lung Imaging Database Consortium (LIDC), representing 403 juxtapleural nodules. The approach obtained a 92.6% re-inclusion rate. Segmentation accuracy was further validated on 10 randomly selected CT series, finding a 0.3% average over-segmentation ratio and a 2.4% under-segmentation rate when compared to manually segmented reference standards produced by an expert.
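The bidirectional chain coding used here builds on classical Freeman chain codes, which encode a contour as a sequence of discrete directions between successive boundary pixels. A minimal illustrative sketch follows; the function name, direction table, and toy contour are assumptions for illustration, not taken from the paper:

```python
# Freeman 8-directional chain code: encode an 8-connected pixel contour
# as direction indices between successive boundary points.
# Directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE (x right, y up).
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(contour):
    """Return the Freeman chain code for a list of 8-connected (x, y) points."""
    code = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        step = (x1 - x0, y1 - y0)
        if step not in DIRECTIONS:
            raise ValueError(f"points are not 8-connected: {step}")
        code.append(DIRECTIONS[step])
    return code

# A unit square traced counter-clockwise from the origin.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(chain_code(square))  # -> [0, 2, 4, 6]
```

Sharp pleural indentations (where juxtapleural nodules are cut off) show up as abrupt direction changes in such a code, which is the kind of local feature a classifier such as an SVM can use to decide where the border should be smoothed.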
Motivating the additional use of external validity: examining transportability in a model of glioblastoma multiforme.
Despite the growing ubiquity of data in the medical domain, it remains difficult to apply results from experimental and observational studies to additional populations suffering from the same disease. Many methods are employed for testing internal validity, yet limited effort is made in testing generalizability, or external validity. The development of disease models often suffers from this lack of validity testing, and trained models frequently perform worse on different populations, rendering them ineffective. In this work, we discuss the use of transportability theory, an examination based on causal graphical models, as a mechanism for determining which elements of a data resource can be shared or moved between a source and a target population. A simplified Bayesian model of glioblastoma multiforme serves as the example for discussion and preliminary analysis. Examination across data-collection hospitals in the TCGA dataset demonstrated improved prediction by a transported model over a baseline model.
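The core move in transportability theory can be illustrated with a toy calculation: when a causal analysis licenses the assumption that a conditional relationship P(y|z) is invariant across populations while the covariate distribution P(z) differs, the source-learned conditional can be reweighted by the target's covariate distribution. The numbers and the single binary covariate below are invented for illustration and have nothing to do with the paper's glioblastoma model:

```python
# Transport a source-learned conditional P(y=1 | z) to a target population,
# assuming (as a transportability analysis might license) that P(y|z) is
# invariant and only the covariate distribution P(z) differs.
p_y_given_z = {0: 0.9, 1: 0.2}   # hypothetical source-learned conditional
p_z_source = {0: 0.7, 1: 0.3}    # covariate mix at the source hospital
p_z_target = {0: 0.3, 1: 0.7}    # different case mix at the target hospital

def marginal(p_y_given_z, p_z):
    """Marginalize the invariant conditional over a population's P(z)."""
    return sum(p_y_given_z[z] * p_z[z] for z in p_z)

print(round(marginal(p_y_given_z, p_z_source), 2))  # -> 0.69
print(round(marginal(p_y_given_z, p_z_target), 2))  # -> 0.41
```

A baseline model that simply reused the source marginal (0.69) at the target hospital would be badly calibrated; the transported estimate (0.41) accounts for the shifted case mix.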
Asthma clustering methods: a literature-informed application to the children’s health study data
OBJECTIVE: The heterogeneity of asthma has inspired widespread application of statistical clustering algorithms to a variety of datasets for the identification of potentially clinically meaningful phenotypes. There has not been a standardized data analysis approach for asthma clustering, which can affect reproducibility and clinical translation of results. Our objective was to identify common and effective data analysis practices in the asthma clustering literature and apply them to data from a Southern California population-based cohort of schoolchildren with asthma.
METHODS: As of January 1, 2020, we reviewed key statistical elements of 77 asthma clustering studies. Guided by the literature, we used 12 input variables and three clustering methods (hierarchical clustering, k-medoids, and latent class analysis) to identify clusters in 598 schoolchildren with asthma from the Southern California Children's Health Study (CHS).
RESULTS: Clusters of children identified by latent class analysis were characterized by exhaled nitric oxide, FEV1/FVC, FEV1 percent predicted, asthma control, and allergy score, and were predictive of control at two-year follow-up. Clusters from the other two methods were less clinically remarkable, primarily differentiated by sex and race/ethnicity, and less predictive of asthma control over time.
CONCLUSION: Upon review of the asthma phenotyping literature, common approaches to data clustering emerged. When applying these elements to the Children's Health Study data, latent class analysis clusters (represented by exhaled nitric oxide and spirometry measures) had clinical relevance over time.
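Of the three clustering methods compared, k-medoids is the simplest to sketch without a library. The following is a minimal, PAM-style illustration on toy one-dimensional data; the function, data, and distance are assumptions for illustration and are not the study's 12-variable analysis:

```python
import random

def k_medoids(points, k, dist, n_iter=100, seed=0):
    """Minimal PAM-style k-medoids: alternate assignment and medoid update."""
    rng = random.Random(seed)
    medoids = rng.sample(points, k)
    for _ in range(n_iter):
        # Assign each point to its nearest medoid.
        groups = {m: [] for m in medoids}
        for p in points:
            groups[min(medoids, key=lambda m: dist(p, m))].append(p)
        # Move each medoid to the member minimizing total within-cluster distance.
        new_medoids = [min(members, key=lambda c: sum(dist(c, p) for p in members))
                       for members in groups.values()]
        if set(new_medoids) == set(medoids):
            break  # converged
        medoids = new_medoids
    return groups

# Two well-separated 1-D groups; distance is absolute difference.
data = [1.0, 1.2, 0.8, 10.0, 10.3, 9.7]
clusters = k_medoids(data, k=2, dist=lambda a, b: abs(a - b))
print(sorted(sorted(c) for c in clusters.values()))
# -> [[0.8, 1.0, 1.2], [9.7, 10.0, 10.3]]
```

Because medoids must be actual data points and any distance function can be supplied, k-medoids is more robust to outliers and mixed variable types than k-means, which is one reason it appears in the asthma phenotyping literature.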
Data model for personalized patient health guidelines: an exploratory study.
Practitioner guidelines simultaneously provide broad overviews and in-depth details of disease. Written for experts, they are difficult for patients to understand, yet patients often use these guidelines as a source of information to help them to learn about their health. Using practitioner guidelines along with patient information needs and preferences, we created a method to design an information model for providing patients access to their personal health information, linked to individualized, relevant supporting information from guidelines within a patient portal. This model consists of twelve classes of concepts. We manually reviewed and annotated medical records to demonstrate the validity of our model. Each class of the model was found within at least one patient's record, and seven classes of concepts appeared in over half of the patients' records annotated. These annotations show that the model produced by the method can be used to determine what guideline information is relevant to an individual patient, based on concepts in their health information
A data-driven approach for quality assessment of radiologic interpretations
Given the increasing emphasis on delivering high-quality, cost-efficient healthcare, improved methodologies are needed to measure the accuracy and utility of ordered diagnostic examinations in achieving the appropriate diagnosis. Here, we present a data-driven approach for performing automated quality assessment of radiologic interpretations using other clinical information (e.g., pathology) as a reference standard for individual radiologists, subspecialty sections, imaging modalities, and entire departments. Downstream diagnostic conclusions from the electronic medical record are utilized as "truth" to which upstream diagnoses generated by radiology are compared. The described system automatically extracts and compares patient medical data to characterize concordance between clinical sources. Initial results are presented in the context of breast imaging, matching 18,101 radiologic interpretations with 301 pathology diagnoses and achieving a precision and recall of 84% and 92%, respectively. The presented data-driven method highlights the challenges of integrating multiple data sources and the application of information extraction tools to facilitate healthcare quality improvement.
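The precision and recall reported above are the standard set-based definitions, computed over radiology calls against the pathology "truth". A minimal sketch follows; the case IDs and counts are invented for illustration:

```python
def precision_recall(predicted, truth):
    """Precision and recall of a predicted positive set vs. a true positive set."""
    tp = len(predicted & truth)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

# Hypothetical case IDs: radiology flagged 1-4 as concordant findings;
# downstream pathology confirmed 2-5.
radiology_positive = {1, 2, 3, 4}
pathology_positive = {2, 3, 4, 5}
p, r = precision_recall(radiology_positive, pathology_positive)
print(p, r)  # -> 0.75 0.75
```

In the paper's setting, precision penalizes radiology interpretations not borne out by pathology, while recall penalizes pathology-confirmed diagnoses that the upstream interpretation missed.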
Using Sequential Decision Making to Improve Lung Cancer Screening Performance
Consumers’ Patient Portal Preferences and Health Literacy: A Survey Using Crowdsourcing
BACKGROUND: eHealth apps have the potential to meet the information needs of patient populations and improve health literacy rates. However, little work has been done to document the perceived usability of portals and health literacy on specific topics.
OBJECTIVE: Our aim was to establish a baseline of lung cancer health literacy and perceived portal usability.
METHODS: A survey based on previously validated instruments was used to assess a baseline of patient portal usability and health literacy within the domain of lung cancer. The survey was distributed via Amazon's Mechanical Turk to 500 participants.
RESULTS: Our results show differences in preferences and literacy by demographic cohort, with a trend of chronically ill patients having a more positive reception of patient portals and a higher health literacy rate for lung cancer knowledge (P<.05).
CONCLUSIONS: This article provides a baseline of usability needs and health literacy that suggests that chronically ill patients have a greater preference for patient portals and a higher level of health literacy within the domain of lung cancer.