
    Myopia during emergency improvisation: lessons from a catastrophic wildfire

    Purpose: The purpose of this paper is to explore how a number of processes combined to create the micro-level strategies and procedures that resulted in the most lethal forest fire in Portugal's history, remembered as the EN236-1 road tragedy in the fire of Pedrógão Grande. Design/methodology/approach: Using an inductive theory development approach, the authors consider how the urgency and scale of perceived danger, coupled with failures of system-wide communication, led fire teams to improvise repeatedly. Findings: The paper shows how the collapse of structure led teams to rely only on local information, prompting acts of improvisational myopia, in the particular form of corrosive myopia, and how a form of incidental improvisation led to catastrophic results. Practical implications: The research offers insights into the dangers of improvisation arising from corrosive myopia and identifies ways to minimize them through the development of improvisation practices that allow new patterns of action to be created. The implications for managing surprise through improvisation extend to risk contexts beyond wildfires. Originality/value: The paper stands out for showing the impact of improvisational myopia, especially in its corrosive form, which contrasts sharply with the central role that previous research on improvisation assigns to attention to the local context. By exploring the effects of incidental improvisation, it also departs from the agentic conception of improvisation widely discussed in the improvisation literature.

    How does regulation affect innovation and technology change in the water sector in England and Wales?

    This thesis examines the role of regulation in technological change in the water sector in England and Wales. Based on a combination of Social-Ecological Systems (SES) theory and the Multi-Level Perspective on technological transitions, a Comparative Information-Graded Approach (CIGA) is developed in Part 1. As part of the CIGA, a series of tools is used to characterize and evaluate the relationship between regulation and technology. In Part 2, the CIGA is applied to characterize the relationship between regulation and water innovation in England and Wales, based on official publications, Environment Agency data, and interviews. In particular, seven mechanisms are identified by which regulation affects innovation, and five issues of trust that interact negatively with innovation. As trust is established through these mechanisms, opportunities for innovation are at times sacrificed. Part 3 develops and analyses a set of models based on the findings of Part 2. Dynamical-systems and fictitious-play analysis of a trustee game model of regulation exhibits cyclicality, providing an explanation for the observed regulatory cycles that create an inconsistent drive for innovation. Trustee and coordination models are evaluated in Chapter 7, highlighting how most regulatory tools struggle with the issue of technological lock-in. Chapter 8 develops a model of two innovators and a public-good water technology over time, showing the role foresight plays in this context as well as the disincentive to develop it. Taken together, the CIGA characterization and the modelling work provide a series of recommendations and insights into how the system of regulation affects technological change.
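    The cyclical dynamics mentioned above can be illustrated with a fictitious-play simulation. The sketch below is a toy 2x2 regulator-firm inspection-style game with hypothetical payoffs, not the trustee game specified in the thesis; it only shows how best responses to empirical beliefs can keep cycling when the game has no pure-strategy equilibrium.

```python
# A minimal sketch (illustrative payoffs, not the thesis model): fictitious play
# in a 2x2 regulator-firm game with no pure-strategy equilibrium.
import numpy as np

# Rows: regulator action (0 = Trust, 1 = Audit); columns: firm action
# (0 = Comply/innovate, 1 = Exploit). Payoffs are hypothetical.
regulator_payoff = np.array([[3.0, -2.0],
                             [1.0,  2.0]])
firm_payoff = np.array([[2.0, 4.0],
                        [1.0, -1.0]])

reg_counts = np.array([1.0, 1.0])   # firm's belief counts over regulator actions
firm_counts = np.array([1.0, 1.0])  # regulator's belief counts over firm actions

history = []
for t in range(30):
    firm_belief = firm_counts / firm_counts.sum()
    reg_belief = reg_counts / reg_counts.sum()
    # Each side best-responds to the empirical frequency of the other's past play.
    reg_action = int(np.argmax(regulator_payoff @ firm_belief))
    firm_action = int(np.argmax(reg_belief @ firm_payoff))
    reg_counts[reg_action] += 1
    firm_counts[firm_action] += 1
    history.append((reg_action, firm_action))

print(history)  # action pairs alternate in lengthening cycles rather than settling
```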

    Wonder Vision-A Hybrid Way-finding System to assist people with Visual Impairment

    We use multi-sensory information to find our way around environments. Among the senses, vision plays a crucial part in way-finding tasks such as perceiving landmarks and layouts. People with impaired vision may find it difficult to move around unfamiliar environments because they cannot use their eyesight to capture critical information. Limited vision affects how people interact with their environment, especially when navigating, and individuals with varying degrees of vision require different levels of way-finding aid: blind people rely heavily on white canes, whereas people with low vision may choose magnifiers for reading signs or GPS mobile applications to acquire knowledge of a place before arrival. The purpose of this study is to investigate the in-situ challenges of way-finding for persons with visual impairment. Using the methodologies of Research through Design (RTD) and User-centered Design (UCD), I conducted online user research and created a series of iterative prototypes leading to a final one: Wonder Vision, a hybrid way-finding system that combines Augmented Reality (AR) and a Voice User Interface (VUI) to assist people with visual impairment. A descriptive evaluation suggests Wonder Vision is a possible solution for helping people with visual impairment find their way toward their goals.

    A Systematic Review of Artificial Intelligence in Assistive Technology for People with Visual Impairment

    Recent advances in artificial intelligence (AI) have led to numerous applications that use data to significantly enhance the quality of life of people with visual impairment, and AI technology has the potential to improve their lives further. However, accurately assessing the state of development of visual aids remains challenging. As an AI model is trained on larger and more diverse datasets, its performance becomes more robust and applicable to a wider variety of scenarios, and in the field of visual impairment, deep learning techniques have emerged as a solution to challenges previously associated with AI models. In this article, we provide a comprehensive and up-to-date review of recent research on AI-powered visual aids tailored to the requirements of individuals with visual impairment. We adopt the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology, gathering and appraising literature from diverse databases against precise inclusion and exclusion criteria. The search yielded 322 articles, of which 12 studies were deemed suitable for inclusion in the final analysis. The study's primary objective is to investigate the application of AI techniques to intelligent devices that aid visually impaired individuals in their daily lives. We identify a number of obstacles that researchers and developers of visual impairment applications may encounter, and we discuss opportunities for future research and advances in AI-driven visual aids. This review seeks to provide insight into the advancements, possibilities, and challenges in the development and implementation of AI technology for people with visual impairment. By examining the current state of the field and identifying areas for future research, we aim to contribute to the ongoing work of improving the lives of visually impaired individuals through AI-powered visual aids.

    Strategies For Improving Epistasis Detection And Replication

    Genome-wide association studies (GWAS) have been extensively critiqued for their perceived inability to adequately elucidate the genetic underpinnings of complex disease. Of particular concern is “missing heritability,” or the difference between the total estimated heritability of a phenotype and that explained by GWAS-identified loci. There are numerous proposed explanations for this missing heritability, but a frequently ignored and potentially vastly informative alternative explanation is the ubiquity of epistasis underlying complex phenotypes. Given our understanding of how biomolecules interact in networks and pathways, it is not unreasonable to conclude that the effect of variation at individual genetic loci may non-additively depend on and should be analyzed in the context of their interacting partners. It has been recognized for over a century that deviation from expected Mendelian proportions can be explained by the interaction of multiple loci, and the epistatic underpinnings of phenotypes in model organisms have been extensively experimentally quantified. Therefore, the dearth of inspiring single locus GWAS hits for complex human phenotypes (and the inconsistent replication of these between populations) should not be surprising, as one might expect the joint effect of multiple perturbations to interacting partners within a functional biological module to be more important than individual main effects. Current methods for analyzing data from GWAS are not well-equipped to detect epistasis or replicate significant interactions. The multiple testing burden associated with testing each pairwise interaction quickly becomes nearly insurmountable with increasing numbers of loci. Statistical and machine learning approaches that have worked well for other types of high-dimensional data are appealing and may be useful for detecting epistasis, but potentially require tweaks to function appropriately. Biological knowledge may also be leveraged to guide the search for epistasis candidates, but requires context-appropriate application (as, for example, two loci with significant main effects may not have a significant interaction, and vice versa). Rather than renouncing GWAS and the wealth of associated data that has been accumulated as a failure, I propose the development of new techniques and incorporation of diverse data sources to analyze GWAS data in an epistasis-centric framework
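    To make the multiple-testing burden concrete, the sketch below (synthetic data, not from the dissertation) counts the pairwise tests implied by a GWAS-scale SNP panel and runs a single logistic-regression interaction test of the kind an exhaustive pairwise scan would have to repeat billions of times.

```python
# A minimal sketch: the combinatorics of an exhaustive pairwise epistasis scan,
# plus one logistic-regression interaction test on simulated genotypes.
import numpy as np
import statsmodels.api as sm

n_snps, n_samples = 500_000, 2_000
n_pairs = n_snps * (n_snps - 1) // 2
bonferroni_alpha = 0.05 / n_pairs
print(f"{n_pairs:,} pairwise tests; Bonferroni threshold ~ {bonferroni_alpha:.2e}")

# Simulated genotypes coded as minor-allele counts (0/1/2) with a purely
# epistatic effect on case/control status.
rng = np.random.default_rng(0)
snp_a = rng.integers(0, 3, n_samples)
snp_b = rng.integers(0, 3, n_samples)
logit = 0.4 * (snp_a * snp_b) - 1.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Test the interaction term while adjusting for both main effects.
X = sm.add_constant(np.column_stack([snp_a, snp_b, snp_a * snp_b]))
fit = sm.Logit(y, X).fit(disp=0)
print(f"interaction p-value: {fit.pvalues[-1]:.3g}")
```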

    Learning and Governance in Inter-Firm Relations

    This paper connects the theory of learning with the theory of governance in the context of inter-firm relations. It recognizes fundamental criticisms of transaction cost economics (TCE) but preserves elements of that theory. The theory of governance used here incorporates learning and trust. The paper identifies two kinds of relational risk: hold-up and spillover. For the governance of relations, i.e. the control of relational risk, it develops a toolbox of instruments that includes trust alongside instruments derived and adapted from TCE. These instruments are geared to problems that are specific to learning in interaction between firms, and they include additional roles for go-betweens. Keywords: transaction cost economics; trust; inter-organizational learning

    Reducing myopic behavior in FMS control: a semi-heterarchical approach based on simulation-optimization

    Heterarchical control of flexible manufacturing systems (FMS) localizes control capabilities in decisional entities (DEs), resulting in highly reactive, low-complexity control architectures. However, these architectures exhibit myopic behavior: because each DE has limited visibility of the other DEs and their behavior, it is difficult to guarantee a minimum level of global performance. This dissertation focuses on reducing that myopic behavior. First, a definition and a typology of myopic behavior in FMS are proposed. Myopic behavior is then addressed explicitly so that global performance can be improved: we propose a semi-heterarchical architecture in which a global decisional entity (GDE) handles different kinds of myopic decisions using simulation-based optimization (SbO). Different optimization techniques can be used so that myopic decisions are handled individually, favoring the modularity of the GDE, and the SbO components can adopt different roles, making it possible to reduce myopic behavior in several ways. Local decisional entities can also be granted different levels of autonomy by applying different interaction modes. To balance reactivity and global performance, the approach accepts configurations in which some myopic behaviors are reduced and others are accepted. The approach was instantiated to control the flexible assembly cell of the AIP-PRIMECA center at the University of Valenciennes. Simulation results showed that the proposed architecture reduces myopic behavior and strikes a balance between reactivity and global performance, and a real implementation on the assembly cell verified the effectiveness of the approach under realistic dynamic scenarios, with promising results.
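    As a rough illustration of the simulation-based optimization idea, the hypothetical sketch below has a global decision layer evaluate candidate dispatching rules for a single resource over several stochastic simulation replications and commit to the best one. It is a toy model, not the thesis architecture or the AIP-PRIMECA cell.

```python
# A minimal sketch of a simulation-based optimization (SbO) loop choosing a
# dispatching rule to impose on local entities (toy model, hypothetical rules).
import random
from statistics import mean

def simulate_tardiness(jobs, rule, seed):
    """Single-resource simulation with stochastic processing times."""
    rng = random.Random(seed)
    t, tardiness = 0.0, 0.0
    for proc, due in sorted(jobs, key=rule):
        t += proc * rng.uniform(0.9, 1.2)   # execution noise
        tardiness += max(0.0, t - due)
    return tardiness

# Candidate local dispatching rules the global entity can choose between.
rules = {
    "SPT":  lambda job: job[0],   # shortest processing time first
    "EDD":  lambda job: job[1],   # earliest due date first
    "FIFO": lambda job: 0,        # keep release order (sort is stable)
}

random.seed(1)
jobs = [(random.uniform(1, 5), random.uniform(5, 40)) for _ in range(20)]

# SbO loop: score each candidate over several simulation replications,
# then commit to the best-performing rule.
scores = {name: mean(simulate_tardiness(jobs, rule, seed) for seed in range(10))
          for name, rule in rules.items()}
best = min(scores, key=scores.get)
print(scores, "->", best)
```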

    Analysis of the human corneal shape with machine learning

    This thesis aims to investigate the best conditions in which the anterior corneal surface of normal corneas can be preprocessed, classified and predicted using geometric modeling (GM) and machine learning (ML) techniques. The focus is on the anterior corneal surface, which is mainly responsible for the refractive power of the cornea. Regarding preprocessing, the first study (Chapter 2) examines the conditions in which GM can best be applied to reduce the dimensionality of a dataset of corneal surfaces to be used in ML projects. Four types of geometric models of corneal shape were tested for their accuracy and processing time: two polynomial (P) models, Zernike polynomial (ZP) and spherical harmonic polynomial (SHP) models, and two corresponding rational function (R) models, Zernike rational function (ZR) and spherical harmonic rational function (SHR) models. SHP and ZR are both known to be more accurate corneal shape models than ZP for the same number of coefficients, but which of SHP and ZR is the more accurate? And is an SHR model, which is both an SH model and an R model, even more accurate? Also, does modeling accuracy come at the cost of processing time, an important issue when testing the large datasets required in ML projects? Focusing on low values of J (the number of model coefficients) to address these issues under the dimensionality constraints that apply in ML tasks, it was found, based on a number of evaluation tools, that SH models were more accurate than their Z counterparts, that R models were more accurate than their P counterparts, and that the SH advantage was larger than the R advantage. Processing-time curves as a function of J showed that P models were processed in quasilinear time, R models in polynomial time, and Z models faster than SH models. Therefore, while SHR was the most accurate geometric model, it was also the slowest (a problem that can partly be remedied by applying a preoptimization procedure). ZP was the fastest model and, with normal corneas, it remains an interesting option for testing and development, especially for clustering tasks, owing to its transparent interpretability. 
The best compromise between accuracy and speed for ML preprocessing is SHP. The classification of corneal shapes by clinical parameters has a long tradition, but the visualization of their effects on corneal shape with group maps (average elevation maps, standard deviation maps, average difference maps, etc.) is relatively recent. In the second study (Chapter 3), we constructed an atlas of average elevation maps for different clinical variables (including geometric, refraction and demographic variables) that can be instrumental in evaluating the inputs (datasets) and outputs (predictions, clusters, etc.) of ML tasks. A large dataset of normal adult anterior corneal surface topographies, recorded as 101×101 elevation matrices, was first preprocessed by geometric modeling to reduce its dimensionality to a small number of Zernike coefficients found to be optimal for ML tasks. The modeled corneal surfaces were then grouped according to the clinical variables available in the dataset, transformed into categorical variables. For each group of each clinical variable, in both the natural (non-normalized) and the normalized state, an average surface was obtained by averaging the modeling coefficients and represented, in reference to its best-fit sphere, as a topographic elevation map. To validate the atlas in both its natural and normalized modalities, ANOVA tests were conducted for each clinical variable to verify the dataset's statistical consistency with the literature, before verifying whether the corneal shape transformations displayed in the maps were themselves visually consistent with the literature. This was the case. The possible uses of such an atlas are discussed. The third study (Chapter 4) concerns the use of a dataset of geometrically modeled corneal surfaces in an ML clustering task. The unsupervised classification of corneal surfaces is recent in ophthalmology, and most of the few existing studies on corneal clustering resort to feature extraction (as opposed to geometric modeling) to achieve dimensionality reduction. Their goal is usually to automate corneal diagnosis, for instance by distinguishing irregular corneal surfaces (keratoconus, Fuchs' dystrophy, etc.) from normal surfaces and, in some cases, by classifying irregular surfaces into subtypes. Complementary to these studies, the proposed study relies mainly on geometric modeling to achieve dimensionality reduction, possibly in combination with feature extraction methods, and focuses on normal adult corneas in an attempt to identify their natural groupings. Geometric modeling was based on Zernike polynomials, known for their interpretative transparency and sufficiently accurate for normal corneas. Different types of clustering methods were evaluated in pretests to identify the most effective at producing neatly delimited, clearly interpretable clusters. Their evaluation was based on clustering scores (to identify the best number of clusters), polar charts and scatter plots (to visualize the modeling coefficients involved in each cluster), average elevation maps and average profile cuts (to visualize the average corneal surface of each cluster), and statistical cluster comparisons on different clinical parameters (to validate the findings against the clinical literature). A sketch of this kind of clustering pretest is given below. 
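    The following is a minimal sketch, on synthetic data rather than the study's dataset, of clustering short Zernike coefficient vectors with k-means and scoring candidate numbers of clusters; the silhouette score stands in here for whichever clustering scores the study actually used.

```python
# A minimal sketch: k-means on geometrically modeled corneal surfaces, i.e.
# short Zernike coefficient vectors (synthetic stand-in data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Stand-in for a dataset of corneas reduced to 12 Zernike coefficients each.
coeffs = np.vstack([
    rng.normal(loc=center, scale=0.05, size=(250, 12))
    for center in (0.0, 0.3, -0.3, 0.6)   # four synthetic groups
])

# Score candidate numbers of clusters; the best k would then be inspected
# cluster by cluster (average surfaces, clinical comparisons), as in the study.
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coeffs)
    print(k, round(silhouette_score(coeffs, labels), 3))
```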
K-means, applied to geometrically modeled surfaces without feature extraction, produced the best clusters for both natural and normalized surfaces. While the clusters produced from natural corneal surfaces were based on corneal curvature, those produced from normalized surfaces were based on the corneal axis; in each case, the best number of clusters was four. The importance of curvature and axis as grouping criteria in the distribution of corneal data is discussed. The fourth study (Chapter 5) explores the ML paradigm to verify whether, and how, accurate predictions of normal corneal shapes can be made from clinical data. The database of normal adult corneal surfaces was first preprocessed by geometric modeling to reduce its dimensionality to short vectors of 12 to 20 Zernike coefficients, found to be in the range appropriate for optimal predictions. The nonlinear regression methods examined, from the scikit-learn library, were gradient boosting, Gaussian process, kernel ridge, random forest, k-nearest neighbors, bagging, and multilayer perceptron. The predictors were based on the clinical variables available in the database, including geometric variables (best-fit sphere radius, white-to-white diameter, anterior chamber depth, corneal side), refraction variables (sphere, cylinder, axis) and demographic variables (age, gender). Each possible combination of regression method, set of clinical variables (used as predictors) and number of Zernike coefficients (used as targets) defined a regression model in a prediction test. All regression models were evaluated on their mean RMSE score, which establishes the distance between the predicted corneal surfaces and the true surfaces given by the raw topographies. The best model was then assessed qualitatively using an atlas of predicted and true average elevation maps, in which the predicted surfaces could be visually compared to the true surfaces for each clinical variable used as a predictor. The best regression model was gradient boosting using all available clinical variables as predictors and 16 Zernike coefficients as targets. The most explanatory predictor was the best-fit sphere radius, followed by the eye side and the refraction variables. For each clinical variable, the average elevation maps of the true anterior corneal surfaces and of the surfaces predicted by this model were remarkably similar.
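    As a rough sketch of what such a regression model looks like in scikit-learn, the code below fits gradient boosting to predict a 16-coefficient Zernike vector from clinical predictors. The data are synthetic placeholders, and the RMSE here is computed directly on the coefficients rather than on reconstructed surfaces compared with raw topographies, as in the thesis.

```python
# A minimal sketch (synthetic data, not the thesis pipeline): multi-output
# gradient boosting predicting 16 Zernike coefficients from clinical variables.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_samples, n_coeffs = 500, 16

# Hypothetical clinical predictors mirroring those listed in the abstract.
X = np.column_stack([
    rng.normal(7.8, 0.25, n_samples),   # best-fit sphere radius (mm)
    rng.normal(11.8, 0.4, n_samples),   # white-to-white diameter (mm)
    rng.normal(3.1, 0.3, n_samples),    # anterior chamber depth (mm)
    rng.integers(0, 2, n_samples),      # eye side (0 = left, 1 = right)
    rng.normal(-1.0, 2.0, n_samples),   # refractive sphere (D)
    rng.normal(-0.5, 0.5, n_samples),   # refractive cylinder (D)
    rng.uniform(0, 180, n_samples),     # cylinder axis (degrees)
    rng.uniform(20, 70, n_samples),     # age (years)
    rng.integers(0, 2, n_samples),      # gender
])
# Synthetic targets: 16 coefficients loosely driven by the predictors.
W = rng.normal(0, 0.05, (X.shape[1], n_coeffs))
y = X @ W + rng.normal(0, 0.01, (n_samples, n_coeffs))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MultiOutputRegressor(GradientBoostingRegressor(random_state=0))
model.fit(X_train, y_train)

rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"RMSE on held-out Zernike coefficients: {rmse:.4f}")
```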