
    Methodology for automatic recovering of 3D partitions from unstitched faces of non-manifold CAD models

    Data exchanges between different software packages are commonly used in industry to speed up the preparation of digital prototypes for Finite Element Analysis (FEA). Unfortunately, due to data loss, the yield of the transfer of manifold models rarely reaches 1. In the case of non-manifold models, the transfer results are even less satisfactory. This is particularly true for partitioned 3D models: during data transfers based on the well-known exchange formats, all 3D partitions are generally lost. Partitions are mainly used for preparing the mesh models required for advanced FEA: mapped meshing, material separation, definition of specific boundary conditions, etc. This paper sets up a methodology to automatically recover 3D partitions from exported non-manifold CAD models in order to increase the yield of the data exchange. Our fully automatic approach is based on three steps. First, starting from a set of potentially disconnected faces, the CAD model is stitched. Then, the shells used to create the 3D partitions are recovered using an iterative propagation strategy that starts from the so-called manifold vertices. Finally, using the identified closed shells, the 3D partitions can be reconstructed. The proposed methodology has been validated on academic as well as industrial examples. This work has been carried out under a research contract between the Research and Development Division of the EDF Group and Arts et Métiers ParisTech Aix-en-Provence.
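    To make the shell-recovery step more concrete, the following is a minimal sketch of grouping stitched faces into candidate shells by edge adjacency and testing whether each shell is closed. It assumes faces are plain loops of vertex ids after stitching, and it deliberately omits the paper's manifold-vertex seeding and non-manifold edge handling, so it illustrates the idea rather than the authors' algorithm.

```python
from collections import defaultdict

def face_edges(verts):
    """Undirected edges of a face given as an ordered loop of vertex ids."""
    return [frozenset(p) for p in zip(verts, verts[1:] + verts[:1])]

def recover_shells(faces):
    """Group stitched faces into candidate shells by edge adjacency."""
    # Map each undirected edge to the faces that use it.
    edge_to_faces = defaultdict(list)
    for fid, verts in enumerate(faces):
        for e in face_edges(verts):
            edge_to_faces[e].append(fid)

    seen, shells = set(), []
    for seed in range(len(faces)):
        if seed in seen:
            continue
        # Flood-fill across shared edges starting from an unvisited face.
        shell, stack = set(), [seed]
        seen.add(seed)
        while stack:
            fid = stack.pop()
            shell.add(fid)
            for e in face_edges(faces[fid]):
                for nb in edge_to_faces[e]:
                    if nb not in seen:
                        seen.add(nb)
                        stack.append(nb)
        # A shell can bound a 3D partition only if it is closed, i.e.
        # every one of its edges is shared by at least two faces.
        closed = all(len(edge_to_faces[e]) >= 2
                     for fid in shell for e in face_edges(faces[fid]))
        shells.append((sorted(shell), closed))
    return shells

# Two triangles sharing an edge form one open shell.
print(recover_shells([(0, 1, 2), (1, 2, 3)]))
```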

    Parallel Image Processing Concepts

    Image processing analyses an image and produces a resulting output, traditionally in a sequential way. Image processing tasks are widely used in many application domains, including medical imaging, industrial manufacturing, entertainment, and security systems. The images are often very large, the processing time has to be very small, and real-time constraints usually have to be met. Image analysis also requires a large amount of memory and CPU performance; to cope with this problem, image processing tasks are parallelized. Parallelism thus becomes a key factor for processing huge volumes of raw image data: it allows scalable and flexible resource management and reduces the time needed to develop image analysis programs. This paper presents the automatic parallelization of image processing tasks in a distributed system, in which subtasks suitable for parallel processing are extracted and mapped onto the components of the distributed system. It also discusses the design issues of parallel image processing in distributed systems and how image analysis tasks can process images in parallel. This approach is especially attractive when developing parallel programs, as it requires little effort to find a suitable distribution of program modules and data.
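    As an illustration of the kind of data-parallel decomposition discussed above, here is a minimal sketch that splits an image into strips and maps them onto worker processes; the strip size, the invert kernel, and the use of Python's ProcessPoolExecutor are illustrative choices, not the paper's system.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def invert_tile(tile: np.ndarray) -> np.ndarray:
    """Example per-tile kernel: invert an 8-bit grayscale tile."""
    return 255 - tile

def process_in_tiles(image: np.ndarray, tile_rows: int = 256) -> np.ndarray:
    # Split the image into horizontal strips (the 'subtasks').
    strips = [image[r:r + tile_rows] for r in range(0, image.shape[0], tile_rows)]
    # Map each strip onto a worker process, then stitch the results back.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(invert_tile, strips))
    return np.vstack(results)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)
    out = process_in_tiles(img)
    assert out.shape == img.shape
```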

    Towards a Digital Assessment: Artificial Intelligence Assisted Error Analysis in ESL

    The study we present here aims to explore the possibilities that new Artificial Intelligence tools offer teachers for designing assessments to improve the written proficiency of students of English as a Foreign Language (the participants in this study predominantly have Spanish as their L1) in a university English language course with a CEFR B2 objective. The group we monitor is typical of the Spanish university system: more than sixty students, with diverse backgrounds and unequal proficiency in English. In such conditions, the teacher must be very attentive to meet the needs of all learners and, at the same time, keep track of successes and failures in the designed study plans. One of the most notable reasons for failure and dropout in such a scenario is the performance in, and time devoted to, written competence (Cabrera, 2014; López Urdaneta, 2011). Consequently, we explore whether combining the theoretical foundations of the linguistic and pedagogical tradition of Error Analysis, one of the most notable tools for enhancing the writing competence of learners of English as a Foreign Language, with new intelligent technologies can provide new perspectives and strategies to help learners produce more appropriate written texts (more natural output). At the same time, we check whether an AI-assisted, Error Analysis-based assessment produces better results in error avoidance and rule application in the collected writing samples.
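    As a rough illustration of what an AI-assisted Error Analysis pass could look like, the sketch below asks a text-generation model (behind a hypothetical ask_model callable) to tag learner errors with a fixed tag set, then aggregates the tags into a per-student error profile; the prompt, tag set, and function names are assumptions, not the study's actual instruments.

```python
import json

ERROR_TAGS = ["article", "preposition", "verb-tense", "word-order", "lexis"]

def annotate_errors(learner_text: str, ask_model) -> list[dict]:
    """Ask a model to tag learner errors with a fixed Error Analysis tag set."""
    prompt = (
        "Tag every error in the text below. Reply as a JSON list of objects "
        f"with keys 'span', 'tag' (one of {ERROR_TAGS}), and 'correction'.\n\n"
        + learner_text
    )
    # ask_model is a placeholder for whatever model endpoint is available.
    return json.loads(ask_model(prompt))

def error_profile(annotations: list[dict]) -> dict[str, int]:
    """Aggregate tag frequencies so the teacher can track error avoidance."""
    profile: dict[str, int] = {}
    for ann in annotations:
        profile[ann["tag"]] = profile.get(ann["tag"], 0) + 1
    return profile
```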

    A Review of Artificial Intelligence in the Internet of Things

    Humans have the ability to learn new things naturally thanks to the capacities with which we are born. We simply need to have experiences, read, study… live. Through these processes, we are capable of acquiring new abilities or modifying those we already have. Another ability we possess is the faculty to think, imagine, create our own ideas, and dream. Nevertheless, what happens when we extrapolate this to machines? Machines can learn. We can teach them. In recent years, considerable advances have been made: we have seen cars that can recognise pedestrians or other cars, systems that distinguish animals, and even artificial intelligences able to dream, paint, and compose music by themselves. Despite this, the question remains: Can machines think? Or, in other words, could a machine that is talking to a person from another room make that person believe they are talking with another human? This question has been open since Alan Mathison Turing posed it, and it has not been resolved yet. In this article, we present the beginnings of what is known as Artificial Intelligence and some of its branches, such as Machine Learning, Computer Vision, Fuzzy Logic, and Natural Language Processing. We discuss each of them, their concepts, how they work, and related work in the Internet of Things field.
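    As a small taste of one of the branches mentioned above, Fuzzy Logic, the sketch below maps an IoT temperature reading to a fan speed; the membership functions and the single rule base are illustrative inventions, not taken from the article.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c: float) -> float:
    # Fuzzify: degrees of membership in 'cool', 'warm', and 'hot'.
    cool = tri(temp_c, 0, 15, 22)
    warm = tri(temp_c, 18, 24, 30)
    hot = tri(temp_c, 26, 35, 50)
    # Each label maps to a crisp fan speed (%); defuzzify by taking the
    # membership-weighted average (centroid of singletons).
    weights = {0.0: cool, 50.0: warm, 100.0: hot}
    total = sum(weights.values())
    return sum(s * w for s, w in weights.items()) / total if total else 0.0

print(fan_speed(27.0))  # somewhere between 'warm' and 'hot'
```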

    Audiovisual translation, translators, and technology: From automation pipe dream to human–machine convergence

    Audiovisual translation (AVT), broadly understood as a synonym for media content localization, and not only as a particular practice of linguistic transfer, is undergoing a revolution that was unthinkable only a few years ago, even in those territories where viewers are less accustomed to localized content. Digitalization and technological changes, which have had such an impact on the way audiovisual texts, whether original, localized, or adapted, are produced, distributed, edited, consumed, and shared, have also had a substantial impact on the AVT profession. This article explores the ways in which technology has evolved as an aid to translators: from being merely a clerical aid for transcribing digital texts to automating tasks and integrating machine translation into human translation processes, providing a range of tools that assist translators in their work and progressively migrating both tools and processes to cloud-based environments. The focus then turns to AVT, and more particularly to dubbing, where digitalization has shaped the consumer market and posed several challenges to language technology developments and AVT professional practices. Academia has also paid attention to such developments and has increasingly dealt with a number of matters affecting both practice and training to cater to the needs of current media markets. A final word is devoted to proposing a literacy-based framework for the training of translators that embraces technology, incorporating automation as an additional aid and redefining the audiovisual translator's workstation.

    Tractography-based reconstruction of white matter fibres in healthy adults and in adults suffering from neurological lesions

    The brain is one of the most complex and least understood organs of the human body. Thanks to magnetic resonance imaging (MRI), and more precisely diffusion imaging, it is now possible to reconstruct the connectivity of the white matter. With age or disease, the brain can undergo alterations that modify white matter connectivity, and these alterations must be taken into account in order to perform accurate analyses of brain connections. Most algorithms used in the field of brain imaging are developed with images from young, healthy subjects. However, the reality of applied research and of the clinic is quite different: the tools used must be modular, whether they process a healthy, elderly, or pathological subject. This thesis first presents the context, and then focuses on the development of methods for tractography in practical settings and of automated tools for processing diffusion MRI (dMRI). The guide to tractography in practical settings is a book chapter intended to train and advise clinical researchers in obtaining a tractogram that meets their needs. The tools developed in this thesis consist of an automated dMRI processing pipeline called TractoFlow and a segmentation tool robust to aging-related white matter lesions called DORIS. TractoFlow produces a tractogram from raw dMRI images easily, quickly, and reproducibly. Our second algorithm, DORIS, produces a segmentation of brain tissue into 10 classes from dMRI measures while improving the quality of anatomically constrained tractography. As a discussion, this thesis presents two future projects: DORIS adapted to lesions, and tractography that adapts to the underlying tissue. DORIS adapted to lesions aims to add an 11th class in order to segment lesions related to multiple sclerosis, while adaptive tractography offers a new way of reconstructing white matter fibres by adapting the reconstruction parameters to the tissue being traversed. This thesis thus pursues two objectives: first, to process and analyse brain connectivity in young subjects, elderly subjects, and subjects suffering from a pathology; second, to meet the needs of the clinic and of applied research by being simple and modular. Finally, the thesis concludes by presenting the impact of the different tools on the community and discussing a vision of the future of dMRI and tractography.
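    To give a flavour of the core reconstruction step such pipelines automate, here is a minimal sketch of deterministic streamline tractography over a per-voxel direction field with a tissue-mask stopping rule; the inputs, step size, and stopping criterion are illustrative simplifications, not TractoFlow's actual processing.

```python
import numpy as np

def track(seed, direction_field, mask, step=0.5, max_steps=2000):
    """Follow the local principal diffusion direction from a seed point.

    direction_field: (X, Y, Z, 3) array of unit vectors per voxel.
    mask: (X, Y, Z) boolean array, True where tracking is allowed
          (e.g. white matter from a tissue segmentation such as DORIS).
    """
    pos = np.asarray(seed, dtype=float)
    prev = None
    streamline = [pos.copy()]
    for _ in range(max_steps):
        vox = tuple(np.round(pos).astype(int))
        if not (all(0 <= v < s for v, s in zip(vox, mask.shape)) and mask[vox]):
            break  # left the tracking mask: anatomically constrained stop
        d = direction_field[vox]
        if prev is not None and np.dot(d, prev) < 0:
            d = -d  # keep a consistent orientation along the streamline
        pos = pos + step * d
        prev = d
        streamline.append(pos.copy())
    return np.array(streamline)
```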

    What about if buildings respond to my mood?

    This work analyzes the possibilities of interaction between the built environment and its users, focusing on the responsiveness of the former to the emotions of the latter. Transforming the built environment according to the mood, feelings, and emotions of its users, moment by moment, is discussed and analyzed. The main goal of this research is to define a responsive model by which the built environment can respond in a personalized way to the users' emotions. To this end, computational and technical issues, building construction elements, and modes of user interaction are identified and analyzed. Case studies in which the physical space and its users interact are presented. We define a model for an architecture that is responsive to the user's emotions, placing the individual at one end and the space at the other. The interaction between both ends takes place through intermediate steps: the collection of data, the recognition of emotion, and the execution of the action that responds to the detected emotion. As this work focuses on an innovative and disruptive aspect of the built environment, the new difficulties and related ethical issues it raises are discussed.
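    A minimal sketch of the three intermediate steps named above (data collection, emotion recognition, and the responsive action) is given below; the sensor readings, the threshold classifier, and the actions are hypothetical placeholders rather than the model proposed in the paper.

```python
import random
import time

def collect_data() -> dict:
    """Stand-in for a real sensor array (camera, wearable, microphone...)."""
    return {"heart_rate": random.randint(55, 110), "voice_pitch": random.random()}

def recognize_emotion(sample: dict) -> str:
    """Stand-in for a trained emotion classifier."""
    return "stressed" if sample["heart_rate"] > 95 else "calm"

def respond(emotion: str) -> None:
    """Map the detected emotion to an action of the built environment."""
    actions = {"stressed": "dim lights, warm colour temperature",
               "calm": "keep current ambience"}
    print(f"emotion={emotion}: {actions[emotion]}")

for _ in range(3):  # one sensing-response cycle per tick
    respond(recognize_emotion(collect_data()))
    time.sleep(1)
```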

    General Dynamic Surface Reconstruction: Application to the 3D Segmentation of the Left Ventricle

    This thesis describes a contribution to the three-dimensional reconstruction of the internal and external surfaces of the human left ventricle. The reconstruction is the first process in a complete Virtual Reality application designed to serve as an important diagnosis tool for hospitals. Starting from the reconstructed surfaces, the application provides the expert with volume computation and interactive real-time manipulation of the model, among other parameters of interest. The surface-recovery process is characterized by its speed of convergence, the smoothness of the final meshes, and its precision with respect to the recovered data. Since the diagnosis of heart diseases requires experience, time, and professional knowledge, simulation is a key process that improves efficiency. The algorithms and implementations have been applied to both synthetic and real datasets that differ in the amount of missing data, a situation that arises in pathological and abnormal cases. The datasets include single acquisitions and complete cardiac cycles. The quality of the reconstruction system has been evaluated with medical parameters in order to compare our final results with those obtained by the software typically used by medical professionals. Besides the direct application to medical diagnosis, our methodology is suitable for generic reconstructions in the field of 3D computer graphics. Our reconstructions can produce three-dimensional models at low cost in terms of the manual interaction required and the associated computational load. Furthermore, our method can be understood as a robust tessellation algorithm that builds surfaces from clouds of points retrieved from laser scanners or magnetic sensors, among other available hardware.
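    As an illustration of the deformable-surface idea behind such reconstructions, the sketch below pulls each mesh vertex toward its nearest data point while a Laplacian term keeps the mesh smooth; the weights, the brute-force nearest-point search, and the iteration count are illustrative choices, not the thesis's algorithm.

```python
import numpy as np

def deform(verts, neighbors, cloud, alpha=0.5, beta=0.3, iters=100):
    """verts: (N,3) mesh vertices; neighbors: list of index lists per vertex;
    cloud: (M,3) acquired points (e.g. from segmented image slices)."""
    verts = verts.copy()
    for _ in range(iters):
        # Data term: vector to the nearest cloud point (brute force here).
        d2 = ((cloud[None, :, :] - verts[:, None, :]) ** 2).sum(-1)
        pull = cloud[d2.argmin(1)] - verts
        # Smoothness term: umbrella Laplacian toward the neighbour centroid.
        lap = np.array([verts[nb].mean(0) for nb in neighbors]) - verts
        verts += alpha * pull + beta * lap
    return verts
```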

    Steps towards adaptive situation and context-aware access: a contribution to the extension of access control mechanisms within pervasive information systems

    The evolution of pervasive computing has opened new horizons to classical information systems by integrating new technologies and services that enable seamless access to information sources anytime, anywhere, and anyhow. Meanwhile, this evolution has raised new threats to information security and new challenges for access control modeling. In order to meet these challenges, many research works have extended traditional access control models (especially the RBAC model) to add context awareness to the decision-making process. However, tying access decisions to the dynamic contextual constraints of mobile users not only adds more complexity to decision-making but can also increase the likelihood of access denial. Knowing that accessibility is a key feature of pervasive systems, and taking into account the importance of providing access in real-time situations, many research works have proposed flexible access control mechanisms, sometimes with extreme solutions that overstep security boundaries, such as the Break-Glass option. In this thesis, we introduce a moderate solution that stands between the rigidity of access control models and the risky flexibility applied during real-time situations. Our contribution is twofold. At the design phase, we propose PS-RBAC, a Pervasive Situation-aware RBAC model that realizes adaptive permission assignments and alternative-based, similarity-driven decision-making when facing an important situation. At the implementation phase, we introduce PSQRS, a Pervasive Situation-aware Query Rewriting System architecture that confronts access denials by reformulating the user's XACML access request and proposing a list of similar alternative resources that the user can access. The objective is to provide a level of adaptive security that meets the user's needs while taking into consideration their role, contextual constraints (location, network, device, etc.), and situation. Our proposal has been validated in three application domains that are rich in pervasive contexts and real-time scenarios: (i) Mobile Geriatric Teams, (ii) Avionic Systems, and (iii) Video Surveillance Systems.
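    To illustrate the alternative-suggestion idea behind PSQRS, the sketch below denies a request not covered by a toy role-permission table and ranks the role's permitted resources by tag similarity to the denied one; the policy table, the Jaccard similarity, and all names are invented for illustration and stand in for the XACML machinery of the actual system.

```python
POLICY = {  # (role, resource) pairs that are permitted
    ("nurse", "patient_summary"), ("nurse", "medication_list"),
    ("doctor", "patient_summary"), ("doctor", "full_record"),
}
TAGS = {  # resource descriptions used to compute similarity
    "full_record": {"patient", "history", "labs", "notes"},
    "patient_summary": {"patient", "history"},
    "medication_list": {"patient", "medication"},
}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def decide(role: str, resource: str) -> tuple[bool, list[str]]:
    """Permit, or deny with a ranked list of similar permitted alternatives."""
    if (role, resource) in POLICY:
        return True, []
    allowed = [r for rl, r in POLICY if rl == role and r in TAGS]
    ranked = sorted(allowed, key=lambda r: jaccard(TAGS[resource], TAGS[r]),
                    reverse=True)
    return False, ranked

print(decide("nurse", "full_record"))  # (False, ['patient_summary', ...])
```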

    Compact GML: merging mobile computing and mobile cartography

    The use of portable devices is moving from "Wireless Applications", typically implemented as browsing-on-the-road, to "Mobile Computing", which aims to exploit the increasing processing power of consumer devices. As users get connected with smartphones and PDAs, they look for geographic information and location-aware services. While browser-based approaches have been explored (using static images or graphics formats such as Mobile SVG), a data model tailored for local computation on mobile devices is still missing. This paper presents the Compact Geographic Markup Language (cGML), which enables the design and development of special-purpose GIS applications for portable consumer devices; a cGML document can also be used as a spatial query result.
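    As a sketch of the kind of lightweight, locally processed encoding cGML aims at, the snippet below parses a compact, GML-like feature document with the standard library and answers a simple query on the device; the markup is hypothetical and only illustrates the idea, not actual cGML syntax.

```python
import xml.etree.ElementTree as ET

DOC = """<features>
  <f id="r12" type="road"><pts>103.2 1.35 103.3 1.36</pts></f>
  <f id="b7" type="building"><pts>103.25 1.34</pts></f>
</features>"""

def load_features(doc: str) -> list[dict]:
    """Decode a compact feature document into plain Python records."""
    feats = []
    for f in ET.fromstring(doc).iter("f"):
        nums = [float(v) for v in f.findtext("pts").split()]
        coords = list(zip(nums[0::2], nums[1::2]))  # (lon, lat) pairs
        feats.append({"id": f.get("id"), "type": f.get("type"), "coords": coords})
    return feats

# Local computation, e.g. answering a spatial query result on the device:
roads = [ft for ft in load_features(DOC) if ft["type"] == "road"]
print(roads)
```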