6 research outputs found

    Voronoi Diagrams for Convex Polygon-Offset Distance Functions

    No full text
    In this paper we develop the concept of a convex polygon-offset distance function. Using offset as a notion of distance, we show how to compute the corresponding nearest- and furthest-site Voronoi diagrams of point sites in the plane. We provide near-optimal deterministic O(n(log n + log² m) + m)-time algorithms, where n is the number of points and m is the complexity of the underlying polygon, for computing compact representations of both diagrams.
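    For intuition, here is a minimal sketch, not the paper's algorithm, of one natural reading of the polygon-offset distance described above: represent the convex polygon by the outward unit normal and support value of each edge (measured from a chosen center), and take the distance from a site to a query point as the smallest amount by which every edge must be pushed outward for the polygon, translated to the site, to cover the point. The nearest-site query below is a brute-force scan purely for illustration; the paper's contribution is the near-optimal construction of compact diagram representations.

```python
# Illustrative sketch only (assumed reading of "offset"): the distance from a
# site to a point is the smallest r such that translating the convex polygon to
# the site and pushing every edge outward by r covers the point.
import math

def edge_normals_and_supports(polygon, center):
    """Polygon as a CCW vertex list; returns (unit outward normal, support value)
    per edge, with supports measured from the chosen center point."""
    cx, cy = center
    edges = []
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        ex, ey = x2 - x1, y2 - y1
        nx, ny = ey, -ex                      # outward normal of a CCW polygon edge
        length = math.hypot(nx, ny)
        nx, ny = nx / length, ny / length
        h = nx * (x1 - cx) + ny * (y1 - cy)   # signed distance of the edge line from the center
        edges.append((nx, ny, h))
    return edges

def offset_distance(edges, site, q):
    """Smallest r so that the polygon translated to `site`, with every edge pushed
    outward by r, contains q (negative when q lies inside the untranslated shape)."""
    dx, dy = q[0] - site[0], q[1] - site[1]
    return max(nx * dx + ny * dy - h for nx, ny, h in edges)

def nearest_site(edges, sites, q):
    """Brute-force nearest-site query under the polygon-offset distance."""
    return min(range(len(sites)), key=lambda i: offset_distance(edges, sites[i], q))

# Example: distance induced by a 2x2 square centered at the origin.
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
edges = edge_normals_and_supports(square, (0.0, 0.0))
sites = [(0.0, 0.0), (5.0, 2.0), (-3.0, 4.0)]
print(nearest_site(edges, sites, (4.0, 1.0)))   # -> 1 (the site at (5, 2))
```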

    An Electrically Active Microneedle Electroporation Array for Intracellular Delivery of Biomolecules

    Get PDF
    The objective of this research is the development of an electrically active microneedle array that can deliver biomolecules such as DNA and drugs to epidermal cells by means of electroporation. Properly metallized microneedles could serve as microelectrodes essential for electroporation. Furthermore, the close needle-to-needle spacing of microneedle electrodes provides the advantage of utilizing reduced voltage, which is essential for safety as well as portable applications, while maintaining the large electric fields required for electroporation. Therefore, microneedle arrays can potentially be used as part of a minimally invasive, highly localized electroporation system for cells in the epidermis layer of the skin. This research consists of three parts: development of the 3-D microfabrication technology to create the microneedle array, fabrication and characterization of the microneedle array, and the electroporation studies performed with the microneedle array. A 3-D fabrication process was developed to produce a microneedle array using an inclined UV exposure technique combined with micromolding technology, potentially enabling low-cost mass manufacture. The developed technology is also capable of fabricating 3-D microstructures of various heights using a single mask. The fabricated microneedle array was then tested to demonstrate its feasibility for through-skin electrical and mechanical functionality using a skin insertion test. It was found that the microneedles were able to penetrate skin without breakage. To study the electrical properties of the array, a finite element simulation was performed to examine the electric field distribution. From these simulation results, a predictive model was constructed to estimate the effective volume for electroporation. Finally, studies to determine hemoglobin release from bovine red blood cells (RBCs) and the delivery of molecules such as calcein and bovine serum albumin (BSA) into human prostate cancer cells were used to verify the electrical functionality of this device. This work established that this device can be used to lyse RBCs and to deliver molecules, e.g. calcein, into cells, thus supporting our contention that this metallized microneedle array can be used to perform electroporation at reduced voltage. Further studies to show efficacy in skin should now be performed.
    Ph.D. Committee Chair: Mark G. Allen; Committee Member: Mark R. Prausnitz; Committee Member: Oliver Brand; Committee Member: Pamela Bhatti; Committee Member: Shyh-Chiang Shen
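    As a back-of-the-envelope illustration of the reduced-voltage argument above (made-up numbers, uniform-field approximation only): between closely spaced electrodes the field scales roughly as V/d, so shrinking the electrode gap proportionally lowers the voltage needed to reach electroporation-scale fields.

```python
# Rough illustration with hypothetical gap values: voltage needed to reach a
# given field strength when the field between electrodes is approximated as V/d.
spacing_um = [100, 500, 2000]        # hypothetical needle-to-needle gaps
target_field_kv_per_cm = 1.0         # order-of-magnitude placeholder for electroporation
for d_um in spacing_um:
    d_cm = d_um * 1e-4
    volts = target_field_kv_per_cm * 1000 * d_cm
    print(f"gap {d_um:>5} um -> ~{volts:.0f} V for {target_field_kv_per_cm} kV/cm")
```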

    Cutting conditions in high-speed milling: effect of feed-rate variation

    Get PDF
    In the high-speed milling (HSM) manufacturing process, studying how the machine behaves during machining is a delicate and important task. Identifying the machine's behaviour requires modelling the law of motion of the axes and the actual trajectory at discontinuities. The large number of discontinuities causes instability in the axis feed velocity, which increases the machining time and means the programmed feed rate is not respected, resulting in productivity problems and an underestimation of the machining cost for the manufacturer. The objective of this thesis is to develop a software tool that computes the feed rate and accurately estimates the cycle time for any toolpath generated by CAM software. To do so, we determined a model that identifies the kinematic behaviour of the axes of an HSM machining centre for any trajectory shape. From the model of the feed-rate variation, we determined the actual time along the trajectories and the error imposed by the process-planning office. Finally, we use these results to establish a methodology that assists in choosing the tool diameter and the machining strategy. To validate the models and methodologies developed, an experimental study was carried out on didactic and industrial applications.

    Cutting conditions in high-speed milling (effect of feed-rate variation)

    Get PDF
    In high-speed milling (HSM), the feed rate does not always reach the programmed value during machining, which increases the machining time and means the programmed feed rate is not respected. This phenomenon leads to productivity issues and an underestimation of the machining cost for industry. The aim of this study is to develop a computerised tool that automates the determination of the feed-rate evolution for an imposed error and the estimation of cycle time and production cost. First, a modelling approach is presented to evaluate the feed rate across any type of discontinuity between linear and circular contours, in different combinations, while taking the specified machining tolerances into account. The cycle time is then estimated with a maximum error of 7% between the actual and predicted cycle times. The proposed method makes it possible to define a methodology for determining the optimal tool diameter and the optimal machining strategy. Finally, an industrial application was carried out to validate the models and to determine the influence of the feed-rate evolution on the cycle time.
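    As a rough illustration of the kind of estimate both records above discuss (and not the thesis's actual kinematic model), the sketch below computes a cycle time for a 2-D polyline toolpath: the feed rate is forced to drop at each discontinuity according to the corner angle and an imposed contour error, and each linear segment is traversed with an acceleration-limited trapezoidal velocity profile. All numerical values are made up for the example, and the model deliberately ignores look-ahead and jerk limits.

```python
# Illustrative sketch (not the thesis model): cycle-time estimate along a 2-D
# polyline toolpath when the feed rate must drop at geometric discontinuities.
import math

def corner_feed(p_prev, p, p_next, a_max, tolerance, v_prog):
    """Hypothetical corner-feed limit: fit a blend arc within `tolerance` at the
    discontinuity and cap the feed so centripetal acceleration stays below a_max."""
    ux, uy = p[0] - p_prev[0], p[1] - p_prev[1]
    wx, wy = p_next[0] - p[0], p_next[1] - p[1]
    nu, nw = math.hypot(ux, uy), math.hypot(wx, wy)
    cos_b = (-ux * wx - uy * wy) / (nu * nw)
    beta = math.acos(max(-1.0, min(1.0, cos_b)))    # angle between the two legs
    s = math.sin(beta / 2.0)
    if s >= 1.0 - 1e-9:
        return v_prog                               # straight through: no slowdown
    r = tolerance * s / (1.0 - s)                   # blend-arc radius that stays within tolerance
    return min(v_prog, math.sqrt(a_max * r))

def segment_time(length, v_in, v_out, v_prog, a_max):
    """Trapezoidal accel/cruise/decel time over one linear segment (no look-ahead:
    the exit feed is simply clamped to what is reachable within the segment)."""
    v_out = min(v_out, math.sqrt(v_in ** 2 + 2.0 * a_max * length))
    v_peak = math.sqrt((2.0 * a_max * length + v_in ** 2 + v_out ** 2) / 2.0)
    if v_peak <= v_prog:                            # triangular profile: never reaches v_prog
        return (v_peak - v_in) / a_max + (v_peak - v_out) / a_max
    d_acc = (v_prog ** 2 - v_in ** 2) / (2.0 * a_max)
    d_dec = (v_prog ** 2 - v_out ** 2) / (2.0 * a_max)
    cruise = length - d_acc - d_dec
    return (v_prog - v_in) / a_max + (v_prog - v_out) / a_max + cruise / v_prog

def cycle_time(path, v_prog, a_max, tolerance):
    """Estimated machining time for a polyline toolpath starting and ending at rest."""
    feeds = [0.0]
    for i in range(1, len(path) - 1):
        feeds.append(corner_feed(path[i - 1], path[i], path[i + 1], a_max, tolerance, v_prog))
    feeds.append(0.0)
    total = 0.0
    for i in range(len(path) - 1):
        total += segment_time(math.dist(path[i], path[i + 1]), feeds[i], feeds[i + 1], v_prog, a_max)
    return total

# Made-up example: a 40 mm square contour, 0.1 m/s programmed feed,
# 2 m/s^2 axis acceleration, 0.01 mm allowed contour error at corners.
contour = [(0.0, 0.0), (0.04, 0.0), (0.04, 0.04), (0.0, 0.04), (0.0, 0.0)]
print(f"estimated cycle time: {cycle_time(contour, 0.1, 2.0, 1e-5):.3f} s")
```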

    Automating Geospatial RDF Dataset Integration and Enrichment

    Get PDF
    Over the last years, the Linked Open Data (LOD) cloud has evolved from a mere 12 to more than 10,000 knowledge bases. These knowledge bases come from diverse domains including (but not limited to) publications, life sciences, social networking, government, media, and linguistics. Moreover, the LOD cloud also contains a large number of cross-domain knowledge bases such as DBpedia and Yago2. These knowledge bases are commonly managed in a decentralized fashion and contain partly overlapping information. This architectural choice has led to knowledge pertaining to the same domain being published by independent entities in the LOD cloud. For example, information on drugs can be found in Diseasome as well as DBpedia and Drugbank. Furthermore, certain knowledge bases such as DBLP have been published by several bodies, which in turn has led to duplicated content in the LOD cloud. In addition, large amounts of geo-spatial information have been made available with the growth of the heterogeneous Web of Data.
    The concurrent publication of knowledge bases containing related information promises to become a phenomenon of increasing importance with the growing number of independent data providers. Enabling the joint use of the knowledge bases published by these providers for tasks such as federated queries, cross-ontology question answering and data integration is most commonly tackled by creating links between the resources described within these knowledge bases. Within this thesis, we spur the transition from isolated knowledge bases to enriched Linked Data sets where information can be easily integrated and processed. To achieve this goal, we provide concepts, approaches and use cases that facilitate the integration and enrichment of information with other data types that are already present on the Linked Data Web, with a focus on geo-spatial data.
    The first challenge that motivates our work is the lack of measures that use geographic data for linking geo-spatial knowledge bases. This is partly due to geo-spatial resources being described by means of vector geometry. In particular, discrepancies in granularity and error measurements across knowledge bases render the selection of appropriate distance measures for geo-spatial resources difficult. We address this challenge by evaluating the existing literature for point-set measures that can be used to measure the similarity of vector geometries. We then present and evaluate the ten measures that we derived from the literature on samples of three real knowledge bases.
    The second challenge we address in this thesis is the lack of automatic Link Discovery (LD) approaches capable of dealing with geo-spatial knowledge bases with missing and erroneous data. To this end, we present Colibri, an unsupervised approach that allows discovering links between knowledge bases while improving the quality of the instance data in these knowledge bases. A Colibri iteration begins by generating links between knowledge bases. The approach then uses these links to detect resources with probably erroneous or missing information, which is finally corrected or added.
    The third challenge we address is the lack of scalable LD approaches for tackling big geo-spatial knowledge bases. Thus, we present Deterministic Particle-Swarm Optimization (DPSO), a novel load-balancing technique for LD on parallel hardware based on particle-swarm optimization. We combine this approach with the Orchid algorithm for geo-spatial linking and evaluate it on real and artificial data sets.
    The lack of approaches for automatically updating the links of an evolving knowledge base is our fourth challenge. This challenge is addressed in this thesis by the Wombat algorithm. Wombat is a novel approach for the discovery of links between knowledge bases that relies exclusively on positive examples. Wombat is based on generalisation via an upward refinement operator to traverse the space of Link Specifications (LS). We study the theoretical characteristics of Wombat and evaluate it on different benchmark data sets.
    The last challenge addressed herein is the lack of automatic approaches for geo-spatial knowledge base enrichment. Thus, we propose Deer, a supervised learning approach based on a refinement operator for enriching Resource Description Framework (RDF) data sets. We show how exemplary descriptions of enriched resources can be used to generate accurate enrichment pipelines. We evaluate our approach against manually defined enrichment pipelines and show that our approach can learn accurate pipelines even when provided with a small number of training examples. Each of the proposed approaches is implemented and evaluated against state-of-the-art approaches on real and/or artificial data sets. Moreover, all approaches are peer-reviewed and published in a conference or journal paper. Throughout this thesis, we detail the ideas, implementation and evaluation of each of the approaches; we also discuss each approach and present lessons learned. Finally, we conclude this thesis by presenting a set of possible future extensions and use cases for each of the proposed approaches.
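    To make the first challenge concrete, the snippet below sketches one classical point-set measure of the family evaluated in the thesis: the discrete, symmetric Hausdorff distance between two vector geometries given as point sequences. It only illustrates how such measures compare geometries published at different granularities; it is not the thesis's implementation.

```python
# Illustrative sketch: discrete symmetric Hausdorff distance between two
# vector geometries represented as sequences of (x, y) points.
import math

def directed_hausdorff(a, b):
    """Largest distance from a point of `a` to its nearest neighbour in `b`."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance: the larger of the two directed distances."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

# Two descriptions of (roughly) the same boundary, sampled at different
# granularities, as might occur across independently published knowledge bases.
coarse = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
fine = [(0.0, 0.0), (0.5, 0.05), (1.0, 0.0), (1.0, 0.5), (1.0, 1.0),
        (0.5, 0.95), (0.0, 1.0), (0.0, 0.5)]
print(f"Hausdorff distance: {hausdorff(coarse, fine):.3f}")
```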