    Automated Debugging Methodology for FPGA-based Systems

    Electronic devices are a vital part of modern life, appearing in mobile phones, laptops, computers, home automation systems, and many other applications. Modern designs comprise billions of transistors, and with this evolution, ensuring that devices meet the designer's expectations under variable conditions has become a major challenge that demands considerable design time and effort. Whenever an error is encountered, the design process is restarted, so it is desirable to minimise the number of spins required to achieve an error-free product, as each spin costs time and effort. Software-based simulation is the main technique for verifying a design before fabrication. However, a few design errors (bugs) are likely to escape the simulation process and subsequently appear during the post-silicon phase. Finding such bugs is time-consuming due to the inherent invisibility of the hardware. Instead of simulating the design in software during the pre-silicon phase, post-silicon techniques allow designers to verify functionality on physical implementations of the design. The main benefit of this methodology is that the implemented design runs many orders of magnitude faster in the post-silicon phase than its pre-silicon counterpart, allowing designers to validate their designs more exhaustively. This thesis presents five main contributions towards a fast and automated debugging solution for reconfigurable hardware. Throughout the research, an obstacle avoidance system for robotic vehicles is used as a case study to illustrate how the proposed debugging solution can be applied in practical environments. The first contribution is a debugging system capable of providing a lossless trace of debug data that permits cycle-accurate replay, ensuring that both permanent and intermittent errors in the implemented design are captured. This contribution also describes a solution to enhance hardware observability: processor-configurable concentration networks, debug data compression for more efficient transmission, and run-time partial reconfiguration of the debugging system to avoid design re-compilation and preserve timing closure. The second contribution presents a solution for communication-centric designs, together with solutions for designs with multiple clock domains. The third contribution presents a priority-based signal selection methodology that identifies the signals most helpful during debugging, along with a connectivity generation tool that maps the identified signals to the debugging system. The fourth contribution presents an automated error detection solution that captures permanent as well as intermittent errors without continuous monitoring of debug data; the proposed solution works even in the absence of a golden reference. The fifth contribution proposes the use of artificial intelligence for post-silicon debugging: a novel approach using a recurrent neural network for debugging when a golden reference is available for training the network, extended further to designs where no golden reference is present.
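
    To make the error-capture idea concrete, here is a hedged sketch of classifying mismatches between a captured post-silicon trace and a golden-reference trace as permanent or intermittent by their persistence. This is not the thesis's implementation: the trace format, the persistence threshold, and the classify_errors helper are all illustrative assumptions.

```python
# Hedged sketch: classify trace mismatches as permanent or intermittent.
# The trace format (one sampled signal value per clock cycle) and the
# persistence threshold are illustrative assumptions, not the thesis's design.

def classify_errors(golden, captured, persistence=8):
    """Compare two equal-length traces cycle by cycle.

    A mismatch that persists for at least `persistence` consecutive
    cycles is reported as permanent; shorter bursts as intermittent.
    """
    assert len(golden) == len(captured)
    errors, run_start = [], None
    for cycle, (g, c) in enumerate(zip(golden, captured)):
        if g != c and run_start is None:
            run_start = cycle                  # mismatch run begins
        elif g == c and run_start is not None:
            length = cycle - run_start
            kind = "permanent" if length >= persistence else "intermittent"
            errors.append((run_start, length, kind))
            run_start = None
    if run_start is not None:                  # run extends to end of trace
        length = len(golden) - run_start
        kind = "permanent" if length >= persistence else "intermittent"
        errors.append((run_start, length, kind))
    return errors

# Example: a 2-cycle glitch (intermittent) and an inverted signal (permanent).
golden   = [0, 1, 1, 0, 1, 0, 0, 1] * 4
captured = list(golden)
captured[3:5] = [1, 0]          # short glitch at cycles 3-4
for i in range(20, 32):
    captured[i] ^= 1            # signal inverted from cycle 20 onward
print(classify_errors(golden, captured))
# -> [(3, 2, 'intermittent'), (20, 12, 'permanent')]
```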

    Ultrasonic sensor platforms for non-destructive evaluation

    Robotic vehicles are receiving increasing attention for use in Non-Destructive Evaluation (NDE) owing to their attractiveness in terms of cost, safety, and their access to areas where manual inspection is not practical. A reconfigurable Lamb wave scanner, using autonomous robotic platforms, is presented. The scanner is built from a fleet of wireless miniature robotic vehicles, each with a non-contact ultrasonic payload capable of generating the A0 Lamb wave mode in plate specimens. An embedded Kalman filter gives the robots a positional accuracy of 10mm. A computer simulator, developed to facilitate the design and assessment of the reconfigurable scanner, is also presented. Transducer behaviour has been simulated using a Linear Systems (LS) approximation, with wave propagation in the structure modelled using the Local Interaction Simulation Approach (LISA). The integration of the LS and LISA approaches was validated for use in Lamb wave scanning by comparison with both analytical techniques and more computationally intensive commercial finite element/difference codes. Starting from fundamental dispersion data, the work goes on to describe the simulation of wave propagation and its subsequent interaction with artificial defects and plate boundaries. The computer simulator was used to evaluate several imaging techniques, including local inspection of the area under the robot and an extended method that emits an ultrasonic wave and listens for echoes (B-Scan). These algorithms were implemented on the robotic platform and experimental results are presented. The Synthetic Aperture Focusing Technique (SAFT) was evaluated as a means of improving the fidelity of B-Scan data. It was found that SAFT is only effective for transducers with reasonably wide beam divergence, necessitating small transducers with a width of approximately 5mm. Finally, an algorithm for robot localisation relative to plate sections was proposed and experimentally validated.
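
    The positional accuracy quoted above comes from Kalman filtering of the robots' position measurements. As a hedged illustration only, the following numpy sketch fuses a noisy position reading with a constant-velocity motion model in one dimension; the noise covariances, update rate, and measurement source are assumptions, not the platform's actual tuning.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter, sketching the kind of sensor
# fusion behind the robots' ~10mm positional accuracy. All noise covariances
# below are illustrative guesses, not the platform's tuning.

dt = 0.1                                   # update period [s] (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (pos, vel)
H = np.array([[1.0, 0.0]])                 # we measure position only
Q = np.diag([1e-4, 1e-3])                  # process noise covariance
R = np.array([[4e-4]])                     # measurement noise (20mm std -> 4e-4 m^2)

x = np.array([[0.0], [0.0]])               # initial state estimate
P = np.eye(2)                              # initial state covariance

def kalman_step(x, P, z):
    # Predict with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the position measurement z
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
true_pos = 0.0
for _ in range(100):
    true_pos += 0.05 * dt                                  # robot at 50 mm/s
    z = np.array([[true_pos + rng.normal(0, 0.02)]])       # noisy measurement
    x, P = kalman_step(x, P, z)
print(f"estimate: {x[0, 0]:.4f} m, truth: {true_pos:.4f} m")
```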

    Virtual metrology for plasma etch processes.

    Plasma processes can present difficult control challenges due to time-varying dynamics and a lack of relevant and/or regular measurements. Virtual metrology (VM) is the use of mathematical models with accessible measurements from an operating process to estimate variables of interest. This thesis addresses the challenge of virtual metrology for plasma processes, with a particular focus on semiconductor plasma etch. Introductory material covering the essentials of plasma physics, plasma etching, plasma measurement techniques, and black-box modelling techniques is first presented for readers not familiar with these subjects. A comprehensive literature review is then completed to detail the state of the art in modelling and VM research for plasma etch processes. To demonstrate the versatility of VM, a temperature monitoring system utilising a state-space model and a Luenberger observer is designed for the variable specific impulse magnetoplasma rocket (VASIMR) engine, a plasma-based space propulsion system. The temperature monitoring system uses optical emission spectroscopy (OES) measurements from the VASIMR engine plasma to correct temperature estimates in the presence of modelling error and inaccurate initial conditions. Temperature estimates within 2% of the real values are achieved using this scheme. An extensive examination of the implementation of a wafer-to-wafer VM scheme to estimate plasma etch rate for an industrial plasma etch process is presented. The VM models estimate etch rate using measurements from the processing tool and a plasma impedance monitor (PIM). A selection of modelling techniques is considered for VM modelling, and Gaussian process regression (GPR) is applied for the first time to VM of plasma etch rate. Models with global and local scope are compared, and modelling schemes that attempt to cater for the etch process dynamics are proposed. GPR-based windowed models produce the most accurate estimates, achieving mean absolute percentage errors (MAPEs) of approximately 1.15%. The consistency of the results suggests that this level of accuracy is the best achievable for the plasma etch system at the current frequency of metrology. Finally, a real-time VM and model predictive control (MPC) scheme for control of plasma electron density in an industrial etch chamber is designed and tested. The VM scheme uses PIM measurements to estimate electron density in real time. A predictive functional control (PFC) scheme is implemented to cater for a time delay in the VM system. The controller achieves time constants of less than one second, no overshoot, and excellent disturbance rejection. The PFC scheme is further extended by adapting the controller's internal model in real time in response to changes in the process operating point.
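
    To make the windowed GPR idea concrete, here is a hedged sketch of a virtual-metrology model that, for each new wafer, refits a Gaussian process on the most recent wafers (so it can track drift) and is scored by MAPE. The feature set, window length, and synthetic data are illustrative assumptions; the thesis's actual models and data are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hedged sketch of a "windowed" GPR virtual-metrology model: for each new
# wafer, fit on the W most recent (features, etch-rate) pairs so the model
# tracks slow process drift. Data below are synthetic stand-ins for the
# processing-tool and plasma impedance monitor (PIM) measurements.

rng = np.random.default_rng(1)
n_wafers, n_features, W = 200, 5, 50
X = rng.normal(size=(n_wafers, n_features))           # stand-in PIM features
drift = 0.02 * np.arange(n_wafers)                    # slow process drift
y = 100.0 + X @ np.array([0.5, -0.2, 0.1, 0.3, -0.4]) \
    + drift + rng.normal(0, 0.05, n_wafers)           # etch rate, arb. units

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)

estimates, actuals = [], []
for t in range(W, n_wafers):
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(X[t - W:t], y[t - W:t])                   # window of recent wafers
    estimates.append(gpr.predict(X[t:t + 1])[0])
    actuals.append(y[t])

estimates, actuals = np.array(estimates), np.array(actuals)
mape = np.mean(np.abs((actuals - estimates) / actuals)) * 100
print(f"MAPE over {len(actuals)} wafers: {mape:.2f}%")
```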

    Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems

    Embedded systems are steadily extending their application areas, dealing with increasing requirements in performance, power consumption, and area (PPA). Whenever embedded systems are used in safety-critical applications, they must also meet rigorous dependability requirements to guarantee their correct operation during an extended period of time. Meeting these requirements is especially challenging for systems based on Field Programmable Gate Arrays (FPGAs), since they are very susceptible to Single Event Upsets. This leads to increased dependability threats, especially in harsh environments. Dependability should therefore be considered one of the primary criteria for decision making throughout the whole design flow, complemented by several dependability-driven processes. First, dependability assessment quantifies the robustness of hardware designs against faults and identifies their weak points. Second, dependability-driven verification ensures the correctness and efficiency of fault mitigation mechanisms. Third, dependability benchmarking allows designers to select (from a dependability perspective) the most suitable IP cores, implementation technologies, and electronic design automation (EDA) tools. Finally, dependability-aware design space exploration (DSE) allows the selected IP cores and EDA tools to be optimally configured, improving as much as possible the dependability and PPA features of the resulting implementations. The aforementioned processes rely on fault injection testing to quantify the robustness of the designed systems. Although a wide variety of fault injection solutions exists today, several important problems must still be addressed to better cover the needs of a dependability-driven design flow. In particular, simulation-based fault injection (SBFI) should be adapted to implementation-level HDL models to take into account the architecture of diverse logic primitives, while keeping the injection procedures generic and minimally intrusive. Likewise, the granularity of FPGA-based fault injection (FFI) should be refined to enable accurate identification of weak points in FPGA-based designs. Another important challenge that dependability-driven processes face in practice is the reduction of SBFI and FFI experimental effort. The high complexity of modern designs raises the experimental effort beyond the available time budgets, even in simple dependability assessment scenarios, and it becomes prohibitive in the presence of alternative design configurations. Finally, dependability-driven processes lack instrumental support covering the semicustom design flow in all its variety of description languages, implementation technologies, and EDA tools. Existing fault injection tools only partially cover the individual stages of the design flow, usually being specific to a particular design representation level and implementation technology. This work addresses the aforementioned challenges by efficiently integrating dependability-driven processes into the design flow. First, it proposes new SBFI and FFI approaches that enable an accurate and detailed dependability assessment at different levels of the design flow. Second, it improves the performance of dependability-driven processes by defining new techniques for accelerating SBFI and FFI experiments. Third, it defines two DSE strategies that enable the optimal dependability-aware tuning of IP cores and EDA tools, while reducing the robustness evaluation effort as much as possible. Fourth, it proposes a new toolkit (DAVOS) that automates and seamlessly integrates the aforementioned dependability-driven processes into the semicustom design flow. Finally, it illustrates the usefulness and efficiency of these proposals through a case study consisting of three soft-core embedded processors implemented on a Xilinx 7-series SoC FPGA. Tuzov, I. (2020). Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159883
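
    As a toy illustration of SBFI-style robustness testing (not DAVOS itself), the sketch below injects single bit-flips into the state register of a small behavioural model and counts how many faults corrupt the observable output. The model, fault list, and classification are hypothetical, chosen only to show why fault masking makes granularity matter.

```python
import random

# Toy simulation-based fault injection (SBFI) sketch: flip one state bit per
# run and classify the outcome against a golden run. The model, fault list,
# and classification are illustrative only; they stand in for the
# implementation-level HDL models targeted in the thesis.

WIDTH = 8

def dut(inputs, fault=None):
    """A tiny 8-bit accumulator 'design under test'.

    `fault` is (cycle, bit): at that cycle the given bit of the accumulator
    register is flipped, emulating a Single Event Upset. Only the low nibble
    is observable at the output port, so some faults are masked.
    """
    acc = 0
    for cycle, x in enumerate(inputs):
        acc = (acc + x) & (2**WIDTH - 1)
        if fault is not None and fault[0] == cycle:
            acc ^= 1 << fault[1]          # inject the bit-flip
    return acc & 0x0F                     # observable output: low nibble only

random.seed(42)
stimuli = [random.randrange(2**WIDTH) for _ in range(64)]
golden = dut(stimuli)                     # fault-free reference run

# Exhaustive fault list: one bit-flip per (cycle, bit) pair. Upper-nibble
# flips never reach the low nibble here, so they are masked faults.
faults = [(c, b) for c in range(len(stimuli)) for b in range(WIDTH)]
failures = sum(dut(stimuli, fault=f) != golden for f in faults)

print(f"{failures}/{len(faults)} injected faults corrupted the output")
```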

    Quantitative electron microscopy for microstructural characterisation

    Development of materials for high-performance applications requires accurate and useful analysis tools. In parallel with advances in electron microscopy hardware, we require analysis approaches that better capture microstructural behaviour. Such improvements in characterisation capability permit informed alloy design. New approaches to the characterisation of metallic materials are presented, primarily using signals collected from electron microscopy experiments. Electron backscatter diffraction is regularly used to investigate crystallography in the scanning electron microscope, and is combined with energy-dispersive X-ray spectroscopy to simultaneously investigate chemistry. New algorithms and analysis pipelines are developed to permit accurate and routine microstructural evaluation, leveraging a variety of machine learning approaches. This thesis investigates the structure and behaviour of Co/Ni-base superalloys derived from V208C. Use of the presently developed techniques permits informed development of a new generation of advanced gas turbine engine materials.
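
    As one plausible flavour of such a machine-learning pipeline (a hedged sketch, not the thesis's algorithms), the example below clusters per-pixel energy-dispersive X-ray spectra to segment a two-phase map; the synthetic data and the choice of k-means are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hedged sketch of a microstructural analysis pipeline: cluster per-pixel
# EDS spectra to segment phases in a map. The synthetic two-phase data and
# the choice of k-means are illustrative assumptions.

rng = np.random.default_rng(7)
h, w, channels = 64, 64, 128              # map size and spectrum length

# Two synthetic phase spectra (e.g. matrix vs. precipitate) plus noise.
phase_a = np.exp(-0.5 * ((np.arange(channels) - 40) / 4.0) ** 2)
phase_b = np.exp(-0.5 * ((np.arange(channels) - 85) / 4.0) ** 2)
labels_true = (np.add.outer(np.arange(h), np.arange(w)) % 16 < 8)   # stripes
spectra = np.where(labels_true[..., None], phase_a, phase_b)
spectra = spectra + rng.normal(0, 0.05, size=(h, w, channels))

# Flatten to (pixels, channels) and cluster into two phases.
X = spectra.reshape(-1, channels)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
phase_map = km.labels_.reshape(h, w)

# Agreement with ground truth (cluster ids may be swapped, hence the max).
agree = (phase_map == labels_true).mean()
print(f"segmentation agreement: {max(agree, 1 - agree):.2%}")
```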

    Ultra-high-resolution optical imaging for silicon integrated-circuit inspection

    This thesis concerns the development of novel resolution-enhancing optical techniques for non-destructive sub-surface semiconductor integrated-circuit (IC) inspection, achieved by utilising solid immersion lens (SIL) technology, polarisation-dependent imaging, pupil-function engineering, and optical coherence tomography (OCT). A SIL-enhanced two-photon optical beam induced current (TOBIC) microscope was constructed for the acquisition of ultra-high-resolution two- and three-dimensional images of a silicon flip-chip using a 1.55μm modelocked Er:fibre laser. This technology provided diffraction-limited lateral and axial resolutions of 166nm and 100nm, respectively, an order-of-magnitude improvement over previous TOBIC imaging work. The ultra-high numerical aperture (NA) provided by SIL imaging in silicon (NA = 3.5) was used to show, for the first time, the presence of polarisation-dependent vectorial-field effects in an image. These effects were modelled using vector diffraction theory to confirm the increasing ellipticity of the focal-plane energy density distribution as the NA of the system approaches unity. An unprecedented resolution performance ranging from 240nm to ~100nm was obtained, depending on the state of polarisation used. The resolution-enhancing effects of pupil-function engineering were investigated and implemented in a nonlinear polarisation-dependent SIL-enhanced laser microscope to demonstrate a minimum resolution of 70nm in a silicon flip-chip. The performance of the annular apertures used in this work was modelled using vectorial diffraction theory to interpret the experimentally obtained images. The development of an ultra-high-resolution, high-dynamic-range OCT system is reported, utilising a broadband supercontinuum source and a balanced-detection scheme in a time-domain Michelson interferometer to achieve an axial resolution of 2.5μm (in air). The examination of silicon ICs demonstrated both unique substrate profiling and a novel inspection technology for circuit navigation and characterisation. In addition, the application of OCT to the investigation of artwork samples and contemporary banknotes is demonstrated for the purposes of art conservation and counterfeit prevention.
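
    For intuition on the quoted numbers, a back-of-the-envelope sketch: the Rayleigh lateral resolution is roughly 0.61λ/NA, and two-photon excitation narrows the effective point-spread function by about √2. These are standard scalar approximations, not the thesis's full vectorial model, so they only roughly bracket the measured 166nm figure.

```python
import math

# Back-of-the-envelope resolution estimates for SIL-enhanced two-photon
# imaging in silicon. Standard scalar approximations only; the thesis used
# full vector diffraction theory, so exact agreement is not expected.

wavelength_um = 1.55          # Er:fibre laser wavelength
NA = 3.5                      # effective NA with the silicon SIL

rayleigh = 0.61 * wavelength_um / NA          # one-photon lateral resolution
two_photon = rayleigh / math.sqrt(2)          # ~sqrt(2) narrowing of the
                                              # squared (two-photon) PSF

print(f"one-photon Rayleigh limit: {rayleigh * 1e3:.0f} nm")    # ~270 nm
print(f"two-photon estimate:       {two_photon * 1e3:.0f} nm")  # ~191 nm
```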

    Acoustic emission testing and acousto-ultrasonics for structural health monitoring

    Global trends in the construction of modern structures call for the integration of sensors together with data recording and analysis modules, so that structural integrity can be continuously monitored for safe-life, economic, and ecological reasons. This process of measuring and analysing data from a sensor network distributed over a structural system in order to quantify its condition is known as structural health monitoring (SHM). The research presented in this thesis is motivated by the need to improve the inspection capability and reliability of SHM systems based on ultrasonic guided waves, with a focus on the acoustic emission and acousto-ultrasonics techniques. The use of a guided wave-based approach is driven by the fact that these waves can propagate over relatively long distances and interact sensitively with, or are related to, different types of defect. The main emphasis of the thesis is on the development of methodologies based on signal analysis, together with a fundamental understanding of wave propagation, for the solution of problems such as damage detection, localisation, and identification. The behaviour of guided waves in both techniques is predicted through modelling in order to investigate the characteristics of the modes propagating through the evaluated structures and to support signal analysis. The validity of the developed model is extensively investigated by contrasting numerical simulations and experiments. Special attention is paid to the development of efficient SHM methodologies, which requires robust signal processing techniques for the correct interpretation of complex ultrasonic waves. Therefore, a variety of existing algorithms for signal processing and pattern recognition are evaluated and integrated into the different proposed methodologies. Additionally, effects such as temperature variability and operational conditions are experimentally studied in order to analyse their influence on the performance of the developed methodologies. Finally, the efficiency of these methodologies is experimentally evaluated on diverse isotropic and anisotropic composite structures.
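
    To ground the localisation problem mentioned above, here is a hedged sketch of classical acoustic emission source localisation from time-of-arrival differences at distributed sensors. The sensor layout, assumed group velocity, and the simple grid-search solver are illustrative assumptions, not the thesis's methodology.

```python
import numpy as np

# Hedged sketch of acoustic emission (AE) source localisation in a plate:
# given arrival-time differences at distributed sensors and an assumed group
# velocity, find the source by minimising time-of-arrival residuals on a
# grid. Sensor layout and wave speed are illustrative assumptions.

sensors = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # m
velocity = 5000.0                                   # assumed group velocity, m/s
source_true = np.array([0.63, 0.27])                # hidden AE source

# Synthetic arrival times (the unknown emission time cancels in differences).
dist = np.linalg.norm(sensors - source_true, axis=1)
arrivals = dist / velocity + 1e-3                   # + arbitrary emission time
dt_meas = arrivals - arrivals[0]                    # differences vs. sensor 0

# Grid search: pick the point whose predicted time differences best match.
xs = ys = np.linspace(0, 1, 201)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        dt_pred = (d - d[0]) / velocity
        err = np.sum((dt_pred - dt_meas) ** 2)
        if err < best_err:
            best, best_err = (x, y), err

print(f"estimated source: {best}, true source: {tuple(source_true)}")
```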

    Using the RISCI Genetic Screening Platform for Elucidating Apoptosis Signalling Network

    Considerable development in the field of nanotechnology is yielding an increasing number of novel applications of nanoparticles. The unique properties of nanoparticles, in particular their high aspect ratio (length : width ratio), could however pose potential risks to the user. A high-throughput genetic screening platform, RISCI (robotic single cDNA investigation), was previously established for the systematic evaluation of single gene activities. Here, RISCI was utilised to identify pro-apoptotic genes as well as genes involved in the positive and negative regulation of silica nanoparticle-induced cell death. This project describes the further development of the screening platform by harnessing its capability to screen a cDNA library comprising approximately 30,000 full-length, completely annotated, and sequenced human genes for novel regulators of apoptosis. It integrates an extensive skill set and is broadly organised into three major phases: setup, screen, and analysis. The integration of a pro-apoptosis treatment to screen for inhibitors and sensitizers is a novel aspect of the current experimental setup, along with the low-redundancy library. The extensive setup phase focused on technical aspects. The cDNA library, acquired as plasmid DNA, was transformed into a bacterial host for replication and subsequent DNA isolation. A new high-throughput process was developed, encompassing the production of competent bacteria and a heat-shock transformation protocol, which was subsequently transferred onto the robotic platform. In parallel, the software controlling the robots was redeveloped to allow the execution of user-defined protocols, while novel transfection protocols were adapted for automation. The screen identified 699 apoptosis inducers, 1,141 inhibitors, and 626 sensitizers. Bioinformatics analysis revealed that the inducers were highly enriched for cell death-associated terms, while the inhibitors were strongly associated with cancer profiles. Both inducers and sensitizers predominantly achieved their functional effect at the protein level, whereas the inhibitors acted mainly at the transcriptional level. Enriched metal-response genes also suggest that the silica nanoparticles caused their toxicity through the generation of reactive oxygen species. Intriguingly, the screen identified many noncoding sequences as being functionally capable of regulating apoptosis; these noncoding candidates can regulate the protein-coding counterparts identified in the screen. The most interesting part of the project outcome remains the previously unknown candidates implicated in apoptosis regulation for the first time. Dissemination of the consolidated candidate list would help accelerate the experimental validation of these candidates and aid other researchers in deriving novel hypotheses when the candidates are placed in their research context. [For supplementary files please contact author]
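
    As a hedged illustration of how such a screen's readout might be triaged (not the RISCI pipeline itself), the sketch below classifies genes as inducers, inhibitors, or sensitizers from viability z-scores measured with and without the pro-apoptosis treatment. The thresholds and synthetic data are invented for illustration.

```python
import numpy as np

# Hedged sketch of three-way hit calling for an apoptosis screen: each gene
# gets a viability z-score without treatment (basal) and with a pro-apoptosis
# treatment. The z-score thresholds and synthetic data are illustrative
# assumptions, not the RISCI platform's actual analysis.

rng = np.random.default_rng(3)
n_genes = 10_000
basal = rng.normal(0, 1, n_genes)          # z-score, untreated condition
treated = rng.normal(0, 1, n_genes)        # z-score, pro-apoptosis treatment

Z = 3.0                                    # illustrative hit threshold

inducers    = basal < -Z                   # kill cells even without treatment
inhibitors  = treated > Z                  # protect cells under treatment
sensitizers = (treated < -Z) & ~inducers   # enhance treatment-induced death

print(f"inducers:    {inducers.sum()}")
print(f"inhibitors:  {inhibitors.sum()}")
print(f"sensitizers: {sensitizers.sum()}")
```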