Ti-6Al-4V β Phase Selective Dissolution: In Vitro Mechanism and Prediction
Retrieval studies document Ti-6Al-4V β phase dissolution within total hip replacement systems. A gap persists in our mechanistic understanding, and existing standards fail to reproduce this damage. This thesis aims to (1) elucidate the Ti-6Al-4V selective dissolution mechanism as a function of solution chemistry, electrode potential, and temperature; (2) investigate the effects of adverse electrochemical conditions on additively manufactured (AM) titanium alloys; and (3) apply machine learning to predict the Ti-6Al-4V dissolution state. We hypothesized that (1) cathodic activation and inflammatory species (H2O2) would degrade the Ti-6Al-4V oxide, promoting dissolution; (2) AM Ti-6Al-4V selective dissolution would occur; and (3) near-field electrochemical impedance spectra (nEIS) would distinguish dissolved from polished Ti-6Al-4V, allowing prediction by a deep neural network. First, we show a combinatorial effect of cathodic activation and inflammatory species, degrading the oxide film's polarization resistance (Rp) by a factor of 10^5 Ω·cm² (p = 0.000) and inducing selective dissolution. Next, we establish a potential range (-0.3 V to -1 V) in which inflammatory species, cathodic activation, and increasing solution temperature (24 °C to 55 °C) synergistically affect the oxide film. Then, we evaluate the effect of solution temperature on the dissolution rate, documenting a logarithmic dependence. In our second aim, we show decreased AM Ti-6Al-4V Rp compared with AM Ti-29Nb-21Zr in H2O2; AM Ti-6Al-4V oxide degradation preceded pit nucleation in the β phase. Finally, in our third aim, we identified gaps in the application of artificial intelligence to metallic biomaterial corrosion. With nEIS spectra as input, a deep neural network predicted the surface dissolution state with 96% accuracy. In total, these results support the inclusion of inflammatory species and cathodic activation in pre-clinical testing of titanium devices and biomaterials.
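The abstract does not specify the network architecture, so the following is only a minimal Python sketch of the kind of binary classifier it describes: a small fully connected network that takes flattened nEIS spectra (assumed here to be impedance magnitude and phase sampled at n_freq frequencies, a placeholder dimension) and outputs a dissolved-versus-polished logit. All layer sizes are assumptions, not the thesis's model.

```python
import torch
import torch.nn as nn

# n_freq and every layer size below are placeholders chosen only to make the sketch runnable.
n_freq = 30                       # hypothetical number of measured frequencies per spectrum
model = nn.Sequential(
    nn.Linear(2 * n_freq, 64),    # input: log|Z| and phase concatenated per spectrum
    nn.ReLU(),
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 1),             # single logit: dissolved (1) vs. polished (0)
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(spectra, labels):
    """spectra: (batch, 2*n_freq) float tensor; labels: (batch, 1) float tensor of 0/1."""
    optimizer.zero_grad()
    loss = loss_fn(model(spectra), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# toy usage with random numbers standing in for measured nEIS spectra
x = torch.randn(16, 2 * n_freq)
y = torch.randint(0, 2, (16, 1)).float()
print(train_step(x, y))
```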
Bias in Deep Learning and Applications to Face Analysis
Deep learning has fostered progress in the field of face analysis, resulting in the integration of these models in multiple aspects of society. Even though the majority of research has focused on optimizing standard evaluation metrics, recent work has exposed the bias of such algorithms as well as the dangers of their unaccountable utilization.

In this thesis, we explore the bias of deep learning models in the discriminative and the generative setting. We begin by investigating the bias of face analysis models with regard to different demographics. To this end, we collect KANFace, a large-scale video and image dataset of faces captured "in-the-wild". The rich set of annotations allows us to expose the demographic bias of deep learning models, which we mitigate by utilizing adversarial learning to debias the deep representations. Furthermore, we explore neural augmentation as a strategy towards training fair classifiers. We propose a style-based multi-attribute transfer framework that is able to synthesize photo-realistic faces of underrepresented demographics. This is achieved by introducing a multi-attribute extension to Adaptive Instance Normalisation that captures the multiplicative interactions between the representations of different attributes. Focusing on bias in gender recognition, we showcase the efficacy of the framework in training classifiers that are fairer than those obtained with generative and fairness-aware methods.

In the second part, we focus on bias in deep generative models. In particular, we start by studying the generalization of generative models on images of unseen attribute combinations. To this end, we extend the conditional Variational Autoencoder by introducing a multilinear conditioning framework. The proposed method is able to synthesize unseen attribute combinations by modeling the multiplicative interactions between the attributes. Lastly, in order to control protected attributes, we investigate controlled image generation without training on a labelled dataset. We leverage pre-trained Generative Adversarial Networks that are trained in an unsupervised fashion and exploit the clustering that occurs in the representation space of intermediate layers of the generator. We show that these clusters capture semantic attribute information, and we condition image synthesis on the cluster assignment using Implicit Maximum Likelihood Estimation.
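The abstract does not give the exact formulation of the multi-attribute Adaptive Instance Normalisation extension; as an illustration only, the sketch below shows one way such a layer could be shaped in PyTorch, with per-attribute embeddings combined multiplicatively before producing the AdaIN scale and shift. All dimensions and the combination rule are assumptions, not the author's design.

```python
import torch
import torch.nn as nn

class MultiAttributeAdaIN(nn.Module):
    """Sketch: AdaIN whose scale/shift come from a multiplicative combination
    of per-attribute embeddings (hypothetical shapes and combination rule)."""
    def __init__(self, num_channels, attr_dims, embed_dim=64):
        super().__init__()
        # one embedding table per categorical attribute (e.g. gender, age bin)
        self.embeds = nn.ModuleList([nn.Embedding(n, embed_dim) for n in attr_dims])
        self.to_gamma = nn.Linear(embed_dim, num_channels)
        self.to_beta = nn.Linear(embed_dim, num_channels)
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)

    def forward(self, x, attrs):
        # attrs: LongTensor of shape (batch, num_attributes)
        z = torch.ones(x.size(0), self.embeds[0].embedding_dim, device=x.device)
        for i, emb in enumerate(self.embeds):
            z = z * emb(attrs[:, i])          # multiplicative interaction of attribute codes
        gamma = self.to_gamma(z).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(z).unsqueeze(-1).unsqueeze(-1)
        return (1 + gamma) * self.norm(x) + beta

# toy usage: two attributes with 2 and 5 categories modulating a 128-channel feature map
layer = MultiAttributeAdaIN(128, attr_dims=[2, 5])
feat = torch.randn(4, 128, 32, 32)
attrs = torch.stack([torch.randint(0, 2, (4,)), torch.randint(0, 5, (4,))], dim=1)
out = layer(feat, attrs)                      # shape (4, 128, 32, 32)
```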
Automated Testing and Debugging for Big Data Analytics
The prevalence of big data analytics in almost every large-scale software system has generated a substantial push to build data-intensive scalable computing (DISC) frameworks such as Google MapReduce and Apache Spark that can fully harness the power of existing data centers. However, frameworks once used by domain experts are now being leveraged by data scientists, business analysts, and researchers. This shift in user demographics calls for immediate advancements in the development, debugging, and testing practices of big data applications, which are falling behind compared to DISC framework design and implementation. In practice, big data applications often fail because users are unable to test all behaviors emerging from interleaving dataflow operators, user-defined functions, and the framework's code. "Testing based on a random sample" rarely guarantees reliability, and "trial and error" and "print" debugging methods are expensive and time-consuming. Thus, the current practice of developing a big data application must be improved, and the tools built to enhance the developer's productivity must adapt to the distinct characteristics of data-intensive scalable computing.

By synthesizing ideas from software engineering and database systems, our hypothesis is that we can design effective and scalable testing and debugging algorithms for big data analytics without compromising the performance and efficiency of the underlying DISC framework. To design such techniques, we investigate how we can build interactive and responsive debugging primitives that significantly reduce debugging time yet do not impose much performance overhead on big data applications. Furthermore, we investigate how we can leverage data provenance techniques from databases and fault-isolation algorithms from software engineering to pinpoint the minimal subset of failure-inducing inputs efficiently. To improve the reliability of big data analytics, we investigate how we can abstract the semantics of dataflow operators and use them in tandem with the semantics of user-defined functions to generate a minimum set of synthetic test inputs capable of revealing more defects than the entire input dataset.

To examine the first hypothesis, we introduce interactive, real-time debugging primitives for big data analytics through innovative and scalable debugging features such as simulated breakpoints, dynamic watchpoints, and crash culprit identification. Second, we design a new automated fault localization approach that combines insights from both the software engineering and database literature to bring delta debugging closer to reality for big data applications, by leveraging data provenance and by constructing systems optimizations for debugging provenance queries. Lastly, we devise a new symbolic-execution-based white-box testing algorithm for big data applications that abstracts dataflow operators using logical specifications instead of modeling their implementations, and combines them with the semantics of any arbitrary user-defined function. We instantiate the idea of an interactive debugging algorithm as BigDebug, the idea of an automated debugging algorithm as BigSift, and the idea of symbolic-execution-based testing as BigTest. Our investigation shows that the interactive debugging primitives can scale to terabytes: our record-level tracing incurs less than 25% overhead on average and provides up to 100% time saving compared to the baseline replay debugger.
Second, we observe that by combining data provenance with delta debugging, we can identify the minimum faulty input in just under 30% of the original job execution time. Lastly, we verify that by abstracting dataflow operators using logical specifications, we can efficiently generate the most concise test data suitable for local testing while revealing twice as many faults as prior approaches. Our investigations collectively demonstrate that developer productivity can be significantly improved through effective and scalable testing and debugging techniques for big data analytics, without impacting the DISC framework's performance. This dissertation affirms the feasibility of automated debugging and testing techniques for big data analytics, techniques that were previously considered infeasible for large-scale data processing.
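BigSift's actual implementation combines delta debugging with data provenance inside a DISC framework; as a point of reference only, the sketch below shows the textbook ddmin-style delta debugging loop that such fault isolation builds on, applied to a hypothetical failing predicate over input records.

```python
def ddmin(records, fails):
    """Textbook delta-debugging (ddmin) sketch: shrink `records` to a small subset
    for which the predicate `fails` still returns True. This is the classic
    algorithm the approach builds on, not BigSift's actual code."""
    n = 2
    while len(records) >= 2:
        chunk = max(1, len(records) // n)
        subsets = [records[i:i + chunk] for i in range(0, len(records), chunk)]
        reduced = False
        for i, subset in enumerate(subsets):
            complement = [r for j, s in enumerate(subsets) if j != i for r in s]
            if fails(subset):                              # failure reproduced by one chunk
                records, n, reduced = subset, 2, True
                break
            if len(subsets) > 2 and fails(complement):     # or by its complement
                records, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(records):
                break
            n = min(len(records), n * 2)                   # refine granularity and retry
    return records

# toy usage: the "job" fails whenever record 13 is present in the input
faulty = lambda batch: 13 in batch
print(ddmin(list(range(100)), faulty))                     # -> [13]
```

In the actual approach, data provenance first narrows the candidate records to the lineage of the faulty output before the delta-debugging loop runs, which is what makes isolation tractable at scale.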
Collection of abstracts of the 24th European Workshop on Computational Geometry
The 24th European Workshop on Computational Geometry (EuroCG'08) was held at INRIA Nancy - Grand Est & LORIA on March 18-20, 2008. The present collection of abstracts contains the 63 scientific contributions as well as three invited talks presented at the workshop.
Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits
The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very-large-area ICs will necessitate the incorporation of fault-tolerance features which are routinely employed in current high-density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, in addition to the prospect of single-chip systems, will mandate the use of fault tolerance for improved reliability.

A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault tolerance, but is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered, and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. Consideration of the problem of testing the test circuitry led to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and so that it is tested as it performs its function.

Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that the faulty units are discarded. This raises the question: what is the optimum size of a unit? A mathematical model linking yield and reliability is therefore developed to answer this question and also to study the effects of parameters such as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by applying the model to a typical example. Another important result concerns the effect of periodic testing on reliability: it is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
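The abstract does not reproduce the yield/reliability model itself; for orientation only, a standard starting point for this kind of redundancy analysis is the negative-binomial yield model together with an m-out-of-n redundancy calculation, sketched below. The thesis's own model additionally accounts for the test and reconfiguration circuitry and for periodic testing, which this sketch omits.

```latex
% Standard negative-binomial yield model (illustrative, not the thesis's derivation):
% A_u is the unit area, D the defect density, and \alpha the defect-clustering parameter.
Y(A_u) = \left(1 + \frac{A_u D}{\alpha}\right)^{-\alpha}

% Chip yield when at least m of the n identical units (spares included) must be fault-free:
Y_{\mathrm{chip}} = \sum_{k=m}^{n} \binom{n}{k}\, Y(A_u)^{k} \bigl(1 - Y(A_u)\bigr)^{n-k}
```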
Molecular characterization of phenothiazines in experimental cancer therapy - New tricks of an old drug revealed
Cancer is characterized by uncontrolled malignant proliferation of cells that eventually interfere with tissue/organ functions. Traditionally, cancer is treated with chemo- and/or radiotherapy when surgery is not an option. Unfortunately, the efficacy of conventional anti-cancer chemotherapy is severely limited by therapy resistance. A conceptually appealing strategy to combat tumor resistance is to use chemosensitizers, compounds that selectively sensitize tumor cells to chemotherapy without affecting normal tissue. Phenothiazines belong to a class of “old” drugs that are used clinically to treat psychiatric disorders. In this thesis, we characterized the chemosensitizing potential of phenothiazines in combination with DNA damaging chemotherapeutic drugs. Our primary aims are to elucidate the molecular mechanisms by which phenothiazines impart sensitization and to delineate molecular determinants that predict responsiveness of tumors to phenothiazine-based intervention.
In Paper I, we confirmed that the phenothiazine compound trifluoperazine (TFP) was a potent sensitizer of bleomycin in human non-small cell lung carcinoma (NSCLC) cells; the likely mechanism being inhibition of repair of DNA single strand breaks (SSB) as well as DNA double strand breaks (DSB).
In Paper II, we found that TFP delayed the resolution of bleomycin- or cisplatin-induced γH2AX, a marker of unrepaired DNA DSB, prolonged the cell cycle arrest and increased oxidative stress in NSCLC cells. TFP co-treated cells eventually resumed cycling without fully repairing the DNA damage, which led to mitotic defects, secondary checkpoint arrest, exacerbated oxidative stress, organelle dysfunction, caspase activation and ultimately apoptosis.
In Paper III, we uncovered a possible link between phenothiazines and chromatin remodeling through in silico gene expression analysis. We found that TFP and structurally related phenothiazines significantly enhanced the activity of DNA-PK/ATM in tumor but not normal fibroblasts in response to DNA DSB-inducing agents, resulting in increased selective phosphorylation of a subset of ATM substrates with chromatin regulatory functions. Notably, this represents an adaptive response that could be targeted by DNA-PK/ATM inhibitors to further enhance TFP-mediated chemosensitization in NSCLC cells. Moreover, we found that wild-type p53 is a potential predictor of unresponsiveness to phenothiazine-based chemosensitization. We further demonstrated that TFP preferentially increased the cytotoxicity of direct-acting DNA damaging agents, but not indirect-acting DNA damaging or non-DNA damaging agents, in p53-deficient tumor cells (NSCLC, breast cancer).
In Paper IV, we compared the gene expression profile of NSCLC residual clones that survived cisplatin treatment with that of counterparts that survived cisplatin/TFP co-treatment. We found that survival after cisplatin was associated with enrichment of pathways involved in DNA metabolism/repair, cell cycle and RNA post-transcriptional modification. Pathway analysis showed that several DNA repair genes were concurrently up-regulated in residual clones that survived cisplatin treatment, but not in residual clones that survived cisplatin/TFP co-treatment. In summary, our data showed for the first time that inhibition of DNA DSB repair by TFP is related to alterations in DNA-PK/ATM signaling, which led to increased apoptosis in the short term and gene expression changes as well as loss of clonogenicity in the long term. Further, our identification of molecular contexts that predict responsiveness to phenothiazines will aid in the design of future clinical trials.
Harnessing Simulated Data with Graphs
Physically accurate simulations allow for unlimited exploration of arbitrarily crafted environments. From a scientific perspective, digital representations of the real world are useful because they make it easy to validate ideas. Virtual sandboxes allow observations to be collected at will, without intricate measurement setups or the need to wait on the manufacturing, shipping, and assembly of physical resources. Simulation techniques can also be used over and over again to test a problem without expending costly materials or producing any waste.
Remarkably, this freedom to both experiment and generate data becomes even more powerful when considering the rising adoption of data-driven techniques across engineering disciplines. These are systems that aggregate over available samples to model behavior, and thus are better informed when exposed to more data. Naturally, the ability to synthesize limitless data promises to make approaches that benefit from datasets all the more robust and desirable.
However, the ability to readily and endlessly produce synthetic examples also introduces several new challenges. Data must be collected in an adaptive format that can capture the complete diversity of states achievable in arbitrary simulated configurations while also remaining amenable to downstream applications. The quantity and variety of observations must also straddle a range that prevents overfitting yet is descriptive enough to produce a robust approach. Pipelines that naively measure virtual scenarios can easily be overwhelmed by trying to sample an infinite set of available configurations. Variations observed across multiple dimensions can quickly lead to a daunting expansion of states, all of which must be processed and solved. These and several other concerns must first be addressed in order to safely leverage the potential of boundless simulated data.
In response to these challenges, this thesis proposes to use graphs to impose structure on digitally captured data and curb the growth of variables. The paradigm of pairing data with graphs introduced in this dissertation serves to enforce consistency, localize operators, and, crucially, factor out any combinatorial explosion of states. Results demonstrate the effectiveness of this methodology in three distinct areas, each individually offering unique challenges and practical constraints, and together showcasing the generality of the approach. Namely, studies presenting state-of-the-art contributions in design for additive manufacturing, side-channel security threats, and large-scale physics-based contact simulations are collectively achieved by harnessing simulated datasets with graph algorithms.
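As a purely illustrative sketch of the "pairing data with graphs" idea, and not the pipelines used in the thesis, the snippet below turns a single simulation snapshot into a proximity graph whose nodes are simulated entities and whose edges connect entities within an assumed interaction radius; downstream learners then operate on bounded local neighborhoods instead of the full, combinatorially growing state.

```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def build_proximity_graph(positions, radius):
    """Illustrative sketch: encode one simulation snapshot as a graph.
    Nodes carry entity positions; edges link entities within `radius`,
    so operators stay local regardless of how many global states exist."""
    tree = cKDTree(positions)
    g = nx.Graph()
    g.add_nodes_from((i, {"pos": positions[i]}) for i in range(len(positions)))
    for i, j in tree.query_pairs(r=radius):
        g.add_edge(i, j, dist=float(np.linalg.norm(positions[i] - positions[j])))
    return g

# toy usage: 200 random positions standing in for a hypothetical contact-simulation snapshot
pts = np.random.rand(200, 3)
graph = build_proximity_graph(pts, radius=0.15)
print(graph.number_of_nodes(), graph.number_of_edges())
```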