103 research outputs found

    Transmission Electron Microscopy Studies in Shape Memory Alloys

    Get PDF
    In NiTi, a reversible thermoelastic martensitic transformation can be induced by temperature or stress between a cubic (B2) austenite phase and a monoclinic (B19′) martensite phase. Ni-rich binary compositions are cubic at room temperature (requiring stress or cooling to transform to the monoclinic phase), while Ti-rich binary compositions are monoclinic at room temperature (requiring heating to transform to the cubic phase). The stress-induced transformation results in the superelastic effect, while the thermally induced transformation is associated with strain recovery that results in the shape memory effect. Ternary elemental additions such as Fe can additionally introduce an intermediate rhombohedral (R) phase between the cubic and monoclinic phases in the transformation. This work was initiated with the broad objective of connecting the macroscopic behavior of shape memory alloys with microstructural observations from transmission electron microscopy (TEM). Specifically, the goals were to examine (i) the effect of mechanical cycling and plastic deformation in superelastic NiTi; (ii) the effect of thermal cycling during loading in shape memory NiTi; (iii) the distribution of twins in martensitic NiTi-TiC composites; and (iv) the R-phase in NiTiFe. Both in situ and ex situ lift-out focused ion beam (FIB) and electropolishing techniques were employed to fabricate shape memory alloy samples for TEM characterization. The Ni-rich NiTi samples were fully austenitic in the undeformed state. The introduction of plastic deformation (8% and 14% in the samples investigated) resulted in the stabilization of martensite in the unloaded state. An interlaying morphology of the austenite and martensite was observed, and the martensite needles tended to orient themselves in preferred orientations. These observations were more pronounced in mechanically cycled samples. The dislocations observed in mechanically cycled samples appear to be shielded from the externally applied stress via mismatch accommodation, since they are not associated with unrecoverable strain after a load-unload cycle. On application of stress, the austenite transforms to martensite and is expected to accommodate the stress and strain mismatch through preferential transformation, variant selection, reorientation and coalescence. The stabilized martensite (i.e., martensite that exists in the unloaded state) is expected to accommodate the mismatch through variant reorientation and coalescence. On thermally cycling a martensitic NiTi sample under load through the phase transformation, significant variant coalescence, variant reorientation and preferred variant selection were observed; this was attributed to the internal stresses generated as a result of the thermal cycling. A martensitic NiTi-TiC composite was also characterized: the interface between the matrix and the inclusion was free of twins, while significant twinning was observed at a distance from the matrix-inclusion interface. Incorporating a cold stage, diffraction patterns from NiTiFe samples were obtained at temperatures as low as -160 °C. Overall, this work provided insight into deformation phenomena in shape memory materials that have implications for engineering applications (e.g., cyclic performance of actuators, engineering life of superelastic components, stiffer shape memory composites and low-hysteresis R-phase-based actuators). This work was supported in part by an NSF CAREER award (DMR 0239512).

    Big Data Analytics by Using Hadoop

    Get PDF
    Data is large and vast, with more data coming into the system every day. Summarization analytics are about grouping similar data together and then performing an operation such as calculating a statistic, building an index, or simply counting. Filtering is more about understanding a smaller piece of your data, such as all records generated by a particular user, or the ten most-used verbs in a corpus of text; in short, filtering lets you apply a microscope to your data, and it can also be considered a form of search. Hadoop allows us to modify the way data is loaded from disk in two major ways: configuring how contiguous chunks of input are generated from blocks in HDFS, and configuring how records appear in the map phase. The two classes used to do this are RecordReader and InputFormat, which plug into the Hadoop MapReduce framework in much the same way as mappers and reducers. This is the analytics side of Hadoop and MapReduce: computation is performed in parallel, automatically, behind a simple abstraction that spares developers complex synchronization and network programming. Unlike many other distributed data processing systems, Hadoop runs the user-provided processing logic on the machine where the data lives rather than dragging the data across the network, a huge win for performance. As Q&A sites such as Experts Exchange developed and their user bases grew from thousands to millions, storing, processing, and managing all the incoming data became increasingly challenging. There were several reasons for adopting Hadoop: the distributed file system provided redundant backups for the data stored on it at no extra cost; scalability was simplified through the ability to add cheap, commodity hardware when required; and Hadoop provided a flexible framework for running distributed computing algorithms with a relatively easy learning curve. Hadoop can be used to form core backend batch and near-real-time computing infrastructures, and it can also be used to store and archive massive datasets.
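As a concrete illustration of the filtering pattern described above, here is a minimal sketch of a Hadoop Streaming mapper in Python. Hadoop Streaming runs any executable as the map phase; the tab-separated record layout and the target user ID below are hypothetical.

```python
#!/usr/bin/env python3
# mapper.py -- filtering pattern: emit only the records belonging to one user.
# Run as a map-only Hadoop Streaming job, e.g. (paths illustrative):
#   hadoop jar hadoop-streaming.jar \
#       -input /logs -output /logs-filtered \
#       -mapper mapper.py -numReduceTasks 0
import sys

TARGET_USER = "user42"  # hypothetical user ID to filter on

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")  # assumes tab-separated records
    if fields and fields[0] == TARGET_USER:  # assumes the user ID is column 0
        print(line, end="")                  # pass the matching record through
```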

    Data Migration from RDBMS to Hadoop

    Get PDF
    Oracle, IBM, Microsoft and Teradata hold a large share of the world's data: run a query almost anywhere in the world, and you are likely reading data from a database owned by one of them. Moving large volumes of data from Oracle to DB2 or another system is a challenging task for a business. The advent of Hadoop and NoSQL technology represented a seismic shift that shook the RDBMS market and offered organizations an alternative. The database vendors moved quickly into Big Data to position themselves, and vice versa; indeed, each now has its own big data technology, such as Oracle NoSQL and MongoDB. There is a huge market for high-performance data migration tools that can copy data stored in RDBMS databases into Hadoop or NoSQL databases. Current data resides in RDBMS databases such as Oracle, SQL Server, MySQL and Teradata. We plan to migrate this RDBMS data to a big data platform that supports NoSQL databases and a variety of data from the existing systems; migrating petabytes of data takes enormous resources and time, and both may be constraints on the migration process.
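Apache Sqoop is the tool commonly associated with this kind of RDBMS-to-Hadoop transfer. As a language-level sketch of the same idea, the example below streams a table out of an RDBMS in chunks and writes Parquet files that can afterwards be loaded into HDFS; the connection string, table name, and paths are all hypothetical.

```python
# A minimal sketch of a chunked RDBMS-to-Hadoop migration, assuming a MySQL
# source reachable via SQLAlchemy; this is an illustration, not Sqoop itself.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:pass@dbhost/sales")  # hypothetical DSN

# Stream the table in chunks so a huge source never has to fit in memory.
for i, chunk in enumerate(pd.read_sql("SELECT * FROM orders", engine,
                                      chunksize=100_000)):
    # Each chunk becomes one Parquet file; load the files into HDFS afterwards,
    # e.g. with `hdfs dfs -put orders_part*.parquet /warehouse/orders/`.
    chunk.to_parquet(f"orders_part{i:05d}.parquet", index=False)
```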

    Bioanalytical Method Development and Validation for the Estimation of Tenofovir Disoproxil Fumarate and Lamivudine in Human Plasma by Using RP-HPLC

    Get PDF
    A simple reverse-phase high-performance liquid chromatographic method for the estimation of tenofovir disoproxil fumarate and lamivudine in human plasma samples has been developed and validated. The assay of the drugs was performed on a Phenomenex C18 column with UV detection at 259 nm. The mobile phase consisted of 0.05% heptane sulphonic acid and acetonitrile in the ratio 80:20, and a flow rate of 1 mL/min was maintained. Linearity was observed in the range of 200-1000 ng/mL for tenofovir (R² = 0.998) and 200-1000 ng/mL for lamivudine (R² = 0.998). Analytical parameters have been evaluated. Intra-day and inter-day precision, expressed as relative standard deviation, was found to be less than 2%. The method has been applied successfully to the estimation of tenofovir disoproxil fumarate and lamivudine in spiked human plasma samples.

    Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Get PDF
    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem and interconnect fabric of five leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present IMB results to study the performance of 11 MPI communication functions on these systems.
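IMB itself is a C benchmark suite; the sketch below only illustrates the ping-pong pattern such benchmarks use to measure point-to-point latency and bandwidth, written here with mpi4py. The message size and repetition count are arbitrary choices.

```python
# A minimal mpi4py sketch of the ping-pong measurement pattern used by
# benchmarks like IMB (illustrative only, not the IMB source).
# Run with: mpiexec -n 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nbytes, reps = 1 << 20, 100          # 1 MiB messages, 100 round trips
buf = np.zeros(nbytes, dtype="u1")

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1); comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0); comm.Send(buf, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    rt = (t1 - t0) / reps            # average round-trip time
    print(f"avg round trip: {rt*1e6:.1f} us, "
          f"bandwidth: {2*nbytes/rt/1e6:.1f} MB/s")
```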

    Seeing the Forest in Family Violence Research: Moving to a Family-Centered Approach

    Get PDF
    Victims of family violence are sorted into fragmented systems that fail to address the family as an integrated unit. Each system provides specialized care to each type of victim (child, older adult, adult, animal) and centers on the expertise of the medical and service providers involved. Similarly, researchers commonly study abuse from the frame of the victim, rather than looking at a broader frame: the family. We propose a 5-step research paradigm to holistically address the response, recognition, and prevention of family violence. By developing an integrated research model to address family violence, and by using that model to support integrated systems of care, we propose a fundamental paradigm shift to improve the lives of families living with and suffering from violence.

    ANN prediction of corrosion behaviour of uncoated and biopolymer-coated cp-Titanium substrates

    Get PDF
    The present study focuses on biopolymer surface modification of cp-Titanium with chitosan, gelatin, and sodium alginate. The biopolymers were spin-coated onto a cp-Titanium substrate and then subjected to electrochemical impedance spectroscopy (EIS) characterization. An artificial neural network (ANN) was developed to predict the open circuit potential (OCP) values and Nyquist plots for bare and biopolymer-coated cp-Titanium substrates. The experimental data obtained were used for ANN training. Two input parameters, i.e., substrate condition (coated or uncoated) and time period, were considered to predict the OCP values. The backpropagation Levenberg-Marquardt training algorithm was used to train the ANN and fit the model. For the Nyquist plot, the network was trained to predict the imaginary impedance from the real impedance as a function of immersion period using the backpropagation Bayesian algorithm. The biopolymer-coated cp-Titanium substrates show enhanced corrosion resistance compared to the uncoated substrates. The ANN model agrees closely with the experimental results in both cases, indicating that the developed model is accurate and efficiently predicts the OCP values and Nyquist plots.
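A minimal sketch of this kind of model is shown below: a small feed-forward network predicting OCP from substrate condition and immersion time. Note that scikit-learn offers no Levenberg-Marquardt optimizer (the paper's choice, as in MATLAB's trainlm), so L-BFGS stands in here, and the training data is synthetic rather than the study's EIS measurements.

```python
# A hedged sketch of an ANN predicting OCP from (substrate condition, time).
# All data below is synthetic; L-BFGS replaces Levenberg-Marquardt training.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
coated = rng.integers(0, 2, 200)        # 0 = bare, 1 = biopolymer-coated
time_h = rng.uniform(0, 168, 200)       # immersion time in hours (assumed unit)
X = np.column_stack([coated, time_h])
# synthetic OCP trend: coated substrates drift toward nobler potentials
y = -0.35 + 0.15 * coated + 0.0005 * time_h + rng.normal(0, 0.01, 200)

model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=5000, random_state=0).fit(X, y)
print("predicted OCP (V) for a coated sample at 24 h:",
      model.predict([[1, 24.0]])[0])
```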

    Surface functionalization of chitosan as a coating material for orthopaedic applications: A comprehensive review

    Get PDF
    Metallic implants have dominated the biomedical implant industry for load-bearing applications for the past century, while polymeric implants have shown great promise for tissue engineering applications. The surface properties of such implants are critical, as the interaction between implant surfaces and body tissues may lead to unfavourable reactions. Desired implant properties are biocompatibility, corrosion resistance, and antibacterial activity. A polymer coating is an efficient and economical way to produce such surfaces. A great deal of research has been carried out on chitosan (CS)-modified metallic and polymer scaffolds in the last decade. Different methods, such as electrophoretic deposition, sol-gel methods, dip coating, spin coating, and electrospinning, have been utilized to produce CS coatings. However, a systematic review of chitosan coatings on scaffolds focussing on the most widely employed techniques is lacking. This review surveys the literature concerning the current status of orthopaedic applications of CS as a coating material. The various preparation methods for coatings, and the role of surface functionalities in determining the efficiency of coatings, are discussed. The effect of nanoparticle additions on polymeric interfaces and on regulating the properties of surface coatings is also examined in detail.

    Efficient Solution of Sparse Linear Systems from Engineering Applications on Vector Hardware

    No full text
    Block-based Linear Iterative Solver (BLIS) is a scalable software library for solving large sparse linear systems, especially those arising from engineering applications. BLIS was developed during this work specifically to address the performance issues of Krylov iterative methods on vector systems. After several failed attempts to port general public-domain linear solvers onto the NEC SX-8, it became clear that the developers of most solver libraries do not focus on performance issues related to vector systems. The same holds for other software projects, owing to the fact that clusters of scalar processors dominated high-performance computing installations over the past few decades. With the advent of vector processing units on most commodity scalar processors, vectorization is again becoming an important software design consideration. To understand the strengths and weaknesses of various hardware architectures, benchmarking studies were carried out in this work. These studies show that vector systems are better balanced than most scalar systems with respect to many of the aspects that determine the sustained performance of real-world applications. The two main performance problems with the public-domain solvers are addressed in this work. The first problem, short vector length, is solved by introducing a vector-specific sparse storage format. The second and more important problem, high memory latency, is addressed by blocking the sparse matrix. Most engineering problems have multiple unknowns (degrees of freedom) per mesh point. Typically, public-domain solvers do not block the unknowns at each mesh point; instead, they assemble and solve each unknown separately, which requires a huge amount of memory traffic. The approach adopted in this work reduces the load on the memory subsystem by blocking all the unknowns at each mesh point and then solving the resulting blocked global system of equations. This is a natural approach for engineering simulations and yields performance improvements on scalar systems due to cache blocking and on vector systems due to reduced memory traffic. Preconditioning is one of the areas in linear solvers that is still actively researched. A preconditioned system of equations has better spectral properties, and hence the solution methods converge faster than with the original system. The key consideration is to keep the time needed for the additional work of preparing the preconditioner as low as possible while improving the condition number of the resulting system as much as possible. Block-based splitting methods and scaling are more effective preconditioners than their point-based counterparts while remaining just as efficient. The block-based incomplete factorization implemented in BLIS is also more efficient than the corresponding point-based method. Robust, scalable preconditioners such as the algebraic multigrid method are also available in BLIS. Performance measurements of three application codes running on the NEC SX-8 and using BLIS to solve their linear systems are presented. Lastly, the memory bandwidth limitations of newer hardware architectures such as multi-core systems and the STI CELL Broadband Engine are studied. The efficiency and scaling of BLIS are tested on multi-core systems.
Also, the performance of the blocked sparse matrix-vector product kernel is studied on the STI CELL processor.
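BLIS itself is not a Python library, but the blocking idea it relies on can be illustrated with SciPy's block sparse row (BSR) format, which stores a dense dof-by-dof block per pair of coupled mesh nodes so the matrix-vector product streams contiguous blocks instead of isolated scalars. The matrix size and density below are arbitrary.

```python
# A minimal sketch of blocked sparse storage, as in BLIS, using SciPy's BSR
# format with 3 degrees of freedom per mesh node (illustrative only).
import numpy as np
import scipy.sparse as sp

n_nodes, dof = 2000, 3                        # e.g. 3 displacement DOFs per node
n = n_nodes * dof
A_point = sp.random(n, n, density=5e-4, format="csr", random_state=0)
A_point += sp.eye(n, format="csr")            # keep a nonzero diagonal

A_block = A_point.tobsr(blocksize=(dof, dof)) # group the DOFs into 3x3 blocks
x = np.ones(n)

# Same product, but the BSR kernel walks the matrix block by block.
assert np.allclose(A_point @ x, A_block @ x)
print("CSR stored values:", A_point.nnz,
      "| BSR stored blocks:", A_block.nnz // (dof * dof))
```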