1,797 research outputs found

    Automated computation of materials properties

    Materials informatics offers a promising pathway towards rational materials design, replacing the current trial-and-error approach and accelerating the development of new functional materials. Through the use of sophisticated data analysis techniques, underlying property trends can be identified, facilitating the formulation of new design rules. Such methods require large sets of consistently generated, programmatically accessible materials data. Computational materials design frameworks using standardized parameter sets are the ideal tools for producing such data. This work reviews the state-of-the-art in computational materials design, with a focus on these automated ab-initio frameworks. Features such as structural prototyping and automated error correction that enable rapid generation of large datasets are discussed, and the way in which integrated workflows can simplify the calculation of complex properties, such as thermal conductivity and mechanical stability, is demonstrated. The organization of large datasets composed of ab-initio calculations, and the tools that render them programmatically accessible for use in statistical learning applications, are also described. Finally, recent advances in leveraging existing data to predict novel functional materials, such as entropy stabilized ceramics, bulk metallic glasses, thermoelectrics, superalloys, and magnets, are surveyed.

    Comment: 25 pages, 7 figures, chapter in a book
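
    As a rough illustration of the kind of programmatic access and statistical learning the review describes, the sketch below pulls computed records from a REST-style endpoint and fits a simple property trend. The endpoint URL, query parameters, and field names are hypothetical placeholders, not the API of any particular framework.

```python
# Minimal sketch: fetch consistently generated ab-initio data from a
# programmatic interface and fit a property trend. The endpoint and the
# response schema below are hypothetical placeholders.
import requests
from sklearn.linear_model import LinearRegression

resp = requests.get(
    "https://example.org/api/materials",          # hypothetical endpoint
    params={"properties": "band_gap,volume_per_atom", "format": "json"},
    timeout=30,
)
records = resp.json()["results"]                  # assumed response schema

# Identify a simple trend: band gap versus volume per atom.
X = [[r["volume_per_atom"]] for r in records]
y = [r["band_gap"] for r in records]
model = LinearRegression().fit(X, y)
print("fitted slope:", model.coef_[0])
```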

    Ultrascan solution modeler: integrated hydrodynamic parameter and small angle scattering computation and fitting tools

    This is a preprint of a paper in the proceedings of the XSEDE12 conference, held July 16-19, 2012 in Chicago, IL. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

    UltraScan Solution Modeler (US-SOMO) processes atomic and lower-resolution bead model representations of biological and other macromolecules to compute various hydrodynamic parameters, such as the sedimentation and diffusion coefficients, relaxation times and intrinsic viscosity, and small angle scattering curves, that contribute to our understanding of molecular structure in solution. Knowledge of biological macromolecules' structure aids researchers in understanding their function, as a path to disease prevention and therapeutics for conditions such as cancer, thrombosis, Alzheimer's disease, and others. US-SOMO provides a convergence of experimental, computational, and modeling techniques, in which detailed molecular structure and properties are determined from data obtained through a range of experimental techniques that, by themselves, give incomplete information. Our goal in this work is to develop the infrastructure and user interfaces that will enable a wide range of scientists to carry out complicated experimental data analysis techniques on XSEDE. Our user community predominantly consists of biophysics and structural biology researchers; a recent search on PubMed reports 9,205 papers in the past decade referencing the techniques we support. We believe our software will provide these researchers a convenient and unique framework to refine structures, thus advancing their research.

    The computed hydrodynamic parameters and scattering curves are screened against experimental data, effectively pruning potential structures into equivalence classes. Experimental methods may include analytical ultracentrifugation, dynamic light scattering, small angle X-ray and neutron scattering, NMR, fluorescence spectroscopy, and others. One source of macromolecular models is X-ray crystallography; however, the conformation in solution may not match that observed in the crystal form. Using computational techniques, an initial fixed model can be expanded into a search space using high temperature molecular dynamics approaches or stochastic methods such as Brownian dynamics. The number of structures produced can vary greatly, ranging from hundreds to tens of thousands or more. This introduces a number of cyberinfrastructure challenges. Computing hydrodynamic parameters and small angle scattering curves can be computationally intensive for each structure, so cluster compute resources are essential for timely results. Input and output data sizes can vary greatly, from less than 1 MB to 2 GB or more. Although the parallelization is trivial, along with the data size variability there is a large range of compute sizes, from one to potentially thousands of cores, with compute times of minutes to hours.

    In addition to the distributed computing infrastructure challenges, an important concern was how to allow a user to conveniently submit, monitor, and retrieve results from within the C++/Qt GUI application while maintaining a method for authentication, approval, and registered publication usage throttling. Middleware supporting these design goals has been integrated into the application with assistance from the Open Gateway Computing Environments (OGCE) collaboration team. The approach was tested on various XSEDE clusters and local compute resources. This paper reviews current US-SOMO functionality and implementation with a focus on the newly deployed cluster integration.

    This work was supported by NIH grant K25GM090154 to EB, NSF grant OCI-1032742 to MP, NSF grant TG-MCB070040N to BD, and NIH grant RR-022200 to B
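
    Because each candidate structure is screened independently, the fan-out described above maps naturally onto a worker pool. The sketch below shows the pattern only; compute_hydrodynamics() and the tolerance check are hypothetical stand-ins for the US-SOMO kernels and the experimental comparison.

```python
# Sketch of the trivially parallel screening loop. The physics routine is
# a placeholder stub; in US-SOMO it would compute sedimentation and
# diffusion coefficients, intrinsic viscosity, and scattering curves.
from concurrent.futures import ProcessPoolExecutor

def compute_hydrodynamics(structure_file: str) -> dict:
    # Placeholder result; real values come from the hydrodynamic kernels.
    return {"structure": structure_file, "sedimentation_coeff": 4.1}

def within_tolerance(result: dict, measured: float, tol: float = 0.05) -> bool:
    # Keep structures whose computed value matches the experiment to 5%.
    return abs(result["sedimentation_coeff"] - measured) <= measured * tol

def main() -> None:
    structures = [f"model_{i:05d}.pdb" for i in range(10_000)]
    with ProcessPoolExecutor() as pool:
        results = pool.map(compute_hydrodynamics, structures, chunksize=64)
        # Prune candidates that disagree with the measured value.
        survivors = [r for r in results if within_tolerance(r, measured=4.3)]
    print(len(survivors), "structures remain after screening")

if __name__ == "__main__":
    main()
```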

    Measurement of electric fields in the ionosphere, volume 2 Final report, Aug. 1966 - Sep. 1967

    Electric field meter, using electron beam deflection techniques, for ionospheric measurement

    Implementation and Web Mounting of the WebOMiner_S Recommendation System

    The ability to quickly extract information from the large amount of heterogeneous data available on the web from various Business to Consumer (B2C) or E-commerce stores selling similar products (such as laptops) for comparative querying and knowledge discovery remains a challenge, because different web sites structure their data differently and web data are unstructured. For example: find the best and cheapest deal for a Dell laptop, comparing BestBuy.ca and Amazon.com, based on the following specification: Model: Inspiron 15 series, RAM: 16 GB, processor: i5, HDD: 1 TB. The "WebOMiner" and "WebOMiner_S" systems perform automatic extraction by first parsing web HTML source code into a document object model (DOM) tree, then using pattern mining techniques to discover heterogeneous data types (e.g. text, images, links, lists) so that product schemas can be extracted and stored in a back-end data warehouse for querying and recommendation. However, a web interface for this system still needs to be developed to make it accessible to all users on the web.

    This thesis proposes a web recommendation system with a graphical user interface, which is readily mounted on the web and accessible to all users. It also integrates the web data obtained from the extraction process, covering all the product features such as product model name, product description, and market price (which varies by retailer). The implementation uses Java Server Pages (JSP), with the GUI designed in HTML, CSS, and JavaScript; the Spring framework forms a bridge between the GUI and the data warehouse. An SQL database stores the extracted product schemas for further integration, querying, and knowledge discovery. All the technologies used are compatible with UNIX systems for hosting the required application.
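
    The extraction stage the thesis builds on can be pictured with a short sketch: parse a product page's HTML into a DOM tree and pull a product schema out of it. The tag names and CSS classes here are hypothetical; real B2C pages differ from site to site, which is exactly the heterogeneity the system has to handle.

```python
# Sketch: parse product-page HTML into a DOM tree and extract a product
# schema. The markup below is an invented example, not any real store.
from bs4 import BeautifulSoup

html = """
<div class="product">
  <span class="model">Inspiron 15 series</span>
  <span class="price">$649.99</span>
  <img src="laptop.jpg"/>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
products = []
for block in soup.find_all("div", class_="product"):
    products.append({
        "model": block.find("span", class_="model").get_text(strip=True),
        "price": block.find("span", class_="price").get_text(strip=True),
        "image": block.find("img")["src"],
    })
print(products)  # rows ready for loading into the back-end warehouse
```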

    Development of an IoT-based automatic fertigation system

    Fertigation supplies water and liquid fertilizer to plants through the same channel. Using a drip irrigation setup for fertigation allows the root zone of the plant to be continuously supplied with nutrients and water throughout the farm season. Conventionally, fertigated systems are controlled using pre-set timers to turn fertilizer injectors and irrigation pumps on and off, and to set the frequency and duration of supply. Fertigation management is therefore usually based only on predictive algorithms or historical data, which may not be accurate for all situations. This work presents the development of a microcontroller-based fertigation management system for the Nigerian (Sub-Saharan African) region using a capacitive soil moisture sensor and a JXCT-IOT Frequency Domain Reflectometry (FDR) soil nitrogen sensor. The sensors are placed in the soil around the root region of plants so that a microcontroller can monitor the soil properties, determine how much water or nutrients the plant needs, and supply the required amount through a drip irrigation framework. Tap water and urea solution are held in separate reservoirs and supplied to the plant through solenoid valves controlled by the microcontroller. Furthermore, an Internet of Things (IoT) client (Blynk IoT) was integrated with the fertigation system so that the fertigation process, as well as the soil state, can be monitored and controlled remotely. The data read from the sensors, as well as the states of the solenoid valves, are sent over the internet and stored on the Blynk servers. Website and mobile (Android) dashboards were also created using the Blynk IoT platform to display the valve states and the sensor readings. The automatic fertigation system was found to be functional: it keeps the soil moisture and nitrogen content within the recommended ranges for cucumber crops (moisture between 25% and 46%, nitrogen between 20 mg/kg and 30 mg/kg). Fertigation events occur every morning between 5 and 6 am.
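
    A minimal sketch of the threshold logic described above, keeping moisture within 25-46% and nitrogen within 20-30 mg/kg. The sensor reads and valve calls are hypothetical stubs; on the actual microcontroller they would map to ADC reads and GPIO writes.

```python
# Threshold-based fertigation control loop (sketch). All hardware access
# is stubbed out; the thresholds are the cucumber ranges from the text.
import time

MOISTURE_RANGE = (25.0, 46.0)   # percent, recommended for cucumber
NITROGEN_RANGE = (20.0, 30.0)   # mg/kg, recommended for cucumber

def read_moisture() -> float:
    return 30.0  # stub; replace with the capacitive sensor reading

def read_nitrogen() -> float:
    return 24.0  # stub; replace with the FDR nitrogen sensor reading

def set_valve(name: str, open_valve: bool) -> None:
    pass  # stub; replace with a GPIO write to the solenoid driver

while True:
    moisture, nitrogen = read_moisture(), read_nitrogen()
    # Open a valve below the lower bound, close it above the upper bound.
    if moisture < MOISTURE_RANGE[0]:
        set_valve("water", True)
    elif moisture > MOISTURE_RANGE[1]:
        set_valve("water", False)
    if nitrogen < NITROGEN_RANGE[0]:
        set_valve("urea", True)
    elif nitrogen > NITROGEN_RANGE[1]:
        set_valve("urea", False)
    time.sleep(60)  # re-sample once a minute
```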

    Towards Comparative Web Content Mining using Object Oriented Model

    Web content data are heterogeneous in nature, usually composed of different content types and data structures, so extracting and mining web content is a challenging branch of data mining. Traditional web content extraction and mining techniques fall into three categories: programming-language-based wrappers, wrapper (data extraction program) induction techniques, and automatic wrapper generation techniques. The first category builds data extraction systems with specialized pattern specification languages; the second uses supervised learning to learn data extraction rules; and the third is a fully automatic extraction process. All of these techniques rely on web document presentation structures, which require complicated matching and tree-alignment algorithms and routine maintenance, are hard to unify across the vast variety of websites, and fail to capture heterogeneous data together. To handle a greater diversity of web documents, a feasible implementation of an automatic data extraction technique based on an object-oriented data model, OOWeb, was proposed in Annoni and Ezeife (2009). This thesis implements, materializes, and extends that structured automatic data extraction technique. We developed a system (called WebOMiner) for extraction and mining of structured web contents based on the object-oriented data model. The thesis extends the extraction algorithms proposed by Annoni and Ezeife (2009) and develops an automata-based automatic wrapper generation algorithm for extraction and mining of structured web content data. Our algorithm identifies data blocks in a flat array data structure and generates Non-Deterministic Finite Automata (NFA) patterns for the different types of content data to be extracted. The objective of this thesis is to extract and mine heterogeneous web content while eliminating the hard effort of matching, tree alignment, and routine maintenance. Experimental results show that our system is highly effective, performing the mining task with 100% precision and 96.22% recall.
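
    A toy version of the NFA idea makes the approach concrete: content blocks are tokenized by type, and a pattern such as image text+ price link? is matched non-deterministically. The token alphabet and pattern below are invented for illustration; the thesis derives its patterns from the discovered data blocks.

```python
# Tiny NFA over content-type tokens, matching "image text+ price link?".
# Transition table: state -> {token -> set of next states}.
NFA = {
    0: {"image": {1}},
    1: {"text": {1, 2}},        # one or more text blocks
    2: {"price": {3}},
    3: {"link": {4}},
}
ACCEPT = {3, 4}                  # the trailing link is optional

def accepts(tokens: list[str]) -> bool:
    states: set[int] = {0}
    for tok in tokens:
        # Follow every transition from every live state (non-determinism).
        states = {n for s in states for n in NFA.get(s, {}).get(tok, set())}
        if not states:           # no live states: reject early
            return False
    return bool(states & ACCEPT)

print(accepts(["image", "text", "text", "price"]))   # True
print(accepts(["image", "price", "link"]))           # False (no text block)
```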

    Field Effect Transistor Nanosensor for Breast Cancer Diagnostics

    Silicon nanochannel field effect transistor (FET) biosensors are one of the most promising technologies for highly sensitive, label-free analyte detection in cancer diagnostics. With their exceptional electrical properties and small dimensions, silicon nanochannels are ideally suited for extraordinarily high sensitivity; in fact, the high surface-to-volume ratios of these systems make single-molecule detection possible. Further, FET biosensors offer the benefits of high speed, low cost, and high-yield manufacturing, without sacrificing the sensitivity typical of traditional optical methods in diagnostics. Top-down manufacturing methods leverage advances in Complementary Metal Oxide Semiconductor (CMOS) technologies, making richly multiplexed sensor arrays a reality. Here, we discuss the fabrication and use of silicon nanochannel FET devices as biosensors for breast cancer diagnosis and monitoring.
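
    A quick back-of-the-envelope calculation shows why the surface-to-volume argument favors nanoscale channels: for an idealized cylindrical channel, the lateral-surface-to-volume ratio is 2/r, so it grows by an order of magnitude for every order of magnitude the radius shrinks. The cylindrical geometry is an assumption made purely for illustration.

```python
# For a cylinder: lateral area 2*pi*r*L, volume pi*r^2*L, so S/V = 2/r.
for radius_nm in (10_000, 1_000, 100, 10):   # 10 um down to 10 nm
    ratio = 2 / (radius_nm * 1e-9)           # in 1/m
    print(f"r = {radius_nm:>6} nm  ->  S/V = {ratio:.1e} m^-1")
```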

    Development, Properties, and Applications of CVD Diamond-Based Heat Sinks

    The heat sink is an essential component in nanoelectronics, microelectronics, and optoelectronics applications because it enables the thermal management of devices such as integrated circuits (ICs), microelectromechanical systems (MEMSs), and graphics processing units. Different materials are employed for heat sink production; among them, diamond stands out due to its excellent chemical and physical properties. This book chapter focuses on the development, properties, and applications of CVD diamond heat sinks. It covers the basic concepts of heat conduction applied to CVD diamond as a heat sink material and the production of freestanding wafers of polycrystalline CVD diamond, giving the reader a comprehensive overview of this extensive literature. We also cover the use, and the potential widening, of applications of CVD diamond heat sink technology, providing the reader with a substantial background on the current development of solutions and new frontiers in the practical use of CVD diamond thermal management devices.
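
    To make the heat-conduction concepts concrete, the sketch below compares the one-dimensional thermal resistance R = t/(kA) of a freestanding CVD diamond spreader with a copper one of the same geometry. The conductivities are typical literature values (roughly 2000 W/m·K for high-quality CVD diamond versus about 400 W/m·K for copper); the exact figures depend on the film and are assumptions here.

```python
# 1-D conduction through a heat spreader: R = t / (k * A), in K/W.
THICKNESS = 300e-6          # 300 um freestanding wafer (illustrative)
AREA = (5e-3) ** 2          # 5 mm x 5 mm die (illustrative)

for name, k in (("CVD diamond", 2000.0), ("copper", 400.0)):
    resistance = THICKNESS / (k * AREA)     # K/W
    print(f"{name:11s}: R = {resistance:.3f} K/W")
```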

    Brain-inspired computing with fluidic iontronic nanochannels

    The unparalleled energy efficiency of the brain is driving researchers to seek out new brain-inspired (neuromorphic) computing paradigms. Artificial aqueous ion channels are emerging as an exciting new platform for neuromorphic computing, representing a departure from conventional solid-state devices by directly mimicking the fluidic ion transport found in the brain. However, despite recent interest, a tangible demonstration of neuromorphic computing remains a challenge. Here we successfully perform neuromorphic reservoir computing using easy-to-fabricate tapered microchannels that embed a conducting network of fluidic nanochannels between colloids, which we show to be a novel memristor (memory resistor). Remarkably, a wide range of typical conductance memory timescales can easily be achieved by constructing channels of different lengths, a unique and highly desirable feature. This work is inspired and supported by a new theoretical model, which stems directly from traditional diffusion-conduction equations and shows excellent agreement with the experiments, predicting the features and relevant parameters presented here. Our results represent a fundamental step in realising the promise of ion channels as a new platform to emulate the rich aqueous dynamics of the brain.
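
    An order-of-magnitude estimate illustrates why the memory timescale is set by channel length: the relevant relaxation in a diffusion-conduction picture scales as tau ~ L^2/D. The diffusion coefficient used below is a textbook value for a typical aqueous ion, and the lengths are illustrative rather than the devices of the paper.

```python
# Diffusive relaxation time tau ~ L^2 / D for channels of different length.
D = 1e-9                                  # m^2/s, typical aqueous ion
for length_um in (10, 50, 100, 500):
    L = length_um * 1e-6                  # channel length in metres
    tau = L ** 2 / D                      # seconds
    print(f"L = {length_um:>3} um  ->  tau ~ {tau:.3g} s")
```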

    Computation of atomic astrophysical opacities

    The revision of the standard Los Alamos opacities in the 1980s-1990s by a group from the Lawrence Livermore National Laboratory (OPAL) and by the Opacity Project (OP) consortium was an early example of collaborative big-data science, leading to reliable data deliverables (atomic databases, monochromatic opacities, mean opacities, and radiative accelerations) widely used since then to solve a variety of important astrophysical problems. Nowadays the precision of the OPAL and OP opacities, and even of the new tables (OPLIB) by Los Alamos, is a recurrent topic in a hot debate involving stringent comparisons between theory, laboratory experiments, and solar and stellar observations in sophisticated research fields: the standard solar model (SSM), helio- and asteroseismology, non-LTE 3D hydrodynamic photospheric modeling, nuclear reaction rates, solar neutrino observations, computational atomic physics, and plasma experiments. In this context, an unexpected downward revision of the solar photospheric metal abundances in 2005 spoiled a very precise agreement between the helioseismic indicators (the radius of the convection-zone boundary, the sound-speed profile, and the helium surface abundance) and SSM benchmarks, which could be reestablished, to some extent, with a substantial opacity increase. Recent laboratory measurements of the iron opacity in physical conditions similar to those at the boundary of the solar convection zone have indeed indicated significant increases (30-400%), although new systematic improvements and comparisons of the computed tables have not yet been able to reproduce them. We give an overview of this controversy and, within the OP approach, discuss some of the theoretical shortcomings that could be impairing a more complete and accurate opacity accounting.

    Comment: 31 pages, 10 figures. This review is originally based on a talk given at the 12th International Colloquium on Atomic Spectra and Oscillator Strengths for Astrophysical and Laboratory Plasmas, Sao Paulo, Brazil, July 2016. It has been published in the Atoms online journal
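
    For reference, the "mean opacities" at the center of this debate are usually Rosseland means: harmonic means of the monochromatic opacity weighted by the temperature derivative of the Planck function. The standard definition (a textbook formula, not specific to any of the tables above) is:

```latex
% kappa_R: Rosseland mean opacity; kappa_nu: monochromatic opacity;
% B_nu: Planck function. Because this is a harmonic mean, low-opacity
% spectral windows dominate, so modest changes in kappa_nu can shift
% kappa_R appreciably.
\frac{1}{\kappa_R} =
  \frac{\int_0^\infty \kappa_\nu^{-1}\,
        \left(\partial B_\nu / \partial T\right) d\nu}
       {\int_0^\infty \left(\partial B_\nu / \partial T\right) d\nu}
```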