
    Investigation of Silicon Nanoparticle-Polystyrene Hybrids

    Current LED lights are made with quantum dots based on elements such as selenium, tellurium, and cadmium, which can be toxic. Silicon, by contrast, is non-toxic and is the second most abundant element in the Earth's crust. When silicon is prepared at the nanometer scale, unique luminescent optical properties emerge that can be tuned through particle size and surface chemistry, making silicon nanoparticles an attractive alternative emitter for LED lights. Hydride-terminated silicon nanoparticles are synthesized as follows: hydrogen silsesquioxane (HSQ) is processed at 1100 °C for one hour, causing Si to cluster within a SiO2 matrix, also known as the composite. The composite is manually crushed in ethanol, further ground using glass beads, and filtered to obtain the composite powder; the final step is HF etching. The hydride-terminated particles are then functionalized using three different methods to synthesize silicon nanoparticle-polystyrene hybrids, and the choice of method determines the luminescence intensity and the quality of the hybrids. Films prepared by each method were spin-coated and analyzed. Method 1 uses heat to functionalize hydride-terminated silicon nanoparticles with styrene; the same process also links styrene units together into a polystyrene chain. Method 1 gave a homogeneous mixture which yielded a consistent, bright, and homogeneous film. In Method 2, dodecyl-terminated silicon nanoparticles are mixed with premade polystyrene. While this method gave better control over the amount of silicon nanoparticles inside the polymer hybrid, a homogeneous mixture was not obtained because of the structural mismatch between polystyrene and the dodecyl chains. Method 3 combines dodecyl-terminated silicon nanoparticles with in-situ styrene polymerization and generated a homogeneous mixture; the in-situ polymerization stabilizes the particles, allowing brighter luminescence, and the stability and lower molecular weight made the mixture easier to dissolve. We conclude that the different methods produced different polymer molecular weights, which led to distinct properties among the polymer hybrids upon spin-coating.

    Discovery of dominant and dormant genes from expression data using a novel generalization of SNR for multi-class problems

    Background: The Signal-to-Noise-Ratio (SNR) is often used for identification of biomarkers for two-class problems, but no formal and useful generalization of SNR is available for multiclass problems. We propose generalizations of SNR for multiclass cancer discrimination through the introduction of two indices, the Gene Dominant Index and the Gene Dormant Index (GDIs). These two indices lead to the concepts of dominant and dormant genes with biological significance, and we use them to develop methodologies for the discovery of dominant and dormant biomarkers. The dominancy and dormancy of the identified biomarkers and their excellent discriminating power are also demonstrated pictorially using scatterplots of individual genes and a 2-D Sammon's projection of the selected set of genes. Using information from the literature, we show that the GDI-based method can identify dominant and dormant genes that play significant roles in cancer biology. These biomarkers are also used to design diagnostic prediction systems.
    Results and discussion: To evaluate the effectiveness of the GDIs, we used four multiclass cancer data sets (Small Round Blue Cell Tumors, Leukemia, Central Nervous System Tumors, and Lung Cancer). For each data set we demonstrate that the new indices can find biologically meaningful genes that can act as biomarkers. We then use six machine learning tools: the Nearest Neighbor Classifier (NNC), the Nearest Mean Classifier (NMC), a Support Vector Machine (SVM) classifier with linear kernel, and an SVM classifier with Gaussian kernel, where both SVMs are used in conjunction with one-vs-all (OVA) and one-vs-one (OVO) strategies. We found the GDIs to be very effective in identifying biomarkers with strong class-specific signatures. With all six tools and for all data sets we achieved better or comparable prediction accuracies, usually with fewer marker genes, than results reported in the literature using the same computational protocols. The dominant genes are usually easy to find, while good dormant genes may not always be available because dormant genes require stronger constraints to be satisfied; when they are available, they can be used for authentication of diagnosis.
    Conclusion: Since GDI-based schemes can find a small set of dominant/dormant biomarkers that is adequate to design diagnostic prediction systems, they open up the possibility of using real-time qPCR assays or antibody-based methods such as ELISA for easy and low-cost diagnosis of diseases. The dominant and dormant genes found by the GDIs can be used in different ways to design more reliable diagnostic prediction systems.
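    The abstract does not give the exact formulas for the Gene Dominant and Gene Dormant Indices, so the minimal Python sketch below only illustrates the classical two-class SNR that the GDIs generalize; the function name, array shapes, and toy data are assumptions for illustration, not the authors' implementation.

        import numpy as np

        def two_class_snr(expr, labels):
            """Classical per-gene signal-to-noise ratio for a two-class problem.

            expr   : (n_genes, n_samples) expression matrix
            labels : (n_samples,) array of 0/1 class labels
            Larger |SNR| indicates stronger separation between the two classes.
            """
            a, b = expr[:, labels == 0], expr[:, labels == 1]
            mu_a, mu_b = a.mean(axis=1), b.mean(axis=1)
            sd_a, sd_b = a.std(axis=1, ddof=1), b.std(axis=1, ddof=1)
            return (mu_a - mu_b) / (sd_a + sd_b)

        # Toy example: 5 genes, 8 samples (4 per class); gene 0 is up-regulated in class 1.
        rng = np.random.default_rng(0)
        expr = rng.normal(size=(5, 8))
        expr[0, 4:] += 3.0
        labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
        print(np.argsort(-np.abs(two_class_snr(expr, labels))))  # gene 0 should rank first

    A multiclass generalization such as the GDI would replace this single pairwise contrast with class-specific statistics, as described in the paper itself.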

    A hybrid global image orientation method for simultaneously estimating global rotations and global translations

    In recent years, the determination of global image orientation, i.e. global SfM, has gained a lot of attention from researchers, mainly due to its time efficiency. Most global methods take relative rotations and translations as input for a two-step strategy consisting of global rotation averaging followed by global translation averaging. This paper, by contrast, presents a hybrid approach that solves for global rotations and translations simultaneously, but hierarchically. We first extract an optimal minimum cover connected image triplet set (OMCTS), which includes all available images with a minimum number of triplets, each of them with the three related relative orientations compatible with each other. For non-collinear triplets in the OMCTS, we introduce some basic characterizations of the corresponding essential matrices and solve for the image pose parameters by averaging the constrained essential matrices. For the collinear triplets, on the other hand, the image pose parameters are estimated by relative orientation using the depth of object points from individual local spatial intersection. Finally, all image orientations are estimated in a common coordinate frame by traversing every solved triplet using a similarity transformation. We show results of our method on different benchmarks and demonstrate the performance and capability of the proposed approach by comparison with other global SfM methods. © 2020 Copernicus GmbH. All rights reserved.
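    The abstract does not detail the triplet-merging step, but as a rough illustration of its final stage, expressing a locally solved camera pose in the common coordinate frame via a similarity transformation, here is a minimal numpy sketch; the function name, pose conventions, and toy values are assumptions, not the authors' implementation.

        import numpy as np

        def apply_similarity(R_cam, C_cam, s, R_sim, t_sim):
            """Map a camera pose from a triplet's local frame into the global frame.

            R_cam : 3x3 world-to-camera rotation in the local frame
            C_cam : camera centre (3-vector) in the local frame
            (s, R_sim, t_sim) : similarity transform  x_global = s * R_sim @ x_local + t_sim
            """
            C_global = s * R_sim @ C_cam + t_sim   # centres transform like object points
            R_global = R_cam @ R_sim.T             # rotations are unaffected by the scale
            return R_global, C_global

        # Toy usage: carry a local pose into the global frame (90 deg rotation about z, scale 2).
        R_sim = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
        R_g, C_g = apply_similarity(np.eye(3), np.array([1.0, 0.0, 0.0]),
                                    s=2.0, R_sim=R_sim, t_sim=np.zeros(3))
        print(C_g)  # [0. 2. 0.]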

    Biomechanics of Pharyngeal Deglutitive Function Following Total Laryngectomy

    Copyright © 2016 American Academy of Otolaryngology-Head and Neck Surgery Foundation. Reprinted by permission of SAGE Publications. Objective: Following total laryngectomy, pharyngeal weakness and pharyngoesophageal junction (PEJ) restriction are the candidate mechanisms underlying dysphagia. We aimed to determine, in laryngectomees, whether: 1) hypopharyngeal propulsion is reduced and/or PEJ resistance is increased; 2) endoscopic dilatation improves dysphagia; and 3) if so, whether symptomatic improvement correlates with the reduction in resistance to flow across the PEJ. Methods: Swallow biomechanics were assessed in 30 total laryngectomees. Average peak contractile pressure (hPP) and hypopharyngeal intrabolus pressure (hIBP) were measured from combined high-resolution manometry and video-fluoroscopic recordings of barium swallows (2, 5, and 10 ml). Patients were stratified into severe dysphagia (Sydney Swallow Questionnaire (SSQ) > 500) and mild/nil dysphagia (SSQ ≤ 500). In 5 patients, all measurements were repeated after endoscopic dilatation. Results: Dysphagia was reported by 87% of patients; 57% had severe and 43% had mild/nil dysphagia. Laryngectomees had lower hPP than controls (110 ± 14 mmHg vs 170 ± 15 mmHg; p < 0.05), while hIBP was higher (29 ± 5 mmHg vs 6 ± 5 mmHg; p < 0.05). There were no differences in hPP between patient groups. However, hIBP was higher in severe than in mild/nil dysphagia (41 ± 10 mmHg vs 13 ± 3 mmHg; p < 0.05). Pre-dilatation hIBP (R² = 0.97) and its decrement following dilatation (R² = 0.98) were good predictors of symptomatic improvement. Conclusion: Increased PEJ resistance is the predominant determinant of dysphagia, as it correlates better with dysphagia severity than peak pharyngeal contractile pressure does. While both baseline PEJ resistance and its decrement following dilatation are strong predictors of outcome following dilatation, peak pharyngeal pressure is not. Detecting PEJ resistance is important because it is the only potentially reversible component of dysphagia in this context.

    Search for Intrinsic Excitations in 152Sm

    The 685 keV excitation energy of the first excited 0+ state in 152Sm makes it an attractive candidate to explore expected two-phonon excitations at low energy. Multiple-step Coulomb excitation and inelastic neutron scattering studies of 152Sm are used to probe the E2 collectivity of excited 0+ states in this "soft" nucleus and the results are compared with model predictions. No candidates for two-phonon K=0+ quadrupole vibrational states are found. A 2+, K=2 state with strong E2 decay to the first excited K=0+ band and a probable 3+ band member are established.
    Comment: 4 pages, 6 figures, accepted for publication as a Rapid Communication in Physical Review

    Managing healthcare budgets in times of austerity: the role of program budgeting and marginal analysis

    Given limited resources, priority setting or choice making will remain a reality at all levels of publicly funded healthcare across countries for many years to come. The pressures may well become even more acute as the impact of the economic crisis of 2008 continues to play out, but even as economies begin to turn around, resources within healthcare will remain limited, so some form of rationing will be required. Over the last few decades, research on healthcare priority setting has focused on methods of implementation as well as on the development of approaches related to fairness and legitimacy and on more technical aspects of decision making, including the use of multi-criteria decision analysis. Recently, research has led to a better understanding of how to evaluate priority setting activity, including defining ‘success’ and articulating key elements of high performance. This body of research, however, often goes untapped by those charged with making challenging decisions, and as such, in line with prevailing public sector incentives, decisions often rely on historical allocation patterns and/or political negotiation. These archaic and ineffective approaches not only lead to poor decisions in terms of value for money but also fail to meet basic ethical conditions for fairness in the decision-making process. The purpose of this paper is to outline a comprehensive approach to priority setting and resource allocation that has been used in different contexts across countries. This provides decision makers with a single point of access for a basic understanding of relevant tools when faced with difficult decisions about which healthcare services to fund and which not to fund. The paper also addresses several key issues related to priority setting, including how health technology assessments can be used, how performance can be improved at a practical level, and what ongoing resource management practice should look like. In terms of future research, one of the most important areas of priority setting that needs further attention is how best to engage members of the public.

    Data production models for the CDF experiment

    The data production for the CDF experiment is conducted on a large Linux PC farm designed to meet the needs of data collection at a maximum rate of 40 MByte/sec. We present two data production models that exploit advances in computing and communication technology. The first production farm is a centralized system that has achieved a stable data processing rate of approximately 2 TByte per day. The recently upgraded farm has been migrated to the SAM (Sequential Access to data via Metadata) data handling system. The software and hardware of the CDF production farms have been successful in providing large computing and data throughput capacity to the experiment.
    Comment: 8 pages, 9 figures; presented at HPC Asia 2005, Beijing, China, Nov 30 - Dec 3, 2005

    Quantitative localized proton-promoted dissolution kinetics of calcite using scanning electrochemical microscopy (SECM)

    Scanning electrochemical microscopy (SECM) has been used to determine quantitatively the kinetics of proton-promoted dissolution of the calcite (101̅4) cleavage surface (from natural “Iceland Spar”) at the microscopic scale. By working under conditions where the probe size is much less than the characteristic dislocation spacing (as revealed from etching), it has been possible to measure kinetics mainly in regions of the surface which are free from dislocations, for the first time. To clearly reveal the locations of measurements, studies focused on cleaved “mirror” surfaces, where one of the two faces produced by cleavage was etched freely to reveal defects intersecting the surface, while the other (mirror) face was etched locally (and quantitatively) using SECM to generate high proton fluxes with a 25 μm diameter Pt disk ultramicroelectrode (UME) positioned at a defined (known) distance from a crystal surface. The etch pits formed at various etch times were measured using white light interferometry to ascertain pit dimensions. To determine quantitative dissolution kinetics, a moving boundary finite element model was formulated in which experimental time-dependent pit expansion data formed the input for simulations, from which solution and interfacial concentrations of key chemical species, and interfacial fluxes, could then be determined and visualized. This novel analysis allowed the rate constant for proton attack on calcite, and the order of the reaction with respect to the interfacial proton concentration, to be determined unambiguously. The process was found to be first order in terms of interfacial proton concentration with a rate constant k = 6.3 (±1.3) × 10⁻⁴ m s⁻¹. Significantly, this value is similar to previous macroscopic rate measurements of calcite dissolution which averaged over large areas and many dislocation sites, and where such sites provided a continuous source of steps for dissolution. Since the local measurements reported herein are mainly made in regions without dislocations, this study demonstrates that dislocations and steps that arise from such sites are not needed for fast proton-promoted calcite dissolution. Other sites, such as point defects, which are naturally abundant in calcite, are likely to be key reaction sites.
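    As a small numerical illustration of the reported first-order rate law (not of the moving-boundary finite element model used in the study), the dissolution flux implied by the fitted rate constant can be evaluated for an assumed interfacial proton concentration; the chosen pH value below is hypothetical.

        # First-order rate law from the abstract: flux J = k * [H+]_interface,
        # with k = 6.3e-4 m s^-1. An interfacial pH of 3 is assumed purely for illustration.
        k = 6.3e-4                   # rate constant, m s^-1
        c_h = 10**-3 * 1000          # [H+] at pH 3, converted from mol/L to mol/m^3
        flux = k * c_h               # proton-promoted dissolution flux, mol m^-2 s^-1
        print(f"J = {flux:.2e} mol m^-2 s^-1")   # ~6.3e-4 mol m^-2 s^-1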

    Data processing model for the CDF experiment

    The data processing model for the CDF experiment is described. Data processing reconstructs events from parallel data streams taken with different combinations of physics event triggers and further splits the events into specialized physics datasets. The design of the processing control system faces strict requirements on bookkeeping records, which trace the status of data files and event contents during processing and storage. The computing architecture was updated to meet the mass data flow of the Run II data collection, recently upgraded to a maximum rate of 40 MByte/sec. The data processing facility consists of a large cluster of Linux computers with data movement managed by the CDF data handling system to a multi-petabyte Enstore tape library. The latest processing cycle has achieved a stable speed of 35 MByte/sec (3 TByte/day). It can be readily scaled by increasing CPU and data-handling capacity as required.
    Comment: 12 pages, 10 figures, submitted to IEEE-TN
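    As a quick consistency check on the quoted throughput figures (a sustained 35 MByte/sec against roughly 3 TByte per day), the short calculation below assumes decimal byte prefixes (1 MByte = 10^6 bytes, 1 TByte = 10^12 bytes):

        # Sustained throughput check: 35 MByte/sec over a full day, decimal prefixes assumed.
        rate_bytes_per_s = 35e6
        seconds_per_day = 24 * 60 * 60                    # 86,400 s
        tbyte_per_day = rate_bytes_per_s * seconds_per_day / 1e12
        print(f"{tbyte_per_day:.2f} TByte/day")           # ~3.02, matching the quoted 3 TByte/day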