146 research outputs found

    Development of Polarizable Force Field Models for Transition Metal Ions

    This dissertation focuses on the development of polarizable molecular mechanics (MM) force field models for first-row (3d) transition metal (TM) ions. These TM ions perform important structural and chemical functions in a wide range of organic and biological environments because of the unique properties of the 3d orbitals. Being able to study these systems in silico can provide a tremendous amount of information that is difficult to obtain through experiments. However, the standard treatment of ions in traditional MM models has been shown to be insufficient for describing d-shell electronic effects. In this work, empirical models for TM electronic effects are derived from valence bond (VB) theory and the angular overlap model (AOM). The TM potential functions are incorporated into the AMOEBA (Atomic Multipole Optimized Energetics for Biomolecular Applications) MM force field. A consistent polarizable electrostatics model is applied between metal and ligand sites at all interaction distances, enabling the study of ligand association/dissociation and other dynamic events. Specifically, theories are presented in the context of Ni(II), Cu(II) and Zn(II) ions. Parameters are obtained by fitting the TM models to gas-phase ab initio computations. Finally, results from molecular dynamics simulations of aqueous ions and selected type 1 copper proteins (plastocyanin and azurin) are analyzed. Evidence from this study suggests that an explicit description of d-shell electronic effects can significantly improve the performance of MM models, allowing more reliable investigations of complex TM systems than traditional MM methods, without the computational expense of ab initio calculations.
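    The ligand-field ideas referenced above can be illustrated with the sigma-only angular overlap model, in which each d orbital is destabilized by the squared angular overlap factor summed over ligands. The following is a minimal sketch; the e_sigma value and the octahedral geometry are illustrative placeholders, not the AMOEBA parameters developed in the dissertation.

```python
# Minimal sketch of sigma-only AOM d-orbital energies (illustrative parameters,
# not the dissertation's force field).
import numpy as np

def aom_sigma_factors(theta, phi):
    """Standard AOM sigma angular factors for the five d orbitals, with the
    ligand at polar/azimuthal angles (theta, phi) relative to the metal."""
    return np.array([
        0.5 * (3.0 * np.cos(theta) ** 2 - 1.0),                   # d_z2
        (np.sqrt(3) / 2) * np.sin(2 * theta) * np.cos(phi),       # d_xz
        (np.sqrt(3) / 2) * np.sin(2 * theta) * np.sin(phi),       # d_yz
        (np.sqrt(3) / 2) * np.sin(theta) ** 2 * np.cos(2 * phi),  # d_x2-y2
        (np.sqrt(3) / 2) * np.sin(theta) ** 2 * np.sin(2 * phi),  # d_xy
    ])

def d_orbital_energies(ligand_angles, e_sigma=1.0):
    """Diagonal AOM estimate: each d orbital is raised by e_sigma times the
    sum over ligands of the squared angular overlap factor."""
    energies = np.zeros(5)
    for theta, phi in ligand_angles:
        energies += e_sigma * aom_sigma_factors(theta, phi) ** 2
    return energies

# Example: six sigma donors at octahedral positions reproduce the familiar
# eg (3 e_sigma) / t2g (0) splitting.
octahedron = [(0.0, 0.0), (np.pi, 0.0),
              (np.pi / 2, 0.0), (np.pi / 2, np.pi / 2),
              (np.pi / 2, np.pi), (np.pi / 2, 3 * np.pi / 2)]
print(d_orbital_energies(octahedron))
```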

    Consistent Density Scanning and Information Extraction From Point Clouds of Building Interiors

    Over the last decade, 3D range scanning systems have improved considerably, enabling designers to capture large and complex domains such as building interiors. The captured point cloud is processed to extract specific Building Information Models, where the main research challenge is to simultaneously handle huge, cohesive point clouds representing multiple objects, occluded features and vast geometric diversity. These domain characteristics increase the data complexity and thus make it difficult to extract accurate information models from the captured point clouds. The research work presented in this thesis improves the information extraction pipeline through novel algorithms for consistent-density scanning and automated information extraction for building interiors. A restricted, density-based scan planning methodology computes the number of scans needed to cover large linear domains while ensuring the desired data density and reducing rigorous post-processing of data sets. The work further develops effective algorithms to transform the captured data into information models in terms of domain features (layouts), meaningful data clusters (segmented data) and specific shape attributes (occluded boundaries) with better practical utility. Initially, a direct point-based simplification and layout extraction algorithm is presented that handles cohesive point clouds through adaptive simplification and an accurate layout extraction approach without generating an intermediate model. Further, three information extraction algorithms are presented that transform point clouds into meaningful clusters. The novelty of these algorithms lies in the fact that they work directly on point clouds by exploiting their inherent characteristics. First, a rapid data clustering algorithm is presented to quickly identify objects in the scanned scene using a robust hue, saturation and value (HSV) color model for better scene understanding. A hierarchical clustering algorithm is developed to handle the vast geometric diversity ranging from planar walls to complex freeform objects: shape-adaptive parameters help to segment planar as well as complex interiors, whereas combining color- and geometry-based segmentation criteria improves clustering reliability and identifies unique clusters in geometrically similar regions. Finally, a progressive scan-line-based, side-ratio constraint algorithm is presented to identify occluded boundary data points by investigating their spatial discontinuity.
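    As a rough illustration of the color-driven clustering idea mentioned above, the sketch below groups a colored point cloud by quantized hue and then splits each hue group into spatially connected clusters. The hue bin count and distance threshold are illustrative assumptions, not the thesis's parameters, and the exact HSV-based criteria of the rapid clustering algorithm are not reproduced here.

```python
# Minimal sketch of hue-plus-proximity clustering of a colored point cloud
# (assumed inputs: points as an (n, 3) array, colors as RGB values in [0, 1]).
import numpy as np
import colorsys

def cluster_by_hue_and_space(points, colors, hue_bins=12, dist_thresh=0.05):
    """Group points by quantized hue, then split each hue group into
    spatially connected clusters using a greedy distance test."""
    hues = np.array([colorsys.rgb_to_hsv(*c)[0] for c in colors])
    bins = (hues * hue_bins).astype(int) % hue_bins
    clusters = []
    for b in range(hue_bins):
        unassigned = set(np.where(bins == b)[0].tolist())
        while unassigned:
            seed = unassigned.pop()
            cluster, frontier = [seed], [seed]
            while frontier:
                i = frontier.pop()
                near = [j for j in list(unassigned)
                        if np.linalg.norm(points[i] - points[j]) < dist_thresh]
                for j in near:
                    unassigned.discard(j)
                    cluster.append(j)
                    frontier.append(j)
            clusters.append(cluster)
    return clusters
```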

    Real-time monitoring and diagnostics of crystal-based collimation of particle accelerator beams

    Beam collimation is one of the important items for the future upgrade of the Large Hadron Collider (LHC) at CERN. An effective collimation system is particularly required at higher beam intensities, as even a relatively small number of particles impinging on the superconducting magnets can cause quenching (a sudden loss of the superconducting state). Although the collimation system currently used at CERN works properly, it presents some limitations which can be overcome by future upgrades. One of these limitations is due to particle diffraction from heavy absorbers. An alternative option to the current collimation system is the use of bent crystals, which are expected to be very effective in beam collimation: they have the advantage of guiding halo particles of the beam onto a single absorber. This allows improvements to the cleaning performance as well as to the impedance of the collider compared to multi-stage collimation systems consisting of large blocks of amorphous material placed around the beam. In this framework, the UA9 Experiment at CERN has for many years been carrying out R&D on various types of crystals, with the aim of finding the best solution to overcome the limitations of the current collimation system in view of future upgrades of the collider. The first part of this PhD work has been devoted, within the UA9 collaboration, to the characterization of new crystals to be used in the LHC and in the Super Proton Synchrotron (SPS) for collimation. The radiation hardness of these crystals to high-energy neutrons was also tested. Beam collimation monitoring is performed in the UA9 crystal-based system using a Cherenkov detector for high-energy protons passing through fused silica. Presently, classical PMTs are used to collect the Cherenkov light, but their dark count rate is directly affected by the high-intensity radiation. To address this limitation, the second part of this PhD project focused on the characterization of ZnO material, which proved to be very promising for realizing alternative photodetectors. In this respect, the Cherenkov detector/setup used in UA9 could be upgraded with more functional sensor systems that are radiation resistant and compatible with the vacuum requirements of the beam pipe. Another important aspect of collimation systems is the real-time monitoring of collimated beams inside the accelerators, especially when using a crystal-based collimation system as in UA9. A good approach is to develop a machine-learning-based real-time framework to analyze the signal and detect faults. The last aim of this work is to present a preliminary study of data acquisition as a starting point for a real-time framework to be built in the future. This study was carried out using a SiPM sensor (which competes with PMTs) together with a fast ADC digitizer operating in real time.
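    As a starting point for the kind of real-time signal monitoring described above, the sketch below flags anomalous samples in a digitized detector waveform using a rolling baseline and a z-score threshold. It is a crude stand-in, not the UA9 acquisition chain or a machine-learning model; the window size and threshold are illustrative assumptions.

```python
# Minimal sketch of streaming anomaly flagging on a digitized waveform
# (assumed input: `samples`, a 1-D array of ADC readings; parameters are
# illustrative, not those of the UA9 setup).
import numpy as np

def detect_anomalies(samples, window=256, z_thresh=6.0):
    """Flag sample indices that deviate strongly from a rolling baseline."""
    baseline = np.median(samples[:window])
    spread = np.std(samples[:window]) + 1e-9
    flagged = []
    for i, s in enumerate(samples[window:], start=window):
        z = abs(s - baseline) / spread
        if z > z_thresh:
            flagged.append(i)
        else:
            # Update the baseline and spread only with "normal" samples.
            baseline = 0.99 * baseline + 0.01 * s
            spread = 0.99 * spread + 0.01 * abs(s - baseline)
    return flagged
```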

    Fluorescence Resonance Energy Transfer (FRET) systems for biomedical sensor applications

    This thesis investigates the use of Fluorescence Resonance Energy Transfer (FRET) for biomedical sensor applications. FRET is a process by which energy is transferred, via long-range dipole-dipole interactions, from a donor molecule (D) in an excited electronic state to an acceptor molecule (A). The emission band of D must overlap the absorption band of A in order for FRET to occur. FRET is employed in a variety of biomedical applications, including the study of cell biology and protein folding/unfolding, and is also used for enhanced optical bioassays. The distance dependence of the FRET interaction enables the technique to be used as a molecular ruler to report, for example, on conformational changes in biomolecules. The first phase of this work involved the design and implementation of a model 2-D FRET platform that is compatible with optical biochips. The donor-acceptor pair used was a Ruthenium-complex/Cy5 system in which the donor-acceptor separation was controlled using highly reproducible polyelectrolyte spacer layers, deposited using a layer-by-layer technique. The FRET process was demonstrated in both fluorescence intensity and lifetime modes. The interaction between FRET and the plasmonic enhancement of fluorescence in the presence of adjacent metal nanoparticles was also investigated. Dipole-dipole interactions limit the FRET effect to donor-acceptor distances of typically less than 10 nm. The use of the plasmonic effect to increase this distance, which would facilitate the use of FRET in a wider variety of applications, was explored. The size, shape and composition of metal nanoparticles were tailored to give a resonance absorption which optimises the enhancement of the dye fluorescence. As well as developing a 2-D solid planar platform, the FRET-plasmonic interaction was also investigated in the solution phase, by designing a model that incorporated donor- and acceptor-labeled oligonucleotides as controlled spacers and spherical gold and silver nanoparticles for plasmonic enhancement. Throughout the work, theoretical calculations were carried out and, where relevant, theoretical predictions were compared with experimental measurements. Apart from the design of two FRET-plasmonic investigation models, a key result to emerge from this work is that, while individual plasmonic enhancement of the donor and acceptor occurs in the presence of metal nanoparticles, no plasmonic enhancement of the FRET interaction is observed for the experimental systems. Theoretical modeling confirmed the reduction of the FRET efficiency.
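    The "molecular ruler" behaviour described above follows from the Förster relation E = 1 / (1 + (r/R0)^6). The sketch below evaluates it for a generic Förster radius; the R0 value is a placeholder, not a measured parameter of the Ru-complex/Cy5 pair studied in the thesis.

```python
# Minimal sketch of the distance dependence of FRET efficiency
# (R0 is an illustrative placeholder, not a fitted value from the thesis).
def fret_efficiency(r_nm, r0_nm=5.0):
    """Förster transfer efficiency E = 1 / (1 + (r/R0)^6)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Example: for R0 = 5 nm the efficiency falls from ~0.98 at 2.5 nm
# to ~0.02 at 10 nm, which is why FRET probes sub-10 nm distances.
for r in (2.5, 5.0, 7.5, 10.0):
    print(f"r = {r:4.1f} nm  ->  E = {fret_efficiency(r):.3f}")
```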

    HISTOLOGICAL STUDIES OF BREWERY SPENT GRAINS IN DIETARY PROTEIN FORMULATION IN DONRYU RATS

    The large production tonnage of the brewing industry continually generates substantial solid waste, which includes spent grains, surplus yeast, malt sprouts and cullet. The disposal of spent grains is often a problem and poses major health and environmental challenges, making it urgently necessary to explore alternatives for their management. This paper investigates the effects of a diet formulated with brewery spent grain on the haematological, biochemical and histological parameters and the growth performance of Donryu rats. The rats were allocated to six dietary treatment groups and fed, in a short-term study, diets containing graded levels of spent grains of 0, 3, 6, 9, 12 and 100% weight/weight. The outcome demonstrated that the formulated diet had a positive effect on the growth performance of the rats up to an inclusion level of 6%, while the haematological and biochemical evaluation indicated that the threshold should not exceed 9% inclusion. However, the histological study of the liver indicated a limit of 3% inclusion in feed without serious adverse effects. This suggests that a blend in the range of 1-3% is appropriate for utilizing the waste in human food without adverse effects on the liver. The economic advantage accruing from this waste conversion process not only solves the problem of waste disposal but also helps address malnutrition in feed rations.

    Morphological operations in image processing and analysis

    Morphological operations applied in image processing and analysis are becoming increasingly important in today's technology. Morphological operations, which are based on set theory, can extract object features using suitably shaped structuring elements. Morphological filters are combinations of morphological operations that transform an image into a quantitative description of its geometrical structure based on structuring elements. Important applications of morphological operations are shape description, shape recognition, nonlinear filtering, industrial parts inspection, and medical image processing. In this dissertation, basic morphological operations are reviewed, and algorithms and theorems are presented for solving problems in distance transformation, skeletonization, recognition, and nonlinear filtering. A skeletonization algorithm using the maxima-tracking method is introduced to generate a connected skeleton, and a modified algorithm is proposed to eliminate non-significant short branches. Back propagation morphology is introduced to reach the roots of morphological filters in only two scans; its definitions and properties are discussed, and the two-scan distance transformation is proposed to illustrate the advantage of this new definition. The G-spectrum (geometric spectrum), which is based upon the cardinality of a set of non-overlapping segments in an image obtained using morphological operations, is presented as a useful tool not only for shape description but also for shape recognition. The G-spectrum is proven to be translation-, rotation-, and scaling-invariant, and a shape likeliness measure based on the G-spectrum is defined for shape recognition. Experimental results are also presented. Soft morphological operations, which are found to be less sensitive to additive noise and to small variations, are combinations of order-statistic and morphological operations. Soft morphological operations commute with thresholding and obey threshold superposition. This threshold decomposition property allows gray-scale signals to be decomposed into binary signals, which can be processed in parallel using only logic gates, and the binary results can then be combined to produce the equivalent output. Thus the implementation and analysis of function-processing soft morphological operations can be done by focusing only on the case of sets, which are much easier to deal with because their definitions involve only counting points instead of sorting numbers, and which also allow logic-gate implementation and a parallel pipelined architecture leading to real-time implementation. In general, soft opening and closing are not idempotent operations, but under some constraints they can be idempotent, and the proof is given. The idempotence property indicates how to choose the structuring element sets and the value of the index such that soft morphological filters reach the root signals without iteration. Finally, a summary and future research directions are provided.
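    For readers unfamiliar with the basic operations the dissertation builds on, the sketch below applies binary erosion, dilation, opening, closing and a distance transform to a toy image. The structuring element and image are illustrative; the dissertation's own algorithms (maxima-tracking skeletonization, back propagation morphology, the G-spectrum) are not reproduced here.

```python
# Minimal sketch of the basic binary morphological operations on a toy image.
import numpy as np
from scipy import ndimage

image = np.zeros((7, 7), dtype=bool)
image[2:5, 1:6] = True                                 # a small rectangular object

selem = ndimage.generate_binary_structure(2, 1)        # 3x3 cross structuring element

eroded  = ndimage.binary_erosion(image, structure=selem)
dilated = ndimage.binary_dilation(image, structure=selem)
opened  = ndimage.binary_dilation(eroded, structure=selem)   # opening = erosion then dilation
closed  = ndimage.binary_erosion(dilated, structure=selem)   # closing = dilation then erosion

# A discrete distance transform, the usual starting point for skeletonization.
distance = ndimage.distance_transform_cdt(image, metric='chessboard')
```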

    Multibody dynamics based simulation studies of escapement mechanisms in mechanical watch movement.

    Fu, Kin Chung Denny. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (leaves 119-123). Abstracts in English and Chinese.
    Contents: Abstract (English and Chinese); Acknowledgements; Table of Contents; List of Figures; List of Tables.
    Chapter 1, Introduction: objective; fundamental knowledge of multibody dynamics; escapement mechanisms (time-keeping accuracy and stability factors, estimations of moment of inertia, other simulations and analyses); thesis outline; chapter summary.
    Chapter 2, Multibody Dynamics: the unilateral corner law of impact; Coulomb friction; slip, stick, and slip-reversal phenomena; the coefficients of restitution; ways of formulating multiple contacts; integration procedure; the P. Pfeiffer and Ch. Glocker approach (kinematics calculation, configuration index, motion without contact, motion for detachment and slip-stick transition with LCP formulation, motion for impact with LCP formulation); solving the LCP; chapter summary.
    Chapter 3, Development of the Simulation Tool: kinematics calculation (geometric definitions, line-to-line contact, arc-to-line contact, kinematics calculation procedures); obtaining the solutions; revised numerical treatment for LCP solving; integration procedure of the simulation; verification example (classical mechanics approach, pre-calculation before application, simulation results); chapter summary.
    Chapter 4, Application to the Swiss Lever Escapement: working principle of the Swiss lever escapement; simulation of the Swiss lever escapement (pre-calculation of kinematics, simulation results); further simulations (theoretical optimal peak amplitudes, simulation of a coaxial escapement, simulations with different simulation parameters, relation of input complexity and computational time); chapter summary.
    Chapter 5, Conclusions and Future Work.
    Bibliography.
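    The contact and impact formulations listed above lead to linear complementarity problems (LCPs) of the form w = Mz + q, w >= 0, z >= 0, w^T z = 0. The sketch below solves such a problem with a generic projected Gauss-Seidel iteration; this solver and its tolerances are illustrative and are not necessarily the scheme used in the thesis.

```python
# Minimal sketch of a projected Gauss-Seidel solver for the LCP
# w = M z + q, w >= 0, z >= 0, w^T z = 0 (assumes M has a positive diagonal).
import numpy as np

def solve_lcp_pgs(M, q, iterations=200, tol=1e-10):
    n = len(q)
    z = np.zeros(n)
    for _ in range(iterations):
        z_old = z.copy()
        for i in range(n):
            r = q[i] + M[i] @ z - M[i, i] * z[i]   # residual excluding z_i
            z[i] = max(0.0, -r / M[i, i])          # project onto z_i >= 0
        if np.linalg.norm(z - z_old) < tol:
            break
    return z
```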

    Novel perspectives and approaches to video summarization

    The increasing volume of videos requires efficient and effective techniques to index and structure them. Video summarization is one such technique: it extracts the essential information from a video so that tasks such as comprehension by users and video content analysis can be conducted more effectively and efficiently. The research presented in this thesis investigates three novel perspectives of the video summarization problem and provides approaches for each. Our first perspective is to employ local keypoints to perform keyframe selection. Two criteria, namely Coverage and Redundancy, are introduced to guide the keyframe selection process in order to identify keyframes that represent the maximum video content while sharing minimum redundancy. To deal efficiently with long videos, a top-down strategy is proposed which splits the summarization problem into two sub-problems: scene identification and scene summarization. Our second perspective is to formulate the task of video summarization as a problem of sparse dictionary reconstruction. Our method utilizes the true sparse constraint, the L0 norm, instead of the relaxed constraint, the L2,1 norm, so that keyframes are directly selected as a sparse dictionary that can reconstruct the video frames. In addition, a Percentage Of Reconstruction (POR) criterion is proposed to intuitively guide users in selecting an appropriate summary length. An L2,0-constrained sparse dictionary selection model is also proposed to further verify the effectiveness of sparse dictionary reconstruction for video summarization. Lastly, we investigate the multi-modal perspective of multimedia content summarization and enrichment. There are abundant images and videos on the Web, so it is highly desirable to organize such resources effectively for textual content enrichment. With the support of web-scale images, our proposed system, namely StoryImaging, is capable of enriching arbitrary textual stories with visual content.
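    The Coverage and Redundancy criteria mentioned above can be illustrated with a greedy keyframe selector over generic frame feature vectors. The similarity threshold and feature representation below are illustrative assumptions; the thesis's keypoint-based formulation and sparse-dictionary models are not reproduced here.

```python
# Minimal sketch of coverage/redundancy-driven keyframe selection
# (assumed input: `features`, a list/array of per-frame feature vectors).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_keyframes(features, coverage_thresh=0.8, max_keyframes=10):
    """Greedily pick frames that cover content not yet represented (coverage)
    while avoiding frames similar to ones already chosen (redundancy)."""
    selected = []
    covered = np.zeros(len(features), dtype=bool)
    while len(selected) < max_keyframes and not covered.all():
        # Coverage gain: how many uncovered frames each candidate would cover.
        gains = []
        for i, f in enumerate(features):
            if covered[i]:
                gains.append(-1)
                continue
            gains.append(sum(1 for j, g in enumerate(features)
                             if not covered[j] and cosine(f, g) >= coverage_thresh))
        best = int(np.argmax(gains))
        if gains[best] <= 0:
            break
        selected.append(best)
        for j, g in enumerate(features):
            if cosine(features[best], g) >= coverage_thresh:
                covered[j] = True
    return selected
```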