
    Towards self-assembled metamaterials

    How far can we push chemical self-assembly? This is one of the 25 biggest questions facing science over the next quarter century, as reported by the journal Science in 2005. The idea of self-assembly is to fabricate synthetic structures or materials from the bottom up. To date, a huge class of distinct structures has been successfully fabricated by self-assembly. One important scientific area that exploits the ideas of self-assembly arose from the fusion of the fields of colloidal nanochemistry and nanooptics. There, the focus is on the fabrication of bottom-up nanophotonic structures with a tailored optical response. Of particular interest are self-assembled metamaterials (MMs). They promise to widen the possibilities of controlling the propagation of light to an extraordinary degree. In self-assembled MMs, the precise spatial arrangement of the unit cells across larger dimensions is not possible in most cases, leading to essentially amorphous structures. Such self-assembled MMs require novel analytical means to describe their optical properties and innovative designs of functional elements that possess a desired near- and far-field response. The first goal of this thesis is the introduction and development of a feasible theoretical description of amorphous MMs. Once the theory is established, the second goal is the experimental realization of self-assembled MMs. The focus of this thesis is therefore on self-assembled MMs and the question of how far they can be pushed to obtain artificial materials with an extraordinary optical response.

    Joint communication and radar sensing in 5G mobile network by compressive sensing


    Next Generation Graphene Photonics Enabled by Ultrafast Light-Matter Interactions and Machine Learning

    Graphene was first experimentally studied in 2004, featuring an atomically thin structure. Since then, many unique photonic and electrical properties of graphene and other 2D materials have been reported. However, additional efforts are necessary to convert these findings in physics into successful industrial applications. This thesis presents work exploiting the picosecond-scale ultrafast light-matter interactions in graphene to meet the growing demands in IR sensing, 3D detection, and THz light sources. We start with graphene’s interactions with ultrafast lasers. Hot carrier generation, relaxation, and transport are discussed in graphene and graphene heterostructures. We present a graphene phototransistor with decent near- and mid-infrared (IR) responsivity. Moreover, the detector’s responsivity is tunable with a gate voltage, and the gate dependence differs between illumination wavelengths. Based on this spectrally resolved response, we adopt least-squares regression algorithms to extract the light source’s spectral information in the near-infrared. We further perform first-principles photocurrent simulations and spectral reconstructions on defect-free ideal devices with an optimized band structure. The results indicate the detector's potential as an ultra-compact on-chip spectrometer for multispectral imaging after further development. Then we discuss how the graphene detector’s high transparency enables a novel 3D detection and imaging technology. Our graphene phototransistors absorb < 10% of light and give a 3 A/W photoresponse at 532 nm wavelength. The high transparency and sensitivity enable transparent photodetector arrays built on glass substrates, with over 85% of the incident light power transmitting through such an imager chip. We stack multiple transparent arrays at different focal depths in a camera system. The setup enables simultaneous light intensity (image) acquisition at different depths.
We use artificial neural networks to process the image-stack data into the 3D position and configuration of the objects. For a proof-of-concept demonstration, we used the setup to achieve 3D ranging and tracking of a point source. The technical approach benefits from compactness, high speed, and decent power efficiency for real-time 3D tracking applications. Lastly, we explore the potential of graphene heterostructures as terahertz (THz) emitters and ultrafast photodetectors. The picosecond-scale light-matter interaction of graphene allows us to engineer its optical and electrical structure for THz field emission. We insert a graphene layer in the channel of a silicon photoconductive switch. The device works as a THz electromagnetic wave emitter under femtosecond laser pulse illumination. We use an on-chip pump-probe system to study the temporal and spatial behavior of the THz generation. Our device’s emission amplitude is 80 times larger than that of a graphene-free control device with identical geometry and test conditions. Moreover, we also observe strong photocurrent generation with a response time below 0.5 ps, verified by a photocurrent autocorrelation test. The responsivity is 800 times larger than that of the graphene-free control device. The substantial enhancements are attributed to the high mobility in graphene and the strong absorption in silicon. Gate-dependence observations indicate vertical hot-carrier transfer from the silicon layer to the graphene layer, followed by efficient lateral charge separation inside graphene. The results open the door to further research and development of graphene-based strong THz sources and sensitive ultrafast photodetectors. We conclude with strategies to convert graphene’s unique properties into practical and competitive applications. The strategies are extended to general nanodevice and nano-system development methodologies.
Specifically, we propose the synergistic design of nanodevices and machine learning algorithms as a feasible approach towards many new applications. PhD, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169683/1/dehui_1.pd
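The spectral-reconstruction step above (solving for a source spectrum from a set of gate-dependent photocurrent readings by least-squares regression) can be sketched with synthetic matrices; all values below are invented for illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical responsivity matrix: each row is the detector's spectral
# responsivity at one gate voltage (n_gates x n_wavelength_bins).
n_gates, n_wavelengths = 20, 10
R = rng.uniform(0.5, 3.0, size=(n_gates, n_wavelengths))

# Unknown source spectrum and the photocurrents it would produce
# during a gate-voltage sweep: y = R @ s.
s_true = rng.uniform(0.0, 1.0, size=n_wavelengths)
y = R @ s_true

# Least-squares reconstruction of the spectrum from the gate sweep.
s_hat, *_ = np.linalg.lstsq(R, y, rcond=None)
print("max reconstruction error:", np.abs(s_hat - s_true).max())
```

With more gate settings than wavelength bins and a well-conditioned responsivity matrix, the overdetermined system pins down the spectrum; noisy measurements would call for regularized variants of this fit.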

    Insights into the structure and aggregation of lens crystallins and other aggregation-prone proteins

    Cataract is the world's leading cause of blindness. The destabilization, partial unfolding, and aggregation of lens crystallin proteins cause the loss of lens transparency (opacification) and cataract formation. Numerous congenital mutations and age-related changes to the long-lived alpha-, beta-, and gamma-crystallins are associated with cataract, and their study has provided insight into the molecular basis of this disease. In this thesis, alpha- and gamma-crystallin isoforms have been characterised under crowded and oxidative conditions, respectively. Finally, the conformational heterogeneity of model proteins was studied by capillary electrophoresis as a prelude to such studies on the more complex crystallins. Chapter 2 details the structural characterisation of the disulfide-linked gammaS-crystallin dimer, an oxidative product in the aging lens. X-ray crystallography revealed an intermolecular disulfide bond from C24-C24' and two intramolecular disulfides, one in each subunit, between C22 and C26. Small-angle X-ray scattering confirmed the extended in-solution biological assembly rather than a compact state. It was demonstrated that the disulfide-linked dimer was stable at glutathione concentrations akin to those in aged and cataractous lenses. The dimer had a higher aggregation propensity than the monomeric form owing to uncooperative domain unfolding. These findings provide novel insight into the contributions of oxidative modification to the formation of age-related cataract. Chapter 3 describes the impacts that a highly crowded environment comparable to the eye lens has on the structure and function of the molecular chaperone alphaB-crystallin. Macromolecular crowding using Ficoll 400 induces significant destabilisation, unfolding, an increase in size/oligomeric state, and a loss of chaperone function, leading to kinetically distinct amorphous and fibrillar aggregation.
These results are recapitulated in principle using the biologically relevant crowding agent bovine gamma-crystallin. Aggregation is prevented by the lens partner protein alphaA-crystallin at physiologically relevant ratios through an increase in the stability of the alphaA/alphaB-crystallin complex. These results complement multiple dilute in vitro and in vivo studies and provide support for therapeutic approaches to prevent and reverse cataract via alpha-crystallin stabilisation. Chapter 4 investigates capillary electrophoresis as a method for studying the conformational heterogeneity of a protein. Bovine serum albumin (BSA), yeast alcohol dehydrogenase (YADH), and bovine alpha-lactalbumin (BLA) were used to assess the application of this method to various conformational aspects in comparison to SEC-MALS. The method distinguished between BSA oligomers and two different monomer populations, multiple YADH monomer and tetramer conformations, and apo- and holo-BLA. The 'dispersity of electrophoretic mobilities' allowed a relative comparison of the levels of conformational heterogeneity between unrelated proteins. This enables better interpretation of the heterogeneity of more complex proteins such as post-translationally modified crystallins in vivo and oligomeric alpha-crystallin. Overall, this thesis provides new insights into the molecular basis for post-translational and environmental changes in alpha- and gamma-crystallins that cause cataract.

    Assessing and Enabling Independent Component Analysis As A Hyperspectral Unmixing Approach

    As a result of its capacity for material discrimination, hyperspectral imaging has been utilized for applications ranging from mining to agriculture to planetary exploration. One of the most common methods of exploiting hyperspectral images is spectral unmixing, which is used to discriminate and locate the various types of materials that are present in the scene. When this processing is done without the aid of a reference library of material spectra, the problem is called blind or unsupervised spectral unmixing. Independent component analysis (ICA) is a blind source separation approach that operates by finding outputs, called independent components, that are statistically independent. ICA has been applied to the unsupervised spectral unmixing problem, producing intriguing, if somewhat unsatisfying, results. This dissatisfaction stems from the fact that independent components are subject to a scale ambiguity which must be resolved before they can be used effectively in the context of the spectral unmixing problem. In this dissertation, ICA is explored as a spectral unmixing approach. Various processing steps that are common in many ICA algorithms are examined to assess their impact on spectral unmixing results. Synthetically generated but physically realistic data are used to allow the assessment to be quantitative rather than merely qualitative. Additionally, two algorithms, class-based abundance rescaling (CBAR) and extended class-based abundance rescaling (CBAR-X), are introduced to enable accurate rescaling of independent components. Experimental results demonstrate the improved rescaling accuracy provided by the CBAR and CBAR-X algorithms, as well as the general viability of ICA as a spectral unmixing approach.
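The scale ambiguity described above follows directly from the linear mixing model: if X = AS reproduces the data, then so does (AD⁻¹)(DS) for any invertible diagonal D, so raw independent components carry no absolute abundance scale. A minimal numpy illustration with synthetic (hypothetical) endmembers and abundances:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear mixing model X = A @ S for a hyperspectral scene:
# A holds endmember spectra (bands x materials),
# S holds per-pixel abundances (materials x pixels).
A = rng.uniform(0.0, 1.0, size=(50, 3))    # 50 bands, 3 materials
S = rng.dirichlet(np.ones(3), size=200).T  # abundances sum to 1 per pixel
X = A @ S

# Any invertible diagonal rescaling D leaves the observed data unchanged,
# so a blind method cannot recover the abundance scale from X alone.
D = np.diag([2.0, 0.5, 7.0])
A_rescaled = A @ np.linalg.inv(D)
S_rescaled = D @ S
print(np.allclose(A_rescaled @ S_rescaled, X))
```

This is the indeterminacy that post-processing steps such as abundance rescaling must resolve before independent components can be interpreted as physical abundance maps.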

    The Galaxy Environment of Quasars in the Clowes-Campusano Large Quasar Group

    Quasars have been used as efficient probes of high-redshift galaxy clustering, as they are known to favour overdense environments. Quasars may also trace the large-scale structure of the early universe (0.4 ≲ z ≲ 2) in the form of Large Quasar Groups (LQGs), which have comparable sizes (~100-200 h⁻¹ Mpc) to the largest structures seen at the present epoch. This thesis describes an ultra-deep, wide-field optical study of a region containing three quasars from the largest known LQG, the Clowes-Campusano LQG of at least 18 quasars at z ≈ 1.3, to examine their galaxy environments and to find indications of any associated large-scale structure in the form of galaxies. The optical data were obtained using the Big Throughput Camera (BTC) on the 4-m Blanco telescope at the Cerro Tololo Interamerican Observatory (CTIO) over two nights in April 1998, resulting in ultra-deep V, I imaging of a 40.6 × 34.9 arcmin² field centred at 10h47m30s, +05°30'00" containing three quasars from the LQG as well as four quasars at higher redshifts. The final catalogues contain 10 sources and are 50% complete to V ≈ 26.35 and I ≈ 25.85 in the fully exposed areas. The Cluster Red Sequence method of Gladders & Yee (2000) is used to identify and characterise galaxy clusters in the BTC field. The method is motivated by the observation that the bulk of early-type galaxies in all rich clusters lie along a tight, linear colour-magnitude relation - the cluster red sequence - which evolves with redshift, allowing the cluster redshift to be estimated from the colour of the red sequence. The method is applied to the detection of high-redshift clusters in the BTC field through the selection of galaxies redder than the expected colour of the z = 0.5 red sequence. A 2σ excess of these red galaxies is found in the BTC field in comparison to the 27 arcmin² EIS-DEEP HDF-South field. These galaxies are shown from the EIS-DEEP UBVRIJHKs photometry to be early-type galaxies at 0.7 ≲ z ≲ 1.5.
This excess, corresponding to 1000 extra red galaxies over the BTC field, along with the 3σ excess of MgII absorbers observed at 1.2 < z < 1.3 (Williger et al., 2000), supports the hypothesis that the Clowes-Campusano LQG traces a large-scale structure in the form of galaxies at z ≈ 1.3. Four high-redshift cluster candidates are found, one of which is confirmed by additional K data to be at z = 0.8 ± 0.1. Two of the high-redshift clusters are associated with quasars: the z = 1.426 quasar is located on the periphery of a cluster of V − I ≈ 3 galaxies; and the z = 1.226 LQG quasar is found within a large-scale structure of 100-150 red galaxies extending over 2-3 h⁻¹ Mpc. Additional K imaging confirms their association with the quasar, with red sequences at V − K ≈ 6.9 and I − K ≈ 4.3 indicating a population of 15-18 massive ellipticals at z = 1.2 ± 0.1 that are concentrated in two groups on either side of the quasar. The four z ≈ 1.3 quasars in the BTC field are found in a wide variety of environments, from those indistinguishable from the field to those associated with rich clusters, but are on average in overdense regions comparable to poor clusters. These results are similar to those of previous studies of quasars at these redshifts, and are consistent with the quasars being hosted by massive ellipticals which trace mass in the same biased manner. It is also notable that the quasars associated with clustering are located on the cluster peripheries rather than in the high-density cluster cores, a result which is initially surprising given that quasars are thought to be hosted by massive elliptical galaxies, but in retrospect can be understood in the framework of both galaxy-interaction and galaxy-formation quasar-triggering mechanisms.
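The red-galaxy selection underlying the Cluster Red Sequence method (keeping only galaxies redder than the expected colour of the z = 0.5 red sequence in colour-magnitude space) can be sketched as follows; the catalogue and the red-sequence slope and intercept are synthetic placeholders, not the thesis's fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic colour-magnitude catalogue: I magnitudes and V-I colours.
I_mag = rng.uniform(18.0, 25.0, size=1000)
VI = rng.uniform(0.5, 3.5, size=1000)

# Hypothetical z = 0.5 red sequence, modelled as a shallow linear
# colour-magnitude relation: colour = intercept + slope * magnitude.
slope, intercept = -0.05, 3.0
red_sequence_colour = intercept + slope * I_mag

# Select galaxies redder than the expected z = 0.5 red sequence,
# i.e. candidates for membership of higher-redshift clusters.
candidates = VI > red_sequence_colour
print(candidates.sum(), "red galaxy candidates")
```

In practice the candidate list would then be searched for spatial overdensities, and the colour of each detected red sequence would provide a redshift estimate.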

    Recycling process of permanent magnets by polymer binder using injection molding technique

    Rare earth elements (REE) are classified as critical metals due to their technological importance and geopolitical supply criteria. They are used in a wide range of applications, including the manufacture of magnets, battery electrodes, catalysts, and polishing powders. Many of these applications are important for so-called "green" technologies. Permanent magnets are the most important application in terms of market size, particularly for the neodymium, praseodymium, dysprosium, and terbium used in NdFeB magnets. The demand for rare earth elements for the production of magnets is increasing and this trend is expected to continue in the coming years (Langkau S. 2020; Li J. 2020; Goodenough K.M. et al. 2018). To mitigate the risks associated with that demand, steps have been taken to develop recycling technologies to reuse NdFeB magnets. While industrial NdFeB scrap is already being recovered, the recycling of magnets from end-of-life products is still largely limited to laboratory and pilot projects. The following work presents the results of the material analysis, which confirm the possibility of recycling magnetic materials by incorporating them into a polymer matrix and processing them by injection molding.
The main goal of this dissertation is the question of how the closed-loop recycling process of neodymium magnets from electronic waste should be designed. To answer this question, the following aspects are relevant:
• The choice of technologies/processes used for recycling and processing.
• Evidence of the reuse of neodymium magnets recovered from WEEE (Waste Electrical and Electronic Equipment).
• Process-flow analysis and final product evaluation (polymer/magnet compound).
• The effect of the magnetic particles' characteristics (size, distribution, and content) on the viscosity and flow behavior of the material during the injection molding process.
• Analysis of the influence of residual magnetization on the flow behavior and of the targeted arrangement of magnetic particles in the component.
• Technical-economic analysis, which decisively contributes to whether and to what extent the introduction of the process is achievable.
Based on an extensive analysis, the optimal process parameters and the injection molding possibilities of the material used are discussed along the whole processing line. The demand for NdFeB magnets in motor applications is growing and is expected to increase in the coming years. In particular, the demand for e-bikes and e-vehicles is gaining importance (Kampker A. et al. 2021; Pollák F. 2021; Flores P.J 2021). As a result, the demand for heavy rare earths will increase, necessitating the development of recycling systems for these materials, to which this thesis contributes a basic concept for closing the loop.

    Light-driven bimorph soft actuators: Design, fabrication, and properties

    Soft robots that can move like living organisms and adapt to their surroundings are currently in the limelight, from fundamental studies to technological applications, due to their advantages in material flexibility, human-friendly interaction, and biological adaptability that surpass conventional rigid machines. Light-fueled smart actuators based on responsive soft materials are considered to be among the most promising candidates to advance the field of untethered soft robotics, thereby attracting considerable attention amongst materials scientists and microroboticists investigating photomechanics, photoswitches, bioinspired design, and actuation. In this review, we discuss recent state-of-the-art advances in light-driven bimorph soft actuators, with a focus on the bilayer strategy, i.e., the integration of photoactive and passive layers within a single material system. Bilayer structures can endow soft actuators with unprecedented features such as ultrasensitivity, programmability, superior compatibility, robustness, and sophisticated controllability. We begin with an explanation of the working principle of bimorph soft actuators and an introduction to synthesis pathways toward light-responsive materials for soft robotics. Then, photothermal and photochemical bimorph soft actuators are introduced in turn, with an emphasis on design strategies, actuation performance, underlying mechanisms, and emerging applications. Finally, this review concludes with a perspective on the existing challenges and future opportunities in this nascent research frontier.

    Sparse Signal Recovery Based on Compressive Sensing and Exploration Using Multiple Mobile Sensors

    The work in this dissertation is focused on two areas within the general discipline of statistical signal processing. First, several new algorithms are developed and exhaustively tested for solving the inverse problem of compressive sensing (CS). CS is a recently developed sub-sampling technique for signal acquisition and reconstruction which is more efficient than traditional Nyquist sampling. It opens the possibility of compressed data acquisition approaches that directly acquire just the important information of the signal of interest. Many natural signals are sparse or compressible in some domain, such as the pixel domain of images, time, frequency, and so forth. The notion of compressibility or sparsity here means that many coefficients of the signal of interest are either zero or of low amplitude in some domain, while only a few coefficients dominate. Therefore, we may not need to take many direct or indirect samples from the signal or phenomenon to capture its important information. As a simple example, one can think of a system of linear equations with N unknowns. Traditional methods suggest solving N linearly independent equations for the unknowns. However, if many of the variables are known to be zero or of low amplitude, then intuitively speaking, there will be no need for N equations. Unfortunately, in many real-world problems, the number of non-zero (effective) variables is unknown. In these cases, CS is capable of solving for the unknowns in an efficient way. In other words, it enables us to collect the important information of the sparse signal with a low number of measurements. Then, given that the signal is sparse, extracting its important information is the challenge that needs to be addressed.
Since most of the existing recovery algorithms in this area need some prior knowledge or parameter tuning, applying them to real-world problems with good performance is difficult. In this dissertation, several new CS algorithms are proposed for the recovery of sparse signals. The proposed algorithms mostly do not require any prior knowledge of the signal or its structure. In fact, these algorithms can learn the underlying structure of the signal from the collected measurements and successfully reconstruct the signal with high probability. The other merit of the proposed algorithms is that they are generally flexible in incorporating any prior knowledge of the noise, sparsity level, and so on. The second part of this study is devoted to the deployment of mobile sensors in circumstances where the number of sensors is inadequate to sample the entire region. Therefore, deciding where to deploy the sensors, so as to both explore new regions and refine knowledge in already visited areas, is of high importance. Here, a new framework is proposed to decide on the trajectories of sensors as they collect measurements. The proposed framework has two main stages. The first stage performs interpolation/extrapolation to estimate the phenomenon of interest at unseen locations, and the second stage decides on an informative trajectory based on the collected and estimated data. This framework can be applied to various problems such as tuning the constellation of sensor-bearing satellites, robotics, or any type of adaptive sensor placement/configuration problem. Depending on the problem, some modifications of the constraints in the framework may be needed. As an application of this work, the proposed framework is applied to a surrogate problem related to the constellation adjustment of sensor-bearing satellites.
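The sparse-recovery intuition in the first part (fewer than N equations suffice when only a few unknowns are non-zero) can be illustrated with Orthogonal Matching Pursuit, a standard greedy CS solver; this is a generic textbook sketch, not one of the dissertation's proposed algorithms.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Greedily pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected support, then update residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(3)
N, m, k = 50, 25, 3                 # 50 unknowns, only 25 measurements, 3 nonzeros
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
y = Phi @ x                         # underdetermined linear system

x_hat = omp(Phi, y, k)
print("exact recovery:", np.allclose(x_hat, x, atol=1e-6))
```

With a random Gaussian measurement matrix, 25 measurements recover the 3-sparse 50-dimensional signal with high probability, even though the linear system is underdetermined, which is the core promise of CS.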