
    Binary matrix factorisations under Boolean arithmetic

    For a binary matrix X, the Boolean rank br(X) is the smallest integer for which X can be factorised into the Boolean matrix product of two binary matrices A and B with inner dimension br(X). The isolation number i(X) of X is the maximum number of 1s in X such that no two of them lie in the same row, the same column, or a 2 x 2 submatrix of all 1s. In Part I of this thesis, we continue Anna Lubiw's study of firm matrices. X is said to be firm if i(X) = br(X) and this equality holds for all its submatrices. We show that the stronger concept of superfirmness of X is equivalent to the absence of odd holes in the rectangle cover graph of X, the graph in which br(X) and i(X) translate to the clique cover number and the independence number, respectively. A binary matrix is minimally non-firm if it is not firm but all of its proper submatrices are. We introduce a matrix operation that leads to generalised binary matrices and, under some conditions, preserves firmness and superfirmness. We then use this matrix operation to derive several infinite families of minimally non-firm matrices. To the best of our knowledge, minimally non-firm matrices have not been studied before, and our constructions provide the first infinite families of them. In Part II of this thesis, we explore rank-k binary matrix factorisation (k-BMF). In k-BMF, we are given an m x n binary matrix X with possibly missing entries and need to find two binary matrices A and B of dimensions m x k and k x n, respectively, which minimise the distance between X and the Boolean matrix product of A and B in the squared Frobenius norm. We present one compact and two exponential-size integer programs (IPs) for k-BMF and show that the compact IP has a weak LP relaxation, while the two exponential-size IPs have equivalent, stronger LP relaxations.
We introduce a new objective function, which differs from the traditional squared Frobenius objective by attributing to the zero entries of the input matrix a weight proportional to the number of times a zero is erroneously covered in a rank-k factorisation. For one of the exponential-size IPs we describe a computational approach based on column generation. Experimental results on synthetic and real-world datasets suggest that our integer programming approach is competitive against available methods for k-BMF and provides accurate low-error factorisations.
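As a minimal illustration of the objects in this abstract (a sketch in NumPy; the function names are ours, not taken from the thesis), the Boolean matrix product and the squared Frobenius error that k-BMF minimises can be written as:

```python
import numpy as np

def boolean_product(A, B):
    """Boolean matrix product: entry (i, j) is 1 iff A[i, l] AND B[l, j] holds for some l."""
    # Integer matmul counts matches; "> 0" turns the sum into a Boolean OR.
    return (A.astype(int) @ B.astype(int) > 0).astype(int)

def kbmf_error(X, A, B):
    """Squared Frobenius distance between binary X and the Boolean product of A and B."""
    return int(((X - boolean_product(A, B)) ** 2).sum())

# A rank-2 factorisation that reproduces X exactly has error 0.
A = np.array([[1, 0], [1, 1], [0, 1]])   # 3 x 2
B = np.array([[1, 0, 1], [0, 1, 1]])     # 2 x 3
X = boolean_product(A, B)
print(kbmf_error(X, A, B))  # → 0
```

Since all entries are binary, flipping a single entry of X changes the error by exactly one, which is why the squared Frobenius norm here simply counts mismatched entries.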

    Clifford Algebra: A Case for Geometric and Ontological Unification

    Robert Batterman’s ontological insights (2002, 2004, 2005) are apt: Nature abhors singularities. “So should we,” responds the physicist. However, Batterman’s epistemic assessments of the matter prove to be less clear, for in the same vein he writes that singularities play an essential role in certain classes of physical theories referring to certain types of critical phenomena. I devise a procedure (“methodological fundamentalism”) which exhibits how singularities, at least in principle, may be avoided within the same classes of formalisms discussed by Batterman. I show that we need not accept some divergence between explanation and reduction (Batterman 2002), or between epistemological and ontological fundamentalism (Batterman 2004, 2005). Though I remain sympathetic to the ‘principle of charity’ (Frisch 2005), which appears to favor a pluralist outlook, I nevertheless call into question some of the forms such pluralist implications take in Robert Batterman’s conclusions. It is difficult to reconcile some of the pluralist assessments that he and some of his contemporaries advocate with what appears to be a countervailing trend in a burgeoning research tradition known as Clifford (or geometric) algebra. In my critical chapters (2 and 3) I use some of the demonstrated formal unity of Clifford algebra to argue that Batterman (2002) conflates a physical theory’s ontology with its purely mathematical content. Carefully distinguishing the two and employing Clifford algebraic methods reveals a symmetry between reduction and explanation that Batterman overlooks. I refine this point by noting that geometric algebraic methods are an active area of research in computational fluid dynamics and, as applied to modeling droplet formation, appear to instantiate a “methodologically fundamental” approach.
I argue in my introductory and concluding chapters that the model of inter-theoretic reduction and explanation offered by Fritz Rohrlich (1988, 1994) provides the best framework for reconciling the burgeoning pluralism in philosophical studies of physics with the presumed claims of formal unification demonstrated by physicists’ choices of mathematical formalisms such as Clifford algebra. I show how Batterman’s insights can be reconstructed in Rohrlich’s framework, preserving Batterman’s important philosophical work minus what I consider to be his incorrect conclusions.

    Exploring scatterer anisotropy in synthetic aperture radar via sub-aperture analysis

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 189-193).
    Scattering from man-made objects in SAR imagery exhibits aspect and frequency dependencies which are not always well modeled by standard SAR imaging techniques based on the ideal point scattering model. This is particularly the case for high-resolution, wide-band, and wide-aperture data, where model deviations are even more pronounced. If ignored, these deviations will reduce recognition performance due to model mismatch, but when appropriately accounted for, these deviations from the ideal point scattering model can be exploited as attributes to better distinguish scatterers and their respective targets. With this in mind, this thesis develops an efficient modeling framework based on a sub-aperture pyramid to utilize scatterer anisotropy for the purpose of target classification. Two approaches are presented to exploit scatterer anisotropy using the sub-aperture pyramid. The first is a nonparametric classifier that learns the azimuthal dependencies within an image and makes a classification decision based on the learned dependencies. The second approach is a parametric attribution of the observed anisotropy characterizing the azimuthal location and concentration of the scattering response. Working from the sub-aperture scattering model, we develop a hypothesis test to characterize anisotropy. We start with an isolated scatterer model, which produces a test with an intuitive interpretation. We then address the problem of robustness to interfering scatterers by extending the model to account for neighboring scatterers which corrupt the anisotropy attribution. The development of the anisotropy attribution culminates with an iterative attribution approach that identifies and compensates for neighboring scatterers.
In the course of the development of the anisotropy attribution, we also study the relationship between scatterer phenomenology and our anisotropy attribution. This analysis reveals the information provided by the anisotropy attribution for two common sources of anisotropy. Furthermore, the analysis explicitly demonstrates the benefit of using wide-aperture data to produce more stable and more descriptive characterizations of scatterer anisotropy.
    by Andrew J. Kim. Ph.D.

    Analysis of DC microgrids as stochastic hybrid systems

    A modeling framework for dc microgrids and distribution systems based on the dual active bridge (DAB) topology is presented. The purpose of this framework is to accurately characterize the dynamic behavior of multi-converter systems as a function of exogenous load and source inputs. The base model is derived for deterministic inputs and then extended for the case of stochastic load behavior. At the core of the modeling framework is a large-signal DAB model that accurately describes the dynamics of both ac and dc state variables. This model addresses limitations of existing DAB converter models, which are not suitable for system-level analysis due to inaccuracy and poor upward scalability. The converter model acts as a fundamental building block in a general procedure for constructing models of multi-converter systems. System-level model construction is only possible due to structural properties of the converter model that mitigate prohibitive increases in size and complexity. To characterize the impact of randomness in practical loads, stochastic load descriptions are included in the deterministic dynamic model. The combined behavior of distributed loads is represented by a continuous-time stochastic process. Models that govern this load process are generated using a new modeling procedure, which builds incrementally from individual device-level representations. To merge the stochastic load process and deterministic dynamic models, the microgrid is modeled as a stochastic hybrid system. The stochastic hybrid model predicts the evolution of moments of dynamic state variables as a function of load model parameters. Moments of dynamic states provide useful approximations of typical system operating conditions over time. Applications of the deterministic models include system stability analysis and computationally efficient time-domain simulation. The stochastic hybrid models provide a framework for performance assessment and optimization. --Abstract, page iv

    Micro-Mechanical Voltage Tunable Fabry-Perot Filters Formed in (111) Silicon

    The MEMS (Micro-Electro-Mechanical Systems) technology is quickly evolving as a viable means to combine micro-mechanical and micro-optical elements on the same chip. One MEMS technology that has recently gained attention from the research community is the micro-mechanical Fabry-Perot optical filter. A MEMS-based Fabry-Perot consists of a vertically integrated structure composed of two mirrors separated by an air gap. Wavelength tuning is achieved by applying a bias between the two mirrors, resulting in an attractive electrostatic force which pulls the mirrors closer. In this work, we present a new micro-mechanical Fabry-Perot structure which is simple to fabricate and can be integrated with low-cost silicon photodetectors and transistors. The structure consists of a movable gold-coated oxide cantilever for the top mirror and a stationary Au/Ni-plated silicon bottom mirror. The fabrication process uses a single mask level, is self-aligned, and requires only one grown or deposited layer. Undercutting of the oxide cantilever is carried out by a combination of RIE and anisotropic KOH etching of the (111) silicon substrate. Metallization of the mirrors is provided by thermal evaporation and electroplating. The optical and electrical characteristics of the fabricated devices were studied and show promising results. A wavelength shift of 120 nm with 53 V applied bias was demonstrated by one device geometry using a 6.27 micrometer air gap. The finesse of the structure was 2.4. Modulation bandwidths ranging from 91 kHz to greater than 920 kHz were also observed. Theoretical calculations show that if mirror reflectivity, smoothness, and parallelism are improved, a finesse of 30 is attainable. The predictions also suggest that reducing the air gap to 1 micrometer results in an increased wavelength tuning range of 175 nm with a CMOS-compatible 4.75 V bias.
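The link between mirror quality and finesse mentioned in this abstract can be illustrated with the textbook ideal Fabry-Perot relation F = π√R/(1 − R), where R is the mirror intensity reflectivity. This formula and the reflectivity values below are standard-model assumptions on our part, not numbers reported in this work; real devices with rough or non-parallel mirrors fall short of the ideal.

```python
import math

def ideal_finesse(R):
    """Ideal Fabry-Perot finesse for mirror intensity reflectivity R, 0 < R < 1."""
    return math.pi * math.sqrt(R) / (1.0 - R)

# Under this ideal model, a measured finesse of ~2.4 corresponds to an
# effective reflectivity of roughly R = 0.3, while a finesse near 30
# would require R of about 0.9.
print(round(ideal_finesse(0.30), 1))  # → 2.5
print(round(ideal_finesse(0.90), 1))  # → 29.8
```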

    Quantum information outside quantum information

    Quantum theory, as counter-intuitive as a theory can get, has turned out to make predictions of the physical world that match observations so precisely that it has been described as the most accurate physical theory ever devised. Viewing quantum entanglement, superposition, and interference not as undesirable necessities but as interesting resources paved the way for the development of quantum information science. This area studies the processing, transmission, and storage of information when one accounts for the fact that information is physical and subject to the laws of nature that govern the systems it is encoded in. The development of the consequences of this idea, along with the great advances experienced in the control of individual quantum systems, has led to what is now known as the second quantum revolution, in which quantum information science has emerged as a fully grown field. As such, ideas and tools developed within the framework of quantum information theory begin to permeate other fields of research. This Ph.D. dissertation is devoted to the use of concepts and methods akin to the field of quantum information science in other areas of research. In the same way, it also considers how encoding information in quantum degrees of freedom may allow further development of well-established research fields and industries. That is, this thesis aims to study quantum information outside the field of quantum information. Four different areas are visited. A first question posed is that of the role of quantum information in quantum field theory, with a focus on the quantum vacuum. It is known that the quantum vacuum contains entanglement, but it remains unknown whether it can be accessed and exploited in experiments.
We take crucial steps in this direction by studying the extraction of vacuum entanglement in realistic models of light-matter interaction, and by giving strict mathematical conditions of general applicability that must be fulfilled for extraction to be possible at all. Another field where quantum information methods can offer great insight is quantum thermodynamics, where the idealizations made in macroscopic thermodynamics break down. Making use of a quintessential framework of quantum information and quantum optics, we study the cyclic operation of a microscopic heat engine composed of a single particle reciprocating between two finite-size baths, focusing on the consequences of removing the macroscopic idealizations. Moving a step closer to applications in society, we analyze the impact that encoding information in quantum systems and processing it in quantum computers may have on the field of machine learning. A great desideratum in this area, largely obstructed by computational cost, is that of explainable models which not only make predictions but also provide information about the decision process that triggers them. We develop an algorithm to train neural networks using explainable techniques that exploits entanglement and superposition to execute efficiently on quantum computers, in contrast with classical counterparts. Furthermore, we run it on state-of-the-art quantum computers with the aim of assessing the viability of realistic implementations. Lastly, and encompassing all the above, we explore the notion of causality in quantum mechanics from an information-theoretic point of view. While it has been known since the work of John S. Bell in 1964 that, for the same causal pattern, quantum systems can generate correlations between variables that are impossible to obtain employing only classical systems, there is a notable lack of tools to study complex causal effects whenever a quantum behavior is expected.
We fill this gap by providing general methods for the characterization of the quantum correlations achievable in complex causal patterns. Closing the circle, we make use of these tools to find phenomena of fundamental and experimental relevance back in quantum information.

    The Magmatic-hydrothermal fluid history of the Harrison Pass Pluton, Ruby Mountains, NV: implications for the Ruby Mountains-East Humboldt Range metamorphic core complex and Carlin-type Au deposits

    2016 Summer. Includes bibliographical references.
    Intrusion of the ~36 Ma, calc-alkaline, granodiorite-monzogranite Harrison Pass Pluton (HPP) occurred as magmatic fronts migrated southwest across the Great Basin during the Eocene. The HPP locally intruded the Ruby Mountains-East Humboldt Range (RMEHR), a classic Cordilleran metamorphic core complex that would undergo rapid tectonic exhumation during the late Cenozoic. Although the emplacement depth of the HPP provides an estimate for the magnitude and timing of subsequent uplift, disagreement exists between published mineral thermobarometry data and stratigraphic reconstructions. Synchronous with emplacement of the HPP was a regional hydrothermal fluid event responsible for deposition of >200 Moz of Au in sediment-hosted Carlin-type deposits (CTDs) along four linear trends. Magmatic, meteoric, and metamorphic models have been invoked to explain the origin of fluids and Au for CTDs, but few studies have directly examined the fluids generated by a potential source intrusion such as the HPP. Investigation of the magmatic-hydrothermal fluid history of the HPP, particularly the pressure-temperature conditions of fluid entrapment and fluid geochemistry, is an effective means of testing and improving existing models for the development of the RMEHR metamorphic core complex and for the origin of CTDs. Field and petrographic observations of pegmatites, aplites, miarolitic cavities, quartz veins, and multiple types of hydrothermal alteration, coupled with data from fluid inclusion microthermometry, LA-ICP-MS fluid inclusion geochemistry, and oxygen stable isotopes from magmatic and hydrothermal quartz, demonstrate that two-stage intrusive assembly was paralleled by a two-stage magmatic-hydrothermal fluid system. Early-stage fluid activity was dominated by two aqueous, low-salinity (~3 wt % eq. NaCl), B-Na-K-Rb-Sr-Cs-bearing, ore metal-poor fluids.
These fluids were entrapped at ~600-700°C and ~2400-7600 bar in pegmatites, miarolitic cavities, and quartz veins within early-stage units, as well as in quartz and calcite veins in base-metal skarns along the pluton margin and in the contact metamorphic aureole. Late-stage fluid activity was dominated by one aquo-carbonic, low-salinity (~3 wt % eq. NaCl), B-Na-K-Rb-Sr-Cs-bearing, ore metal-poor fluid. This fluid was entrapped at 570-680°C and ~4800-7200 bar in pegmatites, aplites, and quartz veins, and did not migrate out of late-stage intrusive units. Magmatic δ18O values for quartz demonstrate that this magmatic-hydrothermal fluid system evolved without significant dilution from meteoric inputs until the late influx of post-intrusion hydrothermal fluids, interpreted to be of mixed magmatic-meteoric origin. These fluids were aqueous, low-temperature (320-410°C), low-salinity (<4 wt % eq. NaCl), and were entrapped at <2400 bar in fault-hosted microcrystalline quartz veins. The entrapment conditions for early-stage magmatic-hydrothermal fluids determined from fluid inclusion microthermometry data indicate that the HPP was emplaced at depths of 9-18 km in the Ruby Mountains-East Humboldt Range. Brittle-ductile deformation of the HPP on the regionally exposed Ruby Mountain Shear Zone indicates that at least 9 km of vertical exhumation has occurred since the intrusion of the HPP. Such emplacement depth estimates are consistent with published mineral thermobarometry from the HPP and from nearby metamorphic rocks. The disparity between these estimates and the relatively shallow minimum emplacement depths of 4-6 km suggested by stratigraphic reconstructions is interpreted to support the existence of poorly preserved Mesozoic thrust sheets that augmented the thickness of the overlying rock package during the Eocene. Although a well-accepted model for the origin of fluids and Au for CTDs remains outstanding, the model of Muntean et al. (2011) argues for the separation of a low-salinity, vapor-rich, Au-partitioning fluid from a high-salinity, base metal-partitioning fluid in mid-crustal magma chambers as a critical process in the evolution of CTD ore fluids. Although HPP emplacement depths and the existence of a robust magmatic-hydrothermal fluid system are broadly supportive of this model, no evidence of a high-salinity fluid was observed. Low concentrations of base metals (Cu, Pb) and CTD pathfinder elements (As, Tl) relative to whole-rock values are not consistent with the efficient fluid partitioning of metals invoked by Muntean et al. (2011). Also, low concentrations of CTD pathfinder elements relative to published values for CTD ore fluids indicate that the HPP was not Au-enriched. However, similar salinities and δ18O values suggest that HPP fluids may represent the minor magmatic component of ore fluids detected at some CTDs, but other fluid inputs and an external Au source would be required to produce these ore fluids. Thus, it is suggested that the magmatic-hydrothermal fluid history of the HPP is more consistent with a dominantly amagmatic fluid model for the origin of CTDs.

    Timescales and processes of methane hydrate formation and breakdown, with application to geologic systems

    © The Author(s), 2020. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Ruppel, C. D., & Waite, W. F. Timescales and processes of methane hydrate formation and breakdown, with application to geologic systems. Journal of Geophysical Research: Solid Earth, 125(8), (2020): e2018JB016459, doi:10.1029/2018JB016459.
    Gas hydrate is an ice‐like form of water and low molecular weight gas stable at temperatures of roughly −10°C to 25°C and pressures of ~3 to 30 MPa in geologic systems. Natural gas hydrates sequester an estimated one-sixth of Earth's methane and are found primarily in deepwater marine sediments on continental margins, but also in permafrost areas and under continental ice sheets. When gas hydrate is removed from its stability field, its breakdown has implications for the global carbon cycle, ocean chemistry, marine geohazards, and interactions between the geosphere and the ocean‐atmosphere system. Gas hydrate breakdown can also be artificially driven as a component of studies assessing the resource potential of these deposits. Furthermore, geologic processes and perturbations to the ocean‐atmosphere system (e.g., warming temperatures) can cause not only dissociation, but also more widespread dissolution of hydrate or even formation of new hydrate in reservoirs. Linkages between gas hydrate and disparate aspects of Earth's near‐surface physical, chemical, and biological systems render an assessment of the rates and processes affecting the persistence of gas hydrate an appropriate Centennial Grand Challenge. This paper reviews the thermodynamic controls on methane hydrate stability and then describes the relative importance of kinetic, mass transfer, and heat transfer processes in the formation and breakdown (dissociation and dissolution) of gas hydrate.
Results from numerical modeling, laboratory, and some field studies are used to summarize the rates of hydrate formation and breakdown, followed by an extensive treatment of hydrate dynamics in marine and cryospheric gas hydrate systems.
    Both authors have received nearly two decades of support from the U.S. Geological Survey's (USGS's) Energy Resources Program and the Coastal/Marine Hazards and Resources Program and from numerous DOE‐USGS Interagency Agreements, most recently DE‐FE0023495. C. R. acknowledges support from NOAA's Office of Ocean Exploration and Research (OER) under NOAA‐USGS Interagency Agreement 16‐01118.
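The approximate stability ranges quoted in this abstract (roughly −10°C to 25°C and ~3 to 30 MPa) can be turned into a crude screening check. This is only an illustrative box test using the abstract's round numbers; real hydrate stability follows a pressure-temperature phase boundary, not a rectangle.

```python
def in_rough_stability_window(T_celsius, P_mpa):
    """Crude box check against the approximate methane hydrate stability
    ranges quoted in the abstract (~-10 to 25 degC, ~3 to 30 MPa).
    Real stability is a P-T curve; this is only a first-pass screen."""
    return -10.0 <= T_celsius <= 25.0 and 3.0 <= P_mpa <= 30.0

print(in_rough_stability_window(4.0, 10.0))   # → True  (deepwater-like conditions)
print(in_rough_stability_window(20.0, 0.1))   # → False (surface conditions)
```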