
    When a thin periodic layer meets corners: asymptotic analysis of a singular Poisson problem

    The present work deals with the solution of the Poisson equation in a bounded domain made of a thin periodic layer of finite length embedded in a homogeneous medium. We provide and justify a high-order asymptotic expansion that takes into account both the boundary-layer effect occurring in the vicinity of the periodic layer and the corner singularities appearing near the extremities of the layer. Our approach combines the method of matched asymptotic expansions with the method of periodic surface homogenization; a complete justification is included in the paper and its appendix.
    Comment: 58 pages
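    For orientation, the LaTeX fragment below sketches the kind of model problem and two-scale ansatz such an analysis typically uses; the notation is assumed for illustration and is not taken from the paper.

        % Model problem and two-scale ansatz (notation assumed, not the paper's):
        % Poisson equation in a domain \Omega containing a thin periodic layer of
        % thickness \varepsilon; the expansion superposes far-field terms u_n and
        % periodic boundary-layer correctors \Pi_n in the fast variable x/\varepsilon,
        % matched near the corner points at the layer's extremities.
        \[
          -\Delta u^{\varepsilon} = f \ \text{in } \Omega, \qquad
          u^{\varepsilon} = 0 \ \text{on } \partial\Omega,
        \]
        \[
          u^{\varepsilon}(x) \approx \sum_{n \ge 0} \varepsilon^{n}
          \left( u_{n}(x) + \Pi_{n}\!\left(x, \tfrac{x}{\varepsilon}\right) \right).
        \]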

    Synergetic use of polarimetric Doppler radars at Ka- and C-band for retrieval of water drop and ice particle size distributions

    The transition from bulk to spectral-bin microphysics schemes within numerical weather prediction (NWP) models promises more realistic simulations of cloud-resolving processes and, ultimately, improved weather forecasts. However, the required size distributions of water drops and ice particles have so far been parameterized from sparse observations, typically from ground-based rain disdrometers or aircraft in-situ measurements. The current work introduces a retrieval method that derives size distributions from the synergetic use of a vertically pointing Ka-band radar and a polarimetric C-band radar. The method is based on using the full height-resolved Doppler spectra instead of mean values of reflectivity and radial velocity inside the radar bin volume. Within a Mie- and T-matrix-based radar forward operator, Doppler spectra are simulated from assumed size distributions, taking into account the attenuation at Ka-band. The parameters of the distributions are varied iteratively until the differences between simulated and observed radar profiles are minimized. Additional data, e.g. from radiosondes and SODAR wind profilers, are used to estimate and minimize the most relevant error sources, expected from vertical air motion and turbulence. First results were obtained from a case study of 8 July 2007 during the Convection and Orographically Induced Precipitation Study (COPS)
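    To make the retrieval loop concrete, here is a minimal Python sketch of the iteration described above. The gamma drop size distribution, the toy forward operator, and all names are illustrative assumptions, not the authors' code; a real forward operator would use Mie/T-matrix scattering and model the Ka-band attenuation.

        # Minimal sketch of the retrieval iteration (all names and the
        # simplified forward model are assumptions, not the authors' code).
        import numpy as np
        from scipy.optimize import least_squares

        def gamma_dsd(d, n0, mu, lam):
            """Assumed gamma drop size distribution N(D) = N0 * D^mu * exp(-lam*D)."""
            return n0 * d**mu * np.exp(-lam * d)

        def simulate_spectrum(params, diameters):
            """Toy forward operator mapping a DSD to a 'Doppler spectrum'.
            A real operator would use Mie/T-matrix scattering and account
            for attenuation at Ka-band."""
            n0, mu, lam = params
            # Rayleigh-like D^6 weighting stands in for the scattering model.
            return gamma_dsd(diameters, n0, mu, lam) * diameters**6

        def retrieve_dsd(observed, diameters, first_guess=(1e3, 2.0, 2.0)):
            """Vary DSD parameters until simulated and observed spectra agree."""
            residual = lambda p: simulate_spectrum(p, diameters) - observed
            fit = least_squares(residual, first_guess, bounds=(0.0, np.inf))
            return fit.x  # retrieved (N0, mu, lambda)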

    Journeys of social conscience: Humanities I in Action at Hong Kong International School

    In this presentation, high school teachers Marty Schmidt and Mike Kersten present a service-learning course they teach at Hong Kong International School, Humanities I in Action, which offers students a transformative journey of social conscience. A wide-ranging, interdisciplinary curriculum has been developed over the past ten years that enables students to consider their place in the world. An essential part of the curriculum is a set of ten out-of-the-classroom experiences, including a 4-day trip to an orphanage in China. The depth gained in this course is also due in part to the amount of time, 80 minutes/day for 180 days, spent with motivated students who have chosen to take the class. Following an introduction to the course curriculum, the presenters share a recently created video that provides interview excerpts from students who have participated in and been deeply impacted by the course. Using Marty's research on social conscience as a guide, student comments in the video are metaphorically placed onto the symbol of a labyrinth to provide a unifying structure for their individual journeys through the course. You may learn more about social conscience curricula from Marty's blog: http://martinschmidtinasia.wordpress.com

    Running Neutrino Mass Parameters in See-Saw Scenarios

    We systematically analyze quantum corrections in see-saw scenarios, including effects from above the see-saw scales. We derive approximate renormalization group equations for neutrino masses, lepton mixings and CP phases, yielding an analytic understanding and a simple estimate of the size of the effects. Even for hierarchical masses, they often exceed the precision of future experiments. Furthermore, we provide a software package allowing for a convenient numerical renormalization group analysis, with heavy singlets being integrated out successively at their mass thresholds. We also discuss applications to model building and related topics.
    Comment: 49 pages, 9 figures; minor corrections in Sec. 6.5.1; the accompanying software packages REAP/MPT can be downloaded from http://www.ph.tum.de/~rg
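    For orientation, the LaTeX fragment below records the commonly quoted one-loop renormalization group equation for the effective dimension-5 neutrino mass operator below the see-saw thresholds; the coefficients are the standard literature values and should be checked against the paper's own conventions.

        % One-loop running of the dimension-5 neutrino mass operator \kappa
        % below the lightest see-saw scale (standard literature form, not
        % copied from this paper).
        \[
          16\pi^{2} \frac{\mathrm{d}\kappa}{\mathrm{d}t}
            = C \left( Y_{e}^{\dagger} Y_{e} \right)^{T} \kappa
            + C \, \kappa \left( Y_{e}^{\dagger} Y_{e} \right)
            + \bar{\alpha} \, \kappa,
          \qquad t = \ln(\mu / \mu_{0}),
        \]
        % with C = -3/2 in the SM and C = 1 in the MSSM, and \bar{\alpha} a
        % flavour-diagonal combination of gauge and Yukawa contributions.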

    Bulkloading and Maintaining XML Documents

    The popularity of XML as an exchange and storage format brings about massive amounts of documents to be stored, maintained and analyzed, a challenge that has traditionally been tackled with Database Management Systems (DBMS). To open up the content of XML documents to analysis with declarative query languages, efficient bulk-loading techniques are necessary. Database technology has traditionally offered support for these tasks, but it still falls short of providing efficient automation for the challenges that large collections of XML data raise. As a storage back-end, many applications rely on relational databases, which are designed for large data volumes. This paper studies bulk-load and update algorithms for XML data stored in relational format and outlines opportunities and problems. We investigate both (1) bulk insertion and deletion and (2) updates in the form of edit scripts, which rely heavily on pointer-chasing techniques that are often considered orthogonal to the algebraic operations relational databases are optimized for. To get the most out of relational database systems, we show that one should use edit scripts with care and replace them with bulk operations whenever more than a very small portion of the database is updated. We implemented our ideas on top of the Monet Database System and benchmarked their performance
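    A minimal Python sketch of the update policy the paper argues for; the threshold value and the database-interface names (node_count, apply_edit, delete_document, bulk_load) are assumptions for illustration, not the Monet API.

        # Sketch of the policy: per-node edit scripts only for small changes,
        # set-oriented bulk reload otherwise. Threshold and interface names
        # are illustrative assumptions.
        BULK_THRESHOLD = 0.05  # assumed cut-off: >5% of nodes updated -> bulk path

        def apply_updates(db, document, edit_script):
            touched = len(edit_script)
            total = db.node_count(document)          # hypothetical helper
            if touched / total <= BULK_THRESHOLD:
                for op in edit_script:               # pointer-chasing, per-node updates
                    db.apply_edit(document, op)      # hypothetical helper
            else:
                db.delete_document(document)         # hypothetical helper
                db.bulk_load(document.reparse())     # set-oriented bulk reload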

    Location and analysis of lightning discharges by recording VLF atmospherics within a measurement network

    Natural lightning discharges can be located in various frequency intervals via the electromagnetic impulse waves (atmospherics) they radiate. In this work, a measurement system based on magnetic sensors was developed and tested, with its detection focus in the VLF range. A wide variety of lightning discharges were located and analyzed with measurement networks in Germany, Brazil and Australia. Through an elaborate determination of the signal arrival times at the measuring stations and the use of an extended time-of-arrival method for lightning location, which admits the height of the discharge as a parameter in addition to longitude and latitude, mean timing errors of only 0.2 µs were achieved. Simulations, statistical comparisons and alternative methods of determining lightning height show that the computed height represents a meaningful physical quantity; it makes it possible to distinguish cloud flashes from ground flashes in a simple way. The location accuracy and the detection efficiency of the recorded lightning events were examined through comparisons with a number of other lightning detection networks. It turned out that our networks reported up to ten times as many real lightning events. Comparisons of the current amplitudes show that the additionally detected events belong almost exclusively to weaker flashes with discharge currents below 5 kA; strong amplitudes are measured by the different networks in very good agreement. The location differences between simultaneous events in the compared data sets lie below one kilometre in most cases. Owing to the large volume of data and the determination of the emission heights of the pulses, it became possible for the first time to map three-dimensional discharge paths with a VLF measurement network. Analysis of the recorded waveforms allowed the pulses to be classified into different categories, revealing significant differences between cloud and ground flashes as well as between preliminary and main discharges
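    The extended time-of-arrival solve lends itself to a compact illustration. The following Python sketch uses a simplified Cartesian geometry, so it shows the shape of the method (solving for position, height and emission time from arrival-time residuals) rather than the system's actual implementation.

        # Toy time-of-arrival location with height as a free parameter
        # (Cartesian geometry; all names are illustrative assumptions).
        import numpy as np
        from scipy.optimize import least_squares

        C = 299_792.458  # propagation speed in km/s (vacuum light speed)

        def locate(stations, arrival_times, guess=(0.0, 0.0, 5.0, 0.0)):
            """stations: (N, 3) array of x, y, z in km; arrival_times: (N,) in s.
            Returns the fitted (x, y, height, emission time)."""
            def residual(p):
                x, y, z, t0 = p
                dist = np.linalg.norm(stations - np.array([x, y, z]), axis=1)
                return (t0 + dist / C) - arrival_times
            fit = least_squares(residual, guess)
            return fit.x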

    Indexing real-world data using semi-structured documents

    We address the problem of deriving meaningful semantic index information for a multimedia database using a semi-structured document model. We show how our framework, called feature grammars, can be used to (1) exploit third-party interpretation modules for real-world unstructured components, and (2) use context-free grammars to convert such poorly structured or unstructured input into semi-structured output. The basic idea is to enrich context-free grammars with special symbols called detectors, which provide the necessary structure just-in-time to satisfy a parser look-ahead. A prototype implementation has been constructed in the Acoi project to demonstrate the feasibility of this approach for indexing both images and audio documents
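    A toy Python illustration of the detector idea; the grammar, the '@' marker and the detector function are invented for illustration and are not the Acoi implementation. Expanding a detector symbol calls an external analysis module whose output supplies parse structure just-in-time.

        # Toy detector-augmented grammar (all names are assumptions).
        def color_detector(image_path):
            """Stand-in for a third-party interpretation module."""
            return {"dominant_color": "red"}   # assumed output shape

        GRAMMAR = {
            "Image": ["Location", "@color"],   # '@' marks a detector symbol
        }
        DETECTORS = {"@color": color_detector}

        def parse(symbol, token):
            if symbol in DETECTORS:            # detector: compute structure just-in-time
                return {symbol: DETECTORS[symbol](token)}
            if symbol not in GRAMMAR:          # terminal: bind the input token
                return {symbol: token}
            return {symbol: [parse(s, token) for s in GRAMMAR[symbol]]}

        # parse("Image", "photo.jpg") yields a nested structure in which the
        # detector output is embedded alongside the matched terminal.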

    Querying XML Documents Made Easy: Nearest Concept Queries

    Due to the ubiquity and popularity of XML, users often find themselves in the following situation: they want to query XML documents that contain potentially interesting information, but they are unaware of the mark-up structure used. For example, it is easy to guess the contents of an XML bibliography file, whereas the mark-up depends on the methodological, cultural and personal background of the author(s). Nonetheless, it is this hierarchical structure that forms the basis of XML query languages. In this paper we exploit the tree structure of XML documents to equip users with a powerful tool, the meet operator, that lets them query databases with whose content they are familiar, but without requiring knowledge of tags and hierarchies. Our approach is based on computing the lowest common ancestor of nodes in the XML syntax tree: e.g., given two strings, we look for nodes whose offspring contains both strings. The novelty of this approach is that the result type is unknown at query formulation time and depends on the database instance. If the two strings are an author's name and a year, mainly publications by that author in that year are returned. If the two strings are numbers, the result mostly consists of publications that have those numbers as year or page numbers. Because the result type of a query is not specified by the user, we refer to the lowest common ancestor as the nearest concept. We also present a running example taken from the bibliography domain and demonstrate that the operator can be implemented efficiently
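    A minimal Python sketch of the meet idea over an ElementTree document; the helper names are assumptions, and a production implementation would use indexes rather than tree scans. It finds the nodes matching each search string and returns their lowest common ancestors as the nearest concepts, whose tags are only known per database instance.

        # Toy 'meet' operator: lowest common ancestors of two keyword hits.
        import xml.etree.ElementTree as ET

        def build_parents(root):
            return {child: parent for parent in root.iter() for child in parent}

        def ancestors(node, parents):
            path = [node]
            while node in parents:
                node = parents[node]
                path.append(node)
            return path                      # node ... root, bottom-up

        def meet(root, term_a, term_b):
            parents = build_parents(root)
            hits_a = [n for n in root.iter() if n.text and term_a in n.text]
            hits_b = [n for n in root.iter() if n.text and term_b in n.text]
            results = []
            for a in hits_a:
                chain = ancestors(a, parents)
                for b in hits_b:
                    # lowest ancestor of 'a' that also dominates 'b'
                    results.append(next(x for x in chain if b in x.iter()))
            return results

        # Example: meet(tree, "Codd", "1970") might return <article> elements
        # in one bibliography and <entry> elements in another.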