    Similitude of ice dynamics against scaling of geometry and physical parameters

    The concept of similitude is commonly employed in the fields of fluid dynamics and engineering but rarely used in cryospheric research. Here we apply this method to the problem of ice flow to examine the dynamic similitude of isothermal ice sheets in the shallow-shelf approximation against the scaling of their geometry and physical parameters. Carrying out a dimensional analysis of the stress balance, we obtain dimensionless numbers that characterize the flow. Requiring that these numbers remain the same under scaling, we obtain conditions that relate the geometric scaling factors, the parameters for ice softness, surface mass balance and basal friction, and the ice sheet's intrinsic response time to each other. We demonstrate that these scaling laws are the same for both the (two-dimensional) flow-line case and the three-dimensional case. The theoretically predicted ice-sheet scaling behavior agrees with results from numerical simulations that we conduct in flow-line and three-dimensional conceptual setups. We further investigate analytically the implications of geometric scaling of ice sheets for their response time. With this study we provide a framework which, under several assumptions, allows for a fundamental comparison of ice-dynamic behavior across different scales. It proves useful in the design of conceptual numerical model setups and could also be helpful for designing laboratory glacier experiments. The concept might also be applied to real-world systems, e.g., to examine the response times of glaciers, ice streams, or ice sheets to climatic perturbations.
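
    As a toy illustration of the similitude method, and not the paper's actual dimensionless numbers: the classic volume-response-time estimate for an ice mass of thickness scale H and surface mass balance a is already dimensionless when written as Pi = a*tau/H. Requiring Pi to stay invariant under hypothetical scalings of thickness and mass balance immediately yields a scaling law for the response time tau:

        \Pi = \frac{a\,\tau}{H}, \qquad
        H \to \beta H, \quad a \to \gamma a
        \quad\Longrightarrow\quad
        \tau \to \frac{\beta}{\gamma}\,\tau

    The paper's analysis proceeds in the same spirit but starts from the full shallow-shelf stress balance, which is why its scaling conditions additionally involve the ice softness and basal friction parameters.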

    Efficient Hardware Implementation of Constant Time Sampling for HQC

    HQC is one of the code-based finalists in the last round of the NIST post-quantum cryptography standardization process. In this process, security and implementation efficiency are key metrics for the selection of candidates. A critical compute kernel with respect to efficient hardware implementations and security in HQC is the sampling method used to derive random numbers. Due to its security criticality, an updated sampling algorithm was recently presented to increase its robustness against side-channel attacks. In this paper, we pursue a cross-layer approach to optimize this new sampling algorithm so as to enable an efficient hardware implementation without compromising the original algorithmic security and side-channel robustness. We compare our cross-layer-based implementation to a direct hardware implementation of the original algorithm and to optimized implementations of the previous sampler version. All implementations are evaluated on a Xilinx Artix-7 FPGA. Our results show that our approach reduces the latency by a factor of 24 compared to the original algorithm and by a factor of 28 compared to the previously used sampler, while requiring significantly fewer resources.
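
    Such samplers draw a fixed number of distinct positions from a secret seed with control flow independent of that seed. As a rough sketch of the rejection-free, constant-time style such designs aim for, and not the HQC specification itself, consider the following Python fragment; the function name, the use of SHAKE-256 as a stand-in PRNG, and the toy parameters are all our assumptions:

        import hashlib

        def sample_fixed_weight(seed: bytes, n: int, w: int) -> list:
            """Illustrative fixed-weight sampler (assumed design, not the HQC spec).

            Derives w distinct positions in [0, n) from a seed. The number of
            PRNG calls and the control flow do not depend on the seed's value,
            which is the property a constant-time hardware sampler needs."""
            # Expand the seed into w 32-bit words (SHAKE-256 as a stand-in PRNG).
            stream = hashlib.shake_256(seed).digest(4 * w)
            pos = [0] * w
            for i in range(w - 1, -1, -1):
                r = int.from_bytes(stream[4 * i:4 * i + 4], 'little')
                # Map r into [i, n) without rejection: i + floor(r * (n - i) / 2^32).
                cand = i + ((r * (n - i)) >> 32)
                # Branch-free collision handling: if cand already occurs among the
                # later, already-fixed positions, fall back to index i, which is
                # guaranteed to be unused (all later positions are >= i + 1).
                dup = 0
                for j in range(i + 1, w):
                    dup |= int(pos[j] == cand)
                pos[i] = dup * i + (1 - dup) * cand
            return pos

        print(sample_fixed_weight(b'\x00' * 32, n=50, w=5))  # toy invocation

    The arithmetic select at the end of the loop body replaces a data-dependent branch, which is what keeps the control flow independent of the secret positions.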

    Search for New Physics in rare decays at LHCb

    Rare heavy-flavor decays provide stringent tests of the Standard Model of particle physics and allow searches for possible New Physics scenarios. The LHCb experiment at CERN is the ideal place for these searches, as it has recorded the world's largest sample of beauty mesons. The status of the rare-decay analyses with 1 fb⁻Âč of pp collisions at √s = 7 TeV collected by the LHCb experiment in 2011 is reviewed. The world's most precise measurement of the angular structure of B⁰ → K*⁰Ό⁺Ό⁻ decays is discussed, as well as the isospin asymmetry measurement in B → K⁜* Ό⁺Ό⁻ decays. The most stringent upper exclusion limit on the branching fraction of B_s⁰ → Ό⁺Ό⁻ decays is shown, as well as searches for lepton-number- and lepton-flavor-violating processes. (Proceedings of an invited talk at the 4th Workshop on Theory, Phenomenology and Experiments in Heavy Flavour Physics, Capri, Italy, 11-13 June 2012.)

    Phylogenomic analysis of natural products biosynthetic gene clusters allows discovery of arseno-organic metabolites in model streptomycetes

    We are indebted to Marnix Medema, Paul Straight and Sean Rovito for useful discussions and critical reading of the manuscript, as well as to Alicia Chagolla and Yolanda Rodriguez of the MS Service of Unidad Irapuato, Cinvestav, and Araceli Fernandez for technical support in high-performance computing. This work was funded by Conacyt Mexico (grants No. 179290 and 177568) and FINNOVA Mexico (grant No. 214716) to FBG. PCM was funded by a Conacyt scholarship (No. 28830) and a Cinvestav postdoctoral fellowship. JF and JFK acknowledge funding from the College of Physical Sciences, University of Aberdeen, UK.

    On Sparse Hitting Sets: From Fair Vertex Cover to Highway Dimension

    We consider the Sparse Hitting Set (Sparse-HS) problem, where we are given a set system (V, ℱ, ℬ) with two families ℱ, ℬ of subsets of the universe V. The task is to find a hitting set for ℱ that minimizes the maximum number of elements in any of the sets of ℬ. This generalizes several problems that have been studied in the literature. Our focus is on determining the complexity of some of these special cases of Sparse-HS with respect to the sparseness k, which is the optimum number of hitting-set elements in any set of ℬ (i.e., the value of the objective function). For the Sparse Vertex Cover (Sparse-VC) problem, the universe is given by the vertex set V of a graph, and ℱ is its edge set. We prove NP-hardness for sparseness k ≄ 2 and polynomial-time solvability for k = 1. We also provide a polynomial-time 2-approximation algorithm for any k. A special case of Sparse-VC is Fair Vertex Cover (Fair-VC), where the family ℬ is given by vertex neighbourhoods. For this problem it was open whether it is FPT (or even XP) parameterized by the sparseness k. We answer this question in the negative, by proving NP-hardness for constant k. We also provide a polynomial-time (2-1/k)-approximation algorithm for Fair-VC, which is better than any approximation algorithm possible for Sparse-VC or the Vertex Cover problem (under the Unique Games Conjecture). We then switch to a different set of problems derived from Sparse-HS related to the highway dimension, a graph parameter modelling transportation networks. In recent years a growing literature has shown interesting algorithms for graphs of low highway dimension. To exploit the structure of such graphs, most of them compute solutions to the r-Shortest Path Cover (r-SPC) problem, where r > 0, ℱ contains all shortest paths of length between r and 2r, and ℬ contains all balls of radius 2r. It is known that there is an XP algorithm that computes solutions to r-SPC of sparseness at most h if the input graph has highway dimension h. However, it was not known whether a corresponding FPT algorithm exists as well. We prove that r-SPC and also the related r-Highway Dimension (r-HD) problem, which can be used to formally define the highway dimension of a graph, are both W[1]-hard. Furthermore, by the result of Abraham et al. [ICALP 2011] there is a polynomial-time O(log k)-approximation algorithm for r-HD, but for r-SPC such an algorithm is not known. We prove that r-SPC admits a polynomial-time O(log n)-approximation algorithm.
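
    To pin down the objective function, here is a minimal brute-force sketch of Sparse-HS in Python (exponential time, intended only to make the definition concrete; all names are ours, not the paper's):

        from itertools import combinations

        def sparse_hs(V, F, B):
            """Brute-force Sparse Hitting Set.

            Over all subsets H of V that hit every set in F, return one that
            minimizes the sparseness, i.e. the maximum of |H ∩ Bi| over Bi in B."""
            best, best_val = None, float('inf')
            for r in range(len(V) + 1):
                for H in combinations(V, r):
                    Hs = set(H)
                    if all(Hs & set(S) for S in F):  # H hits every set of F
                        val = max((len(Hs & set(Bi)) for Bi in B), default=0)
                        if val < best_val:
                            best, best_val = Hs, val
            return best, best_val

        # Sparse-VC on a path: F = edges; B = closed neighbourhoods (Fair-VC).
        V = [1, 2, 3, 4]
        F = [{1, 2}, {2, 3}, {3, 4}]
        B = [{1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4}]
        print(sparse_hs(V, F, B))  # ({1, 3}, 2): every cover meets some neighbourhood twice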

    The classification of schools as an instrument of school governance and local profile-building. Accompanying circumstances, post-war adjustment problems, and current consequences of the classification of the vocational school system since the 1930s

    The differentiation between vocational schools (Berufsschulen), full-time vocational schools (Berufsfachschulen), and technical schools (Fachschulen) goes back to an edict decreed by the Reich Ministry for Science, Education and National Education (Reichsministerium fĂŒr Wissenschaft, Erziehung und Volksbildung) in 1937. This edict, its origins, and its long-term structural impact on the designation of vocational schools are examined and placed within a broader framework of development, on the basis of documents from the DFG research project "Data Handbook on the History of German Education, Vol. V: The German Vocational School System, 1815-1945". Special emphasis is placed upon the relation between the classification that evolved during the 1930s, the increase in functions served by the vocational schools, and their interconnection with the system of qualifications and entitlements of the general schools. (DIPF/Orig.)

    Scalable high-precision trimming of photonic resonances by polymer exposure to energetic beams

    Photonic integrated circuits (PICs) have seen an explosion in interest, through to commercialization, in the past decade. Most PICs rely on sharp resonances to modulate, steer, and multiplex signals. However, the spectral characteristics of high-quality resonances are highly sensitive to small variations in fabrication and material constants, which limits their applicability. Active tuning mechanisms are commonly employed to compensate for such deviations, consuming energy and occupying valuable chip real estate. Readily employable, accurate, and highly scalable mechanisms to tailor the modal properties of photonic integrated circuits are urgently required. Here, we present an elegant and powerful solution to achieve this in a scalable manner during the semiconductor fabrication process using existing lithography tools: by exploiting the volume shrinkage exhibited by certain polymers under exposure to energetic beams to permanently modulate the waveguide's effective index. This technique enables broadband and lossless tuning with immediate applicability in wide-ranging applications in optical computing, telecommunications, and free-space optics.
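
    For a sense of scale: a permanent effective-index change Δn_eff shifts a ring resonance by approximately Δλ ≈ λ · Δn_eff / n_g, where n_g is the group index. The numbers in the sketch below are generic illustrative values, not taken from the paper:

        # Back-of-envelope ring-resonator trimming estimate (illustrative values).
        lam = 1550e-9    # resonance wavelength [m]
        n_g = 4.2        # group index, typical for a silicon-on-insulator ring
        dn_eff = -1e-4   # assumed effective-index change from polymer shrinkage

        d_lam = lam * dn_eff / n_g
        print(f"resonance shift: {d_lam * 1e12:+.1f} pm")  # about -36.9 pm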
    • 

    corecore