Consulting Project 2018/19: Manufacturing process of superconducting magnets: Analysis of manufacturing chain technologies for market-oriented industries. Report
An international consortium of more than 150 organisations worldwide is studying the feasibility of
future particle collider scenarios to expand our understanding of the inner workings of the Universe.
The core of this Future Circular Collider (FCC) study, hosted by CERN, an international organisation
near Geneva (Switzerland), is a 100 km long circular particle collider infrastructure that extends CERN's
current accelerator complex. As a first step, an intensity-frontier electron-positron collider is assumed.
The ultimate goal is to build a proton collider with a collision energy about seven times that of the
Large Hadron Collider (LHC). Such a machine must be built with novel superconducting magnet technology. Since
it takes decades for such technology to reach industrial maturity levels, R&D has already started. The
superconducting magnet system is considered the major cost driver for construction of such a proton
collider. A good cost-benefit balance for industrial suppliers is considered an important factor for the
funding of such a project.
Aim
The aim of this investigation was to identify the industrial impact potential of the key processes
needed for manufacturing novel high-field superconducting magnets and to find innovative
additional applications for these technologies outside the particle-accelerator domain. Suppliers
and manufacturing partners of CERN would benefit if the know-how could be used for other markets
and to improve their internal efficiency and competitiveness on the world market. In the long run,
more cost-effective manufacturing and the ability to serve further markets will also reduce the cost
of each step in the manufacturing chain and ultimately lower the cost of the superconducting magnet
system of a future high-energy particle collider.
Method
The project was carried out by means of the Technology Competence Leveraging method, which was
pioneered by the Vienna University of Economics and Business in Austria. It aims to find new
application fields for the three most promising technologies required to manufacture novel high-field
superconducting magnets. This is achieved by gathering information from user communities,
conducting interviews with experts in different industries, and brainstorming for new out-of-the-box
ideas. The most valuable application fields were evaluated according to their Benefit Relevance and
Strategic Fit. In the course of the project, 71 expert interviews were carried out, through which 38
new application fields with credible impact beyond particle-accelerator projects were identified. They
relate to manufacturing "superconducting Rutherford cables" (15), "thermal treatment" (10) and
"vacuum impregnation with novel epoxy" (13).
Superconducting magnet manufacturing technologies for market-oriented industries. Report
Results: Short descriptions of all application fields classified as "high potential" are given below.
Superconducting Rutherford cable
* Aircraft charging: Commercial airplanes typically spend only around 45 minutes on the ground at a
time to load and unload passengers. For future electric aircraft this time window would be too
small to recharge using conventional cables. The superconducting Rutherford cable could charge
an electric plane quickly and efficiently (see the illustrative estimate after this list).
* Electricity distribution in hybrid-electric aircraft: On a shorter time scale, hybrid-electric
aircraft are an appealing ecological technology with economic advantages. In this concept, electricity
for the electric engines is produced by an on-board generator, and cables with high current densities
are needed inside the aircraft to distribute the power. The superconducting Rutherford cable could be
a candidate for this task.
* Compact and efficient electricity generators: Using the superconducting Rutherford cable,
small and light motors and generators can be constructed. One end-use example is the generation
of electricity with highly efficient wind turbines.
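To make the charging challenge concrete, here is an illustrative back-of-the-envelope estimate; the battery capacity (30 MWh) and charging voltage (1 kV) are assumptions for this sketch, not figures from the report. Recharging within a 45-minute turnaround would require

$$P = \frac{E}{t} = \frac{30\ \text{MWh}}{0.75\ \text{h}} = 40\ \text{MW}, \qquad I = \frac{P}{U} = \frac{40\ \text{MW}}{1\ \text{kV}} = 40\ \text{kA},$$

a current far beyond the ampacity of conventional copper cables of practical size, which is what motivates superconducting solutions.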
Thermal treatment
Heat treatment is needed during the production of superconducting magnet coils. In this processing step,
the raw materials are reacted to form the superconductor. The step is used for certain low-temperature
superconductors as well as for certain high-temperature superconductors.
* Scrap metal recycling: Using a large-scale oven with very accurate temperature stabilisation
over long periods, the melting points of different metals can be targeted selectively. This leads to more
efficient recycling of scrap metal and permits a higher degree of process automation and
quality management.
* Thermal treatment of aluminium: Thermal treatment of aluminium comprises technologies
such as tempering and hardening. The goal is to change the characteristics of
aluminium and aluminium alloys. End-use applications include the
automotive and aerospace industries, where such precise treatment is necessary.
Vacuum impregnation
* Radioactive waste containers: Waste treatment companies currently face challenges because new
legislation requires more leak-tight containers. The novel epoxy resin developed for superconducting
magnets in particle colliders must also withstand high radiation levels; this technology can therefore
be useful for managing highly activated radioactive waste.
Detecting Encryption Vulnerabilities By Coupling Architectural Analyses and Source Code Analyses
Architectural security analyses calculate security vulnerabilities by evaluating architectural security design models that comprise the system architecture and security-related information. The architectural analysis is performed before the implementation phase to avoid implementing a vulnerable system. Consequently, the architectural vulnerabilities are calculated under the assumption that the implementation complies with the specified system. When the implementation does not comply with the security design models, the architectural analysis may miss vulnerabilities in the final system. We address this problem with an approach for analysis coupling, which allows the architectural analysis to incorporate information about data-encryption weaknesses in the implementation, detected by a source code analysis that searches for predefined patterns. We perform a case-study-based evaluation of the accuracy of detecting architectural vulnerabilities that arise from weaknesses in the implementation. In this evaluation, we use the coupling approach to couple an architectural analysis with three source code analyses and apply them to three systems containing encryption-related weaknesses. Our evaluation shows that the coupling enables the detection of architectural vulnerabilities that are not detectable when the architectural analysis is performed in isolation. However, the evaluation also reveals drawbacks of the coupling approach: in some cases it misses existing vulnerabilities or reports non-existent ones.
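To illustrate the kind of predefined-pattern source code analysis described above, here is a minimal sketch, not the paper's actual tooling; the two patterns target well-known weak-encryption idioms in Java (cf. CWE-327), and the scanned directory name is a placeholder:

```python
# Illustrative sketch: scan Java sources for predefined weak-encryption
# patterns, e.g. the DES cipher or AES in ECB mode. Findings of this form
# (file, line, weakness label) are the sort of implementation-level input
# that a coupled architectural analysis could consume.
import re
from pathlib import Path

WEAK_PATTERNS = {
    "weak cipher (DES)": re.compile(r'Cipher\.getInstance\(\s*"DES'),
    "ECB mode":          re.compile(r'Cipher\.getInstance\(\s*"[^"]*/ECB/'),
}

def scan(src_dir: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, weakness label) for every pattern hit."""
    findings = []
    for path in Path(src_dir).rglob("*.java"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            findings += [(str(path), lineno, label)
                         for label, pattern in WEAK_PATTERNS.items()
                         if pattern.search(line)]
    return findings

for finding in scan("src"):   # "src" is a placeholder source directory
    print(finding)
```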
Essays in Empirical Financial Research
This thesis comprises three studies on the application of machine learning methods in empirical financial market research, covering two central topics: portfolio optimization and corporate earnings forecasting. In the first study, a machine learning approach for the direct optimization of stock portfolio weights conditional on predictor variables is presented and empirically tested. The approach is shown to yield significant utility gains for investors compared to less complex linear alternatives, and this improvement is robust across different investor types and portfolio restrictions. Furthermore, the study shows that a higher degree of risk aversion and stronger portfolio restrictions reduce the complexity of the fitted machine learning model. In the second study, company earnings are predicted with machine learning methods based on fundamental data. The central scientific contribution of this study is the interpretation of the machine learning model. One key finding is that variables originating from a company's income statement are particularly important. It is further shown that the relationship between fundamentals and future corporate earnings differs between profit-making and loss-making companies but is approximately linear in each case. In the third study, the relationship between the accuracy of corporate earnings forecasting models and an earnings-conditional investment strategy frequently used in the literature is examined in more detail. In contrast to the existing literature, transaction costs are considered explicitly. In addition, a new accuracy measure is introduced that captures systematic distortions of forecasts, such as systematic under- or overestimation of future earnings. Finally, it is shown that transaction costs correlate neither with the standard measure of forecast accuracy nor with the newly introduced measure of systematic distortions.
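For intuition on the first study's setup, below is a schematic sketch of direct portfolio weight optimization conditional on predictors, using simulated data. The specific policy form (an equal-weight benchmark plus a linear predictor tilt, in the spirit of parametric portfolio policies) and the CRRA utility are assumptions for this sketch, not the thesis's exact model; the ML variant would replace the linear tilt with a more flexible function:

```python
# Schematic sketch (simulated data): choose a coefficient vector theta that
# tilts equal weights by firm characteristics so as to maximize the average
# CRRA utility of realized portfolio returns.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, N, K = 120, 50, 3                      # months, stocks, predictors
X = rng.standard_normal((T, N, K))        # standardized predictor variables
R = 0.01 + 0.002 * X[..., 0] + 0.05 * rng.standard_normal((T, N))  # returns
gamma = 5.0                               # relative risk aversion

def neg_avg_utility(theta):
    w = 1.0 / N + (X @ theta) / N         # equal weights plus predictor tilt
    rp = (w * R).sum(axis=1)              # realized portfolio return per month
    return -np.mean((1.0 + rp) ** (1.0 - gamma) / (1.0 - gamma))

theta_hat = minimize(neg_avg_utility, np.zeros(K), method="BFGS").x
print("estimated tilt coefficients:", theta_hat)
```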
Model-driven Quantification of Correctness with Palladio and KeY
In this report, we present an approach for the quantification of correctness of service-oriented software systems by combining the modeling tool Palladio and the deductive verification approach KeY.
Our approach uses Palladio to model the service-oriented architecture, in particular the usage scenarios of the system (called services), and the distribution of values for the parameters provided by the users. The correctness of a service is modeled as a Boolean condition. We use Palladio to compute the probability of a service being called with critical parameters, i.e., in a way that violates its correctness condition. The critical parameters are computed by KeY, a deductive verification tool for Java. The approach is not limited to KeY: other techniques, such as bug finding (testing, bounded model checking), can be used, as well as other verification tools.
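A minimal sketch of this quantification step is given below, assuming a discrete parameter distribution; the Palladio and KeY interfaces are abstracted away and stubbed with invented data:

```python
# Hypothetical sketch: Palladio would supply the distribution of user-provided
# parameter values; KeY would supply the set of values for which the service's
# correctness condition is violated. Both are stubbed with invented data.
from fractions import Fraction

# assumed parameter distribution of one service call (value -> probability)
param_dist = {0: Fraction(1, 2), 1: Fraction(3, 10), -1: Fraction(1, 5)}

def is_critical(value: int) -> bool:
    # assumed critical predicate derived by the verifier:
    # negative inputs violate the service's correctness condition
    return value < 0

p_violation = sum(p for value, p in param_dist.items() if is_critical(value))
print("P(correctness condition violated) =", p_violation)   # 1/5
```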
We present two scenarios, which we use as examples to evaluate the feasibility of the approach. Finally, we close with remarks on the extension to security properties and discuss a possible approach for guiding developers to the code locations that should be verified or secured.
Competition co-immunoprecipitation reveals the interactors of the chloroplast CPN60 chaperonin machinery
The functionality of all metabolic processes in chloroplasts depends on a balanced integration of nuclear- and chloroplast-encoded polypeptides into the plastid's proteome. The chloroplast chaperonin machinery is an essential player in chloroplast protein folding under ambient and stressful conditions, with a more intricate structure and subunit composition than the orthologous GroEL/ES chaperonin of Escherichia coli. However, its exact role in chloroplasts remains obscure, mainly because very little is known about its interactors. We employed competition immunoprecipitation to identify the chaperonin's interactors in Chlamydomonas reinhardtii. Co-immunoprecipitation of the target complex in the presence of increasing amounts of isotope-labelled competitor epitope, followed by mass spectrometry analysis, made it possible to distinguish true interactors from unspecifically co-precipitated proteins. Besides known substrates such as RbcL and the expected complex partners, we identified numerous new interactors with high confidence. Proteins that qualify as putative substrates differ from bulk chloroplast proteins by a higher content of beta-sheets, a lower alpha-helical content, and an increased aggregation propensity. Immunoprecipitations targeting a subunit of the co-chaperonin lid revealed the ClpP protease as a specific partner complex, pointing to a close collaboration of these machineries in maintaining protein homeostasis in the chloroplast.
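As a schematic illustration of the discrimination logic behind the competition setup (all intensities and the slope threshold below are invented; the study's actual mass-spectrometry analysis is more elaborate):

```python
# Schematic sketch: in a competition co-IP, signals of true interactors drop
# as competitor epitope is titrated in, while unspecifically co-precipitated
# background stays roughly flat. Classify by the dose-response slope.
import numpy as np

competitor = np.array([0.0, 1.0, 3.0, 10.0])          # relative competitor dose
ms_signal = {                                         # invented MS intensities
    "RbcL (substrate)": np.array([1.00, 0.62, 0.31, 0.09]),
    "sticky protein X": np.array([1.00, 0.98, 1.03, 0.97]),
}

for protein, signal in ms_signal.items():
    slope = np.polyfit(np.log1p(competitor), signal, 1)[0]
    verdict = "specific interactor" if slope < -0.1 else "background"
    print(f"{protein}: slope = {slope:+.2f} -> {verdict}")
```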
Vitamin A controls the allergic response through T follicular helper cell as well as plasmablast differentiation
Background: Vitamin A regulates the adaptive immune response, and a modulatory impact on type I allergy has been discussed; the cellular mechanisms are largely unknown. Objective: To determine the vitamin A-responsive specific lymphocyte reaction in vivo. Methods: Antigen-specific B and T lymphocytes were analyzed in an adoptive-transfer airway inflammation mouse model in response to 9-cis retinoic acid (9cRA) and after lymphocyte-specific genetic targeting of the receptor RARα. Flow cytometry, quantitative PCR, next-generation sequencing, and specific Ig-ELISA were used to characterize the cells functionally. Results: Systemic 9cRA profoundly enhanced the frequencies of specific IgA-secreting B cells in the lung tissue and serum IgA while reducing serum IgE concentrations. RARα overexpression in antigen-specific B cells promoted differentiation into plasmablasts at the expense of germinal center B cells. In antigen-specific T cells, RARα strongly promoted the differentiation of T follicular helper cells, followed by an enhanced germinal center response. Conclusions: 9cRA signaling via RARα shapes the allergen-specific immunoglobulin response directly through B-cell differentiation and indirectly by promoting T follicular helper cells.
Hydrothermal Activity at a Cretaceous Seamount, Canary Archipelago, Caused by Rejuvenated Volcanism
Our knowledge of venting at intraplate seamounts is limited. Almost nothing is known
about past hydrothermal activity at seamounts, because indicators are soon blanketed
by sediment. This study provides evidence for temporary hydrothermal circulation at
Henry Seamount, a re-activated Cretaceous volcano near El Hierro island, close to the
current locus of the Canary Island hotspot. In the summit area at around 3000–3200 m
water depth, we found areas with dense coverage by shell fragments from vesicomyid
clams, a few living chemosymbiotic bivalves, and evidence for sites of weak fluid venting.
Our observations suggest pulses of hydrothermal activity over the past thousands to tens
of thousands of years, with activity now waning. We also recovered glassy heterolithologic
tephra and dispersed basaltic rock fragments from the summit area. Their freshness
suggests eruption during the Pleistocene to Holocene, implying minor rejuvenated
volcanism at Henry Seamount probably related to the nearby Canary hotspot. Heat
flow values determined on the surrounding seafloor (49 ± 7 mW/m²) are close to the
expected background for conductively cooled 155 Ma old crust; the proximity to the
hotspot did not result in elevated basal heat flow. A weak increase in heat flow toward
the southwestern seamount flank likely reflects recent local fluid circulation. We propose
that hydrothermal circulation at Henry Seamount was, and still is, driven by heat pulses
from weak rejuvenated volcanic activity. Our results suggest that even single eruptions at
submarine intraplate volcanoes may give rise to ephemeral hydrothermal systems and
generate potentially habitable environments.
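As a quick consistency check (one standard parameterization; the paper's exact reference model is not quoted here), the GDH1 plate-cooling model of Stein and Stein (1992) predicts, for lithosphere older than 55 Ma,

$$q(t) = 48 + 96\,e^{-0.0278\,t}\ \text{mW/m}^2 \quad (t\ \text{in Ma}), \qquad q(155) \approx 48 + 96\,e^{-4.31} \approx 49\ \text{mW/m}^2,$$

in good agreement with the measured background of 49 ± 7 mW/m².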
Bathymetric and Seismic Data, Heat Flow Data, and Age Constraints of Le Gouic Seamount, Northeastern Atlantic
Until the year 2019, only around 15% of the Earth's seafloor had been mapped at fine spatial resolution
(<800 m) by multibeam echosounder systems (Wölfl et al., 2019). Most of our knowledge of global
bathymetry is based on depths predicted by gravity observations from satellite altimeters. These
predicted depths are combined with shipboard soundings to produce global bathymetric grids.
The first topographic map of the world’s oceans so produced (Smith and Sandwell, 1997) had a
resolution between 1 and 12 km, and subsequent improvements in data and filtering techniques
led to several updates. The latest bathymetric grid of the General Bathymetric Chart of the Oceans
(GEBCO_2020) uses the SRTM15+V2.0 data set, which has a grid spacing of 15 arc sec, equivalent
to about 500 × 500 m at the equator (Tozer et al., 2019). This resolution does not imply that reliable
depth data are available for each grid cell. There are vast areas of the oceans where the accuracy of
these grids is limited by a lack of shipborne multibeam data, which are needed for calibrating and
ground-truthing predicted depths (Smith and Sandwell, 1994).
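As a quick plausibility check of the quoted grid spacing: one degree of longitude at the equator spans about 40,075 km / 360 ≈ 111.3 km, so

$$15'' = \tfrac{15}{3600}^{\circ} \approx 0.00417^{\circ}, \qquad 0.00417 \times 111.3\ \text{km} \approx 464\ \text{m},$$

consistent with the stated "about 500 × 500 m at the equator".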
The resolution and accuracy of the bathymetric grids are critical factors for global estimates
of the number and size distribution of seamounts, in particular for small edifices of <1,000 m
height (Wessel, 2001; Hillier and Watts, 2007; Kim and Wessel, 2011). A case in point is Le
Gouic Seamount, located in the NE Atlantic about 100 km SW of Tropic Seamount on ca. 152 Ma
crust, close to magnetic isochrone M24 (Bird et al., 2007). The seamount belongs to the Canary
Island Seamount Province (CISP; van den Bogaard, 2013), also termed Western Saharan Seamount
Province (WSSP) by some workers (e.g., Josso et al., 2019). It is listed in the Kim and Wessel (2011)
seamount census with the ID KW-00902, located at 21.26216°W / 23.0199°N, with a height of 498 m;
hence it appears as a tiny cone in pre-2019 bathymetric grids (Figure 1a). After first mapping of
large parts of the seamount by the French oceanographic survey vessel “Beautemps-Beaupré” in
2013, it has been represented at its full height in the current GEBCO_2020 grid, which is based on the
SRTM15+V2.0 data set (Tozer et al., 2019).
In this data report we present new multibeam bathymetric data for Le Gouic Seamount,
mapping its full extent for the first time. The data were obtained during a transit of R/V METEOR
cruise M146 in 2018. We also present a reflection seismic profile across the seamount that was
shot during the mapping, and seafloor heat flow data obtained on a profile near the northeastern
seamount base, co-located with the reflection profile. On the basis of these data we can place
constraints on the age of the seamount, and speculate about possible rejuvenated magmatic activity
- …
