175 research outputs found
MODELS FOR MULTIDISCIPLINARY DESIGN OPTIMIZATION: AN EXEMPLARY OFFICE BUILDING
The mathematical and technical foundations of optimization have been developed to a large extent. In the design of buildings, however, optimization is rarely applied because of insufficient adaptation of the method to the needs of building design. The use of design optimization requires the consideration of all relevant objectives in an interactive and multidisciplinary process. Disciplines such as structural, light, and thermal engineering, architecture, and economics impose various objectives on the design. A good solution calls for a compromise between these often contradictory objectives. This presentation outlines a method for the application of Multidisciplinary Design Optimization (MDO) as a tool for the design of buildings. An optimization model is established considering the fact that non-numerical aspects are of greater importance in building design than in other engineering disciplines. A component-based decomposition enables the designer to manage the non-numerical aspects in an interactive design optimization process. A façade example demonstrates how the different disciplines interact and how the components integrate the disciplines into one optimization model. In this grid-based façade example, each component switches between a discrete set of materials and construction types. For light and thermal engineering, architecture, and economics, analysis functions calculate the performance; utility functions serve as an important means of evaluation, since not every increase or decrease of a physical value improves the design. For experimental purposes, a genetic algorithm applied to the exemplary model demonstrates the use of optimization in this design case. A component-based representation first serves to manage non-numerical characteristics such as aesthetics. Furthermore, it complies with usual fabrication methods in building design and with object-oriented data handling in CAD.
Therefore, components provide an important basis for an interactive MDO process in building design.
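The combination described above, discrete component choices, per-discipline analysis functions, utility functions that peak rather than grow monotonically, and a genetic algorithm, can be sketched as follows. The catalogue, performance numbers, and utility shapes are invented for illustration; the paper's actual analysis and utility functions are not reproduced here.

```python
import random

# Hypothetical catalogue of construction types for a grid-based facade.
# Performance numbers are invented for illustration.
CATALOGUE = {
    "glass":    {"light": 0.9, "thermal": 0.3, "cost": 0.8},
    "concrete": {"light": 0.1, "thermal": 0.8, "cost": 0.4},
    "brick":    {"light": 0.1, "thermal": 0.7, "cost": 0.5},
}
TYPES = list(CATALOGUE)
N_PANELS = 12  # panels in the facade grid

def utility(design):
    """Average the discipline utilities. Not every increase of a physical
    value improves the design: the daylight utility peaks at a target
    glazing level instead of growing monotonically."""
    mean = lambda key: sum(CATALOGUE[t][key] for t in design) / len(design)
    u_light = 1.0 - abs(mean("light") - 0.4) / 0.6  # peak near 40% daylight
    u_thermal = mean("thermal")                     # more insulation is better
    u_cost = 1.0 - mean("cost")                     # cheaper is better
    return (u_light + u_thermal + u_cost) / 3.0

def evolve(generations=60, pop_size=30, seed=1):
    """Plain genetic algorithm over the discrete component choices."""
    rng = random.Random(seed)
    pop = [[rng.choice(TYPES) for _ in range(N_PANELS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=utility, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_PANELS)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # mutate one panel's type
                child[rng.randrange(N_PANELS)] = rng.choice(TYPES)
            children.append(child)
        pop = survivors + children
    return max(pop, key=utility)

best = evolve()
print(f"best utility: {utility(best):.3f}")
```

The component-wise encoding is what keeps the search interpretable: each gene is one façade component, so a designer can inspect or pin individual panels during the interactive process.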
Plasma proteome profiling to assess human health and disease
The majority of diagnostic decisions are made on results from blood-based tests, and protein measurements are prominent among them. However, current assays are restricted to individual proteins, whereas it would be much more desirable to measure all of them in an unbiased, hypothesis-free manner. Therefore, characterization of the plasma proteome by mass spectrometry holds great promise for clinical application.
Due to great technological challenges and study design issues, plasma proteomics has not yet lived up to its promises: no new biomarkers have been discovered, plasma proteomics has not entered clinical diagnostics and few biologically meaningful insights have been gained. As a consequence, relatively few groups still continue to pursue plasma proteomics, despite the undiminished clinical need.
The overall aim of my PhD thesis was to pave the way for biomarker discovery and clinical applications of proteomics by precision characterization of the human blood plasma proteome. First, we streamlined the standard, time-consuming and labor-intensive proteomic workflow and replaced it with a rapid, robust and highly reproducible robotic platform. After optimization of digestion conditions, peptide clean-up procedures and LC-MS/MS procedures, we can now prepare 96 samples in a fully automated way within 3 h, and we routinely measure hundreds of plasma proteomes. Our workflow decreases hands-on time and opens the field for a new concept in biomarker discovery, which we termed ‘Plasma Proteome Profiling’.
It enables highly reproducible (CV < 20% for most proteins) quantitative analysis of several hundred proteins from 1 μl of plasma, reflecting an individual’s physiology. The quantified proteins include inflammatory markers, proteins belonging to the lipid homeostasis system, gender-related proteins, sample quality markers and more than 50 FDA-approved biomarkers. One of my major goals was to demonstrate that MS-based proteomics can be applied to large cohorts and that biologically and medically relevant information can be gained from this. We achieved this aim with our first large-scale plasma proteomic study, by far the largest to date with almost 1,300 proteomes, which allowed us to define inflammatory and insulin resistance panels in a weight loss cohort.
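The reproducibility figure quoted above is the coefficient of variation (CV): the standard deviation of replicate measurements expressed as a percentage of their mean. A minimal sketch, with made-up replicate intensities:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation: sample standard deviation as a % of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Illustrative replicate intensities for one plasma protein (made-up numbers).
replicates = [1.02e6, 0.95e6, 1.10e6, 0.98e6]
print(f"CV = {cv_percent(replicates):.1f}%")
```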
In summary, this PhD thesis has developed the concept and practice of Plasma
Proteome Profiling as a fundamentally new approach in biomarker research and medical diagnostics – the system-wide phenotyping of humans in health and disease.
Pathway toward prior knowledge-integrated machine learning in engineering
Despite the digitalization trend and data volume surge, first-principles
models (also known as logic-driven, physics-based, rule-based, or
knowledge-based models) and data-driven approaches have existed in parallel,
mirroring the ongoing AI debate on symbolism versus connectionism. Research on
processes that integrate both sides to transfer and utilize domain knowledge
within data-driven workflows is rare. This study highlights efforts and
prevailing trends to integrate multidisciplinary domain knowledge into
machine-interpretable, data-driven processes, organized in two parts:
examining sources of information uncertainty in knowledge representation and
exploring knowledge decomposition with a three-tier knowledge-integrated
machine learning paradigm. This approach balances holistic and reductionist
perspectives in the engineering domain.
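One common pattern for integrating first-principles and data-driven models, in the spirit described above, is a gray-box decomposition: a physics model supplies the baseline prediction and a learned term captures only the residual. The sketch below is a minimal illustration with invented functions and numbers, not the paper's three-tier paradigm itself.

```python
# Gray-box sketch: a first-principles model supplies the baseline and a
# data-driven term learns only the residual. All functions and numbers are
# invented for illustration.
def physics_model(x):
    """Hypothetical first-principles law, e.g. a linear heat-loss relation."""
    return 2.0 * x + 1.0

# Synthetic observations: physics plus an unmodelled quadratic effect.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [physics_model(x) + 0.3 * x * x for x in xs]

# Least-squares fit of the residual on the single basis function x^2.
num = sum((y - physics_model(x)) * x * x for x, y in zip(xs, ys))
den = sum((x * x) ** 2 for x in xs)
coef = num / den  # should recover ~0.3

def hybrid(x):
    """First-principles baseline plus the learned residual correction."""
    return physics_model(x) + coef * x * x

print(f"learned residual coefficient: {coef:.3f}")
```

The decomposition keeps the domain knowledge explicit: the learned part is small, inspectable, and interpretable as a correction to a known law rather than a monolithic black box.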
Explainable AI for engineering design: A unified approach of systems engineering and component-based deep learning
Data-driven models created by machine learning gain in importance in all
fields of design and engineering. They have high potential to assist
decision-makers in creating novel artefacts with better performance and
sustainability. However, limited generalization and the black-box nature of
these models lead to limited explainability and reusability. These drawbacks
pose significant barriers to adoption in engineering design. To
overcome this situation, we propose a component-based approach to create
partial component models by machine learning (ML). This component-based
approach aligns deep learning to systems engineering (SE). By means of the
example of energy efficient building design, we first demonstrate better
generalization of the component-based method by analyzing prediction accuracy
outside the training data. Especially for representative designs different in
structure, we observe a much higher accuracy (R2 = 0.94) compared to
conventional monolithic methods (R2 = 0.71). Second, we illustrate
explainability by demonstrating, by way of example, how sensitivity
information from SE and rules from low-depth decision trees serve
engineering. Third, we evaluate
explainability with qualitative and quantitative methods, demonstrating that
data-derived strategies match prior knowledge, and show the correctness of
activations at component interfaces compared to white-box
simulation results (envelope components: R2 = 0.92..0.99; zones: R2 =
0.78..0.93). The key for component-based explainability is that activations at
interfaces between the components are interpretable engineering quantities. In
this way, the hierarchical component system forms a deep neural network (DNN)
that a priori integrates information for engineering explainability. ...
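The central idea, components whose interface activations are interpretable engineering quantities, can be illustrated with hand-written stand-ins for the trained component models. The steady-state formula (transmission loss Q = U·A·ΔT) and all numbers below are textbook-style assumptions, not the paper's trained networks.

```python
# Each "component" stands in for a trained component network; its output at
# the interface is an interpretable engineering quantity. Formulas and
# numbers are textbook-style assumptions for illustration.
def envelope_component(area_m2, u_value, delta_t):
    """Interface activation: transmission heat loss in watts, Q = U * A * dT."""
    return u_value * area_m2 * delta_t

def zone_component(envelope_losses_w, internal_gains_w):
    """Interface activation: net heating demand of the zone in watts."""
    return max(0.0, sum(envelope_losses_w) - internal_gains_w)

# Compose the hierarchy: a wall and a window feed one thermal zone.
wall = envelope_component(area_m2=30.0, u_value=0.25, delta_t=20.0)
window = envelope_component(area_m2=6.0, u_value=1.1, delta_t=20.0)
demand = zone_component([wall, window], internal_gains_w=100.0)
print(f"wall: {wall} W, window: {window} W, zone demand: {demand} W")
```

Because every interface carries a physical quantity (watts here), each component's output can be checked against white-box simulation independently, which is exactly what the reported interface R2 values measure.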
Toward reliable signals decoding for electroencephalogram: A benchmark study to EEGNeX
This study examines the efficacy of various neural network (NN) models in
interpreting mental constructs via electroencephalogram (EEG) signals. Through
the assessment of 16 prevalent NN models and their variants across four
brain-computer interface (BCI) paradigms, we gauged their information
representation capability. Rooted in comprehensive literature review findings,
we proposed EEGNeX, a novel, purely ConvNet-based architecture. We pitted it
against both existing cutting-edge strategies and the Mother of All BCI
Benchmarks (MOABB) involving 11 distinct EEG motor imagination (MI)
classification tasks and revealed that EEGNeX surpasses other state-of-the-art
methods. Notably, it achieves 2.1%-8.5% improvements in classification
accuracy across different scenarios with statistical significance (p < 0.05)
compared to its competitors. This study not only provides deeper insights into
designing efficient NN models for EEG data but also lays groundwork for future
explorations into the relationship between bioelectric brain signals and NN
architectures. For the benefit of broader scientific collaboration, we have
made all benchmark models, including EEGNeX, publicly available at
(https://github.com/chenxiachan/EEGNeX).
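A purely ConvNet-based EEG model such as EEGNeX is stacked from temporal convolutions applied along the sample axis of multichannel EEG. The sketch below shows just that building block in NumPy with an arbitrary smoothing kernel; it is not the published EEGNeX architecture (see the linked repository for that).

```python
import numpy as np

def temporal_conv(eeg, kernel):
    """Apply one 1-D kernel along the time axis, independently per channel.
    eeg: (channels, samples); kernel: (k,); returns (channels, samples - k + 1)."""
    channels, samples = eeg.shape
    k = kernel.shape[0]
    out = np.empty((channels, samples - k + 1))
    for i in range(samples - k + 1):
        out[:, i] = eeg[:, i:i + k] @ kernel  # dot product per channel window
    return out

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 128))  # 8 channels, 128 samples of toy "EEG"
smoother = np.ones(5) / 5.0          # moving-average kernel
features = temporal_conv(eeg, smoother)
print(features.shape)
```

Real architectures learn many such kernels per layer and interleave them with normalization and nonlinearities; the sliding dot product per channel is the primitive they all share.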
Introducing causal inference in the energy-efficient building design process
“What-if” questions arise intuitively and are commonly asked during the design process. Engineers and architects must make design decisions as they progress from one phase to another. They either use empirical domain experience, simulations, or data-driven methods to acquire consequential feedback. We take an example from the interdisciplinary domain of energy-efficient building design to argue that current methods for decision support have limitations or deficiencies in four aspects: identification of parametric independencies, gaps in integrating knowledge-based and data-driven approaches, limited explicit model interpretation, and ambiguous decision support boundaries. In this study, we first clarify the nature of dynamic experience in individuals and constant principal knowledge in design. Subsequently, we introduce causal inference into the domain. A four-step process is proposed to discover and analyze parametric dependencies in a mathematically rigorous and computationally efficient manner by identifying the causal diagram with interventions. The causal diagram provides a nexus for integrating domain knowledge with data-driven methods, providing interpretability and testability against domain experience within the design space. Extracting causal structures from the data is close to the natural design reasoning process. As an illustration, we examine the properties of the proposed estimators through simulations. The paper concludes with a feasibility study demonstrating the realization of the proposed framework.
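Why interventions matter, rather than fitting observational correlations, can be seen in a toy structural causal model. Everything below (variables, coefficients, the confounder) is invented for illustration and is not the paper's case study.

```python
import random

# Toy structural causal model (all coefficients invented): climate severity C
# confounds both the design choice W (e.g. window-to-wall ratio) and the
# outcome Y (e.g. cooling load). The true direct effect of W on Y is 2.0.
def sample(n, do_w=None, seed=0):
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        c = rng.gauss(0.0, 1.0)                       # confounder
        w = do_w if do_w is not None else 0.8 * c + rng.gauss(0.0, 1.0)
        y = 2.0 * w + 1.5 * c + rng.gauss(0.0, 0.1)   # structural equation
        rows.append((w, y))
    return rows

def slope(rows):
    """Ordinary least-squares slope of y on w (observational estimate)."""
    n = len(rows)
    mw = sum(w for w, _ in rows) / n
    my = sum(y for _, y in rows) / n
    cov = sum((w - mw) * (y - my) for w, y in rows)
    var = sum((w - mw) ** 2 for w, _ in rows)
    return cov / var

naive = slope(sample(5000))  # biased upward by the confounder C
# Interventional estimate: set W by do(W = w) and compare outcome means.
y0 = sum(y for _, y in sample(5000, do_w=0.0, seed=1)) / 5000
y1 = sum(y for _, y in sample(5000, do_w=1.0, seed=2)) / 5000
effect = y1 - y0
print(f"observational slope: {naive:.2f}, interventional effect: {effect:.2f}")
```

The observational slope overstates the design lever's effect because the confounder moves W and Y together; intervening on W severs that link, which is what identifying the causal diagram with interventions buys the designer.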