
    Evaluating Point Cloud Quality via Transformational Complexity

    Full-reference point cloud quality assessment (FR-PCQA) aims to infer the quality of distorted point clouds when references are available. Merging research in cognitive science with intuitions about the human visual system (HVS), the difference between the expected perceptual result and the actual perceptual reproduction in the visual center of the cerebral cortex indicates subjective quality degradation. In this paper, we therefore derive point cloud quality by measuring the complexity of transforming the distorted point cloud back to its reference, which in practice can be approximated by the code length of one point cloud when the other is given. For this purpose, we first segment the reference and the distorted point cloud into a series of local patch pairs based on a 3D Voronoi diagram. Next, motivated by predictive coding theory, we utilize a space-aware vector autoregressive (SA-VAR) model to encode the geometry and color channels of each reference patch with and without the distorted patch, respectively. Specifically, assuming that the residual errors follow multivariate Gaussian distributions, we calculate the self-complexity of the reference and the transformational complexity between the reference and the distorted sample via covariance matrices. Besides the complexity terms, the prediction terms generated by SA-VAR are introduced as an auxiliary feature to promote the final quality prediction. Extensive experiments on five public point cloud quality databases demonstrate that the transformational complexity based distortion metric (TCDM) produces state-of-the-art (SOTA) results, and ablation studies further show that the metric generalizes to various scenarios with consistent performance when its key modules and parameters are examined.
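    To make the complexity terms concrete, here is a minimal numerical sketch (not the authors' TCDM implementation): it assumes two correspondence-aligned patches of equal size, replaces the SA-VAR predictor with plain least squares, and uses the Gaussian log-determinant of the residual covariance as a stand-in for code length. Helper names such as `patch_complexities` are illustrative only.

```python
# Illustrative sketch of the complexity idea only; the paper's SA-VAR
# predictor and Voronoi patching are replaced by least-squares regression
# on per-point attributes (e.g. XYZ + RGB).
import numpy as np

def gaussian_code_length(residuals):
    """Differential entropy of a Gaussian fit to the residuals,
    used as a proxy for the per-point code length (in nats)."""
    cov = np.cov(residuals, rowvar=False)
    cov += 1e-9 * np.eye(cov.shape[0])          # numerical stabilisation
    _, logdet = np.linalg.slogdet(2.0 * np.pi * np.e * cov)
    return 0.5 * logdet

def least_squares_residuals(target, predictor):
    """Residuals of predicting `target` attributes from `predictor`
    attributes (both N x D arrays with matching point correspondence)."""
    X = np.hstack([predictor, np.ones((predictor.shape[0], 1))])  # add bias
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
    return target - X @ coeffs

def patch_complexities(ref_patch, dist_patch):
    """Self-complexity of the reference patch and transformational
    complexity of the reference given the distorted patch."""
    # Self-complexity: deviation from the patch mean, a crude stand-in
    # for the paper's space-aware autoregression on the reference alone.
    self_res = ref_patch - ref_patch.mean(axis=0, keepdims=True)
    # Transformational complexity: predict the reference from the
    # correspondence-aligned distorted patch.
    trans_res = least_squares_residuals(ref_patch, dist_patch)
    return gaussian_code_length(self_res), gaussian_code_length(trans_res)

# Toy usage: 6-D points (XYZ + RGB), distorted patch = noisy reference.
rng = np.random.default_rng(0)
ref = rng.normal(size=(500, 6))
dist = ref + 0.1 * rng.normal(size=ref.shape)
print(patch_complexities(ref, dist))
```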

    Event-triggered Learning

    Machine learning has seen many recent breakthroughs. Inspired by these, learning-control systems have emerged; in essence, the goal is to learn models and control policies for dynamical systems. Dealing with learning-control systems is hard, and several key challenges differ from classical machine learning tasks. Conceptually, excitation and exploration play a major role in learning-control systems. On the one hand, we usually aim for controllers that stabilize a system, with the goal of avoiding deviations from a setpoint or reference. On the other hand, we also need informative data for learning, which is often not available when controllers work well. There is thus a tension between the objectives of many control-theoretic tasks and the requirements for successful learning. Additionally, changes in dynamics or other conditions are often encountered in practice. For example, new tasks, changing load conditions, or different external conditions have a substantial influence on the underlying distribution. Learning can provide the flexibility to adapt the behavior of learning-control systems to these events. Since learning has to be applied with sufficient excitation, many practical situations hinge on the following problem: "When to trigger learning updates in learning-control systems?" This is the core question of this thesis and, despite its relevance, there is no general method that provides an answer. We propose and develop a new paradigm for principled decision making on when to learn, which we call event-triggered learning (ETL).

The first triggers that we discuss are designed for networked control systems. All agents use model-based predictions to anticipate the other agents’ behavior, which makes communication necessary only when the predictions deviate too much. Essentially, an accurate model can save communication, while a poor model leads to poor predictions and thus frequent updates. The learning triggers are based on the inter-communication times (the times between two communication instances). These are independent and identically distributed random variables, which directly leads to sound guarantees. The framework is validated in experiments and yields 70% communication savings for wireless sensor networks that monitor human walking.

In the second part, we consider optimal control algorithms and start with linear quadratic regulators. A perfect model yields the best possible controller, while poor models result in poor controllers. Thus, by analyzing the control performance, we can infer the model’s accuracy. From a technical point of view, we have to deal with correlated data and work with more sophisticated tools to provide the desired theoretical guarantees. While we obtain a powerful test that is tightly tailored to the problem at hand, it does not generalize to different control architectures. Therefore, we also consider a more general point of view, where we recast the learning of linear systems as a filtering problem. We leverage Kalman filter-based techniques to derive a sound test and utilize the point estimate of the parameters for targeted learning experiments. The algorithm is independent of the underlying control architecture, but is demonstrated for model predictive control. Most of the results in the first two parts critically depend on linearity assumptions in the dynamics and on further problem-specific properties.
In the third part, we take a step back and ask the fundamental question of how to compare (nonlinear) dynamical systems directly from state data. We propose a kernel two-sample test that compares the stationary distributions of dynamical systems. Additionally, we introduce a new type of mixing that can be estimated directly from data to deal with the autocorrelations. In summary, this thesis introduces a new paradigm for deciding when to trigger updates in learning-control systems and develops three instantiations of this paradigm for different learning-control problems. Further, we present applications of the algorithms that yield substantial communication savings, effective controller updates, and the detection of anomalies in human walking data.
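    As a rough illustration of the comparison step in the third part, the following sketch computes an unbiased squared maximum mean discrepancy (MMD) between state samples using an RBF kernel. It is not the thesis's test: it treats the samples as i.i.d. and ignores the mixing-based handling of autocorrelated trajectory data described above; the bandwidth choice and function names are assumptions.

```python
# Kernel two-sample statistic (squared MMD, RBF kernel) between state
# samples from two dynamical systems; illustration of the idea only.
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """Gaussian RBF kernel matrix between two sets of state samples."""
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_squared(x, y, bandwidth=1.0):
    """Unbiased estimate of the squared maximum mean discrepancy."""
    kxx = rbf_kernel(x, x, bandwidth)
    kyy = rbf_kernel(y, y, bandwidth)
    kxy = rbf_kernel(x, y, bandwidth)
    n, m = len(x), len(y)
    # Drop diagonal terms for the unbiased estimator.
    term_xx = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_yy = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_xx + term_yy - 2.0 * kxy.mean()

# Toy usage: states sampled from two slightly different distributions.
rng = np.random.default_rng(1)
states_a = rng.normal(0.0, 1.0, size=(300, 2))
states_b = rng.normal(0.2, 1.0, size=(300, 2))
print(mmd_squared(states_a, states_b))
```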

    Toxicological Evaluation of Poly(ethylene imine)-based non-viral vector systems for pulmonary siRNA application

    In this thesis, the toxicity of PEI-based non-viral vector systems for siRNA application to the lungs was comprehensively described and analyzed in vitro as well as in vivo. Chapter 1 introduced basic information about lung anatomy and physiology and general considerations for pulmonary application, gave an overview of the two major groups of non-viral vector systems for pulmonary application, and highlighted their impact in nanomedicine and nanotoxicology. The search for more predictive toxicity tools for (polymeric) non-viral vector systems remains of great concern in the community and was pointed out in this chapter. Chapter 2 described the toxicological and immunomodulatory effects of two different PEI-based nanocarriers for siRNA delivery in different murine lung cells. Two different PEI nanocarriers (branched vs. linear, and low vs. high molecular weight PEI) were evaluated regarding standard toxicity endpoints, but also regarding immunomodulatory effects caused by the pure polymers and their respective polyplexes with siRNA. The results pointed out that epithelial cells were much more sensitive in response to such polymers and that the polyplexes appeared to be less toxic than the pure polymers. In addition, the immunomodulatory effects of such polymeric non-viral vector systems should be further investigated for their underlying mechanism. Chapter 3 hypothesized that poly(ethylene glycol) (PEG) reduces the cytotoxicity of high molecular weight, branched PEI 25 kDa and investigated the cell compatibility and cytotoxicity of a panel of different PEI-PEG polymers in vitro. This in vitro study highlighted the inflammatory potential of such PEI-PEG polymers, which appeared to be higher when cytotoxicity was strongly reduced. Hypothesizing that inflammatory and oxidative stress responses play an important role when using PEI-based nanocarriers, especially for pulmonary application, Chapter 4 described a toxicity and stress pathway focused gene expression profiling for selected PEI-PEG polymers. This gene array clearly stressed the inflammatory potential of the modified PEI-PEG polymers, with reduced apoptotic signalling pathways but increased inflammatory and oxidative stress responses, in contrast to PEI 25 kDa. Due to the higher proinflammatory potential and elevated oxidative stress parameters, the question of genotoxicity was addressed in Chapter 5. The mutant frequency of selected PEI-based nanocarriers was investigated using a transgenic lung epithelial cell culture in vitro model, but was found to be low, and PEI-based nanocarriers were not mutagenic in this in vitro model. After the in vitro toxicity analysis, two main questions arose: (i) what kind of effects would be induced by the polymers or their polyplexes in vivo when administered directly to the lungs, and (ii) could we find any in vitro/in vivo correlation for biomarkers indicating toxicity, inflammation and/or oxidative stress? Chapter 6 focused on the in vivo toxicity, inflammatory, and oxidative stress response of selected PEI-based nanocarriers for siRNA in mice after intratracheal instillation and tried to answer these two questions arising from the in vitro studies. Almost all modified PEI-based nanocarriers showed very high acute inflammation, but with different resolution kinetics.
Hydrophobic modification of low molecular weight PEI and highly hydrophilic PEGylated PEI-based nanocarriers appeared to be well tolerated, in contrast to moderately hydrophilic PEGylated and fatty-acid-modified PEI-based polymers, which showed very high and sustained inflammation in the lungs. In contrast to the safety issues (which represent the main part of this thesis), Chapter 7 reported the in vivo efficacy and cell-type-specific targeting of PEI-based nanocarriers (the same carriers selected in Chapter 6) for pulmonary siRNA delivery. Surprisingly, the highly inflammatory PEI-based nanocarriers yielded high knockdown effects, but only the fatty-acid-modified PEI-based nanocarrier appeared to avoid off-target effects. Leucocytes were targeted to some extent but did not appear to be the main targeted cell type in the lung after application of PEI-based nanocarriers for siRNA delivery. Thus, for clinical trials, the polymers should be carefully optimized and evaluated for cytotoxicity, acute inflammatory and oxidative stress responses, and their in vivo siRNA delivery performance. The development of polymers with reduced cytotoxicity and negligible off-target effects, but high in vivo efficacy, represents one of the biggest challenges for the coming decades before entry into the clinic. In addition, optimized in vitro models for predictive toxicity are still needed.

    Model predictive control of trailing edge flaps on a wind turbine blade


    Universal Prediction

    In this dissertation I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn what there is to learn from data: if it is always able to extrapolate given data about past observations into maximally successful predictions about future observations. The context of this investigation is the broader philosophical question of the possibility of a formal specification of inductive or scientific reasoning, a question that also touches on modern-day speculation about a fully automatized data-driven science. I investigate, in particular, a specific mathematical definition of a universal prediction method that goes back to the early days of artificial intelligence and has a direct line to modern developments in machine learning. This definition essentially aims to combine all possible prediction algorithms. An alternative interpretation is that this definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation, and even as a justification, of Occam's razor, the principle that we should look for simple explanations. The conclusions of my investigation are negative. I show that the proposed definition cannot be interpreted as a universal prediction method, as is exposed by a mathematical argument that the definition was actually intended to overcome. Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is itself problematic.

    Universal Prediction

    In this thesis I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from data: if it is always able to extrapolate given data about past observations into maximally successful predictions about future observations. The context of this investigation is the broader philosophical question of the possibility of a formal specification of inductive or scientific reasoning, a question that also relates to modern-day speculation about a fully automatized data-driven science. I investigate, in particular, a proposed definition of a universal prediction method that goes back to Solomonoff (1964) and Levin (1970). This definition marks the birth of the theory of Kolmogorov complexity and has a direct line to the information-theoretic approach in modern machine learning. Solomonoff's work was inspired by Carnap's program of inductive logic, and the more precise definition due to Levin can be seen as an explicit attempt to escape the diagonal argument that Putnam (1963) famously launched against the feasibility of Carnap's program. The Solomonoff-Levin definition essentially aims at a mixture of all possible prediction algorithms. An alternative interpretation is that the definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation, and even as a justification, of Occam's razor, the principle that we should look for simple explanations. The conclusions of my investigation are negative. I show that the Solomonoff-Levin definition fails to unite two necessary conditions for counting as a universal prediction method, as turns out to be entailed by Putnam's original argument after all, and I argue that this indeed shows that no definition can. Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is already problematic in itself.
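    As a very reduced illustration of the "mixture of all possible prediction algorithms", the sketch below implements a Bayesian mixture over a tiny, hand-picked hypothesis class with complexity-weighted priors. This is only a toy: the Solomonoff-Levin definition discussed here mixes over all lower semicomputable semimeasures, whereas the three Bernoulli hypotheses and their "complexities" below are made up purely for illustration.

```python
# Toy mixture predictor with 2^(-complexity) prior weights; it only
# illustrates the mixture idea, not the formal Solomonoff-Levin definition.
from fractions import Fraction

# Hypotheses: "the next bit is 1 with probability p", each with a crude
# complexity (shorter description = larger prior weight).
hypotheses = [
    {"p": Fraction(1, 2), "complexity": 1},   # "fair coin"
    {"p": Fraction(9, 10), "complexity": 3},  # "mostly ones"
    {"p": Fraction(1, 10), "complexity": 3},  # "mostly zeros"
]

def mixture_predict(bits):
    """Posterior-weighted probability that the next bit is 1,
    given the observed bit sequence `bits`."""
    weights = []
    for h in hypotheses:
        prior = Fraction(1, 2 ** h["complexity"])
        likelihood = Fraction(1)
        for b in bits:
            likelihood *= h["p"] if b == 1 else (1 - h["p"])
        weights.append(prior * likelihood)
    total = sum(weights)
    return sum(w * h["p"] for w, h in zip(weights, hypotheses)) / total

# After seeing many ones, the mixture concentrates on the "mostly ones"
# hypothesis and predicts the next bit is 1 with high probability.
print(float(mixture_predict([1] * 20)))
```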

    Advances in Condition Monitoring, Optimization and Control for Complex Industrial Processes

    The book documents 25 papers collected in the Special Issue “Advances in Condition Monitoring, Optimization and Control for Complex Industrial Processes”, highlighting recent research trends in complex industrial processes. The book aims to stimulate the research field and to benefit readers from both academic institutes and industrial sectors.

    Universal Prediction: A Philosophical Investigation
