5,272 research outputs found

    Exploiting Structural Properties in the Analysis of High-dimensional Dynamical Systems

    The physical and cyber domains with which we interact are filled with high-dimensional dynamical systems. In machine learning, for instance, the evolution of overparametrized neural networks can be seen as a dynamical system. In networked systems, numerous agents or nodes dynamically interact with each other. A deep understanding of these systems can enable us to predict their behavior, identify potential pitfalls, and devise effective solutions for optimal outcomes. In this dissertation, we discuss two classes of high-dimensional dynamical systems with specific structural properties that aid in understanding their dynamic behavior. In the first scenario, we consider the training dynamics of multi-layer neural networks. The high dimensionality comes from overparametrization: a typical network has large depth and hidden-layer width. We are interested in the following convergence questions: do the network weights converge to an equilibrium point corresponding to a global minimum of the training loss, and if so, how fast? The key to these questions is the symmetry of the weights, a critical property induced by the multi-layer architecture. This symmetry leads to a set of time-invariant quantities, called weight imbalance, that restrict the training trajectory to a low-dimensional manifold defined by the weight initialization. A tailored convergence analysis is developed over this low-dimensional manifold, showing improved rate bounds for several multi-layer network models studied in the literature and leading to novel characterizations of the effect of weight imbalance on the convergence rate. In the second scenario, we consider large-scale networked systems with multiple weakly connected groups. Such a multi-cluster structure leads to a time-scale separation between the fast intra-group interaction, due to high intra-group connectivity, and the slow inter-group oscillation, due to the weak inter-group connection. We develop a novel frequency-domain network coherence analysis that captures both the coherent behavior within each group and the dynamical interaction between groups, leading to a structure-preserving model-reduction methodology for large-scale dynamic networks with multiple clusters under general node-dynamics assumptions.
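The invariance property described above can be checked numerically. The following sketch is a generic illustration, not one of the dissertation's models: the layer sizes, data, and step size are arbitrary. It trains a two-layer linear network with plain gradient descent and verifies that the imbalance W1·W1ᵀ − W2ᵀ·W2 stays nearly constant, since a small step size approximates the gradient flow under which the quantity is exactly conserved.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, o, n = 3, 4, 2, 8                        # input, hidden, output dims; sample count
W1 = rng.normal(size=(h, d))
W2 = rng.normal(size=(o, h))
X = rng.normal(size=(d, n))
Y = rng.normal(size=(o, n))

imbalance = lambda: W1 @ W1.T - W2.T @ W2      # the time-invariant quantity
D0 = imbalance().copy()

lr = 1e-4                                      # small step approximates gradient flow
for _ in range(2000):
    E = W2 @ W1 @ X - Y                        # residual of L = 0.5*||W2 W1 X - Y||^2
    g1 = W2.T @ E @ X.T                        # dL/dW1
    g2 = E @ X.T @ W1.T                        # dL/dW2
    W1 = W1 - lr * g1
    W2 = W2 - lr * g2

drift = np.linalg.norm(imbalance() - D0) / np.linalg.norm(D0)
print(f"relative change in weight imbalance: {drift:.1e}")  # stays small
```

Under exact gradient flow the drift is zero; discrete gradient descent perturbs the invariant only at second order in the step size, which is why the measured drift is tiny.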

    Precision Surface Processing and Software Modelling Using Shear-Thickening Polishing Slurries

    Mid-spatial frequency surface error is a known manufacturing defect for aspherical and freeform precision surfaces. These surface ripples decrease imaging contrast and system signal-to-noise ratio. Existing sub-aperture polishing techniques are limited in their ability to smooth mid-spatial frequency errors. Shear-thickening slurries have been hypothesised to reduce mid-spatial frequency errors on precision optical surfaces by increasing the viscosity at the tool-part interface. Currently, controlling the generation of mid-spatial frequency surface errors, and mitigating existing ones, on aspherical and freeform surfaces requires extensive setup and the experience of seasoned workers. This thesis reports on experimental trials of shear-thickening polishing slurries on glass surfaces. By incorporating shear-thickening slurries into the precessed bonnet technology, the aim is to enhance the ability of the precessions technology to mitigate mid-spatial frequency (MSF) errors. The findings could enable a more streamlined manufacturing chain for precision optics, in which the versatile precessions technology covers form correction, texture improvement, and MSF mitigation without relying on other polishing technologies. Such an improvement to existing bonnet polishing would provide a vital stepping stone towards building a fully autonomous manufacturing cell in a market of continual economic growth. The experiments in this thesis analysed the capabilities of two shear-thickening slurry systems: (1) polyethylene glycol with silica nanoparticle suspension, and (2) water and cornstarch suspension. Both slurry systems demonstrated the ability to mitigate existing surface ripples. Power spectral density graphs show that polyethylene glycol slurries reduced the power of the mid-spatial frequencies by ~50% and cornstarch suspension slurries by 60-90%. This thesis also reports experiments on a novel polishing approach in which a precessed bonnet rotates at a predetermined working distance above the workpiece surface. The rapidly rotating tool draws the shear-thickening slurry through the gap, stiffening the fluid for polishing. This technique demonstrated material removal capabilities using cornstarch suspension slurries at a working distance of 1.0-1.5 mm. The volumetric removal rate of this process is ~5% of that of contact bonnet polishing, so the technique is better suited as a finishing process; it was therefore termed rheological bonnet finishing. The rheological properties of cornstarch suspension slurries were tested using a rheometer and modelled through CFD simulation. Using the empirical rheological data, polishing simulations of the rheological bonnet finishing process were modelled in Ansys to analyse the effects of various input parameters such as working distance, tool head speed, precess angle, and slurry viscosity.
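The power-spectral-density comparison used to quantify MSF reduction can be illustrated on synthetic data. The sketch below is an assumption-laden toy, not measured data from these experiments: the profile, sampling step, ripple frequency, and moving-average "polish" are all invented. It computes the PSD of a 1-D surface profile before and after smoothing and reports the power reduction in a mid-spatial-frequency band.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dx = 4096, 1e-3                  # samples and spacing in mm (assumed values)
x = np.arange(n) * dx
# synthetic profile: low-frequency form + 40 cycles/mm MSF ripple + noise
profile = (50e-3 * np.sin(2 * np.pi * 0.5 * x)
           + 5e-3 * np.sin(2 * np.pi * 40.0 * x)
           + 1e-3 * rng.normal(size=n))

def psd(z, dx):
    """One-sided power spectrum via a Hann-windowed FFT."""
    w = np.hanning(len(z))
    Z = np.fft.rfft((z - z.mean()) * w)
    f = np.fft.rfftfreq(len(z), d=dx)           # spatial frequency, cycles/mm
    return f, np.abs(Z) ** 2 * dx / (w ** 2).sum()

def band_power(f, p, lo, hi):
    m = (f >= lo) & (f <= hi)
    return p[m].sum()

# crude stand-in for MSF smoothing: a moving-average "polish"
k = 101
smoothed = np.convolve(profile, np.ones(k) / k, mode="same")

f0, p0 = psd(profile, dx)
f1, p1 = psd(smoothed, dx)
reduction = 1.0 - band_power(f1, p1, 30, 50) / band_power(f0, p0, 30, 50)
print(f"power in the 30-50 cycles/mm band reduced by {reduction:.0%}")
```

Comparing band power in the PSD before and after processing is exactly the kind of metric behind the ~50% and 60-90% reduction figures quoted above, though the thesis's measurements come from real interferometric surface maps.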

    3D isogeometric indirect BEM solution based on virtual surface sources on the boundaries of Helmholtz acoustic problems

    A solution for 3D Helmholtz acoustic problems is introduced based on an indirect boundary element method (indirect BEM) coupled with isogeometric analysis (IGA). The novelty of this work arises from using virtual surface sources placed directly on the scatterer boundaries, producing robust results. These virtual surface sources are discretized by the same Non-Uniform Rational B-Splines (NURBS) that approximate the scatterer CAD model, which allows general irregular geometries to be modelled. The proposed solution has the same features as BEM approaches, which need no domain discretization or truncation boundaries in the far field. An additional merit is that the linear system of equations is assembled directly from a single coefficient matrix, consuming less computational time than other BEM methods. A Greville abscissae collocation scheme with offsets at C0 continuities is proposed, which allows easy evaluation of both free terms and normals at the collocation points. The performance of the proposed solution is discussed on 3D numerical exterior problems and compared against other BEM methods. Then, the practical interior muffler problem with internal extended thin tubes is studied, and the obtained results are compared against other numerical methods as well as the available experimental data, showing the capability of the proposed solution in handling thin-walled geometries.
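Since the method discretizes the virtual sources with the same NURBS as the CAD model, a minimal example of NURBS evaluation may help. The sketch below uses the standard Cox-de Boor recursion (textbook background, not the paper's code) to evaluate a degree-2 NURBS quarter circle; the computed point lies exactly on the unit circle, illustrating why NURBS represent such geometries without approximation error.

```python
import numpy as np

def bspline_basis(i, p, t, U):
    """Cox-de Boor recursion for the B-spline basis N_{i,p}(t) on knot vector U."""
    if p == 0:
        # half-open support, closed at the right end of the knot vector
        if (U[i] <= t < U[i + 1]) or (t == U[-1] and U[i] <= t <= U[i + 1]):
            return 1.0
        return 0.0
    out = 0.0
    if U[i + p] > U[i]:
        out += (t - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, t, U)
    if U[i + p + 1] > U[i + 1]:
        out += (U[i + p + 1] - t) / (U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, t, U)
    return out

def nurbs_point(t, P, w, p, U):
    """Evaluate a NURBS curve: rational (weighted) combination of control points."""
    N = np.array([bspline_basis(i, p, t, U) for i in range(len(P))])
    return (N * w) @ P / (N @ w)

# exact quarter circle: degree 2, knots [0,0,0,1,1,1], middle weight sqrt(2)/2
P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, np.sqrt(2) / 2, 1.0])
U = [0, 0, 0, 1, 1, 1]
pt = nurbs_point(0.5, P, w, 2, U)
print(pt, np.linalg.norm(pt))  # point at 45 degrees; norm is exactly 1
```

The same basis functions serve double duty in IGA: they describe the geometry and approximate the unknown source densities on it.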

    Classical and quantum algorithms for scaling problems

    This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases. We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature. For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list, and computing the sum of a list of numbers. We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
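For the commutative case mentioned above, matrix scaling can be sketched with the classical Sinkhorn iteration, shown here as background only (the thesis's classical and quantum algorithms are far more sophisticated): alternately normalizing rows and columns of a positive matrix converges to a doubly stochastic scaling diag(x)·A·diag(y).

```python
import numpy as np

def sinkhorn_scale(A, iters=500):
    """Alternate row/column normalization; returns vectors x, y such that
    diag(x) @ A @ diag(y) is approximately doubly stochastic."""
    x = np.ones(A.shape[0])
    y = np.ones(A.shape[1])
    for _ in range(iters):
        x = 1.0 / (A @ y)       # fix row sums to 1
        y = 1.0 / (A.T @ x)     # fix column sums to 1
    return x, y

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x, y = sinkhorn_scale(A)
S = np.diag(x) @ A @ np.diag(y)
print(S.sum(axis=0), S.sum(axis=1))  # both close to [1, 1]
```

For strictly positive matrices this simple iteration always converges; the hard instances the thesis targets are non-commutative generalizations where no such elementary procedure is known.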

    HiTIC-Monthly: a monthly high spatial resolution (1 km) human thermal index collection over China during 2003–2020

    Human-perceived thermal comfort (known as human-perceived temperature) measures the combined effects of multiple meteorological factors (e.g., temperature, humidity, and wind speed) and can be aggravated by global warming and local human activities. With some of the most rapid urbanization and the largest population in the world, China is severely threatened by aggravated human thermal stress. However, variations in thermal stress across China at a fine scale are not yet fully understood, mainly owing to the lack of a high-resolution gridded dataset of human thermal indices. Here, we generated the first high spatial resolution (1 km) dataset of a monthly human thermal index collection (HiTIC-Monthly) over China during 2003–2020. In this collection, 12 commonly used thermal indices were generated by the Light Gradient Boosting Machine (LightGBM) learning algorithm from multi-source data, including land surface temperature, topography, land cover, population density, and impervious surface fraction. Their accuracies were comprehensively assessed against observations at 2419 weather stations across mainland China. The results show that our dataset has desirable accuracy, with a mean R2, root mean square error, and mean absolute error of 0.996, 0.693 °C, and 0.512 °C, respectively, averaged over the 12 indices. Moreover, the data exhibit high agreement with the observations across spatial and temporal dimensions, demonstrating the broad applicability of our dataset. A comparison with two existing datasets also suggests that our high-resolution dataset describes a more explicit spatial distribution of thermal information, showing great potential for fine-scale (e.g., intra-urban) studies. Further investigation reveals that nearly all thermal indices exhibit increasing trends in most parts of China during 2003–2020. The increase is especially significant in North China, Southwest China, the Tibetan Plateau, and parts of Northwest China during spring and summer. The HiTIC-Monthly dataset is publicly available from Zenodo at https://doi.org/10.5281/zenodo.6895533 (Zhang et al., 2022a).
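As background on what a human thermal index computes from such meteorological inputs, here is one simple example: a Steadman-style apparent temperature of the kind used by the Australian Bureau of Meteorology. This is an illustration only; whether it is among the paper's 12 indices is an assumption, and the LightGBM estimation pipeline is not reproduced here.

```python
import math

def apparent_temperature(ta, rh, ws):
    """Steadman-style apparent temperature (deg C).
    ta: air temperature in deg C, rh: relative humidity in %,
    ws: wind speed in m/s. Combines heat, humidity, and wind into
    a single perceived-temperature value."""
    # water vapour pressure in hPa from temperature and relative humidity
    e = rh / 100.0 * 6.105 * math.exp(17.27 * ta / (237.7 + ta))
    return ta + 0.33 * e - 0.70 * ws - 4.00

# a humid, lightly windy 30 deg C day feels warmer than the air temperature
print(f"{apparent_temperature(30.0, 50.0, 2.0):.1f} degC")
```

The dataset's contribution is estimating such indices on a 1 km grid where station observations of the inputs are unavailable, rather than the index formulas themselves.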

    Advances and Challenges of Multi-task Learning Method in Recommender System: A Survey

    Multi-task learning has been widely applied in computer vision, natural language processing and other fields, achieving strong performance. In recent years, a great deal of work on multi-task learning for recommender systems has appeared, but no previous literature summarizes these works. To bridge this gap, we provide a systematic literature survey of multi-task recommender systems, aiming to help researchers and practitioners quickly understand the current progress in this direction. In this survey, we first introduce the background and motivation of multi-task learning-based recommender systems. Then we provide a taxonomy of multi-task learning-based recommendation methods according to the different stages of multi-task learning techniques, namely task relationship discovery, model architecture, and optimization strategy. Finally, we discuss applications and promising future directions in this area.
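The "model architecture" stage of the taxonomy can be illustrated with the simplest design, hard parameter sharing: one shared bottom network feeds a separate head per task. The sketch below is a generic NumPy forward pass with invented task names ("ctr", "cvr") and layer sizes, not a method from any particular surveyed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# hard parameter sharing: one shared bottom, one small head per task
d_in, d_shared = 16, 8
W_shared = rng.normal(scale=0.1, size=(d_in, d_shared))
heads = {"ctr": rng.normal(scale=0.1, size=(d_shared, 1)),   # click prediction
         "cvr": rng.normal(scale=0.1, size=(d_shared, 1))}   # conversion prediction

def forward(x):
    """Shared representation, then one sigmoid output per task."""
    h = np.maximum(0.0, x @ W_shared)                # shared bottom (ReLU)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return {task: sigmoid(h @ Wt).ravel() for task, Wt in heads.items()}

x = rng.normal(size=(4, d_in))                       # batch of 4 user-item features
preds = forward(x)
print({task: p.shape for task, p in preds.items()})  # each task yields 4 scores
```

Because the bottom is shared, gradients from every task update the same representation, which is the source of both the positive transfer and the task-conflict issues that more elaborate architectures (e.g., expert-gating designs) try to manage.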

    Witnessing environment dimension through temporal correlations

    We introduce a framework to compute upper bounds for temporal correlations achievable in open quantum system dynamics, obtained by repeated measurements on the system. As these correlations arise by virtue of the environment acting as a memory resource, such bounds are witnesses for the minimal dimension of an effective environment compatible with the observed statistics. These witnesses are derived from a hierarchy of semidefinite programs with guaranteed asymptotic convergence. We compute non-trivial bounds for various sequences involving a qubit system and a qubit environment, and compare the results to the best known quantum strategies producing the same outcome sequences. Our results provide a numerically tractable method to determine bounds on multi-time probability distributions in open quantum system dynamics and allow for the witnessing of effective environment dimensions through probing of the system alone.
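The multi-time probability distributions being bounded can be illustrated by direct simulation. The toy below uses an arbitrary Haar-random joint unitary (an assumption, not one of the paper's strategies): a qubit system coupled to a qubit environment evolves jointly, the system alone is projectively measured in the Z basis each round, and the probability of every 3-outcome sequence is computed. Completeness of the projectors guarantees the sequence probabilities sum to 1.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(n):
    """Haar-random unitary via QR decomposition with phase correction."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

U = haar_unitary(4)                                   # joint system-environment step
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))          # project system onto |0>
P1 = np.kron(np.diag([0.0, 1.0]), np.eye(2))          # project system onto |1>

def sequence_prob(outcomes, psi):
    """Probability of a given outcome sequence under repeated Z measurements."""
    for o in outcomes:
        psi = (P0 if o == 0 else P1) @ (U @ psi)      # evolve, then measure system
    return float(np.vdot(psi, psi).real)

psi0 = np.kron(np.array([1.0, 0.0]), np.array([1.0, 0.0]))   # |0>_S |0>_E
probs = {seq: sequence_prob(seq, psi0)
         for seq in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]}
print(sum(probs.values()))   # all eight 3-outcome sequences sum to 1
```

The paper's question runs in the other direction: given observed sequence statistics like these, its semidefinite hierarchy certifies how large an effective environment must be to produce them.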

    QoE-Driven Video Transmission: Energy-Efficient Multi-UAV Network Optimization

    This paper is concerned with improving video subscribers' quality of experience (QoE) by deploying a multi-unmanned aerial vehicle (UAV) network. Different from existing works, we characterize subscribers' QoE by video bitrate, latency, and frame freezing, and propose to improve QoE by energy-efficiently and dynamically optimizing the multi-UAV network in terms of serving-UAV selection, UAV trajectory, and UAV transmit power. The dynamic multi-UAV network optimization problem is formulated as a challenging sequential-decision problem with the goal of maximizing subscribers' QoE while minimizing the total network power consumption, subject to physical resource constraints. We propose a novel network optimization algorithm to solve this problem, in which a Lyapunov technique is first explored to decompose the sequential-decision problem into several repeatedly optimized sub-problems to avoid the curse of dimensionality. To solve the sub-problems, iterative and approximate optimization mechanisms with provable performance guarantees are then developed. Finally, we design extensive simulations to verify the effectiveness of the proposed algorithm. Simulation results show that the proposed algorithm effectively improves subscribers' QoE and is 66.75% more energy-efficient than benchmarks.
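The Lyapunov decomposition mentioned above can be sketched in its textbook drift-plus-penalty form. The toy below is one-dimensional with an invented concave "QoE" function and random channel gains, far simpler than the paper's multi-UAV problem: a virtual queue Q tracks an average-power constraint, and each slot the transmit power maximizes V·QoE(p) − Q·p, so no decision couples across slots.

```python
import numpy as np

rng = np.random.default_rng(3)
V = 10.0              # trade-off weight: larger V favors QoE over the constraint
Q = 0.0               # virtual queue for the average-power constraint
P_MAX, P_AVG = 4.0, 1.5

qoe_hist, p_hist = [], []
for t in range(5000):
    gain = rng.uniform(0.5, 2.0)                    # channel quality this slot (assumed)
    # drift-plus-penalty: per-slot greedy choice replaces the sequential problem
    grid = np.linspace(0.0, P_MAX, 200)
    score = V * np.log1p(gain * grid) - Q * grid    # log1p as a stand-in QoE curve
    p = grid[np.argmax(score)]
    Q = max(Q + p - P_AVG, 0.0)                     # queue grows when constraint violated
    qoe_hist.append(np.log1p(gain * p))
    p_hist.append(p)

print(f"avg power {np.mean(p_hist):.2f} (target <= {P_AVG}), "
      f"avg QoE {np.mean(qoe_hist):.2f}")
```

The standard drift-plus-penalty guarantee is that keeping the virtual queue stable enforces the time-average constraint while losing only O(1/V) in the objective, which is the mechanism the paper exploits to avoid the curse of dimensionality.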

    Synopsis of lectures on the subject "Special Technologies in Mechanical Engineering" for students of all forms of study, field of study 131 "Applied Mechanics"

    1. METHODS OF PROCESSING STRUCTURAL MATERIALS 5
    2. BASIC INFORMATION ABOUT METAL CUTTING AND METAL CUTTING MACHINES 15
    3. METHODS OF RESTORATION OF PARTS 24
    4. MAIN PARTS AND ELEMENTS OF THE CUTTER, ITS GEOMETRIC PARAMETERS 29
    5. GEOMETRY OF CUTTERS 35
    6. ELEMENTS OF CUTTING AND CUT LAYER 39
    7. TECHNOLOGY OF PROCESSING BODY PARTS ON AUTOMATED MACHINES 44
    8. TECHNOLOGY OF SHAFT PROCESSING 53
    9. METHODS FOR FORMATION OF BASIC SURFACES OF CORRESPONDING PARTS 75
    10. LITERATURE 9

    Towards compact bandwidth and efficient privacy-preserving computation

    In traditional cryptographic applications, cryptographic mechanisms are employed to ensure the security and integrity of communication or storage. In these scenarios, the primary threat is usually an external adversary trying to intercept or tamper with the communication between two parties. On the other hand, in the context of privacy-preserving computation or secure computation, the cryptographic techniques are developed with a different goal in mind: to protect the privacy of the participants involved in a computation from each other. Specifically, privacy-preserving computation allows multiple parties to jointly compute a function without revealing their inputs, and it has numerous applications in various fields, including finance, healthcare, and data analysis. It allows for collaboration and data sharing without compromising the privacy of sensitive data, which is becoming increasingly important in today's digital age. While privacy-preserving computation has gained significant attention in recent times due to its strong security and numerous potential applications, its efficiency remains its Achilles' heel. Privacy-preserving protocols require significantly higher computational overhead and bandwidth when compared to baseline (i.e., insecure) protocols. Therefore, finding ways to minimize the overhead, whether it be in terms of computation or communication, asymptotically or concretely, while maintaining security in a reasonable manner remains an exciting problem to work on. This thesis is centred around enhancing efficiency and reducing the costs of communication and computation for commonly used privacy-preserving primitives, including private set intersection, oblivious transfer, and stealth signatures.
Our primary focus is on optimizing the performance of these primitives.
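As background on one of the listed primitives, private set intersection, here is a classic Diffie-Hellman-style PSI sketch. It is semi-honest and purely illustrative: the toy modulus, generator, and fixed exponents are insecure choices, and the thesis's optimized protocols are not reproduced. The idea is that double-masked elements match iff the underlying items are equal, since exponentiation commutes: H(x)^(ab) = H(x)^(ba).

```python
import hashlib

P = 2**127 - 1   # toy Mersenne prime modulus -- illustrative, NOT a secure group choice

def h2g(item: str) -> int:
    """Hash an item into the multiplicative group (simplified random-oracle stand-in)."""
    return pow(3, int.from_bytes(hashlib.sha256(item.encode()).digest(), "big"), P)

a, b = 0x1234567, 0x89ABCDF          # Alice's and Bob's secret exponents (toy values)

alice = ["apple", "pear", "plum"]
bob = ["pear", "plum", "fig"]

# round 1: Alice masks her elements with her key and sends them (order preserved)
alice_masked = [pow(h2g(x), a, P) for x in alice]
# round 2: Bob re-masks Alice's values with his key, and masks his own set
alice_double = [pow(v, b, P) for v in alice_masked]       # h(x)^(ab)
bob_masked = [pow(h2g(y), b, P) for y in bob]
# round 3: Alice re-masks Bob's values; collisions reveal the shared elements
bob_double = {pow(v, a, P) for v in bob_masked}           # h(y)^(ba)
shared = {x for x, d in zip(alice, alice_double) if d in bob_double}
print(shared)  # the common elements, without either side revealing the rest
```

Each party learns only the intersection (and set sizes); the communication is a few group elements per item, which is exactly the kind of cost that bandwidth-focused PSI research, including this thesis, works to drive down.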