    Proceedings of the 10th International Congress on Architectural Technology (ICAT 2024): Architectural Technology Transformation

    The profession of architectural technology is influential in the transformation of the built environment regionally, nationally, and internationally. The congress provides a platform for industry, educators, researchers, and the next generation of built environment students and professionals to showcase where their influence is transforming the built environment through novel ideas, businesses, leadership, innovation, digital transformation, research and development, and sustainable, forward-thinking technological and construction assembly design.

    Modular lifelong machine learning

    Deep learning has drastically improved the state of the art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem, and the overall training cost further increases when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge, while at the same time an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021) and, as a result, neglect some knowledge-transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a large sequence of problems remains a challenge. Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to reuse only the subset of modules which are useful for the task at hand. This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems.

    First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired of an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer, and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures. Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations. Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this leads to improved anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods. Overall, this thesis identifies a number of important LML properties, which have not all been attained by past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
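
    As a rough, hedged illustration of the modular idea, the following minimal Python sketch keeps a library of frozen, pre-trained modules and searches over combinations of frozen and fresh modules for a new problem. The names and the brute-force enumeration are placeholders: HOUDINI replaces the enumeration with program synthesis, and PICLE with probabilistic search.

        from itertools import product

        class ModuleLibrary:
            """Accumulates trained sub-networks over a sequence of problems."""
            def __init__(self):
                self.modules = {}                 # name -> frozen callable

            def add(self, name, module):
                self.modules[name] = module

        def candidate_pipelines(library, fresh, depth):
            """Enumerate depth-long chains mixing frozen library modules
            (enabling forward transfer) with fresh, trainable ones."""
            pool = {**library.modules, **fresh}
            return product(pool, repeat=depth)

        def select_pipeline(library, fresh, depth, evaluate):
            """Pick the module combination scoring best on the new problem;
            `evaluate` stands in for 'train the fresh modules, then score
            the composed network on validation data'."""
            return max(candidate_pipelines(library, fresh, depth), key=evaluate)

    Because library modules stay frozen, performance on previously solved problems cannot degrade, which is one way modular methods avoid catastrophic forgetting.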

    Ultra High Strength Steels for Roll Formed Automotive Body in White

    One of the more recent steel developments is the quenching and partitioning (Q&P) process, first proposed by Speer et al. in 2003 for developing 3rd generation advanced high-strength steel (AHSS). The Q&P process established a new way of producing martensitic steels with enhanced austenite levels, realised through controlled thermal treatments. The main objective of the so-called 3rd generation steels was to achieve properties comparable to the 2nd generation, but without high alloying additions. Generally, Q&P steels have remained within lab-scale environments, with only a small number produced industrially. Q&P steels are produced by either a one-step or a two-step process, and the re-heating step of the two-step route adds complexity when heat-treating the material industrially. The Q&P steels developed and tested throughout this thesis have been designed to achieve the desired microstructural evolution while fitting in with Tata's continuous annealing processing line (CAPL) capabilities. The CALPHAD approach, combining thermodynamics, kinetics, and phase-transformation theory through the software packages ThermoCalc and JMatPro, has been successfully deployed to find novel Q&P steels. The research undertaken throughout this thesis has led to two novel Q&P steels which can be produced on CAPL without making any infrastructure changes to the line. The two novel Q&P steels show an apparent reduction in hardness mismatch, illustrated visually and numerically by nano-indentation experiments. The properties realised after Q&P heat treatments on the C-Mn-Si alloy with 0.2 wt.% C and on the C-Mn-Si alloy with a small Cr addition are superior to the commercially available QP980/1180 steels by BaoSteel. Both novel alloys had levels of elongation and hole-expansion ratio comparable to QP1180 but are substantially stronger, with a >320 MPa increase in tensile strength. The heat treatment is also less complex, as one-step quenching and partitioning is employed on the novel alloys and there is no requirement to re-heat the steel after quenching.
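
    For a sense of the kind of calculation behind one-step Q&P design, the classical Koistinen-Marburger relation estimates how much martensite forms when quenching to a chosen temperature, and hence how much austenite remains for carbon partitioning. This is an illustrative sketch with placeholder values, not the thesis methodology, which relied on ThermoCalc and JMatPro.

        import math

        def martensite_fraction(Ms, Tq, alpha=0.011):
            """Koistinen-Marburger: fraction of austenite transformed to
            martensite on quenching from above Ms down to Tq (deg C)."""
            return 1.0 - math.exp(-alpha * (Ms - Tq))

        # Placeholder values, not the thesis alloys: quenching a steel
        # with a martensite-start temperature near 400 C down to 280 C
        # leaves roughly a quarter of the austenite untransformed and
        # available for carbon partitioning in the one-step route.
        f_m = martensite_fraction(Ms=400.0, Tq=280.0)
        print(f"martensite {f_m:.2f}, retained austenite {1.0 - f_m:.2f}")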

    Southern Adventist University Undergraduate Catalog 2022-2023

    Southern Adventist University's undergraduate catalog for the academic year 2022-2023.

    Towards Scalable Real-time Analytics: An Architecture for Scale-out of OLxP Workloads

    We present an overview of our work on the SAP HANA Scale-out Extension, a novel distributed database architecture designed to support large-scale analytics over real-time data. This platform permits high-performance OLAP with massive scale-out capabilities while concurrently allowing OLTP workloads. This dual capability enables analytics over real-time, changing data and allows fine-grained, user-specified service-level agreements (SLAs) on data freshness. We advocate the decoupling of core database components such as query processing, concurrency control, and persistence, a design choice made possible by advances in high-throughput, low-latency networks and storage devices. We provide full ACID guarantees and build on a logical-timestamp mechanism to provide MVCC-based snapshot isolation without requiring synchronous updates of replicas; instead, we use asynchronous update propagation, guaranteeing consistency with timestamp validation. We provide a view into the design and development of a large-scale data management platform for real-time analytics, driven by the needs of modern enterprise customers.
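
    The timestamp-validated visibility rule at the heart of this design can be sketched as follows; this is a minimal Python illustration under our own naming, not SAP HANA's API.

        from dataclasses import dataclass, field

        @dataclass
        class Version:
            value: object
            commit_ts: int                # logical commit timestamp

        @dataclass
        class VersionChain:
            versions: list = field(default_factory=list)   # newest last

            def read(self, snapshot_ts):
                """Return the newest version committed at or before the
                reader's snapshot timestamp. Versions propagated
                asynchronously with a later commit_ts are simply not yet
                visible, so replicas serve consistent snapshots without
                synchronous updates."""
                for v in reversed(self.versions):
                    if v.commit_ts <= snapshot_ts:
                        return v.value
                return None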

    Specificity of the innate immune responses to different classes of non-tuberculous mycobacteria

    Mycobacterium avium is the most common nontuberculous mycobacterium (NTM) species causing infectious disease. Here, we characterized an M. avium infection model in zebrafish larvae and compared it to M. marinum infection, a model of tuberculosis. M. avium bacteria are efficiently phagocytosed and frequently induce granuloma-like structures in zebrafish larvae. Although macrophages can respond to both mycobacterial infections, their migration speed is faster in infections caused by M. marinum. Tlr2 plays a conserved role in most aspects of the defense against both mycobacterial infections. However, Tlr2 affects the migration speed of macrophages and neutrophils towards M. marinum infection sites, an effect not observed with M. avium. Using RNAseq analysis, we found distinct transcriptome responses in cytokine-cytokine receptor interactions between M. avium and M. marinum infections. In addition, we found differences in gene expression in metabolic pathways, phagosome formation, matrix remodeling, and apoptosis in response to these mycobacterial infections. In conclusion, we characterized a new M. avium infection model in zebrafish that can be used further to study the pathological mechanisms of NTM-caused diseases.

    Anime Studies: Media-Specific Approaches to Neon Genesis Evangelion

    Anime Studies: Media-Specific Approaches to Neon Genesis Evangelion aims at advancing the study of anime, understood as largely TV-based genre fiction rendered in cel, or cel-look, animation with a strong affinity to participatory cultures and media convergence. Making Neon Genesis Evangelion (Shin Seiki Evangerion, 1995-96) its central case and nodal point, this volume foregrounds anime as a medium with clearly recognizable aesthetic properties, (sub)cultural affordances, and situated discourses.

    Light transport by topological confinement

    The growth of data capacity in optical communication links, which form the critical backbone of the modern internet, is slowing due to fundamental nonlinear limitations, leading to an impending "capacity crunch". Current technology has already exhausted degrees of freedom such as wavelength, amplitude, phase, and polarization, leaving spatial multiplexing as the last available dimension to be efficiently exploited. To minimize the significant energy requirements associated with digital signal processing, it is critical to explore the upper limit of unmixed spatial channels in an optical fiber, which necessitates ideally packing spatial channels either in real space or in momentum space. The former strategy is realized by uncoupled multi-core fibers, whose channel count has already saturated due to reliability constraints that limit fiber size. The latter strategy is realized by the unmixed multimode fiber (MMF), whose high spatial efficiency suggests the possibility of high channel-count scalability, but the right subset of modes must be selected to mitigate the mode coupling that is ever present due to the plethora of perturbations a fiber normally experiences. The azimuthal modes in ring-core fibers, which exploit light's orbital angular momentum (OAM), turn out to be among the most spatially efficient in this regard. Unmixed mode counts have reached 12 in a ~1-km fiber and 24 in a ~10-m fiber. However, the scalability of conventionally bound modes faces a fundamental bottleneck, and their relatively high crosstalk restricts their utility to device-length applications.

    In this thesis, we provide a fundamental solution to further scale the unmixed-channel count in an MMF. We utilize the phenomenon of topological confinement, a regime of light guidance beyond conventional cutoff that had, to the best of our knowledge, never been demonstrated until the publications based on the subject matter of this thesis. In this regime, light is guided by the centrifugal barrier created by light's own OAM rather than by conventional total internal reflection arising from the index inhomogeneity of the fiber. The loss of these topologically confined modes (TCMs) decreases to negligible levels as the OAM of the fiber modes increases, because the centrifugal barrier that keeps photons confined to the fiber core grows with the OAM value of the mode. This enables low-loss transmission of these nominally cutoff modes in a km-scale fiber. Crucially, the mode-dependent confinement loss of TCMs further lifts the degeneracy of wavevectors in the complex plane, leading to frustration of phase-matched coupling. This allows further scaling of the mode count, which was previously hindered by degenerate mode coupling among conventionally bound fiber modes. The frustrated coupling of TCMs thus enables a record number of unmixed OAM modes in any type of fiber that features a high index contrast, whether specially structured as a ring-core or simply constructed as a step-index fiber. Using all these favorable attributes, we achieve up to 50 low-loss modes with record-low crosstalk (approaching -45 dB/km) over a 130-nm bandwidth in a ~1-km-long ring-core fiber. The TCM effect promises to be inherently scalable, suggesting that even higher mode counts can be obtained in the future using this design methodology. Hence, the use of TCMs promises record spectral efficiencies, potentially making them the choice for transmission links in future space-division-multiplexing systems.

    Apart from their chief attribute of significantly increasing the information content per photon for quantum or classical networks, we expect that this new form of light guidance may find other applications, such as in nonlinear signal processing and light-matter interactions.
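
    The guidance mechanism can be made concrete with the standard scalar radial equation for a fiber mode of OAM order \ell, shown here in textbook notation as an illustration rather than as an excerpt from the thesis:

        \frac{d^2 F}{dr^2} + \frac{1}{r}\frac{dF}{dr}
            + \left[ k_0^2\, n^2(r) - \beta^2 - \frac{\ell^2}{r^2} \right] F = 0

    The \ell^2/r^2 term acts as a centrifugal barrier that grows quadratically with the OAM order: even when the propagation constant \beta falls below the cladding value (the conventional cutoff condition), a high-\ell mode must tunnel through an ever-taller barrier to leak out, which is why the confinement loss of TCMs becomes negligible at high OAM.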

    Study and design of an interface for remote audio processing

    This project focused on the study and design of an interface for remote audio processing, with the objective of acquiring an analog signal by filtering, biasing, and amplifying it before digitizing it by means of two MCP3208 ADCs to achieve a 24-bit-resolution signal. The resulting digital signal was then transmitted to a Raspberry Pi using the SPI protocol, where it was processed by a Flask server that could be accessed from both local and remote networks. The design of the PCB was a critical component of the project, as it had to accommodate various components and ensure accurate signal acquisition and transmission. The PCB design was created using KiCad software, which allowed for the precise placement and routing of all components. A major challenge in the design of the interface was to ensure that the analog signal was not distorted during acquisition and amplification. This was achieved through careful selection of amplifier components and the use of high-pass and low-pass filters to remove unwanted noise. Once the analog signal was acquired and digitized, the resulting digital signal was transmitted to the Raspberry Pi over SPI. The Raspberry Pi acted as the host for a Flask server, which could be accessed from local and remote networks using a web browser. The Flask server allowed for the processing of the digital signal and provided a user interface for controlling the gain and filtering parameters of the analog signal. This enabled the user to adjust the signal parameters to suit their specific requirements, making the interface highly flexible and adaptable to a variety of audio processing applications. The final interface was capable of remote audio processing, making it highly useful in scenarios where the audio signal needed to be acquired and processed in a location separate from the user. For example, it could be used in a recording studio, where the audio signal from the microphone could be remotely processed using the interface; the gain and filtering parameters could be adjusted in real time, allowing the sound engineer to fine-tune the audio signal to produce the desired recording. In conclusion, the project demonstrated the feasibility and potential benefits of using a remote audio processing system for various applications. The design of the PCB, the selection of components, and the use of the Flask server enabled the creation of an interface that was flexible, accurate, and adaptable to a variety of audio processing requirements. Overall, the project represents a significant step forward in the field of remote audio processing, with the potential to benefit many different applications in the future.
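
    The acquisition path described above can be sketched in a few lines of Python, assuming the commonly used spidev and Flask libraries on the Raspberry Pi; the SPI bus, channel, and endpoint names are illustrative rather than taken from the project.

        import spidev
        from flask import Flask, jsonify

        spi = spidev.SpiDev()
        spi.open(0, 0)                      # SPI bus 0, chip-select 0
        spi.max_speed_hz = 1_000_000

        def read_mcp3208(channel):
            """Single-ended 12-bit read: start bit, SGL/DIFF bit, then
            three channel-select bits, per the MCP3208 datasheet."""
            r = spi.xfer2([0x06 | (channel >> 2), (channel & 0x03) << 6, 0x00])
            return ((r[1] & 0x0F) << 8) | r[2]

        app = Flask(__name__)

        @app.route("/sample/<int:ch>")
        def sample(ch):
            # One 12-bit sample per request; the real interface streams
            # audio and combines two ADCs, which this sketch omits.
            return jsonify(channel=ch, value=read_mcp3208(ch))

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=5000)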

    Towards Improved Hydrologic Land-Surface Modelling To Represent Permafrost

    Permafrost affects hydrological, meteorological, and ecological processes in over one-quarter of the land surface in the Northern Hemisphere. Permafrost degradation has been observed over the last few decades and is projected to accelerate under climatic warming. However, simulating permafrost dynamics is challenging due to process complexity, scarcity of observations, spatial heterogeneity, and permafrost disequilibrium with external climate forcing. Hydrologic land-surface models (H-LSMs), which act as the lower boundary condition of the current generation of Earth system models (ESMs), are suitable for diagnosing and predicting permafrost evolution, as they couple heat and water interactions across soil-vegetation-atmosphere interfaces and are applicable to large-scale assessments. This thesis aims to improve the ability of H-LSMs to simulate permafrost dynamics and concurrently represent hydrology. Specific research contributions are made on four fronts: (1) assessing the uncertainty introduced to the modelling by permafrost initialization, (2) investigating the sensitivity of permafrost dynamics to different H-LSM parameters, the associated issues of parameter identifiability, and the sensitivity to external forcing datasets, (3) evaluating the strength of permafrost-hydrology coupling in H-LSMs in data-scarce regions under parameter uncertainty, and (4) assessing the fate of permafrost thaw and the associated changes in streamflow under an ensemble of future climate projections. The analyses and results of this thesis illuminate these central issues, and various solutions for permafrost-based applications of H-LSMs are proposed. First, uncertainty in model initialization determines the length of the required spin-up: 200-1000 cycles may be required to ensure proper model initialization under different climatic conditions and initial soil moisture contents. Further, the uncertainty due to initialization can lead to divergent permafrost simulations, such as active-layer-thickness variations of up to ~2 m. Second, the sensitivity of various permafrost characteristics is mainly driven by surface insulation (canopy height and snow-cover fraction) and soil properties (depth and fraction of organic-matter content). Additionally, the results underscore the difficulties inherent in H-LSM simulation of all aspects of permafrost dynamics, primarily due to the poor identifiability of influential parameters and the limitations of currently available forcing datasets. Third, different H-LSM parameterizations favor different sources of data (i.e., streamflow, soil temperature profiles, and permafrost maps), and it is challenging to configure a model faithful to all data sources. Overall, the modelling results show that surface insulation (through snow cover) and model initialization are the primary regulators of permafrost dynamics, and that different parameterizations produce different low-flow but similar high-flow regimes. Lastly, severe permafrost degradation is projected under all climate change scenarios, even the most optimistic ones. The degradation and climate change, collectively, are likely to alter several streamflow signatures, including an increase in winter and summer flows. Permafrost fate has strategic importance for the exchange of water, heat, and carbon fluxes over large areas, and can amplify the rate of climate change through a positive feedback mechanism. However, existing projections of permafrost are subject to significant uncertainty stemming from several sources.

    This thesis quantifies and reduces this uncertainty by studying the initialization, parameter identification, and evaluation of H-LSMs, which ultimately leads to configuring an H-LSM with higher fidelity to assess the impact of climate change. As a result, this work is a step forward in improving the realism of H-LSM simulations in permafrost regions. Further research is needed to refine simulation capability and to develop improved observational datasets for permafrost and the associated climate forcing.
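
    The spin-up procedure examined in the first contribution can be sketched as a convergence loop over a repeated forcing year; the step function, state layout, and tolerance below are placeholders, not the H-LSM used in the thesis.

        import numpy as np

        def spin_up(step, state, forcing_year, tol=1e-3, max_cycles=1000):
            """Cycle one year of forcing until the soil state stops
            drifting between cycles; returns the equilibrated state and
            the number of cycles used (the thesis found 200-1000 may be
            needed, depending on climate and initial soil moisture)."""
            for cycle in range(1, max_cycles + 1):
                previous = state.copy()
                for forcing in forcing_year:      # e.g. hourly met records
                    state = step(state, forcing)  # advance heat/water solver
                if np.max(np.abs(state - previous)) < tol:
                    return state, cycle
            return state, max_cycles              # did not converge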