
    Spectroscopic Studies of Brooker's Merocyanine in Zeolite L

    Zeolites are porous, crystalline substances with unique atomic organizations that allow complex channels to form within the crystals. Each type of zeolite has a distinct shape and structure. To better understand the properties of zeolite channels, a dye molecule known as Brooker's merocyanine was inserted into Zeolite L. Maximum dye loading into the zeolite channels was achieved by altering experimental variables such as heat, solution concentration, stirring, cation exchange, and light exposure. X-ray diffraction was used to verify the synthesis of the zeolites, the cation exchange process, and the dye loading. UV-Vis spectroscopy was used to measure the amount of dye adsorbed by the zeolite, and the concentration of dye in the zeolites was determined from the UV-Vis absorbance values using Beer's Law. The results showed that increased heat and stirring correlated with increased dye adsorption by the zeolite. Because Brooker's merocyanine is light sensitive, limiting the light exposure of the dye solutions also resulted in higher dye adsorption by the zeolites. Increasing the concentration of the dye solution increased the rate of adsorption in the channels. However, exchanging the potassium ions found within the synthesized Zeolite L channels with smaller hydrogen ions did not have an effect on dye adsorption in the channels. Characterizing how to achieve maximum dye adsorption in the zeolites allows for a better understanding of how dye molecules interact within the zeolite channels.
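
    The Beer's Law step (A = εlc) is what converts a measured absorbance into a dye concentration. A minimal sketch of that calculation in Python, with an assumed molar absorptivity and path length that are illustrative values rather than the ones used in the study:

    # Beer's Law: A = epsilon * l * c  =>  c = A / (epsilon * l)
    def concentration_from_absorbance(absorbance, epsilon_M_per_cm, path_cm=1.0):
        """Return the molar dye concentration for a UV-Vis absorbance reading."""
        return absorbance / (epsilon_M_per_cm * path_cm)

    # Example with an assumed absorbance of 0.42 and epsilon of 6.0e4 M^-1 cm^-1
    print(f"{concentration_from_absorbance(0.42, 6.0e4):.2e} M")  # 7.00e-06 M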

    Error bounds for discrete minimizers of the Ginzburg-Landau energy in the high-κ regime

    In this work, we study discrete minimizers of the Ginzburg-Landau energy in finite element spaces. Special focus is given to the influence of the Ginzburg-Landau parameter κ. This parameter is of physical interest, as large values can trigger the appearance of vortex lattices. Since the vortices have to be resolved on sufficiently fine computational meshes, it is important to translate the size of κ into a mesh resolution condition, which can be done through error estimates that are explicit with respect to κ and the spatial mesh width h. For that, we first work in an abstract framework for a general class of discrete spaces, where we present convergence results in a problem-adapted κ-weighted norm. Afterwards we apply our findings to Lagrangian finite elements and a particular generalized finite element construction. In numerical experiments we further explore the asymptotic optimality of our derived L^2- and H^1-error estimates with respect to κ and h. Preasymptotic effects are observed for large mesh sizes h.
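
    For orientation, one common normalization of the magnetic-field-free Ginzburg-Landau energy is

    E_\kappa(u) = \int_\Omega \Big( \tfrac{1}{2}\,|\nabla u|^2 + \tfrac{\kappa^2}{4}\,\big(1 - |u|^2\big)^2 \Big)\, dx,

    where u is the complex-valued order parameter; larger κ stiffens the penalty on |u| deviating from 1, which produces the sharply localized vortices that the mesh must resolve. The precise formulation in the paper (for instance, whether a magnetic vector potential is included) may differ.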

    Deep Denoising for Hearing Aid Applications

    Reduction of unwanted environmental noises is an important feature of today's hearing aids (HA), which is why noise reduction is nowadays included in almost every commercially available device. The majority of these algorithms, however, are restricted to the reduction of stationary noises. In this work, we propose a denoising approach based on a fully connected deep learning network with three hidden layers that aims to predict a Wiener filtering gain from an asymmetric input context, enabling real-time applications with high constraints on signal delay. The approach employs a hearing-instrument-grade filter bank and complies with typical hearing aid demands, such as low latency and online processing. It can further be well integrated with other algorithms in an existing HA signal processing chain. We show on a database of real-world noise signals that our algorithm is able to outperform a state-of-the-art baseline approach, using both objective metrics and subject tests. Comment: submitted to IWAENC 201
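
    As a rough sketch of the kind of model described (a fully connected network with three hidden layers that maps an asymmetric context of filter-bank frames to per-band Wiener-style gains), the following PyTorch snippet may help; the layer widths, context length, feature type, and activations are assumptions rather than the paper's configuration.

    import torch
    import torch.nn as nn

    class DenoiseGainNet(nn.Module):
        """Map a short, asymmetric context of noisy filter-bank frames to
        per-band gains in [0, 1] for the current frame (illustrative only)."""
        def __init__(self, n_bands=48, past_frames=4, future_frames=1, hidden=256):
            super().__init__()
            context = past_frames + 1 + future_frames  # more look-back than look-ahead
            self.net = nn.Sequential(
                nn.Linear(n_bands * context, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_bands), nn.Sigmoid(),  # gains in [0, 1]
            )

        def forward(self, frames):
            # frames: (batch, context, n_bands) noisy filter-bank magnitudes
            return self.net(frames.flatten(start_dim=1))

    model = DenoiseGainNet()
    gains = model(torch.randn(8, 6, 48))  # 8 examples, 6-frame context, 48 bands
    print(gains.shape)                    # torch.Size([8, 48])

    The predicted gains would then be applied band-wise to the noisy filter-bank signal before resynthesis, keeping the added latency limited to the look-ahead frames.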

    Harvest Timing and Moisture Determination: Forage Drying Rates and Moisture Probe Accuracy

    Scott County, Kentucky, currently has a beef cattle herd of 28,509 head (USDA, 2017). These cattle utilize forage as a large part of their diets. Baleage consists of bales of wilted, high-moisture forage that have been wrapped in several layers of UV-resistant plastic and allowed to ensile like traditional chopped silage (Henning et al., 2021). Baleage has become another way for farmers to harvest and store forage for use in cattle diets. It has some advantages over traditional hay production in Kentucky. One advantage is that it can be harvested, baled, and stored in a shorter period of time, which is ideal given Kentucky's weather conditions. Baleage also retains a higher level of forage quality over time, unlike hay, which loses nutrients through the drying process. Producers are still learning about best practices for producing baleage for livestock. If done incorrectly, it can cause issues with storage, forage quality, spoilage, and even cattle death.

    "So wirft Dirk Nowitzki!"


    Continual Learning in Recurrent Neural Networks with Hypernetworks

    The last decade has seen a surge of interest in continual learning (CL), and a variety of methods have been developed to alleviate catastrophic forgetting. However, most prior work has focused on tasks with static data, while CL on sequential data has remained largely unexplored. Here we address this gap in two ways. First, we evaluate the performance of established CL methods when applied to recurrent neural networks (RNNs). We primarily focus on elastic weight consolidation, which is limited by a stability-plasticity trade-off, and explore the particularities of this trade-off when using sequential data. We show that high working memory requirements, but not necessarily sequence length, lead to an increased need for stability at the cost of decreased performance on subsequent tasks. Second, to overcome this limitation we employ a recent method based on hypernetworks and apply it to RNNs to address catastrophic forgetting on sequential data. By generating the weights of a main RNN in a task-dependent manner, our approach disentangles stability and plasticity, and outperforms alternative methods in a range of experiments. Overall, our work provides several key insights into the differences between CL in feedforward networks and in RNNs, while offering a novel solution to effectively tackle CL on sequential data. Comment: 13 pages and 4 figures in the main text; 20 pages and 2 figures in the supplementary material.
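
    A minimal sketch of the task-conditioned weight generation idea, assuming a plain Elman recurrence and a single flat weight vector per task (the paper's approach additionally uses chunked hypernetworks and a regularizer against forgetting, both omitted here):

    import torch
    import torch.nn as nn

    class HyperRNN(nn.Module):
        """RNN whose recurrent weights are produced by a hypernetwork from a
        learned task embedding (illustrative sketch, not the paper's model)."""
        def __init__(self, n_tasks, in_dim=16, hid_dim=32, emb_dim=8):
            super().__init__()
            self.in_dim, self.hid_dim = in_dim, hid_dim
            self.task_emb = nn.Embedding(n_tasks, emb_dim)
            n_weights = hid_dim * (in_dim + hid_dim) + hid_dim  # W_ih, W_hh, bias
            self.hypernet = nn.Sequential(
                nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, n_weights)
            )

        def forward(self, x, task_id):
            # x: (batch, time, in_dim); task_id selects the generated weights
            theta = self.hypernet(self.task_emb(task_id))  # flat weight vector
            w_ih, w_hh, b = torch.split(
                theta,
                [self.hid_dim * self.in_dim, self.hid_dim * self.hid_dim, self.hid_dim],
            )
            w_ih = w_ih.view(self.hid_dim, self.in_dim)
            w_hh = w_hh.view(self.hid_dim, self.hid_dim)
            h = x.new_zeros(x.size(0), self.hid_dim)
            for t in range(x.size(1)):  # simple Elman recurrence
                h = torch.tanh(x[:, t] @ w_ih.T + h @ w_hh.T + b)
            return h  # final hidden state

    model = HyperRNN(n_tasks=5)
    out = model(torch.randn(4, 10, 16), torch.tensor(2))  # batch 4, 10 steps, task 2
    print(out.shape)  # torch.Size([4, 32])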

    On the Complexity of the Smallest Grammar Problem over Fixed Alphabets

    In the smallest grammar problem, we are given a word w and we want to compute a preferably small context-free grammar G for the singleton language {w} (where the size of a grammar is the sum of the sizes of its rules, and the size of a rule is measured by the length of its right side). It is known that, for unbounded alphabets, the decision variant of this problem is NP-hard and the optimisation variant does not allow a polynomial-time approximation scheme, unless P = NP. We settle the long-standing open problem of whether these hardness results also hold for the more realistic case of a constant-size alphabet. More precisely, it is shown that the smallest grammar problem remains NP-complete (and its optimisation version is APX-hard), even if the alphabet is fixed and has a size of at least 17. The corresponding reduction is robust in the sense that it also works for an alternative size measure of grammars that is commonly used in the literature (i.e., a size measure that also takes the number of rules into account), and it also allows us to conclude that even computing the number of rules required by a smallest grammar is a hard problem. On the other hand, if the number of nonterminals (or, equivalently, the number of rules) is bounded by a constant, then the smallest grammar problem can be solved in polynomial time, which is shown by encoding it as a problem on graphs with interval structure. However, treating the number of rules as a parameter (in terms of parameterised complexity) yields W[1]-hardness. Furthermore, we present an O(3^|w|) exact exponential-time algorithm, based on dynamic programming. These three main questions are also investigated for 1-level grammars, i.e., grammars for which only the start rule contains nonterminals on the right side; thus, we investigate the impact of the “hierarchical depth” of grammars on the complexity of the smallest grammar problem. In this regard, we obtain for 1-level grammars similar, but slightly stronger results. Peer Reviewed
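
    To make the size measure concrete with a small hypothetical example (not taken from the paper): the word abababab admits the grammar S → AAAA, A → ab of size 4 + 2 = 6, which beats the trivial grammar S → abababab of size 8. A short Python sketch that expands such a straight-line grammar and computes its size:

    # Rule names and the example word are illustrative only.
    grammar = {"S": ["A", "A", "A", "A"], "A": ["a", "b"]}  # start symbol "S"

    def expand(symbol):
        """Recursively expand a symbol into the word it derives."""
        if symbol not in grammar:  # terminal character
            return symbol
        return "".join(expand(s) for s in grammar[symbol])

    size = sum(len(rhs) for rhs in grammar.values())  # sum of right-side lengths
    print(expand("S"), "size =", size)  # prints: abababab size = 6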

    On the Complexity of Grammar-Based Compression over Fixed Alphabets

    It is shown that the shortest-grammar problem remains NP-complete if the alphabet is fixed and has a size of at least 24 (which settles an open question). On the other hand, this problem can be solved in polynomial time if the number of nonterminals is bounded, which is shown by encoding the problem as a problem on graphs with interval structure. Furthermore, we present an O(3^n) exact exponential-time algorithm based on dynamic programming. Similar results are also given for 1-level grammars, i.e., grammars for which only the start rule contains nonterminals on the right side (thus investigating the impact of the "hierarchical depth" on the complexity of the shortest-grammar problem).