    Methodology for inductor devices stripping

    Methodology for inductor devices stripping. A device for stripping enamel insulation, operating on the principle of induction heating of a copper wire placed in the gap of a magnetic flux concentrator, is known [1] and is produced in small batches. However, its effectiveness on wires with a diameter of less than 0.1 mm is low, and the task of developing a device for stripping small-diameter wires and Litz wire remains relevant. The aim of the research is to study the processes occurring in a small-diameter conductor, and also in Litz wire, placed in a magnetic field, and to improve the device [1] on the basis of the results obtained. By "improvement" is meant the ability of the device to remove enamel insulation from wires 0.08 mm in diameter or smaller, and the possibility of stripping the insulation from Litz wire
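
    The low effectiveness on thin wires is consistent with basic induction-heating theory: heating depends on the ratio of the wire diameter d to the skin depth. As a hedged illustration (a standard relation, not quoted from the abstract), the skin depth is

        \delta = \sqrt{\frac{2\rho}{\mu\,\omega}}

    where \rho is the resistivity of the conductor, \mu its magnetic permeability and \omega the angular frequency of the field; when d is much smaller than \delta the induced eddy-current power drops off sharply, which would explain the low effectiveness reported for diameters below 0.1 mm.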

    A comparative study of different strategies of batch effect removal in microarray data: a case study of three datasets

    Batch effects refer to the systematic non-biological variability that is introduced by experimental design and sample processing in microarray experiments. They are a common issue in microarray data and, if ignored, can bias the analysis. Many batch effect removal methods have been developed. Previous comparative work has focused on their effectiveness at removing batch effects and their impact on downstream classification analysis. The most common type of analysis for microarray data is differential expression (DE) analysis, yet no study has examined the impact of these methods on downstream DE analysis, which identifies markers that are significantly associated with the outcome of interest. In this project, we investigated the performance of five popular batch effect removal methods, mean-centering, ComBat_p, ComBat_n, SVA, and ratio-based methods, on batch effect reduction and their impact on DE analysis, using three experimental datasets with different sources of batch effects. We found that the performance of these methods is data-dependent: the simple mean-centering method performed reasonably well in all three datasets, but the performance of more complicated algorithms such as ComBat could be unstable for certain datasets, and they should be applied with caution. Given a new dataset, we recommend either using the mean-centering method or, if possible, carefully investigating a few different batch removal methods and choosing the one that works best for the data. This study has important public health significance because better handling of batch effects in microarray data can reduce biased results and lead to improved biomarker identification
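
    As a minimal sketch of the mean-centering method compared above (the function name and the genes-by-samples array layout are assumptions, not taken from the study):

        import numpy as np

        def mean_center_by_batch(expr, batches):
            """Center each gene within each batch to remove additive batch shifts.

            expr    : 2-D array, genes x samples
            batches : 1-D array of batch labels, one per sample
            """
            corrected = np.asarray(expr, dtype=float).copy()
            batches = np.asarray(batches)
            for b in np.unique(batches):
                cols = batches == b
                # subtract the per-gene mean computed within this batch only
                corrected[:, cols] -= corrected[:, cols].mean(axis=1, keepdims=True)
            return corrected

    Downstream DE analysis would then be run on the corrected matrix exactly as on the raw one.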

    Code of Practice for Organic Food Processing. With contributions from Ursula Kretzschmar, Angelika Ploeger and Otto Schmid.

    The consumers of “low input” and organic foods have specific expectations with respect to the quality parameters of processed food. These may relate to the degree of processing, concern about specific additives, nutritional composition, integrity or whole-food concepts, the degree of convenience, the level of energy use and transportation distances, but also to food safety. For many processors, fulfilling all of these expectations represents a tremendous challenge in understanding and implementing the standards' requirements in daily practice. A guidance document is therefore needed in this field for processors as well as for standard-setting institutions and certification/inspection bodies. In the EU project “Quality of low input food” (QLIF, No. 50635), which deals with food safety and quality issues related to food from low-input and organic food systems, it was possible to elaborate a specific code of practice for food processing as part of Subproject 5 on processing. The starting point for this publication was a literature survey on the underlying principles of organic and low-input food processing (Schmid, Beck, Kretzschmar, 2004) and a broad Europe-wide consultation in two rounds, also undertaken in the QLIF project. The results of these studies showed that many companies have serious questions about implementing the complex requirements for organic food in practice. Some recent scandals in this sector have made clear that improvements of current practices are necessary in several areas, e.g. the separation practices between organic and conventional foods. The aim of this “Code of good practice for organic food processing” (COPOF) is to give companies a comprehensive introduction to the most important requirements of the organic food sector applicable to daily practice. Additionally, the COPOF offers a number of tools that make it possible to: a) improve production skills effectively, b) improve and maintain the quality of organic foods, and c) guarantee the safety of organic products. The basic idea of this publication is that the persons responsible in the companies producing and handling the products have the strongest influence on the characteristics of the final products. Therefore, their knowledge, their abilities and the structural conditions for their work are the most important factors in ensuring a high quality and safety of the produced food

    CORe50: a New Dataset and Benchmark for Continuous Object Recognition

    Continuous/lifelong learning of high-dimensional data streams is a challenging research problem. In fact, fully retraining models each time new data become available is infeasible due to computational and storage issues, while naïve incremental strategies have been shown to suffer from catastrophic forgetting. In the context of real-world object recognition applications (e.g., robotic vision), where continuous learning is crucial, very few datasets and benchmarks are available to evaluate and compare emerging techniques. In this work we propose CORe50, a new dataset and benchmark specifically designed for continuous object recognition, and introduce baseline approaches for different continuous learning scenarios
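
    As a hedged illustration of the naive incremental baseline that such a benchmark is meant to stress (the synthetic stream and the choice of scikit-learn's SGDClassifier are assumptions, not part of CORe50):

        import numpy as np
        from sklearn.linear_model import SGDClassifier

        # Stand-in for a CORe50-style stream: a sequence of labelled training batches.
        rng = np.random.default_rng(0)
        stream = [(rng.normal(size=(64, 128)), rng.integers(0, 10, size=64))
                  for _ in range(5)]

        clf = SGDClassifier()
        for X_batch, y_batch in stream:
            # Naive incremental strategy: update on the newest batch only and never
            # revisit earlier data -- the setting in which catastrophic forgetting appears.
            clf.partial_fit(X_batch, y_batch, classes=np.arange(10))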

    Shared Arrangements: practical inter-query sharing for streaming dataflows

    Current systems for data-parallel, incremental processing and view maintenance over high-rate streams isolate the execution of independent queries. This creates unwanted redundancy and overhead in the presence of concurrent incrementally maintained queries: each query must independently maintain the same indexed state over the same input streams, and new queries must build this state from scratch before they can begin to emit their first results. This paper introduces shared arrangements: indexed views of maintained state that allow concurrent queries to reuse the same in-memory state without compromising data-parallel performance and scaling. We implement shared arrangements in a modern stream processor and show order-of-magnitude improvements in query response time and resource consumption for interactive queries against high-throughput streams, while also significantly improving performance in other domains including business analytics, graph processing, and program analysis
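
    The core idea can be sketched outside the paper's dataflow setting: a single incrementally maintained index is shared by several live queries instead of each query building and maintaining its own copy (the dictionary-based index and the query functions below are illustrative assumptions):

        from collections import defaultdict

        # One maintained index over the input stream (the shared "arrangement"),
        # updated once per record and read by every concurrent query.
        shared_index = defaultdict(list)

        def ingest(key, payload):
            shared_index[key].append(payload)       # incremental maintenance

        def query_count(key):                       # one query reuses the shared state...
            return len(shared_index[key])

        def query_latest(key):                      # ...and so does another, with no rebuild
            values = shared_index[key]
            return values[-1] if values else None

        for k, v in [("a", 1), ("b", 2), ("a", 3)]:
            ingest(k, v)
        print(query_count("a"), query_latest("a"))  # -> 2 3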

    Aggregate effect on the concrete cone capacity of an undercut anchor under quasi-static tensile load

    In recent decades, fastening systems have become an essential part of the construction industry. Post-installed mechanical anchors are frequently used in concrete members to connect them with other load-bearing structural members, or to attach appliances. Their performance is limited by the concrete-related failure modes, which are highly influenced by the concrete mix design. This paper investigates the effect that different aggregates used in the concrete mix have on the capacity of an undercut anchor under quasi-static tensile loading. Three concrete batches were cast utilising three different aggregate types. For two concrete ages (28 and 70 days), the anchor tensile capacity and the concrete properties were obtained. Concrete compressive strength, fracture energy and elastic modulus are used to normalize and compare the undercut anchor concrete tensile capacity employing some of the most widely used prediction models. For a more insightful comparison, a statistical method that also yields scatter information is introduced. Finally, the height and shape of the concrete cones are compared by highly precise and objective photogrammetric means
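
    For context, the prediction models referred to above are typically variants of the concrete capacity method, whose cone-breakout resistance is commonly written (not quoted from the paper, and with the calibration factor left symbolic) as

        N_u = k \, \sqrt{f_c} \, h_{ef}^{1.5}

    where f_c is the concrete compressive strength, h_{ef} the effective embedment depth and k an empirically calibrated factor; normalising the measured capacities by such an expression is what allows anchors cast with the three aggregate types to be compared.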

    Verification and Optimization of a PLC Control Schedule

    We report on the use of the SPIN model checker for both the verification of a process control program and the derivation of optimal control schedules. This work was carried out as part of a case study for the EC VHS project (Verification of Hybrid Systems), in which the program for a Programmable Logic Controller (PLC) of an experimental chemical plant had to be designed and verified. The intention of our approach was to see how much could be achieved here using the standard model checking environment of SPIN/Promela. As the symbolic calculations of real-time model checkers can be quite expensive it is interesting to try and exploit the efficiency of established non-real-time model checkers like SPIN in those cases where promising work-arounds seem to exist. In our case we handled the relevant real-time properties of the PLC controller using a time-abstraction technique; for the scheduling we implemented in Promela a so-called variable time advance procedure. For this case study these techniques proved sufficient to verify the design of the controller and derive (time-)optimal schedules with reasonable time and space requirements
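
    The "variable time advance" idea can be sketched independently of SPIN: rather than letting model time tick in unit steps, the system jumps directly to the timestamp of the next pending event. The event list and actions below are purely illustrative (the paper's implementation is written in Promela):

        import heapq

        # Pending (time, action) events of the modelled controller.
        events = [(5, "open_valve"), (12, "start_batch"), (30, "close_valve")]
        heapq.heapify(events)

        now = 0
        while events:
            t, action = heapq.heappop(events)
            now = max(now, t)            # variable time advance: jump straight to t
            print(f"t={now}: {action}")  # stand-in for executing the control step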

    iCaRL: Incremental Classifier and Representation Learning

    A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.
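
    iCaRL classifies with a nearest-mean-of-exemplars rule in the learned feature space; a minimal numpy sketch of that rule (array shapes, names and the normalisation details are assumptions for illustration) is:

        import numpy as np

        def nearest_mean_of_exemplars(features, exemplar_sets):
            """Assign each feature vector to the class whose exemplar mean is closest.

            features      : (n, d) array of embedded test samples
            exemplar_sets : dict class_id -> (m_c, d) array of embedded exemplars
            """
            class_ids = sorted(exemplar_sets)
            means = np.stack([exemplar_sets[c].mean(axis=0) for c in class_ids])
            means /= np.linalg.norm(means, axis=1, keepdims=True)    # unit-length class means
            feats = features / np.linalg.norm(features, axis=1, keepdims=True)
            dists = np.linalg.norm(feats[:, None, :] - means[None, :, :], axis=2)
            return np.array(class_ids)[dists.argmin(axis=1)]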