80 research outputs found

    Real-time aging trajectory prediction using a base model-oriented gradient-correction particle filter for Lithium-ion batteries

    Predicting batteries' future degradation is essential for developing durable electric vehicles. The technical challenges arise from the absence of a full battery degradation model and the inevitable local aging fluctuations in uncontrolled environments. This paper proposes a base model-oriented gradient-correction particle filter (GC-PF) to predict the aging trajectories of Lithium-ion batteries. Specifically, within the framework of a typical particle filter, a gradient corrector is applied to each particle, so that the evolution of each particle follows the direction of gradient descent. This gradient corrector is also regulated by a base model. In this way, the global information supplied by the base model is fully utilized, and the algorithm's sensitivity is reduced accordingly. Further, according to the prediction deviations of the base model, the weighting factors between the local observations and the base model can be updated adaptively. Four different battery datasets are used to extensively verify the proposed algorithm. Quantitatively, the RMSE of the GC-PF can be limited to 1.75%, which is 44% smaller than that of the conventional particle filter. In addition, the consistency of predictions across different sizes of training data is also improved by 32%. Owing to its purely data-driven nature, the proposed algorithm is also extendable to other battery types.
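The per-particle gradient-correction step can be sketched as follows. This is a minimal illustration under assumed specifics: a single-exponential capacity-fade model Q(k) = a·exp(b·k), a fixed blending weight `lam`, and hand-picked learning rates, none of which come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def gc_pf_step(particles, k, y_obs, y_base, lam=0.5, lr_a=1e-3, lr_b=1e-6, noise=0.02):
    """One GC-PF-style update. Each particle (a, b) parameterises a capacity
    model Q(k) = a * exp(b * k). The gradient corrector nudges every particle
    toward a blend of the local observation y_obs and the base-model
    prediction y_base; particles are then reweighted and resampled."""
    target = lam * y_obs + (1.0 - lam) * y_base
    a, b = particles[:, 0], particles[:, 1]
    pred = a * np.exp(b * k)
    err = pred - target
    # Gradient of the loss 0.5 * err^2 with respect to (a, b)
    grad_a = err * np.exp(b * k)
    grad_b = err * a * k * np.exp(b * k)
    particles = np.column_stack([a - lr_a * grad_a, b - lr_b * grad_b])
    # Reweight against the raw local observation, then resample
    pred = particles[:, 0] * np.exp(particles[:, 1] * k)
    w = np.exp(-0.5 * ((pred - y_obs) / noise) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

In a full loop this step would be repeated over cycles, with `lam` adapted from the base model's recent prediction deviations as the abstract describes.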

    Predicting battery aging trajectory via a migrated aging model and Bayesian Monte Carlo method

    Thanks to rapid development in battery technologies, the lifespan of lithium-ion batteries has increased to more than 3000 cycles. This brings new challenges to reliability-related research, because the experimental time becomes overly long. In response, a migrated battery aging model is proposed to predict the battery aging trajectory. The normal-speed aging model is established from the accelerated aging model through a migration process, whose migration factors are determined through the Bayesian Monte Carlo method and the stratified resampling technique. Experimental results show that the root-mean-square error of the predicted aging trajectory is limited to within 1% when using only 25% of the cyclic aging data for training. The proposed method is suitable for both offline prediction of battery lifespan and online prediction of the remaining useful life.
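The stratified resampling technique mentioned above is a standard particle-method building block and can be sketched independently of the migration model (the weights here are illustrative, not the paper's):

```python
import numpy as np

def stratified_resample(weights, rng=None):
    """Stratified resampling: split (0, 1] into n equal strata, draw one
    uniform sample per stratum, and map each draw through the cumulative sum
    of the normalised weights. This keeps the resampled set spread across the
    weight distribution, with lower variance than multinomial resampling."""
    rng = rng or np.random.default_rng()
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    n = len(weights)
    positions = (rng.random(n) + np.arange(n)) / n   # one draw per stratum
    return np.searchsorted(np.cumsum(weights), positions)
```

A particle whose weight dominates is duplicated into most strata, while near-uniform weights leave every particle represented.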

    Tocilizumab (monoclonal anti-IL-6R antibody) reverses anlotinib resistance in osteosarcoma

    Purpose: Anlotinib, a tyrosine kinase inhibitor (TKI), has been used clinically to inhibit malignant cell growth and lung metastasis in osteosarcoma (OS). However, various drug-resistance phenomena have been observed during treatment. We aim to explore a new target to reverse anlotinib resistance in OS. Materials and Methods: In this study, we established four anlotinib-resistant OS cell lines, and RNA sequencing was performed to identify differentially expressed genes. We verified the RNA-sequencing results by PCR, western blot and ELISA assays. We further explored the effects of tocilizumab (an anti-IL-6 receptor antibody), either alone or combined with anlotinib, on the malignant viability of anlotinib-resistant OS cells by CCK8, EdU, colony formation, apoptosis, transwell, wound healing and cytoskeletal staining assays, and in a xenograft nude mouse model. The expression of IL-6 in 104 osteosarcoma samples was tested by IHC. Results: We found that IL-6 and its downstream STAT3 pathway were activated in anlotinib-resistant osteosarcoma. Tocilizumab impaired the tumor progression of anlotinib-resistant OS cells, and combined treatment with anlotinib augmented these effects by inhibiting STAT3 expression. IL-6 was highly expressed in patients with OS and correlated with poor prognosis. Conclusion: Tocilizumab can reverse anlotinib resistance in OS via the IL-6/STAT3 pathway, and the combination treatment with anlotinib rationalizes further studies and clinical treatment of OS.

    Data Assimilation with Machine Learning for Dynamical Systems: Modelling Indoor Ventilation

    Data assimilation is a method of combining physical observations with prior knowledge (for instance, a computational simulation) in order to produce an improved model; that is, improved over what the physical observations or the computational simulation could offer in isolation. Recently, machine learning techniques have been deployed in order to address the significant computational burden that is associated with the procedures involved in data assimilation. In this paper we propose an approach that uses a non-intrusive reduced-order model (NIROM) as a surrogate for a high-resolution model, thereby saving computational effort. The mismatch between observations and the surrogate model is propagated forwards and backwards in time in a manner similar to 4D-variational data assimilation methods. The observations and prior are reconciled in a new way which takes full advantage of the neural network used in the NIROM and also means that there is no need to form the sensitivities explicitly when propagating the mismatch. Instead, the observations are part of the input and output of the network. Modelling the air quality in a school classroom is the test case for our demonstration. Firstly, the data assimilation approach is shown to perform very well in a dual-twin type experiment, and secondly, the approach is used to assimilate observations collected from a classroom in Houndsfield Primary School with predictions from the NIROM.
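As a toy illustration of the variational idea (not the paper's NIROM-based method), the mismatch between observations and a model prediction can be reconciled with a prior by minimising a quadratic cost function. Here the observation operator H is linear and the covariances are assumed; the paper instead works through a neural-network surrogate without forming sensitivities explicitly.

```python
import numpy as np

def var_assimilate(x_b, y, H, B_inv, R_inv, lr=0.1, iters=200):
    """Minimise J(x) = (x - x_b)^T B^{-1} (x - x_b) + (Hx - y)^T R^{-1} (Hx - y)
    by plain gradient descent. x_b is the prior (background) state, y the
    observations, and B, R the background- and observation-error covariances."""
    x = x_b.astype(float).copy()
    for _ in range(iters):
        # Gradient of J: background pull plus observation-mismatch pull
        grad = 2.0 * B_inv @ (x - x_b) + 2.0 * H.T @ R_inv @ (H @ x - y)
        x -= lr * grad
    return x
```

With equal trust in prior and observation (B = R), the analysis lands halfway between them, which is the expected behaviour of the quadratic cost.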

    IRIM at TRECVID 2013: Semantic Indexing and Instance Search

    The IRIM group is a consortium of French teams working on Multimedia Indexing and Retrieval. This paper describes its participation in the TRECVID 2013 semantic indexing and instance search tasks. For the semantic indexing task, our approach uses a six-stage processing pipeline to compute scores for the likelihood of a video shot containing a target concept. These scores are then used to produce a ranked list of images or shots that are the most likely to contain the target concept. The pipeline is composed of the following steps: descriptor extraction, descriptor optimization, classification, fusion of descriptor variants, higher-level fusion, and re-ranking. We evaluated a number of different descriptors and tried different fusion strategies. The best IRIM run has a Mean Inferred Average Precision of 0.2796, which ranked us 4th out of 26 participants.
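A simple form of the score-fusion step above can be sketched as a min-max-normalised weighted average of per-descriptor shot scores. This is hypothetical: the actual IRIM pipeline uses more elaborate fusion of descriptor variants, higher-level fusion, and re-ranking.

```python
import numpy as np

def late_fusion(score_lists, weights=None):
    """Fuse per-descriptor score lists for the same set of shots:
    min-max normalise each descriptor's scores to [0, 1], then take a
    weighted average across descriptors."""
    mats = []
    for s in score_lists:
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        mats.append((s - s.min()) / span if span > 0 else np.zeros_like(s))
    mats = np.stack(mats)
    if weights is None:
        weights = np.ones(len(mats)) / len(mats)   # uniform by default
    return np.average(mats, axis=0, weights=weights)
```

Ranking shots by the fused score then yields the ranked list that the evaluation metric (Mean Inferred Average Precision) is computed over.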

    IRIM at TRECVID 2012: Semantic Indexing and Instance Search

    The IRIM group is a consortium of French teams working on Multimedia Indexing and Retrieval. This paper describes its participation in the TRECVID 2012 semantic indexing and instance search tasks. For the semantic indexing task, our approach uses a six-stage processing pipeline to compute scores for the likelihood of a video shot containing a target concept. These scores are then used to produce a ranked list of images or shots that are the most likely to contain the target concept. The pipeline is composed of the following steps: descriptor extraction, descriptor optimization, classification, fusion of descriptor variants, higher-level fusion, and re-ranking. We evaluated a number of different descriptors and tried different fusion strategies. The best IRIM run has a Mean Inferred Average Precision of 0.2378, which ranked us 4th out of 16 participants. For the instance search task, our approach uses two steps. First, individual methods of the participants are used to compute the similarity between an example image of an instance and the keyframes of a video clip. Then a two-step fusion method is used to combine these individual results and obtain a score for the likelihood of an instance appearing in a video clip. These scores are used to obtain a ranked list of the clips most likely to contain the queried instance. The best IRIM run has a MAP of 0.1192, which ranked us 29th out of 79 fully automatic runs.

    Achieving ultra‐high rate planar and dendrite‐free zinc electroplating for aqueous zinc battery anodes

    Despite being one of the most promising candidates for grid-level energy storage, practical aqueous zinc batteries are limited by dendrite formation, which leads to significantly compromised safety and cycling performance. In this study, by using single-crystal Zn-metal anodes, reversible electrodeposition of planar Zn with a high capacity of 8 mAh cm−2 can be achieved at an unprecedentedly high current density of 200 mA cm−2. This dendrite-free electrode is well maintained even after prolonged cycling (>1200 cycles at 50 mA cm−2). Such excellent electrochemical performance is due to single-crystal Zn suppressing the major sources of defect generation during electroplating and heavily favoring planar deposition morphologies. As so few defect sites form, including those that would normally be found along grain boundaries or to accommodate lattice mismatch, there is little opportunity for dendritic structures to nucleate, even under extreme plating rates. This scarcity of defects is in part due to perfect atomic stitching between merging Zn islands, ensuring no defective shallow-angle grain boundaries are formed and thus removing a significant source of non-planar Zn nucleation. It is demonstrated that an ideal high-rate Zn anode should offer perfect lattice matching, as this facilitates planar epitaxial Zn growth and minimizes the formation of any defective regions.

    The ABC130 barrel module prototyping programme for the ATLAS strip tracker

    For the Phase-II Upgrade of the ATLAS Detector, its Inner Detector, consisting of silicon pixel, silicon strip and transition radiation sub-detectors, will be replaced with an all-new 100% silicon tracker, composed of a pixel tracker at inner radii and a strip tracker at outer radii. The future ATLAS strip tracker will include 11,000 silicon sensor modules in the central region (barrel) and 7,000 modules in the forward region (end-caps), which are foreseen to be constructed over a period of 3.5 years. The construction of each module consists of a series of assembly and quality control steps, which were engineered to be identical for all production sites. In order to develop the tooling and procedures for assembly and testing of these modules, two major prototyping programs were conducted: an early program using readout chips designed in a 250 nm fabrication process (ABCN-25) and a subsequent program using a follow-up chip set made in a 130 nm process (ABC130 and HCC130 chips). This second generation of readout chips was used for an extensive prototyping program that produced around 100 barrel-type modules and contributed significantly to the development of the final module layout. This paper gives an overview of the components used in ABC130 barrel modules, their assembly procedure, and the findings resulting from their tests. Comment: 82 pages, 66 figures.

    Adopting Argumentation Mining for Claim Extraction from TED Talks

    Engagement is critical for academic learning, and it is commonly believed that motivating students to learn is crucial in education. We think that providing students with interesting content related to what they are learning is a good way to do this. Since TED Talks share attractive new ideas, we plan to motivate students by recommending TED Talks relevant to their learning content. We also found it important to have "teasing texts", which are used to convince students to watch the TED Talks we recommend. To obtain these texts, we adopt an argumentation mining technique called "claim extraction" on TED Talk subtitles. Claim extraction uses classifiers trained on a dataset to extract claim sentences from the given texts, and these claim sentences can be used as the teasing texts. Because there is no TED-Talk-based corpus and building one would be extremely expensive, we have to train classifiers on an existing Wikipedia dataset, which means we have to deal with the cross-domain learning problem. This thesis introduces our approach to building a TED Talk claim extraction system. The system uses classifiers trained on an existing corpus and can extract claim sentences from TED Talk subtitles. The thesis also proposes that claims extracted from TED Talk subtitles can encourage students to watch the recommended TED Talks.
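A minimal sketch of the claim-extraction idea: a tiny bag-of-words Naive Bayes classifier trained on a handful of labelled sentences (standing in for the Wikipedia claim corpus) and applied to unseen sentences. The training data, features, and model here are all illustrative; the thesis's actual classifiers and corpus differ.

```python
import math
from collections import Counter

def train_claim_classifier(sentences, labels):
    """Count word frequencies per class (1 = claim, 0 = non-claim)."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for sent, lab in zip(sentences, labels):
        counts[lab].update(sent.lower().split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def is_claim(sentence, counts, priors, vocab):
    """Return 1 if the sentence scores higher under the claim class,
    using Naive Bayes with Laplace smoothing; otherwise return 0."""
    best, best_score = 0, -math.inf
    total_docs = sum(priors.values())
    for lab in (0, 1):
        total = sum(counts[lab].values())
        score = math.log(priors[lab] / total_docs)
        for w in sentence.lower().split():
            # Counter returns 0 for unseen words; +1 is the Laplace smoothing
            score += math.log((counts[lab][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = lab, score
    return best
```

Sentences classified as claims would then serve as the "teasing texts"; the cross-domain gap between Wikipedia training data and TED subtitles is exactly the problem the thesis addresses.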