
    PADS: A simple yet effective pattern-aware dynamic search method for fast maximal frequent pattern mining

    While frequent pattern mining is fundamental for many data mining tasks, mining maximal frequent patterns efficiently is important in both the theory and the applications of frequent pattern mining. The fundamental challenge is how to search a large space of item combinations. Most existing methods search an enumeration tree of item combinations in a depth-first manner. In this paper, we develop a new technique for more efficient max-pattern mining. Our method is pattern-aware: it uses the patterns already found to schedule its future search so that many search subspaces can be pruned. We present efficient techniques to implement the new approach. A systematic empirical study on benchmark data sets shows that our new approach clearly outperforms FPMax* and LCM2, the currently fastest max-pattern mining algorithms. The source code and the executable code (for both Windows and Linux platforms) are publicly available at http://www.cs.sfu.ca/~jpei/Software/PADS.zip. © Springer-Verlag London Limited 2008
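    As a rough illustration of the pattern-aware idea, here is a minimal Python sketch of depth-first maximal frequent itemset search in which already-found maximal patterns prune whole subtrees. It shows the general principle only and is not the PADS algorithm; the transactions, function names, and the particular pruning rule are ours for illustration.

```python
def is_frequent(itemset, transactions, min_sup):
    """Count support of an itemset against a transaction list."""
    return sum(itemset <= t for t in transactions) >= min_sup

def maximal_frequent(transactions, min_sup):
    """Depth-first search for maximal frequent itemsets.

    Pattern-aware pruning: a branch is skipped when its prefix plus its
    entire remaining tail is already covered by a found maximal pattern,
    since every itemset in that subtree is then a subset of it.
    """
    items = sorted({i for t in transactions for i in t})
    maximal = []

    def dfs(prefix, tail):
        full = prefix | set(tail)
        if any(full <= m for m in maximal):   # subtree subsumed: prune
            return
        extended = False
        for idx, item in enumerate(tail):
            cand = prefix | {item}
            if is_frequent(cand, transactions, min_sup):
                extended = True
                dfs(cand, tail[idx + 1:])
        if not extended and prefix and not any(prefix < m for m in maximal):
            maximal.append(prefix)

    dfs(set(), items)
    return maximal

tx = [frozenset(t) for t in
      [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'a', 'b', 'c'}]]
print(maximal_frequent(tx, min_sup=2))   # [{'a', 'b', 'c'}] (set order may vary)
```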

    Critical Current Density and Resistivity of MgB2 Films

    The high resistivity of many bulk and film samples of MgB2 is most readily explained by the suggestion that only a fraction of the cross-sectional area of the samples is effectively carrying current. Hence the supercurrent (Jc) in such samples will be limited by the same area factor, arising for example from porosity or from insulating oxides present at the grain boundaries. We suggest that a correlation should exist, Jc ∝ 1/[ρ(300 K) - ρ(50 K)], where ρ(300 K) - ρ(50 K) is the change in the apparent resistivity between 300 K and 50 K. We report measurements of ρ(T) and Jc for a number of films made by hybrid physical-chemical vapor deposition which demonstrate this correlation, although the "reduced effective area" argument alone is not sufficient. We suggest that this argument can also apply to many polycrystalline bulk and wire samples of MgB2. (Comment: 11 pages, 3 figures.)
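    The reduced-effective-area argument can be made concrete with a little arithmetic: if only a fraction f of the cross-section carries current, the apparent resistivity change scales as 1/f while the apparent Jc scales as f, so their product stays constant. A minimal numerical sketch follows; all values are illustrative assumptions, not data from the paper.

```python
# Illustration of the "reduced effective area" argument for MgB2 samples.
# Assumed (not from the paper): an intrinsic resistivity change and an
# intrinsic Jc scale for a fully connected sample.
delta_rho_ideal = 7.0   # microOhm*cm, assumed intrinsic rho(300K) - rho(50K)
jc_ideal = 1e7          # A/cm^2, assumed intrinsic Jc scale

for f in (1.0, 0.5, 0.1):                 # effective area fraction
    delta_rho = delta_rho_ideal / f       # apparent resistivity change grows
    jc = jc_ideal * f                     # apparent Jc shrinks by same factor
    print(f"f={f:4.1f}  delta_rho={delta_rho:6.1f} uOhm*cm  "
          f"Jc={jc:.1e} A/cm^2  Jc*delta_rho={jc * delta_rho:.1e}")
```

    The product Jc * delta_rho is the same in every row, which is exactly the correlation Jc ∝ 1/[ρ(300 K) - ρ(50 K)] the abstract proposes.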

    Mineral processing simulation-based environmental life cycle assessment for rare earth project development: a case study on the Songwe Hill project

    Rare earth elements (REE), including neodymium, praseodymium, and dysprosium, are used in a range of low-carbon technologies, such as electric vehicles and wind turbines, and demand for these REE is forecast to grow. This study demonstrates that a process simulation-based life cycle assessment (LCA) carried out at the early stages of a REE project, such as at the pre-feasibility stage, can inform subsequent decision making during the development of the project and help reduce its environmental impacts. As new REE supply chains are established and new mines are opened, it is important that the environmental consequences of different production options are examined in a life cycle context so that the environmental footprint of these raw materials is kept as low as possible. Here, we present a cradle-to-gate, process simulation-based LCA for a potential new supply of REE at Songwe Hill in Malawi. We examine different project options that were being considered for the project, including energy selection and a comparison of on-site acid regeneration versus virgin acid consumption. The LCA results show that the global warming potential of producing 1 kg of rare earth oxide (REO) from Songwe Hill is between 17 and 87 kg CO2-eq. A scenario that combines on-site acid regeneration with off-peak hydroelectric and photovoltaic energy gives the lowest global warming potential and performs well in other impact categories. This approach can equally well be applied to all other types of ore deposits and should be considered as a routine addition to all pre-feasibility studies. Funding: Natural Environment Research Council (NERC).
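    A hedged sketch of the kind of scenario comparison the study describes: the two endpoint values below come from the reported 17-87 kg CO2-eq/kg REO range, while the scenario labels and the intermediate value are hypothetical placeholders, not figures from the study.

```python
# Scenario-level global warming potential (GWP) per kg of rare earth
# oxide (REO). Only the 17 and 87 endpoints are from the abstract;
# everything else is a placeholder for illustration.
scenarios = {
    "fossil energy + virgin acid (hypothetical)":        87.0,
    "grid mix + virgin acid (hypothetical)":             55.0,
    "off-peak hydro + PV + acid regeneration":           17.0,
}

worst = max(scenarios.values())
for name, gwp in sorted(scenarios.items(), key=lambda kv: kv[1]):
    print(f"{name:45s} {gwp:5.1f} kg CO2-eq/kg REO "
          f"({100 * (1 - gwp / worst):4.1f}% below worst case)")
```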

    Temporally explicit life cycle assessment as an environmental performance decision making tool in rare earth project development

    This study shows that a detailed LCA can be carried out for a proposed mining project as soon as pre-feasibility study (PFS) data are available. The pre-feasibility study is one of the key early steps in bringing a deposit towards production, and its results are often publicly available. This study applies the technique to a rare earth deposit because rare earth element (REE) consumption is increasing owing to their use in low-carbon technologies such as electric vehicles and wind turbines, making it particularly important to understand the environmental impacts of these raw materials. A number of REE deposits are under development to provide additional supply; many possess novel mineral compositions and will require different processing methods than previously used. Assessing the environmental performance of REE production during the development of a project offers significant insights into how to improve its sustainability. In this study, we used life cycle assessment (LCA) to quantify the environmental impacts of producing rare earth oxide (REO) from the Bear Lodge Project, United States. The life cycle impact assessment results were produced for each year over the life of the project, generating insight into the relationships between ore composition, grade, processing method, and environmental impacts. The environmental impacts vary significantly during the life of a project, and a temporally explicit LCA can highlight this variation. Funding: Natural Environment Research Council (NERC).
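    A minimal sketch of what "temporally explicit" means in practice: impacts are computed per production year as ore grade changes, rather than averaged over the life of mine. All grades, throughputs, and the emission factor below are assumed for illustration and are not values from the study.

```python
# Per-year GWP intensity of REO production as ore grade declines.
# All numbers are assumptions for illustration only.
years = [1, 2, 3, 4, 5]
ore_grade = [0.045, 0.040, 0.032, 0.028, 0.022]   # REO mass fraction (assumed)
ore_mined = [5e5] * 5                             # t ore per year (assumed)
gwp_per_t_ore = 120.0                             # kg CO2-eq per t ore (assumed)

for y, grade, tonnes in zip(years, ore_grade, ore_mined):
    reo_kg = grade * tonnes * 1000                # kg REO produced that year
    gwp = gwp_per_t_ore * tonnes / reo_kg         # kg CO2-eq per kg REO
    print(f"year {y}: grade {grade:.3f}  ->  {gwp:.2f} kg CO2-eq/kg REO")
```

    Even with constant mining and processing effort, the impact per kg REO rises as grade falls, which is the kind of within-project variation a single life-of-mine average would hide.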

    Grassland: A Rapid Algebraic Modeling System for Million-variable Optimization

    An algebraic modeling system (AMS) is a type of mathematical software for optimization problems, which allows users to define symbolic mathematical models in a specific language, instantiate them with a given source of data, and solve them with the aid of external solver engines. With the growing scale of business models and an increasing need for timeliness, traditional AMSs are not sufficient to meet two industry needs: 1) million-variable models need to be instantiated from raw data very efficiently; 2) strictly feasible solutions of million-variable models need to be delivered rapidly, so that up-to-date decisions can be made in highly dynamic environments. Grassland is a rapid AMS that provides an end-to-end solution to these new challenges. It integrates a parallelized instantiation scheme for large-scale linear constraints and a sequential decomposition method that accelerates model solving exponentially, with an acceptable loss of optimality. Extensive benchmarks on both classical models and a real enterprise scenario demonstrate a 6-10x speedup of Grassland over state-of-the-art solutions on model instantiation. Our system has been deployed in Huawei's large-scale production planning scenario. With the aid of our decomposition method, Grassland accelerated Huawei's million-variable production planning simulation pipeline from hours to 3-5 minutes, supporting near-real-time production planning decisions in a highly dynamic supply-demand environment.
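    A minimal sketch of sequential decomposition in the generic sense the abstract names: variables are split into ordered blocks and each block is solved while earlier blocks are held fixed, trading some optimality for speed. This is not Grassland's actual method; the toy LP below is ours.

```python
# Sequential block decomposition of a small LP: minimize c @ x
# subject to A @ x >= b, x >= 0. Each block is solved in turn with
# previously solved blocks fixed at their values.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, n_blocks = 12, 3
c = rng.uniform(1.0, 2.0, n)
A = rng.uniform(0.0, 1.0, (6, n))     # coupling constraints A @ x >= b
b = rng.uniform(2.0, 4.0, 6)

x = np.zeros(n)
size = n // n_blocks
for k in range(n_blocks):
    free = slice(k * size, (k + 1) * size)
    fixed_part = A @ x - A[:, free] @ x[free]   # contribution of fixed vars
    res = linprog(c[free],
                  A_ub=-A[:, free],             # flip sign: linprog uses <=
                  b_ub=-(b - fixed_part),
                  bounds=[(0, None)] * size)
    if res.success:
        x[free] = res.x
print("sequential objective:", c @ x)

full = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * n)
print("monolithic objective:", full.fun)       # lower (better), but slower at scale
```

    On a toy problem the gap between the two objectives is visible immediately; the engineering bet behind such schemes is that on million-variable models the gap stays acceptable while the solve time drops by orders of magnitude.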

    TSS-Net: Two-stage with Sample selection and Semi-supervised Net for deep learning with noisy labels

    The significant success of deep neural networks (DNNs) relies on the availability of large-scale annotated datasets. However, obtaining accurately annotated datasets of this size is time-consuming and expensive, which hinders the development of DNNs. In this paper, a novel two-stage framework for learning with noisy labels is proposed, called the Two-Stage Sample selection and Semi-supervised learning Network (TSS-Net). It combines sample selection with semi-supervised learning. The first stage separates noisy samples from clean samples using cyclic training. The second stage uses the noisy samples as unlabelled data and the clean samples as labelled data for semi-supervised learning. Unlike previous approaches, TSS-Net requires neither specially designed robust loss functions nor complex networks. The two stages are decoupled, so each stage can be replaced with a superior method to achieve better results, which makes the framework more flexible. Our experiments are conducted on several benchmark datasets in different settings, and the results demonstrate that TSS-Net outperforms many state-of-the-art methods.
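    A minimal sketch of the two-stage decoupling. For stage one we assume a common small-loss selection criterion (the paper itself uses cyclic training, which could be swapped in here); the per-sample losses are mocked rather than computed by a real network.

```python
# Stage 1: split a noisily labelled set into "clean" and "noisy" subsets;
# Stage 2: hand the noisy subset to any semi-supervised learner as
# unlabelled data. Illustrates the decoupling, not TSS-Net itself.
import numpy as np

def small_loss_selection(losses, noise_rate):
    """Keep the (1 - noise_rate) fraction of samples with smallest loss."""
    n_keep = int(len(losses) * (1.0 - noise_rate))
    order = np.argsort(losses)
    return order[:n_keep], order[n_keep:]

# Mocked per-sample losses from a warm-up model; noisy labels tend to
# produce larger losses early in training.
losses = np.array([0.10, 2.30, 0.20, 1.90, 0.15, 0.30, 2.80, 0.25])
clean_idx, noisy_idx = small_loss_selection(losses, noise_rate=0.4)
print("treated as labelled:  ", clean_idx)
print("treated as unlabelled:", noisy_idx)

# Stage 2 (pluggable): train any semi-supervised method on
# (x[clean_idx], y[clean_idx]) as labelled data and x[noisy_idx] as
# unlabelled data, e.g. with pseudo-labels or consistency regularization.
```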

    A systematic TMRT observational study of Galactic $^{12}$C/$^{13}$C ratios from Formaldehyde

    We present observations of the C-band $1_{10}-1_{11}$ (4.8 GHz) and Ku-band $2_{11}-2_{12}$ (14.5 GHz) K-doublet lines of H$_2$CO and the C-band $1_{10}-1_{11}$ (4.6 GHz) line of H$_2^{13}$CO toward a large sample of Galactic molecular clouds, obtained with the Shanghai Tianma 65-m radio telescope (TMRT). Our sample of 112 sources includes strong H$_2$CO sources from the TMRT molecular line survey at C-band and other known H$_2$CO sources. All three lines are detected toward 38 objects (43 radial velocity components), yielding a detection rate of 34%. Complementary observations of their continuum emission at both C- and Ku-bands were performed. Combining spectral line parameters and continuum data, we calculate the column densities, the optical depths, and the isotopologue ratio H$_2^{12}$CO/H$_2^{13}$CO for each source. To evaluate photon trapping caused by sometimes significant opacities in the main isotopologue's rotational mm-wave lines connecting our measured K-doublets, and to obtain $^{12}$C/$^{13}$C abundance ratios, we used the RADEX non-LTE model, accounting for radiative transfer effects; this implied the use of the new collision rates from Wiesenfeld & Faure (2013). Also implementing distance values from trigonometric parallax measurements for our sources, we obtain a linear fit of $^{12}$C/$^{13}$C $= (5.08 \pm 1.10)\,D_{\rm GC} + (11.86 \pm 6.60)$, with a correlation coefficient of 0.58, where $D_{\rm GC}$ refers to Galactocentric distance. Our $^{12}$C/$^{13}$C ratios agree very well with those deduced from CN and C$^{18}$O but are lower than those previously reported on the basis of H$_2$CO, tending to suggest that the bulk of the H$_2$CO in our sources was formed on dust grain mantles and not in the gas phase. (Comment: 27 pages, 8 figures, 7 tables. Accepted for publication in The Astrophysical Journal.)
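    For orientation, the reported gradient can be evaluated directly. The sketch below assumes $D_{\rm GC}$ is in kpc (the conventional unit for such Galactic gradients) and propagates the quoted slope and intercept uncertainties as if they were uncorrelated, which is an approximation.

```python
# Evaluate the carbon isotope gradient from the abstract:
# 12C/13C = (5.08 +/- 1.10) * D_GC + (11.86 +/- 6.60),
# D_GC assumed to be in kpc; covariance between slope and
# intercept is ignored (approximation).
slope, slope_err = 5.08, 1.10
intercept, intercept_err = 11.86, 6.60

def c12_c13(d_gc_kpc):
    ratio = slope * d_gc_kpc + intercept
    err = ((slope_err * d_gc_kpc) ** 2 + intercept_err ** 2) ** 0.5
    return ratio, err

for d in (2.0, 5.0, 8.2):   # 8.2 kpc is roughly the solar Galactocentric distance
    r, e = c12_c13(d)
    print(f"D_GC = {d:4.1f} kpc  ->  12C/13C = {r:5.1f} +/- {e:4.1f}")
```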

    A solution method for image distortion correction model based on bilinear interpolation

    During image formation, nonlinearities in the imaging system or the camera perspective introduce geometric distortion into the generated image. Geometric distortion is a form of image degradation: correcting it requires a geometric transform that remaps each pixel of the distorted image so as to recover the original spatial relationships between pixels and their original grey values, and this correction is an important step in image processing. From the standpoint of digital image processing, distortion correction is a restoration process applied to a degraded image, and with the widening application of digital distortion correction, restoration techniques have become a research hotspot. This paper proposes a distortion correction algorithm based on a two-step, one-dimensional linear grey-level interpolation that reduces the computational complexity of standard bilinear interpolation. The distorted image is divided into multiple quadrilaterals whose areas reflect the local degree of distortion; the distortion within each quadrilateral is expressed with a bilinear model, whose parameters are determined from the positions of the quadrilateral's vertices in the target image and the distorted image. Experiments show that the algorithm meets the distortion correction requirements of most lenses and accurately extracts the distorted edges of the image, making the corrected image closer to the ideal image.
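    A minimal sketch of the two ingredients the abstract names: a bilinear model whose four coefficients are fixed by quadrilateral vertex correspondences, and grey-level bilinear interpolation decomposed into two one-dimensional linear interpolations. This is generic textbook machinery under our own toy data, not the paper's full algorithm.

```python
import numpy as np

# Geometric bilinear model for one quadrilateral: the mapping
#   x' = a0 + a1*x + a2*y + a3*x*y   (and likewise for y')
# is determined by the four vertex correspondences between the
# target image and the distorted image.
def bilinear_model(src_pts, dst_pts):
    A = np.array([[1.0, x, y, x * y] for x, y in src_pts])
    ax = np.linalg.solve(A, np.array([p[0] for p in dst_pts], float))
    ay = np.linalg.solve(A, np.array([p[1] for p in dst_pts], float))
    return ax, ay

def apply_model(ax, ay, x, y):
    basis = np.array([1.0, x, y, x * y])
    return basis @ ax, basis @ ay

# Grey-level lookup decomposed into two 1-D linear interpolations
# (the "two-step" scheme): first along x on the two bracketing rows,
# then along y between the two row results.
def bilinear_sample(img, x, y):
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]   # step 1, row y0
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]   # step 1, row y1
    return (1 - fy) * top + fy * bot                  # step 2, along y

# Toy example: one unit quadrilateral with slightly displaced corners.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(0.0, 0.0), (1.1, 0.05), (-0.05, 1.0), (1.05, 1.1)]
ax, ay = bilinear_model(src, dst)
print(apply_model(ax, ay, 0.5, 0.5))    # distorted position of the quad centre

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_sample(img, 1.5, 2.5))   # 11.5 = mean of pixels 9, 10, 13, 14
```

    The computational saving the abstract refers to comes from reusing the row-wise (step 1) results across neighbouring output pixels, so each output sample costs closer to one 1-D interpolation than a full four-term evaluation.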