54 research outputs found

    A methodology to liberate critical metals in waste solar panel

    Get PDF
    The availability of critical metals is one of the driving factors in securing the transition to renewable, low-carbon energy production because of the material requirements of photovoltaic (PV) technology, wind power generation and batteries. For example, precious metals are vital to manufacturing crystalline silicon solar panels, and tellurium, germanium, indium and gallium are essential in thin-film photovoltaic panels. However, the pressure on the supply of critical metals increases with the growth of photovoltaics. Considering resource availability, recycling critical metals from waste solar panels can enhance the sustainability of end-of-life management, although the recycled metal input is limited at present. Among recycling techniques, the separation and liberation of metals from non-metals are crucial. This study investigates a methodology to liberate thin-film materials from copper indium gallium selenide (CIGS) thin-film solar panels and recycle photovoltaic materials, including indium and gallium, via a mechanical process. An experimental approach using mineral processing techniques, crushing and grinding, is proposed to recycle critical metals from CIGS solar panels. Crushing experiments were conducted and the size-based elemental distribution was analysed. The results showed that crushing is capable of delaminating the glass substrate, and Fuerstenau upgrading curves and the ore separation degree were used to show that selective liberation occurs and that the critical metals concentrate in the coarse size fraction but may not be fully liberated. Morphology tests using SEM-EDS were conducted to observe the surfaces of broken panels, and the broken particles were classified by size, metal concentration and surface morphology. The results suggested that approximately 90 wt% of the functional materials remain laminated on EVA in the size fraction larger than 2360 μm, showing that crushing alone will not fully liberate the material. Grinding can be used as a second recycling stage to de-coat the target materials. The grinding test achieved a recovery rate of indium above 80 wt%, and the fine particles below 38 μm contained more than 1500 ppm indium, more than 480 ppm gallium and 1500 ppm molybdenum. This indicates that the combination of crushing and grinding is suitable for delaminating the panel and de-coating the critical metals, thereby liberating and concentrating them.
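    As a minimal illustration of the kind of size-based analysis described above, the sketch below computes mass yield, grade and metal recovery per size fraction, the quantities that feed a Fuerstenau-style upgrading curve. The size fractions, masses and indium assays are assumed placeholder values, not data from the study.

    # Minimal sketch (assumed data): grade and recovery of indium per size
    # fraction after crushing, as used for upgrading-curve analysis.
    size_fractions_um = ["> 2360", "1180-2360", "150-1180", "38-150", "< 38"]
    mass_g = [520.0, 180.0, 160.0, 90.0, 50.0]        # assumed fraction masses
    indium_ppm = [30.0, 45.0, 120.0, 600.0, 1500.0]   # assumed indium assays

    total_indium = sum(m * c for m, c in zip(mass_g, indium_ppm))
    total_mass = sum(mass_g)
    for frac, m, c in zip(size_fractions_um, mass_g, indium_ppm):
        yield_pct = 100.0 * m / total_mass            # mass yield of the fraction
        recovery_pct = 100.0 * m * c / total_indium   # share of total indium reporting here
        print(f"{frac:>10} um | yield {yield_pct:5.1f}% | grade {c:7.1f} ppm | In recovery {recovery_pct:5.1f}%")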

    Can GPT-4 Perform Neural Architecture Search?

    Full text link
    We investigate the potential of GPT-4 to perform Neural Architecture Search (NAS): the task of designing effective neural architectures. Our proposed approach, GPT-4 Enhanced Neural archItectUre Search (GENIUS), leverages the generative capabilities of GPT-4 as a black-box optimiser to quickly navigate the architecture search space, pinpoint promising candidates, and iteratively refine these candidates to improve performance. We assess GENIUS across several benchmarks, comparing it with existing state-of-the-art NAS techniques to illustrate its effectiveness. Rather than targeting state-of-the-art performance, our objective is to highlight GPT-4's potential to assist research on a challenging technical problem through a simple prompting scheme that requires relatively limited domain expertise (code available at https://github.com/mingkai-zheng/GENIUS). More broadly, we believe our preliminary results point to future research that harnesses general purpose language models for diverse optimisation tasks. We also highlight important limitations to our study, and note implications for AI safety.
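    The loop below is a rough sketch of the black-box optimisation pattern the abstract describes, not the GENIUS implementation or its prompts: query_llm and train_and_evaluate are hypothetical placeholders, and the prompt wording and number of rounds are assumptions.

    # Sketch of an LLM-as-black-box-optimiser loop; all names are placeholders.
    import json

    def query_llm(prompt: str) -> str:
        """Hypothetical wrapper around a GPT-4-style chat API; returns the reply text."""
        raise NotImplementedError

    def train_and_evaluate(architecture: dict) -> float:
        """Hypothetical: briefly trains the proposed architecture and returns validation accuracy."""
        raise NotImplementedError

    history = []                                      # (architecture, accuracy) pairs fed back to the model
    for step in range(10):                            # assumed number of refinement rounds
        prompt = (
            "Propose a CNN architecture as JSON (layers, widths, kernel sizes) "
            f"that improves on these previous results: {history}"
        )
        architecture = json.loads(query_llm(prompt))  # the LLM proposes the next candidate
        accuracy = train_and_evaluate(architecture)   # external evaluation supplies the feedback signal
        history.append((architecture, accuracy))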

    SimMatchV2: Semi-Supervised Learning with Graph Consistency

    Full text link
    Semi-supervised image classification is one of the most fundamental problems in computer vision, as it significantly reduces the need for human labor. In this paper, we introduce a new semi-supervised learning algorithm, SimMatchV2, which formulates various consistency regularizations between labeled and unlabeled data from a graph perspective. In SimMatchV2, we regard the augmented view of a sample as a node, which consists of a label and its corresponding representation. Different nodes are connected by edges, which are weighted by the similarity of the node representations. Inspired by message passing and node classification in graph theory, we propose four types of consistency, namely 1) node-node consistency, 2) node-edge consistency, 3) edge-edge consistency, and 4) edge-node consistency. We also find that a simple feature normalization can reduce the gap in feature norms between different augmented views, significantly improving the performance of SimMatchV2. SimMatchV2 has been validated on multiple semi-supervised learning benchmarks. Notably, with a ResNet-50 backbone and 300 epochs of training, SimMatchV2 achieves 71.9% and 76.2% Top-1 accuracy with 1% and 10% labeled examples on ImageNet, which significantly outperforms previous methods and achieves state-of-the-art performance. Code and pre-trained models are available at https://github.com/mingkai-zheng/SimMatchV2.
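    As a rough illustration of two ingredients mentioned above, the sketch below combines L2 feature normalization with a node-node-style consistency term between weakly and strongly augmented views. It is not the official SimMatchV2 code; the confidence threshold and the exact form of the loss are assumptions.

    # Minimal sketch: normalized features plus a node-node consistency loss.
    import torch
    import torch.nn.functional as F

    def node_node_consistency(feat_weak, feat_strong, classifier, threshold=0.95):
        # L2 normalization removes the feature-norm gap between augmented views.
        feat_weak = F.normalize(feat_weak, dim=1)
        feat_strong = F.normalize(feat_strong, dim=1)

        with torch.no_grad():
            probs_weak = F.softmax(classifier(feat_weak), dim=1)  # pseudo-labels from the weak view
            conf, pseudo = probs_weak.max(dim=1)
            mask = (conf >= threshold).float()                    # keep only confident nodes (assumed rule)

        logits_strong = classifier(feat_strong)
        loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
        return (loss * mask).mean()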

    Knowledge Diffusion for Distillation

    Full text link
    The representation gap between teacher and student is an emerging topic in knowledge distillation (KD). To reduce the gap and improve performance, current methods often resort to complicated training schemes, loss functions, and feature alignments, which are task-specific and feature-specific. In this paper, we argue that the essence of these methods is to discard the noisy information and distill the valuable information in the feature, and we propose a novel KD method, dubbed DiffKD, to explicitly denoise and match features using diffusion models. Our approach is based on the observation that student features typically contain more noise than teacher features due to the smaller capacity of the student model. To address this, we propose to denoise student features using a diffusion model trained on teacher features. This allows us to perform better distillation between the refined clean feature and the teacher feature. Additionally, we introduce a light-weight diffusion model with a linear autoencoder to reduce the computation cost, and an adaptive noise matching module to improve the denoising performance. Extensive experiments demonstrate that DiffKD is effective across various types of features and consistently achieves state-of-the-art performance on image classification, object detection, and semantic segmentation tasks. Code is available at https://github.com/hunto/DiffKD.
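    The sketch below is only a schematic reading of the idea, not the released DiffKD code: a denoiser trained on teacher features refines the student feature before the distillation loss is applied. The LinearDenoiser class is a stand-in for the light-weight diffusion model, and the MSE objective is an assumption.

    # Schematic sketch: refine the student feature, then match it to the teacher.
    import torch
    import torch.nn as nn

    class LinearDenoiser(nn.Module):
        """Stand-in for the light-weight diffusion model; here just a linear autoencoder."""
        def __init__(self, dim, hidden):
            super().__init__()
            self.encode = nn.Linear(dim, hidden)
            self.decode = nn.Linear(hidden, dim)

        def forward(self, x):
            return self.decode(torch.relu(self.encode(x)))

    def distillation_loss(student_feat, teacher_feat, denoiser):
        refined = denoiser(student_feat)                   # remove student-side noise first
        return nn.functional.mse_loss(refined, teacher_feat.detach())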

    Relational Self-Supervised Learning

    Full text link
    Self-supervised learning (SSL), including mainstream contrastive learning, has achieved great success in learning visual representations without data annotations. However, most methods focus mainly on instance-level information (i.e., the different augmented images of the same instance should have the same features or cluster into the same class) and pay little attention to the relationships between different instances. In this paper, we introduce a novel SSL paradigm, which we term relational self-supervised learning (ReSSL), a framework that learns representations by modeling the relationships between different instances. Specifically, our proposed method employs a sharpened distribution of pairwise similarities among different instances as a relation metric, which is then used to match the feature embeddings of different augmentations. To boost performance, we argue that weak augmentations matter for representing a more reliable relation, and we leverage a momentum strategy for practical efficiency. The designed asymmetric predictor head and an InfoNCE warm-up strategy enhance robustness to hyper-parameters and benefit the resulting performance. Experimental results show that our proposed ReSSL substantially outperforms the state-of-the-art methods across different network architectures, including various lightweight networks (e.g., EfficientNet and MobileNet). This is an extended version of our NeurIPS 2021 paper.
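    The sketch below illustrates the relation-matching idea as described above, not the official ReSSL code: the sharpened similarity distribution of a weak view serves as the target for a strong view. The memory bank and both temperatures are assumed placeholders.

    # Rough sketch: match pairwise-similarity distributions across augmentations.
    import torch
    import torch.nn.functional as F

    def relation_loss(feat_strong, feat_weak, memory_bank, t_strong=0.1, t_weak=0.04):
        feat_strong = F.normalize(feat_strong, dim=1)
        feat_weak = F.normalize(feat_weak, dim=1)
        bank = F.normalize(memory_bank, dim=1)            # assumed queue of past embeddings

        sim_strong = feat_strong @ bank.t() / t_strong
        sim_weak = feat_weak @ bank.t() / t_weak          # lower temperature gives a sharper target

        target = F.softmax(sim_weak, dim=1).detach()
        log_pred = F.log_softmax(sim_strong, dim=1)
        return -(target * log_pred).sum(dim=1).mean()     # cross-entropy between relation distributions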

    CoNe: Contrast Your Neighbours for Supervised Image Classification

    Full text link
    Image classification is a longstanding problem in computer vision and machine learning research. Most recent works (e.g., SupCon, Triplet, and max-margin) mainly focus on grouping the intra-class samples aggressively and compactly, under the assumption that all intra-class samples should be pulled tightly towards their class centers. However, such an objective is very hard to achieve because it ignores the intra-class variance in the dataset (i.e., different instances from the same class can differ significantly), so this monotonous objective is not sufficient. To provide a more informative objective, we introduce Contrast Your Neighbours (CoNe), a simple yet practical learning framework for supervised image classification. Specifically, in CoNe, each sample is not only supervised by its class center but also directly employs the features of its similar neighbors as anchors to generate more adaptive and refined targets. Moreover, to further boost performance, we propose "distributional consistency" as a more informative regularization that encourages similar instances to have similar probability distributions. Extensive experimental results demonstrate that CoNe achieves state-of-the-art performance across different benchmark datasets, network architectures, and settings. Notably, even without a complicated training recipe, CoNe achieves 80.8% Top-1 accuracy on ImageNet with ResNet-50, which surpasses the recent timm training recipe (80.4%). Code and pre-trained models are available at https://github.com/mingkai-zheng/CoNe.
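    Below is an illustrative sketch only, not the CoNe implementation: alongside the usual cross-entropy to the class center, each sample is pulled toward the feature of its most similar neighbour in the batch. The neighbour-selection rule, the MSE form and the weight alpha are assumptions.

    # Illustrative sketch: class-centre supervision plus a neighbour-based target.
    import torch
    import torch.nn.functional as F

    def neighbour_target_loss(features, logits, labels, alpha=0.5):
        # Standard supervision from the class centre (the usual classifier head).
        ce = F.cross_entropy(logits, labels)

        # Pick each sample's most similar neighbour in the batch (no gradient needed here).
        with torch.no_grad():
            feats = F.normalize(features, dim=1)
            sim = feats @ feats.t()
            sim.fill_diagonal_(-1.0)              # exclude self-similarity
            nearest = sim.argmax(dim=1)

        # Pull each sample's feature towards its neighbour's (detached) feature.
        neighbour_loss = F.mse_loss(F.normalize(features, dim=1), feats[nearest])
        return ce + alpha * neighbour_loss        # alpha is an assumed weighting, not from the paper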

    Impacts of Hypoxia-Inducible Factor-1 Knockout in the Retinal Pigment Epithelium on Choroidal Neovascularization

    Get PDF
    PURPOSE. Hypoxia-inducible factor (HIF)-1 is a key oxygen sensor and is believed to play an important role in neovascularization (NV). The purpose of this study is to determine the role of retinal pigment epithelium (RPE)-derived HIF-1α in ocular NV. METHODS. Conditional HIF-1α knockout (KO) mice were generated by crossing transgenic mice expressing Cre in the RPE with HIF-1α floxed mice, confirmed by immunohistochemistry, Western blot analysis, and fundus fluorescein angiography. The mice were used in the oxygen-induced retinopathy (OIR) and laser-induced choroidal neovascularization (CNV) models. RESULTS. HIF-1α levels were significantly decreased in the RPE layer of ocular sections and in primary RPE cells from the HIF-1α KO mice. Under normal conditions, the HIF-1α KO mice exhibited no apparent abnormalities in retinal histology or visual function, as shown by light microscopy and electroretinogram recording, respectively. The HIF-1α KO mice with OIR showed no significant difference from wild-type (WT) mice in retinal levels of HIF-1α and VEGF or in the number of preretinal neovascular cells. In the laser-induced CNV model, however, the disruption of HIF-1α in the RPE attenuated the overexpression of VEGF and intercellular adhesion molecule 1 (ICAM-1), and reduced vascular leakage and CNV area. CONCLUSIONS. RPE-derived HIF-1α plays a key role in CNV, but not in ischemia-induced retinal NV.

    Predicting the Energy Consumption of Commercial Buildings Based on Deep Forest Model and Its Interpretability

    No full text
    Building energy assessment models are considered among the most informative methods in building energy efficiency design, and most current building energy assessment models have been developed using machine learning algorithms. Deep learning models have proved their effectiveness in fields such as image recognition and fault detection. This paper proposes an interpretable deep learning energy assessment framework to support building energy efficiency design. The proposed framework is validated using the Commercial Building Energy Consumption Survey dataset, and the results show that the wrapper feature selection method (Sequential Forward Generation) significantly improves the performance of deep learning and machine learning models compared with the filter (Mutual Information) and embedded (Least Absolute Shrinkage and Selection Operator) feature selection algorithms. Moreover, the Deep Forest model has an R² of 0.90 and outperforms the Deep Multilayer Perceptron, the Convolutional Neural Network, the Backpropagation Neural Network, and the Radial Basis Function Network in terms of prediction performance. In addition, the model interpretability results reveal how the features affect the prediction results and the contribution of each feature to the energy consumption of a single building sample. This study helps building energy designers assess the energy consumption of new buildings and develop improvement measures.
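    To make the wrapper-style feature selection concrete, the sketch below runs scikit-learn's sequential forward selection on a regression task. The CSV path, target column name and the RandomForestRegressor (used as a stand-in for the Deep Forest model) are assumptions, not the paper's setup.

    # Minimal sketch: sequential forward feature selection with a tree-ensemble regressor.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    df = pd.read_csv("cbecs.csv")                     # assumed pre-processed CBECS extract
    X = df.drop(columns=["annual_energy_kwh"])        # assumed target column name
    y = df["annual_energy_kwh"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    selector = SequentialFeatureSelector(model, n_features_to_select=10,
                                         direction="forward", cv=3)
    selector.fit(X_train, y_train)                    # wrapper search over feature subsets

    X_train_sel = selector.transform(X_train)
    X_test_sel = selector.transform(X_test)
    model.fit(X_train_sel, y_train)
    print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test_sel)))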