365 research outputs found

    A multi-stage stochastic programming for lot-sizing and scheduling under demand uncertainty

    A stochastic lot-sizing and scheduling problem with demand uncertainty is studied in this paper. Lot-sizing determines the batch size for each product, and scheduling decides the production sequence. A multi-stage stochastic programming model is developed to minimize overall system costs, including production, setup, inventory, and backlog costs, with the aim of finding optimal production-sequence and resource-allocation decisions. Demand uncertainty is represented by scenario trees generated with a moment-matching technique, and scenario reduction is used to select the scenarios that best represent the original set. A case study based on a manufacturing company illustrates and verifies the model. We compare the two-stage and multi-stage stochastic programming models; the major motivation for adopting the multi-stage model is that it extends the two-stage model by allowing decisions to be revised at each period based on previous realizations of uncertainty as well as earlier decisions. A stability test and a weak out-of-sample test are applied to find an appropriate scenario sample size. Using the multi-stage stochastic programming model, we improve solution quality by 10–13%.
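    As a hedged illustration of the cost structure described above (not the paper's model), the snippet below evaluates the expected cost of a fixed production plan over a tiny two-period scenario tree; all unit costs, demands, and node names are hypothetical placeholders.

    ```python
    # Toy expected-cost evaluation over a two-period scenario tree for a
    # single product. Costs are hypothetical, not taken from the paper.

    def stage_cost(produced, demand, inventory_in,
                   unit_cost=2.0, setup_cost=5.0,
                   holding_cost=1.0, backlog_cost=4.0):
        """Cost of one period: production + setup + inventory/backlog."""
        setup = setup_cost if produced > 0 else 0.0
        balance = inventory_in + produced - demand
        holding = holding_cost * max(balance, 0.0)
        backlog = backlog_cost * max(-balance, 0.0)
        return unit_cost * produced + setup + holding + backlog, balance

    def expected_cost(plan, scenarios):
        """plan: {node: production qty}; scenarios: list of
        (probability, [(node, demand), ...]) paths through the tree."""
        total = 0.0
        for prob, path in scenarios:
            inv, cost = 0.0, 0.0
            for node, demand in path:
                c, inv = stage_cost(plan[node], demand, inv)
                cost += c
            total += prob * cost
        return total

    # Two equally likely demand paths sharing the root decision "r":
    scenarios = [(0.5, [("r", 10), ("hi", 14)]),
                 (0.5, [("r", 10), ("lo", 6)])]
    plan = {"r": 10, "hi": 14, "lo": 6}
    print(expected_cost(plan, scenarios))  # → 50.0
    ```

    In a real multi-stage model the per-node quantities would be decision variables optimized jointly, so that later-stage decisions can react to each realized demand path.
    
    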

    On-site Smart Operation and Maintenance System for Substation Equipment Based on Mobile Network

    The maintenance of substations is crucial for the safety of the electrical grid and the power industry. However, field maintenance teams and the experts in power companies have long worked separately, and data and expertise exchanges between the on-site maintenance teams and the data center are delayed by the lack of effective communication. This paper introduces an on-site smart operation and maintenance system for substation equipment based on the mobile network, which establishes real-time communication and data-exchange channels between the maintenance teams and the data center. The system consists of an operation and maintenance platform located on the data-center side and smart operation and maintenance boxes with a mobile app that are carried into the field by the maintenance teams. As the kernel of the system, the smart boxes serve as bridges between the data center and the operation sites: on the one hand, they upload data to the data center in real time in a standardized format; on the other hand, operation and maintenance personnel can call on data-center resources for help at any time. Using the proposed system, both the efficiency of operation and maintenance and the normalization of the data can be improved.

    Divided Attention: Unsupervised Multi-Object Discovery with Contextually Separated Slots

    We introduce a method to segment the visual field into independently moving regions, trained with no ground truth or supervision. It consists of an adversarial conditional encoder-decoder architecture based on Slot Attention, modified to use the image as context to decode optical flow without attempting to reconstruct the image itself. In the resulting multi-modal representation, one modality (flow) feeds the encoder to produce separate latent codes (slots), whereas the other modality (image) conditions the decoder to generate the first (flow) from the slots. This design frees the representation from having to encode complex nuisance variability in the image due to, for instance, illumination and reflectance properties of the scene. Since customary autoencoding based on minimizing the reconstruction error does not preclude the entire flow from being encoded into a single slot, we modify the loss to an adversarial criterion based on Contextual Information Separation. The resulting min-max optimization fosters the separation of objects and their assignment to different attention slots, leading to Divided Attention, or DivA. DivA outperforms recent unsupervised multi-object motion segmentation methods while tripling run-time speed (up to 104 FPS) and reducing the performance gap from supervised methods to 12% or less. DivA can handle different numbers of objects and different image sizes at training and test time, is invariant to permutation of object labels, and does not require explicit regularization.

    Closed-Loop Supply Chain Network Design under Uncertainties Using Fuzzy Decision Making

    The importance of considering forward and backward flows simultaneously in supply chain networks has spurred interest in developing closed-loop supply chain networks (CLSCN). Due to the expanded scope of the supply chain, designing a CLSCN often faces significant uncertainties. This paper proposes a fuzzy multi-objective mixed-integer linear programming model to deal with uncertain parameters in CLSCN. The two objective functions are minimization of overall system costs and minimization of negative environmental impact, where negative environmental impact is measured and quantified through CO2-equivalent emissions. Uncertainties include demand, returns, scrap rate, manufacturing cost, and negative environmental factors. The original formulation with uncertain parameters is first converted into a crisp model, and an aggregation function is then applied to combine the objective functions. Numerical experiments demonstrate the effectiveness of the proposed model formulation and solution approach. Sensitivity analyses on the degree of feasibility, the weighting of the objective functions, and the coefficient of compensation have been conducted. The model can be applied to a variety of real-world situations, such as manufacturing production processes.
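    The aggregation step mentioned above can be sketched generically. The snippet below uses a Werners-style compensatory operator (a convex combination of the min-operator and a weighted sum); all bounds, weights, and the compensation coefficient are hypothetical placeholders, not values from the paper.

    ```python
    # Hedged sketch of fuzzy multi-objective aggregation: each objective
    # gets a linear satisfaction degree (membership), then the degrees are
    # combined with a compensatory operator.

    def membership(value, best, worst):
        """Linear satisfaction degree in [0, 1] for a minimization objective."""
        if value <= best:
            return 1.0
        if value >= worst:
            return 0.0
        return (worst - value) / (worst - best)

    def aggregate(memberships, weights, gamma):
        """gamma in [0, 1] is the coefficient of compensation:
        gamma=1 -> pure min-operator, gamma=0 -> pure weighted sum."""
        weighted = sum(w * m for w, m in zip(weights, memberships))
        return gamma * min(memberships) + (1 - gamma) * weighted

    mu_cost = membership(120.0, best=100.0, worst=200.0)  # 0.8
    mu_co2 = membership(55.0, best=50.0, worst=100.0)     # 0.9
    print(aggregate([mu_cost, mu_co2], weights=[0.6, 0.4], gamma=0.5))  # → 0.82
    ```

    In the full model, the aggregated satisfaction degree would be maximized subject to the crisp constraints, trading off the two objectives according to the compensation coefficient.
    
    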

    InfoNet: Neural Estimation of Mutual Information without Test-Time Optimization

    Estimating mutual correlations between random variables or data streams is essential for intelligent behavior and decision-making. As a fundamental quantity for measuring statistical relationships, mutual information has been extensively studied and utilized for its generality and equitability. However, existing methods often either lack the efficiency needed for real-time applications, such as those requiring test-time optimization of a neural network, or, like histogram-based estimators, lack the differentiability required for end-to-end learning. We introduce a neural network called InfoNet, which directly outputs mutual information estimates for data streams by leveraging the attention mechanism and the computational efficiency of deep learning infrastructures. By maximizing a dual formulation of mutual information through large-scale simulated training, our approach circumvents time-consuming test-time optimization and offers generalization ability. We evaluate the effectiveness and generalization of the proposed mutual information estimation scheme on various families of distributions and applications. Our results demonstrate that InfoNet and its training process provide a graceful efficiency-accuracy trade-off and order-preserving properties. We will make the code and models available as a comprehensive toolbox to facilitate studies in different fields requiring real-time mutual information estimation.
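    The "dual formulation" referred to above is typically a variational lower bound such as the Donsker-Varadhan representation maximized by MINE-style estimators: I(X;Y) >= E_{p(x,y)}[T(x,y)] - log E_{p(x)p(y)}[exp(T(x,y))]. The toy sketch below evaluates that bound from samples with a fixed quadratic critic; this is an illustration of the bound itself, not InfoNet's learned, amortized network.

    ```python
    # Donsker-Varadhan lower bound on mutual information, estimated from
    # samples. The critic T here is fixed; neural estimators instead
    # parameterize T and maximize the bound over its parameters.
    import math
    import random

    def dv_bound(pairs, critic):
        xs = [x for x, _ in pairs]
        ys = [y for _, y in pairs]
        # Joint term: average critic value over dependent (x, y) pairs.
        joint = sum(critic(x, y) for x, y in pairs) / len(pairs)
        # Marginal term: shuffle y to approximate the product of marginals.
        shuffled = ys[1:] + ys[:1]
        marg = sum(math.exp(critic(x, y))
                   for x, y in zip(xs, shuffled)) / len(pairs)
        return joint - math.log(marg)

    random.seed(0)
    pairs = [(x, x + random.gauss(0, 0.1)) for x in
             (random.gauss(0, 1) for _ in range(2000))]
    # A positive value certifies dependence; the true MI is even larger.
    print(dv_bound(pairs, critic=lambda x, y: 0.5 * x * y))
    ```

    InfoNet's contribution, per the abstract, is amortizing this maximization over many simulated distributions during training, so no per-dataset optimization is needed at test time.
    
    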

    A Learnable Optimization and Regularization Approach to Massive MIMO CSI Feedback

    Channel state information (CSI) plays a critical role in achieving the potential benefits of massive multiple-input multiple-output (MIMO) systems. In frequency division duplex (FDD) massive MIMO systems, the base station (BS) relies on sustained and accurate CSI feedback from the users. However, due to the large number of antennas and users served in massive MIMO systems, feedback overhead can become a bottleneck. In this paper, we propose a model-driven deep learning method for CSI feedback, called the learnable optimization and regularization algorithm (LORA). Instead of using the l1-norm as the regularization term, a learnable regularization module is introduced in LORA to automatically adapt to the characteristics of CSI. We unfold the conventional iterative shrinkage-thresholding algorithm (ISTA) into a neural network and learn both the optimization process and the regularization term by end-to-end training. We show that LORA improves CSI feedback accuracy and speed. In addition, a novel learnable quantization method and the corresponding training scheme are proposed, and it is shown that LORA can operate successfully at different bit rates, providing flexibility in terms of CSI feedback overhead. Various realistic scenarios are considered to demonstrate the effectiveness and robustness of LORA through numerical simulations.
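    The unfolding idea described above builds on classical ISTA. As a generic sketch (using the fixed l1 soft-threshold that LORA replaces with a learned regularization module, and a random toy problem rather than CSI data), one iteration looks like:

    ```python
    # Classical ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    # Unfolded networks turn each loop iteration into a network layer and
    # learn the step size and (in LORA) the regularizer end-to-end.
    import numpy as np

    def soft_threshold(x, lam):
        """Proximal operator of the l1-norm (shrinkage)."""
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def ista(A, y, lam=0.1, step=None, iters=100):
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - y)          # gradient of the data term
            x = soft_threshold(x - step * grad, step * lam)
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 10))
    x_true = np.zeros(10)
    x_true[[2, 7]] = [1.5, -2.0]               # sparse ground truth
    x_hat = ista(A, A @ x_true, lam=0.01, iters=500)
    print(np.round(x_hat, 2))                  # close to x_true
    ```

    In the unfolded version, a fixed, small number of such layers is trained so that the learned shrinkage adapts to the structure of the signal (here, CSI) instead of the generic l1 prior.
    
    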

    Spiking NeRF: Making Bio-inspired Neural Networks See through the Real World

    Spiking neural networks (SNNs) have been thriving on numerous tasks, leveraging their promising energy efficiency and their potential as biologically plausible intelligence. Meanwhile, Neural Radiance Fields (NeRF) render high-quality 3D scenes at the cost of massive energy consumption, and few works have explored energy-saving solutions with a bio-inspired approach. In this paper, we propose Spiking NeRF (SpikingNeRF), which aligns the radiance ray with the temporal dimension of the SNN to naturally accommodate the SNN to the reconstruction of radiance fields. The computation thus becomes spike-based and multiplication-free, reducing energy consumption. In SpikingNeRF, each sampled point on the ray is matched to a particular time step and represented in a hybrid manner in which voxel grids are also maintained. Based on the voxel grids, it is determined whether each sampled point should be masked for better training and inference. However, this masking operation also incurs irregular temporal lengths. We propose a temporal condensing-and-padding (TCP) strategy to handle the masked samples and maintain a regular temporal length, i.e., regular tensors, for hardware-friendly computation. Extensive experiments on a variety of datasets demonstrate that our method reduces energy consumption by 76.74% on average and obtains synthesis quality comparable to the ANN baseline.
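    The "condense then pad" idea behind TCP can be illustrated with a minimal, hypothetical sketch (plain Python lists standing in for tensors; not the paper's implementation): per-ray unmasked samples have irregular lengths, so the kept samples are packed to the front and padded to a common length.

    ```python
    # Toy condensing-and-padding: pack unmasked samples per ray, then pad
    # all rays to the same temporal length so the result is rectangular.

    def condense_and_pad(rays, keep_masks, pad_value=0.0):
        """rays: list of per-ray sample lists; keep_masks: parallel 0/1
        lists. Returns a rectangular list of lists (num_rays x max_kept)."""
        condensed = [[s for s, k in zip(samples, mask) if k]
                     for samples, mask in zip(rays, keep_masks)]
        width = max(len(row) for row in condensed)
        return [row + [pad_value] * (width - len(row)) for row in condensed]

    rays = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]
    masks = [[1, 0, 1, 1], [0, 1, 0, 0]]
    print(condense_and_pad(rays, masks))
    # → [[0.1, 0.3, 0.4], [0.6, 0.0, 0.0]]
    ```

    A regular shape like this is what makes the masked computation map cleanly onto fixed-size hardware tensor operations.
    
    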

    RobustCCC: a robustness evaluation tool for cell-cell communication methods

    Cell-cell communication (CCC) inference has become a routine task in single-cell data analysis, and many computational tools have been developed for this purpose. However, the robustness of existing CCC methods remains underexplored. We develop a user-friendly tool, RobustCCC, to facilitate the robustness evaluation of CCC methods from three perspectives: replicated data, transcriptomic data noise, and prior knowledge noise. RobustCCC currently integrates 14 state-of-the-art CCC methods and 6 simulated single-cell transcriptomics datasets to generate robustness evaluation reports in tabular form for easy interpretation. We find that these methods exhibit substantially different robustness performance across simulation datasets, implying a strong impact of the input data on the resulting CCC patterns. In summary, RobustCCC is a scalable tool that can easily integrate additional CCC methods and single-cell datasets from different species (e.g., mouse and human) to provide guidance in selecting methods for identifying consistent and stable CCC patterns in tissue microenvironments. RobustCCC is freely available at https://github.com/GaoLabXDU/RobustCCC
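    The kind of robustness check such a tool performs can be sketched generically; the snippet below is an illustration of the idea (not RobustCCC's actual API), with a made-up "CCC method" and a made-up noise model: run the method on clean and perturbed data, then compare the predicted interaction sets.

    ```python
    # Toy robustness evaluation: mean Jaccard similarity between a method's
    # predictions on clean data and on noise-perturbed copies of the data.
    import random

    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    def robustness(ccc_method, data, perturb, runs=5, seed=0):
        rng = random.Random(seed)
        baseline = ccc_method(data)
        scores = [jaccard(baseline, ccc_method(perturb(data, rng)))
                  for _ in range(runs)]
        return sum(scores) / len(scores)

    # Hypothetical "method": report ligand-receptor pairs whose score > 1.
    def toy_ccc(expr):
        return [(l, r) for (l, r), v in expr.items() if v > 1.0]

    def add_noise(expr, rng, sd=0.05):
        return {k: v + rng.gauss(0, sd) for k, v in expr.items()}

    data = {("L1", "R1"): 2.0, ("L2", "R2"): 0.5, ("L3", "R3"): 1.2}
    print(robustness(toy_ccc, data, add_noise))  # near 1.0 -> stable
    ```

    A method whose predicted interactions change substantially under small input perturbations would score low here, which is the instability the abstract reports for some methods across simulation datasets.
    
    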