
    Genetic algorithm for the cargo shunting cooperation between two hub-and-spoke logistics networks

    Purpose: Overstocked cargo at the hubs of a hub-and-spoke logistics network should be cleared promptly to reduce delay losses and improve the utilization of network resources. The problem addressed here is how two logistics networks can cooperate, by sharing network resources, to shunt goods from one hub-and-spoke network to the other. Design/methodology/approach: This paper proposes hub shunting cooperation between two hub-and-spoke networks. First, a mixed integer programming model is established to describe the problem; a multi-layer genetic algorithm is then designed to solve it, with the two hub-and-spoke networks represented by different gene segments of the chromosome encoding. Finally, network data from two third-party logistics companies in southern and northern China are used for a case analysis. Findings: The hub-and-spoke networks of the two companies are constructed simultaneously. The transfer cost coefficient between the two networks and the volume of cargo flow in the network both affect which hubs need to be shunted and which hubs in the other network serve as their cooperation partners. Originality/value: Previous research on hub-and-spoke logistics networks focuses on a single network, whereas we study the cooperation and interaction between two hub-and-spoke networks. The results show that two hub-and-spoke networks can cooperate across networks to shunt goods held at a hub and improve the operating efficiency of the logistics network. Peer reviewed.
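
    As a rough illustration of the encoding idea in this abstract, the sketch below evolves chromosomes made of two gene segments, one selecting hubs for each network. The node counts, hub counts, fitness function, and GA parameters are invented placeholders, not the authors' mixed integer objective or their multi-layer algorithm.

        # Minimal two-segment GA encoding sketch (all parameters assumed).
        import random

        N_NODES_A, N_NODES_B = 10, 12   # candidate hub sites per network (assumed)
        N_HUBS_A, N_HUBS_B = 3, 3       # hubs to open per network (assumed)

        def random_chromosome():
            # Segment 1 encodes hub choices for network A, segment 2 for network B.
            return (random.sample(range(N_NODES_A), N_HUBS_A)
                    + random.sample(range(N_NODES_B), N_HUBS_B))

        def fitness(chrom):
            # Placeholder objective: the paper minimizes a mixed integer cost
            # (routing plus inter-network transfer cost); here we merely prefer
            # low-index hubs so the search has something to optimize.
            return -sum(chrom)

        def crossover(p1, p2):
            # Segment-wise crossover keeps each network's genes internally valid.
            return p1[:N_HUBS_A] + p2[N_HUBS_A:]

        def mutate(chrom):
            # Occasionally re-draw one segment to maintain diversity.
            if random.random() < 0.1:
                chrom = chrom[:]
                chrom[:N_HUBS_A] = random.sample(range(N_NODES_A), N_HUBS_A)
            return chrom

        population = [random_chromosome() for _ in range(50)]
        for _ in range(100):
            population.sort(key=fitness, reverse=True)
            parents = population[:25]
            population = parents + [mutate(crossover(random.choice(parents),
                                                     random.choice(parents)))
                                    for _ in range(25)]

        best = max(population, key=fitness)
        print("hubs for network A:", best[:N_HUBS_A], "| network B:", best[N_HUBS_A:])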

    Combination of Evidential Sensor Reports with Distance Function and Belief Entropy in Fault Diagnosis

    Although evidence theory has been applied to sensor data fusion, it can produce unreasonable results when handling highly conflicting sensor reports. To address this issue, an improved fusion method based on evidence distance and belief entropy is proposed. The goal is to obtain appropriate weights to assign to the different reports. Specifically, the distribution difference between two sensor reports is measured by belief entropy, and a diversity degree is defined by combining evidence distance with this distribution difference. The weight of each sensor report is then determined from the proposed diversity degree. Finally, the Dempster combination rule is used to make the decision. A real application in fault diagnosis and a numerical example show the efficiency of the proposed method. Compared with existing methods, the proposed method not only converges better but also yields less uncertainty.
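
    The pipeline described here follows a well-known pattern in this literature: score each report's similarity to the others with an evidence distance (Jousselme's), fold in belief (Deng) entropy, weight-average the basic probability assignments, and combine the average repeatedly with Dempster's rule. The sketch below implements one common version of that recipe; the paper's exact diversity-degree formula may differ, and the fault hypotheses F1/F2 are invented.

        import math

        def dempster(m1, m2):
            # Dempster's rule: conjunctive combination with conflict renormalization.
            combined, conflict = {}, 0.0
            for a, va in m1.items():
                for b, vb in m2.items():
                    inter = a & b
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + va * vb
                    else:
                        conflict += va * vb
            return {s: v / (1.0 - conflict) for s, v in combined.items()}

        def jousselme_distance(m1, m2):
            # d(m1, m2) = sqrt(0.5 * (m1-m2)^T D (m1-m2)), with D[A][B] = |A∩B| / |A∪B|
            focals = sorted(set(m1) | set(m2), key=sorted)
            diff = [m1.get(f, 0.0) - m2.get(f, 0.0) for f in focals]
            quad = sum(diff[i] * diff[j] * len(a & b) / len(a | b)
                       for i, a in enumerate(focals) for j, b in enumerate(focals))
            return math.sqrt(0.5 * quad)

        def deng_entropy(m):
            # Ed(m) = -sum over A of m(A) * log2( m(A) / (2^|A| - 1) )
            return -sum(v * math.log2(v / (2 ** len(a) - 1)) for a, v in m.items() if v > 0)

        def fuse(reports):
            n = len(reports)
            # Support of a report = total similarity (1 - distance) to the others.
            sup = [sum(1 - jousselme_distance(mi, mj)
                       for j, mj in enumerate(reports) if j != i)
                   for i, mi in enumerate(reports)]
            # Fold in belief entropy (one common choice; the paper's formula may differ).
            cred = [s * math.exp(deng_entropy(m)) for s, m in zip(sup, reports)]
            w = [c / sum(cred) for c in cred]
            avg = {}
            for wi, m in zip(w, reports):
                for a, v in m.items():
                    avg[a] = avg.get(a, 0.0) + wi * v
            out = avg
            for _ in range(n - 1):          # combine the average n-1 times (Murphy-style)
                out = dempster(out, avg)
            return out

        F1, F2 = frozenset({"F1"}), frozenset({"F2"})
        reports = [{F1: 0.9, F2: 0.1},
                   {F1: 0.0, F2: 1.0},      # highly conflicting report
                   {F1: 0.8, F2: 0.2}]
        print(fuse(reports))                # mass should concentrate on F1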

    Performer selection in Human Reliability analysis: D numbers approach

    Dependence assessment among human errors in human reliability analysis (HRA) is a significant issue. Many previous works have discussed the factors that influence the dependence level, but failed to discuss how factors such as the "similarity of performers" determine the final result. This paper focuses on the influence of performers in HRA and introduces D numbers, a tool commonly used for multiple criteria decision making (MCDM) problems, to determine the optimal performer. Experimental results demonstrate the validity of the proposed method in choosing the best performer, i.e., the one with the lowest conditional human error probability (CHEP) under the same circumstances.
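
    To make the D-number machinery concrete, the sketch below scores hypothetical performers with the standard D-number integration operator I(D) = sum of b_i * v_i, then ranks them by a weighted sum over criteria. The criteria names, weights, and assessments are invented for illustration; the paper's actual MCDM formulation may differ.

        # A D number is a set of (value, support) pairs whose supports need not
        # sum to 1; integration collapses it to a crisp score.

        def integrate(d):
            # I(D) = sum of value * support over all (value, support) pairs
            return sum(value * support for value, support in d)

        # Hypothetical assessments of three performers on two criteria
        # ("similarity" to the previous performer, "experience"), on a 0-1 scale.
        performers = {
            "P1": {"similarity": [(0.8, 0.7), (0.6, 0.2)], "experience": [(0.9, 0.9)]},
            "P2": {"similarity": [(0.5, 0.9)],             "experience": [(0.7, 0.8), (0.5, 0.1)]},
            "P3": {"similarity": [(0.9, 0.6), (0.7, 0.3)], "experience": [(0.6, 0.9)]},
        }
        weights = {"similarity": 0.6, "experience": 0.4}  # assumed criteria weights

        scores = {
            name: sum(weights[c] * integrate(d) for c, d in crits.items())
            for name, crits in performers.items()
        }
        best = max(scores, key=scores.get)
        print(scores, "-> choose", best)  # higher score ~ lower expected CHEP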

    Advanced Applications Of Big Data Analytics

    Human life is progressing with advancements in technology such as laptops, smartphones, and high-speed communication networks, which reduce the load of our daily activities. For instance, one can chat, talk, or make video calls with friends instantly using social networking platforms such as Facebook, Twitter, Google+, and WhatsApp, while LinkedIn, Indeed, and similar sites connect employees with potential employers. The number of people using these applications is increasing day by day, and so is the amount of data they generate. Processing such vast amounts of data may require new techniques for gaining valuable insights. Network theory concepts form the core of techniques designed to uncover valuable insights from large social network datasets. Many interesting problems, such as ranking the top-K nodes and top-K communities that can effectively diffuse a given message into the network, restaurant recommendations, and friendship recommendations on social networking websites, can be addressed using the concepts of network centrality. Network centrality measures such as in-degree centrality, out-degree centrality, eigenvector centrality, Katz broadcast centrality, Katz receive centrality, and PageRank centrality come in handy in solving these problems. In this thesis, we propose different formulae for computing the strength for identifying top-K nodes and communities that can spread viral marketing messages into the network. The strength formulae are based on Katz broadcast centrality, the resolvent matrix measure, and the personalized PageRank measure. Moreover, the effects of intercommunity and intracommunity connectivity on ranking top-K communities are studied. Top-K nodes for spreading a message effectively into the network are determined using the Katz broadcast centrality measure, and the results are compared with the top-K nodes obtained using the degree centrality measure. We also study the effect of varying α on the number of nodes in the search space. In Algorithms 2 and 3, top-K communities are obtained using the resolvent matrix and personalized PageRank measures, and the results of Algorithm 2 are studied by varying the parameter α.
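
    As an illustration of the Katz broadcast ranking step mentioned above, the sketch below computes the broadcast scores b = (I - alpha*A)^(-1) * 1 on a toy undirected graph and keeps the K largest entries; alpha must stay below 1/lambda_max(A) for the resolvent series to converge. The graph and K are invented here, not the thesis's datasets or its strength formulae.

        import numpy as np

        # Adjacency matrix of a small toy graph (assumed, for illustration only).
        A = np.array([[0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0],
                      [1, 1, 0, 1, 1],
                      [0, 1, 1, 0, 1],
                      [0, 0, 1, 1, 0]], dtype=float)

        lam_max = max(abs(np.linalg.eigvals(A)))
        alpha = 0.85 / lam_max          # keep alpha strictly below 1/lambda_max
        n = A.shape[0]

        # Broadcast scores: row sums of the resolvent (I - alpha*A)^(-1),
        # obtained by solving the linear system instead of inverting the matrix.
        b = np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

        K = 3
        top_k = np.argsort(b)[::-1][:K]
        print("top-K broadcasters:", top_k, "scores:", np.round(b[top_k], 3))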