26 research outputs found

    Modeling and IAHA Solution for Task Scheduling Problem of Processing Crowdsourcing in the Context of Social Manufacturing

    No full text
    The paper addresses the discrete nature of the processing-crowdsourcing task scheduling problem in the context of social manufacturing, decomposes it into two subproblems, social manufacturing unit selection and subtask sequencing, establishes a mixed-integer programming model with the objective of minimizing the maximum completion time, and proposes an improved artificial hummingbird algorithm (IAHA) to solve it. The IAHA uses global-selection, local-selection, and random-selection initialization rules to improve the quality of the initial population; Lévy flights to improve guided foraging and territorial foraging; a simplex search strategy to improve migration foraging and strengthen the search capability; and a greedy decoding method to improve solution quality and reduce solution time. Orthogonal tests are designed to obtain the optimal parameter combination for the IAHA, and comparative tests against variants of the AHA and other algorithms are conducted on a benchmark case and a simulated crowdsourcing case. The experimental results show that the IAHA obtains superior solutions in many cases, economically and effectively.
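
    The Lévy-flight modification mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes Mantegna's algorithm for generating Lévy-distributed step lengths (a common choice in swarm optimizers) and a hypothetical `guided_forage` update toward a target food source.

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm: draw a heavy-tailed (Levy-distributed) step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def guided_forage(position, target, beta=1.5):
    # Hypothetical guided-foraging move: jump toward a target solution,
    # with the jump scaled by a Levy step so occasional long hops
    # help the search escape local optima.
    return [x + levy_step(beta) * (t - x) for x, t in zip(position, target)]
```

    The heavy tail of the Lévy distribution is what distinguishes this from a plain Gaussian perturbation: most steps are small (exploitation), but rare large steps keep exploration alive.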

    A Blockchain Approach of Model Architecture for Crowdsourcing Design Services under the Context of Social Manufacturing

    No full text
    Crowdsourcing design is generally monitored by a platform. However, traditional crowdsourcing platforms face problems such as centralization, lack of credibility, and vulnerability to a single point of failure. In the context of social manufacturing, addressing these issues has both research and practical value. In this paper, we introduce decentralized blockchain technology for crowdsourcing service systems and propose a method to manage and control the process of crowdsourcing design services, encoding the complex crowdsourcing logic in smart contracts. The crowdsourcing design process then does not depend on any third party: it is decentralized, tamper-proof, and traceable, and it protects user privacy to a certain extent. We implement this crowdsourcing design system on a blockchain test network and experimentally test its functionality. The results show the feasibility and usability of our crowdsourcing design system. In future work, we will further improve the algorithmic logic of the smart contracts so that they run stably and securely in a complex node-network environment.
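
    The tamper-evidence property the abstract relies on can be illustrated independently of any blockchain platform. The sketch below is not the paper's smart-contract system; it is a minimal hash-chain in Python showing why a record that commits to its predecessor's hash makes later modification detectable (the names `add_record` and `verify` are hypothetical).

```python
import hashlib
import json

def add_record(chain, payload):
    # Append a crowdsourcing-design event (e.g. "task posted").
    # Each record commits to the previous record's hash, so any
    # later modification of an earlier record breaks the chain.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    # Recompute every hash and check the prev-links; any tampering
    # with a payload or a link makes verification fail.
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev:
            return False
        body = {"payload": rec["payload"], "prev": rec["prev"]}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

    A real deployment, as in the paper, delegates this append-and-verify logic to consensus among blockchain nodes rather than a single trusted process.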

    Effective Moving Object Detection and Retrieval via Integrating Spatial-Temporal Multimedia Information

    No full text
    In multimedia semantic analysis and video retrieval, automatic object detection techniques play an important role: without object-level features, it is hard to achieve high performance in semantic retrieval. As a branch of object detection, moving object detection has also become a hot research field and has made considerable progress recently. This paper proposes a moving object detection and retrieval model that integrates the spatial and temporal information in video sequences and uses the proposed integral density method (adapted from the idea of integral images) to quickly identify motion regions in an unsupervised way. First, key information locations on video frames are obtained as the maxima and minima of a Difference of Gaussians (DoG) function. In parallel, a motion map of adjacent frames is obtained from the differences between the outcomes of the Simultaneous Partition and Class Parameter Estimation (SPCPE) framework. The motion map filters the key information locations into key motion locations (KMLs), where the existence of moving objects is implied. Besides showing the motion zones, the motion map also indicates the motion direction, which guides the proposed integral density approach to locate the motion regions quickly and accurately. The detection results are not only illustrated visually but also verified by promising experimental results, which show that concept-retrieval performance can be improved by integrating global and local visual information.
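
    The integral-image idea the integral density method adapts can be sketched briefly. This is a generic summed-area table, not the paper's code: it precomputes cumulative sums so that the sum (and hence density) of any rectangular region is available in constant time.

```python
def integral_image(grid):
    # Summed-area table: I[y][x] holds the sum of grid[0..y-1][0..x-1].
    h, w = len(grid), len(grid[0])
    I = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            I[y + 1][x + 1] = grid[y][x] + I[y][x + 1] + I[y + 1][x] - I[y][x]
    return I

def region_sum(I, x0, y0, x1, y1):
    # Sum over the inclusive rectangle (x0, y0)-(x1, y1) in O(1),
    # using four lookups instead of re-scanning the region.
    return I[y1 + 1][x1 + 1] - I[y0][x1 + 1] - I[y1 + 1][x0] + I[y0][x0]
```

    Dividing `region_sum` by the rectangle's area gives a density, which is presumably how candidate windows over the key motion locations can be scored quickly when searching for the densest motion region.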

    A Graph Matching Model for Designer Team Selection for Collaborative Design Crowdsourcing Tasks in Social Manufacturing

    No full text
    To find a suitable designer team for the collaborative design crowdsourcing task of a product, we consider the matching problem between the collaborative-design crowdsourcing task network graph and the designer network graph. Because the two types of graphs differ in their nodes and edges, we propose a graph matching model based on structural similarity. The model first uses a Graph Convolutional Network to extract features of the graph structure and obtain node-level embeddings. Second, an attention mechanism that accounts for the differing importance of nodes in the graph assigns each node a weight and aggregates the node-level embeddings into a graph-level embedding. Finally, the graph-level embeddings of the two graphs to be matched are concatenated and fed into a multi-layer fully connected neural network to obtain the similarity score of the graph pair. We compare our model with baseline models on four evaluation metrics across two datasets. The experimental results show that our model finds structurally similar graph pairs more accurately. A crankshaft-linkage mechanism produced by an enterprise is taken as an example to verify the practicality and applicability of our model and method.
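
    The attention-based aggregation step can be illustrated with a simplified sketch. This is not the paper's model: it omits the learned transformation and uses the mean embedding as the attention context (in the style of SimGNN-type graph-similarity networks), purely to show how per-node weights turn node-level embeddings into one graph-level embedding.

```python
import math

def attention_pool(node_embs):
    # node_embs: list of node-level embedding vectors (lists of floats).
    # Context vector = mean of all node embeddings.
    dim = len(node_embs[0])
    n = len(node_embs)
    context = [sum(e[i] for e in node_embs) / n for i in range(dim)]
    # Each node's attention weight = sigmoid of its dot product with the
    # context, so nodes aligned with the "global" structure count more.
    weights = [1.0 / (1.0 + math.exp(-sum(e[i] * context[i] for i in range(dim))))
               for e in node_embs]
    # Graph-level embedding = attention-weighted sum of node embeddings.
    return [sum(w * e[i] for w, e in zip(weights, node_embs)) for i in range(dim)]
```

    In the full model, the two graph-level embeddings produced this way would be concatenated and passed through the fully connected network to score the pair.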

    Semantic Retrieval for Videos in Non-static Background Using Motion Saliency and Global Features

    No full text
    In this paper, a video semantic retrieval framework is proposed based on a novel unsupervised motion-region detection algorithm that works reasonably well with dynamic backgrounds and camera motion. The framework is inspired by the biological mechanisms of human vision, which make motion saliency (defined as attention due to motion) more "attractive" to people than other low-level visual features while watching videos. Under this biological observation, motion vectors in frame sequences are calculated using an optical flow algorithm to estimate the movement of a block from one frame to another. Next, a center-surround coherency evaluation model is proposed to compute local motion saliency in a completely unsupervised manner. The integral density algorithm is employed to search for the globally optimal minimum-coherency region, which is taken as the motion region and then integrated into the video semantic retrieval framework to enhance video semantic analysis and understanding. The framework is evaluated on video sequences with non-static backgrounds, and the promising experimental results reveal that semantic-retrieval performance can be improved by integrating global texture and local motion information.
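
    The center-surround intuition can be sketched with a deliberately simplified stand-in for the paper's coherency model: a location is motion-salient when the motion magnitude inside a center window differs strongly from the surrounding ring. The window shapes and the plain mean-difference score here are illustrative assumptions, not the authors' formulation.

```python
def center_surround_saliency(mag, cx, cy, r_in, r_out):
    # mag: 2D grid of motion magnitudes (e.g. optical-flow vector lengths).
    # Center = cells within Chebyshev distance r_in of (cx, cy);
    # surround = the ring between r_in and r_out.
    center, surround = [], []
    for y in range(len(mag)):
        for x in range(len(mag[0])):
            d = max(abs(x - cx), abs(y - cy))
            if d <= r_in:
                center.append(mag[y][x])
            elif d <= r_out:
                surround.append(mag[y][x])
    # Saliency = gap between center and surround mean motion:
    # a moving object over a calmer background scores high.
    return abs(sum(center) / len(center) - sum(surround) / len(surround))
```

    Scanning this score over candidate locations and keeping the extrema is the unsupervised step; no labeled motion masks are needed.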

    SEMANTIC MOTION CONCEPT RETRIEVAL IN NON-STATIC BACKGROUND UTILIZING SPATIAL-TEMPORAL VISUAL INFORMATION

    No full text
    Motion concepts are concepts that contain motion information, such as "racing car" or "dancing". To achieve retrieval accuracy comparable to that of static concepts such as "car" or "person" in semantic retrieval tasks, temporal information has to be considered. Additionally, if a video sequence is captured by an amateur using a hand-held camera and contains significant camera motion, the complexity of the uncontrolled background aggravates the difficulty of motion-concept retrieval. The retrieval of semantic concepts containing motion in a non-static background is therefore regarded as one of the most challenging tasks in multimedia semantic analysis and video retrieval. To address this challenge, this paper proposes a motion-concept retrieval framework, comprising a motion-region detection model and a concept retrieval model, that integrates the spatial and temporal information in video sequences. The motion-region detection model uses a new integral density method (adapted from the idea of integral images) to quickly identify motion regions in an unsupervised way. Specifically, key information locations on video frames are first obtained as the maxima and minima of a Difference of Gaussians (DoG) function. A motion map of adjacent frames is then generated from the differences between the outcomes of the Simultaneous Partition and Class Parameter Estimation (SPCPE) framework. The motion map filters the key information locations into key motion locations (KMLs), which imply the regions containing motion; it also indicates the motion direction, which guides the proposed integral density approach to locate the motion regions quickly and accurately. Based on the motion-region detection model, moving object-level information is extracted for semantic retrieval.
In the proposed concept retrieval model, temporal semantic consistency among consecutive shots is analyzed and captured in a conditional probability model, which is then used to re-rank the similarity scores and improve the final retrieval results. The results of the proposed motion-concept retrieval framework are not only illustrated visually, demonstrating its robustness in non-static backgrounds, but also verified by promising experimental results showing that concept-retrieval performance can be improved by integrating spatial and temporal visual information.
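
    The temporal re-ranking idea can be illustrated with a minimal stand-in for the conditional probability model: blend each shot's concept score with its neighbors', on the assumption that a motion concept tends to persist across adjacent shots. The blending rule and the parameter `alpha` are illustrative assumptions, not the paper's exact model.

```python
def rerank(scores, alpha=0.5):
    # scores: per-shot similarity scores for one concept, in temporal order.
    # Each re-ranked score mixes the shot's own score with the mean of its
    # temporal neighbors; an isolated high score is damped, while a score
    # supported by adjacent shots is reinforced.
    out = []
    for i, s in enumerate(scores):
        neighbors = [scores[j] for j in (i - 1, i + 1) if 0 <= j < len(scores)]
        out.append(alpha * s + (1 - alpha) * sum(neighbors) / len(neighbors))
    return out
```

    After this smoothing, shots are re-sorted by the adjusted scores, so the final ranking reflects both per-shot evidence and temporal consistency.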

    Spatial-temporal motion information integration for action detection and recognition in non-static background

    No full text
    Various motion detection methods have been proposed in the past decade, but few attempts have been made to investigate the advantages and disadvantages of different detection mechanisms so that they can complement each other for better performance. Toward this goal, this paper proposes a human action detection and recognition framework to bridge the semantic gap between low-level pixel-intensity changes and the high-level understanding of the meaning of an action. To achieve a robust estimate of the action region under the complexities of an uncontrolled background, we propose combining the optical flow field and the Harris3D corner detector to obtain a new spatial-temporal estimate in the video sequences. The action detection method, using this integrated motion information, works well with dynamic backgrounds and camera motion, demonstrating the advantage of integrating multiple spatial-temporal cues. The local features (SIFT and STIP) extracted from the estimated action region are then used to learn a Universal Background Model (UBM) for the action recognition task. Experimental results on the KTH and UCF YouTube Action (UCF11) datasets show that the proposed framework not only better estimates the action region but also achieves better recognition accuracy compared with peer work.
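
    The cue-combination step can be illustrated with a simplified sketch: keep only the points supported by both detectors (optical-flow motion points AND spatio-temporal corner points) and take their bounding box as the action-region estimate. The exact fusion rule in the paper may differ; intersection-plus-bounding-box is an illustrative assumption.

```python
def fuse_region(flow_points, corner_points):
    # flow_points:   (x, y) locations flagged by the optical flow field.
    # corner_points: (x, y) locations flagged by a Harris3D-style detector.
    # Points supported by both cues are more likely to lie on the actor,
    # since each cue alone also fires on background motion or static texture.
    common = set(flow_points) & set(corner_points)
    if not common:
        return None
    xs = [p[0] for p in common]
    ys = [p[1] for p in common]
    # Bounding box (x_min, y_min, x_max, y_max) of the agreeing points.
    return (min(xs), min(ys), max(xs), max(ys))
```

    Local features would then be extracted only inside the returned box, which is what makes the downstream UBM-based recognition less sensitive to background clutter.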