CurveFormer: 3D Lane Detection by Curve Propagation with Curve Queries and Attention
3D lane detection is an integral part of autonomous driving systems. Previous
CNN and Transformer-based methods usually first generate a bird's-eye-view
(BEV) feature map from the front-view image, and then use a sub-network that
takes the BEV feature map as input to predict 3D lanes. Such approaches require an
explicit view transformation between BEV and front view, which itself is still
a challenging problem. In this paper, we propose CurveFormer, a single-stage
Transformer-based method that directly calculates 3D lane parameters and can
circumvent the difficult view transformation step. Specifically, we formulate
3D lane detection as a curve propagation problem by using curve queries. A 3D
lane query is represented by a dynamic and ordered anchor point set. In this
way, queries with a curve representation in the Transformer decoder iteratively
refine the 3D lane detection results. Moreover, a curve cross-attention module
is introduced to compute the similarities between curve queries and image
features. Additionally, a context sampling module that captures more
relevant image features for a curve query is provided to further boost the 3D
lane detection performance. We evaluate our method for 3D lane detection on
both synthetic and real-world datasets, and the experimental results show that
our method achieves promising performance compared with the state-of-the-art
approaches. The effectiveness of each component is also validated via ablation studies
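To make the curve-query idea concrete, here is a minimal, hedged sketch in Python: a lane hypothesis is an ordered set of 3D anchor points that is refined over several decoder iterations by predicted offsets. The decoder_step function, point counts and data below are illustrative stand-ins, not the published CurveFormer model.

```python
# Minimal sketch (NumPy) of the curve-query idea: a lane hypothesis is an ordered
# set of 3D anchor points that a decoder refines iteratively by predicting offsets.
# decoder_step is a stand-in for a real Transformer layer with curve cross-attention;
# the loop only illustrates query propagation, not the published model.
import numpy as np

NUM_POINTS = 10          # ordered anchor points per curve query (assumed value)
NUM_ITERS = 6            # decoder layers / refinement iterations (assumed value)

def decoder_step(anchor_points, image_features):
    """Stand-in for one decoder layer: pull each anchor point slightly toward
    the feature 'peak', mimicking offset prediction without learned weights."""
    peak = image_features.mean(axis=0)               # fake attention target
    offsets = 0.25 * (peak - anchor_points)          # predicted per-point offsets
    return anchor_points + offsets

# One curve query: ordered (x, y, z) anchors along the longitudinal direction.
query = np.stack([np.linspace(0, 50, NUM_POINTS),    # x: distance ahead (m)
                  np.zeros(NUM_POINTS),               # y: lateral offset (m)
                  np.zeros(NUM_POINTS)], axis=1)

# Fake "image features" placed in 3D for illustration only.
image_features = np.random.default_rng(0).normal([25.0, 1.5, 0.0], 1.0, (100, 3))

for _ in range(NUM_ITERS):                            # iterative refinement
    query = decoder_step(query, image_features)

print("refined lateral offsets:", np.round(query[:, 1], 2))
```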
Protecting User Privacy for Cloud Computing by Bivariate Polynomial Based Secret Sharing
Cloud computing is an Internet-based computing paradigm in which the service is fully provided by the provider. Users need nothing but personal devices and Internet access. Computing services, such as data, storage, software, computing, and applications, can be delivered to local devices through the Internet. The major security issue of cloud computing is that cloud providers must ensure that their infrastructure is secure and prevent illegal data access by outsiders, other clients, or even unauthorized cloud employees. In this paper, we deal with key agreement and authentication for cloud computing. By using Elliptic Curve Diffie-Hellman (ECDH) and symmetric bivariate polynomial based secret sharing, we design a secure cloud computing (SCC) scheme. Two types of SCC are proposed: one requires a trusted third party (TTP), and the other does not. Additionally, via the homomorphism property of polynomial based secret sharing, our SCC can be extended to a multi-server SCC (MSCC) to fit environments in which multiple servers collaborate to serve applications
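As an illustration of the symmetric bivariate polynomial idea mentioned above, the following hedged Python sketch shows how a dealer can hand each user a univariate share of f(x, y) so that any two users derive the same pairwise key. The field size, threshold and user identifiers are toy assumptions, and the ECDH part of the scheme is not reproduced.

```python
# Minimal sketch of symmetric bivariate polynomial key agreement over a prime
# field (toy parameters). Only the polynomial part is illustrated; all names
# and values are illustrative, not the paper's concrete construction.
import random

P = 2**61 - 1          # a prime modulus (toy choice)
T = 3                  # polynomial degree / collusion threshold (assumed)

def symmetric_coeffs(t, p, seed=42):
    """Random symmetric coefficient matrix a[i][j] = a[j][i] for f(x, y)."""
    rng = random.Random(seed)
    a = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[i][j] = a[j][i] = rng.randrange(p)
    return a

def univariate_share(a, uid, p):
    """Share handed to user `uid`: the coefficients of g(y) = f(uid, y)."""
    t = len(a) - 1
    return [sum(a[i][j] * pow(uid, i, p) for i in range(t + 1)) % p
            for j in range(t + 1)]

def pairwise_key(share, other_uid, p):
    """Evaluate one's own univariate share at the peer's id to get the key."""
    return sum(c * pow(other_uid, j, p) for j, c in enumerate(share)) % p

a = symmetric_coeffs(T, P)
alice, bob = 1001, 2002                      # user identifiers (illustrative)
key_a = pairwise_key(univariate_share(a, alice, P), bob, P)
key_b = pairwise_key(univariate_share(a, bob, P), alice, P)
assert key_a == key_b                        # symmetry of f gives the same key
print(hex(key_a))
```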
Primulina titan sp. nov. (Gesneriaceae) from a Limestone Area in Northern Guangxi, China
A new species of Gesneriaceae, Primulina titan, is described and photographed from northern Guangxi, China. It resembles P. hunanensis, but can be distinguished by combined morphological characters of leaf, bract, corolla, stamen and pistil. We found only one population with approx. 800 mature individuals at the type locality. This species is provisionally assessed as Vulnerable [VU D1] using IUCN criteria
Performance analysis and optimization for workflow authorization
Many workflow management systems have been developed to enhance the performance of workflow executions. The authorization policies deployed in a system may restrict task executions. Common authorization constraints include role constraints, Separation of Duty (SoD), Binding of Duty (BoD) and temporal constraints. This paper presents methods to check the feasibility of these constraints, and also determines the time durations during which the temporal constraints do not impose a negative impact on performance. Further, this paper presents an authorization method that is optimal in the sense that it minimizes a workflow's delay caused by the temporal constraints. The authorization analysis methods are also extended to stochastic workflows, in which the tasks' execution times are not known exactly but follow certain probability distributions. Simulation experiments have been conducted to verify the effectiveness of the proposed authorization methods. The experimental results show that, compared with the intuitive authorization method, the optimal authorization method can reduce the delay caused by the authorization constraints and consequently reduce the workflows' response time
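The feasibility-checking idea can be illustrated with a small, hedged Python sketch: given role constraints, SoD and BoD pairs, a brute-force search looks for a valid user-to-task assignment. The users, roles and constraints below are invented, and the paper's temporal constraints and optimal authorization method are not modeled.

```python
# Hedged sketch of authorization feasibility checking: brute-force a user
# assignment that satisfies role, SoD and BoD constraints. Data are invented.
from itertools import product

users = {"u1": {"clerk"}, "u2": {"clerk", "manager"}, "u3": {"manager"}}
task_roles = {"t1": {"clerk"}, "t2": {"manager"}, "t3": {"clerk"}}   # role constraints
sod = [("t1", "t2")]     # must be executed by different users
bod = [("t1", "t3")]     # must be executed by the same user

def feasible(users, task_roles, sod, bod):
    tasks = list(task_roles)
    # candidate users per task = users holding an allowed role
    cands = [[u for u, roles in users.items() if roles & task_roles[t]] for t in tasks]
    for assignment in product(*cands):                 # enumerate assignments
        plan = dict(zip(tasks, assignment))
        if all(plan[a] != plan[b] for a, b in sod) and \
           all(plan[a] == plan[b] for a, b in bod):
            return plan                                # a feasible assignment
    return None                                        # constraints infeasible

print(feasible(users, task_roles, sod, bod))
```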
A speculative approach to spatial-temporal efficiency with multi-objective optimization in a heterogeneous cloud environment
A heterogeneous cloud system, for example, a Hadoop 2.6.0 platform, provides distributed but cohesive services with rich features for large-scale management, reliability, and error tolerance. As far as big data processing is concerned, newly built cloud clusters face the challenge of performance optimization, focusing on faster task execution and more efficient use of computing resources. Currently proposed approaches concentrate on temporal improvement, that is, shortening MapReduce time, but seldom focus on storage occupation; however, unbalanced cloud storage strategies could exhaust those nodes with heavy MapReduce cycles and further challenge the security and stability of the entire cluster. In this paper, an adaptive method is presented that aims at spatial–temporal efficiency in a heterogeneous cloud environment. A prediction model based on an optimized Kernel-based Extreme Learning Machine algorithm is proposed for faster forecasting of job execution duration and space occupation, which in turn facilitates task scheduling through a multi-objective algorithm called time- and space-optimized NSGA-II (TS-NSGA-II). Experimental results show that, compared with the original load-balancing scheme, our approach can save approximately 47–55 s on average per task execution. Meanwhile, the difference in hard-disk occupation among all scheduled reducers is only 1.254‰, a 26.6% improvement over the original scheme
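A minimal, hedged sketch of the two-objective trade-off behind TS-NSGA-II: candidate schedules are compared on predicted execution time and predicted storage occupation, and only Pareto-optimal candidates survive. The candidate schedules and objective values are invented, and the KELM predictor and the full NSGA-II machinery are omitted.

```python
# Sketch of Pareto selection over (time, space) objectives; values are invented.

def dominates(a, b):
    """a dominates b if it is no worse on both objectives and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return candidates not dominated by any other (time, space) pair."""
    return [c for c in candidates
            if not any(dominates(o["obj"], c["obj"]) for o in candidates if o is not c)]

# (predicted seconds, predicted GB) for hypothetical schedules
schedules = [
    {"name": "s1", "obj": (120.0, 8.0)},
    {"name": "s2", "obj": (150.0, 5.0)},
    {"name": "s3", "obj": (140.0, 9.0)},   # dominated by s1
    {"name": "s4", "obj": (110.0, 12.0)},
]

print([s["name"] for s in pareto_front(schedules)])   # -> ['s1', 's2', 's4']
```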
An efficient missing tag identification approach in RFID collisions
Radio frequency identification technology has been widely used to verify the presence of items in many applications such as warehouse management and supply chain logistics. In these applications, how to identify missing tags in a timely manner (namely tag searching or missing tag identification) is a key challenge. Existing missing tag identification solutions have not achieved their full potential because collision slots have not been well exploited. In this paper, we propose an approach named collision resolving based missing tag identification (CR-MTI) to break through the performance bottleneck of existing missing tag identification protocols. In CR-MTI, multiple tags are allowed to respond with different binary strings in a collision slot. The reader can then verify them together by using bit tracking technology and specially designed strings, thereby significantly improving time efficiency. CR-MTI also reduces the number of messages transmitted by the reader through customized coding. We further explore the optimal parameter settings to maximize the performance of the proposed CR-MTI. Extensive simulation results show that CR-MTI outperforms prior art in terms of time efficiency, total execution time and communication complexity
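How several tag replies can be resolved in one collision slot can be illustrated with a hedged Python sketch: each tag expected in the slot is assigned a distinct one-hot string, and Manchester-style bit tracking lets the reader see, per bit, '0', '1' or a collision, from which the set of responders (and hence the missing tags) follows. CR-MTI's actual string design and coding are more elaborate than this toy model.

```python
# Hedged sketch of the bit-tracking idea: one-hot reply strings plus per-bit
# collision detection reveal which expected tags actually replied in a slot.

def superpose(replies, length):
    """Combine simultaneous replies: '0'/'1' if unanimous, 'X' if mixed."""
    out = []
    for i in range(length):
        bits = {r[i] for r in replies}
        out.append(bits.pop() if len(bits) == 1 else "X")
    return "".join(out) if replies else None   # None models an empty slot

expected = ["tagA", "tagB", "tagC", "tagD"]            # tags mapped to this slot
one_hot = {t: "".join("1" if i == k else "0" for i in range(len(expected)))
           for k, t in enumerate(expected)}

present = ["tagA", "tagC"]                             # tagB, tagD went missing
observed = superpose([one_hot[t] for t in present], len(expected))

missing = [t for k, t in enumerate(expected)
           if observed is None or observed[k] == "0"]
print(observed, "-> missing:", missing)                # X0X0 -> missing: ['tagB', 'tagD']
```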
Identifying RFID Tags in Collisions
How to obtain information from massive numbers of tags is a key focus of RFID applications. The occurrence of collisions leads to problems such as reduced identification efficiency in RFID networks. To tackle such challenges, most tag collision arbitration protocols focus on scheduling tag identification with collision avoidance. However, how to effectively identify tags in collisions to improve identification efficiency has not been well explored. In this paper, we propose a group query allocation method to divide the string space into mutually disjoint subsets, each of which contains several strings. Each string can be viewed as a full or partial ID of a tag. When multiple strings from a subset are sent simultaneously, the reader can identify all of them in a single time slot. Based on the group query allocation method, a segment detection based characteristic group query tree (SD-CGQT) protocol is presented for fast tag identification, significantly reducing the number of collision slots and transmitted bits. Extensive experimental results verify the superiority of the proposed SD-CGQT over prior art in terms of system efficiency, total identification time, communication complexity and energy consumption
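A hedged sketch of the grouping idea: recursively split the ID string space by prefix until each disjoint subset holds at most K tags, so that the reader can resolve each subset in one query. The value of K, the splitting rule and the tag IDs below are illustrative, not SD-CGQT's exact design.

```python
# Sketch of prefix-based group query allocation over a toy 4-bit ID space.

K = 2                                   # tags resolvable per slot (assumed)

def partition(ids, prefix=""):
    """Return disjoint prefix groups, each containing at most K matching IDs."""
    group = [i for i in ids if i.startswith(prefix)]
    if len(group) <= K:
        return [(prefix, group)] if group else []
    return partition(group, prefix + "0") + partition(group, prefix + "1")

tag_ids = ["0001", "0010", "0111", "1010", "1011", "1100"]   # toy 4-bit IDs
for prefix, group in partition(tag_ids):
    print(f"query '{prefix}' -> {group}")
```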
An Adaptively Speculative Execution Strategy Based on Real-Time Resource Awareness in a Multi-Job Heterogeneous Environment
MapReduce (MRV1), a popular programming model proposed by Google, has been widely used to process large datasets in Hadoop, an open source cloud platform. Its new version, MapReduce 2.0 (MRV2), developed along with the emergence of YARN, achieves clear improvements over MRV1. However, MRV2 suffers from long finishing times on certain types of jobs. Speculative Execution (SE) has been presented as an approach to this problem, backing up delayed jobs from low-performance machines onto higher-performance ones. In this paper, an adaptive SE strategy (ASE) is presented for Hadoop 2.6.0. Experimental results show that ASE duplicates tasks according to real-time resource usage among worker nodes in a cloud. In addition, the ASE strategy considerably improves the performance of MRV2 in terms of job execution time and resource consumption, even in a multi-job environment
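A hedged Python sketch of resource-aware speculation: a slow task is backed up only when a duplicate on the least-loaded node is expected to finish earlier than the original copy. The progress model, load model and numbers are invented for illustration; the paper's ASE strategy in Hadoop 2.6.0 is more involved.

```python
# Sketch: decide whether (and where) to launch a backup copy of a straggler.

def remaining_time(progress, elapsed_s):
    """Naive estimate assuming a constant progress rate."""
    rate = max(progress / elapsed_s, 1e-9)
    return (1.0 - progress) / rate

def should_speculate(task, nodes, base_runtime_s):
    """Return the backup node name if a duplicate is expected to finish sooner."""
    est_remaining = remaining_time(task["progress"], task["elapsed_s"])
    node = min(nodes, key=lambda n: n["load"])          # least-loaded node
    est_backup = base_runtime_s * (1.0 + node["load"])  # load slows execution
    return node["name"] if est_backup < est_remaining else None

straggler = {"progress": 0.20, "elapsed_s": 300.0}      # 20% done after 5 min
cluster = [{"name": "n1", "load": 0.9},
           {"name": "n2", "load": 0.2},
           {"name": "n3", "load": 0.5}]

print(should_speculate(straggler, cluster, base_runtime_s=400.0))   # -> 'n2'
```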
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment
Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates the allocation, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which affects task allocation and distribution in MapReduce. In this paper, task execution data are collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task are analyzed and reported in depth. According to the results, the prediction accuracy of concurrent tasks' execution times can be improved, in particular for regular jobs
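A hedged sketch in the spirit of a two-phase regression estimate: fit separate linear models to early and late progress samples of a running task and extrapolate the later one to 100% progress. The split point, sample data and model form are assumptions for illustration, not the paper's exact TPR method.

```python
# Two-phase style extrapolation of a task's finishing time (toy data).
import numpy as np

# (progress fraction, elapsed seconds) samples from a running task (invented)
progress = np.array([0.05, 0.10, 0.20, 0.30, 0.45, 0.55, 0.65, 0.75])
elapsed  = np.array([10.0, 18.0, 35.0, 50.0, 90.0, 115.0, 140.0, 165.0])

SPLIT = 0.40                               # phase boundary (assumed)
late = progress >= SPLIT                   # use the later phase for extrapolation

slope, intercept = np.polyfit(progress[late], elapsed[late], 1)
predicted_total = slope * 1.0 + intercept  # elapsed time at 100% progress

print(f"predicted finishing time: {predicted_total:.1f} s")
```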