17,028 research outputs found

    Comparing Deep Recurrent Networks Based on the MAE Random Sampling, a First Approach

    Recurrent neural networks have proven to be good at tackling prediction problems; however, due to their high sensitivity to hyper-parameter configuration, finding an appropriate network is a tough task. Automatic hyper-parameter optimization methods have emerged to find the most suitable configuration for a given problem, but these methods are not generally adopted because of their high computational cost. Therefore, in this study we extend MAE random sampling, a low-cost method for comparing single-hidden-layer architectures, to multiple-hidden-layer ones. We validate our proposal empirically and show that it is possible to predict and compare the expected performance of a hyper-parameter configuration in a low-cost way. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. This research was partially funded by Ministerio de Economía, Industria y Competitividad, Gobierno de España, and European Regional Development Fund, grant numbers TIN2016-81766-REDT (http://cirti.es) and TIN2017-88213-R (http://6city.lcc.uma.es).
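    The core idea can be illustrated with a short sketch. The following Python code (a minimal sketch; the tanh RNN, the weight distribution and all sizes are illustrative assumptions, not the paper's exact setup) draws random, untrained weights for a multi-hidden-layer recurrent architecture, measures the MAE of its predictions, and repeats the process to estimate an MAE distribution that can be compared across candidate architectures.

```python
import numpy as np

def random_rnn_mae(x, y, hidden_sizes, rng):
    """One forward pass of a simple tanh RNN with randomly drawn (untrained)
    weights; returns the mean absolute error of its predictions on (x, y)."""
    in_dim = x.shape[1]
    weights, states = [], []
    for n in hidden_sizes:
        weights.append((rng.normal(size=(in_dim, n)), rng.normal(size=(n, n))))
        states.append(np.zeros(n))
        in_dim = n
    w_out = rng.normal(size=(in_dim, 1))
    preds = []
    for t in range(x.shape[0]):
        inp = x[t]
        for layer, (w_in, w_rec) in enumerate(weights):
            states[layer] = np.tanh(inp @ w_in + states[layer] @ w_rec)
            inp = states[layer]
        preds.append((inp @ w_out)[0])
    return float(np.mean(np.abs(np.array(preds) - y)))

def mae_random_sampling(x, y, hidden_sizes, n_samples=100, seed=0):
    """Estimate the MAE distribution of an architecture without training it."""
    rng = np.random.default_rng(seed)
    return np.array([random_rnn_mae(x, y, hidden_sizes, rng) for _ in range(n_samples)])

# Example: compare a 2-layer against a 3-layer candidate on toy data.
# t = np.linspace(0, 10, 200); x = np.sin(t).reshape(-1, 1); y = np.cos(t)
# print(mae_random_sampling(x, y, [8, 8]).mean(), mae_random_sampling(x, y, [8, 8, 8]).mean())
```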

    An incrementally scalable and cost-efficient interconnection structure for datacenters

    This is the author accepted manuscript. The final version is available from the publisher via the DOI in this record. The explosive growth in the volume of data storage and the complexity of data processing drive data center networks (DCNs) to become incrementally scalable and cost-efficient while maintaining high network capacity and fault tolerance. To address these challenges, this paper proposes a new structure, called Totoro, which is defined recursively and hierarchically: dual-port servers and commodity switches are used to make Totoro affordable; a group of servers is connected to an intra-switch to form a basic partition; to construct a higher-level structure, half of the backup ports of the servers in the lower-level structures are connected by inter-switches to incrementally build a larger partition. Totoro is incrementally scalable since expanding the structure does not require any rewiring or routing alteration. We further design a distributed and fault-tolerant routing protocol to handle multiple types of failures. Experimental results demonstrate that Totoro is able to satisfy the demands of fault tolerance and high throughput. Furthermore, architecture analysis indicates that Totoro balances performance and cost in terms of robustness, structural properties, bandwidth, economic costs and power consumption. This work is supported by the NSF of China under grants no. 61272073 and no. 61572232, and the NSF of Guangdong Province (no. S2013020012865).
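    A toy sketch of the recursive construction may help; the function below is an illustrative approximation only (the real Totoro structure uses specific port counts and several inter-switches per level, whereas this sketch wires half of the free backup ports to a single new inter-switch per merge). It shows why expansion needs no rewiring: higher levels only consume backup ports left unused by lower levels.

```python
import itertools

def build_totoro_like(levels, n, c):
    """Toy recursive construction of a Totoro-like topology: n dual-port
    servers share an intra-switch at level 0; at each higher level, c
    sub-partitions are merged and half of the still-free backup ports are
    wired to a new inter-switch.  Returns (servers, links)."""
    ids = itertools.count()

    def build(level):
        if level == 0:
            switch = f"intra-switch-{next(ids)}"
            servers = {f"server-{next(ids)}": True for _ in range(n)}  # True = backup port free
            return servers, [(s, switch) for s in servers]
        servers, links = {}, []
        for _ in range(c):                        # merge c lower-level partitions
            sub_servers, sub_links = build(level - 1)
            servers.update(sub_servers)
            links.extend(sub_links)
        free = [s for s, is_free in servers.items() if is_free]
        switch = f"inter-switch-{next(ids)}"
        for s in free[: len(free) // 2]:          # connect half of the free backup ports
            links.append((s, switch))
            servers[s] = False
        return servers, links

    return build(levels)

# Example: a two-level toy instance with 4 servers per intra-switch.
# servers, links = build_totoro_like(levels=2, n=4, c=4)
```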

    Learning-based Network Path Planning for Traffic Engineering

    This is the author accepted manuscript. The final version is available from Elsevier via the DOI in this record. Recent advances in traffic engineering offer a series of techniques to address network problems caused by the explosive growth of Internet traffic. In traffic engineering, dynamic path planning is essential for prevalent applications, e.g., load balancing, traffic monitoring and firewalls. Application-specific methods can indeed improve network performance but can hardly be extended to general scenarios. Meanwhile, the massive data generated in the current Internet has not been fully exploited, although it may convey much valuable knowledge and information to facilitate traffic engineering. In this paper, we propose a learning-based network path planning method under forwarding constraints for finer-grained and effective traffic engineering. We formulate path planning as the problem of inferring a sequence of nodes in a network path and adapt a sequence-to-sequence model to learn implicit forwarding paths from empirical network traffic data. To boost the model performance, an attention mechanism and beam search are adopted to capture the essential sequential features of the nodes in a path and to guarantee path connectivity. To validate the effectiveness of the derived model, we implement it in the Mininet emulator environment and use traffic data generated on both a real-world GEANT network topology and a grid network topology to train and evaluate the model. Experimental results exhibit a high testing accuracy and imply the superiority of our proposal. This work is partially supported by the UK EPSRC project (Grant No.: EP/R030863/1).
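    The connectivity guarantee during decoding can be sketched as a constrained beam search. In the code below, next_node_probs is a hypothetical stand-in for the trained sequence-to-sequence model with attention (it returns a dict of next-hop probabilities for a partial path); only neighbors of the last node are expanded, so every candidate path stays connected.

```python
import math

def beam_search_path(src, dst, adjacency, next_node_probs, beam_width=3, max_len=20):
    """Return the highest-scoring path from src to dst.
    adjacency[node] is the set of neighbors; next_node_probs(path) is assumed
    to return {candidate_next_node: probability} for the partial path."""
    beams = [([src], 0.0)]            # (path, accumulated log-probability)
    completed = []
    for _ in range(max_len):
        candidates = []
        for path, score in beams:
            probs = next_node_probs(path)          # model output for the next hop
            for nxt in adjacency[path[-1]]:        # connectivity constraint
                if nxt in path:
                    continue                        # avoid loops
                new_path = path + [nxt]
                new_score = score + math.log(probs.get(nxt, 1e-12))
                if nxt == dst:
                    completed.append((new_path, new_score))
                else:
                    candidates.append((new_path, new_score))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
        if not beams:
            break
    return max(completed, key=lambda b: b[1])[0] if completed else None
```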

    A Framework of Fog Computing: Architecture, Challenges and Optimization

    This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. Fog Computing (FC) is an emerging distributed computing platform aimed at bringing computation close to its data sources, which can reduce the latency and cost of delivering data to a remote cloud. This feature and related advantages are desirable for many Internet-of-Things applications, especially latency-sensitive and mission-intensive services. The definition and architecture of FC are presented in this article, with comparisons to other computing technologies. The framework of resource allocation for latency reduction combined with reliability, fault tolerance, privacy, and the underlying optimization problems are also discussed. We then investigate an application scenario and conduct resource optimization by formulating the optimization problem and solving it with a Genetic Algorithm. The resulting analysis generates some important insights into the scalability of FC systems. This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/P020224/1] and the EU FP7 QUICK project under Grant Agreement No. PIRSES-GA-2013-612652. Yang Liu was supported by the Chinese Research Council.
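    As a rough illustration of the Genetic Algorithm step, the sketch below evolves an assignment of tasks to fog nodes; the latency/capacity fitness model and every parameter are hypothetical and only stand in for the optimization problem formulated in the article.

```python
import random

def ga_fog_allocation(n_tasks, n_nodes, latency, capacity, demand,
                      pop_size=50, generations=200, mut_rate=0.1, seed=0):
    """latency[t][n]: latency of task t on node n; capacity[n]: node capacity;
    demand[t]: resource demand of task t.  Returns the best assignment found."""
    rng = random.Random(seed)

    def fitness(assign):
        # Total latency plus a penalty for exceeding any node's capacity.
        load = [0.0] * n_nodes
        total = 0.0
        for task, node in enumerate(assign):
            load[node] += demand[task]
            total += latency[task][node]
        penalty = sum(max(0.0, load[n] - capacity[n]) for n in range(n_nodes))
        return total + 1000.0 * penalty

    pop = [[rng.randrange(n_nodes) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_tasks)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_rate:             # mutation: reassign one task
                child[rng.randrange(n_tasks)] = rng.randrange(n_nodes)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```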

    Resisting skew-accumulation for time-stepped applications in the cloud via exploiting parallelism

    This is the author accepted manuscript. The final version is available from the publisher via the DOI in this record. Time-stepped applications are pervasive in the scientific computing domain but perform poorly in the cloud because they execute in discrete time steps, or ticks, and use logical synchronization barriers at tick boundaries to ensure correctness. As a result, the computational and communication skew that accumulates unresolved in each tick can slow down time-stepped applications significantly. However, existing solutions focus only on the skew within each tick and thus cannot resist the accumulation of skew. To fill this gap, this paper proposes an efficient approach to resisting the accumulation of skew by fully exploiting parallelism among ticks. The approach allows the user to decompose much of the computational part (also called the asynchronous part) of the processing for an object into several asynchronous sub-processes, each dependent on a single data object. A sub-process from a later tick can then proceed in advance, using idle time whenever the needed data object is available, redressing the negative effects of accumulated unresolved computational and communication skew. To efficiently support this approach, a data-centric programming model and a runtime system named AsyTick, coupled with an ad hoc scheduler, are developed. Experimental results show that the proposed approach can improve the performance of time-stepped applications by up to 2.53 times over a state-of-the-art computational-skew-resistant approach. This paper is supported by the China National Natural Science Foundation under grants No. 61272408 and 61322210, the National High-tech Research and Development Program of China (863 Program) under grant No. 2012AA010905, the CCCPC Youth Talent Plan, and the Doctoral Fund of the Ministry of Education of China under grant No. 20130142110048.
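    A greatly simplified skeleton of the per-object decomposition is sketched below; all names are hypothetical, and the real AsyTick runtime uses a data-centric scheduler rather than a thread pool. The point it tries to convey is that the heavy asynchronous sub-process of the next tick is submitted as soon as the single data object it depends on is available, while only the light synchronous part waits at the tick barrier.

```python
from concurrent.futures import ThreadPoolExecutor

def run_overlapped(objects, n_ticks, async_part, sync_part, workers=4):
    """objects: {name: value}.  async_part(value, tick) is the heavy per-object
    work that depends only on that object; sync_part({name: value}) is the
    light part that genuinely needs the tick barrier."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        state = dict(objects)
        # Launch the first tick's asynchronous sub-processes eagerly.
        pending = {name: pool.submit(async_part, v, 0) for name, v in state.items()}
        for tick in range(n_ticks):
            done = {}
            for name, fut in pending.items():
                done[name] = fut.result()      # may have finished long ago (overlap)
                if tick + 1 < n_ticks:
                    # The next tick's heavy work needs only this object's own
                    # result, so it is submitted before the barrier completes.
                    pending[name] = pool.submit(async_part, done[name], tick + 1)
            # Barrier: the synchronous part consumes every object's result.
            state = sync_part(done)
        return state
```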

    Quantum bound states for a derivative nonlinear Schrodinger model and number theory

    A derivative nonlinear Schrodinger model is shown to support localized N-body bound states for several ranges (called bands) of the coupling constant eta. The ranges of eta within each band can be completely determined using number-theoretic concepts such as Farey sequences and continued fractions. For N > 2, the N-body bound states can have both positive and negative momentum. For eta > 0, bound states with positive momentum have positive binding energy, while states with negative momentum have negative binding energy. Comment: Revtex, 7 pages including 2 figures, to appear in Mod. Phys. Lett.
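    Since the bands are characterized via Farey sequences, a small standalone helper that enumerates the Farey sequence F_N may be useful context; the abstract gives no formulas, so the mapping from F_N to the ranges of eta is deliberately not coded here.

```python
from fractions import Fraction

def farey_sequence(n):
    """Return the Farey sequence of order n as a list of Fractions in [0, 1]."""
    a, b, c, d = 0, 1, 1, n
    seq = [Fraction(a, b)]
    while c <= n:
        k = (n + b) // d              # standard next-term recurrence
        a, b, c, d = c, d, k * c - a, k * d - b
        seq.append(Fraction(a, b))
    return seq

# Example: farey_sequence(5) -> 0, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1
```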

    Analysis of Power-aware Buffering Schemes in Wireless Sensor Networks

    We study the power-aware buffering problem in battery-powered sensor networks, focusing on the fixed-size and fixed-interval buffering schemes. The main motivation is to address the as-yet poorly understood effect of data-size variation on power-aware buffering schemes. Our theoretical analysis elucidates the fundamental differences between the fixed-size and fixed-interval buffering schemes in the presence of data-size variation. It shows that data-size variation has detrimental effects on the power expenditure of fixed-size buffering in general, and reveals that these effects can be either mitigated by positive skewness or aggravated by negative skewness in the size distribution. By contrast, the fixed-interval buffering scheme has the obvious advantage of being immune to data-size variation. Hence, the fixed-interval buffering scheme is a risk-averse strategy whose robustness holds across a variety of operational environments. In addition, based on the fixed-interval buffering scheme, we establish the power-consumption relationship between child nodes and the parent node in a static data collection tree, and give an in-depth analysis of the impact of the child bandwidth distribution on the parent's power consumption. This study is of practical significance: it sheds new light on the relationships among the power consumption of buffering schemes, the power parameters of the radio module and memory bank, the data arrival rate, and data-size variation, thereby providing well-informed guidance for determining an optimal buffer size (or interval) to maximize the operational lifespan of sensor networks.
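    The contrast between the two schemes can be illustrated with a toy Monte-Carlo experiment; the energy model below (a per-byte radio cost plus a fixed wake-up cost per transmission) and all parameter values are illustrative assumptions, not the paper's analytical model.

```python
import numpy as np

def per_item_energy(sizes, e_byte=1.0, e_wakeup=200.0, buf_bytes=1024, interval=50):
    """Return (fixed-size scheme, fixed-interval scheme) energy per data item."""
    # Fixed-size: transmit whenever the next item would overflow the buffer,
    # so the number of radio wake-ups depends on the size distribution.
    level, tx_fixed = 0.0, 0
    for s in sizes:
        if level + s > buf_bytes:
            tx_fixed += 1
            level = 0.0
        level += s
    # Fixed-interval: transmit every `interval` items regardless of their sizes.
    tx_interval = len(sizes) // interval
    total_bytes = float(np.sum(sizes))
    e_fixed = (e_byte * total_bytes + e_wakeup * tx_fixed) / len(sizes)
    e_interval = (e_byte * total_bytes + e_wakeup * tx_interval) / len(sizes)
    return e_fixed, e_interval

rng = np.random.default_rng(0)
skewed_sizes = rng.lognormal(mean=3.0, sigma=0.8, size=100_000)   # positively skewed sizes
print(per_item_energy(skewed_sizes))
```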

    Identity-based remote data integrity checking with perfect data privacy preserving for cloud storage

    This is the author accepted manuscript. The final version is available from the publisher via the DOI in this record. Remote data integrity checking (RDIC) enables a data storage server, such as a cloud server, to prove to a verifier that it is actually storing a data owner's data honestly. To date, a number of RDIC protocols have been proposed in the literature, but almost all of these constructions suffer from complex key management: they rely on the expensive public key infrastructure (PKI), which might hinder the deployment of RDIC in practice. In this paper, we propose a new construction of an identity-based (ID-based) RDIC protocol that uses a key-homomorphic cryptographic primitive to reduce the system complexity and the cost of establishing and managing the public key authentication framework required by PKI-based RDIC schemes. We formalize ID-based RDIC and its security model, including security against a malicious cloud server and zero-knowledge privacy against a third-party verifier. We then provide a concrete construction of an ID-based RDIC scheme that leaks no information about the stored files to the verifier during the RDIC process. The new construction is proven secure against the malicious server in the generic group model and achieves zero-knowledge privacy against the verifier. Extensive security analysis and implementation results demonstrate that the proposed protocol is provably secure and practical for real-world applications. This work is supported by the National Natural Science Foundation of China (61501333, 61300213, 61272436, 61472083), the Fok Ying Tung Education Foundation (141065), and the Program for New Century Excellent Talents in Fujian University (JA1406).
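    For intuition only, the toy sketch below shows how homomorphic tags let a server aggregate a constant-size proof over challenged blocks. It is a symmetric-key toy over a prime field, not the paper's ID-based, pairing-based construction, and it offers no real security.

```python
import hashlib
import secrets

P = 2**127 - 1                     # a prime modulus for the toy field

def prf(key, i):
    """Toy pseudorandom function used to mask each block's tag."""
    return int.from_bytes(hashlib.sha256(key + i.to_bytes(8, "big")).digest(), "big") % P

def tag_blocks(blocks, key, alpha):
    """Owner side: one homomorphic tag per file block."""
    return [(alpha * m + prf(key, i)) % P for i, m in enumerate(blocks)]

def prove(blocks, tags, challenge):
    """Server side: challenge is a list of (index, coefficient) pairs."""
    mu = sum(v * blocks[i] for i, v in challenge) % P
    sigma = sum(v * tags[i] for i, v in challenge) % P
    return mu, sigma

def verify(mu, sigma, challenge, key, alpha):
    """Verifier side: recompute the masked part and check the linear relation."""
    expected = (alpha * mu + sum(v * prf(key, i) for i, v in challenge)) % P
    return expected == sigma

# Usage: the owner tags the file once; the verifier later issues random challenges.
blocks = [int.from_bytes(secrets.token_bytes(8), "big") for _ in range(100)]
key, alpha = secrets.token_bytes(16), secrets.randbelow(P)
tags = tag_blocks(blocks, key, alpha)
challenge = [(secrets.randbelow(len(blocks)), secrets.randbelow(P)) for _ in range(10)]
print(verify(*prove(blocks, tags, challenge), challenge, key, alpha))   # True
```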

    Network Function Virtualization in Dynamic Networks: A Stochastic Perspective

    This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. As a key enabling technology for 5G network softwarization, Network Function Virtualization (NFV) provides an efficient paradigm to optimize network resource utility for the benefit of both network providers and users. However, the inherent network dynamics and uncertainties arising from 5G infrastructure, resources and applications are slowing down the further adoption of NFV in many emerging networking applications. Motivated by this, we investigate the network utility degradation that occurs when implementing NFV in dynamic networks, and design a proactive NFV solution from a fully stochastic perspective. Unlike existing deterministic NFV solutions, which assume given network capacities and/or static service quality demands, this paper explicitly integrates knowledge of influential network variations into a two-stage stochastic resource utilization model. By exploiting the hierarchical decision structure of this problem, a distributed computing framework with two-level decomposition is designed to facilitate a distributed implementation of the proposed model in large-scale networks. The experimental results demonstrate that the proposed solution not only improves network performance by 3 to 5 times, but also effectively reduces the risk of service quality violation. The work of Xiangle Cheng is partially supported by the China Scholarship Council for the study at the University of Exeter. This work is also partially supported by the UK EPSRC project (Grant No.: EP/R030863/1).
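    The two-stage structure can be conveyed with a minimal sample-average-approximation sketch; the single-resource model, the cost coefficients and the demand distribution below are hypothetical, standing in for the full NFV resource utilization model of the paper.

```python
import numpy as np

def sample_average_reservation(demand_samples, c_reserve=1.0, c_recourse=3.0):
    """Two-stage toy: the first stage reserves capacity before demand is known;
    the second stage buys expensive on-demand capacity for any shortfall.
    Pick the reservation minimising reserved cost plus sample-average recourse."""
    candidates = np.linspace(0.0, demand_samples.max(), 200)
    best, best_cost = 0.0, float("inf")
    for r in candidates:
        shortfall = np.maximum(demand_samples - r, 0.0)            # per-scenario recourse
        cost = c_reserve * r + c_recourse * shortfall.mean()        # SAA objective
        if cost < best_cost:
            best, best_cost = r, cost
    return best, best_cost

rng = np.random.default_rng(1)
demands = rng.gamma(shape=4.0, scale=2.5, size=10_000)              # uncertain demand
print(sample_average_reservation(demands))
```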

    Statistical Features-Based Real-Time Detection of Drifted Twitter Spam

    This is the author accepted manuscript. The final version is available from the publisher via the DOI in this record. Twitter spam has become a critical problem. Recent works focus on applying machine learning techniques for Twitter spam detection, making use of the statistical features of tweets. In our labeled tweet data set, however, we observe that the statistical properties of spam tweets vary over time, and thus the performance of existing machine-learning-based classifiers decreases. This issue is referred to as “Twitter Spam Drift”. To tackle this problem, we first carry out a deep analysis of the statistical features of one million spam tweets and one million non-spam tweets, and then propose a novel Lfun scheme. The proposed scheme can discover “changed” spam tweets from unlabeled tweets and incorporate them into the classifier’s training process. A number of experiments are performed to evaluate the proposed scheme. The results show that our proposed Lfun scheme can significantly improve the spam detection accuracy in real-world scenarios. This work was supported by the ARC Linkage Project under Grant LP120200266. The work of J. Zhang was supported by the National Natural Science Foundation of China under Grant 61401371.
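    The flavor of learning from unlabeled tweets can be sketched as self-training under drift: the snippet below retrains on unlabeled tweets that the current model classifies as spam with high confidence. It mirrors the spirit of the Lfun scheme but is not its exact procedure; the feature matrices and the random-forest classifier are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def adapt_to_drift(x_train, y_train, x_unlabelled, threshold=0.9, rounds=3):
    """x_*: numpy feature matrices; y_train: 0 = non-spam, 1 = spam."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    for _ in range(rounds):
        model.fit(x_train, y_train)
        spam_prob = model.predict_proba(x_unlabelled)[:, 1]   # assumes class 1 = spam
        picked = spam_prob >= threshold                        # confidently "changed" spam
        if not picked.any():
            break
        # Fold the newly discovered spam tweets into the training set and retrain.
        x_train = np.vstack([x_train, x_unlabelled[picked]])
        y_train = np.concatenate([y_train, np.ones(picked.sum(), dtype=int)])
        x_unlabelled = x_unlabelled[~picked]
    return model
```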