6 research outputs found

    Research on the Dynamic Adjustment Problem of Application Layer Multicast

    No full text
    By analyzing the factors that affect application layer multicast performance and how changes in those factors impact it, this paper points out that as members join and leave, the network fluctuates, and node processing capacity changes during a multicast session, both the structure and the performance of the multicast change. The multicast tree constructed at the start of the session cannot adapt to these changes, which degrades multicast performance. To address this problem, the paper proposes adjusting the structure of the multicast tree during the session in response to these changes, and presents a dynamic adjustment algorithm that improves multicast performance. Experimental results show that the dynamic adjustment algorithm effectively reflects the multicast structure, network conditions, and node status, and improves multicast performance
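One dynamic-adjustment step of the kind the abstract describes can be sketched as a node reattaching to a lower-delay parent with spare capacity. The `Node` class, the delay model, and `try_reattach` are illustrative assumptions, not the paper's actual algorithm; a real implementation would also have to exclude the moving node's own descendants from the candidate set to avoid cycles.

```python
class Node:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # max children this node can serve
        self.parent = None
        self.children = []
        self.delay_to = {}         # measured link delay to other nodes

def path_delay(node):
    """Total delay from the root down to `node`."""
    d = 0.0
    while node.parent is not None:
        d += node.parent.delay_to[node.name]
        node = node.parent
    return d

def try_reattach(node, candidates):
    """Move `node` under the candidate that minimizes its path delay.

    Candidates must not include descendants of `node` (cycle risk).
    """
    best = min(
        (c for c in candidates
         if c is not node and len(c.children) < c.capacity),
        key=lambda c: path_delay(c) + c.delay_to[node.name],
    )
    if path_delay(best) + best.delay_to[node.name] < path_delay(node):
        node.parent.children.remove(node)
        best.children.append(node)
        node.parent = best
        return True
    return False
```

Running such a step periodically lets the tree track member churn and network fluctuation instead of keeping its initial shape.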

    Increment-Based Data Transmission Technique for Cloud Storage Service

    No full text
    With the rapid development of cloud computing technology, more and more users choose cloud storage services to store personal files. Cloud storage sharing and collaboration lets users share files in the cloud and lets other users read and write those files through clients on various smart terminals. This creates a demand for large-scale sharing of historical file versions, which severely stresses the concurrent I/O performance of cloud storage systems. Targeting the characteristics of sharing scenarios in cloud storage, this paper exploits the relationships between historical versions of a file and adopts an increment-based transmission technique to improve the transfer performance of the cloud storage system. On this basis, it optimizes the memory footprint and disk I/O of the weak and strong checksum phases of the algorithm and uses historical version data to optimize the synchronization flow, effectively reducing the amount of data transferred while improving storage performance. The technique is well suited to extreme scenarios such as limited bandwidth, unstable networks, and large-scale shared synchronization
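The weak-checksum phase the abstract refers to is, in rsync-style increment algorithms, a rolling checksum: sliding the window by one byte updates the checksum in O(1) instead of rescanning the block. A minimal sketch of that standard scheme (not the paper's specific optimizations):

```python
def weak_checksum(block):
    # rsync-style weak checksum: a = byte sum, b = position-weighted sum
    a = sum(block) & 0xFFFF
    b = sum((len(block) - i) * x for i, x in enumerate(block)) & 0xFFFF
    return (b << 16) | a

def roll(checksum, old_byte, new_byte, block_len):
    # slide the window one byte to the right: O(1) incremental update
    a = checksum & 0xFFFF
    b = (checksum >> 16) & 0xFFFF
    a = (a - old_byte + new_byte) & 0xFFFF
    b = (b - block_len * old_byte + a) & 0xFFFF
    return (b << 16) | a
```

Blocks whose weak checksum matches are then confirmed with a strong (cryptographic) hash, so only non-matching data needs to be transferred.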

    A Highly Reliable Multicast Tree Recovery Method

    No full text
    When a non-leaf node in an application layer multicast (ALM) tree fails, the multicast tree must be reconstructed so that all descendants of the failed node can continue to receive data correctly. Considering the requirement of guaranteed recovery integrity in highly reliable environments, this paper proposes a proactive multicast tree recovery method based on standby parent nodes: every non-root node is assigned a standby parent in advance, so that when a non-leaf node fails the multicast tree can be recovered quickly. The paper first builds a model and solves it to construct the recovery method, then proves that the method guarantees the integrity of multicast tree recovery, and finally validates through simulation experiments the effectiveness of the method and its improvements in recovery delay and management cost
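The core of the standby-parent idea can be sketched in a few lines: each non-root node carries a precomputed standby parent, so recovery is a local reattachment rather than a global rebuild. The dictionary-based tree model below is a hypothetical simplification; the paper's method additionally guarantees recovery integrity via its model, which this sketch does not capture.

```python
def recover(tree, failed):
    """Reattach the failed node's children to their standby parents.

    tree: {node: {"parent": p, "standby": s, "children": set()}}
    """
    for child in list(tree[failed]["children"]):
        standby = tree[child]["standby"]
        tree[child]["parent"] = standby
        tree[standby]["children"].add(child)
    # detach the failed node from its own parent and drop it
    parent = tree[failed]["parent"]
    if parent is not None:
        tree[parent]["children"].discard(failed)
    del tree[failed]
```

Because the standby parent is chosen before any failure occurs, recovery delay is bounded by the reattachment itself rather than by a search for a new parent.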

    An Automatic Development and Integration Approach for Big Data Analysis Modules

    No full text
    With the arrival of the big data era, data analysis needs are becoming increasingly diverse. The built-in algorithm libraries of big data analysis tools can no longer satisfy customized analysis requirements, so new algorithms urgently need to be developed or integrated. However, developing and integrating algorithms for existing big data analysis tools carries a high learning cost, which makes adding new algorithms difficult. This paper proposes an approach for automatically developing and integrating algorithms for big data analysis tools, in which algorithms are integrated into the tools as modules. The approach first defines the module model, then presents the automatic generation flow for the model, and finally focuses on automatic code generation and code checking for modules, proposing a metadata-based code generation scheme and a static code checking method based on Soot control-flow analysis. Experiments show that the approach can accomplish automatic development and integration of big data analysis modules
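Metadata-based code generation of the kind the abstract mentions can be illustrated by filling a module's meta-information into a wrapper template. The template, the field names (`name`, `inputs`, `entry`), and the `KMeans` example are invented for this sketch and are not the paper's actual component model.

```python
# Template for an auto-generated module wrapper (hypothetical shape).
TEMPLATE = '''\
class {name}Module:
    """Auto-generated wrapper for the {name} algorithm."""
    inputs = {inputs!r}

    def run(self, data):
        return {entry}(data)
'''

def generate_module(meta):
    """Render integration-wrapper source code from module metadata."""
    return TEMPLATE.format(name=meta["name"],
                           inputs=meta["inputs"],
                           entry=meta["entry"])
```

The paper's second step, static checking of the generated code, would then operate on output like this; the sketch only verifies that the generated source is syntactically valid.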

    Operation log based synchronization algorithm for cloud storage service with multiple clients

    No full text
    Traditional state-based data synchronization algorithms transfer large amounts of data and must restart from scratch on every run, so they cannot meet practical application needs. This paper proposes an operation-log-based data synchronization algorithm for cloud storage. The algorithm records operation logs at both the server and the client, generates a synchronization operation sequence by comparing and merging these logs, and replays the sequence to achieve efficient data synchronization. Compared with the traditional approach, it transfers less data, is fast and efficient, places a lighter load on the cloud server, and supports bidirectional and incremental synchronization. The algorithm also supports failure recovery during synchronization, making it suitable for extreme scenarios such as limited bandwidth and unstable networks
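The log-comparison step can be sketched as follows, under assumed data shapes: each log entry is a `(seq, op, path)` tuple, and a trivial last-writer-wins rule stands in for whatever conflict handling the paper actually uses.

```python
def ops_to_replay(local_log, remote_log, last_synced_seq):
    """Return the remote operations the local side must replay.

    Operations on paths the local side has itself modified since the
    last sync point are skipped (naive conflict rule for this sketch).
    """
    locally_touched = {path for seq, _, path in local_log
                       if seq > last_synced_seq}
    return [(seq, op, path) for seq, op, path in remote_log
            if seq > last_synced_seq and path not in locally_touched]
```

Because only operations after `last_synced_seq` are considered, an interrupted sync can resume from the last recorded sequence number instead of restarting from scratch, which is where the incremental and failure-recovery properties come from.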

    A Heterogeneous Forms Exchange Platform for Logistics Enterprises

    No full text
    There are numerous requirements for exchanging electronic forms, such as orders and invoices, among enterprises in a logistics chain. These forms are mainly carried as Excel spreadsheets, TXT files, and XML files. Because the interacting enterprises use different operating systems, different application systems, and consequently different data formats, the exchange of these heterogeneous forms has become a bottleneck in logistics enterprise data exchange. This paper studies a scalable architecture for a heterogeneous forms exchange platform and techniques for automatic recognition of complex forms, and designs and implements a third-party heterogeneous forms exchange platform for logistics enterprises. (Supported by a National Natural Science Foundation of China project and a Xinjiang University SRT key project.)
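One small piece of such a platform, sniffing a form's carrier format and normalizing it into a canonical field mapping, might look like the sketch below. Only XML and a `key=value` TXT carrier are handled; real Excel parsing needs an external library, and the canonical dict shape is an assumption, not the paper's design.

```python
import xml.etree.ElementTree as ET

def normalize_form(raw: str) -> dict:
    """Parse an XML or delimited-TXT form into {field: value}."""
    text = raw.lstrip()
    if text.startswith("<"):            # XML carrier
        root = ET.fromstring(text)
        return {child.tag: (child.text or "").strip() for child in root}
    fields = {}                         # key=value TXT carrier
    for line in text.splitlines():
        key, _, value = line.partition("=")
        fields[key.strip()] = value.strip()
    return fields
```

Once every carrier is reduced to the same canonical mapping, the exchange platform can route and transform forms without caring which operating system or application system produced them.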