8 research outputs found

    Comparison of hashing methods for supporting consistency in distributed databases

    Distributed systems make services available regardless of users’ geolocation while sustaining high performance. Highly loaded systems require particularly meticulous optimization at all levels; such systems serve government, military and financial institutions, social networks, the IoT and more. Data technologies keep improving and give developers more opportunities to enhance existing protocols. The growing intensity of information exchange makes it difficult to maintain consistency in distributed systems. This, in turn, may degrade the quality of end-user services or impose restrictions that increase latency. As both scenarios are undesirable, parallel service processes must be improved to maintain consistency. One undesirable factor is the possibility of collisions during the synchronization of distributed database nodes. Hashing is applied to check for changes quickly. This method is simple to implement, but its collision resistance is unstable, especially across widely varying input data sets. Conflicts may not be isolated incidents, because data are updated rapidly and their volume changes, which produces a large number of combinations. In any case, storing outdated data can create conflicts at the user level. How critical this is depends on the industry in which the system operates and on how important data consistency is there. For example, domain DNS updates are performed once a day, which in most cases is sufficient; for military systems, however, this is unacceptable, as acting on obsolete information can lead to loss of life and financial loss.
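    The change check described above can be sketched as follows. This is a minimal illustration, not the paper's method: the helper name `dataset_fingerprint` and the choice of SHA-256 over canonically serialized records are assumptions made here.

```python
import hashlib
import json

def dataset_fingerprint(records):
    """Digest over a node's records, independent of record order.

    Hypothetical helper for illustration; the hashing methods the
    paper compares may differ.
    """
    digest = hashlib.sha256()
    # Serialize records in a canonical order so equal datasets
    # always produce equal fingerprints.
    for record in sorted(json.dumps(r, sort_keys=True) for r in records):
        digest.update(record.encode("utf-8"))
    return digest.hexdigest()

node_a = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]
node_b = [{"id": 2, "name": "beta"}, {"id": 1, "name": "alpha"}]
node_c = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "BETA"}]

# Equal content (in any order) gives equal fingerprints; any change
# flips the digest, so nodes only exchange data when digests differ.
print(dataset_fingerprint(node_a) == dataset_fingerprint(node_b))  # True
print(dataset_fingerprint(node_a) == dataset_fingerprint(node_c))  # False
```

    Comparing one short digest per node is much cheaper than shipping full datasets between nodes, which is why hashing is attractive for this check despite the collision risk the abstract raises.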

    A collision-resistant hashing algorithm for maintaining consistency in distributed nosql databases

    A distributed database is a combination of copies of databases of one or several types connected over computer networks. Management of such systems is transparent to end users, which cannot be said of emergency situations and changes in the number of nodes. The globally defined properties are consistency, availability, and partition tolerance. They arise from the need for horizontal scaling, which requires copies of the stored data; this is driven not only by performance but also by availability. These two properties are diametrically opposed: technologies and methods that improve one automatically worsen the other. In addition, any existing information system uses a large set of algorithms, each needed to solve some problem, and the problems are quite diverse: sorting, structuring and searching data, obtaining a unique digital fingerprint of a data set. The possible applications are not limited to one direction and only encourage researchers to seek new ones. This includes hashing algorithms, which are widely used in databases and in checking the integrity of files and network packets. Hashing has a wide range of uses and is not limited to integrity checks: it can also serve as an alternative to balanced trees for indexing, by building hash tables [1]. Despite this great diversity, new problems keep arising. With the development of data transmission and storage technologies, there is a necessity to improve consistency support in distributed NoSQL databases. Existing hashing algorithms are deterministic and based on bitwise operations, which makes it impossible to predict collisions. Thus, the main goal is to develop an algorithm that improves collision resistance as the size of the input data changes and that allows the possible number of collisions to be estimated.
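    The goal of estimating the possible number of collisions can be illustrated with the classic birthday bound. This sketch is not the paper's algorithm: the truncated-SHA-256 toy hash and the function names are assumptions chosen to make collisions observable in a tiny digest space.

```python
import hashlib
import math

def collision_probability(n_items, hash_bits):
    """Birthday-bound estimate of the chance of at least one collision
    among n_items uniformly distributed hash_bits-bit digests."""
    space = 2.0 ** hash_bits
    return 1.0 - math.exp(-n_items * (n_items - 1) / (2.0 * space))

def tiny_hash(data, bits=8):
    """SHA-256 truncated to a few bits -- deliberately weak, so that
    collisions actually occur in a small experiment."""
    full = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return full % (1 << bits)

# An 8-bit digest has only 256 possible values, so 300 distinct keys
# are guaranteed to collide (pigeonhole); the estimate predicts this.
print(collision_probability(50, 8) > 0.9)  # True: likely at just 50 keys

collisions = 0
seen = set()
for i in range(300):
    h = tiny_hash(str(i).encode())
    if h in seen:
        collisions += 1
    seen.add(h)
print(collisions > 0)  # True: at least 44 collisions are guaranteed
```

    The same estimate scales to realistic digest sizes: it is the analytic handle the abstract asks for, turning "collisions are unpredictable" into a quantified expectation for a given input volume.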

    Development Of Mobile Robot Control Algorithm Based On The Fuzzification Of The Local Terrain Map

    Based on a review and analysis of the literature on fuzzy-logic control of small, lightweight mobile robots, an algorithm was developed that fuzzifies the local terrain map and determines the danger of movement. The resulting algorithm can be used in the development of control systems for wheeled robots for various purposes, in particular robots designed to operate in confined spaces.
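    Fuzzification of a map cell can be sketched with standard triangular membership functions. The set names ("dangerous", "caution", "safe") and the distance boundaries below are illustrative assumptions, not the paper's actual rule base.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify_distance(d):
    """Map an obstacle distance (metres) in a local-map cell to degrees
    of membership in fuzzy danger sets. Boundaries are assumed values
    for illustration only."""
    return {
        "dangerous": triangular(d, -0.5, 0.0, 1.0),
        "caution":   triangular(d, 0.5, 1.5, 2.5),
        "safe":      triangular(d, 2.0, 3.0, 4.0),
    }

print(fuzzify_distance(0.0))  # fully "dangerous"
print(fuzzify_distance(3.0))  # fully "safe"
```

    A controller would apply such fuzzification to every cell of the local map and then combine the memberships through fuzzy rules to pick a steering action.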

    Methods of optimization of distributed databases

    The great variety of existing databases is emphasized and a brief description of them is given. In analyzing distributed database management systems, it is stressed that they consist of a single logical database divided into a number of fragments. Each fragment is stored on one or more computers (nodes, sites) interconnected by a communication network, each operating under the control of a separate database management system. The advantages, disadvantages and requirements of a distributed database are analyzed. The most effective ways to optimize the structure of distributed databases are listed, together with approaches to assessing their effectiveness: performance, scalability (extensibility), reliability, data protection, availability, ease of application development, and the level of interaction with the user. A fragment of the T-SQL grammar is presented, along with general recommendations for writing queries that are convenient for the optimizer and efficient to execute.
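    The fragmentation idea above — one logical table split into fragments stored on different nodes — can be sketched as follows. This is a minimal illustration under assumed names; production systems typically use range partitioning or consistent hashing rather than this simple modulo scheme.

```python
import hashlib

def node_for_key(key, n_nodes):
    """Place a row on a node by hashing its key -- a simple form of
    horizontal fragmentation. (Illustrative assumption, not a scheme
    taken from the paper.)"""
    digest = hashlib.md5(str(key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_nodes

rows = [{"id": i, "value": f"row-{i}"} for i in range(10)]
fragments = {n: [] for n in range(3)}
for row in rows:
    fragments[node_for_key(row["id"], 3)].append(row)

# Every row lands in exactly one fragment; the fragments together
# reconstruct the single logical table.
print(sum(len(f) for f in fragments.values()))  # 10
```

    The hash makes placement deterministic, so any node can compute where a row lives without consulting a central catalogue; the trade-off is that changing the node count reshuffles most rows, which is what consistent hashing is designed to avoid.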

    Combined Indexing Method In NoSQL Databases

    Any system must process requests quickly; this is one of the main conditions for a successful system. Higher data-processing rates come along with new technologies. An example is 5G technology, which allows data to be exchanged at speeds of up to 100 Mbps for downloads and up to 50 Mbps for uploads. The operation of a database depends on many factors, including the characteristics of the server, the number of requests to the server, and the requests themselves. Poorly formulated queries can degrade the speed of the system in general. The situation can be corrected by indexing, which increases the speed of searching for information in the database itself.
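    The speed-up indexing provides can be shown with a toy hash index over one column. The table contents and the helper `find_by_city` are made-up examples, not from the paper.

```python
from collections import defaultdict

table = [
    {"id": 1, "name": "Ann", "city": "Kyiv"},
    {"id": 2, "name": "Bob", "city": "Lviv"},
    {"id": 3, "name": "Eve", "city": "Kyiv"},
]

# Build the index in one pass over the table: column value -> row numbers.
index = defaultdict(list)
for row_number, row in enumerate(table):
    index[row["city"]].append(row_number)

def find_by_city(city):
    """O(1)-average lookup via the index, instead of scanning every row."""
    return [table[i] for i in index.get(city, [])]

print([r["id"] for r in find_by_city("Kyiv")])  # [1, 3]
```

    The one-time cost of building (and maintaining) the index is repaid on every subsequent query, which is exactly the trade-off a database administrator weighs when choosing which columns to index.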

    Modification of hashing algorithm to increase rate of operations in NOSQL databases

    Optimizing a database is quite a difficult task that involves solving a range of interrelated problems: ensuring acceptable performance and functionality, user convenience, and efficient use of resources, for example by minimizing memory costs and maximizing network usage. The most important aspect of optimization is increasing database performance. General methods for speeding up programs apply here, such as increasing hardware capacity, configuring the operating system, and optimizing the structure of external media and the placement of the database on them. In addition, special tools already built into the DBMS are used. In particular, most modern relational databases have a special component, the query optimizer, which processes selection and data-manipulation requests quickly and efficiently. The most common way to optimize database performance is to compress the database: this optimizes the placement of database objects on external media and returns free disk space for future use. The most common compression technology is based on differences, where a value is replaced by information about how it differs from the previous value. Another type of compression technology is based on hierarchical data compression, the essence of which is encoding individual characters with bit strings of different lengths. Indexing and hashing are used to speed up access to database data at the request of users. Indexing speeds up search operations in the database, as well as other operations that require a search: deleting, editing, sorting. The purpose of indexing is to speed up data retrieval by reducing the number of disk I/O operations. Another common way to organize records and access data is hashing, a technology for quick direct access to a database record based on a given value of some record field, usually a key one.
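    The difference-based compression described above can be sketched as delta encoding: store the first value, then only each value's difference from its predecessor. Function names and sample data are illustrative assumptions.

```python
def delta_encode(values):
    """Replace each value by its difference from the previous one --
    the difference-based compression described in the abstract."""
    if not values:
        return []
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    """Reverse the encoding by accumulating the differences."""
    out = []
    total = 0
    for d in deltas:
        total += d
        out.append(total)
    return out

readings = [1000, 1002, 1003, 1003, 1010]
encoded = delta_encode(readings)
print(encoded)  # [1000, 2, 1, 0, 7]
print(delta_decode(encoded) == readings)  # True: lossless round trip
```

    On slowly changing data the deltas are small numbers, so they fit in far fewer bits than the original values; the hierarchical (variable-length bit-string) compression the abstract also mentions is Huffman-style coding, often applied on top of such deltas.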

    The Concept Of Deep Learning: Recognizing Elements In Cartographic Images

    In the process of digitizing archives, there is the problem of transferring the obtained data into a vector image for further work with the result; take, for example, the cartographic schemes of hydraulic structures. To solve this problem, we consider deep learning. Deep learning is a class of machine learning algorithms that uses a multilayer system of nonlinear filters to extract the required characteristics through successive transformations. We consider how deep learning works using the example of recognizing elements in a cartographic image.
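    The "multilayer system of nonlinear filters" can be illustrated with its basic building block: one linear convolution filter followed by a nonlinearity. The tiny image and edge kernel below are made-up examples, and a real recognizer would stack many learned layers rather than this single hand-written one.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a grayscale image
    with a small kernel -- one linear filter layer."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1
    w = len(image[0]) - kw + 1
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

def relu(feature_map):
    """The nonlinearity applied between filter layers."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# A vertical-edge kernel applied to a tiny map-like image: the edge
# between the dark and bright halves lights up in the feature map.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[-1.0, 1.0], [-1.0, 1.0]]
features = relu(conv2d(image, edge_kernel))
print(features)  # strong response only in the middle column
```

    Deep networks repeat this filter-plus-nonlinearity pattern many times, learning the kernels from data so that later layers respond to whole map symbols rather than simple edges.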

    Features Of Indexing In Databases And The Choice Of The Optimal Implementation

    Database management systems use indexing to improve performance and speed up search queries. There are several possible indexing implementations, and the problem is choosing the optimal one for given conditions. To address this issue, a review of the main indexing implementations used in modern database management systems is provided. The data structures underlying indexes are considered, and examples and features of using each of the main indexing implementations are given.
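    The choice between implementations can be sketched by contrasting an ordered index (a stand-in for a B-tree, built here with `bisect` over a sorted list) with a hash index. The key set is a made-up example; real B-trees differ in structure but offer the same ordered-access capability shown here.

```python
import bisect

keys = [17, 3, 42, 8, 23, 15]

sorted_index = sorted(keys)  # ordered index: supports range scans
hash_index = set(keys)       # hash index: exact matches only, O(1) average

def range_query(low, high):
    """All keys in [low, high], using the ordered index."""
    lo = bisect.bisect_left(sorted_index, low)
    hi = bisect.bisect_right(sorted_index, high)
    return sorted_index[lo:hi]

print(range_query(8, 23))  # [8, 15, 17, 23]
print(42 in hash_index)    # True
```

    This is the core of the "optimal implementation" question: if queries are equality lookups, a hash index is hard to beat; if they involve ranges, sorting, or prefix scans, an ordered (B-tree-style) index is required.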