10 research outputs found
On Finding Maximum Cardinality Subset of Vectors with a Constraint on Normalized Squared Length of Vectors Sum
In this paper, we consider the problem of finding a maximum cardinality
subset of vectors, subject to a constraint on the normalized squared length of
the sum of the vectors. This problem is closely related to Problem 1 from
(Eremeev, Kel'manov, Pyatkin, 2016); the main difference is that the
constraint and the optimization criterion are swapped.
We prove that the problem is NP-hard even in terms of finding a feasible
solution. We propose an exact algorithm that runs in pseudo-polynomial time
in the special case where the dimension of the space is bounded from above by
a constant and the input data are integer. A computational experiment is
carried out in which the proposed algorithm is compared with the COINBONMIN
solver applied to a mixed integer quadratic programming formulation of the
problem. The results indicate that the proposed algorithm is superior when
the dimension of the Euclidean space is low, while COINBONMIN has an
advantage for larger dimensions.
Comment: To appear in Proceedings of the 6th International Conference on
Analysis of Images, Social Networks, and Texts (AIST'2017).
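The abstract does not state the exact form of the constraint, so the following is only a plausible formalization, assuming a lower bound with a given threshold $\alpha$:

Given vectors $y_1, \dots, y_n \in \mathbb{R}^d$ and a bound $\alpha > 0$, find
\[
  \max_{\emptyset \neq S \subseteq \{1, \dots, n\}} |S|
  \qquad \text{subject to} \qquad
  \frac{1}{|S|} \Bigl\| \sum_{i \in S} y_i \Bigr\|^2 \ge \alpha .
\]
Swapping the constraint and the criterion, as the abstract notes, recovers the related problem of optimizing the normalized squared length of the sum under a cardinality constraint.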
On Evaluation of Bangla Word Analogies
This paper presents a high-quality dataset for evaluating Bangla word
embeddings, a fundamental task in the field of Natural Language Processing
(NLP). Despite being the 7th most-spoken language in the world, Bangla is a
low-resource language, and popular NLP models often fail to perform well on
it. Developing a reliable evaluation test set for Bangla word embeddings is
crucial for benchmarking and guiding future research. We provide a
Mikolov-style word analogy evaluation set specifically for Bangla, with a
sample size of 16,678, as well as a translated and curated version of the
Mikolov dataset containing 10,594 samples for cross-lingual research. Our
experiments with different state-of-the-art embedding models reveal that
Bangla has its own unique characteristics, and current embeddings for Bangla
still struggle to achieve high accuracy on both datasets. We suggest that
future research focus on training models with larger datasets and on the
unique morphological characteristics of Bangla. This study represents the
first step towards building a reliable NLP system for the Bangla language.
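For context, a Mikolov-style analogy set is usually scored with the 3CosAdd rule: for a question "a is to b as c is to d", the model must rank d nearest to the vector b - a + c. A minimal sketch, assuming a four-words-per-line analogy file and gensim word vectors (both file names are hypothetical placeholders):

# Minimal sketch of Mikolov-style analogy evaluation (3CosAdd).
# File paths are hypothetical placeholders, not artifacts of the paper.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("bangla_vectors.txt")

correct = total = 0
with open("bangla_analogies.txt", encoding="utf-8") as f:
    for line in f:
        a, b, c, d = line.split()
        if any(w not in vectors for w in (a, b, c, d)):
            continue  # skip questions with out-of-vocabulary words
        # 3CosAdd: the answer should be the nearest neighbor of b - a + c
        predicted = vectors.most_similar(positive=[b, c], negative=[a], topn=1)[0][0]
        correct += predicted == d
        total += 1

print(f"analogy accuracy: {correct / total:.3f} over {total} questions")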
Vector Representation of Words with Semantic Relations: Experimental Observations
The ability to identify semantic relations between words has made the word2vec model widely used in NLP tasks. The idea of word2vec is based on a simple rule: two words are more similar if they occur in similar contexts. Each word is represented as a vector, so vectors with close coordinates can be interpreted as semantically similar words. This makes it possible to establish semantic relations (synonymy, hypernymy and hyponymy, and other semantic relations) by automatic extraction. Extracting semantic relations by hand is a time-consuming and subjective task that requires expert involvement. Unfortunately, the word2vec model produces an associative list of words that does not consist of related words only. In this paper, we show some additional criteria that may be applicable to solving this problem. Observations and experiments with well-known characteristics, such as word frequency and position in the associative list, can be useful for improving the extraction of semantic relations for the Russian language using word embeddings. In the experiments, a word2vec model trained on the Flibusta corpus and pairs from Wiktionary are used as examples with semantic relations. Semantically related words are applicable to thesauri, ontologies, and intelligent systems for natural language processing.
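A minimal sketch of the kind of filtering the paper investigates: pruning a word2vec associative list using list position and corpus frequency as additional criteria. The model file, frequency counts, and thresholds below are illustrative assumptions, not values from the paper.

# Prune a word2vec associative list by rank and corpus frequency.
from gensim.models import KeyedVectors

def filter_associates(vectors, word_freq, head, topn=50, max_rank=10, min_freq=100):
    """Keep associates that rank high in the list and are frequent in the corpus."""
    associates = vectors.most_similar(head, topn=topn)  # list of (word, cosine)
    return [
        (word, sim)
        for rank, (word, sim) in enumerate(associates)
        if rank < max_rank and word_freq.get(word, 0) >= min_freq
    ]

if __name__ == "__main__":
    vectors = KeyedVectors.load("flibusta_w2v.kv")   # hypothetical model file
    word_freq = {"пёс": 300, "ошейник": 90}          # hypothetical corpus counts
    print(filter_associates(vectors, word_freq, "собака"))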
Watset: Automatic Induction of Synsets from a Graph of Synonyms
This paper presents a new graph-based approach that induces synsets using synonymy dictionaries and word embeddings. First, we build a weighted graph of synonyms extracted from commonly available resources, such as Wiktionary. Second, we apply word sense induction to deal with ambiguous words. Finally, we cluster the disambiguated version of the ambiguous input graph into synsets. Our meta-clustering approach lets us use an efficient hard clustering algorithm to perform a fuzzy clustering of the graph. Despite its simplicity, our approach shows excellent results, outperforming five competitive state-of-the-art methods in terms of F-score on three gold standard datasets for English and Russian derived from large-scale manually constructed lexical resources
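The hard clustering step at the core of this meta-clustering approach can be illustrated with Chinese Whispers, one of the simple graph clustering algorithms used in this line of work. Below is a minimal sketch on a toy weighted synonym graph; the full Watset pipeline additionally induces word senses and disambiguates the graph before this step.

# Chinese Whispers hard clustering on a toy weighted synonym graph.
import random
import networkx as nx

def chinese_whispers(graph, iterations=20, seed=0):
    rng = random.Random(seed)
    labels = {node: node for node in graph}  # every node starts in its own cluster
    nodes = list(graph)
    for _ in range(iterations):
        rng.shuffle(nodes)
        for node in nodes:
            if not graph[node]:
                continue
            # adopt the label with the largest total edge weight among neighbors
            scores = {}
            for neighbor, data in graph[node].items():
                label = labels[neighbor]
                scores[label] = scores.get(label, 0.0) + data.get("weight", 1.0)
            labels[node] = max(scores, key=scores.get)
    clusters = {}
    for node, label in labels.items():
        clusters.setdefault(label, set()).add(node)
    return list(clusters.values())

g = nx.Graph()
g.add_weighted_edges_from([
    ("car", "automobile", 1.0), ("car", "machine", 0.5),
    ("coach", "bus", 1.0), ("bus", "omnibus", 0.8),
])
print(chinese_whispers(g))  # two clusters on this toy graph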
Parallel Implementations of Algorithms for Finding Mirror Symmetry in Binary Raster Images
Assessing the symmetry of a shape is an important stage in the analysis of binary images and can be applied to many practical computer vision problems, such as analyzing plant growth conditions or the bilateral symmetry of insects. Known algorithms for finding the axis of mirror symmetry yield only an approximate solution to this problem and, as a rule, provide no way to assess the quality of the solution obtained. A natural way to assess quality in this case is comparison with the exact solution: the reference symmetry axis, relative to which the symmetry measure of the image attains its maximum value. This paper studies an exact method for finding such a reference symmetry axis, based on an exhaustive search over all potentially possible axes, with the symmetry of the shape evaluated using the Jaccard set-theoretic similarity applied to the subsets of the shape's pixels obtained by dividing it with the axis. The exhaustive search algorithm is guaranteed to find the reference symmetry axis but requires considerable time to process each image. To achieve a speed that makes it possible to work with large image databases, a parallel version of this algorithm was developed, implemented in C++ using the MPI parallel programming technology and tested on the resources of the Lomonosov Moscow State University supercomputer complex. Experiments on the "Butterflies" image database showed that the proposed algorithm finds the reference symmetry axis in a time acceptable for processing databases of hundreds and thousands of images, which made it possible to use it for automatic labeling of image databases and for debugging and testing the approximate axis-search procedures previously proposed by the authors. The developed parallel version of one of the approximate algorithms makes it possible to solve applied image analysis problems under near-real-time conditions, achieving processing times of fractions of a second even on ordinary multi-core personal computers while preserving maximum or near-maximum solution quality. This work was supported by RFBR grant 16-57-52042.
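A simplified sketch of the symmetry measure described above: the Jaccard similarity between the figure's pixel set and its reflection across a candidate axis. The paper's exact algorithm enumerates arbitrary axes; restricting the sketch to vertical axes through pixel columns keeps it short, and the test image is an illustrative assumption.

# Jaccard symmetry measure for a binary image, vertical candidate axes only.
import numpy as np

def jaccard_symmetry(image, axis_col):
    """Jaccard index between the shape and its mirror across column axis_col."""
    ys, xs = np.nonzero(image)
    pixels = set(zip(ys.tolist(), xs.tolist()))
    mirrored = {(y, 2 * axis_col - x) for y, x in pixels}
    return len(pixels & mirrored) / len(pixels | mirrored)

def best_vertical_axis(image):
    """Exhaustively score every candidate column and return the best axis."""
    return max((jaccard_symmetry(image, c), c) for c in range(image.shape[1]))

img = np.zeros((5, 7), dtype=np.uint8)
img[1:4, 2:5] = 1                  # a symmetric 3x3 block of foreground pixels
print(best_vertical_axis(img))     # -> (1.0, 3): perfect symmetry about column 3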