In recent years, concepts and methods of complex networks have been employed
to tackle the word sense disambiguation (WSD) task by representing words as
nodes, which are connected if they are semantically similar. Despite the
increasing number of studies carried out with such models, most of them use
networks merely to represent the data, while the pattern recognition itself is
performed on the attribute space using traditional learning techniques. In
other words, the structural relationships between words have not been explicitly
used in the pattern recognition process. In addition, only a few investigations
have probed the suitability of representations based on bipartite networks
(bigraphs) for the problem, as many approaches consider all possible
links between words. In this context, we assess the relevance of a bipartite
network model representing both feature words (i.e. the words characterizing
the context) and target (ambiguous) words to solve ambiguities in written
texts. Here, we focus on the semantic relationships between these two types of
words, disregarding the relationships between feature words. In particular, the
proposed method not only serves to represent texts as graphs, but also
constructs a structure on which the discrimination of senses is accomplished.
Our results revealed that the proposed learning algorithm on such bipartite
networks provides excellent results, particularly when topical features are
employed to characterize the context. Surprisingly, our method even outperformed
the support vector machine algorithm in some cases, with the advantage of
remaining robust even when only a small training dataset is available. Taken together, the
results obtained here show that the proposed representation/classification
method might be useful to improve the semantic characterization of written
texts.
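
To make the bipartite representation concrete, the sketch below (not the authors' implementation) shows one possible way to link sense-labeled occurrences of a target word to the feature words of its contexts using NetworkX; the toy contexts, the node naming scheme, and the overlap-based disambiguate function are illustrative assumptions only.

```python
# A minimal, hypothetical sketch of a bipartite WSD representation:
# one partition holds (target word, sense) nodes, the other holds feature
# (context) words; edge weights count co-occurrences in training contexts.
import networkx as nx

# Toy training data: (target word, sense label, context feature words).
contexts = [
    ("bank", "finance", ["money", "loan", "account"]),
    ("bank", "river",   ["water", "shore", "fishing"]),
]

G = nx.Graph()
for target, sense, features in contexts:
    sense_node = f"{target}/{sense}"      # one node per (target, sense) pair
    G.add_node(sense_node, bipartite=0)   # target-word partition
    for feat in features:
        G.add_node(feat, bipartite=1)     # feature-word partition
        # accumulate co-occurrence counts as edge weights
        w = G.get_edge_data(sense_node, feat, default={"weight": 0})["weight"]
        G.add_edge(sense_node, feat, weight=w + 1)

def disambiguate(target, context_words):
    """Pick the sense whose feature neighborhood best overlaps the context
    (a simple weighted-overlap rule, used here only for illustration)."""
    senses = [n for n in G if n.startswith(f"{target}/")]
    scores = {s: sum(G[s][w]["weight"] for w in context_words if G.has_edge(s, w))
              for s in senses}
    return max(scores, key=scores.get)

print(disambiguate("bank", ["loan", "money"]))  # -> "bank/finance"
```

Note that, in line with the abstract, only edges between target and feature words are created; feature words are never connected to one another.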