
    Complexity theory and the web


    Persistence of complex food webs in metacommunities

    Metacommunity theory is considered a promising approach for explaining species diversity and food web complexity. Recently, Pillai et al. proposed a simple modeling framework for the dynamics of food webs at the metacommunity level. Here, we employ this framework to compute general conditions for the persistence of complex food webs in metacommunities. The persistence conditions found depend on the connectivity of the resource patches and the structure of the assembled food web, thus linking the underlying spatial patch network and the species interaction network. We find that the persistence of omnivores is more likely when they feed on (a) prey at low trophic levels and (b) prey at similar trophic levels.
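
The framework referenced above treats food-web persistence as a colonization-extinction process on a network of patches, with each consumer confined to patches where its prey is already established. As a rough illustration (not the authors' model or parameterization), the Python sketch below simulates a three-level food chain on a ring of patches under assumed colonization and extinction rates; omnivory and the analytical persistence conditions are beyond this toy example.

```python
# Illustrative sketch only: a Levins-style patch-occupancy simulation of a
# three-level food chain.  A species can occupy a patch only if its prey is
# present there, and local extinction of the prey removes its consumers too.
# The ring topology, rates, and seeding are assumptions for illustration.
import random

N_PATCHES = 100          # resource patches arranged in a ring
COL = [0.4, 0.3, 0.3]    # assumed colonization rates for levels 0 (basal), 1, 2
EXT = [0.1, 0.1, 0.1]    # assumed local extinction rates
STEPS = 2000

# occupancy[level][patch] is True if that trophic level is present in the patch
occupancy = [[False] * N_PATCHES for _ in range(3)]
for level in range(3):
    occupancy[level][0] = True   # seed the full chain in one patch

def neighbours(p):
    return [(p - 1) % N_PATCHES, (p + 1) % N_PATCHES]

for _ in range(STEPS):
    for level in range(3):
        for p in range(N_PATCHES):
            if not occupancy[level][p]:
                continue
            # colonize neighbouring patches, but only where the prey
            # (the level below) is already established
            for q in neighbours(p):
                prey_ok = level == 0 or occupancy[level - 1][q]
                if prey_ok and random.random() < COL[level]:
                    occupancy[level][q] = True
            # local extinction cascades upward within the patch
            if random.random() < EXT[level]:
                for upper in range(level, 3):
                    occupancy[upper][p] = False

for level in range(3):
    print(f"trophic level {level}: {sum(occupancy[level])} / {N_PATCHES} patches occupied")
```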

    Making the Connection: Moore’s Theory of Transactional Distance and Its Relevance to the Use of a Virtual Classroom in Postgraduate Online Teacher Education

    This study explored the use of the Web-based virtual environment Adobe Connect Pro in a postgraduate online teacher education programme at the University of Waikato. It applied the tenets of Moore's Theory of Transactional Distance (Moore, 1997) in examining the efficacy of using the virtual classroom to promote quality dialogue, and explored how both internal and external structural elements related to the purpose and use of the classroom affected the sense of learner autonomy. The study illustrates the complexity of the relationship between the elements of Moore's theory, and how the implementation of an external structuring technology such as the virtual classroom can have both positive impacts (dialogue creation) and negative impacts (a diminished sense of learner autonomy). It also suggests that, although Moore's theory provides a useful conceptual "lens" through which to analyse online learning practices, its tenets may need revisiting to reflect the move toward synchronous communication tools in online distance learning.

    Rough clustering for web transactions

    Grouping web transactions into clusters is important for obtaining a better understanding of users' behavior. Currently, the rough-approximation-based clustering technique is used to group web transactions into clusters; it is based on the similarity of upper approximations of transactions under a given threshold. However, processing time remains an issue because of the high complexity of computing the similarity of upper approximations of a transaction, which is used to merge two or more clusters. In this study, an alternative technique for grouping web transactions using rough set theory is proposed. It is based on two similarity classes with a non-void intersection. The technique is implemented in MATLAB® version 7.6.0.324 (R2008a). Two UCI benchmark datasets, taken from http://kdd.ics.uci.edu/databases/msnbc/msnbc.html and http://kdd.ics.uci.edu/databases/Microsoft/microsoft.html, are used in the simulations. The simulations reveal that the proposed technique requires significantly lower response time, by up to 62.69% and 66.82% respectively, compared to rough-approximation-based clustering. Meanwhile, for cluster purity it performs up to 2.5% and 14.47% better, respectively.
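
As a rough sketch of the clustering idea described here (not the paper's MATLAB implementation), the Python snippet below groups transactions whose similarity classes have a non-void intersection; the example transactions, the Jaccard-style similarity, and the 0.5 threshold are illustrative assumptions.

```python
# Toy illustration: cluster web transactions (sets of visited pages) by
# merging clusters whose members' similarity classes intersect.
from itertools import combinations

transactions = [
    {"home", "news", "sports"},
    {"home", "news"},
    {"sports", "scores"},
    {"weather", "home"},
]
THRESHOLD = 0.5   # assumed similarity threshold

def similarity(a, b):
    """Jaccard-style similarity between two transactions."""
    return len(a & b) / len(a | b)

# Similarity class of transaction i: all transactions at least THRESHOLD-similar to it.
sim_class = {
    i: {j for j in range(len(transactions))
        if similarity(transactions[i], transactions[j]) >= THRESHOLD}
    for i in range(len(transactions))
}

# Start from singleton clusters and merge any two clusters containing
# transactions whose similarity classes have a non-void intersection.
clusters = [{i} for i in range(len(transactions))]
merged = True
while merged:
    merged = False
    for a, b in combinations(range(len(clusters)), 2):
        if any(sim_class[i] & sim_class[j] for i in clusters[a] for j in clusters[b]):
            clusters[a] |= clusters[b]
            del clusters[b]
            merged = True
            break

print(clusters)   # e.g. [{0, 1}, {2}, {3}] for the transactions above
```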

    Normalized Information Distance

    The normalized information distance is a universal distance measure for objects of all kinds. It is based on Kolmogorov complexity and is therefore uncomputable, but there are ways to utilize it. First, compression algorithms can be used to approximate the Kolmogorov complexity if the objects have a string representation. Second, for names and abstract concepts, page-count statistics from the World Wide Web can be used. These practical realizations of the normalized information distance can then be applied to machine learning tasks, especially clustering, to perform feature-free and parameter-free data mining. This chapter discusses the theoretical foundations of the normalized information distance and both practical realizations. It presents numerous examples of successful real-world applications based on these distance measures, ranging from bioinformatics to music clustering to machine translation. Comment: 33 pages, 12 figures; in: Information Theory and Statistical Learning, eds. M. Dehmer, F. Emmert-Streib, Springer-Verlag, New York, to appear.
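
A minimal sketch of the first practical realization mentioned above, the compression-based approximation known as the normalized compression distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with a real compressor standing in for Kolmogorov complexity. The choice of zlib and the example strings below are illustrative assumptions.

```python
# Normalized compression distance: a computable approximation of the
# normalized information distance, using compressed length as a proxy
# for Kolmogorov complexity.
import zlib

def approx_complexity(data: bytes) -> int:
    """Compressed length as a rough stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx = approx_complexity(x)
    cy = approx_complexity(y)
    cxy = approx_complexity(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

text_a = b"the quick brown fox jumps over the lazy dog " * 20
text_b = b"the quick brown fox jumps over the lazy cat " * 20
text_c = b"colourless green ideas sleep furiously " * 20

print(ncd(text_a, text_b))  # small: the strings share most of their structure
print(ncd(text_a, text_c))  # larger: unrelated strings compress poorly together
```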

    The Google Similarity Distance

    Words and phrases acquire meaning from the way they are used in society, from their relative semantics to other words and phrases. For computers, the equivalent of 'society' is 'database', and the equivalent of 'use' is 'way to search the database'. We present a new theory of similarity between words and phrases based on information distance and Kolmogorov complexity. To fix thoughts, we use the World Wide Web as the database and Google as the search engine; the method is also applicable to other search engines and databases. This theory is then applied to construct a method to automatically extract the similarity of words and phrases, the Google similarity distance, from the World Wide Web using Google page counts. The World Wide Web is the largest database on earth, and the context information entered by millions of independent users averages out to provide automatic semantics of useful quality. We give applications in hierarchical clustering, classification, and language translation. We give examples that distinguish colors from numbers, cluster names of paintings by 17th-century Dutch masters and names of books by English novelists, and demonstrate the ability to understand emergencies and primes, as well as to perform a simple automatic English-Spanish translation. Finally, we use the WordNet database as an objective baseline against which to judge the performance of our method. We conduct a massive randomized trial in binary classification using support vector machines to learn categories based on our Google distance, resulting in a mean agreement of 87% with the expert-crafted WordNet categories. Comment: 15 pages, 10 figures; final published version, incorporating referees' comments, up to minor changes in the galley proof.
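
For concreteness, the sketch below evaluates the Google similarity distance from page counts, NGD(x, y) = (max(log f(x), log f(y)) - log f(x, y)) / (log N - min(log f(x), log f(y))); the counts and index size used here are illustrative stand-ins for live search-engine queries rather than figures taken from the paper.

```python
# Normalized Google distance computed from page counts f(x), f(y), f(x, y).
import math

# Assumed index size (illustrative); the method uses the number of pages
# indexed by the search engine.
N = 8.0e9

def ngd(fx: float, fy: float, fxy: float, n: float = N) -> float:
    """Distance from page counts: small when the terms co-occur often."""
    lfx, lfy, lfxy, ln = math.log(fx), math.log(fy), math.log(fxy), math.log(n)
    return (max(lfx, lfy) - lfxy) / (ln - min(lfx, lfy))

# Hypothetical page counts: terms that co-occur often ("horse", "rider")
# come out closer than terms that rarely co-occur ("horse", "transistor").
print(ngd(fx=46_700_000, fy=12_200_000, fxy=2_630_000))  # ~0.44
print(ngd(fx=46_700_000, fy=5_500_000, fxy=15_000))      # ~1.10
```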

    Linguistic complexity in high-school students’ EFL writing

    This study examined the syntactic and semantic complexity of L2 English writing in a Bosnian-Herzegovinian high school. Forty texts written by individual students, ten per grade, were quantitatively analyzed by applying methods established in previous research. The syntactic portion of the analysis, based on the t-unit analysis introduced by Hunt (1965), was done using the Web-based L2 Syntactic Complexity Analyzer (Lu, 2010), while the semantic portion, largely based on the theory laid out in systemic functional linguistics (Halliday & Matthiessen, 2014), was done using the Web-based Lexical Complexity Analyzer (Ai & Lu, 2010) as well as manual identification of grammatical metaphors. The statistical analysis included tests of variance, correlation, and effect size. It was found that the syntactic and semantic complexity of writing increases in later grades; however, this increase is not consistent across all grades.
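
To make the t-unit analysis concrete, here is a toy computation of one standard measure derived from it, mean length of T-unit (total words divided by number of T-units); the hand-marked T-unit boundaries and the sample sentence are assumptions for illustration, whereas the analyzers cited above segment and count automatically from parsed text.

```python
# Mean length of T-unit (MLT) from hand-segmented text: "|" marks an
# assumed T-unit boundary purely for this illustration.
def mean_length_of_t_unit(segmented_text: str) -> float:
    t_units = [t.strip() for t in segmented_text.split("|") if t.strip()]
    words = sum(len(t.split()) for t in t_units)
    return words / len(t_units)

sample = (
    "The students wrote two essays |"
    " the second one was longer because they had more time to revise it"
)
print(mean_length_of_t_unit(sample))   # higher values suggest more complex syntax
```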