    Using Wikis to Create Online Communities

    A wiki allows anyone to take part in the creation and editing of web content. With simplified text-formatting rules that anyone can easily learn, it puts experienced web designers and web novices on an equal footing. In public libraries, where the technological skills of employees range from advanced to non-existent, wikis allow everyone to help develop the website. The resulting website reflects the imagination and good ideas of the entire organization, not just a select few with the requisite "tech-savvy." The possibilities for what libraries can do with wikis are endless. At their least, they are spaces for quick and easy collaborative work. At their best, they can become true community resources that position the library as an online hub of its local community.
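To make the "simplified text-formatting rules" concrete, here is a minimal sketch in Python of how a few common wiki constructs might be translated to HTML. The exact syntax varies between wiki engines, so the rules below are illustrative assumptions rather than any particular engine's grammar.

```python
import re

# Illustrative subset of wiki markup; real engines (MediaWiki, etc.)
# support far more, and their exact rules differ.
WIKI_RULES = [
    (re.compile(r"'''(.+?)'''"), r"<strong>\1</strong>"),    # '''bold'''
    (re.compile(r"''(.+?)''"), r"<em>\1</em>"),              # ''italic''
    (re.compile(r"\[\[(.+?)\]\]"), r'<a href="\1">\1</a>'),  # [[internal link]]
    (re.compile(r"^== (.+?) ==$"), r"<h2>\1</h2>"),          # == heading ==
]

def wiki_to_html(line: str) -> str:
    """Apply each substitution rule in order to one line of wiki text."""
    for pattern, replacement in WIKI_RULES:
        line = pattern.sub(replacement, line)
    return line

print(wiki_to_html("== Events =="))
print(wiki_to_html("The '''book club''' meets in the [[Main Hall]]."))
```

Note that the bold rule must run before the italic rule, since `'''` also matches the shorter `''` delimiter; simplicity of this kind is what lets novices learn the syntax quickly.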

    Comment on Reply to Comment of Finger et al. (2013) on: 'Evidence for an Early-Middle Miocene age of the Navidad Formation (central Chile): Paleontological, paleoclimatic and tectonic implications' of Gutiérrez et al. (2013, Andean Geology 40 (1): 66-78)

    Indexación: Web of Science; Scielo. In their answer to our Comment (Finger et al., 2013), Le Roux et al. (2013) misunderstand several of our remarks and present what we view as flawed arguments, principally their case for a shallow-marine environment for part of the Navidad Formation. We do not wish to see this exchange evolve into an endless discussion, but we feel obligated to clarify some points. We think this is necessary because of the history and importance of the Navidad Formation as the reference for the marine Miocene of Chile. Here we also expound upon some concepts relevant to the distinction between shallow- and deep-marine environments.

    Loop Drop: Live Electronic Music Software powered by Web Audio (demo)

    As a follow-up to my talk "Is Web Audio ready for the stage / dancefloor", I also submit a demo of the software Loop Drop (http://loopjs.com), which I will mostly be talking about. I will also be available to discuss in more depth with attendees the various associated challenges and the endless possibilities opened up by these new web standards.

    musicSpace: integrating musicology's heterogeneous data sources

    A significant barrier to the research endeavours of musicologists (and humanities scholars more generally) is the sheer amount of potentially relevant information that has accumulated over centuries. Whereas researchers once faced the daunting prospect of physically scouring endless primary and secondary sources to answer the basic whats, wheres and whens of history, these sources and the data they contain are now increasingly available online. Yet the vast increase in the online availability of data, the heterogeneity of this data, the plethora of data providers and, moreover, the inability of current search tools to manipulate metadata in useful and intelligent ways mean that extracting large tranches of basic factual information or running multi-part search queries is still enormously and needlessly time-consuming. Accordingly, the musicSpace project is exploiting Semantic Web technologies (Berners-Lee et al., 2001) to develop a search interface that integrates access to musicology's largest and most significant online resources. This will make previously intractable search queries tractable, thus allowing our users to spend their research time more efficiently and ultimately aiding the attainment of new knowledge. This brief paper gives an overview of our work.
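As an illustration of the kind of integration the abstract describes (not the project's actual code: the file names, provider descriptions and query fields are invented for the example), the sketch below merges two hypothetical RDF exports into one graph and runs a single SPARQL query across both, using the Python library rdflib.

```python
from rdflib import Graph

# Hypothetical exports from two musicology data providers; the real
# musicSpace sources and their schemas are not named in the abstract.
SOURCES = [
    "catalogue_a.ttl",  # e.g. a composer/works catalogue (assumed file)
    "catalogue_b.ttl",  # e.g. a concert-programme archive (assumed file)
]

graph = Graph()
for source in SOURCES:
    graph.parse(source, format="turtle")  # merge all triples into one graph

# One query now spans every provider merged above.  Dublin Core terms are
# used here as a plausible shared vocabulary for the metadata.
query = """
    PREFIX dc: <http://purl.org/dc/elements/1.1/>
    SELECT ?work ?date WHERE {
        ?work dc:creator "Henry Purcell" .
        ?work dc:date ?date .
    }
"""
for row in graph.query(query):
    print(row.work, row.date)
```

Running one query over the merged graph, instead of one search per provider, is the step that turns a multi-part, multi-site search into a single tractable operation.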

    E-HRM: Innovation or irritation. An explorative empirical study in five large companies on web-based HRM

    Technologically optimistic voices assume that, from a technical perspective, the IT possibilities for HRM are endless: in principle, all HR processes can be supported by IT. E-HRM is the relatively new term for this IT-supported HRM, especially through the use of web technology. This paper aims at demystifying e-HRM by answering the following questions: what actually is e-HRM? what are the goals of starting with e-HRM? what types can be distinguished? and what are the outcomes of e-HRM? Based upon the literature, an e-HRM research model is developed and, guided by this model, we have studied five organizations that have already been on the "e-HR road" for a number of years. We conclude that the goals of e-HRM are mainly to improve HR's administrative efficiency and to achieve cost reduction. In addition to these goals, international companies seem to use the introduction of e-HRM to standardize/harmonize HR policies and processes. Further, there is a 'gap' between e-HRM in a technical sense and e-HRM in a practical sense in the five companies involved in our study. Finally, e-HRM hardly helped to improve employee competences, but it did result in cost reduction and a reduction of the administrative burden.

    Swarm Theology

    After showing the unsuitability of continuing to use some earlier models of divine action, the author examines the implications for God's involvement in the world suggested by the current understanding of the behavior of complex systems.

    Web 2.0 and the ever elusive balance between information explosion and data mining

    In a fascinating tussle of perspectives in Volume 6, Issue 3 (April 2008) of Frontiers in Ecology and the Environment, Martin A. Nuñez and Gregory M. Crutsinger list ways of keeping up with the recent literature. They advocate that PhD students choose a well-studied system to improve their chances of success. On the other side of this fascinating tussle, Daniel Simberloff and Nathan J. Sanders agree with their enthusiastic case and their approaches for keeping up with the recent literature. However, they disagree with the play-it-safe strategy for those starting out in ecology and evolution.

For me, this tussle of perspectives is really fascinating. It underlines the story of our times: in an era of information explosion, should we concentrate on mining the existing information for knowledge, or should we keep adding to the explosion? This dilemma is captured well in the context of the tussle by a quote from Nassim Nicholas Taleb: "It is almost impossible these days to finish a PhD without excessive intellectual curiosity, and it is impossible to get a faculty position without narrowly specializing in a chosen field."
In this letter, I want to harmonize the two perspectives by arguing that the coexistence of information explosion and data mining is possible. We should strive for that ever-elusive balance, and I discuss below how Web 2.0 will enable it. Web 2.0, though still in its infancy, already shows promising results, and I discuss these uses below.

    Automatic supervised information extraction of structured web data

    The overall purpose of this project is, in short, to create a system able to extract vital information from product web pages just as a human would: information like the name of the product, its description, its price tag, the company that produces it, and so on. At first glance this may not seem extraordinary or technically difficult, since web scraping techniques have existed for a long time (like the Python library Beautiful Soup, an HTML parser released in 2004). But let us think for a second about what it actually means to be able to extract desired information from any given web source: the way information is displayed can be extremely varied, not only visually but also semantically. For instance, some hotel booking web pages display all prices for the different room types at once, while websites like Amazon present the main product in detail and then smaller product recommendations further down the page, the latter being the way most retail companies prefer to display their assets. And each site comes with its own styling and search engines. With the above said, the task of mining valuable data from the web no longer sounds as easy as it first seemed. Hence the purpose of this project is to shine some light on the problem of Automatic Supervised Information Extraction of Structured Web Data.

    It is important to ask whether developing such a solution is really valuable at all: such an endeavour, in both time and computing resources, should lead to a useful end result, at least on paper, to justify it. The opinion of this author is that it does lead to a potentially valuable result. The targeted extraction of publicly available, consumer-oriented content at large scale, in an accurate, reliable and future-proof manner, could provide an incredibly useful and large amount of data. This data, if kept updated, could create endless opportunities for Business Intelligence, although exactly which ones is beyond the scope of this work. A simple metaphor explains the potential value of this work: if an oil company were told where all the oil reserves on the planet are, it would still need to invest in machinery, workers and time to successfully exploit them, but half of the job would already have been done.

    As the reader will see in this work, the issue is tackled by building a somewhat complex architecture that ends in an Artificial Neural Network. A quick overview of the architecture is as follows: first, find the URLs that lead to the product pages containing the desired data within a given site (like URLs that lead to "action figure" products inside the site ebay.com); second, for each URL, extract its HTML and take a screenshot of the page, and store this data in a suitable and scalable fashion; third, label the data that will be fed to the NN; fourth, prepare the aforementioned data to be input into the NN; fifth, train the NN; and sixth, deploy the NN to make [hopefully accurate] predictions.
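The six-step overview above lends itself to a short sketch. The following Python fragment is a minimal, hypothetical rendering of the first two stages (URL discovery and HTML capture) using the Beautiful Soup library the abstract itself mentions; the seed URL, the link-filtering keyword and the storage layout are invented for illustration, and the screenshot and neural-network stages are only stubbed out.

```python
import pathlib
from urllib.parse import urljoin
from urllib.request import urlopen

from bs4 import BeautifulSoup  # the HTML parser the abstract refers to

SEED = "https://example.com/search?q=action+figure"  # hypothetical seed URL
KEYWORD = "action-figure"                            # hypothetical link filter
STORE = pathlib.Path("pages")

def find_product_urls(seed: str, keyword: str) -> list[str]:
    """Stage 1: collect links on the seed page that look like product pages."""
    soup = BeautifulSoup(urlopen(seed).read(), "html.parser")
    return [
        urljoin(seed, a["href"])
        for a in soup.find_all("a", href=True)
        if keyword in a["href"]
    ]

def capture(url: str, index: int) -> None:
    """Stage 2: store the raw HTML; a real system would also screenshot the
    rendered page (e.g. with a headless browser) to provide the visual input."""
    STORE.mkdir(exist_ok=True)
    (STORE / f"page_{index}.html").write_bytes(urlopen(url).read())

for i, url in enumerate(find_product_urls(SEED, KEYWORD)):
    capture(url, i)
# Stages 3-6 (labelling, preparation, training, deployment) operate on the
# stored pages and are beyond the scope of this sketch.
```

Separating discovery and capture from the later learning stages is what makes the storage "suitable and scalable": pages can be crawled once and then relabelled or re-fed to different models without touching the live site again.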

    Introduction of an advanced caching layer leveraging the Varnish technology stack and integrating it to the existing web platform

    Web performance nowadays plays a significant role for many leading enterprises, as well as for those trying to gain more visibility and users. Multiple studies and research papers in the area show that poor performance has a negative impact on business goals. An endless wait for slow web pages…
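To make a caching layer's effect observable, here is a small sketch, assuming a Varnish instance already sits in front of a site at a hypothetical URL: it requests the same page twice and compares response times together with the standard Age header, which grows for objects served from cache.

```python
import time
from urllib.request import urlopen

URL = "https://www.example.com/"  # hypothetical site fronted by Varnish

def timed_get(url: str) -> tuple[float, str]:
    """Fetch the URL, returning elapsed seconds and the Age response header."""
    start = time.perf_counter()
    with urlopen(url) as response:
        response.read()
        age = response.headers.get("Age", "0")
    return time.perf_counter() - start, age

cold_time, cold_age = timed_get(URL)  # likely a cache miss (Age: 0)
warm_time, warm_age = timed_get(URL)  # likely a hit if the page is cacheable
print(f"first request:  {cold_time:.3f}s, Age: {cold_age}")
print(f"second request: {warm_time:.3f}s, Age: {warm_age}")
# A much faster second request with a non-zero Age header suggests the object
# was served from the cache rather than regenerated by the backend.
```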