17,625 research outputs found

    User-centric adaptation of structured Web documents for small devices

    Get PDF
    Content adaptation is a crucial step in making desktop-oriented web resources available to mobile, small device users. In this paper, we propose a decision engine comprising a content analysis module and a negotiation module to serve as the core of a content adaptation architecture. The content analysis module parses a structured web document originally intended for the desktop into small sections and transforms the document into a form that is best suited for rendering in a constrained mobile device. The transformation also provides the user with the best content value in an adapted web page while preserving content integrity. With the transformed document, the negotiation module selects the best rendering parameters to be used in the synthesis of an optimal adapted version of the content. The decisions made are based on the user's preference and QoS considerations. We have built a prototype to demonstrate the viability of our approach. © 2005 IEEE.
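    The content analysis step described above, parsing a desktop page into small sections before re-rendering for a constrained device, can be sketched with Python's standard html.parser. This is a minimal illustration that assumes heading-delimited structure; it is not the paper's actual decision engine.

```python
from html.parser import HTMLParser

class SectionSplitter(HTMLParser):
    """Split a desktop-oriented HTML document into heading-delimited
    sections -- a simplified stand-in for the content analysis module."""

    HEADINGS = {"h1", "h2", "h3"}

    def __init__(self):
        super().__init__()
        self.sections = []        # list of (heading, text) pairs
        self._heading = None
        self._in_heading = False
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag in self.HEADINGS:
            self._flush()         # a new heading closes the previous section
            self._in_heading = True

    def handle_endtag(self, tag):
        if tag in self.HEADINGS:
            self._in_heading = False

    def handle_data(self, data):
        if self._in_heading:
            self._heading = data.strip()
        else:
            self._buf.append(data)

    def _flush(self):
        text = " ".join(t.strip() for t in self._buf if t.strip())
        if self._heading or text:
            self.sections.append((self._heading, text))
        self._heading, self._buf = None, []

    def close(self):
        super().close()
        self._flush()             # emit the final section

splitter = SectionSplitter()
splitter.feed("<h2>News</h2><p>Today's story.</p><h2>Sports</h2><p>Match report.</p>")
splitter.close()
print(splitter.sections)
```

    Each (heading, text) pair could then be scored and selected by a negotiation step according to device constraints and user preference.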

    Social Media and the Public Sector

    Get PDF
    Social media is revolutionizing the way we live, learn, work, and play. Elements of the private sector have begun to thrive on opportunities to forge, build, and deepen relationships. Some are transforming their organizational structures and opening their corporate ecosystems in consequence. The public sector is a relative newcomer. It too can drive stakeholder involvement and satisfaction. Global conversations, especially among Generation Y, were born circa 2004. From 1995 until then, the internet had hosted static, one-way websites. These were places to visit passively, retrieve information from, and perhaps post comments about by electronic mail. Sixteen years later, Web 2.0 enables many-to-many connections in numerous domains of interest and practice, powered by the increasing use of blogs, image and video sharing, mashups, podcasts, ratings, Really Simple Syndication, social bookmarking, tweets, widgets, and wikis, among others. Today, people expect the internet to be user-centric.

    Cognition and the Web

    No full text
    Empirical research related to the Web has typically focused on its impact on social relationships and wider society; however, the cognitive impact of the Web is also an increasing focus of scientific interest and research attention. In this paper, I attempt to provide an overview of what I see as the important issues in the debate regarding the relationship between human cognition and the Web. I argue that the Web is potentially poised to transform our cognitive and epistemic profiles, but that in order to understand the nature of this influence we need to countenance a position that factors in the available scientific evidence, the changing nature of our interaction with the Web, and the possibility that many of our everyday cognitive achievements rely on complex webs of social and technological scaffolding. I review the literature relating to the cognitive effects of current Web technology, and I attempt to anticipate the cognitive impact of next-generation technologies, such as Web-based augmented reality systems and the transition to data-centric modes of information representation. I suggest that additional work is required to more fully understand the cognitive impact of both current and future Web technologies, and I identify some of the issues for future scientific work in this area. Given that recent scientific effort around the Web has coalesced into a new scientific discipline, namely that of Web Science, I suggest that many of the issues related to cognition and the Web could form part of the emerging Web Science research agenda.

    Meta data to support context aware mobile applications

    Get PDF
    Published version

    Cuypers : a semi-automatic hypermedia generation system

    Get PDF
    The report describes the architecture of Cuypers, a system supporting second and third generation Web-based multimedia. First generation Web content encodes information in handwritten (HTML) Web pages. Second generation Web content generates HTML pages on demand, e.g. by filling in templates with content retrieved dynamically from a database or by transformation of structured documents using style sheets (e.g. XSLT). Third generation Web pages will make use of rich markup (e.g. XML) along with metadata (e.g. RDF) schemes to make the content not only machine readable but also machine processable, a necessary prerequisite to the Semantic Web. While text-based content on the Web is already rapidly approaching the third generation, multimedia content is still trying to catch up with second generation techniques. Multimedia document processing has a number of fundamentally different requirements from text which make it more difficult to incorporate within the document processing chain. In particular, multimedia transformation uses different document and presentation abstractions, its formatting rules cannot be based on text flow, it requires feedback from the formatting back-end, and it is hard to describe in the functional style of current style languages. We state the requirements for second generation processing of multimedia and describe how these have been incorporated in our prototype multimedia document transformation environment, Cuypers. The system overcomes a number of the restrictions of the text-flow based tool sets by integrating a number of conceptually distinct processing steps in a single runtime execution environment. We describe the need for these different processing steps and describe them in turn (semantic structure, communicative device, qualitative constraints, quantitative constraints, final form presentation), and illustrate our approach by means of an example. We conclude by discussing the models and techniques required for the creation of third generation multimedia content.
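    The second-generation technique described above, generating pages on demand by filling templates with dynamically retrieved content, can be sketched in a few lines. The template and record here are hypothetical placeholders for illustration, not part of the Cuypers system.

```python
from string import Template

# A second-generation page is produced on demand: a fixed template is
# filled with content fetched from some dynamic source (here, a dict
# standing in for a database record).
PAGE = Template("<html><body><h1>$title</h1><p>$body</p></body></html>")

def render(record):
    """Fill the page template with one record's fields."""
    return PAGE.substitute(title=record["title"], body=record["body"])

record = {"title": "Cuypers", "body": "A hypermedia generation system."}
print(render(record))
```

    Swapping the record changes the page without touching the template, which is the essential difference from first-generation handwritten HTML.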

    Information scraps: how and why information eludes our personal information management tools

    No full text
    In this paper we describe information scraps, a class of personal information whose content is scribbled on Post-it notes, scrawled on corners of random sheets of paper, buried inside the bodies of e-mail messages sent to ourselves, or typed haphazardly into text files. Information scraps hold our great ideas, sketches, notes, reminders, driving directions, and even our poetry. We define information scraps to be the body of personal information that is held outside of its natural or […]. We have much still to learn about these loose forms of information capture. Why are they so often held outside of our traditional PIM locations and instead on Post-its or in text files? Why must we sometimes go around our traditional PIM applications to hold on to our scraps, such as by e-mailing ourselves? What is the role of information scraps in the larger space of personal information management, and what do they uniquely offer that we find so appealing? If these unorganized bits truly indicate the failure of our PIM tools, how might we begin to build better tools? We have pursued these questions by undertaking a study of 27 knowledge workers. In our findings we describe information scraps from several angles: their content, their location, and the factors that lead to their use, which we identify as ease of capture, flexibility of content and organization, and availability at the time of need. We also consider the personal emotive responses around scrap management. We present a set of design considerations that we have derived from the analysis of our study results. We present our work on an application platform, jourknow, to test some of these design and usability findings.

    Towards Second and Third Generation Web-Based Multimedia

    Get PDF
    First generation Web content encodes information in handwritten (HTML) Web pages. Second generation Web content generates HTML pages on demand, e.g. by filling in templates with content retrieved dynamically from a database or by transformation of structured documents using style sheets (e.g. XSLT). Third generation Web pages will make use of rich markup (e.g. XML) along with metadata (e.g. RDF) schemes to make the content not only machine readable but also machine processable, a necessary prerequisite to the Semantic Web. While text-based content on the Web is already rapidly approaching the third generation, multimedia content is still trying to catch up with second generation techniques. Multimedia document processing has a number of fundamentally different requirements from text which make it more difficult to incorporate within the document processing chain. In particular, multimedia transformation uses different document and presentation abstractions, its formatting rules cannot be based on text flow, it requires feedback from the formatting back-end, and it is hard to describe in the functional style of current style languages. We state the requirements for second generation processing of multimedia and describe how these have been incorporated in our prototype multimedia document transformation environment, Cuypers. The system overcomes a number of the restrictions of the text-flow based tool sets by integrating a number of conceptually distinct processing steps in a single runtime execution environment. We describe the need for these different processing steps and describe them in turn (semantic structure, communicative device, qualitative constraints, quantitative constraints, final form presentation), and illustrate our approach by means of an example. We conclude by discussing the models and techniques required for the creation of third generation multimedia content.
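    The third-generation idea above, attaching machine-processable metadata (in the spirit of RDF) to content, can be illustrated with a toy subject-predicate-object triple store. The identifiers and vocabulary prefixes below are made up for illustration, not a real RDF schema.

```python
# RDF describes resources as subject-predicate-object triples. A tiny
# in-memory set of triples is enough to show why metadata makes content
# machine *processable* (queryable) rather than merely readable.
triples = {
    ("ex:page1", "dc:title", "Multimedia on the Web"),
    ("ex:page1", "dc:format", "multimedia"),
    ("ex:page2", "dc:format", "text"),
}

def query(pred, obj):
    """Return all subjects that have the given predicate-object pair."""
    return sorted(s for s, p, o in triples if p == pred and o == obj)

print(query("dc:format", "multimedia"))
```

    A real system would use an RDF library and shared vocabularies, but the machine-side processing step is the same: matching statements, not parsing prose.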

    Semantic representation of context models: a framework for analyzing and understanding

    No full text
    Context-aware systems are applications that adapt themselves to various situations involving the user, network, data, hardware, and the application itself. In this paper, we review several context models proposed in different domains: content adaptation, service adaptation, information retrieval, etc. The purpose of this review is to expose how this notion is represented semantically. Accordingly, we propose a framework for analyzing and comparing different context models. Such a framework is intended to help in understanding and analyzing such models, and consequently in defining new ones. This framework is based on the fact that context-aware systems use context models in order to formalize and limit the notion of context, and that relevant information differs from one domain to another and depends on the effective use of this information. Based on this framework, we consider in this paper a particular application domain, Business Processes, in which the notion of context remains unexplored, although it is required for flexibility and adaptability. We propose an ontology-based context model focusing on this particular domain.
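    A context model of the kind surveyed above can be sketched as a small set of named dimensions that a business-process application consults when adapting. The dimensions and the adaptation rule below are illustrative assumptions, not the paper's ontology.

```python
from dataclasses import dataclass

# A toy context model: context is formalized and limited to a few named
# dimensions, and which information is relevant depends on the domain.
@dataclass(frozen=True)
class Context:
    user_role: str      # e.g. "approver" (hypothetical dimension)
    device: str         # e.g. "mobile"
    network: str        # e.g. "3g"

def adapt_task(ctx: Context) -> str:
    """Pick a business-process task variant from the current context."""
    if ctx.device == "mobile" and ctx.network != "wifi":
        return "summary-form"   # lightweight variant for constrained access
    return "full-form"

print(adapt_task(Context("approver", "mobile", "3g")))
```

    An ontology-based model would express the same dimensions and rules declaratively, so that they can be shared, compared, and reasoned over across applications.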