
    Goal-driven requirements analysis for hypermedia-intensive Web applications

    Requirements analysis for Web applications still needs effective RE practices to accommodate some distinctive aspects: capturing high-level communication goals, considering several user profiles, defining hypermedia-specific requirements, bridging the gap between requirements and Web design, and reusing requirements for an effective usability evaluation. Techniques should be usable and informal, should require little training effort, and should show a relative advantage to project managers. On the basis of the i* framework, this paper presents a proposal for defining hypermedia requirements (concerning aspects such as content, interaction, navigation, and presentation) for Web applications. The model adopts a goal-driven approach coupled with scenario-based techniques, introduces a hypermedia requirement taxonomy to facilitate Web conceptual design, and paves the way for systematic usability evaluation. Particular attention is paid to the empirical validation of the model, based on the theory of perceived quality attributes. A case study developed with industrial partners is discussed.
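    As a rough sketch of what such a goal-driven requirement taxonomy could look like in code (the class names, the four dimensions, and the example goal below are illustrative assumptions based only on this abstract, not the paper's actual notation):

        from dataclasses import dataclass
        from enum import Enum

        class HypermediaDimension(Enum):
            # The four hypermedia aspects named in the abstract.
            CONTENT = "content"
            INTERACTION = "interaction"
            NAVIGATION = "navigation"
            PRESENTATION = "presentation"

        @dataclass
        class HypermediaRequirement:
            # A requirement refined from a high-level communication goal,
            # keeping the trace back to that goal for usability evaluation.
            description: str
            dimension: HypermediaDimension
            refines_goal: str

        goal = "Communicate the institution's identity to first-time visitors"
        requirements = [
            HypermediaRequirement("Landing page presents the mission statement",
                                  HypermediaDimension.CONTENT, goal),
            HypermediaRequirement("Every page offers a path back to the home page",
                                  HypermediaDimension.NAVIGATION, goal),
        ]
        for r in requirements:
            print(f"[{r.dimension.value}] {r.description} <- {r.refines_goal}")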

    Mastering the requirements analysis for communication-intensive websites

    Web application development still needs effective methods to accommodate some distinctive aspects of the requirements analysis process: capturing high-level communication goals, considering several user profiles and stakeholders, defining hypermedia-specific requirements (concerning navigation, content, information structure and presentation), and reusing requirements for an effective usability evaluation. Techniques should be usable by both stakeholders and the design team, require little training effort, and show a relative advantage to project managers. Over the last few years, requirements methodologies applied to web-based applications have considered mainly the transactional and operational aspects typical of traditional information systems; the communicational aspects of web sites have been neglected by systematic requirements methods. Starting from key achievements in Requirements Engineering (hereafter RE), this thesis introduces AWARE (Analysis of Web Application Requirements), a model for defining and analyzing requirements for web applications conceived mainly as strategic communication means for an institution or organization. The model extends traditional goal- and scenario-based approaches for refining high-level goals into website requirements by introducing the analysis of ill-defined user goals and stakeholder communication goals, a hypermedia requirement taxonomy that gives structured input to web conceptual design, and methodological support for a systematic, requirements-based usability evaluation. AWARE comprises a conceptual toolkit and a notation for effective requirements documentation; together they support the elicitation, negotiation, analysis and validation of requirements from the relevant stakeholders (users included). The empirical validation of the model was carried out in two ways. First, the model was employed in web projects in the field with practitioners (web designers and IT managers), as well as through training interventions in companies specializing in web communication; these case studies and the lessons learnt are presented and discussed to assess the advantages and limits of the proposal. Second, a sample of web analysts and designers was asked to study and apply the model; the feedback gathered is positive and encouraging for further improvement.

    Factors shaping the evolution of electronic documentation systems

    The main goal is to prepare the space station technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge. By anticipating advances, the design of Space Station Project (SSP) information systems can be tailored to facilitate a progression of increasingly sophisticated strategies as the space station evolves. Future generations of advanced information systems will use increases in power to deliver environmentally meaningful, contextually targeted, interconnected data (knowledge). The concept of a Knowledge Base Management System emerges when the problem is framed as how information systems can perform such a conversion of raw data. Such a system would include traditional management functions for large space databases. Added artificial intelligence features might encompass co-existing knowledge representation schemes; effective control structures for deductive, plausible, and inductive reasoning; means for knowledge acquisition, refinement, and validation; explanation facilities; and dynamic human intervention. The major areas covered include: alternative knowledge representation approaches; advanced user interface capabilities; computer-supported cooperative work; the evolution of information system hardware; standardization, compatibility, and connectivity; and organizational impacts of information-intensive environments.

    Applying digital content management to support localisation

    The retrieval and presentation of digital content such as that on the World Wide Web (WWW) is a substantial area of research. While recent years have seen huge expansion in the size of web-based archives that can be searched efficiently by commercial search engines, the presentation of potentially relevant content is still limited to ranked document lists represented by simple text snippets or image keyframe surrogates. There is growing interest in techniques to personalise the presentation of content to improve the richness and effectiveness of the user experience. One of the most significant challenges to achieving this is the increasingly multilingual nature of this data, and the need to provide suitably localised responses to users based on this content. The Digital Content Management (DCM) track of the Centre for Next Generation Localisation (CNGL) is seeking to develop technologies to support advanced personalised access and presentation of information by combining elements from the existing research areas of Adaptive Hypermedia and Information Retrieval. The combination of these technologies is intended to produce significant improvements in the way users access information. We review key features of these technologies and introduce early ideas for how these technologies can support localisation and localised content, before concluding with some impressions of future directions in DCM.

    A Process Framework for Semantics-aware Tourism Information Systems

    The growing sophistication of user requirements in tourism, due to the advent of new technologies such as the Semantic Web and mobile computing, has opened new possibilities for improved intelligence in Tourism Information Systems (TIS). Traditional software engineering and web engineering approaches do not suffice, hence the need for new product development approaches that can adequately enable the next generation of TIS. The next generation of TIS is expected, among other things, to: enable semantics-based information processing, exhibit natural language capabilities, facilitate inter-organization exchange of information in a seamless way, and evolve proactively in tandem with dynamic user requirements. In this paper, a product development approach called Product Line for Ontology-based Semantics-Aware Tourism Information Systems (PLOSATIS), a novel hybridization of software product line engineering and Semantic Web engineering concepts, is proposed. PLOSATIS is presented as potentially effective, predictable and amenable to software process improvement initiatives.

    Designing communication-intensive web applications: experience and lessons from a real case

    Who uses requirements engineering and design methodologies besides the people who invented them? Are researchers, at least, actually trying to use them in real-world complex projects rather than in "paper projects"? In this paper, we recount the experience and the lessons gained in seriously and thoroughly applying a requirements engineering method (called AWARE) combined with a conceptual user-centered design method (called W2000) to the development of a real-world web application. The project is recounted through the process followed and the artefacts produced, and our experience in using and transferring the method to industry is crystallized into practical and methodological recommendations.

    Towards Modeling of DataWeb Applications - A Requirement's Perspective

    The web is more and more used as a platform for full-fledged, increasingly complex information systems, where a huge amount of change-intensive data is managed by underlying database systems. From a software engineering point of view, the development of such so-called DataWeb applications requires proper modeling methods in order to ensure architectural soundness and maintainability. The goal of this paper is twofold. First, a framework of requirements covering the design space of DataWeb modeling methods in terms of three orthogonal dimensions is suggested. Second, on the basis of this framework, eight representative modeling methods for DataWeb applications are surveyed and general shortcomings are identified, pointing the way to next-generation modeling methods.

    Surveying navigation modelling approaches

    Recently, a number of authors who work on web application modelling have paid attention to the ideas on separation of concerns that underlie aspect-orientation, as well as to ideas from the model-driven development community. They attempt to improve the representation and separation of concerns, such as customisation or navigation, that are scattered throughout different software artifacts and tangled with other concerns, in order to better support the evolution of web applications. This paper surveys recent proposals in this field, compares them within a homogeneous framework that bridges the gap between the many different terminologies used, and highlights open problems that need further research.
    Ministerio de Ciencia y Tecnología TIN2007-64119
    Ministerio de Ciencia y Tecnología TIN-2007-67843-C06-0

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and shouldn't be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so their capabilities will vary. As well as providing useful KM services in their own right, AKT aims to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
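    As a minimal, hypothetical sketch of the kind of semantically-informed querying the abstract contrasts with keyword matching (this is not AKT code; it uses the rdflib Python library, and the namespace, triples and query below are illustrative assumptions):

        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/")  # hypothetical namespace

        # A tiny in-memory knowledge store; on the SW this data would live
        # in distributed, possibly inconsistent repositories.
        g = Graph()
        g.add((EX.akt, RDF.type, EX.Project))
        g.add((EX.akt, EX.name, Literal("Advanced Knowledge Technologies")))

        # A structured SPARQL query over the graph's semantics, rather than
        # a keyword match over page text.
        results = g.query("""
            PREFIX ex: <http://example.org/>
            SELECT ?name WHERE { ?p a ex:Project ; ex:name ?name . }
        """)
        for row in results:
            print(row.name)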