73 research outputs found

    A Journal-Driven Bibliography of Digital Humanities

    Digital Humanities Quarterly (DHQ) seeks Level II funding to develop a bibliographic resource through which the journal can create, manage, export, and publish high-quality bibliographic data from DHQ articles and their citations, as well as from the broader digital humanities research domain. Drawing on data from this resource, we will develop visualizations through which readers can explore citation networks and find related articles. We will also publish the full bibliography as a public web-based service that reflects the profile of current digital humanities research. The bibliography will be maintained and expanded through incoming DHQ articles and citations, and through contributions from the DH community. DHQ is an open-access online journal published by the Alliance of Digital Humanities Organizations (ADHO), hosted at Brown University and Indiana University, and serves as a crucial point of encounter between digital humanities research and the wider humanities community.
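One common way to compute "related articles" from citation data is bibliographic coupling: ranking articles by the number of references they share. The sketch below illustrates the idea only; the article IDs, reference lists, and function name are invented, and DHQ's actual data model may differ.

```python
# Hypothetical sketch: rank "related articles" by bibliographic coupling,
# i.e. the number of references two articles share. All IDs and reference
# lists here are invented for illustration.

citations = {
    "dhq-2019-001": {"ref-a", "ref-b", "ref-c"},
    "dhq-2020-014": {"ref-b", "ref-c", "ref-d"},
    "dhq-2021-007": {"ref-e"},
}

def related(article_id, citations, top_n=5):
    """Rank other articles by the number of references shared with article_id."""
    own = citations[article_id]
    scores = {
        other: len(own & refs)
        for other, refs in citations.items()
        if other != article_id and own & refs
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

print(related("dhq-2019-001", citations))  # [('dhq-2020-014', 2)]
```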

    An Engineering Method for Adaptive, Context-aware Web Applications

    Users of Web-based software face growing complexity resulting from the increasing amount of information and services on offer. As a consequence, the likelihood decreases that users employ the software in a manner compatible with the provider's interests. Depending on the purpose of the Web application, a provider's goal can be to guide and influence user choices in information and service selection, or to assure user productivity. One approach to addressing these goals is to adapt the software's behavior during operation to the context in which it is being used. The term context-awareness originates in mobile computing, where research projects have studied context recognition and adaptation in specific scenarios. Context-awareness is now being studied in a variety of systems, including Web applications. However, neither how to account for context in a Web Engineering process nor a generic means of using context in a Web software architecture is yet established. This dissertation addresses the question of how context-awareness can be applied in a general-purpose, systematic process for Web application development: that is, in a Web Engineering process. A model for representing an application's context factors in ontologies is presented. A general-purpose methodology for Web Engineering is extended to account for context by relating context ontologies to elements of the application domain. The application model is extended with adaptation specifications, defining where in the application adaptation to context is to occur and according to which strategy. Application and context models are machine-interpretable, in order to support automatic adaptation of a system's behavior during its operation, that is, in response to user requests. Requirements for a corresponding Web software architecture for context are established first at the conceptual level, then specifically in a content-based architecture built on an XML stack. The CATWALK software framework, an implementation of an architecture enabling adaptation to context, is described. The framework provides mechanisms for interpreting application and context models to generate an adaptive application, that is, to generate responses to user requests, where the generation process makes decisions based on context information. For this purpose, the framework contains default implementations of context recognition and adaptation mechanisms. The approach presented supports model-based development of Web applications that adapt to context. The CATWALK framework is an implementation of model interpretation in a run-time system and thus simplifies the development of such applications. As the framework is component-based and follows a strict separation of concerns, the default mechanisms can be extended or replaced, reducing the amount of custom code required to implement specific context-aware Web applications or to study alternative context inference or adaptation strategies. The use of the framework is illustrated in a case study in which models are defined for a prototypical application and the application is generated by the framework. The purpose of the case study is to illustrate the effects of adaptation to context, based on the context description and adaptation specifications in the application model.
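As a rough illustration of the adaptation mechanism described above, the following is a minimal sketch of selecting a response variant from context factors. The context keys, rules, and variant names are invented; CATWALK itself interprets ontology-based application and context models rather than Python data structures.

```python
# Minimal sketch of adaptation to context: an "adaptation specification"
# maps predicates over the request context to named content variants.
# Context factors, rules, and variant names are hypothetical.

from dataclasses import dataclass

@dataclass
class AdaptationRule:
    """Pairs a predicate over the context with a named content variant."""
    condition: callable
    variant: str

rules = [
    AdaptationRule(lambda ctx: ctx.get("device") == "mobile", "compact-view"),
    AdaptationRule(lambda ctx: ctx.get("bandwidth") == "low", "text-only"),
]
DEFAULT_VARIANT = "full-view"

def render(request_context: dict) -> str:
    """Choose the response variant for a request based on its context."""
    for rule in rules:
        if rule.condition(request_context):
            return rule.variant
    return DEFAULT_VARIANT

print(render({"device": "mobile"}))    # compact-view
print(render({"bandwidth": "low"}))    # text-only
print(render({"device": "desktop"}))   # full-view
```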

    The Historical Hazards of Finding Aids

    Archivists have traditionally understood access through finding aids, assuming that—through creating them—they are effectively providing access to archival materials. This article is a history of finding aids in American archival practice that demonstrates how finding aids have negatively colored how archivists understand access. It shows how finding aids were originally a compromise between resource constraints and the more familiar access that users expected, how a discourse centered on finding aids hindered the standardization of archival description as data, and how the characteristics of finding aids as tools framed and negatively impacted the Encoded Archival Description (EAD) standard. It questions whether finding aids are a productive or useful framework for understanding how archivists provide access to collections.

    Rule-Based Intelligence on the Semantic Web: Implications for Military Capabilities

    Rules are a key element of the Semantic Web vision, promising to provide a foundation for the reasoning capabilities that underpin the intelligent manipulation and exploitation of information content. Although ontologies provide the basis for some forms of reasoning, it is unlikely that ontologies by themselves will support the range of knowledge-based services likely to be required on the Semantic Web. As such, it is important to consider the contribution that rule-based systems can make to the realization of advanced machine intelligence on the Semantic Web. This report reviews the current state of the art in semantic rule-based technologies. It provides an overview of the rules, rule languages, and rule engines currently available to support ontology-based reasoning, and it discusses some of the limitations of these technologies, in particular their inability to cope with uncertain or imprecise data and their poor performance in some reasoning contexts. The report also describes the contribution of reasoning systems to military capabilities, and suggests that current technological shortcomings pose a significant barrier to the widespread adoption of reasoning systems within the defence community. Some solutions to these shortcomings are presented, and a timescale for technology adoption within the military domain is proposed. It is suggested that application areas such as semantic integration, semantic interoperability, data fusion, and situation awareness provide the best opportunities for technology adoption within the 2015 timeframe. Other capabilities, such as decision support and the emulation of human-style reasoning, are seen to depend on the resolution of significant challenges that may hinder attempts at technology adoption and exploitation within the 2020 timeframe.
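To make rule-based reasoning over ontology data concrete, here is a minimal, self-contained sketch of forward chaining over subject-predicate-object triples. The vocabulary and the transitivity rule are invented for illustration; production systems would use a rule language such as SWRL or RIF with a dedicated rule engine.

```python
# Toy forward-chaining rule engine over (subject, predicate, object) triples.
# The vocabulary and the single transitivity rule are illustrative only.

facts = {
    ("UnitA", "subordinateTo", "UnitB"),
    ("UnitB", "subordinateTo", "UnitC"),
}

def transitive_closure(facts, predicate):
    """Apply 'X p Y and Y p Z => X p Z' until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, p1, y) in list(derived):
            for (y2, p2, z) in list(derived):
                if p1 == p2 == predicate and y == y2:
                    new = (x, predicate, z)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

for triple in sorted(transitive_closure(facts, "subordinateTo")):
    print(triple)  # includes the inferred ('UnitA', 'subordinateTo', 'UnitC')
```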

    CBiX: a model for content-based billing in XML environments

    The new global economy is based on knowledge and information. Furthermore, the Internet is facilitating new forms of revenue generation, of which one recognized potential source is content delivery over the Internet. One aspect critical to ensuring a content-based revenue stream is billing. While a number of content-based billing systems are commercially available, as far as can be determined these products are not based on a common model that can ensure interoperability and communication between the billing systems. This dissertation addresses the need for a content-based billing model by developing the CBiX (Content-based Billing in XML Environments) model. This model, developed in a phased approach as a family of billing models, incorporates three aspects. The first aspect is access control. The second aspect is pricing, in the form of document-, element-, and inherited-element-level pricing for content. The third aspect is XML as the platform for information exchange. The nature of the Internet facilitates information interchange, flexible web business models, and flexible pricing. These facts, coupled with CBiX being concerned with billing for content over the Internet, lead to a number of decisions regarding the model. The CBiX model has to incorporate flexible pricing; pricing is therefore evolved through the development of the family of models, from document-level pricing to element-level pricing to inherited-element-level pricing. The CBiX model has to be based on a platform for information interchange that enables content delivery; XML provides a broad, widely supported family of standards shaping the next-generation Internet, and is therefore selected as the environment for information exchange for CBiX. The CBiX model requires a form of access control that can grant access to content based on user properties; credential-based access control, whereby authorization is granted based on a set of user credentials, is therefore selected as the method of access control for CBiX. Furthermore, this dissertation reports on the development of a prototype. This serves a dual purpose: firstly, to assist the author in understanding the technologies and principles involved; secondly, to illustrate CBiX0 and thereby present a proof-of-concept of at least the base model. The CBiX model provides a base to guide and assist developers with the issues involved in developing a billing system for XML-based environments.
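A minimal sketch of what element-level pricing over XML content might look like follows, using Python's standard xml.etree.ElementTree. The document vocabulary, the "price" attribute, and the totalling logic are assumptions for illustration, not the CBiX specification itself.

```python
# Illustrative element-level pricing over an XML document: each element can
# carry its own price, and a request is billed for the elements delivered.
# The document structure and the "price" attribute are invented examples.

import xml.etree.ElementTree as ET

DOC = """
<article price="0.50">
  <abstract price="0.00">Free summary text.</abstract>
  <body price="1.25">Premium body content.</body>
  <figure price="0.40">Premium figure.</figure>
</article>
"""

def bill(root, requested_tags):
    """Sum the prices of the requested elements plus the document base price."""
    total = float(root.get("price", "0"))
    for elem in root:
        if elem.tag in requested_tags:
            total += float(elem.get("price", "0"))
    return total

root = ET.fromstring(DOC)
print(bill(root, {"abstract"}))           # 0.5  (base price; the abstract is free)
print(bill(root, {"body", "figure"}))     # 2.15
```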

    Leveraging Naval Riverine forces to achieve information superiority in stability operations

    The conflicts in Iraq and Afghanistan have provided an undeniable storyline: U.S. forces can conduct a conventional mission better than any in the world, but that mission, accomplished in short order, leaves behind a situation for which conventional forces and equipment are ill-prepared. This situation requires a new mission: stability operations. The blue water is not where these 21st-century conflicts will likely take place, and units such as the U.S. Navy Riverines provide a capability to integrate and communicate with local populations that blue-water forces cannot match. While the riverine force's mission set is one that could become heavily utilized in stability operations, its ability to conduct those missions is currently hindered by the limited implementation of information technology. The current disadvantages, which greatly increase operational risk, include a reduced capability to engage the population, reduced situational awareness, and limited communication reach-back capability. A riverine force properly equipped and trained with biometric, unmanned, and information-sharing systems would give the NECC, and the U.S. Navy as a whole, a more comprehensive ability to conduct stability operations in brown-water areas, something no other conventional Navy unit can currently accomplish.
    http://archive.org/details/leveragingnavalr109455075
    Approved for public release; distribution is unlimited.

    Strategies for the intelligent selection of components

    It is becoming common to build applications as component-intensive systems: a mixture of fresh code and existing components. For application developers, the selection of components to incorporate is key to overall system quality, so they want the 'best'. For each selection task, the application developer will define requirements for the ideal component and use them to select the most suitable one. While many software selection processes exist, there is a lack of repeatable, usable, flexible, automated processes with tool support. This investigation has focussed on finding and implementing strategies to enhance the selection of software components. The study was built around four research elements, targeting characterisation, process, strategies, and evaluation. A post-positivist methodology was used, with the Spiral Development Model (SDM) structuring the investigation. Data for the study were generated using a range of qualitative and quantitative methods, including a survey, a series of case studies, and quasi-experiments focusing on the specific tuning of tools and techniques. Evaluation and review are integral to the SDM: a Goal-Question-Metric (GQM)-based approach was applied to every Spiral cycle.
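One simple selection strategy of the kind discussed above is weighted scoring of candidate components against the requirements for the ideal component. The sketch below is an invented illustration of that idea, not the specific tooling developed in the study; criteria names, weights, and candidate ratings are assumptions.

```python
# Hypothetical weighted-scoring component selection: score each candidate
# against the requirement criteria and pick the highest-scoring one.
# Criteria, weights, and ratings are invented for illustration.

requirements = {"functionality": 0.5, "documentation": 0.2, "performance": 0.3}

candidates = {
    "ComponentA": {"functionality": 0.9, "documentation": 0.4, "performance": 0.7},
    "ComponentB": {"functionality": 0.7, "documentation": 0.9, "performance": 0.6},
}

def score(ratings, weights):
    """Weighted sum of a candidate's ratings over the requirement criteria."""
    return sum(weights[c] * ratings.get(c, 0.0) for c in weights)

best = max(candidates, key=lambda name: score(candidates[name], requirements))
print(best, round(score(candidates[best], requirements), 3))  # ComponentA 0.74
```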