415 research outputs found

    'Yet of Books There Are A Plenty': Bibliography, Data, and the Construction of American Fiction

    This project investigates the case of a prominent bibliography and dataset of American fiction, the Wright American Fiction bibliography, and traces how the discrete items within that set come to compose parts of the whole as a result of human decisions, circumstances, and interpretations. Lyle Wright created a three-volume bibliography of American fiction from 1776 to 1900 in which he described over 10,000 texts. Wright’s work became a guide for libraries and archives, but it has also informed the creation of digital datasets of American literature, including Indiana University’s Wright American Fiction Project and Gale Cengage’s American Fiction 1774-1920 collection, which provide scholars with digital facsimiles and plain-text versions of the titles Wright listed. The bibliography’s corpus has been invaluable for big-data scholars seeking access to early American texts, but its use does not come without consequences. Minority authors, particularly Indigenous American authors, are excluded. Some works are erroneously included, such as Harriet Jacobs’ autobiographical Incidents in the Life of a Slave Girl (1861). Canonical works are sometimes omitted, such as Walt Whitman’s novel Franklin Evans (1842) and Louisa May Alcott’s Little Women (1868). The projects that use Wright as their basis reproduce these errors and decisions in digitizing his original list, ultimately affecting the datasets scholars use. This work demonstrates how these idiosyncrasies of the Wright American Fiction bibliography come into existence and the effects Wright’s decisions have had on work that relies on his list. As the humanities become increasingly interested in data, and as computational methods of analysis become more prominent, research such as mine is positioned to affect the ways in which scholars view the objects from which they derive their arguments. This work demonstrates how a list of American fiction titles is assembled, and reveals that assembly to be an interpretive and debatable process.
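    As a concrete illustration of the claim that derived datasets inherit these curatorial decisions, the following is a minimal, hypothetical audit sketch in Python; the column name, sample rows, and title lists are illustrative assumptions, not artifacts of the project described.

```python
import csv
import io

# Hypothetical check lists drawn from the abstract: a title Wright
# erroneously included, and canonical titles he omitted. Any corpus
# digitized from his list inherits both kinds of decision.
ERRONEOUS_INCLUSIONS = {"Incidents in the Life of a Slave Girl"}
KNOWN_OMISSIONS = {"Franklin Evans", "Little Women"}

def audit_corpus(rows):
    """Report inherited inclusion errors and still-missing canonical titles."""
    titles = {row["title"] for row in rows}
    return titles & ERRONEOUS_INCLUSIONS, KNOWN_OMISSIONS - titles

# A stand-in for the metadata file of a Wright-derived digital corpus.
sample = io.StringIO(
    "title,year\n"
    "Incidents in the Life of a Slave Girl,1861\n"
    "Wieland,1798\n"
)
inherited, missing = audit_corpus(csv.DictReader(sample))
print("Erroneously included:", inherited)  # {'Incidents in the Life of a Slave Girl'}
print("Still missing:", missing)           # {'Franklin Evans', 'Little Women'}
```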

    CoAKTinG: Collaborative Advanced Knowledge Technologies in the Grid

    Grid infrastructures coupled with semantic web linkage and reasoning open up intriguing new possibilities for scientific collaboration. In this short paper, we outline the research agenda and collaboration technologies under development within the CoAKTinG project: Collaborative Advanced Knowledge Technologies in the Grid. CoAKTinG will provide tools to assist scientific collaboration by integrating intelligent meeting spaces, ontologically annotated media streams from online meetings, decision rationale and group memory capture, meeting facilitation, issue handling, planning and coordination support, constraint satisfaction, and instant messaging/presence. Their integration is illustrated through an extended use scenario.

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author’s and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

        Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent. (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan’s predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided, for example for more intelligent retrieval, put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed, along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together; AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough. Complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
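    To make the ontology merging and conflict-of-reference issues above concrete, here is a minimal, hypothetical sketch using Python's rdflib; the namespaces, instance data, and query are invented for illustration and do not represent AKT's actual tools or services.

```python
from rdflib import Graph

# Two small, invented ontology fragments that describe the same person
# under different identifiers -- a conflict of reference to eliminate.
SRC_A = """
@prefix ex:   <http://example.org/a#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
ex:jsmith a foaf:Person ;
    foaf:name "J. Smith" .
"""
SRC_B = """
@prefix ex:   <http://example.org/b#> .
@prefix exa:  <http://example.org/a#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
ex:smith_j a foaf:Person ;
    foaf:mbox <mailto:j.smith@example.org> ;
    owl:sameAs exa:jsmith .
"""

g_a, g_b = Graph(), Graph()
g_a.parse(data=SRC_A, format="turtle")
g_b.parse(data=SRC_B, format="turtle")

# In rdflib, merging two ontologies is a simple graph union.
merged = g_a + g_b

# Follow the owl:sameAs link to reconcile the two identifiers.
QUERY = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
SELECT ?name ?mbox WHERE {
    ?x owl:sameAs ?y .
    ?y foaf:name ?name .
    ?x foaf:mbox ?mbox .
}
"""
for name, mbox in merged.query(QUERY):
    print(name, mbox)  # J. Smith mailto:j.smith@example.org
```

    The union itself is mechanical; the hard problem the paper points to is deciding when such owl:sameAs links are warranted at all.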

    The multispace adaptable building concept and its extension into mass customisation

    UK government Policy Planning Guidance promotes optimum use of the existing building stock through mixed use in urban centres and encourages conversion of redundant office and retail space into leisure, service or residential uses. Whilst social pressures are evident in the push to utilise existing building stock more effectively, new building stock also has to meet the commercial requirements of the client, which often translates into maximum occupancy of the building. This is encouraging greater innovation in the design of new buildings to allow change of use throughout the structure’s lifetime. This paper describes the concepts surrounding an adaptable design for new buildings, along with a review of factors influencing the mode of use. The major physical parameters of storey height, building proximity, plan depth, structural design, services, fire safety, cladding and noise abatement are evaluated in the context of adaptable building use. In addition to improved building utilisation, the UK government has identified a weakness in the productivity of the construction industry. The report ‘Rethinking Construction’ (Egan, 1998) suggested that up to 80% of inputs into buildings are repeated and that parallels should be drawn with the designing and planning of new cars in the automotive sector. This suggests that improvements in the quality, cost and delivery time of new structures could be achieved through mass customisation incorporating a significant element of pre-design.

    Chromosome location and feature extraction using neural networks

    We present a technique for the initial location of scattered chromosomal objects within multi-resolution images of human blood cells. Kohonen Self Organising Maps learn to extract salient image features in the vicinity of located objects. Feature extraction is to form the first stage in a neural network system applied to the problem of recognising structural aberrations in chromosomes.
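    A minimal sketch of the Kohonen Self Organising Map training rule that such feature extraction relies on, written in Python with NumPy; the map size, decay schedules, and synthetic image patches are assumptions for illustration, not the authors' configuration.

```python
import numpy as np

def train_som(patches, grid=(8, 8), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a Kohonen Self Organising Map on flattened image patches.

    patches: array of shape (n_samples, n_features).
    Returns the learned weights, shape (grid[0], grid[1], n_features);
    each weight vector acts as a learned feature template.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, patches.shape[1]))
    # Grid coordinates, used by the neighbourhood function below.
    coords = np.stack(
        np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(patches), 0
    for _ in range(epochs):
        for x in rng.permutation(patches):
            # Best-matching unit: the node whose weights are closest to x.
            bmu = np.unravel_index(
                np.linalg.norm(weights - x, axis=-1).argmin(), (h, w))
            # Linearly decay the learning rate and neighbourhood radius.
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
            # Gaussian neighbourhood around the BMU on the map grid.
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            nbh = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            # Kohonen update: pull neighbouring weights toward the input.
            weights += lr * nbh * (x - weights)
            step += 1
    return weights

# Example with synthetic 5x5-pixel patches standing in for image data.
patches = np.random.default_rng(1).random((200, 25))
print(train_som(patches).shape)  # (8, 8, 25)
```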

    Collaboration in the Semantic Grid: a Basis for e-Learning

    The CoAKTinG project aims to advance the state of the art in collaborative mediated spaces for the Semantic Grid. This paper presents an overview of the hypertext and knowledge-based tools which have been deployed to augment existing collaborative environments, and the ontology which is used to exchange structure, promote enhanced process tracking, and aid navigation of resources before, during, and after a collaboration occurs. While the primary focus of the project has been supporting e-Science, this paper also explores the similarities and application of CoAKTinG technologies as part of a human-centred design approach to e-Learning.

    Factors influencing the market for branded mass customized buildings

    The concept of mass customisation is not new, yet the UK construction industry has yet to grasp this opportunity to deliver greater value to its customers. The government report Rethinking Construction [Egan 1998] clearly identifies this issue: ‘We have repeatedly heard the claim that construction is different from manufacturing because every product is unique. We do not agree. Not only are many buildings, such as houses, essentially repeat products which can be continually improved but, more importantly, the process of construction is itself repeated in its essentials from project to project.’ Egan delivered this report in 1998, but CLASP, for example, had highlighted the advantages of standardisation as early as 1959 in the conclusions to its Annual Report [CLASP 1959]: ‘The consortium is now an established and powerful force in building, responsible for a significant number of the country’s new schools as well as for a growing number of other public buildings. The second year of operations has confirmed that the consortium, with its big orders and its design resources, is the kind of organization most capable of realizing the full economic advantage of factory production methods. It leads therefore towards the more enlightened building industry for which we all strive.’ A review of government-funded construction reports between 1944 and 1998 [Murray 2003] emphasises the continued presence of these recurring themes in appraisals of the construction process. The opportunity is seemingly clear. Designing and constructing from scratch each time a client requires building infrastructure is wasteful and inefficient. A radical market change is needed in which built environment customers experience much greater certainty and value whilst retaining choice, and in which constructors are enabled to improve their profit margins by sharing the rewards of jointly maximising value. This vision requires the replacement of a significant portion of the current bespoke market for the design, delivery and procurement of non-residential buildings with a combination of standardised and customised product offerings. This paper details information obtained to date from an ongoing IMCRC-funded study entitled ‘Building the Brand’ at Loughborough University.