32 research outputs found

    BIBFRAME Development

    The Library of Congress has been exploring “linked data” for over 10 years. The genesis of this development goes back to the W3C projects in the late 1990s on SGML, then XML, HTML, and finally RDF, a linked-data-oriented format. In 2007 the Library of Congress organized a community inquiry into the “future of bibliographic control,” which became a catalyst for exploration and change. A wide-ranging report was written and some major recommendations were made. Some have been acted upon; others, such as a rethinking of subject vocabularies, are still on the table. But two technical recommendations were ideally suited for exploration using the emerging linked data framework of the W3C: the use of technology to promote broader use of library-curated vocabularies, and the replacement of the MARC format with a data interchange framework that makes library data more readily available on the web. The Library was already investigating the linked data framework for standards and models for exposing its vocabularies, such as LCSH. Accordingly, LCSH was made publicly available as linked data in 2009, followed by name authorities, countries, languages, and many other controlled lists used in bibliographic standards such as MARC, MODS, and PREMIS. This project became the Library of Congress Linked Data Service. Its aim is to establish stable identifiers, in URI form, for entities and concepts that are useful for the description of cultural heritage material. Then in 2011 the Library of Congress announced the start of the Bibliographic Framework Initiative (subsequently labelled “BIBFRAME”) to respond to the second major technical recommendation of the future of bibliographic control report: to replace MARC for interchange and to make library resources more visible on the web. The Library of Congress, together with the library community, is tackling the challenges described above, and this paper looks at the main aspects of that development.
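
    As an illustration of how the Linked Data Service described in this abstract can be consumed, the sketch below dereferences an id.loc.gov subject-heading URI and requests an RDF serialization via HTTP content negotiation. This is a minimal sketch, not taken from the paper: the identifier is a placeholder to be replaced with a real LCSH ID, the third-party "requests" package is assumed to be available, and the Accept value is assumed to be one of the RDF media types the service offers.

    import requests  # assumes the third-party "requests" package is installed

    # Placeholder identifier: substitute a real LCSH ID before running.
    LCSH_URI = "http://id.loc.gov/authorities/subjects/shXXXXXXXX"

    def fetch_concept_rdf(uri):
        """Dereference a Linked Data Service URI and ask for an RDF/XML
        serialization through content negotiation (a sketch only)."""
        response = requests.get(uri, headers={"Accept": "application/rdf+xml"})
        response.raise_for_status()
        return response.text

    if __name__ == "__main__":
        # Show the start of the returned RDF document for the concept.
        print(fetch_concept_rdf(LCSH_URI)[:500])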

    ARL White Paper on Wikidata: Opportunities and Recommendations

    This white paper highlights opportunities for research library involvement in Wikidata, particularly in community-based collections, community-owned infrastructure, and collective collections.

    No father required? The welfare assessment in the Human Fertilisation and Embryology Act 2008

    Of all the changes to the Human Fertilisation and Embryology Act 1990 that were introduced in 2008 by legislation of the same name, the foremost in exciting media attention and popular controversy was the amendment of the so-called welfare clause. This clause forms part of the licensing conditions which must be met by any clinic before offering those treatment services covered by the legislation. The 2008 Act deleted the statutory requirement that clinicians consider the need for a father of any potential child before offering a woman treatment, substituting for it a requirement that clinicians must henceforth consider the child’s need for “supportive parenting”. In this paper, we first briefly recall the history of the introduction of s 13(5) in the 1990 Act, before going on to track discussion of its amendment through the lengthy reform process that preceded the introduction of the 2008 Act. We then discuss the meaning of the phrase “supportive parenting” with reference to guidance regarding its interpretation offered by the Human Fertilisation and Embryology Authority. While the changes to s 13(5) have been represented as suggesting a major change in the law, we suggest that the reworded section does not represent a significant break from the previous law as it had been interpreted in practice. This raises the question of why an amendment that is likely to make very little difference to clinical practice excited such attention (and with such polarising force). To this end, we locate debates regarding s 13(5) within a broader context of popular anxieties regarding the use of reproductive technologies and, specifically, what they mean for the position of men within the family.

    ANSI/NISO Z39.99-2017 ResourceSync Framework Specification

    This ResourceSync specification describes a synchronization framework for the web consisting of various capabilities that allow third-party systems to remain synchronized with a server’s evolving resources. The capabilities may be combined in a modular manner to meet local or community requirements. This specification also describes how a server should advertise the synchronization capabilities it supports and how third-party systems may discover this information. The specification repurposes the document formats defined by the Sitemap protocol and introduces extensions for them.
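
    The sketch below, which is not part of the specification text, shows how a third-party client might read a ResourceSync resource list, i.e. a Sitemap <urlset> document carrying extension elements in an rs: namespace, and decide which resources need re-fetching. The sample document, URLs, and cutoff date are illustrative assumptions; only the Python standard library is used.

    import xml.etree.ElementTree as ET
    from datetime import datetime, timezone

    # Namespaces: the Sitemap protocol and the ResourceSync extension elements.
    SM = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
    RS = "{http://www.openarchives.org/rs/terms/}"

    # A small resource list in the spirit of the spec's examples (URLs are hypothetical).
    RESOURCE_LIST = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:rs="http://www.openarchives.org/rs/terms/">
      <rs:md capability="resourcelist" at="2017-01-03T09:00:00Z"/>
      <url>
        <loc>http://example.com/res1</loc>
        <lastmod>2017-01-02T13:00:00Z</lastmod>
      </url>
      <url>
        <loc>http://example.com/res2</loc>
        <lastmod>2016-12-28T14:37:11Z</lastmod>
      </url>
    </urlset>"""

    def resources_changed_since(xml_text, since):
        """Return (loc, lastmod) pairs for resources modified after `since`."""
        root = ET.fromstring(xml_text)
        changed = []
        for url in root.findall(SM + "url"):
            loc = url.findtext(SM + "loc")
            lastmod = url.findtext(SM + "lastmod")
            if lastmod and datetime.fromisoformat(lastmod.replace("Z", "+00:00")) > since:
                changed.append((loc, lastmod))
        return changed

    if __name__ == "__main__":
        # The capability attribute tells the client what kind of document this is.
        md = root_md = ET.fromstring(RESOURCE_LIST).find(RS + "md")
        print("capability:", md.get("capability") if md is not None else "unknown")
        cutoff = datetime(2017, 1, 1, tzinfo=timezone.utc)
        for loc, lastmod in resources_changed_since(RESOURCE_LIST, cutoff):
            print(loc, "changed at", lastmod)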

    Genomic analyses in Cornelia de Lange Syndrome and related diagnoses: Novel candidate genes, genotype–phenotype correlations and common mechanisms

    Cornelia de Lange Syndrome (CdLS) is a rare, dominantly inherited multisystem developmental disorder characterized by highly variable manifestations of growth and developmental delays, upper limb involvement, hypertrichosis, cardiac, gastrointestinal, craniofacial, and other systemic features. Pathogenic variants in genes encoding cohesin complex structural subunits and regulatory proteins (NIPBL, SMC1A, SMC3, HDAC8, and RAD21) are the major pathogenic contributors to CdLS. Heterozygous or hemizygous variants in the genes encoding these five proteins have been found to be contributory to CdLS, with variants in NIPBL accounting for the majority (>60%) of cases, and NIPBL being the only gene identified to date that results in the severe or classic form of CdLS when mutated. Pathogenic variants in cohesin genes other than NIPBL tend to result in a less severe phenotype. Causative variants in additional genes, such as ANKRD11, EP300, AFF4, TAF1, and BRD4, can cause a CdLS‐like phenotype. The common role that these genes, and others, play as critical regulators of developmental transcriptional control has led to the conditions they cause being referred to as disorders of transcriptional regulation (or “DTRs”). Here, we report the results of a comprehensive molecular analysis in a cohort of 716 probands with typical and atypical CdLS in order to delineate the genetic contribution of causative variants in cohesin complex genes as well as novel candidate genes, genotype–phenotype correlations, and the utility of genome sequencing in understanding the mutational landscape in this population.

    Organizations Contributing to Development of Library Standards

    published or submitted for publication