
    Lost in abundance. The paradox of the electronic records

    The changes that information technologies bring about in human communication, and thereby in the form of written documents, make the question of what the archival profession's work consists of a topical one. The basis of that work is the management of records throughout their entire life cycle, so it is necessary to understand clearly what a record, or document, is and what distinguishes it from other forms of information. The archival literature recognises two different types of definition of the document: legal and theoretical. This article describes two approaches to defining electronic records taken by recent research projects: one that derives the definition from business transactions, and another, grounded in diplomatics, that starts from the traditional definition of the archival document and locates the specificity of the electronic record in the physical separation of text and contextual data. The functional requirements for records and document management systems are then derived from the definition and properties of the record.

    More and more people communicate by electronic mail instead of calling, and by sending and receiving e-mail messages they create potential records. It looks as if orality is again losing its position in social life. Every piece of recorded information is potentially a record. Today in some countries, and in the near future in others, archives will be surrounded by electronic records and overwhelmed by a continuously increasing production of records. These forthcoming changes, these challenges, make us ask ourselves the eternal question: what business are we in? To give at least one answer: we are in the business of keeping records. That is our core business; we might be in some other related business, such as the information business, but it is all about records. Not about current records, or semi-current records, or non-current records; no, just records. Since we are in the business of keeping records, we need a clear understanding of what a record is, and what distinguishes a record from other pieces of information.

    In the current archival literature on this subject, we see two different perspectives for defining electronic records: a legal one, often chosen by archives as a basis for interventions in public administration, and an academic one, chosen by universities as a basis for fundamental archival research. Looking more closely at the academic perspective, we may discover two different approaches. The University of Pittsburgh, for example, takes the business transactions of organisations as a starting point. A business transaction creates and uses records. The records are the evidence of the transactions that created and used them. Contrary to Pittsburgh, the UBC project provides clear definitions of electronic records in a structured way founded in diplomatics. The starting point for the definition of an electronic record is the definition of a document, then of an archival document. An archival document is not a document with archival value, in the sense that it is worth preserving permanently, but a document which has recordness: it played a role in a business process and may serve as evidence of that process. An electronic document is a document created and communicated in a digital format and by computer technology. The advantage of the UBC definition is that it links up with traditional thinking, although that might be a risk as well. At least for the short term, the use of the concept of the document as a metaphor has advantages for understanding.
    Whereas Pittsburgh appears to be more dynamic and open to any kind of emerging technology, it bears the risk of going one step too far. Defining a record as a document, and an electronic record as an electronic document, implies that one characteristic of both an electronic and a traditional record is that it must be complete, intelligible in itself. One characteristic is, according to the UBC project, most essential for any kind of record: its interrelationships. A record can only be fully understood in conjunction with other records, for instance as part of a series or a case file.

    Archivists are first of all responsible for keeping records; a major question is therefore how electronic records must be kept. The assumption that electronic records must be kept electronically, rather than printed out on paper, finds strong support among many archival thinkers, because of the recognition that a record created, communicated and used electronically finds its authentic form in its original electronic format. For this very reason each legal, political and societal system needs to define functional requirements for recordkeeping. Starting with the identification of the required quality of the records, any recordkeeping system must preserve that required quality. It must be able to keep the records complete, reliable and authentic. It must be able to protect the records against change, illegal access and unwanted deletion. It must preserve information about the context in which the records have been created: who created the record, when it was communicated, when it was read, and in the course of what business process. These requirements derive from archival theory, from diplomatics, from legal systems, from political demands, from societal behaviour, and from financial regulations. One other functional requirement on which opinions seem to agree is that the recordkeeping system has to be, or to become, pro-active: not waiting for records eventually to enter the system, but avoiding any risk of records being lost or violated. Recordkeeping systems, unlike information systems, provide time-bound, non-manipulable and highly redundant information. A recordkeeping system is not a piece of software, an application. It is more than that: it is the whole of procedures, rules, knowledge, hardware, software, tools, methodologies and people, including the records themselves of an organisation, preserving them and making them available for use by providing access to those who have the right to access them.
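    The contextual requirements listed above map naturally onto a small data model. The following Python sketch is a hypothetical illustration only; none of its names or design choices come from the article or from the Pittsburgh or UBC projects. It shows an immutable record bound to its creation context, held in an append-only store that can detect later tampering.

        # Hypothetical sketch: an immutable record with its creation context,
        # kept in an append-only store. Illustrative names and design only.
        import hashlib
        from dataclasses import dataclass
        from datetime import datetime

        @dataclass(frozen=True)              # frozen: cannot be altered after creation
        class Record:
            record_id: str
            content: bytes
            creator: str                     # who created the record
            created_at: datetime             # when it was created and communicated
            business_process: str            # in the course of what business process
            related_ids: tuple = ()          # interrelationships: series / case file

            def fingerprint(self) -> str:
                """Hash used to detect any later change to the content."""
                return hashlib.sha256(self.content).hexdigest()

        class RecordKeeper:
            """Append-only: records may be added and read, never changed or deleted."""

            def __init__(self):
                self._records = {}
                self._fingerprints = {}

            def capture(self, record: Record) -> None:
                if record.record_id in self._records:
                    raise ValueError("records are immutable; re-capture is not allowed")
                self._records[record.record_id] = record
                self._fingerprints[record.record_id] = record.fingerprint()

            def verify(self, record_id: str) -> bool:
                """True if the stored record still matches its original fingerprint."""
                rec = self._records[record_id]
                return rec.fingerprint() == self._fingerprints[record_id]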

    Taming the elephant: an orthodox approach to the Principle of Provenance

    The principle of provenance lies at the heart of archival practice and theory. But what does this principle mean, and how should it be interpreted? It is widely accepted in the archival community that an archival fonds must be kept as a whole; the interpretation that the original arrangement must also be preserved, and where necessary reconstituted, is however open to debate. This article, originally published in the 1990s, lays out the main arguments of that debate and presents new interpretations for archives created in the twentieth century, prompting a reflection that remains current for archival records produced in digital environments as well, and argues that the principle is about respecting the context of creation and of archiving.

    Plane stories


    The validity of abbreviated forms of the National Adult Reading Test and Spot-the-Word 2 for estimating full-scale IQ

    In this study, we validate an earlier proposal for an abridged 17-item National Adult Reading Test (NART) by comparing its performance in estimating full-scale IQ against both the full test and the Spot-the-Word 2 (STW-2) test in a new cohort. We also compare the 17-item NART to two previous attempts to shorten this test, the Mini-NART and the Short NART. We find that the 17-item NART performs numerically better than, and is statistically equivalent to, the other short variants, the full 50-word NART, and STW-2. Unlike the Short NART, the 17-item NART is usable for participants of all ability levels rather than only those with low reading ability, while offering equally precise premorbid estimates. We also show that two-thirds of STW-2 is ostensibly redundant for full-scale IQ estimation, and we therefore propose that, subject to additional verification in an independent sample, an abridged version of this test may also benefit clinical practice.
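    Tests of this kind are typically scored by entering the number of correctly read words into a linear regression against measured full-scale IQ. The sketch below illustrates that scoring step only; the intercept, slope, and standard error are invented for the example and are not the regression weights estimated in this study.

        # Hypothetical illustration of regression-based premorbid IQ estimation.
        # Coefficients are invented, NOT the weights estimated in the study.
        def estimate_fsiq(correct_words: int, intercept: float = 100.0,
                          slope: float = 1.5) -> float:
            """Linear premorbid estimate: FSIQ_hat = intercept + slope * score."""
            return intercept + slope * correct_words

        def prediction_interval(estimate: float, see: float = 7.0) -> tuple:
            """95% interval from the regression's standard error of estimate."""
            return (estimate - 1.96 * see, estimate + 1.96 * see)

        fsiq = estimate_fsiq(correct_words=12)   # e.g. 12 of 17 NART words correct
        low, high = prediction_interval(fsiq)
        print(f"estimated FSIQ {fsiq:.0f} (95% PI {low:.0f}-{high:.0f})")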

    SNPExpress: integrated visualization of genome-wide genotypes, copy numbers and gene expression levels

    Background: Accurate analysis of comprehensive genome-wide SNP genotyping and gene expression data sets is challenging for many researchers. Obtaining an integrated view of both large-scale SNP genotyping and gene expression is currently complicated because only a limited number of appropriate software tools are available. Results: We present SNPExpress, a software tool to accurately analyze Affymetrix and Illumina SNP genotype calls, copy numbers, polymorphic copy number variations (CNVs) and Affymetrix gene expression in a combinatorial and efficient way. In addition, SNPExpress allows concurrent interpretation of these items with Hidden Markov Model (HMM)-inferred Loss-of-Heterozygosity (LOH) and copy number regions. Conclusion: The combined analyses offered by the easily accessible SNPExpress tool will facilitate not only the recognition of recurrent genetic lesions but also the identification of critical pathogenic genes.
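    The HMM inference mentioned in the results is, at heart, a segmentation of ordered per-SNP observations into hidden states such as "normal" and "LOH". The following Viterbi sketch uses a simplified two-state model with invented transition and emission probabilities; it illustrates the technique only and does not reproduce SNPExpress's actual model.

        # Toy two-state HMM segmenting ordered genotype calls into 'normal'
        # vs 'LOH' runs. All probabilities are invented for illustration.
        import math

        STATES = ("normal", "LOH")
        LOG_START = {"normal": math.log(0.5), "LOH": math.log(0.5)}
        LOG_TRANS = {
            "normal": {"normal": math.log(0.9), "LOH": math.log(0.1)},
            "LOH":    {"normal": math.log(0.1), "LOH": math.log(0.9)},
        }
        LOG_EMIT = {  # LOH regions emit almost no heterozygous ('het') calls
            "normal": {"het": math.log(0.50), "hom": math.log(0.50)},
            "LOH":    {"het": math.log(0.02), "hom": math.log(0.98)},
        }

        def viterbi(observations):
            """Most likely state path for a sequence of 'het'/'hom' calls."""
            v = [{s: LOG_START[s] + LOG_EMIT[s][observations[0]] for s in STATES}]
            back = []
            for obs in observations[1:]:
                col, ptr = {}, {}
                for s in STATES:
                    prev = max(STATES, key=lambda p: v[-1][p] + LOG_TRANS[p][s])
                    col[s] = v[-1][prev] + LOG_TRANS[prev][s] + LOG_EMIT[s][obs]
                    ptr[s] = prev
                v.append(col)
                back.append(ptr)
            state = max(STATES, key=lambda s: v[-1][s])   # best final state
            path = [state]
            for ptr in reversed(back):                    # trace back the path
                state = ptr[state]
                path.append(state)
            return list(reversed(path))

        calls = ["het", "het"] + ["hom"] * 8 + ["het", "het"]
        print(viterbi(calls))   # the long homozygous run is labelled 'LOH'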

    TF Target Mapper: A BLAST search tool for the identification of Transcription Factor target genes

    BACKGROUND: In the current era of high-throughput genomics, a major challenge is the genome-wide identification of target genes for specific transcription factors. Chromatin immunoprecipitation (ChIP) allows the isolation of in vivo binding sites of transcription factors and provides a powerful tool for examining gene regulation. Crosslinked chromatin is immunoprecipitated with antibodies against specific transcription factors, thus enriching the immunoprecipitated DNA for sequences bound in vivo by these factors. Cloning and sequencing the immunoprecipitated sequences allows identification of transcription factor target genes. Routinely, thousands of such sequenced clones are used in BLAST searches to map their exact location in the genome and the genes located in their vicinity. These genes represent potential targets of the transcription factor of interest. Such bioinformatic analysis is very laborious if performed manually, so there is a need for tools that automate and facilitate it. RESULTS: To facilitate this analysis we generated TF Target Mapper (Transcription Factor Target Mapper), a BLAST search tool allowing rapid extraction of annotated information on genes around each hit. It combines sequence cleaning/filtering, pattern searching and BLAST searches with extraction of information on genes located around each BLAST hit and comparison of the output list of genes or gene ontology IDs with user-supplied lists. We successfully applied and tested TF Target Mapper on sequences bound in vivo by the transcription factor GATA-1. We show that TF Target Mapper efficiently extracted information on genes around ChIPed sequences, thus identifying known (e.g. α-globin and ζ-globin) and potentially novel GATA-1 gene targets. CONCLUSION: TF Target Mapper is a very efficient BLAST search tool that allows the rapid extraction of annotated information on the genes around each hit. It can contribute to comprehensive bioinformatic transcriptome/regulome analysis by providing insight into the mechanisms of action of specific transcription factors, thus helping to elucidate the pathways these factors regulate.
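    The workflow described above (clean and filter sequences, BLAST them against the genome, then extract genes around each hit) can be sketched as a short pipeline. The example below assumes a local NCBI BLAST+ installation with blastn on the PATH and a tab-separated gene annotation file; the file names, column layout, E-value cut-off and 10 kb window are invented for the illustration and are not TF Target Mapper's actual configuration.

        # Sketch of the ChIP-clone -> BLAST -> nearby-genes workflow. Assumes
        # NCBI BLAST+ installed locally and a gene table with columns
        # chrom, start, end, gene_name. File names are illustrative.
        import csv
        import subprocess

        def run_blast(query_fasta: str, db: str) -> list:
            """Run blastn with tabular output; return (chrom, start, end) hits."""
            out = subprocess.run(
                ["blastn", "-query", query_fasta, "-db", db,
                 "-outfmt", "6 sseqid sstart send evalue", "-evalue", "1e-10"],
                capture_output=True, text=True, check=True,
            ).stdout
            hits = []
            for line in out.strip().splitlines():
                sseqid, sstart, send, _ = line.split("\t")
                lo, hi = sorted((int(sstart), int(send)))  # hits may be on either strand
                hits.append((sseqid, lo, hi))
            return hits

        def genes_near(hits: list, gene_table: str, window: int = 10_000) -> set:
            """Genes whose annotated span lies within `window` bp of any hit."""
            nearby = set()
            with open(gene_table) as fh:
                for chrom, start, end, name in csv.reader(fh, delimiter="\t"):
                    start, end = int(start), int(end)
                    for hchrom, hlo, hhi in hits:
                        if chrom == hchrom and start - window <= hhi and end + window >= hlo:
                            nearby.add(name)
            return nearby

        hits = run_blast("chip_clones.fa", "genome_db")
        print(sorted(genes_near(hits, "genes.tsv")))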

    Ang II (Angiotensin II) Conversion to Angiotensin-(1-7) in the Circulation Is POP (Prolyloligopeptidase)-Dependent and ACE2 (Angiotensin-Converting Enzyme 2)-Independent

    The Ang II (Angiotensin II)-Angiotensin-(1-7) axis of the Renin-Angiotensin System encompasses three enzymes that form Angiotensin-(1-7) [Ang-(1-7)] directly from Ang II: ACE2 (angiotensin-converting enzyme 2), PRCP (prolylcarboxypeptidase), and POP (prolyloligopeptidase). We investigated their relative contribution to Ang-(1-7) formation in vivo, and also ex vivo in serum, lungs, and kidneys, using models of genetic ablation coupled with pharmacological inhibitors. In wild-type (WT) mice, infusion of Ang II resulted in a rapid increase of plasma Ang-(1-7). In ACE2−/−/PRCP−/− mice, Ang II infusion resulted in a similar increase in Ang-(1-7) as in WT mice (563±48 versus 537±70 fmol/mL, respectively), showing that the bulk of Ang-(1-7) formation in the circulation is essentially independent of ACE2 and PRCP. By contrast, a POP inhibitor, Z-Pro-Prolinal (ZPP), reduced the rise in plasma Ang-(1-7) after Ang II infusion in control WT mice. In POP−/− mice, the increase in Ang-(1-7) was also blunted compared with WT mice (309±46 versus 472±28 fmol/mL, respectively; P=0.01), and moreover the rate of recovery from acute Ang II-induced hypertension was delayed (P=0.016). In ex vivo studies, POP inhibition with ZPP markedly reduced Ang-(1-7) formation from Ang II in serum and in lung lysates. By contrast, in kidney lysates the absence of ACE2, but not of POP, obliterated Ang-(1-7) formation from added Ang II. We conclude that POP is the main enzyme responsible for Ang II conversion to Ang-(1-7) in the circulation and in the lungs, whereas Ang-(1-7) formation in the kidney is mainly ACE2-dependent.

    ImmunoGlobulin galaxy (IGGalaxy) for simple determination and quantitation of immunoglobulin heavy chain rearrangements from NGS

    Background: Sequence analysis of immunoglobulin heavy chain (IGH) gene rearrangements and frequency analysis is a powerful tool for studying the immune repertoire, immune responses and immune dysregulation in health and disease. The challenge is to provide user-friendly, secure and reproducible analytical services that are available to both small and large laboratories determining the VDJ repertoire with NGS technology. Results: In this study we describe ImmunoGlobulin Galaxy (IGGalaxy), a convenient web-based application for analyzing next-generation sequencing results and reporting IGH gene rearrangements for both repertoire and clonality studies. IGGalaxy has two analysis options: one using the built-in igBLAST algorithm and the other using output from IMGT; in either case, repertoire summaries for the B-cell populations tested are available. IGGalaxy supports multi-sample and multi-replicate input analysis for both igBLAST and IMGT/HighV-QUEST. We demonstrate the technical validity of this platform on a standard dataset, S22, used for benchmarking the performance of antibody alignment utilities, achieving 99.9% concordance with previous results. Re-analysis of NGS data from our samples of RAG-deficient patients demonstrated the validity and user-friendliness of this tool. Conclusions: IGGalaxy provides clinical researchers with detailed insight into the repertoire of the B-cell population per individual sequenced, and between control and pathogenic genomes. IGGalaxy was developed for 454 NGS results but is capable of analyzing alternative NGS data (e.g. Illumina, Ion Torrent). We demonstrate the use of a Galaxy virtual machine to determine the VDJ repertoire for reference data and for B-cells taken from immune-deficient patients. IGGalaxy is available as a VM for download and use on a desktop PC or on a server.
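    Repertoire summaries of the kind IGGalaxy reports come down to counting V(D)J gene usage across reads. The sketch below tallies V gene, J gene, and V-J pair frequencies from an AIRR-style rearrangement table such as igBLAST can emit; the column names follow the AIRR standard, while the file name and summary format are illustrative rather than IGGalaxy's actual implementation.

        # Count V/J gene usage from an AIRR-format rearrangement table (e.g.
        # igBLAST with -outfmt 19). File name and summary are illustrative.
        import csv
        from collections import Counter

        def vj_usage(airr_tsv: str):
            """Return (v_counts, j_counts, vj_pair_counts) over productive reads."""
            v_counts, j_counts, vj_pairs = Counter(), Counter(), Counter()
            with open(airr_tsv) as fh:
                for row in csv.DictReader(fh, delimiter="\t"):
                    if row.get("productive", "").upper() != "T":
                        continue                      # keep productive rearrangements
                    v = row["v_call"].split(",")[0]   # first call if several are tied
                    j = row["j_call"].split(",")[0]
                    v_counts[v] += 1
                    j_counts[j] += 1
                    vj_pairs[(v, j)] += 1
            return v_counts, j_counts, vj_pairs

        v, j, vj = vj_usage("rearrangements.tsv")
        total = sum(v.values())
        for gene, n in v.most_common(5):
            print(f"{gene}\t{n}\t{n / total:.1%}")    # top five V genes by frequency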