
    ABAEnrichment: An R package to test for gene set expression enrichment in the adult and developing human brain

    Summary: We present ABAEnrichment, an R package that tests for expression enrichment in specific brain regions at different developmental stages, using expression information gathered from multiple regions of the adult and developing human brain together with ontologically organized structural information about the brain, both provided by the Allen Brain Atlas. We validate ABAEnrichment by successfully recovering the origin of gene sets identified in specific brain cell types and developmental stages. Availability and Implementation: ABAEnrichment is implemented as an R package and is available under GPL (≥ 2) from the Bioconductor website (http://bioconductor.org/packages/3.3/bioc/html/ABAEnrichment.html). Contacts: [email protected], [email protected] or [email protected]. Supplementary information: Supplementary data are available at Bioinformatics online.
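The package's own statistics are more elaborate than this, but the core idea behind gene-set enrichment testing can be sketched with a plain hypergeometric tail probability. The sketch below is a generic illustration, not ABAEnrichment's implementation, and all numbers in the example call are invented:

```python
from math import comb

def enrichment_p(universe, annotated, selected, overlap):
    """One-sided hypergeometric tail P(X >= overlap): the chance of seeing
    at least `overlap` annotated genes when `selected` genes are drawn at
    random from a `universe` containing `annotated` annotated genes."""
    total = comb(universe, selected)
    return sum(comb(annotated, k) * comb(universe - annotated, selected - k)
               for k in range(overlap, min(annotated, selected) + 1)) / total

# 20 of 100 candidate genes fall in a set of 500 (out of 20,000) genes
# expressed in some brain region -- far above the ~2.5 expected by chance,
# so the tail probability is tiny and the set is called enriched.
p = enrichment_p(20000, 500, 100, 20)
```

A small enough p (after multiple-testing correction across regions) indicates that the overlap is unlikely under random sampling.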

    Sustainable typography

    We need to radically re-think typography for text-rich business documents and publications (not books). Most designers assume people have time to read. In reality, the following occurs: 1) We browse/forage (71%) then read (11%). 2) People have different time tolerances and requirements for detail, i.e. the same information is required at different levels of detail depending on the time the reader can allocate to it (senior directors will have less time than juniors). 3) People want choice as to whether they view information on paper, iPhone, PowerPoint or via web/screen. 4) Most publications do not follow the cognitive principles of how we are 'wired' to interpret visual signals. Message-based Design & Message-based Writing (MBD/MBW) is a system that addresses these four points and allows key messages to be understood prior to reading, simply by scanning the page, with embedded 'visual hooks' to draw the reader in. It thus overcomes 'filter failure', a phrase coined by Clay Shirky at the Web 2.0 Expo. It collapses to a summary and exploits the way we are wired. Additionally, it caters for up to four time tolerances of readers and morphs from paper to screen effortlessly.

    Case studies on data-rich and data-poor countries

    The aim of Work Package 5 is to assess the needs of decision-makers and end-users involved in the process of post-disaster recovery and to provide useful guidance, tools and recommendations for extracting information from the affected area to help with their decisions. This report follows from Deliverables D5.1 “Comparison of outcomes with end-user needs” and D5.2 “Semi-automated data extraction”, in which the team set out to explore the needs of decision-makers and suggested protocols for tools to address their information requirements. This report begins with a summary of findings from the scenario planning game and a review of end-user priorities; it then describes methods for detecting post-disaster recovery evaluation and monitoring attributes to aid decision-making. The methods proposed in deliverables D2.6 “Supervised/Unsupervised change detection” and D5.2 “Semi-automated data extraction” for use in post-disaster recovery evaluation and monitoring are tested in detail for data-poor and data-rich scenarios. Semi-automated and automated methods of finding the recovery indicators pertaining to early recovery and monitoring are discussed. Step-by-step guidance for an analyst to follow in order to prepare the images and GIS data layers necessary to execute the semi-automated and automated methods is given in section 2. The outputs are presented in detail using case studies in section 3. In order to develop and assess the proposed detection methods, images from two case studies, namely Van in Turkey and Muzaffarabad in Pakistan, both recovering from recent earthquakes, have been used to highlight the differences between data-rich and data-poor countries and hence the constraints on the outputs of the proposed methods.

    Conservation status of New Zealand freshwater invertebrates, 2013

    The conservation status of 644 freshwater invertebrate taxa, across five phyla, 28 orders and 75 families, was assessed using the New Zealand Threat Classification System (NZTCS) criteria. Forty-six species were ranked Nationally Critical, 11 Nationally Endangered and 16 Nationally Vulnerable. One hundred and seventy-two taxa were listed as Data Deficient. A full list is presented, along with summaries and brief notes on the most important changes. This list replaces all previous NZTCS lists for freshwater invertebrates.

    The inverse solution of the atomic mixing equations by an operator-splitting method

    This paper considers the quantification problem of recovering the original material distribution from secondary ion mass spectrometry (SIMS) data. It is an ill-posed inverse problem and hence requires a special solution technique. The quantification problem is essentially an inverse diffusion or (classically) a backward heat conduction problem. In this paper an operator-splitting method, proposed in a previous paper by the first author for the solution of inverse diffusion problems, is developed for recovering the original structure from SIMS data. A detailed development of the quantification method is given, and it is applied to typical data to demonstrate its effectiveness.
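The ill-posedness can be seen in the textbook backward-diffusion setting, a simplification of the full atomic mixing model treated in the paper: model mixing as one-dimensional diffusion of a concentration profile c(z, t) with mixing strength D,

```latex
% Forward mixing modelled as diffusion of the concentration c(z,t):
\[ \frac{\partial c}{\partial t} = D\,\frac{\partial^2 c}{\partial z^2}. \]
% Each spatial Fourier mode decays forward in time,
\[ \hat{c}(k,t) = \hat{c}(k,0)\,e^{-D k^2 t}, \]
% so recovering \hat{c}(k,0) from the measured \hat{c}(k,t) multiplies
% every mode, measurement noise included, by e^{+D k^2 t}: the
% high-frequency blow-up that makes the backward problem ill-posed
% and motivates stabilized schemes such as operator splitting.
```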

    Constructions of Batch Codes via Finite Geometry

    A primitive k-batch code encodes a string x of length n into a string y of length N, such that each multiset of k symbols from x has k mutually disjoint recovering sets in y. We develop new explicit and random coding constructions of linear primitive batch codes based on finite geometry. In some parameter regimes, our proposed codes have lower redundancy than previously known batch codes. Comment: 7 pages, 1 figure, 1 table
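For intuition, the batch property can be checked by brute force on a toy linear code. The sketch below (illustrative only, not a construction from the paper) verifies that the [3,2] single-parity code over GF(2), y = (x1, x2, x1+x2), is a primitive 2-batch code, i.e. every multiset of 2 information symbols has 2 mutually disjoint recovering sets:

```python
from itertools import combinations, combinations_with_replacement

def recovers(gen, subset, target):
    """True if some XOR of the encoding vectors indexed by `subset`
    equals the unit vector `target` (the subset recovers that x_i)."""
    spans = {0}
    for j in subset:
        spans |= {s ^ gen[j] for s in spans}
    return target in spans

def is_batch_code(gen, k, n_info):
    """Brute-force check of the primitive k-batch property."""
    subsets = [frozenset(c) for r in range(1, len(gen) + 1)
               for c in combinations(range(len(gen)), r)]
    for request in combinations_with_replacement(range(n_info), k):
        # search for k pairwise-disjoint recovery sets, one per request
        def search(t, used):
            if t == k:
                return True
            bit = 1 << request[t]
            return any(search(t + 1, used | s) for s in subsets
                       if s.isdisjoint(used) and recovers(gen, s, bit))
        if not search(0, frozenset()):
            return False
    return True

# y1 = x1, y2 = x2, y3 = x1 + x2, as bitmasks over (x1, x2)
parity = [0b01, 0b10, 0b11]
```

Requesting x1 twice, for instance, is served by the disjoint sets {y1} and {y2, y3}, since y2 XOR y3 = x1; dropping the parity symbol breaks the property.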

    Recovering from Biased Data: Can Fairness Constraints Improve Accuracy?

    Multiple fairness constraints have been proposed in the literature, motivated by a range of concerns about how demographic groups might be treated unfairly by machine learning classifiers. In this work we consider a different motivation: learning from biased training data. We posit several ways in which training data may be biased, including a noisier or negatively biased labelling process for members of a disadvantaged group, a decreased prevalence of positive or negative examples from the disadvantaged group, or both. Given such biased training data, Empirical Risk Minimization (ERM) may produce a classifier that is not only biased but also has suboptimal accuracy on the true data distribution. We examine the ability of fairness-constrained ERM to correct this problem. In particular, we find that the Equal Opportunity fairness constraint [Hardt et al., 2016] combined with ERM will provably recover the Bayes optimal classifier under a range of bias models. We also consider other recovery methods, including re-weighting the training data, Equalized Odds, Demographic Parity, and Calibration. These theoretical results provide additional motivation for considering fairness interventions even if an actor cares primarily about accuracy.
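One of the posited bias models, positive training labels flipped to negative, can be simulated in a few lines. The sketch below is a single-population simplification (it drops the group structure of the paper), with all names and parameters invented for illustration; it fits the error-minimizing threshold to biased and to clean labels and compares accuracy on unbiased data:

```python
import random

random.seed(0)

def sample(n, flip=0.0):
    """1-D scores: positives centred at +1, negatives at -1 (unit variance).
    With probability `flip`, a true positive is recorded as negative."""
    data = []
    for _ in range(n):
        y = random.random() < 0.5
        x = random.gauss(1.0 if y else -1.0, 1.0)
        obs = 0 if (y and random.random() < flip) else int(y)
        data.append((x, int(y), obs))
    return data

def best_threshold(data, label_idx):
    """ERM over the class of threshold rules: predict 1 iff x > t."""
    xs = sorted(d[0] for d in data)
    cands = [xs[0] - 1] + [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    err = lambda t: sum((d[0] > t) != bool(d[label_idx]) for d in data)
    return min(cands, key=err)

train = sample(1500, flip=0.4)        # 40% of positives mislabelled
test = sample(1500)                   # clean evaluation data

t_biased = best_threshold(train, 2)   # fit to the biased labels
t_clean = best_threshold(train, 1)    # oracle fit to the true labels

accuracy = lambda t: sum((d[0] > t) == bool(d[1]) for d in test) / len(test)
# Fitting to the biased labels pushes the threshold upward (positives look
# rarer than they are) and costs accuracy on the true distribution.
```

The paper's point is that a suitable fairness constraint can undo this shift for the disadvantaged group without access to the clean labels that the oracle fit uses here.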