
    Homogenization methods and macro-strength of composites

    A multi-phase periodic composite subjected to inhomogeneous shrinkage or temperature deformation and prescribed mechanical loads is considered. Asymptotic homogenisation is applied to compute the homogenised macrostresses. A non-local approximate macro-strength condition, defined on the homogenised stress field, is derived from the micro-strength conditions, and the convergence of the micro-strength conditions to this approximate macro-strength condition, as the structure period tends to zero, is proved.
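    As a point of reference, two-scale asymptotic homogenisation typically defines the macrostress as the average of the oscillating microstress over the periodicity cell; a minimal sketch in standard notation (assumed here, not necessarily the paper's) is:

        % Two-scale sketch: fields depend on the slow variable x and the fast
        % variable y = x/epsilon; the homogenised (macro) stress is the average
        % of the microstress over the periodicity cell Y.
        \[
          \Sigma_{ij}(x) \;=\; \langle \sigma_{ij} \rangle_Y(x)
          \;=\; \frac{1}{|Y|} \int_Y \sigma_{ij}(x, y)\, \mathrm{d}y,
          \qquad y = x/\varepsilon .
        \]
        % The approximate macro-strength condition is then stated on Sigma,
        % and its validity is argued in the limit epsilon -> 0.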

    Niche Modeling: Ecological Metaphors for Sustainable Software in Science

    This position paper is aimed at providing some history and provocations for the use of an ecological metaphor to describe software development environments. We do not claim that the ecological metaphor is the best or only way of looking at software; rather, we want to ask whether it can indeed be a productive and thought-provoking one. Comment: Position paper submitted to: Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE), SC13, Sunday, 17 November 2013, Denver, CO, US

    Scoop: An Adaptive Indexing Scheme for Stored Data in Sensor Networks

    In this paper, we present the design of Scoop, a system for indexing and querying stored data in sensor networks. Scoop works by collecting statistics about the rate of queries and the distribution of sensor readings over a sensor network, and uses those statistics to build an index that tells nodes where in the network to store their readings. Using this index, a user's queries over that stored data can be answered efficiently, without flooding those queries throughout the network. This approach offers a substantial advantage over other solutions that either store all data externally on a basestation (requiring every reading to be collected from all nodes), or store all data locally on the node that produced it (requiring queries to be flooded throughout the network). Our results, in fact, show that Scoop offers a factor of four improvement over existing techniques in a real implementation on a 64-node mote-based sensor network. These results also show that Scoop is able to efficiently adapt to changes in the distribution and rates of data and queries.
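    A minimal sketch of the idea behind such a statistics-driven index (the class, bucketing, cost model and assignment rule below are illustrative assumptions, not Scoop's actual data structures or algorithm): nodes report histograms of recent readings and observed query ranges, and the index assigns each bucket of values to the storage node that minimises expected radio traffic.

        # Illustrative sketch of a statistics-driven storage index in the spirit
        # of Scoop; bucketing, cost model and assignment rule are simplified
        # assumptions, not the paper's algorithm.
        from collections import defaultdict

        class ValueIndex:
            def __init__(self, bucket_width, hops):
                self.bucket_width = bucket_width
                self.hops = hops                   # hops[a][b] = hop count from node a to node b
                self.readings = defaultdict(lambda: defaultdict(int))  # bucket -> producer -> count
                self.queries = defaultdict(int)    # bucket -> number of queries touching it
                self.mapping = {}                  # bucket -> storage node

            def bucket(self, value):
                return int(value // self.bucket_width)

            def observe_reading(self, producer, value):
                self.readings[self.bucket(value)][producer] += 1

            def observe_query(self, lo, hi):
                for b in range(self.bucket(lo), self.bucket(hi) + 1):
                    self.queries[b] += 1

            def rebuild(self, basestation):
                # For each bucket, pick the node minimising expected transmissions:
                # readings routed to it plus queries routed from the basestation.
                for b, producers in self.readings.items():
                    def cost(node):
                        store = sum(n * self.hops[p][node] for p, n in producers.items())
                        query = self.queries[b] * self.hops[basestation][node]
                        return store + query
                    self.mapping[b] = min(self.hops, key=cost)

            def storage_node(self, value):
                # Readings in buckets never seen before stay at the producer.
                return self.mapping.get(self.bucket(value))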

    Adaptive indexing scheme for stored data in sensor networks

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 53-56). We present the design of Scoop, a system that is designed to efficiently store and query relational data collected by nodes in a bandwidth-constrained sensor network. Sensor networks allow remote environments to be monitored at very fine levels of granularity; often such monitoring deployments generate large amounts of data which may be impractical to collect due to bandwidth limitations, but which can easily be stored in-network for some period of time. Existing approaches to querying stored data in sensor networks have typically assumed that all data either is stored locally, at the node that produced it, or is hashed to some location in the network using a predefined uniform hash function. These two approaches are at the extremes of a trade-off between storage and query costs. In the former case, the costs of storing data are low, since no transmissions are required, but queries must flood the entire network. In the latter case, some queries can be executed efficiently by using the hash function to find the nodes of interest, but storage is expensive as readings must be transmitted to some (likely far away) location in the network. In contrast, Scoop monitors changes in the distribution of sensor readings, queried values, and network connectivity to determine the best location to store data. We formulate this as an optimization problem and present a practical algorithm that solves this problem in Scoop. We have built a complete implementation of Scoop for TinyOS mote [1] sensor network hardware and evaluated its performance on a 60-node testbed and in the TinyOS simulator, TOSSIM. Our results show that Scoop not only provides substantial performance benefits over alternative approaches on a range of data sets, but is also able to efficiently adapt to changes in the distribution and rates of data and queries. by Thomer M. Gil. S.M.
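    The storage/query trade-off described above can be made concrete with a back-of-envelope cost model (the parameters and formulas below are illustrative assumptions, not taken from the thesis): local storage makes writes free but forces every query to flood the network, while hash-based storage pays a per-reading routing cost in exchange for cheap point queries.

        # Back-of-envelope message-cost model for local vs hashed in-network
        # storage; parameters and formulas are illustrative assumptions only.
        def expected_messages(n_nodes, readings_per_node, queries, avg_hops):
            # Local storage: readings cost nothing, every query floods all nodes.
            local = queries * n_nodes
            # Hashed storage: each reading travels avg_hops to its hashed location,
            # each query travels avg_hops to the single node of interest.
            hashed = n_nodes * readings_per_node * avg_hops + queries * avg_hops
            return local, hashed

        # Few queries favour local storage; query-heavy workloads favour hashing,
        # which is why an adaptive scheme that tracks both rates can win overall.
        print(expected_messages(60, 100, queries=10, avg_hops=4))    # (600, 24040)
        print(expected_messages(60, 100, queries=5000, avg_hops=4))  # (300000, 44000)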

    Reconstructing the development of legacy database platforms

    Through the “Migrating Research Data Collections” project, we are working to better understand and support data collection migration and change over time. We are developing case studies of database migration at a number of memory institutions, first focusing on Natural History Museums (NHMs; Thomer, Rayburn and Tyler, 2020; Thomer, Weber and Twidale 2018). In developing our case studies, we have found that there are surprisingly few published histories of database platforms and software – particularly for domain- or museum-specific platforms. Additionally, documentation of the function and format of legacy systems can be quite hard to find. This has motivated our efforts to reconstruct the development history of the data systems described by our study participants in interviews. The timeline presented on this poster has been developed through review of academic publications (if any) and other documentation of these data systems. Much of this has been found through digital archival research (e.g. the Internet Archive’s Wayback Machine). The full dataset, with references, underlying this timeline is available at bit.ly/36vaEoM. We are using this resource to contextualize the evolution of the data systems in each of our case studies, and we further hope this work will be of interest to others studying infrastructure development and change. IMLS # RE-07-18-0118-18. http://deepblue.lib.umich.edu/bitstream/2027.42/162598/1/IDCC2020_FINALTOPRINT.pdf. Description of IDCC2020_FINALTOPRINT.pdf: Poster. SEL

    The craft of database curation: Taking cues from quiltmaking

    Data migration within library, archive and museum collections is a critical process for maintaining collection data and ensuring its availability for future users. This work is also an under-supported component of digital curation. In this poster we present the findings from 20 semi-structured interviews with archivists, collection managers and curators who have recently completed a data migration. One of our main findings is the similarity between craft work and migration practices in memory institutions. To demonstrate these similarities, we use quiltmaking as a framework. These similarities include the practice of piecing multiple systems together to complete a workflow, relying on community collaboration, and inter-generational labor. Our hope is that by highlighting the craftful qualities already embedded in this work we can show alternative best practices for data migration and database management, in an effort to reach a broader understanding of what a successful data migration can look like.

    Site-based data curation: bridging data collection protocols and curatorial processes at scientifically significant sites

    Research conducted at scientifically significant sites produces an abundance of important and highly valuable data. Yet, though sites are logical points for coordinating the curation of these data, their unique needs have been under-supported. Previous studies have shown that two principal stakeholder groups – scientific researchers and local resource managers – both need information that is most effectively collected and curated early in research workflows. However, well-designed site-based data curation interventions are necessary to accomplish this. Additionally, further research is needed to understand and align the data curation needs of researchers and resource managers, and to guide coordination of the data collection protocols used by researchers in the field and the data curation processes applied later by resource managers. This dissertation develops two case studies of research and curation at scientifically significant sites: geobiology at Yellowstone National Park and paleontology at the La Brea Tar Pits. The case studies investigate: What information do different stakeholders value about the natural sites at which they work? How do these values manifest in data collection protocols, curatorial processes, and infrastructures? And how are sometimes conflicting stakeholder priorities mediated through the use and development of shared information infrastructures? The case studies are developed through interviews with researchers and resource managers, as well as participatory methods to collaboratively develop “minimum information frameworks” – high-level models of the information needed by all stakeholders. Approaches from systems analysis are adapted to model data collection and curation workflows, identifying points of curatorial intervention early in the processes of generating and working with data. Additionally, a general information model for site-based data collections is proposed, with three classes of information documenting key aspects of the research project, a site’s structure, and individual specimens and measurements. This research contributes to our understanding of how data from scientifically significant sites can be aggregated, integrated and reused over the long term, and how both researcher and resource manager needs can be reflected and supported during information modeling, workflow documentation and the development of data infrastructure policy. It contributes prototypes of minimum information frameworks for both sites, as well as a general model that can serve as the basis for later site-based standards and infrastructure development.
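    The three-class information model mentioned above might be sketched roughly as follows (the class and field names are illustrative guesses, not the dissertation's actual schema): one class for the research project, one for the site's structure, and one for individual specimens and measurements.

        # Rough sketch of a three-class information model for site-based data
        # collections; field names are illustrative assumptions, not the actual model.
        from dataclasses import dataclass, field
        from typing import Dict, List, Optional

        @dataclass
        class ResearchProject:
            title: str
            investigators: List[str]
            collection_protocol: Optional[str] = None   # reference to the field protocol used

        @dataclass
        class SiteStructure:
            site_name: str                              # e.g. a named locality, deposit, or pit
            parent: Optional["SiteStructure"] = None    # sites can nest (site -> sub-site)
            spatial_reference: Optional[str] = None     # coordinate system or grid used on site

        @dataclass
        class SpecimenRecord:
            identifier: str
            project: ResearchProject
            collected_at: SiteStructure
            measurements: Dict[str, float] = field(default_factory=dict)  # measurement name -> value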

    Complications in Climate Data Classification: The Political and Cultural Production of Variable Names

    Model intercomparison projects are a unique and highly specialized form of data-intensive collaboration in the earth sciences. Typically, a set of pre-determined boundary conditions (scenarios) is agreed upon by a community of model developers, who then test and simulate each of those scenarios with individual ‘runs’ of a climate model. Because both the human expertise and the computational power needed to produce an intercomparison project are exceptionally expensive, the data they produce are often archived for the broader climate science community to use in future research. Outside of high energy physics and astronomy sky surveys, climate modeling intercomparisons are one of the largest and most rapid methods of producing data in the natural sciences (Overpeck et al., 2010). But, like any collaborative eScience project, the discovery and broad accessibility of this data is dependent on classifications and categorizations in the form of structured metadata – namely the Climate and Forecast (CF) metadata standard, which provides a controlled vocabulary to normalize the naming of a dataset’s variables. Intriguingly, the CF standard’s original publication notes, “conventions have been developed only for things we know we need. Instead of trying to foresee the future, we have added features as required and will continue to do this” (Gregory, 2003). Yet, qualitatively we have observed that this is not the case: although the time period of intercomparison projects remains stable (2-3 years), the scale and complexity of models and their output continue to grow – and thus, data creation and variable names consistently outpace the ratification of CF.
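    For readers unfamiliar with CF, the controlled vocabulary works by attaching a standard_name attribute (drawn from the CF standard name table) to each variable, alongside units and other descriptive attributes; a minimal, self-contained illustration (made-up data, using xarray) might look like this:

        # Minimal illustration of CF-style variable naming: the variable's local
        # name is arbitrary, but its standard_name should come from the CF
        # controlled vocabulary (e.g. "air_temperature"). Data values are made up.
        import numpy as np
        import xarray as xr

        tas = xr.DataArray(
            np.random.uniform(250.0, 310.0, size=(3, 4)),
            dims=("lat", "lon"),
            name="tas",                                # local / model-specific name
            attrs={
                "standard_name": "air_temperature",    # CF controlled vocabulary term
                "units": "K",
                "cell_methods": "time: mean",
            },
        )
        print(tas.attrs["standard_name"])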

    Supporting the long‐term curation and migration of natural history museum collections databases

    Migration of data collections from one platform to another is an important component of data curation – yet, there is surprisingly little guidance for information professionals faced with this task. Data migration may be particularly challenging when these data collections are housed in relational databases, due to the complex ways that data, data schemas, and relational database management software become intertwined over time. Here we present results from a study of the maintenance, evolution and migration of research databases housed in Natural History Museums. We find that database migration is an on-going – rather than occasional – process for many collection managers, and that they creatively appropriate and innovate on many existing technologies in their migration work. This paper contributes descriptions of a preliminary set of common adaptations and “migration patterns” in the practices of database curators. It also outlines the strategies they use when facing collection-level data migration and describes the limitations of existing tools in supporting LAM and “small science” research database migration. We conclude by outlining future research directions for the maintenance and migration of collections and complex digital objects. Peer Reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/147782/1/pra214505501055.pd

    Three Approaches to Documenting Database Migrations

    Database migration is a crucial aspect of digital collections management, yet there are few best practices to guide practitioners in this work. There is also limited research on the patterns of use and processes motivating database migrations. In the “Migrating Research Data Collections” project, we are developing these best practices through a multi-case study of database and digital collections migration. We find that a first and fundamental problem faced by collection staff is a sheer lack of documentation about past database migrations. We contribute a discussion of ways information professionals can reconstruct missing documentation, and three approaches that others might take for documenting migrations going forward. [This paper is a conference pre-print presented at IDCC 2020 after lightweight peer review.]
