
    Scientific Explanation as a Guide to Ground

    Ground is all the rage in contemporary metaphysics. But what is its nature? Some metaphysicians defend what we could call, following Skiles and Trogdon (2021), the inheritance view: it is because constitutive forms of metaphysical explanation are such-and-such that we should believe that ground is so-and-so. However, many putative instances of inheritance are not primarily motivated by scientific considerations. This limitation is harmless if one thinks that ground and science are best kept apart. Contrary to this view, we believe that ground is a highly serviceable tool for investigating metaphysical areas of science. In this paper, we defend a naturalistic version of the inheritance view which takes constitutive scientific explanation as a better guide to ground. After illustrating the approach and its merits, we discuss some implications of the emerging scientific conception for the theory of ground at large.

    Hybrid modeling of a biorefinery separation process to monitor short-term and long-term membrane fouling

    Membrane filtration is commonly used in biorefineries to separate cells from fermentation broths containing the desired products. However, membrane fouling can cause short-term process disruption and long-term membrane degradation. The evolution of membrane resistance over time can be monitored to track fouling, but this calls for adequate sensors in the plant. This requirement might not be fulfilled even in modern biorefineries, especially when multiple, tightly interconnected membrane modules are used. Therefore, characterization of fouling in industrial facilities remains a challenge. In this study, we propose a hybrid modeling strategy to characterize both reversible and irreversible fouling in multi-module biorefinery membrane separation systems. We couple a linear data-driven model, to provide high-frequency estimates of trans-membrane pressures from the available measurements, with a simple nonlinear knowledge-driven model, to compute the resistances of the individual membrane modules. We test the proposed strategy using real data from the world's first industrial biorefinery manufacturing 1,4-bio-butanediol via fermentation of renewable raw materials. We show how monitoring of individual resistances, even when done by simple visual inspection, offers valuable insight into the reversible and irreversible fouling state of the membranes. We also discuss the advantages of the proposed approach, over monitoring trans-membrane pressures and permeate fluxes, from the standpoints of data variability, effect of process changes, interaction between modules in multi-module systems, and fouling dynamics.
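The hybrid scheme described above can be sketched in a few lines, assuming the knowledge-driven step uses the classical Darcy resistance model (a standard choice, not confirmed by the abstract); all coefficients, measurements, and the viscosity value are illustrative placeholders:

```python
import numpy as np

# Assumed Darcy model: permeate flux J = TMP / (mu * R_total),
# so a module's resistance follows from estimated trans-membrane
# pressure (TMP) and measured flux. A rising R at constant flux
# would indicate fouling build-up.

MU = 1.0e-3  # permeate dynamic viscosity [Pa*s], assumed water-like


def estimate_tmp(measurements, coeffs, intercept):
    """Linear data-driven step: high-frequency TMP estimate [Pa]
    from available plant measurements (hypothetical coefficients)."""
    return float(np.dot(coeffs, measurements) + intercept)


def membrane_resistance(tmp_pa, flux_m_per_s):
    """Nonlinear knowledge-driven step: R = TMP / (mu * J)  [1/m]."""
    return tmp_pa / (MU * flux_m_per_s)


tmp = estimate_tmp(np.array([2.1, 0.8]), np.array([1.0e4, 5.0e3]), 2.0e3)
R = membrane_resistance(tmp, flux_m_per_s=2.0e-5)
# tmp -> 27000.0 Pa, R -> 1.35e12 1/m
```

Tracking `R` per module over time, rather than raw pressures or fluxes, is what the abstract argues makes reversible versus irreversible fouling visible.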

    Meta-critical thinking, paradox, and probabilities

    There is as much lack of clarity concerning what “critical thinking” involves, even among those charged with teaching it, as there is consensus that we need more emphasis on it in both academia and society. There is an apparent need to think critically about critical thinking, an exercise that might be called meta-critical thinking. It involves emphasizing a practice in terms of which “critical thinking” is helpfully carried out and clarifying one or more of the concepts in terms of which “critical thinking” is usually defined. The practice is distinction making and the concept that of evidence. Science advances by constructing models that explain real-world processes. Once multiple potential models have been distinguished, there remains the task of identifying which models match the real-world process better than others. Since statistical inference has in large part to do with showing how data provide support, i.e., furnish evidence, that the model/hypothesis is more or less likely while still uncertain, we turn to it to help make the concept more precise and thereby useful. In fact, two of the leading methodological paradigms—Bayesian and likelihood—can be taken to provide answers to the questions of the extent to which as well as how data provide evidence for conclusions. Examining these answers in some detail is a highly promising way to make progress. We do so by way of the analysis of three well-known statistical paradoxes—the Lottery, the Old Evidence, and Humphreys’—and the identification of distinctions on the basis of which their plausible resolutions depend. These distinctions, among others between belief and evidence and different concepts of probability, in turn have more general applications. They are applied here to two highly contested public policy issues—the efficacy of COVID vaccinations and the fossil fuel cause of climate change. 
Our aim is to provide some tools, which might be called "healthy habits of mind," with which to assess statistical arguments, in particular with respect to the nature and extent of the evidence they furnish, and to illustrate their use in well-defined ways.
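The distinction the abstract draws between belief and evidence can be made concrete with a toy binomial example (my own illustration, not taken from the paper): the likelihood ratio answers "how strongly do the data favor one hypothesis over another?", while the posterior answers "what should I now believe?", and the two need priors to connect:

```python
from math import comb

# Toy data: k heads in n tosses; H1 says p = 0.7, H0 says p = 0.5.
def binom_lik(k, n, p):
    """Binomial likelihood of k successes in n trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

k, n = 14, 20

# Likelihood answer: the data favor H1 over H0 by this factor,
# regardless of anyone's prior beliefs.
lr = binom_lik(k, n, 0.7) / binom_lik(k, n, 0.5)

# Bayesian answer: belief in H1 after the data, which also depends
# on how plausible H1 was beforehand.
prior_h1 = 0.1
post_h1 = (lr * prior_h1) / (lr * prior_h1 + (1 - prior_h1))
# lr > 1 (evidence for H1) while post_h1 can remain below 0.5
# (belief still against H1): evidence and belief come apart.
```

This is exactly the kind of distinction the paper deploys against conflations in the paradoxes it analyzes.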

    Cognitive Inhibition as a Core Component of Executive Functions: Exploring Intra- and Interindividual Differences

    Cognitive inhibition is an essential executive function that we use in our everyday lives. Numerous factors have been claimed to influence this construct, including video gaming, exercise, and expertise in musical instruments. However, in this thesis, I focus on an understudied factor, the alignment of chronotype and testing time, and a heavily studied yet controversial factor, bilingualism. Throughout this thesis, with one exception, I present a series of experiments which were conducted online. In the first empirical chapter, I examined a relatively novel Faces task which its authors claim measures three cognitive processes, including two different forms of inhibition and task switching (Chapter 2). Based on this chapter's findings, I decided to use the Faces task in Chapters 3, 4 and 6. The next two chapters determined whether the alignment of time of testing and chronotype influences inhibition and task switching among the young adult (Chapter 3) and older adult (Chapter 4) populations. Afterwards, I explored how conflict is resolved through a mouse tracking paradigm and, by extension, whether this paradigm can be used for a variety of inhibition tasks (Chapter 5). For the final empirical chapter, I identified whether training inhibition in a verbal domain impacts inhibition in a non-verbal domain (i.e., far transfer effects). To achieve this, I investigated whether bilingualism, which can be seen as a form of cognitive training within the verbal domain, influences performance in non-verbal tasks which index inhibition (Chapter 6). The main findings of this thesis suggest that cognitive inhibition is not substantially impacted by synchrony effects nor by bilingualism. Furthermore, the findings imply that mouse tracking could be a promising tool for examining cognitive inhibition.

    Spatial adaptive settlement systems in archaeology. Modelling long-term settlement formation from spatial micro interactions

    Despite a research history spanning more than a century, settlement patterns still hold promise to contribute to theories of large-scale processes in human history. Mostly they have been presented as passive imprints of past human activities, and the spatial interactions they shape have not been studied as a driving force of historical processes. While archaeological knowledge has been used to construct geographical theories of settlement evolution, gaps in this knowledge remain. Currently no theoretical framework has been adopted to explore settlement patterns as spatial systems emerging from the micro-choices of small population units. The goal of this thesis is to propose a conceptual model of adaptive settlement systems based on the complex adaptive systems framework. The model frames settlement system formation as an adaptive system containing spatial features, information flows, and decision-making population units (agents), forming cross-scale feedback loops between the location choices of individuals and the space modified by their aggregated choices. The goal of the model is to find new ways of interpreting archaeological locational data, as well as a closer theoretical integration of micro-level choices and meso-level settlement structures. The thesis is divided into five chapters. The first chapter is dedicated to conceptualisation of the general model based on existing literature and shows that settlement systems are inherently complex adaptive systems and therefore require the tools of complexity science for causal explanations. The following chapters explore both empirical and simulated settlement patterns, each dedicated to studying selected information flows and feedbacks in the context of the whole system. The second and third chapters explore the case study of Stone Age settlement in Estonia, comparing residential location choice principles of different periods.
In chapter 2, the relation between environmental conditions and residential choice is explored statistically. The results confirm that the relation is significant but varies between different archaeological phenomena. In the third chapter, hunter-fisher-gatherer and early agrarian Corded Ware settlement systems are compared spatially using inductive models. The results indicate a large difference in their perception of the landscape regarding suitability for habitation, leading to the conclusion that early agrarian land use significantly extended land use potential and provided a competitive spatial benefit. In addition to spatial differences, model performance was compared and the difference discussed in the context of the proposed adaptive settlement system model. The last two chapters present theoretical agent-based simulation experiments intended to study effects discussed in relation to environmental model performance and environmental determinism in general. In the fourth chapter, the central place foraging model is embedded in the proposed model and resource depletion, as an environmental modification mechanism, is explored. The study excluded the possibility that mobility itself would lead to the modelling effects discussed in the previous chapter. The purpose of the last chapter is the disentanglement of the complex relations between social and human-environment interactions. The study exposed the non-linear spatial effects that expected population density can have on the system, and the general robustness of environmental inductive models in archaeology to randomness and social effects. The model indicates that social interactions between individuals lead to the formation of a group agency which is determined by the environment even if individual cognitions consider the environment insignificant. It also indicates that the spatial configuration of the environment has a certain influence on population clustering, therefore providing a potential pathway to population aggregation.
These empirical and theoretical results demonstrate the new insights provided by the complex adaptive systems framework. Some of the results, including the explanation of the empirical results, required the conceptual model to provide a framework of interpretation.
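The cross-scale feedback loop at the heart of the proposed model can be sketched very simply (this is my own minimal illustration, not the thesis code): agents choose among locations weighted by environmental suitability, and each choice raises the attractiveness of that location for later agents, so macro-level clustering emerges from micro-level choices:

```python
import random

random.seed(42)

suitability = [0.2, 0.5, 0.3]  # fixed environmental component per location
occupancy = [0, 0, 0]          # aggregated past choices (the modified space)
ALPHA = 0.1                    # assumed strength of the social feedback


def choose_location():
    """One agent's residential choice: suitability plus a bonus
    proportional to how many settled there before (the feedback)."""
    weights = [s + ALPHA * o for s, o in zip(suitability, occupancy)]
    return random.choices(range(len(weights)), weights=weights)[0]


for _ in range(200):           # 200 settling agents, in sequence
    occupancy[choose_location()] += 1

# With ALPHA > 0 the positive feedback typically concentrates agents
# beyond what suitability alone predicts: a "group agency" emerging
# from individual choices, as described in the abstract.
```

Setting `ALPHA = 0` recovers a purely environmentally determined pattern, which is the contrast the simulation chapters exploit.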

    Evaluation Methodologies in Software Protection Research

    Man-at-the-end (MATE) attackers have full control over the system on which the attacked software runs, and try to break the confidentiality or integrity of assets embedded in the software. Both companies and malware authors want to prevent such attacks. This has driven an arms race between attackers and defenders, resulting in a plethora of different protection and analysis methods. However, it remains difficult to measure the strength of protections, because MATE attackers can reach their goals in many different ways and a universally accepted evaluation methodology does not exist. This survey systematically reviews the evaluation methodologies of papers on obfuscation, a major class of protections against MATE attacks. For 572 papers, we collected 113 aspects of their evaluation methodologies, ranging from sample set types and sizes, over sample treatment, to performed measurements. We provide detailed insights into how the academic state of the art evaluates both the protections and analyses thereon. In summary, there is a clear need for better evaluation methodologies. We identify nine challenges for software protection evaluations, which represent threats to the validity, reproducibility, and interpretation of research results in the context of MATE attacks.

    Engineering Systems of Anti-Repressors for Next-Generation Transcriptional Programming

    The ability to control gene expression in more precise, complex, and robust ways is becoming increasingly relevant in biotechnology and medicine. Synthetic biology has sought to accomplish such higher-order gene regulation through the engineering of synthetic gene circuits, whereby a gene’s expression can be controlled via environmental, temporal, or cellular cues. A typical approach to gene regulation is through transcriptional control, using allosteric transcription factors (TFs). TFs are regulatory proteins that interact with operator DNA elements located in proximity to gene promoters to either compromise or activate transcription. For many TFs, including the ones discussed here, this interaction is modulated by binding to a small molecule ligand for which the TF evolved natural specificity and a related metabolism. This modulation can occur with two main phenotypes: a TF shows the repressor (X+) phenotype if its binding to the ligand causes it to dissociate from the DNA, allowing transcription, while a TF shows the anti-repressor (XA) phenotype if its binding to the ligand causes it to associate to the DNA, preventing transcription. While both functional phenotypes are vital components of regulatory gene networks, anti-repressors are quite rare in nature compared to repressors and thus must be engineered. We first developed a generalized workflow for engineering systems of anti-repressors from bacterial TFs in a family of transcription factors related to the ubiquitous lactose repressor (LacI), the LacI/GalR family. Using this workflow, which is based on a re-routing of the TF’s allosteric network, we engineered anti-repressors in the fructose repressor (anti-FruR – responsive to fructose-1,6-phosphate) and ribose repressor (anti-RbsR – responsive to D-ribose) scaffolds, to complement XA TFs engineered previously in the LacI scaffold (anti-LacI – responsive to IPTG). Engineered TFs were then conferred with alternate DNA binding. 
To demonstrate their utility in synthetic gene circuits, systems of engineered TFs were then deployed to construct transcriptional programs, achieving all of the NOT-oriented Boolean logical operations – NOT, NOR, NAND, and XNOR – in addition to BUFFER and AND. Notably, our gene circuits built using anti-repressors are far simpler in design and, therefore, exert decreased burden on the chassis cells compared to the state of the art, as anti-repressors represent compressed logical operations (gates). Further, we extended this workflow to engineer ligand specificity in addition to regulatory phenotype. Performing the engineering workflow with a fourth member of the LacI/GalR family, the galactose isorepressor (GalS – naturally responsive to D-fucose), we engineered IPTG-responsive repressor and anti-repressor GalS mutants in addition to a D-fucose-responsive anti-GalS TF. These engineered TFs were then used to create BANDPASS and BANDSTOP biological signal processing filters, themselves compressed compared to the state of the art, and open-loop control systems. These provide facile methods for dynamically turning genes 'ON' and 'OFF' during continuous growth in real time. This presents a general advance in gene regulation, moving beyond simple inducible promoters. We then demonstrated the capabilities of our engineered TFs to function in combinatorial logic using a layered logic approach, which currently stands as the state of the art. Using our anti-repressors in layered logic had the advantage of reducing cellular metabolic burden, as we were able to create the fundamental NOT/NOR operations with fewer genetic parts. Additionally, we created more TFs to use in layered logic approaches to prevent cellular cross-talk and minimize the number of TFs necessary to create these gene circuits. Here we demonstrated the successful deployment of our XA-built NOR gate system to create the BUFFER, NOT, NOR, OR, AND, and NAND gates.
The work presented here describes a workflow for engineering (i) allosteric phenotype, (ii) ligand selectivity, and (iii) DNA specificity in allosteric transcription factors. The products of the workflow themselves serve as vital tools for the construction of next-generation synthetic gene circuits and genetic regulatory devices. Further, from the products of the workflow presented here, certain design heuristics can be gleaned, which should better facilitate the design of allosteric TFs in the future, moving toward a semi-rational engineering approach. Additionally, the work presented here outlines a transcriptional programming structure and metrology which can be broadly adapted and scaled for future applications and expansion. Consequently, this thesis presents a means for advanced control of gene expression, with promise to have long-reaching implications in the future.
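Why an anti-repressor "compresses" a logic gate can be seen in a toy response-function model (my own sketch, using the abstract's X+/XA definitions; the Hill parameters are illustrative, not fitted values from the thesis): a classical repressor releases the DNA when it binds ligand, giving a BUFFER, while an anti-repressor binds the DNA when it binds ligand, giving a NOT gate from a single TF:

```python
def hill(x, k=1.0, n=2.0):
    """Fraction of TF bound to ligand at concentration x (Hill curve)."""
    return x**n / (k**n + x**n)


def promoter_output(tf_on_dna):
    """Simple repression model: transcription is blocked in
    proportion to how much TF occupies the operator."""
    return 1.0 - tf_on_dna


def buffer_gate(ligand):
    """Repressor (X+): ligand binding releases the TF from DNA,
    so high ligand -> high output."""
    return promoter_output(1.0 - hill(ligand))


def not_gate(ligand):
    """Anti-repressor (XA): ligand binding drives the TF onto DNA,
    so high ligand -> low output."""
    return promoter_output(hill(ligand))


# At high ligand the X+ circuit is ON and the XA circuit is OFF;
# inverting a signal with an X+ TF instead requires an extra
# inverter stage, which is the burden the abstract's circuits avoid.
```

Composing such single-TF inverters is, in this simplified picture, how NOR-based layered logic reaches the other NOT-oriented gates.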

    A Methodology to Enable Concurrent Trade Space Exploration of Space Campaigns and Transportation Systems

    Space exploration campaigns detail the ways and means to achieve the goals of our human spaceflight programs. Significant strategic, financial, and programmatic investments over long timescales are required to execute them, and therefore must be justified to decision makers. To make an informed down-selection, many alternative campaign designs are presented at the conceptual level, each as a set and sequence of individual missions to perform that meets the campaign's goals and constraints, whether technical or programmatic. Each mission is executed by in-space transportation systems, which deliver either crew or cargo payloads to various destinations. The design of each of these transportation systems is highly dependent on campaign goals, and even small changes in subsystem design parameters can prompt significant changes in the overall campaign strategy. However, the current state of the art describes campaign and vehicle design processes that are generally performed independently, which limits the ability to assess these sensitive impacts. The objective of this research is to establish a methodology for space exploration campaign design that represents transportation systems as a collection of subsystems and integrates their design process to enable concurrent trade space exploration. More specifically, the goal is to identify existing campaign and vehicle design processes to use as a foundation for improvement and eventual integration. In the past two decades, researchers have adapted terrestrial logistics and supply chain optimization processes to the space campaign design problem by accounting for the challenges that accompany space travel. Fundamentally, a space campaign is formulated as a network design problem where destinations, such as orbits or surfaces of planetary bodies, are represented as nodes with the routes between them as arcs. The objective of this design problem is to optimize the flow of commodities within the network using the available transport systems.
Given the dynamic nature and the number of commodities involved, each campaign can be modeled as a time-expanded, generalized multi-commodity network flow problem and solved using a mixed-integer programming algorithm. To address the challenge of modeling complex concepts of operations (ConOps), this formulation was extended to include paths as sets of arcs, further enabling the inclusion of vehicle stacks and payload transfers in the campaign optimization process. Further, given this research's focus on transportation systems, the typical fixed orbital nodes in the logistics network are modified to represent ranges of orbits, categorized by their characteristic energy. This enables the vehicle design process to vary each orbit in the mission to find the best one per vehicle. By extension, once integrated, the arc costs of dV and dT are updated each iteration. Once campaign goals and external constraints are included, the formulated campaign design process generates alternatives at the conceptual level, where each one identifies the optimal set and sequence of missions to perform. Representing transportation systems as a collection of subsystems introduces challenges in the design of each vehicle, with a high degree of coupling between each subsystem as well as the driving mission. Additionally, sizing of each subsystem can have many inputs and outputs linked across the system, resulting in a complex multidisciplinary analysis and optimization problem. By leveraging the ontology within the Dynamic Rocket Equation Tool, DYREQT, this problem can be solved rapidly by defining each system as a hierarchy of elements and subelements, the latter corresponding to external subsystem-level sizing models. DYREQT also enables the construction of individual missions as a series of events, which can be directly driven and generated by the mission set found by the campaign optimization process.
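The time-expanded network idea can be illustrated with a toy example (my own sketch: locations, arc costs, and the single-commodity shortest-path solver are all assumptions for illustration; the actual formulation is a multi-commodity flow solved by mixed-integer programming):

```python
import heapq

# Each node is a (location, time) pair; arcs carry a cost
# (e.g. dV for a transfer, zero for waiting in place).
arcs = {
    ("Earth", 0): [(("LEO", 1), 9.5), (("Earth", 1), 0.0)],
    ("Earth", 1): [(("LEO", 2), 9.5)],
    ("LEO", 1): [(("Moon", 2), 4.1), (("LEO", 2), 0.0)],
    ("LEO", 2): [(("Moon", 3), 4.1)],
}


def cheapest(src, dst_loc):
    """Least-cost route from src to any (dst_loc, t) node, via
    Dijkstra on the time-expanded graph."""
    pq, seen = [(0.0, src)], set()
    while pq:
        cost, node = heapq.heappop(pq)
        if node[0] == dst_loc:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in arcs.get(node, []):
            heapq.heappush(pq, (cost + c, nxt))
    return float("inf")
```

Expanding nodes over time is what lets waiting, staging, and vehicle reuse appear as ordinary arcs in the optimization.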
This process produces sized vehicles iteratively by using the mission input, the subsystem-level sizing models, and the ideal rocket equation. The literature review of campaign and vehicle design processes identifies the different pieces of the overall methodology, but not its structure. The specific iterative solver, the corresponding convergence criteria, and the initialization scheme are the primary areas of experimentation in this thesis. Using NASA's reference 3-element Human Landing System campaign, the results of these experiments show that the methodology performs best with the vehicle sizing and synthesis process initializing the iteration and a path guess that minimizes dV. Further, a converged solution is found faster using nonlinear Gauss-Seidel fixed-point iteration over Jacobi, and a set of convergence criteria that covers vehicle masses and mission data. To show improvement over the state of the art, and how it enables concurrent trade studies, this methodology is used at scale in a demonstration using NASA's Design Reference Architecture 5.0. The LH2 Nuclear Thermal Propulsion (NTP) option is traded with NH3 and H2O at the vehicle level as a way to show the impacts of alternative propellants on vehicle sizing and campaign strategy. Martian surface stay duration is traded at the campaign level through two options: long-stay and short-stay. The methodology was able to produce four alternative campaigns over the course of two weeks, which provided data about the launch and aggregation strategy, mission profiles, high-level figures of merit, and subsystem-level vehicle sizes for each alternative. As expected, with their lower specific impulses, alternative NTP propellants showed significant growth in the overall mass required to execute each campaign, subsequently reflected in the number of drop tanks and launches.
Further, the short-stay campaign option showed a similar overall required mass compared to its long-stay counterpart, but higher overall costs, even given the fewer elements required. Both trade studies supported the overall hypothesis that integrating the campaign and vehicle design processes addresses the coupling between them and directly shows the impacts of their sensitivities on each other. As a result, the research objective was fulfilled by producing a methodology that addresses the key gaps identified in the current state of the art.
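The coupled sizing loop the methodology converges can be sketched with assumed numbers (this is not DYREQT and the Isp, delta-v, payload, and tank fraction below are illustrative): propellant mass from the ideal rocket equation depends on total dry mass, and tank mass depends on propellant mass, so the two are resolved by fixed-point iteration in the spirit of the nonlinear Gauss-Seidel scheme described above:

```python
from math import exp

G0 = 9.80665     # standard gravity [m/s^2]
ISP = 450.0      # specific impulse [s], assumed LH2-class
DV = 4000.0      # mission delta-v [m/s], assumed
PAYLOAD = 20000.0  # payload mass [kg], assumed
TANK_FRAC = 0.1  # tank mass per kg of propellant, assumed


def size_vehicle(tol=1e-6, max_iter=100):
    """Fixed-point iteration on the propellant/tank mass coupling.
    Ideal rocket equation: mass ratio m0/mf = exp(dV / (g0 * Isp))."""
    mr = exp(DV / (G0 * ISP))
    m_prop, m_tank = 0.0, 0.0
    for _ in range(max_iter):
        m_dry = PAYLOAD + m_tank          # dry mass includes tanks
        m_prop_new = m_dry * (mr - 1.0)   # propellant from m0 = mr * mf
        m_tank_new = TANK_FRAC * m_prop_new
        if abs(m_prop_new - m_prop) < tol:
            break
        m_prop, m_tank = m_prop_new, m_tank_new
    return m_prop_new, m_tank_new


m_prop, m_tank = size_vehicle()
# Heavier tanks demand more propellant, which demands heavier tanks;
# the loop converges here because TANK_FRAC * (mr - 1) < 1.
```

A lower Isp (as with the alternative NTP propellants traded above) raises the mass ratio and inflates the converged propellant mass, which is the campaign-level growth the demonstration observed.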

    Mathematical Problems in Rock Mechanics and Rock Engineering

    With increasing requirements for energy, resources and space, rock engineering projects are being constructed more often and are operated in large-scale environments with complex geology. Meanwhile, rock failures and rock instabilities occur more frequently, and severely threaten the safety and stability of rock engineering projects. It is well recognized that rock has multi-scale structures and involves multi-scale fracture processes. Meanwhile, rocks are commonly subjected simultaneously to complex static stress and strong dynamic disturbance, providing a hotbed for the occurrence of rock failures. In addition, there are many multi-physics coupling processes in a rock mass. It is still difficult to understand these rock mechanics problems and to characterize rock behavior under complex stress conditions, multi-physics processes, and multi-scale changes. Therefore, our understanding of rock mechanics and the prevention and control of failure and instability in rock engineering needs to be furthered. The primary aim of this Special Issue, "Mathematical Problems in Rock Mechanics and Rock Engineering", is to bring together original research discussing innovative efforts regarding in situ observations, laboratory experiments and theoretical, numerical, and big-data-based methods to overcome the mathematical problems related to rock mechanics and rock engineering. It includes 12 manuscripts that illustrate valuable efforts to address mathematical problems in rock mechanics and rock engineering.