
    Scutoids are a geometrical solution to three-dimensional packing of epithelia

    As animals develop, tissue bending contributes to shaping organs into complex three-dimensional structures. However, the architecture and packing of curved epithelia remain largely unknown. Here we show by means of mathematical modelling that cells in bent epithelia can undergo intercalations along the apico-basal axis. This phenomenon forces cells to have different neighbours in their basal and apical surfaces. As a consequence, epithelial cells adopt a novel shape that we term “scutoid”. The detailed analysis of diverse tissues confirms that the generation of apico-basal intercalations between cells is a common feature during morphogenesis. Using biophysical arguments, we propose that scutoids make possible the minimization of the tissue energy and stabilize three-dimensional packing. Hence, we conclude that scutoids are one of nature's solutions to achieve epithelial bending. Our findings pave the way to understanding the three-dimensional organization of epithelial organs.
    Funding: España, Ministerio de Ciencia y Tecnología, grants BFU2013-48988-C2-1-P and BFU2016-8079
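The abstract's defining property of a scutoid — a cell whose apical and basal neighbour sets differ — can be checked directly from tissue connectivity data. A minimal sketch (the neighbour maps below are invented illustrative data, not from the paper):

```python
# Flag scutoid-like cells by comparing apical and basal neighbour sets.

def find_scutoids(apical, basal):
    """Return cells whose apical and basal neighbour sets differ.

    apical, basal: dict mapping cell id -> set of neighbouring cell ids
    on the apical and basal surfaces, respectively.
    """
    cells = set(apical) | set(basal)
    return {c for c in cells
            if apical.get(c, set()) != basal.get(c, set())}

# Toy tissue: cell "B" loses neighbour "A" and gains neighbour "D" between
# its apical and basal surfaces, i.e. an apico-basal intercalation occurred.
apical = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
basal  = {"A": {"C"},      "B": {"C", "D"}, "C": {"A", "B", "D"}, "D": {"B", "C"}}

print(sorted(find_scutoids(apical, basal)))  # ['A', 'B', 'D']
```

Cells "A", "B" and "D" are flagged because they participate in the intercalation; "C" keeps the same neighbours on both surfaces.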

    Modeling movement probabilities within heterogeneous spatial fields

    Recent efforts have focused on modeling the internal structure of space-time prisms to estimate the unequal movement opportunities within. This paper further develops this area of research by formulating a model for field-based time geography that can be used to probabilistically model movement opportunities conditioned on underlying heterogeneous spatial fields. The development of field-based time geography draws heavily on well-established methods for cost-distance analysis, common to most GIS software packages. The field-based time geographic model is compared with two alternative approaches that are commonly employed to estimate probabilistic space-time prisms: (truncated) Brownian bridges and time geographic kernel density estimation. Using simulated scenarios, it is demonstrated that only field-based time geography captures underlying heterogeneity in output movement probabilities. Field-based time geography has significant potential in the field of wildlife tracking (an example is provided), where Brownian bridge models are preferred, but fail to adequately capture underlying barriers to movement.
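The cost-distance construction the abstract refers to can be sketched as two Dijkstra accumulated-cost surfaces: a cell lies inside the space-time prism when the cost from the origin plus the cost to the destination fits within the time budget. A minimal sketch under simplifying assumptions (4-connected raster, edge cost as the mean of the two cell costs — not the paper's exact formulation):

```python
# Field-based space-time prism on a raster cost field via two cost-distance passes.
import heapq

def cost_distance(grid, source):
    """Dijkstra accumulated-cost surface from `source` over a 2D cost grid."""
    rows, cols = len(grid), len(grid[0])
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (grid[r][c] + grid[nr][nc])  # edge cost
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

def prism(grid, origin, dest, budget):
    """Cells reachable within the time budget given the heterogeneous field."""
    from_o = cost_distance(grid, origin)
    to_d = cost_distance(grid, dest)
    return {cell for cell in from_o
            if from_o[cell] + to_d.get(cell, float("inf")) <= budget}

# A high-cost barrier down the middle column shrinks the prism on that side,
# which a homogeneous Brownian-bridge model would not capture.
field = [[1, 9, 1],
         [1, 9, 1],
         [1, 1, 1]]
print(sorted(prism(field, (0, 0), (2, 2), budget=6)))
```

The barrier column forces the reachable set around the bottom of the grid, illustrating how the heterogeneous field conditions the movement opportunities.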

    Individual accessibility and segregation on activity spaces: an agent-based modelling approach

    One of the main challenges of cities is the increasing social inequality imposed by the way population groups, jobs, amenities and services, as well as the transportation infrastructure, are distributed across urban space. In this thesis, the concepts of accessibility and segregation are used to study these inequalities. They can be defined as the interaction of individuals with urban opportunities and with individuals from other population groups, respectively. Interactions are made possible by people’s activities and movement within a city, which characterise accessibility and segregation as inherently dynamic and individual-based concepts. Nevertheless, they are largely studied from a static and place-based perspective. This thesis proposes an analytical and exploratory framework for studying individual-based accessibility and segregation in cities using individuals’ travel trajectories in space and time. An agent-based simulation model was developed to generate individual trajectories dynamically, employing standard datasets such as census and OD matrices and allowing for multiple perspectives of analysis by grouping individuals based on their attributes. The model’s ability to simulate people’s trajectories realistically was validated through systematic sensitivity tests and statistical comparison with real-world trajectories from Rio de Janeiro, Brazil, and travel times from London, UK. The approach was applied to two exploratory studies: São Paulo, Brazil, and London, UK. The former revealed inequalities in accessibility by income, education and gender, and also unveiled within-group differences beyond place-based patterns. The latter explored ethnic segregation, unveiling patterns of potential interaction among ethnic groups in the urban space beyond their residential and workplace locations.
These studies demonstrated how inequality in accessibility and segregation can be studied both at large metropolitan scales and at a fine level of detail, using standard datasets, with modest computational requirements and ease of operationalisation. The proposed approach opens up avenues for the study of the complex dynamics of interaction of urban populations in a variety of urban contexts.
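The individual-based view of segregation described above can be illustrated with a simple co-presence measure over simulated trajectories: for each person, the share of others sharing the same space-time cell who belong to a different group. This is a hedged toy index over invented data, not the thesis' actual formulation:

```python
# A spatio-temporal exposure index computed from individual trajectories.
from collections import defaultdict

def exposure(trajectories, groups):
    """Mean share of co-present others from a *different* group, per person.

    trajectories: dict person -> list of (time, cell) visits
    groups: dict person -> group label
    """
    occupancy = defaultdict(set)          # (time, cell) -> persons present
    for person, visits in trajectories.items():
        for tc in visits:
            occupancy[tc].add(person)

    index = {}
    for person, visits in trajectories.items():
        shares = []
        for tc in visits:
            others = occupancy[tc] - {person}
            if others:
                diff = sum(groups[o] != groups[person] for o in others)
                shares.append(diff / len(others))
        index[person] = sum(shares) / len(shares) if shares else 0.0
    return index

traj = {"p1": [(0, "home"), (1, "cbd")],
        "p2": [(0, "home"), (1, "cbd")],
        "p3": [(1, "cbd")]}
grp = {"p1": "A", "p2": "B", "p3": "B"}
print(exposure(traj, grp))
```

Because the index is computed along each person's trajectory rather than at their residence, it captures potential inter-group interaction beyond residential and workplace locations, which is the core of the activity-space perspective.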

    Multiscale structure and mechanics of collagen

    While we are 70% water, in a very real sense collagen is the stuff we are made of. It is the most abundant protein in multicellular organisms, such as ourselves, making up roughly 25% of our total protein content. If you have ever wondered how the human body holds together all its different parts in shape, here is your answer: it is largely due to collagen. Collagen is the main ingredient of so-called connective tissue which serves to hold the various parts of the body together. In fact, without collagen we would quite literally fall apart. Some genetic diseases, such as Osteogenesis Imperfecta (brittle bone disease) and Ehlers Danlos syndrome (characterized by abnormally stretchy skin and loose joints), are known to be the result of defective collagen. One can see why it is necessary to achieve a good understanding of how the strength or mechanical properties of collagen come about, and how it contributes to the state of well-being of an individual. This is essentially the aim of my research. Despite its relative abundance and about a century of research by many scientists, many features of collagen remain incompletely understood. For example, it has been difficult to ascertain the precise structure of collagenous tissue. Nevertheless, a lot of progress has been made during the past couple of decades. Understanding the internal structure of collagen is important since we expect its strength to largely depend on its structure. The tools that physicists use to figure out the structure of biological tissue include advanced microscopes, such as the Atomic Force Microscope and the Electron Microscope, and X-ray diffraction analysis, in which one tries to determine the collagen structure at very small length scales (a billionth of a meter) by observing the fringe patterns made when an X-ray beam is scattered by the atoms that make up collagen (see Figure 1.2). These studies have revealed that there are 28 different types of collagen.
The types are numbered with roman numerals I - XXVIII. In my work I have focused on the most common form of collagen, namely Type I collagen, which occurs mostly in scar tissue, bone and tendon. Tissue made from Type I collagen is made up of atoms that are arranged in such a way that a clear hierarchical structure emerges. This hierarchical organization is illustrated in Figure 1.1. The hierarchy can be described as follows: approximately 10-20 atoms are grouped together to form amino acids (there are 21 types of amino acids which, incidentally, nature also uses to encode our genetic information). About 1000 amino acids of mainly two types, Glycine and Proline, are strung together in a special repeating sequence to form polymers, or strands, called alpha helices (which, as the name implies, are shaped like helices, the shape of a spring) which are left-handed. Every three alpha helices associate with one another in a braided conformation that is a right-handed triple-helix called tropocollagen (see Figure 1.1 for an image). This is the basic building unit of collagen tissue. Certain cells of our bodies, called fibroblasts, are responsible for manufacturing tropocollagen molecules. Inside the body, after a fibroblast makes a tropocollagen molecule, it extrudes the molecule and aids in laying the tropocollagen molecule among other already existing tropocollagen molecules in order to form a long fine bundle known as a fibril. This process can also occur outside the body in the laboratory, unaided by fibroblasts but merely driven by thermal agitation at a particular range of temperatures. This is an example of a process known as self-assembly. The arrangement of tropocollagen molecules within a fibril is ‘staggered’, somewhat similar to the arrangement of bricks in a brick wall. In the body, many fibrils occur lying side by side and bundled together to form fibers which are then cross-linked to form part of the connective tissue.
Special proteins, known as glycosaminoglycans (GAGs), are responsible for binding the fibrils together. For fibrils self-assembled outside the body and in the absence of GAGs, fibers do not form, but rather a network of fibrils with a well-defined diameter emerges. While the sequence of atoms that constitute collagen alpha-helices is precisely known, the precise arrangement of tropocollagen molecules within a fibril is difficult to ascertain by experimental means. This is because tropocollagen is a very light and flexible polymer, hence it is constantly changing its bent conformation in response to the erratic bombardment of fast-moving atoms of the surrounding medium. This happens even within the closely packed environment of a fibril. Indeed, as the temperature of the surrounding medium increases, the atoms move even faster causing the tropocollagen molecule to wriggle even more. The consequence of this behaviour is that the molecule’s resistance to a stretching force increases with increasing temperature, just as for a rubber band when it is heated, for example. Therefore the apparent randomness of the tropocollagen molecule’s wriggling form and motion affects the strength of collagenous tissue at its various levels within its hierarchical organization. In order to quantify this behaviour, special mathematical or computational models representing tropocollagen need to be proposed. Quantum physics gives a precise mathematical description of the physical laws that govern how moving atoms interact with each other. One might think that the precise knowledge of all these atoms and how exactly they interact with each other should straightforwardly lead to the explanation and prediction of all possible phenomena in Nature. This may be true, in principle.
However, since all these interactions have been expressed in terms of mathematical theories and equations whose solutions can be difficult to compute, especially when such large numbers of atoms are simultaneously involved, physicists propose simplistic models called ‘coarse-grained models’ that are easier to manage computationally and that are assumed to encapsulate the essential properties of groups of these atoms. They go on to show by simulations of these models that the phenomena exhibited by these models do not necessarily have to depend on the internal details of the atoms they represent (nor on their interactions). In this thesis, we developed a coarse-grained model of the tropocollagen molecule that captures the essential features of tropocollagen. These features include its flexibility, its volume, and its tendency to stick to other tropocollagen molecules that come near it. We then generated an entire collagen fibril using many copies of this simplistic model of the tropocollagen molecule and attaching (or cross-linking) them to each other at specific points on the molecules. Then we simulated the entire cross-linked assembly of tropocollagen molecules on a computer at a particular temperature and then attempted to estimate the strength of the fibril. In our coarse-grained model, each tropocollagen molecule was transformed into a chain of about 100 identical rigid sticks (called bonds) but ball-jointed at their ends to one another in a linear sequence (see Figure 7.1). The ease with which the joints could rotate depended on a single quantity known as the bending stiffness of tropocollagen, which has been measured by experiments in the laboratory. This model is called the ‘discrete worm-like chain’. Because in reality the tropocollagen molecule is constantly wriggling, it is difficult to follow precisely its full motion in time, even on a computer. So we instead chose a statistical treatment.
This is where statistical physics and Monte Carlo methods become very useful. Simply put, Monte Carlo methods repeatedly generate sets of random numbers and use them to propose different configurations of the tropocollagen monomers every time. This is somewhat like throwing dice to obtain different numbers every time they are thrown. However, the Monte Carlo method that is employed should take care not to change the lengths of the bonds of the simplistic model, and it must never destroy any of the cross-links in the system. Prior to this work it has not been possible to satisfy all these constraints during the simulation of an arbitrarily cross-linked assembly of coarse-grained polymers. In this thesis, however, we invented a Monte Carlo method that overcomes these issues. We named the method ‘TRACTRIX’, because it is based on the construction of a special curve which in mathematics is called a tractrix. The tractrix is the answer to the following question: "Given two points linked by a rigid joint, if one point moves along a given curve, how does the other point move?" (See Figure 4.3 for an example of what a tractrix looks like.) The details of this method are described in Chapter 4. In Chapter 5, we demonstrated that TRACTRIX works, that is, it is accurate and trustworthy, because it reproduced expected results for certain model polymer networks for which we already knew the exact answers. TRACTRIX was then used to simulate our model of the collagen fibril, and we were thus able to estimate the strength of the collagen fibril. Other interesting results were also established, such as the shape that the fibril finally settled into, and also the average conformation of its constituent tropocollagen molecules. Our model and simulations set the stage for further investigation into the mechanics and other properties of collagen fibrils. Also this exciting new method TRACTRIX promises to be useful in simulating many different types of biological networks, not only collagen.
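The discrete worm-like chain described above can be sketched in a few lines. This is a minimal 2D Metropolis Monte Carlo with pivot moves that rotate the chain tail about a random joint, so every bond length is preserved by construction; it is not the TRACTRIX method (which additionally handles cross-links), only the plain worm-like-chain baseline it builds on:

```python
# Metropolis Monte Carlo of a 2D discrete worm-like chain with rigid bonds.
import math, random

def bending_energy(pts, kappa):
    """E = kappa * sum_i (1 - cos theta_i) over interior joints."""
    e = 0.0
    for i in range(1, len(pts) - 1):
        ax, ay = pts[i][0] - pts[i-1][0], pts[i][1] - pts[i-1][1]
        bx, by = pts[i+1][0] - pts[i][0], pts[i+1][1] - pts[i][1]
        cos_t = (ax*bx + ay*by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        e += kappa * (1.0 - cos_t)
    return e

def pivot_step(pts, kappa, beta, max_angle=0.3):
    """One Metropolis pivot move; returns the (possibly unchanged) chain."""
    j = random.randrange(1, len(pts) - 1)            # pivot joint
    phi = random.uniform(-max_angle, max_angle)
    c, s = math.cos(phi), math.sin(phi)
    px, py = pts[j]
    new = pts[:j + 1] + [(px + c*(x-px) - s*(y-py),   # rigid rotation of tail
                          py + s*(x-px) + c*(y-py)) for x, y in pts[j+1:]]
    if random.random() < math.exp(-beta * (bending_energy(new, kappa)
                                           - bending_energy(pts, kappa))):
        return new
    return pts

random.seed(1)
chain = [(float(i), 0.0) for i in range(20)]         # 19 unit bonds, straight
for _ in range(2000):
    chain = pivot_step(chain, kappa=5.0, beta=1.0)

# Rotations are isometries, so the rigid bonds survive the whole simulation:
lengths = [math.hypot(chain[i+1][0]-chain[i][0], chain[i+1][1]-chain[i][1])
           for i in range(len(chain)-1)]
print(round(min(lengths), 6), round(max(lengths), 6))  # 1.0 1.0
```

A larger bending stiffness `kappa` makes the chain straighter on average, which is exactly how the measured stiffness of tropocollagen enters the model.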

    Detection of breast cancer with electrical impedance mammography

    Electrical Impedance Tomography (EIT) is a medical imaging technique that reconstructs the internal electrical conductivity distribution of a body from impedance data measured on the body surface, and Electrical Impedance Mammography (EIM) is the technique that applies EIT to breast cancer detection. The use of EIM for breast cancer identification is highly desirable because it is a non-invasive and low-cost imaging technology. EIM has the potential to detect early-stage cancer; however, challenges still hinder its adoption as a routine healthcare tool. There are three major groups of obstacles. One is the hardware design, which includes the selection of electronic components, electrode-skin contacting methods, etc. Second is theoretical problems such as electrode configurations, image reconstruction and regularization methods. Third is the development of analysis methods and generation of a cancerous tissue database. Research reported in this thesis strives to understand these problems and aims to provide possible solutions to build a clinical EIM system. The studies are carried out in four parts. First the functionalities of the Sussex Mk4 EIM system have been studied. Sensitivity of the system was investigated to find out its strengths and weaknesses. Work was then done on image reconstruction and regularization methods in order to enhance the system’s robustness to noise, and to balance the reconstructed conductivity distribution throughout the reconstructed object. Then a novel cancer diagnosis technique was proposed. It was developed based on the electrical properties of human breast tissue and the behaviour of systematic noise, to provide repeatable results for each patient. Finally, previous EIM systems were evaluated to identify their major problems. Based on sensitivity analysis, an optimal combined electrode configuration has been proposed to improve sensitivity.
The system has been developed and has produced meaningful clinical images. The work makes significant contributions to society. This novel cancer diagnosis method has high accuracy for cancer identification. The combined electrode configuration has also provided flexibility in the design of current driving and voltage receiving patterns, so the sensitivity of the EIM system can be greatly improved.
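The regularized image reconstruction step mentioned above is, in its simplest linearized form, a Tikhonov-regularized least-squares problem. A hedged sketch with a toy sensitivity matrix (not the Sussex Mk4 system's actual forward model or regularization scheme):

```python
# Tikhonov-regularised linear reconstruction of a conductivity perturbation.
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))            # toy sensitivity (Jacobian) matrix
x_true = np.zeros(20); x_true[5] = 1.0   # one "anomalous conductivity" pixel
b = A @ x_true + 0.01 * rng.normal(size=40)  # noisy boundary measurements

x_hat = tikhonov(A, b, lam=0.1)
print(int(np.argmax(np.abs(x_hat))))     # anomaly recovered at index 5
```

The regularization weight `lam` trades noise robustness against resolution, which is why tuning it against the system's noise behaviour matters for repeatable patient results.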

    Insights into Rockfall from Constant 4D Monitoring

    Current understanding of the nature of rockfall and their controls stems from the capabilities of slope monitoring. These capabilities are fundamentally limited by the frequency and resolution of data that can be captured. Various assumptions have therefore arisen, including that the mechanisms that underlie rockfall are instantaneous. Clustering of rockfall across rock faces and sequencing through time have been observed, sometimes with an increase in pre-failure deformation and pre-failure rockfall activity prior to catastrophic failure. An inherent uncertainty, however, lies in whether the behaviour of rockfall monitored over much shorter time intervals (Tint) is consistent with that previously monitored at monthly intervals, including observed failure mechanisms, their response to external drivers, and pre-failure deformation. To address the limitations of previous studies on this topic, 8 987 terrestrial laser scans have been acquired over 10 months from continuous near-real time monitoring of an actively failing coastal rock slope (Tint = 0.5 h). A workflow has been devised that automatically resolves depth changes at the surface to 0.03 m. This workflow filters points with high positional uncertainty and detects change in 3D, with both approaches tailored to natural rock faces, which commonly feature sharp edges and partially occluded areas. Analysis of the resulting rockfall inventory, which includes > 180 000 detachments, shows that the proportion of rockfall < 0.1 m3 increases with more frequent surveys for Tint < ca. 100 h, but this trend does not continue for surface comparison over longer time intervals. Therefore, and advantageously, less frequent surveys will derive the same rockfall magnitude-frequency distribution if captured at ca. 100 h intervals as compared to one month or even longer intervals. 
The shape and size of detachments show that they are shallower and smaller than the observable rock mass structure, but appear to be limited in size and extent by jointing. Previously explored relationships between rockfall timing and environmental and marine conditions do not appear to apply to this inventory; however, significant relationships between rockfall and rainfall, temperature gradient and tides are demonstrated over short timescales. Pre-failure deformation and rockfall activity are observed in the footprint of incipient rockfall. Rockfall activity occurs predominantly within the same ca. 100 h timescale observed in the size-distribution analysis, and accelerated deformation is common for the largest rockfall during the final 2 h before block detachment. This study provides insights into the nature and development of rockfall during the period prior to detachment, and the controls upon it. This holds considerable implications for our understanding of rockfall and the improvement of future rockfall monitoring.
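The magnitude-frequency analysis referred to above typically fits a power-law exponent to the rockfall volume distribution. A sketch using the standard maximum-likelihood estimator on synthetic volumes (illustrative only; the thesis inventory and its fitted exponent are not reproduced here):

```python
# Maximum-likelihood fit of a rockfall magnitude-frequency power-law exponent.
import math, random

def powerlaw_alpha(volumes, vmin):
    """MLE exponent for p(v) ~ v^-alpha above vmin (Clauset-style estimator)."""
    tail = [v for v in volumes if v >= vmin]
    return 1.0 + len(tail) / sum(math.log(v / vmin) for v in tail)

random.seed(42)
vmin, alpha_true = 0.001, 2.0            # m^3; exponent chosen for the demo
# Inverse-transform sampling from a continuous power law:
volumes = [vmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(50000)]

print(round(powerlaw_alpha(volumes, vmin), 2))  # recovers ~2.0
```

The finding that surveys at ca. 100 h intervals already yield the full distribution means this fit stabilises without continuous 0.5 h monitoring, which is the practical payoff noted in the abstract.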

    Physics-based motion planning for grasping and manipulation

    This thesis develops a series of knowledge-oriented physics-based motion planning algorithms for grasping and manipulation in cluttered and uncertain environments. The main idea is to use high-level knowledge-based reasoning to define the manipulation constraints that govern how the robot should interact with the objects in the environment. These interactions are modeled by incorporating the physics-based model of rigid body dynamics in planning. The first part of the thesis is focused on the techniques to integrate the knowledge with physics-based motion planning. The knowledge is represented in terms of ontologies, and a Prolog-based knowledge inference process is introduced that defines the manipulation constraints. These constraints are used in the state validation procedure of sampling-based kinodynamic motion planners. The state propagator of the motion planner is replaced by a physics engine that takes care of the kinodynamic and physics-based constraints. To make the interaction human-like, a low-level physics-based reasoning process is introduced that dynamically varies the control bounds by evaluating the physical properties of the objects. As a result, power-efficient motion plans are obtained. Furthermore, a framework has been presented to incorporate linear temporal logic within physics-based motion planning to handle complex temporal goals. The second part of this thesis develops physics-based motion planning approaches to plan in cluttered and uncertain environments. The uncertainty is considered in 1) objects’ poses due to sensing and due to complex robot-object or object-object interactions; 2) uncertainty in the contact dynamics (such as friction coefficient); 3) uncertainty in robot controls.
The solution is framed with sampling-based kinodynamic motion planners that solve the problem in open loop, i.e., uncertainty is considered while planning and the solution is computed in such a way that it successfully moves the robot from the start to the goal configuration even if there is uncertainty in the system. To implement the above-stated approaches, a knowledge-oriented physics-based motion planning tool is presented. It is developed by extending The Kautham Project, a C++ based tool for sampling-based motion planning. Finally, the current research challenges and future research directions to extend the above-stated approaches are discussed.
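The knowledge-derived state-validation idea can be illustrated with a toy rule set: facts about object types decide which robot-object contacts a sampled state may contain. This is a hedged sketch in the spirit of the thesis (the object types, rules, and function are invented; the actual system uses ontologies, Prolog inference, and The Kautham Project):

```python
# Knowledge-based state-validity check for a sampling-based planner.

KNOWLEDGE = {  # ontology-like facts about object types (toy data)
    "wall":  {"fixed": True,  "pushable": False},
    "can":   {"fixed": False, "pushable": True},
    "glass": {"fixed": False, "pushable": False},  # fragile: no contact allowed
}

def state_valid(contacts, knowledge=KNOWLEDGE):
    """A sampled state is valid iff every robot-object contact is permitted:
    contact with fixed or non-pushable objects invalidates the state."""
    for obj_type in contacts:
        facts = knowledge[obj_type]
        if facts["fixed"] or not facts["pushable"]:
            return False
    return True

print(state_valid(["can"]))            # pushing a can is allowed -> True
print(state_valid(["can", "glass"]))   # touching the glass -> False
```

In the full pipeline such checks gate the states proposed by the planner, while a physics engine propagates the dynamics of the permitted interactions.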

    Simulated tsunami inundation for a range of Cascadia megathrust earthquake scenarios at Bandon, Oregon, USA

    Characterizations of tsunami hazards along the Cascadia subduction zone hinge on uncertainties in megathrust rupture models used for simulating tsunami inundation. To explore these uncertainties, we constructed 15 megathrust earthquake scenarios using rupture models that supply the initial conditions for tsunami simulations at Bandon, Oregon. Tsunami inundation varies with the amount and distribution of fault slip assigned to rupture models, including models where slip is partitioned to a splay fault in the accretionary wedge and models that vary the updip limit of slip on a buried fault. Constraints on fault slip come from onshore and offshore paleoseismological evidence. We rank each rupture model using a logic tree that evaluates a model's consistency with geological and geophysical data. The scenarios provide inputs to a hydrodynamic model, SELFE, used to simulate tsunami generation, propagation, and inundation on unstructured grids with < 5-15 m resolution in coastal areas. Tsunami simulations delineate the likelihood that Cascadia tsunamis will exceed mapped inundation lines. Maximum wave elevations at the shoreline varied from ~4 m to 25 m for earthquakes with 9-44 m slip and Mw 8.7-9.2. Simulated tsunami inundation agrees with sparse deposits left by the A.D. 1700 and older tsunamis. Tsunami simulations for large (22-30 m slip) and medium (14-19 m slip) splay fault scenarios encompass 80%-95% of all inundation scenarios and provide reasonable guidelines for land-use planning and coastal development. The maximum tsunami inundation simulated for the greatest splay fault scenario (36-44 m slip) can help to guide development of local tsunami evacuation zones.
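The logic-tree ranking mentioned above amounts to multiplying per-branch consistency weights for each scenario and normalising. A minimal sketch with invented branch names and weights (the actual study's branches and values are not reproduced):

```python
# Ranking rupture scenarios by combined logic-tree branch weights.
import math

def logic_tree_weights(scenarios):
    """Multiply each scenario's branch weights and normalise to sum to 1."""
    raw = {name: math.prod(branches) for name, branches in scenarios.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

scenarios = {  # hypothetical weights: (slip-model branch, updip-limit branch)
    "splay_large":  [0.5, 0.8],
    "splay_medium": [0.5, 0.6],
    "buried_deep":  [0.3, 0.4],
}
weights = logic_tree_weights(scenarios)
print(max(weights, key=weights.get))   # highest-ranked scenario
```

The normalised weights let the simulated inundation lines be read probabilistically, which is how the study turns 15 scenarios into hazard guidance.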