
    Tessellations and Pattern Formation in Plant Growth and Development

    The shoot apical meristem (SAM) is a dome-shaped collection of cells at the apex of growing plants from which all above-ground tissue ultimately derives. In Arabidopsis thaliana (thale cress), a small flowering weed of the Brassicaceae family (related to mustard and cabbage), the SAM typically contains some three to five hundred cells that range from five to ten microns in diameter. These cells are organized into several distinct zones that maintain their topological and functional relationships throughout the life of the plant. As the plant grows, organs (primordia) that develop into new shoots, leaves, and flowers form on its surface flanks in a phyllotactic pattern. Cross-sections through the meristem reveal a pattern of polygonal tessellation that is suggestive of Voronoi diagrams derived from the centroids of cellular nuclei. In this chapter we explore some of the properties of these patterns within the meristem and examine the applicability of simple, standard mathematical models of their geometry.
    Comment: Originally presented at: "The World is a Jigsaw: Tessellations in the Sciences," Lorentz Center, Leiden, The Netherlands, March 200
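    The Voronoi construction mentioned above can be sketched computationally: every location in a cross-section is assigned to the nearest nucleus centroid, and the resulting regions tile the plane. A minimal discrete version (hypothetical seed coordinates; brute-force nearest-seed assignment rather than a production Voronoi routine):

```python
import random

def discrete_voronoi(seeds, width, height):
    """Label each grid cell with the index of its nearest seed
    (squared Euclidean distance; ties go to the lower index)."""
    labels = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            labels[y][x] = min(
                range(len(seeds)),
                key=lambda i: (seeds[i][0] - x) ** 2 + (seeds[i][1] - y) ** 2,
            )
    return labels

random.seed(0)
# hypothetical nuclei centroids inside a 40x40 cross-section
nuclei = [(random.uniform(0, 40), random.uniform(0, 40)) for _ in range(8)]
cells = discrete_voronoi(nuclei, 40, 40)
# area (in grid cells) of each polygonal region of the tessellation
areas = [sum(row.count(i) for row in cells) for i in range(len(nuclei))]
```

    The regions partition the grid exactly, which is the defining property used when comparing the observed cell walls against the Voronoi prediction.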

    Analysis of Online-Delaunay Navigation for Time Sensitive Targeting

    Given the drawbacks of leaving time-sensitive targeting (TST) strictly to humans, there is value in investigating alternative approaches to TST operations that employ autonomous systems. This paper accomplishes five things. First, it proposes a short-hop abbreviated routing paradigm (SHARP) - based on Delaunay triangulations (DT), ad-hoc communication, and autonomous control - for recognizing and engaging TSTs that, in theory, will improve upon persistence, volume of influence, autonomy, range, and situational awareness. Second, it analyzes the minimum timeframe needed by a strike (weapons-enabled) aircraft to navigate to the location of a TST under SHARP. Third, it shows the distribution of the transmission radius required to communicate between an arbitrary sender and receiver. Fourth, it analyzes the extent to which connectivity, among nodes with constant communication range, decreases as the number of nodes decreases. Fifth, it shows how SHARP reduces the amount of energy required to communicate between two nodes. Mathematica 5.0.1.0 is used to generate data for all metrics. JMP 5.0.1.2 is used to analyze the statistical nature of Mathematica's output
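    The Delaunay structure underlying SHARP can be illustrated with a brute-force construction: a triangle belongs to the Delaunay triangulation exactly when its circumcircle contains no other point, and short-hop route lengths follow from breadth-first search over the resulting edges. A sketch with hypothetical node positions (O(n^4) empty-circumcircle test, fine for illustration but not for real networks):

```python
import itertools
import random
from collections import deque

def delaunay_edges(pts):
    """Brute-force Delaunay: keep a triangle iff its circumcircle is empty
    of all other points (general position assumed)."""
    edges = set()
    for i, j, k in itertools.combinations(range(len(pts)), 3):
        (ax, ay), (bx, by), (cx, cy) = pts[i], pts[j], pts[k]
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if abs(d) < 1e-12:
            continue  # degenerate (collinear) triple
        ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
              + (cx**2 + cy**2) * (ay - by)) / d
        uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
              + (cx**2 + cy**2) * (bx - ax)) / d
        r2 = (ax - ux) ** 2 + (ay - uy) ** 2
        if all((px - ux) ** 2 + (py - uy) ** 2 > r2
               for m, (px, py) in enumerate(pts) if m not in (i, j, k)):
            edges |= {(i, j), (j, k), (i, k)}
    return edges

def hop_count(edges, n, src, dst):
    """Minimum number of Delaunay-edge hops from src to dst (BFS)."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            return dist[v]
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return None  # unreachable

random.seed(1)
nodes = [(random.random(), random.random()) for _ in range(15)]
edges = delaunay_edges(nodes)
hops = hop_count(edges, len(nodes), 0, 14)
```

    Because a Delaunay triangulation of points in general position is connected, any node can reach any other through a chain of short hops, which is the property SHARP exploits for relaying.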

    The Proceedings of 15th Australian Information Security Management Conference, 5-6 December, 2017, Edith Cowan University, Perth, Australia

    Conference Foreword
    The annual Security Congress, run by the Security Research Institute at Edith Cowan University, includes the Australian Information Security and Management Conference. Now in its fifteenth year, the conference remains popular for its diverse content and mixture of technical research and discussion papers. The area of information security and management continues to be varied, as is reflected by the wide variety of subject matter covered by the papers this year. The papers cover topics from vulnerabilities in “Internet of Things” protocols through to improvements in biometric identification algorithms and surveillance camera weaknesses. The conference has drawn interest and papers from within Australia and internationally. All submitted papers were subject to a double-blind peer review process. Twenty-two papers were submitted from Australia and overseas, of which eighteen were accepted for final presentation and publication. We wish to thank the reviewers for kindly volunteering their time and expertise in support of this event. We would also like to thank the conference committee who have organised yet another successful congress. Events such as this are impossible without the tireless efforts of such people in reviewing and editing the conference papers, and assisting with the planning, organisation and execution of the conference. To our sponsors, also a vote of thanks for both the financial and moral support provided to the conference. Finally, thank you to the administrative and technical staff, and students of the ECU Security Research Institute for their contributions to the running of the conference

    Random lattice particle modeling of fracture processes in cementitious materials

    The capability of representing fracture processes in non-homogeneous media is of great interest among the scientific community for at least two reasons: the first stems from the fact that the use of composite materials is ubiquitous within structural applications, since the advantages of the constituents can be exploited to improve material performance; the second consists of the need to assess the non-linear post-peak behavior of such structures to properly determine margins of safety with respect to strong excitations (e.g. earthquakes, blast or impact loadings). Different kinds of theories and methodologies have been developed in the last century in order to model such phenomena, starting from linear elastic equivalent methods, then moving to plastic theories and fracture mechanics. Among the different modeling techniques available, in recent years lattice models have established themselves as a powerful tool for simulating failure modes and crack paths in heterogeneous materials. The basic idea dates back to the pioneering work of Hrennikoff: a continuum medium can be modeled through the interaction of unidimensional elements (e.g. springs or beams) spatially arranged in different ways. The set of nodes that interconnect the elements can be regularly or irregularly placed inside the domain, leading to regular or random lattices. It has been shown that lattices with regular geometry can strongly bias the direction of cracking, leading to incorrect results. A variety of lattice models have been developed. Such models have seen a wide field of applications, ranging from aerodynamics (using Lattice-Boltzmann models) to heat transfer, crystallography and many others. Every material used in civil and infrastructure engineering is constituted of different phases. This is because the features of different constituents are usually combined in order to obtain greater advantages with respect to the original materials. 
Even structural steel, which is usually thought of as a homogeneous continuum-type medium, includes carbon particles that can be seen as inhomogeneities at the microscopic level. The mechanical behavior of concrete, which is the main object of the present work, is strongly affected not only by the presence of inclusions (i.e. the aggregate pieces) but also by their arrangement. For this reason, the explicit, statistical representation of their presence is of great interest in simulations of concrete behavior. Lattice models can directly account for the presence of different phases, and so are advantageous from this perspective. The definition of such models and their implementation in a computer program, together with validation on laboratory tests, will be presented. The present work will briefly review the state of the art and the basic principles of these models, starting from the geometrical and computing tools needed to build the simulations. The implementation of this technique in the Matlab environment will be presented, highlighting the theoretical background. The numerical results will be validated against two complementary experimental campaigns, which focused on the meso- and macro-scales of concrete. Although the aim of this work is the representation of quasi-brittle fracture processes in cementitious materials such as concrete, the discussed approach is general, and therefore valid for the representation of damage and crack growth in a variety of different materials
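    The statistical character of such models, where elements with randomly distributed strengths fail progressively and produce a quasi-brittle, softening response, can be illustrated with a much simpler analogue than the lattice model described here: an equal-load-sharing fiber bundle with random element strengths (illustrative only; this is not the authors' model):

```python
import random

def bundle_response(strengths, strains):
    """Stress of an equal-load-sharing fiber bundle at each imposed strain:
    fibers are linear elastic (unit stiffness) until their strength is
    reached, then carry no load. Surviving fraction scales the stress."""
    n = len(strengths)
    return [strain * sum(s > strain for s in strengths) / n
            for strain in strains]

random.seed(2)
# hypothetical uniformly distributed element strengths
strengths = [random.uniform(0.5, 1.5) for _ in range(1000)]
strains = [i / 100 for i in range(201)]  # imposed strain 0.00 .. 2.00
stress = bundle_response(strengths, strains)
peak = max(stress)  # peak load, followed by post-peak softening
```

    The stress rises, reaches a peak, then softens to zero as elements break one by one rather than all at once, which is the qualitative post-peak behavior the lattice approach captures in a spatially resolved way.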

    Parallel Graph Partitioning for Complex Networks

    Processing large complex networks like social networks or web graphs has recently attracted considerable interest. To do this in parallel, we need to partition them into pieces of about equal size. Unfortunately, previous parallel graph partitioners, originally developed for more regular mesh-like networks, do not work well for these networks. This paper addresses the problem by parallelizing and adapting the label propagation technique originally developed for graph clustering. By introducing size constraints, label propagation becomes applicable for both the coarsening and the refinement phases of multilevel graph partitioning. We obtain very high quality by applying a highly parallel evolutionary algorithm to the coarsened graph. The resulting system is both more scalable and achieves higher quality than state-of-the-art systems like ParMetis or PT-Scotch. For large complex networks the performance differences are very large. For example, our algorithm can partition a web graph with 3.3 billion edges in less than sixteen seconds using 512 cores of a high-performance cluster while producing a high-quality partition -- none of the competing systems can handle this graph on our system.
    Comment: Review article. Parallelization of our previous approach arXiv:1402.328
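    The size-constrained label propagation idea can be sketched as follows: each node repeatedly adopts the label most common among its neighbours, but a move is rejected if the target block is already at its size cap. A sequential toy version (not the paper's parallel implementation; the cap and round-robin initialisation are arbitrary choices for illustration):

```python
from collections import defaultdict

def size_constrained_lp(adj, k, max_size, rounds=10):
    """Label propagation with a hard block-size cap: a node adopts the
    label most frequent among its neighbours unless that block is full."""
    n = len(adj)
    labels = [i % k for i in range(n)]  # initial round-robin blocks
    sizes = defaultdict(int)
    for l in labels:
        sizes[l] += 1
    for _ in range(rounds):
        moved = False
        for v in range(n):
            counts = defaultdict(int)
            for w in adj[v]:
                counts[labels[w]] += 1
            best = max(counts, key=counts.get, default=labels[v])
            # size constraint: only move into a block with spare capacity
            if best != labels[v] and sizes[best] < max_size:
                sizes[labels[v]] -= 1
                sizes[best] += 1
                labels[v] = best
                moved = True
        if not moved:
            break  # converged
    return labels

# two 4-cliques joined by a single edge (nodes 3 and 4)
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [5, 6, 7, 3], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = size_constrained_lp(adj, k=2, max_size=5)
```

    On this toy graph the propagation separates the two cliques into distinct blocks while the cap keeps the blocks balanced; note the cap must leave some slack (here 5 for 8 nodes), otherwise no move is ever admissible.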

    Development of a GIS-based method for sensor network deployment and coverage optimization

    In recent years, sensor networks have been increasingly used in applications ranging from environmental monitoring and tracking of moving objects to the development of smart cities and intelligent transportation systems. A sensor network usually consists of numerous wireless devices deployed in a region of interest. A fundamental issue in a sensor network is the optimization of its spatial coverage. The complexity of the sensing environment, with the presence of diverse obstacles, results in several uncovered areas. Consequently, sensor placement affects how well a region is covered by sensors as well as the cost of constructing the network. For efficient deployment of a sensor network, several optimization algorithms have been developed and applied in recent years. Most of these algorithms rely on oversimplified sensor and network models. In addition, they do not consider spatial environmental information, such as terrain models, human-built infrastructure, and the presence of diverse obstacles, in the optimization process. The global objective of this thesis is to improve sensor deployment processes by integrating geospatial information and knowledge into optimization algorithms. To achieve this objective, three specific objectives are defined. First, a conceptual framework is developed for the integration of contextual information in sensor network deployment processes. Then, a local context-aware optimization algorithm is developed based on the proposed framework. The extended approach is a generic local algorithm for sensor deployment that can take spatial, temporal, and thematic contextual information into account in different application contexts. Next, an accuracy assessment and error propagation analysis is conducted to determine the impact of the accuracy of contextual information on the proposed sensor network optimization method. In this thesis, contextual information has been integrated into local optimization methods for sensor network deployment. The extended algorithm is based on the point Voronoi diagram, used to model and represent the geometrical structure of sensor networks. In the proposed approach, sensors change their locations based on local contextual information (the physical environment, network information, and sensor characteristics) with the aim of enhancing network coverage. The proposed method is implemented in MATLAB and tested with several data sets obtained from the Quebec City spatial database. The results obtained from different case studies show the effectiveness of our approach
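    The Voronoi-based local repositioning can be illustrated with a Lloyd-style relaxation: each sensor repeatedly moves to the centroid of the region closer to it than to any other sensor, which spreads sensors out and improves coverage. A discretised sketch on the unit square (no obstacles or contextual information, so a far simpler setting than the thesis considers):

```python
import random

def lloyd_step(sensors, samples):
    """One Lloyd iteration over a discretised region: move each sensor to
    the centroid of the sample points nearest to it (its Voronoi cell)."""
    sums = [[0.0, 0.0, 0] for _ in sensors]
    for px, py in samples:
        i = min(range(len(sensors)),
                key=lambda k: (sensors[k][0] - px) ** 2
                              + (sensors[k][1] - py) ** 2)
        sums[i][0] += px
        sums[i][1] += py
        sums[i][2] += 1
    return [(sx / c, sy / c) if c else s
            for (sx, sy, c), s in zip(sums, sensors)]

def mean_sq_dist(sensors, samples):
    """Coverage proxy: mean squared distance to the nearest sensor."""
    return sum(min((sx - px) ** 2 + (sy - py) ** 2 for sx, sy in sensors)
               for px, py in samples) / len(samples)

random.seed(3)
# hypothetical initial deployment, clustered in one corner
sensors = [(random.random() * 0.2, random.random() * 0.2) for _ in range(6)]
samples = [(x / 30, y / 30) for x in range(30) for y in range(30)]
before = mean_sq_dist(sensors, samples)
for _ in range(15):
    sensors = lloyd_step(sensors, samples)
after = mean_sq_dist(sensors, samples)
```

    Each iteration is guaranteed not to increase the quantization error, so the coverage proxy improves; the thesis extends this kind of local Voronoi-driven movement with obstacle and terrain information.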

    A network model to assess base-filter combinations

    Granular filters retain base material within the narrowest constrictions of their void network. A direct comparison of the base material particle size distribution (PSD) and the filter constriction size distribution (CSD) cannot easily be used to assess filter-base compatibility. Here a conceptually simple random-walk network model, using a filter CSD derived from discrete element modelling and the base PSD, is used to assess filter-base compatibility. Following verification against experimental data, the model is applied to assess empirical ratios between filter and base characteristic diameters. The effects of filter density, void connectivity and blocking in the first few filter layers are highlighted
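    The random-walk idea can be sketched with a Monte Carlo analogue: a base particle encounters a randomly chosen constriction at each filter layer and is captured at the first constriction narrower than itself. The CSD values below are hypothetical, and real constriction encounters depend on void connectivity, which this toy version ignores:

```python
import random

def capture_probability(base_diam, constrictions, n_layers, trials, seed=4):
    """Monte Carlo analogue of the network model: a particle passes a layer
    only if the constriction it meets is wider than the particle; otherwise
    it is captured (retained by the filter)."""
    rng = random.Random(seed)
    captured = 0
    for _ in range(trials):
        for _ in range(n_layers):
            if rng.choice(constrictions) <= base_diam:
                captured += 1
                break  # retained in this layer
    return captured / trials

# hypothetical constriction size distribution (diameters in mm)
csd = [0.05, 0.08, 0.10, 0.12, 0.15, 0.20]
fine = capture_probability(0.11, csd, n_layers=3, trials=5000)
coarse = capture_probability(0.18, csd, n_layers=3, trials=5000)
```

    Coarser base particles meet sub-particle-size constrictions more often and are retained with higher probability, and most capture happens in the first few layers, consistent with the blocking effect highlighted above.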

    PocketPicker: analysis of ligand binding-sites with shape descriptors

    Background: Identification and evaluation of surface binding-pockets and occluded cavities are initial steps in protein structure-based drug design. Characterizing the active site's shape as well as the distribution of surrounding residues plays an important role for a variety of applications such as automated ligand docking or in situ modeling. Comparing the shape similarity of binding-site geometries of related proteins provides further insights into the mechanisms of ligand binding.
    Results: We present PocketPicker, an automated grid-based technique for the prediction of protein binding pockets that specifies the shape of a potential binding-site with regard to its buriedness. The method was applied to a representative set of protein-ligand complexes and their corresponding apo-protein structures to evaluate the quality of binding-site predictions. The performance of the pocket detection routine was compared to results achieved with the existing methods CAST, LIGSITE, LIGSITEcs, PASS and SURFNET. Success rates of PocketPicker were comparable to those of LIGSITEcs and outperformed the other tools. We introduce a descriptor that translates the arrangement of grid points delineating a detected binding-site into a correlation vector. We show that this shape descriptor is suited for comparative analyses of similar binding-site geometries by examining induced-fit phenomena in aldose reductase. This new method uses information derived from calculations of the buriedness of potential binding-sites.
    Conclusions: The pocket prediction routine of PocketPicker is a useful tool for identification of potential protein binding-pockets. It produces a convenient representation of binding-site shapes including an intuitive description of their accessibility. The shape descriptor for automated classification of binding-site geometries can be used as an additional tool complementing elaborate manual inspections
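    The buriedness calculation can be sketched as a ray scan from each grid probe: the more scan directions that hit protein atoms within a cutoff distance, the more buried the probe. PocketPicker itself scans many ray directions on a fine grid; the toy below uses only the 6 axial directions and hypothetical atom coordinates:

```python
def buriedness(probe, atoms, radius=1.8, max_dist=8.0, step=0.5):
    """Toy buriedness score: number of axial rays from a grid probe that
    intersect an atom sphere within max_dist (scanned in steps)."""
    hits = 0
    dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
            (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for dx, dy, dz in dirs:
        t = step
        while t <= max_dist:
            x, y, z = probe[0] + dx * t, probe[1] + dy * t, probe[2] + dz * t
            if any((x - ax) ** 2 + (y - ay) ** 2 + (z - az) ** 2
                   <= radius ** 2 for ax, ay, az in atoms):
                hits += 1
                break  # this ray is blocked by the protein
            t += step
    return hits

# hypothetical "pocket": atoms surround the probe on 5 of 6 sides,
# leaving the +z direction open toward solvent
atoms = [(3, 0, 0), (-3, 0, 0), (0, 3, 0), (0, -3, 0), (0, 0, -3)]
pocket_score = buriedness((0, 0, 0), atoms)     # deeply buried probe
surface_score = buriedness((0, 0, 20), atoms)   # probe far from protein
```

    Grid points with high buriedness cluster inside pockets and cavities, and the arrangement of such points is what the correlation-vector descriptor then encodes.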

    Bayesian Model Averaging in the Context of Spatial Hedonic Pricing: An Application to Farmland Values

    In 1973, British Columbia created an Agricultural Land Reserve to protect farmland from development. In this study, we employ GIS-based hedonic pricing models of farmland values to examine factors that affect farmland prices. We take spatial lag and error dependence explicitly into account. However, the use of spatial econometric techniques in hedonic pricing models is problematic because there is uncertainty with respect to the choice of the explanatory variables and the spatial weighting matrix. Bayesian model averaging techniques in combination with Markov Chain Monte Carlo Model Composition are used to allow for both types of model uncertainty.
    Keywords: Bayesian model averaging, Markov Chain Monte Carlo Model Composition, spatial econometrics, hedonic pricing, GIS, urban-rural fringe, farmland fragmentation
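    The model-averaging idea can be sketched with the common BIC approximation: each candidate model receives a posterior weight proportional to exp(-BIC/2), and quantities of interest are averaged under those weights. A sketch on synthetic data with two single-predictor candidate models (BIC weighting stands in for full MC3 sampling, and the data are invented):

```python
import math
import random

def simple_ols_rss(x, y):
    """Residual sum of squares of the least-squares fit y ~ a + b*x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return sum((yi - my - b * (xi - mx)) ** 2 for xi, yi in zip(x, y))

def bic(rss, n, k):
    """Gaussian-likelihood BIC for a model with k estimated coefficients."""
    return n * math.log(rss / n) + k * math.log(n)

random.seed(5)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * a + random.gauss(0, 1) for a in x1]  # only x1 truly matters

rss = {"x1": simple_ols_rss(x1, y), "x2": simple_ols_rss(x2, y)}
bics = {m: bic(r, n, k=2) for m, r in rss.items()}
best = min(bics.values())
raw = {m: math.exp(-(b - best) / 2) for m, b in bics.items()}
z = sum(raw.values())
weights = {m: w / z for m, w in raw.items()}  # posterior model weights
```

    The weights sum to one, and virtually all posterior mass lands on the model containing the true predictor, which is how BMA downweights poorly chosen explanatory variables (or, in the spatial setting, poorly chosen weighting matrices).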