
    No Transfers Required: Integrating Last Mile with Public Transit Using Opti-Mile

    Public transit is popular for its affordability, despite the inconvenience of the transfers required to reach most areas. For example, in the bus and metro network of New Delhi, only 30% of stops can be reached directly from any given starting point, so most commutes require transfers. Additionally, last-mile services like rickshaws, tuk-tuks or shuttles are commonly used as feeders to the nearest public transit access points, which adds further complexity and inefficiency to a journey. Ultimately, users face a trade-off between coverage and transfers to reach their destination, regardless of the mode of transit or the use of last-mile services. To address the limited accessibility and transfer-induced inefficiency of public transit systems, we propose "opti-mile," a novel trip-planning approach that combines last-mile services with public transit such that no transfers are required. Opti-mile allows users to customise trip parameters such as the maximum walking distance and an acceptable fare range. We analyse the transit network of New Delhi, evaluating the efficiency, feasibility and advantages of opti-mile for optimal multi-modal trips between randomly selected source-destination pairs. We demonstrate that opti-mile trips reduce distance travelled by 10% for an 18% increase in price compared to traditional shortest paths. We also show that opti-mile trips provide better coverage of the city than public transit alone, without a significant fare increase.
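    The abstract does not spell out the underlying algorithm, so the following is only a minimal sketch of the idea as described: enumerate no-transfer trips consisting of a last-mile leg, a single transit leg, and a final last-mile leg, keep those within a user-set leg-length limit and fare cap, and return the cheapest. The route data, fares and helper names are all hypothetical.

```python
import math

# Hypothetical toy network: transit routes as ordered stop lists with per-km fares.
ROUTES = {
    "blue_line": {"stops": [(0, 0), (2, 0), (5, 0), (9, 0)], "fare_per_km": 0.5},
    "bus_42":    {"stops": [(0, 1), (3, 2), (6, 4)],         "fare_per_km": 0.3},
}
LAST_MILE_FARE_PER_KM = 1.2   # e.g. rickshaw/tuk-tuk rate (assumed)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_no_transfer_trip(origin, destination, max_leg_km, max_fare):
    """Cheapest trip of the form: last-mile leg -> one transit leg -> last-mile leg."""
    best = None
    for name, route in ROUTES.items():
        stops = route["stops"]
        for i, board in enumerate(stops):
            for alight in stops[i + 1:]:
                first = dist(origin, board)
                last = dist(alight, destination)
                if first > max_leg_km or last > max_leg_km:
                    continue  # access/egress leg exceeds the user's limit
                ride = dist(board, alight)  # crude straight-line proxy for in-vehicle distance
                fare = (first + last) * LAST_MILE_FARE_PER_KM + ride * route["fare_per_km"]
                if fare > max_fare:
                    continue  # violates the user's fare cap
                if best is None or fare < best[0]:
                    best = (fare, first + ride + last, name, board, alight)
    return best

print(plan_no_transfer_trip(origin=(0.5, 0.5), destination=(8.5, 0.2),
                            max_leg_km=2.0, max_fare=10.0))
```

    A real planner would operate on the full New Delhi network with road distances and actual fare tables rather than straight-line proxies, but the structure of the search stays the same.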

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
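    The abstract gives no dispersion details, so as a rough illustration of why LC biasing scans the beam, the sketch below combines two textbook relations: the leaky-wave pointing condition sin(theta) = beta/k0 and the TE10-like phase constant beta = sqrt(eps_r*k0^2 - (pi/a)^2) of a dielectric-filled guide. The frequency, guide width and LC permittivity range are assumed values, not taken from the paper.

```python
import numpy as np

C = 3e8       # speed of light (m/s)
F = 28e9      # operating frequency (assumed, Hz)
A = 3.46e-3   # guide broad-wall width (assumed, m; chosen so the mode stays leaky)

def beam_angle_deg(eps_r):
    """Main-beam angle from broadside for a dielectric-filled leaky guide."""
    k0 = 2 * np.pi * F / C
    beta = np.sqrt(eps_r * k0**2 - (np.pi / A) ** 2)  # TE10-like phase constant
    return np.degrees(np.arcsin(beta / k0))           # valid while beta < k0

# Sweep an assumed LC tuning range (unbiased -> fully biased permittivity).
for eps in np.linspace(2.5, 3.3, 5):
    print(f"eps_r = {eps:.2f} -> beam at {beam_angle_deg(eps):5.1f} deg")
```

    The point of the illustration is only that changing the LC permittivity through the DC bias changes beta, and with it the beam direction, at a fixed frequency.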

    Technical Report on: Tripedal Dynamic Gaits for a Quadruped Robot

    A vast number of applications for legged robots entail tasks in complex, dynamic environments, but these environments put legged robots at high risk of limb damage. This paper presents an empirical study of fault-tolerant dynamic gaits designed for a quadrupedal robot suffering from a single, known "missing" limb. Preliminary data suggest that the featured gait controller successfully anchors a previously developed planar monopedal hopping template in the three-legged spatial machine. This compositional approach offers a useful and generalizable guide to the development of a wider range of tripedal recovery gaits for damaged quadrupedal machines.
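    The hopping template itself is not detailed in the abstract; purely as an illustration of what a planar monopedal hopping template can look like, the sketch below implements the classic Raibert-style decomposition (stance thrust regulates hop height, flight-phase foot placement regulates forward speed). It is not the authors' controller, and all gains and numbers are invented.

```python
# Raibert's classic decomposition for a one-legged hopper in the sagittal plane.

def foot_placement(x_dot, x_dot_des, stance_time, k_xdot=0.05):
    """Flight phase: horizontal foot position relative to the hip (m)."""
    neutral = x_dot * stance_time / 2.0        # symmetric 'neutral' touchdown point
    return neutral + k_xdot * (x_dot - x_dot_des)  # bias to correct speed error

def stance_thrust(apex_height, apex_des, k_h=40.0):
    """Stance phase: extra leg thrust (N) to regulate apex height."""
    return k_h * (apex_des - apex_height)

# Example: hopper at 0.8 m/s, desired 0.5 m/s, stance lasting 0.18 s.
print(foot_placement(x_dot=0.8, x_dot_des=0.5, stance_time=0.18))  # ~0.087 m ahead of hip
print(stance_thrust(apex_height=0.45, apex_des=0.50))              # 2.0 N extra thrust
```

    Anchoring such a template in a three-legged machine then means coordinating the remaining limbs so the body's center of mass reproduces this one-legged behavior.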

    Using machine learning to predict pathogenicity of genomic variants throughout the human genome

    More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many possible ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. All of these processes must be investigated to evaluate which variant may be causal for the deleterious phenotype. A great help in this regard are variant effect scores. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity. Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow of this process. It makes it simple to configure how information is converted into model features, enabling the rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation, and ultimately deployment of a selected model via genome-wide scoring of genomic variants. The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep neural network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data based on variants selected by allele frequency. In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
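    The abstract names the workflow stages without code, so the following is a minimal sketch of those generic steps (annotated training data, model training, hyperparameter optimization, validation, and scoring of new variants) using scikit-learn on synthetic data. It does not reproduce CADD; features, labels and parameters are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# 1) "Annotated" training data: rows = variants, columns = placeholder features
#    (e.g. conservation score, splice score); label 1 = proxy-pathogenic.
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2) Model training with hyperparameter optimization (regularization strength).
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5, scoring="roc_auc")
search.fit(X_train, y_train)

# 3) Validation on held-out variants.
auc = roc_auc_score(y_test, search.predict_proba(X_test)[:, 1])
print(f"best C = {search.best_params_['C']}, held-out AUC = {auc:.3f}")

# 4) "Deployment": the same fitted model would score variants genome-wide.
new_variants = rng.normal(size=(3, 10))
print(search.predict_proba(new_variants)[:, 1])  # pathogenicity-like scores
```

    In a real pipeline, step 1 is the expensive part: each variant must be annotated against many genomic resources before any feature matrix exists.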

    Studies on genetic and epigenetic regulation of gene expression dynamics

    The information required to build an organism is contained in its genome, and the first biochemical process that activates the genetic information stored in DNA is transcription. Cell type specific gene expression shapes cellular functional diversity, and dysregulation of transcription is a central tenet of human disease. Therefore, understanding transcriptional regulation is central to understanding biology in health and disease. Transcription is a dynamic process, occurring in discrete bursts of activity that can be characterized by two kinetic parameters: burst frequency, describing how often genes burst, and burst size, describing how many transcripts are generated in each burst. Genes are under strict regulatory control by distinct sequences in the genome as well as by epigenetic modifications. To properly study how genetic and epigenetic factors affect transcription, it needs to be treated as the dynamic cellular process it is. In this thesis, I present the development of methods that allow identification of newly induced gene expression over short timescales, as well as inference of kinetic parameters describing how frequently genes burst and how many transcripts each burst gives rise to. The work is presented through four papers. In paper I, I describe the development of a novel method for profiling newly transcribed RNA molecules. We use this method to show that therapeutic compounds affecting different epigenetic enzymes elicit distinct, compound-specific responses, mediated by different sets of transcription factors, already after one hour of treatment; these responses can only be detected when measuring newly transcribed RNA. The goal of paper II is to determine how genetic variation shapes transcriptional bursting. To this end, we infer transcriptome-wide burst kinetics parameters from genetically distinct donors and find variation that selectively affects burst sizes and frequencies. Paper III describes a method for inferring transcriptional kinetics transcriptome-wide using single-cell RNA-sequencing. We use this method to describe how the regulation of transcriptional bursting is encoded in the genome. Our findings show that gene-specific burst sizes are dependent on core promoter architecture and that enhancers affect burst frequencies. Furthermore, cell type specific differential gene expression is regulated by cell type specific burst frequencies. Lastly, paper IV shows how transcription shapes cell types. We collect data on cellular morphologies and electrophysiological characteristics, and measure gene expression, in the same neurons collected from the mouse motor cortex. Our findings show that cells belonging to the same, distinct transcriptomic families have distinct and non-overlapping morpho-electric characteristics. Within families, there is continuous and correlated variation in all modalities, challenging the notion of cell types as discrete entities.
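    The inference machinery of papers II and III is not described in the abstract. As a rough illustration of how burst kinetics can be read out of single-cell count data, the sketch below uses the moment relations of the bursty (negative binomial) limit of the two-state transcription model; the simulated counts and all parameter values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate mRNA counts from the bursty limit of the telegraph model:
# negative binomial with burst frequency f (bursts per mRNA lifetime) and size b.
f_true, b_true = 2.0, 5.0
counts = rng.negative_binomial(n=f_true, p=1.0 / (1.0 + b_true), size=20000)

# Moment-based inference: mean = f*b and variance = mean*(1 + b), so
m, v = counts.mean(), counts.var()
b_hat = v / m - 1.0   # burst size (transcripts per burst)
f_hat = m / b_hat     # burst frequency (bursts per mRNA lifetime)
print(f"burst size ~ {b_hat:.2f} (true {b_true}), "
      f"frequency ~ {f_hat:.2f} (true {f_true})")
```

    Transcriptome-wide inference from real single-cell RNA-seq additionally has to handle technical noise and allelic resolution, typically with likelihood-based rather than moment-based estimators.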

    Introduction to Facial Micro Expressions Analysis Using Color and Depth Images: A Matlab Coding Approach (Second Edition, 2023)

    The book offers a gentle introduction to the field of Facial Micro Expressions Recognition (FMER) using color and depth images, with the aid of the MATLAB programming environment. FMER is a subset of image processing and a multidisciplinary topic to analyze, so it requires familiarity with other areas of Artificial Intelligence (AI) such as machine learning, digital image processing and psychology. This makes it a great opportunity to write a book that covers all of these topics for beginner to professional readers in the field of AI, even those without a background in AI. Our goal is to provide a standalone introduction to FMER analysis, in the form of theoretical descriptions for readers with no background in image processing, together with reproducible MATLAB practical examples. We also describe the basic definitions for FMER analysis and the MATLAB libraries used in the text, which helps the reader apply the experiments to real-world applications. We believe that this book is suitable for students, researchers, and professionals alike who need to develop practical skills, along with a basic understanding of the field. We expect that, after reading this book, the reader will feel comfortable with key stages such as color and depth image processing, color and depth image representation, classification, machine learning, facial micro-expression recognition, feature extraction and dimensionality reduction.

    Optimising water quality outcomes for complex water resource systems and water grids

    As the world progresses, water resources are likely to be subjected to much greater pressures than in the past. Even though the principal water problem revolves around inadequate and uncertain water supplies, water quality management plays an equally important role. The availability of good quality water is paramount to the sustainability of human populations as well as the environment. Achieving water quality and quantity objectives can be conflicting, and becomes more complicated with challenges like climate change, growing populations and changed land uses. Managing adequate water quality in a reservoir is complicated by multiple inflows of differing quality, often resulting in poor stored water quality. Hence, it is fundamental to approach this issue in a more systematic, comprehensive, and coordinated fashion. Most previous studies of water resources management focused on water quantity and considered water quality separately. This research, in contrast, considered water quantity and quality objectives simultaneously in a single model, to explore and understand the relationship between them in a reservoir system. A case study area with both water quantity and quality challenges was identified: Taylors Lake in the Grampians system of Western Victoria, Australia, which receives water from multiple sources of differing quality and quantity. A combined simulation and optimisation approach was adopted to carry out the analysis, with multi-objective optimisation applied to achieve optimal water availability and quality in the storage. The multi-objective optimisation model included three objective functions: water volume and two water quality parameters, salinity and turbidity. Results showed the competing nature of the water quantity and quality objectives and established the trade-offs between them. They further showed that a range of optimal solutions can be generated to effectively manage those trade-offs. The trade-off analysis showed that selective harvesting of inflows is effective in improving water quality in storage, although under strict water quality restrictions there is a considerable loss of water volume. The robustness of the optimisation approach was confirmed through sensitivity and uncertainty analysis. The research also incorporated various spatio-temporal scenario analyses to systematically articulate long-term and short-term operational planning strategies, establishing operational decisions around possible harvesting regimes that achieve optimal water quantity and quality while meeting all water demands. The climate change analysis revealed that optimal management of water quantity and quality in storage becomes extremely challenging under future climate projections: the projected reduction in storage volume will lead to water supply shortfalls and an inability to undertake selective harvesting due to reduced water quality levels, so that harvesting inflows based on water quality will no longer be an option for managing water quantity and quality optimally in storage. Significant conclusions of this research include the establishment of trade-offs between water quality and quantity objectives particular to this configuration of water supply system. The work demonstrated that selective harvesting of inflows improves stored water quality; this finding, along with the approach used, is a significant contribution for decision makers working in the water sector. The simulation-optimisation approach is very effective in providing a range of optimal solutions, which can be used to make more informed decisions about achieving optimal water quality and quantity in storage. It was further demonstrated that a range of planning periods, both long-term (>10 years) and short-term (<1 year), offer distinct advantages and provide useful insights, an additional key contribution of the work. Importantly, climate change was also considered, and it was found that diminishing water resources, particular to this geographic location, make it increasingly difficult to optimise both quality and quantity in storage.
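    The thesis's model formulation is not given in the abstract; the sketch below only illustrates the shape of such a problem: choose what fraction of each inflow to harvest, trading stored volume against the salinity and turbidity of the mixed storage, scanned here with a simple weighted sum rather than a full multi-objective solver. All inflow volumes and quality values are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inflows: volume (GL), salinity (EC), turbidity (NTU).
volume    = np.array([40.0, 25.0, 15.0])
salinity  = np.array([300.0, 900.0, 1500.0])
turbidity = np.array([10.0, 40.0, 80.0])

def objectives(x):
    """x[i] = fraction of inflow i harvested; returns (-volume, salinity, turbidity)."""
    v = np.dot(x, volume)
    if v == 0:
        return 0.0, salinity.max(), turbidity.max()
    s = np.dot(x * volume, salinity) / v    # volume-weighted mixing in storage
    t = np.dot(x * volume, turbidity) / v
    return -v, s, t

def scan_tradeoff(w_quality):
    """Minimize a weighted sum; larger w_quality favours cleaner stored water."""
    def cost(x):
        neg_v, s, t = objectives(x)
        return neg_v / volume.sum() + w_quality * (s / 1500.0 + t / 80.0)
    res = minimize(cost, x0=np.full(3, 0.5), bounds=[(0.0, 1.0)] * 3)
    neg_v, s, _ = objectives(res.x)
    return res.x, -neg_v, s

for w in (0.1, 0.5, 2.0):
    x, v, s = scan_tradeoff(w)
    print(f"w={w:3.1f}: harvest fractions {np.round(x, 2)}, "
          f"volume {v:5.1f} GL, mixed salinity {s:6.1f} EC")
```

    Sweeping the weight traces out the volume-versus-quality trade-off: as quality is weighted more heavily, the cleaner inflows are harvested preferentially and stored volume drops, mirroring the trade-offs the thesis reports.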

    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. Many interesting 2-D and 3-D mechanical design problems can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, new experimental equipment, and a fast and simple g-code generator based on commercially available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials was developed from the results of this project.
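    The dissertation's g-code generator is mentioned only by name; the sketch below is a bare-bones illustration of the concept: emitting G0/G1 moves for a back-and-forth raster that fills one rectangular layer. Bead width, feed rate and extrusion factor are made-up values, and a real FDM toolpath needs far more (temperatures, retraction, perimeters, adhesion).

```python
def raster_layer(width, height, bead_w, z, feed=1800, extrude_per_mm=0.05):
    """Emit G-code for a simple back-and-forth raster filling a rectangle."""
    lines = [f"G1 Z{z:.2f} F{feed}"]            # move to the layer height
    y, direction, e = bead_w / 2, 1, 0.0
    while y <= height - bead_w / 2:
        x0, x1 = (0.0, width) if direction > 0 else (width, 0.0)
        lines.append(f"G0 X{x0:.2f} Y{y:.2f}")          # travel to line start
        e += width * extrude_per_mm                      # accumulate extrusion
        lines.append(f"G1 X{x1:.2f} Y{y:.2f} E{e:.3f}")  # deposit one bead
        y += bead_w                                      # step over one bead width
        direction = -direction                           # alternate direction
    return "\n".join(lines)

print(raster_layer(width=20.0, height=10.0, bead_w=0.48, z=0.2))
```

    Designing the element layout, in this framing, amounts to choosing these trace patterns per layer subject to the manufacturability constraints of the process.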

    CHARACTERISTICS OF REFRACTIVITY AND SEA STATE IN THE MARINE ATMOSPHERIC SURFACE LAYER AND THEIR INFLUENCE ON X-BAND PROPAGATION

    Predictions of environmental conditions within the marine atmospheric surface layer (MASL) are important to X-band radar system performance. Anomalous propagation occurs in conditions of non-standard atmospheric refractivity, driven by the virtually permanent presence of evaporation ducts (EDs) in marine environments. Evaporation ducts are commonly characterized by the evaporation duct height (EDH), the evaporation duct strength, and the gradients below the EDH, known as the evaporation duct curvature. Refractivity and its derived features are estimated in the MASL primarily using four methods: in-situ measurements, numerical weather and surface-layer modeling, boundary layer theory, and inversion methods. Existing refractivity estimation techniques often assume steady, homogeneous conditions, and discrepancies between measured and simulated propagation predictions exist. These discrepancies could be attributed to the exclusion of turbulent fluctuations of the refractive index, the exclusion of spatially heterogeneous refractive environments, and inaccurate characterization of the sea surface in propagation simulations. Due to the associated complexity and modeling challenges, unsteady, inhomogeneous refractivity and rough sea surfaces are often omitted from simulations. This dissertation first investigates techniques for steady, homogeneous refractivity and characterizes refractivity predictions using EDH and profile curvature, examining their effects on X-band propagation. Observed differences between techniques are explored with respect to prevailing meteorological conditions. Significant characteristics are then utilized in refractivity inversions for mean refractivity based on point-to-point EM measurements, and the inversions are compared to the previously examined techniques. Differences between refractivity estimation methods are generally observed in relation to EDH, where they produce the largest variations in propagation; the most significant EDH discrepancies occur in stable conditions. Further, discrepancies among the refractivity estimation methods (in-situ, numerical models, theory, and inversion) when conditions are unstable and the mean EDHs are similar could be attributed to the neglect of spatial heterogeneity of the EDH and of turbulent fluctuations in the refractive index. To address this, a spectral-based turbulent refractive index fluctuation model (TRIF) is applied to emulate refractive index fluctuations. TRIF is verified against in-situ meteorological measurements and integrated with a heterogeneous EDH model to estimate a comprehensive propagation environment. Lastly, a global sensitivity analysis is applied to evaluate the leading-order effects and non-linear interactions between the parameters of the comprehensive refractivity model and the sea surface in a parabolic wave equation propagation simulation under different atmospheric stability regimes (stable, neutral, and unstable). In neutral and stable regimes, mean evaporation duct characteristics (EDH and refractive gradients below the EDH) have the greatest impact on propagation, particularly beyond the geometric horizon. In unstable conditions, turbulence also plays a significant role. Regardless of atmospheric stability, forward scattering from the rough sea surface has a substantial effect on propagation predictions, especially within the lowest 10 m of the atmosphere.
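    The dissertation's profile models are not specified in the abstract; the sketch below evaluates the widely used log-linear evaporation duct form of modified refractivity, M(z) = M0 + c0*(z - h_d*ln((z + z0)/z0)), whose minimum sits near the evaporation duct height h_d. The constants (c0 = 0.125 M-units/m, roughness length z0) and the EDH values swept are textbook-style assumptions, not values from this work.

```python
import numpy as np

def modified_refractivity(z, edh, M0=330.0, c0=0.125, z0=1.5e-4):
    """Log-linear evaporation duct profile; minimum near z = edh (heights in m)."""
    return M0 + c0 * (z - edh * np.log((z + z0) / z0))

heights = np.array([0.5, 2.0, 5.0, 10.0, 20.0, 40.0])
for edh in (8.0, 15.0):   # two assumed evaporation duct heights (m)
    M = modified_refractivity(heights, edh)
    print(f"EDH {edh:4.1f} m:", np.round(M, 1), "M-units")
```

    Profiles of this kind are what a parabolic wave equation solver consumes as the refractive environment; the negative M-gradient below the EDH is what traps X-band energy near the surface.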