
    Revisiting the capitalization of public transport accessibility into residential land value: an empirical analysis drawing on Open Science

    Background: The delivery and effective operation of public transport is fundamental for the transition to low-carbon-emission transport systems. However, many cities face budgetary challenges in providing and operating this type of infrastructure. Land value capture (LVC) instruments, aimed at recovering all or part of the land value uplifts triggered by actions other than the landowner's, can alleviate some of this pressure. A key element of LVC lies in the increment in land value associated with a particular public action. Urban economic theory supports this idea and considers accessibility a core determinant of residential land value. Although the empirical literature assessing the relationship between land value increments and public transport infrastructure is vast, it often assumes homogeneous benefits and therefore overlooks relevant elements of accessibility. Advancements in the accessibility concept in the context of Open Science can ease the relaxation of such assumptions.

    Methods: This thesis draws on the case of Greater Mexico City between 2009 and 2019. It focuses on the effects of the main public transport network (MPTN), which is organised in seven temporal stages according to its expansion phases. The analysis incorporates location-based accessibility measures to employment opportunities in order to assess the benefits of public transport infrastructure. It does so by making extensive use of the open-source software OpenTripPlanner for public transport route modelling (≈ 2.1 billion origin-destination routes). Potential capitalizations are assessed within the hedonic framework. The property value data comprise ≈ 800,000 individual administrative mortgage records collected by the Federal Mortgage Society. The hedonic function is estimated using a variety of approaches, i.e. linear, nonlinear, multilevel, and spatial multilevel models, fitted by maximum likelihood and Bayesian methods. The study also examines possible spatial aggregation bias using alternative spatial aggregation schemes, following the modifiable areal unit problem (MAUP) literature.

    Results: The accessibility models across the various temporal stages evidence the spatial heterogeneity shaped by the MPTN in combination with land use and the individual perception of residents. This highlights the need to transition from measures that focus on the characteristics of transport infrastructure to comprehensive accessibility measures that reflect such heterogeneity. The estimated hedonic function suggests a robust, positive, and significant relationship between MPTN accessibility and residential land value in all the modelling frameworks, in the presence of a variety of controls. In the final set of models, residential land value increases by between 3.6% and 5.7% for one additional standard deviation of MPTN accessibility to employment. The total willingness to pay (TWTP) is considerable, ranging from 0.7 to 1.5 times the capital costs of bus rapid transit Line 7 of the Metrobús system. A sensitivity analysis shows that the hedonic model estimation is sensitive to the MAUP; among the alternatives, a postcode zoning scheme produces results closest to those of the smallest spatial analytical scheme (a 0.5 km hexagonal grid).

    Conclusion: The present thesis advances the discussion on the capitalization of public transport into residential land value by adopting recent contributions from the Open Science framework. Empirically, it fills a knowledge gap, as the literature on this topic in this study area is scarce. In terms of policy, the findings support LVC as a mechanism of considerable potential. Regarding fee-based LVC instruments, there are fairness issues in the distribution of charges or exactions across households that could be addressed using location-based measures. Furthermore, the approach developed for this analysis serves as valuable guidance for identifying sites with large potential for the implementation of development-based instruments, for instance land readjustments or the sale/lease of additional development rights.
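    As a hedged illustration of the kind of specification described above, the sketch below fits a log-linear hedonic function in which standardized accessibility to employment enters alongside a few controls; the file and column names (e.g. mortgage_records.csv, mptn_accessibility, municipality) are hypothetical stand-ins, not the thesis's actual data or model.

    ```python
    # A minimal sketch, not the thesis's actual specification: a log-linear
    # hedonic model with standardized accessibility plus hypothetical controls.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("mortgage_records.csv")  # hypothetical mortgage-record data

    # Standardize accessibility so its coefficient reads as the effect of one
    # additional standard deviation, matching how the abstract reports results.
    acc = df["mptn_accessibility"]
    df["acc_z"] = (acc - acc.mean()) / acc.std()

    # In a log-linear hedonic function, 100 * beta approximates the percent
    # change in land value per one-SD increase in accessibility.
    fit = smf.ols(
        "np.log(land_value) ~ acc_z + floor_area + dwelling_age + C(municipality)",
        data=df,
    ).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
    print(fit.params["acc_z"])  # 0.036-0.057 would match the reported 3.6-5.7%
    ```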

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops, and journals after the dissemination of the fourth volume in 2015, or they are new. The contributions in each part of this volume are ordered chronologically.

    The first part of this book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, with their Matlab codes.

    Because more applications of DSmT have emerged since the publication of the fourth volume in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.

    Finally, the third part presents interesting contributions related to belief functions in general, published or presented since 2015. These contributions relate to decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, the generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
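    For readers unfamiliar with the PCR rules that recur throughout the volume, the sketch below implements the basic two-source PCR5 combination on a toy frame of discernment; the data structures (mass dicts keyed by frozensets) are illustrative choices, not the book's reference implementation.

    ```python
    # A minimal sketch of the two-source PCR5 rule: conflicting mass is
    # redistributed back to the focal elements that generated it,
    # proportionally to their masses. Not the volume's reference code.
    from itertools import product

    def pcr5(m1, m2):
        """Combine two basic belief assignments with the PCR5 rule."""
        fused = {}
        for (x, mx), (y, my) in product(m1.items(), m2.items()):
            inter = x & y
            if inter:
                # Non-conflicting part: classical conjunctive rule.
                fused[inter] = fused.get(inter, 0.0) + mx * my
            elif mx + my > 0:
                # Conflicting pair: redistribute mx*my back to x and y.
                fused[x] = fused.get(x, 0.0) + mx**2 * my / (mx + my)
                fused[y] = fused.get(y, 0.0) + my**2 * mx / (mx + my)
        return fused

    A, B = frozenset("A"), frozenset("B")
    m1 = {A: 0.6, B: 0.4}
    m2 = {A: 0.2, B: 0.8}
    print(pcr5(m1, m2))  # fused masses sum to 1; conflict is redistributed
    ```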

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry.

    This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce the training burden.
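    As a rough illustration of the kind of model discussed in Papers I-II, the sketch below defines a small CNN that maps a windowed multi-channel EMG signal to one logit per movement class for multi-label decoding; the layer sizes, channel counts, and class count are hypothetical, not those of the dissertation's architectures.

    ```python
    # A minimal sketch, not Paper I's actual architecture: a 1D CNN over a
    # (channels x time) EMG window with a sigmoid-style multi-label loss.
    import torch
    import torch.nn as nn

    class EmgCnn(nn.Module):
        def __init__(self, n_channels: int = 64, n_movements: int = 17):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # pool over the time axis
            )
            self.head = nn.Linear(64, n_movements)

        def forward(self, x):  # x: (batch, channels, time)
            return self.head(self.features(x).squeeze(-1))

    model = EmgCnn()
    window = torch.randn(8, 64, 200)  # 8 windows, 64 electrodes, 200 samples
    # Multi-label decoding: one independent sigmoid per movement class.
    loss = nn.BCEWithLogitsLoss()(model(window), torch.zeros(8, 17))
    ```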

    Structural optimization in steel structures, algorithms and applications

    The abstract is provided in the attachment.

    Deep learning for characterizing full-color 3D printers: accuracy, robustness, and data-efficiency

    High-fidelity color and appearance reproduction via multi-material-jetting full-color 3D printing has seen increasing applications, including the preservation of art and cultural artifacts, product prototypes, game character figurines, stop-motion animated movies, and 3D-printed prostheses such as dental restorations or prosthetic eyes. To achieve high-quality appearance reproduction via full-color 3D printing, a prerequisite is an accurate optical printer model, that is, a function predicting the optical/visual properties (e.g. spectral reflectance, color, and translucency) of the resulting print from an arrangement or ratio of printing materials. For appearance 3D printing, the model needs to be inverted to determine the printing material arrangement that reproduces distinct optical/visual properties such as color. Therefore, the accuracy of optical printer models plays a crucial role in the final print quality. The process of fitting an optical printer model's parameters for a printing system is called optical characterization, which requires test prints and optical measurements. The objective of developing a printer model is to maximize prediction performance, such as accuracy, while minimizing optical characterization efforts including printing, post-processing, and measuring. In this thesis, I aim to leverage deep learning to achieve holistically performant optical printer models in terms of three performance aspects: 1) accuracy, 2) robustness, and 3) data efficiency.

    First, for model accuracy, we propose two deep learning-based printer models that both achieve high accuracy with only a moderate number of required training samples. Experiments show that both models outperform the traditional cellular Neugebauer model by large margins: up to 6 times higher accuracy, or up to 10 times less data for a similar accuracy. This high accuracy could enhance or even enable color- and translucency-critical applications of 3D printing such as dental restorations or prosthetic eyes. Second, for model robustness, we propose a methodology to induce physically plausible constraints and smoothness into deep learning-based optical printer models. Experiments show that the resulting model not only almost always corrects implausible relationships between material arrangement and the resulting optical/visual properties, but also yields significantly smoother predictions. The robustness and smoothness improvements are important to alleviate or avoid unacceptable banding artifacts in the textures of final printouts, particularly for applications where texture details must be preserved, such as reproducing prosthetic eyes whose texture must match the companion (healthy) eye. Finally, for data efficiency, we propose a learning framework that significantly improves printer models' data efficiency by employing existing characterization data from other printers. We also propose a contrastive learning-based approach to learn dataset embeddings, which are extra inputs required by the aforementioned learning framework. Experiments show that the learning framework can drastically reduce the number of samples required to achieve an application-specific prediction accuracy. For some printers, it requires only 10% of the samples to achieve an accuracy similar to that of the state-of-the-art model.

    The significant improvement in data efficiency makes it economically feasible to characterize 3D printers frequently and thereby achieve more consistent output across different printers and over time, which is crucial for color- and translucency-critical individualized mass production. With the proposed deep learning-based methodologies significantly improving all three performance aspects (i.e. accuracy, robustness, and data efficiency), a holistically performant optical printer model can be achieved, which is particularly important for color- and translucency-critical applications such as dental restorations or prosthetic eyes.
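    As a hedged illustration of what an optical printer model computes, the sketch below maps a ratio of printing materials to a predicted spectral reflectance curve with a small feed-forward network; the material and wavelength dimensions are assumed for illustration and do not reproduce the thesis's models.

    ```python
    # A minimal sketch (hypothetical dimensions, not the thesis's models) of a
    # deep optical printer model: material ratios in, spectral reflectance out.
    import torch
    import torch.nn as nn

    n_materials, n_wavelengths = 6, 31  # e.g. 6 inks; 400-700 nm in 10 nm steps

    printer_model = nn.Sequential(
        nn.Linear(n_materials, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, n_wavelengths),
        nn.Sigmoid(),  # reflectance is bounded to [0, 1]
    )

    ratios = torch.rand(4, n_materials)
    ratios = ratios / ratios.sum(dim=1, keepdim=True)  # ratios sum to 1
    reflectance = printer_model(ratios)                # (4, 31) spectra
    ```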

    Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence

    This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C1011198), by the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) under the ICT Creative Consilience Program (IITP-2021-2020-0-01821), and by the AI Platform to Fully Adapt and Reflect Privacy-Policy Changes (No. 2022-0-00688).

    Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated applications, but the outcomes of many AI models are challenging to comprehend and trust due to their black-box nature. Usually, it is essential to understand the reasoning behind an AI model's decision-making. Thus, the need for eXplainable AI (XAI) methods to improve trust in AI models has arisen. XAI has become a popular research subject within the AI field in recent years. Existing survey papers have tackled the concepts of XAI, its general terms, and post-hoc explainability methods, but no reviews have examined the assessment methods, available tools, XAI datasets, and other related aspects. Therefore, in this comprehensive study, we provide readers with an overview of the current research and trends in this rapidly emerging area, together with a case study example. The study starts by explaining the background of XAI and common definitions, and by summarizing recently proposed XAI techniques for supervised machine learning. The review divides XAI techniques into four axes using a hierarchical categorization system: (i) data explainability, (ii) model explainability, (iii) post-hoc explainability, and (iv) assessment of explanations. We also introduce available evaluation metrics as well as open-source packages and datasets, along with future research directions. Then, the significance of explainability in terms of legal demands, user viewpoints, and application orientation is outlined, termed XAI concerns. This paper advocates tailoring explanation content to specific user types. The examination of XAI techniques and evaluation was conducted by reviewing 410 critical articles, published between January 2016 and October 2022 in reputed journals, using a wide range of research databases as sources of information. The article is aimed at XAI researchers who are interested in making their AI models more trustworthy, as well as at researchers from other disciplines who are looking for effective XAI methods to complete tasks with confidence while communicating meaning from data.
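    As a concrete illustration of the post-hoc explainability axis surveyed above, the sketch below applies permutation feature importance to a black-box classifier; the dataset and model are illustrative choices only, not a method proposed by the article.

    ```python
    # A minimal sketch of one post-hoc explainability technique: permutation
    # feature importance on an opaque model, using scikit-learn.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    # Shuffle each feature and measure the drop in test score: a large drop
    # means the model relies on that feature for its predictions.
    result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                    random_state=0)
    print(result.importances_mean.argsort()[::-1][:5])  # top-5 features
    ```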

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume.

    2014 GREAT Day Program

    SUNY Geneseo's Eighth Annual GREAT Day.

    Computational modeling of biological nanopores

    Throughout our history, we humans have sought to better control and understand our environment. To this end, we have extended our natural senses with a host of sensors: tools that enable us to detect both the very large, such as the merging of two black holes at a distance of 1.3 billion light-years from Earth, and the very small, such as the identification of individual viral particles in a complex mixture. This dissertation is devoted to studying the physical mechanisms that govern a tiny, yet highly versatile sensor: the biological nanopore. Biological nanopores are protein molecules that form nanometer-sized apertures in lipid membranes. When an individual molecule passes through this aperture (i.e., "translocates"), the temporary disturbance of the ionic current caused by its passage reveals valuable information on its identity and properties. Despite this seemingly straightforward sensing principle, the complexity of the interactions between the nanopore and the translocating molecule means that it is often very challenging to unambiguously link changes in the ionic current to the precise physical phenomena that cause them. It is here that the computational methods employed in this dissertation have the potential to shine, as they are capable of modeling nearly all aspects of the sensing process with near-atomistic precision. Beyond familiarizing the reader with the concepts and state-of-the-art of the nanopore field, the primary goals of this dissertation are fourfold: (1) develop methodologies for accurate modeling of biological nanopores; (2) investigate the equilibrium electrostatics of biological nanopores; (3) elucidate the trapping behavior of a protein inside a biological nanopore; and (4) map the transport properties of a biological nanopore. In the first results chapter of this thesis (Chapter 3), we used 3D equilibrium simulations [...]

    Comment: PhD thesis, 306 pages. Source code available at https://github.com/willemsk/phdthesis-tex
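    As a hedged illustration of the sensing principle described above, the sketch below detects translocation events in a synthetic ionic current trace by thresholding the current blockade; all signal parameters are invented for illustration and are not taken from the dissertation.

    ```python
    # A minimal sketch of nanopore event detection: find contiguous samples
    # where the ionic current drops below a blockade threshold.
    import numpy as np

    rng = np.random.default_rng(0)
    current = 100 + rng.normal(0, 1, 50_000)  # open-pore current, pA
    current[10_000:10_400] -= 30              # one synthetic translocation event

    threshold = 100 - 5 * current.std()       # blockade threshold
    blocked = current < threshold

    # Edges of the boolean mask give event start/end indices (the trace starts
    # and ends unblocked, so edges come in rising/falling pairs).
    edges = np.flatnonzero(np.diff(blocked.astype(int)))
    starts, ends = edges[::2] + 1, edges[1::2] + 1
    for s, e in zip(starts, ends):
        depth = 100 - current[s:e].mean()
        print(f"event: {e - s} samples, mean blockade depth {depth:.1f} pA")
    ```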