
    G-PECNet: Towards a Generalizable Pedestrian Trajectory Prediction System

    Navigating dynamic physical environments without obstructing or damaging human assets is of quintessential importance for social robots. In this work, we address a sub-problem of autonomous drone navigation: predicting out-of-domain human and agent trajectories with a deep generative model. Our method, General-PECNet (G-PECNet), improves the Final Displacement Error (FDE) by 9.5% over the 2020 benchmark PECNet through a combination of architectural improvements inspired by periodic activation functions and synthetic trajectory (data) augmentations using Hidden Markov Models (HMMs) and Reinforcement Learning (RL). Additionally, we propose a simple geometry-inspired metric for trajectory non-linearity and outlier detection, helpful for the task. Code available at https://github.com/Aryan-Garg/PECNet-Pedestrian-Trajectory-Prediction.git
    Comment: Notable ICLR Tiny Paper 202
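
    For reference, here is a minimal, hypothetical PyTorch sketch (not the authors' code) of the two ingredients named above: the FDE metric computed over a batch of trajectories, and a SIREN-style periodic activation of the kind the phrase "periodic activation functions" refers to.

```python
# Hypothetical sketch, not the G-PECNet reference implementation.
import torch
import torch.nn as nn

def final_displacement_error(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """FDE: mean Euclidean distance between predicted and true final positions.

    pred, gt: (batch, timesteps, 2) trajectories in world coordinates.
    """
    return torch.linalg.norm(pred[:, -1] - gt[:, -1], dim=-1).mean()

class Sine(nn.Module):
    """SIREN-style periodic activation: sin(w0 * x)."""
    def __init__(self, w0: float = 30.0):
        super().__init__()
        self.w0 = w0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.w0 * x)
```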

    Analysis of Learned Features and Framework for Potato Disease Detection

    For applications such as plant disease detection, a model is usually trained on publicly available data and tested on field data. The test data distribution therefore differs from the training data distribution, which adversely affects classifier performance. We handle this dataset shift by ensuring that features are learned from disease spots on the leaf or from healthy regions, as applicable. This is achieved using a Faster Region-based Convolutional Neural Network (Faster R-CNN) as one solution and an attention-based network as the other. The average classification accuracy of these classifiers is approximately 95% when evaluated on the test sets corresponding to their training datasets. The classifiers also performed comparably, with an average score of 84%, on a dataset not seen during the training phase.
    Comment: 15 pages, 8 figures
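
    For readers unfamiliar with the detector family named above, the following is a minimal sketch (assuming torchvision's stock model, not the paper's trained potato-disease pipeline) of running a pretrained Faster R-CNN to localize candidate regions such as disease spots:

```python
# Hypothetical sketch using torchvision's stock Faster R-CNN,
# not the paper's trained potato-disease detector.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

# A single dummy RGB image tensor in [0, 1]; replace with a real leaf image.
image = torch.rand(3, 480, 640)

with torch.no_grad():
    # The model returns one dict per image: boxes, labels, scores.
    detections = model([image])[0]

# Keep only confident regions; classification features would then be
# pooled from these boxes rather than from the whole image.
keep = detections["scores"] > 0.5
print(detections["boxes"][keep])
```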

    Prevalence of comorbidities and their relationship to functional status of children with cerebral palsy

    Background: Cerebral palsy (CP) is the most common motor disorder in children, and associated comorbidities are very common. The Gross Motor Function Classification System (GMFCS), Manual Ability Classification System (MACS), and Communication Function Classification System (CFCS) are used to determine functional ability. Functional ability and comorbidities have the greatest impact on the child with CP, yet there is a paucity of data on functional level and its correlation with comorbidities. Objective: The aim of the study was to determine the prevalence of comorbidities in CP and their correlation with functional status in children. Materials and Methods: A total of 154 consecutive children with CP attending a district early intervention center and pediatric department from January to December 2018 were enrolled. Cases were evaluated by history, clinical examination, and investigations; CP was classified into subtypes, and cases were screened for comorbidities. Functional assessment was done as per GMFCS-ER, MACS, and CFCS. Results: The study showed that 76% of children had spastic CP, 7% dyskinetic, 6% hypotonic/ataxic, and 11% mixed CP. The mean age was 4 years. Perinatal asphyxia was the most common insult. Comorbidities were intellectual disability (81%), epilepsy (50%), visual problems (70%), hearing problems (12%), malnutrition (36%), and drooling (61%). About 63% had GMFCS level ≥3, and about 60% had MACS and CFCS levels ≥3, with significant correlation. Comorbidities were unevenly distributed across GMFCS levels, and there was a strong correlation between comorbidity burden and GMFCS level. Conclusion: Comorbidities were common and disproportionally distributed across GMFCS levels; the burden of comorbidities was greater at higher GMFCS levels.
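
    Since the abstract reports a correlation between comorbidity burden and an ordinal functional scale, here is a minimal sketch of the kind of analysis involved (assuming SciPy; the values below are made-up placeholders, not the study's data):

```python
# Illustrative sketch only; the counts below are placeholders,
# not data from the study.
from scipy.stats import spearmanr

# One entry per child: GMFCS level (1-5, ordinal) and comorbidity count.
gmfcs_level = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
n_comorbidities = [0, 1, 1, 2, 3, 3, 4, 4, 5, 6]

# Spearman's rank correlation suits ordinal scales such as GMFCS.
rho, p_value = spearmanr(gmfcs_level, n_comorbidities)
print(f"rho={rho:.2f}, p={p_value:.3f}")
```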

    On the role of performance interference in consolidated environments

    Joint doctorate (cotutelle) between Universitat Politècnica de Catalunya and KTH Royal Institute of Technology. With the advent of resource-shared environments such as the Cloud, virtualization has become the de facto standard for server consolidation. While consolidation improves utilization, it causes performance interference between Virtual Machines (VMs) due to contention in shared resources such as CPU, Last Level Cache (LLC), and memory bandwidth. Over-provisioning resources for performance-sensitive applications can guarantee Quality of Service (QoS), but it results in low machine utilization. Thus, assuring QoS for performance-sensitive applications while allowing co-location has been a challenging problem. In this thesis, we identify ways to mitigate performance interference without undue over-provisioning and also point out the need to model and account for performance interference to improve the reliability and accuracy of elastic scaling. The end goal of this research is to leverage these observations to provide efficient resource management that is both performance and cost aware. Our main contributions are threefold. First, we improve overall machine utilization by executing best-effort applications alongside latency-critical applications without violating the latter's performance requirements; our solution dynamically adapts to, and leverages, changing workload/phase behaviour to execute best-effort applications without causing excessive performance interference. Second, we identify that certain performance metrics used for elastic scaling decisions may become unreliable if performance interference is unaccounted for; by modelling performance interference, we show that these metrics become reliable in a multi-tenant environment. Third, we identify and demonstrate the impact of interference on the accuracy of elastic scaling and propose a solution that significantly minimises performance violations at a reduced cost.
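
    To make the elastic-scaling point concrete, here is a minimal, hypothetical sketch (not the thesis's actual model) of a scale-out decision that first discounts an interference estimate from a measured utilization signal before comparing it against the threshold:

```python
# Hypothetical sketch: interference-aware scale-out decision.
# The interference estimate here is a placeholder, not the thesis's model.

def corrected_utilization(measured_util: float, interference_score: float) -> float:
    """Remove the estimated share of utilization caused by co-located VMs.

    interference_score: fraction of measured utilization attributed to
    contention (e.g., inferred from LLC miss rates), in [0, 1).
    """
    return measured_util * (1.0 - interference_score)

def should_scale_out(measured_util: float, interference_score: float,
                     threshold: float = 0.8) -> bool:
    # A naive autoscaler compares measured_util with the threshold directly,
    # which can trigger spurious scale-outs when interference inflates it.
    return corrected_utilization(measured_util, interference_score) > threshold

# Example: 90% measured CPU, of which ~25% is attributed to interference.
print(should_scale_out(0.90, 0.25))  # False: true demand is ~0.675
```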

    Analyzing frameworks and strategies for converting neural networks for neuromorphic processors

    Neuromorphic computing employs innovative algorithms to mimic how the human brain interacts with the world, aiming to achieve capabilities that closely resemble human cognition. Neuromorphic processors are based on an entirely new computing paradigm and come with new Machine Learning (ML) algorithms. Programming a neuromorphic processor often entails creating a Spiking Neural Network (SNN) that closely mimics biological neural networks and can be deployed to the processor. Neuromorphic processors leverage these asynchronous, event-based SNNs to achieve substantial gains in power efficiency and performance over conventional architectures. However, training such networks is difficult due to the non-differentiable nature of spike events. This thesis investigates frameworks and strategies for converting standard neural networks, particularly Deep Learning (DL) models, to SNNs suitable for neuromorphic processors. Focusing on three neuromorphic platforms (BrainChip Akida, Intel Loihi, and SynSense), the thesis aims to develop a standardized conversion pipeline, address current limitations, and conduct metric-based analyses of the models developed using the different frameworks. The thesis uses the PilotNet model, a Convolutional Neural Network (CNN) designed for autonomous driving, to evaluate the conversion processes and performance on the selected neuromorphic frameworks. Results demonstrate the varying degrees of efficiency and challenges associated with each neuromorphic processor, providing insights for optimizing the conversion process and further advancing neuromorphic computing for practical applications such as autonomous driving, robotics, and edge computing. The findings emphasize the need for continued development of conversion techniques and optimization of neuromorphic hardware to fully harness the potential of AI-driven systems.
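
    As a rough illustration of the rate-based conversion idea such frameworks build on, here is a minimal, hypothetical sketch (not any of the named toolchains, which also perform weight normalization and hardware mapping): a ReLU activation is approximated by an integrate-and-fire neuron whose spike rate over a time window tracks the analog value.

```python
# Hypothetical sketch of rate-coded ANN-to-SNN conversion; real toolchains
# (e.g., for Akida or Loihi) add weight normalization and much more.

def if_neuron_rate(input_current: float, threshold: float = 1.0,
                   timesteps: int = 100) -> float:
    """Simulate one integrate-and-fire neuron; return its spike rate."""
    v, spikes = 0.0, 0
    for _ in range(timesteps):
        v += input_current          # integrate a constant input each step
        if v >= threshold:          # fire, then reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / timesteps

# The IF spike rate approximates ReLU for inputs in [0, threshold]:
for x in (-0.2, 0.1, 0.5, 0.9):
    print(f"relu={max(x, 0.0):.2f}  snn_rate={if_neuron_rate(x):.2f}")
```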

    Ambient Pressure XPS Study of Mixed Conducting Perovskite-type SOFC Cathode and Anode Materials under Well-Defined Electrochemical Polarization

    The oxygen exchange activity of mixed conducting oxide surfaces has been widely investigated, but a detailed understanding of the corresponding reaction mechanisms and the rate-limiting steps is largely still missing. Combined in situ investigation of electrochemically polarized model electrode surfaces under realistic temperature and pressure conditions by near-ambient pressure (NAP) XPS and impedance spectroscopy enables very surface-sensitive chemical analysis and may detect species that are involved in the rate-limiting step. In the present study, acceptor-doped perovskite-type La0.6Sr0.4CoO3-δ (LSC), La0.6Sr0.4FeO3-δ (LSF), and SrTi0.7Fe0.3O3-δ (STF) thin film model electrodes were investigated under well-defined electrochemical polarization as cathodes in oxidizing (O2) and as anodes in reducing (H2/H2O) atmospheres. In oxidizing atmosphere, all materials exhibit additional surface species of strontium and oxygen. The polaron-type electronic conduction mechanism of LSF and STF and the metal-like mechanism of LSC are reflected by distinct differences in the valence band spectra. Switching between oxidizing and reducing atmosphere, as well as electrochemical polarization, causes reversible shifts in the measured binding energy. This can be correlated to a Fermi level shift due to variations in the chemical potential of oxygen. Changes of oxidation state were detected for Fe, which appears as Fe(III) in oxidizing atmosphere and as mixed Fe(II/III) in H2/H2O. Cathodic polarization in reducing atmosphere leads to the reversible formation of a catalytically active Fe(0) phase.
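
    As a schematic of the coupling invoked above (hedged: this assumes an electrolyte that conducts only oxygen ions and is not taken from the paper; signs depend on the chosen polarity convention), an applied overpotential shifts the oxygen chemical potential in the electrode, and the accompanying Fermi-level shift appears in XPS as a rigid displacement of the core-level binding energies:

```latex
% Schematic relations, assuming a purely oxygen-ion-conducting electrolyte;
% signs follow the chosen polarity convention (not taken from the paper).
% An overpotential \eta shifts the oxygen chemical potential in the electrode:
\Delta\mu_{\mathrm{O_2}} \;=\; 4e\eta ,
% and the resulting Fermi-level shift rigidly displaces all measured
% core-level binding energies:
\Delta E_{\mathrm{B}} \;\approx\; e\eta .
```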

    X-ray spectral analysis of the jet termination shock in Pictor A on subarcsecond scales with Chandra

    Hot spots observed at the edges of extended radio lobes in high-power radio galaxies and quasars mark the positions of mildly relativistic termination shocks, where the bulk kinetic energy of the jet is converted to the internal energy of the jet particles. These are the only astrophysical systems in which mildly relativistic shocks can be directly resolved at various wavelengths of the electromagnetic spectrum. The western hot spot in the radio galaxy Pictor A is an exceptionally good target in this respect, owing to the combination of its angular size and high surface brightness. In our previous work, after a careful Chandra image deconvolution, we resolved this hot spot into a disk-like feature perpendicular to the jet axis and identified it as the front of the jet termination shock. We argued for a synchrotron origin of the observed X-ray photons, which implies electron energies reaching at least 10–100 TeV at the shock front. Here, we present a follow-up to that analysis, proposing, in particular, a novel method for constraining the shape of the X-ray continuum emission with subarcsecond resolution. The method is based on a Chandra hardness-map analysis, using separately deconvolved maps in the soft and hard X-ray bands. In this way, we find a systematic and statistically significant gradient in the hardness ratio across the shock, such that the implied electron energy index ranges from s ≤ 2.2 at the shock front to s > 2.7 in the near downstream. We discuss the implications of these results for a general understanding of particle acceleration at mildly relativistic shocks.
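
    For readers unfamiliar with hardness maps, a minimal NumPy sketch of the quantity involved follows (hedged: the band inputs below are random placeholders standing in for separately deconvolved Chandra images, not the paper's data or pipeline):

```python
# Illustrative hardness-ratio map; the input arrays stand in for
# separately deconvolved Chandra band images, not actual data.
import numpy as np

rng = np.random.default_rng(0)
soft = rng.poisson(lam=20.0, size=(64, 64)).astype(float)  # soft-band counts
hard = rng.poisson(lam=10.0, size=(64, 64)).astype(float)  # hard-band counts

# Classic hardness ratio: HR = (H - S) / (H + S), bounded in [-1, 1].
# Harder spectra (flatter electron energy index s) push HR toward +1.
hr = np.where(hard + soft > 0, (hard - soft) / (hard + soft + 1e-12), np.nan)

print(hr.mean())
```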