
    Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control

    Full text link
    This paper provides an overview of the current state of the art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs have the potential to increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control. It also highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning, and control, and discusses the potential benefits of integrating AI, soft robotics, and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs and highlights the need for more research in this field. Comment: Preprint, to appear in the Journal of Field Robotics.

    The [OIII] profiles of far-infrared active and non-active optically-selected green valley galaxies

    Full text link
    We present a study of the [OIII] λ5007 line profile in a sub-sample of 8 active galactic nuclei (AGN) and 6 non-AGN in the optically-selected green valley at z < 0.5, using long-slit spectroscopic observations with the 11 m Southern African Large Telescope. Gaussian decomposition of the line profile was performed to study its different components. We observe that the AGN profile is more complex than the non-AGN one. In particular, in most AGN (5/8) we detect a blue wing of the line. We derive the FWHM velocities of the wing and systemic components, and find that AGN show higher FWHM velocity than non-AGN in their core component. We also find that the AGN show blue wings with a median velocity width of approximately 600 km s^-1 and a velocity offset from the core component in the range -90 to -350 km s^-1, in contrast to the non-AGN galaxies, where we do not detect blue wings in any of their [OIII] λ5007 line profiles. Using spatial information in our spectra, we show that at least three of the outflow candidate galaxies have centrally driven gas outflows extending across the whole galaxy. Moreover, these are also the galaxies located on the main sequence of star formation, raising the possibility that the AGN in our sample are influencing the star formation of their host galaxies (such as positive feedback). This is in agreement with our previous work, where we studied the star formation, morphology, and stellar population properties of a sample of green valley AGN and non-AGN galaxies. Comment: 15 pages, 6 figures, accepted for publication in Ap
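
    For readers unfamiliar with the technique, the sketch below illustrates the kind of two-component (core + blue wing) Gaussian decomposition described in the abstract; it is a generic example with placeholder values, not the authors' pipeline.

        # Minimal sketch: fit an [OIII] 5007 profile with a narrow core plus a
        # blueshifted wing component. All amplitudes, widths, and the synthetic
        # "observed" spectrum below are illustrative placeholders.
        import numpy as np
        from scipy.optimize import curve_fit

        C_KMS = 299792.458          # speed of light, km/s
        LAM0 = 5006.84              # rest wavelength of [OIII], Angstrom

        def gauss(lam, amp, cen, sigma):
            return amp * np.exp(-0.5 * ((lam - cen) / sigma) ** 2)

        def core_plus_wing(lam, a1, c1, s1, a2, c2, s2):
            # systemic (core) component + broader, blueshifted wing
            return gauss(lam, a1, c1, s1) + gauss(lam, a2, c2, s2)

        # synthetic spectrum standing in for an observed line profile
        lam = np.linspace(4990, 5025, 400)
        flux = core_plus_wing(lam, 1.0, LAM0, 1.5, 0.3, LAM0 - 4.0, 3.5)
        flux += np.random.normal(0, 0.02, lam.size)

        p0 = [1.0, LAM0, 1.5, 0.2, LAM0 - 3.0, 3.0]   # initial guesses
        popt, _ = curve_fit(core_plus_wing, lam, flux, p0=p0)

        fwhm_core = 2.3548 * popt[2] / popt[1] * C_KMS       # core FWHM, km/s
        v_offset = (popt[4] - popt[1]) / popt[1] * C_KMS     # wing offset, km/s
        print(f"core FWHM ~ {fwhm_core:.0f} km/s, wing offset ~ {v_offset:.0f} km/s")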

    Loop Closure Detection Based on Object-level Spatial Layout and Semantic Consistency

    Full text link
    Visual simultaneous localization and mapping (SLAM) systems face challenges in detecting loop closures under large viewpoint changes. In this paper, we present an object-based loop closure detection method based on the spatial layout and semantic consistency of the 3D scene graph. First, we propose an object-level data association approach based on semantic labels, intersection over union (IoU), object color, and object embedding. Subsequently, multi-view bundle adjustment with the associated objects is utilized to jointly optimize the poses of objects and cameras. We represent the refined objects as a 3D spatial graph with semantics and topology. We then propose a graph matching approach that selects corresponding objects based on the structural layout and the semantic property similarity of vertices' neighbors. Finally, we jointly optimize camera trajectories and object poses in an object-level pose graph optimization, which results in a globally consistent map. Experimental results demonstrate that our proposed data association approach can construct more accurate 3D semantic maps, and that our loop closure method is more robust than point-based and object-based methods under large viewpoint changes.
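
    The sketch below illustrates one plausible form of an object-level association score combining semantic labels, IoU, colour, and embeddings, in the spirit of the approach described above; the weighting scheme and helper inputs are illustrative assumptions, not the paper's actual formulation.

        # Minimal sketch of an object-level association score. Weights and inputs
        # are illustrative assumptions; associate two detections if the score
        # exceeds some threshold.
        import numpy as np

        def association_score(label_a, label_b, iou_3d, color_a, color_b,
                              emb_a, emb_b, weights=(0.4, 0.3, 0.3)):
            if label_a != label_b:          # semantic labels must agree
                return 0.0
            w_iou, w_col, w_emb = weights
            # colour similarity: 1 at identical mean RGB (in [0,1]^3), 0 at maximal distance
            col_sim = 1.0 - np.linalg.norm(np.asarray(color_a) - np.asarray(color_b)) / np.sqrt(3)
            # cosine similarity between appearance embeddings, mapped to [0, 1]
            emb_sim = 0.5 * (1.0 + np.dot(emb_a, emb_b) /
                             (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
            return w_iou * iou_3d + w_col * col_sim + w_emb * emb_sim

        # two detections of (possibly) the same chair
        score = association_score("chair", "chair", iou_3d=0.62,
                                  color_a=(0.8, 0.2, 0.2), color_b=(0.75, 0.25, 0.2),
                                  emb_a=np.random.rand(64), emb_b=np.random.rand(64))
        print(f"association score: {score:.2f}")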

    CEERS: Diversity of Lyman-Alpha Emitters during the Epoch of Reionization

    Full text link
    We analyze rest-frame ultraviolet to optical spectra of three z ≃ 7.47-7.75 galaxies whose Lyα emission lines were previously detected with Keck/MOSFIRE observations, using the JWST/NIRSpec observations from the Cosmic Evolution Early Release Science (CEERS) survey. From the NIRSpec data, we confirm the systemic redshifts of these Lyα emitters, and emission-line ratio diagnostics indicate these galaxies were highly ionized and metal poor. We investigate the Lyα line properties, including the line flux, velocity offset, and spatial extension. For the one galaxy where we have both NIRSpec and MOSFIRE measurements, we find a significant offset in their flux measurements (~5× greater in MOSFIRE) and a marginal difference in the velocity shifts. The simplest interpretation is that the Lyα emission is extended and not entirely encompassed by the NIRSpec slit. The cross-dispersion profiles in NIRSpec reveal that Lyα in one galaxy is significantly more extended than the non-resonant emission lines. We also compute the expected sizes of the ionized bubbles that can be generated by the Lyα sources, discussing viable scenarios for the creation of sizable ionized bubbles (>1 physical Mpc). The source with the highest ionization condition is possibly capable of ionizing its own bubble, while the other two do not appear to be capable of ionizing such a large region, requiring additional sources of ionizing photons. Therefore, the fact that we detect Lyα from these galaxies suggests diverse scenarios for the escape of Lyα during the epoch of reionization. High spectral resolution spectra with JWST/NIRSpec will be extremely useful for constraining the physics of patchy reionization. Comment: Submitted to ApJ (18 pages, 7 figures, 2 tables)
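
    As context for the bubble-size estimates mentioned above, the sketch below evaluates the standard back-of-the-envelope radius of a fully ionized region around a source emitting ionizing photons at a constant rate, neglecting recombinations and bubble expansion; all input values are illustrative assumptions, not numbers from the paper.

        # Back-of-the-envelope sketch of an ionized bubble size: a source emitting
        # N_dot ionizing photons/s for a time t, with escape fraction f_esc, into a
        # medium of mean hydrogen density n_H, ionizes a volume N_photons / n_H.
        import numpy as np

        MPC_CM = 3.086e24            # cm per Mpc
        YR_S = 3.156e7               # seconds per year

        def bubble_radius_mpc(N_dot_ion, f_esc, t_yr, z):
            n_H0 = 1.9e-7                           # mean hydrogen density today, cm^-3
            n_H = n_H0 * (1.0 + z) ** 3             # proper density at redshift z
            n_photons = N_dot_ion * f_esc * t_yr * YR_S
            volume = n_photons / n_H                # cm^3 of fully ionized gas
            radius_cm = (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
            return radius_cm / MPC_CM

        # illustrative inputs: 1e54 photons/s, f_esc = 0.2, 100 Myr, z = 7.5
        print(f"R ~ {bubble_radius_mpc(1e54, 0.2, 1e8, 7.5):.2f} physical Mpc")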

    Examples of works to practice staccato technique in clarinet instrument

    Get PDF
    The stages of strengthening the clarinet's staccato technique were applied through the study of pieces. Rhythm and nuance exercises that speed up staccato passages are included. The most important aim of the study is not only staccato practice itself but also attention to the precision of simultaneous finger-tongue coordination. To make staccato practice more productive, etude work is incorporated into the study of pieces. Careful attention to these exercises, together with the inspiring effect of staccato practice, has added a new dimension to musical identity. Every stage of the study of eight original pieces is described, with each stage designed to reinforce the next performance and the technique. The study reports in which areas the staccato technique is used and what results were obtained. It sets out how the notes are to be shaped through finger and tongue coordination and within what practice discipline this takes place. It was found that reed, note, diaphragm, finger, tongue, nuance, and discipline form an inseparable whole in staccato technique. A literature review was carried out to survey studies related to staccato. The review found that there are few studies of pieces using staccato in clarinet technique, while a survey of method books showed that etudes predominate. Accordingly, exercises for speeding up and strengthening the clarinet's staccato technique are presented. It was observed that interspersing work on pieces between staccato etudes relaxes the mind and further increases motivation. The choice of a suitable reed for staccato practice is also emphasized: it was found that the right reed increases tongue speed when practising staccato correctly, and that choosing the right reed depends on the reed producing sound easily. If the reed does not support the force of the tonguing, the need to choose a more suitable reed is stressed. Interpreting a piece from beginning to end in staccato practice can be difficult; in this respect, the study shows that following the given musical nuances eases tonguing performance. Passing on the acquired knowledge and experience to future generations in a constructive way is encouraged. How forthcoming pieces can be worked out and how the staccato technique can be mastered is explained, with the aim of mastering the staccato technique in a shorter time. It is as important to commit the exercises to memory as it is to teach the fingers their places. The work that emerges as a result of the determination and patience shown will raise success to even higher levels.

    SOFIA and ALMA Investigate Magnetic Fields and Gas Structures in Massive Star Formation: The Case of the Masquerading Monster in BYF 73

    Full text link
    We present SOFIA+ALMA continuum and spectral-line polarisation data on the massive molecular cloud BYF 73, revealing important details about the magnetic field morphology, gas structures, and energetics in this unusual massive star formation laboratory. The 154 μm HAWC+ polarisation map finds a highly organised magnetic field in the densest, inner 0.55 × 0.40 pc portion of the cloud, compared to an unremarkable morphology in the cloud's outer layers. The 3 mm continuum ALMA polarisation data reveal several more structures in the inner domain, including a pc-long, ~500 M☉ "Streamer" around the central massive protostellar object MIR 2, with magnetic fields mostly parallel to the east-west Streamer but oriented north-south across MIR 2. The magnetic field orientation changes from mostly parallel to the column density structures to mostly perpendicular, at thresholds N_crit = 6.6×10^26 m^-2, n_crit = 2.5×10^11 m^-3, and B_crit = 42±7 nT. ALMA also mapped Goldreich-Kylafis polarisation in 12CO across the cloud, which traces, in both total intensity and polarised flux, a powerful bipolar outflow from MIR 2 that interacts strongly with the Streamer. The magnetic field is also strongly aligned along the outflow direction; energetically, it may dominate the outflow near MIR 2, comprising rare evidence for a magnetocentrifugal origin to such outflows. A portion of the Streamer may be in Keplerian rotation around MIR 2, implying a gravitating mass of 1350±50 M☉ for the protostar+disk+envelope; alternatively, these kinematics can be explained by gas in free fall towards a 950±35 M☉ object. The high accretion rate onto MIR 2 apparently occurs through the Streamer/disk and could account for ~33% of MIR 2's total luminosity via gravitational energy release. Comment: 33 pages, 32 figures, accepted by ApJ. Line-Integral Convolution (LIC) images and movie versions of Figures 3b, 7, and 29 are available at https://gemelli.spacescience.org/~pbarnes/research/champ/papers
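
    For context, dynamical-mass estimates of the kind quoted above follow from the standard Keplerian relation M = v^2 r / G; the sketch below shows the arithmetic with illustrative numbers that are not taken from the paper.

        # Minimal sketch of a Keplerian enclosed-mass estimate, M = v^2 r / G.
        # The rotation speed and radius below are illustrative placeholders.
        G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
        PC_M = 3.086e16     # metres per parsec
        MSUN_KG = 1.989e30  # kilograms per solar mass

        def keplerian_mass_msun(v_kms, r_pc):
            v = v_kms * 1e3          # km/s -> m/s
            r = r_pc * PC_M          # pc -> m
            return v * v * r / G / MSUN_KG

        # e.g. ~3.4 km/s rotation at 0.5 pc implies roughly 1.3e3 solar masses
        print(f"M ~ {keplerian_mass_msun(3.4, 0.5):.0f} Msun")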

    Modelling uncertainties for measurements of the H → γγ Channel with the ATLAS Detector at the LHC

    Get PDF
    The Higgs boson to diphoton (H → γγ) branching ratio is only 0.227 %, but this final state has yielded some of the most precise measurements of the particle. As measurements of the Higgs boson become increasingly precise, greater importance is placed on the factors that constitute the uncertainty. Reducing the effects of these uncertainties requires an understanding of their causes. The research presented in this thesis aims to illuminate how uncertainties on simulation modelling are determined and proffers novel techniques for deriving them. The upgrade of the FastCaloSim tool, used for simulating events in the ATLAS calorimeter at a rate far exceeding that of the nominal detector simulation, Geant4, is described. The integration of a method that allows the toolbox to emulate the accordion geometry of the liquid argon calorimeters is detailed. This tool allows for the production of larger samples while using significantly fewer computing resources. A measurement of the total Higgs boson production cross-section multiplied by the diphoton branching ratio (σ × Bγγ) is presented, where this value was determined to be (σ × Bγγ)_obs = 127 ± 7 (stat.) ± 7 (syst.) fb, in agreement with the Standard Model prediction. The signal and background shape modelling is described, and the contribution of the background modelling uncertainty to the total uncertainty ranges from 2.4 % to 18 %, depending on the Higgs boson production mechanism. A method for estimating the number of events in a Monte Carlo background sample required to model the shape is detailed. It was found that the size of the nominal γγ background sample required a multiplicative increase by a factor of 3.60 to adequately model the background at a confidence level of 68 %, or a factor of 7.20 at a confidence level of 95 %. Based on this estimate, 0.5 billion additional simulated events were produced, substantially reducing the background modelling uncertainty. A technique is detailed for emulating the effects of Monte Carlo event generator differences using multivariate reweighting. The technique is used to estimate the event generator uncertainty on the signal modelling of tHqb events, improving the reliability of estimating the tHqb production cross-section. This multivariate reweighting technique is then used to estimate the generator modelling uncertainties on background Vγγ samples for the first time. The estimated uncertainties were found to be covered by the currently assumed background modelling uncertainty.
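
    The sketch below shows the general idea behind classifier-based (density-ratio) multivariate reweighting of one event generator onto another; it is a generic illustration on toy data, not the analysis code used in the thesis.

        # Minimal sketch of classifier-based multivariate reweighting: train a
        # classifier to separate events from generator A and generator B, then use
        # per-event weights p/(1-p) to make sample A resemble sample B.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(0)
        # toy stand-ins for event kinematics from two generators (e.g. pT, eta, m_yy)
        gen_a = rng.normal([50.0, 0.0, 125.0], [15.0, 1.2, 2.0], size=(5000, 3))
        gen_b = rng.normal([55.0, 0.0, 125.0], [18.0, 1.0, 2.0], size=(5000, 3))

        X = np.vstack([gen_a, gen_b])
        y = np.concatenate([np.zeros(len(gen_a)), np.ones(len(gen_b))])

        clf = GradientBoostingClassifier().fit(X, y)
        p = clf.predict_proba(gen_a)[:, 1]          # P(event looks like generator B)
        weights = p / (1.0 - p)                      # reweight A to resemble B
        weights *= len(weights) / weights.sum()      # keep the overall normalisation

        print(f"mean reweighted pT of sample A: {np.average(gen_a[:, 0], weights=weights):.1f}")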

    An Improved eXplainable Point Cloud Classifier (XPCC)

    Get PDF
    Classification of objects from 3D point clouds has become an increasingly relevant task across many computer vision applications. However, few studies have investigated explainable methods. In this paper, a new prototype-based and explainable classification method called the eXplainable Point Cloud Classifier (XPCC) is proposed. The XPCC method offers several advantages over previous explainable and non-explainable methods. First, the XPCC method uses local densities and global multivariate generative distributions, and therefore provides comprehensive and interpretable object-based classification; furthermore, the method is built on recursive calculations and is thus computationally very efficient. Second, the model learns continuously without the need for complete re-training and is domain transferable. Third, the proposed XPCC expands on the underlying learning method, xDNN, and is specific to 3D data. As such, three new layers are added to the original xDNN architecture: i) 3D point cloud feature extraction, ii) global compound prototype weighting, and iii) the SoftMax function. Experiments were performed on the ModelNet40 benchmark, which demonstrated that XPCC is the only explainable point cloud classifier to increase classification accuracy relative to the base algorithm when applied to the same problem. Additionally, this paper proposes a novel prototype-based visual representation that provides model- and object-based explanations. The prototype objects are superimposed to create a prototypical class representation of their data density within the feature space, called the Compound Prototype Cloud. This allows a user to visualize the explainable aspects of the model and identify object regions that contribute to the classification in a human-understandable way.
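
    The sketch below illustrates generic prototype-based classification with a SoftMax over prototype similarities, in the spirit of the approach described above; it is not the XPCC implementation, and all inputs are toy placeholders.

        # Minimal sketch of prototype-based classification: each class keeps a set
        # of prototype feature vectors, and a query is assigned via a SoftMax over
        # its similarity to the nearest prototype of every class.
        import numpy as np

        def softmax(x):
            e = np.exp(x - np.max(x))
            return e / e.sum()

        def classify(query, prototypes):
            """prototypes: dict mapping class name -> (n_prototypes, n_features) array."""
            classes = list(prototypes)
            # similarity = negative distance to the nearest prototype of each class
            sims = np.array([-np.min(np.linalg.norm(prototypes[c] - query, axis=1))
                             for c in classes])
            probs = softmax(sims)
            return classes[int(np.argmax(probs))], dict(zip(classes, probs))

        # toy global feature vectors standing in for extracted point cloud features
        protos = {"chair": np.random.rand(4, 128), "table": np.random.rand(3, 128)}
        label, scores = classify(np.random.rand(128), protos)
        print(label, {k: round(float(v), 2) for k, v in scores.items()})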

    Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond

    Full text link
    This thesis is framed at the intersection between modern Machine Learning techniques, such as Deep Neural Networks, and reliable probabilistic modeling. In many machine learning applications, we care not only about the prediction made by a model (e.g. this lung image presents cancer) but also about how confident the model is in making this prediction (e.g. this lung image presents cancer with 67% probability). In such applications, the model assists the decision-maker (in this case a doctor) in making the final decision. As a consequence, the probabilities provided by a model need to reflect the true proportions present in the set to which those probabilities are assigned; otherwise, the model is useless in practice. When this happens, we say that a model is perfectly calibrated. This thesis explores three ways to provide better-calibrated models. First, it is shown how to calibrate models implicitly when they are decalibrated by data augmentation techniques; a cost function is introduced that resolves this decalibration, taking as its starting point ideas derived from decision making with Bayes' rule. Second, it is shown how to calibrate models using a post-calibration stage implemented with a Bayesian neural network. Finally, based on the limitations observed in the Bayesian neural network, which we hypothesize stem from a misspecified prior, a new stochastic process is introduced that serves as a prior distribution in a Bayesian inference problem. Maroñas Molano, J. (2022). Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181582
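
    As an illustration of the notion of calibration discussed above, the sketch below computes the expected calibration error (ECE), a common way to quantify the gap between predicted confidence and empirical accuracy; it is a generic example, not code from the thesis.

        # Minimal sketch: expected calibration error (ECE) compares predicted
        # confidence with empirical accuracy inside confidence bins.
        import numpy as np

        def expected_calibration_error(confidences, correct, n_bins=10):
            confidences = np.asarray(confidences, dtype=float)
            correct = np.asarray(correct, dtype=float)
            edges = np.linspace(0.0, 1.0, n_bins + 1)
            ece = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                mask = (confidences > lo) & (confidences <= hi)
                if mask.any():
                    gap = abs(correct[mask].mean() - confidences[mask].mean())
                    ece += mask.mean() * gap          # weight by fraction of samples
            return ece

        # e.g. a model that reports 0.9 confidence but is right only ~60% of the time
        conf = np.full(1000, 0.9)
        hits = np.random.rand(1000) < 0.6
        print(f"ECE ~ {expected_calibration_error(conf, hits):.2f}")   # roughly 0.3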

    Application of advanced fluorescence microscopy and spectroscopy in live-cell imaging

    Get PDF
    Since its inception, fluorescence microscopy has been a key source of discoveries in cell biology. Advances in fluorophores, labeling techniques, and instrumentation have made fluorescence microscopy a versatile quantitative tool for studying dynamic processes and interactions both in vitro and in live cells. In this thesis, I apply quantitative fluorescence microscopy techniques in live-cell environments to investigate several biological processes. To study Gag processing in HIV-1 particles, fluorescence lifetime imaging microscopy and single particle tracking are combined to follow nascent HIV-1 virus particles during assembly and release on the plasma membrane of living cells. Proteolytic release of eCFP embedded in the Gag lattice of immature HIV-1 virus particles results in a characteristic increase in its fluorescence lifetime, so Gag processing and rearrangement can be detected in individual virus particles using this approach. In another project, a robust method for quantifying Förster resonance energy transfer (FRET) in live cells is developed to allow direct comparison of live-cell FRET experiments between laboratories. Finally, I apply image fluctuation spectroscopy to study protein behavior in a variety of cellular environments. Image cross-correlation spectroscopy is used to study the oligomerization of CXCR4, a G-protein coupled receptor, on the plasma membrane. With raster image correlation spectroscopy, I measure the diffusion of histones in the nucleoplasm and heterochromatin domains of the nuclei of early mouse embryos. The lower diffusion coefficient of histones in the heterochromatin domain supports the conclusion that heterochromatin forms a liquid phase-separated domain. The wide range of topics covered in this thesis demonstrates that fluorescence microscopy is more than just an imaging tool: it is also a powerful instrument for the quantification and elucidation of dynamic cellular processes.
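
    As a small illustration of the lifetime-based FRET measurements described above, the sketch below computes the standard FRET efficiency from donor lifetimes with and without an acceptor; the values are illustrative, not results from the thesis.

        # Minimal sketch: FRET efficiency from donor fluorescence lifetimes,
        # E = 1 - tau_DA / tau_D, where tau_DA is the donor lifetime in the
        # presence of an acceptor and tau_D is the unquenched donor lifetime.
        def fret_efficiency(tau_da_ns, tau_d_ns):
            return 1.0 - tau_da_ns / tau_d_ns

        # e.g. a donor lifetime dropping from 2.5 ns to 1.8 ns implies E = 0.28
        print(f"E = {fret_efficiency(1.8, 2.5):.2f}")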