
    New opportunities for light-based tumor treatment with an “iron fist”

    The efficacy of photodynamic treatments of tumors can be significantly improved by using a new generation of nanoparticles that take advantage of the unique properties of the tumor microenvironment.

    Thirty years of artificial intelligence in medicine (AIME) conferences: A review of research themes

    Over the past 30 years, the international conference on Artificial Intelligence in MEdicine (AIME) has been organized at different venues across Europe every 2 years, establishing a forum for scientific exchange and creating an active research community. The Artificial Intelligence in Medicine journal has published theme issues with extended versions of selected AIME papers since 1998.

    Quo vadis, nanoparticle-enabled in vivo fluorescence imaging?

    The exciting advancements that we are currently witnessing in terms of novel materials and synthesis approaches are leading to the development of colloidal nanoparticles (NPs) with increasingly greater tunable properties. We have now reached a point where it is possible to synthesize colloidal NPs with functionalities tailored to specific societal demands. The impact of this new wave of colloidal NPs has been especially important in the field of biomedicine. In that vein, luminescent NPs with improved brightness and near-infrared working capabilities have turned out to be optimal optical probes that are capable of fast and high-resolution in vivo imaging. However, luminescent NPs have thus far only reached a limited portion of their potential. Although we believe that the best is yet to come, the future might not be as bright as some of us think (and have hoped!). In particular, translation of NP-based fluorescence imaging from preclinical studies to clinics is not straightforward. In this Perspective, we provide a critical assessment and highlight promising research avenues based on the latest advances in the fields of luminescent NPs and imaging technologies. The disillusioned outlook we proffer herein might sound pessimistic at first, but we consider it necessary to avoid pursuing "pipe dreams" and redirect the efforts toward achievable - yet ambitious - goals. This work has been cofinanced by the European Structural and Investment Fund and by the European Union’s Horizon 2020 FET Open program under grant agreement no. 801305 (NanoTBTech). E.X. is grateful for a Juan de la Cierva Formación scholarship (FJC2018-036734-I). A.B. acknowledges funding from Comunidad de Madrid through TALENTO grant ref. 2019-T1/IND-14014.

    Less is more: dimensionality reduction as a general strategy for more precise luminescence thermometry

    Thermal resolution (also referred to as temperature uncertainty) establishes the minimum discernible temperature change sensed by luminescent thermometers and is a key figure of merit to rank them. Much has been done to minimize its value via probe optimization and correction of readout artifacts, but little effort has been put into better exploiting the calibration datasets themselves. In this context, this work aims at providing a new perspective on the definition of luminescence-based thermometric parameters using dimensionality reduction techniques that have emerged in recent years. The application of linear (Principal Component Analysis) and non-linear (t-distributed Stochastic Neighbor Embedding) transformations to calibration datasets obtained from rare-earth nanoparticles and semiconductor nanocrystals resulted in an improvement in thermal resolution compared to the more classical intensity-based and ratiometric approaches. This, in turn, enabled precise monitoring of temperature changes smaller than 0.1 °C. The methods presented here allow choosing superior thermometric parameters compared to the more classical ones, pushing the performance of luminescent thermometers close to the experimentally achievable limits.
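    A minimal sketch of the idea (not the paper's code): treat each calibration spectrum as a high-dimensional point, use PCA to define a one-dimensional thermometric parameter, and estimate the thermal resolution as the spread of that parameter divided by its sensitivity to temperature. The synthetic two-band spectra, band positions, and noise level below are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        temps = np.linspace(20.0, 50.0, 31)            # calibration temperatures (deg C)
        wavelengths = np.linspace(500.0, 700.0, 200)   # emission wavelengths (nm)

        def spectrum(T):
            """Two emission bands whose relative intensity drifts with temperature."""
            band1 = (1.0 + 0.010 * (T - 20)) * np.exp(-((wavelengths - 545.0) ** 2) / (2 * 8.0 ** 2))
            band2 = (1.0 - 0.005 * (T - 20)) * np.exp(-((wavelengths - 655.0) ** 2) / (2 * 10.0 ** 2))
            return band1 + band2 + rng.normal(0.0, 0.005, wavelengths.size)

        calib = np.array([spectrum(T) for T in temps])

        # The score along the first principal component plays the role of the
        # thermometric parameter, instead of a single band intensity or a ratio.
        pca = PCA(n_components=1)
        score = pca.fit_transform(calib).ravel()

        # Thermal resolution: delta_T = sigma(parameter) / |d(parameter)/dT|,
        # with sigma estimated from repeated measurements at a fixed temperature.
        slope = np.polyfit(temps, score, 1)[0]
        repeats = pca.transform(np.array([spectrum(35.0) for _ in range(50)])).ravel()
        print(f"estimated thermal resolution: {repeats.std() / abs(slope):.3f} deg C")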

    Smoothness and effective regularizations in learned embeddings for shape matching

    Many innovative applications require establishing correspondences among 3D geometric objects. However, the countless possible deformations of smooth surfaces make shape matching a challenging task. Finding an embedding to represent the different shapes in a high-dimensional space where the matching is easier to solve is a well-trodden path that has given many outstanding solutions. Recently, a new trend has shown the advantages of learning such representations. This novel idea motivated us to investigate which properties differentiate these data-driven embeddings and which ones promote state-of-the-art results. In this study, we analyze, for the first time, properties that arise in data-driven learned embeddings and their relation to the shape-matching task. Our findings highlight the close link between matching and smoothness, which naturally emerges from training. We also demonstrate the relation between the orthogonality of the embedding and the bijectivity of the correspondence. Our experiments show exciting results, overcoming well-established alternatives and shedding a different light on relevant contexts and properties for learned embeddings.
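    The quantities discussed above can be made concrete with a small numpy sketch (illustrative only, not the authors' experiments): per-vertex embeddings of two shapes, point-to-point correspondences extracted by nearest neighbours in embedding space, a bijectivity measure of the resulting map, and a Frobenius-norm orthogonality penalty of the kind that could be monitored or regularised during training. The embedding dimension and the random stand-in features are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_vertices, k = 1000, 20

        # Stand-ins for per-vertex learned embeddings of two shapes in
        # correspondence; the target is a noisy copy of the source.
        phi_src = rng.normal(size=(n_vertices, k)) / np.sqrt(n_vertices)
        phi_tgt = phi_src + 0.05 * rng.normal(size=(n_vertices, k)) / np.sqrt(n_vertices)

        # Nearest-neighbour matching in embedding space: for every source
        # vertex, the index of the closest target vertex.
        d2 = (phi_src ** 2).sum(1)[:, None] - 2.0 * phi_src @ phi_tgt.T + (phi_tgt ** 2).sum(1)[None, :]
        corr = d2.argmin(axis=1)
        print("fraction of target vertices hit exactly once:", np.unique(corr).size / n_vertices)

        # Orthogonality penalty || Phi^T Phi - I ||_F; a small value is the kind
        # of property the abstract relates to bijective correspondences.
        print("orthogonality penalty:", np.linalg.norm(phi_src.T @ phi_src - np.eye(k)))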

    Object pop-up: Can we infer 3D objects and their poses from human interactions alone?

    The intimate entanglement between object affordances and human poses is of great interest to, among others, the behavioural sciences, cognitive psychology, and computer vision communities. In recent years, the latter has developed several object-centric approaches: starting from items, learning pipelines synthesize human poses and dynamics in a realistic way, satisfying both geometrical and functional expectations. However, the inverse perspective is significantly less explored: can we infer 3D objects and their poses from human interactions alone? Our investigation follows this direction, showing that a generic 3D human point cloud is enough to pop up an unobserved object, even when the user is just imitating a functionality (e.g., looking through binoculars) without involving a tangible counterpart. We validate our method qualitatively and quantitatively, with synthetic data and sequences acquired for the task, showing applicability for XR/VR. The code is available at https://github.com/ptrvilya/object-popup. Comment: Accepted at CVPR'23.
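    As a rough illustration of the task interface only (not the authors' released model; see the repository linked above for the actual code), the sketch below assumes a PointNet-style encoder that maps a human point cloud to object-class logits and a coarse object pose given as a translation and a unit quaternion. All layer sizes and names are hypothetical.

        import torch
        import torch.nn as nn

        class HumanToObject(nn.Module):
            def __init__(self, n_classes: int = 20, d: int = 256):
                super().__init__()
                # Per-point MLP followed by max pooling over the point cloud.
                self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                               nn.Linear(64, d), nn.ReLU())
                self.cls_head = nn.Linear(d, n_classes)  # which object the interaction implies
                self.pose_head = nn.Linear(d, 7)         # translation (3) + quaternion (4)

            def forward(self, human_points: torch.Tensor):
                feat = self.point_mlp(human_points).max(dim=1).values   # (B, d)
                pose = self.pose_head(feat)
                quat = nn.functional.normalize(pose[:, 3:], dim=-1)
                return self.cls_head(feat), pose[:, :3], quat

        # Toy usage: a batch of two human point clouds with 4,096 points each.
        logits, translation, rotation = HumanToObject()(torch.randn(2, 4096, 3))
        print(logits.shape, translation.shape, rotation.shape)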

    Shape Registration in the Time of Transformers

    In this paper, we propose a transformer-based procedure for the efficient registration of non-rigid 3D point clouds. The proposed approach is data-driven and, for the first time, adopts the transformer architecture for the registration task. Our method is general and applies to different settings. Given a fixed template with some desired properties (e.g., skinning weights or other animation cues), we can register raw acquired data to it, thereby transferring all the template properties to the input geometry. Alternatively, given a pair of shapes, our method can register the first onto the second (or vice versa), obtaining a high-quality dense correspondence between the two. In both contexts, the quality of our results enables us to target real applications such as texture transfer and shape interpolation. Furthermore, we also show that including an estimation of the underlying surface density eases the learning process. By exploiting the potential of this architecture, we can train our model with only a sparse set of ground-truth correspondences (10–20% of the total points). The proposed model and the analysis that we perform pave the way for future exploration of transformer-based architectures for registration and matching applications. Qualitative and quantitative evaluations demonstrate that our pipeline outperforms state-of-the-art methods for deformable and unordered 3D data registration on different datasets and scenarios.
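    A minimal PyTorch sketch of the general recipe (an assumed architecture, not the paper's model): source-point features cross-attend to template features, and the network regresses per-point displacements that move the source onto the template. A crude per-point density feature is concatenated to the coordinates, mirroring the observation that a density estimate eases learning; the neighbourhood size, feature widths, and head design are assumptions.

        import torch
        import torch.nn as nn

        class CrossAttentionRegistration(nn.Module):
            def __init__(self, d_model: int = 128, n_heads: int = 4):
                super().__init__()
                self.embed = nn.Linear(4, d_model)   # xyz + local-density scalar
                self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
                self.head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                          nn.Linear(d_model, 3))

            @staticmethod
            def density(points: torch.Tensor, k: int = 8) -> torch.Tensor:
                """Crude per-point density: inverse mean distance to the k nearest points."""
                d = torch.cdist(points, points)                      # (B, N, N)
                knn = d.topk(k + 1, largest=False).values[..., 1:]   # drop the self-distance
                return 1.0 / (knn.mean(-1, keepdim=True) + 1e-8)     # (B, N, 1)

            def forward(self, source: torch.Tensor, template: torch.Tensor) -> torch.Tensor:
                src = self.embed(torch.cat([source, self.density(source)], dim=-1))
                tpl = self.embed(torch.cat([template, self.density(template)], dim=-1))
                fused, _ = self.attn(query=src, key=tpl, value=tpl)
                return source + self.head(fused)     # registered source points

        # Toy usage: register 1,024 source points onto a 2,048-point template.
        model = CrossAttentionRegistration()
        registered = model(torch.randn(2, 1024, 3), torch.randn(2, 2048, 3))
        print(registered.shape)   # torch.Size([2, 1024, 3])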

    Interaction Replica: Tracking human-object interaction and scene changes from human motion

    Humans naturally change their environment through interactions, e.g., by opening doors or moving furniture. To reproduce such interactions in virtual spaces (e.g., the metaverse), we need to capture and model them, including changes in the scene geometry, ideally from egocentric input alone (head camera and body-worn inertial sensors). While the head camera can be used to localize the person in the scene, estimating dynamic object pose is much more challenging. As the object is often not visible from the head camera (e.g., a human not looking at a chair while sitting down), we cannot rely on visual object pose estimation. Instead, our key observation is that human motion tells us a lot about scene changes. Motivated by this, we present iReplica, the first human-object interaction reasoning method that can track objects and scene changes based solely on human motion. iReplica is an essential first step towards advanced AR/VR applications in immersive virtual universes and can provide human-centric training data to teach machines to interact with their surroundings. Our code, data and model will be available on our project page at http://virtualhumans.mpi-inf.mpg.de/ireplica