76,097 research outputs found

    Printing of wirelessly rechargeable solid-state supercapacitors for soft, smart contact lenses with continuous operations

    Get PDF
    Recent advances in smart contact lenses are essential to the realization of medical applications and vision imaging for augmented reality through wireless communication systems. However, previous research on smart contact lenses has relied on wired systems or on wireless power transfer with temporal and spatial restrictions, which limits continuous use and requires energy storage devices. Also, the rigidity, heat, and large size of conventional batteries make them unsuitable for soft, smart contact lenses. Here, we describe a human pilot trial of a soft, smart contact lens with a wirelessly rechargeable, solid-state supercapacitor for continuous operation. After printing the supercapacitor, all device components (antenna, rectifier, and light-emitting diode) are fully integrated with stretchable structures into this soft lens without obstructing vision. Good reliability against thermal and electromagnetic radiation, together with the results of in vivo tests, demonstrates the substantial promise of future smart contact lenses.

    MedShapeNet - A Large-Scale Dataset of 3D Medical Shapes for Computer Vision

    Get PDF
    Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen from numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of brain tumor classification, facial and skull reconstruction, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
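
    The abstract above mentions a web interface and a Python API for access. As a minimal sketch only -- the exact MedShapeNet API surface is not given here, and the filename below is hypothetical -- one way to inspect a shape downloaded via the web interface is the generic trimesh library:

        # Illustrative sketch: load one downloaded MedShapeNet mesh with trimesh.
        # The filename is hypothetical; see the project's GitHub page for the
        # official Python API.
        import trimesh

        mesh = trimesh.load("liver_patient_0001.stl")    # hypothetical file
        print(mesh.vertices.shape, mesh.faces.shape)     # (V, 3) vertices, (F, 3) triangle faces
        print("watertight:", mesh.is_watertight)         # a useful check before 3D printing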

    Augmented Reality in Minimally Invasive Surgery

    Get PDF
    In the last 15 years, Minimally Invasive Surgery, with techniques such as laparoscopy or endoscopy, has become very important, and research in this field is increasing because these techniques provide surgeons with less invasive means of reaching the patient's internal anatomy and allow entire procedures to be performed with only minimal trauma to the patient. The advantages of this surgical method are evident for patients: trauma is reduced, postoperative recovery is generally faster, and there is less scarring. Despite the improvement in outcomes, indirect access to the operation area causes restricted vision, difficulty in hand-eye coordination, limited mobility when handling instruments, two-dimensional imagery lacking detailed information, and a limited visual field throughout the operation. The emerging technology of Augmented Reality shows the way forward by bringing the advantages of direct visualization (as in open surgery) back to minimally invasive surgery and by enriching the physician's view of the surroundings with information gathered from patient medical images. Augmented Reality can avoid some drawbacks of Minimally Invasive Surgery and can provide opportunities for new medical treatments. After two decades of research into medical Augmented Reality, the technology is now advanced enough to meet the basic requirements of a large number of medical applications, and it is feasible that medical AR applications will be accepted by physicians so that their use and integration into the clinical workflow can be evaluated. Before these technologies see systematic use as support for minimally invasive surgery, some improvements are still necessary to fully satisfy the requirements of operating physicians.
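
    To make the overlay idea above concrete, here is a toy sketch (not any system from the review): a preoperative image alpha-blended onto a live camera frame with OpenCV. Real surgical AR additionally requires patient-to-image registration and instrument tracking; the image path is hypothetical.

        # Toy illustration of the overlay idea only -- real surgical AR needs
        # registration and tracking far beyond a fixed alpha blend.
        import cv2

        cap = cv2.VideoCapture(0)                    # webcam as an endoscope stand-in
        preop = cv2.imread("preop_ct_slice.png")     # hypothetical preoperative image

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ct = cv2.resize(preop, (frame.shape[1], frame.shape[0]))
            fused = cv2.addWeighted(frame, 0.7, ct, 0.3, 0)   # 70% live view, 30% CT
            cv2.imshow("augmented view", fused)
            if cv2.waitKey(1) == 27:                 # Esc quits
                break
        cap.release()
        cv2.destroyAllWindows()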

    Nextmed: Automatic Imaging Segmentation, 3D Reconstruction, and 3D Model Visualization Platform Using Augmented and Virtual Reality

    Get PDF
    The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must be segmented. Currently, manual segmentation, which is the most commonly used technique, and semi-automatic approaches can be time consuming because a doctor is required, making segmentation of each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for the segmentation algorithms and augmented and virtual reality for the visualization techniques, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. As a result, the Nextmed project, through its different software applications, permits the importation of Digital Imaging and Communications in Medicine (DICOM) images onto a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then automatically generated that can be 3D printed or visualized using both augmented and virtual reality with the designed software systems. The Nextmed project is unique in that it covers the whole process, from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There is much research on the application of augmented and virtual reality to 3D visualization of medical images; however, these are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study. After applying the platform to more than 1000 DICOM images and studying the results with medical specialists, we concluded that installing this system in hospitals would provide a considerable improvement as a tool for medical image visualization.
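
    The pipeline described above (DICOM import, automatic segmentation, 3D reconstruction) can be sketched with standard Python libraries. This is only an illustration of the pipeline's shape, not Nextmed's implementation: a fixed Hounsfield-unit threshold stands in for its automatic segmentation algorithms, and the paths are hypothetical.

        # DICOM series -> 3D volume -> crude lung mask -> surface mesh.
        from pathlib import Path
        import numpy as np
        import pydicom
        from skimage import measure

        # Load the series and sort slices by their position along the scan axis.
        slices = [pydicom.dcmread(p) for p in Path("ct_series").glob("*.dcm")]
        slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
        volume = np.stack([s.pixel_array * float(s.RescaleSlope) + float(s.RescaleIntercept)
                           for s in slices])

        # Crude lung "segmentation": air-filled tissue is strongly negative in HU.
        mask = (volume < -320).astype(np.float32)

        # Extract a triangle mesh; verts/faces can be exported for AR/VR or 3D printing.
        verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5)
        print(verts.shape, faces.shape)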

    Review on hand gesture recognition

    Get PDF
    The aim of this chapter is to present a review of the development of vision systems based on hand gestures. Vision-based Human-Computer Interaction (HCI) systems have the ability to carry a wealth of information in a natural way and at a low cost. Hand recognition has therefore become a widely studied topic with a wide range of applications, such as sign language (SL) translators, gesture recognition for control, augmented reality, surveillance, and medical image processing. Hand recognition with no constraint on the shape is an open issue because the human hand is a complex articulated object consisting of many connected parts and joints. Considering the global hand pose and each finger joint, human hand motion has roughly 27 degrees of freedom (DOF).
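
    As a concrete illustration of the first stage of such vision-based systems -- locating the articulated hand in an image -- here is a short sketch using the off-the-shelf MediaPipe Hands model (an example of current practice, not a method from this review; the image path is hypothetical):

        # Detect 21 hand landmarks in a single image with MediaPipe Hands.
        import cv2
        import mediapipe as mp

        hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)
        img = cv2.imread("hand.jpg")                     # hypothetical input image
        results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            # 21 (x, y, z) landmarks approximate the hand's articulated structure.
            print(len(lm), lm[8].x, lm[8].y)             # landmark 8 = index fingertip
        hands.close()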

    Principles of human movement augmentation and the challenges in making it a reality

    Get PDF
    Augmenting the body with artificial limbs controlled concurrently with one's natural limbs has long appeared in science fiction, but recent technological and neuroscientific advances have begun to make this possible. By allowing individuals to achieve otherwise impossible actions, movement augmentation could revolutionize medical and industrial applications and profoundly change the way humans interact with the environment. Here, we construct a taxonomy of movement augmentation through what is augmented and how it is achieved. With this framework, we analyze augmentation that extends the number of degrees of freedom; discuss critical features of effective augmentation, such as physiological control signals, sensory feedback, and learning, as well as application scenarios; and propose a vision for the field.