21 research outputs found
Optimising superoscillatory spots for far-field super-resolution imaging
Optical superoscillatory imaging, allowing unlabelled far-field super-resolution, has in recent years become a reality. Instruments have been built and their super-resolution imaging capabilities demonstrated. The question is no longer whether this can be done, but how well: what resolution is practically achievable? Numerous works have optimised particular features of superoscillatory spots, but to probe the limits of superoscillatory imaging we need to optimise simultaneously all the spot features that define the resolution of the system. We simultaneously optimise spot size and its intensity relative to the sidebands for various fields of view, giving a set of best compromises for use in different imaging scenarios. Our technique uses the circular prolate spheroidal wave functions as a basis set on the field of view, and the optimal combination of these, representing the optimal spot, is found using a multi-objective genetic algorithm. We then introduce a less computationally demanding approach suitable for real-time use in the laboratory which, crucially, allows independent control of spot size and field of view. Imaging simulations demonstrate the resolution achievable with these spots. We show a three-order-of-magnitude improvement in the efficiency of focusing needed to achieve the same resolution as previously reported results, or a 26% increase in resolution for the same efficiency of focusing.
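To illustrate the kind of two-objective trade-off described in this abstract, the Python sketch below optimises a superposition of radial basis functions for small spot size and low sideband level. It is a minimal sketch only: Bessel functions J0(k_n r) stand in for the circular prolate spheroidal wave functions, and a random Pareto sweep stands in for the multi-objective genetic algorithm; the field of view, basis size and other parameter values are illustrative assumptions, not the authors' implementation.

# Minimal sketch: trade off spot size against sideband level for a field built
# from a superposition of radial basis functions. Bessel functions J0(k_n r)
# are an assumed stand-in for the circular prolate spheroidal wave functions,
# and a random Pareto sweep replaces the multi-objective genetic algorithm.
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(0)
r = np.linspace(0.0, 3.0, 600)        # radial coordinate in units of wavelength
fov = 1.0                             # field of view: sidebands judged at r > fov
ks = np.pi * np.arange(1, 7)          # radial frequencies of the 6 basis functions
basis = j0(np.outer(ks, r))           # shape (n_basis, n_r)

def objectives(weights):
    """Return (spot FWHM, sideband-to-centre intensity ratio) for one superposition."""
    field = weights @ basis
    intensity = field ** 2
    centre = intensity[0]
    # spot size: twice the first radius where intensity falls below half the central value
    below = np.nonzero(intensity < 0.5 * centre)[0]
    fwhm = 2.0 * r[below[0]] if below.size else np.inf
    # sideband level: brightest intensity outside the field of view, relative to the centre
    sideband = intensity[r > fov].max() / centre
    return fwhm, sideband

# Random sweep over weight vectors; keep the non-dominated (Pareto) set.
candidates = [objectives(w) for w in rng.normal(size=(2000, len(ks)))]
pareto = [c for c in candidates
          if not any(o[0] <= c[0] and o[1] <= c[1] and o != c for o in candidates)]
for fwhm, side in sorted(pareto):
    print(f"spot FWHM ~ {fwhm:.3f} lambda, sideband/centre ~ {side:.3f}")

Each non-dominated weight vector corresponds to one "best compromise" between spot width and sideband suppression for the chosen field of view, analogous to the set of compromises reported in the paper.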
Far-Field Superoscillatory Metamaterial Superlens
We demonstrate a metamaterial superlens: a planar array of discrete subwavelength metamolecules whose individual scattering characteristics are tailored to vary spatially, creating a subdiffraction superoscillatory focus of, in principle, arbitrary shape and size. Metamaterial free-space lenses with previously unattainable effective numerical apertures (as high as 1.52) and foci as small as 0.33λ are demonstrated. Super-resolution imaging with such lenses is experimentally verified, breaking the conventional diffraction limit and exhibiting resolution close to the size of the focus. Our approach will enable far-field, label-free, non-algorithmic super-resolution microscopies at harmless levels of intensity, including imaging inside cells, nanostructures, and silicon chips, without impregnating them with fluorescent materials.
Far-Field Unlabelled Super-Resolution Imaging with Superoscillatory Illumination
Unlabelled super-resolution is the next grand challenge in imaging. Stimulated emission depletion and single-molecule microscopies have revolutionised the life sciences but are still limited by the need for reporters (labels) embedded within the sample. While the Veselago-Pendry “super-lens” using a negative-index metamaterial is a promising idea for imaging beyond the diffraction limit, there are substantial technological challenges to its realisation. Another route to far-field subwavelength focusing is the use of optical superoscillations: engineered interference of multiple coherent waves creating an, in principle, arbitrarily small hotspot. Here we demonstrate microscopy with superoscillatory illumination of the object and describe its underlying principles. We show that far-field images taken with superoscillatory illumination are themselves superoscillatory and hence can reveal fine structural details of the object that are lost in conventional far-field imaging. We show that the resolution of a superoscillatory microscope is determined by the size of the hotspot, rather than by the bandwidth of the optical instrument. We demonstrate high-frame-rate polarisation-contrast imaging of unmodified living cells with resolution significantly exceeding that achievable with conventional instruments. This non-algorithmic, low-phototoxicity imaging technology is a powerful tool both for biological research and for super-resolution imaging of samples that do not allow labelling, such as the interior of silicon chips.
FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework is built on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at the proof-of-concept stage to facilitate future translation of medical AI towards clinical practice.