
    Histopathology image classification: highlighting the gap between manual analysis and AI automation

    The field of histopathological image analysis has evolved significantly with the advent of digital pathology, leading to automated models capable of classifying tissues and structures within diverse pathological images. Artificial intelligence algorithms, such as convolutional neural networks, have shown remarkable capabilities in pathology image analysis tasks, including tumor identification, metastasis detection, and patient prognosis assessment. However, traditional manual analysis methods have generally shown low accuracy in diagnosing colorectal cancer from histopathological images. This study investigates the use of AI for classification and analysis of histopathological images, benchmarked against the histogram of oriented gradients method. It develops an AI-based architecture for histopathological image classification, aiming to achieve high performance with low complexity through carefully chosen parameters and layers. We investigate the challenging task of histopathological image classification, focusing explicitly on categorizing nine distinct tissue types. Our research used open-source, multi-centered image datasets comprising 100,000 non-overlapping images from 86 patients for training and 7,180 non-overlapping images from 50 patients for testing. The study compares two distinct approaches to automating tissue classification: training artificial intelligence-based algorithms and manual machine learning models. The research comprises two primary classification tasks: binary classification, distinguishing between normal and tumor tissues, and multi-class classification encompassing nine tissue types: adipose, background, debris, stroma, lymphocytes, mucus, smooth muscle, normal colon mucosa, and tumor. Our findings show that artificial intelligence-based systems can achieve 0.91 and 0.97 accuracy in binary and multi-class classification, respectively. In comparison, histogram of oriented gradients features with a Random Forest classifier achieved accuracy rates of 0.75 and 0.44 in binary and multi-class classification, respectively. Our artificial intelligence-based methods are generalizable, allowing them to be integrated into histopathology diagnostic procedures to improve diagnostic accuracy and efficiency. The CNN model outperforms existing machine learning techniques, demonstrating its potential to improve the precision and effectiveness of histopathology image analysis. This research emphasizes the importance of maintaining data consistency and applying normalization methods during data preparation, and it particularly highlights the potential of artificial intelligence to assess histopathological images.
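
    As a hedged sketch of the manual baseline described above, the snippet below pairs histogram-of-oriented-gradients features with a Random Forest classifier via scikit-image and scikit-learn; the tile sizes, HOG parameters, and synthetic stand-in data are illustrative assumptions, not the study's actual configuration or datasets.

    import numpy as np
    from skimage.color import rgb2gray
    from skimage.feature import hog
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    def hog_features(tiles):
        # tiles: array of (H, W, 3) RGB tissue tiles
        return np.array([hog(rgb2gray(t), orientations=9,
                             pixels_per_cell=(16, 16),
                             cells_per_block=(2, 2)) for t in tiles])

    rng = np.random.default_rng(0)  # synthetic stand-in tiles, not the study's data
    X_train, y_train = rng.random((40, 224, 224, 3)), rng.integers(0, 9, 40)
    X_test, y_test = rng.random((10, 224, 224, 3)), rng.integers(0, 9, 10)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(hog_features(X_train), y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(hog_features(X_test))))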

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Towards Neuromorphic Gradient Descent: Exact Gradients and Low-Variance Online Estimates for Spiking Neural Networks

    Spiking Neural Networks (SNNs) are biologically plausible models that can run on low-powered non-von Neumann neuromorphic hardware, positioning them as promising alternatives to conventional Deep Neural Networks (DNNs) for energy-efficient edge computing and robotics. Over the past few years, the Gradient Descent (GD) and Error Backpropagation (BP) algorithms used in DNNs have inspired various training methods for SNNs. However, the non-local and reverse nature of BP, combined with the inherent non-differentiability of spikes, represents a fundamental obstacle to computing gradients with SNNs directly on neuromorphic hardware. Therefore, novel approaches are required to overcome the limitations of GD and BP and enable online gradient computation on neuromorphic hardware. In this thesis, I address the limitations of GD and BP with SNNs by proposing three algorithms. First, I extend a recent method that computes exact gradients with temporally coded SNNs by relaxing the firing constraint of temporal coding and allowing multiple spikes per neuron. My proposed method generalizes the computation of exact gradients with SNNs and improves the tradeoffs between performance and various other aspects of spiking neurons. Next, I introduce a novel alternative to BP that computes low-variance gradient estimates in a local and online manner. Compared to other alternatives to BP, the proposed method demonstrates an improved convergence rate and increased performance with DNNs. Finally, I combine these two methods and propose an algorithm that estimates gradients with SNNs in a manner that is compatible with the constraints of neuromorphic hardware. My empirical results demonstrate the effectiveness of the resulting algorithm in training SNNs without performing BP.
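
    To make the obstacle concrete, the sketch below shows a leaky integrate-and-fire step with a standard surrogate gradient in PyTorch: the Heaviside spike has a zero derivative almost everywhere, so backpropagation needs a stand-in. This is a generic illustration of the problem, not the thesis's exact-gradient or online estimators, and the decay, threshold, and surrogate slope are assumed values.

    import torch

    class SpikeFn(torch.autograd.Function):
        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v > 0).float()           # Heaviside step: true gradient is zero a.e.

        @staticmethod
        def backward(ctx, grad_out):
            (v,) = ctx.saved_tensors
            # Fast-sigmoid surrogate derivative replaces the zero gradient
            return grad_out / (1.0 + 10.0 * v.abs()) ** 2

    def lif_step(v, x, decay=0.9, threshold=1.0):
        v = decay * v + x                    # leaky membrane integration
        spike = SpikeFn.apply(v - threshold)
        return v * (1.0 - spike), spike      # reset membrane on spike

    x = torch.randn(4, requires_grad=True)
    v, spikes = lif_step(torch.zeros(4), x)
    spikes.sum().backward()                  # gradients flow via the surrogate
    print(x.grad)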

    Efficient Visual Computing with Camera RAW Snapshots

    Conventional cameras capture image irradiance (RAW) on a sensor and convert it to RGB images using an image signal processor (ISP). The images can then be used for photography or visual computing tasks in a variety of applications, such as public safety surveillance and autonomous driving. One can argue that since RAW images contain all the captured information, the conversion of RAW to RGB using an ISP is not necessary for visual computing. In this paper, we propose a novel ρ-Vision framework to perform high-level semantic understanding and low-level compression using RAW images, without the ISP subsystem that has been used for decades. Considering the scarcity of available RAW image datasets, we first develop an unpaired CycleR2R network based on unsupervised CycleGAN to train modular unrolled ISP and inverse ISP (invISP) models using unpaired RAW and RGB images. We can then flexibly generate simulated RAW images (simRAW) from any existing RGB image dataset and finetune models originally trained in the RGB domain to process real-world camera RAW images. We demonstrate object detection and image compression in the RAW domain using a RAW-domain YOLOv3 and a RAW image compressor (RIC) on camera snapshots. Quantitative results reveal that RAW-domain task inference provides better detection accuracy and compression efficiency than its RGB-domain counterpart. Furthermore, the proposed ρ-Vision generalizes across various camera sensors and different task-specific models. An added benefit of ρ-Vision is the elimination of the ISP, leading to potential reductions in computation and processing time.
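
    A minimal sketch of the simRAW finetuning idea under stated assumptions: a learned inverse ISP converts labeled RGB batches into simulated RAW, which then finetunes a RAW-domain task model so the RGB labels carry over. The modules below are toy placeholders, not the paper's CycleR2R or YOLOv3 code.

    import torch
    import torch.nn as nn

    # Placeholder stand-ins for the paper's learned modules and task head
    inv_isp = nn.Conv2d(3, 4, kernel_size=1)   # assumed RGB -> simRAW mapping (e.g. RGGB planes)
    detector = nn.Sequential(nn.Conv2d(4, 8, 3, padding=1),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(8, 2))  # toy RAW-domain task model

    opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
    rgb = torch.rand(8, 3, 64, 64)             # batch from any labeled RGB dataset
    labels = torch.randint(0, 2, (8,))

    with torch.no_grad():
        simraw = inv_isp(rgb)                  # simRAW inherits the RGB labels
    loss = nn.functional.cross_entropy(detector(simraw), labels)
    opt.zero_grad(); loss.backward(); opt.step()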

    Reliable Sensor Intelligence in Resource Constrained and Unreliable Environment

    The objective of this research is to design sensor intelligence that is reliable in a resource-constrained, unreliable environment. There are various sources of variation and uncertainty involved in an intelligent sensor system, so it is critical to build reliable sensor intelligence. Many prior works seek to design reliable sensor intelligence by making the task itself robust and reliable. This thesis suggests that, along with improving the task itself, an early warning based on task-reliability quantification can further improve sensor intelligence. A DNN-based early warning generator quantifies task reliability from the spatiotemporal characteristics of the input, and the early warning adjusts sensor parameters to avoid system failure. This thesis presents an early warning generator that predicts task failure due to sensor-hardware-induced input corruption and controls the sensor operation. Moreover, a lightweight uncertainty estimator is presented to account for DNN model uncertainty in task-reliability quantification without the prohibitive computation of a stochastic DNN. Cross-layer uncertainty estimation is also discussed to consider the effect of processing-in-memory (PIM) variations.
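
    As a hedged sketch of the early-warning idea, the snippet below derives a task-reliability score from normalized softmax entropy and raises a warning when it falls below a threshold; the thesis instead learns a DNN-based warning generator, so the scoring rule and threshold here are illustrative assumptions.

    import numpy as np

    def reliability_score(logits):
        # Softmax entropy as a cheap stand-in for a learned reliability score
        p = np.exp(logits - logits.max())
        p /= p.sum()
        entropy = -(p * np.log(p + 1e-12)).sum()
        return 1.0 - entropy / np.log(len(p))  # 1 = confident, 0 = maximally uncertain

    def early_warning(logits, threshold=0.6):
        # Flag likely task failure so the controller can adjust sensor parameters
        return reliability_score(logits) < threshold

    logits = np.array([2.0, 0.1, -1.3])        # placeholder task outputs
    if early_warning(logits):
        print("warning: adjust sensor parameters before the task fails")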

    Inverse Design of Metamaterials for Tailored Linear and Nonlinear Optical Responses Using Deep Learning

    The conventional process for developing an optimal design for nonlinear optical responses is based on a trial-and-error approach that is largely inefficient and does not necessarily lead to an ideal result. Deep learning can automate this process and widen the realm of nonlinear geometries and devices. This research illustrates a deep learning framework used to create optimal plasmonic designs for metamaterials with specific desired optical responses, both linear and nonlinear. The algorithm can produce plasmonic patterns that maximize the second-harmonic nonlinear effects of a nonlinear metamaterial. A nanolaminate metamaterial is used as the nonlinear material, and plasmonic patterns are fabricated on the prepared nanolaminate to demonstrate the validity and efficacy of the deep learning algorithm for second-harmonic generation. Photonic upconversion from the infrared regime to the visible spectrum can occur through sum-frequency generation, and the deep learning algorithm was improved to optimize a nonlinear plasmonic metamaterial for sum-frequency generation. The framework was then further expanded using transfer learning to lessen the computational resources required to optimize metamaterials for new design parameters. The deep learning architecture applied in this research can be extended to other optical responses and drive the innovation of novel optical applications.
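
    One generic way to realize such a framework is to train a generator against a frozen forward surrogate that maps candidate patterns to optical responses; the sketch below makes that loop concrete with placeholder network sizes and random target spectra, and it is an assumed architecture rather than the dissertation's actual model.

    import torch
    import torch.nn as nn

    generator = nn.Sequential(nn.Linear(128, 512), nn.ReLU(),
                              nn.Linear(512, 64 * 64), nn.Sigmoid())
    # Stand-in for a pretrained pattern -> spectrum surrogate (kept frozen)
    forward_model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))
    for p in forward_model.parameters():
        p.requires_grad_(False)

    opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    targets = torch.rand(32, 128)              # placeholder desired spectra
    for _ in range(100):
        patterns = generator(targets)          # candidate plasmonic layouts
        loss = nn.functional.mse_loss(forward_model(patterns), targets)
        opt.zero_grad(); loss.backward(); opt.step()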

    The 2023 terahertz science and technology roadmap

    Terahertz (THz) radiation encompasses a wide spectral range within the electromagnetic spectrum that extends from microwaves to the far infrared (100 GHz to ~30 THz). Within its frequency boundaries exist a broad variety of scientific disciplines that have presented, and continue to present, technical challenges to researchers. During the past 50 years, for instance, the demands of the scientific community have substantially evolved, creating a need for advanced instrumentation to support radio astronomy, Earth observation, weather forecasting, security imaging, telecommunications, non-destructive device testing and much more. Furthermore, applications have required technology to move from the laboratory environment to production-scale supply and in-the-field deployments ranging from harsh ground-based locations to deep space. In addressing these requirements, the research and development community has advanced related technology and bridged the transition between electronics and photonics that high frequency operation demands. The multidisciplinary nature of THz work was our stimulus for creating the 2017 THz Science and Technology Roadmap (Dhillon et al 2017 J. Phys. D: Appl. Phys. 50 043001). As one might envisage, though, there remains much to explore both scientifically and technically, and the field has continued to develop and expand rapidly. It is timely, therefore, to revise our previous roadmap, and in this 2023 version we both provide an update on key developments in established technical areas that have important scientific and public benefit, and highlight new and emerging areas that show particular promise. The developments that we describe thus span from fundamental scientific research, such as THz astronomy and the emergent area of THz quantum optics, to highly applied and commercially and societally impactful subjects that include 6G THz communications, medical imaging, and climate monitoring and prediction. Our Roadmap vision draws upon the expertise and perspective of multiple international specialists that together provide an overview of past developments and the likely challenges facing the field of THz science and technology in future decades. The document is written in a form that is accessible to policy makers who wish to gain an overview of the current state of the THz art, and for the non-specialist and curious who wish to understand available technology and challenges. As such, our experts deliver a 'snapshot' introduction to the current status of the field and provide suggestions for exciting future technical development directions. Ultimately, we intend the Roadmap to portray the advantages and benefits of the THz domain and to stimulate further exploration of the field in support of scientific research and commercial realisation.

    Specialized translation at work for a small, expanding business: my experience internationalizing Bioretics© S.r.l. into Chinese

    Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin to forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic and socio-cultural barriers. In recent decades, another domain has emerged to propel these unifying drives: Artificial Intelligence, together with its advanced technologies that aim to reproduce human cognitive abilities in machines. The “Language Toolkit – Le lingue straniere al servizio dell’internazionalizzazione dell’impresa” project, promoted by the Department of Interpreting and Translation (ForlĂŹ Campus) in collaboration with the Romagna Chamber of Commerce (ForlĂŹ-Cesena and Rimini), seeks to help Italian SMEs make their way into the global market. It is precisely within this project that this dissertation has been conceived. Its purpose is to present the translation and localization project from English into Chinese of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website and part of the installation and use manual of the Aliquis© framework software, its flagship product. This dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market both in Italy and in China; Chapter 3 provides the theoretical foundations for every aspect related to Specialized Translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 proposes an analysis of the source texts; Chapter 6 is a commentary on translation strategies and choices.

    Reaching the Edge of the Edge: Image Analysis in Space

    Satellites have become more widely available due to the reduction in size and cost of their components. As a result, smaller organizations now have the ability to deploy satellites running a variety of data-intensive applications. One popular application is image analysis to detect, for example, land, ice, and clouds for Earth observation. However, the resource-constrained nature of the devices deployed in satellites creates additional challenges for this resource-intensive application. In this paper, we present our work and lessons learned in building an Image Processing Unit (IPU) for a satellite. We first investigate the performance of a variety of edge devices (comparing CPU, GPU, TPU, and VPU) for deep-learning-based image processing on satellites. Our goal is to identify devices that achieve accurate results and remain flexible when workloads change, while satisfying the power and latency constraints of satellites. Our results demonstrate that hardware accelerators such as ASICs and GPUs are essential for meeting the latency requirements; however, state-of-the-art edge devices with GPUs may draw too much power for deployment on a satellite. We then use the findings from the performance analysis to guide the development of the IPU module for an upcoming satellite mission, detailing how to integrate such a module into an existing satellite architecture and the software necessary to support various missions utilizing this module.
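
    A hedged sketch of the kind of per-device latency benchmark behind such a comparison; the inference callable, batch shape, warmup and run counts, and latency budget are placeholders, and a real evaluation would pair these timings with power measurements on the candidate hardware.

    import time
    import numpy as np

    def benchmark_ms(infer_fn, batch, warmup=10, runs=100):
        for _ in range(warmup):
            infer_fn(batch)                    # warm caches / accelerator delegates
        start = time.perf_counter()
        for _ in range(runs):
            infer_fn(batch)
        return (time.perf_counter() - start) / runs * 1e3

    batch = np.random.rand(1, 224, 224, 3).astype(np.float32)
    cpu_model = lambda x: x.mean()             # stand-in inference callable
    latency = benchmark_ms(cpu_model, batch)
    print(f"{latency:.2f} ms per image vs. the mission latency budget")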
