156 research outputs found

    Computational Imaging for Phase Retrieval and Biomedical Applications

    Get PDF
    In conventional imaging, hardware is optimized to enhance image quality directly, and digital signal processing is viewed as supplementary. Computational imaging instead intentionally distorts images through modulation schemes in illumination or sensing; reconstruction algorithms then extract the desired object information from the raw data. Co-designing hardware and algorithms reduces hardware demands while achieving the same or even better image quality. Algorithm design is at the heart of computational imaging, with model-based inverse-problem methods and data-driven deep learning as the two main approaches. This thesis presents research from both perspectives, with a primary focus on the phase retrieval problem in computational microscopy and on deep learning techniques for biomedical imaging challenges. The first half of the thesis begins with Fourier ptychography, which was employed to overcome chromatic aberration in multispectral imaging. We then proposed a novel computational coherent imaging modality based on the Kramers-Kronig relations, aiming to replace Fourier ptychography with a non-iterative method. While this approach showed promise, it lacked certain essential characteristics of the original Fourier ptychography. To address this limitation, we introduced two additional algorithms to form a complete scheme. Through comprehensive evaluation, we demonstrated that the combined scheme outperforms Fourier ptychography in achieving high-resolution, large field-of-view, aberration-free coherent imaging. The second half of the thesis shifts focus to deep-learning-based methods. In one project, we optimized the scanning strategy and image processing pipeline of an epifluorescence microscope to address focus issues, and we leveraged deep-learning-based object detection models to automate cell analysis tasks. In another project, we predicted the polarity status of mouse embryos from bright-field images using adapted deep learning models. These findings highlight the capability of computational imaging to automate labor-intensive processes, and even to outperform humans in challenging tasks.
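The non-iterative Kramers-Kronig idea mentioned in the abstract can be illustrated with a minimal 1D sketch (an illustration of the general technique, not the thesis code): for a "minimum-phase" field, e.g. a strong reference wave plus a weaker scattered wave, the log of the field has a one-sided spectrum, so its phase is the Hilbert transform of its log-amplitude and can be recovered from the measured intensity alone, with no iteration.

```python
import numpy as np
from scipy.signal import hilbert

def kk_phase_from_intensity(intensity):
    """Non-iterative phase recovery for a minimum-phase field.

    For s(t) = r + a(t) with |a| < |r| (strong reference wave),
    log s has a one-sided spectrum, so its imaginary part (the phase)
    is the Hilbert transform of its real part (the log-amplitude).
    """
    log_amp = 0.5 * np.log(intensity)   # log|s| from the measured |s|^2
    return np.imag(hilbert(log_amp))    # Hilbert transform -> phase

# Synthetic minimum-phase field: unit reference plus a weaker harmonic
n = 1024
t = np.arange(n)
field = 1.0 + 0.5 * np.exp(1j * 2 * np.pi * 5 * t / n)
recovered = kk_phase_from_intensity(np.abs(field) ** 2)
```

Here `recovered` agrees with `np.angle(field)` to numerical precision; the minimum-phase (strong-reference) condition is what makes the recovery exact.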

    Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology

    Full text link
    The great behavioral heterogeneity observed between individuals with the same psychiatric disorder, and even within one individual over time, complicates both clinical practice and biomedical research. However, modern technologies present an exciting opportunity to improve behavioral characterization. Data from existing psychiatry methods that are qualitative or unscalable, such as patient surveys or clinical interviews, can now be collected at greater capacity and analyzed to produce new quantitative measures. Furthermore, recent capabilities for continuous collection of passive sensor streams, such as phone GPS or smartwatch accelerometry, open avenues of inquiry that were previously unrealistic. Their temporally dense nature enables a cohesive study of real-time neural and behavioral signals. To develop comprehensive neurobiological models of psychiatric disease, it will be critical to first develop strong methods for behavioral quantification. There is huge potential in what can theoretically be captured by current technologies, but this in itself presents a large computational challenge -- one that will necessitate new data processing tools, new machine learning techniques, and ultimately a shift in how interdisciplinary work is conducted. In my thesis, I detail research projects that take different perspectives on digital psychiatry, subsequently tying the ideas together with a concluding discussion on the future of the field. I also provide software infrastructure where relevant, with extensive documentation. Major contributions include scientific arguments and proof-of-concept results for daily free-form audio journals as an underappreciated psychiatry research datatype, as well as novel stability theorems and pilot empirical success for a proposed multi-area recurrent neural network architecture.
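As a toy illustration of the kind of passive-sensor quantification described above (the function and windowing choices are hypothetical, not the thesis pipeline), a raw smartwatch accelerometer stream can be reduced to simple per-window activity features:

```python
import numpy as np

def activity_features(acc, fs, window_s=60):
    """Summarize a raw 3-axis accelerometer stream (shape [n, 3], in g)
    into per-window features: [mean magnitude, std of magnitude],
    a crude proxy for posture and movement intensity."""
    mag = np.linalg.norm(acc, axis=1)       # combine the three axes
    win = int(window_s * fs)                # samples per window
    n_win = len(mag) // win
    mag = mag[: n_win * win].reshape(n_win, win)
    return np.column_stack([mag.mean(axis=1), mag.std(axis=1)])

# 10 minutes of synthetic 50 Hz data: gravity plus small sensor noise
rng = np.random.default_rng(0)
acc = rng.normal(0, 0.02, size=(10 * 60 * 50, 3)) + np.array([0.0, 0.0, 1.0])
feats = activity_features(acc, fs=50)   # one feature row per minute
```

Even features this simple become informative once collected continuously for weeks, which is the scale of data the passive streams above make available.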

    The landscape of combination therapies against glioblastoma: From promises to challenges

    Get PDF
    We demonstrate in this thesis how new targets can be identified and highlight the challenges that lie ahead when trying to translate these steps toward the clinic. We conclude that the blood-brain barrier, the pharmacokinetics/pharmacodynamics (PK/PD) of drugs, and therapy resistance are still major challenges and explain the limited improvement in treatment options for patients with GBM. First, GBM is a diffuse glioma located in the brain, where the blood-brain barrier prevents the crossing of drugs and thereby limits the efficacy of treatment. Second, inter- and intratumoral heterogeneity have been observed in GBM, leading to different cellular subpopulations with distinctive genetic profiles. Hence, treating these subpopulations with targeted drugs still allows the survival of subpopulations that are not sensitive to the treatment. Lastly, therapy resistance is often seen in GBM patients and is probably related to intratumoral heterogeneity, but the intrinsic molecular mechanism is still not fully understood. Together, these factors lead to the inevitable recurrence of the tumor.

    Beyond Quantity: Research with Subsymbolic AI

    Get PDF
    How do artificial neural networks and other forms of artificial intelligence interfere with methods and practices in the sciences? Which interdisciplinary epistemological challenges arise when we think about the use of AI beyond its dependency on big data? Not only the natural sciences, but also the social sciences and the humanities seem to be increasingly affected by current approaches to subsymbolic AI, which master problems of quality (fuzziness, uncertainty) in a hitherto unknown way. But what are the conditions, implications, and effects of these (potential) epistemic transformations, and how must research on AI be configured to address them adequately?

    Intelligent computing : the latest advances, challenges and future

    Get PDF
    Computing is a critical driving force in the development of human civilization. In recent years, we have witnessed the emergence of intelligent computing, a new computing paradigm that is reshaping traditional computing and promoting the digital revolution in the era of big data, artificial intelligence, and the Internet of Things with new computing theories, architectures, methods, systems, and applications. Intelligent computing has greatly broadened the scope of computing, extending it from traditional computing on data to increasingly diverse computing paradigms such as perceptual intelligence, cognitive intelligence, autonomous intelligence, and human-computer fusion intelligence. Intelligence and computing have long followed different paths of evolution and development but have become increasingly intertwined in recent years: intelligent computing is not only intelligence-oriented but also intelligence-driven. Such cross-fertilization has prompted the emergence and rapid advancement of intelligent computing.

    Toward Understanding Visual Perception in Machines with Human Psychophysics

    Get PDF
    Over the last several years, deep learning algorithms have become more and more powerful. As such, they are being deployed in increasingly many areas, including ones that can directly affect human lives. At the same time, regulations like the GDPR or the AI Act are putting the request and need to better understand these artificial algorithms on legal grounds. How do these algorithms come to their decisions? What limits do they have? And what assumptions do they make? This thesis presents three publications that deepen our understanding of deep convolutional neural networks (DNNs) for visual perception of static images. While all of them leverage human psychophysics, they do so in two different ways: either via direct comparison between human and DNN behavioral data, or via an evaluation of the helpfulness of an explainability method. Besides insights on DNNs, these works emphasize good practices: for comparison studies, we propose a checklist on how to design, conduct and interpret experiments between different systems; and for explainability methods, our evaluations exemplify that quantitatively testing widely spread intuitions can help put their benefits in a realistic perspective. In the first publication, we test how similar DNNs are to the human visual system, and more specifically to its capabilities and information processing. Our experiments reveal that DNNs (1) can detect closed contours, (2) perform well on an abstract visual reasoning task, and (3) correctly classify small image crops. On a methodological level, these experiments illustrate that (1) human bias can influence our interpretation of findings, (2) distinguishing necessary and sufficient mechanisms can be challenging, and (3) the degree of aligning experimental conditions between systems can alter the outcome. In the second and third publications, we evaluate how helpful humans find the explainability method feature visualization. The purpose of this tool is to grant insights into the features of a DNN. To measure the general informativeness and causal understanding supported via feature visualizations, we test participants on two different psychophysical tasks. Our data unveil that humans can indeed understand the inner DNN semantics based on this explainability tool. However, other visualizations, such as natural dataset samples, also provide useful, and sometimes even more useful, information. On a methodological level, our work illustrates that human evaluations can adjust our expectations toward explainability methods and that different claims have to match the experiment.
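A standard way to quantify the trial-by-trial human-DNN behavioral comparisons described above is error consistency (Cohen's kappa over per-trial correctness). The sketch below is an illustration of that general measure, not the thesis's own analysis code:

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Cohen's kappa between two observers' per-trial correctness vectors.

    kappa = (c_obs - c_exp) / (1 - c_exp), where c_obs is the fraction of
    trials on which both observers are right or both wrong, and c_exp is
    the agreement expected from their accuracies alone.
    """
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    c_obs = np.mean(a == b)                    # observed agreement
    p_a, p_b = a.mean(), b.mean()              # individual accuracies
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    return (c_obs - c_exp) / (1 - c_exp)

# Toy per-trial correctness for a human observer and a DNN
human = np.array([1, 1, 0, 1, 0, 0, 1, 1], dtype=bool)
dnn = np.array([1, 0, 0, 1, 0, 1, 1, 1], dtype=bool)
kappa = error_consistency(human, dnn)
```

Kappa of 1 means identical error patterns, 0 means agreement no better than the two accuracies predict by chance; values in between indicate partially shared processing strategies.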

    Expanding the Horizons of Manufacturing: Towards Wide Integration, Smart Systems and Tools

    Get PDF
    This research topic aims at enterprise-wide modeling and optimization (EWMO) through the development and application of integrated modeling, simulation and optimization methodologies, and computer-aided tools for reliable and sustainable improvement opportunities within the entire manufacturing network (raw materials, production plants, distribution, retailers, and customers) and its components. This integrated approach incorporates information from the local primary control and supervisory modules into the scheduling/planning formulation. That makes it possible to dynamically react to incidents that occur in the network components at the appropriate decision-making level, requiring fewer resources, emitting less waste, and allowing for better responsiveness to changing market requirements and operational variations, thereby reducing cost, waste, energy consumption and environmental impact, and increasing the benefits. More recently, the integration of new technologies, such as semantic models and formal knowledge models, allows for the capture and utilization of domain knowledge, human knowledge, and expert knowledge toward comprehensive intelligent management. In addition, the development of advanced technologies and tools, such as cyber-physical systems, the Internet of Things, the Industrial Internet of Things, Artificial Intelligence, Big Data, Cloud Computing, and Blockchain, has captured the attention of manufacturing enterprises and steered them toward intelligent manufacturing systems.

    Prediction of Secondary Protein Structure

    Get PDF
    This project aims to show its readers an effort toward solving the problem of predicting protein secondary structure using deep residual neural networks and other methods. Proteins are among the most vital components of every living being. They play a very important role, as they define the functions of an organism. Therefore, knowing the protein structure is of great importance. Specifically, protein structure consists of four levels: primary, secondary, tertiary, and quaternary. The most significant is the structure in three-dimensional space, the tertiary structure, because it defines the biological role of the protein. As a result, knowing protein functions may help in the treatment of many diseases. Unfortunately, the extraction methodologies developed so far are very complicated and time-consuming procedures. Determining the secondary structure is necessary for deriving the tertiary structure, which is why it is studied. The secondary structure is derived from the primary structure, which is an amino acid sequence. This project mainly analyzes deep residual networks and the ways they can help predict protein secondary structure. Such networks belong to the category of deep neural networks and essentially consist of convolutional layers with additive skip connections between them.
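The additive skip connections mentioned above are the defining feature of residual networks: each block computes y = x + F(x), so the layers learn a residual correction rather than a full mapping. A minimal dense-layer sketch of one such block (illustrative only, not the thesis architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(x + W2 @ relu(W1 @ x)).

    The skip connection adds the input back onto the learned residual,
    which eases gradient flow and makes very deep stacks trainable."""
    return relu(x + w2 @ relu(w1 @ x))

# With zero weights the residual F(x) vanishes, so the block acts as
# the identity on non-negative inputs -- the property that lets deep
# residual stacks start out "harmless" and learn corrections gradually.
x = np.array([0.5, 1.0, 2.0])
w = np.zeros((3, 3))
out = residual_block(x, w, w)
```

In the secondary-structure setting, the dense layers would be replaced by 1D convolutions over the amino acid sequence, but the skip-connection arithmetic is identical.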