Robust and Flexible Persistent Scatterer Interferometry for Long-Term and Large-Scale Displacement Monitoring
Persistent Scatterer Interferometry (PSI) is a method for monitoring displacements of the Earth's surface from space. It is based on identifying and analyzing stable point scatterers (persistent scatterers, PS) by applying time-series analysis to stacks of SAR interferograms. PS points dominate the backscatter of the resolution cells in which they are located and are characterized by little decorrelation. Displacements of such PS points can be monitored with potential sub-millimeter accuracy if disturbing influences are effectively minimized.
Over time, PSI has matured into an operational technology for certain applications, but challenging applications remain. Physical changes of the land surface and changes in the acquisition geometry can cause PS points to appear or disappear over time. The number of continuously coherent PS points decreases with increasing time-series length, while the number of temporary PS (TPS) points increases, which are coherent only during one or more separate segments of the analyzed time series. It is therefore desirable to integrate the analysis of such TPS points into PSI in order to develop a flexible PSI system that can cope with dynamic changes of the land surface and thus enable continuous displacement monitoring. A further challenge for PSI is large-scale monitoring in regions with complex atmospheric conditions, which cause high uncertainty in the displacement time series at large distances from the spatial reference.
This thesis deals with modifications and extensions built on an existing PSI algorithm to develop a robust and flexible PSI approach that can handle the challenges above. As a first main contribution, a method is presented that fully integrates TPS points into PSI. Evaluation studies with real SAR data show that the integration of TPS points indeed enables handling of dynamic land-surface changes and becomes increasingly relevant for PSI-based observation networks as the time-series length grows. The second main contribution is a method for covariance-based reference integration in large-scale PSI applications for estimating spatially correlated noise. The method is based on sampling the noise at reference pixels with known displacement time series and subsequently interpolating it to the remaining PS pixels while taking the spatial statistics of the noise into account. A simulation study and a study with real data show that the method outperforms alternative methods for reducing spatially correlated noise in interferograms by means of reference integration.
The developed PSI method is finally applied to investigate land subsidence in the Vietnamese part of the Mekong Delta, which has been affected by subsidence and various other environmental problems for several decades. The estimated subsidence rates show high variability at short as well as large spatial scales. The highest subsidence rates of up to 6 cm per year occur mainly in urban areas. It can be shown that most of the subsidence originates in the shallow subsurface. The presented method for reducing spatially correlated noise improves the results significantly when an adequate spatial distribution of reference areas is available; in that case the noise is effectively reduced, and independent results from two interferogram stacks acquired from different orbits agree closely. For the analyzed six-year time series, the integration of TPS points yields a considerably larger number of identified TPS than PS points across the study area and thus substantially improves the observation network. A special use case of TPS integration is presented that clusters TPS points which appeared during the analyzed time series in order to systematically identify new constructions and analyze their initial displacement time series.
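The covariance-based reference integration described above — sampling noise at reference pixels and interpolating it to the remaining PS pixels using its spatial statistics — is in spirit a kriging-style interpolation. The following is a minimal illustrative sketch, not the thesis implementation: the Gaussian covariance model, its parameters, and all coordinates are assumptions chosen for the example.

```python
import numpy as np

def gaussian_cov(d, sill=1.0, corr_range=5000.0):
    # Assumed Gaussian covariance model: correlation decays with distance (m)
    return sill * np.exp(-(d / corr_range) ** 2)

def interpolate_noise(ref_xy, ref_noise, target_xy):
    """Simple-kriging-style interpolation of spatially correlated noise
    sampled at reference pixels onto the remaining PS pixels."""
    d_rr = np.linalg.norm(ref_xy[:, None] - ref_xy[None, :], axis=-1)
    d_rt = np.linalg.norm(ref_xy[:, None] - target_xy[None, :], axis=-1)
    K = gaussian_cov(d_rr) + 1e-6 * np.eye(len(ref_xy))  # ref-ref covariance
    k = gaussian_cov(d_rt)                               # ref-target covariance
    weights = np.linalg.solve(K, k)                      # interpolation weights
    return weights.T @ ref_noise

# Illustrative data: three reference pixels with known noise samples (rad)
ref_xy = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])
ref_noise = np.array([0.3, -0.1, 0.2])
targets = np.array([[500.0, 500.0], [2000.0, 2000.0]])
print(interpolate_noise(ref_xy, ref_noise, targets))
```

The interpolated field would then be subtracted from the interferometric phase at each PS pixel; the thesis method additionally accounts for the estimated spatial statistics of the actual noise rather than a fixed model.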
Design and Evaluation of a Hardware System for Online Signal Processing within Mobile Brain-Computer Interfaces
Brain-Computer Interfaces (BCIs) are innovative systems that enable direct communication between the brain and external devices. These interfaces have emerged as a transformative solution not only for individuals with neurological injuries, but also for a broader range of individuals, encompassing both medical and non-medical applications. Historically, the challenge of neurological injury being static after an initial recovery phase has driven researchers to explore innovative avenues. Since the 1970s, BCIs have been at the forefront of these efforts. As research has progressed, BCI applications have expanded, showing potential in a wide range of applications, including those for less severely disabled individuals (e.g. in the context of hearing aids) and completely healthy individuals (e.g. in the entertainment industry). However, the future of BCI research also depends on the availability of reliable BCI hardware to ensure real-world application.
The CereBridge system designed and implemented in this work represents a significant leap forward in brain-computer interface technology by integrating all EEG signal acquisition and processing hardware into a mobile system. The processing hardware architecture is centered around an FPGA with an ARM Cortex-M3 within a heterogeneous IC, ensuring flexibility and efficiency in EEG signal processing. The modular design of the system, consisting of three individual boards, ensures adaptability to different requirements. With a focus on full mobility, the complete system is mounted on the scalp, can operate autonomously, requires no external interaction, and weighs approximately 56 g, including the 16-channel EEG sensors.
The proposed customizable dataflow concept facilitates the exploration and seamless integration of algorithms, increasing the flexibility of the system. This is further underscored by the ability to apply different algorithms to recorded EEG data to meet different application goals. High-Level Synthesis (HLS) was used to port algorithms to the FPGA, accelerating the algorithm development process and facilitating rapid implementation of algorithm variants. Evaluations have shown that the CereBridge system is capable of integrating the complete signal processing chain required for various BCI applications. Furthermore, it can operate continuously for more than 31 hours with an 1800 mAh battery, making it a viable solution for long-term mobile EEG recording and real-world BCI studies.
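The quoted battery life implies an average current budget that is easy to sanity-check. A minimal back-of-envelope sketch, assuming the full nominal 1800 mAh capacity is usable and the draw is roughly constant:

```python
capacity_mah = 1800.0            # stated battery capacity
runtime_h = 31.0                 # stated continuous operation time
avg_current_ma = capacity_mah / runtime_h
print(f"implied average draw <= {avg_current_ma:.0f} mA")
```

This puts the whole-system average draw below roughly 58 mA, a plausible budget for a heterogeneous FPGA/MCU platform with aggressive power management.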
Compared to existing research platforms, the CereBridge system offers unprecedented performance and features for a mobile BCI. It not only meets the relevant requirements for a mobile BCI system, but also paves the way for the rapid transition of algorithms from the laboratory to real-world applications. In essence, this work provides a comprehensive blueprint for the development and implementation of a state-of-the-art mobile EEG-based BCI system, setting a new benchmark in BCI hardware for real-world applicability.
Targeting immune and desmoplastic tumor microenvironment to sensitize gynecological cancer cells to therapy
Cancer is a pervasive global threat that manifests with diverse clinical attributes and notable mortality rates, particularly attributable to its metastatic potential in solid cancers. These tumours encompass various types, including epithelial cancers like high-grade serous ovarian cancer (HGSC) and mesenchymal cancers like uterine sarcomas (USs).
Despite the differing origins of USs and HGSCs, the transition between epithelial and mesenchymal states remains remarkably plastic and occurs frequently in both cancers. This plasticity holds immense significance for understanding tumour invasiveness and metastasis. The tumour microenvironment (TME) emerges as a crucial influence, exerting its impact on cancer progression, epithelial-mesenchymal transition (EMT), metastasis, and even chemoresistance. The TME comprises various elements, with the extracellular matrix (ECM), containing structural proteins like collagens, standing out as a key constituent. Moreover, immune cells within the TME, such as lymphocytes and macrophages, actively interact with both the ECM and cancer cells, shaping local responses that kill the cancer cells or support their growth. Understanding these intricate tumour-TME interactions is imperative for formulating effective strategies aimed at modulating the immune response and halting cancer progression. Therefore, a nuanced comprehension of these complexities is crucial in developing strategies to combat cancer effectively.
This thesis focuses on identifying TME factors, including ECM components and immune cell interactions, in gynaecological cancers to improve precision medicine, including immunotherapies and other novel treatments.
In Paper I, uterine sarcomas presented distinct immune signatures with prognostic value, independent of tumour type. FOXP3+ cell density and the CD8+/FOXP3+ ratio (CFR) correlated with favourable survival in endometrial stromal sarcomas (ESS) and undifferentiated uterine sarcomas (USS). A high CFR also correlated with upregulation of ECM organization pathways. In Paper II, conversely, uterine leiomyosarcomas (uLMS) showed distinct behaviours, with lower collagen density and upregulated ECM remodelling enzymes correlating with aggressiveness. MMP-14 and yes-associated protein 1 (YAP) were required for uLMS growth and invasion. In Paper III, shifting to HGSC, the matrisome, the group of proteins encoded by genes for core ECM proteins (collagens, proteoglycans, and ECM glycoproteins) and ECM-associated proteins (proteins structurally resembling ECM proteins, ECM remodelling enzymes, and secreted factors), showed changes in expression depending on the type of tumour host tissue and after chemotherapy. Among the scrutinized proteins, collagen VI exhibited elevated expression linked to shortened survival in ovarian cancer patients. Mechanistically, collagen VI promoted platinum resistance via the stiffness-dependent β1 integrin-pMLC and YAP/TAZ pathways in HGSC cell lines.
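The CD8+/FOXP3+ ratio (CFR) used as a prognostic marker above is a simple derived quantity; a minimal sketch, where the cell densities are illustrative values rather than study data:

```python
def cfr(cd8_density, foxp3_density):
    """CD8+/FOXP3+ ratio (CFR) from immune-cell densities (cells/mm^2)."""
    if foxp3_density <= 0:
        raise ValueError("FOXP3+ density must be positive")
    return cd8_density / foxp3_density

# Illustrative densities only; higher CFR correlated with favourable survival
print(cfr(120.0, 40.0))
```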
In summary, this integrated exploration of uterine sarcomas and ovarian cancer provides a comprehensive understanding of their TME. The study elucidates diverse immune and molecular features, offering potential prognostic markers and therapeutic targets. The findings underscore the complexity of these gynaecological malignancies, emphasizing the need for tailored approaches to understanding and combating these diseases.
Laboratory multistatic 3D SAR with polarimetry and sparse aperture sampling
With the advent of constellations of SAR satellites, and the possibility of swarms of SAR UAVs, there is increased interest in multistatic SAR image formation. This may provide advantages including three-dimensional image formation free of clutter overlay; the coherent combination of bistatic SAR geometries for improved image resolution; and the collection of additional scattering information, including polarimetric. The polarimetric collection may provide useful target information, such as its orientation, polarisability, or number of interactions with the radar signal; distributed receivers would be more likely to capture any bright specular responses from targets in the scene, making target outlines distinct. Highlight results from multistatic polarimetric SAR experiments at the Cranfield University GBSAR laboratory are presented, illustrating the utility of the approach for fully sampled 3D SAR image formation, and for sparse-aperture SAR 3D point-cloud generation with a newly developed volumetric multistatic interferometry algorithm.
Defence Science and Technology Laboratory. Grant Number: P1568
Proceedings of the 10th International congress on architectural technology (ICAT 2024): architectural technology transformation.
The profession of architectural technology is influential in the transformation of the built environment regionally, nationally, and internationally. The congress provides a platform for industry, educators, researchers, and the next generation of built environment students and professionals to showcase where their influence is transforming the built environment through novel ideas, businesses, leadership, innovation, digital transformation, research and development, and sustainable forward-thinking technological and construction assembly design.
Impact of diagenesis on the pore evolution and sealing capacity of carbonate cap rocks in the Tarim Basin, China
Analyzing the pore structure and sealing efficiency of carbonate cap rocks is essential to assess their ability to retain hydrocarbons in reservoirs and minimize leaking risks. In this contribution, the impact of diagenesis on the cap rocks' sealing capacity is studied in terms of their pore structure by analyzing rock samples from Ordovician carbonate reservoirs (Tarim Basin). Four lithology types are recognized: highly compacted, peloidal packstone-grainstone; highly cemented, intraclastic-oolitic-bioclastic grainstone; peloidal dolomitic limestone; and incipiently dolomitized, peloidal packstone-grainstone. The pore types of cap rocks include microfractures, intercrystalline pores, intergranular pores, and dissolution vugs. The pore structure of these cap rocks was heterogeneously modified by six diagenetic processes, including calcite cementation, dissolution, mechanical and chemical compaction, dolomitization, and calcitization (dedolomitization). Three situations affect the rocks' sealing capacity: (1) grainstone cap rocks present high sealing capacity in cases where compaction preceded cementation; (2) residual microfractures connecting adjacent pores result in low sealing capacity; and (3) increasing grain size in grainstones results in a larger proportion of intergranular pores being cemented. Four classes of cap rocks have been defined according to the lithology, pore structures, diagenetic alterations, and sealing performance. Class I cap rocks present the best sealing capacity because they underwent intense mechanical compaction, abundant chemical compaction, and calcite cementation, which contributed to the heterogeneous pore structures with poor pore connectivity. A four-stage, conceptual model of pore evolution of cap rocks is presented to reveal how the diagenetic evolution of cap rocks determines the heterogeneity of their sealing capacity in carbonate reservoirs.
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence ("AI") and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics, and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Interpretable Machine Learning Architectures for Efficient Signal Detection with Applications to Gravitational Wave Astronomy
Deep learning has seen rapid evolution in the past decade, accomplishing tasks that were previously unimaginable. At the same time, researchers strive to better understand and interpret the underlying mechanisms of the deep models, which are often justifiably regarded as "black boxes". Overcoming this deficiency will not only serve to suggest better learning architectures and training methods, but also extend deep learning to scenarios where interpretability is key to the application. One such scenario is signal detection and estimation, with gravitational wave detection as a specific example, where classic methods are often preferred for their interpretability. Nonetheless, while classic statistical detection methods such as matched filtering excel in their simplicity and intuitiveness, they can be suboptimal in terms of both accuracy and computational efficiency. Therefore, it is appealing to have methods that achieve "the best of both worlds", namely enjoying simultaneously excellent performance and interpretability.
In this thesis, we aim to bridge this gap between modern deep learning and classic statistical detection, by revisiting the signal detection problem from a new perspective. First, to address the perceived distinction in interpretability between classic matched filtering and deep learning, we state the intrinsic connections between the two families of methods, and identify how trainable networks can address the structural limitations of matched filtering. Based on these ideas, we propose two trainable architectures that are constructed based on matched filtering, but with learnable templates and adaptivity to unknown noise distributions, and therefore higher detection accuracy. We next turn our attention toward improving the computational efficiency of detection, where we aim to design architectures that leverage structures within the problem for efficiency gains. By leveraging the statistical structure of class imbalance, we integrate hierarchical detection into trainable networks, and use a novel loss function which explicitly encodes both detection accuracy and efficiency. Furthermore, by leveraging the geometric structure of the signal set, we consider using signal space optimization as an alternative computational primitive for detection, which is intuitively more efficient than covering with a template bank. We theoretically prove the efficiency gain by analyzing Riemannian gradient descent on the signal manifold, which reveals an exponential improvement in efficiency over matched filtering. We also propose a practical trainable architecture for template optimization, which makes use of signal embedding and kernel interpolation.
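Classic matched filtering, the baseline the thesis builds on, can be sketched in a few lines: correlate a known, unit-norm template against the noisy data stream and threshold the peak statistic. The template, noise level, and threshold below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def matched_filter_detect(data, template, threshold):
    """Slide the unit-norm template over the data and threshold the peak
    correlation, the classic matched-filter detection statistic."""
    template = template / np.linalg.norm(template)
    scores = np.correlate(data, template, mode="valid")
    peak = scores.max()
    return peak >= threshold, peak

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
template = np.sin(2 * np.pi * 30 * t**2)     # chirp-like waveform (assumed)
data = rng.normal(0.0, 1.0, 2000)            # unit-variance Gaussian noise
data[800:1000] += 3.0 * template             # signal buried at an unknown offset
detected, peak = matched_filter_detect(data, template, threshold=5.0)
print(detected)
```

Under unit-variance Gaussian noise, each score is standard normal in the absence of a signal, so a threshold of a few standard deviations controls false alarms; the thesis architectures replace the fixed template and threshold with learnable counterparts.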
We demonstrate the performance of all proposed architectures on the task of gravitational wave detection in astrophysics, where matched filtering is the current method of choice. The architectures are also widely applicable to general signal or pattern detection tasks, which we exemplify with the handwritten digit recognition task using the template optimization architecture. Together, we hope this work is useful to scientists and engineers seeking machine learning architectures with high performance and interpretability, and that it contributes to our understanding of deep learning as a whole.
A Survey on Few-Shot Class-Incremental Learning
Large deep learning models are impressive, but they struggle when real-time data is not available. Few-shot class-incremental learning (FSCIL) poses a significant challenge for deep neural networks to learn new tasks from just a few labeled samples without forgetting the previously learned ones. This setup can easily lead to catastrophic forgetting and overfitting problems, severely affecting model performance. Studying FSCIL helps overcome deep learning model limitations on data volume and acquisition time, while improving the practicality and adaptability of machine learning models. This paper provides a comprehensive survey on FSCIL. Unlike previous surveys, we aim to synthesize few-shot learning and incremental learning, focusing on introducing FSCIL from two perspectives, while reviewing over 30 theoretical research studies and more than 20 applied research studies. From the theoretical perspective, we provide a novel categorization approach that divides the field into five subcategories: traditional machine learning methods, meta learning-based methods, feature and feature space-based methods, replay-based methods, and dynamic network structure-based methods. We also evaluate the performance of recent theoretical research on benchmark datasets of FSCIL. From the application perspective, FSCIL has achieved impressive results in various fields of computer vision such as image classification, object detection, and image segmentation, as well as in natural language processing and graph learning. We summarize the important applications. Finally, we point out potential future research directions, including applications, problem setups, and theory development. Overall, this paper offers a comprehensive analysis of the latest advances in FSCIL from a methodological, performance, and application perspective.
Automatic Characterization of Block-In-Matrix Rock Outcrops through Segmentation Algorithms and Its Application to an Archaeo-Mining Case Study
The mechanical behavior of block-in-matrix materials is heavily dependent on their block content. This parameter is in most cases obtained through visual analyses of the ground through digital imagery, which provides the areal block proportion (ABP) of the area analyzed. Nowadays, computer vision models have the capability to extract knowledge from the information stored in these images. In this research, we analyze and compare classical feature-detection algorithms with state-of-the-art models for the automatic calculation of the ABP parameter in images from surface and underground outcrops. The outcomes of this analysis result in the development of a framework for ABP calculation based on the Segment Anything Model (SAM), which is capable of performing this task at a human level when compared with the results of 32 experts in the field. Consequently, this model can help reduce human bias in the estimation of mechanical properties of block-in-matrix materials, as well as prevent underground technical problems caused by mischaracterization of rock block quantities and dimensions. The methodology used to obtain the ABP at different outcrops is combined with estimates of the rock matrix properties and other characterization techniques to mechanically characterize the block-in-matrix materials. The combination of all these techniques has been applied to analyze, understand and, for the first time, attempt to model Roman gold-mining strategies at an archaeological site in NW Spain. This mining method is explained through a 2D finite-element numerical model.
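Once a segmentation model has produced a binary block mask, the ABP itself reduces to a pixel-area ratio. A minimal sketch of that final step, using a synthetic mask rather than actual SAM output:

```python
import numpy as np

def areal_block_proportion(mask):
    """ABP = block pixels / total pixels, from a binary segmentation mask
    (True where a rock block was segmented, False for matrix)."""
    mask = np.asarray(mask, dtype=bool)
    return mask.sum() / mask.size

# Synthetic 100x100 outcrop image with one 40x40-pixel "block"
mask = np.zeros((100, 100), dtype=bool)
mask[20:60, 30:70] = True
print(areal_block_proportion(mask))  # 0.16
```

In practice the mask would be the union of SAM's block segments, and the ratio would be computed per outcrop image before comparison with expert estimates.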