65 research outputs found

    Cross-attention Spatio-temporal Context Transformer for Semantic Segmentation of Historical Maps

    Full text link
    Historical maps provide useful spatio-temporal information on the Earth's surface from before modern earth observation techniques came into being. To extract information from maps, neural networks, which have gained wide popularity in recent years, have replaced hand-crafted map-processing methods and tedious manual labor. However, aleatoric uncertainty, also known as data-dependent uncertainty, which is inherent in the drawing/scanning/fading defects of the original map sheets and in the inadequate context available when maps are cropped into small tiles to respect the memory limits of the training process, challenges the model to make correct predictions. As aleatoric uncertainty cannot be reduced even when more training data are collected, we argue that complementary spatio-temporal contexts can be helpful. To achieve this, we propose a U-Net-based network that fuses spatio-temporal features with cross-attention transformers (U-SpaTem), aggregating information over a larger spatial range as well as through a temporal sequence of images. Our model achieves better performance than other state-of-the-art models that use either temporal or spatial contexts. Compared with pure vision transformers, our model is more lightweight and effective. To the best of our knowledge, leveraging both spatial and temporal contexts has rarely been explored before in the segmentation task. Although our application is the segmentation of historical maps, we believe the method can be transferred to other fields with similar problems, such as temporal sequences of satellite images. Our code is freely accessible at https://github.com/chenyizi086/wu.2023.sigspatial.git
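    The abstract does not spell out the architecture; as a rough illustration, a cross-attention block that lets a tile's features attend to spatio-temporal context features might look like the following PyTorch sketch (the class name, dimensions, and tensor shapes are assumptions, not the published U-SpaTem code):

```python
# Hypothetical sketch of cross-attention fusion between a tile's features and
# spatio-temporal context features; names and shapes are illustrative only.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # Queries come from the target tile; keys/values come from the context
        # (neighbouring tiles and/or other map editions of the same area).
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tile_feats: torch.Tensor, context_feats: torch.Tensor) -> torch.Tensor:
        # tile_feats:    (B, N_tile, dim)  flattened feature map of the tile
        # context_feats: (B, N_ctx,  dim)  flattened spatial/temporal context
        fused, _ = self.attn(query=tile_feats, key=context_feats, value=context_feats)
        return self.norm(tile_feats + fused)  # residual connection

# Usage: fuse a 32x32 tile feature map with twice as many context tokens.
fusion = CrossAttentionFusion(dim=256, heads=8)
tile = torch.randn(2, 32 * 32, 256)
context = torch.randn(2, 2 * 32 * 32, 256)
print(fusion(tile, context).shape)  # torch.Size([2, 1024, 256])
```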

    Construction and analysis of a survival-associated competing endogenous RNA network in breast cancer

    Get PDF
    Background: Recently, an increasing number of studies have shown that non-coding RNAs are closely associated with the progression and metastasis of cancer by participating in competing endogenous RNA (ceRNA) networks. However, the role of survival-associated ceRNAs in breast cancer (BC) remains unknown. Methods: The Gene Expression Omnibus database and The Cancer Genome Atlas (TCGA) BRCA dataset were used to identify differentially expressed RNAs. Furthermore, circRNA-miRNA interactions were predicted based on CircInteractome, while miRNA-mRNA interactions were predicted based on TargetScan, miRDB, and miRTarBase. The survival-associated ceRNA networks were constructed based on the predicted circRNA-miRNA and miRNA-mRNA pairs. Finally, the mechanism of miRNA-mRNA pairs was determined. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses of survival-related mRNAs were performed using the hypergeometric distribution formula in R software. The prognostic relevance of hub genes was confirmed using gene set enrichment analysis (GSEA). Results: Based on the DE-circRNAs of the top 10 initial candidates, 162 DE-miRNAs and 34 DE-miRNAs associated with significant overall survival were obtained. The miRNA target genes were then identified using online tools and verified using the TCGA database. Overall, 46 survival-associated DE-mRNAs were obtained. The results of GO and KEGG pathway enrichment analyses implied that up-regulated survival-related DE-mRNAs were mostly enriched in the “regulation of cell cycle” and “chromatin” pathways, while down-regulated survival-related DE-mRNAs were mostly enriched in the “negative regulation of neurotrophin TRK receptor signaling” and “interleukin-6 receptor complex” pathways. Finally, the survival-associated circRNA-miRNA-mRNA ceRNA network was constructed using 34 miRNAs, 46 mRNAs, and 10 circRNAs. Based on the PPI network, two ceRNA axes were identified; these ceRNA axes could be considered biomarkers for BC. GSEA results revealed that the hub genes were correlated with “VANTVEER_BREAST_CANCER_POOR_PROGNOSIS”, and the hub genes were verified using BC patients' tissues. Conclusions: In this study, we constructed a circRNA-mediated ceRNA network related to BC. This network provides new insight for discovering potential biomarkers for diagnosing and treating BC.
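    The abstract states that GO/KEGG enrichment was computed with the hypergeometric distribution formula in R; a minimal Python equivalent of that test is sketched below (the function name and all gene counts are hypothetical, not taken from the study):

```python
# Illustrative sketch (not the authors' code) of the hypergeometric test that
# underlies GO/KEGG over-representation analysis; gene counts are made up.
from scipy.stats import hypergeom

def enrichment_p_value(n_universe: int, n_pathway: int,
                       n_selected: int, n_overlap: int) -> float:
    """P(X >= n_overlap) when drawing n_selected genes from a universe of
    n_universe genes, of which n_pathway belong to the pathway."""
    # sf(k - 1) gives P(X >= k) for the hypergeometric distribution.
    return hypergeom.sf(n_overlap - 1, n_universe, n_pathway, n_selected)

# Example: 46 survival-associated DE-mRNAs drawn from ~20,000 genes,
# 8 of which fall in a 200-gene pathway (numbers are hypothetical).
print(enrichment_p_value(20000, 200, 46, 8))
```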

    Chemotherapy-induced nausea and vomiting among cancer patients in Shanghai: a cross-sectional study

    Get PDF
    Background and purpose: Chemotherapy-induced nausea and vomiting (CINV) can severely impair body functions and even lead to death, so its prevention is critically important in patients receiving emetogenic chemotherapy regimens. This study aimed to investigate the prevalence and treatment of CINV in Grade-A tertiary hospitals in Shanghai and to explore risk factors for CINV in order to improve its management. Methods: The clinical data of 376 cancer patients in Grade-A tertiary hospitals in Shanghai from October 2022 to December 2022 were collected retrospectively, and a questionnaire was used to conduct a cross-sectional study. Univariate and multivariable logistic regression models were used to evaluate factors influencing CINV. Results: CINV management and guideline adherence in 2022 improved significantly compared with five years earlier. For patients receiving high-emetic-risk chemotherapy regimens, guideline adherence increased from 21.6% to 67.0%. For patients receiving moderate-emetic-risk chemotherapy regimens, use of a neurokinin-1 (NK-1) receptor antagonist was not significantly associated with CINV. Multivariable analysis showed that the chemotherapy regimen was the only risk factor for CINV during the whole period (P<0.05). Conclusion: The chemotherapy regimen is the main risk factor for CINV. To control CINV better, clinical practitioners should focus first on the intrinsic emetogenic risk of chemotherapy regimens, estimate that risk, and adhere more closely to guidelines.
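    As a rough illustration of the univariate and multivariable logistic regression described above, the following Python sketch fits both models on hypothetical toy data (all variable names and values are made up, not the study's dataset):

```python
# Illustrative sketch (not the study's actual analysis) of univariate and
# multivariable logistic regression for a binary CINV outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical toy data standing in for the questionnaire variables.
rng = np.random.default_rng(0)
n = 376
df = pd.DataFrame({
    "cinv": rng.integers(0, 2, n),
    "regimen_risk": rng.choice(["low", "moderate", "high"], n),
    "age": rng.integers(30, 80, n),
    "nk1_antagonist": rng.integers(0, 2, n),
})

# Univariate screen: one candidate predictor at a time.
uni = smf.logit("cinv ~ C(regimen_risk)", data=df).fit(disp=False)

# Multivariable model adjusting for the candidate risk factors jointly.
multi = smf.logit("cinv ~ C(regimen_risk) + age + C(nk1_antagonist)",
                  data=df).fit(disp=False)

print(np.exp(multi.params))      # odds ratios
print(np.exp(multi.conf_int()))  # 95% confidence intervals
```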

    Post-capitalist property

    Get PDF
    When writing about property and property rights in his imagined post-capitalist society of the future, Marx seemed to envisage ‘individual property’ co-existing with ‘socialized property’ in the means of production. As the social and political consequences of faltering growth and increasing inequality, debt and insecurity gradually manifest themselves, and with automation and artificial intelligence lurking in the wings, the future of capitalism, at least in its current form, looks increasingly uncertain. With this, the question of what property and property rights might look like in the future, in a potentially post-capitalist society, is becoming ever more pertinent. Is the choice simply between private property and markets, and public (state-owned) property and planning? Or can individual and social property in the (same) means of production co-exist, as Marx suggested? This paper explores ways in which they might, through an examination of the Chinese household responsibility system (HRS) and the ‘fuzzy’ and seemingly confusing regime of land ownership that it instituted. It examines the HRS against the backdrop of Marx’s ideas about property and subsequent (post-Marx) theorizing about the legal nature of property, in which property has come widely to be conceptualized not as a single, unitary ‘ownership’ right to a thing (or, indeed, as the thing itself) but as a ‘bundle of rights’. The bundle-of-rights idea of property, it suggests, enables us to see not only that ‘individual’ and ‘socialized’ property in the (same) means of production might indeed co-exist, but that the range of institutional possibility is far greater than that between capitalism and socialism/communism as traditionally conceived.

    Semi-automatic vectorization and georeferencing of historical maps: the case of the Atlas de Paris (1789-1950)

    No full text
    Maps have been a unique source of knowledge for centuries. Such historical documents provide invaluable information for analyzing complex spatial transformations over important time frames. This is particularly true for urban areas that encompass multiple interleaved research domains: humanities, social sciences, etc. The large amount and significant diversity of map sources call for automatic image processing techniques in order to extract the relevant objects as vector features. The complexity of maps (text, noise, digitization artifacts, etc.) has hindered the development of versatile and efficient raster-to-vector approaches for decades. In this thesis, we propose a learnable, reproducible, and reusable solution for the automatic transformation of raster maps into vector objects (building blocks, streets, rivers), focusing on the extraction of closed shapes. Our approach is built upon the complementary strengths of convolutional neural networks, which excel at filtering edges but exhibit poor topological properties in their outputs, and mathematical morphology, which offers solid guarantees regarding closed-shape extraction while being very sensitive to noise. In order to improve the robustness of deep edge filters to noise, we review several topology-preserving loss functions and propose new ones that improve the topological properties of the results. We also introduce a new contrast convolution (CConv) layer to investigate how architectural changes can impact such properties. Finally, we investigate the different approaches that can be used to implement each stage, and how to combine them in the most effective way. Thanks to a working shape extraction pipeline, we propose a new alignment procedure for historical map images, and start to leverage the redundancies contained in map sheets with similar contents to propagate annotations, improve vectorization quality, and eventually detect evolution patterns for later analysis or to automatically assess vectorization quality. To evaluate the performance of all the methods mentioned above, we released a new dataset of annotated historical maps. It is the first public and open dataset targeting the task of historical map vectorization. We hope that, thanks to our publications and the public, open release of our datasets, code, and results, this work will benefit a wide range of historical map-related applications.
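    As a rough sketch of how a learned edge-probability map can be combined with mathematical morphology to recover closed shapes, the Python example below seeds a watershed from the minima of an edge map (the function, parameters, and the watershed choice are illustrative assumptions, not the exact operators used in the thesis):

```python
# Minimal sketch (assumptions, not the thesis pipeline) of combining a learned
# edge-probability map with mathematical morphology: local minima of the edge
# map seed a watershed, whose basins form closed regions.
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def closed_shapes_from_edges(edge_prob: np.ndarray, h: float = 0.1) -> np.ndarray:
    """edge_prob: (H, W) array in [0, 1], e.g. output of a CNN edge filter.
    Returns a label image where each label is a closed region."""
    # Suppress shallow minima (noise) before seeding, then flood the edge map.
    seeds, _ = ndi.label(h_minima(edge_prob, h))
    return watershed(edge_prob, markers=seeds)

# Toy usage on a synthetic edge map containing one closed square contour.
edge = np.zeros((64, 64))
edge[16, 16:48] = edge[48, 16:48] = 1.0
edge[16:49, 16] = edge[16:49, 48] = 1.0
labels = closed_shapes_from_edges(edge)
print(labels.max(), "regions")  # expect 2 regions: interior and exterior
```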

    Modern vectorization and alignment of historical maps: an application to the Atlas de Paris (1789-1950)

    No full text
    Maps have been a unique source of knowledge for centuries. Such historical documents provide invaluable information for analyzing complex spatial transformations over important time frames. This is particularly true for urban areas that encompass multiple interleaved research domains: humanities, social sciences, etc. The large amount and significant diversity of map sources call for automatic image processing techniques in order to extract the relevant objects as vector features. The complexity of maps (text, noise, digitization artifacts, etc.) has hindered the development of versatile and efficient raster-to-vector approaches for decades. In this thesis, we propose a learnable, reproducible, and reusable solution for the automatic transformation of raster maps into vector objects (building blocks, streets, rivers), focusing on the extraction of closed shapes. Our approach is built upon the complementary strengths of convolutional neural networks, which excel at filtering edges but exhibit poor topological properties in their outputs, and mathematical morphology, which offers solid guarantees regarding closed-shape extraction while being very sensitive to noise. In order to improve the robustness of deep edge filters to noise, we review several topology-preserving loss functions and propose new ones that improve the topological properties of the results. We also introduce a new contrast convolution (CConv) layer to investigate how architectural changes can impact such properties. Finally, we investigate the different approaches that can be used to implement each stage, and how to combine them in the most efficient way. Thanks to a shape extraction pipeline, we propose a new alignment procedure for historical map images, and start to leverage the redundancies contained in map sheets with similar contents to propagate annotations, improve vectorization quality, and eventually detect evolution patterns for later analysis or to automatically assess vectorization quality. To evaluate the performance of all the methods mentioned above, we released a new dataset of annotated historical map images. It is the first public and open dataset targeting the task of historical map vectorization. We hope that, thanks to our publications and the public, open release of our datasets, code, and results, this work will benefit a wide range of historical map-related applications.
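    The CConv layer itself is defined in the thesis; purely as a generic illustration of the kind of architectural change meant, the sketch below shows a convolution that reacts to local contrast (pixel value minus a local mean) rather than raw intensities (the class name and formulation are assumptions, not the thesis's layer):

```python
# Generic illustration only: the thesis's CConv layer is defined in the thesis
# itself. This sketch merely shows one way a convolution can be made to react
# to local contrast instead of absolute gray levels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalContrastConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Local mean via average pooling, subtracted before the convolution,
        # so the layer sees contrast rather than raw intensities.
        local_mean = F.avg_pool2d(x, self.k, stride=1, padding=self.k // 2)
        return self.conv(x - local_mean)

layer = LocalContrastConv(1, 16)
print(layer(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```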

    Development of an Ethernet-capable Smart Meter Unit

    No full text
    This thesis presents the working principle and operation of a smart meter. At the beginning of the study, a large amount of related material was reviewed, including the basic features of smart meters and designs from other projects, but choosing the final design required considerable effort. BSc/BA, Electrical engineering

    Semi-automatic vectorization and georeferencing of historical maps: the case of the Atlas de Paris (1789-1950)

    No full text
    Maps have been a unique source of knowledge for centuries. Such historical documents provide invaluable information for analyzing complex spatial transformations over important time frames. This is particularly true for urban areas that encompass multiple interleaved research domains: humanities, social sciences, etc. The large amount and significant diversity of map sources call for automatic image processing techniques in order to extract the relevant objects as vector features. The complexity of maps (text, noise, digitization artifacts, etc.) has hindered the development of versatile and efficient raster-to-vector approaches for decades. In this thesis, we propose a learnable, reproducible, and reusable solution for the automatic transformation of raster maps into vector objects (building blocks, streets, rivers), focusing on the extraction of closed shapes. Our approach is built upon the complementary strengths of convolutional neural networks, which excel at filtering edges but exhibit poor topological properties in their outputs, and mathematical morphology, which offers solid guarantees regarding closed-shape extraction while being very sensitive to noise. In order to improve the robustness of deep edge filters to noise, we review several topology-preserving loss functions and propose new ones that improve the topological properties of the results. We also introduce a new contrast convolution (CConv) layer to investigate how architectural changes can impact such properties. Finally, we investigate the different approaches that can be used to implement each stage, and how to combine them in the most effective way. Thanks to a working shape extraction pipeline, we propose a new alignment procedure for historical map images, and start to leverage the redundancies contained in map sheets with similar contents to propagate annotations, improve vectorization quality, and eventually detect evolution patterns for later analysis or to automatically assess vectorization quality. To evaluate the performance of all the methods mentioned above, we released a new dataset of annotated historical maps. It is the first public and open dataset targeting the task of historical map vectorization. We hope that, thanks to our publications and the public, open release of our datasets, code, and results, this work will benefit a wide range of historical map-related applications.
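    As a generic illustration of aligning two scans of similar map sheets, the sketch below matches ORB features and fits a homography with OpenCV (a standard recipe given for context only, not the alignment procedure developed in the thesis):

```python
# Hypothetical sketch of aligning two scans of similar map sheets with feature
# matching and a homography (OpenCV); a generic recipe, not the thesis's method.
import cv2
import numpy as np

def align(moving: np.ndarray, fixed: np.ndarray) -> np.ndarray:
    """Warp the 8-bit grayscale image `moving` onto the geometry of `fixed`."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(moving, None)
    k2, d2 = orb.detectAndCompute(fixed, None)
    # Brute-force Hamming matching with cross-checking to keep reliable pairs.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC-estimated homography rejects outlier matches.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(moving, H, fixed.shape[:2][::-1])
```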