
    The Opium Wars and Capital Accumulation: Comments on Karl Marx’s Perspectives on the Qing and England Recorded in The New York Herald Tribune

    Karl Marx, a pioneer of anti-imperialism (or anti-mercantilism), lived in exile in London in the 1850s. Over the following decade, he wrote nearly 500 editorials for The New York Herald Tribune. Although a handful of these editorials offer important clues for understanding Marx’s thinking on imperialism, they have been largely ignored. This study fleshes out Marx’s thinking, especially with respect to the relationship between wars and capital accumulation, using the example of the Second Opium War, launched in the mid-1850s. The dominant Western powers of the time, such as Great Britain and France, advocated mercantilism, usually backed by military force, and regarded wars as a quick method of accumulating capital for national wealth. Additionally, Britain skillfully exercised its cultural hegemony, using the periodical The Economist to legitimize the wars against Qing China. By the mid-nineteenth century, Marx had already clearly observed the close relationship between the Opium Wars and capital accumulation.

    Rethinking the Split-Sample Approach in Hydrological Model Calibration

    Hydrological models, which have grown increasingly complex over the last half century with advances in computing power and data collection, are widely used to support decision-making in water resources management. Such computer-based models generally contain many parameters that cannot be directly measured, so calibration and validation are required during model building (development) to ensure model transferability and robustness. The most widely used method for assessing model transferability in time is the split-sample test (SST) framework, which has been a paradigm in the hydrological modeling community for decades. However, there is no clear guidance, nor empirical or numerical evidence, on how a dataset should be split into calibration and validation subsets, and SST decisions in the literature are often unclear and even subjective. Although past studies have devoted tremendous effort to improving model performance through various data splitting methods, data splitting remains a challenge, and no consensus has been reached on which splitting method is optimal. A key reason is the lack of a robust evaluation framework for objectively comparing data splitting methods in the “out-of-sample” model application period. To address these gaps, this thesis assesses different data splitting methods using a large-sample hydrology approach to identify optimal splitting methods under different conditions, and explores alternative validation methods that improve upon the robustness usually provided by the SST method. First, the thesis introduces a unique and comprehensive evaluation framework for comparing data splitting methods. The framework defines different model build years, so that models can be built under various data availability scenarios.
Years after the model build year are retained as the model testing period, which acts as “out-of-sample” data beyond the model building period and matches how models are applied in operational use. The framework can incorporate arbitrary data splitting methods into the comparison, because model performance is always compared over the common testing period regardless of how calibration and validation data are split within the model building period. Moreover, a reference climatology, based purely on observational data, is used to benchmark the model simulations. Model inadequacy is handled by considering the decisions modelers may make when faced with poor simulations, making model building more robust and realistic. Example approaches covering a wide range of practical concerns are provided for assessing large-sample modeling results. Two large-sample modeling experiments are performed within the proposed framework to compare data splitting methods. In the first experiment, two conceptual hydrological models are applied in 463 catchments across the United States to evaluate 50 continuous calibration sub-periods (CSPs) for model calibration (varying data period length and recency) across five model build year scenarios, ensuring robust results across three testing period conditions. Model performance in the testing periods is assessed from three independent angles: the frequency with which each short-period CSP outperforms its corresponding full-period CSP; the central tendency of the objective function metric computed over the testing period; and the frequency with which a CSP correctly classifies testing period failure and success. The second experiment assesses 44 representative continuous and discontinuous data splitting methods using a conceptual hydrological model in 463 catchments across the United States.
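The build-year framework can be sketched as a simple index partition. The function below is a minimal illustration, not the thesis's exact procedure; the choice of calibrating on the most recent half of the building period is just one of the many splits the experiments compare.

```python
import numpy as np

def build_year_split(years, build_year, cal_fraction=0.5):
    """Split per-timestep year labels into calibration, validation, and
    testing index arrays. Data up to and including build_year form the
    model building period; later data form the "out-of-sample" testing
    period. Here the most recent cal_fraction of the building period is
    used for calibration and the older remainder for validation."""
    years = np.asarray(years)
    building = np.flatnonzero(years <= build_year)
    testing = np.flatnonzero(years > build_year)
    n_cal = int(round(cal_fraction * building.size))
    calibration = building[building.size - n_cal:]  # most recent data
    validation = building[:building.size - n_cal]   # older data
    return calibration, validation, testing

# Ten years of annual data with the model built at the end of 2015
years = np.arange(2008, 2018)
cal, val, test = build_year_split(years, build_year=2015)
```

Because every splitting method shares the same testing indices, models built with different splits remain comparable over the common out-of-sample period.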
These data splitting methods cover all the ways hydrological model calibration split-sampling is currently done when only a single split sample is evaluated, plus one method drawn from data-driven modeling. This results in over 0.4 million model calibration-validation exercises and 1.7 million model testing exercises for an extensive analysis. Model performance in the testing periods is assessed as in the first experiment, except that all model optimization trials are utilized to draw even more robust conclusions. Three SST recommendations are made based on the strong empirical evidence. Calibrating models to older data and then validating them on newer data produced inferior testing period performance in every single analysis conducted and should be avoided. Calibrating a model to the full available data period and skipping temporal model validation entirely is the most robust choice. Hydrological modelers are therefore advised to rebuild their models after validation experiments, but prior to operational use, by calibrating to all available data. Finally, alternative model validation methods are tested to further enhance model robustness based on the above large-sample modeling results. A proxy validation replaces the traditional validation period of the SST method, using Split Kling-Gupta Efficiency (KGE) and Split Reference KGE in calibration to identify unacceptable models. The proxy validation shows some promise for enhancing model robustness when all data are used in calibration.
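The Kling-Gupta Efficiency underlying the proxy validation has a standard definition (Gupta et al., 2009). Below is a minimal implementation; the `split_kge` variant is one plausible reading of a "Split KGE" (score contiguous sub-periods separately and average), not necessarily the thesis's exact formulation.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency:
    KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), where r is the
    linear correlation, alpha the ratio of standard deviations, and beta
    the ratio of means. A perfect simulation scores 1."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def split_kge(sim, obs, n_splits=2):
    """Score contiguous sub-periods separately and average, so that poor
    performance in one sub-period cannot hide behind good performance in
    another (an assumed reading of the Split KGE idea)."""
    pairs = zip(np.array_split(np.asarray(sim, float), n_splits),
                np.array_split(np.asarray(obs, float), n_splits))
    return float(np.mean([kge(s, o) for s, o in pairs]))
```

Splitting the calibration period this way lets the split scores act as an internal consistency check, which is what allows them to stand in for a held-out validation period.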

    Enhancing long short-term memory (LSTM)-based streamflow prediction with a spatially distributed approach

    Deep learning (DL) algorithms have previously demonstrated their effectiveness in streamflow prediction. However, in hydrological time series modelling, the performance of existing DL methods is often bound by limited spatial information, as these data-driven models are typically trained with lumped (spatially aggregated) input data. In this study, we propose a hybrid approach, the Spatially Recursive (SR) model, which seamlessly integrates a lumped long short-term memory (LSTM) network with a physics-based hydrological routing simulation for enhanced streamflow prediction. The lumped LSTM was trained on basin-averaged meteorological and hydrological variables derived from 141 gauged basins in the Great Lakes region of North America. The SR model applies the trained LSTM at the subbasin scale for local streamflow predictions, which are then translated to the basin outlet by the hydrological routing model. We evaluated the efficacy of the SR model in predicting streamflow at 224 gauged stations across the Great Lakes region and compared its performance with that of the standalone lumped LSTM model. The results indicate that the SR model achieved performance on par with the lumped LSTM in the basins used for training the LSTM. Additionally, the SR model predicted streamflow more accurately in large basins (e.g., drainage area greater than 2000 km²), underscoring the substantial information loss associated with basin-wise feature aggregation. Furthermore, the SR model outperformed the lumped LSTM when applied to basins that were not part of the LSTM training (i.e., pseudo-ungauged basins). The implication of this study is that lumped LSTM predictions, especially in large basins and ungauged basins, can be reliably improved by accounting for spatial heterogeneity at finer resolution via the SR model.
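The lumped-versus-distributed contrast can be sketched with two toy functions. Both are illustrative assumptions: the area-weighted averaging stands in for basin-wise feature aggregation, and the integer-lag translation is a crude placeholder for the paper's physics-based routing model.

```python
import numpy as np

def lumped_forcing(subbasin_forcing, areas):
    """Basin-averaged (lumped) input: the area-weighted mean of subbasin
    forcings, i.e. the aggregation step that discards spatial
    heterogeneity. subbasin_forcing has shape (n_subbasins, n_timesteps)."""
    areas = np.asarray(areas, float)
    return areas @ np.asarray(subbasin_forcing, float) / areas.sum()

def route_to_outlet(local_flows, lags):
    """Toy stand-in for the routing step: shift each subbasin's locally
    predicted flow to the outlet by an integer time lag and sum the
    contributions."""
    local_flows = np.asarray(local_flows, float)
    n_sub, n_t = local_flows.shape
    outlet = np.zeros(n_t)
    for i in range(n_sub):
        outlet[lags[i]:] += local_flows[i, :n_t - lags[i]]
    return outlet
```

In the SR setup the LSTM sees each subbasin's forcing separately and only the predicted flows are combined, whereas the lumped setup averages the forcings first, which is where spatial information is lost.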

    Data and Knowledge Co-driving for Cancer Subtype Classification on Multi-Scale Histopathological Slides

    Artificial intelligence-enabled histopathological data analysis has become a valuable assistant to the pathologist. However, existing models lack the representation and inference abilities of pathologists, especially in cancer subtype diagnosis, which makes them unconvincing in clinical practice. For instance, pathologists typically observe the lesions of a slide from global to local and then give a diagnosis based on their knowledge and experience. In this paper, we propose a Data and Knowledge Co-driving (D&K) model to replicate a pathologist's process of cancer subtype classification on a histopathological slide. Specifically, in the data-driven module, the bagging mechanism from ensemble learning is leveraged to integrate the histological features of the various bags extracted by the embedding representation unit. Furthermore, a knowledge-driven module, grounded in the Gestalt principle from psychology, builds a three-dimensional (3D) expert knowledge space and maps histological features into this space for metric-based comparison; the diagnosis is then made according to the Euclidean distance between them. Extensive experimental results on both public and in-house datasets demonstrate that the D&K model achieves high performance and credible results compared with state-of-the-art methods for diagnosing histopathological subtypes. Code: https://github.com/Dennis-YB/Data-and-Knowledge-Co-driving-for-Cancer-Subtypes-Classificatio
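The distance-based diagnosis of the knowledge-driven module reduces to nearest-prototype classification in the 3D knowledge space. A minimal sketch follows; the prototype coordinates and subtype names are made up for illustration and are not taken from the paper.

```python
import numpy as np

def classify_by_knowledge_space(feature_3d, prototypes):
    """Assign a slide's mapped 3-D feature to the subtype whose expert
    prototype is nearest in Euclidean distance, mirroring the
    knowledge-driven module's distance-based decision."""
    names = list(prototypes)
    points = np.array([prototypes[n] for n in names], float)
    distances = np.linalg.norm(points - np.asarray(feature_3d, float), axis=1)
    return names[int(np.argmin(distances))]

# Hypothetical prototype coordinates in the 3-D expert knowledge space
prototypes = {"subtype_A": (0.0, 0.0, 0.0), "subtype_B": (1.0, 1.0, 1.0)}
```

The expert knowledge thus enters the model as fixed reference points, while the data-driven module supplies the feature that is measured against them.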

    Multi-Modality Multi-Scale Cardiovascular Disease Subtypes Classification Using Raman Image and Medical History

    Raman spectroscopy (RS) has been widely used for disease diagnosis, e.g., of cardiovascular disease (CVD), owing to its efficiency and component-specific testing capabilities. A series of popular deep learning methods have recently been introduced to learn nuanced features from RS for binary classification, outperforming conventional machine learning methods. However, these existing deep learning methods still face challenges in classifying subtypes of CVD. For example, the nuances between subtypes are hard for models to capture and represent because the shapes of the RS sequences are strikingly similar. Moreover, medical history information is an essential resource for distinguishing subtypes, but it is underutilized. In light of this, we propose a multi-modality multi-scale model called M3S, a novel deep learning method with two core modules that address these issues. First, in the multi-scale feature extraction module, we convert RS data to images at various resolutions via the Gramian angular field (GAF) to magnify subtle differences, and a two-branch structure is leveraged to obtain embeddings for distinction. Second, in the multi-modality data fusion module, a probability matrix and a weight matrix are used to enhance classification capacity by combining the RS and medical history data. We performed extensive evaluations of M3S and found outstanding performance on our in-house dataset, with accuracy, precision, recall, specificity, and F1 score of 0.9330, 0.9379, 0.9291, 0.9752, and 0.9334, respectively. These results demonstrate that M3S offers high performance and robustness compared with popular methods in diagnosing CVD subtypes.
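The Gramian angular field construction mentioned above has a standard form; here is a minimal sketch of the summation variant (GASF). How the paper resizes spectra to multiple resolutions before encoding is not reproduced here.

```python
import numpy as np

def gasf(series):
    """Gramian Angular Summation Field of a 1-D series: min-max rescale
    to [-1, 1], take the angular encoding phi = arccos(x), and form the
    image G[i, j] = cos(phi_i + phi_j). Applying this to a spectrum
    resampled at several lengths yields multi-resolution images."""
    x = np.asarray(series, float)
    x = (2 * x - x.max() - x.min()) / (x.max() - x.min())
    x = np.clip(x, -1.0, 1.0)  # guard against floating-point drift
    phi = np.arccos(x)
    return np.cos(phi[:, None] + phi[None, :])
```

Because G[i, j] couples every pair of timesteps, small local differences between near-identical spectra spread across a whole 2-D image, which is what makes the subtype nuances easier for a convolutional branch to pick up.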

    Predicting hydraulic tensile fracture spacing in strata-bound systems

    A model is presented which predicts the spacing of tensile fractures caused by fluid pressure increase in a multilayered sedimentary sequence comprising typical sedimentary deposits such as mudstones, siltstones and sandstones. During normal burial and tectonic conditions, strata undergo both extensional forces and an increase in fluid pressure. This model addresses the effects of a diffuse fluid pressure increase, and is useful both for engineered applications, such as fluid injection into a reservoir that may raise fluid pressure beneath a caprock, and for sedimentary sequences undergoing normal diagenetic processes of burial and fault activation. Analytical and numerical elastic stress-strain solutions are compared to provide a robust normalised standard relationship for predicting fracture spacing. The key parameters are the local minimum horizontal stress, the variability of the tensile strengths of the layers in a sedimentary sequence, and the thickness of the beds. Permeability and storage are also shown to affect the fracture spacing. The model reproduces many field observations of strata-bound fracture systems, and should also prove useful in considering the impact of raised reservoir fluid pressures on caprock integrity.
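The initiation condition underlying such a model is the standard hydraulic tensile criterion: a fracture opens once fluid pressure exceeds the local minimum horizontal stress plus the layer's tensile strength. The sketch below applies only that criterion; the layer values are illustrative, and predicting the fracture spacing itself requires the paper's full elastic solutions.

```python
def fracture_pressure(sigma_h, tensile_strength):
    """Fluid pressure (same units as inputs) at which a hydraulic tensile
    fracture initiates in a layer: p_f >= sigma_h + T. Spacing then
    depends on bed thickness, strength variability, permeability and
    storage, which this sketch does not model."""
    return sigma_h + tensile_strength

# Illustrative layer properties in MPa: (minimum horizontal stress, tensile strength)
layers = {"mudstone": (30.0, 6.0), "siltstone": (28.0, 4.0), "sandstone": (26.0, 2.0)}
critical = {name: fracture_pressure(s, t) for name, (s, t) in layers.items()}
first_to_fracture = min(critical, key=critical.get)
```

Ranking layers by critical pressure shows why fractures in a mixed sequence concentrate in the weaker, lower-stress beds as diffuse fluid pressure rises.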