Fibroblast-organoid contacts precede organoid branching.
(A) Time-lapse snapshots of an organoid-fibroblast co-culture. Scale bar: 100 μm. (B) Detailed snapshots of 3 examples of fibroblast-organoid contact establishment in the co-cultures shown in (A) on days 1, 2, and 3. Red arrowheads indicate fibroblasts of interest. Scale bar: 50 μm. (C) Quantification of organoid circularity (data from Fig 1), number of new branches, and number of established fibroblast-organoid contacts from matched experiments. The plot shows mean ± SD; n = 3 (each dot represents the average from a biologically independent experiment, N = 20 organoids per experiment). (D) Maximum intensity projection (MIP) and optical section images of a dispersed co-culture on day 2.5; representative images of cystic and budding organoids (tdTomato). Fibroblasts were detected by immunostaining for PDGFRα. Scale bar: 100 μm. (E) Quantification of organoid middle-section perimeter in contact with PDGFRα signal. The plot shows mean ± SD. Each dot represents an average from 1 experiment. Statistical analysis: two-tailed t test; n = 3 independent biological samples, N = 15–24 organoids per sample. The data underlying the graphs shown in the figure can be found in S1 Data. (TIFF)
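The circularity metric quantified in panel (C) is not defined in the caption; the standard isoperimetric shape descriptor, circularity = 4πA/P², is a plausible choice (1.0 for a perfect circle, decreasing as organoids branch). A minimal sketch under that assumption:

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """Isoperimetric circularity: 4*pi*A / P^2.

    Equals 1.0 for a perfect circle and decreases as the outline
    becomes more branched or irregular. The exact descriptor used in
    the figure is an assumption here, not stated in the caption.
    """
    return 4 * math.pi * area / perimeter ** 2

# Sanity check: a circle of radius r has circularity exactly 1.
r = 10.0
print(round(circularity(math.pi * r ** 2, 2 * math.pi * r), 3))  # → 1.0
```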
Fibroblasts in co-cultures are in physical contact with the epithelium.
(A) Snapshots from time-lapse brightfield and fluorescence imaging of organoid (tdTomato) and fibroblast (GFP) co-culture (dispersed culture). Scale bar: 100 μm. Top row shows detail of fibroblast-organoid close interaction. Scale bar: 20 μm. (B, C) Images (B) and quantification (C) of the contact point between organoid (tdTomato) and fibroblasts (GFP) on day 4 of co-culture (dispersed culture). Scale bar: 100 μm, scale bar in detail: 20 μm. (C) The plot shows mean ± SD, each dot represents 1 organoid, n = 5 experiments, N = 21 organoids. Statistical analysis: two-tailed t test. (D) Images of the contact point between organoid (luminal (KRT8) and myoepithelial (KRT5) cells) and fibroblasts (VIM) on day 5 of co-culture (dispersed culture). Scale bar: 20 μm. (E) Quantification of fibroblasts in contact with KRT5+ or KRT8+ epithelial cells. The plot shows mean ± SD, each dot represents the average from 1 biological replicate, n = 3 experiments, N = 14 organoids, 219 fibroblasts. Statistical analysis: two-tailed t test. (F) Transmission electron micrographs and scheme (inset) of the contact point between luminal (LC, blue) and myoepithelial (MeC, magenta) cells and fibroblasts (green) on day 4 of co-culture (dispersed culture). Scale bar: 20 μm, scale bar in detail: 2 μm. In agreement with a published study (Ewald and colleagues), luminal cells are defined as lumen-facing cells, which present microvilli and numerous vesicles and granules. Myoepithelial cells are basally oriented, more elongated cells with fewer vesicles, granules, and organelles in the cytoplasm, and their cytoplasm shows a different electron density (it appears darker than the cytoplasm of luminal cells). The white arrowheads denote ECM between fibroblast and organoid. (G) Optical slice of organoid-fibroblast co-culture (dispersed culture), laminin 5 (cyan), DAPI (blue), F-actin (red); fibroblasts were isolated from R26-mT/mG mice (tdTomato, white). Scale bar: 100 μm, scale bar in detail: 10 μm.
(H) A representative 1D relative fluorescence intensity plot. The measurement line is depicted in yellow (right). The data underlying the graphs shown in the figure can be found in S1 Data. ECM, extracellular matrix.
Tumor Segmentation and Classification Using Machine Learning Approaches
Medical image processing has developed rapidly in recent years, in both methodology and application, to improve serviceability in health care management. Modern medical image processing employs various methods to diagnose tumors, driven by burgeoning demand in the field. This study combines the PG-DBCWMF, the HV region method, and CTSIFT extraction to identify brain and pancreatic tumors. In terms of efficiency, precision, and other factors, these strategies offer improved performance in therapeutic settings. The proposed method combines three techniques: the PG-DBCWMF (Patch Group Decision Couple Window Median Filter), which performs well in the preprocessing stage and eliminates noise; the HV region method, which precisely calculates the vertical and horizontal angles of the input images; and CTSIFT, a feature extraction method that recognizes the affected region of tumor images. The experimental evaluation used brain tumor and pancreatic tumor databases, which yielded the best PSNR, MSE, and other results.
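The abstract names a specialized median-filter variant (PG-DBCWMF) for denoising and reports PSNR/MSE; its patch-group and decision logic are not specified here, so the sketch below shows only the generic building blocks it extends: a plain sliding-window median filter and the PSNR metric used to score the result.

```python
import numpy as np

def window_median_filter(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Plain k x k sliding-window median filter (edges use a clipped window).

    A generic stand-in for the preprocessing stage; the abstract's
    PG-DBCWMF adds patch-group and decision logic not reproduced here.
    """
    h, w = img.shape
    out = np.empty_like(img)
    r = k // 2
    for i in range(h):
        for j in range(w):
            patch = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = np.median(patch)
    return out

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    mse = float(np.mean((ref - test) ** 2))
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# A salt-and-pepper impulse in a flat region is removed by the median.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0  # impulse noise
print(window_median_filter(img)[2, 2])  # → 10.0
```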
Fibroblast contractility is necessary for branch maintenance.
(A) Experimental scheme (top) and time-lapse snapshots of dispersed co-cultures treated with contractility inhibitors on day 3 of culture. Scale bar: 100 μm. White arrowheads indicate organoid branches. (B–D) Quantification of organoids with retracted branches (B), number of formed branches per branched organoid (C), and number of retracted branches per organoid (D). The plots show mean ± SD. Statistical analysis: two-tailed t test; n = 4 independent biological replicates, N = 20 organoids per experiment. The data underlying the graphs shown in the figure can be found in S1 Data. (TIFF)
Flood dynamics derived from video remote sensing
Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models.
Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model.
Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
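Converting LSPIV surface velocities and topographic cross-sections into discharge is commonly done with the velocity-area method: each surveyed vertical contributes width × depth × depth-averaged velocity, with a surface-to-depth-averaged coefficient applied to the image-derived velocity. A minimal sketch under those standard assumptions (the 0.85 coefficient and variable names are illustrative defaults, not values from this thesis):

```python
import numpy as np

def discharge_velocity_area(stations, depths, surface_velocities, alpha=0.85):
    """Mid-section velocity-area discharge estimate (m^3/s).

    stations: cross-channel positions of verticals (m);
    depths: water depth at each vertical (m);
    surface_velocities: LSPIV-derived surface speeds (m/s);
    alpha: surface-to-depth-averaged velocity coefficient
           (0.85 is a commonly assumed default, not a thesis value).
    """
    x = np.asarray(stations, float)
    d = np.asarray(depths, float)
    v = alpha * np.asarray(surface_velocities, float)
    # Width attributed to each vertical: half the distance to each neighbour.
    w = np.empty_like(x)
    w[1:-1] = (x[2:] - x[:-2]) / 2.0
    w[0] = (x[1] - x[0]) / 2.0
    w[-1] = (x[-1] - x[-2]) / 2.0
    return float(np.sum(w * d * v))

# Rectangular channel, 10 m wide, 2 m deep, 1 m/s at the surface:
q = discharge_velocity_area(range(11), [2.0] * 11, [1.0] * 11)
print(q)  # → 17.0  (0.85 * 2 m * 1 m/s * 10 m)
```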
Extensive study of flow characteristics for two vertical rectangular polygons in a two-dimensional cross flow
Fluid dynamics problems have a significant impact on the growth of science and technology all over the world. This study investigates the behavior of a viscous fluid interacting with two rectangular polygons positioned vertically and aligned in a staggered configuration. Two physical parameters, Reynolds number and gap spacing, are studied using the Lattice Boltzmann Method for two-dimensional flow. Results are discussed in terms of vortex snapshots, time trace histories of the drag and lift coefficients, and power spectra analysis of the lift coefficient. Nine distinct vortex streets are identified with increasing gap spacing between the pair of rectangular polygons. The vortex shedding mechanism is disturbed at small gap spacings and becomes optimal at large gap spacings. Physical parameters of practical importance, such as the mean drag coefficient, the root mean square values of the drag and lift coefficients, and the Strouhal number, approach the single-rectangular-polygon values at large gap spacings.
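The Strouhal number reported in such studies is typically extracted from the dominant peak of the lift-coefficient power spectrum: St = f_s·d/U, where f_s is the shedding frequency, d the characteristic length of the body, and U the free-stream speed. A short sketch of that post-processing step (not the paper's own code; the simulation itself is a full Lattice Boltzmann solver):

```python
import numpy as np

def strouhal_number(cl, dt, d, u):
    """Strouhal number St = f_s * d / U from a lift-coefficient time trace.

    f_s is taken as the dominant peak of the one-sided power spectrum of
    C_L (mean removed); dt is the sampling interval, d the polygon's
    characteristic length, u the free-stream speed.
    """
    cl = np.asarray(cl, float) - np.mean(cl)
    power = np.abs(np.fft.rfft(cl)) ** 2
    freqs = np.fft.rfftfreq(cl.size, dt)
    f_shed = freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency bin
    return f_shed * d / u

# Synthetic shedding signal at 2 Hz with d = 1 m, U = 10 m/s gives St = 0.2.
t = np.arange(0.0, 10.0, 0.01)
cl = np.sin(2 * np.pi * 2.0 * t)
print(round(strouhal_number(cl, 0.01, 1.0, 10.0), 3))  # → 0.2
```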
Applications of Deep Learning Models in Financial Forecasting
In financial markets, deep learning techniques sparked a revolution, reshaping conventional approaches and amplifying predictive capabilities. This thesis explored the applications of deep learning models to unravel insights and methodologies aimed at advancing financial forecasting.
The crux of the research problem lies in the application of predictive models within financial domains characterised by high volatility and uncertainty. This thesis investigated the application of advanced deep-learning methodologies in the context of financial forecasting, addressing the challenges posed by the dynamic nature of financial markets. These challenges were tackled by exploring a range of techniques, including convolutional neural networks (CNNs), long short-term memory networks (LSTMs), autoencoders (AEs), and variational autoencoders (VAEs), along with approaches such as encoding financial time series as images. Through analysis, methodologies such as transfer learning, convolutional neural networks, long short-term memory networks, generative modelling, and image encoding of time series data were examined. These methodologies collectively offered a comprehensive toolkit for extracting meaningful insights from financial data.
The present work investigated the practicality of a deep learning CNN-LSTM model within the Directional Change framework to predict significant DC events, a task crucial for timely decision-making in financial markets. Furthermore, the potential of autoencoders and variational autoencoders to enhance financial forecasting accuracy and remove noise from financial time series data was explored. Leveraging their capacity to learn compact representations of financial time series, these models offered promising avenues for improved data representation and subsequent forecasting. To further contribute to financial prediction capabilities, a deep multi-model was developed that harnessed the power of pre-trained computer vision models. This innovative approach aimed to predict the VVIX, utilising the cross-disciplinary synergy between computer vision and financial forecasting. By integrating knowledge from these domains, novel insights into the prediction of market volatility were provided.
Fibroblasts dynamically interact with the epithelium.
Time-lapse video (bright-field and fluorescence imaging) shows 4 days of epithelial morphogenesis in fibroblast (cyan)-organoid (red) co-culture (day 0–4). Scale bar: 100 μm. Snapshots from the movie are depicted in Fig 3A. (AVI)
<i>Myh9</i> knock-out does not impede fibroblast motility.
(A) Detailed time-lapse snapshots of fibroblast-organoid contact establishment in dispersed co-cultures with control or Myh9-KO fibroblasts and tdTomato+ organoids. Scale bar: 50 μm. (B) Quantification of fibroblast-organoid contacts established in the first 3 days of co-culture, comparing GFP+ and GFP- fibroblasts (GFP is a marker of adenoviral transduction). The plot shows mean ± SD. Statistical analysis: two-tailed t test; n = 3 independent biological replicates, N = 20 organoids per experiment. The data underlying the graphs shown in the figure can be found in S1 Data. (TIFF)
Mammary epithelial branching morphogenesis upon FGF2 treatment or fibroblast co-culture.
The video is composed of time-lapse videos capturing 5 days of epithelial morphogenesis in 3D organoid culture with no growth factor in the basal organoid medium (left), with FGF2 in the basal organoid medium (middle), or in fibroblast-organoid co-culture without addition of any growth factors to the basal organoid medium (right). Snapshots from the videos are depicted in Fig 1A. (AVI)