Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions: a mapping method for the FDM manufacturability constraints; three major literature reviews; the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials; new experimental equipment; and a fast and simple g-code generator based on commercially-available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials was developed from the results of this project.
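The element-layout idea behind such a g-code generator can be illustrated with a minimal sketch. This is an illustrative simplification, not the dissertation's actual tool: all function names and parameter values here are hypothetical, and it emits only a plain back-and-forth raster of parallel traces for one rectangular FDM layer.

```python
# Minimal, illustrative sketch of raster-path g-code generation for one FDM
# layer. Names and parameter values are hypothetical examples, not the
# dissertation's actual generator.

def raster_layer_gcode(width, height, bead_width, z, feed=1800, extrude_per_mm=0.05):
    """Emit g-code for back-and-forth parallel traces filling a rectangle."""
    lines = [f"G1 Z{z:.2f} F{feed}"]  # move to the layer height
    e = 0.0       # cumulative extrusion along the E axis
    y = 0.0
    direction = 1  # alternate trace direction each pass
    while y <= height:
        x_start, x_end = (0.0, width) if direction == 1 else (width, 0.0)
        lines.append(f"G0 X{x_start:.2f} Y{y:.2f}")          # travel move (no extrusion)
        e += width * extrude_per_mm                          # extrude along the trace
        lines.append(f"G1 X{x_end:.2f} Y{y:.2f} E{e:.4f}")   # deposition move
        y += bead_width
        direction *= -1
    return lines

layer = raster_layer_gcode(width=20.0, height=5.0, bead_width=0.5, z=0.2)
print(layer[0])  # G1 Z0.20 F1800
```

Designing the layout of the deposited elements then amounts to choosing these traces per layer rather than using a fixed raster.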
New functions of platelet C3G: Involvement in TPO-regulation, ischemia-induced angiogenesis and tumor metastasis
GTPases are proteins that regulate a wide variety of cellular processes, most notably proliferation, cell differentiation, and apoptosis. These proteins alternate between two conformations: an active, GTP-bound form and an inactive, GDP-bound form. The exchange of GDP for GTP is catalysed by a group of proteins called GEFs (guanine nucleotide exchange factors), whereas GAP proteins (GTPase-activating proteins) inhibit the GTPase. C3G is a GEF for several GTPases of the Ras family, mainly Rap1, R-Ras and TC21, and for one GTPase of the Rho family, TC10. Using animal models that express, specifically in platelets and megakaryocytes, either C3G (tgC3G) or a mutant form of C3G lacking the catalytic domain (tgC3GCat), our group has demonstrated the involvement of C3G in megakaryocytic differentiation, as well as in the regulation of the haemostatic function of platelets. Specifically, tgC3G platelets show increased platelet activation and aggregation, which correlates with significantly shorter bleeding times in tgC3G mice and with increased thrombus formation in in vivo models. Overexpression of platelet C3G also alters α-granule secretion, characterised by the retention of vascular endothelial growth factor (VEGF) and thrombospondin-1 (TSP-1) in the platelet cytoplasm, giving rise to a markedly proangiogenic secretome. As a result of the greater proangiogenic capacity of C3G-overexpressing platelets, tgC3G mice showed faster tumour growth in two heterotopic tumour implantation models. In addition, platelet C3G promotes lung metastasis of melanoma (B16-F10) cells. However, transgenic expression of C3G does not alter platelet counts in peripheral blood.
In this Thesis, we have further investigated the role of C3G in megakaryopoiesis, in ischaemia-induced angiogenesis, and in tumour metastasis. To this end, we developed an additional animal model (C3G-KO) in which C3G is specifically deleted in megakaryocytes (Mk). As in tgC3G mice, C3G-KO animals showed no differences in either the number of Mk in bone marrow or the platelet counts in peripheral blood. However, deletion of C3G resulted in greater megakaryocytic maturation in vitro when bone marrow was cultured in medium enriched with thrombopoietin (TPO) together with a cytokine cocktail, suggesting a possible role for C3G in pathological megakaryopoiesis.
On this basis, we analysed the role of C3G in two in vivo models of pathological megakaryopoiesis: TPO injection and 5-Fluorouracil (5-FU)-induced myelosuppression. Intravenous injection of TPO stimulates megakaryopoiesis, increasing platelet levels, whereas 5-FU induces bone marrow depletion around the seventh day after injection, followed by a profound increase in the platelet count after 10-15 days of treatment, a process known as platelet rebound.
A correlation between tellurite resistance and nitric oxide detoxification in Salmonella Typhimurium
Salmonella are important enteric pathogens responsible for diseases ranging from gastroenteritis to systemic typhoid fever, and a major contributor to morbidity and mortality worldwide. Crucial to their pathogenesis is survival under the harmful conditions elicited by the host immune system, among them reactive oxygen and nitrogen species (ROS/RNS), which are produced by macrophages and neutrophils in an attempt to eliminate pathogens. Salmonella have the unique ability to colonise macrophages and possess dedicated nitric oxide (NO) detoxification systems. Three prominent metalloenzymes (HmpA, NorVW and NrfA) have been heavily researched in the literature for NO detoxification. Previous work suggested that additional proteins contribute to the nitrosative stress response, regulated by the nitric-oxide-sensitive transcriptional repressor NsrR.
This study demonstrates a relationship between three putative tellurite resistance proteins regulated by NsrR (STM1808, YeaR and TehB) and NO detoxification. Functional redundancy among these proteins was observed for anaerobic protection against NO and tellurite. Furthermore, this study identified that proteins involved in NO protection, such as HmpA and YtfE, also provide resistance to tellurite during aerobic and anaerobic conditions, respectively. Tellurite-resistant Salmonella strains were evolved by continued passage in this study; these strains consequently had altered H2O2 resistance profiles and increased sensitivity to antibiotics. However, they were not significantly attenuated in macrophage survival or in the presence of NO in vitro. Additionally, the hypothetical protein YgbA, which has predicted roles in NO detoxification, was found to be important for Salmonella survival in macrophages. However, in vitro NO exposure with the NO donor DETA NONOate only showed a role in anaerobic protection.
Neural Natural Language Generation: A Survey on Multilinguality, Multimodality, Controllability and Learning
Developing artificial learning systems that can understand and generate natural language has been one of the long-standing goals of artificial intelligence. Recent decades have witnessed impressive progress on both of these problems, giving rise to a new family of approaches. In particular, advances in deep learning over the past few years have led to neural approaches to natural language generation (NLG). These methods combine generative language learning techniques with neural network-based frameworks. With a wide range of applications in natural language processing, neural NLG (NNLG) is a new and fast-growing field of research. In this state-of-the-art report, we investigate the recent developments and applications of NNLG in their full extent from a multidimensional view, covering critical perspectives such as multimodality, multilinguality, controllability and learning strategies. We summarize the fundamental building blocks of NNLG approaches from these aspects and provide detailed reviews of commonly used preprocessing steps and basic neural architectures. This report also focuses on the seminal applications of these NNLG models, such as machine translation, description generation, automatic speech recognition, abstractive summarization, text simplification, question answering and generation, and dialogue generation. Finally, we conclude with a thorough discussion of the described frameworks by pointing out some open research directions.
This work has been partially supported by the European Commission ICT COST Action “Multi-task, Multilingual, Multi-modal Language Generation” (CA18231). AE was supported by the BAGEP 2021 Award of the Science Academy. EE was supported in part by the TUBA GEBIP 2018 Award. BP is in part funded by Independent Research Fund Denmark (DFF) grant 9063-00077B. IC has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 838188.
EL is partly funded by the Generalitat Valenciana and the Spanish Government through projects PROMETEU/2018/089 and RTI2018-094649-B-I00, respectively. SMI is partly funded by UNIRI project uniri-drustv-18-20. GB is partly supported by the Ministry of Innovation and the National Research, Development and Innovation Office within the framework of the Hungarian Artificial Intelligence National Laboratory Programme. COT is partially funded by the Romanian Ministry of European Investments and Projects through the Competitiveness Operational Program (POC) project “HOLOTRAIN” (grant no. 29/221 ap2/07.04.2020, SMIS code: 129077) and by the German Academic Exchange Service (DAAD) through the project “AWAKEN: content-Aware and netWork-Aware faKE News mitigation” (grant no. 91809005). ESA is partially funded by the German Academic Exchange Service (DAAD) through the project “Deep-Learning Anomaly Detection for Human and Automated Users Behavior” (grant no. 91809358).
Graphical scaffolding for the learning of data wrangling APIs
In order for students across the sciences to avail themselves of modern data streams, they must first know how to wrangle data: how to reshape ill-organised tabular data into another format, and how to do this programmatically, in languages such as Python and R. Despite the cross-departmental demand and the ubiquity of data wrangling in analytical workflows, research on how to optimise its instruction has been minimal. Although data wrangling as a programming domain presents distinctive challenges - characterised by on-the-fly syntax lookup and code example integration - it also presents opportunities. One such opportunity is that tabular data structures are easily visualised. To leverage the inherent visualisability of data wrangling, this dissertation evaluates three types of graphics that could be employed as scaffolding for novices: subgoal graphics, thumbnail graphics, and parameter graphics. Using a specially built e-learning platform, this dissertation documents a multi-institutional, randomised, and controlled experiment that investigates the pedagogical effects of these graphics. Our results indicate that the graphics are well received, that subgoal graphics boost the completion rate, and that thumbnail graphics improve navigability within a command menu. We also obtained several non-significant results, along with indications that parameter graphics are counter-productive. We discuss these findings in the context of general scaffolding dilemmas and how they fit into a wider research programme on data wrangling instruction
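A concrete instance of the reshaping task this dissertation studies is the wide-to-long "melt", sketched here in plain Python. The table contents and column names are invented for illustration; real coursework would typically use a library such as pandas or tidyr.

```python
# Illustrative wide-to-long reshape ("melt") written in plain Python.
# The data and column names are invented for this example.

wide = [
    {"student": "A", "test1": 80, "test2": 90},
    {"student": "B", "test1": 70, "test2": 85},
]

def melt(rows, id_col, value_cols):
    """Turn one row per subject into one row per (subject, measurement)."""
    result = []
    for row in rows:
        for col in value_cols:
            result.append({id_col: row[id_col], "variable": col, "value": row[col]})
    return result

tidy = melt(wide, "student", ["test1", "test2"])
print(tidy[0])  # {'student': 'A', 'variable': 'test1', 'value': 80}
```

Each wide row fans out into one long row per measured column, which is exactly the kind of transformation a subgoal or parameter graphic could depict visually.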
Developing automated meta-research approaches in the preclinical Alzheimer's disease literature
Alzheimer’s disease is a devastating neurodegenerative disorder for which there is no cure. A crucial part of the drug development pipeline involves testing therapeutic interventions in animal disease models. However, promising findings in preclinical experiments have not translated into clinical trial success. Reproducibility has often been cited as a major issue affecting biomedical research, where experimental results in one laboratory cannot be replicated in another. By using meta-research (research on research) approaches such as systematic reviews, researchers aim to identify and summarise all available evidence relating to a specific research question. By conducting a meta-analysis, researchers can also combine the results from different experiments statistically to understand the overall effect of an intervention and to explore reasons for variations seen across different publications. Systematic reviews of the preclinical Alzheimer’s disease literature could inform decision making, encourage research improvement, and identify gaps in the literature to guide future research. However, due to the vast amount of potentially useful evidence from animal models of Alzheimer’s disease, it remains difficult to make sense of and utilise this data effectively. Systematic reviews are common practice within evidence-based medicine, yet their application to preclinical research is often limited by the time and resources required. In this thesis, I develop, build upon, and implement automated meta-research approaches to collect, curate, and evaluate the preclinical Alzheimer’s literature. I searched several biomedical databases to obtain all research relevant to Alzheimer’s disease. I developed a novel deduplication tool to automatically identify and remove duplicate publications identified across different databases with minimal human effort.
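The core idea behind such deduplication can be sketched as fuzzy matching of normalised titles. This is a hypothetical simplification, not the tool developed in the thesis (which would also weigh authors, years, DOIs, and other fields):

```python
# Hypothetical sketch of citation deduplication via fuzzy title matching.
# This only illustrates the core idea; the real tool is more sophisticated.
from difflib import SequenceMatcher

def normalise(title):
    """Lowercase and strip punctuation so formatting differences don't matter."""
    return "".join(ch for ch in title.lower() if ch.isalnum() or ch == " ")

def deduplicate(records, threshold=0.9):
    """Keep only the first of any records whose titles match closely."""
    kept = []
    for rec in records:
        if not any(SequenceMatcher(None, normalise(rec["title"]),
                                   normalise(k["title"])).ratio() >= threshold
                   for k in kept):
            kept.append(rec)
    return kept

records = [
    {"title": "Amyloid pathology in transgenic mice"},
    {"title": "Amyloid Pathology in Transgenic Mice."},  # same record, other database
    {"title": "Tau pathology in rats"},
]
print(len(deduplicate(records)))  # 2
```

The threshold trades sensitivity against false merges; a production tool would tune it against a labelled set of known duplicates.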
I trained a crowd of reviewers to annotate a subset of the publications identified and used this data to train a machine learning algorithm to screen through the remaining publications for relevance. I developed text-mining tools to extract model, intervention, and treatment information from publications and I improved existing automated tools to extract reported measures to reduce the risk of bias. Using these tools, I created a categorised database of research in transgenic Alzheimer’s disease animal models and created a visual summary of this dataset on an interactive, openly accessible online platform. Using the techniques described, I also identified relevant publications within the categorised dataset to perform systematic reviews of two key outcomes of interest in transgenic Alzheimer’s disease models: (1) synaptic plasticity and transmission in hippocampal slices and (2) motor activity in the open field test.
Over 400,000 publications were identified across biomedical research databases, of which 230,203 were unique. In a performance evaluation across different preclinical datasets, the automated deduplication tool I developed identified over 97% of duplicate citations and had an error rate similar to that of human performance. When evaluated on a test set of publications, the machine learning classifier trained to identify relevant research in transgenic models was highly sensitive (capturing 96.5% of relevant publications) and excluded 87.8% of irrelevant publications. Tools to identify the model(s) and outcome measure(s) within the full text of publications may reduce the burden on reviewers and were found to be more sensitive than searching only the title and abstract of citations. Automated tools to assess risk-of-bias reporting were highly sensitive and could have the potential to monitor research improvement over time. The final dataset of categorised Alzheimer’s disease research contained 22,375 publications, which were then visualised in the interactive web application. Within the application, users can see how many publications report measures to reduce the risk of bias and how many have been classified as using each transgenic model, testing each intervention, and measuring each outcome. Users can also filter to obtain curated lists of relevant research, allowing them to perform systematic reviews at an accelerated pace with reduced effort required to search across databases and a reduced number of publications to screen for relevance. Both systematic reviews and meta-analyses highlighted failures to report key methodological information within publications. Poor transparency of reporting limited the statistical power available to understand the sources of between-study variation. However, some variables were found to explain a significant proportion of the heterogeneity.
The transgenic animal model used had a significant impact on results in both reviews. For certain open field test outcomes, the wall colour of the open field arena and the reporting of measures to reduce the risk of bias were found to impact results. For in vitro electrophysiology experiments measuring synaptic plasticity, several electrophysiology parameters, including the magnesium concentration of the recording solution, were found to explain a significant proportion of the heterogeneity. Automated meta-research approaches and curated web platforms summarising preclinical research have the potential to accelerate the conduct of systematic reviews and maximise the potential of existing evidence to inform translation
IMPROVED IMAGE QUALITY IN CONE-BEAM COMPUTED TOMOGRAPHY FOR IMAGE-GUIDED INTERVENTIONS
In the past few decades, cone-beam computed tomography (CBCT) has emerged as a rapidly developing imaging modality that provides single-rotation 3D volumetric reconstruction with sub-millimeter spatial resolution. Compared to conventional multi-detector CT (MDCT), CBCT exhibits a number of characteristics that are well suited to applications in image-guided interventions, including greater mechanical simplicity, higher portability, and lower cost. Although the current generation of CBCT has shown strong promise for high-resolution and high-contrast imaging (e.g., visualization of bone structures and surgical instrumentation), it is often believed that CBCT yields inferior contrast resolution compared to MDCT and is not suitable for soft-tissue imaging.
Aiming to expand the utility of CBCT in image-guided interventions, this dissertation concerns the development of advanced imaging systems and algorithms to tackle the challenge of soft-tissue contrast resolution. The presented material includes work encompassing: (i) a comprehensive simulation platform to generate realistic CBCT projections (e.g., as training data for deep learning approaches); (ii) a new projection-domain statistical noise model to improve the noise-resolution tradeoff in model-based iterative reconstruction (MBIR); (iii) a novel method to avoid CBCT metal artifacts by optimization of the source-detector orbit; (iv) an integrated software pipeline to correct various forms of CBCT artifacts (i.e., lag, glare, scatter, beam hardening, patient motion, and truncation); (v) a new 3D reconstruction method that reconstructs only the difference image from the image prior for use in CBCT neuro-angiography; and (vi) a novel 3D image reconstruction method (DL-Recon) that combines a deep learning (DL)-based image synthesis network with physics-based models via Bayesian estimation of the statistical uncertainty of the neural network.
Specific clinical challenges were investigated in monitoring patients in the neurological critical care unit (NCCU) and advancing intraoperative soft-tissue imaging capability in image-guided spinal and intracranial neurosurgery. The results show that the methods proposed in this work substantially improved soft-tissue contrast in CBCT. The thesis demonstrates that advanced imaging approaches based on accurate system models, novel artifact reduction methods, and emerging 3D image reconstruction algorithms can effectively tackle current challenges in soft-tissue contrast resolution and expand the application of CBCT in image-guided interventions