Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several supporting tools were developed and refined to enable the design of MPDSMs under fracture conditions: a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, new experimental equipment, and a fast and simple g-code generator based on commercially available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.
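To make the "fast and simple g-code generator" concrete, here is a minimal sketch of raster-fill G-code generation for a single FDM layer. All parameters (bead width, layer height, feed rate, extrusion factor) and the function name are illustrative assumptions, not values from the dissertation.

```python
# Minimal sketch: back-and-forth raster fill of a rectangle as G-code.
# Parameter values are assumptions for illustration only.

def raster_layer(x0, y0, width, height, bead_w=0.4, z=0.2,
                 feed=1800, e_per_mm=0.033):
    """Emit G-code for a simple raster fill of one rectangular layer region."""
    lines = [f"G1 Z{z:.2f} F{feed}"]   # move to layer height
    e = 0.0                            # cumulative extrusion
    y, direction = y0, 1
    while y <= y0 + height:
        x_start = x0 if direction > 0 else x0 + width
        x_end = x0 + width if direction > 0 else x0
        lines.append(f"G0 X{x_start:.2f} Y{y:.2f}")        # travel move
        e += width * e_per_mm                              # extrude along the trace
        lines.append(f"G1 X{x_end:.2f} Y{y:.2f} E{e:.4f}")
        y += bead_w                                        # step over one bead width
        direction *= -1                                    # alternate direction
    return "\n".join(lines)

print(raster_layer(0, 0, 20, 10))
```

Real slicers add perimeters, retraction, and temperature control; this only shows the core trace-layout loop over which such element layouts are designed.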
A Design Science Research Approach to Smart and Collaborative Urban Supply Networks
Urban supply networks are facing increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. Being a heterogeneous field, the literature base of supply chain management research is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include mega trends such as digital transformation, urbanisation, and environmental awareness.
A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence in recent years has led to a wide range of applications in a variety of domains. However, the potential of utilising artificial intelligence in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become continuously more collaborative, complex, and dynamic as interactions in business processes involving information technologies have become more intense.
Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. This thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last-mile supply network sustainability. Based on thorough machine learning and systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice.
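As a hedged illustration of the system dynamics modelling and simulation the thesis names among its methods, the sketch below integrates a single stock-and-flow model of an urban delivery backlog with Euler steps; all rates and the time horizon are invented for illustration and are not the thesis's artefacts.

```python
# Minimal system dynamics sketch: one stock (order backlog), two flows
# (demand inflow, delivery outflow), integrated with Euler steps.
import numpy as np

dt, horizon = 0.25, 30.0                 # step and horizon in days (assumed)
backlog = 100.0                          # stock: undelivered orders
capacity = 40.0                          # deliveries per day (assumed)
history = []

for t in range(int(horizon / dt)):
    inflow = 50.0 + 10.0 * np.sin(2 * np.pi * t * dt / 7)  # weekly demand cycle
    outflow = min(capacity, backlog / dt)                   # cannot exceed the stock
    backlog += (inflow - outflow) * dt                      # Euler integration
    history.append(backlog)

print(f"final backlog: {history[-1]:.1f} orders")
```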
Corporate Social Responsibility: the institutionalization of ESG
Understanding the impact of Corporate Social Responsibility (CSR) on firm performance as it relates to industries reliant on technological innovation is a complex and perpetually evolving challenge. To thoroughly investigate this topic, this dissertation adopts an economics-based structure to address three primary hypotheses. This structure allows each hypothesis to function as a standalone empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis examines how the evolution of CSR into its modern quantified iteration, ESG, has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature testing the relationship between firm performance and ESG, finding that the relationship is significantly positive for long-term, strategic metrics (ROA and ROIC) and that there is no correlation for short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their non-reporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social impact mitigation. Overall, this work contributes to the literature by filling gaps in the nature of the impact that ESG has on firm performance, particularly from a management perspective.
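A hedged sketch of the kind of test behind the second hypothesis: regressing a long-term performance metric (ROA) on an ESG score with one control. The data below are synthetic and the column names and single control are illustrative assumptions; the dissertation's actual data and model specification are not reproduced here.

```python
# Illustrative OLS of ROA on an ESG score with a size control (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "esg": rng.uniform(0, 100, n),    # ESG score (assumed scale)
    "size": rng.normal(10, 2, n),     # log total assets as a control (assumed)
})
# Synthetic outcome with a small positive ESG effect, echoing the stated finding.
df["roa"] = 0.02 + 0.0005 * df["esg"] + 0.001 * df["size"] + rng.normal(0, 0.02, n)

X = sm.add_constant(df[["esg", "size"]])
model = sm.OLS(df["roa"], X).fit()
print(model.params.round(4))          # coefficient on esg should be positive
print(model.pvalues.round(4))
```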
Reinforcing optimization enabled interactive approach for liver tumor extraction in computed tomography images
Detecting liver abnormalities is a difficult task in radiation planning and treatment. Modern developments integrate medical imaging into computer techniques, an advancement that has had a monumental effect on how medical images are interpreted and analyzed. In many circumstances, manual segmentation of the liver from computed tomography (CT) imaging is required, yet it cannot provide satisfactory results, and there are further difficulties in segmenting the liver due to its uneven shape, fuzzy boundary, and complicated structure. This motivates enabling optimization in an interactive segmentation approach. The main objective of reinforcing optimization is to search for the optimal threshold and to reduce the chance of falling into a local optimum with the survival of the fittest (SOF) technique. The proposed methodology makes use of a pre-processing stage and reinforcing metaheuristic optimization based on fuzzy c-means (FCM) to obtain detailed information about the image. This information gives the optimal threshold value that is used to segment the region of interest with minimal user input. Suspicious areas are recognized from the segmented output. Both public and simulated datasets were used for the experiments. To validate the effectiveness of the proposed strategy, performance criteria such as the dice coefficient, mode, and user interaction level are measured and compared with state-of-the-art algorithms.
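To make the FCM stage concrete, here is a minimal numpy-only fuzzy c-means sketch over 1-D pixel intensities. The survival-of-the-fittest metaheuristic threshold search described in the abstract is replaced here by plain alternating FCM updates, and the cluster count and fuzzifier m are assumptions.

```python
# Minimal fuzzy c-means on 1-D intensities; a crude threshold is then taken
# as the midpoint between cluster centers.
import numpy as np

rng = np.random.default_rng(0)

def fcm(x, c=2, m=2.0, iters=50):
    """Cluster 1-D intensities into c fuzzy clusters; return centers, memberships."""
    u = rng.dirichlet(np.ones(c), size=len(x))        # random initial memberships
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)           # fuzzy-weighted cluster centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))              # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Synthetic intensity pools standing in for background- and liver-like pixels.
x = np.concatenate([rng.normal(60, 8, 4000), rng.normal(140, 10, 1000)])
centers, u = fcm(x)
print(f"centers: {np.sort(centers).round(1)}, threshold ~ {centers.mean():.1f}")
```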
Evolutionary Computation in Action: Feature Selection for Deep Embedding Spaces of Gigapixel Pathology Images
One of the main obstacles of adopting digital pathology is the challenge of efficient processing of hyperdimensional digitized biopsy samples, called whole slide images (WSIs). Exploiting deep learning and introducing compact WSI representations are urgently needed to accelerate image analysis and facilitate the visualization and interpretability of pathology results in a post-pandemic world. In this paper, we introduce a new evolutionary approach for WSI representation based on large-scale multi-objective optimization (LSMOP) of deep embeddings. We start with patch-based sampling to feed KimiaNet, a histopathology-specialized deep network, and to extract a multitude of feature vectors. Coarse multi-objective feature selection uses the reduced search space strategy guided by the classification accuracy and the number of features. In the second stage, the frequent features histogram (FFH), a novel WSI representation, is constructed by multiple runs of coarse LSMOP. Fine evolutionary feature selection is then applied to find a compact (short-length) feature vector based on the FFH, contributing to a more robust deep-learning approach to digital pathology supported by the stochastic power of evolutionary algorithms. We validate the proposed schemes using The Cancer Genome Atlas (TCGA) images in terms of WSI representation, classification accuracy, and feature quality. Furthermore, a novel decision space for multicriteria decision making in the LSMOP field is introduced. Finally, a patch-level visualization approach is proposed to increase the interpretability of deep features. The proposed evolutionary algorithm finds a very compact feature vector to represent a WSI (almost 14,000 times smaller than the original feature vectors) with 8% higher accuracy compared to the codes provided by the state-of-the-art methods.
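A simplified, single-objective sketch of evolutionary feature selection in the spirit of the paper, scalarizing the two stated objectives (classification accuracy and feature count) into one fitness. It is not the authors' LSMOP/FFH pipeline; the dataset, penalty weight, and mutation rate are stand-ins.

```python
# Toy genetic algorithm over boolean feature masks: truncation selection
# plus bit-flip mutation, with fitness = CV accuracy - feature-count penalty.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(mask):
    """Scalarized fitness: accuracy minus a small penalty per kept feature."""
    if mask.sum() == 0:
        return -1.0
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=500))
    acc = cross_val_score(clf, X[:, mask], y, cv=3).mean()
    return acc - 0.002 * mask.sum()        # penalize long feature vectors

pop = rng.random((20, X.shape[1])) < 0.5   # random bit-mask population
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]              # keep the fittest half
    kids = parents ^ (rng.random(parents.shape) < 0.05)  # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print(f"kept {best.sum()} of {X.shape[1]} features")
```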
A Survey on Biomedical Text Summarization with Pre-trained Language Models
The exponential growth of biomedical texts such as biomedical literature and electronic health records (EHRs) poses a big challenge for clinicians and researchers seeking to access clinical information efficiently. To address the problem, biomedical text summarization has been proposed to support clinical information retrieval and management, aiming at generating concise summaries that distill key information from single or multiple biomedical documents. In recent years, pre-trained language models (PLMs) have become the de facto standard for various natural language processing tasks in the general domain. Most recently, PLMs have been further investigated in the biomedical field and have brought new insights into the biomedical text summarization task. In this paper, we systematically summarize recent advances that explore PLMs for biomedical text summarization, to help understand recent progress, challenges, and future directions. We categorize PLM-based approaches according to how they utilize PLMs and what PLMs they use. We then review available datasets, recent approaches, and evaluation metrics of the task. We finally discuss existing challenges and promising future directions. To facilitate the research community, we line up open resources including available datasets, recent approaches, codes, evaluation metrics, and the leaderboard in a public project: https://github.com/KenZLuo/Biomedical-Text-Summarization-Survey/tree/master.
Comment: 19 pages, 6 figures, TKDE under review
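For readers new to the area, a minimal usage sketch of PLM-based summarization with the Hugging Face transformers pipeline follows. The checkpoint is a generic general-domain example, not one of the biomedical models the survey covers, and the input text is invented.

```python
# Minimal summarization call via the transformers pipeline API.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
abstract = ("Electronic health records contain longitudinal patient data. "
            "Summarization models condense them into concise notes that "
            "support clinical information retrieval and decision making.")
print(summarizer(abstract, max_length=40, min_length=10)[0]["summary_text"])
```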
The determinants of value addition: a critical analysis of the global software engineering industry in Sri Lanka
It was evident through the literature that the perceived value delivery of the global software engineering industry is low due to various factors. Therefore, this research examines global software product companies in Sri Lanka to explore the software engineering methods and practices that increase value addition. The overall aim of the study is to identify the key determinants of value addition in the global software engineering industry and to critically evaluate their impact, helping software product companies maximise value addition and ultimately assure the sustainability of the industry.
An exploratory research approach was used initially, as findings would emerge as the study unfolded. A mixed-method approach was employed because the literature alone was inadequate to investigate the problem effectively and to formulate the research framework. Twenty-three face-to-face online interviews were conducted with subject matter experts covering all disciplines in the targeted organisations; these were combined with the literature findings and the outcomes of market research conducted by both government and non-government institutes. Data from the interviews were analysed using NVivo 12. The findings of the existing literature were verified through the exploratory study, and the outcomes were used to formulate the questionnaire for the public survey. A total of 371 responses were retained after cleansing and analysed in SPSS 21 at an alpha level of 0.05. An internal consistency test was performed before the descriptive analysis. After assuring the reliability of the dataset, correlation, multiple regression, and analysis of variance (ANOVA) tests were carried out to meet the research objectives.
Five determinants of value addition were identified, along with the key themes for each area: staffing, delivery process, use of tools, governance, and technology infrastructure. Cross-functional, self-organised teams built around value streams, a properly interconnected software delivery process with the right governance in the delivery pipelines, well-chosen tools, and the right infrastructure all increase value delivery. Conversely, the constraints on value addition are poor interconnection of internal processes, rigid functional hierarchies, inaccurate selection and use of tools, inflexible team arrangements, and inadequate focus on the technology infrastructure. The findings add to the existing body of knowledge on increasing value addition through effective processes, practices, and tools, and on the impact of applying the same inaccurately in the global software engineering industry.
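A hedged Python sketch of the reported analysis chain on synthetic survey data (the thesis itself used SPSS 21): Cronbach's alpha for internal consistency, then correlation, multiple regression, and one-way ANOVA at the stated alpha of 0.05. All variable names and effect sizes are illustrative.

```python
# Reliability check, correlations, OLS, and ANOVA on synthetic Likert-style data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 371                                            # sample size from the thesis
# Five survey items sharing a common per-respondent factor (so alpha > 0).
items = rng.normal(3.5, 0.8, (n, 5)) + rng.normal(0, 0.5, (n, 1))

def cronbach_alpha(m):
    k = m.shape[1]
    return k / (k - 1) * (1 - m.var(axis=0, ddof=1).sum()
                          / m.sum(axis=1).var(ddof=1))

print(f"alpha = {cronbach_alpha(items):.2f}")       # internal consistency first

df = pd.DataFrame({"staffing": items[:, 0], "process": items[:, 1],
                   "tools": items[:, 2]})
df["value"] = 0.4 * df["staffing"] + 0.3 * df["process"] + rng.normal(0, 0.5, n)

print(df.corr().round(2))                           # correlation test
ols = sm.OLS(df["value"],
             sm.add_constant(df[["staffing", "process", "tools"]])).fit()
print(ols.pvalues.round(4))                         # multiple regression
groups = np.array_split(df["value"].to_numpy(), 3)
print(stats.f_oneway(*groups))                      # one-way ANOVA
```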
Statistical Learning for Gene Expression Biomarker Detection in Neurodegenerative Diseases
In this work, statistical learning approaches are used to detect biomarkers for neurodegenerative diseases (NDs). NDs are becoming increasingly prevalent as populations age, making the understanding of disease and the identification of biomarkers progressively more important for facilitating early diagnosis and the screening of individuals for clinical trials. Advancements in gene expression profiling have enabled the exploration of disease biomarkers at an unprecedented scale. The work presented here demonstrates the value of gene expression data in understanding the underlying processes and detecting biomarkers of NDs. The value of applying novel approaches to previously collected -omics data is shown, and it is demonstrated that new therapeutic targets can be identified. Additionally, the importance of meta-analysis in improving the power of multiple small studies is demonstrated. The value of blood transcriptomics data is shown in applications to researching NDs, using network analysis and a novel hub detection method to understand underlying processes. Finally, after demonstrating the value of blood gene expression data for investigating NDs, a combination of feature selection and classification algorithms was used to identify novel, accurate biomarker signatures for the diagnosis and prognosis of Parkinson's disease (PD) and Alzheimer's disease (AD). Additionally, the use of feature pools based on previous knowledge of disease and the viability of neural networks for dimensionality reduction and biomarker detection are demonstrated and discussed. In summary, gene expression data are shown to be valuable for the investigation of NDs, yielding novel gene biomarker signatures for the diagnosis and prognosis of PD and AD.
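A minimal sketch of the feature-selection-plus-classification pattern the abstract describes, on synthetic expression-like data; the selector and classifier below are generic stand-ins, not the thesis's specific methods.

```python
# Univariate feature selection followed by a classifier, evaluated with CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# 200 samples x 2000 "genes", 20 informative: a typical transcriptomics shape.
X, y = make_classification(n_samples=200, n_features=2000,
                           n_informative=20, random_state=0)

clf = make_pipeline(SelectKBest(f_classif, k=50),   # keep 50 candidate biomarkers
                    LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Placing the selector inside the pipeline keeps feature selection within each cross-validation fold, avoiding the information leakage that inflates biomarker-signature accuracy.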
The Role of Transient Vibration of the Skull on Concussion
Concussion is a traumatic brain injury, usually caused by a direct or indirect blow to the head, that affects brain function. The maximum mechanical impedance of the brain tissue occurs at 450±50 Hz and may be affected by the skull's resonant frequencies. After an impact to the head, vibration resonance of the skull damages the underlying cortex. The skull deforms and vibrates like a bell for 3 to 5 milliseconds, bruising the cortex. Furthermore, the deceleration forces the frontal and temporal cortex against the skull, eliminating a layer of cerebrospinal fluid. When the skull vibrates, the force spreads directly to the cortex, with no layer of cerebrospinal fluid to reflect the wave or cushion its force. To date, there is little research investigating the effect of transient vibration of the skull. Therefore, the overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull in concussion. This goal will be achieved by addressing three research objectives. First, an automatic MRI skull and brain segmentation technique is developed. Due to bone's weak magnetic resonance signal, MRI scans struggle to differentiate bone tissue from other structures. One of the most important components for a successful segmentation is high-quality ground truth labels; therefore, we introduce a deep learning framework for skull segmentation in which the ground truth labels are created from CT imaging using the standard tessellation language (STL). Furthermore, as the brain region will be important for future work, we explore a new concept of initializing the convolutional neural network (CNN) with orthogonal moments to improve brain segmentation in MRI. Second, a novel 2D and 3D automatic method to align the facial skeleton is introduced. An important aspect of further impact analysis is the ability to precisely simulate the same point of impact on multiple bone models; to perform this task, the skull must be precisely aligned in all anatomical planes. We therefore introduce a 2D/3D technique to align the facial skeleton that was initially developed for automatically calculating the craniofacial symmetry midline. In the 2D version, the concept of using cephalometric landmarks and manual image grid alignment to construct the training dataset was introduced; this concept was then extended to a 3D version in which the coronal and transverse planes are aligned using a CNN approach. As the alignment in the sagittal plane is still undefined, a new alignment based on these techniques will be created to align the sagittal plane using the Frankfort plane as a reference. Third, the resonant frequencies of multiple skulls are assessed to determine how skull resonant-frequency vibrations propagate into the brain tissue. After applying material properties and a mesh to the skull, modal analysis is performed to assess the skull's natural frequencies. Finally, theories will be proposed regarding the relation between skull geometry (such as shape and thickness), vibration, and brain tissue injury, which may result in concussion.
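A toy version of the modal-analysis step: solving the generalized eigenproblem K v = ω² M v for a small lumped mass-spring chain and reporting natural frequencies. The 450±50 Hz figure in the abstract comes from full skull models; the stiffness and mass values below are arbitrary placeholders.

```python
# Modal analysis of a fixed-fixed mass-spring chain via a generalized
# eigenvalue problem; natural frequencies are f = sqrt(eigenvalue) / (2*pi).
import numpy as np
from scipy.linalg import eigh

n, k, m = 5, 4.0e7, 0.15                  # DOFs, stiffness (N/m), mass (kg); assumed
K = np.zeros((n, n))
for i in range(n):                        # assemble the chain stiffness matrix
    K[i, i] = 2 * k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k
M = m * np.eye(n)                         # lumped mass matrix

w2, _ = eigh(K, M)                        # generalized eigenvalues = omega^2
freqs = np.sqrt(w2) / (2 * np.pi)         # natural frequencies in Hz, ascending
print(np.round(freqs, 1))
```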