Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions: a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, new experimental equipment, and a fast and simple g-code generator based on commercially-available software. The refined design method and rules were experimentally validated through a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are experts neither in advanced solid mechanics nor in process-tailored materials.
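The g-code generator mentioned above is described only at a high level. As a rough, hypothetical illustration of what such a tool emits for one FDM layer, here is a minimal raster-fill sketch in Python; the bead width, feed rate, and extrusion-per-millimeter values are illustrative assumptions, not parameters from the dissertation.

```python
# Minimal sketch of a raster-style g-code generator for one FDM layer.
# All parameters (bead width, feed rate, extrusion per mm) are assumed
# placeholder values, not settings from the dissertation.

def raster_layer(width_mm, height_mm, bead_w=0.4, feed=1800, e_per_mm=0.033):
    """Emit G0/G1 moves for a back-and-forth raster fill of a rectangle."""
    lines, e = [], 0.0
    y, direction = 0.0, 1
    while y <= height_mm + 1e-9:
        x0, x1 = (0.0, width_mm) if direction > 0 else (width_mm, 0.0)
        lines.append(f"G0 X{x0:.3f} Y{y:.3f}")            # travel to trace start
        e += width_mm * e_per_mm                           # cumulative extrusion
        lines.append(f"G1 X{x1:.3f} Y{y:.3f} E{e:.4f} F{feed}")
        y += bead_w                                        # step over by one bead
        direction = -direction                             # alternate direction
    return lines

if __name__ == "__main__":
    for cmd in raster_layer(10.0, 2.0):
        print(cmd)
```

Designing element layouts, in this framing, amounts to choosing where these traces go; a real tool would also handle perimeters, layer changes, and retraction.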
ARA-net: an attention-aware retinal atrophy segmentation network coping with fundus images
Background: Accurately detecting and segmenting areas of retinal atrophy is paramount for early medical intervention in pathological myopia (PM). However, segmenting retinal atrophic areas from a two-dimensional (2D) fundus image poses several challenges, such as blurred boundaries, irregular shapes, and size variation. To overcome these challenges, we have proposed an attention-aware retinal atrophy segmentation network (ARA-Net) to segment retinal atrophy areas from the 2D fundus image.
Methods: The ARA-Net adopts a UNet-like strategy to perform the area segmentation. A skip self-attention (SSA) connection block, comprising a shortcut and a parallel polarized self-attention (PPSA) block, is proposed to deal with the blurred boundaries and irregular shapes of the retinal atrophic region. Further, we propose a multi-scale feature flow (MSFF) to address the size variation; the flow is added between the SSA connection blocks, capturing considerable semantic information to detect retinal atrophy across a range of area sizes.
Results: The proposed method has been validated on the Pathological Myopia (PALM) dataset. Experimental results demonstrate that our method yields a high Dice coefficient (DICE) of 84.26%, Jaccard index (JAC) of 72.80%, and F1-score of 84.57%, significantly outperforming other methods.
Conclusion: ARA-Net is an effective and efficient approach for retinal atrophic area segmentation in PM.
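As a note on the reported metrics, the Dice coefficient and Jaccard index are both overlap measures on binary masks and are algebraically related by J = D / (2 − D). A minimal sketch with toy masks (not PALM data):

```python
# Sketch of the overlap metrics reported above (Dice and Jaccard) for
# binary segmentation masks; tiny NumPy arrays stand in for predicted
# and ground-truth atrophy masks.
import numpy as np

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
d, j = dice(pred, gt), jaccard(pred, gt)
print(round(d, 4), round(j, 4))   # 0.6667 0.5

# The identity J = D / (2 - D) holds for any mask pair; the paper's own
# figures satisfy it: 0.8426 / (2 - 0.8426) ≈ 0.7280.
assert abs(j - d / (2 - d)) < 1e-12
```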
A real-time smart sensing system for automatic localization and recognition of vegetable plants for weed control
Tomato is a globally grown vegetable crop with high economic and nutritional value. Tomato production is threatened by weeds, and this effect is most pronounced in the early stages of tomato plant growth, making weed management during this period critical. The increasing labor cost of manual weeding and the negative impacts on human health and the environment caused by the overuse of herbicides are driving the development of smart weeders. The core task in developing a smart weeder is to accurately distinguish vegetable crops from weeds in real time. In this study, a new approach is proposed to locate tomato and pakchoi plants in real time based on an integrated sensing system consisting of a camera and color mark sensors. The selection scheme of reference, color, area, and category of plant labels for sensor identification was examined. The impact of the number of sensors and the size of the signal tolerance region on the system's recognition accuracy was also evaluated. The experimental results demonstrated that the color mark sensor using the main stem of the tomato as its reference identified plant labels better than the pakchoi-referenced sensor, and that applying white topical markers on the lower main stem of the tomato plant is the optimal scheme. The effectiveness of the six sensors used by the system to detect plant labels was demonstrated. The computer vision algorithm proposed in this study was developed specifically for the sensing system, yielding the highest overall accuracy of 95.19% for tomato and pakchoi localization. The proposed sensor-based system is highly accurate and reliable for automatic, real-time localization of vegetable plants for weed control
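The study's vision algorithm is specific to its sensing hardware and is not detailed in the abstract. As a loose, hypothetical illustration of the general idea of locating a white stem marker in a camera frame, one could threshold near-white pixels and take their centroid:

```python
# Illustrative sketch (not the authors' algorithm): locating a bright white
# stem marker in an RGB frame by thresholding and taking the blob centroid.
import numpy as np

def locate_white_marker(img, thresh=200):
    """img: HxWx3 uint8 RGB. Returns the (row, col) centroid of near-white
    pixels, or None when no pixel clears the threshold on all channels."""
    mask = np.all(img >= thresh, axis=2)   # near-white: all channels bright
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 2] = 255                        # a 2-pixel vertical "marker"
print(locate_white_marker(frame))          # (1.5, 2.0)
```

A deployed system would add connected-component filtering by marker area and color, which is roughly what the paper's label selection scheme (reference, color, area, category) tunes.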
The VLT/SPHERE view of the ATOMIUM cool evolved star sample
Context. Low- and intermediate-mass asymptotic giant stars and massive red supergiant stars are important contributors to the chemical enrichment of the Universe. They are among the most efficient dust factories of the Galaxy, harboring chemically rich circumstellar environments. Yet, the processes that lead to dust formation or the large-scale shaping of the mass loss still escape attempts at modeling.
Aims. Through the ATOMIUM project, we aim to present a consistent view of a sample of 17 nearby cool evolved stars. Our goals are to unveil the dust-nucleation sites and morphologies of the circumstellar envelope of such stars and to probe ambient environments with various conditions. This will further enhance our understanding of the roles of stellar convection and pulsations, and that of companions in shaping the dusty circumstellar medium.
Methods. Here we present and analyze VLT/SPHERE-ZIMPOL polarimetric maps obtained in the visible (645–820 nm) of 14 out of the 17 ATOMIUM sources. They were obtained contemporaneously with the ALMA high spatial resolution data. To help interpret the polarized signal, we produced synthetic maps of light scattering by dust, through 3D radiative transfer simulations with the RADMC3D code.
Results. The degree of linear polarization (DoLP) observed by ZIMPOL spreads across several optical filters. We infer that it primarily probes dust located just outside of the point spread function of the central source, and in or near the plane of the sky. The polarized signal is mainly produced by structures with a total optical depth close to unity in the line of sight, and it represents only a fraction of the total circumstellar dust. The maximum DoLP ranges from 0.03 to 0.38 depending on the source, fractions that can be reproduced by our 3D pilot models for grains composed of olivine, melilite, corundum, enstatite, or forsterite. The spatial structure of the DoLP shows a diverse set of shapes, including clumps, arcs, and full envelopes. Only for three sources do we note a correlation between the ALMA CO v = 0, J = 2−1 and SiO v = 0, J = 5−4 lines, which trace the gas density, and the DoLP, which traces the dust.
Conclusions. The clumpiness of the DoLP and the lack of a consistent correlation between the gas and the dust location show that, in the inner environment, dust formation occurs at very specific sites. This has potential consequences for the derived mass-loss rates and dust-to-gas ratio in the inner region of the circumstellar environment. Except for π1 Gru and perhaps GY Aql, we do not detect interactions between the circumstellar wind and the hypothesized companions that shape the wind at larger scales. This suggests that the orbits of any other companions are tilted out of the plane of the sky
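For readers unfamiliar with the quantity, the DoLP mapped by ZIMPOL is computed per pixel from the Stokes parameters I, Q, and U. A minimal elementwise sketch (not the ATOMIUM reduction pipeline):

```python
# Per-pixel degree of linear polarization from Stokes parameters.
import numpy as np

def dolp(I, Q, U):
    """DoLP = sqrt(Q^2 + U^2) / I, computed elementwise over the maps."""
    I, Q, U = map(np.asarray, (I, Q, U))
    return np.sqrt(Q**2 + U**2) / I

# A fully linearly polarized pixel has DoLP = 1; the maxima of 0.03-0.38
# reported above correspond to partially polarized, dust-scattered light.
print(dolp(1.0, 0.3, 0.4))   # ≈ 0.5
```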
Planar fiber-chip-coupling using angle-polished polarization maintaining fibers
We report on our latest developments of a planar fiber-chip-coupling scheme using angle-polished, polarization-maintaining (PM) fibers. Most integrated photonic chip components are polarization sensitive, and a suitable way to launch several wavelength channels with the same polarization into the chip is the use of PM fibers. These fibers pose several processing and handling challenges in achieving a stable, permanent, and low-loss coupling. We present the processing of the fibers in detail, along with experimental results for our planar and compact fiber-chip-coupling technique
Victims' Access to Justice in Trinidad and Tobago: An exploratory study of experiences and challenges of accessing criminal justice in a post-colonial society
This thesis investigates victims' access to justice in Trinidad and Tobago, using their own narratives. It seeks to capture how their experiences affected their identities as victims and citizens, alongside their perceptions of the legitimacy of the criminal justice system. While there have been some reforms in the administration of criminal justice in Trinidad and Tobago, such reforms have not focused on victims' access to the justice system. Using grounded theory methodology, qualitative data were collected through 31 in-depth interviews with victims and victim advocates. The analysis found that victims experienced interpersonal, structural, and systemic barriers at varying levels throughout the criminal justice system, which manifested as institutionalized secondary victimization, silencing, and inequality. This thesis argues that such experiences not only served to appropriate conflict but also demonstrate that access is often granted in a very narrow sense. Furthermore, it shows a failure to encompass access to justice, as appropriated conflicts are left to stagnate in the system with very little resolution. Adopting a postcolonial lens to analyse victims' experiences, the analysis identified othering practices that served to institutionalize the vulnerability and powerlessness associated with victim identities. It is argued that these othering practices also affected victims' rights consciousness, delegitimating their identities as citizens. Moreover, as a result of their experiences, victims had mixed perceptions of the justice system: it is argued that while the system is a legitimate authority, victims' endorsement of it is questionable; their experiences therefore suggest a reinforcement of the system's legal hegemony.
The findings suggest that within the legal system of Trinidad and Tobago, legacies of colonialism shape the postcolonial present as the psychology and inequalities of the past are present in the interactions and processes of justice. These findings are relevant for policymakers in Trinidad and Tobago and other regions. From this study it is recognized that, to improve access to justice for victims, there needs to be a move towards victim empowerment that promotes resilience and enhances social capital. Going forward it is noted that there is a need for further research
The place where curses are manufactured : four poets of the Vietnam War
The Vietnam War was unique among American wars. To pinpoint its uniqueness, it was necessary to look for a non-American voice that would enable me to articulate its distinctiveness and explore the American character as observed by an Asian. Takeshi Kaiko proved to be most helpful. From his novel, Into a Black Sun, I was able to establish a working pair of 'bookends' from which to approach the poetry of Walter McDonald, Bruce Weigl, Basil T. Paquet and Steve Mason. Chapter One is devoted to those seemingly mismatched 'bookends,' Walt Whitman and General William C. Westmoreland, and their respective anthropocentric and technocentric visions of progress and the peculiarly American concept of the "open road" as they manifest themselves in Vietnam. In Chapter Two, I analyze the war poems of Walter McDonald; a pilot writing primarily about flying, McDonald produces poetry that manifests General Westmoreland's technocentric vision of the 'road' as determined by and manifest through technology. Chapter Three focuses on the poems of Bruce Weigl. The poems analyzed portray the literal and metaphorical descent from the technocentric, 'numbed' distance of aerial warfare to the world of ground warfare, and the initiation of a 'fucking new guy,' who discovers the contours of the self's interior through a set of experiences that lead from aerial insertion into the jungle to the degradation of burning human feces. Chapter Four, devoted to the thirteen poems of Basil T. Paquet, focuses on the continuation of the descent begun in Chapter Three. In his capacity as a medic, Paquet's entire body of poems details his quotidian tasks, which entail tending the maimed, the mortally wounded, and the dead. The final chapter deals with Steve Mason's Johnny's Song and its depiction of the plight of Vietnam veterans back in "The World" who are still trapped inside the interior landscape of their individual "ghettoes" of the soul created by their wartime experiences
Implementing Health Impact Assessment as a Required Component of Government Policymaking: A Multi-Level Exploration of the Determinants of Healthy Public Policy
It is widely understood that the public policies of ‘non-health’ government sectors have greater impacts on population health than those of the traditional healthcare realm. Health Impact Assessment (HIA) is a decision support tool that identifies and promotes the health benefits of policies while also mitigating their unintended negative consequences. Despite numerous calls to do so, the Ontario government has yet to implement HIA as a required component of policy development. This dissertation therefore sought to identify the contexts and factors that may both enable and impede HIA use at the sub-national (i.e., provincial, territorial, or state) government level.
The three integrated articles of this dissertation provide insights into specific aspects of the policy process as they relate to HIA. Chapter one details a case study of purposive information-seeking among public servants within Ontario’s Ministry of Education (MOE). Situated within Ontario’s Ministry of Health (MOH), chapter two presents a case study of policy collaboration between health and ‘non-health’ ministries. Finally, chapter three details a framework analysis of the political factors supporting health impact tool use in two sub-national jurisdictions – namely, Québec and South Australia.
MOE respondents (N=9) identified four components of policymaking ‘due diligence’, including evidence retrieval, consultation and collaboration, referencing, and risk analysis. As prospective HIA users, they also confirmed that information is not routinely sought to mitigate the potential negative health impacts of education-based policies. MOH respondents (N=8) identified the bureaucratic hierarchy as the brokering mechanism for inter-ministerial policy development. As prospective HIA stewards, they also confirmed that the ministry does not proactively flag the potential negative health impacts of non-health sector policies. Finally, ‘lessons learned’ from case articles specific to Québec (n=12) and South Australia (n=17) identified the political factors supporting tool use at different stages of the policy cycle, including agenda setting (‘policy elites’ and ‘political culture’), implementation (‘jurisdiction’), and sustained implementation (‘institutional power’).
This work provides important insights into ‘real life’ policymaking. By highlighting existing facilitators of and barriers to HIA use, the findings offer a useful starting point from which proponents may tailor context-specific strategies to sustainably implement HIA at the sub-national government level
Breast mass segmentation from mammograms with deep transfer learning
Abstract. Mammography is an x-ray imaging method used in breast cancer screening, and analyzing screening images is a time-consuming process. Many different computer-assisted diagnosis systems have been created to hasten the image analysis. Deep learning is the use of multilayered neural networks for solving different tasks, and deep learning methods are becoming more advanced and popular for segmenting images. One deep transfer learning method is to use these neural networks with pretrained weights, which typically improves a network's performance.
In this thesis, deep transfer learning was used to segment cancerous masses from mammography images. The convolutional neural networks used were pretrained and fine-tuned, and they had an encoder-decoder architecture. The ResNet22 encoder was pretrained with mammography images, while the ResNet34 encoder was pretrained with various color images. These encoders were paired with either a U-Net or a Feature Pyramid Network decoder. Additionally, a randomly initialized U-Net model was also tested. The five different models were trained and tested on the Oulu Dataset of Screening Mammography (9204 images) and on the Portuguese INbreast dataset (410 images) with two different loss functions: binary cross-entropy loss combined with soft Jaccard loss, and a loss function based on the focal Tversky index.
The best models were trained on the Oulu Dataset of Screening Mammography with the focal Tversky loss. The best segmentation result achieved was a Dice similarity coefficient of 0.816 on correctly segmented masses and a classification accuracy of 88.7% on the INbreast dataset. On the Oulu Dataset of Screening Mammography, the best results were a Dice score of 0.763 and a classification accuracy of 83.3%. The results between the pretrained models were similar, and the pretrained models outperformed the non-pretrained models. In conclusion, deep transfer learning is well suited to mammography mass segmentation, and the choice of loss function had a large impact on the results.
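The focal Tversky loss named above is defined in the literature via the Tversky index. A minimal NumPy sketch, where the alpha, beta, and gamma values are common defaults from the literature rather than the thesis's settings:

```python
# Hedged sketch of the focal Tversky loss; alpha weights false negatives,
# beta weights false positives, and gamma focuses training on hard examples.
import numpy as np

def focal_tversky_loss(pred, gt, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """pred: predicted mass probabilities in [0, 1]; gt: binary ground truth.
    Tversky index TI = TP / (TP + alpha*FN + beta*FP); loss = (1 - TI)^gamma."""
    pred, gt = np.ravel(pred), np.ravel(gt)
    tp = np.sum(pred * gt)
    fn = np.sum((1 - pred) * gt)
    fp = np.sum(pred * (1 - gt))
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - ti) ** gamma

gt = np.array([1.0, 1.0, 0.0, 0.0])
print(focal_tversky_loss(gt, gt))       # perfect prediction -> loss ~ 0
print(focal_tversky_loss(1 - gt, gt))   # worst prediction  -> loss ~ 1
```

With gamma < 1 the loss gradient is amplified for examples whose Tversky index is already high, which is one reason this family of losses behaves differently from plain cross-entropy on small masses.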
Learning disentangled speech representations
A variety of informational factors are contained within the speech signal and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, sometimes methods will capture more than one informational factor at the same time such as speaker identity, spoken content, and speaker prosody.
The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstruction, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counterfactual questions.
In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed. And in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single-best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks.
This thesis explores a variety of use-cases for disentangled representations including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech). Sometimes the term "disentanglement" is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically
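As a toy illustration (not a method from the thesis) of why disentangled codes are reusable: if an encoder yields separate content and speaker codes, voice conversion reduces to recombining codes before decoding. The encoder and decoder below are trivial stand-ins, disentangled by construction.

```python
# Toy illustration of recombining disentangled codes for voice conversion.
import numpy as np

def convert(encode, decode, src_utt, tgt_utt):
    """Decode the source utterance's content with the target speaker's code."""
    content_src, _ = encode(src_utt)
    _, speaker_tgt = encode(tgt_utt)
    return decode(content_src, speaker_tgt)

# Stand-in encoder/decoder: the "signal" is just the two codes concatenated,
# so the factors are perfectly separable by construction.
encode = lambda utt: (utt[:2], utt[2:])
decode = lambda content, speaker: np.concatenate([content, speaker])

src = np.array([1.0, 2.0, 9.0, 9.0])   # content [1, 2], speaker [9, 9]
tgt = np.array([5.0, 6.0, 3.0, 3.0])   # content [5, 6], speaker [3, 3]
print(convert(encode, decode, src, tgt))   # [1. 2. 3. 3.]
```

Real speech signals do not factorize this cleanly, which is exactly the difficulty the thesis's end-to-end disentanglement approaches address.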