Simulation of metal powder packing behaviour in laser-based powder bed fusion
Laser-based powder bed fusion (L-PBF) is an additive manufacturing method in which metal powder is fused into solid parts, layer by layer. L-PBF shows high promise for the manufacture of functional tungsten parts, but the development of tungsten powder feedstock for L-PBF processing is demanding and expensive. Therefore, computer simulation is explored as a possible tool for tungsten powder feedstock development at EOS Finland Oy, in collaboration with whom this thesis was made.
The aim of this thesis was to develop a simulation model of the recoating process of an EOS M 290 L-PBF system, as well as a validation method for the simulation. The validated simulation model can be used to evaluate the applicability of the chosen simulation software (FLOW-3D DEM) to powder material development, and possibly serve as a platform for future work with tungsten powder. To reduce complexity and uncertainty, the irregular tungsten powder was not yet simulated; a well-known, spherical EOS IN718 powder feedstock was used instead.
The validation experiment is based on building a low, enclosed wall using the M 290 L-PBF system. Recoated powder is trapped inside as the enclosure is being built, making it possible to remove the sampled powder from a known volume. This enables measuring the powder packing density (PD) of the powder bed. The experiment was repeated five times and some sources of error were also quantified. Average PD was found to be 52 % with a standard deviation of 0.2 %.
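The packing-density measurement described above reduces to a simple relation: the apparent density of the powder sampled from the known enclosed volume, divided by the density of the fully dense material. A minimal sketch, with illustrative numbers rather than values from the thesis (8.19 g/cm³ is a typical literature value for solid IN718; the mass and cavity volume are invented):

```python
def packing_density(powder_mass_g, cavity_volume_cm3, solid_density_g_cm3):
    # Apparent density of the sampled powder, divided by the density of
    # the fully dense material, gives the packing fraction (PD).
    apparent_density = powder_mass_g / cavity_volume_cm3
    return apparent_density / solid_density_g_cm3

# Illustrative inputs only -- not measurements from the thesis.
pd = packing_density(powder_mass_g=4.26, cavity_volume_cm3=1.0,
                     solid_density_g_cm3=8.19)
print(f"PD = {pd:.1%}")  # PD = 52.0%
```

With these made-up inputs the result happens to land near the 52 % average reported in the experiment.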
The simulation was modelled after the IN718 powder and corresponding process used in the M 290 system. Material-related input values were found by dynamic image analysis, pycnometry, rheometry, and from literature. PD was measured with six different methods, and the method considered as most analogous to the practical validation experiment yielded a PD of 52 %. Various particle behavior phenomena were also observed and analyzed.
Many of the powder bed characterization methods found in the literature were not applicable to L-PBF processing or were not representative of the simulated conditions. Many simulation studies were also found to use no validation, or to use a validation method not based on the investigated phenomena. The validation model developed in this thesis accurately represents the simulated conditions and was found to produce reliable and repeatable results. The simulation model was parametrized with values acquired from practical experiments or the literature and closely matched the validation experiment; it can therefore be considered a truthful representation of the powder recoating process of an EOS M 290. The model can be used as a platform for future development of tungsten powder simulation.
Japanese Expert Teachers' Understanding of the Application of Rhythm in Judo: a New Pedagogy
Aim
The aim of this research is to understand the application of rhythm in judo through the experience of expert Japanese coaches.
Background
Scientists and experienced coaches agree that rhythm is an important skill in everyday life, yet there is currently no research investigating the importance of rhythm in judo. People with a highly developed sense of rhythm move properly, breathe properly, and begin and finish work at the right time. In sport, motion and dance can play an important role not only in improving performance but also in reducing, or even preventing, injuries. Those who are naturally musically inclined (who have a musical ear) may find they can improve their technique faster than others; by investigating the way expert coaches understand the application of rhythm in judo, this research seeks to understand why.
As Lange (1970) stated, the factors of movement are ‘weight, space, time, and flow on the background of the general flux of movement in proportional arrangements’ (Bradley, 2008; Selioni, 2013; Youngerman, 1976); this research therefore investigates the interaction of body and mind. Both dance training and judo are somatic experiences whose ultimate goal is the attainment of a skilled body. With quality training, an athlete gains increased awareness of their body, which leads to better control of movement and is very important for judo athletes. Such training is found in Japanese kabuki dance (Hahn, 2007), the Greek syrtaki dance (Zografou & Pateraki, 2007), and in the walking techniques used in the traditional and Olympic sports of Japanese judo and Greek wrestling.
Methods
Interpretative phenomenological analysis (IPA) was the most suitable data analysis approach for this study for a number of reasons, mainly because it was considered to most closely reflect the author's realist epistemological view. The idiographic approach central to IPA was regarded as a useful framework within which the current topic could be meaningfully explored.
As this study is one of the first to explore this new thematic area, IPA was the preferred approach to address the goal of providing a detailed account of the expert’s experience. Therefore, semi-structured interviews were used as a data source. This is the most conventional form of data collection using IPA and most closely reflects the researcher-participant relationship. Semi-structured interviews provide considerable flexibility by allowing the researcher to be guided by the phenomena of interest to the participant.
In this study, purposive sampling was achieved using inclusion criteria pertaining to the research question.
Using ranking criteria based on belt rank in combination with age, as employed by the International Judo Federation (IJF) and the Kodokan Judo Institute, six expert coaches aged forty and over with a minimum belt rank of 6th dan were selected as the sample.
Results
Both interviews and the codification process contributed to new findings regarding the application of rhythm to judo, and judo itself as a pedagogical tool.
The diagrammatic model can be considered a 'guideline' to the phenomena deemed most significant. The personal significance of rhythm in judo was evidenced by the frequency with which the interviewees naturally referred to it during the interviews. A number of interviewees said that it was important for rhythm to become second nature, and rhythm was also described as an integrated and representative element in the context of training. This framework was seen as essential in providing the reader with a contextualised understanding of the phenomena considered most important for the current research. Interviewees reported various motives for rhythm training, such as faster technical development, better attack and defence, fitness, speed, skills acquisition, personal and spiritual growth, and competition results.
Conclusions
This study offers first-hand accounts from professional coaches of a previously unexamined phenomenon, namely the use of rhythm in judo, and sheds light on how judo experts understand rhythm in terms of training, competition, and personal growth. These findings suggest that, beyond training itself, coaches play an important role in teaching, mentoring, and leading students. In conclusion, the research revealed four important points which form the basis of a new method of teaching judo: pedagogy, skills, rhythm, and movement.
Towards A Practical High-Assurance Systems Programming Language
Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity, requiring considerable expertise in both systems programming and formal verification. Without tools that provide appropriate abstraction and automation, development can be extremely costly due to the sheer complexity of the systems and the nuances in them.
Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code.
To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users with a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for greater expressiveness and better integration of systems programmers into the verification process.
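Cogent's actual property-based testing framework is not reproduced here, but the underlying idea, checking on randomly generated inputs that a low-level implementation refines a pure functional specification, can be sketched generically. All names below are hypothetical illustrations, not Cogent APIs:

```python
import random

def spec_sum(xs):
    # Abstract functional specification: unbounded-integer sum.
    return sum(xs)

def impl_sum(xs):
    # "Low-level" implementation under test, mimicking 32-bit C arithmetic.
    acc = 0
    for x in xs:
        acc = (acc + x) & 0xFFFFFFFF  # wrap around at 2^32
    return acc

def check_refinement(trials=1000):
    # Property: on inputs where no 32-bit overflow can occur, the
    # implementation must agree with the specification.
    for _ in range(trials):
        xs = [random.randint(0, 2**16) for _ in range(random.randint(0, 20))]
        assert impl_sum(xs) == spec_sum(xs), f"counterexample: {xs}"
    return True

print(check_refinement())  # True when no counterexample is found
```

A test that fails here yields a concrete counterexample, giving developers feedback on a refinement property long before attempting a full formal proof, which is the progressive-assurance idea the abstract describes.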
Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions: a mapping method for the FDM manufacturability constraints; three major literature reviews; the collection, organization, and analysis of several large qualitative and quantitative multi-scale datasets on the fracture behavior of FDM-processed materials; new experimental equipment; and a refined, fast, and simple g-code generator based on commercially-available software. The resulting design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.
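The dissertation's g-code generator itself is not shown here, but the basic shape of such a tool, turning a layer description into a sequence of deposition moves, can be sketched. This back-and-forth raster-fill emitter is a hypothetical, simplified illustration (G0/G1 moves only, no extrusion calibration or perimeter traces):

```python
def raster_layer(width, height, bead_width, z):
    """Emit g-code for a simple back-and-forth raster fill of one
    rectangular layer. G0 = travel move, G1 = deposition move."""
    lines = [f"G1 Z{z:.2f}"]                       # move to the layer height
    y, forward = 0.0, True
    while y <= height:
        x0, x1 = (0.0, width) if forward else (width, 0.0)
        lines.append(f"G0 X{x0:.2f} Y{y:.2f}")     # travel to trace start
        lines.append(f"G1 X{x1:.2f} Y{y:.2f} E1")  # deposit one trace
        y += bead_width                            # step over by one bead
        forward = not forward                      # alternate direction
    return lines

for line in raster_layer(width=10.0, height=2.0, bead_width=1.0, z=0.2):
    print(line)
```

Designing the element layouts discussed above amounts to replacing this fixed raster pattern with traces chosen to meet the manufacturability constraints and the desired fracture behavior.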
Applying Robotic Process Automation to improve Sales Operations at EDP Comercial
Project Work presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Marketing Research and CRM. This study addresses the world of Robotic Process Automation (RPA), unraveling some myths and truths about it; as a project work report, it also shows the real impact this technology can have in a company. The project was carried out in the Business Growth Area (Área de Dinamização de Negócio - ADN) team, which supports the entire Direction of Face-to-Face Channels (Direção de Canais Presenciais - DCP). The DCP comprises four channels: official stores; Agents, which are partner but not official EDP stores; Door-to-Door (D2D) agents, who sell products door-to-door; and the Distribution Agents Network (Rede de Agentes de Distribuição – RAD). These channels are responsible for all physical sales of EDP Comercial, which in turn is responsible for all sales of the EDP Group. When the project began, several problems were immediately detected: various processes were being carried out manually, wasting both money and, most importantly for EDP, time, meaning that work was not being done efficiently enough. This was seen as an opportunity to explore the world of RPA. The proposed work was therefore to identify which processes could be improved and to build robots that could take over those activities. The most important result of this project was, as initially expected, an increase in the efficiency of the people who no longer have to perform routine tasks and can focus their energy on more important projects.
Colour technologies for content production and distribution of broadcast content
The requirement of colour reproduction has long been a priority driving the development of new colour imaging systems that maximise human perceptual plausibility. This thesis explores machine learning algorithms for colour processing to assist both content production and distribution. First, this research studies colourisation technologies with practical use cases in the restoration and processing of archived content. The research targets practical, deployable solutions, developing a cost-effective pipeline that integrates the activity of the producer into the processing workflow. In particular, a fully automatic image colourisation paradigm using Conditional GANs is proposed to improve the content generalisation and colourfulness of existing baselines. Moreover, a more conservative solution is considered, providing references to guide the system towards more accurate colour predictions. A fast end-to-end architecture is proposed to improve on existing exemplar-based image colourisation methods while decreasing complexity and runtime. Finally, the proposed image-based methods are integrated into a video colourisation pipeline. A general framework is proposed to reduce temporal flickering and the propagation of errors when such methods are applied frame by frame. The proposed model is jointly trained to stabilise the input video and to cluster its frames with the aim of learning scene-specific modes. Second, this research explores colour processing technologies for content distribution, with the aim of effectively delivering the processed content to a broad audience. In particular, video compression is tackled by introducing a novel methodology for chroma intra prediction based on attention models. Although the proposed architecture helped to gain control over the reference samples and better understand the prediction process, the complexity of the underlying neural network significantly increased the encoding and decoding time. Therefore, aiming at efficient deployment within the latest video coding standards, this work also focused on simplifying the proposed architecture to obtain a more compact and explainable model.
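The exemplar-based ("reference-guided") colourisation idea mentioned above can be illustrated at toy scale: keep the target's own luminance and borrow chroma from the reference pixel whose luminance is closest. The thesis's methods learn this mapping with deep networks; the nearest-luminance lookup below is only a hand-rolled stand-in for the principle:

```python
def colourise(target_lum, reference):
    # target_lum: list of luminance (L) values for the greyscale frame.
    # reference: list of (L, a, b) tuples sampled from the colour exemplar.
    out = []
    for L in target_lum:
        # Borrow chroma from the reference pixel with the closest luminance.
        Lr, a, b = min(reference, key=lambda p: abs(p[0] - L))
        out.append((L, a, b))  # keep the target's own luminance
    return out

reference = [(20, 5, -3), (50, 12, 7), (90, -4, 15)]
print(colourise([25, 85], reference))  # [(25, 5, -3), (85, -4, 15)]
```

A learned system replaces the brittle nearest-luminance rule with features that account for texture and semantics, which is what lets it avoid the colour-bleeding failures this naive lookup would produce on real images.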
The Viability and Potential Consequences of IoT-Based Ransomware
With the increased threat of ransomware and the substantial growth of the Internet of Things (IoT) market, there is significant motivation for attackers to carry out IoT-based ransomware campaigns. In this thesis, the viability of such malware is tested.
As part of this work, various techniques that could be used by ransomware developers to attack commercial IoT devices were explored. First, methods that attackers could use to communicate with the victim were examined, such that a ransom note was able to be reliably sent to a victim. Next, the viability of using "bricking" as a method of ransom was evaluated, such that devices could be remotely disabled unless the victim makes a payment to the attacker. Research was then performed to ascertain whether it was possible to remotely gain persistence on IoT devices, which would improve the efficacy of existing ransomware methods, and provide opportunities for more advanced ransomware to be created. Finally, after successfully identifying a number of persistence techniques, the viability of privacy-invasion based ransomware was analysed.
For each assessed technique, proofs of concept were developed. A range of devices -- with various intended purposes, such as routers, cameras and phones -- were used to test the viability of these proofs of concept. To test communication hijacking, devices' "channels of communication" -- such as web services and embedded screens -- were identified, then hijacked to display custom ransom notes. During the analysis of bricking-based ransomware, a working proof of concept was created, which was then able to remotely brick five IoT devices. After analysing the storage design of an assortment of IoT devices, six different persistence techniques were identified, which were then successfully tested on four devices, such that malicious filesystem modifications would be retained after the device was rebooted. When researching privacy-invasion based ransomware, several methods were created to extract information from data sources that can be commonly found on IoT devices, such as nearby WiFi signals, images from cameras, or audio from microphones. These were successfully implemented in a test environment such that ransomable data could be extracted, processed, and stored for later use to blackmail the victim.
Overall, IoT-based ransomware has been shown to be not only viable but also highly damaging to both IoT devices and their users. While IoT ransomware is still very uncommon "in the wild", the techniques demonstrated in this work highlight an urgent need to improve the security of IoT devices to avoid the risk of IoT-based ransomware causing havoc in our society. Finally, during the development of these proofs of concept, a number of potential countermeasures were identified, which can be used to limit the effectiveness of the attack techniques discovered in this PhD research.
A Descriptive Qualitative Study Exploring Middle-School Teachers’ Perceptions of Professional Development on Technology Integration
Today’s teachers are encouraged to incorporate technology into their classrooms. Technology integration became a worldwide focus for schools after remote learning became necessary to continue instruction during the COVID-19 pandemic. Additionally, research shows that technology-infused lessons improve student achievement and increase student engagement. Despite efforts to support teachers throughout the technology integration process, concerns have arisen: preparing highly qualified teachers ready to incorporate technology into their teaching repertoire has introduced additional stress factors. In this descriptive qualitative study, the researcher addressed the problem of teacher attrition, possibly related to stress factors associated with technology integration. The purpose of the study was to explore teachers’ perceptions of professional development opportunities that might improve the technology integration process, to identify stress factors associated with technology adoption, and to examine how professional development may help reduce those stress factors in one middle school in New York. The researcher chose a qualitative descriptive design using Vygotsky’s social constructivist theory and Bandura’s social learning theory on self-efficacy as the theoretical framework, included an exposition of the literature sources, synthesized the research findings, and provided recommendations for practice and future research. Data collection consisted of semistructured interviews with open-ended questions developed with the support of a panel of experts; ten participants were chosen using a snowball sampling strategy. The study found that professional development should be hands-on, continuous, and targeted in order to increase teachers’ personal level of engagement. Creating opportunities for colleague support systems also reduced stress factors associated with technology integration; these peer support systems reduced the time required to research the most effective resources, digital tools, and applications, as participants shared resources with one another. Recommendations for practice include providing adequate professional development, offering appropriate infrastructure, and delivering hands-on, targeted, continuous training so that teachers feel more comfortable developing technology-infused lessons. Recommendations for research include providing additional insight into teachers’ perceived benefits of and motivation for technology integration, and into how stress factors associated with the technology adoption process may increase teacher attrition.
Perfect is the enemy of test oracle
Automation of test oracles is one of the most challenging facets of software testing, yet it remains comparatively less addressed than automated test input generation. Test oracles rely on a ground truth that can distinguish between correct and buggy behavior to determine whether a test fails (detects a bug) or passes. What makes the oracle problem challenging and undecidable is the assumption that the ground truth must know the exact expected, correct, or buggy behavior. However, we argue that one can still build an accurate oracle without knowing the exact correct or buggy behavior, only how the two might differ. This paper presents SEER, a learning-based approach that, in the absence of test assertions or other types of oracle, can determine whether a unit test passes or fails on a given method under test (MUT). To build the ground truth, SEER jointly embeds unit tests and the implementations of MUTs into a unified vector space, in such a way that the neural representations of tests are similar to those of the MUTs they pass on, but dissimilar to those of the MUTs they fail on. A classifier built on top of this vector representation serves as the oracle, generating "fail" labels when test inputs detect a bug in the MUT and "pass" labels otherwise. Our extensive experiments applying SEER to more than 5K unit tests from a diverse set of open-source Java projects show that the produced oracle is (1) effective in predicting fail or pass labels, achieving an overall accuracy, precision, recall, and F1 measure of 93%, 86%, 94%, and 90%; (2) generalizable, predicting the labels for unit tests of projects that were not in the training or validation set with negligible performance drop; and (3) efficient, detecting the existence of bugs in only 6.5 milliseconds on average. Comment: Published in ESEC/FSE 202
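SEER's embedding is a learned neural model; as a rough illustration of the vector-space idea, one can substitute a toy bag-of-tokens embedding and a cosine-similarity threshold. Everything below (the tokenizer, the threshold, the example snippets) is a hypothetical stand-in, not SEER's actual architecture:

```python
import math
from collections import Counter

def embed(code: str) -> Counter:
    # Toy "embedding": a bag of whitespace-separated tokens. SEER instead
    # learns a neural embedding; this only illustrates mapping code to vectors.
    return Counter(code.split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def oracle(test_code: str, mut_code: str, threshold: float = 0.5) -> str:
    # Tests whose embedding lies close to the MUT's are predicted to pass.
    return "pass" if cosine(embed(test_code), embed(mut_code)) >= threshold else "fail"

mut = "def add a b return a + b"
test_related = "a = 2 b = 3 assert add a b == 5"
test_unrelated = "assert parse_xml doc is None"
print(oracle(test_related, mut), oracle(test_unrelated, mut))  # pass fail
```

The essential difference in SEER is that the embedding is trained contrastively, so "similar" means "this test passes on this implementation" rather than mere token overlap.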
Cardiovascular diseases prediction by machine learning incorporation with deep learning
The causes of cardiovascular disease (CVD) are not yet fully understood, but CVD is known to be associated with a high risk of death, as well as severe morbidity and disability. There is an urgent need for AI-based technologies that can promptly and reliably predict the future outcomes of individuals who have cardiovascular disease. The Internet of Things (IoT) is serving as a driving force behind the development of CVD prediction, with machine learning (ML) used to analyse and make predictions from the data that IoT devices collect. Traditional machine learning algorithms are unable to account for differences in the data and achieve low accuracy in their model predictions. This research presents a collection of machine learning models that address this problem by taking into account the data observation mechanisms and training procedures of a number of different algorithms. To verify the efficacy of the strategy, the Heart Dataset was combined with several classification models. The proposed method achieves nearly 96 percent accuracy, outperforming existing methods, and a complete analysis over several metrics is provided. Research in the field of deep learning will benefit from additional data from a large number of medical institutions, which may be used for the development of artificial neural network structures.
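The abstract describes combining several classification models rather than relying on a single algorithm. A minimal sketch of the general soft-voting idea, with made-up threshold "models" standing in for trained classifiers and feature names loosely echoing the UCI Heart dataset (none of this is the paper's actual method):

```python
# Toy stand-ins for trained base models: each maps a patient record to a
# risk score in [0, 1]. In a real system these would be fitted ML models
# (e.g., trees, SVMs, neural networks) trained on the Heart Dataset.
def model_chol(p):
    return 1.0 if p["chol"] > 240 else 0.0

def model_bp(p):
    return 1.0 if p["rest_bp"] > 140 else 0.0

def model_age(p):
    return 1.0 if p["age"] > 55 else 0.0

def ensemble_predict(patient, models=(model_chol, model_bp, model_age)):
    # Soft voting: average the base-model scores and threshold at 0.5.
    score = sum(m(patient) for m in models) / len(models)
    return 1 if score >= 0.5 else 0

high_risk = {"age": 63, "chol": 280, "rest_bp": 150}
low_risk = {"age": 41, "chol": 190, "rest_bp": 120}
print(ensemble_predict(high_risk), ensemble_predict(low_risk))  # 1 0
```

Combining heterogeneous base models this way is one standard route to the kind of robustness-over-any-single-algorithm claim made in the abstract.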