Design Considerations for Building an IoT Enabled Digital Twin Machine Tool Sub-System
The Internet of Things (IoT) and digital twins (DT) are both key advancements in the fourth industrial revolution. IoT can connect various devices that collect data, which can be used to create a DT providing services such as condition monitoring, predictive maintenance, and modeling. Machine tools are key components in modern manufacturing, forming the backbone of most modern factories. They are complex and expensive machines, often costing hundreds of thousands to millions of dollars, so it is desirable to ensure they remain in good working condition. Given their complexity, it may be practical to begin IoT-enabled DT development with one of their subsystems, such as the feed drives, tool changer, or spindle. This work covers considerations when building an IoT-enabled subsystem DT, including which data streams can be utilized, what should be considered when selecting a DT platform, and how data will be transmitted and analyzed in the system. A case study of an IoT-enabled linear feed drive is examined under this framework of development. This work should aid others in beginning the development process for an IoT-enabled DT subsystem of their own.
Adaptive SIF-EKF estimation for fault detection in attitude control experiments
An inherent property of dynamic systems in real applications is their high degree of variability, which often manifests in ways harmful to system stability and performance. External disturbances, modeling error, and faulty components must be accounted for, either in the system design or algorithmically through estimation and control methods. In orbital satellite systems, the ability to compensate for uncertainty and detect faults is vital. Satellites are responsible for many essential operations on Earth, including GPS tracking, radio communication and broadcasting, defense, and climate monitoring. They are also expensive to design, fabricate, and deploy, and currently impossible to repair if they become inoperable. When subjected to unforeseen disturbances or minor system failures, communications with Earth can cease and valuable data can be lost. Researchers have been developing robust estimation and control strategies for several decades to mitigate the effects of these failure modes. For instance, fault detection methods can be employed in satellites to detect deviations in attitude or actuator states so that errors or incorrect data do not propagate further across the satellite's long life cycle. The Kalman filter (KF) is an optimal state estimation strategy with sub-optimal nonlinear variations such as the extended Kalman filter (EKF), commonly applied to dynamic systems, including satellites. However, in the presence of the aforementioned uncertainties, these optimal estimators tend to degrade drastically in performance and must be replaced by more robust methods. The newly developed sliding innovation filter (SIF) is one such candidate, as it has been demonstrated to perform state estimation robustly in faulty systems. Using an in-lab Nanosatellite Attitude Control Simulator (NACS), an adaptive hybrid formulation of the SIF and EKF is applied to a satellite system to detect faults and disturbances in experiments, based on the normalized innovation squared (NIS) metric.
This strategy was demonstrated to improve state estimation accuracy in the presence of multiple faults, compared to conventional methods.
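The NIS-based fault test described above can be sketched in its simplest scalar form as follows. This is a minimal illustration of the general technique, not the thesis's implementation; the threshold and the example residual values are assumed.

```python
# Minimal scalar sketch of an NIS-based fault check, of the kind used to
# decide when an optimal filter (EKF) should yield to a more robust one (SIF).
# All numbers here are illustrative assumptions, not values from the thesis.

CHI2_95_1DOF = 3.841  # 95% chi-square threshold, 1 degree of freedom (assumed gate)

def nis(innovation: float, innovation_variance: float) -> float:
    """Normalized innovation squared: nu^T S^-1 nu, reduced to the scalar case."""
    return innovation ** 2 / innovation_variance

def fault_detected(innovation: float, innovation_variance: float,
                   threshold: float = CHI2_95_1DOF) -> bool:
    """Flag a fault when the NIS exceeds the chi-square gate."""
    return nis(innovation, innovation_variance) > threshold

# Nominal residual: innovation small relative to its predicted variance.
print(fault_detected(0.5, 1.0))  # consistent with the filter model
# Faulty residual: innovation far outside the predicted spread.
print(fault_detected(4.0, 1.0))  # inconsistent, so flag a fault
```

In the vector case the same statistic generalizes to the quadratic form over the innovation covariance, with the gate taken from a chi-square distribution whose degrees of freedom match the measurement dimension.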
2023 ASEE Workshop Combining Arduino and MATLAB for Controls Experiments
At the ASEE 2023 conference, a workshop was delivered showcasing the integration of advanced features in MATLAB® and Simulink® software with Arduino® microcontroller boards for conducting control experiments. This document illuminates the successful strategies and pinpoints areas necessitating enhancements, discerned through participant feedback and empirical observations. It comprehensively articulates the required experiments and materials, emphasizing a practical, hands-on methodology. The workshop extensively utilized project-based learning (PBL) methodologies, harnessing a spectrum of economical hardware solutions. Moreover, it introduced alternative methodologies utilizing robust MATLAB simulation functions. The combination of student feedback and instructor insights is presented, offering valuable benchmarks for the workshop's efficacy. This workshop was developed as a joint effort between York College of Pennsylvania, McMaster University, and MathWorks®. This document is intended to function as a resourceful blueprint for future renditions of this workshop, whether facilitated by MathWorks or others, ensuring continuous improvement and knowledge dissemination.
The Single Source of Truth Paradigm as a Tool for Supporting Software Maintenance
Many software systems become complex over time and eventually become harder to maintain. They often face performance problems, security risks, outdated dependencies, bugs, and other issues. To address these challenges, practitioners use various maintenance tools like performance profilers, static analyzers, security scanners, and more. However, the data from these tools is often scattered and difficult to combine, making it hard to get a complete picture, perform analysis, and make informed decisions.
We introduce an implementation of the Single Source of Truth (SST) paradigm, which allows us to bring all software maintenance data together in one place. The SST aggregates information from different tools, structures it, and stores it in a consistent and reliable way. It uses a graph-based approach to organize and unify the data, making it easier to explore and analyze. The system was tested on several software projects and showed that it can help better understand software systems and support smarter maintenance decisions.
Thesis
Master of Applied Science (MASc)
As software systems grow, they often become harder to manage, with problems like slow performance, bugs, security issues, and outdated parts. Developers use different tools to find and fix these issues, but each tool gives information in its own way, making it hard to see the full picture. This project introduces a system called the Single Source of Truth (SST) that brings all this information together in one place. It organizes the data as a unified graph representation, ensuring data validation and consistency.
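The aggregation idea described above can be sketched roughly as follows: findings from different tools are attached to shared nodes (here, source files) so that one query sees everything known about an artifact. The tool names and report shapes are illustrative assumptions, not the SST's actual schema.

```python
# Minimal sketch of merging reports from different maintenance tools into one
# structure keyed by source file, so that findings about the same artifact
# land on the same node. Tool names and findings are hypothetical examples.
from collections import defaultdict

def build_graph(reports):
    """reports: iterable of (tool, file_path, finding) triples."""
    graph = defaultdict(lambda: defaultdict(list))
    for tool, path, finding in reports:
        graph[path][tool].append(finding)
    return graph

reports = [
    ("profiler", "src/db.py", "hot loop in query()"),
    ("security-scanner", "src/db.py", "SQL built by string concatenation"),
    ("static-analyzer", "src/api.py", "unused import"),
]

graph = build_graph(reports)
# Findings from two different tools about the same file now sit on one node.
print(sorted(graph["src/db.py"]))  # ['profiler', 'security-scanner']
```

A real implementation would add typed edges (file-to-module, finding-to-commit, and so on) rather than a flat per-file index, which is what makes cross-tool analysis queries possible.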
Designing Molecular Fishhooks for Virus Surveillance Platforms
Virus surveillance platforms are critical infrastructure for public health. Clinical sequencing platforms, in which all virus genomes in a community can be assessed, are growing, but they come with limitations that raise costs or slow down public health responses. When dealing with a large volume of patients, this translates to a large volume of data, which takes time to analyze, and delays are not ideal when virus replication and spread are exponential with time. There are two ways clinical sequencing platforms can be improved to produce a more robust virus surveillance platform: by optimizing how we analyze large volumes of data, or by producing tools that reduce the complexity of samples so that we focus on the virus material alone.
To optimize the means we use to analyse large volumes of data, I developed the SARS-
CoV-2 Illumina GeNome Assembly Line (SIGNAL). Written in Python, SIGNAL is built
using Snakemake to analyse raw SARS-CoV-2 sequencing data in parallel. SIGNAL has
contributed to surveillance platforms at the provincial and federal levels.
To reduce the complexity of biological samples, I also developed the Viral Syndromic Target Enrichment Pipeline (ViralSTEP) and EvoBaits, which propose two sets of molecular fishhooks, or baits, that allow the physical separation of viral nucleotides from non-viral nucleotides. One bait set targeted virus families associated with respiratory symptomology using known sequence information. The second bait set was designed using the evolutionary history of viral gene families to target the same respiratory-associated viruses.
Virus identification was possible through blinded validation studies, but we lacked sufficient material to reproduce the whole genome and allow for epidemiological analysis. Through this work, I developed tools that are a step toward creating more robust virus surveillance platforms, which will be critical in preparing for the next pandemic.
Thesis
Doctor of Philosophy (PhD)
Virus surveillance platforms using sequencing methods are needed to understand the circulation of human-infecting viruses within the community and how they change over time. Sequencing all virus genomes is often overwhelming in terms of volume – of both patients and data.
Through this work, I have addressed this problem in two ways: by advancing how we process large volumes of data and by collecting only the desired data, reducing what needs to be analysed.
I developed the SARS-CoV-2 Illumina GeNome Assembly Line (SIGNAL) during the
Coronavirus Disease 2019 (COVID-19) pandemic, demonstrating effective large-scale data
processing implemented nationally.
I also proposed a series of molecular fishhooks to simplify complex biological samples by
physically separating virus material. I showed that we could identify viruses but needed to
extract more virus material to develop a robust surveillance platform.
My work is a step forward in advancing clinical sequencing in preparation for another pandemic.
The Effect of Pre-Main Sequence Evolution on Star Cluster Dynamics
The effects of adding pre-main sequence stellar evolution to a stellar dynamics program are investigated. Based on available stellar evolution tracks, pre-main sequence evolution from birth to the zero-age main sequence was implemented into the popular dynamics code Starlab. Medium-sized star clusters were modeled under different circumstances, paying special attention to the differences in stellar population. In all, three sets of simulations were used. The first was a control set with all stars starting at
the main sequence. The second used similar parameters as the first, but with stars
beginning their evolution at the pre-main sequence. Because pre-main sequence stars
have such large radii, a large number of the binary stars were in contact. For the
third set, the binary parameters were adjusted to ensure that all of the binary stars
were detached. The second set of simulations produces a luminosity profile that is dominated by
high magnitude stars in the early years of the clusters. It also experiences a large
number of mergers, which affect a number of dynamical properties of the models. The
mergers lower the binary function of the clusters, which slightly affect the behaviour of
its core. More intermediate mass stars abound in the clusters, which leads to higher
mass loss through stellar evolution and more high velocity escaping star systems.
Fewer blue stragglers are observed since many of the close binaries merge very early
on in their existence.
The third set of simulations yields similar results, but mostly for different reasons. There are very few mergers in this implementation, but since there are few hard binaries and more soft binaries, many of the multiple systems break up, yielding a similar binary fraction to that of the second set of simulations. Very few of the binaries in these models circularize, in stark contrast to the first two sets of models, which experience circularization in a fraction of their binaries. These models also end up having a slightly higher concentration at the end of the simulation, with a core density roughly 3 times that of the other sets after 1.5 Gyr.
In general, adding pre-main sequence evolution to star cluster simulations decreases the binary fraction and the number of hard binaries in the cluster. Thus pre-main sequence evolution should be computed for high-detail simulations.
Thesis
Master of Science (MSc)
Temporal trends in COVID-19 vaccine uptake among social housing residents compared to the general population in Ontario, Canada: a population-based panel study
Background
This study examined temporal trends in COVID-19 vaccine uptake among social housing residents compared to the general population in Ontario, Canada, during the first year of vaccine availability.
Methods
We analyzed 2021 COVID-19 vaccination data from Ontario administrative databases. The social housing population was identified using postal codes of designated social housing buildings. Vaccination rates were compared quarterly across age and sex categories between social housing residents and the general population.
Results
In 2021, 14,842,488 eligible individuals were identified in Ontario administrative health data, with 328,276 individuals residing in social housing. By the end of 2021, 75.45 % of adult social housing residents were fully vaccinated (2 or more COVID-19 vaccine doses) compared to 87.46 % of the general adult population. This gap persisted over time and across sexes. Over the same period, 30.61 % of children and youth in social housing achieved full vaccination compared to 30.21 % of the general population, with greater vaccine uptake among females.
Conclusion
Despite COVID-19 vaccination policies aimed at prioritizing vulnerable groups in Ontario, Canada, adult social housing residents had lower vaccination rates compared to the general population. Children and youth in social housing achieved slightly higher vaccination coverage. These findings underscore the need for more targeted efforts to improve vaccine accessibility and uptake among social housing residents.
This study was funded by the McMaster COVID-19 Research Fund and the Ontario Health Data Platform (OHDP). The funders had no role in the study design, data collection, analysis, decision to publish, or preparation of the paper. All authors had full access to the study data and can take responsibility for the integrity of the data and the accuracy of the data analysis. All authors confirm independence from the funders.
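The headline adult coverage gap can be checked directly from the rates quoted in the Results (a trivial sketch; only the two reported percentages are used):

```python
# Percentage-point gap in full vaccination between adult social housing
# residents and the general adult population, using the rates reported above.
adult_social_housing = 75.45  # % fully vaccinated by end of 2021
adult_general = 87.46         # % fully vaccinated by end of 2021

gap_pp = round(adult_general - adult_social_housing, 2)
print(gap_pp)  # 12.01 percentage points
```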
The Interactions of Generation and Self-Reference
This thesis explored the individual and combined effects of two well-established memory-enhancing strategies: the generation effect and the self-reference effect. A total of 89 undergraduate participants completed sentence completion tasks that manipulated both generative tasks (generate vs. read) and personal relevance through the use of second-person possessive pronouns (“your” vs. “their”). Memory performance was assessed using both free and cued recall tasks to evaluate how these encoding strategies operate across different forms of relational memory. A 2x2 mixed ANOVA revealed a main effect of generation, with the generation condition recalling significantly more items than the read condition across both memory tests. A smaller but statistically significant main effect of self-reference was also observed, with items from sentences with self pronouns (“your”) being recalled significantly more than items from sentences with other pronouns (“their”). This suggests that even minimal linguistic cues can elicit enhanced encoding when linked to the self. Importantly, the interaction between generation and self-reference was not significant, indicating that these strategies provide additive benefits and can be used concurrently without interference. An exploratory analysis revealed that the generation effect was significantly larger in cued recall compared to free recall, possibly due to stronger cue-target associative encoding. These results contribute to our understanding of the underlying mechanisms of memory and support the practical application of combining generative learning with personalized language. 
The findings have implications for educational and cognitive training contexts, where both strategies may be integrated to enhance memory performance in a variety of learners.
Thesis
Master of Science (MSc)
This study investigated two well-established strategies for improving memory: the generation effect, where information is better remembered when individuals produce it themselves, and the self-reference effect, which enhances memory by relating information to oneself. We asked participants to complete sentence fragments like “Your glass contained your ___” or “Their glass contained their ___” using either self-related or other-related wording. We also compared how well they remembered the sentences using two types of memory tests: free recall (remembering as much as you can) and cued recall (remembering with hints).
We found that generating answers significantly improved memory, and that using “your” instead of “their” gave a small additional boost, showing that even subtle language cues can make information feel more personal and therefore more memorable. Interestingly, these two strategies didn’t interfere with each other, meaning people can use both at the same time to gain their combined benefits to learning. These findings have practical implications for education and communication, demonstrating that both active engagement and subtle shifts in language can meaningfully enhance memory.