659 research outputs found
An EMG Gesture Recognition System with Flexible High-Density Sensors and Brain-Inspired High-Dimensional Classifier
EMG-based gesture recognition shows promise for human-machine interaction.
Systems are often afflicted by signal and electrode variability, which degrades
performance over time. We present an end-to-end system combating this
variability using a large-area, high-density sensor array and a robust
classification algorithm. EMG electrodes are fabricated on a flexible substrate
and interfaced to a custom wireless device for 64-channel signal acquisition
and streaming. We use brain-inspired high-dimensional (HD) computing for
processing EMG features in one-shot learning. The HD algorithm is tolerant to
noise and electrode misplacement and can quickly learn from few gestures
without gradient descent or back-propagation. We achieve an average
classification accuracy of 96.64% for five gestures, with only 7% degradation
when training and testing across different days. Our system maintains this
accuracy when trained with only three trials of gestures; it also demonstrates
accuracy comparable to the state of the art when trained with a single trial.
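As a rough illustration of the HD computing scheme described above, the following sketch encodes 64-channel EMG feature vectors into bipolar hypervectors and learns class prototypes in a single pass, with no gradient descent. The dimensionality, quantization, and encoding details are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

D = 10_000          # hypervector dimensionality, typical for HD computing
N_CHANNELS = 64     # one item vector per EMG channel
N_LEVELS = 21       # quantization levels for feature amplitudes in [0, 1]

rng = np.random.default_rng(0)

# Random bipolar item memory: one hypervector per channel and per level.
channel_hv = rng.choice([-1, 1], size=(N_CHANNELS, D))
level_hv = rng.choice([-1, 1], size=(N_LEVELS, D))

def encode(features):
    """Bind each channel's item vector with its quantized level vector,
    then bundle (sum + sign) across channels into one hypervector."""
    levels = np.clip((features * N_LEVELS).astype(int), 0, N_LEVELS - 1)
    bound = channel_hv * level_hv[levels]   # element-wise binding
    return np.sign(bound.sum(axis=0))       # bundling

def train(trials_per_gesture):
    """One-shot learning: a class prototype is just the bundled sum of
    its encoded trials -- no back-propagation involved."""
    return {g: np.sign(sum(encode(t) for t in trials))
            for g, trials in trials_per_gesture.items()}

def classify(prototypes, features):
    """Return the gesture whose prototype is most similar to the query."""
    query = encode(features)
    return max(prototypes, key=lambda g: prototypes[g] @ query)
```

Because prototypes are simple bundles, adding a new gesture or retraining after electrode shift is a single pass over a handful of trials, which is what makes the scheme attractive for few-trial training.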
Auto-tuning Distributed Stream Processing Systems using Reinforcement Learning
Fine-tuning distributed systems is considered a craft, relying
on intuition and experience. This becomes even more challenging when the
systems need to react in near real time, as streaming engines have to do to
maintain pre-agreed service quality metrics. In this article, we present an
automated approach that builds on a combination of supervised and reinforcement
learning methods to recommend the most appropriate lever configurations based
on previous load. With this, streaming engines can be automatically tuned
without requiring a human to determine the right way and proper time to deploy
them. This opens the door to new configurations that are not being applied
today since the complexity of managing these systems has surpassed the
abilities of human experts. We show how reinforcement learning systems can find
substantially better configurations in less time than their human counterparts
and adapt to changing workloads.
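One plausible shape for such a tuner is tabular Q-learning over discretized load levels (states) and lever configurations (actions), with reward tied to an observed service-quality metric. The lever names, discretization, and latency-based reward below are hypothetical, a minimal sketch rather than the system described in the abstract.

```python
import random
from collections import defaultdict

# Hypothetical lever settings a streaming engine might expose.
ACTIONS = [(workers, buffer) for workers in (2, 4, 8) for buffer in (128, 512)]

class ConfigTuner:
    """Tabular Q-learning: states are discretized load levels, actions are
    lever configurations, and reward is the negative observed latency."""

    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, load_level):
        if random.random() < self.epsilon:                   # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(load_level, a)])  # exploit

    def update(self, load_level, action, latency_ms, next_load_level):
        """Standard Q-learning backup after observing one latency sample."""
        reward = -latency_ms
        best_next = max(self.q[(next_load_level, a)] for a in ACTIONS)
        key = (load_level, action)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```

Because unvisited (state, action) pairs start at 0 while observed rewards are negative, the argmax is optimistic and naturally tries each configuration before settling, which stands in for the supervised warm-start the article combines with reinforcement learning.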
A robotic telescope for university-level distance teaching
We present aspects of the deployment of a remotely operable telescope for teaching practical science to distance-learning undergraduate students. We briefly describe the technical realisation of the facility, PIRATE, in Mallorca and elaborate on how it is embedded in the Open University curriculum. The PIRATE teaching activities were studied as part of a wider research project into the importance of realism, sociability and metafunctionality for the effectiveness of virtual and remote laboratories in teaching practical science. We find that students accept virtual experiments (e.g. a telescope simulator) when these deliver genuine, "messy" data, when it is made clear how they differ from a realistic portrayal, and when they are flagged as training tools. A robotic telescope is accepted in place of on-site practical work when realistic activities are included, the internet connection is stable, and there is at least one live video feed. The robotic telescope activity should include group work and facilitate social modes of learning. Virtual experiments, though normally considered asynchronous tools, should also include social interaction. To improve student engagement and learning outcomes, a greater degree of situational awareness of the robotic telescope setting should be provided. We conclude this report with a short account of the current status of PIRATE after its relocation from Mallorca to Tenerife and its integration into the OpenScience Observatories.
Evaluating Internal and External Data Points in Long-term Periodical Testing with Protection Relays
A protection relay is a part of the electrical network intended to protect the distribution network and to react in abnormal situations. A protection relay can be electromechanical, static, microprocessor-based, or digital, also known as numerical. The numerical protection relay, which uses a digital signal processor, is the newest type of protection relay. With the modernization of protection relays, the various failures that occur inside them are becoming more challenging to detect. Therefore, new test methods need to be developed to detect protection relay failures and to test the functionality of protection relays.
Continuous testing, quality assurance, and development of protection relay testing methods will reduce the failures that may occur in protection relays. The purpose of this study is to reduce the failure situations of protection relays and to focus on ensuring and improving the quality of protection relays throughout their continually expanding lifetime by applying a new test method, long-term testing, as part of a long-term test system. This study is carried out in collaboration with ABB Oy, a major global technology company with operations in over 100 countries that specializes in automation and electrification.
The first objective of this study is to conduct long-term testing with a protection relay called REX640 and to automate the entire testing process, which includes collecting, storing, and analyzing data programmatically. In this study, data is collected from internal and external data points of the REX640. The research aims to discover which data is relevant in long-term periodical testing with protection relays for evaluating the prediction of protection relay failures. To discover the relevant data, the first research question is defined as follows: What data is relevant in long-term periodical testing with protection relays?
A proper method must also be found for collecting the relevant data. The second objective of this study is therefore to discover a method to evaluate, store, and analyze the collected protection relay data, so that protection relay failures can be detected. To find the proper method, the second research question is defined as follows: Which method will be used to collect the relevant data?
The interviews, literature review, and automated test environment developed in this study made it possible to answer the research questions. As a result, the study identified the relevant data in long-term periodical testing and a proper method to collect it. The relevant data and the discovered method will make it possible to identify defects that may occur in the protection relay and to evaluate the predictability of protection relay failures with long-term periodical testing before a major disruption occurs in the distribution network.
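The long-term test loop described in this abstract, periodically reading internal and external data points, storing them programmatically, and checking them for signs of failure, can be sketched as follows. The data-point names, expected ranges, and SQLite storage are hypothetical stand-ins, not the thesis's actual test environment.

```python
import sqlite3
import time

# Hypothetical data points with expected (min, max) ranges: internal
# points come from the relay itself, external points from its surroundings.
INTERNAL_POINTS = {"supply_voltage_v": (21.0, 27.0), "cpu_temp_c": (0.0, 85.0)}
EXTERNAL_POINTS = {"ambient_temp_c": (-10.0, 55.0)}

def init_store(path=":memory:"):
    """Create the readings table used by the periodic test cycles."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS readings
                  (ts REAL, point TEXT, value REAL, in_range INTEGER)""")
    return db

def record_cycle(db, read_point, now=None):
    """One periodic test cycle: read every data point, store it, and
    flag values outside their expected range as potential defects."""
    ts = now if now is not None else time.time()
    flagged = []
    for point, (lo, hi) in {**INTERNAL_POINTS, **EXTERNAL_POINTS}.items():
        value = read_point(point)
        ok = lo <= value <= hi
        db.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
                   (ts, point, value, int(ok)))
        if not ok:
            flagged.append((point, value))
    db.commit()
    return flagged
```

Accumulated readings of this kind are what make trend analysis possible: a value that drifts toward its range boundary over many cycles can be flagged before an outright failure occurs.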
HIL: designing an exokernel for the data center
We propose a new Exokernel-like layer to allow mutually untrusting, physically deployed services to efficiently share the resources of a data center. We believe that such a layer offers not only efficiency gains, but may also enable new economic models, new applications, and new security-sensitive uses. A prototype (currently in active use) demonstrates that the proposed layer is viable and can support a variety of existing provisioning tools and use cases.
Partial support for this work was provided by the MassTech Collaborative Research Matching Grant Program, National Science Foundation awards 1347525 and 1149232, as well as the several commercial partners of the Massachusetts Open Cloud, who may be found at http://www.massopencloud.or
Implementation and Deployment of a Distributed Network Topology Discovery Algorithm
In the past few years, the network measurement community has been interested
in the problem of internet topology discovery using a large number (hundreds or
thousands) of measurement monitors. The standard way to obtain information
about the internet topology is to use the traceroute tool from a small number
of monitors. Recent papers have made the case that increasing the number of
monitors will give a more accurate view of the topology. However, scaling up
the number of monitors is not a trivial process. Duplication of effort close to
the monitors wastes time by reexploring well-known parts of the network, and
close to destinations might appear to be a distributed denial-of-service (DDoS)
attack as the probes converge from a set of sources towards a given
destination. In prior work, authors of this report proposed Doubletree, an
algorithm for cooperative topology discovery that reduces the load on the
network, i.e., router IP interfaces and end-hosts, while discovering almost as
many nodes and links as standard approaches based on traceroute. This report
presents our open-source and freely downloadable implementation of Doubletree
in a tool we call traceroute@home. We describe the deployment and validation of
traceroute@home on the PlanetLab testbed and we report on the lessons learned
from this experience. We discuss how traceroute@home can be developed further
and present ideas for future improvements.
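Doubletree's two stopping rules can be sketched in a toy form: forward probing toward the destination halts on an (interface, destination) pair already in the shared global stop set, and backward probing toward the monitor halts on an interface in the monitor's local stop set. This version walks a known path list instead of sending real probes, so it only illustrates the stopping logic, not the measurement machinery.

```python
def doubletree_probe(path, start_ttl, local_stop, global_stop):
    """Probe one destination along a known path (a list of router
    interfaces ending at the destination), starting at start_ttl.

    local_stop:  set of interfaces this monitor has already seen (backward rule).
    global_stop: set of (interface, destination) pairs shared among monitors
                 (forward rule).  Returns the newly discovered interfaces."""
    dest = path[-1]
    discovered = []
    # Forward probing: from start_ttl towards the destination.
    for hop in path[start_ttl:]:
        if (hop, dest) in global_stop:
            break                       # another monitor covered the rest
        discovered.append(hop)
        global_stop.add((hop, dest))
    # Backward probing: from start_ttl - 1 towards the monitor.
    for hop in reversed(path[:start_ttl]):
        if hop in local_stop:
            break                       # this monitor covered the rest
        discovered.append(hop)
        local_stop.add(hop)
    return discovered
```

A second probe of the same destination discovers nothing new, which is exactly how Doubletree avoids re-exploring well-known regions near the monitor and converging probe floods near the destination.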
Space-Efficient Predictive Block Management
With growing disk and storage capacities, tracking metadata for every block in a system becomes a daunting task in itself. In previous work, we demonstrated a system software effort in the area of predictive data grouping for reducing power and latency on hard disks. The structures used, very similar to prior efforts in prefetching and prefetch caching, track access successor information at the block level, keeping a fixed number of immediate successors per block. While providing powerful predictive expansion capabilities and requiring less metadata than many previous strategies, a growing concern remains over how much data is actually required. In this paper, we present SESH, a Space Efficient Storage of Heredity, a novel method of storing equivalent information. This method exploits the high degree of block-level predictability observed in a number of workload trace sets to reduce overall metadata storage by up to 99% without any loss of information. As a result, we are able to provide a predictive tool that is adaptive, accurate, and robust in the face of workload noise, for a tiny fraction of the metadata cost previously anticipated; in some cases, reducing the required size from 12 gigabytes to less than 150 megabytes.
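The per-block successor structure that SESH compresses can be sketched as follows: each block keeps a fixed number of its most recent distinct immediate successors, and predictive expansion follows those links. The table layout and eviction policy here are illustrative assumptions, not the paper's exact data structure.

```python
from collections import OrderedDict

K = 3  # fixed number of immediate successors kept per block

class SuccessorTable:
    """Track, for each block, its K most recent distinct immediate
    successors -- the block-level metadata that SESH then compresses."""

    def __init__(self):
        self.succ = {}    # block -> OrderedDict of successors, LRU-ordered
        self.prev = None  # previously accessed block

    def access(self, block):
        """Record one block access, linking it to its predecessor."""
        if self.prev is not None:
            succs = self.succ.setdefault(self.prev, OrderedDict())
            succs.pop(block, None)         # refresh recency if already present
            succs[block] = True
            if len(succs) > K:
                succs.popitem(last=False)  # evict least recently seen successor
        self.prev = block

    def predict(self, block, depth=2):
        """Predictive expansion: follow the most recent successor links."""
        out, cur = [], block
        for _ in range(depth):
            succs = self.succ.get(cur)
            if not succs:
                break
            cur = next(reversed(succs))    # most recently seen successor
            out.append(cur)
        return out
```

With a fixed K per block, the metadata grows linearly with the number of tracked blocks, which is precisely the cost that motivates compressing the table further.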
- …