Effect of surface modification on mechanical properties of buri palm (corypha utan) fibre composite reinforcement
Natural fibre materials are replacing synthetic fibres because they are low-cost, lightweight, and biodegradable engineering materials with good specific strength. However, the effects of several process and geometrical parameters (such as fibre type, size, and concentration, and chemical modification) on the strength of the final natural composite product are not well documented. The purpose of this research is to analyse the physical
and mechanical properties of single-strand buri palm fibre under different conditions and surface
modification. The buri palm fibre was treated with 5 wt.% and 10 wt.% sodium hydroxide (NaOH) for immersion periods of 1 h and 24 h. For the single-strand tests, samples were carefully extracted by hand from the corresponding woven fibre, while the woven buri palm fibre composites were fabricated with 4- and 5-layer stacking sequences using the hand lay-up technique followed by the compression method. The buri palm fibre showed that higher NaOH concentrations and longer immersion periods led to lower density. The effectiveness of
the alkali treatment in removing cellulose and hemicellulose from the fibre strands was verified by FTIR analysis of the chemical composition. The highest tensile strength, 159.16 MPa, was obtained for single strands treated with 5 wt.% NaOH with 24 h immersion. This treatment was found to be the most appropriate and was employed to fabricate both the 4-layer and 5-layer stacking-sequence composites. The 5-layer treated composite gave the highest tensile and flexural strengths of 33.51 MPa and 56.72 MPa, respectively. In conclusion, the
mechanical properties increased with each additional layer of treated fibres in the composite. The obtained results indicate that buri palm fibre can be used as a reinforcement in epoxy composites for lightweight, moderate-load applications, such as interior parts in the automotive industry.
Fusion of Smartphone Motion Sensors for Physical Activity Recognition
For physical activity recognition, smartphone sensors, such as the accelerometer and the gyroscope, are being utilized in many research studies. So far, the accelerometer in particular has been extensively studied. In a few recent studies, a combination of a gyroscope, a magnetometer (in a supporting role) and an accelerometer (in a lead role) has been used with the aim of improving recognition performance. How and when are the various motion sensors available on a smartphone best used for better recognition performance, either individually or in combination? This is yet to be explored. In order to investigate this question, in this paper, we explore how these various motion sensors behave in different situations in the activity recognition process. For this purpose, we designed a data collection experiment where ten participants performed seven different activities carrying smartphones at different positions. Based on the analysis of this data set, we show that these sensors, except the magnetometer, are each capable of taking the lead role individually, depending on the type of activity being recognized, the body position, the data features used and the classification method employed (personalized or generalized). We also show that their combination only improves the overall recognition performance when their individual performances are not very high, so that there is room for performance improvement. We have made our data set and our data collection application publicly available, thereby making our experiments reproducible.
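To make the abstract's sensor-fusion idea concrete, the sketch below computes simple window-level features (mean and standard deviation of the magnitude signal) from accelerometer and gyroscope samples, which a classifier could then consume individually or combined. This is an illustrative sketch, not the paper's implementation; the axis layout and 50-sample window are assumptions.

```python
# Hedged sketch: per-window features from two motion sensors.
from statistics import mean, stdev

def magnitude(sample):
    """Euclidean norm of one (x, y, z) sensor sample."""
    x, y, z = sample
    return (x * x + y * y + z * z) ** 0.5

def window_features(accel, gyro):
    """Mean and standard deviation of the magnitude signal per sensor.

    Fusing sensors here just means concatenating their feature sets.
    """
    feats = {}
    for name, samples in (("accel", accel), ("gyro", gyro)):
        mags = [magnitude(s) for s in samples]
        feats[name + "_mean"] = mean(mags)
        feats[name + "_std"] = stdev(mags)
    return feats

# Example: a still phone shows ~9.81 m/s^2 accelerometer magnitude and
# near-zero gyroscope rotation, so both std features stay near zero.
still_accel = [(0.0, 0.0, 9.81)] * 50
still_gyro = [(0.01, 0.0, 0.0)] * 50
print(window_features(still_accel, still_gyro))
```

Feeding such per-sensor feature vectors, alone or concatenated, into any standard classifier is one plausible way to compare individual sensors against their combination, as the study does.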
Detecting Physical Activity within Lifelogs towards Preventing Obesity and Aid Ambient Assisted Living
Obesity is a global health issue that affects 2.1 billion people worldwide and has an economic impact of approximately $2 trillion. It is a disease that can worsen the aging process by impairing physical function, which can lead to people becoming more frail and immobile. Nevertheless, it is envisioned that technology can be used to help motivate behavioural changes to combat this preventable condition. The ubiquitous presence of wearable and mobile devices has enabled a continual stream of quantifiable data (e.g. physiological signals) to be collected about ourselves. This data can then be used to monitor physical activity to aid self-reflection and motivate behaviour change. However, such information is susceptible to noise interference, which makes processing and extracting knowledge from such data challenging. This paper presents an approach that collects and processes physiological data from tri-axial accelerometers and a heart-rate monitor to detect physical activity. Furthermore, an end-user use case application has also been proposed that integrates these findings into a smartwatch visualisation. This provides a method of visualising the results to the user so that they can gain an overview of their activity. The goal of the paper has been to evaluate the performance of supervised machine learning in distinguishing physical activity. This has been achieved by (i) focusing on wearable sensors to collect data and using our methodology to process this raw lifelogging data so that features can be extracted/selected. (ii) Undertaking an evaluation of ten supervised learning classifiers to determine their accuracy in detecting human activity. To demonstrate the effectiveness of our method, this evaluation has been performed across a baseline method and two other methods.
(iii) Undertaking an evaluation of the processing time of the approach, the smartwatch battery, and the network cost of transferring data from the smartwatch to the phone. The results of the classifier evaluations indicate that our approach improves on existing studies, with accuracies of up to 99% and sensitivities of 100%.
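Before features can be extracted from raw lifelogging data, the continuous sensor stream is typically segmented into fixed-size windows. The sketch below shows a common form of this preprocessing step, sliding windows with 50% overlap; the window length and overlap here are assumptions for illustration, not values from the paper.

```python
# Hedged sketch: fixed-size sliding windows over a raw lifelog signal,
# a standard preprocessing step before feature extraction/selection.

def sliding_windows(signal, size, step):
    """Return consecutive windows of `size` samples, advancing by `step`.

    With step == size // 2 the windows overlap by 50%, which smooths
    transitions between activities at window boundaries.
    """
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

samples = list(range(10))  # stand-in for tri-axial accelerometer samples
windows = sliding_windows(samples, size=4, step=2)  # 50% overlap
print(windows)  # [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Each window would then yield one feature vector (e.g. means, variances, heart-rate statistics) and one label for the supervised classifiers compared in the evaluation.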
MonitorMe: an Android-based activity recognition system
Master's in Computer and Telematics Engineering

The monitoring of a given person can be important in different day-to-day
scenarios. Monitoring can be performed by detecting activities while being
carried out. Presently, various sensors with potential for activity recognition are
being included in mobile devices, so they are particularly interesting for this
type of monitoring.
A complementary way of monitoring consists in the use of a video recording of
the subject’s surrounding environment. However, given the large size of the
videos for transmission through wireless links or even for storage in the
device, it is necessary to compress and reduce the corresponding information.
This can be achieved by adapting the frame rate of the captured images to the
speed of the user being monitored.
In this dissertation an online monitoring system, MonitorMe, which performs
activity recognition and video recording of the surrounding environment of a
subject, is proposed. This system includes an Android smartphone, inserted in
a shirt pocket, and a MARG (Magnetic, Angular Rate and Gravity) module,
placed in a pants pocket. A smartphone application was developed, which
collects data from the sensors integrated in both devices to perform the online
recognition of 6 different activities (standing, sitting, lying, walking, running and falling). This was achieved using a low-computational-cost algorithm, which
took into account the existing restrictions regarding processing power and
battery life of mobile phones.
In parallel with activity recognition, the smartphone camera captures images
with a frame rate that varies with the user's speed, the latter estimated from
sensor data processed for activity recognition. This demonstrates the
possibility of reducing the required transmission bandwidth or the storage in
the mobile device, with a low computational cost.
The MonitorMe system was trained and then tested using data collected in two experiments with 10 participants, yielding a total of 440 distinct events with a total duration of 45 minutes (2/3 used for training and 1/3 for testing). The overall results showed a sensitivity greater than 93% and a specificity greater than 98% for activity recognition, and an average relative error of 8.6% for speed estimation.
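The speed-adaptive frame rate described above can be sketched as a simple mapping: a faster user gets a higher capture rate, clamped between a minimum and a maximum. The specific rates and the full-speed threshold below are illustrative assumptions, not values from the dissertation.

```python
# Hedged sketch of the speed-adaptive capture rate idea from MonitorMe:
# scale the frame rate linearly with estimated speed, then clamp it.

def frame_rate(speed_mps, min_fps=0.2, max_fps=5.0, full_speed=3.0):
    """Map an estimated walking/running speed (m/s) to a capture rate (fps).

    min_fps keeps some coverage even when the user is still; full_speed
    is the (assumed) speed at which the maximum rate is reached.
    """
    fps = min_fps + (max_fps - min_fps) * (speed_mps / full_speed)
    return max(min_fps, min(max_fps, fps))

print(frame_rate(0.0))   # stationary -> minimum rate (0.2 fps)
print(frame_rate(1.5))   # walking -> mid-range rate
print(frame_rate(10.0))  # fast running -> clamped at max_fps (5.0)
```

Because the speed estimate is already computed from the sensor data used for activity recognition, this mapping adds negligible computational cost while reducing the number of images to transmit or store.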