18 research outputs found

    Efficacy and safety of lebrikizumab in adult patients with mild-to-moderate asthma not receiving inhaled corticosteroids

    Background: Asthma is a heterogeneous and complex disease in both its clinical course and response to treatment. IL-13 is central to Type 2 inflammation and contributes to many features of asthma. In a previous Phase 2 study, lebrikizumab, an anti-IL-13 monoclonal antibody, did not significantly improve FEV1 in patients with mild-to-moderate asthma not receiving ICS therapy. This Phase 3 study was designed to further assess the efficacy and safety of lebrikizumab in adult patients with mild-to-moderate asthma treated with daily short-acting β2-agonist therapy alone. Methods: Adult patients with mild-to-moderate asthma were randomised to receive lebrikizumab 125 mg subcutaneously (SC), placebo SC, or montelukast 10 mg orally for 12 weeks, with an 8-week follow-up period. The primary efficacy endpoint was absolute change in pre-bronchodilator FEV1 from baseline at Week 12. Findings: A total of 310 patients were randomised and dosed in the study. The mean absolute change in FEV1 from baseline at Week 12 was higher in the lebrikizumab-treated arm than in the placebo arm (150 mL versus 67 mL); however, this improvement did not reach statistical significance (overall adjusted difference of 83 mL [95% CI: -3, 170]; p = 0.06). Montelukast did not improve FEV1 compared with placebo. Lebrikizumab was generally safe and well tolerated during the study. Interpretation: Lebrikizumab did not significantly improve FEV1 in patients with mild-to-moderate asthma at a dose expected to inhibit the IL-13 pathway. Inhibiting IL-13 in this patient population was not sufficient to improve lung function. These data support the findings of a previous trial of lebrikizumab in patients not receiving ICS.

    Design and development of an artificial intelligent system for audio-visual cancer breast self-examination

    This paper presents the development of a computer system for breast cancer awareness and education, particularly in proper breast self-examination (BSE) performance. It covers the design and development of an artificial intelligent system (AIS) for audio-visual BSE capable of computer vision (CV), speech recognition (SR), speech synthesis (SS), and audio-visual (AV) feedback response. The AIS is named BEA, an acronym for Breast Examination Assistant, and acts as a virtual health care assistant that can help a female user perform proper BSE. BEA is composed of four interdependent modules: perception, memory, intelligence, and execution. Collectively, these modules form an intelligent operating architecture (IOA) that runs the BEA system. The development methods for the individual subsystems (CV, SR, SS, and AV feedback), together with the intelligent integration of these components, are discussed in the methodology section. Finally, the authors present the results of the tests performed on the system.
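    The four-module architecture described above can be sketched as a simple sense-store-decide-act pipeline. This is a hypothetical illustration only; the class and method names are invented and assume nothing about the authors' actual implementation.

```python
class Perception:
    """Stub standing in for the CV and speech-recognition inputs."""
    def sense(self, raw):
        return {"event": raw}

class Memory:
    """Stores the history of observations."""
    def __init__(self):
        self.history = []
    def store(self, obs):
        self.history.append(obs)

class Intelligence:
    """Maps an observation (plus memory) to a feedback decision."""
    def decide(self, obs, memory):
        return "correct_motion" if obs["event"] == "ok" else "show_guidance"

class Execution:
    """Stub standing in for speech synthesis and audio-visual feedback."""
    def act(self, decision):
        return f"feedback:{decision}"

class BEA:
    """One step of the four-module loop: perceive, remember, decide, act."""
    def __init__(self):
        self.p, self.m, self.i, self.e = Perception(), Memory(), Intelligence(), Execution()
    def step(self, raw):
        obs = self.p.sense(raw)
        self.m.store(obs)
        return self.e.act(self.i.decide(obs, self.m))

bea = BEA()
print(bea.step("ok"))  # feedback:correct_motion
```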

    Insect detection and monitoring in stored grains using MFCCs and artificial neural network

    The variability in grain production makes it necessary to have strategic grain storage plans in order to ensure adequate supplies at all times. However, insects in stored grain products cause infestation and contamination, which reduce grain quality and quantity. To prevent these problems, early detection and constant monitoring need to be implemented. Acoustic methods have been established in numerous studies as a viable approach for insect detection and monitoring with various sound parameterization and classification techniques. The aim of this study is to further demonstrate the efficacy of acoustic methods in pest management, mainly through feature extraction using Mel-frequency cepstral coefficients (MFCCs) and classification using an artificial neural network. The study used sounds from Sitophilus oryzae (L.), commonly known as the rice weevil, in the larval stage, recorded using five different acoustic sensors, with the purpose of proving the capability of an artificial neural network to recognize insect sounds regardless of the acoustic sensor used. Network models with varying numbers of hidden-layer nodes were tested in search of the highest attainable accuracy. Results show that the network with 25 hidden-layer nodes provides the best overall performance with 94.70% accuracy; training, validation, and testing accuracies were 95.10%, 94.00%, and 93.60%, respectively. However, the difference in accuracy values across all simulations never exceeded 1%. These results show that the proposed method is capable of recognizing insect sounds regardless of the acoustic sensor used, provided that proper acoustic signal preprocessing, feature extraction, and network implementation are performed. © 2017 IEEE
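    As an illustration of the feature-extraction step, a minimal MFCC pipeline (framing, windowing, power spectrum, mel filterbank, log, DCT) can be written with NumPy and SciPy. The frame size, hop, filter count, and sampling rate below are assumptions for illustration, not the study's actual settings.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal, sr=44100, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Minimal MFCC extraction: framing, power spectrum, mel filterbank, log, DCT."""
    # Frame the signal and apply a Hamming window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank.
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log filterbank energies, then DCT gives the cepstral coefficients.
    feats = np.log(power @ fbank.T + 1e-10)
    return dct(feats, type=2, axis=1, norm='ortho')[:, :n_ceps]

x = np.sin(2 * np.pi * 3000 * np.arange(44100) / 44100)  # 1 s synthetic test tone
print(mfcc(x).shape)  # (171, 13): one 13-coefficient vector per frame
```

Each row of the result is a feature vector that could be fed to the neural network classifier.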

    Detection and classification of public security threats in the Philippines using neural networks

    Threats to life and safety in public spaces have long been a concern of Filipinos. While laws are enforced and common safety practices are taught, these measures are little more than band-aid solutions to the problem. The immediate detection and classification of common public security threats from CCTV video feeds would be an immense help in protecting Filipinos. This study discusses the use of the pre-trained R-CNN Inception v2 model alongside tools for the other phases such as annotation, training, and testing, and highlights the process through which the study attained the goal of the system. © 2020 IEEE
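    Detectors in the R-CNN family rely on non-maximum suppression (NMS) to prune overlapping candidate boxes before reporting detections. A minimal NumPy sketch of that standard post-processing step (not the study's code):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Non-maximum suppression: keep the highest-scoring box,
    drop boxes that overlap it above the IoU threshold, repeat."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # indices by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the top box against the remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the two near-duplicates collapse to one
```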

    Vehicle detection and tracking using corner feature points and artificial neural networks for a vision-based contactless apprehension system

    Blocked intersections have been a contributing factor to city-wide traffic congestion, especially in metropolitan cities. This research study aims to develop a better traffic violations management system for city-road intersections by using a machine vision system that automatically identifies and tags traffic violations committed in an intersection. The proposed system has three main sub-systems: video capture, video analysis, and output. This study presents the development and results of a vehicle detection and tracking system using corner feature-point detection and artificial neural networks for the vision-based contactless traffic violations apprehension system. This detection and tracking system serves as the front-end processing in the video analysis sub-system. Experiments were conducted for different corner feature-point detection algorithms: Harris, Shi-Tomasi, and Features from Accelerated Segment Test (FAST). In the testing phase, Harris-ANN achieved 89.09% accuracy, Shi-Tomasi-ANN 88.48%, and FAST-ANN 90.30%. © 2017 IEEE
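    A minimal NumPy sketch of the Harris corner response, one of the three feature-point detectors compared above. The 3x3 box window and constant k = 0.04 are conventional defaults, not the study's reported settings.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response from the image structure tensor.
    A 3x3 box filter stands in for the usual Gaussian window."""
    # Image gradients via central differences (axis 0 = rows = y).
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    # Sum the gradient products over a 3x3 neighbourhood.
    def box(a):
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))
    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    # R = det(M) - k * trace(M)^2
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Synthetic image: a bright square; responses are positive at its corners
# and negative along its edges, which is the Harris criterion.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
print(R[5, 5] > 0, R[5, 10] < 0)  # True True
```

Shi-Tomasi replaces the det/trace combination with the smaller eigenvalue of the same structure tensor, so the code above only changes in its last line.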

    Microscopic road traffic scene analysis using computer vision and traffic flow modelling

    This paper presents the development of a vision-based system for microscopic road traffic scene analysis and understanding using computer vision and computational intelligence techniques. The traffic flow model is calibrated using information obtained from road-side cameras. The system aims to demonstrate understanding at different levels of traffic scene analysis, from simple detection, tracking, and classification of traffic agents to higher-level vehicular and pedestrian dynamics, traffic congestion build-up, and multi-agent interactions. The study used a video dataset suitable for analysis of a T-intersection. Vehicle detection and tracking achieved 88.84% accuracy and 88.20% precision. The system can classify private cars, public utility vehicles, buses, and motorcycles. The vehicular flow of every detected vehicle from origin to destination is also monitored for traffic volume estimation and volume distribution analysis. Lastly, a microscopic traffic model for a T-intersection was developed to simulate a traffic response based on actual road scenarios. © 2018 Fuji Technology Press. All Rights Reserved.
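    Origin-to-destination monitoring of the kind described can be sketched as zone classification on tracked centroids. The zone boundaries, image size, and trajectories below are invented for illustration; the paper derives them from detected and tracked vehicles in road-side camera footage.

```python
from collections import Counter

def classify_zone(x, y, width=640, height=480):
    """Map an image position to an approach of a T-intersection
    (zone boundaries are illustrative assumptions)."""
    if y < height * 0.2:
        return "north"
    if x < width * 0.2:
        return "west"
    if x > width * 0.8:
        return "east"
    return "inside"

def od_matrix(tracks):
    """Count origin -> destination flows from per-vehicle trajectories
    (each track is a list of (x, y) centroids from the tracker)."""
    flows = Counter()
    for track in tracks:
        origin = classify_zone(*track[0])
        dest = classify_zone(*track[-1])
        if origin != dest and "inside" not in (origin, dest):
            flows[(origin, dest)] += 1
    return flows

tracks = [
    [(10, 240), (320, 240), (630, 240)],   # west -> east
    [(10, 250), (320, 240), (620, 260)],   # west -> east
    [(320, 10), (320, 240), (15, 250)],    # north -> west
]
print(od_matrix(tracks))
```

Summing such a matrix over a time window yields the traffic volume estimates and volume distribution the abstract mentions.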

    Quality assessment of mangoes using convolutional neural network

    The Philippines is one of the countries known for exporting good-quality crops. Philippine mangoes are very popular for their sweet taste and are considered among the best. Hence, ensuring the quality of crops for export is essential. The study focused on using a convolutional neural network to determine the quality of the carabao mango (Mangifera indica). To ensure that all sides of the mango are considered in the quality assessment, a mechanical system with a conveyor belt, rollers, and a camera was used to gather videos for training and validating the model. The videos were extracted into frames and processed to remove the background and retain only the mango. The dataset is composed of different mangoes of both good and bad quality. The implemented model used a total of 5550 training samples with 94.99% accuracy and a total of 2320 validation samples with an accuracy of 97.21%. © 2019 IEEE
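    A toy forward pass (convolution, ReLU, global average pooling, sigmoid) shows the overall shape of such a good/defective classifier. The architecture and the random, untrained weights are purely illustrative and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid 2D convolution of a single-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((kernels.shape[0], H, W))
    for k, ker in enumerate(kernels):
        for i in range(H):
            for j in range(W):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def forward(img, kernels, w, b):
    """Conv -> ReLU -> global average pooling -> sigmoid, giving a
    single good/defective probability for one fruit image."""
    feats = np.maximum(conv2d(img, kernels), 0)   # conv + ReLU
    pooled = feats.mean(axis=(1, 2))              # global average pool
    logit = pooled @ w + b                        # dense layer
    return 1 / (1 + np.exp(-logit))               # sigmoid

img = rng.random((28, 28))               # stand-in for a preprocessed mango frame
kernels = rng.standard_normal((4, 3, 3)) # 4 untrained 3x3 filters
w, b = rng.standard_normal(4), 0.0
p = forward(img, kernels, w, b)
print(0.0 < p < 1.0)  # True: output is a valid probability
```

In practice the per-frame probabilities from all sides of the fruit would be aggregated into one quality decision.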

    A robotic model approach of an automated traffic violation detection system with apprehension

    This study proposes a robotic model approach to prototype a traffic violation detection system with an apprehension scenario before such a system is implemented on a real road. The model used two robots: one as the moving object detected by the camera, and one as the tracker that follows the moving object if its speed exceeds a certain limit. The captured camera images were fed to an algorithm that detects the centroid of the moving object to track its speed, thereby deciding whether it is moving beyond the reference speed. The result of this algorithm was fed to the tracker robot, which mobilizes and follows the moving object once it exceeds the speed limit. © 2018 IEEE
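    The centroid-based speed check can be sketched in a few lines. The frame rate, pixel scale, and speed limit below are invented values; the paper does not report its calibration.

```python
def centroid(points):
    """Centroid of a detected blob's pixel coordinates."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def exceeds_limit(frames, fps, px_per_unit, limit):
    """Estimate speed from frame-to-frame centroid displacement and flag
    whether it ever exceeds the reference limit (units per second)."""
    cents = [centroid(f) for f in frames]
    speeds = []
    for (x0, y0), (x1, y1) in zip(cents, cents[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / px_per_unit
        speeds.append(dist * fps)
    return max(speeds) > limit, speeds

# A blob moving 10 px/frame at 30 fps with 10 px per unit -> 30 units/s.
frames = [[(i * 10, 5), (i * 10 + 2, 7)] for i in range(4)]
flag, speeds = exceeds_limit(frames, fps=30, px_per_unit=10, limit=20)
print(flag)  # True: the tracker robot would be dispatched
```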

    Vehicle-pedestrian classification with road context recognition using convolutional neural networks

    In road traffic scene analysis, it is important to observe vehicular traffic and how pedestrian foot traffic affects the overall traffic situation. Road context is also significant for proper detection of vehicles and pedestrians. This paper presents a vehicle-pedestrian detection and classification system with road context recognition using convolutional neural networks. Using Catch-All traffic video datasets, the system was trained to identify vehicles and pedestrians in four different road conditions: a low-altitude view of a T-type road intersection (DSO), a mid-altitude view of a bus stop area in day-time (DS4-1) and night-time (DS4-3) conditions, and a high-altitude view of a wide intersection (DS3-1). In road context recognition, the system was first tasked to identify which of the four road conditions the current traffic scene belongs to. This is designed to ensure a high detection rate of vehicles and pedestrians in the mentioned road conditions. Road context recognition achieved 98.64% training accuracy with 2800 sample images and 100% validation accuracy with 1200 sample images. After road context recognition, a detection algorithm for vehicles and pedestrians was trained for each condition. In DSO, the training accuracy is 97.75% with 1200 image samples, while validation accuracy is 94.75% with 400 image samples. In DS3-1, the training accuracy is 98.63% with 1400 image samples, while validation accuracy is 98.29% with 600 image samples. In DS4-1, the training accuracy is 99.43% with 1400 image samples, while validation accuracy is 99.83% with 600 image samples. In DS4-3, the training accuracy is 97.77% with 1400 image samples, while validation accuracy is 98.29% with 600 image samples. © 2018 IEEE
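    The two-stage design, context recognition first and a per-context detector second, amounts to a dispatch step. In this sketch both stages are stubs (in the paper both are CNNs); the function names and frame format are invented.

```python
# Hypothetical per-context detectors; in the paper each is a CNN trained
# on one of the four road conditions.
def detector_ds0(image):   return f"DSO detections for {image}"
def detector_ds31(image):  return f"DS3-1 detections for {image}"
def detector_ds41(image):  return f"DS4-1 detections for {image}"
def detector_ds43(image):  return f"DS4-3 detections for {image}"

DETECTORS = {"DSO": detector_ds0, "DS3-1": detector_ds31,
             "DS4-1": detector_ds41, "DS4-3": detector_ds43}

def recognize_context(frame):
    # Stub for the road-context CNN; here the label simply rides
    # along with the frame for illustration.
    return frame["context"]

def analyze(frame):
    """Stage 1 picks the road condition; stage 2 runs that
    condition's vehicle-pedestrian detector."""
    context = recognize_context(frame)
    return DETECTORS[context](frame["image"])

print(analyze({"context": "DS4-1", "image": "frame_001"}))
```

The point of the dispatch is that each detector only ever sees scenes from the condition it was trained on, which is what keeps the per-condition detection rates high.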

    Vision-based passenger activity analysis system in public transport and bus stop areas

    This study presents the development of a vision system for passenger activity analysis in public transport and bus stop areas. The vision system used a people detection and counting algorithm to track the flow of boarding and alighting passengers in a bus stop area. A fuzzy logic controller used inputs from the vision system to determine boarding and alighting frequencies for analysis of bus route and dwell time, to avoid the long queueing that usually causes traffic congestion. People detection and counting using the DS6 dataset (indoor) achieved 96.81% accuracy with 97.93% precision; using the DS4-1 dataset (outdoor, bus stop area), it achieved 80.39% accuracy with 87.13% precision. Fuzzy simulation results show a boarding frequency of 22 passengers/minute and an alighting frequency of 12 passengers/minute. The vision system also analyzed the boarding and alighting of passengers in no-loading and no-unloading areas, events that usually cause traffic bottlenecks due to road blockage and long bus queues. In the analysis of the DS4-1 (24-hour) videos, a total of 212 no-loading/unloading violations were recorded.