
    Heuristic search methods and cellular automata modelling for layout design

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Spatial layout design must consider not only ease of movement for pedestrians under normal conditions, but also their safety in panic situations, such as an emergency evacuation in a theatre, stadium or hospital. Statistics from pedestrian simulations of crowd movement can be used to study the consequences of different spatial layouts. Previous works either create an optimal spatial arrangement or an optimal pedestrian circulation; they do not automatically optimise both problems simultaneously. The idea behind the research in this thesis is therefore to achieve a vital architectural design goal by automatically producing an optimal spatial layout that enables smooth pedestrian flow. The automated process developed here allows the rapid identification of layouts for large, complex spatial layout problems. This is achieved by using Cellular Automata (CA) to model pedestrian simulation, so that pedestrian flow can be explored at a microscopic level, and by designing a fitness function for heuristic search that maximises pedestrian flow statistics in the CA simulation. An analysis is conducted of the pedestrian flow statistics generated from feasible novel design solutions produced by the heuristic search techniques (hill climbing, simulated annealing and genetic algorithm style operators). The statistics obtained from the pedestrian simulation are used to measure and analyse pedestrian flow behaviour, and the analysis of these statistics also indicates the quality of the generated spatial layout designs. The technique, incorporated with the pedestrian simulator, has shown promising results in finding acceptable solutions to this problem when demonstrated on simulated and real-world layouts with real pedestrian data. This study was funded by Universiti Sains Malaysia and Kementerian Pengajian Tinggi Malaysia.
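
    A minimal Python sketch of the thesis's search idea, assuming a drastically simplified cellular-automaton flow model: a hill-climbing loop moves obstacles around a grid and keeps layouts that improve a toy evacuated-count fitness. The grid size, obstacle count and simulator below are illustrative stand-ins, not the thesis's actual model or data.

    import random

    GRID = 15          # square layout size (toy assumption)
    N_OBSTACLES = 20   # number of obstacle cells to place
    STEPS = 60         # CA time steps per fitness evaluation

    def simulate_flow(obstacles):
        # Toy CA: agents start on the left edge and step right toward the
        # exit column; obstacle or occupied cells make them wait.
        agents = {(r, 0) for r in range(GRID)}
        evacuated = 0
        for _ in range(STEPS):
            moved = set()
            # Process rightmost agents first so followers see updated cells.
            for r, c in sorted(agents, key=lambda a: -a[1]):
                nxt = (r, c + 1)
                if nxt[1] >= GRID:
                    evacuated += 1          # agent leaves through the exit edge
                    continue
                if nxt in obstacles or nxt in moved:
                    nxt = (r, c)            # blocked: wait in place this step
                moved.add(nxt)
            agents = moved
        return evacuated                     # the pedestrian-flow statistic

    def random_layout():
        cells = [(r, c) for r in range(GRID) for c in range(1, GRID - 1)]
        return set(random.sample(cells, N_OBSTACLES))

    def hill_climb(iterations=200):
        layout = random_layout()
        best = simulate_flow(layout)
        for _ in range(iterations):
            # Neighbour move: relocate one obstacle to a random interior cell.
            neighbour = set(layout)
            neighbour.discard(random.choice(sorted(neighbour)))
            neighbour.add((random.randrange(GRID), random.randrange(1, GRID - 1)))
            score = simulate_flow(neighbour)
            if score >= best:                # keep equal-or-better layouts
                layout, best = neighbour, score
        return layout, best

    layout, flow = hill_climb()
    print("evacuated agents for best layout found:", flow)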

    Recurrent Stroke Prediction using Machine Learning Algorithms with Clinical Public Datasets: An Empirical Performance Evaluation

    Recurrent strokes can be devastating, often resulting in severe disability or death. However, nearly 90% of the causes of recurrent stroke are modifiable, which means recurrent strokes can be averted by controlling risk factors, which are mainly behavioral and metabolic in nature. Previous work thus suggests that a recurrent stroke prediction model could help in minimizing the possibility of a recurrent stroke. Previous works have shown promising results in predicting first-time stroke cases with machine learning approaches; however, there are limited works on recurrent stroke prediction using machine learning methods. Hence, this work performs an empirical analysis and investigates the implementation of machine learning algorithms in recurrent stroke prediction models. This research aims to investigate and compare the performance of machine learning algorithms using recurrent stroke clinical public datasets. In this study, an Artificial Neural Network (ANN), a Support Vector Machine (SVM) and a Bayesian Rule List (BRL) are applied and their performance compared in the domain of recurrent stroke prediction. The results of the empirical experiments show that the ANN scores the highest accuracy at 80.00%, followed by BRL with 75.91% and SVM with 60.45%.
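
    A hedged scikit-learn sketch of the kind of comparison reported above: an ANN (MLPClassifier) and an SVM are trained and scored by accuracy. The clinical datasets are not reproduced here, so a synthetic dataset stands in, and the Bayesian Rule List model (which needs a separate library) is omitted; all parameters below are illustrative, not the paper's settings.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    # Placeholder for clinical features (age, blood pressure, ...) and a
    # binary "recurrent stroke" label; not the study's data.
    X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    models = {
        "ANN": make_pipeline(StandardScaler(),
                             MLPClassifier(hidden_layer_sizes=(16,),
                                           max_iter=1000, random_state=0)),
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"{name}: accuracy = {acc:.2%}")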

    Spatial layout design factors during panic situations

    Crowd management addresses human-traffic problems by controlling and managing crowd activities through monitoring, simulation and model design. This concept paper discusses crowd management and identifies the major contributing factors that lead to casualties during panic situations. Crowd management is closely related to spatial management, which strongly affects pedestrian movement in a given situation and space. Hence, this concept paper provides a validation of how pedestrian behavior reflects the spatial layout design during panic situations.

    Cellular automata model for pedestrian evacuation in fire spreading conditions

    In this paper, a two-dimensional cellular automata model is presented to simulate pedestrian evacuation in fire spreading conditions. In this proposed model, the movement of pedestrians is represented as “chaotic”, mimicking panic egress behaviors during a fire evacuation. The model includes a circular fire-front shape based on a spiral movement technique. Simulation results show that this model can be used to predict the number of pedestrians who have evacuated safely or have been killed.
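
    A small sketch of the fire-spreading idea, assuming the circular front can be approximated by a radius that grows by a fixed rate per time step (the paper's spiral-movement construction is not reproduced); the grid size, ignition point and pedestrian paths below are invented for illustration.

    import math

    GRID = 30
    IGNITION = (15, 15)     # assumed ignition cell
    SPREAD_RATE = 1.0       # cells per time step (assumed)

    def fire_cells(t):
        # Cells on fire at time step t: everything within the current radius.
        radius = SPREAD_RATE * t
        return {(r, c)
                for r in range(GRID) for c in range(GRID)
                if math.dist((r, c), IGNITION) <= radius}

    def count_outcomes(pedestrian_paths, exit_cell=(0, 0)):
        # Given each pedestrian's planned path (one cell per step), count
        # who reaches the exit before the fire reaches them.
        evacuated = killed = 0
        for path in pedestrian_paths:
            outcome = "trapped"
            for t, cell in enumerate(path):
                if cell in fire_cells(t):
                    outcome = "killed"
                    break
                if cell == exit_cell:
                    outcome = "evacuated"
                    break
            evacuated += outcome == "evacuated"
            killed += outcome == "killed"
        return evacuated, killed

    near = [(0, c) for c in range(5, -1, -1)]     # starts close to the exit
    far = [(0, c) for c in range(29, -1, -1)]     # starts far from the exit
    print(count_outcomes([near, far]))            # one evacuates, one is caught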

    Using Clustering and Predictive Analysis of Infected Area on Dengue Outbreaks in Malaysia

    Machine learning and data mining have a great impact on the predictive analysis process. Feature classification in machine learning can be combined with clustering methods to support further analysis of the targeted issues. Nowadays, epidemic disease outbreaks have caused great concern in the Malaysian community, as these diseases can cause great fatality. One of the common killer epidemic diseases in Malaysia is dengue fever, which is caused by the dengue virus spread by Aedes mosquitoes. Outbreaks cause a number of deaths, and their severity varies across the states of Malaysia. The factors that cause this epidemic disease were determined and data on dengue outbreaks in Malaysia were gathered. To predict the infected areas of dengue, the data were mined and a machine learning method was implemented. In this study, the clustering method in machine learning for predictive analysis is shown to be an effective method for determining the most infected areas of dengue outbreaks in Malaysia: Selangor and W.P. Kuala Lumpur/Putrajaya. These areas are among the busiest in Malaysia, with large populations whose high levels of physical contact promote dengue outbreaks.
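
    A minimal clustering sketch in the spirit of the study, assuming KMeans on per-state case counts; the abstract does not name the clustering algorithm it used, and the counts below are placeholders, not the study's data.

    import numpy as np
    from sklearn.cluster import KMeans

    states = ["Selangor", "W.P. Kuala Lumpur/Putrajaya", "Johor",
              "Perak", "Kelantan", "Sabah"]
    # Illustrative reported-case counts only, not real figures.
    cases = np.array([[950], [620], [310], [120], [90], [60]], dtype=float)

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cases)
    hot = int(np.argmax(kmeans.cluster_centers_))    # cluster with the higher mean
    for state, label in zip(states, kmeans.labels_):
        tag = "hotspot" if label == hot else "lower-risk"
        print(f"{state}: {tag}")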

    Two-dimensional cellular automaton model to simulate pedestrian evacuation under fire-spreading conditions

    A pedestrian evacuation under fire-spreading conditions is simulated by using a two-dimensional cellular automaton model. The proposed model presents a non-static fire-spreading behavior to avoid considerable discrepancies between reality and simulation. The proposed model adopts a circular fire front shape based on spiral fire movement. Moreover, four dynamic parameters are introduced to simplify the decision-making process of a pedestrian’s movement inside the layout during fire spreading. In addition, the proposed model reports the number of victims (i.e., pedestrians caught in the fire) and the number of pedestrians evacuated safely. By analyzing these variables, a suitable evacuation plan enabling the control of crowd movements in different situations, such as fire disasters, can consequently be designed.
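
    A sketch of one pedestrian's per-step decision, assuming just two driving terms (attraction to the exit and repulsion from fire) with invented weights; the paper's actual four dynamic parameters and their form are not given in the abstract.

    import math

    W_EXIT, W_FIRE = 1.0, 2.0       # assumed weights, not the paper's values

    def next_cell(pos, exit_cell, fire, occupied, grid=30):
        # Pick the free von Neumann neighbour (or current cell) whose score
        # best trades off closeness to the exit against distance from fire.
        best, best_score = pos, -math.inf
        for dr, dc in [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]:
            cell = (pos[0] + dr, pos[1] + dc)
            if not (0 <= cell[0] < grid and 0 <= cell[1] < grid):
                continue
            if cell in occupied or cell in fire:
                continue
            d_exit = math.dist(cell, exit_cell)
            d_fire = min(math.dist(cell, f) for f in fire) if fire else grid
            score = -W_EXIT * d_exit + W_FIRE * d_fire
            if score > best_score:
                best, best_score = cell, score
        return best

    # One pedestrian stepping toward the exit while avoiding a small fire.
    print(next_cell(pos=(10, 10), exit_cell=(0, 0),
                    fire={(12, 12), (12, 13)}, occupied=set()))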

    Non-Overlapping Ratios as Fitness Function in Optimisation Spatial Layout Design

    Arrangement of furniture inside a room can be one of many problems during the concept generation stage. Finding an optimal arrangement of furniture, or an optimal spatial layout design, is vital to minimise production costs. An optimal spatial layout design will also promote the movement of people within the space, reducing possible injuries. Implementing optimisation algorithms assists in finding feasible spatial layout designs and generating thousands of solutions in a shorter period. This paper studies the application of a Genetic Algorithm to spatial layout design problems: it finds the best placements for furniture in a given space, taking several constraints into consideration. Ensuring that objects do not overlap is one of the main constraints in spatial layout design. Thus, using non-overlapping ratios as a fitness function can assist in evaluating the quality of the generated solutions.
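
    A small sketch of a non-overlapping-ratio fitness for axis-aligned furniture rectangles; the exact ratio definition in the paper may differ, and the example layout below is illustrative. A genetic algorithm would rank candidate layouts by this value.

    from itertools import combinations

    def overlap_area(a, b):
        # Axis-aligned rectangles given as (x, y, width, height).
        w = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
        h = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
        return max(w, 0) * max(h, 0)

    def non_overlap_fitness(rects):
        # Share of total furniture area not covered by pairwise overlaps;
        # 1.0 means no two pieces overlap at all.
        total = sum(w * h for (_, _, w, h) in rects)
        overlapped = sum(overlap_area(a, b) for a, b in combinations(rects, 2))
        return max(0.0, 1.0 - overlapped / total)

    # Two overlapping pieces and one clear of them (illustrative layout).
    layout = [(0, 0, 4, 3), (2, 1, 4, 3), (10, 10, 2, 2)]
    print(f"fitness = {non_overlap_fitness(layout):.3f}")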

    Spatio-temporal crime HotSpot detection and prediction: a systematic literature review

    The primary objective of this study is to accumulate, summarize, and evaluate the state-of-the-art for spatio-temporal crime hotspot detection and prediction techniques by conducting a systematic literature review (SLR). The authors were unable to find a comprehensive study on crime hotspot detection and prediction while conducting this SLR. Therefore, to the best of the authors' knowledge, this study is the first attempt to critically analyze the existing literature and to present the potential challenges faced by current crime hotspot detection and prediction systems. The SLR thoroughly consults five major scientific databases (IEEE, Science Direct, Springer, Scopus, and ACM) and synthesizes 49 different studies on crime hotspot detection and prediction after critical review. This study unfolds the following major aspects: 1) the impact of data mining and machine learning approaches, especially clustering techniques, in crime hotspot detection; 2) the utility of time series analysis techniques and deep learning techniques in crime trend prediction; 3) the inclusion of spatial and temporal information in crime datasets, making crime prediction systems more accurate and reliable; 4) the potential challenges faced by state-of-the-art techniques and future research directions. Moreover, the SLR aims to provide a core foundation for research on spatio-temporal crime prediction applications while highlighting several challenges related to the accuracy of crime hotspot detection and prediction applications.

    Spatio-temporal crime predictions by leveraging artificial intelligence for citizens security in smart cities

    Smart city infrastructure has a significant impact on improving the quality of human life. However, the substantial increase in the urban population over the last few years poses challenges related to resource management, safety, and security. To ensure safety and security in the smart city environment, this paper presents a novel approach that empowers the authorities to better visualize threats by identifying and predicting highly-reported crime zones in the smart city. To this end, it first applies Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) to detect the hot-spots that have a higher risk of crime occurrence. Second, for crime prediction, a Seasonal Auto-Regressive Integrated Moving Average (SARIMA) model is fitted in each dense crime region to predict the number of future crime incidents with spatial and temporal information. The proposed HDBSCAN and SARIMA based crime prediction model is evaluated on ten years of crime data (2008-2017) for New York City (NYC). The accuracy of the model is measured for different time scenarios, both year-wise (i.e., for each year) and over the total ten-year duration, using an 80:20 split in which 80% of the data is used for training and 20% for testing. The proposed approach outperforms the highest-scoring DBSCAN based method, with an average Mean Absolute Error (MAE) of 11.47 compared with 27.03.
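
    A hedged sketch of the two-stage pipeline, assuming the hdbscan package for spatial clustering and statsmodels' SARIMAX for the per-region time series; the coordinates, counts and model orders below are placeholders rather than the NYC data or the paper's settings.

    import numpy as np
    import pandas as pd
    import hdbscan
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(0)
    # Placeholder incident coordinates (lat, lon) around two artificial hot-spots.
    coords = np.vstack([rng.normal((40.75, -73.98), 0.01, size=(300, 2)),
                        rng.normal((40.68, -73.94), 0.01, size=(300, 2))])

    labels = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(coords)

    for region in set(labels) - {-1}:                  # -1 marks noise points
        n = int((labels == region).sum())
        # Placeholder monthly incident counts for this region (ten years).
        counts = pd.Series(rng.poisson(n / 10, size=120),
                           index=pd.date_range("2008-01-01", periods=120, freq="MS"))
        train, test = counts.iloc[:96], counts.iloc[96:]   # roughly an 80:20 split
        fit = SARIMAX(train, order=(1, 1, 1),
                      seasonal_order=(1, 1, 1, 12)).fit(disp=False)
        mae = (fit.forecast(len(test)) - test).abs().mean()
        print(f"region {region}: {n} incidents, test MAE = {mae:.2f}")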