81 research outputs found
Numerical Study of Flow inside the Micro Fluidic Cell Sense Cartridge
Abstract: A biosensor is a device that uses a biological element as the recognition element for the detection of analytes. The sample-receiving unit is a critical integral part of the biosensor, through which the sample is supplied to the biological element. In this study, the flow pattern inside the fluidic cartridge was simulated using COMSOL Multiphysics software with the objective of optimizing the shape and size of the cartridge elements. The velocity distribution, flow-induced shear stress, and pressure drop were simulated as single-phase, incompressible laminar flow under a no-slip condition. The influence of cartridge shape and geometry on the flow patterns was studied by varying both. At a flow rate of 500 μL/min, the velocity and flow-induced shear stress are in the ranges of 22 to 24 mm/s and 0.76 to 0.85 N/m², respectively.
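The abstract's numbers can be sanity-checked with a back-of-the-envelope laminar-flow estimate. The sketch below assumes water-like fluid properties and purely hypothetical channel dimensions (the paper's actual cartridge geometry is not reproduced) and computes the mean velocity and the parallel-plate wall shear stress at the stated 500 μL/min flow rate; it is not a substitute for the COMSOL model.

```python
# Back-of-the-envelope check of mean velocity and wall shear stress for fully
# developed laminar flow in a rectangular microchannel.
# NOTE: the channel dimensions below are hypothetical placeholders, not the
# cartridge geometry from the study.

Q = 500e-9 / 60.0       # flow rate: 500 uL/min converted to m^3/s
mu = 1.0e-3             # dynamic viscosity of water at ~20 C, Pa.s
w = 1.2e-3              # assumed channel width, m (hypothetical)
h = 0.3e-3              # assumed channel height, m (hypothetical)

u_mean = Q / (w * h)                    # mean velocity, m/s
tau_wall = 6.0 * mu * Q / (w * h**2)    # wall shear stress, parallel-plate approximation, N/m^2

print(f"mean velocity : {u_mean * 1e3:.1f} mm/s")
print(f"wall shear    : {tau_wall:.2f} N/m^2")
```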
Analysis of Dimensionality Reduction Techniques on Big Data
Due to digitization, a huge volume of data is being generated across several sectors such as healthcare, production, sales, IoT devices, the Web, and organizations. Machine learning algorithms are used to uncover patterns among the attributes of this data; hence, they can make predictions that medical practitioners and people at the managerial level can use for executive decisions. Not all attributes in the generated datasets are important for training the machine learning algorithms: some may be irrelevant, and some may not affect the outcome of the prediction. Ignoring or removing these irrelevant or less important attributes reduces the burden on the machine learning algorithms. In this work, two prominent dimensionality reduction techniques, Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA), are investigated with four popular Machine Learning (ML) algorithms, Decision Tree Induction, Support Vector Machine (SVM), Naive Bayes Classifier, and Random Forest Classifier, using the publicly available Cardiotocography (CTG) dataset from the University of California, Irvine Machine Learning Repository. The experimental results show that PCA outperforms LDA on all measures. Also, the performance of the Decision Tree and Random Forest classifiers is not affected much by using PCA or LDA. To further analyze the performance of PCA and LDA, the experimentation is carried out on the Diabetic Retinopathy (DR) and Intrusion Detection System (IDS) datasets. The results show that ML algorithms with PCA produce better results when the dimensionality of the dataset is high; when the dimensionality is low, the ML algorithms without dimensionality reduction yield better results.
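A minimal sketch of the comparison described above, using scikit-learn; the built-in breast-cancer dataset stands in for the CTG data, and the number of PCA components and the classifier settings are illustrative, not the paper's configuration.

```python
# Compare no reduction vs. PCA vs. LDA across four classifiers.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)          # stand-in for the CTG dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

reducers = {
    "none": None,
    "PCA": PCA(n_components=5),                       # illustrative choice
    "LDA": LinearDiscriminantAnalysis(n_components=1),  # at most n_classes - 1 components
}
classifiers = {
    "DecisionTree": DecisionTreeClassifier(random_state=42),
    "SVM": SVC(),
    "NaiveBayes": GaussianNB(),
    "RandomForest": RandomForestClassifier(random_state=42),
}

for r_name, reducer in reducers.items():
    if reducer is None:
        Z_tr, Z_te = X_tr, X_te
    else:
        Z_tr = reducer.fit_transform(X_tr, y_tr)      # LDA needs labels; PCA ignores them
        Z_te = reducer.transform(X_te)
    for c_name, clf in classifiers.items():
        acc = accuracy_score(y_te, clf.fit(Z_tr, y_tr).predict(Z_te))
        print(f"{r_name:>4s} + {c_name:<12s} accuracy = {acc:.3f}")
```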
Load Balancing of Energy Cloud using Wind Driven and Firefly Algorithms in Internet of Everything
The smart applications dominating the planet in the present day and age have progressed to deploy Internet of Things (IoT)-based systems and related infrastructure across all spectrums of life. Since a variety of applications are being developed using the IoT paradigm, there is an immense need for storing data, processing it into meaningful information, and rendering suitable services to end users. The "thing" in this decade is not only a smart sensor or a device; it can be any physical or household object, a smart device, or a mobile. With the ever-increasing population and smart-device usage in every sphere of life, when all such "things" generate data there is a risk of huge data traffic on the internet. This can be handled only by integrating the "Internet of Everything (IoE)" paradigm with a completely different technology, Cloud Computing. In order to handle this heavy flow of data traffic and process it into meaningful information, various services in the global environment are utilized. Hence, the primary focus is on integrating these two diverse paradigms to develop intelligent information-processing systems. An Energy Efficient Cloud Based Internet of Everything (EECloudIoE) architecture is proposed in this study, which acts as an initial step in integrating these two wide areas and thereby providing valuable services to end users. Energy utilization is optimized by clustering the IoT networks using the Wind Driven Optimization algorithm. Next, an optimized Cluster Head (CH) is chosen for each cluster using the Firefly Algorithm, resulting in reduced data traffic compared to non-clustering schemes. The proposed clustering of the IoE is further compared with widely used state-of-the-art techniques such as the Artificial Bee Colony (ABC) algorithm, the Genetic Algorithm (GA), and the Adaptive Gravitational Search Algorithm (AGSA). The results show that the proposed methodology outperforms the existing approaches, with increased network lifetime and reduced traffic.
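A minimal sketch of Firefly-based cluster-head selection in the spirit described above; the node layout, the fitness function (total distance from nodes to their nearest cluster head), and all parameter values are illustrative assumptions, not the paper's EECloudIoE formulation.

```python
# Firefly Algorithm: each candidate solution encodes (x, y) positions of k cluster heads;
# fireflies move toward brighter (lower-cost) fireflies with distance-decaying attraction.
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(60, 2))     # hypothetical sensor-node positions
k = 4                                         # number of cluster heads (assumed)

def fitness(sol):
    chs = sol.reshape(k, 2)
    d = np.linalg.norm(nodes[:, None, :] - chs[None, :, :], axis=2)
    return d.min(axis=1).sum()                # total node-to-nearest-CH distance (lower is better)

n_fireflies, dim, iters = 20, 2 * k, 100
beta0, gamma, alpha = 1.0, 0.01, 2.0          # attraction, absorption, randomization
pop = rng.uniform(0, 100, size=(n_fireflies, dim))
light = np.array([fitness(p) for p in pop])

for _ in range(iters):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if light[j] < light[i]:           # firefly j is brighter than i
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(dim) - 0.5)
                pop[i] = np.clip(pop[i], 0, 100)
                light[i] = fitness(pop[i])

best = pop[np.argmin(light)].reshape(k, 2)
print("best cluster-head positions:\n", best, "\ncost:", light.min())
```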
Real-Time Prediction of Lean Blowout using Chemical Reactor Network
Thesis (Master's), University of Washington, 2018. The lean blowout (LBO) of gas turbine combustors is a concern that can limit the rate of descent of an aircraft, restrict the maneuverability of military jets, and cause costly and time-intensive reignition of land-based gas turbines. This work explores the feasibility of model-based combustor monitoring for real-time prediction of a combustion system's proximity to LBO. The approach makes use of (1) real-time temperature measurements in the reactor, coupled with (2) a real-time chemical reactor network (CRN) model to interpret the data as it is collected. The approach is tested using a laboratory jet-stirred reactor (JSR) operating on methane at near-atmospheric pressure. The CRN represents the reactor as three perfectly stirred reactors (PSRs) in series with a recirculation pathway; the model inputs include real-time reactor temperature measurements and the mass flows of fuel and air. The goal of the CRN is to provide a computationally fast means of interpreting measurements in real time with regard to blowout proximity. The free-radical concentrations, their trends, and their ratios are studied in each reactor zone. The results indicate that the location of the maximum hydroxyl radical concentration moves downstream as the reactor approaches LBO. The ratio of the hydroxyl radical concentration in the jet region to that in the recirculation region is proposed as a criterion for LBO proximity. This real-time, model-based monitoring methodology sheds insight into combustion processes in aerodynamically stabilized combustors as they approach LBO.
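The proposed monitoring criterion can be sketched as a simple check on the CRN outputs. In the snippet below, solve_crn is a hypothetical placeholder for the three-PSR network solver and the threshold value is illustrative; neither is specified by the thesis.

```python
# LBO-proximity check: compare hydroxyl (OH) levels in the jet-region and
# recirculation-region reactors of the chemical reactor network.

def lbo_proximity(oh_jet: float, oh_recirc: float, threshold: float = 1.0) -> bool:
    """Flag approaching lean blowout when the jet/recirculation OH ratio
    drops below a calibrated threshold (threshold value is illustrative)."""
    ratio = oh_jet / max(oh_recirc, 1e-30)    # guard against division by zero
    return ratio < threshold

# Example with made-up mole fractions; in practice they would come from the CRN, e.g.:
# oh_jet, oh_recirc = solve_crn(T_measured, mdot_fuel, mdot_air)   # hypothetical solver
oh_jet, oh_recirc = 2.4e-4, 3.1e-4
print("near LBO" if lbo_proximity(oh_jet, oh_recirc) else "stable")
```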
An Effective Classification of DDoS Attacks in a Distributed Network by Adopting Hierarchical Machine Learning and Hyperparameters Optimization Techniques
Data privacy is essential in the financial sector to protect clients' sensitive information, prevent financial fraud, ensure regulatory compliance, and safeguard intellectual property. It has become a challenging task due to the increased usage of the internet and digital transactions. In this scenario, the DDoS attack is one of the major attacks that puts clients' privacy in question, and it requires effective and robust attack detection and prevention techniques. Machine Learning (ML) is the most effective approach for building cyber-attack detection systems and paves the way for a new era from which human and scientific communities will benefit. This paper presents a hierarchical ML-based, hyperparameter-optimized approach for classifying intrusions in a network. The CICIDS 2017 standard dataset was considered for this work. Initially, the data were preprocessed with min-max scaling and the SMOTE method. The LASSO approach was used for feature selection, and the selected features were given as input to the hierarchical ML algorithms: XGBoost, LGBM, CatBoost, Random Forest (RF), and Decision Tree (DT). All of these algorithms were tuned with optimized hyperparameters to enhance their effectiveness. Model performance was assessed in terms of recall, precision, accuracy, and F1-score. The evaluation shows that the LGBM algorithm gives the best performance, classifying DDoS attacks with 99.77% classification accuracy.
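A rough sketch of the preprocessing-and-classification chain described above (min-max scaling, SMOTE, LASSO-based feature selection, and an LGBM classifier); synthetic data stand in for CICIDS 2017, and the hyperparameter values are illustrative defaults rather than the tuned settings reported in the paper.

```python
# Preprocess, balance, select features, then classify with LightGBM.
from imblearn.over_sampling import SMOTE
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Imbalanced synthetic stand-in for the intrusion-detection data.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=10,
                           weights=[0.9, 0.1], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

scaler = MinMaxScaler().fit(X_tr)                      # min-max scaling
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

X_tr, y_tr = SMOTE(random_state=42).fit_resample(X_tr, y_tr)   # balance the training classes

selector = SelectFromModel(Lasso(alpha=0.001)).fit(X_tr, y_tr)  # LASSO-based feature selection
X_tr, X_te = selector.transform(X_tr), selector.transform(X_te)

clf = LGBMClassifier(n_estimators=300, learning_rate=0.05, random_state=42)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=4))  # precision, recall, F1, accuracy
```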
Foundations of European Integration Shaping Policy and Determining the Future of European Integration
The aim of this thesis is to identify the factors affecting EU member states' attitudes towards European integration, taking into account the EU policymaking process and the practice of "New Intergovernmentalism", and to uncover the difficulties facing the EU policymaking process in the current European order. It also discusses the emerging opportunities the European Union can realize in the process of European integration, addressing Europe's multiple levels of government, the division of power among European institutions, the EU member states, political representation, and the question of the democratic deficit in the EU. The thesis further examines the European levels of policymaking and the division of power between the EU, understood as a supranational body, and the national member states. Its subject centers on the fundamental question of the actual role played by the EU in the policymaking process within the New Intergovernmentalism approach, as well as the role of the member states and their impact on the future of European integration. The reasoning of the study shows that the EU member states are the most prominent actors in European policymaking, especially after the Maastricht Treaty, which broadened the basis of policymaking in the EU by arguing for democratic deliberation and a more significant role for the member states. To verify this hypothesis and answer the questions posed in the thesis, a deductive research procedure was applied: beginning with the ontological aspects of the EU, analyzing the current circumstances of European integration, and examining the various forms of EU integration in the disputes among the member states; disputes arising from policymaking and decision-making.
Adaptive digital notch filtering
Narrow-band interference while transmitting broadband signals such as Direct Sequence Spread Spectrum is a common source of problems in Electronic Warfare. It can occur either through intentional jamming or from unavoidable signal sources present in the vicinity of the receiver. The lack of prior information about these narrow-band interferers makes it difficult to cancel them. In this thesis the problem is addressed using an adaptive notch filtering technique. Before adopting such a technique, other methods such as the Least Squares Estimator and the Maximum Likelihood Estimator were explored; however, the Kwan and Martin adaptive notch filter structure was found to be both relevant and suitable for the problem of interest. The Kwan and Martin method has the drawback that hardware complexity grows with the number of notches, which makes it difficult to implement in real time. A new algorithm was developed to implement the structure in real time; it offers the same performance at reduced hardware complexity. The algorithm was simulated and the results are presented, and hardware feasibility is discussed by proposing a simple structure based upon existing commercial signal-processing chips.

http://archive.org/details/adaptivedigitaln00rang

Approved for public release; distribution is unlimited
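For illustration, the sketch below implements a generic gradient-adapted second-order IIR notch filter, a textbook structure rather than the Kwan and Martin filter or the thesis's reduced-complexity algorithm; it only shows how a single adaptive notch can track and suppress a narrow-band interferer buried in a broadband signal.

```python
# Constrained second-order adaptive notch:
#   H(z) = (1 + a z^-1 + z^-2) / (1 + r a z^-1 + r^2 z^-2),  a = -2*cos(w0)
# The coefficient a is adapted with a simplified-gradient (LMS-style) update
# that minimizes the output power, steering the notch onto the interferer.
import numpy as np

def adaptive_notch(x, r=0.95, mu=5e-4):
    """Track one sinusoidal interferer in x and return the notched output."""
    a = 0.0                       # adapted coefficient (notch frequency)
    x1 = x2 = y1 = y2 = 0.0       # filter state
    y = np.zeros_like(x)
    for n, xn in enumerate(x):
        yn = xn + a * x1 + x2 - r * a * y1 - r * r * y2   # direct-form II t output
        grad = x1 - r * y1                                # simplified gradient of yn w.r.t. a
        a = float(np.clip(a - mu * yn * grad, -2.0, 2.0)) # keep the notch frequency real
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y[n] = yn
    return y

# Example: broadband signal (white noise) corrupted by a narrow-band tone.
fs = 8000.0
t = np.arange(20000) / fs
signal = np.random.default_rng(1).standard_normal(t.size)
jammer = 3.0 * np.sin(2 * np.pi * 1200.0 * t)
cleaned = adaptive_notch(signal + jammer)
print("jammer power:", np.var(jammer),
      "| residual error power after convergence:",
      np.var(cleaned[-4000:] - signal[-4000:]))
```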