37 research outputs found
23-bit Metaknowledge Template Towards Big Data Knowledge Discovery and Management
The global influence of Big Data is not only growing but seemingly endless. The trend leans towards knowledge that can be attained easily and quickly from massive pools of Big Data. Today we live in the technological world that Dr. Usama Fayyad and his distinguished research fellows predicted nearly two decades ago in their introductory work on Knowledge Discovery in Databases (KDD). Indeed, they were precise in their outlook on Big Data analytics: the continued improvement in the interoperability of machine learning, statistics, and database building and querying has fused into an increasingly popular science, Data Mining and Knowledge Discovery. Next-generation computational theories are geared towards extracting insightful knowledge from ever larger volumes of data at higher speeds. As the trend grows in popularity, a highly adaptive solution for knowledge discovery will become necessary. In this research paper, we introduce the investigation and development of 23 bit-questions for a Metaknowledge template for Big Data processing and clustering purposes. This research aims to demonstrate the construction of this methodology and to prove its validity and the benefit it brings to Knowledge Discovery from Big Data.

Comment: IEEE Data Science and Advanced Analytics (DSAA'2014)
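The abstract does not list the 23 bit-questions themselves, but the general idea of a fixed-length binary metaknowledge template lends itself to a simple illustration. The sketch below is purely hypothetical: it packs answers to 23 yes/no questions into a bitmask and compares data sources by Hamming distance, one plausible basis for the clustering the paper mentions.

```python
# Illustrative sketch only (not the paper's actual template): encode each
# data source's answers to 23 hypothetical yes/no metaknowledge questions
# as a 23-bit vector, then compare sources by Hamming distance.

def to_bits(answers):
    """Pack 23 boolean answers into a single integer bitmask."""
    assert len(answers) == 23
    mask = 0
    for i, answer in enumerate(answers):
        if answer:
            mask |= 1 << i
    return mask

def hamming(a, b):
    """Number of metaknowledge questions on which two sources disagree."""
    return bin(a ^ b).count("1")

# Two hypothetical sources answering the 23 questions.
src_a = to_bits([True] * 12 + [False] * 11)
src_b = to_bits([True] * 10 + [False] * 13)
print(hamming(src_a, src_b))  # → 2
```

Sources whose pairwise Hamming distance falls below a chosen threshold could then be grouped into the same cluster.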
Data science and knowledge discovery
Nowadays, Data Science (DS) is having a relevant impact on the community. The most recent developments in Computer Science, such as advances in Machine and Deep Learning, Big Data, Knowledge Discovery, and Data Analytics, have triggered the development of several innovative solutions (e.g., approaches, methods, models, or paradigms). It is a trending topic with many application possibilities, motivating researchers to conduct experiments in the most diverse areas. This issue created an opportunity to expose some of the most relevant achievements in the Knowledge Discovery and Data Science field, contributing to subjects such as Health, Smart Homes, Social Humanities, and Government, among others. The relevance of this field can easily be observed in the numbers it has achieved: thirteen research articles, one technical note, and forty-six authors from fifteen nationalities.

This work has been supported by FCT - Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020
FSL-BM: Fuzzy Supervised Learning with Binary Meta-Feature for Classification
This paper introduces a novel real-time Fuzzy Supervised Learning with Binary Meta-Feature (FSL-BM) algorithm for Big Data classification tasks. The study of real-time algorithms addresses several major concerns, namely accuracy, memory consumption, the ability to relax assumptions, and time complexity. Attaining a fast computational model that provides both fuzzy logic and supervised learning is one of the main challenges in machine learning. In this research paper, we present the FSL-BM algorithm as an efficient solution for supervised learning with fuzzy logic processing, using a binary meta-feature representation together with the Hamming distance and a hash function to relax assumptions. While many studies over the last decade have focused on reducing time complexity and increasing accuracy, the novel contribution of the proposed solution lies in integrating the Hamming distance, a hash function, binary meta-features, and binary classification into a real-time supervised method. The Hash Table (HT) component gives fast access to existing indices, and therefore allows new indices to be generated in constant time, which supersedes existing fuzzy supervised algorithms with better or comparable results. To summarize, the main contribution of this technique for real-time Fuzzy Supervised Learning is to represent hypotheses through binary input as a meta-feature space and to create a Fuzzy Supervised Hash Table for model training and validation.

Comment: FICC201
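The abstract names the ingredients (hash table, binary meta-features, Hamming distance, fuzzy output) without specifying how they compose; the following is a minimal sketch under assumed details, not the authors' implementation. Training examples are keyed by their binary meta-feature in a hash table for constant-time lookup; on a miss, the nearest stored key by Hamming distance supplies a fuzzy (soft) label.

```python
# Hedged sketch of a hash-table fuzzy classifier over binary meta-features.
# The meta-feature construction and fuzzy membership rule here are assumptions,
# chosen only to illustrate the constant-time-lookup idea from the abstract.

from collections import defaultdict

def hamming(a, b):
    """Hamming distance between two bitmask keys."""
    return bin(a ^ b).count("1")

class FuzzyHashClassifier:
    def __init__(self):
        # key -> [negative_count, positive_count]
        self.table = defaultdict(lambda: [0, 0])

    def train(self, key, label):
        self.table[key][label] += 1

    def predict(self, key):
        """Return fuzzy membership in the positive class, in [0, 1]."""
        if key in self.table:                  # O(1) hash-table hit
            neg, pos = self.table[key]
        else:                                  # miss: nearest key by Hamming distance
            nearest = min(self.table, key=lambda k: hamming(k, key))
            neg, pos = self.table[nearest]
        return pos / (pos + neg)

clf = FuzzyHashClassifier()
clf.train(0b10110010, 1)
clf.train(0b10110010, 0)
clf.train(0b00000001, 0)
print(clf.predict(0b10110010))  # → 0.5 (one positive, one negative example)
```

The fuzzy output here is simply the label ratio stored under the matched key, which keeps both training and hit-path prediction at constant time.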
Characteristic On Electromagnetic Energy Harvesting Using Graphene/Silver Filled Epoxy For PVT Thermal Hybrid Solar Collector
Emerging wireless and flexible electronic systems such as wearable or portable devices and sensor networks call for a power source that is sustainable and reliable, has high power density, and can be integrated into a flexible package at low cost. These demands can be met using photovoltaic thermal (PVT) systems, consisting of solar modules for energy harvesting, battery storage to overcome variations in solar module output or load, and often power electronics to regulate voltages and power flows. A great deal of research in recent years has focused on the development of high-performing materials and architectures for individual components such as solar cell panel circuits and batteries. The use of graphene and silver conductive ink as a flexible, low-cost, solution-processed transparent electrode for photovoltaics is investigated. To fabricate these systems, conductive ink printing techniques are of great interest, as they can be performed at low temperatures and high speeds and facilitate customization of the components. The parameters evaluated consist of resistivity, surface roughness, and morphological analysis. To accomplish the analysis, a four-point probe is used to measure the sheet resistance of the samples in ohms per square. The conductive ink loading influenced the measured resistivity: the highest average resistivity was detected in the 1-layer coil conductive ink, while the lowest resistivity was found in the 3-layer coil conductive ink. For surface roughness, a nanoindentation machine was used, which showed that the samples had a consistently uniform and smooth surface. Conductive ink samples with an 80% filler weight percentage had consistent surface regularities that contributed to a smooth surface. In the morphological analysis, nanoindentation is used to visualize the microscopic image of the graphene and silver nanoparticle ink, categorized by the electrical properties of the ink.
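The four-point-probe measurement in ohms per square mentioned above follows a standard conversion for thin films that are much larger than the probe spacing: sheet resistance R_s = (π / ln 2) · (V / I). The probe readings below are hypothetical, used only to show the arithmetic.

```python
import math

# Standard thin-film four-point-probe conversion: R_s = (pi / ln 2) * (V / I),
# in ohms per square. Voltage and current values here are hypothetical.

def sheet_resistance(voltage_v, current_a):
    """Sheet resistance (ohm/sq) from a four-point-probe V/I reading."""
    return (math.pi / math.log(2)) * (voltage_v / current_a)

def resistivity(voltage_v, current_a, thickness_m):
    """Bulk resistivity (ohm·m) = sheet resistance * film thickness."""
    return sheet_resistance(voltage_v, current_a) * thickness_m

rs = sheet_resistance(0.010, 0.001)  # 10 mV measured at 1 mA
print(round(rs, 2))  # → 45.32 ohm/sq
```

Multiplying by the printed ink layer thickness then yields the bulk resistivity that can be compared across the 1-layer and 3-layer samples.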
Quantifying MyCiTi supply usage via Big Data and Agent Based Modelling
The MyCiTi is currently generating large volumes of raw transactional information in the form of commuter smartcard transactions, which can be considered Big Data. Agent Based Modelling (ABM) has been applied internationally as a means of deriving actionable intelligence from Big Data. It is proposed that ABM can be used to unlock the hidden potential within the aforementioned data. This paper demonstrates how to develop and calibrate a MATSim-based ABM to analyse AFC data. It is found that data formatting algorithms are critical in the preparation of data for modelling activities. These algorithms are highly complex, requiring significant time investment prior to development. Furthermore, the development of appropriate ABM calibration parameters requires careful consideration in terms of appropriate data collection, simulation testing, and justification. This study serves as strong evidence that ABM is an appropriate analysis technique for MyCiTi data systems. Validation exercises reveal that ABM is able to calculate on-board bus usage and system behaviour with a strong degree of accuracy (R-squared 0.85). It is, however, recommended that additional research be conducted into more detailed calibration activities, such as fine-tuning agent behaviour during simulation. Ultimately, this research study achieves its explorative objectives of model development and testing, and paves a way forward for future research into the practical applications of Big Data and ABM in the South African context.
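A validation figure like the R-squared 0.85 quoted above would typically be computed by comparing simulated against observed quantities, for example on-board bus counts. The sketch below uses the standard coefficient-of-determination formula; the passenger counts are hypothetical placeholders, not data from the study.

```python
# Coefficient of determination R^2 = 1 - SS_res / SS_tot, as commonly used
# to validate simulated counts against observations. Counts are hypothetical.

def r_squared(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

observed  = [120, 150, 90, 200, 170]   # counted passengers per trip (hypothetical)
simulated = [110, 155, 95, 190, 180]   # ABM-predicted passengers per trip
print(round(r_squared(observed, simulated), 3))  # → 0.952
```

An R-squared near 1 indicates that the simulated counts track the observed variation closely, which is the sense in which the study reports 0.85 as a strong degree of accuracy.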