Database Design and Implementation
Database Design and Implementation is a comprehensive guide that provides a thorough introduction to the principles, concepts, and best practices of database design and implementation. It covers the essential topics required to design, develop, and manage a database system, including data modeling, database normalization, SQL programming, and database administration.
The book is designed for students, database administrators, software developers, and anyone interested in learning how to design and implement a database system. It provides a step-by-step approach to database design and implementation, with clear explanations and practical examples. It also includes exercises and quizzes at the end of each chapter to help reinforce the concepts covered.
The book begins by introducing the fundamental concepts of database systems and data modeling. It then discusses the process of database design and normalization, which is essential for creating a well-structured and efficient database system. The book also covers SQL programming, which is used for querying, updating, and managing data in a database. Additionally, it includes a comprehensive discussion on database administration, including security, backup and recovery, and performance tuning.
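The normalization the book describes can be illustrated with a small sketch. The schema below is hypothetical (not taken from the book): a denormalized order sheet is split into two tables in third normal form, so each customer's details are stored exactly once, and a join reassembles the original view on demand.

```python
import sqlite3

# Hypothetical example: a denormalized "orders" sheet split into
# normalized tables, so customer data is stored exactly once.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        email       TEXT NOT NULL UNIQUE
    );
    CREATE TABLE "order" (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        placed_on   TEXT NOT NULL
    );
""")
cur.execute("INSERT INTO customer VALUES (1, 'Ada', 'ada@example.com')")
cur.execute("INSERT INTO \"order\" VALUES (10, 1, '2024-01-15')")
conn.commit()

# A join reassembles the denormalized view on demand.
cur.execute("""
    SELECT c.name, o.order_id
    FROM "order" o JOIN customer c USING (customer_id)
""")
rows = cur.fetchall()
print(rows)  # [('Ada', 10)]
```

Updating a customer's email now touches a single row instead of every order they have placed, which is exactly the update-anomaly problem normalization is meant to remove.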
The use of extended reality and machine learning to improve healthcare and promote greenhealth
With the Fourth Industrial Revolution, the spread of the Internet of Things, advances in Artificial Intelligence and Machine Learning, and the migration to Cloud Computing, the term "Intelligent Environments" has increasingly ceased to be an idealization and become a reality. Likewise, Extended Reality technologies have grown their presence in the technology world after a "hibernation period", following the popularization of the Metaverse concept and the entry of large computing companies such as Apple and Google into a market where Virtual Reality, Augmented Reality, and Mixed Reality had been dominated by companies with less experience in systems development (e.g. Meta), less worldwide recognition (e.g. HTC Vive), or less financial backing and market trust. This thesis studies the potential use of Extended Reality technologies to promote GreenHealth, as well as their use in Smart Hospitals, one variant of Intelligent Environments, incorporating Machine Learning and Computer Vision as tools to support and improve healthcare from the point of view of both the health professional and the patient, through a literature review and an analysis of the current landscape. The result is a conceptual model suggesting technologies, selected for their potential, that could be used to achieve this scenario, followed by the development of prototypes of parts of the conceptual model for Extended Reality headsets as a proof of concept.
Creating a Relational Distributed Object Store
In and of itself, data storage has apparent business utility. But when we can
convert data to information, the utility of stored data increases dramatically.
It is the layering of relation atop the data mass that is the engine for such
conversion. Frank relation amongst discrete objects sporadically ingested is
rare, making the process of synthesizing such relation all the more
challenging, but the challenge must be met if we are ever to see business value
from unstructured data equivalent to what we already have with structured data.
This paper describes a novel construct, referred to as a relational distributed
object store (RDOS), that seeks to solve the twin problems of how to
persistently and reliably store petabytes of unstructured data while
simultaneously creating and persisting relations amongst billions of objects.
Comment: 12 pages, 5 figures
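The core idea of layering relation atop an object mass can be sketched in a few lines. This is a minimal illustration under assumed semantics, not the RDOS design from the paper: opaque blobs are stored by ID, and typed edges between IDs are persisted separately from the blobs themselves.

```python
import uuid
from collections import defaultdict

# Minimal sketch (not the paper's design): an object store keeping
# opaque blobs keyed by ID, plus a separate relation layer that
# records typed edges between object IDs.
class RelationalObjectStore:
    def __init__(self):
        self._objects = {}                  # object_id -> bytes
        self._relations = defaultdict(set)  # (src_id, kind) -> dst_ids

    def put(self, blob: bytes) -> str:
        oid = uuid.uuid4().hex
        self._objects[oid] = blob
        return oid

    def relate(self, src: str, kind: str, dst: str) -> None:
        # Relations are first-class: stored independently of the blobs,
        # so they can be synthesized long after ingestion.
        self._relations[(src, kind)].add(dst)

    def related(self, src: str, kind: str) -> list:
        return [self._objects[d] for d in self._relations[(src, kind)]]

store = RelationalObjectStore()
doc = store.put(b"raw sensor dump")
meta = store.put(b"calibration notes")
store.relate(doc, "described_by", meta)
result = store.related(doc, "described_by")
print(result)  # [b'calibration notes']
```

In a real distributed deployment both maps would of course be partitioned and replicated across nodes; the sketch only shows the separation of the object mass from the relation layer.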
Metocean Big Data Processing Using Hadoop
This report discusses MapReduce and how it handles big data. Metocean (Meteorology and Oceanography) data is used as the case study because it consists of very large datasets. As the number and variety of data-acquisition devices grows annually, the sheer size and rate of the data being collected are expanding rapidly. These big data sets can contain gigabytes or terabytes of data, and can grow by megabytes or gigabytes per day. While the collection of this information presents opportunities for insight, it also presents many challenges: most algorithms are not designed to process big data sets in a reasonable amount of time or with a reasonable amount of memory. MapReduce allows us to meet many of these challenges and gain important insights from large data sets. The objective of this project is to use MapReduce to handle big data; MapReduce is a programming technique for analysing data sets that do not fit in memory. The problem-statement chapter discusses how MapReduce is advantageous for dealing with large data. The literature review explains the definitions of NoSQL and RDBMS, Hadoop MapReduce and big data, considerations when selecting a database, NoSQL database deployments, scenarios for using Hadoop, and a Hadoop real-world example. The methodology chapter explains the waterfall method used in the project's development. The results and discussion chapter presents the project's results in detail, and the final chapter offers conclusions and recommendations.
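The MapReduce flow the report applies to metocean data can be shown in miniature. The records below are hypothetical, not the report's dataset: the map phase emits (station, wave height) pairs, a shuffle groups pairs by key, and the reduce phase computes a per-station mean; Hadoop performs the same three phases across a cluster.

```python
from collections import defaultdict

# Illustrative MapReduce-style flow in plain Python over hypothetical
# metocean records (not the report's actual data).
records = [
    {"station": "A", "wave_height": 1.0},
    {"station": "B", "wave_height": 0.8},
    {"station": "A", "wave_height": 2.0},
]

def map_phase(record):
    # Emit an intermediate (key, value) pair per input record.
    yield record["station"], record["wave_height"]

# Shuffle: group intermediate pairs by key, as Hadoop does between phases.
grouped = defaultdict(list)
for rec in records:
    for key, value in map_phase(rec):
        grouped[key].append(value)

def reduce_phase(key, values):
    # Collapse all values for one key into a single result.
    return key, sum(values) / len(values)

means = dict(reduce_phase(k, v) for k, v in grouped.items())
print(means)  # {'A': 1.5, 'B': 0.8}
```

Because each map call sees one record and each reduce call sees one key's values, neither phase ever needs the whole dataset in memory, which is what lets the model scale to data that does not fit on a single machine.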
Real-Time intelligence
Master's dissertation in Computer Science
Over the past 20 years, data has increased on a large scale in various fields. This explosive increase of global data led to the coining of the term Big Data. Big Data is mainly used to describe enormous datasets that typically include masses of unstructured data that may need real-time analysis. This paradigm brings important challenges to tasks such as data acquisition, storage, and analysis. The ability to perform these tasks efficiently has attracted the attention of both industry and academia, as it brings many opportunities for creating new value. Another topic of growing importance is the use of behavioural biometrics, which have been applied in a wide range of areas such as healthcare and security. This work aims to handle the data pipeline of a large-scale behavioural-biometrics application, providing the basis for real-time analytics and behavioural classification. The challenges regarding analytical queries (with real-time requirements, due to the need to monitor the metrics and behaviour) and classifier training (the automatic, large-scale classification of fatigue records in human-computer interaction) are particularly addressed.
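The real-time metric layer such a pipeline needs can be sketched with a sliding window. Everything here is an assumed illustration (the class name, window size, and keystroke-interval readings are hypothetical): old readings drop off a fixed-size window, and a monitor or classifier polls an always-current mean instead of scanning the full stream.

```python
from collections import deque

# Hedged sketch of a real-time metric over a biometric stream:
# a fixed-size sliding window exposing an up-to-date mean that a
# monitoring dashboard or classifier could poll.
class SlidingWindowMean:
    def __init__(self, size: int):
        self.window = deque(maxlen=size)  # oldest reading drops off

    def push(self, value: float) -> None:
        self.window.append(value)

    def mean(self) -> float:
        return sum(self.window) / len(self.window)

metric = SlidingWindowMean(size=3)
for interval_ms in [120.0, 95.0, 110.0, 300.0]:   # hypothetical keystroke gaps
    metric.push(interval_ms)

# Window now holds only the last three readings: 95.0, 110.0, 300.0
print(round(metric.mean(), 2))  # 168.33
```

A window like this bounds both memory and query latency regardless of stream length, which is the property real-time requirements demand; a production pipeline would maintain many such windows, one per user or metric, inside a stream processor.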