MERRA Analytic Services: Meeting the Big Data Challenges of Climate Science Through Cloud-enabled Climate Analytics-as-a-service
Climate science is a Big Data domain that is experiencing unprecedented growth. In our efforts to address the Big Data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS). We focus on analytics because it is the knowledge gained from our interactions with Big Data that ultimately produces societal benefits. We focus on CAaaS because we believe it provides a useful way of thinking about the problem: a specialization of the concept of business process-as-a-service, which is an evolving extension of IaaS, PaaS, and SaaS enabled by Cloud Computing. Within this framework, Cloud Computing plays an important role; however, we see it as only one element in a constellation of capabilities that are essential to delivering climate analytics as a service. These elements are essential because, in the aggregate, they lead to generativity, a capacity for self-assembly that we feel is the key to solving many of the Big Data challenges in this domain. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS built on this principle. MERRA/AS enables MapReduce analytics over NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) data collection. The MERRA reanalysis integrates observational data with numerical models to produce a temporally and spatially consistent global synthesis of 26 key climate variables. It represents a type of data product that is of growing importance to scientists doing climate change research and to a wide range of decision support applications. MERRA/AS brings together the following generative elements in a full, end-to-end demonstration of CAaaS capabilities: (1) high-performance, data-proximal analytics, (2) scalable data management, (3) software appliance virtualization, (4) adaptive analytics, and (5) a domain-harmonized API. The effectiveness of MERRA/AS has been demonstrated in several applications. In our experience, Cloud Computing lowers the barriers to and risk of organizational change, fosters innovation and experimentation, facilitates technology transfer, and provides the agility required to meet our customers' increasing and changing needs. Cloud Computing is providing a new tier in the data services stack that helps connect earthbound, enterprise-level data and computational resources to new customers and to new mobility-driven applications and modes of work. For climate science, Cloud Computing's capacity to engage communities in the construction of new capabilities is perhaps the most important link between Cloud Computing and Big Data.
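The abstract names MapReduce as the analytic model of MERRA/AS but does not spell out its API, so the following is only a minimal Python sketch of the data-proximal map/reduce pattern it describes: a map phase emits (month, value) pairs from climate records and a reduce phase averages them per month. The record layout, the variable, and all function names are illustrative assumptions, not the service's actual interface.

```python
from collections import defaultdict

# Hypothetical records: (timestamp "YYYY-MM", grid cell id, temperature in K).
# Real MERRA granules are gridded files; this flat list stands in for one variable.
records = [
    ("1979-01", "cell-0001", 271.4),
    ("1979-01", "cell-0002", 268.9),
    ("1979-02", "cell-0001", 272.1),
    ("1979-02", "cell-0002", 270.3),
]

def map_phase(record):
    """Emit a (month, (value, count)) pair for one observation."""
    month, _cell, value = record
    return (month, (value, 1))

def reduce_phase(pairs):
    """Sum values and counts per month, then divide to get monthly means."""
    sums = defaultdict(lambda: [0.0, 0])
    for month, (value, count) in pairs:
        sums[month][0] += value
        sums[month][1] += count
    return {month: total / n for month, (total, n) in sums.items()}

monthly_means = reduce_phase(map_phase(r) for r in records)
print(monthly_means)  # {'1979-01': 270.15, '1979-02': 271.2}
```

In a real deployment the reduce step would run where the data lives (the "data-proximal" element the abstract lists), rather than after shipping records to the client.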
Quality of Information in Mobile Crowdsensing: Survey and Research Challenges
Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as the accelerometer, gyroscope, microphone, and camera. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution monitoring, crime control, and wildlife monitoring, to name a few. Unlike prior sensing paradigms, humans are now the primary actors of the sensing process, since they are fundamental to retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing QoI in mobile crowdsensing, and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions for future work. To appear in ACM Transactions on Sensor Networks (TOSN).
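As a concrete illustration of why QoI matters when contributors may be unreliable or malicious, here is a minimal Python sketch of one common family of QoI mechanisms, reputation-weighted aggregation with iterative trust updates. The scoring rule, thresholds, and names are assumptions made for the example, not the framework this paper proposes.

```python
def aggregate(reports, trust):
    """Weighted average of readings, weighting each user by current trust."""
    total = sum(trust[u] * v for u, v in reports.items())
    return total / sum(trust[u] for u in reports)

def update_trust(reports, trust, estimate, rate=0.1, tol=10.0):
    """Raise trust for users near the consensus, lower it for outliers."""
    for u, v in reports.items():
        if abs(v - estimate) <= tol:
            trust[u] = min(1.0, trust[u] + rate)
        else:
            trust[u] = max(0.0, trust[u] - rate)

# Example: three consistent users and one faulty or malicious noise-level (dB) report.
reports = {"u1": 62.0, "u2": 60.5, "u3": 61.2, "u4": 95.0}
trust = {u: 0.5 for u in reports}
for _ in range(5):  # a few rounds let trust drift away from the outlier
    est = aggregate(reports, trust)
    update_trust(reports, trust, est)
# est moves toward the honest mean (~61.2) as u4's trust decays
print(round(est, 1), {u: round(t, 2) for u, t in trust.items()})
```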
Internationalization in the information age: a new era for places, firms and international business networks?
The new techno-economic paradigm of the information age has brought about new structures and processes in international business (IB). In this article, we examine the changing nature of the competitive advantages of places, the competitive advantages and strategies of firms, and the governance structure of IB networks in what has also been called the third industrial revolution. These three areas of change in IB activity map respectively to the location (L), ownership (O), and internalization (I) advantages of the eclectic paradigm. We interpret these OLI factors as dynamic constructs in order to depict analytically the shifts in the IB environment and their implications for IB.
Big Data Research in Information Systems: Toward an Inclusive Research Agenda
Big data has received considerable attention from the information systems (IS) discipline over the past few years, with several recent commentaries, editorials, and special issue introductions on the topic appearing in leading IS outlets. These papers present varying perspectives on promising big data research topics and highlight some of the challenges that big data poses. In this editorial, we synthesize and contribute further to this discourse. We offer a first step toward an inclusive big data research agenda for IS by focusing on the interplay between big data's characteristics, the information value chain encompassing people, process, and technology, and the three dominant IS research traditions (behavioral, design, and economics of IS). We view big data as a disruption to the value chain with widespread impacts, which include but are not limited to changing the way academics conduct scholarly work. Importantly, we critically discuss the opportunities and challenges that big data's disruptive effects create for behavioral, design science, and economics of IS research, along with the emerging implications for theory and methodology.
Performance Evaluation of Smart Decision Support Systems on Healthcare
Medical activity requires responsibility not only in clinical knowledge and skill but also in the management of an enormous amount of information related to patient care. It is through the proper treatment of information that experts can consistently build a healthy wellness policy. The primary objective in the development of decision support systems (DSSs) is to provide information to specialists when and where it is needed. These systems provide information, models, and data manipulation tools to help experts make better decisions in a variety of situations.

Most of the challenges that smart DSSs face stem from the great difficulty of dealing with large volumes of information, which are continuously generated by the most diverse types of devices and equipment and demand substantial computational resources. This situation makes such systems prone to failing to retrieve information quickly enough for decision making. As a result of this adversity, information quality and the provision of an infrastructure capable of promoting integration and articulation among different health information systems (HIS) become promising research topics in the field of electronic health (e-health), and for this same reason they are addressed in this research. The work described in this thesis is motivated by the need to propose novel approaches to deal with problems inherent to the acquisition, cleaning, integration, and aggregation of data obtained from different sources in e-health environments, as well as their analysis.

To ensure the success of data integration and analysis in e-health environments, it is essential that machine-learning (ML) algorithms ensure system reliability. However, in this type of environment, a fully reliable scenario cannot be guaranteed. This makes smart DSSs susceptible to prediction failures, which severely compromise overall system performance. On the other hand, systems can also have their performance compromised by an information load beyond what they can support.

To address some of these problems, this thesis presents several proposals and studies on the impact of ML algorithms on the monitoring and management of hypertensive disorders related to high-risk pregnancy. The primary goal of the proposals presented in this thesis is to improve the overall performance of health information systems. In particular, ML-based methods are exploited to improve prediction accuracy and to optimize the use of monitoring device resources. It is demonstrated that this type of strategy and methodology contributes to a significant increase in the performance of smart DSSs, not only in terms of precision but also in reducing the computational cost of the classification process.

The observed results seek to contribute to advancing the state of the art in AI-based methods and strategies that aim to overcome some of the challenges arising from the integration and performance of smart DSSs. With AI-based algorithms, it is possible to analyze a larger volume of complex data quickly and automatically and to focus on more accurate results, providing high-value predictions for better decision making in real time and without human intervention.
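As a rough illustration of the kind of ML-based prediction step the abstract describes, here is a minimal Python sketch of a supervised classifier over synthetic vital-sign features. The feature set, the model choice, and the data are assumptions made for the example, not the thesis's actual pipeline or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical monitoring features for a pregnancy cohort.
X = np.column_stack([
    rng.normal(120, 15, n),   # systolic blood pressure (mmHg)
    rng.normal(80, 10, n),    # diastolic blood pressure (mmHg)
    rng.normal(85, 12, n),    # heart rate (bpm)
    rng.uniform(20, 40, n),   # gestational age (weeks)
])
# Synthetic label: flag "at risk" when blood pressure exceeds common thresholds.
y = ((X[:, 0] > 140) | (X[:, 1] > 90)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, model.predict(X_te)))
```

A lighter model (fewer or shallower trees) trades a little accuracy for lower inference cost, which is the kind of precision-versus-computation trade-off the thesis examines for resource-constrained monitoring devices.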
Empirical investigation of key business factors for digital game performance
Game development is an interdisciplinary endeavor that embraces software engineering, business, management, and artistic disciplines. This research facilitates a better understanding of the business dimension of digital games. Its main objective is to investigate empirically the effect of business factors on the performance of digital games in the market and to answer the research questions posed in this study. Game development organizations face high pressure and competition in the digital game industry, and business has become a crucial dimension for them. The main contribution of this paper is an empirical investigation of the influence of key business factors on the business performance of games; it is the first study in the domain of game development to demonstrate the interrelationship between key business factors and game performance in the market. The results provide evidence that game development organizations must deal with multiple key business factors to remain competitive and handle the high pressure of the digital game industry. Furthermore, the results support the theoretical assertion that key business factors play an important role in game business performance.
A Survey on Applications of Cache-Aided NOMA
Contrary to orthogonal multiple access (OMA), non-orthogonal multiple access (NOMA) schemes can serve a pool of users without exploiting the scarce frequency or time domain resources. This is useful in meeting future network requirements (5G and beyond), such as low latency, massive connectivity, user fairness, and high spectral efficiency. On the other hand, content caching restricts duplicate data transmission by storing popular contents in advance at the network edge, which reduces data traffic. In this survey, we focus on cache-aided NOMA-based wireless networks, which can reap the benefits of both caching and NOMA; switching from OMA to NOMA enables cache-aided networks to push additional files to content servers in parallel and improve the cache hit probability. Beginning with the fundamentals of cache-aided NOMA technology, we summarize the performance goals of cache-aided NOMA systems, present the associated design challenges, and categorize the recent related literature by application vertical. Concomitant standardization activities and open research challenges are highlighted as well.
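To make the cache-hit argument concrete, here is a minimal numeric sketch under a Zipf popularity model, a standard assumption in the caching literature rather than something this survey fixes: caching the C most popular of N equally sized files yields a hit probability equal to the sum of the top-C request probabilities, so pushing more files to the edge in parallel (as NOMA permits) raises the hit rate, with diminishing returns.

```python
def zipf_hit_probability(n_files, cache_size, alpha=0.8):
    """Hit probability when the cache_size most popular of n_files are cached
    and request popularity follows Zipf with skew parameter alpha."""
    weights = [rank ** -alpha for rank in range(1, n_files + 1)]
    return sum(weights[:cache_size]) / sum(weights)

# Doubling the number of files the edge can hold keeps improving the hit
# probability, but each doubling adds less than the one before.
for c in (10, 20, 40):
    print(c, round(zipf_hit_probability(1000, c), 3))
```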