Comparative analysis of reporting mechanisms based on XML technology
This paper presents a comparative analysis of reporting mechanisms based on XML technology. The analysis was carried out as part of the process of selecting and implementing reporting mechanisms for a cadastre information system. Reports were designed for two versions of the system: an internet system based on PHP technology, and a fat-client system with a two-layer client-server architecture. The reports for the internet system were prepared using XSLT for HTML output and XSL-FO for PDF output, and compared with reports implemented using the Free PDF library. Each solution was tested with the Web Application Stress Tool to determine its limits in scalability and efficiency. For the desktop system, three versions of the reporting mechanisms, based on Crystal Reports, Microsoft Reporting Services, and XML technology, were implemented and compared, with mean execution time as the main criterion.
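The XSLT-for-HTML approach described above can be illustrated with a minimal sketch. Since Python's standard library has no XSLT processor, the same transformation is hand-rolled with xml.etree; the <parcel> element names and attributes are hypothetical, not taken from the paper.

```python
# Illustrative sketch of the XML-to-HTML reporting idea.
# The cadastre schema below is an invented example.
import xml.etree.ElementTree as ET

CADASTRE_XML = """
<parcels>
  <parcel id="101"><owner>A. Kowalski</owner><area>1250</area></parcel>
  <parcel id="102"><owner>B. Nowak</owner><area>980</area></parcel>
</parcels>
"""

def parcels_to_html(xml_text: str) -> str:
    """Render a cadastre XML fragment as an HTML report table."""
    root = ET.fromstring(xml_text)
    rows = [
        "<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            p.get("id"), p.findtext("owner"), p.findtext("area")
        )
        for p in root.iter("parcel")
    ]
    return (
        "<table><tr><th>ID</th><th>Owner</th><th>Area</th></tr>"
        + "".join(rows)
        + "</table>"
    )

print(parcels_to_html(CADASTRE_XML))
```

In a real deployment an XSLT stylesheet would declare this mapping once and be reused for HTML and, via XSL-FO, for PDF output.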
An empirical study of power consumption of Web-based communications in mobile phones
Currently, mobile devices are the most popular pervasive computing devices, and they are becoming the primary means of Web access. Energy is a critical resource in such pervasive computing devices, with network communication being one of the primary energy-consuming operations in mobile apps. Indeed, web-based communication is the most widely used, but also among the most energy-demanding, so mobile web developers should be aware of how much energy the different web-based communication alternatives consume. The goal of this paper is to measure and compare the energy consumption of three asynchronous Web-based methods in mobile devices. Our experiments consider three Web application models that allow a web server to push data to a browser: Polling, Long Polling, and WebSockets. The obtained results are analyzed to gain a more accurate understanding of the impact of each of these three methods on the energy consumption of a mobile browser. The utility of these experiments is to show developers which factors influence energy consumption when the different web-based asynchronous communication methods are used. With this information, mobile web developers can reduce the power consumption of web applications on mobile devices by selecting the most appropriate method for asynchronous server communication.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
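One intuition behind the comparison above can be sketched with a back-of-envelope model: count the HTTP round trips each technique needs to deliver the same stream of server events, since each radio wake-up costs energy. All costs and numbers below are illustrative assumptions, not measurements from the paper.

```python
# Toy model of connection overhead for the three push methods.
def round_trips(method: str, events: int, poll_interval_s: int,
                window_s: int) -> int:
    """Round trips needed to receive `events` events in `window_s` seconds."""
    if method == "polling":
        # One request per interval, whether or not data is ready.
        return window_s // poll_interval_s
    if method == "long-polling":
        # One request held open per delivered event.
        return events
    if method == "websocket":
        # One handshake, then a single persistent connection.
        return 1
    raise ValueError(method)

# 10 events over 10 minutes, polling every 5 seconds:
for m in ("polling", "long-polling", "websocket"):
    print(m, round_trips(m, events=10, poll_interval_s=5, window_s=600))
# -> polling 120, long-polling 10, websocket 1
```

Real energy behaviour also depends on radio state machines and payload sizes, which is precisely what the paper's measurements quantify.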
ERP implementation for an administrative agency as a corporative frontend and an e-commerce smartphone app
This document contains the descriptions, arguments, and demonstrations of the research, analysis, reasoning, design, and tasks performed to technologically evolve a managing agency so that, through a solution requiring a reduced investment, it becomes possible to deploy a business management tool with e-commerce, together with a mobile application that allows access to and consultation of that tool. The first part of the document describes the scenario in order to contextualize the project and introduces ERP (Enterprise Resource Planning). In the second part, an in-depth survey of ERP market products is carried out, identifying the strengths and weaknesses of each product in order to choose the one most suitable for the scenario proposed in the project. The third part describes the installation process of the selected product, based on the use of Docker, as well as the configurations and customizations applied to the selected ERP. The installation and configuration of the additional modules needed to achieve the agreed scope of the project are also described. The fourth part of the thesis describes the process of creating an iOS and Android app that connects to the selected ERP database. The process begins with the design of the app. Once it is designed, the study and documentation of candidate technologies is explained, leading to a technology stack that allows building a robust, contemporary application without licensing costs. After the technologies are chosen, the dependencies and runtime environments that must be installed before coding starts are described. The document then explains how the code of the app was structured and developed, and indicates the compilation and verification mechanisms. Finally, the result of the app once distributed is shown.
Finally, a chapter of conclusions analyzes the difficulties encountered during the project, the achievements, and what has been learned during its development.
XML data integrity based on concatenated hash function
Data integrity is fundamental to data authentication. A major problem for XML data authentication is that signed XML data can be copied into another document while the signature remains valid. This is caused by the way XML data integrity is protected. The investigation in this paper found that, besides the integrity of data content, XML data integrity should also protect element location information and context referential integrity in fine-grained security situations. The aim of this paper is to propose a model for XML data integrity that takes the features of XML data into account. The paper presents an XML data integrity model named CSR (content integrity, structure integrity, context referential integrity) based on a concatenated hash function. XML data content integrity is ensured using an iterative hash process, structure integrity is protected by hashing the absolute path string from the root node, and context referential integrity is ensured by protecting context-related elements. The presented model satisfies integrity requirements under fine-grained security and is compatible with XML Signature. The evaluation shows that the presented integrity model generates digest values more efficiently than the Merkle hash tree-based integrity model for XML data.
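The core CSR idea can be sketched in a few lines: bind an element's digest to both its content (via an iterative hash over children) and its absolute path from the root, so that identical content pasted at a different location hashes differently. The digest layout below is an assumption for illustration, not the paper's exact construction.

```python
# Minimal sketch of a concatenated content + structure hash for XML.
import hashlib
import xml.etree.ElementTree as ET

def csr_digest(root: ET.Element, path: str = "") -> str:
    """Concatenated hash of content and structure for one element."""
    abs_path = f"{path}/{root.tag}"
    h = hashlib.sha256(abs_path.encode())          # structure integrity
    h.update((root.text or "").strip().encode())   # content integrity
    for child in root:                             # iterative hash over children
        h.update(csr_digest(child, abs_path).encode())
    return h.hexdigest()

a = ET.fromstring("<doc><secret>42</secret></doc>")
b = ET.fromstring("<other><secret>42</secret></other>")
# Identical <secret> content, different location -> different digests.
print(csr_digest(a) != csr_digest(b))
```

Context referential integrity, the third component of CSR, would additionally fold the digests of context-related elements into the hash.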
A decision support system for corporations cyber security risk management
This thesis presents a decision aiding system named C3-SEC (Context-aware Corporative
Cyber Security), developed in the context of a master's program at the Polytechnic Institute of
Leiria, Portugal. The research dimension and the software development process that
followed are presented and validated with an application scenario and a case study
performed at Universidad de las Fuerzas Armadas ESPE, Ecuador.
C3-SEC is decision aiding software intended to support the analysis of cyber risks and
cyber threats in a corporative information and communications technology infrastructure. The
resulting software product will help corporations' Chief Information Security Officers
(CISOs) with cyber security risk analysis, decision-making, and prevention measures for the
protection of infrastructure and information assets.
The work initially focuses on the evaluation of the most popular and relevant tools
available for risk assessment and decision making in the cyber security domain. Their
properties, metrics, and strategies are studied, and their support for cyber security risk
analysis, decision-making, and prevention is assessed with respect to the protection of an
organization's information assets.
A contribution to decision support for cyber security experts is then proposed by means of
the reuse and integration of existing tools with the C3-SEC software. C3-SEC extends the
features of existing tools from the data collection and data analysis (perception) level to a
full context-aware reference model.
The software makes use of semantic-level, ontology-based knowledge
representation and inference supported by widely adopted standards, including cyber
security standards (CVE, CPE, CVSS, etc.) and cyber security data sources
made available by international authorities, to share and exchange information in this
domain. C3-SEC development follows a context-aware systems reference model addressing
the perception, comprehension, projection, and decision/action layers to create corporative-scale
cyber security situation awareness.
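To make the role of the standards mentioned above concrete, here is a minimal sketch of consuming one of them: splitting a CVSS v3.1 vector string into its metrics. Only the public vector syntax is assumed; scoring, and any relation to C3-SEC's internals, is out of scope.

```python
# Parse a CVSS v3.1 vector like "CVSS:3.1/AV:N/AC:L/..." into a dict.
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3.1 vector string into its metric/value pairs."""
    parts = vector.split("/")
    if not parts[0].startswith("CVSS:"):
        raise ValueError("not a CVSS vector")
    metrics = {}
    for part in parts[1:]:
        key, _, value = part.partition(":")
        metrics[key] = value
    return metrics

v = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(v["AV"], v["C"])  # network attack vector, high confidentiality impact
```

A tool like the one described would feed such parsed metrics, together with CVE and CPE data, into its ontology-based risk analysis.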
A Distributed Sensor Data Search Platform for Internet of Things Environments
Recently, the number of connected devices has grown rapidly, and it is expected that, between 2015 and 2016, 20 billion devices will be connected to the Internet and that this market will move around 91.5 billion dollars. The Internet of Things (IoT) is composed of small sensors and actuators embedded in objects with Internet access, and it will play a key role in solving many challenges faced in today's society. However, the real capacity of IoT concepts is constrained because current sensor networks usually do not exchange information with other sources. In this paper, we propose the Visual Search for Internet of Things (ViSIoT) platform to help technical and non-technical users discover and use sensors as a service for different application purposes. As a proof of concept, a real case study is used to generate weather condition reports to support rheumatism patients. This case study was executed in a working prototype, and a performance evaluation is presented.
Comment: International Journal of Services Computing (ISSN 2330-4472), Vol. 4, No. 1, January - March, 201
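The sensor-discovery idea above, finding sensors by their metadata and consuming them as a service, can be sketched as a simple registry filter. The registry schema is invented for illustration and is not ViSIoT's actual data model.

```python
# Toy sensor registry and metadata search, in the spirit of sensor
# discovery platforms.
SENSORS = [
    {"id": "s1", "type": "temperature", "city": "Recife", "unit": "C"},
    {"id": "s2", "type": "humidity",    "city": "Recife", "unit": "%"},
    {"id": "s3", "type": "temperature", "city": "Lisbon", "unit": "C"},
]

def search(registry, **criteria):
    """Return sensors whose metadata matches every given criterion."""
    return [s for s in registry
            if all(s.get(k) == v for k, v in criteria.items())]

hits = search(SENSORS, type="temperature", city="Recife")
print([s["id"] for s in hits])  # ['s1']
```

A weather-report application like the rheumatism case study would combine several such queries (temperature, humidity) over sensors from independent networks.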
Regional Data Archiving and Management for Northeast Illinois
This project studies the feasibility and implementation options for establishing a regional data archiving system to help monitor
and manage traffic operations and planning for the northeastern Illinois region. It aims to provide a clear guidance to the
regional transportation agencies, from both technical and business perspectives, about building such a comprehensive
transportation information system. Several implementation alternatives are identified and analyzed. This research is carried
out in three phases.
In the first phase, existing documents related to ITS deployments in the broader Chicago area are summarized, and a
thorough review is conducted of similar systems across the country. Various stakeholders are interviewed to collect
information on all data elements that they store, including the format, system, and granularity. Their perception of a data
archive system, such as potential benefits and costs, is also surveyed. In the second phase, a conceptual design of the
database is developed. This conceptual design includes system architecture, functional modules, user interfaces, and
examples of usage. In the last phase, the possible business models for the archive system to sustain itself are reviewed. We
estimate initial capital and recurring operational/maintenance costs for the system based on realistic information on the
hardware, software, labor, and resource requirements. We also identify possible revenue opportunities.
A few implementation options for the archive system are summarized in this report; namely:
1. System hosted by a partnering agency
2. System contracted to a university
3. System contracted to a national laboratory
4. System outsourced to a service provider
The costs, advantages and disadvantages for each of these recommended options are also provided.
ICT-R27-22
Making open data work for plant scientists
Despite the clear demand for open data sharing, its implementation within plant science is still limited. This is, at least in part, because open data sharing raises several unanswered questions and challenges to current research practices. In this commentary, some of the challenges encountered by plant researchers at the bench when generating, interpreting, and attempting to disseminate their data have been highlighted. The difficulties involved in sharing sequencing, transcriptomics, proteomics, and metabolomics data are reviewed. The benefits and drawbacks of three data-sharing venues currently available to plant scientists are identified and assessed: (i) journal publication; (ii) university repositories; and (iii) community and project-specific databases. It is concluded that community and project-specific databases are the most useful to researchers interested in effective data sharing, since these databases are explicitly created to meet the researchers' needs, support extensive curation, and embody a heightened awareness of what it takes to make data reusable by others. Such bottom-up and community-driven approaches need to be valued by the research community, supported by publishers, and provided with long-term sustainable support by funding bodies and government. At the same time, these databases need to be linked to generic databases where possible, in order to be discoverable to the majority of researchers and thus promote effective and efficient data sharing. As we look forward to a future that embraces open access to data and publications, it is essential that data policies, data curation, data integration, data infrastructure, and data funding are linked together so as to foster data access and research productivity.
Storage Solutions for Big Data Systems: A Qualitative Study and Comparison
Big data systems development is full of challenges in view of the variety of
application areas and domains that this technology promises to serve.
Typically, the fundamental design decisions involved in big data systems design
include choosing appropriate storage and computing infrastructures. In this age
of heterogeneous systems that integrate different technologies into an optimized
solution for a specific real-world problem, big data systems are no exception.
As far as the storage aspect of any big data system is concerned, the primary
facet is the storage infrastructure, and NoSQL seems to be the right technology
to fulfill its requirements. However, every big data application has its own
data characteristics, and thus the corresponding data fits a different data
model. This paper presents a feature and use case analysis and comparison of
the four main data models, namely document-oriented, key-value, graph, and
wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided,
elaborating on the criteria and points that a developer must consider while
making a choice. Typically, big data storage needs to communicate with the
execution engine and other processing and visualization technologies to create
a comprehensive solution. This brings the second facet of big data storage, big
data file formats, into the picture. The second half of the paper compares the
advantages, shortcomings, and possible use cases of the big data file formats
available for Hadoop, which is the foundation for most big data computing
technologies. Decentralized storage and blockchain are seen as the next
generation of big data storage, and their challenges and future prospects are
also discussed.
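The contrast between the data models compared above can be sketched by mapping one record onto three of them. The record and key layouts below are invented for illustration; real stores (e.g. MongoDB, Redis, Cassandra) add indexing, partitioning, and query languages on top.

```python
# One logical record expressed in three NoSQL data-model styles.
record = {"user_id": "u42", "name": "Ada", "city": "London"}

# Document-oriented: the whole record is one self-contained document,
# and its fields remain queryable.
document_store = {"users/u42": record}

# Key-value: an opaque value under a single key; the store itself
# cannot query inside the value.
key_value_store = {"u42": str(record)}

# Wide-column: one row key, values addressed by (column family, column),
# so each row can carry a different set of columns.
wide_column_store = {"u42": {("profile", "name"): "Ada",
                             ("profile", "city"): "London"}}

print(document_store["users/u42"]["city"])            # queryable field
print(wide_column_store["u42"][("profile", "city")])  # column lookup
```

The graph model, the fourth in the comparison, would instead represent the user as a node with typed edges to related nodes, which suits relationship-heavy queries.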