SERVICE-BASED INTERACTIVE WORKFLOWS FOR MOBILE ENVIRONMENTS
As the use of mobile devices spreads, mobile systems increasingly play a major role in distributed business processes. In such scenarios, extending workflow management support to mobile systems offers the potential to seamlessly integrate field staff into business processes, even if the executing devices are disconnected from the company's server. However, the heterogeneity of current mobile systems still requires complex device-specific descriptions of user interfaces to integrate manual tasks. Therefore, this paper presents an abstract, modality-independent description model to support the development and execution of interactive mobile workflows, along with a corresponding prototype realization based on a service-oriented execution module.
Workflows and service discovery: a mobile device approach
Bioinformatics has moved from command-line standalone programs to web-service-based environments. This trend has resulted in an enormous amount of online resources, which can be hard to find and identify, let alone execute and exploit. Furthermore, these resources are aimed, in general, at solving specific tasks, and these tasks usually need to be combined in order to achieve the desired results. In this light, finding the appropriate set of tools to build up a workflow that solves a problem with the services available in a repository is itself a complex exercise, raising issues of service discovery, composition, and representation. On the technological side, mobile devices have experienced incredible growth in the number of users and in technical capabilities. Starting from this reality, in the present paper we propose a solution for service discovery and workflow generation, while distinct approaches to representing workflows in a mobile environment are reviewed and discussed. As a proof of concept, a specific use case has been developed: we have embedded an expanded version of our Magallanes search engine into mORCA, our mobile client for bioinformatics. This combination delivers a powerful and ubiquitous solution that provides the user with a handy tool not only for generating and representing workflows, but also for discovering services, data types, operations, and service types.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
weSPOT: a cloud-based approach for personal and social inquiry
Scientific inquiry is at the core of the curricula of schools and universities across Europe. weSPOT is a new European initiative proposing a cloud-based approach for personal and social inquiry. weSPOT aims at enabling students to create their own mashups out of cloud-based tools in order to perform scientific investigations. Students will also be able to share their inquiry accomplishments in social networks and receive feedback from the learning environment and their peers.
KAPTUR: technical analysis report
Led by the Visual Arts Data Service (VADS) and funded by the JISC Managing Research Data programme (2011-13), KAPTUR will discover, create, and pilot a sectoral model of best practice in the management of research data in the visual arts, in collaboration with four institutional partners: Glasgow School of Art; Goldsmiths, University of London; University for the Creative Arts; and University of the Arts London.
This report is framed around the research question: which technical system is most suitable for managing visual arts research data?
The first stage involved a literature review including information gathered through attendance at meetings and events, and Internet research, as well as information on projects from the previous round of JISCMRD funding (2009-11).
During February and March 2012, the Technical Manager carried out interviews with the four KAPTUR Project Officers and also met with IT staff at each institution. This led to the creation of a user requirement document (Appendix A), which was then circulated to the project team for additional comments and feedback. The Technical Manager selected 17 systems to compare against the user requirement document (Appendix B). Five of the systems had similar scores, so these were short-listed. The Technical Manager created an online form into which the Project Officers entered priority scores for each of the user requirements in order to calculate a more accurate score for each of the five short-listed systems (Appendix C); this resulted in the choice of EPrints as the software for the KAPTUR project.
A Time-driven Data Placement Strategy for a Scientific Workflow Combining Edge Computing and Cloud Computing
Compared to traditional distributed computing environments such as grids, cloud computing provides a more cost-effective way to deploy scientific workflows. Each task of a scientific workflow requires several large datasets that are located in different datacenters of the cloud computing environment, resulting in serious data transmission delays. Edge computing reduces these data transmission delays and supports storing a scientific workflow's private datasets at fixed edge locations, but its storage capacity is a bottleneck. It is a challenge to combine the advantages of both edge computing and cloud computing so as to rationalize the data placement of scientific workflows and optimize the data transmission time across different datacenters. Traditional data placement strategies maintain load balancing with a given number of datacenters, which results in a large data transmission time. In this study, a self-adaptive discrete particle swarm optimization algorithm with genetic algorithm operators (GA-DPSO) was proposed to optimize the data transmission time when placing data for a scientific workflow. This approach considers the characteristics of data placement when combining edge computing and cloud computing, as well as the factors impacting transmission delay, such as the bandwidth between datacenters, the number of edge datacenters, and the storage capacity of edge datacenters. The crossover and mutation operators of the genetic algorithm were adopted to avoid the premature convergence of the traditional particle swarm optimization algorithm, which enhances the diversity of the population and effectively reduces the data transmission time. The experimental results show that the data placement strategy based on GA-DPSO can effectively reduce the data transmission time during workflow execution combining edge computing and cloud computing.
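The core idea of a discrete PSO with GA operators can be sketched as follows: each particle encodes a dataset-to-datacenter assignment, and the usual velocity update is replaced by crossover with the personal and global bests plus mutation. Everything below (the problem size, the random cost matrix standing in for transmission time, and the operator choices) is an illustrative assumption, not the paper's GA-DPSO implementation.

```python
import random

N_DATASETS, N_CENTERS = 8, 3   # toy problem size (assumed)
# Toy stand-in for transmission cost of placing dataset d in datacenter c.
COST = [[random.random() for _ in range(N_CENTERS)] for _ in range(N_DATASETS)]

def fitness(placement):
    """Proxy for total data transmission time (lower is better)."""
    return sum(COST[d][c] for d, c in enumerate(placement))

def crossover(a, b):
    """One-point crossover of two placements (GA operator)."""
    cut = random.randrange(1, N_DATASETS)
    return a[:cut] + b[cut:]

def mutate(p, rate=0.1):
    """Randomly reassign some datasets to other datacenters (GA operator)."""
    return [random.randrange(N_CENTERS) if random.random() < rate else c
            for c in p]

def ga_dpso(n_particles=20, iters=100):
    swarm = [[random.randrange(N_CENTERS) for _ in range(N_DATASETS)]
             for _ in range(n_particles)]
    pbest = [list(p) for p in swarm]
    gbest = min(swarm, key=fitness)
    for _ in range(iters):
        for i, p in enumerate(swarm):
            # "Velocity" update: mix in personal and global bests via
            # crossover, then mutate to keep the population diverse and
            # avoid premature convergence.
            p = mutate(crossover(crossover(p, pbest[i]), gbest))
            swarm[i] = p
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p
        gbest = min(pbest, key=fitness)
    return gbest

best = ga_dpso()
print(fitness(best))
```

A full strategy would replace the toy cost with a model of bandwidth, edge-datacenter count, and edge storage-capacity constraints, rejecting placements that overflow an edge datacenter.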
Applications and Challenges of Real-time Mobile DNA Analysis
DNA sequencing is the process of identifying the exact order of nucleotides within a given DNA molecule. New portable and relatively inexpensive DNA sequencers, such as the Oxford Nanopore MinION, have the potential to move DNA sequencing outside of the laboratory, leading to faster and more accessible DNA-based diagnostics. However, portable DNA sequencing and analysis are challenging for mobile systems, owing to high data throughputs and computationally intensive processing performed in environments with unreliable connectivity and power.
In this paper, we provide an analysis of the challenges that mobile systems and mobile computing must address to maximize the potential of portable DNA sequencing and in situ DNA analysis. We explain the DNA sequencing process and highlight the main differences between traditional and portable DNA sequencing in the context of actual and envisioned applications. We look at the identified challenges from the perspective of both algorithm and systems design, showing the need for careful co-design.
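One concrete flavor of in situ analysis under the throughput constraints described above is streaming computation over reads as they arrive, so the device never buffers the full data stream. The sketch below counts k-mers incrementally; the toy reads and the choice of k are illustrative assumptions, not an algorithm from the paper.

```python
from collections import Counter

def stream_kmers(reads, k=4):
    """Count k-mers incrementally, one read at a time, so memory scales
    with the number of distinct k-mers rather than the raw stream size."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

reads = ["ACGTACGT", "TTACGTAA"]   # toy reads (assumed)
counts = stream_kmers(reads, k=4)
print(counts["ACGT"])              # -> 3 (twice in the first read, once in the second)
```

On a real device the same loop would consume reads from the sequencer's output as they are base-called, which is exactly where the algorithm/system co-design tension arises: heavier per-read processing competes with the sequencer's data rate and the device's power budget.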
funcX: A Federated Function Serving Fabric for Science
Exploding data volumes and velocities, new computational methods and platforms, and ubiquitous connectivity demand new approaches to computation in the sciences. These new approaches must enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., the arrival of new data), be offloaded to specialized accelerators, or run remotely where resources are available. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. To address these needs we present funcX, a distributed function-as-a-service (FaaS) platform that enables flexible, scalable, and high-performance remote function execution. funcX's endpoint software can transform existing clouds, clusters, and supercomputers into function-serving systems, while funcX's cloud-hosted service provides transparent, secure, and reliable function execution across a federated ecosystem of endpoints. We motivate the need for funcX with several scientific case studies, present our prototype design and implementation, show optimizations that deliver throughput in excess of 1 million functions per second, and demonstrate, via experiments on two supercomputers, that funcX can scale to more than 130,000 concurrent workers.
Comment: Accepted to ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC 2020). arXiv admin note: substantial text overlap with arXiv:1908.0490
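The FaaS pattern the abstract describes (register a function with a service, submit it to an endpoint, fetch the result via a future) can be mimicked locally with the standard library. The sketch below is not the funcX SDK; the class, its methods, and the example function are all illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

class ToyEndpoint:
    """Local stand-in for a remote function-serving endpoint."""

    def __init__(self, workers=2):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._registry = {}

    def register(self, fn):
        """Register a function and return an id to submit against later."""
        fn_id = fn.__name__
        self._registry[fn_id] = fn
        return fn_id

    def submit(self, fn_id, *args, **kwargs):
        """Run a registered function asynchronously; return a future."""
        return self._pool.submit(self._registry[fn_id], *args, **kwargs)

def analyze(x):
    return x * x

endpoint = ToyEndpoint()
fid = endpoint.register(analyze)
future = endpoint.submit(fid, 7)
print(future.result())   # -> 49
```

In the real system the registry and futures live in the cloud-hosted service, the pool is a cluster or supercomputer behind the endpoint, and function code and arguments must be serialized and shipped across the network, which is where most of the engineering for throughput and security lies.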