Managing Uncertainty: A Case for Probabilistic Grid Scheduling
The Grid technology is evolving into a global, service-orientated
architecture, a universal platform for delivering future high demand
computational services. Strong adoption of the Grid and the utility computing
concept is leading to an increasing number of Grid installations running a wide
range of applications of different size and complexity. In this paper we
address the problem of delivering deadline/economy-based scheduling in a
heterogeneous application environment using statistical properties of historical
job executions and their associated meta-data. This approach is motivated
by a study of six-month computational load generated by Grid applications in a
multi-purpose Grid cluster serving a community of twenty e-Science projects.
The observed job statistics, resource utilisation, and user behaviour are
discussed in the context of the management approaches and models most suitable
for supporting a probabilistic and autonomous scheduling architecture.
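The core idea of scheduling from historical execution statistics can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names, the empirical-probability estimator, and the tie-breaking rule are all assumptions made for the example.

```python
import statistics

def deadline_confidence(history, deadline):
    """Empirical probability that a job finishes within `deadline`,
    estimated from its historical execution times."""
    if not history:
        return 0.0
    return sum(t <= deadline for t in history) / len(history)

def pick_resource(histories, deadline):
    """Choose the resource whose job history gives the highest chance of
    meeting the deadline; ties are broken by lower mean runtime."""
    return max(histories,
               key=lambda r: (deadline_confidence(histories[r], deadline),
                              -statistics.mean(histories[r])))

histories = {
    "cluster-a": [40, 55, 48, 62, 51],   # past runtimes in minutes: steady
    "cluster-b": [30, 90, 35, 110, 33],  # sometimes faster, but erratic
}
# cluster-a meets a 60-minute deadline in 4 of 5 past runs (0.8),
# cluster-b in only 3 of 5 (0.6), so cluster-a is selected.
print(pick_resource(histories, 60))  # → cluster-a
```

A deadline/economy-based scheduler would extend this with cost terms, but the probabilistic core — deriving a confidence estimate from per-application execution history rather than a single point estimate — is what the abstract describes.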
Managing Opportunities and Challenges of Co-Authorship
Research with the largest impact on practice and science is often conducted by teams with diverse substantive, clinical, and methodological expertise. Team and interdisciplinary research has created authorship groups with varied expertise and expectations. Co-authorship among team members presents many opportunities and challenges. Intentional planning, clear expectations, sensitivity to differing disciplinary perspectives, attention to power differentials, effective communication, timelines, attention to published guidelines, and documentation of progress will contribute to successful co-authorship. Both novice and seasoned authors will find the strategies identified by the Western Journal of Nursing Research Editorial Board useful for building positive co-authorship experiences.
Every Moment Counts: Synchrophasors for Distribution Networks with Variable Resources
Chapter 34 in the textbook, "Renewable Energy Integration: Practical Management of Variability, Uncertainty and Flexibility".
An integrated framework to assess financial reward systems in construction projects
Motivation is a major driver of project performance. Even when team members have the ability to deliver successful project outcomes, if they are not positively motivated to pursue joint project goals, performance will be constrained. One approach to improving the motivation of project organizations is to offer a financial reward for the achievement of set performance standards above a minimum required level. However, little investigation has been undertaken into the features of successful incentive systems as part of an overall delivery strategy. With input from organizational management literature, and drawing on the literature covering psychological and economic theories of motivation, this paper presents an integrated framework that can be used by project organizations to assess the impact of financial reward systems on motivation in construction projects. The integrated framework offers four motivation indicators which reflect key theoretical concepts across both psychological and economic disciplines. The indicators are: (1) Goal Commitment, (2) Distributive Justice, (3) Procedural Justice, and (4) Reciprocity. The paper also interprets the integrated framework against the results of a successful Australian social infrastructure project case study and identifies key lessons for project organizations to consider when designing financial reward systems. Case study results suggest that motivation directed towards the achievement of incentive goals is influenced not only by the value placed on the financial reward for commercial benefit, but also by the strength of the project initiatives that encourage just and fair dealings, supporting the establishment of trust and positive reciprocal behavior across a project team. The strength of the project relationships was found to be influenced by how attractive the achievement of the goal is to the incentive recipient and how likely they are to push for its achievement.
Interestingly, findings also suggested that contractor motivation is influenced by the fairness of the performance measurement process and by the contractor's perception of the trustworthiness and transparency of the client. These findings provide the basis for future research on the impact of financial reward systems on motivation in construction projects. It is anticipated that such research will shed new light on this complex topic and further define how reward systems should be designed to promote project team motivation. Due to the unique nature of construction projects, with high levels of task complexity and interdependence, results are expected to vary in comparison to previous studies based on individuals or single-entity organizations.
Data locality in Hadoop
Current market trends show the need to store and process rapidly growing
amounts of data, which implies a demand for distributed storage and
data-processing systems. Apache Hadoop is an open-source framework for
managing such computing clusters in an effective, fault-tolerant way.
When dealing with large volumes of data, Hadoop and its storage system, HDFS
(the Hadoop Distributed File System), face the challenge of maintaining high
efficiency while completing computations in a reasonable time. The typical
Hadoop implementation transfers computation to the data rather than shipping
data across the cluster, since moving large quantities of data through the
network could significantly delay data processing tasks. While a task is
running, Hadoop therefore favours local data access and chooses blocks from the
nearest nodes; any remaining blocks are moved only at the moment they are
needed by the given task.
To support Hadoop's data-locality preferences, this thesis proposes adding a
new functionality to its distributed file system (HDFS) that enables moving
data blocks on request. Shipping data in advance makes it possible to
redistribute blocks between nodes so that the data layout can easily be adapted
to the given processing tasks. The new functionality enables the instructed
movement of data blocks within the cluster: data can be shifted either by a
user running the appropriate HDFS shell command or programmatically by another
module, such as a suitable scheduler.
In order to develop this functionality, a detailed analysis of the Apache
Hadoop source code and its components (specifically HDFS) was conducted. This
research resulted in a deep understanding of the internal architecture, which
made it possible to compare the candidate approaches to the desired solution
and to develop the chosen one.
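The in-advance block shipping described above amounts to a planning step: given where block replicas currently live and where tasks will run, compute the transfers needed to make every task's input local. The sketch below illustrates that idea only; the function name, data shapes, and replica-selection rule are assumptions for the example, not HDFS internals or the thesis's actual API.

```python
def plan_block_moves(block_locations, task_assignments):
    """Plan in-advance block transfers so that every task finds its input
    block on the node where it is scheduled to run.

    block_locations:  block id -> set of nodes currently holding a replica
    task_assignments: block id -> node where the processing task will run
    Returns a list of (block, source_node, target_node) transfers,
    skipping blocks that are already local to their task.
    """
    moves = []
    for block, target in task_assignments.items():
        replicas = block_locations.get(block, set())
        if target in replicas:
            continue  # data locality already satisfied, nothing to ship
        if not replicas:
            raise ValueError(f"no replica found for block {block}")
        # Pick a replica holder deterministically; a real planner would
        # weigh network distance and node load here.
        source = sorted(replicas)[0]
        moves.append((block, source, target))
    return moves

locations = {"blk_1": {"nodeA"}, "blk_2": {"nodeB", "nodeC"}}
tasks = {"blk_1": "nodeB", "blk_2": "nodeC"}
# blk_2 is already on its task's node, so only blk_1 needs shipping.
print(plan_block_moves(locations, tasks))  # → [('blk_1', 'nodeA', 'nodeB')]
```

In the thesis's design, such a plan would then be executed either via an HDFS shell command or programmatically by a scheduler module, rather than waiting for each task to pull its blocks on demand.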
A Quality Model for Actionable Analytics in Rapid Software Development
Background: Accessing relevant data on the product, process, and usage
perspectives of software as well as integrating and analyzing such data is
crucial for getting reliable and timely actionable insights aimed at
continuously managing software quality in Rapid Software Development (RSD). In
this context, several software analytics tools have been developed in recent
years. However, there is a lack of explainable software analytics that software
practitioners trust. Aims: We aimed at creating a quality model (called
Q-Rapids quality model) for actionable analytics in RSD, implementing it, and
evaluating its understandability and relevance. Method: We performed workshops
at four companies in order to determine relevant metrics as well as product and
process factors. We also elicited how these metrics and factors are used and
interpreted by practitioners when making decisions in RSD. We specified the
Q-Rapids quality model by comparing and integrating the results of the four
workshops. Then we implemented the Q-Rapids tool to support the usage of the
Q-Rapids quality model as well as the gathering, integration, and analysis of
the required data. Afterwards we installed the Q-Rapids tool in the four
companies and performed semi-structured interviews with eight product owners to
evaluate the understandability and relevance of the Q-Rapids quality model.
Results: The participants of the evaluation perceived the metrics as well as
the product and process factors of the Q-Rapids quality model as
understandable. Also, they considered the Q-Rapids quality model relevant for
identifying product and process deficiencies (e.g., blocking code situations).
Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model
enables detecting problems that would take more time to find manually and adds
transparency among the perspectives of system, process, and usage.
Comment: This is an Author's Accepted Manuscript of a paper to be published by
IEEE in the 44th Euromicro Conference on Software Engineering and Advanced
Applications (SEAA) 2018. The final authenticated version will be available
online.
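A quality model of this shape aggregates raw metrics into higher-level product and process factors that practitioners can act on. The fragment below is a generic illustration of such metric-to-factor aggregation; the metric names, weights, and the weighted-average scheme are hypothetical examples, not the Q-Rapids model's actual definitions.

```python
def factor_score(metric_values, weights):
    """Aggregate normalised metric values (each in 0..1) into a single
    factor score as a weighted average."""
    total = sum(weights.values())
    return sum(metric_values[m] * w for m, w in weights.items()) / total

# Hypothetical product factor "code quality" built from three metrics
# gathered from heterogeneous sources (CI, static analysis, repository).
metrics = {"test_coverage": 0.8, "comment_ratio": 0.5, "duplication_freedom": 0.9}
weights = {"test_coverage": 0.5, "comment_ratio": 0.2, "duplication_freedom": 0.3}
print(round(factor_score(metrics, weights), 2))  # → 0.77
```

The value of making the aggregation explicit, as the paper argues, is explainability: a product owner who sees a low factor score can trace it back to the individual metrics and data sources that produced it.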
Distributed software development in a financial services organisation
The outsourcing of IS functionality to offshore development firms has been a growth industry that has blossomed over the last 10 years, as organisations seek to optimise costs, mitigate risks, and achieve greater return on shareholder value by delegating the delivery of business information systems and applications to third-party vendors. At the same time, as distributed approaches to software development have arisen, there has been growing interest in the applicability of lightweight or Agile development methodologies. This paper discusses the experiences of a European financial services firm in outsourcing, and subsequently offshoring, two of its IT projects to vendor firms in India, where Agile approaches were used. The authors provide a model of the financial firm's critical success factors, presented as a frame of reference for others interested and involved in this topical area.