
    “I See You!” – The Zulu Insight to Caring Leadership

    Although the role of leaders in building relationships with team members has been well established as a foundation for improved performance (Beer, 2009), the complex challenges of directing the modern organization in a highly competitive global marketplace often mean that organizational leaders focus more on tasks than on people. Nonetheless, a growing body of research on the importance of leader-member relationships confirms that leaders who demonstrate a caring commitment to the welfare of organization members also create organizations that are more profitable, more innovative, and more effective at meeting customer needs (Cameron, 2003; Kouzes & Posner, 2012).

    Scaling better together: The International Livestock Research Institute’s framework for scaling

    Survey Methodology: International Developments

    Falling response rates and the advancement of technology have shaped the discussion in survey methodology in recent years. Both have led to a notable change in data collection efforts. Survey organizations try to create adaptive recruitment and survey designs and have increased the collection of non-survey data for sampled cases. While the first strategy is an attempt to increase response rates and save costs, the latter is part of efforts to reduce possible bias and the response burden of those interviewed. To successfully implement adaptive designs and alternative data collection efforts, researchers need to understand the error properties of mixed-mode and multiple-frame surveys. Randomized experiments might be needed to gain that knowledge. In addition, close collaboration between survey organizations and researchers is needed, including the possibility and willingness to share data between those organizations. Expanding options for graduate and post-graduate education in survey methodology might help to increase the possibility of high-quality surveys. Keywords: Survey Methodology, Responsive Design, Paradata
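
    As an illustration of the adaptive (responsive) designs discussed above, the sketch below shows how paradata might feed a simple per-case protocol decision. It is not taken from the article; the field names, thresholds and protocols are hypothetical.

```python
# Illustrative sketch only: a simple adaptive-design rule that chooses the next
# data-collection protocol for a sampled case based on paradata.
# Field names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class CaseParadata:
    contact_attempts: int                  # contact attempts made so far
    web_breakoffs: int                     # partial web interviews abandoned
    estimated_response_propensity: float   # from a response-propensity model

def next_protocol(p: CaseParadata) -> str:
    """Return the next data-collection protocol for one sampled case."""
    if p.estimated_response_propensity < 0.10 and p.contact_attempts >= 4:
        return "interviewer-administered follow-up"   # costly, aimed at likely nonrespondents
    if p.web_breakoffs > 0:
        return "shortened web questionnaire"          # reduce response burden
    if p.contact_attempts >= 2:
        return "web invitation with incentive"
    return "standard web invitation"

# Example: a low-propensity case after several contact attempts
print(next_protocol(CaseParadata(contact_attempts=5, web_breakoffs=0,
                                 estimated_response_propensity=0.07)))
```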

    No-reference bitstream-based visual quality impairment detection for high definition H.264/AVC encoded video sequences

    Ensuring and maintaining an adequate Quality of Experience for end-users are key objectives for video service providers, not only for increasing customer satisfaction but also as a service differentiator. However, in the case of High Definition video streaming over IP-based networks, network impairments such as packet loss can severely degrade the perceived visual quality. Several standards organizations have established a minimum set of performance objectives which should be achieved to obtain satisfactory quality. Therefore, video service providers should continuously monitor the network and the quality of the received video streams in order to detect visual degradations. Objective video quality metrics enable automatic measurement of perceived quality. Unfortunately, the most reliable metrics require access to both the original and the received video streams, which makes them inappropriate for real-time monitoring. In this article, we present a novel no-reference bitstream-based visual quality impairment detector which enables real-time detection of visual degradations caused by network impairments. By only incorporating information extracted from the encoded bitstream, network impairments are classified as visible or invisible to the end-user. Our results show that impairment visibility can be classified with high accuracy, which enables real-time validation of the existing performance objectives.
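
    To make the bitstream-only classification idea concrete, a minimal hypothetical sketch follows. The features and the scoring rule are illustrative stand-ins, not the detector described in the article.

```python
# Illustrative sketch only (not the article's detector): decide whether a
# packet-loss impairment is visible to the end-user using bitstream-level
# features alone. Feature names and the scoring rule are hypothetical
# stand-ins for a trained classifier.
from dataclasses import dataclass

@dataclass
class ImpairmentFeatures:
    lost_macroblock_ratio: float   # fraction of macroblocks lost in the frame
    avg_motion_vector_len: float   # average motion vector length (pixels)
    frame_type: str                # "I", "P" or "B"
    frames_until_next_i: int       # loss can propagate until the next I-frame

def is_visible(f: ImpairmentFeatures) -> bool:
    """Heuristic visibility decision; a real detector would use a trained model."""
    score = 4.0 * f.lost_macroblock_ratio       # more lost area, more visible
    score += 0.05 * f.avg_motion_vector_len     # high motion makes concealment harder
    score += 0.02 * f.frames_until_next_i       # longer propagation, more visible
    if f.frame_type == "I":
        score += 0.5                            # reference-frame losses propagate further
    return score > 0.6

print(is_visible(ImpairmentFeatures(0.12, 8.0, "P", 20)))   # -> True (visible)
```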

    Building development cost drivers in the New Zealand construction industry : a multilevel analysis of the causal relationships : a thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy (PhD) in Construction, School of Engineering & Advanced Technology, Massey University, Albany, New Zealand

    Building development cost is influenced by a raft of complex factors which range from project characteristics to the operating environment and external dynamics. It is not yet clearly understood how these factors interact with each other and individually to influence building cost. This gap in knowledge has resulted in inaccurate estimates, improper cost management and control, and poor project cost performance. This study aims to bridge the knowledge gap by developing and validating a multilevel model of the key drivers of building development cost (BDC) and their causal relationships. Based on literature insights and feedback from a survey of industry practitioners, hypotheses were put forward regarding the causal relationships between BDC and the following key drivers as latent constructs: project component costs factor, project characteristics factor, project stakeholders' influences factor, property market and construction industry factor, statutory and regulatory factor, national and global dynamics, and socio-economic factor. Observed indicators of the model's latent constructs were identified and measured using a mixed methods research design. Results showed that the property market and construction industry factor was the most significant predictor of building development cost in New Zealand, while the project component costs factor had the least impact. The model's fit to the empirical dataset and its predictive reliability were validated using structural equation modelling. Results of an additional model validation test by a panel of experts further confirmed its efficacy. Overall, the results suggest that sole reliance on the immediate project component costs, without due consideration of the wider and more influential effects of external factors, could result in inaccurate estimates of building development cost. Key recommendations included addressing the priority observed indicators of the most significant latent variables in cost studies and analyses. Keywords: building development cost, cost drivers, cost modelling, cost prediction
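
    As a hedged illustration of the structural equation modelling step, the sketch below specifies a toy measurement and structural model in which two latent cost-driver constructs predict building development cost. It assumes the semopy Python package; the construct names, indicator names and data file are placeholders, not the thesis model.

```python
# Illustrative sketch only: a toy SEM in which latent cost-driver constructs
# predict building development cost (BDC). Assumes the semopy package;
# indicator names (x1..x6, bdc1, bdc2) and the CSV file are hypothetical.
import pandas as pd
import semopy

# Measurement model (=~) and structural model (~) in lavaan-like syntax.
# Market stands in for the property market / construction industry factor,
# Components for the project component costs factor.
model_desc = """
Market =~ x1 + x2 + x3
Components =~ x4 + x5 + x6
BDC =~ bdc1 + bdc2
BDC ~ Market + Components
"""

data = pd.read_csv("survey_indicators.csv")   # hypothetical indicator dataset
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())   # path estimates, standard errors, p-values
```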

    An Automated Methodology for Validating Web Related Cyber Threat Intelligence by Implementing a Honeyclient

    This work contributes to the field of cyber defence by offering an alternative way of keeping a threat intelligence database up to date. Websites are exploited as a means of delivering malicious code to a victim. Once a website has been classified as malicious, it is added to the threat intelligence database as a malicious indicator. Eventually such databases become voluminous and contain outdated entries. The solution is to automate the checking of outdated entries with honeyclient software; the whole process can be fully automated in order to save time. Hunting for verified and confirmed indicators helps to avoid processing cybersecurity incidents on false grounds.

    This paper contributes to the open-source cybersecurity community by providing an alternative methodology for analyzing web-related cyber threat intelligence. Websites are commonly used as an attack vector to spread malicious content crafted by any malicious party. These websites become threat intelligence which can be collected and stored in corresponding databases. Eventually these cyber threat databases become obsolete and can lead to false-positive investigations in cyber incident response. The solution is to keep the threat indicator entries valid by verifying their content, and this process can be fully automated to make it less time consuming. The proposed technical solution is a low-interaction honeyclient regularly tasked with verifying the content of the web-based threat indicators. Given the huge number of database entries, this approach allows most of the web-based threat indicators to be validated automatically with less time consumption, keeps them relevant for monitoring purposes, and ultimately helps to avoid false positives in incident response processes.
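
    A minimal sketch of the honeyclient re-validation loop described above. It assumes the Python requests library; the feed entries, status labels and content heuristic are hypothetical, not the thesis implementation.

```python
# Illustrative sketch only: a low-interaction check that re-validates web-based
# threat indicators so stale entries can be flagged. Assumes the requests
# library; the feed entries and the content heuristic are hypothetical.
import requests

def check_indicator(url: str, timeout: float = 10.0) -> str:
    """Fetch the indicator URL and return a coarse validation status."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
    except requests.RequestException:
        return "unreachable"      # candidate for removal from the database
    if resp.status_code >= 400:
        return "gone"             # content removed or blocked
    # A real honeyclient would analyze the response (redirect chains, scripts,
    # known payload signatures) instead of this placeholder check.
    if b"<script" in resp.content.lower():
        return "needs-analysis"
    return "benign-looking"

indicators = ["http://example.invalid/payload.html"]   # hypothetical feed entries
for url in indicators:
    print(url, "->", check_indicator(url))
```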

    Using the right information: A theoretical explanation of user motivation to validate web-content

    Using unverified information can have many dire consequences, especially when it is used in decision making or tasks. Evidence of gaffes caused by using deficient information abounds. The increasing dependence of users on the web as a source of information raises the risk of using obsolete, inaccurate, and unreliable content if it is unverified. Though the web serves as an information store and archive, the nature of its information content is sticky due to the lack of centralized controls, regulation, and content gatekeeping. Using regulatory focus theory as a lens and augmenting it with propositions from attributional processing, this study seeks to theoretically understand how users can be motivated to validate web content, and the conditions that moderate this motivation.

    Machine Learning at Microsoft with ML.NET

    Machine Learning is transitioning from an art and science into a technology available to every developer. In the near future, every application on every platform will incorporate trained models to encode data-based decisions that would be impossible for developers to author. This presents a significant engineering challenge, since currently data science and modeling are largely decoupled from standard software development processes. This separation makes incorporating machine learning capabilities inside applications unnecessarily costly and difficult, and furthermore discourages developers from embracing ML in the first place. In this paper we present ML.NET, a framework developed at Microsoft over the last decade in response to the challenge of making it easy to ship machine learning models in large software applications. We present its architecture and illuminate the application demands that shaped it. Specifically, we introduce DataView, the core data abstraction of ML.NET, which allows it to capture full predictive pipelines efficiently and consistently across training and inference lifecycles. We close the paper with a surprisingly favorable performance study of ML.NET compared to more recent entrants, and a discussion of some lessons learned.
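
    Because ML.NET itself is a C#/.NET framework, the sketch below is only a conceptual analogue in Python using scikit-learn, not ML.NET's actual API. It illustrates the pipeline idea the abstract attributes to DataView: featurization and the trained model are captured as a single object, so exactly the same transformations run at training and inference time.

```python
# Conceptual analogue only (not ML.NET's API): one pipeline object that
# captures featurization and the model together, keeping training and
# inference consistent. The toy data below is made up for illustration.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pipeline = Pipeline([
    ("featurize", TfidfVectorizer()),               # text -> numeric features
    ("model", LogisticRegression(max_iter=1000)),   # binary sentiment classifier
])

train_texts = ["great product", "terrible support", "great support", "terrible product"]
train_labels = [1, 0, 1, 0]
pipeline.fit(train_texts, train_labels)

# Inference reuses exactly the same featurization that was fit during training.
print(pipeline.predict(["really great support"]))
```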

    SOCR: Statistics Online Computational Resource

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform-independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.
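
    Purely as an illustration of the kind of virtual probability experiment such a resource hosts (this is not SOCR code), the short simulation below approximates the sampling distribution of the mean of a fair die.

```python
# Illustration only (not SOCR code): simulate the sampling distribution of the
# mean of 30 rolls of a fair six-sided die.
import random
import statistics

def sample_mean(n: int) -> float:
    """Mean of n rolls of a fair six-sided die."""
    return statistics.mean(random.randint(1, 6) for _ in range(n))

means = [sample_mean(30) for _ in range(10_000)]
print("empirical mean:", round(statistics.mean(means), 3))   # close to 3.5
print("empirical sd:  ", round(statistics.stdev(means), 3))  # close to 1.708 / 30 ** 0.5
```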

    The European Institute for Innovation through Health Data

    The European Institute for Innovation through Health Data (i~HD, www.i-hd.eu) has been formed as one of the key sustainable entities arising from the Electronic Health Records for Clinical Research (IMI-JU-115189) and SemanticHealthNet (FP7-288408) projects, in collaboration with several other European projects and initiatives supported by the European Commission. i~HD is a European not-for-profit body, registered in Belgium through Royal Assent. i~HD has been established to tackle areas of challenge in the successful scaling up of innovations that critically rely on high-quality and interoperable health data. It will specifically address obstacles and opportunities to using health data by collating, developing, and promoting best practices in information governance and in semantic interoperability. It will help to sustain and propagate the results of health information and communication technology (ICT) research that enables better use of health data, assessing and optimizing their novel value wherever possible. i~HD has been formed after wide consultation and engagement of many stakeholders to develop methods, solutions, and services that can help to maximize the value obtained by all stakeholders from health data. It will support innovations in health maintenance, health care delivery, and knowledge discovery while ensuring compliance with all legal prerequisites, especially regarding the assurance of patients' privacy protection. It is bringing multiple stakeholder groups together so as to ensure that future solutions serve their collective needs and can be readily adopted affordably and at scale.