    The Sum of Its Parts: The Lawyer-Client Relationship in Initial Public Offerings

    This Article examines the impact of the quality of a lawyer's working relationship with his or her client on one of the most important capital markets deals in a company's existence: its initial public offering (IPO). Drawing on interviews with equity capital markets lawyers at major law firms, and analyzing data from IPOs registered with the U.S. Securities and Exchange Commission between June 1996 and December 2010, this study finds a strong association between several measures of IPO performance and the familiarity between the lead underwriter and its counsel, measured as the number of times a particular law firm serves as counsel to a managing underwriter within a relatively short time period. Performance is gauged by a stock's opening-day returns, price performance over thirty, sixty, and ninety trading days, correct price revision, litigation rates, and the speed at which deals are completed. I also analyze the relationship between the lawyers for the lead underwriter and the lawyers for the issuer. The analysis shows some benefits from familiarity there as well, albeit generally smaller than those associated with the underwriter-lawyer relationship. In all cases, the positive effects of repeated interaction diminish the further back in time the previous collaborations occurred. To rule out selection and reverse causality, I perform a number of tests on smaller subsets of the data that exclude observations plausibly driven by selection. I also show that the relationship between familiarity and deal quality holds independently of the lawyers' level of experience. These findings support the conclusion that lawyers' relational skill can positively influence deal outcomes, independent even of substance and process knowledge. I hypothesize that the core advantage of repeated interaction is the formation of more effective lawyer-client team dynamics.
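
    As a concrete illustration of the familiarity measure described above, the sketch below shows one way a rolling count of prior underwriter-counsel pairings could be computed. The column names (underwriter, counsel, filing_date) and the three-year window are illustrative assumptions, not the Article's actual data schema or specification.

```python
# A minimal sketch, assuming a table of IPO records with hypothetical columns
# 'underwriter', 'counsel', and 'filing_date'; the three-year window is an
# illustrative choice, not the Article's specification.
import pandas as pd

def familiarity_counts(ipos: pd.DataFrame, window_years: int = 3) -> pd.Series:
    """For each IPO, count how many prior deals within the window paired the
    same managing underwriter with the same law firm as counsel."""
    ipos = ipos.sort_values("filing_date").reset_index(drop=True)
    counts = []
    for i, row in ipos.iterrows():
        cutoff = row["filing_date"] - pd.DateOffset(years=window_years)
        prior = ipos.iloc[:i]
        same_pair = (
            (prior["underwriter"] == row["underwriter"])
            & (prior["counsel"] == row["counsel"])
            & (prior["filing_date"] >= cutoff)
        )
        counts.append(int(same_pair.sum()))
    return pd.Series(counts, index=ipos.index, name="familiarity")
```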

    Employee Screening: Theory and Evidence

    Arguably the fundamental problem faced by employers is how to elicit effort from employees. Most models suggest that employers meet this challenge by monitoring employees carefully to prevent shirking. But there is another option that relies on heterogeneity across employees: screening job candidates to find workers with a stronger work ethic who require less monitoring. This should be especially useful in work systems where monitoring by supervisors is more difficult, such as teamwork systems. We analyze the relationship between screening and monitoring in the context of a principal-agent model and test the theoretical results using a national sample of U.S. establishments that includes information on employee selection. We find that employers screen applicants more intensively for work ethic where they make greater use of systems, such as teamwork, in which monitoring is more difficult. This screening is also associated with higher wages, as predicted by the theory: the synergies between reduced monitoring costs and high performance work systems enable the firm to pay higher wages to attract and retain such workers. Screening for other attributes, such as work experience and academic performance, does not produce these results.
    Keywords: Employee Screening, Monitoring, Work Ethic, High Performance Work Practices, Principal-Agent Model
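
    To make the screening-versus-monitoring tradeoff concrete, the stylized numerical sketch below compares a firm's expected profit with and without screening. The functional form, parameter values, and names are illustrative assumptions, not the paper's actual principal-agent model.

```python
# A stylized sketch, not the paper's model: a worker exerts effort either out of
# intrinsic work ethic (probability p_work_ethic) or because monitoring deters
# shirking (probability equal to the monitoring intensity). All numbers are
# illustrative assumptions.

def firm_profit(p_work_ethic, monitoring, output=100.0, wage=40.0,
                monitor_cost=30.0):
    """Expected profit given the share of intrinsically motivated hires and the
    chosen monitoring intensity (0 to 1)."""
    p_effort = p_work_ethic + (1 - p_work_ethic) * monitoring
    return p_effort * output - wage - monitor_cost * monitoring

# Unscreened applicant pool: few high-work-ethic hires, so the firm leans on
# costly monitoring.
best_no_screen = max(firm_profit(0.3, m / 10) for m in range(11))

# Screened pool: mostly high-work-ethic hires, little monitoring needed, and the
# saved monitoring cost can fund a higher wage (as the theory predicts).
best_screen = max(firm_profit(0.85, m / 10, wage=50.0) for m in range(11))

print(f"best profit without screening: {best_no_screen:.1f}")              # 30.0
print(f"best profit with screening and a higher wage: {best_screen:.1f}")  # 35.0
```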

    Structuring AI Teammate Communication: An Exploration of AI's Communication Strategies in Human-AI Teams

    In the past decades, artificial intelligence (AI) has been implemented in various domains to support humans in their work, such as healthcare and the automotive industry. Such applications of AI have led to increasing attention to human-AI teaming, where AI closely collaborates with humans as a teammate. AI as a teammate is expected to be able to coordinate with humans by sharing task-related information, predicting other teammates' behaviors, and progressing team tasks accordingly. To complete these team activities effectively, AI teammates must communicate with humans, for example by sharing updates and checking team progress. Even though communication is a core element of teamwork that helps achieve effective coordination, how to design and structure human-AI communication in teaming environments remains unclear. Given the context-dependent nature of communication, research on human-AI team communication needs to focus on specific communication components, such as communication proactivity and communication content. This dissertation therefore explores how AI teammates' communication should be structured by modifying communication components through three studies, each of which examines a critical component of effective AI communication: (1) communication proactivity, (2) communication content (explanation), and (3) communication approach (verbal vs. non-verbal). These studies provide insights into how AI teammates' communication can be integrated into teamwork and how to design AI teammate communication in human-AI teaming. Study 1 explores communication proactivity and its impact on team processes and team performance. Communication proactivity here refers to whether an AI teammate proactively communicates with human teammates, i.e., proactively pushes information to them. Experimental analysis shows that AI teammates' proactive communication plays a crucial role in shaping human perceptions, such as perceived teammate performance and satisfaction with the teammate. Importantly, teams with a non-proactively communicating AI teammate improve team performance more than teams with a proactively communicating AI as the human and the AI collaborate over time. This study identifies the positive impact of proactive AI communication at the initial stage of task coordination, as well as the potential need for flexibility in AI's communication proactivity (i.e., once human and AI teammates' coordination patterns form, the AI can communicate non-proactively). Study 2 examines communication content by focusing on AI's explanations and their impact on human perceptions in teaming environments. Results indicate that AI's explanations, as part of communication content, do not always positively impact human trust in human-AI teaming. Instead, the impact of AI's explanations on human perceptions depends on the specific collaboration scenario: AI's explanations facilitate trust in the AI teammate when they explain why the AI disobeyed a human's orders, but hinder trust when they explain why the AI lied to humans. In addition, an AI that explained why it ignored the human teammate's injury was perceived as more effective than an AI that provided no such explanation. The findings emphasize the context-dependent nature of AI's communication content, with a focus on AI's explanations of its actions. Study 3 investigates AI's communication approach, manipulated as verbal vs. non-verbal communication. Results indicate that AI teammates' verbal/non-verbal communication does not impact human trust in the AI teammate, but it facilitates the maintenance of humans' situation awareness in task coordination. In addition, AI with non-verbal communication is perceived as having lower communication quality and lower performance. Importantly, AI with non-verbal communication yields better team performance in human-human-AI teams than in human-AI-AI teams, whereas AI with verbal communication yields better team performance in human-AI-AI teams than in human-human-AI teams. Together, these three studies address multiple research gaps in human-AI team communication and provide a holistic view of the design and structure of AI's communication by examining three specific aspects of communication in human-AI teaming. In addition, each study proposes practical design implications for AI's communication in human-AI teams, which will assist AI designers and developers in creating better AI teammates that support humans in teaming environments.

    The Agency Costs of Teamwork


    How to Make Agents and Influence Teammates: Understanding the Social Influence AI Teammates Have in Human-AI Teams

    The introduction of computational systems in the last few decades has enabled humans to cross geographical, cultural, and even societal boundaries. Whether it was the invention of the telephone or file sharing, new technologies have continually enabled humans to work better together. Artificial intelligence (AI) is among the most promising of these technologies. Although AI has a multitude of functions within teaming, such as improving information sciences and analysis, one specific application that has become a critical topic in recent years is the creation of AI systems that act as teammates alongside humans, in what is known as a human-AI team. However, as AI transitions into teammate roles, it will garner new responsibilities and abilities, which ultimately give it greater influence over teams' shared goals and resources, otherwise known as teaming influence. Moreover, that increase in teaming influence will provide AI teammates with a degree of social influence. Unfortunately, while research has observed the impact of teaming influence by examining humans' perceptions and performance, an explicit understanding of the social influence that facilitates long-term teaming change has yet to be developed. This dissertation uses three studies to create a holistic understanding of the social influence that AI teammates possess. Study 1 identifies the fundamental existence of AI teammate social influence and how it pertains to teaming influence. Qualitative data demonstrates that social influence is naturally created as humans actively adapt around AI teammate teaming influence. Furthermore, mixed-methods results demonstrate that the alignment of AI teammate teaming influence with a human's individual motives is the most critical factor in the acceptance of AI teammate teaming influence in existing teams. Study 2 further examines the acceptance of AI teammate teaming and social influence and how the design of AI teammates and humans' individual differences can affect this acceptance. The findings of Study 2 show that humans most readily accept AI teammate teaming influence that is comparable to their own teaming influence on a single task, but acceptance of AI teammate teaming influence across multiple tasks generally decreases as teaming influence increases. Additionally, coworker endorsements are shown to increase the acceptance of high levels of AI teammate teaming influence, and humans who perceive the capabilities of technology in general to be greater are potentially more likely to accept AI teammate teaming influence. Finally, Study 3 explores how the teaming and social influence possessed by AI teammates change in a team that also contains teaming influence from multiple human teammates, meaning social influence between humans also exists. Results demonstrate that AI teammate social influence can drive humans to prefer and observe their human teammates over their AI teammates, but humans' behavioral adaptations are more centered around their AI teammates than their human teammates. These effects demonstrate that AI teammate social influence retains its potency in the presence of human-human teaming and social influence, but its effects differ depending on whether it impacts perception or behavior.
    The above three studies fill a currently under-served research gap in human-AI teaming, namely the understanding of AI teammate social influence and humans' acceptance of it. In addition, each study synthesizes its findings and contributions into actionable design recommendations that will serve as foundational design principles for the initial acceptance of AI teammates within society. Therefore, not only will the research community benefit from the results discussed throughout this dissertation, but so too will the developers, designers, and human teammates of human-AI teams.

    Developing and Facilitating Temporary Team Mental Models Through an Information-Sharing Recommender System

    It is well understood that teams are essential and common in many aspects of life, both work and leisure. Due to the importance of teams, much research attention has focused on how to improve team processes and outcomes. Of particular interest are the cognitive aspects of teamwork, including team mental models (TMMs). Among many other benefits, TMMs involve team members forming a compatible understanding of the task and team in order to make decisions more efficiently. This understanding is sometimes classified using four TMM domains: equipment (e.g., operating procedures), task (e.g., strategies), team interactions (e.g., interdependencies), and teammates (e.g., tendencies). Of particular interest to this dissertation is accelerating the development of teammate TMMs, which involve members understanding the knowledge, skills, attitudes, preferences, and tendencies of their teammates. An accurate teammate TMM allows teams to predict and account for the needs and behaviors of their teammates. Although much research has highlighted how the development of the four TMM domains can be supported, promoting the development of teammate TMMs is particularly challenging for a specific type of team: temporary teams. Temporary teams, in contrast to ongoing teams, involve unknown teammates, novel tasks, short task times (or limited interactions), and members disbanding after completing their task. These teams are increasingly used by organizations because they can be formed agilely, with individual members selected to accomplish a specific task. Such teams are commonly used in contexts such as film production, the military, emergency response, and software development, to name a few. Importantly, although these teams benefit greatly from teammate TMMs due to the efficiencies gained in decision making under tight deadlines, the literature offers very limited understanding of how to support temporary teams in this way. As prior research has suggested, one opportunity to accelerate teammate TMM development on temporary teams is to use technology to selectively share teammate information that supports these TMMs. However, this solution poses numerous privacy concerns. This dissertation uses four studies to create a foundational and thorough understanding of how recommender system technology can be used to promote teammate TMMs through information sharing while limiting privacy concerns. Study 1 takes a highly exploratory approach to set a foundation for the subsequent studies, investigating what information is perceived to be helpful for promoting teammate TMMs on actual temporary teams. Qualitative data suggests that sharing teammate information related to skills/preferences, conflict management styles, and work ethic/reliability is perceived as beneficial to supporting teammate TMMs, and it provides a foundational understanding of what information-sharing recommendations for promoting teammate TMMs should involve. Quantitative results indicate that conflict management data is perceived as more helpful and appropriate to share than personality data. Study 2 investigates the presentation of these recommendations through the factors of anonymity and explanations. Although explanations did not improve trust or satisfaction in the system, providing recommendations associated with a specific teammate name significantly improved several TMM-related team measures for actual temporary teams compared to teams that received anonymous recommendations. This study also sheds light on what temporary team members perceive as the benefits of sharing this information and what they perceive as privacy concerns. Study 3 investigates how the group/team context and individual differences can influence disclosure behavior when using an information-sharing recommender system. Findings suggest that members of teams who are assessed entirely as a team are more willing to unconditionally disclose personal information than members who are assessed as individuals or who receive a mix of individual and team assessment. The results also show how individual differences and information types are associated with disclosure behavior. Finally, Study 4 investigates how the occurrence and content of explanations can influence disclosure behavior and perceptions of an information-sharing recommender system. Data from this study highlights how benefit explanations provided during disclosure can increase disclosure, and explanations provided during recommendations can influence perceptions of trust competence; benefit-related explanations can also decrease privacy concerns. The aforementioned studies fill numerous research gaps in the teamwork literature (i.e., TMMs and temporary teams) and in recommender system research. In addition to these contributions, this dissertation produces design recommendations that inform both the design of group recommender systems and the novel technology conceptualized through this dissertation: information-sharing recommender systems.

    South Carolina Public High Schools: Leadership, Network Dynamics and Innovation

    The purpose of this study was to identify and model the role of leaders in a complex organization. The paper analyzes the spread of innovations using Complexity Theory, Complexity Leadership Theory, and Social Network Theory. Complexity Leadership Theory suggests that certain 'conditions', 'attractors', or relationships must be present during the early stages of innovation, causing innovation to emerge long before it reaches institutionalization. A Dynamic Network Analysis is used to explore the inner workings and relationships that influence an innovation as it moves from emergence to possible institutionalization.

    Clinical governance: striking a balance between checking and trusting

    Clinical governance emerged as one of the big ideas central to the latest round of health reforms. For the first time, it places on health care managers a statutory duty for quality of care on an equal footing with the pre-existing duty of financial responsibility (Warden 1998). Clinical governance tries to encourage an appropriate emphasis on the quality of clinical services by locating the responsibility for that quality along defined lines of accountability. This paper explores some of the implications of clinical governance using the economic perspective of principal-agent theory. It examines the ways in which principals seek to overcome the potential for agent opportunism, either by reducing asymmetries of information (for example, by using performance data) or by aligning objective functions (for example, by creating a shared quality culture). As trust and mutuality (or their absence) underpin all principal-agent relationships, these issues lie at the heart of the discussion. The analysis emphasises the need for a balance between techniques that seek to compel performance improvements (through externally applied measurement and management) and approaches that trust to intrinsic professional motivation to deliver high-quality services. Of crucial importance in achieving this balance is the creation and maintenance of the right organisational culture.
    Keywords: governance

    Towards an Expert System for the Analysis of Computer Aided Human Performance
