
    Application of an Asynchronous Synchronous Alternator for Wind Power Plant of Low, Medium and High Power

    The chapter shows the prospects for the development of alternative energy. Wind energy is growing faster than other types of alternative energy, but its development is constrained by a number of technical contradictions that need to be solved. The main problem of wind power is that the power and direction of the wind flow change continuously. As a result, the rotational speed of the generator constantly varies, and the alternator produces energy with nonstandard parameters in amplitude, frequency, and phase. Converting this energy into energy with standard parameters is a difficult technical task. A brief analysis of the different approaches to solving this problem is given. It is proved that the most promising direction for solving this problem, from the point of view of efficiency, is the use of a double-fed induction alternator (DFIA). The chapter describes the principle of operation of the DFIA and the theory of its energy conversion based on equivalent circuits. Approaches to the optimal design of such generators based on generalized variables are shown. Two variants of the generator design are described. One variant contains an additional exciter generator to power the rotor; the design of this 10 kW generator is presented. In the other variant, the rotor is powered by a battery, which also serves to accumulate electricity. It is concluded that the development of wind power in the direction of the DFIA is promising. On the basis of the proposed concept, a family of wind power plants can be built with power ratings from 10 kW to 6 MW. The DFIA can operate in standalone mode or in conjunction with the electrical grid. The design will be of the same type across the whole range of wind turbines; the DFIA will differ only in size.
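
    For background on why a doubly-fed machine addresses the variable-speed problem, here is a minimal sketch of the standard frequency relation for such machines, f_stator = p·n/60 + f_rotor (general machine theory, not taken from the chapter itself; the function name and parameter values are illustrative):

```python
def rotor_excitation_hz(shaft_rpm: float, pole_pairs: int, grid_hz: float = 50.0) -> float:
    """Rotor excitation frequency needed to hold the stator at grid frequency.

    Standard doubly-fed machine relation: f_stator = p * n / 60 + f_rotor,
    so f_rotor = f_stator - p * n / 60.  A negative result means the rotor
    must be excited with reversed phase sequence (supersynchronous operation).
    """
    return grid_hz - pole_pairs * shaft_rpm / 60.0

# Example: a machine with two pole pairs on a 50 Hz grid.
for rpm in (1200, 1500, 1800):  # sub-, exactly-, and super-synchronous speeds
    print(rpm, "rpm ->", rotor_excitation_hz(rpm, pole_pairs=2), "Hz")
```

    The rotor-side converter only has to supply this slip frequency, which is why the converter can be rated well below the full machine power.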

    Human Resource and Employment Practices in Telecommunications Services, 1980-1998

    [Excerpt] In the academic literature on manufacturing, much research and debate have focused on whether firms are adopting some form of “high-performance” or “high-involvement” work organization based on such practices as employee participation, teams, and increased discretion, skills, and training for frontline workers (Ichniowski et al., 1996; Kochan and Osterman, 1994; MacDuffie, 1995). Whereas many firms in the telecommunications industry flirted with these ideas in the 1980s, they did not prove to be a lasting source of inspiration for the redesign of work and employment practices. Rather, work restructuring in telecommunications services has been driven by the ability of firms to leverage network and information technologies to reduce labor costs and create customer segmentation strategies. “Good jobs” versus “bad jobs,” or higher versus lower wage jobs, do not vary according to whether firms adopt a high-involvement model. They vary along two other dimensions: (1) within firms and occupations, by the value-added of the customer segment that an employee group serves; and (2) across firms, by union and nonunion status. We believe that this customer segmentation strategy is becoming a more general model for employment practices in large-scale service operations; telecommunications services firms may be somewhat more advanced than other service firms in adopting this strategy because of certain unique industry characteristics. The scale economies of network technology are such that once a company builds the network infrastructure to a customer’s specifications, the cost of additional services is essentially zero. As a result, and notwithstanding technological uncertainty, all of the industry’s major players are attempting to take advantage of system economies inherent in the nature of the product market and technology to provide customized packages of multimedia products to identified market segments. They have organized into market-driven business units providing differentiated services to large businesses and institutions, small businesses, and residential customers. They have used information technologies and process reengineering to customize specific services to different segments according to customer needs and ability to pay. Variation in work and employment practices, or labor market segmentation, follows product market segmentation. As a result, much of the variation in employment practices in this industry is within firms and within occupations according to market segment rather than across firms. In addition, despite market deregulation beginning in 1984 and opportunities for new entrants, a tightly led oligopoly structure is replacing the regulated Bell System monopoly. Former Bell System companies, the giants of the regulated period, continue to dominate market share in the post-1984 period. Older players and new entrants alike are merging and consolidating in order to have access to multimedia markets. What is striking in this industry, therefore, is the relative lack of variation in management and employment practices across firms after more than a decade of experience with deregulation. We attribute this lack of variation to three major sources: (1) technological advances and network economics provide incentives for mergers, organizational consolidation, and, as indicated above, similar business strategies; (2) the former Bell System companies have deep institutional ties, and they continue to benchmark against and imitate each other, so that ideas about restructuring have diffused quickly among them; and (3) despite overall deunionization in the industry, they continue to have high unionization rates, and de facto pattern bargaining within the Bell System has remained quite strong. Therefore, similar employment practices based on inherited collective bargaining agreements continue to exist across former Bell System firms.

    Non-Standard Sound Synthesis with Dynamic Models

    This Thesis proposes three main objectives: (i) to provide the concept of a new generalized non-standard synthesis model that offers a framework for incorporating other non-standard synthesis approaches; (ii) to explore dynamic sound modeling through the application of new non-standard synthesis techniques and procedures; and (iii) to experiment with dynamic sound synthesis for the creation of novel sound objects. To achieve these objectives, this Thesis introduces a new paradigm for non-standard synthesis based on the algorithmic assemblage of minute wave segments into sound waveforms. This paradigm is called Extended Waveform Segment Synthesis (EWSS) and incorporates a hierarchy of algorithmic models for the generation of microsound structures. The concepts of EWSS are illustrated with the development and presentation of a novel non-standard synthesis system, Dynamic Waveform Segment Synthesis (DWSS). DWSS features and combines a variety of algorithmic models for direct synthesis generation: list generation and permutation, tendency masks, trigonometric functions, stochastic functions, chaotic functions, and grammars. The core mechanism of DWSS is based on an extended application of Cellular Automata. The synthetic capabilities of DWSS are explored in a series of Case Studies in which a number of sound objects were generated, revealing (i) the capability of the system to generate sound morphologies belonging to other non-standard synthesis approaches and (ii) its capability to generate novel sound objects with dynamic morphologies. The introduction of EWSS and DWSS is preceded by an extensive and critical overview of the concepts of microsound synthesis, algorithmic composition, the two cultures of computer music, the heretical approach in composition, non-standard synthesis, and sonic emergence, along with a thorough examination of algorithmic models and their application in sound synthesis and electroacoustic composition. This Thesis also proposes (i) a new definition of “algorithmic composition”, (ii) the term “totalistic algorithmic composition”, and (iii) four discrete aspects of non-standard synthesis.
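
    To make the idea of algorithmically assembling minute wave segments concrete, here is a minimal sketch (not the thesis's actual DWSS implementation; all names and parameters are illustrative): a one-dimensional elementary cellular automaton drives the target amplitudes of short linear segments, which are concatenated into a waveform.

```python
import numpy as np

def ca_step(cells: np.ndarray, rule: int = 110) -> np.ndarray:
    """One step of an elementary cellular automaton (wraparound edges)."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    idx = (left << 2) | (cells << 1) | right   # neighborhood code 0..7
    return (rule >> idx) & 1                   # look up the rule's output bit

def synthesize(n_segments=200, cells=16, seg_len=64):
    """Concatenate short linear wave segments whose target amplitudes
    are read off successive generations of the automaton."""
    state = np.random.default_rng(0).integers(0, 2, cells)
    out, prev = [], 0.0
    for _ in range(n_segments):
        state = ca_step(state)
        # Map the live-cell density to a signed target amplitude in [-1, 1].
        target = 2.0 * state.mean() - 1.0
        out.append(np.linspace(prev, target, seg_len, endpoint=False))
        prev = target
    return np.concatenate(out)  # raw waveform samples

wave = synthesize()
print(wave.shape, float(wave.min()), float(wave.max()))
```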

    Managing ERP Implementation Failure: A Project Management Perspective

    In recent years, rapid progress in the use of the Internet has resulted in huge losses in many organizations due to lax security. As a result, information security awareness is becoming an important issue for anyone using the Internet. To reduce losses, organizations have made information security awareness a top priority. The three main barriers to information security awareness are: (1) general security awareness, (2) employees’ computer skills, and (3) organizational budgets. Online learning appears to be a feasible alternative for providing information security awareness training and countering these three barriers. Research has identified three levels of security awareness: perception, comprehension, and projection. This paper reports on a laboratory experiment that investigates the impact of hypermedia, multimedia, and hypertext on information security awareness across the three awareness levels in an online training environment. The results indicate that: (1) learners with a better understanding at the perception and comprehension levels can improve their understanding at the projection level; (2) learners given text material perform better at the perception level; and (3) learners given multimedia material perform better at the comprehension and projection levels. The results could be used by educators and training designers to create meaningful information security awareness materials.

    OPTIMIZATION OF NONSTANDARD REASONING SERVICES

    The increasing adoption of semantic technologies and the corresponding increasing complexity of application requirements are motivating extensions to the standard reasoning paradigms and services supported by such technologies. This thesis focuses on two such extensions: nonmonotonic reasoning and inference-proof access control. Expressing knowledge via general rules that admit exceptions is an approach that has been commonly adopted for centuries in areas such as law and science, and more recently in object-oriented programming and computer security. The experiences in developing complex biomedical knowledge bases reported in the literature show that direct support for defeasible properties and exceptions would be of great help. On the other hand, there is ample evidence of the need for knowledge confidentiality measures. Ontology languages and Linked Open Data are increasingly being used to encode the private knowledge of companies and public organizations. Semantic Web techniques facilitate merging different sources of knowledge and extracting implicit information, thereby putting at risk the security and privacy of individuals. But the same reasoning capabilities can be exploited to protect the confidentiality of knowledge. Both nonmonotonic inference and secure knowledge base access rely on nonstandard reasoning procedures. The design and realization of these algorithms in a scalable way (appropriate to the ever-increasing size of ontologies and knowledge bases) is carried out by means of a diversified range of optimization techniques, such as appropriate module extraction and incremental reasoning. Extensive experimental evaluation shows the efficiency of the developed optimization techniques: (i) for the first time, performance compatible with real-time reasoning is obtained for large nonmonotonic ontologies, while (ii) secure ontology access control proves to be already compatible with practical use in the e-health application scenario.
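
    To make “general rules that admit exceptions” concrete, here is a minimal sketch of defeasible inference in the classic birds-and-penguins style (a textbook pattern, not the thesis's optimized procedures; all names are illustrative):

```python
# Minimal sketch of defeasible ("rules with exceptions") reasoning:
# a rule attached to a more specific class overrides a general default.

defaults = {
    "bird": [("flies", True)],       # birds normally fly
    "penguin": [("flies", False)],   # ...but penguins do not
}
hierarchy = {"penguin": "bird"}      # penguin is-a bird

def conclude(kind: str, prop: str):
    """Walk from the most specific class upward; the first rule found wins."""
    cls = kind
    while cls is not None:
        for p, v in defaults.get(cls, []):
            if p == prop:
                return v
        cls = hierarchy.get(cls)
    return None  # no applicable rule

print(conclude("bird", "flies"))     # True
print(conclude("penguin", "flies"))  # False: the exception defeats the default
```

    The inference is nonmonotonic: adding the fact that an individual is a penguin retracts the conclusion that it flies, which is exactly what standard monotonic ontology reasoning cannot express directly.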

    Learning and understanding in abstract algebra

    Students' learning and understanding in an undergraduate abstract algebra class were described using Tall and Vinner's notion of a concept image, which is the entire cognitive structure associated with a concept, including examples, nonexamples, definitions, representations, and results. Prominent features and components of students' concept images were identified for concepts of elementary group theory, including group, subgroup, isomorphism, coset, and quotient group. Analysis of interviews and written work from five students provided insight into their concept images, revealing ways they understood the concepts. Because many issues were related to students' uses of language and notation, the analysis was essentially semiotic, using the linguistic, notational, and representational distinctions that the students made to infer their conceptual understandings and the distinctions they were and were not making among concepts. Attempting to explain and synthesize the results of the analysis became a process of theory generation, from which two themes emerged: making distinctions and managing abstraction. The students often made nonstandard linguistic and notational distinctions. For example, some students used the term coset to describe not only individual cosets but also the set of all cosets. This kind of understanding was characterized as being immersed in the process of generating all of the cosets of a subgroup, a characterization that described and explained several instances of the phenomenon of failing to distinguish between a set and its elements. The students managed their relationships with abstract ideas through metaphor, process and object conceptions, and proficiency with concepts, examples, and representations. For example, some students understood a particular group by relying upon its operation table, which they sometimes took to be the group itself rather than a representation. The operation table supported an object conception even when a student had a fragile understanding of the processes used in forming the group. Making distinctions and managing abstraction are elaborated as fundamental characteristics of mathematical activity. Mathematics thereby becomes a dialectic between precision and abstraction, between logic and intuition, which has important implications for teaching, teacher education, and research.
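
    As a concrete illustration of the set-versus-element distinction discussed above (a standard textbook example, not one drawn from the study's interviews), consider the cosets of the subgroup H = {0, 3} in Z_6:

```latex
% Cosets of the subgroup H = {0,3} in Z_6 under addition mod 6:
%   0 + H = {0,3},   1 + H = {1,4},   2 + H = {2,5}.
% "Coset" names one of these three sets; the quotient group is the
% three-element group whose elements are the cosets themselves:
\[
  \mathbb{Z}_6 / H \;=\; \bigl\{\, \{0,3\},\ \{1,4\},\ \{2,5\} \,\bigr\},
  \qquad H = \{0,3\}.
\]
% Using "coset" for the whole collection Z_6/H collapses exactly the
% distinction between the elements of Z_6/H and Z_6/H itself.
```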

    Expert Cognition During Proof Construction Using the Principle of Mathematical Induction

    The purpose of this study was to identify and analyze the observable cognitive processes of experts in mathematics while they work on proof-construction activities using the Principle of Mathematical Induction (PMI). Graduate student participants in the study worked on “nonstandard” mathematical induction problems that did not involve algebraic identities or finite sums. This study identified some of the problem-solving strategies used by the participants during a Cognitive Task Analysis (Feldon, 2007), as well as epistemological obstacles they encountered while working with PMI. After the Cognitive Task Analysis, the graduate students participated in two semi-structured interviews. These interviews explored graduate students' beliefs about proofs and proof techniques and situated their use of PMI within the contexts of these beliefs. Two primary theoretical frameworks were used to analyze participant cognition and the qualitative data collected. First, the study used Action, Process, Object, Schema (APOS) Theory (Asiala et al., 1996) to study and analyze the participants' conceptual understanding of the technique of mathematical induction and to test a preliminary genetic decomposition adapted from previous studies on PMI (Dubinsky & Lewin, 1996, 1999; Garcia-Martinez & Parraguez, 2017). Second, an Expert Knowledge Framework (Bransford, Brown, & Cocking, 1999; Shepherd & Sande, 2014) was used to classify the participants' responses to the semi-structured interview questions according to several characteristics of expertise. The study identified several results which (1) give insight into the mental constructions used by mathematical experts when solving problems involving PMI; (2) offer some implications for improving the instruction of PMI in introductory proofs classrooms; and (3) provide results that allow for future comparison between expert and novice mathematical learners.
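
    For reference, the principle the participants worked with, stated in its standard form (the study's actual nonstandard problems are not reproduced in the abstract; the tromino example in the comment is a classical illustration, not one of the study's tasks):

```latex
% Principle of Mathematical Induction, standard form:
\[
  \Bigl( P(1) \;\wedge\; \forall k \ge 1\,\bigl(P(k) \Rightarrow P(k+1)\bigr) \Bigr)
  \;\Longrightarrow\; \forall n \ge 1\; P(n).
\]
% A typical "nonstandard" target in the sense above (no algebraic identity,
% no finite sum) is a structural claim such as: every 2^n x 2^n board with
% one cell removed can be tiled by L-shaped trominoes.
```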