1,669 research outputs found
Low-Skill Workers' Access to Quality Green Jobs
Explores the potential for the green jobs market to help low-skill workers gain needed skills and higher wages. Offers recommendations for improving training efforts, including curricular reforms and financial support, and examples of innovative programs
The impact of personality and competence of leaders on business success
Purpose: This article aims to identify the personality and competence traits of leaders that determine success for Polish small and medium-sized enterprises. Design/Methodology/Approach: Empirical data were selected from an experimental survey conducted by Statistics Poland from December 2017 to January 2018 as part of the Determinants of Entrepreneurship Developments in the SMEs Sector project. We used 20,959 surveys of enterprises in which the leader (an owner or a manager) played a dominant role. To test the dependence between measures of success (selected aspects of changes in enterprises) and assessments of the importance of leaders' personality and competence features, we built appropriate contingency tables and used the Pearson chi-square independence test. We also applied logistic regression and calculated the appropriate odds ratios. Findings: When estimating the logistic regression parameters, we obtained a model with five statistically significant variables: belief in the possibility of achieving set goals; high aspirations and a constant search for new challenges; passion and commitment; fluency in foreign languages; and knowledge of the company's market. Practical implications: The results of this research suggest that enterprises need pro-development activities in the field of managerial competencies. Peer-reviewed.
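The contingency-table analysis named in the abstract can be sketched as follows. This is a minimal illustration of a Pearson chi-square independence test and an odds ratio on a 2x2 table; the counts are invented for illustration and do not come from the survey.

```python
# Sketch of the abstract's method: chi-square independence test and
# odds ratio for a 2x2 contingency table. Counts below are synthetic.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

def odds_ratio(table):
    """Odds ratio (a*d)/(b*c) for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    return (a * d) / (b * c)

# Rows: leader rates "passion and commitment" important / not important.
# Columns: enterprise reported success / no success (invented counts).
table = [[320, 180], [140, 360]]
print(round(chi_square_2x2(table), 2))
print(round(odds_ratio(table), 2))
```

A large statistic relative to the chi-square distribution with one degree of freedom would reject independence; the odds ratio then quantifies how much more likely success is among leaders rating the trait important.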
Process modelling for information system description
My previous experiences and some preliminary studies of the relevant technical literature allowed me to identify several reasons for which the current state of the database theory seemed unsatisfactory and required further research. These reasons included: insufficient formalism of data semantics, misinterpretation of NULL values, inconsistencies in the concept of the universal relation, certain ambiguities in domain definition, and inadequate representation of facts and constraints.
The 'sequentiality' principle commonly accepted in most current system design methodologies imposes strong restrictions on the processes that a target system is composed of: they must be algorithmic, must not be interrupted during execution, and may not have any parallel subprocesses as their own components. This principle can no longer be considered acceptable. In many existing systems, multiple processors perform concurrent actions that can interact with each other.
The overconcentration on data models is another disadvantage of the majority of system design methods. Many techniques pay little (or no) attention to process definition. They assume that the model of the Real World consists only of data elements and relationships among them. However, the way the processes are related to each other (in terms of precedence relation) may have considerable impact on the data model.
It has been assumed that the Real World is discretisable, i.e. it may be modelled by a structure of objects. The word object is to be interpreted in a wide sense so it can mean anything within the boundaries of this part of the Real World that is to be represented in the target system. An object may then denote a fact or a physical or abstract entity, or relationships between any of these, or relationships between relationships, or even a still more complex structure.
The fundamental hypothesis was formulated stating the necessity of considering the three aspects of modelling - syntax, semantics and behaviour, and these to be considered integrally.
A syntactic representation of an object within a target system is called a construct. A construct which cannot be decomposed further (either syntactically or semantically) is defined to be an atom. Any construct is a result of the following production rules: construct ::= atom | function construct; function ::= atom | construct. This syntax forms a sentential notation.
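The production rules above can be modelled as a small recursive data structure. The names here (Atom, Apply) are illustrative choices, not from the thesis; juxtaposition "function construct" is represented as an application node.

```python
# Sketch of the sentential notation:
#   construct ::= atom | function construct
#   function  ::= atom | construct
# Atom and Apply are illustrative names for the two grammar cases.

class Atom:
    """A construct that cannot be decomposed further."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

class Apply:
    """A construct built by applying a function to a construct."""
    def __init__(self, function, argument):
        self.function = function    # an atom or a construct
        self.argument = argument    # a construct
    def __repr__(self):
        return f"({self.function} {self.argument})"

# Since a construct may itself serve as a function, nesting is unrestricted:
f = Atom("f")
x = Atom("x")
expr = Apply(Apply(f, x), x)    # the construct (f x) applied to x
print(expr)
```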
The sentential notation allows for extensive use of denotational semantics. The meaning of a construct may be defined as a function mapping from a set of syntactic constructs to the appropriate semantic domains; these in turn appear to be sets of functions since a construct may have a meaning in more than one class of objects. Because of its functional form the meaning of a construct may be derived from the meaning of its components.
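A minimal sketch of such a compositional meaning function, assuming a toy semantic domain of integers and functions on integers; the atom names and their denotations are invented for illustration:

```python
# Sketch of denotational semantics for the sentential notation: the
# meaning of a compound construct is derived from the meanings of its
# components, as the text describes. A construct is either an atom name
# (a string) or a pair (function, argument). The domain is illustrative.

SEMANTICS = {
    "two": 2,
    "double": lambda n: 2 * n,
    "succ": lambda n: n + 1,
}

def meaning(construct):
    """Map a syntactic construct to its value in the semantic domain."""
    if isinstance(construct, str):        # an atom: look up its denotation
        return SEMANTICS[construct]
    function, argument = construct        # a compound construct
    return meaning(function)(meaning(argument))

print(meaning(("double", ("succ", "two"))))   # double(succ(two))
```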
The issue of system behaviour needed further investigation and a revision of the conventional model of computing. The sequentiality principle has been rejected, concurrency being regarded as a natural property of processes. A postulate has been formulated that any potential parallelism should be constructively used for data/process design and that the process structure would affect the data model. An important distinction has been made between a process declaration - considered as a form of data or an abstraction of knowledge - and a process application that corresponds to a physical action performed by a processor, according to a specific process declaration. In principle, a process may be applied to any construct - including its own representation - and it is a matter of semantics to state whether or not it is sensible to do so. The process application mechanism has been explained in terms of formal systems theory by introducing an abstract machine with two input and two output types of channels.
The system behaviour has been described by defining a process calculus. It is based on logical and functional properties of a discrete time model and provides a means to handle expressions composed of process-variables connected by logical functors. Basic terms of the calculus are: constructs and operations (equivalence, approximation, precedence, incidence, free-parallelism, strict-parallelism). Certain properties of these operations (e.g. associativity or transitivity) allow for handling large expressions. Rules for decomposing/integrating process applications, analogous in some sense to those forming the basis of structured programming, have been derived.
3D Printing Applications within Spectrophotometry
Ultraviolet-visible spectroscopy is a tool used throughout the field of chemistry and in chemical labs across the world. Spectrophotometers are a core technology in analytical chemistry and are used to obtain accurate data on solution concentration. Unfortunately, spectrophotometers can be difficult to obtain due to their high cost and low availability outside of a laboratory; this is especially true in high schools and lower grades, where the price of a spectrophotometer can make purchasing one unreasonable. There are ways to obtain cheaper spectrophotometers, but these can have a high initial cost or low overall quality. This paper discusses and presents a 3D-printed spectrophotometer that is inexpensive to build but of sufficient quality to allow for accurate measurements of an analyte. The design is relatively simple and allows for one-time placement of both a blank and an analyte cuvette, which makes for a more convenient and time-efficient measurement.
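The concentration measurement such an instrument performs rests on the Beer-Lambert law, which can be sketched as follows. The light intensities are invented, and the molar absorptivity is an example value (approximately that of NADH at 340 nm), not a parameter of the printed device.

```python
# Sketch of how a spectrophotometer reading becomes a concentration via
# the Beer-Lambert law, A = epsilon * l * c. All values are examples.
import math

def absorbance(i_transmitted, i_incident):
    """A = -log10(I / I0) from measured light intensities."""
    return -math.log10(i_transmitted / i_incident)

def concentration(a, epsilon, path_length_cm=1.0):
    """Solve A = epsilon * l * c for c (mol/L)."""
    return a / (epsilon * path_length_cm)

a = absorbance(25.0, 100.0)       # blank reads 100 units, analyte reads 25
print(round(a, 3))                # absorbance of the sample
print(f"{concentration(a, epsilon=6220):.2e}")   # mol/L, 1 cm cuvette
```

The blank cuvette supplies the incident intensity I0, which is why the design's one-time placement of both blank and analyte cuvettes streamlines the measurement.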
Methods of evaluation of autonomic nervous system function
Disturbances of the autonomic nervous system play a crucial role in the pathogenesis and clinical course of many diseases. Recently, rapid development has occurred in the clinical assessment of autonomic function. Various procedures have been described as diagnostic tools to monitor autonomic dysfunction. Some of them are mostly used for research purposes. Many, however, have found their place in routine clinical evaluation. Our paper presents selected methods of assessment of the autonomic nervous system with particular emphasis on those that are useful in diagnosis and treatment of diseases of the cardiovascular system. We discuss multiple tests based on cardiovascular reflexes, methods of studying heart rate variability as well as direct catecholamine measurements. Moreover, we outline tests of sudomotor function and microneurography.
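Two standard time-domain heart rate variability indices of the kind the paper discusses, SDNN and RMSSD, can be sketched as follows; the RR-interval values are synthetic and chosen only to illustrate the formulas.

```python
# Sketch of two time-domain HRV indices computed from a short series of
# RR (inter-beat) intervals in milliseconds. Values are synthetic.
import math

def sdnn(rr):
    """Standard deviation of all RR (NN) intervals."""
    mean = sum(rr) / len(rr)
    return math.sqrt(sum((x - mean) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr_intervals = [812, 790, 830, 805, 820, 795]   # milliseconds
print(round(sdnn(rr_intervals), 1))
print(round(rmssd(rr_intervals), 1))
```

Clinically meaningful use requires much longer recordings (typically 5 minutes to 24 hours); this sketch only shows the arithmetic behind the indices.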
COVID-19 during pregnancy, delivery and postpartum period based on EBM
The pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has become a global health crisis. Since the first case of diagnosed COVID-19 pneumonia was reported in Wuhan, Hubei Province, China, in December 2019, the infection has spread rapidly all over the world. The knowledge gained from previous human coronavirus infection outbreaks suggests that pregnant women and their foetuses represent a high-risk population during infectious disease epidemics. Moreover, pregnancy, due to the physiological changes involving the immune and cardiopulmonary systems, is a state predisposing women to respiratory complications of viral infection. The number of publications regarding the course of COVID-19 infection in pregnant women is constantly increasing; however, the available data remain limited and many questions remain unanswered. The aim of this review was to summarize the literature data and current recommendations regarding pregnancy care, delivery and the postpartum period. An extremely important issue is the need to register all cases of COVID-19-affected women and the course of these pregnancies with local, regional, or international registries, which will be helpful to answer many clinical and scientific questions and to create guidelines ensuring an adequate level of care for women affected by COVID-19 infection during pregnancy, delivery and the postpartum period, as well as their newborns.
Going Deeper than Supervised Discretisation in Processing of Stylometric Features
Rough set theory is employed in cases where data are incomplete and inconsistent and an approximation of concepts is needed. The classical approach works for discrete data and allows only nominal classification. To induce the best rules, access to all available information is advantageous, which can be endangered if discretisation is a necessary step in the data preparation stage. Discretisation, even executed with taking into account class labels of instances, brings some information loss. The research methodology illustrated in this paper is dedicated to extended transformations of continuous input features into categorical, with the goal of enhancing the performance of rule-based classifiers, constructed with rough set data mining. The experiments were carried out in the stylometry domain, with its key task of authorship attribution. The obtained results indicate that supporting supervised discretisation with elements of unsupervised transformations can lead to enhanced predictions, which shows the merits of the proposed research framework.
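A minimal sketch of the supervised (class-aware) discretisation the text contrasts with unsupervised transformations: choosing a single cut point that minimises the weighted class entropy of the resulting bins. The feature, values and author labels are synthetic, invented only to illustrate the mechanism.

```python
# Sketch of supervised discretisation: pick the cut point on a continuous
# feature that minimises the weighted class entropy of the two bins.
# Feature and labels below are synthetic stylometric stand-ins.
import math

def entropy(labels):
    """Shannon entropy of a class-label multiset, in bits."""
    total = len(labels)
    ent = 0.0
    for c in set(labels):
        p = labels.count(c) / total
        ent -= p * math.log2(p)
    return ent

def best_cut(values, labels):
    """Cut point minimising weighted class entropy of the two bins."""
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        w = (len(left) * entropy(left)
             + len(right) * entropy(right)) / len(pairs)
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2   # midpoint candidate
        if w < best[0]:
            best = (w, cut)
    return best[1]

# Feature: average sentence length; labels: author A / author B (synthetic)
values = [12.1, 13.0, 13.4, 18.2, 19.0, 20.5]
labels = ["A", "A", "A", "B", "B", "B"]
print(best_cut(values, labels))
```

Even a class-pure cut like this one discards the within-bin ordering of values, which is the kind of information loss the paper's extended transformations aim to mitigate.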