Developing an inter-enterprise alignment maturity model: research challenges and solutions
Business-IT alignment is pervasive today, as organizations strive to achieve competitive advantage. As in other areas, e.g., software development, maintenance, and IT services, there are maturity models to assess such alignment. Those models, however, do not specifically address the aspects needed to achieve alignment between business and IT in inter-enterprise settings. In this paper, we present the challenges we face in the development of an inter-enterprise alignment maturity model, as well as the current solutions to counter these problems.
Assessing the overall perceived quality of the undergraduate students
Purpose
- The paper has a twofold aim: (i) defining and validating a scale to assess the quality
of the university as experienced by students and (ii) analyzing the role of the aforementioned
dimensions and their impact on students' satisfaction.
Methodology/Approach
- A survey of 2,557 undergraduate students who finished their degrees
in 2013 at universities located in the region of Catalonia has been analyzed using Structural
Equation Modeling (SEM). An exploratory analysis suggests the final dimensions, which were
confirmed in a confirmatory analysis. The psychometric characteristics of the scale are provided
to show the reliability and validity of the constructs.
An additional model (also using SEM) assesses the impact of these dimensions on overall satisfaction.
Findings
- Quality is a multifactor construct composed of: (i) 'syllabus', which refers to
the quality of the learning methods and the coordination efforts throughout the whole study period;
(ii) 'skills development', referring to the skills that students might acquire along their studies;
and (iii) 'services and facilities' of the university.
Moreover, the first and third factors act as 'enablers' for the second. Nevertheless,
only the 'syllabus' dimension significantly affects students' satisfaction, whereas 'services and
facilities' do not have a significant role, although they are necessary in order to provide a good
service.
Research Limitation/implication
- Although the sample is large enough to draw robust
results, it is limited to Catalonia. The paper provides recommendations for university managers
and public administration authorities on how to allocate the available resources.
Originality/Value of paper
- In an era of global competition, universities are trying to adapt
to new requirements by expanding their academic offerings, introducing innovative teaching
methods, providing teaching resources to lecturers, and updating the general services of the
university, among others. All these services are considered when students evaluate their
experience at the university. The paper contributes an assessment scale for the holistic
service provided by the university over the period the student spends there. These findings can be applied to help define attractive academic programs and provide useful insights
into how the supporting facilities should be designed to allow students to take full advantage of their
learning process at universities.
Detecting Functional Requirements Inconsistencies within Multi-teams Projects Framed into a Model-based Web Methodology
One of the most essential processes within the software project life cycle is the Requirements
Engineering Process (REP), because it allows the software product requirements to be specified. This specification
should be as consistent as possible, because it allows the effort required to obtain the final
product to be estimated in a suitable manner. REP is complex in itself, but this complexity is greatly increased in big, distributed,
and heterogeneous projects with multiple analyst teams and high integration between functional modules.
This paper presents an approach for the systematic conciliation of functional requirements in big projects
following a web model-based approach, and shows how this approach may be implemented in the context of
NDT (Navigational Development Techniques), a web methodology. This paper also describes the empirical
evaluation in the CALIPSOneo project, analyzing the improvements obtained with our approach.
Ministerio de Economía y Competitividad TIN2013-46928-C3-3-R; Ministerio de Economía y Competitividad TIN2015-71938-RED
Moving from Data-Constrained to Data-Enabled Research: Experiences and Challenges in Collecting, Validating and Analyzing Large-Scale e-Commerce Data
Widespread e-commerce activity on the Internet has led to new opportunities
to collect vast amounts of micro-level market and nonmarket data. In this paper
we share our experiences in collecting, validating, storing and analyzing large
Internet-based data sets in the area of online auctions, music file sharing and
online retailer pricing. We demonstrate how such data can advance knowledge by
facilitating sharper and more extensive tests of existing theories and by
offering observational underpinnings for the development of new theories. Just
as experimental economics pushed the frontiers of economic thought by enabling
the testing of numerous theories of economic behavior in the environment of a
controlled laboratory, we believe that observing, often over extended periods
of time, real-world agents participating in market and nonmarket activity on
the Internet can lead us to develop and test a variety of new theories.
Internet data gathering is not controlled experimentation. We cannot randomly
assign participants to treatments or determine event orderings. Internet data
gathering does, however, offer potentially large data sets with repeated observation of
individual choices and actions. In addition, automated data collection holds
promise for greatly reduced cost per observation. Our methods rely on
technological advances in automated data-collection agents. Significant
challenges remain in developing appropriate sampling techniques, integrating
data from heterogeneous sources in a variety of formats, constructing
generalizable processes, and understanding legal constraints. Despite these
challenges, the early evidence from those who have harvested and analyzed large
amounts of e-commerce data points toward a significant leap in our ability to
understand the functioning of electronic commerce.Comment: Published at http://dx.doi.org/10.1214/088342306000000231 in the
Statistical Science (http://www.imstat.org/sts/) by the Institute of
Mathematical Statistics (http://www.imstat.org
The animal spirits hypothesis and the Benhabib-Farmer condition for indeterminacy
This paper provides a self-contained review of the introduction of the animal spirits hypothesis into the infinite-horizon optimal growth model. The analysis begins with an economic discussion of Pontryagin's maximum principle. Thereafter, I develop a version of the increasing-returns Benhabib-Farmer model, showing the possible sub-optimality of the central planner solution and deriving the bifurcation condition for indeterminacy. Moreover, I give some insights on how to model intrinsic and extrinsic uncertainty. Finally, analysing the equilibrium condition of the labour market, I provide an intuitive rationale for the mechanism that in this model might lead prophecies to be self-fulfilling.
Keywords: maximum problems in continuous time; indeterminate equilibrium paths; self-fulfilling prophecies.
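For context, a commonly stated version of the Benhabib-Farmer indeterminacy condition can be sketched as follows (the notation is mine, not necessarily the paper's): with aggregate production $Y = K^{\alpha} N^{\beta}$ exhibiting increasing returns ($\alpha + \beta > 1$) and separable preferences $u(C,N) = \log C - \frac{N^{1+\chi}}{1+\chi}$, the labor demand and supply schedules in $(\log N, \log w)$ space have slopes $\beta - 1$ and $\chi$ respectively, and indeterminacy requires demand to be upward-sloping and steeper than supply:

```latex
% Labor market schedules in (\log N, \log w) space:
%   demand:  \log w = \mathrm{const} + (\beta - 1)\log N
%   supply:  \log w = \log C + \chi \log N
% Benhabib-Farmer condition for indeterminacy (sketch):
\beta - 1 > \chi \geq 0
```

Intuitively, an upward-sloping labor demand curve that crosses labor supply from below lets optimistic beliefs about returns become self-fulfilling, which is the mechanism the abstract's final sentence refers to.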
Shape Expressions Schemas
We present Shape Expressions (ShEx), an expressive schema language for RDF
designed to provide a high-level, user-friendly syntax with intuitive
semantics. ShEx allows one to describe the vocabulary and the structure of an RDF
graph, and to constrain the allowed values for the properties of a node. It
includes an algebraic grouping operator, a choice operator, cardinality
constraints on the number of allowed occurrences of a property, and negation.
We define the semantics of the language and illustrate it with examples. We
then present a validation algorithm that, given a node in an RDF graph and a
constraint defined by the ShEx schema, checks whether the node
satisfies that constraint. The algorithm outputs a proof that contains
trivially verifiable associations of nodes with the constraints that they
satisfy. This structure can be used for complex post-processing tasks, such as
transforming the RDF graph into other graph or tree structures, verifying more
complex constraints, or debugging (w.r.t. the schema). We also show the
inherent difficulty of error identification in ShEx.
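The core of such validation can be illustrated with a deliberately simplified sketch (this is not the paper's algorithm or real ShEx syntax; the graph encoding and shape representation are hypothetical): check a node's outgoing properties against per-property cardinality bounds and emit a verifiable proof.

```python
# Simplified illustration of shape-style validation: an RDF-like graph
# as (subject, predicate, object) triples, and a "shape" giving
# (min, max) allowed occurrences per property.
from collections import Counter

graph = [
    ("alice", "name", "Alice"),
    ("alice", "email", "alice@example.org"),
    ("alice", "email", "a@example.org"),
]

person_shape = {"name": (1, 1), "email": (1, 2)}

def satisfies(node, shape, triples):
    """Return (ok, proof); proof lists (property, count, (min, max))."""
    counts = Counter(p for s, p, o in triples if s == node)
    proof = []
    for prop, (lo, hi) in shape.items():
        n = counts.get(prop, 0)
        if not (lo <= n <= hi):
            return False, proof
        proof.append((prop, n, (lo, hi)))
    return True, proof

ok, proof = satisfies("alice", person_shape, graph)
```

Here `ok` is true and `proof` records, for each property, the observed count against its bounds: exactly the kind of trivially verifiable association the abstract describes, which downstream tasks can check without rerunning validation. Real ShEx adds grouping, choice, negation, and value constraints on top of this.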