1,022 research outputs found

    Intergenerational support and reproduction of gender inequalities: a case study from Eastern and Western Germany

    Full text link
    Social support is often described as an exclusively positive factor, and its absence is said to have negative consequences for individuals. This article shows that the supply of, and dependence on, intergenerational social support can itself have negative consequences, contributing to persisting unequal gender roles and a gendered division of labor in relationships. Based on qualitative interviews conducted in eastern and western Germany with young adults (28-30 years old) and their parents, we hypothesize that the greater supply of intergenerational support from grandparents to their children and grandchildren, and a perceived dependence on these transfers, is largely responsible for impeding the modernization of traditional role models that assign women the role of mother and housewife. The lower availability of, and dependence on, this kind of social support in eastern Germany, by contrast, contributes to a more flexible allocation of roles within relationships.

    Constraints for behavioural specifications

    Get PDF
    Behavioural specifications with constraints for the incremental development of algebraic specifications are presented. The behavioural constraints correspond to the completely defined subparts of a given incomplete behavioural specification. Moreover, the local observability criteria used within a behavioural constraint need not coincide with the global criteria used in the behavioural specification. This is essential because, otherwise, some constraints could involve only non-observable sorts and therefore have trivial semantics. Finally, extension and completion operations for refining specifications are defined. The extension operations correspond to horizontal refinements: they build larger specifications on top of existing ones in a conservative way. The completion operations correspond to vertical refinements: they add detail to an incomplete behavioural specification and thereby restrict the class of models.
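
    The notions of observability and behavioural equivalence mentioned above can be pictured informally outside any particular specification language. The following Python fragment is a minimal sketch only, an analogy rather than the paper's formal framework: two implementations of a set-of-integers specification with different, non-observable internal representations count as behaviourally equivalent when every experiment followed by every observable probe (contains, size) agrees. All names and the experiment set are hypothetical.

        # Illustrative sketch only: observational equivalence for a simple
        # "set of ints" behavioural specification. The observers (the only
        # operations a client may use to distinguish models) are `contains`
        # and `size`; the internal representation is a non-observable sort.

        class ListSet:
            """Model 1: keeps insertion order, may hold duplicates internally."""
            def __init__(self):
                self._items = []
            def add(self, x):
                self._items.append(x)
            def contains(self, x):
                return x in self._items
            def size(self):
                return len(set(self._items))

        class SortedSet:
            """Model 2: different internal representation (sorted, duplicate-free)."""
            def __init__(self):
                self._items = []
            def add(self, x):
                if x not in self._items:
                    self._items.append(x)
                    self._items.sort()
            def contains(self, x):
                return x in self._items
            def size(self):
                return len(self._items)

        def observationally_equal(make_a, make_b, experiments, probes):
            """Two models are behaviourally equivalent (w.r.t. the chosen
            observers) if every experiment, followed by every observable
            probe, yields the same answers in both."""
            for experiment in experiments:
                a, b = make_a(), make_b()
                for x in experiment:
                    a.add(x)
                    b.add(x)
                for p in probes:
                    if a.contains(p) != b.contains(p) or a.size() != b.size():
                        return False
            return True

        experiments = [[1, 2, 2, 3], [5], []]
        probes = [1, 2, 3, 4, 5]
        print(observationally_equal(ListSet, SortedSet, experiments, probes))  # True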

    Databases in High Energy Physics: a critical review

    Get PDF
    The year 2000 is marked by a plethora of significant milestones in the history of High Energy Physics. Not only was it the true numerical end of the second millennium; this watershed year also saw the final run of CERN's Large Electron-Positron collider (LEP), the world-class machine that had been the focus of the lives of many of us for such a long time. It is also closely related to the subject of this chapter in the following respects:
    - Classified as a nuclear installation, information on the LEP machine must be retained indefinitely. This represents a challenge to the database community that is almost beyond discussion: archiving data for a relatively small number of years is indeed feasible, but retaining it for centuries, millennia or more is a very different issue.
    - There are strong scientific arguments as to why the data from the LEP machine should be retained for a short period. However, the complexity of the data itself, the associated metadata and the programs that manipulate it make even this a huge challenge.
    - The story of databases in HEP is closely linked to that of LEP itself: what were the basic requirements that were identified in the early years of LEP preparation? How well have these been satisfied? What are the remaining issues and key messages?
    - Finally, the year 2000 also marked the entry of Grid architectures onto the centre stage of HEP computing. How has the Grid affected the requirements on databases or the manner in which they are deployed? Furthermore, as the LEP tunnel and even parts of the detectors that it housed are readied for re-use for the Large Hadron Collider (LHC), how have our requirements on databases evolved at this new scale of computing?
    A number of the key players in the field of databases - as can be seen from the author list of the various publications - have since retired from the field or else this world. Given the fallibility of human memory, a record of the use of databases for physics data processing is clearly needed before memories fade completely and the story is lost forever. This account is necessarily somewhat CERN-centric, although an effort has been made to cover important developments and events elsewhere. Frequent reference is made to the Computing in High Energy Physics (CHEP) conference series, the most accessible and consistent record of this field.

    Graphical user interface tools

    Get PDF

    Big data analytics in intensive care units: challenges and applicability in an Argentinian hospital

    Get PDF
    In a typical intensive care unit of a healthcare facility, many sensors are connected to patients to measure high-frequency physiological data. Currently, measurements are only recorded from time to time, possibly every hour. Because the rest of the data is lost, many opportunities are missed to discover new patterns in vital signs that could lead to earlier detection of pathologies. Early detection gives physicians the ability to plan and begin treatments sooner, or potentially stop the progression of a condition, possibly reducing mortality and costs. The data generated by medical equipment constitute a Big Data problem with near-real-time restrictions on the processing of medical algorithms designed to predict pathologies. Such systems are known as real-time big data analytics systems. This paper analyses whether proposed system architectures can be applied in the Francisco Lopez Lima Hospital (FLLH), an Argentinian hospital with relatively tight financial constraints. Taking this limitation into account, we describe a possible architectural approach for the FLLH: a mix of a local computing system at the FLLH and a public cloud computing platform. We believe this work may be useful in promoting the research and development of such systems in intensive care units of hospitals with characteristics similar to those of the FLLH.
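
    The local-plus-cloud split described in the abstract can be pictured with a short, hedged sketch. Everything below is an assumption made for illustration: the sampling rate, window size, field names and the upload_to_cloud stub are hypothetical, not the FLLH design. The idea shown is that a cheap on-premise node buffers high-frequency vital-sign samples and reduces them to per-window aggregates, and only those small aggregates are handed off to a public cloud for the heavier predictive analytics.

        # Minimal sketch of a local-node / public-cloud split, under assumed names.
        import random
        import statistics
        import time

        WINDOW_SECONDS = 60          # hypothetical aggregation window
        SAMPLE_HZ = 4                # hypothetical sensor sampling rate

        def read_heart_rate_sample():
            """Placeholder for a bedside-monitor reading (simulated here)."""
            return random.gauss(80, 5)

        def upload_to_cloud(aggregate):
            """Placeholder for the public-cloud hand-off (e.g. an HTTPS POST);
            stubbed so the sketch stays self-contained."""
            print("would upload:", aggregate)

        def run_local_node(windows=2):
            for _ in range(windows):
                samples = [read_heart_rate_sample()
                           for _ in range(WINDOW_SECONDS * SAMPLE_HZ)]
                aggregate = {
                    "ts": time.time(),
                    "hr_mean": round(statistics.mean(samples), 1),
                    "hr_min": round(min(samples), 1),
                    "hr_max": round(max(samples), 1),
                }
                # Raw samples stay on the local node; only the aggregate leaves it.
                upload_to_cloud(aggregate)

        if __name__ == "__main__":
            run_local_node()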

    Verification in ASL and related specification languages

    Get PDF

    Construct by Contract: An Approach for Developing Reliable Software

    Get PDF
    This research introduces “Construct by Contract” as a proposal for a general methodology for developing dependable software systems. It describes an ideal process for constructing systems by propagating requirements as contracts, from the client’s wishes through to the correctness proof in the verification stage, especially for everyday software such as web, mobile and desktop applications. Such a methodology can be realised as a single integrated workspace, a standalone tool for developing software. To achieve this goal, the methodology brings together a collection of software engineering tools and techniques used throughout the software lifecycle, from requirements gathering to the testing phase, in order to ensure a contract-based flow. Construct by Contract is inclusive with regard to the roles of the people involved in the software construction process, including customers, users, project managers, designers, developers and testers, all of them interacting in one common software development environment and sharing information in a presentation understandable at each stage. It is worth mentioning that we focus on the verification phase as the key to achieving the reliability sought. Although at this point we have only completed the definition and specification of this methodology, we evaluate the implementation by analysing, measuring and comparing different existing tools that could fit any of the stages of the software lifecycle and that could be applied to a piece of commercial software. These insights are provided in a proof-of-concept case study involving a production Java web application using the Struts framework.
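
    The contract-propagation idea can be illustrated with a short sketch. The paper’s case study is a Java/Struts web application; purely for illustration, the following Python fragment shows how a client requirement can be carried as an explicit precondition/postcondition pair and checked at run time. All names here are hypothetical and this is not the paper’s tooling.

        # Illustrative sketch of the contract idea only: a requirement is
        # expressed as a pre/postcondition pair that is checked at run time
        # and could also serve as the target of a later verification step.
        import functools

        def contract(pre, post):
            """Attach a precondition and a postcondition to a function."""
            def wrap(fn):
                @functools.wraps(fn)
                def checked(*args, **kwargs):
                    assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
                    result = fn(*args, **kwargs)
                    assert post(result, *args, **kwargs), f"postcondition of {fn.__name__} violated"
                    return result
                return checked
            return wrap

        @contract(
            pre=lambda balance, amount: 0 < amount <= balance,
            post=lambda new_balance, balance, amount: new_balance == balance - amount,
        )
        def withdraw(balance, amount):
            """Client requirement ("never overdraw") carried as an explicit contract."""
            return balance - amount

        print(withdraw(100, 30))   # 70
        # withdraw(100, 200)       # would raise AssertionError: precondition violated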