
    Analysis and development of customer segmentation criteria and tools for SMEs

    In order to use the limited resources of sales and marketing optimally, and to provide customers with the best services, effective customer segmentation is of prime importance. This thesis deals with methods for analysing and comparing the individual values of customers for SMEs (small and medium-sized enterprises), because not all customers bring the same value to the company and not every customer can be treated in the same way. The different segmentation models are judged by different criteria. Which segmentation method allows a company to treat customers in the best possible way based on their value for the company? To answer this question, the SME first has to determine whether it knows the monetary or non-monetary value of its customers. The researcher examined whether the size of the company influences the choice of segmentation criteria and method; to determine this, it is necessary to address which companies count as SMEs. The main segmentation methods are reviewed extensively, available software models are evaluated and included in the research, and their advantages and disadvantages are compared. For this research topic, a mixed-method design was chosen: the researcher carried out one-to-one semi-structured expert interviews and, in parallel to the qualitative research, analysed quantitative data from a technical retailing company's database. The company holds data on more than 10,000 customers in its business warehouse and CRM system. The results of this research offer new grounds to reflect on whether the segmentation methods in the existing literature are useful for SMEs in the B2B sector and provide a basis for further research and development in this field. The new segmentation method, identified and confirmed through follow-up interviews in this research, will be of immense value to practitioners, especially sales and marketing managers working in this field.
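
    Purely as an illustration of value-based segmentation in general, and not the segmentation method developed in this thesis, a minimal ABC analysis over customer revenue might look as follows; the field name and the 80%/95% cut-offs are hypothetical:

        # Minimal sketch of a value-based (ABC) customer segmentation.
        # Assumptions: each customer record carries an annual revenue figure;
        # the 80%/95% cumulative-share cut-offs are illustrative only.

        def abc_segments(customers, revenue_key="annual_revenue"):
            """Assign each customer to segment A, B or C by cumulative revenue share."""
            ranked = sorted(customers, key=lambda c: c[revenue_key], reverse=True)
            total = sum(c[revenue_key] for c in ranked) or 1.0
            cumulative = 0.0
            for c in ranked:
                cumulative += c[revenue_key]
                share = cumulative / total
                c["segment"] = "A" if share <= 0.80 else "B" if share <= 0.95 else "C"
            return ranked

        if __name__ == "__main__":
            demo = [{"id": i, "annual_revenue": r}
                    for i, r in enumerate([120_000, 40_000, 15_000, 5_000, 800])]
            for c in abc_segments(demo):
                print(c["id"], c["segment"])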

    Tight Bounds for Online Matching in Bounded-Degree Graphs with Vertex Capacities

    We study the b-matching problem in bipartite graphs G = (S, R, E). Each vertex s ∈ S is a server with individual capacity b_s. The vertices r ∈ R are requests that arrive online and must be assigned instantly to an eligible server. The goal is to maximize the size of the constructed matching. We assume that G is a (k,d)-graph [J. Naor and D. Wajc, 2018], where k specifies a lower bound on the degree of each server and d is an upper bound on the degree of each request. This setting models matching problems in timely applications. We present tight upper and lower bounds on the performance of deterministic online algorithms. In particular, we develop a new online algorithm via a primal-dual analysis. The optimal competitive ratio tends to 1, for arbitrary k ≥ d, as the server capacities increase. Hence, nearly optimal solutions can be computed online. Our results also hold for the vertex-weighted problem extension, and thus for AdWords and auction problems in which each bidder issues individual, equally valued bids. Our bounds improve the previous best competitive ratios. The asymptotic competitiveness of 1 is a significant improvement over the previous factor of 1-1/e^{k/d}, for the interesting range where k/d ≥ 1 is small. Recall that 1-1/e ≈ 0.63. Matching problems that admit a competitive ratio arbitrarily close to 1 are rare. Prior results rely on randomization or probabilistic input models.
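
    The abstract does not spell out the algorithm, which is obtained via a primal-dual analysis. Purely to illustrate the online setting, a simple balance-style heuristic that assigns each arriving request to the eligible server with the most remaining capacity can be sketched as follows; this is not the paper's algorithm, and no competitive guarantee is claimed for it here:

        # Sketch of the online b-matching setting: requests arrive one by one and are
        # irrevocably assigned to an eligible server with residual capacity.
        # The "most remaining capacity" rule is an illustrative balance-style heuristic,
        # not the primal-dual algorithm analyzed in the paper.

        def online_b_matching(capacities, request_stream):
            """capacities: dict server -> b_s; request_stream: iterable of eligible-server lists."""
            remaining = dict(capacities)
            matching = []  # (request index, server) pairs
            for i, eligible in enumerate(request_stream):
                candidates = [s for s in eligible if remaining.get(s, 0) > 0]
                if not candidates:
                    continue  # request stays unmatched
                s = max(candidates, key=lambda c: remaining[c])  # balance rule
                remaining[s] -= 1
                matching.append((i, s))
            return matching

        if __name__ == "__main__":
            print(online_b_matching({"s1": 2, "s2": 1},
                                    [["s1"], ["s1", "s2"], ["s1"], ["s2"]]))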

    Optimal Algorithms for Online b-Matching with Variable Vertex Capacities

    We study the b-matching problem, which generalizes classical online matching introduced by Karp, Vazirani and Vazirani (STOC 1990). Consider a bipartite graph G = (S ∪ R, E). Every vertex s ∈ S is a server with a capacity b_s, indicating the number of possible matching partners. The vertices r ∈ R are requests that arrive online and must be matched immediately to an eligible server. The goal is to maximize the cardinality of the constructed matching. In contrast to earlier work, we study the general setting where servers may have arbitrary, individual capacities. We prove that the most natural and simple online algorithms achieve optimal competitive ratios. As for deterministic algorithms, we give a greedy algorithm RelativeBalance and analyze it by extending the primal-dual framework of Devanur, Jain and Kleinberg (SODA 2013). In the area of randomized algorithms we study the celebrated Ranking algorithm by Karp, Vazirani and Vazirani. We prove that the original Ranking strategy, simply picking a random permutation of the servers, achieves an optimal competitiveness of 1-1/e, independently of the server capacities. Hence it is not necessary to resort to a reduction that replaces every server s by b_s vertices of unit capacity and then runs Ranking on this graph with ∑_{s ∈ S} b_s vertices on the left-hand side. From a theoretical point of view our result explores the power of randomization and strictly limits the amount of required randomness. From a practical point of view it leads to more efficient allocation algorithms. Technically, we show that the primal-dual framework of Devanur, Jain and Kleinberg cannot establish a competitiveness better than 1/2 for the original Ranking algorithm, choosing a permutation of the servers. Therefore, we formulate a new configuration LP for the b-matching problem and then conduct a primal-dual analysis. We extend this analysis approach to the vertex-weighted b-matching problem. Specifically, we show that the algorithm PerturbedGreedy by Aggarwal, Goel, Karande and Mehta (SODA 2011), again with a sole randomization over the set of servers, is (1-1/e)-competitive. Together with recent work by Huang and Zhang (STOC 2020), our results demonstrate that configuration LPs can be strictly stronger than standard LPs in the analysis of more complex matching problems.
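
    A minimal sketch of the capacitated Ranking strategy as described above, assuming only what the abstract states: a single random permutation is drawn over the servers (not over the ∑_{s ∈ S} b_s unit-capacity copies), and every arriving request is matched to the highest-ranked eligible server that still has residual capacity; tie-breaking and all further details are assumptions for illustration:

        import random

        # Capacitated Ranking sketch: ONE random permutation of the servers is drawn
        # up front; each online request goes to the best-ranked eligible server with
        # remaining capacity. Details beyond the abstract are assumptions.

        def ranking_b_matching(capacities, request_stream, rng=random):
            servers = list(capacities)
            rng.shuffle(servers)                         # single permutation over servers
            rank = {s: i for i, s in enumerate(servers)}
            remaining = dict(capacities)
            matching = []
            for i, eligible in enumerate(request_stream):
                candidates = [s for s in eligible if remaining.get(s, 0) > 0]
                if not candidates:
                    continue
                s = min(candidates, key=rank.__getitem__)  # best-ranked eligible server
                remaining[s] -= 1
                matching.append((i, s))
            return matching

    The point of this formulation is that the randomness is limited to one permutation of the |S| servers rather than one over all ∑_{s ∈ S} b_s unit-capacity copies, which is exactly the reduction the abstract argues is unnecessary.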

    LineRESOLFT microscopy

    Recent advances in RESOLFT (reversible saturable optical fluorescence transitions) microscopy have enabled the non-invasive three-dimensional visualization of numerous structures in living cells at high spatial resolution. This technique, which utilizes low light intensities, has manifold potential applications in the life sciences. The work presented here aims at further broadening the applications of RESOLFT microscopy by implementing a scheme for fast image acquisition of large fields of view, applicable to thick specimens. With a pattern consisting of line-shaped intensity minima, this novel technique, called lineRESOLFT, permits fast imaging of living cells at ~40 nm lateral resolution while offering strong optical sectioning. The full potential of this method is further illustrated by the continuous three-dimensional imaging of neurons in living brain slices with high spatio-temporal resolution, enabling the observation of rapid spine motility over large fields of view on the second time scale.

    Effects of Disorder on the Pressure-Induced Mott Transition in κ-(BEDT-TTF)₂Cu[N(CN)₂]Cl

    We present a study of the influence of disorder on the Mott metal-insulator transition for the organic charge-transfer salt κ-(BEDT-TTF)₂Cu[N(CN)₂]Cl. To this end, disorder was introduced into the system in a controlled way by exposing the single crystals to x-ray irradiation. The crystals were then fine-tuned across the Mott transition by the application of continuously controllable He-gas pressure at low temperatures. Measurements of the thermal expansion and resistance show that the first-order character of the Mott transition prevails for low irradiation doses achieved by irradiation times up to 100 h. For these crystals with a moderate degree of disorder, we find a first-order transition line which ends in a second-order critical endpoint, akin to the pristine crystals. Compared to the latter, however, we observe a significant reduction of both the critical pressure p_c and the critical temperature T_c. This result is consistent with the theoretically predicted formation of a soft Coulomb gap in the presence of strong correlations and small disorder. Furthermore, we demonstrate, similar to the observation for the pristine sample, that the Mott transition after 50 h of irradiation is accompanied by sizable lattice effects, the critical behavior of which can be well described by mean-field theory. Our results demonstrate that the character of the Mott transition remains essentially unchanged at a low disorder level. However, after an irradiation time of 150 h, no clear signatures of a discontinuous metal-insulator transition could be revealed anymore. These results suggest that, above a certain disorder level, the metal-insulator transition becomes a smeared first-order transition with some residual hysteresis. Comment: 20 pages, 7 figures, appeared in the Special Issue "Advances in Organic Conductors and Superconductors" of Crystals

    Modelling Hourly Particulate Matter (PM10) Concentrations at High Spatial Resolution in Germany Using Land Use Regression and Open Data

    Air pollution is a major health risk factor worldwide. Regular short- and long-term exposure to ambient particulate matter (PM) promotes various diseases and can lead to premature death. Therefore, in Germany, air quality is assessed continuously at approximately 400 measurement sites. However, the PM concentration between these sites is either unknown or not available at the high spatial–temporal resolution needed to accurately determine exposure, since commonly used chemical transport models are resource intensive. In this study, we present a method that can provide information about the ambient PM concentration for all of Germany at high spatial (100 m × 100 m) and hourly resolution based on freely available data. To do so, we adopted and optimised a method that combines land use regression modelling with a geostatistical interpolation technique, ordinary kriging. The land use regression model was set up based on CORINE (Coordination of Information on the Environment) land cover data and the German National Emission Inventory. To test the model's performance under different conditions, four distinct data sets were used. (1) From a total of 8760 (365 × 24) available hours, 1500 were randomly selected. From those, the hourly mean concentrations at all stations (ca. 400) were used to run the model (n = 566,326). The leave-one-out cross-validation resulted in a mean absolute error (MAE) of 7.68 μg m⁻³ and a root mean square error (RMSE) of 11.20 μg m⁻³. (2) For a more detailed analysis of how the model performs when an above-average number of high values are modelled, we selected all hourly means from February 2011 (n = 256,606). In February, measured concentrations were much higher than in any other month, leading to a slightly higher MAE of 9.77 μg m⁻³ and RMSE of 14.36 μg m⁻³. (3) To enable better comparability with other studies, the annual mean concentration (n = 413) was modelled with a MAE of 4.82 μg m⁻³ and a RMSE of 6.08 μg m⁻³. (4) To verify the model's capability of predicting exceedances of the daily mean limit value, daily means were modelled for all days in February (n = 10,845). Exceedances of the daily mean limit value of 50 μg m⁻³ were predicted correctly in 88.67% of all cases. We show that modelling ambient PM concentrations can be performed at a high spatial–temporal resolution for large areas based on open data, land use regression modelling, and kriging, with overall convincing results. This approach offers new possibilities in the fields of exposure assessment, city planning, and governance, since it allows more accurate views of ambient PM concentrations at the spatial–temporal resolution required for such assessments. Peer Reviewed
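
    A minimal sketch of the regression-plus-kriging idea described above: fit a land use regression on station-level predictors, interpolate its residuals with ordinary kriging, and add the interpolated residuals back to the regression estimate on the prediction grid. The predictor layout and the choice of scikit-learn and PyKrige are assumptions, not the study's actual implementation:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from pykrige.ok import OrdinaryKriging  # assumed library choice

        # Land use regression combined with ordinary kriging of the residuals.
        # X_st: land-use/emission predictors at the ~400 stations, pm: hourly means there,
        # lon_st/lat_st: station coordinates; *_grid: the same quantities on the 100 m grid.

        def lur_kriging(X_st, lon_st, lat_st, pm, X_grid, lon_grid, lat_grid):
            # 1) Land use regression: explain PM10 with land-cover / emission predictors.
            lur = LinearRegression().fit(X_st, pm)
            residuals = pm - lur.predict(X_st)
            # 2) Ordinary kriging of the regression residuals captures spatial structure.
            ok = OrdinaryKriging(lon_st, lat_st, residuals, variogram_model="spherical")
            kriged_res, _variance = ok.execute("points", lon_grid, lat_grid)
            # 3) Final prediction = regression estimate + interpolated residual.
            return lur.predict(X_grid) + kriged_res

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X_st, lon, lat = rng.random((50, 3)), rng.random(50), rng.random(50)
            pm = 20 + X_st @ np.array([5.0, 3.0, 1.0]) + rng.normal(0, 2, 50)
            X_g, lon_g, lat_g = rng.random((10, 3)), rng.random(10), rng.random(10)
            print(lur_kriging(X_st, lon, lat, pm, X_g, lon_g, lat_g))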

    Effects of a combination of terlipressin and dopexamine compared with terlipressin alone in an endotoxaemic sheep model

    In catecholamine-resistant septic shock, vasopressin receptor agonists such as terlipressin are used. These agents impair cardiac output and oxygen consumption and lead to pulmonary arterial hypertension and microcirculatory disturbances. Dopexamine might reduce these effects. After induction of endotoxaemic shock in 12 sheep, continuous terlipressin administration was started after 16 h; 3 h later the sheep were randomly assigned to a dopexamine group (terlipressin plus increasing dopexamine doses) or a control group (terlipressin only). Dopexamine reduced mean pulmonary arterial pressure and vascular resistance and led to an increase in cardiac output. Mean arterial pressure decreased, while heart rate and lactate concentration increased. The combination of terlipressin with dopexamine has a positive effect on pulmonary haemodynamics but worsens systemic haemodynamics, which is why this combination cannot be recommended.