1.
Loh, F., Mehling, N., Hoßfeld, T.: Towards LoRaWAN without Data Loss: Studying the Performance of Different Channel Access Approaches. Sensors. (2022).
The Long Range Wide Area Network (LoRaWAN) is one of the fastest growing Internet of Things (IoT) access protocols. It operates in the license-free 868 MHz band and gives everyone the possibility to create their own small sensor networks. The drawback of this technology is often unscheduled or random channel access, which leads to message collisions and potential data loss. For that reason, recent literature studies alternative approaches for LoRaWAN channel access. In this work, state-of-the-art random channel access is compared with alternative approaches from the literature by means of collision probability. Furthermore, a time-scheduled channel access methodology is presented to completely avoid collisions in LoRaWAN. For this approach, an exhaustive simulation study was conducted and the performance was evaluated with random access cross-traffic. In a general theoretical analysis, the limits of the time-scheduled approach to comply with duty cycle regulations in LoRaWAN are discussed.
2.
Seufert, A., Poignée, F., Hoßfeld, T., Seufert, M.: Pandemic in the Digital Age: Analyzing WhatsApp Communication Behavior before, during, and after the COVID-19 Lockdown. Humanities and Social Sciences Communications. 9, 140 (1–9) (2022).
The strict restrictions introduced by the COVID-19 lockdowns, which started from March 2020, changed people’s daily lives and habits on many different levels. In this work, we investigate the impact of the lockdown on the communication behavior in the mobile instant messaging application WhatsApp. Our evaluations are based on a large dataset of 2577 private chat histories with 25,378,093 messages from 51,973 users. The analysis of the one-to-one and group conversations confirms that the lockdown severely altered the communication in WhatsApp chats compared to pre-pandemic time ranges. In particular, we observe short-term effects, which caused an increased message frequency in the first lockdown months and a shifted communication activity during the day in March and April 2020. Moreover, we also see long-term effects of the ongoing pandemic situation until February 2021, which indicate a change of communication behavior towards more regular messaging, as well as a persisting change in activity during the day. The results of our work show that even anonymized chat histories can tell us a lot about people’s behavior and especially behavioral changes during the COVID-19 pandemic and thus are of great relevance for behavioral researchers. Furthermore, looking at the pandemic from an Internet provider perspective, these insights can be used during the next pandemic, or if the current COVID-19 situation worsens, to adapt communication networks to the changed usage behavior early on and thus avoid network congestion.
3.
Tran, H.T., Pham Ngoc, N., Hoßfeld, T., Seufert, M., Thang, T.C.: Cumulative Quality Modeling for HTTP Adaptive Streaming. ACM Transactions on Multimedia Computing, Communications, and Applications. 17, 22 (1–24) (2021).
4.
Nguyen Huu, T., Trung Kien, N., Van Hoa, N., Thu Huong, T., Wamser, F., Hoßfeld, T.: Energy-Aware Service Function Chain Embedding in Edge-Cloud Environments for IoT Applications. IEEE Internet of Things Journal. (2021).
The implementation of IoT applications faces several challenges in practice, such as compliance with QoS requirements, resource constraints, and energy consumption. In this context, the joint edge-cloud paradigm for IoT applications can resolve some of the issues arising in pure cloud computing scenarios, such as those related to latency, energy, or privacy. Therefore, an edge-cloud environment could be promising for resource- and energy-efficient IoT applications that implement virtual network functions (VNFs) bound together into service function chains (SFCs). However, resource- and energy-efficient SFC placement requires smart SFC embedding mechanisms in the edge-cloud environment, as several challenges arise, such as IoT service chain modeling and evaluation, the trade-off between resource allocation, energy efficiency, and performance, and the resource dynamics. In this article, we address issues in modeling resource and energy utilization for IoT applications in edge-cloud environments. A smart traffic monitoring IP camera system is deployed as a use case for realistic modeling of a service chain. The system is implemented in our testbed, which is designed and developed specifically to model and investigate the resource and energy utilization of SFC embedding strategies. A resource- and energy-aware SFC embedding strategy in the edge-cloud environment for IoT applications is then proposed. Our algorithm is able to cope with dynamic load and resource situations emerging from dynamic SFC requests. The strategy is evaluated systematically in terms of acceptance ratio of SFC requests, resource efficiency and utilization, power consumption, and VNF migrations depending on the offered system load. Results show that our strategy outperforms some existing approaches in terms of resource and energy efficiency, thus overcoming the relevant challenges from practice and meeting the demands of IoT applications.
5.
Loh, F., Poignée, F., Wamser, F., Leidinger, F., Hoßfeld, T.: Uplink vs. Downlink: Machine Learning-Based Quality Prediction for HTTP Adaptive Video Streaming. Sensors. 21, 4172 (2021).
Streaming video is responsible for the bulk of Internet traffic these days. For this reason, Internet providers and network operators try to make predictions and assessments about the streaming quality for an end user. Current monitoring solutions are based on a variety of different machine learning approaches. The challenge for providers and operators nowadays is that existing approaches require large amounts of data. In this work, the most relevant quality of experience metrics, i.e., the initial playback delay, the video streaming quality, video quality changes, and video rebuffering events, are examined using a voluminous data set of more than 13,000 YouTube video streaming runs that were collected with the native YouTube mobile app. Three machine learning models are developed and compared to estimate playback behavior based on uplink request information. The main focus has been on developing a lightweight approach using as few features and as little data as possible, while maintaining state-of-the-art performance.
6.
Wamser, F., Seufert, A., Hall, A., Wunderer, S., Hoßfeld, T.: Valid Statements by the Crowd: Statistical Measures for Precision in Crowdsourced Mobile Measurements. Network. 1, 215–232 (2021).
Crowdsourced network measurements (CNMs) are becoming increasingly popular as they assess the performance of a mobile network from the end user’s perspective on a large scale. Here, network measurements are performed directly on the end-users’ devices, thus taking advantage of the real-world conditions end-users encounter. However, this type of uncontrolled measurement raises questions about its validity and reliability. The problem lies in the nature of this type of data collection. In CNMs, mobile network subscribers are involved to a large extent in the measurement process, and collect data themselves for the operator. The collection of data on user devices in arbitrary locations and at uncontrolled times requires means to ensure validity and reliability. To address this issue, our paper defines concepts and guidelines for analyzing the precision of CNMs; specifically, the number of measurements required to make valid statements. In addition to the formal definition of the aspect, we illustrate the problem and use an extensive sample data set to show possible assessment approaches. This data set consists of more than 20.4 million crowdsourced mobile measurements from across France, measured by a commercial data provider.
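The precision question raised in this abstract, i.e., how many measurements are needed for a valid statement, can be illustrated with the standard normal-approximation sample-size bound n ≥ (z·σ/ε)². A minimal sketch with hypothetical values (the 10 Mbps standard deviation and 1 Mbps margin are illustrative, not taken from the paper):

```python
import math

def required_samples(std_dev, margin, z=1.96):
    """Smallest n so that a z-level confidence interval for the mean
    has half-width at most `margin` (normal approximation)."""
    return math.ceil((z * std_dev / margin) ** 2)

# Hypothetical example: throughput samples with an empirical standard
# deviation of 10 Mbps, target precision of +/- 1 Mbps at 95% confidence.
n = required_samples(10.0, 1.0)
```

Halving the tolerated margin roughly quadruples the required number of crowdsourced measurements, which is why precision guidelines matter at this scale.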
7.
Loh, F., Mehling, N., Metzger, F., Hoßfeld, T.: LoRaPlan: A Software to Evaluate Gateway Placement in LoRaWAN. 17th International Conference on Network and Service Management. (2021).
Long Range Wide Area Network (LoRaWAN) is one of the fastest growing Internet of Things (IoT) access network solutions. One major challenge in current LoRaWAN planning is message collision due to the random channel access. To study the collision behavior in LoRaWAN, we present LoRaPlan in this demonstration. LoRaPlan is a software to evaluate gateway placement decisions by studying the collision probability. To this end, sensor and gateway locations can be imported to create a LoRaWAN. Furthermore, existing networks can be extended by additional gateways. Based on this setup, network coverage, network quality with regard to the number of sensors a single gateway has to manage, and also transmission quality and collision probabilities can be studied. In this demonstration, first the process of network creation with gateway and sensor import and additional manual gateway placement in LoRaPlan is presented. Different adjustable parameters are shown and their influence on network coverage and collision probability is discussed. At the end of this demonstration, it is possible for the audience to place their own gateways and evaluate their own placements by means of collision probability and coverage.
8.
Geissler, S., Lange, S., Linguaglossa, L., Rossi, D., Zinner, T., Hoßfeld, T.: Discrete-Time Modeling of NFV Accelerators that Exploit Batched Processing. ACM Transactions on Modeling and Performance Evaluation of Computing Systems. 6, 1–27 (2021).
9.
Wehner, N., Seufert, M., Schüler, J., Wassermann, S., Casas, P., Hoßfeld, T.: Improving Web QoE Monitoring for Encrypted Network Traffic through Time Series Modeling. ACM SIGMETRICS Performance Evaluation Review. 48, 37–40 (2021).
This paper addresses the problem of Quality of Experience (QoE) monitoring for web browsing. In particular, the inference of common Web QoE metrics such as Speed Index (SI) is investigated. Based on a large dataset collected with open web-measurement platforms on different device-types, a unique feature set is designed and used to estimate the RUMSI -- an efficient approximation to SI, with machine-learning based regression and classification approaches. Results indicate that it is possible to estimate the RUMSI accurately, and that in particular, recurrent neural networks are highly suitable for the task, as they capture the network dynamics more precisely.
10.
Hoßfeld, T., Heegaard, P.E., Skorin-Kapov, L., Varela, M.: Deriving QoE in systems: from fundamental relationships to a QoE-based Service-level Quality Index. Quality and User Experience. 5, (2020). https://rdcu.be/b483m
With Quality of Experience (QoE) research having made significant advances over the years, service and network providers aim at user-centric evaluation of the services provided in their system. The question arises how to derive QoE in systems. In the context of subjective user studies conducted to derive relationships between influence factors and QoE, user diversity leads to varying distributions of user rating scores for different test conditions. Such models are commonly exploited by providers to derive various QoE metrics in their system, such as expected QoE, or the percentage of users rating above a certain threshold. The question then becomes how to combine (a) user rating distributions obtained from subjective studies, and (b) system parameter distributions, so as to obtain the actual observed QoE distribution in the system? Moreover, how can various QoE metrics of interest in the system be derived? We prove fundamental relationships for the derivation of QoE in systems, thus providing an important link between the QoE community and the systems community. In our numerical examples, we focus mainly on QoE metrics. We furthermore provide a more generalized view on quantifying the quality of systems by defining a QoE-based Service-level Quality Index. This index exploits the fact that quality can be seen as a proxy measure for utility. Following the assumption that not all user sessions should be weighted equally, we aim to provide a generic framework that can be utilized to quantify the overall utility of a service delivered by a system.
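The central question in this abstract, combining per-condition user rating distributions from subjective studies with the distribution of system parameters, amounts to a probability mixture, P(R=r) = Σ_x P(X=x)·P(R=r|X=x). A minimal sketch with hypothetical numbers (the stalling conditions and rating probabilities are illustrative, not from the paper):

```python
def system_qoe_distribution(param_dist, ratings_given_param):
    """Mix per-condition rating distributions (from a subjective study)
    with the distribution of the condition in the system:
    P(R=r) = sum_x P(X=x) * P(R=r | X=x)."""
    qoe = {}
    for x, px in param_dist.items():
        for r, pr in ratings_given_param[x].items():
            qoe[r] = qoe.get(r, 0.0) + px * pr
    return qoe

# Hypothetical example: 70% of sessions see 0 stalls, 30% see 1 stall,
# with rating distributions per condition from a (made-up) user study.
params = {0: 0.7, 1: 0.3}
ratings = {0: {5: 0.6, 4: 0.4}, 1: {3: 0.5, 2: 0.5}}
dist = system_qoe_distribution(params, ratings)
expected_qoe = sum(r * p for r, p in dist.items())
```

From the resulting distribution, any QoE metric of interest can be read off, e.g., the expected QoE or the fraction of users rating above a threshold.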
11.
Borchert, K., Seufert, A., Gamboa, E., Hirth, M., Hoßfeld, T.: In vitro vs in vivo: does the study’s interface design influence crowdsourced video QoE? Quality and User Experience. (2020).
Evaluating the Quality of Experience (QoE) of video streaming and its influence factors has become paramount for streaming providers, as they want to maintain high satisfaction for their customers. In this context, crowdsourced user studies became a valuable tool to evaluate different factors which can affect the perceived user experience on a large scale. In general, most of these crowdsourcing studies use what we refer to as either an in vivo or an in vitro interface design. In vivo design means that the study participant has to rate the QoE of a video that is embedded in an application similar to a real streaming service, e.g., YouTube or Netflix. In vitro design refers to a setting in which the video stream is separated from a specific service and thus, the video plays on a plain background. Although these interface designs vary widely, the results are often compared and generalized. In this work, we use a crowdsourcing study to investigate the influence of three interface design alternatives, an in vitro and two in vivo designs with different levels of interactiveness, on the perceived video QoE. Contrary to our expectations, the results indicate that there is no significant influence of the study’s interface design in general on the video experience. Furthermore, we found that the in vivo design does not reduce the test takers’ attentiveness. However, we observed that participants who interacted with the test interface reported a higher video QoE than other groups.
12.
Metzger, F., Hoßfeld, T., Bauer, A., Kounev, S., Heegaard, P.E.: Modeling of Aggregated IoT Traffic and Its Application to an IoT Cloud. Proceedings of the IEEE. 107, 679–694 (2019).
As the Internet of Things (IoT) continues to gain traction in telecommunication networks, a very large number of devices are expected to be connected and used in the near future. In order to appropriately plan and dimension the network, as well as the back-end cloud systems and the resulting signaling load, traffic models are employed. These models are designed to accurately capture and predict the properties of IoT traffic in a concise manner. To achieve this, Poisson process approximations, based on the Palm–Khintchine theorem, have often been used in the past. Due to the scale (and the difference in scales in various IoT networks) of the modeled systems, the fidelity of this approximation is crucial, as, in practice, it is very challenging to accurately measure or simulate large-scale IoT deployments. The main goal of this paper is to understand the level of accuracy of the Poisson approximation model. To this end, we first survey both common IoT network properties and network scales as well as traffic types. Second, we explain and discuss the Palm–Khintchine theorem, how it is applied to the problem, and which inaccuracies can occur when using it. Based on this, we derive guidelines as to when a Poisson process can be assumed for aggregated periodic IoT traffic. Finally, we evaluate our approach in the context of an IoT cloud scaler use case.
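The Palm–Khintchine argument referenced in this abstract, that superposing many independent periodic sources with random phases looks locally Poisson, can be checked with a small simulation. A minimal sketch (all parameter values, e.g., 500 sensors reporting every 60 s, are illustrative and not from the paper):

```python
import random

def merged_interarrivals(n_sources=500, period=60.0, horizon=600.0, seed=1):
    """Superpose n periodic sources with uniform random phase offsets
    and return the inter-arrival times of the merged event stream."""
    rng = random.Random(seed)
    events = []
    for _ in range(n_sources):
        t = rng.uniform(0.0, period)  # random phase offset
        while t < horizon:
            events.append(t)
            t += period
    events.sort()
    return [b - a for a, b in zip(events, events[1:])]

gaps = merged_interarrivals()
mean = sum(gaps) / len(gaps)  # approx. period / n_sources
cv = (sum((g - mean) ** 2 for g in gaps) / len(gaps)) ** 0.5 / mean
# For a Poisson process the coefficient of variation of the
# inter-arrival times is 1; the superposition should get close to it.
```

With few sources the merged stream stays visibly periodic (CV well below 1), which is exactly the regime where the paper's guidelines caution against the Poisson assumption.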
13.
Hoßfeld, T., Skorin-Kapov, L., Varela, M., Chen, K.-T.: Guest Editorial: Special Issue on “QoE Management for Multimedia Services”. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM). (2018).
14.
Dinh-Xuan, L., Popp, C., Burger, V., Wamser, F., Hoßfeld, T.: Impact of VNF Placements on QoE Monitoring in the Cloud. International Journal of Network Management. (2018).
15.
Chang, H.-S., Hsu, C.-F., Hoßfeld, T., Chen, K.-T.: Active Learning for Crowdsourced QoE Modeling. IEEE Transactions on Multimedia. (2018).
16.
Hoßfeld, T., Heegaard, P.E., Varela, M., Skorin-Kapov, L.: Confidence Interval Estimators for MOS Values. arXiv preprint arXiv:1806.01126. (2018).
17.
Sieber, C., Hagn, K., Moldovan, C., Hoßfeld, T., Kellerer, W.: Towards Machine Learning-Based Optimal HAS. arXiv preprint arXiv:1808.08065. (2018).
18.
Hoßfeld, T., Timmerer, C.: Quality of experience column: an introduction. ACM SIGMultimedia Records. 10 (2018).
19.
Hoßfeld, T., Skorin-Kapov, L., Heegaard, P.E., Varela, M.: A new QoE fairness index for QoE management. Quality and User Experience. (2018).
20.
Skorin-Kapov, L., Varela, M., Hoßfeld, T., Chen, K.-T.: A Survey of Emerging Concepts and Challenges for QoE Management of Multimedia Services. ACM Transactions on Multimedia Computing, Communications and Applications. (2018).
21.
Hoßfeld, T., Chan, S.-H.G., Mark, B.L., Timm-Giel, A.: Softwarization and Caching in NGN. Computer Networks. 125, 1–3 (2017).
22.
Hoßfeld, T.: 2016 International Teletraffic Congress (ITC 28) Report. ACM SIGCOMM Computer Communication Review. (2017).
23.
Hoßfeld, T., Heegaard, P.E., Varela, M., Möller, S.: Formal definition of QoE metrics. arXiv preprint arXiv:1607.00321. (2016).
24.
Metzger, F., Liotou, E., Moldovan, C., Hoßfeld, T.: TCP video streaming and mobile networks: Not a love story, but better with context. Computer Networks. 109, 246–256 (2016).
25.
Burger, V., Seufert, M., Hoßfeld, T., Tran-Gia, P.: Performance Evaluation of Backhaul Bandwidth Aggregation Using a Partial Sharing Scheme. Physical Communication. 19, 135–144 (2016).
To cope with the increasing demand of mobile devices and the limited capacity of cellular networks, mobile connections are offloaded to WiFi. The access capacity is further increased by aggregating the bandwidth of WiFi access links. To analyse the performance of aggregated access links, we model the simplest case of two cooperating systems interchanging capacities using an offloading scheme. The resulting analytic model is computed by means of a two-dimensional birth and death process. It can be used to seamlessly evaluate the performance of systems between partitioning and complete sharing. This allows optimizing the setting of thresholds depending on the load of the cooperating system. Furthermore, the benefit of aggregating bandwidth in different scenarios with homogeneous and heterogeneous workloads is quantified, and the performance of more than two cooperating systems is evaluated by simulation.
26.
Hoßfeld, T., Skorin-Kapov, L., Heegaard, P.E., Varela, M.: Definition of QoE Fairness in Shared Systems. IEEE Communications Letters. (2016).
27.
Seufert, M., Griepentrog, T., Burger, V., Hoßfeld, T.: A Simple WiFi Hotspot Model for Cities. IEEE Communications Letters. 20, 384–387 (2016).
WiFi offloading has become increasingly popular. Many private and public institutions (e.g., libraries, cafes, restaurants) already provide an alternative free Internet link via WiFi, but also commercial services emerge to mitigate the load on mobile networks. Moreover, smart cities start to establish WiFi infrastructure for current and future civic services. In this work, the hotspot locations of ten diverse large cities are characterized, and a surprisingly simple model for the distribution of WiFi hotspots in an urban environment is derived.
28.
Hoßfeld, T., Heegaard, P.E., Varela, M., Möller, S.: QoE beyond the MOS: an in-depth look at QoE via better metrics and their relation to MOS. Quality and User Experience. 1, (2016).
29.
Seufert, M., Lange, S., Hoßfeld, T.: More than Topology: Joint Topology and Attribute Sampling and Generation of Social Network Graphs. Computer Communications. 73, 176–187 (2016).
Graph sampling refers to the process of deriving a small subset of nodes from a possibly huge graph in order to estimate properties of the whole graph from examining the sample. Whereas topological properties can already be obtained accurately by sampling, current approaches do not take possibly hidden dependencies between node topology and attributes into account. Especially in the context of online social networks, node attributes are of importance as they correspond to properties of the social network's users. Therefore, existing sampling algorithms can be extended to attribute sampling, but still lack the capturing of structural properties. Analyzing topology (e.g., node degree, clustering coefficient) and attribute properties (e.g., age, location) jointly can provide valuable insights into the social network and allows for a better understanding of social processes. As major contribution, this work proposes a novel sampling algorithm which provides unbiased and reliable estimates of joint topological and attribute-based graph properties in a resource-efficient fashion. Furthermore, the obtained samples allow for the generation of synthetic graphs, which show high similarity to the original graph with respect to topology and attributes. The proposed sampling and generation algorithms are evaluated on real-world social network graphs, where they prove to be effective.
30.
Wamser, F., Casas, P., Seufert, M., Moldovan, C., Tran-Gia, P., Hoßfeld, T.: Modeling the YouTube Stack: from Packets to Quality of Experience. Computer Networks. 109, 211–224 (2016).
YouTube is one of the most popular and volume-dominant services in today’s Internet, and has changed the Web forever. Consequently, network operators are forced to consider it in the design, deployment, and optimization of their networks. Taming YouTube requires a good understanding of the complete YouTube stack, from the network streaming service to the application itself. Understanding the interplays between individual YouTube functionalities and their implications for traffic and user Quality of Experience (QoE) becomes paramount nowadays. In this paper we characterize and model the YouTube stack at different layers, going from the generated network traffic to the QoE perceived by the users watching YouTube videos. Firstly, we present a network traffic model for the YouTube flow control mechanism, which permits understanding how YouTube provisions video traffic flows to users. Secondly, we investigate how traffic is consumed at the client side, deriving a simple model for the YouTube application. Thirdly, we analyze the implications for the end user, and present a model for the quality as perceived by them. This model is finally integrated into a system for real-time QoE-based YouTube monitoring, highly useful to operators to assess the performance of their networks for provisioning YouTube videos. The central parameter for all the presented models is the buffer level at the YouTube application layer. This paper provides an extensive compendium of objective tools and models for network operators to better understand the YouTube traffic in their networks, to predict the playback behavior of the video player, and to assess, in practice, how well they satisfy their customers watching YouTube.
31.
Hoßfeld, T., Seufert, M., Sieber, C., Zinner, T., Tran-Gia, P.: Identifying QoE Optimal Adaptation of HTTP Adaptive Streaming Based on Subjective Studies. Computer Networks. 81, 320–332 (2015).
HTTP Adaptive Streaming (HAS) technologies, e.g., Apple HLS or MPEG-DASH, automatically adapt the delivered video quality to the available network. This reduces stalling of the video but additionally introduces quality switches, which also influence the user-perceived Quality of Experience (QoE). In this work, we conduct a subjective study to identify the impact of adaptation parameters on QoE. The results indicate that the video quality has to be maximized first, and that the number of quality switches is less important. Based on these results, a method to compute the QoE-optimal adaptation strategy for HAS on a per-user basis with mixed-integer linear programming is presented. This QoE-optimal adaptation enables the benchmarking of existing adaptation algorithms for any given network condition. Moreover, the investigated concept is extended to a multi-user IPTV scenario. The question is answered whether video quality, and thereby QoE, can be shared in a fair manner among the involved users.
32.
Hoßfeld, T., Metzger, F., Jarschel, M.: QoE for Cloud Gaming. IEEE Communications Society E-Letter. (2015).
Cloud Gaming combines the successful concepts of Cloud Computing and Online Gaming. It provides the entire game experience to the users by processing the game in the cloud and streaming the contents to the player. The player is no longer dependent on a specific type or quality of gaming hardware, but is able to use common devices. However, at the same time the end device needs a broadband Internet connection and the ability to display a video stream properly. While this may reduce hardware costs for users and increase the revenue for developers by leaving out the retail chain, it also raises new challenges for Quality of Service (QoS) in terms of bandwidth and latency for the underlying network. In particular, there is a strong interest in the player’s Quality of Experience (QoE) by the involved stakeholders, i.e., the game providers and the network operators. Given similar pricing schemes, players are likely to be influenced by expected and experienced quality. Thus, a provider is interested in understanding QoE and in reacting to QoE problems by managing or adapting the service. There is also a strong academic interest, since QoE for cloud gaming as well as managing QoE for cloud gaming addresses a multitude of fascinating challenges in QoE. One might think that the topic of online video games is equally popular in research, but efforts are often solely focused on cloud gaming and its subjective QoE through user studies. Compared to plain video streaming, the inner properties of video games are not that straightforward to observe from the outside. But to conduct proper measurements, it is essential to understand them.
33.
Seufert, M., Egger, S., Slanina, M., Zinner, T., Hoßfeld, T., Tran-Gia, P.: A Survey on Quality of Experience of HTTP Adaptive Streaming. IEEE Communications Surveys & Tutorials. 17, 469–492 (2015).
Changing network conditions pose severe problems to video streaming in the Internet. HTTP adaptive streaming (HAS) is a technology employed by numerous video services which relieves these issues by adapting the video to the current network conditions. It enables service providers to improve resource utilization and Quality of Experience (QoE) by incorporating information from different layers in order to deliver and adapt a video in its best possible quality. Thereby, it allows taking into account end user device capabilities, available video quality levels, current network conditions, and current server load. For end users, the major benefits of HAS compared to classical HTTP video streaming are reduced interruptions of the video playback and higher bandwidth utilization, which both generally result in a higher QoE. Adaptation is possible by changing the frame rate, resolution, or quantization of the video, which can be done with various adaptation strategies and related client- and server-side actions. The technical development of HAS, existing open standardized solutions, but also proprietary solutions are reviewed in this article as a foundation to derive the QoE influence factors which emerge as a result of adaptation. The main contribution is a comprehensive survey of QoE-related works from the human computer interaction and networking domains, which are structured according to the QoE impact of video adaptation. To be more precise, subjective studies which cover QoE aspects of adaptation dimensions and strategies are revisited. As a result, QoE influence factors of HAS and corresponding QoE models are identified, but also open issues and conflicting results are discussed. Furthermore, technical influence factors, which are often ignored in the context of HAS, affect perceptual QoE influence factors and are consequently analyzed. This survey gives the reader an overview of the current state of the art and recent developments. At the same time, it targets networking researchers who develop new solutions for HTTP video streaming or assess video streaming from a user-centric point of view. Therefore, the article is a major step towards truly improving HAS.
34.
Hoßfeld, T., Tran-Gia, P., Vukovic, M.: Special issue on crowdsourcing. Computer Networks: The International Journal of Computer and Telecommunications Networking. (2015).
35.
Hirth, M., Hoßfeld, T., Mellia, M., Schwartz, C., Lehrieder, F.: Crowdsourced Network Measurements: Benefits and Best Practices. Computer Networks Special Issue on Crowdsourcing. (2015).
Network measurements are of high importance both for the operation of networks and for the design and evaluation of new management mechanisms. Therefore, several approaches exist for running network measurements, ranging from analyzing live traffic traces from campus or Internet Service Provider (ISP) networks to performing active measurements on distributed testbeds, e.g., PlanetLab, or involving volunteers. However, each method falls short, offering only a partial view of the network. For instance, the scope of passive traffic traces is limited to an ISP’s network and customers’ habits, whereas active measurements might be biased by the population or node location involved. To complement these techniques, we propose to use (commercial) crowdsourcing platforms for network measurements. They permit a controllable, diverse and realistic view of the Internet and provide better control than do measurements with voluntary participants. In this study, we compare crowdsourcing with traditional measurement techniques, describe possible pitfalls and limitations, and present best practices to overcome these issues. The contribution of this paper is a guideline for researchers to understand when and how to exploit crowdsourcing for network measurements.
36.
Jarschel, M., Zinner, T., Hoßfeld, T., Tran-Gia, P., Kellerer, W.: Interfaces, Attributes, and Use Cases: A Compass for SDN. IEEE Communications Magazine. 52, 210–217 (2014).
The term Software Defined Networking (SDN) is prevalent in today’s discussion about future communication networks. As with any new term or paradigm, however, no consistent definition regarding this technology has formed. The fragmented view on SDN results in legacy products being passed off by equipment vendors as SDN, academics mixing up the attributes of SDN with those of network virtualization, and users not fully understanding the benefits. Therefore, establishing SDN as a widely adopted technology beyond laboratories and insular deployments requires a compass to navigate the multitude of ideas and concepts that make up SDN today. The contribution of this article represents an important step toward such an instrument. It gives a thorough definition of SDN and its interfaces as well as a list of its key attributes. Furthermore, a mapping of interfaces and attributes to SDN use cases is provided, highlighting the relevance of the interfaces and attributes for each scenario. This compass gives guidance to a potential adopter of SDN on whether SDN is in fact the right technology for a specific use case.
37.
Hoßfeld, T., Burger, V., Hinrichsen, H., Hirth, M., Tran-Gia, P.: On the computation of entropy production in stationary social networks. Social Network Analysis and Mining. 4, (2014).
Completing their initial phase of rapid growth, social networks are expected to reach a plateau from where on they are in a statistically stationary state. Such stationary conditions may have different dynamical properties. For example, if each message in a network is followed by a reply in the opposite direction, the dynamics is locally balanced. Otherwise, if messages are ignored or forwarded to a different user, one may reach a stationary state with a directed flow of information. To distinguish between the two situations, we propose a quantity called entropy production that was introduced in statistical physics as a measure for non-vanishing probability currents in nonequilibrium stationary states. The proposed quantity closes a gap for characterizing online social networks. As major contribution, we show the relation and difference between entropy production and existing metrics. The comparison shows that computationally intensive metrics like centrality can be approximated by entropy production for typical online social networks. To compute the entropy production from real-world measurements, the need for Bayesian inference and the limits of naïve estimates for those probability currents are shown. As further contribution, a general scheme is presented to measure the entropy production in small-world networks using Bayesian inference. The scheme is then applied to a specific example, the R mailing list.
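The entropy production notion described in this abstract has a standard form for stationary probability currents in statistical physics (Schnakenberg's expression); whether the paper uses exactly this estimator is not stated here, so the sketch below is illustrative. The current matrices for the two-state example are hypothetical:

```python
import math

def entropy_production(flow):
    """Entropy production rate for a matrix of stationary probability
    currents: 0.5 * sum_ij (J_ij - J_ji) * ln(J_ij / J_ji).
    It is zero iff the currents are locally balanced (J_ij == J_ji)."""
    ep = 0.0
    n = len(flow)
    for i in range(n):
        for j in range(n):
            f, b = flow[i][j], flow[j][i]
            if f > 0 and b > 0 and f != b:
                ep += 0.5 * (f - b) * math.log(f / b)
    return ep

# Hypothetical two-user message flows:
balanced = [[0.0, 0.2], [0.2, 0.0]]   # every message answered -> ep = 0
directed = [[0.0, 0.3], [0.1, 0.0]]   # net flow of information -> ep > 0
```

The balanced case corresponds to the abstract's "each message is followed by a reply in the opposite direction", while any asymmetry in the currents yields a strictly positive entropy production.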
38.
Hoßfeld, T., Seufert, M., Sieber, C., Zinner, T., Tran-Gia, P.: Close to Optimum? User-centric Evaluation of Adaptation Logics for HTTP Adaptive Streaming. PIK - Praxis der Informationsverarbeitung und Kommunikation. 37, 275–285 (2014).
HTTP Adaptive Streaming (HAS) is the de facto standard for over-the-top (OTT) video streaming services. It makes it possible to react to fluctuating network conditions on short time scales by adapting the video bit rate in order to avoid stalling of the video playback. With HAS, the video content is split into small segments of a few seconds of playtime each, which are available in different bit rates, i.e., quality level representations. Depending on the current conditions, the adaptation algorithm on the client side chooses the appropriate quality level and downloads the respective segment. This avoids stalling, which is seen as the worst possible disturbance of HTTP video streaming, to the greatest possible extent. Nevertheless, the user-perceived Quality of Experience (QoE) may still be affected, namely by the playback of lower qualities and by switching between different qualities. Therefore, adaptation algorithms are desired which maximize the user's QoE for the currently available network resources. Many downloading strategies have been proposed in the literature, but a solid user-centric comparison of these mechanisms among each other and with the global optimum is missing. The major contributions of this work are as follows. A proper analysis of the influence of quality switches and played-out representations on QoE is conducted by means of subjective user studies. The results suggest that, in order to optimize QoE, first, the quality level of the video stream has to be maximized and, second, the number of quality switches should be minimized. Based on our findings, a QoE optimization problem is formulated and the performance of our proposed algorithm is compared to other algorithms and to the QoE-optimal adaptation.
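The client-side quality selection described above can be sketched with a generic rate-based heuristic of the kind HAS adaptation logics commonly use: pick the highest representation whose bit rate fits the estimated throughput, with a safety margin. This is an illustrative sketch only, not the algorithm proposed or evaluated in the paper; the `margin` parameter is a hypothetical tuning knob:

```python
def select_quality(bitrates, est_throughput, margin=0.9):
    """Generic rate-based HAS heuristic (illustrative): choose the highest
    representation bitrate that fits within a fraction `margin` of the
    estimated throughput; fall back to the lowest quality otherwise."""
    feasible = [b for b in sorted(bitrates) if b <= margin * est_throughput]
    return feasible[-1] if feasible else min(bitrates)

# representations in kbit/s; throughput estimate from recent segment downloads
levels = [500, 1000, 2000, 4000]
```

For example, with an estimated throughput of 2500 kbit/s the heuristic selects the 2000 kbit/s representation, and under heavy congestion it falls back to the lowest level to keep the playback buffer from draining.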
39.
Hoßfeld, T., Keimel, C., Timmerer, C.: Crowdsourcing Quality-of-Experience Assessments. IEEE Computer. 47, 98–102 (2014).
Crowdsourced quality-of-experience (QoE) assessments are more cost-effective and flexible than traditional in-lab evaluations, but require careful test design, innovative incentive mechanisms, and technical expertise to address various implementation challenges.
40.
Hoßfeld, T., Timmerer, C.: Quality of Experience Assessment using Crowdsourcing. IEEE COMSOC MMTC R-Letter. 5, (2014).
A short review of the article 'Best Practices for QoE Crowdtesting: QoE Assessment with Crowdsourcing'. Edited by Tobias Hoßfeld and Christian Timmerer. Online at: http://committees.comsoc.org/mmc/r-letters/MMTC-RLetter-Jun2014.pdf. Reviewed article: Tobias Hoßfeld, Christian Keimel, Matthias Hirth, Bruno Gardlo, Julian Habigt, Klaus Diepold, Phuoc Tran-Gia, 'Best Practices for QoE Crowdtesting: QoE Assessment with Crowdsourcing', IEEE Transactions on Multimedia, vol. 16, no. 2, pp. 541–558, Feb. 2014.
41.
Zinner, T., Hoßfeld, T., Fiedler, M., Liers, F., Volkert, T., Khondoker, R., Schatz, R.: Requirement Driven Prospects for Realizing User-Centric Network Orchestration. Multimedia Tools and Applications. (2014).
The Internet’s infrastructure shows severe limitations when an optimal end user experience for multimedia applications should be achieved in a resource-efficient way. In order to realize truly user-centric networking, an information exchange between applications and networks is required. To this end, network-application interfaces need to be deployed that enable a better mediation of application data through the Internet. For smart multimedia applications and services, the application and the network should directly communicate with each other and exchange information in order to ensure an optimal Quality of Experience (QoE). In this article, we follow a use-case-driven approach towards user-centric network orchestration. We derive user, application, and network requirements for three complementary use cases: HD live TV streaming, video-on-demand streaming, and user authentication with high security and privacy demands, as typically required for paid multimedia services. We provide practical guidelines for achieving an optimal QoE efficiently in the context of these use cases. Based on these results, we demonstrate how to overcome one of the main limitations of today’s Internet by introducing the major steps required for user-centric network orchestration. Finally, we show conceptual prospects for realizing these steps by discussing a possible implementation with an inter-network architecture based on functional blocks.
42.
Hoßfeld, T., Keimel, C., Hirth, M., Gardlo, B., Habigt, J., Diepold, K., Tran-Gia, P.: Best Practices for QoE Crowdtesting: QoE Assessment with Crowdsourcing. IEEE Transactions on Multimedia. 16, (2014).
Quality of Experience (QoE) in multimedia applications is closely linked to the end users’ perception and therefore its assessment requires subjective user studies in order to evaluate the degree of delight or annoyance as experienced by the users. QoE crowdtesting refers to QoE assessment using crowdsourcing, where anonymous test subjects conduct subjective tests remotely in their preferred environment. The advantages of QoE crowdtesting lie not only in the reduced time and costs for the tests, but also in a large and diverse panel of international, geographically distributed users in realistic user settings. However, conceptual and technical challenges emerge due to the remote test settings. Key issues arising from QoE crowdtesting include the reliability of user ratings, the influence of incentives, payment schemes and the unknown environmental context of the tests on the results. In order to counter these issues, strategies and methods need to be developed, included in the test design, and also implemented in the actual test campaign, while statistical methods are required to identify reliable user ratings and to ensure high data quality. This contribution therefore provides a collection of best practices addressing these issues based on our experience gained in a large set of conducted QoE crowdtesting studies. The focus of this article is in particular on the issue of reliability and we use video quality assessment as an example for the proposed best practices, showing that our recommended two-stage QoE crowdtesting design leads to more reliable results.
43.
Hoßfeld, T.: On Training the Crowd for Subjective Quality Studies. VQEG eLetter. 1, (2014).
“On Training the Crowd for Subjective Quality Studies” by Tobias Hoßfeld from the University of Würzburg presents new possibilities for quality evaluation by conducting subjective studies with the crowd of Internet users. The challenges of conducting training sessions for different methods of crowdsourcing are also elaborated in the article. The eLetter is online at the Video Quality Experts Group (VQEG) homepage: http://www.its.bldrdoc.gov/vqeg/eletter.aspx
44.
Schwartz, C., Hoßfeld, T., Lehrieder, F., Tran-Gia, P.: Angry Apps: The Impact of Network Timer Selection on Power Consumption, Signalling Load, and Web QoE. Journal of Computer Networks and Communications. 1–22 (2013).
The popularity of smartphones and mobile applications has experienced a considerable growth during recent years and this growth is expected to continue in the future. Since smartphones have only very limited energy resources, battery efficiency is one of the determining factors for a good user experience. Therefore, some smartphones tear down connections to the mobile network soon after a completed data transmission to reduce the power consumption of their transmission unit. However, frequent connection re-establishments caused by apps which send or receive small amounts of data often lead to a heavy signalling load within the mobile network. One of the major contributions of this article is the investigation of the resulting trade-off between energy consumption at the smartphone and the generated signalling traffic in the mobile network. We explain that this trade-off can be controlled by the connection release timeout and study the impact of this parameter for a number of popular apps that cover a wide range of traffic characteristics in terms of bandwidth requirements and resulting signalling traffic. Finally, we study the impact of the timer settings on QoE for web traffic. This is an important aspect since connection establishments do not only lead to signalling traffic, but they also increase the load time of web pages.
45.
Tran-Gia, P., Hoßfeld, T., Hartmann, M., Hirth, M.: Crowdsourcing and its Impact on Future Internet Usage. it - Information Technology. 55, 139–145 (2013).
Crowdsourcing is an emerging service platform and business model in the Internet. In contrast to outsourcing, where a job is performed by a designated contractor, with Crowdsourcing jobs are outsourced to a large, anonymous crowd of workers, the so-called human cloud. The rise of Crowdsourcing and its seamless integration in current workflows may have a huge impact on the Internet and on society, and will be a guiding paradigm that can shape the evolution of work in the years to come. In this article, we discuss applications and use cases of Crowdsourcing to demonstrate the impact on Internet usage. Novel measurement approaches are presented and the impact of Crowdsourcing on Internet traffic is evaluated by measuring the activity of a particular Crowdsourcing platform. New technical solutions are necessary for the operation of efficient, distributed Crowdsourcing platforms. Special attention is drawn to the integration of machine clouds and human crowds, and appropriate inter-cloud solutions. Finally, we discuss current research challenges from a scientific and from the platform provider’s point of view.
46.
Hoßfeld, T., Hirth, M., Tran-Gia, P.: Crowdsourcing - Modell einer neuen Arbeitswelt im Internet. Informatik Spektrum, Wirtschaftsinformatik & Management. 5, (2013).
The Internet has already produced many highly successful business models. So far, however, most new applications or services have been based on technical innovations, for example faster computers, shorter connection times, or novel algorithmic approaches such as Google's PageRank algorithm. Only with social media such as YouTube or Facebook did users become an integral part of the value chain, without which the "product" does not work. Users have a similarly strong influence on a company's success in business models based on the crowdsourcing paradigm, which is examined in more detail in the following.
47.
Zinner, T., Hoßfeld, T., Tran-Gia, P., Kellerer, W.: Software defined Networks - Das Internet flexibler gestalten und dynamischer steuern. ITG Mitgliederbeilage / VDE dialog (invited article). 6–9 (2013).
The flexibility of today's Internet technology is limited, above all by its rigid architecture and inefficient resource utilization. This could change with the application of Software Defined Networks (SDN), in which the control of networks and data flows is transferred from the individual network components to a central logical entity.
48.
Lehrieder, F., Dán, G., Hoßfeld, T., Oechsner, S., Singeorzan, V.: Caching for BitTorrent-like P2P Systems: A Simple Fluid Model and its Implications. IEEE/ACM Transactions on Networking. 20(4), (2012).
Peer-to-peer file-sharing systems are responsible for a significant share of the traffic between Internet service providers (ISPs) in the Internet. In order to decrease their peer-to-peer-related transit traffic costs, many ISPs have deployed caches for peer-to-peer traffic in recent years. We consider how the different types of peer-to-peer caches—caches already available on the market and caches expected to become available in the future—can possibly affect the amount of inter-ISP traffic. We develop a fluid model that captures the effects of the caches on the system dynamics of peer-to-peer networks and show that caches can have adverse effects on the system dynamics depending on the system parameters. We combine the fluid model with a simple model of inter-ISP traffic and show that the impact of caches cannot be accurately assessed without considering the effects of the caches on the system dynamics. We identify scenarios when caching actually leads to increased transit traffic. Motivated by our findings, we propose a proximity-aware peer-selection mechanism that avoids the increase of the transit traffic and improves the cache efficiency. We support the analytical results by extensive simulations and experiments with real BitTorrent clients.
49.
Hoßfeld, T., Hirth, M., Tran-Gia, P.: Aktuelles Schlagwort: Crowdsourcing. Informatik Spektrum. 35, (2012).
Since the Internet was opened to the general public in the early 1990s, it has developed at a rapid pace. New paradigms such as peer-to-peer (P2P), Web 2.0, and cloud computing have led to novel services and applications that have long been established among users and account for a large share of Internet traffic. Examples include P2P applications such as BitTorrent for exchanging huge amounts of data, Skype for voice and video conferencing, social media such as Facebook or Twitter, and cloud applications such as DropBox as a synchronized network file system for distributed computers, or cloud gaming. Currently, a new buzzword is emerging on the Internet: "crowdsourcing". Some tasks and problems that are relatively easy for humans to solve still cannot be handled algorithmically, even by modern machine clouds. These include text and image recognition; verifying, analyzing, and categorizing video content; creating knowledge; improving and creating products; and scientific research. These are application areas of crowdsourcing. Instead of (or in addition to) machine clouds, the mass of Internet users is integrated into the value chain; one also speaks of human clouds. Alongside social media, crowdsourcing is one of the most important emerging technologies and business models on the Internet, and it will fundamentally change the future of work and its organization. The economic and social importance of crowdsourcing platforms is growing steadily and is fostering new forms of work organization. Jobs on crowdsourcing platforms have a much finer granularity than those in the traditional outsourcing or out-tasking domain.
This article examines the term "crowdsourcing" in more detail: it first introduces important concepts before considering the application areas of crowdsourcing, its relevance in practice, and its future development.
50.
Fiedler, M., Hoßfeld, T., Norros, I., Rodrigues, J., Rogério Pereira, P.: The Network of Excellence Euro-NF and its Specific Joint Research Projects. ICST Global Community Magazine. (2012).
This article presents the European FP7 Network of Excellence “Euro-NF” (Networks of the Future) and reviews its set of activities. Specific attention is paid to the concept of Specific Joint Research Projects (SJRP), a series of small but focused projects, integrating at least three Euro-NF partners and targeting joint seminal work, publications as well as full-size follow-up projects. Further to the description of the SJRP concept, a set of three selected SJRP from different areas are presented in detail with respect to motivation, goal, contents, results, and impact.
51.
Hoßfeld, T., Schatz, R., Varela, M., Timmerer, C.: Challenges of QoE Management for Cloud Applications. IEEE Communications Magazine. April issue, (2012).
Cloud computing is currently gaining enormous momentum due to a number of promised benefits: ease of use in terms of deployment, administration and maintenance, high scalability and flexibility to create new services. However, as more personal and business applications migrate to the Cloud, the service quality will become an important differentiator between providers. In particular, Quality of Experience (QoE) as perceived by users has the potential to become the guiding paradigm for managing quality in the Cloud. In this article, we discuss technical challenges emerging from shifting services to the Cloud, as well as how this shift impacts QoE and QoE management. A particular focus is on multimedia Cloud applications. Together with a novel QoE-based classification scheme of cloud applications, these challenges drive the research agenda on QoE management for Cloud applications.
52.
Hirth, M., Hoßfeld, T., Tran-Gia, P.: Analyzing Costs and Accuracy of Validation Mechanisms for Crowdsourcing Platforms. Mathematical and Computer Modelling. (2012).
Crowdsourcing is becoming more and more important for commercial purposes. With the growth of crowdsourcing platforms like Amazon Mechanical Turk or Microworkers, a huge work force and a large knowledge base can be easily accessed and utilized. But due to the anonymity of the workers, they are encouraged to cheat the employers in order to maximize their income. Thus, in this paper we analyze two widely used crowd-based approaches to validating the submitted work. Both approaches are evaluated with regard to their detection quality, their costs, and their applicability to different types of typical crowdsourcing tasks.
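The cost/accuracy trade-off of crowd-based validation can be illustrated with a simple majority-decision model: if each of n independent validators is correct with probability p, the probability of a correct majority verdict grows with n, while cost grows linearly. This is a generic binomial sketch, not the paper's exact analysis or parameterization:

```python
from math import comb

def majority_vote_accuracy(n, p):
    """Probability that a majority of n independent validators, each
    correct with probability p, yields the right decision; even-n ties
    are broken at random (generic model, illustrative only)."""
    acc = sum(comb(n, k) * p**k * (1 - p)**(n - k)
              for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:  # tie: half the tie probability counts as correct
        acc += 0.5 * comb(n, n // 2) * p**(n // 2) * (1 - p)**(n // 2)
    return acc
```

With p = 0.7, a single validator is right 70% of the time, while three validators already reach 78.4% (0.7^3 + 3 * 0.7^2 * 0.3) at three times the cost, which is exactly the kind of trade-off such an evaluation has to quantify.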
53.
Hoßfeld, T., Liers, F., Schatz, R., Staehle, B., Staehle, D., Volkert, T., Wamser, F.: Quality of Experience Management for YouTube: Clouds, FoG and the AquareYoum. PIK - Praxis der Informationsverarbeitung und Kommunikation (PIK). (2012).
Over the last decade, Quality of Experience (QoE) has become a new, central paradigm for understanding the quality of networks and services. In particular, the concept has attracted the interest of communication network and service providers, since being able to guarantee good QoE to customers provides an opportunity for differentiation. In this paper we investigate the potential as well as the implementation challenges of QoE management in the Internet. Using the YouTube video streaming service as an example, we discuss the different elements that are required for the realization of the paradigm shift towards truly user-centric network orchestration. To this end, we elaborate QoE management requirements for two complementary network scenarios (wireless mesh Internet access networks vs. global Internet delivery) and provide a QoE model for YouTube taking into account impairments like stalling and initial delay. We present two YouTube QoE monitoring approaches operating on the network and the end user level. Finally, we demonstrate how QoE can be dynamically optimized in both network scenarios with two exemplary concepts, AquareYoum and FoG, respectively. Our results show how QoE management can truly improve the user experience while at the same time increasing the efficiency of network resource allocation.
54.
Jarschel, M., Schlosser, D., Scheuring, S., Hoßfeld, T.: Gaming in the clouds: QoE and the users’ perspective. Mathematical and Computer Modelling. (2012).
Cloud Gaming is a new kind of service, which combines the successful concepts of Cloud Computing and Online Gaming. It provides the entire game experience to the users remotely from a data center. The player is no longer dependent on a specific type or quality of gaming hardware, but is able to use common devices. The end device only needs a broadband internet connection and the ability to display High Definition (HD) video. While this may reduce hardware costs for users and increase the revenue for developers by leaving out the retail chain, it also raises new challenges for service quality in terms of bandwidth and latency for the underlying network. In this paper we present the results of a subjective user study we conducted into the user-perceived quality of experience (QoE) in Cloud Gaming. We design a measurement environment that emulates this new type of service, define tests for users to assess the QoE, and derive key influence factors (KFIs) as well as influences of content and perception from our results.
55.
Meier, S., Barisch, M., Kirstädter, A., Schlosser, D., Duelli, M., Jarschel, M., Hoßfeld, T., Hoffmann, K., Hoffmann, M., Kellerer, W., Khan, A., Jurca, D., Kozu, K.: Provisioning and Operation of Virtual Networks. Electronic Communications of the EASST, Kommunikation in Verteilten Systemen 2011. 37, (2011).
In today’s Internet, requirements of services regarding the underlying transport network are very diverse. In the future, this diversity will increase and make it harder to accommodate all services in a single network. A possible approach to keep up with this diversity in future networks is the deployment of isolated, custom tailored networks on top of a single shared physical substrate. The COMCON (COntrol and Monitoring of COexisting Networks) project aims to define a reference architecture for setup, control, and monitoring of virtual networks on a provider- and operator-grade level. In this paper, we present the building blocks and interfaces of our architecture.
56.
Hoßfeld, T., Tran-Gia, P.: EuroView 2010: Visions of Future Generation Networks. Computer Communications Review CCR. Volume 41, Number 2, (2011).
On August 2nd – 3rd, 2010, the EuroView 2010 workshop on 'Visions of Future Generation Networks' was held at the University of Würzburg. The event was sponsored by the European Network of Excellence Euro-NF, the German Information Technology Society ITG, and the International Teletraffic Congress ITC. EuroView 2010 brought together Internet and network technology researchers, network providers, as well as equipment and device manufacturers. In 2010, the focus was on 'Future Internet Design and Experimental Facilities' and on current efforts towards a Future Internet. Special sessions were organized reflecting the latest results of selected testbed expert groups as well as current and future national and international collaborative projects: (1) the German G-Lab project offering a national platform for Future Internet studies, (2) the Future Internet Activities in the European Framework FP7 organized by Max Lemke, and (3) the GENI project in US organized by Aaron Falk. A keynote talk was given by Lawrence Landweber on the challenges and paradigms emerging in the Future (Inter)Network.
57.
Hoßfeld, T., Lehrieder, F., Hock, D., Oechsner, S., Despotovic, Z., Kellerer, W., Michel, M.: Characterization of BitTorrent Swarms and their Distribution in the Internet. Computer Networks. 55(5), 1197–1215 (2011).
The optimization of overlay traffic resulting from applications such as BitTorrent is a challenge addressed by several recent research initiatives. However, the assessment of such optimization techniques and their performance in the real Internet remains difficult. Despite a considerable set of works measuring real-life BitTorrent swarms, several characteristics of those swarms relevant for the optimization of overlay traffic have not yet been investigated. In this work, we address this lack of realistic swarm statistics by presenting our measurement results. In particular, we provide a statistical characterization of the swarm sizes, the distribution of peers over autonomous systems (ASs), the fraction of peers in the largest AS, and the size of the shared files. To this end, we consider different types of shared content and identify particular characteristics of regional swarms. The selection of the presented data is inspired by ongoing discussions in the IETF working group on application layer traffic optimization (ALTO). Our study is intended to provide input for the design and the assessment of ALTO solutions for BitTorrent, but the applicability of the results is not limited to that purpose.
58.
Lehrieder, F., Oechsner, S., Hoßfeld, T., Staehle, D., Despotovic, Z., Kellerer, W., Michel, M.: Mitigating Unfairness in Locality-Aware Peer-to-Peer Networks. International Journal of Network Management (IJNM), Special Issue on Economic Traffic Management. 21(1), (2011).
Locality-awareness is considered as a promising approach to increase the efficiency of content distribution by peer-to-peer (P2P) networks, e.g., BitTorrent. It is intended to reduce the inter-domain traffic which is costly for Internet service providers (ISPs) and to simultaneously increase the performance from the viewpoint of the P2P users, i.e., to shorten download times. This win-win situation should be achieved by a preferred exchange of information between peers which are located closely to each other in the underlying network topology. A set of studies shows that these approaches can lead to a win-win situation under certain conditions, and to a win-no lose situation in most cases. However, the scenarios used mostly assume homogeneous peer distributions. This is not the case in practice according to recent measurement studies. Therefore, we extend previous work in this paper by studying scenarios with real-life, skewed peer distributions. We show that even a win-no lose situation is difficult to achieve under those conditions and that the actual impact for a specific peer heavily depends on the used locality-aware peer selection and the specific scenario. This contradicts the principle of economic traffic management (ETM) which aims for a solution where all involved players benefit and consequently have an incentive to adopt locality-awareness. Therefore, we propose and evaluate refinements of current proposals, achieving that all users of P2P networks can be sure that their application performance is not reduced. This mitigates the unfairness introduced by current proposals which is a key requirement for a broad acceptance of the concept of locality-awareness in the user community of P2P networks.
59.
Ciszkowski, T., Mazurczyk, W., Kotulski, Z., Hoßfeld, T., Fiedler, M., Collange, D.: Towards Quality of Experience-based Reputation Models for Future Web Service Provisioning. Special Issue of the Springer Telecommunication Systems Journal: Future Internet Services and Architectures - Trends and Visions, print available in 2013. 51, 283–295 (2010).
This paper concerns the applicability of reputation systems for assessing Quality of Experience (QoE) for web services in the Future Internet. Reputation systems provide mechanisms to manage subjective opinions in societies and yield a general scoring of a particular behavior. Thus, they are likely to become an important ingredient of the Future Internet. Parameters under evaluation by a reputation system may vary greatly and, particularly, may be chosen to assess the users' satisfaction with (composite) web services. Currently, this satisfaction is usually expressed by QoE, which represents subjective users' opinions. The goal of this paper is to present a novel framework of web services where a reputation system is incorporated for tracking and predicting of users' satisfaction. This approach is a beneficial tool which enables providers to facilitate service adaptation according to users' expectations and maintain QoE at a satisfactory level. The presented reputation systems operate in an environment of composite services that integrates the client and server side. This approach is highly suitable for effectively differentiating QoE and maximizing the user experience for specific customer profiles, even as the service and network resources are shared.
60.
Dán, G., Stamoulis, G.D., Hoßfeld, T., Oechsner, S., Cholda, P., Stankiewicz, R., Papafili, I.: Interaction Patterns between P2P Content Distribution Systems and ISPs. to be published in IEEE Communications Magazine. (2010).
Peer-to-peer (P2P) content distribution systems are a major source of traffic in the Internet, but the application layer protocols they use are mostly unaware of the underlying network in accordance with the layered structure of the Internet’s protocol stack. Nevertheless, the need for improved network efficiency and the business interests of Internet service providers (ISPs) are both strong drivers towards a cross-layer approach in peer-to-peer protocol design, calling for P2P systems that would in some way interact with the ISPs. Recent research shows that the interaction, which can rely on information provided by both parties, can be mutually beneficial. In this paper first we give an overview of the kinds of information that could potentially be exchanged between the P2P systems and the ISPs, and discuss their usefulness and the ease of obtaining and exchanging them. We also present a classification of the possible approaches for interaction based on the level of involvement of the ISPs and the P2P systems, and we discuss the potential strengths and the weaknesses of these approaches.
61.
Fiedler, M., Hoßfeld, T., Tran-Gia, P.: A Generic Quantitative Relationship between Quality of Experience and Quality of Service. IEEE Network Special Issue on Improving QoE for Network Services. (2010).
Quality of Experience (QoE) ties together user perception, experience and expectations to application and network performance, typically expressed by Quality of Service (QoS) parameters. Quantitative relationships between QoE and QoS are required in order to be able to build effective QoE control mechanisms onto measurable QoS parameters. On this background, this paper proposes a generic formula in which QoE and QoS parameters are connected through an exponential relationship, called the IQX hypothesis. The formula relates changes of QoE with respect to QoS to the current level of QoE, is simple to match, and its limit behaviours are straightforward to interpret. The paper validates the IQX hypothesis for streaming services, where QoE in terms of Mean Opinion Scores (MOS) is expressed as functions of loss and reordering ratio, the latter of which is caused by jitter. For web surfing as the second application area, matchings provided by the IQX hypothesis are shown to outperform previously published logarithmic functions. We conclude that the IQX hypothesis is a strong candidate to be taken into account when deriving relationships between QoE and QoS parameters.
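The exponential relationship described above can be written as QoE = alpha * exp(-beta * QoS) + gamma, whose defining property is that the change of QoE with respect to a QoS impairment is proportional to the current QoE level above its floor gamma. The following sketch illustrates this; the parameter values are hypothetical and chosen only for illustration, not taken from the paper's matchings:

```python
import numpy as np

def iqx(x, alpha, beta, gamma):
    """IQX-style exponential mapping: QoE = alpha * exp(-beta * x) + gamma,
    where x is a QoS impairment (e.g. packet loss ratio in percent)."""
    return alpha * np.exp(-beta * x) + gamma

# illustrative parameters: MOS starts at 5 and decays toward a floor of 1
alpha, beta, gamma = 4.0, 0.5, 1.0
loss = np.linspace(0.0, 10.0, 101)   # hypothetical impairment axis
mos = iqx(loss, alpha, beta, gamma)

# defining property of the hypothesis: dQoE/dQoS = -beta * (QoE - gamma),
# i.e. the sensitivity is proportional to the current QoE level
dmos = np.gradient(mos, loss)
```

The numerical gradient closely tracks -beta * (mos - gamma) on the interior of the range, which is exactly the proportionality the abstract refers to.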
62.
Chan Hung, N., Hoßfeld, T., Ngo Hoang, G., Thanh Vinh, V., Manh Thang, N.: Challenges in the Development of Mobile P2P Applications and Services. Vietnam Journal on Information and Communication Technologies. E-1, Number 2 (6), (2009).
The recent trends of decentralizing enterprise applications toward the new peer-to-peer (P2P) architecture and the fast growth of wireless communication lead to a new tendency of combining these technologies to inherit their great advantages of mobility, reliability, flexibility and scalability. However, this technological integration raises a large number of new challenges and issues to be addressed. This paper focuses on the challenges of the development of enterprise mobile applications and services based on P2P architecture. Using selected results from related projects, we present and analyze these important issues and also propose an exemplary solution to address some of these issues.
63.
Schlosser, D., Hoßfeld, T.: Mastering Selfishness and Heterogeneity in Mobile P2P Content Distribution Networks with Multiple Source Download in Cellular Networks. Peer-to-Peer Networking and Applications, Special Issue on Mobile P2P Networking and Computing. (2009).
The performance of Peer-to-Peer (P2P) content distribution networks depends highly on the coordination of the peers. This is especially true for cellular networks with mobile and often selfish users, as the resource constraints on accessible bandwidth and battery power are even more limiting in this context. Thus, it is a major challenge to identify mobile network specific problems and to develop sophisticated cooperation strategies to overcome these difficulties. Cooperation strategies which are able to cope with these problems are the foundation for efficient mobile file exchange. The detailed performance of the strategies is determined by the peer capabilities and the peer behavior, such as the number of parallel upload connections, the selfishness, or the altruistic re-distribution of data. The purpose of this work is to evaluate and investigate different cooperation strategies which are based on multiple source download and to select the best one for mobile scenarios, even with leeching peers, i.e., peers which depart as soon as they have finished their download. The question arises whether the cooperation strategy can mitigate the overall performance degradation caused by selfish peer behavior. As performance indicators, the efficiency, fairness, and robustness of the cooperation strategies are applied. The considered scenarios comprise best-case (altruistic peers) and worst-case scenarios (selfish peers). We further propose a new cooperation strategy to improve the file transfer even when mainly selfish peers are present, the CycPriM (cyclic priority masking) strategy. The strategy allows an efficient P2P based content distribution using ordered chunk delivery with only local information available at a peer.
64.
Tran-Gia, P., Hoßfeld, T., Menth, M., Pries, R.: Emerging Issues in Current Future Internet Design. e&i Elektrotechnik und Informationstechnik, Special Issue ’Future Internet’, ISSN: 0932-383X (print), ISSN: 1613-7620 (online). 07/08, (2009).
From its inception, the Internet was not intended as a worldwide universal communication platform. It developed over almost four decades to its current state. As a result of this unplanned evolution, we currently witness scalability problems, increased complexity, and missing modularity as well as missing flexibility for emerging services. In this report we focus on two selected issues: i) the changing routing paradigm and ii) edge-based intelligence. We then present a variety of projects on the Future Internet and finally assess recently established experimental facilities and their role in Future Internet design.
65.
Binzenhöfer, A., Hoßfeld, T., Kunzmann, G., Eger, K.: Efficient Simulation of Large-Scale P2P Networks. International Journal of Computational Science and Engineering (IJCSE): Special Issue on Parallel, Distributed and Network-Based Processing. (2008).
66.
Hoßfeld, T., Binzenhöfer, A.: Analysis of Skype VoIP Traffic in UMTS: End-to-End QoS and QoE Measurements. Computer Networks. 52, 650–666 (2008).
In the future Internet, multi-network services will follow a new paradigm in which the intelligence of the network control is gradually moved to the edge of the network. This impacts both the objective Quality of Service (QoS) of the end-to-end connection as well as the subjective Quality of Experience (QoE) as perceived by the end user. Skype already offers such a multi-network Voice-over-IP (VoIP) telephony service today. Due to its ease of use and high sound quality, it has become increasingly popular in the wired Internet. UMTS operators promise to offer large data rates which should suffice to support VoIP calls in a mobile environment. However, the success of such applications strongly depends on the corresponding QoE. In this work, we analyze the theoretically achievable as well as the actually achieved quality of IP-based voice calls using Skype. This is done by performing measurements in both a real UMTS network and a testbed environment. The latter is used to emulate rate control mechanisms and changing system conditions of UMTS networks. The results show to what extent Skype over UMTS is able to keep pace with existing mobile telephony systems and how it reacts to different network characteristics. The investigated performance measures comprise the QoE in terms of the MOS value and the QoS in terms of network-based factors like throughput, packet interarrival times, or packet loss.
67.
Hoßfeld, T., Leibnitz, K., Remiche, M.-A.: Modeling of an Online TV Recording System. SIGMETRICS Performance Evaluation Review. 35, 2 (2007).
Recently, new services have emerged which utilize the Internet as a delivery mechanism for multimedia content. With the advent of broadband accesses, more users are willing to download large-volume content from servers, such as video files of TV shows. While some popular video services (e.g. YouTube.com) or some broadcasting companies (e.g. ABC.com) use streaming data with Flash technology, some media distributors (e.g. iTunes) offer entire TV shows for download. In this study, we investigate the performance of the German site OnlineTVRecorder.com (OTR), which acts as an online video cassette recorder (VCR) where users can program their favorite shows over a web interface and download the recorded files from a server or its mirrors. These files are offered in different file formats and can consist of several hundred megabytes up to 1 GB or more, depending on the length of the TV show as well as the encoding format. OTR can thus be seen as an example of a server-based content distribution system with large data files. However, as these server farms are often overloaded, new requests are queued when the provided download slots are full. The restriction to a maximum number of simultaneous downloads guarantees a minimal download bandwidth for each user. Additionally, the service offers premium users prioritized access to downloading. The download duration itself depends on the total capacity of the server and the number of users currently sharing this capacity. On the other hand, users who encounter slow downloads may abort their downloading attempt if their patience is exceeded. In this paper, we discuss analytical modeling approaches which consider the impact of the users' impatience on the performance of an OTR server with different file size distributions. The paper is organized as follows. After describing the problem and related work, we formulate simple analytical models and compare their performance in terms of download duration and success ratio. In particular, we address the question of how to properly dimension the number of simultaneous downloads at a server in order to optimize the performance of the system and to maximize user satisfaction.