For cloud enterprise customers that require services on demand, data centers must allocate and partition data center resources in a dynamic fashion. We consider the problem in which a request from an enterprise customer is mapped to a virtual network (VN) that requires both bandwidth and compute resources and connects an entry point of a data center to one or more servers, where the data center itself is selected from multiple geographically distributed data centers. We present a dynamic traffic engineering framework, for which we develop an optimization model based on a mixed-integer linear programming (MILP) formulation that a data center operator can use at each review point to optimally assign VN customers. Through a series of studies, we then present results on how different VN customers are treated in terms of request acceptance when each VN class has a different resource requirement. We found that a VN class with a low resource requirement experiences low blocking even under heavy traffic, while a VN class with a high resource requirement faces high service denial. On the other hand, the cost for the VN class with the highest resource requirement is not always the highest under heavy traffic, because of the significantly higher service denial this class faces.
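To make the assignment problem concrete, the following is a minimal sketch of how an operator might formulate a review-point decision as a MILP using the PuLP modeling library. This is not the paper's actual formulation: the request names, capacities, revenues, and the aggregation of each data center's resources into a single bandwidth and compute pool are all hypothetical simplifications (the paper's model additionally involves entry points, paths to servers, and per-class costs).

```python
# Illustrative sketch only, assuming hypothetical requests and capacities;
# the paper's MILP is richer (entry points, server-level mapping, VN classes).
import pulp

# Hypothetical VN requests: bandwidth demand, compute demand, revenue if accepted
requests = {
    "vn1": {"bw": 2, "cpu": 4, "revenue": 10},
    "vn2": {"bw": 8, "cpu": 16, "revenue": 30},
    "vn3": {"bw": 1, "cpu": 2, "revenue": 5},
}
# Hypothetical geographically distributed data centers with residual capacity
datacenters = {
    "dc1": {"bw": 10, "cpu": 20},
    "dc2": {"bw": 6, "cpu": 12},
}

prob = pulp.LpProblem("vn_assignment", pulp.LpMaximize)

# x[v, d] = 1 if VN request v is mapped to data center d, 0 otherwise
x = {
    (v, d): pulp.LpVariable(f"x_{v}_{d}", cat="Binary")
    for v in requests
    for d in datacenters
}

# Objective: maximize revenue of accepted requests (an unassigned request is blocked)
prob += pulp.lpSum(
    requests[v]["revenue"] * x[v, d] for v in requests for d in datacenters
)

# Each request is assigned to at most one data center
for v in requests:
    prob += pulp.lpSum(x[v, d] for d in datacenters) <= 1

# Bandwidth and compute capacity constraints at each data center
for d in datacenters:
    prob += pulp.lpSum(requests[v]["bw"] * x[v, d] for v in requests) <= datacenters[d]["bw"]
    prob += pulp.lpSum(requests[v]["cpu"] * x[v, d] for v in requests) <= datacenters[d]["cpu"]

prob.solve(pulp.PULP_CBC_CMD(msg=False))

for (v, d), var in x.items():
    if var.value() == 1:
        print(f"{v} -> {d}")
blocked = [v for v in requests if all(x[v, d].value() != 1 for d in datacenters)]
print("blocked:", blocked)
```

In a sketch like this, the blocking behavior discussed in the abstract emerges naturally: when capacity is tight, the solver tends to accept the small requests and leave the large ones unassigned.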
Published 2016. DOI: 10.1109/itc-28.2016.111. Licence: CC BY-NC-SA.