Title: | Markov-Decision-Based Call Admission Control with State Information of Adjacent Cells in Road-based Cellular Networks |
Authors: | Hsien-Chung Tsao; Wei-kuo Liao; Institute of Communication Engineering |
Keywords: | Call Admission Control; Cellular Networks; Markov Decision |
Issue Date: | 2007 |
Abstract: | We study the admission control problem in cellular networks with Fixed Channel Allocation (FA), taking the state information of neighboring cells into consideration. For each cell, under certain assumptions, we model the system as a Markov chain with two-dimensional states, where the first dimension represents the base cell's 50 states and the second dimension represents the adjacent cells' 300 states. As a result, the model becomes a two-dimensional Markov chain with 15,000 states in total. The problem of minimizing a linear objective function of the new-call blocking probability and the handoff-dropping probability can then be formulated as a Markov Decision Process (MDP). However, the enormous number of states makes inverting the transition probability matrix (of size 2.25×10^8) computationally prohibitive, which complicates applying the policy-iteration method of the MDP framework to our problem. To overcome this, we use a state aggregation method that groups together states reachable from one another within a few steps. After doing so, our model involves only 66 states in total and is solvable by the policy-iteration method. Finally, we show that our policy can be derived easily and achieves a lower average cost than the well-known Guard Channel policy. However, we find that the Borrowing with Directional Channel Locking (BDCL) strategy achieves a better call failure rate than the FA strategy. Therefore, we modify the above MDP model by adding a One-Step policy to it in order to obtain better performance. |
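The policy-iteration workflow the abstract relies on can be illustrated on a much smaller admission-control MDP. The sketch below is not the thesis's 15,000-state two-dimensional model: it uses a single cell with a handful of channels, two traffic classes (new calls and handoffs), and assumed rates, costs, and discount factor, solved by exact policy evaluation followed by greedy policy improvement.

```python
import numpy as np

# Illustrative sketch only: policy iteration on a toy single-cell
# admission-control MDP. This is NOT the thesis's 15,000-state model;
# every parameter below is an assumption made for the example.
C = 6                        # channels in the cell (assumed)
lam_n, lam_h = 0.5, 0.3      # new-call / handoff arrival rates (assumed)
mu = 1.0                     # per-call service rate (assumed)
w_n, w_h = 1.0, 10.0         # cost of a blocked new call / dropped handoff
gamma = 0.95                 # discount factor per uniformized step
Lam = lam_n + lam_h + C * mu # uniformization rate

def step(s, admit_new):
    """(prob, next_state, cost) triples from occupancy s under uniformization."""
    out = []
    if admit_new and s < C:
        out.append((lam_n / Lam, s + 1, 0.0))    # new call admitted
    else:
        out.append((lam_n / Lam, s, w_n))        # new call blocked
    if s < C:
        out.append((lam_h / Lam, s + 1, 0.0))    # handoff admitted if room
    else:
        out.append((lam_h / Lam, s, w_h))        # handoff dropped
    out.append((s * mu / Lam, max(s - 1, 0), 0.0))  # a call completes
    out.append(((C - s) * mu / Lam, s, 0.0))        # fictitious self-loop
    return out

def evaluate(pi):
    """Exact policy evaluation: solve V = c_pi + gamma * P_pi V."""
    P = np.zeros((C + 1, C + 1))
    c = np.zeros(C + 1)
    for s in range(C + 1):
        for p, s2, cost in step(s, pi[s]):
            P[s, s2] += p
            c[s] += p * cost
    return np.linalg.solve(np.eye(C + 1) - gamma * P, c)

def improve(V):
    """Greedy one-step improvement against the current value function."""
    pi = []
    for s in range(C + 1):
        q = [sum(p * (cost + gamma * V[s2]) for p, s2, cost in step(s, a))
             for a in (0, 1)]                # 0 = reject new calls, 1 = admit
        pi.append(int(np.argmin(q)))
    return pi

pi = [1] * (C + 1)           # start from "admit everywhere"
for _ in range(100):         # policy iteration converges in finitely many steps
    V = evaluate(pi)
    new_pi = improve(V)
    if new_pi == pi:
        break
    pi = new_pi
print("admission decision per occupancy level:", pi)
```

Whether the converged policy actually reserves channels for handoffs (as the Guard Channel policy does by design) depends on the assumed rates and weights; the point of the sketch is only the evaluate/improve mechanics that the thesis applies after shrinking the state space by aggregation.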
URI: | http://140.113.39.130/cdrfb3/record/nctu/#GT009213615 http://hdl.handle.net/11536/70557 |
Appears in Collections: | Thesis |
Files in This Item: