Automated Machine Learning for Future Networks including 5G

By Shagufta Henna and Alan Davy

September 2019

It is possible to select a set of candidate machine learning (ML) models based on 5G use-case requirements and the characteristics of the ML models; however, it is extremely difficult to predict the best model at the outset. This work proposes an automated ML framework called automated 5G machine learning (Auto5GML), which can be integrated with the unified ML architecture proposed by the International Telecommunication Union (ITU). From a search space of potential ML models, the Auto5GML framework selects the best model for a given 5G use-case. It evaluates the candidate models by feeding them data and running them in parallel using data parallelism or model parallelism. The proposed framework can optimize learning performance under strict use-case requirements.

1. 5G Use-case to ML Model

The first 5G standard, defined under 3GPP Release 15, specifies the 5G radio requirements for mobile communications. Popular scenarios of 5G under this release include enhanced Mobile Broadband (eMBB), massive Machine Type Communications (mMTC), and Ultra-Reliable and Low Latency Communications (URLLC)1. These scenarios demand efficient management of network resources with improved quality-of-service guarantees. The challenges become more prominent with the new features of future networks, including multi-RAT, multi-service, and multi-connectivity scenarios, under which existing radio access techniques are inefficient, unreliable, or impractical. Future-generation networks may exhibit high heterogeneity and diversity in terms of multiple radio access technologies, multiple connectivity options, and multiple types of services. These services require robust ML techniques that can address diverse quality-of-service requirements despite the added computational complexity.

Selecting an ML algorithm from the pool of available learning mechanisms is a challenging task that requires an understanding of the problem. No single solution fits all problem domains; rather, several critical factors affect the choice of an ML model. Some problems in a 5G network are very specific and require a tailored ML algorithm to solve them. Most 5G problems, however, are open-ended and call for a trial-and-error approach. Such problems may be addressed using supervised learning2 techniques such as classification or regression to build generic predictive models. Given a 5G problem domain, selecting an appropriate ML solution requires consideration of several factors, which are briefly discussed below:

1.1 5G Problem Domain Understanding

The type of data available for a 5G problem domain plays a key role in the selection of an ML algorithm3. Some ML solutions work well with small datasets, while others require massive amounts of data. Some are sensitive to the statistical properties of the data yet tolerant of missing values; others are sensitive to outliers and may produce poor predictions when they occur. Depending on the problem domain, different ML models rely on different feature engineering approaches, which can help reduce data redundancy and dimensionality, capture relationships between variables, and rescale them.
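One of the rescaling steps mentioned above can be sketched in a few lines. The following is a minimal, illustrative standardization of a single feature column (the round-trip-time values are made-up numbers, not measurements from any 5G dataset):

```python
import statistics

def rescale(column):
    """Standardize one feature column to zero mean and unit variance."""
    mean = statistics.fmean(column)
    std = statistics.pstdev(column)
    # A constant column carries no information; map it to zeros.
    return [(x - mean) / std for x in column] if std else [0.0] * len(column)

# Toy feature: observed round-trip times in milliseconds.
rtt_ms = [12.0, 15.0, 11.0, 30.0, 14.0]
scaled = rescale(rtt_ms)
print(round(statistics.fmean(scaled), 6))
```

After rescaling, the feature has mean zero and unit variance, which many ML models implicitly assume of their inputs.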

Depending on the input data from a 5G domain, an ML problem can be classified as a supervised, unsupervised, or reinforcement learning problem4. Labelled data calls for a supervised learning solution, whereas unlabelled data suggests an unsupervised one. If an objective function must be optimized through interaction with the environment, the problem is a candidate for reinforcement learning.
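This rule of thumb (a simplification for illustration only, not a component of the ITU architecture) can be expressed as a small decision function:

```python
def classify_ml_problem(has_labels: bool, interacts_with_env: bool) -> str:
    """Map rough characteristics of a 5G dataset to a learning paradigm.

    has_labels: whether the collected network data carries target labels.
    interacts_with_env: whether the objective must be optimized through
    ongoing interaction with the network environment.
    """
    if interacts_with_env:
        return "reinforcement"   # e.g. dynamic resource allocation
    if has_labels:
        return "supervised"      # e.g. traffic classification
    return "unsupervised"        # e.g. anomaly detection via clustering

print(classify_ml_problem(has_labels=True, interacts_with_env=False))  # supervised
```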

Additionally, the constraints imposed by 5G use-cases, including storage capacity, processing limitations, and strict application requirements, may also determine the appropriate ML model. For real-time 5G use-cases, fast prediction is desirable: in autonomous driving, for example, road signs and signals must be interpreted in real time to avoid accidents. Some 5G use-cases also require rapidly updating the deployed ML model on the fly with fresh data in an online manner.
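Constraint-driven pruning of the candidate set can be sketched as a simple filter. The model names and per-inference cost figures below are hypothetical placeholders, not benchmarks:

```python
# Hypothetical candidate models with rough per-inference latency and
# memory footprints; the numbers are illustrative, not measured.
CANDIDATES = {
    "linear_model":  {"latency_ms": 0.1,  "memory_mb": 1},
    "random_forest": {"latency_ms": 2.0,  "memory_mb": 50},
    "deep_cnn":      {"latency_ms": 15.0, "memory_mb": 500},
}

def filter_by_constraints(candidates, max_latency_ms, max_memory_mb):
    """Keep only models that satisfy the use-case's hard constraints."""
    return [name for name, cost in candidates.items()
            if cost["latency_ms"] <= max_latency_ms
            and cost["memory_mb"] <= max_memory_mb]

# A URLLC-style use-case with a tight inference budget:
print(filter_by_constraints(CANDIDATES, max_latency_ms=1.0, max_memory_mb=10))
# → ['linear_model']
```

Only the models surviving this filter need to be trained and evaluated, which is the motivation for restricting the search space before optimization begins.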

2. Auto5GML Framework

This section discusses the Auto5GML framework in detail.

2.1 Auto5GML from the learning perspective

From an operator's perspective, Auto5GML can be a powerful learning tool that provides good generalization performance for a given network task. Auto5GML emphasizes adapting a good learning tool to 5G problems to reach optimal solutions in an automated manner, as illustrated in Figure 1.

Figure 1: Auto5GML from the operator's perspective

2.2 Auto5GML from automation perspective

Auto5GML aims to automate the operations underlying the building blocks of the ML pipeline. In pursuit of better predictions, the configuration of the ML pipeline tools needs to be adapted to the specified use-case, a task that is currently performed manually. As illustrated in Figure 2, Auto5GML is equipped with a high-level controller that finds optimal configurations of the ML pipeline without human intervention. The pipeline is capable of automating the choice of ML model, algorithms, and feature engineering; however, the proposed framework considers the model search space only.

Figure 2:  Automation perspective of Auto5GML

3. The Auto5GML Controller

The Auto5GML controller is at the core of the framework and is illustrated in Figure 3, together with the interactions of its components, each of which contributes to automating model selection for a 5G use-case. The controller selects the best ML model from a potential set of models based on the use-case requirements. The ML model search space can be specified from the performance of ML models on similar use-cases in the past. Over this search space, Auto5GML applies ML optimization techniques to suggest a configuration proposal to try on a selected model; this module can also draw on past successful configurations. The selected model, along with the proposed configuration, is trained and evaluated on a user dataset, which can also be obtained through simulation. Depending on the constraints, a set of configurations suggested by the ML optimization techniques can be tried, and these optimizations are handled by Auto5GML in a user-oblivious manner.

One of the important modules of Auto5GML is training optimization. Depending on the resource constraints and use-case requirements, this module can optimize the training process in terms of time, storage, and processing. To speed up the training of several ML techniques, Auto5GML can exploit model parallelism and data parallelism.
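The select–configure–train–evaluate loop described above can be sketched as follows. This is a minimal illustration, not the framework's implementation: the search space, the configuration proposer, and the evaluator are toy stand-ins for the modules named in the text.

```python
def auto5gml_controller(search_space, propose_config, train_and_evaluate):
    """Sketch of the controller loop: for each model in the search space,
    ask the ML-optimization module for a configuration, train and evaluate
    it in the sandbox, and keep the best (model, config, score) triple."""
    best = (None, None, float("-inf"))
    for model in search_space:
        config = propose_config(model)             # ML optimization module
        score = train_and_evaluate(model, config)  # sandbox training
        if score > best[2]:
            best = (model, config, score)
    return best

# Toy stand-ins: three candidate models and their hypothetical accuracies.
space = ["linear", "tree", "cnn"]
accuracy = {"linear": 0.7, "tree": 0.8, "cnn": 0.9}
best = auto5gml_controller(space, lambda m: {"lr": 0.01},
                           lambda m, c: accuracy[m])
print(best[0])  # cnn
```

In practice the evaluator would be expensive, which is why the text later replaces exhaustive evaluation with bandit-style pruning.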

Model selection or reselection may require training and evaluating hundreds of models before actual deployment. Auto5GML therefore assumes the provision of a sandbox with the necessary hardware, accelerators, software, data tools, interfaces, and applications.

Figure 3:  Basic Framework for Auto5GML

Figure 4: Auto5GML Bandit Arm-based Training Optimization

Although Auto5GML draws its search space from the ML models that performed best on similar 5G use-cases in the past, only a fraction of these models are of high quality. The Auto5GML training optimization module can be replaced with bandit arm-based training5,6, which pre-emptively prunes a model without training it fully whenever a better ML model is already available; a model is trained fully only if it shows promise of converging. In addition to using training resources efficiently, this optimization can return results quickly. This instance of Auto5GML employs a recent hyperparameter tuning approach known as Tree-structured Parzen Estimators, as implemented in the Hyperopt library7. Hyperopt begins with a random search and then probabilistically concentrates its sampling on the more promising regions of the search space.
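The pruning idea can be illustrated with a successive-halving-style sketch, a well-known simplification of bandit-based training: every surviving model gets a growing training budget each round, and the weaker half is discarded before it is ever trained to convergence. The learning curves below are invented for illustration.

```python
import math

def bandit_prune(models, partial_train, rounds=3, keep_frac=0.5):
    """Successive-halving-style pruning: each round, train all surviving
    models up to the current budget, rank them, and discard the weaker
    half, so only promising models receive a large training budget."""
    survivors = list(models)
    budget = 1
    for _ in range(rounds):
        scored = sorted(survivors,
                        key=lambda m: partial_train(m, budget),
                        reverse=True)
        survivors = scored[:max(1, math.ceil(len(scored) * keep_frac))]
        budget *= 2  # surviving models earn a larger budget next round
    return survivors[0]

# Toy learning curves: validation score after `b` units of training.
curves = {"A": lambda b: 0.5 + 0.05 * b,
          "B": lambda b: 0.4 + 0.10 * b,
          "C": lambda b: 0.6 + 0.01 * b}
best = bandit_prune(curves, lambda m, b: curves[m](b))
print(best)  # C
```

Note that model B, a slow starter that would eventually overtake C, is pruned in the first round; this early-stopping risk is the price of the resource savings.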

The use of Auto5GML can be considered for the automation of various network functions based on application requirements and constraints, including network operation and management, fault detection and recovery, mobility management, and service orchestration. One of the major characteristics of 5G is service provisioning through network slicing. Auto5GML can dynamically optimize the allocation of network resources to users8 and can accelerate service provisioning by automatically selecting the ML model that best meets the application requirements and constraints. It can shorten the time for on-demand network slice allocation, enabling faster Internet of Things (IoT) services, and it can provide efficient ML mechanisms that allow network slicing to adapt to dynamic network environments across a variety of scenarios, including changing workloads, customer bases, and connectivity.

3.1 An Instance of Auto5GML Framework with Meta Learning

Meta-learning9 can speed up the learning process for similar 5G use-cases10. It focuses on learning at the meta-level, using experience gained on similar tasks, and can therefore better capture the characteristics of similar 5G use-cases. Rather than starting the learning process from scratch for a new 5G use-case, meta-learning can exploit the performance of learning models on previous similar use-cases and recommend the model most likely to maximize the utility function of the new use-case. Based on past meta-samples, the meta-learning module exploits the similarity of 5G use-case requirements and uses the successful configurations of the retrieved meta-samples for training. From this meta-information, the module ranks the models used for similar 5G use-cases and trains the highest-ranked model for prediction.
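The retrieve-and-rank step can be sketched as below. The similarity measure (a toy inverse Euclidean distance over shared requirement keys), the meta-sample format, and all feature values are illustrative assumptions, not the framework's actual meta-representation.

```python
def recommend_model(new_usecase, meta_samples, top_k=1):
    """Rank models by their past performance on similar use-cases.

    new_usecase: dict of requirement features for the new 5G use-case.
    meta_samples: list of (usecase_features, model_name, achieved_score)
    records from past deployments. Each past score is weighted by the
    similarity between the past use-case and the new one.
    """
    def similarity(a, b):
        keys = a.keys() & b.keys()
        dist = sum((a[k] - b[k]) ** 2 for k in keys) ** 0.5
        return 1.0 / (1.0 + dist)  # 1.0 for identical requirement vectors

    scores = {}
    for features, model, achieved in meta_samples:
        w = similarity(new_usecase, features)
        scores[model] = scores.get(model, 0.0) + w * achieved
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]

# Hypothetical meta-samples: (requirements, model, achieved score).
meta = [({"latency": 1.0, "load": 0.2}, "cnn", 0.9),
        ({"latency": 1.0, "load": 0.3}, "linear", 0.6),
        ({"latency": 9.0, "load": 0.8}, "linear", 0.95)]
print(recommend_model({"latency": 1.0, "load": 0.25}, meta))  # ['cnn']
```

The "cnn" record dominates because it comes from the most similar past use-case; the high-scoring "linear" record is discounted for belonging to a dissimilar one.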

4. Conclusion and Future Directions

This article introduces a highly automated framework that makes ML adaptable to 5G use-cases. The core of Auto5GML is its ML optimization and training optimization processes. On one hand, the ML optimization process selects an optimized ML configuration by considering past successful configurations; on the other, the optimized training process accelerates training according to the use-case requirements. Auto5GML can select a high-quality model that best meets the specific use-case requirements defined in the intent. In the future, Auto5GML could be extended into multi-stage Auto5GML pipelines, in which input data is transformed before being fed into the system for a more efficient model search.

Figure 5: Auto5GML Meta-Learning

References

  1. ITU-R, “Minimum Requirements Related to Technical Performance for IMT-2020 Radio Interface(s),” Report ITU-R M.2410-0, Nov. 2017.
  2. G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  3. I. F. Akyildiz, S. Nie, S.-C. Lin, and M. Chandrasekaran, “5G roadmap: 10 key enabling technologies,” Computer Networks, vol. 106, pp. 17–48, Sept. 2016.
  4. Y. He, Z. Zhang, F. R. Yu, et al., “Deep reinforcement learning-based optimization for cache-enabled opportunistic interference alignment wireless networks,” IEEE Transactions on Vehicular Technology, 2018.
  5. E. Kaufmann, O. Cappé, and A. Garivier, “On the complexity of best-arm identification in multi-armed bandit models,” Journal of Machine Learning Research, vol. 17, no. 1, pp. 1–42, 2016.
  6. S. Boldrini, L. De Nardis, G. Caso, M. T. P. Le, J. Fiorina, and M.-G. Di Benedetto, “muMAB: A multi-armed bandit model for wireless network selection,” Algorithms, 2018.
  7. J. Bergstra and Y. Bengio, “Random search for hyper-parameter optimization,” Journal of Machine Learning Research, vol. 13, pp. 281–305, 2012.
  8. X. F. Tao, Y. Han, X. D. Xu, et al., “Recent advances and future challenges for mobile network virtualization,” Science China Information Sciences, vol. 60, 040301, 2017.
  9. C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” arXiv:1703.03400, 2017.
  10. C. Lemke, M. Budka, and B. Gabrys, “Metalearning: a survey of trends and technologies,” Artificial Intelligence Review, vol. 44, no. 1, pp. 117–130, 2015.


Shagufta Henna is an Assistant Lecturer in Computing at the Letterkenny Institute of Technology, Co. Donegal, Ireland. She was previously a postdoctoral researcher with the Telecommunication Software and Systems Group (TSSG), Waterford Institute of Technology, Waterford, Ireland. She received her doctoral degree in Computer Science from the University of Leicester, UK, in 2013. She is an Associate Editor for IEEE Access, the EURASIP Journal on Wireless Communications and Networking, IEEE Future Directions, and Human-centric Computing and Information Sciences (Springer). Her current research interests include deep learning, edge intelligence, data analytics, network security, machine learning for 5G and beyond, and intent-based networking.

Alan Davy was awarded a BSc (with honours) in Applied Computing and a PhD from Waterford Institute of Technology, Waterford, Ireland, in 2002 and 2008, respectively. He has worked at TSSG since 2002, originally as a student, and became a postdoctoral researcher in 2008. In 2010 he worked at IIT Madras, India, as an assistant professor lecturing in network management systems. He received a Marie Curie International Mobility Fellowship in 2010, which brought him to the Universitat Politècnica de Catalunya for two years. He is now an adjunct Senior Research Fellow at TSSG and Head of the Department of Computer Science at the Waterford Institute of Technology.

Editor:

Kathiravan Srinivasan received his Ph.D. in Information and Communication Engineering from Anna University, Chennai, India. He also received his M.E. in Communication Systems Engineering and B.E. in Electronics and Communication Engineering from Anna University, Chennai, India. He has around 15 years of research experience in the area of machine learning and its applications. He is presently working as an Associate Professor in the School of Information Technology and Engineering at Vellore Institute of Technology (VIT), India. He previously worked as a faculty member in the Department of Computer Science and Information Engineering, and as Deputy Director of the Office of International Affairs, at National Ilan University, Taiwan. He won the Best Conference Paper Award at the 2018 IEEE International Conference on Applied System Innovation, Chiba, Japan, April 13-17, 2018.

Further, he has received the Best Service Award from the Department of Computer Science & Information Engineering, National Ilan University, Taiwan. In 2017, he won the Best Paper Award at the 2017 IEEE International Conference on Applied System Innovation, Sapporo, Japan, May 13-17, 2017, and the Best Paper Award at the International Conference on Communication, Management and Information Technology (ICCMIT 2017), Warsaw, Poland. In 2016, he received the Best Service Award as Deputy Director of the Office of International Affairs, National Ilan University. He presently serves as an Editor of the KSII Transactions on Internet and Information Systems (TIIS), an Associate Editor for IEEE Access and the Journal of Internet Technology, and an editorial board member and reviewer for various IEEE Transactions and SCI, SCIE, and Scopus-indexed journals. He has played an active role in organizing several international conferences, seminars, and lectures, and has been a keynote speaker at many international conferences and IEEE events.