Use of deep learning in 5G networks through pictorial representations of time series
Date
2022-01-23
Type
Undergraduate thesis
Abstract
New data consumption patterns, mostly not centered on human behavior, are beginning to exhaust the transmission resources of the current mobile network infrastructure. Because of this, 5G networks have started to be standardized and deployed around the world, adopting high-frequency millimeter-wave radio links capable of meeting current data transfer demands. However, current 5G networks are managed reactively: network parameters are computed by user devices and periodically sent to the base stations, which introduces delays and slowdowns that can prevent the network from meeting its timing requirements. Thus, beyond-5G networks are being developed with the goal of providing smarter systems through machine learning methods that improve decision-making and resource allocation. Building on beyond-5G networks, this work proposes a deep learning framework to predict future values of a network quality parameter, the Channel Quality Indicator (CQI), from a history of past values. This forecasting system is expected to run on cloud infrastructure that proactively sends predictions to the base stations, allowing them to choose their future course of action to maintain the quality of the communication channels with the user equipment. The literature contains some proposals for forecasting future CQI values, but most of them investigate computationally expensive deep learning models in simulated environments or, when in real environments, on earlier mobile network generations, such as 4G.
Given these limitations, this work compares different deep learning techniques on a real dataset (transmission logs) obtained from the perspective of a user equipment attached to a 5G operator in Ireland. We used LSTM recurrent networks, traditional convolutional models, and residual convolutional models, with both convolutional families applied to one-dimensional representations, which operate directly on the preprocessed time series, and to two-dimensional representations, which add an extra step that transforms the time series into images, e.g., Recurrence Plots, Gramian Angular Fields, and Markov Transition Fields. After an extensive experimental evaluation over two distinct movement patterns, static and mobile, the LSTM achieved the best average forecasting performance (approximately 12% NRMSE), followed by the traditional 1D convolutional model (16%) and the 2D residual model (19%). However, training and inference times are vital in 5G networks, and the LSTM showed the longest times among all evaluated models, making its real-world application infeasible. Thus, the models with the best trade-off between performance and complexity were the residual networks, particularly the 2D ResNet5, with an error of approximately 19% NRMSE and inference and training times approximately 99% lower than those of the LSTM.
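The NRMSE figures quoted above can be reproduced with a few lines of code. The sketch below is an illustration, not the thesis's actual evaluation code, and it assumes range normalization (RMSE divided by the max-min spread of the observed series); the abstract does not state which normalization variant was used.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root mean squared error normalized by the range of the target.

    Assumption: range normalization; other common variants divide by
    the mean or the standard deviation of y_true instead.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

# A perfect forecast yields zero error.
print(nrmse([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # 0.0
```

Under this convention, the reported 12% LSTM result corresponds to an NRMSE of 0.12.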
New patterns of data consumption, mostly not centered on human behavior, are starting to exhaust the resources of today's mobile network infrastructure. As a result, 5G networks began to be standardized and deployed around the world, adopting high-frequency millimeter-wave links to meet current data transfer demands. However, current 5G networks work in a reactive way: network parameters are calculated by the user equipment and periodically sent to the base stations. Acting reactively may generate delays that slow down network traffic, preventing the network from meeting its time requirements. Thus, beyond-5G networks have been developed to improve the smart capabilities of the network infrastructure through machine learning methods, providing smarter decision-making and resource allocation procedures. Based on beyond-5G networks, this work proposes a deep learning framework to predict future values of a network quality parameter, the Channel Quality Indicator (CQI), based on its past values. This forecasting system is expected to run on a cloud infrastructure that proactively sends forecasts to the base stations, allowing them to choose their future actions in order to maintain the quality of the communication channel with the user equipment. In the literature there are some works on network parameter forecasting, but most of them use expensive deep learning models on computer-simulated environments or, when dealing with real environments, only on past mobile network generations, such as 4G. Therefore, in this work we compare different deep learning techniques trained on a real dataset composed of transmission logs obtained from the perspective of a user equipment attached to a 5G operator in Ireland.
We evaluated LSTM recurrent networks, two traditional convolutional models, and three residual convolutional models, where both convolutional families (traditional and residual) were further divided into one-dimensional and two-dimensional kernels. One-dimensional kernels can natively process time series, whereas the two-dimensional kernel methods need an additional step that transforms the time series into images through image-based time series transformations, e.g., Recurrence Plots, Gramian Angular Fields, and Markov Transition Fields. Next, an extensive experimental evaluation was performed over two distinct movement patterns, static and mobile. We verified that the LSTM obtained the best average prediction performance (approximately 12% NRMSE), followed by the traditional 1D convolutional model (16%) and, finally, the residual 2D model (19%). However, in the context of 5G networks, training and inference times are vital. Taking this into account, the LSTM had the longest processing time among all the models, which limits its applicability in real-world scenarios. Finally, we show that ResNet5, the two-dimensional residual convolutional network, offers the best compromise between performance and complexity, with an error of approximately 19% NRMSE and inference and training times approximately 99% lower than those of the LSTM.
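The image-based transformation step described above can be sketched in a few lines. The example below is a minimal numpy illustration of the Gramian Angular Summation Field variant (not the thesis's actual pipeline; the function name and the toy CQI values are assumptions): the series is min-max rescaled to [-1, 1], each value is mapped to an angle phi = arccos(x), and the "image" is the matrix G[i, j] = cos(phi_i + phi_j), which a 2-D CNN can then consume.

```python
import numpy as np

def gramian_angular_field(series):
    """Gramian Angular Summation Field of a 1-D time series.

    Returns an n x n matrix G with G[i, j] = cos(phi_i + phi_j),
    where phi = arccos of the series rescaled to [-1, 1].
    """
    x = np.asarray(series, dtype=float)
    # Min-max rescale to [-1, 1] so that arccos is defined everywhere.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))  # clip guards float round-off
    return np.cos(phi[:, None] + phi[None, :])

cqi = [7, 9, 12, 15, 13, 10]  # hypothetical CQI history window
img = gramian_angular_field(cqi)
print(img.shape)  # (6, 6)
```

A window of n CQI samples thus becomes an n x n single-channel image; Recurrence Plots and Markov Transition Fields produce matrices of the same shape from the same window, so the three encodings can be stacked as image channels.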