Bibliographic citations
Pinto, M. (2024). Machine learning en linux kernel: implementacion de un predictor de migraciones forzadas en schedulers multicore [Thesis, Universidad de Ingeniería y Tecnología]. https://hdl.handle.net/20.500.12815/355
Pinto M. Machine learning en linux kernel: implementacion de un predictor de migraciones forzadas en schedulers multicore [Thesis]. PE: Universidad de Ingeniería y Tecnología; 2024. https://hdl.handle.net/20.500.12815/355
@misc{renati/231229,
  title     = "Machine learning en linux kernel: implementacion de un predictor de migraciones forzadas en schedulers multicore",
  author    = "Pinto Larrea, Mauricio Jorge",
  publisher = "Universidad de Ingeniería y Tecnología",
  year      = "2024",
  url       = "https://hdl.handle.net/20.500.12815/355"
}
Abstract

Although Linux's Completely Fair Scheduler (CFS) achieves fairness and manages task allocation and migration among cores through a load balancer, recent studies have proposed using low-level machine learning (ML) to optimize kernel decisions. This work studies a specific class of scheduling decisions in which tasks are migrated aggressively between cores, either because they are cache-cold, because they have different NUMA node affinities, or because too many balance attempts have failed. This is suboptimal, especially when cache-hot tasks are forcefully migrated because the last of these conditions holds. To address this problem, this work proposes using ML, in a way similar to [1], to predict occurrences of forced migrations. To that end, we implemented a system that collects migration-related data from calls to the can_migrate_task() function and uses it to (i) train ML models, (ii) make inferences in kernel space, and (iii) configure models in real time through LibML, a library that allows neural networks to be used in kernel and user space in a hybrid manner. Experimental results, where neural networks were trained in user space on the collected migration datasets, show that aggressive migrations can be predicted with high precision, reaching accuracy values above 95% overall when running in kernel space. Additionally, these inferences do not appear to impact performance significantly: the modified kernel's average runtime across all benchmarks is 2.3% lower than the original's.
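To make the idea concrete, the following is a minimal, hypothetical sketch (not taken from the thesis) of the kind of predictor the abstract describes: a fixed-point linear classifier over migration-related features such as cache-hotness, NUMA distance, and failed balance attempts. The feature set, the example weights, and the predict_forced_migration() helper are invented for illustration, and LibML's actual API is not reproduced here; fixed-point arithmetic stands in for the kernel's avoidance of floating point.

/*
 * Hypothetical sketch, not from the thesis: a fixed-point linear
 * classifier over migration features, illustrating the kind of model
 * that could be consulted from can_migrate_task(). Feature names,
 * weights, and predict_forced_migration() are invented; LibML's real
 * API is not shown here.
 */
#include <stdio.h>
#include <stdint.h>

#define FP_SHIFT 16                  /* Q16.16 fixed point; the kernel avoids the FPU */
#define FP(x)    ((int32_t)((x) * (1 << FP_SHIFT)))

struct migration_features {
    int32_t cache_hot;        /* 1 if the task's working set is likely cache-hot */
    int32_t numa_distance;    /* NUMA hop count between source and destination */
    int32_t failed_balances;  /* consecutive failed load-balance attempts */
};

/* Example weights, standing in for parameters trained in user space. */
static const int32_t weights[3] = { FP(-1.8), FP(0.4), FP(0.9) };
static const int32_t bias = FP(-0.5);

/* Returns nonzero when a forced migration is predicted. For a logistic
 * model, sigmoid(score) > 0.5 is equivalent to score > 0, so no
 * transcendental math is needed at inference time. */
static int predict_forced_migration(const struct migration_features *f)
{
    int64_t score = bias;
    score += (int64_t)weights[0] * f->cache_hot;
    score += (int64_t)weights[1] * f->numa_distance;
    score += (int64_t)weights[2] * f->failed_balances;
    return score > 0;
}

int main(void)
{
    /* A cache-hot task with several failed balance attempts behind it. */
    struct migration_features f = { .cache_hot = 1, .numa_distance = 2, .failed_balances = 3 };
    printf("forced migration predicted: %d\n", predict_forced_migration(&f));
    return 0;
}

With these example weights, repeated failed balance attempts push the score positive even for a cache-hot task, mirroring the forced-migration condition the abstract identifies as suboptimal.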
This item is subject to a Creative Commons license.