Open Position

Frugal prediction of a supercomputer's load to reduce its energy footprint

Keywords: prediction, load, HPC, scheduling, uncertainty, energy. Supervisors: Millian Poquet, Georges Da Costa. Context: In the world of high-performance computing, a supercomputer is a computing platform used by many users to run applications, in particular to launch large-scale scientific simulation campaigns. Recent supercomputers can have a very large number of resources (on the order of a million cores), so users do not access the resources directly; they go through a resource manager (such as SLURM [1]) to reserve compute nodes/cores and to run their applications on them.

Replaying with feedback: towards more realistic HPC simulations

Topic: Researchers use simulations to compare the performance (execution time, energy efficiency, …) of different scheduling algorithms for High-Performance Computing (HPC) platforms. The most common method is to replay historical workloads recorded on real HPC infrastructures (like the ones available in the Parallel Workloads Archive): jobs are submitted to the simulation at the same timestamps as in the original log. A major drawback of this method is that it does not preserve the submission behavior of the platform's users, which in reality depends on the state of the platform.
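
As a minimal Python sketch of the replay method described above (the names are illustrative, not an actual simulator API), a replayed workload is simply a list of jobs re-submitted in the order of their logged timestamps:

```python
from dataclasses import dataclass

@dataclass
class Job:
    submit_time: float  # submission timestamp recorded in the original log (s)
    walltime: float     # requested wall time (s)
    cores: int          # requested number of cores

def replay(log: list[Job]) -> list[tuple[float, Job]]:
    """Return (submission_time, job) events, replayed exactly as logged.

    This is the drawback discussed above: submission times are fixed in
    advance, so they cannot react to the simulated platform's state
    (no user feedback loop)."""
    return sorted(((job.submit_time, job) for job in log), key=lambda e: e[0])

log = [Job(10.0, 3600, 64), Job(0.0, 600, 8)]
events = replay(log)  # the job logged at t=0.0 is submitted first
```

A feedback-aware simulation would instead recompute each submission time from the simulated platform's state rather than taking it verbatim from the log.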

Post-doc position on scheduling and management of a 100% renewable energy datacenter

Duration: between 16 and 18 months. Remuneration: depending on profile and experience, according to the University's salary scale (from €2700 to €3600 gross monthly). Position to be filled as soon as possible; applications will be evaluated as they arrive. Location: IRIT laboratory, Paul Sabatier University (Toulouse, France). Following the ANR DATAZERO [1] project (2015-2019), this position is set in the context of the ANR DATAZERO2 project (2020-2024).

Open position for post-doc and engineer: Smart cities and fog computing

Laboratory: IRIT (computer science laboratory) in Toulouse, SEPIA team. Location: University Toulouse III. Salary: starts at €2650 gross for a post-doc and €2330 for an engineer, and increases depending on previous experience. Duration: 1 year, extendable for another year. Start of the position: September 2022. Keywords: Edge and Fog computing, Gama, multi-agent systems, scheduling. Profiles: Master or PhD. Expected abilities, one or more of the following: distributed systems; optimization techniques (A.

Research engineer position: application integration and definition of the continuous integration process, 12-month fixed-term contract (possible extension)

Mission start date: between September and December 2022. Position to be filled as soon as possible; applications will be evaluated as they arrive. Period: 2022/2023, 12-month duration (possible extension). Experience: experience in DevOps. Remuneration: depending on profile and experience, according to the University's salary scale (from €1800 to €2300 gross monthly, depending on experience). Location: IRIT laboratory, Paul Sabatier University. Position constraints: participation in all project meetings, including some potential travel.

Game Theory for Green Datacenters

In order to operate a datacenter only on renewable energy, a negotiation has to be undertaken between the sources providing and storing the energy (solar panels, wind turbines, batteries, hydrogen tanks) and the consumers of that energy (essentially the IT infrastructure). In the context of the ANR DATAZERO2 project, a negotiation module has to be improved, starting from an existing proof of concept that has already been published. The improvement will be included in a dedicated module, interoperable with a functioning middleware developed in the project.
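
A deliberately simplistic sketch of such a negotiation (numbers and protocol invented here; the actual DATAZERO2 negotiation module is far richer): the energy producers and the IT consumer make alternating concessions on a power setpoint until their positions cross:

```python
def negotiate(offer: float, demand: float,
              step: float = 0.5, max_rounds: int = 1000) -> tuple[float, int]:
    """Alternating concessions: each round the producer raises its power
    offer and the consumer lowers its power demand by `step` (kW), until
    the two cross. Returns the agreed setpoint and the number of rounds."""
    rounds = 0
    while offer < demand and rounds < max_rounds:
        offer += step    # producer concedes: commits a bit more power
        demand -= step   # consumer concedes: sheds a bit of IT load
        rounds += 1
    return (offer + demand) / 2, rounds

# Producer initially offers 40 kW, consumer initially asks for 60 kW:
setpoint, rounds = negotiate(40.0, 60.0)
```

Real negotiation strategies would weigh battery state, hydrogen reserves, and job deadlines instead of conceding a fixed step, but the fixed-point structure is the same.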

Federation of clouds: Multi-Clouds overflow

To cover data analytics needs, cloud providers need to adapt their IaaS services to fluctuations in resource consumption and demand. This requires geographical distribution of task executions and flexible services. A federation of cloud providers makes it possible to offer such services to users. In this project, users submit their applications to a cloud broker. The aim is to find resources in one or more clouds to answer each request.

DVFS-aware performance and energy model of HPC applications

Power consumption of computers is becoming a major concern. To optimise their power consumption, it is necessary to have precise information on the behavior of applications. With this information, it is possible to choose the right processor frequency. The speed of some applications is hardly affected by changes of this frequency, while for other applications it has an important effect. The goal of this internship is to model the fine-grained behavior of applications and to link this behavior to the impact (on performance and energy) of frequency changes.
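
As a toy illustration of such a model (all constants below are invented, not measured), one can split an application's execution time into a frequency-sensitive compute part and a frequency-insensitive memory part, which explains why some applications barely slow down at lower frequency:

```python
def exec_time(t_compute: float, t_memory: float, f: float, f_max: float) -> float:
    """Execution time at frequency f: the compute-bound part slows down
    proportionally to f_max/f, the memory-bound part does not."""
    return t_compute * (f_max / f) + t_memory

def energy(t_compute: float, t_memory: float, f: float, f_max: float,
           p_static: float = 10.0, p_dyn_max: float = 30.0) -> float:
    """Energy = power * time, with dynamic power scaling as (f/f_max)**3
    (a common first-order CMOS approximation)."""
    t = exec_time(t_compute, t_memory, f, f_max)
    power = p_static + p_dyn_max * (f / f_max) ** 3
    return power * t

# A memory-bound application (10 s compute, 90 s memory) loses only 10%
# performance at half frequency, but saves a lot of energy:
t_hi = exec_time(10, 90, 3.0, 3.0)  # 100.0 s at 3.0 GHz
t_lo = exec_time(10, 90, 1.5, 3.0)  # 110.0 s at 1.5 GHz
```

The internship would replace these invented constants with per-phase measurements of real applications.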

Fast scheduling under energy and QoS constraints in a Fog computing environment

Location: LAAS-CNRS (SARA team) or IRIT (SEPIA team). Supervisors: Da Costa Georges / Guérout Tom. Duration: 6 months, with the possibility of a thesis afterwards. Context: The explosion of the volume of data exchanged within today's IT systems, due to increasingly wide use by an increasingly wide audience (large organizations, companies, the general public, etc.), has for several years called into question the architectures used until now. Indeed, for the past few years, Fog computing [1], which extends the Cloud computing paradigm to the edge of the network, has been developing steadily, offering more and more possibilities and thus extending the field of Internet of Things applications.

Impact of processor temperature on HPC application performance and energy consumption

Large-scale datacenters manage applications as black boxes. Most of the time, they assume that application behavior is not linked to the state of the underlying hardware. Yet when an application runs on a hot processor, it can be slowed down arbitrarily by the processor as it tries to protect itself. The goal of this internship is to evaluate the impact of temperature on the speed of the code, the impact of the execution of the code on temperature, and the possibility of reducing the processor frequency at key points to cool the processor down (and thus speed up the application).
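
As a toy illustration of the throttling effect described above (all constants are invented, not measured), progress and temperature can be coupled in a small discrete-time loop: the hotter the processor gets, the less work per second it delivers.

```python
def simulate(work: float, t_crit: float = 90.0, dt: float = 1.0) -> float:
    """Run `work` units of computation on a processor that throttles to
    half frequency whenever its temperature exceeds t_crit (°C).
    Returns the elapsed time in seconds."""
    temp, elapsed = 40.0, 0.0
    while work > 0:
        freq = 0.5 if temp > t_crit else 1.0            # thermal throttling
        work -= freq * dt                               # progress tracks frequency
        temp += 5.0 * freq * dt                         # heating from computation
        temp -= 0.02 * (temp - 25.0) * dt               # cooling toward ambient
        elapsed += dt
    return elapsed

simulate(100)  # a long run heats up, throttles, and takes well over 100 s
```

In this model a short job finishes at full speed, while a long job ends up throttled; the internship question is whether briefly lowering the frequency *before* hitting t_crit can avoid the throttled regime and finish sooner overall.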