Post

CFP: Special Issue on AI-driven Sustainable Cloud/Edge System, deadline extension

Special Issue on AI-driven Sustainable Cloud/Edge System in the journal Sustainable Computing: Informatics and Systems (SUSCOM).

Scope: It is essential to make intelligence an integral part of every sustainable computing effort, but even more so in cloud/edge systems due to their inherent closeness to the physical world we live in. With that in mind, we see extensive research on intelligence in cloud/edge systems, which has manifested itself in the development of exciting new systems, including ones that improve efficiency, reduce cost, optimize sustainability, or open up new business avenues.

Reconnecter un monde dévasté - Call for Papers

As part of the "Le numérique à son miroir" series of workshops, the study day "Reconnecter un monde dévasté? Representing environmental issues in video games" will be held on 8 April 2022 at the Université de Rouen Normandie, Mont-Saint-Aignan campus. Organizing committee: Sandra Provini, Mélanie Lucciano, Tony Gheeraert. Link to the call for papers. Submissions may, for example and without being limited to them, address the following questions:

Game Theory for Green Datacenters

In order to operate a datacenter only with renewable energies, a negotiation has to take place between the sources providing and storing the energy (solar panels, wind turbines, batteries, hydrogen tanks) and the consumers of that energy (essentially the IT infrastructure). In the context of the ANR DATAZERO2 project (datazero.org), a negotiation module has to be improved, starting from an existing proof of concept that has already been published. The improvement will be integrated into a dedicated module, interoperable with a working middleware developed in the project.
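As a rough illustration of what such a negotiation can look like (a toy sketch, not the DATAZERO2 module; the function name, parameters and concession rule are invented for the example), the snippet below makes the electrical side and the IT side converge toward a common power level over a few rounds:

```python
# Toy sketch only (not the DATAZERO2 negotiation module): an electrical side
# offering a guaranteed power level and an IT side requesting power concede
# part of the remaining gap at each round until they agree.

def negotiate(offered_w, requested_w, rounds=10, concession=0.5):
    """offered_w: power the electrical side can guarantee (W);
    requested_w: power the IT side would like to consume (W);
    concession: fraction of the remaining gap conceded per round."""
    for _ in range(rounds):
        if offered_w >= requested_w:          # agreement: demand fits supply
            return requested_w
        gap = requested_w - offered_w
        requested_w -= concession * gap       # IT side degrades QoS / delays jobs
        offered_w += concession * gap * 0.5   # power side draws on batteries / H2
    return min(offered_w, requested_w)        # fallback compromise


if __name__ == "__main__":
    agreed = negotiate(offered_w=8_000, requested_w=12_000)
    print(f"agreed power cap: {agreed:.0f} W")
```

In the real project, each side would propose full power profiles over a time window rather than a single value, and the concession strategy itself is part of what the module has to optimize.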

Federation of clouds: Multi-Clouds overflow

To cover data analytics needs, Cloud providers need to adapt their IaaS services to fluctuations in resource consumption and demand. This requires geographically distributed task execution and flexible services. A federation of cloud providers makes it possible to offer such services to users. In this project, users submit their applications to a cloud broker. The aim is to find resources in one or several clouds to satisfy each request.
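A minimal sketch of the overflow idea, assuming a hypothetical broker that fills federated clouds in order of preference (the cloud names and capacities are invented; a real broker would also account for price, locality and SLAs):

```python
# Illustrative sketch: a toy multi-cloud broker that places a request on the
# first federated cloud with enough free cores and overflows to the next one.

from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    total_cores: int
    used_cores: int = 0

    def free(self) -> int:
        return self.total_cores - self.used_cores


def place(request_cores: int, clouds: list[Cloud]) -> list[tuple[str, int]]:
    """Split a request across clouds, filling each in turn (overflow)."""
    placement, remaining = [], request_cores
    for cloud in clouds:
        take = min(cloud.free(), remaining)
        if take > 0:
            cloud.used_cores += take
            placement.append((cloud.name, take))
            remaining -= take
        if remaining == 0:
            return placement
    raise RuntimeError("federation cannot satisfy the request")


federation = [Cloud("cloud-A", 64, used_cores=60), Cloud("cloud-B", 128)]
print(place(16, federation))   # [('cloud-A', 4), ('cloud-B', 12)]
```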

DVFS-aware performance and energy model of HPC applications

Power consumption of computers is becoming a major concern. To optimise their power consumption, it is necessary to have precise information on the behavior of applications. With this information, it is possible to choose the right processor frequency. The speed of some applications is barely affected by changes in this frequency, while for other applications it has a significant effect. The goal of this internship is to model the fine-grained behavior of applications and to link this behavior to the impact (on performance and energy) of frequency changes.
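A toy version of the kind of model targeted here, under the common assumption that an application phase can be split into a frequency-sensitive (compute-bound) fraction and a frequency-insensitive (memory-bound) fraction; all function names and coefficients below are illustrative:

```python
# Simplified DVFS model sketch: predict execution time and energy at a lower
# frequency from a reference run, given the frequency-sensitive fraction beta.

def predict_time(t_ref, beta, f_ref_ghz, f_ghz):
    """t_ref: time at f_ref_ghz; beta: fraction of t_ref that scales with frequency."""
    return t_ref * (beta * f_ref_ghz / f_ghz + (1.0 - beta))

def predict_energy(t, f_ghz, p_static=20.0, p_dyn_per_ghz=25.0):
    """Toy power model: static power plus a dynamic term linear in frequency (W)."""
    return (p_static + p_dyn_per_ghz * f_ghz) * t

f_ref_ghz, f_low_ghz = 3.0, 1.8
for name, beta in [("compute-bound", 0.9), ("memory-bound", 0.2)]:
    t = predict_time(t_ref=10.0, beta=beta, f_ref_ghz=f_ref_ghz, f_ghz=f_low_ghz)
    print(f"{name}: {t:.1f} s at 1.8 GHz, {predict_energy(t, f_low_ghz):.0f} J")
```

With these made-up numbers, the memory-bound phase barely slows down and saves energy at the lower frequency, while the compute-bound phase stretches so much that the energy gain vanishes, which is exactly the trade-off the internship aims to capture with real measurements.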

Fast scheduling under energy and QoS constraints in a Fog computing environment

Location: LAAS-CNRS - Team SARA or IRIT - Team SEPIA
Supervisors: Da Costa Georges (georges.da-costa@irit.fr) / Guérout Tom (tguerout@laas.fr)
Duration: 6 months, with the possibility of a thesis afterwards.

Context: The explosion of the volume of data exchanged within today's IT systems, driven by increasingly broad use and an ever wider audience (large organizations, companies, the general public, etc.), has for several years been calling into question the architectures used until now. Indeed, for the past few years, Fog computing [1], which extends the Cloud computing paradigm to the edge of the network, has been developing steadily, offering more and more possibilities and thus extending the field of Internet of Things applications.
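For illustration only, a naive greedy heuristic for this kind of placement problem (not the algorithm expected from the internship; node and task attributes are invented): each task is mapped to a feasible fog node, i.e. one meeting its latency bound with enough free capacity, that minimizes an energy estimate.

```python
# Illustrative greedy sketch: place tasks on fog nodes under QoS (latency)
# and capacity constraints, preferring the node with the lowest energy cost.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_mips: float
    latency_ms: float
    watts_per_mips: float

@dataclass
class Task:
    name: str
    mips: float
    max_latency_ms: float

def schedule(tasks, nodes):
    plan = {}
    for task in sorted(tasks, key=lambda t: t.max_latency_ms):   # tightest QoS first
        feasible = [n for n in nodes
                    if n.latency_ms <= task.max_latency_ms and n.free_mips >= task.mips]
        if not feasible:
            raise RuntimeError(f"no node satisfies QoS for {task.name}")
        best = min(feasible, key=lambda n: n.watts_per_mips * task.mips)  # energy estimate
        best.free_mips -= task.mips
        plan[task.name] = best.name
    return plan

nodes = [Node("edge-1", 500, 5, 0.4), Node("cloud-1", 5000, 40, 0.2)]
tasks = [Task("camera-feed", 200, 10), Task("batch-analytics", 1000, 100)]
print(schedule(tasks, nodes))   # {'camera-feed': 'edge-1', 'batch-analytics': 'cloud-1'}
```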

Impact of processor temperature on HPC application performance and energy consumption

Large scale datacenters manage applications as black boxes. Most of the time, they assume that application behavior is not linked to the state of the underlying hardware. When an application runs on a hot processor, it can be slowed down arbitrarily as the processor tries to protect itself. The goal of this internship is to evaluate the impact of temperature on the speed of the code, the impact of the execution of the code on temperature, and the possibility of reducing the processor frequency at key points to cool the processor down (and thus speed up the application).
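A small measurement sketch in the spirit of this study, assuming a Linux machine with psutil installed and exposed temperature sensors: it runs a CPU-bound kernel repeatedly and logs elapsed time together with temperature and current frequency, so thermal throttling shows up as runtimes that grow with temperature.

```python
# Measurement sketch (assumes Linux + psutil with visible temperature sensors).
import time
import psutil

def cpu_temp_c():
    temps = psutil.sensors_temperatures()        # may be empty on some machines
    for entries in temps.values():
        if entries:
            return entries[0].current
    return float("nan")

def kernel(n=2_000_000):
    s = 0.0
    for i in range(1, n):
        s += 1.0 / i                             # CPU-bound busy loop
    return s

for run in range(20):
    start = time.perf_counter()
    kernel()
    elapsed = time.perf_counter() - start
    freq = psutil.cpu_freq()                     # may be None on some platforms
    freq_mhz = freq.current if freq else float("nan")
    print(f"run {run:2d}: {elapsed:.3f} s  {cpu_temp_c():.1f} °C  {freq_mhz:.0f} MHz")
```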

Performance and energy models of colocated applications

Large scale datacenters manage applications as black boxes. Most of the time, they assume that applications have no cross impact. When multiple applications use the memory at the same time, their speed is reduced because of the memory bus bottleneck. Conversely, two applications sharing the same core might not each run at half speed: for example, if one only uses floating-point operations while the other only performs memory accesses.
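A simple way to observe this cross impact (illustrative sketch; workloads and sizes are arbitrary) is to time a compute-bound and a memory-bound microbenchmark alone and then colocated:

```python
# Colocation sketch: compare solo and concurrent wall-clock times of a
# compute-bound and a memory-bound microbenchmark.
import time
from multiprocessing import Process

def compute_bound(n=5_000_000):
    s = 0.0
    for i in range(1, n):
        s += (i % 7) * 0.5                     # arithmetic, little memory traffic
    return s

def memory_bound(size=5_000_000, passes=5):
    data = list(range(size))                   # large working set, misses the caches
    s = 0
    for _ in range(passes):
        s += sum(data)                         # streams through memory repeatedly
    return s

def timed(*targets):
    procs = [Process(target=t) for t in targets]
    start = time.perf_counter()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    print("compute alone :", round(timed(compute_bound), 2), "s")
    print("memory alone  :", round(timed(memory_bound), 2), "s")
    print("colocated     :", round(timed(compute_bound, memory_bound), 2), "s")
```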