About

Girolamo Macaluso
PhD Student in Artificial Intelligence
Media Integration and Communication Center, University of Florence, Italy

📍 Florence, Italy · ✉️ girolamo.macaluso@unifi.it · 🔗 LinkedIn · 💻 GitHub · 🎓 Scholar

My research is dedicated to making Reinforcement Learning more efficient and scalable. I began by exploring offline and hybrid offline→online RL methods to maximize learning from static datasets, then developed techniques to cut the computational and memory costs of state-of-the-art algorithms. I'm now advancing continual RL frameworks that adapt to evolving environments and applying RL to fine-tune diffusion models.


Education


Experience

Part-time Software Engineer, Maba S.R.L., Florence
Jan 2019–Jul 2023
Designed and delivered end-to-end enhancements to a centralized ERP platform, including database modeling, backend API development, and user-facing interface components to support customer-specific workflows.


Research Interests


Publications

  1. No MoCap Needed: Post-Training Motion Diffusion Models with Reinforcement Learning using Only Textual Prompts.
    Accepted at Winter Conference on Applications of Computer Vision (WACV) 2026. 🔗 arXiv 🔗 GitHub (code available soon)
    Proposes a reinforcement learning–based method to fine-tune pretrained motion diffusion models using only textual prompts.

  2. SPEQ: Offline Stabilization Phases for Efficient Q-Learning in High Update-To-Data Ratio RL.
    Accepted at Reinforcement Learning Conference (RLC) 2025. 🔗 OpenReview 🔗 GitHub
    Introduces a novel online reinforcement learning framework with periodic offline stabilization that cuts computational requirements and training time while matching or exceeding state-of-the-art performance.

  3. A Benchmark Environment for Offline RL in Racing Games.
    Oral at IEEE Conference on Games (COG) 2024. 🔗 arXiv 🔗 GitHub
    Introduces OfflineMania, a novel benchmark environment for offline and offline-to-online RL with a challenging data distribution.

  4. Small Dataset, Big Gains: Enhancing RL by Offline Pre-Training with Model-Based Augmentation.
    Oral at AIBSD Workshop, AAAI 2024. 🔗 arXiv
    Proposes a novel technique based on a generative model for improving offline-to-online RL performance on small datasets.


Teaching & Mentoring

Teaching Assistant, University of Florence
Jan 2023–Present
Interactive C/C++ & Python lessons for 200+ undergraduates.

STEM Outreach Teacher, European STEM Project
Feb 2024–Present
Introduce students to the field of engineering and explain the challenges of applying AI to real-world problems.