Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
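For reference, future is a standard Jekyll setting; a minimal excerpt of the relevant part of _config.yml (assuming an otherwise standard configuration) looks like this:

```yaml
# _config.yml (excerpt)
# When false, posts dated in the future are excluded from the built site
# until the site is rebuilt after their publish date has passed.
future: false
```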

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

Improving Pareto Set Learning for Expensive Multi-objective Optimization via Stein Variational Hypernetworks

Published as an arXiv preprint (arXiv:2412.17312), 2024

Expensive multi-objective optimization problems (EMOPs) are common in real-world scenarios where evaluating objective functions is costly and involves extensive computations or physical experiments. Current Pareto set learning methods for such problems often rely on surrogate models like Gaussian processes to approximate the objective functions. These surrogate models can become fragmented, resulting in numerous small uncertain regions between explored solutions. When using acquisition functions such as the Lower Confidence Bound (LCB), these uncertain regions can turn into pseudo-local optima, complicating the search for globally optimal solutions. To address these challenges, we propose a novel approach called SVH-PSL, which integrates Stein Variational Gradient Descent (SVGD) with Hypernetworks for efficient Pareto set learning. Our method addresses the issues of fragmented surrogate models and pseudo-local optima by collectively moving particles in a manner that smooths out the solution space. The particles interact with each other through a kernel function, which helps maintain diversity and encourages the exploration of underexplored regions. This kernel-based interaction prevents particles from clustering around pseudo-local optima and promotes convergence towards globally optimal solutions. Our approach aims to establish robust relationships between trade-off reference vectors and their corresponding true Pareto solutions, overcoming the limitations of existing methods. Through extensive experiments across both synthetic and real-world MOO benchmarks, we demonstrate that SVH-PSL significantly improves the quality of the learned Pareto set, offering a promising solution for expensive multi-objective optimization problems.
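For context, the SVGD mechanics the abstract refers to can be summarized by the standard particle update below; this is the generic SVGD rule, not the authors' exact SVH-PSL formulation, which couples it with hypernetworks and surrogate objectives:

```latex
x_i \leftarrow x_i + \epsilon\, \hat{\phi}^*(x_i), \qquad
\hat{\phi}^*(x_i) = \frac{1}{n} \sum_{j=1}^{n}
\left[ k(x_j, x_i)\, \nabla_{x_j} \log p(x_j) + \nabla_{x_j} k(x_j, x_i) \right]
```

The first term drives particles toward high-density regions of the target p, while the kernel-gradient term acts as a repulsive force that keeps the particles spread out; this repulsion is the mechanism the abstract credits with preventing clustering around pseudo-local optima.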

Recommended citation: Nguyen, M. D., Dinh, P. M., Nguyen, Q. H., Hoang, L. P., & Le, D. D. (2024). Improving Pareto Set Learning for Expensive Multi-objective Optimization via Stein Variational Hypernetworks. arXiv preprint arXiv:2412.17312.
Download Paper

Keyword-driven Retrieval-Augmented Large Language Models for Cold-start User Recommendations

Published in The Web Conference 2025, Workshop on LLMs for E-Commerce, 2025

KALM4Rec operates in two main stages: candidate retrieval and LLM-based candidate re-ranking. In the first stage, keyword-driven retrieval models identify potential candidates, which sidesteps LLMs' limitations in processing long inputs and reduces the risk of generating misleading information.
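A minimal sketch of that two-stage flow is given below. Every name in it (retrieve_by_keywords, rerank_with_llm, the keyword index) is a hypothetical placeholder used only to illustrate the retrieve-then-rerank idea, not the actual KALM4Rec code or API.

```python
# Hypothetical sketch of a keyword-driven retrieve-then-rerank pipeline.
# None of these names come from KALM4Rec; they only illustrate the two stages.

def retrieve_by_keywords(user_keywords, item_index, top_k=50):
    """Stage 1: score catalog items by keyword overlap and keep a small candidate pool."""
    scored = []
    for item_id, item_keywords in item_index.items():
        overlap = len(set(user_keywords) & set(item_keywords))
        if overlap:
            scored.append((overlap, item_id))
    scored.sort(reverse=True)
    return [item_id for _, item_id in scored[:top_k]]

def rerank_with_llm(llm, user_keywords, candidates):
    """Stage 2: ask an LLM to re-rank only the retrieved candidates,
    keeping the prompt short instead of passing the whole catalog."""
    prompt = (
        "A new user is interested in: " + ", ".join(user_keywords) + "\n"
        "Rank these candidate items from most to least relevant:\n"
        + "\n".join(candidates)
    )
    return llm(prompt)  # assumed to return the candidates in ranked order

# Usage with a stub LLM that just echoes the candidate list:
# pool = retrieve_by_keywords(["vegan", "brunch"], item_index)
# ranked = rerank_with_llm(lambda prompt: pool, ["vegan", "brunch"], pool)
```

Restricting the LLM to a small retrieved pool is what keeps the prompt within context limits and grounds the re-ranking in known items.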

Recommended citation: Kieu, H. D., Nguyen, M. D., Nguyen, T. S., & Le, D. D. (2024). Keyword-driven retrieval-augmented large language models for cold-start user recommendations. arXiv preprint arXiv:2405.19612.
Download Paper

Improving Pareto Set Learning for Expensive Multi-objective Optimization via Stein Variational Hypernetworks

Published in Proceedings of the AAAI Conference on Artificial Intelligence, 39(18), 19677-19685, 2025

We propose a novel approach called SVH-PSL, which integrates Stein Variational Gradient Descent (SVGD) with Hypernetworks for efficient Pareto set learning. Our method addresses the issues of fragmented surrogate models and pseudo-local optima by collectively moving particles in a manner that smooths out the solution space. The particles interact with each other through a kernel function, which helps maintain diversity and encourages the exploration of underexplored regions. This kernel-based interaction prevents particles from clustering around pseudo-local optima and promotes convergence towards globally optimal solutions.

Recommended citation: Nguyen, M.-D., Dinh, P. M., Nguyen, Q.-H., Hoang, L. P., & Le, D. D. (2025). Improving Pareto Set Learning for Expensive Multi-objective Optimization via Stein Variational Hypernetworks. Proceedings of the AAAI Conference on Artificial Intelligence, 39(18), 19677-19685. https://doi.org/10.1609/aaai.v39i18.34167
Download Paper

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.