Yandex Research Highlights Major Progress and Plans for 2025

Yandex Research has wrapped up a productive 2024 with notable achievements across multiple areas of artificial intelligence. The team worked on Generative Computer Vision, Efficient Large Language Models (LLMs), Graph Neural Networks, and Deep Learning for Tabular Data.

Each research group made contributions that reached a global audience, with highly cited papers and real-world implementations in Yandex products.

In 2024, the team published 17 research papers at top-tier conferences. These include 8 papers at NeurIPS, 4 at ICLR, 3 at ICML, and 1 paper each at CVPR and ACL. Ten team members also participated in five major international events.

Yandex Research collaborated with leading institutions such as IST Austria, KAUST, Carnegie Mellon University, Meta AI, and Together AI. These partnerships helped strengthen the team’s global presence.

Two standout projects were highlighted. The first is AQLM + PV-Tuning, a method for compressing large language models that preserves more model quality than existing compression techniques. The full paper is available on arXiv.
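To give a sense of what weight compression means in practice, here is a minimal sketch of simple uniform 2-bit quantization. This is an illustration of the general idea only, not the AQLM + PV-Tuning algorithm itself (which uses learned additive codebooks and fine-tuning); all function names here are hypothetical.

```python
import numpy as np

def quantize_2bit(weights):
    """Uniformly map float weights to 2-bit codes (0..3) plus scale/offset.

    Toy example only -- real LLM compression methods such as AQLM use
    learned codebooks rather than a uniform grid.
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 3  # 2 bits -> 4 levels
    codes = np.clip(np.round((weights - w_min) / scale), 0, 3).astype(np.uint8)
    return codes, scale, w_min

def dequantize(codes, scale, w_min):
    """Reconstruct approximate float weights from the integer codes."""
    return codes.astype(np.float32) * scale + w_min

# A 2-bit code per weight replaces a 32-bit float: a 16x size reduction,
# at the cost of some reconstruction error.
weights = np.array([-0.9, -0.3, 0.1, 0.8], dtype=np.float32)
codes, scale, w_min = quantize_2bit(weights)
approx = dequantize(codes, scale, w_min)
```

The reconstruction error of uniform quantization is bounded by half a quantization step; methods like AQLM aim to push quality far beyond what such a naive grid achieves at the same bit budget.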

The second project, YandexART, is a generative text-to-image model developed with Yandex’s Computer Vision group. It significantly improved image generation quality and now ranks among the top models in its category.

Outside of research, the team enjoyed a kayaking trip during the summer, balancing high-impact work with team-building and nature.

Looking ahead, Yandex Research plans to expand its team beyond the current 30 members in 2025. The group also expressed hopes for more powerful GPUs and aims to explore new scientific areas.

Dan Taylor is an award-winning SEO consultant and digital marketing strategist based in the United Kingdom. He currently serves as the Head of Technical SEO at SALT.agency, a UK-based technical SEO specialist firm.