
Trending Papers

by AK and the research community

Submitted by unilm

VibeVoice Technical Report

VibeVoice synthesizes long-form multi-speaker speech using next-token diffusion and a highly efficient continuous speech tokenizer, achieving superior performance and fidelity.

Microsoft Research · Aug 26, 2025

TradingAgents: Multi-Agents LLM Financial Trading Framework

A multi-agent framework using large language models for stock trading simulates real-world trading firms, improving performance metrics like cumulative returns and Sharpe ratio.

4 authors · Dec 28, 2024

A decoder-only foundation model for time-series forecasting

A large language model adapted for time-series forecasting achieves near-optimal zero-shot performance on diverse datasets across different time scales and granularities.

4 authors · Oct 14, 2023
Submitted by BradyFU

Video-MME-v2: Towards the Next Stage in Benchmarks for Comprehensive Video Understanding

Video-MME-v2 presents a comprehensive benchmark for evaluating video understanding models through a progressive hierarchy and group-based evaluation to assess robustness and faithfulness.

MME-Benchmarks · Apr 6, 2026
Submitted by chengtim

VOID: Video Object and Interaction Deletion

VOID is a video object removal framework that uses vision-language models and video diffusion models to generate physically plausible scenes through causal and counterfactual reasoning.

Netflix · Apr 2, 2026
Submitted by AaronHuangWei

TriAttention: Efficient Long Reasoning with Trigonometric KV Compression

TriAttention addresses KV cache memory bottlenecks in LLMs by leveraging Q/K vector concentration in pre-RoPE space to improve key importance estimation and enable efficient long-context generation.

NVIDIA · Apr 6, 2026
Submitted by taesiri

MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing

MinerU2.5, a 1.2B-parameter document parsing vision-language model, achieves state-of-the-art recognition accuracy with computational efficiency through a coarse-to-fine parsing strategy.

61 authors · Sep 26, 2025

Bitnet.cpp: Efficient Edge Inference for Ternary LLMs

Bitnet.cpp enhances edge inference for ternary LLMs using a novel mixed-precision matrix multiplication library, achieving significant speed improvements over baselines.

10 authors · Feb 17, 2025
Submitted by WENGSYX

DeepScientist: Advancing Frontier-Pushing Scientific Findings Progressively

DeepScientist autonomously conducts scientific discovery through Bayesian Optimization, surpassing human state-of-the-art methods on multiple AI tasks.

LightRAG: Simple and Fast Retrieval-Augmented Generation

LightRAG improves Retrieval-Augmented Generation by integrating graph structures for enhanced contextual awareness and efficient information retrieval, achieving better accuracy and response times.

5 authors · Oct 8, 2024
Submitted by Tyrannosaurus

MegaTrain: Full Precision Training of 100B+ Parameter Large Language Models on a Single GPU

MegaTrain enables efficient training of large language models with over 100 billion parameters on a single GPU by utilizing host memory storage and optimized data streaming techniques.

4 authors · Apr 6, 2026

AI-Trader: Benchmarking Autonomous Agents in Real-Time Financial Markets

AI-Trader presents the first fully automated live benchmark for evaluating large language models in financial decision-making across multiple markets with autonomous information processing.

6 authors · Dec 1, 2025
Submitted by yyamada

The AI Scientist-v2: Workshop-Level Automated Scientific Discovery via Agentic Tree Search

The AI Scientist-v2 autonomously proposes hypotheses, runs experiments, analyzes data, and writes scientific papers, producing the first fully AI-generated paper accepted at a peer-reviewed workshop.

8 authors · Apr 10, 2025
Submitted by akhaliq

Efficient Memory Management for Large Language Model Serving with PagedAttention

PagedAttention algorithm and vLLM system enhance the throughput of large language models by efficiently managing memory and reducing waste in the key-value cache.

9 authors · Sep 12, 2023
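The core idea the PagedAttention summary describes is to store the KV cache in fixed-size blocks and map each sequence's logical token positions to physical blocks, much like virtual-memory paging, so memory is allocated on demand and reused. A minimal bookkeeping sketch (hypothetical names; not vLLM's actual implementation):

```python
# Minimal sketch of paged KV-cache bookkeeping (illustrative, not vLLM's API).
# Tokens are appended into fixed-size blocks; a per-sequence block table maps
# logical positions to physical block IDs, so blocks are allocated lazily and
# freed blocks can be reused across sequences.

BLOCK_SIZE = 16  # tokens per physical block

class PagedKVCache:
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))  # pool of physical blocks
        self.block_tables = {}  # seq_id -> list of physical block IDs
        self.lengths = {}       # seq_id -> number of tokens stored

    def append_token(self, seq_id: int) -> tuple[int, int]:
        """Reserve a slot for one new token; return (block_id, offset)."""
        table = self.block_tables.setdefault(seq_id, [])
        length = self.lengths.get(seq_id, 0)
        if length % BLOCK_SIZE == 0:              # current block full, or none yet
            table.append(self.free_blocks.pop())  # allocate a fresh block
        self.lengths[seq_id] = length + 1
        return table[-1], length % BLOCK_SIZE

    def free_sequence(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the pool for reuse."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)
```

Because blocks are fixed-size and released on completion, fragmentation and over-reservation of the KV cache are avoided, which is the source of the throughput gains the paper reports.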
Submitted by rubenohana

The Well: a Large-Scale Collection of Diverse Physics Simulations for Machine Learning

A large-scale dataset collection, The Well, provides diverse numerical simulations for benchmarking machine learning models in physical systems simulation.

26 authors · Nov 30, 2024

AutoDev: Automated AI-Driven Development

AutoDev is an AI-driven software development framework that automates complex engineering tasks within a secure Docker environment, achieving high performance in code and test generation.

5 authors · Mar 13, 2024
Submitted by taesiri

OpenWorldLib: A Unified Codebase and Definition of Advanced World Models

OpenWorldLib presents a standardized framework for advanced world models that integrate perception, interaction, and long-term memory capabilities for comprehensive world understanding and prediction.

Peking University · Apr 6, 2026
Submitted by taesiri

PaddleOCR-VL: Boosting Multilingual Document Parsing via a 0.9B Ultra-Compact Vision-Language Model

PaddleOCR-VL, a vision-language model combining NaViT-style dynamic resolution and ERNIE, achieves state-of-the-art performance in document parsing and element recognition with high efficiency.

PaddlePaddle · Oct 16, 2025
Submitted by wangzx1994

Generative World Renderer

A large-scale dynamic dataset derived from AAA games is introduced to improve generative inverse and forward rendering. It features high-resolution, synchronized RGB and G-buffer data, alongside a novel VLM-based evaluation method that correlates well with human judgment.

Submitted by akhaliq

Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory

Mem0, a memory-centric architecture with graph-based memory, enhances long-term conversational coherence in LLMs by efficiently extracting, consolidating, and retrieving information, outperforming existing memory systems in accuracy and computational efficiency.

5 authors · Apr 28, 2025
Submitted by taesiri

GLM-5: from Vibe Coding to Agentic Engineering

GLM-5 advances foundation models with DSA for cost reduction, asynchronous reinforcement learning for improved alignment, and enhanced coding capabilities for real-world software engineering.

186 authors · Feb 17, 2026
Submitted by taesiri

AgentScope 1.0: A Developer-Centric Framework for Building Agentic Applications

AgentScope enhances agentic applications by providing flexible tool-based interactions, unified interfaces, and advanced infrastructure based on the ReAct paradigm, supporting efficient and safe development and deployment.

23 authors · Aug 22, 2025

Kronos: A Foundation Model for the Language of Financial Markets

Kronos, a specialized pre-training framework for financial K-line data, outperforms existing models in forecasting and synthetic data generation through a unique tokenizer and autoregressive pre-training on a large dataset.

7 authors · Aug 2, 2025
Submitted by akhaliq

Very Large-Scale Multi-Agent Simulation in AgentScope

Enhancements to the AgentScope platform improve scalability, efficiency, and ease of use for large-scale multi-agent simulations through distributed mechanisms, flexible environments, and user-friendly tools.

8 authors · Jul 25, 2024
Submitted by Rbin

RAG-Anything: All-in-One RAG Framework

RAG-Anything is a unified framework that enhances multimodal knowledge retrieval by integrating cross-modal relationships and semantic matching, outperforming existing methods on complex benchmarks.

Submitted by Virgilllll

MSA: Memory Sparse Attention for Efficient End-to-End Memory Model Scaling to 100M Tokens

Memory Sparse Attention (MSA) enables large language models to process extremely long contexts with linear complexity and high efficiency through innovations like sparse attention and document-wise RoPE.

EverMind-AI · Mar 6, 2026
Submitted by youganglyu

EvoScientist: Towards Multi-Agent Evolving AI Scientists for End-to-End Scientific Discovery

EvoScientist is an adaptive multi-agent framework that enhances scientific discovery by continuously learning from past interactions through persistent memory modules.

12 authors · Mar 9, 2026
Submitted by akhaliq

OpenDevin: An Open Platform for AI Software Developers as Generalist Agents

OpenDevin is a platform for developing AI agents that interact with the world by writing code, using command lines, and browsing the web, with support for multiple agents and evaluation benchmarks.

24 authors · Jul 23, 2024
Submitted by taesiri

Embarrassingly Simple Self-Distillation Improves Code Generation

Simple self-distillation improves code generation in large language models by fine-tuning on model-generated samples, effectively addressing precision-exploration trade-offs in decoding.

Apple · Apr 1, 2026
Submitted by quao627

CORAL: Towards Autonomous Multi-Agent Evolution for Open-Ended Discovery

Autonomous multi-agent evolution framework enables open-ended discovery through persistent memory, asynchronous execution, and collaborative problem-solving, achieving superior performance on mathematical and optimization tasks.

Submitted by vinthony

CutClaw: Agentic Hours-Long Video Editing via Music Synchronization

CutClaw is an autonomous multi-agent framework that uses multimodal language models to automatically edit long video footage into rhythmic, narratively consistent short videos with synchronized audio and visual elements.

Submitted by akhaliq

LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models

LlamaFactory is a unified framework enabling efficient fine-tuning of large language models across various tasks using a web-based user interface.

5 authors · Mar 20, 2024

OmniFlatten: An End-to-end GPT Model for Seamless Voice Conversation

A novel GPT-based model, OmniFlatten, enables real-time natural full-duplex spoken dialogue through a multi-stage post-training technique that integrates speech and text without altering the original model's architecture.

9 authors · Oct 23, 2024
Submitted by andito

SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion

SmolDocling is a compact vision-language model that performs end-to-end document conversion with robust performance across various document types using 256M parameters and a new markup format.

IBM Granite · Mar 14, 2025
Submitted by Jiabin99

MetaChain: A Fully-Automated and Zero-Code Framework for LLM Agents

MetaChain, a fully-automated natural language-based framework, enables non-technical users to create and deploy LLM agents efficiently, demonstrating superior performance on multi-agent tasks and retrieval-augmented generation.

3 authors · Feb 9, 2025
Submitted by taesiri

Memory Intelligence Agent

Memory Intelligence Agent framework integrates non-parametric and parametric memory systems with reinforcement learning to enable efficient reasoning and autonomous evolution in open-world environments.

9 authors · Apr 6, 2026
Submitted by Dongchao

HeartMuLa: A Family of Open Sourced Music Foundation Models

A suite of open-source music foundation models is introduced, featuring components for audio-text alignment, lyric recognition, music coding, and large language model-based song generation with controllable attributes and scalable parameterization.

28 authors · Jan 15, 2026
Submitted by jianchen0311

DFlash: Block Diffusion for Flash Speculative Decoding

DFlash is a speculative decoding framework that uses a lightweight block diffusion model for parallel token drafting, achieving significant speedup over existing autoregressive methods while maintaining high-quality outputs.

Z Lab · Feb 5, 2026
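Speculative decoding, as the DFlash summary describes it, drafts several tokens cheaply in parallel and then verifies them with the target model, keeping the longest agreeing prefix. A minimal greedy-verification sketch (`draft_fn` and `target_fn` are hypothetical stand-ins, not the DFlash implementation):

```python
# Minimal greedy speculative decoding step (illustrative). draft_fn stands in
# for a cheap parallel drafter (e.g. a block diffusion model); target_fn for
# the target model's greedy next-token prediction.

from typing import Callable, List

def speculative_step(
    prefix: List[int],
    draft_fn: Callable[[List[int], int], List[int]],  # drafts k tokens at once
    target_fn: Callable[[List[int]], int],            # greedy next token
    k: int = 4,
) -> List[int]:
    """Accept the longest draft prefix the target model agrees with,
    then append one token from the target itself."""
    draft = draft_fn(prefix, k)
    accepted: List[int] = []
    for tok in draft:
        expected = target_fn(prefix + accepted)
        if tok != expected:
            accepted.append(expected)  # fall back to the target's own token
            return prefix + accepted
        accepted.append(tok)           # draft token verified
    # All k draft tokens accepted; the target still contributes one more.
    accepted.append(target_fn(prefix + accepted))
    return prefix + accepted
```

In practice the per-token verification calls are batched into a single target-model forward pass; that amortization, not the loop above, is where the speedup comes from.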
Submitted by ethanchern

Speed by Simplicity: A Single-Stream Architecture for Fast Audio-Video Generative Foundation Model

daVinci-MagiHuman is an open-source audio-video generative model that synchronizes text, video, and audio through a single-stream Transformer architecture, achieving high-quality human-centric content generation with efficient inference capabilities.

45 authors · Mar 23, 2026
Submitted by groundhogLLM

Beyond Accuracy: Unveiling Inefficiency Patterns in Tool-Integrated Reasoning

Researchers introduce PTE (Prefill Token Equivalents), a hardware-aware metric for measuring efficiency in Tool-Integrated Reasoning scenarios, which better correlates with actual inference latency than traditional token counts by accounting for KV-Cache inefficiencies and long tool responses.

Submitted by yxl66666

The Latent Space: Foundation, Evolution, Mechanism, Ability, and Outlook

Latent space is emerging as a fundamental computational substrate for language-based models, offering advantages over explicit token-level approaches through continuous representation that mitigates linguistic redundancy and sequential inefficiency.

37 authors · Apr 2, 2026

AutoFigure-Edit: Generating Editable Scientific Illustration

AutoFigure-Edit is an end-to-end system that generates editable scientific illustrations from text descriptions and reference images, supporting flexible style adaptation and efficient refinement.

Westlake University · Mar 3, 2026

LeWorldModel: Stable End-to-End Joint-Embedding Predictive Architecture from Pixels

LeWorldModel presents a stable end-to-end JEPA framework that trains efficiently from raw pixels using minimal loss terms while maintaining competitive performance in control tasks and encoding meaningful physical structures.

galilai-group · Mar 13, 2026
Submitted by jinpeng0528

AURA: Always-On Understanding and Real-Time Assistance via Video Streams

AURA is an end-to-end streaming visual interaction framework that enables continuous video stream processing with real-time question answering and proactive responses through integrated context management and optimized deployment.

12 authors · Apr 5, 2026

Efficient Universal Perception Encoder

Efficient Universal Perception Encoder (EUPE) improves edge device performance by distilling knowledge from multiple vision encoders through a two-stage scaling approach, achieving superior representation quality compared to previous methods.

11 authors · Mar 23, 2026
Submitted by Jeff-Wang

GigaWorld-Policy: An Efficient Action-Centered World-Action Model

GigaWorld-Policy introduces an action-centered World-Action Model that improves robotic policy learning by decoupling visual and motion representations, enabling faster inference and better task performance through dual supervision from action prediction and video generation.

GigaAI · Mar 18, 2026
Submitted by daixufang

Agent Lightning: Train ANY AI Agents with Reinforcement Learning

Agent Lightning is a flexible RL framework for training LLMs in various agents, using a hierarchical RL algorithm and decoupling execution from training to handle complex interactions.

8 authors · Aug 5, 2025
Submitted by taesiri

In-Place Test-Time Training

In-Place Test-Time Training enables large language models to adapt parameters during inference by modifying the final projection matrix in MLP blocks with a task-aligned objective and efficient update mechanism.

7 authors · Apr 7, 2026

Zep: A Temporal Knowledge Graph Architecture for Agent Memory

Zep, a memory layer service, outperforms MemGPT in the DMR benchmark and LongMemEval by excelling in dynamic knowledge integration and temporal reasoning, critical for enterprise use cases.

5 authors · Jan 20, 2025

Self-Supervised Prompt Optimization

A self-supervised framework optimizes prompts for both closed and open-ended tasks by evaluating LLM outputs without external references, reducing costs and required data.

9 authors · Feb 7, 2025