| id | number | forum | title | abstract | content_TLDR | content_keywords | content_pdf | content_primary_area | content_supplementary_material | signatures |
|---|---|---|---|---|---|---|---|---|---|---|
FPLNSx1jmL | 25,649 | FPLNSx1jmL | Improving Developer Emotion Classification via LLM-Based Augmentation | Detecting developer emotion in the informative data stream of technical commit messages is a critical task for gauging signals of burnout or bug introduction, yet it exposes a significant failure point for large language models whose emotion taxonomies are ill-suited for technical contexts in the field of software engi... | This study introduces a 2,000-message GitHub commit dataset; CommiTune (LLaMA augmentation + CodeBERT) boosts technical emotion detection from Macro-F1 0.13–0.21 to ≈0.82. | ['Emotion Detection; Commit Messages; Software Engineering NLP; Domain Adaptation; Large Language Models; Data Augmentation'] | /pdf/8b0aaa58417c15b0cc14a4e9d4ec20929231ca02.pdf | foundation or frontier models, including LLMs | /attachment/f42e7ff255d681f154b4dfcb9fe260170bcd373c.zip | ['ICLR.cc/2026/Conference/Submission25649/Authors'] |
y5rLR9xZpn | 25,645 | y5rLR9xZpn | Quantum-Inspired Image Encodings for Financial Time-Series Forecasting | This study proposes a quantum-inspired methodology that transforms time-series data into complex-valued image representations for prediction. Unlike classical encodings such as the Gramian Angular Field (GAF), Recurrence Plot (RP), and Markov Transition Field (MTF), which rely on additive pairwise relations, our approa... | We propose quantum state–based image encodings for time series that capture both probabilistic amplitudes and dynamic phases, yielding superior forecasting performance over classical methods. | ['Time-series Classification', 'Image Encoding', 'Quantum Physics', 'Convolutional Neural Networks (CNN)', 'Financial Forecasting'] | /pdf/6c811a881347448ec8bb614191ea5deae32423ae.pdf | other topics in machine learning (i.e., none of the above) | null | ['ICLR.cc/2026/Conference/Submission25645/Authors'] |
kiVIVBmMTP | 25,642 | kiVIVBmMTP | SAVIOR: Sample-efficient Alignment of Vision-Language Models for OCR Representation | Modern enterprises are increasingly adopting business document understanding workflows that leverage Vision Language Models (VLMs) for optical character recognition (OCR), given their ability to jointly model layout and language. However, deployment is impeded by data and compute barriers: large enterprises face de-ide... | null | ['Finance', 'Document Processing', 'Optical Character Recognition', 'Semi-structured documents', 'Vision Language Models'] | /pdf/63700f6bfe57db11c670b4f6e6048349e7bee5df.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission25642/Authors'] |
IKJyRyHpHV | 25,639 | IKJyRyHpHV | Revisiting Multilingual Data Mixtures in Language Model Pretraining | The impact of different multilingual data mixtures in pretraining large language models (LLMs) has been a topic of ongoing debate, often raising concerns about potential trade-offs between language coverage and model performance (i.e., the curse of multilinguality). In this work, we investigate these assumptions by tra... | null | ['Multilingual LLMs', 'multilinguality', 'cross-lingual transfer', 'Multilingual Data Mixture'] | /pdf/e0fea52c45b605ba416d607f3f96c16acaa3dd88.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission25639/Authors'] |
GGg2BmcBEp | 25,633 | GGg2BmcBEp | One-Shot Style Personalization for RL Agents via Latent Discriminator | Reinforcement learning (RL) has achieved remarkable success in training agents with high-performing policies, and recent works have begun to address the critical challenge of aligning such policies with human preferences. While these efforts have shown promise, most approaches rely on large-scale data and do not genera... | One-shot style alignment for RL agents via latent inference from a single trajectory and reward-guided finetuning, enabling controllable and generalizable behavior | ['Reinforcement Learning', 'Agent Alignment'] | /pdf/473ff82716b5f4f74429f88048f7e99c922f25e9.pdf | reinforcement learning | null | ['ICLR.cc/2026/Conference/Submission25633/Authors'] |
GBlHo6mPIW | 25,632 | GBlHo6mPIW | InfiAgent: Self-Evolving Pyramid Agent Framework for Infinite Scenarios | Large Language Model (LLM) agents have demonstrated remarkable capabilities in organizing and executing complex tasks, and many such agents are now widely used in various application scenarios. However, developing these agents requires carefully designed workflows, carefully crafted prompts, and iterative tuning, which... | A novel multi-agent system framework that can be easily extended to many scenarios, can design agents according to tasks, and can self-evolve | ['LLM Agents', 'Large Language Models'] | /pdf/a98f8f6f24edaca39ad259787be5d3737d439ffc.pdf | applications to robotics, autonomy, planning | /attachment/4b86d2928d3d704b59a5ae0b2e2f95385ad72426.zip | ['ICLR.cc/2026/Conference/Submission25632/Authors'] |
NWoHQbALl4 | 25,628 | NWoHQbALl4 | Compositional HyperModules for Few-Shot Code Adaptation in Meta-Reinforcement Learning | We propose Compositional HyperModules (CHM), a novel architectural framework for few-shot code adaptation in meta-reinforcement learning (Meta-RL) that dynamically composes reusable neural modules to capture the syntactic and semantic structure of code. Existing Meta-RL methods often have a difficult... | null | ['Meta-Reinforcement Learning'] | /pdf/770ffe4d1685855b8965e603c13a7584fd29021d.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25628/Authors'] |
J0eNXpnrc7 | 25,627 | J0eNXpnrc7 | All-in-One: Boosting Basic Capabilities in one Omni-MLLM to Enhance Movie Understanding | Movie understanding is still challenging, as a movie involves many characters with complex relationships and is edited with artistic language to appeal to audiences, aspects that are neglected in current multimodal large language models (MLLMs). Only a few previous works propose ideas to identify characters and integrate I... | An end-to-end omni-multimodal language model for id-aware video understanding | ['omni-multimodal large language models', 'identity-aware', 'video understanding'] | /pdf/c1c1f799d21fb090a70853b9b03f7eeb499b7468.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission25627/Authors'] |
VBrswK6tqS | 25,623 | VBrswK6tqS | Modality-Swap Distillation: Rendering Textual Reasoning into Visual Supervision | Visual reasoning over structured data such as tables is a critical capability for modern vision-language models (VLMs), yet current benchmarks remain limited in scale, diversity, or reasoning depth, especially when it comes to rendered table images. Addressing this gap, we introduce **Visual-TableQA**, a large-sca... | null | ['Reasoning+LLM'] | /pdf/197ef6025775e057224b91ee9813571035ecd3e5.pdf | transfer learning, meta learning, and lifelong learning | /attachment/69a2bd72eb9c39697441db8d9f09d53ad9c1532b.pdf | ['ICLR.cc/2026/Conference/Submission25623/Authors'] |
0xHWd4CUaX | 25,618 | 0xHWd4CUaX | Contrastive Code Graph Embeddings for Reinforcement Learning-Based Automated Code Refactoring | We propose a novel reinforcement learning (RL) framework for automated code refactoring that uses contrastive pre-trained code graph embeddings to overcome the limitations of the traditional heuristic-based reward functions. The key challenge is balancing the implementation of syntactic improvements - while maintaining... | null | ['Code Refactoring'] | /pdf/04fca20531f3055d6a923ceecac8bb98608a947e.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25618/Authors'] |
FTQZvzRKEg | 25,616 | FTQZvzRKEg | Model-Heterogeneous Federated Prompt Learning | Large-scale vision-language models (VLMs) have shown remarkable transferability across tasks, and their integration into federated learning (FL) frameworks offers promising privacy-preserving learning capabilities. Recent advances in federated prompt learning (FPL) leverage prompt tuning to reduce computational and com... | null | ['federated learning', 'prompt learning', 'heterogeneous model', 'vision-language models'] | /pdf/150cbd9a2762223c29f23524e22c2be431e85d1c.pdf | learning theory | null | ['ICLR.cc/2026/Conference/Submission25616/Authors'] |
9aPd1yEGa0 | 25,615 | 9aPd1yEGa0 | Robust Prediction-Powered Inference under Data Corruption | This paper proposes a robust prediction-powered semi-supervised statistical learning and inference framework. Existing prediction-powered inference (PPI) methods use pre-trained machine-learning models to impute unlabeled samples and calibrated the imputation bias, based on the assumption of covariate homogeneity betwe... | This paper proposes a robust prediction-powered semi-supervised statistical learning and inference framework under data corruption. | ['semi-supervised learning', 'prediction-powered inference', 'robustness'] | /pdf/3bc53b241b35fb12e7647819fe93e65051eb7a69.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | null | ['ICLR.cc/2026/Conference/Submission25615/Authors'] |
7wjqoJj62s | 25,613 | 7wjqoJj62s | Soft Non-Diagonality Penalty Enables Latent Space-Level Interpretability of pLM at No Performance Cost | Emergence of large scale protein language models (pLMs) has led to significant performance gains in predictive protein modeling. However, it comes at a high price of interpretability, and efforts to push representation learning towards explainable feature spaces remain scarce. The prevailing use of domain-agnostic and ... | null | ['Peptides', 'representation learning', 'contrastive learning'] | /pdf/3aa973ba48cc99f7500cd1dd0943ca2055b6a549.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | null | ['ICLR.cc/2026/Conference/Submission25613/Authors'] |
WtbIU6tDc3 | 25,612 | WtbIU6tDc3 | Adaptive Mixing of Non-Invariant Information for Generalized Diffusion Policy | Diffusion policies (DP) have emerged as a leading paradigm for learning-based robotic manipulation, offering temporally coherent action synthesis from high-dimensional observations. However, despite their centrality to downstream tasks, DPs exhibit fragile generalization capabilities. Minor variations in observations,... | null | ['Diffusion Policy', 'Manipulation'] | /pdf/65c638b4a949f62f9926298bab9be81470a205a8.pdf | applications to robotics, autonomy, planning | /attachment/4fc49817f1f140ae1db5143f550f572437d6ccd5.zip | ['ICLR.cc/2026/Conference/Submission25612/Authors'] |
1nbTSuIdQ7 | 25,609 | 1nbTSuIdQ7 | Structure-Aware Bipartite Representations for Efficient MILP Branching | Efficient branching variable selection is pivotal to the performance of Branch-and-Bound (B&B) algorithms in Mixed Integer Linear Programming (MILP). Despite advances in traditional heuristics and graph-based learning methods, these approaches often fail to exploit the latent block structures inherent in many MILP pro... | null | ['Combinatorial optimization', 'Mixed Integer Linear Program', 'Branch And Bound', 'Block Structure', 'Graph Neural Networks'] | /pdf/84f2f551a2a4198e6e4ca14c581b9d2019a5ee47.pdf | optimization | null | ['ICLR.cc/2026/Conference/Submission25609/Authors'] |
kjVBqkLrJa | 25,608 | kjVBqkLrJa | Style2Shape: Image Style Guided 3D Shape Material Generation | This paper presents Style2Shape, a novel framework for generating physically-based rendering (PBR) materials for 3D models from a single reference image. Unlike existing methods limited by the diversity of procedural material libraries or producing non-editable representations, our approach combines procedural materia... | null | ['Material Generation; Differentiable Rendering; Procedural Materials; Appearance Transfer; Physically-Based Rendering'] | /pdf/381919a74bdc1f7e494544cf0e4f4e72544557fa.pdf | applications to computer vision, audio, language, and other modalities | /attachment/a6fa5e25a86596f96c1fd0e111a426eac7d34bbf.zip | ['ICLR.cc/2026/Conference/Submission25608/Authors'] |
6T3wJQhvc3 | 25,607 | 6T3wJQhvc3 | Task Tokens: A Flexible Approach to Adapting Behavior Foundation Models | Recent advancements in imitation learning for robotic control have led to transformer-based behavior foundation models (BFMs) that enable multi-modal, human-like control for humanoid agents. These models generate solutions when conditioned on high-level goals or prompts, for example, walking to a coordinate when condit... | Task Tokens enable task-specific adaptation of behavior foundation models by learning a reinforcement-trained encoder, enhancing control without compromising generalization. | ['Reinforcement Learning', 'Hierarchial Reinforcement Learning', 'Behavior Foundation Models', 'Humanoid Control'] | /pdf/6b4310625f7f7a84e7732e6b38ca49de469be831.pdf | reinforcement learning | /attachment/2b5544768859244e75debfb0bcd1a1b20c092a6f.zip | ['ICLR.cc/2026/Conference/Submission25607/Authors'] |
9ITquDr1G1 | 25,606 | 9ITquDr1G1 | Contrastive-Aligned Knowledge Distillation for Collaborative Code Completion via Multi-Agent Reinforcement Learning | We introduce a novel multi-agent reinforcement learning (MARL) framework for code completion in a collaborative manner, and address the important issue for successful collaboration in code completion: balancing semantic alignment and specialized expertise among the agents. The proposed method incorporates Contrastive A... | null | ['Contrastive-Aligned Knowledge'] | /pdf/eeafd4319a9ce1ad4fb803ca5798c5d18b4eaab9.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25606/Authors'] |
VYLwMvhdXI | 25,601 | VYLwMvhdXI | Scaling Laws for Generative Reward Models | We study the scaling behavior of generative reward models (GenRMs) for reinforcement learning from AI feedback (RLAIF) when used as drop-in replacements for Bradley-Terry models to optimize policies. Building on established scaling laws for reward model overoptimization, we investigate whether GenRMs, particularly thos... | First end-to-end pipeline deploying trained GenRMs for online policy optimization, investigating scaling laws across model sizes, training budgets, and chain-of-thought reasoning | ['Reinforcement Learning From AI Feedback', 'RLHF', 'Reward Hacking'] | /pdf/be30b03ba6b874acefb6ce2e763bbcf6b92a6ded.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission25601/Authors'] |
TjF9WLcu8o | 25,599 | TjF9WLcu8o | Contrastive-Online-Meta (COM): A Dynamic Adaptation Mechanism for Instruction-Tuned CodeLLMs | We propose Contrastive-Online-Meta (COM), a dynamic adaptation framework for instruction-tuned CodeLLMs that addresses the issues of catastrophic forgetting and noisy feedback at the time of deployment. The framework combines contrastive pre-training and online meta-learning to separate the task-invariant represe... | null | ['Instruction-Tuned CodeLLMs'] | /pdf/4de9e49419c7f6d5b1e39417896aecd9ce88ec85.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25599/Authors'] |
wiNlIYqe6u | 25,598 | wiNlIYqe6u | FedPAC: Consistent Representation Learning for Federated Unsupervised Learning under Data Heterogeneity | Federated unsupervised learning enables collaborative model training on decentralized unlabeled data but faces critical challenges under data heterogeneity, which often leads to representation collapse from weak supervisory signals and semantic misalignment across clients. Without a consistent semantic structure constr... | null | ['federated learning', 'unsupervised representation learning', 'prototype learning'] | /pdf/4a2eef703cdc74532207e7f9a93554c1190c6ebb.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | null | ['ICLR.cc/2026/Conference/Submission25598/Authors'] |
uKrcWZ2V0F | 25,593 | uKrcWZ2V0F | Training as Computation: A Resource-Bounded Theory of Continual Self-Play Learning | We study *training as computation* in a continual self-play setting, where a single reasoning model proposes tasks, solves them, and updates itself using verifiable signals from an external executor-verifier interface. Rather than focusing on one-shot models, we analyze the *process-level* dynamics of learni... | null | ['Self-Play Learning'] | /pdf/cf04e1b3dfd9e5eac43b68eda39761846e95338c.pdf | reinforcement learning | null | ['ICLR.cc/2026/Conference/Submission25593/Authors'] |
cd5bhCHbMe | 25,592 | cd5bhCHbMe | hDRIVE: HDR Image Visual Evaluation Metric for SDR to HDR Upconversion Quality Assessment | HDR displays are becoming increasingly common on both TVs and mobile devices, which requires adapting existing legacy SDR content to HDR screens. Several algorithms have been developed for SDR-to-HDR upconversion, also known as Inverse Tone Mapping (ITM). However, there is still a lack of reliable metrics for assessing the qual... | null | ['HDR', 'SDR', 'inverse tone mapping', 'video quality assessment'] | /pdf/ca12c73c3f295e0ef0523c2e529d93d673d1aa75.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission25592/Authors'] |
0ZRne2Nt8t | 25,589 | 0ZRne2Nt8t | MAIG: Multi-agent system for Academic Illustration Generation based on deep search and reflection | While text-to-image models have revolutionized creative content generation, they fall short in the domain of academic illustration, which demands stringent scientific accuracy and informational completeness, creating a significant bottleneck in automated scientific communication. Existing models often produce illustrat... | null | ['Image Generation', 'Multi-Agent', 'Academic Illustration'] | /pdf/12568a349c67261d96ca8ec782fbbeb5ad7e7868.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission25589/Authors'] |
u6JLh0BO5h | 25,587 | u6JLh0BO5h | Jet Expansions: Restructuring LLM Computation for Model Inspection | Large language models are becoming general knowledge engines for diverse applications. However, their computations are deeply entangled after training, resisting modularization which complicates interpretability, auditing, and long-term maintenance. We introduce Jet Expansions, a framework for expanding computational g... | After training, LLM computations become deeply entangled. For interpretability, we introduce a knife-like operator that cuts through this entanglement, separating the part we care about from the remainder and enabling scalable model inspection. | ['transformer', 'decomposition', 'interpretability', 'neural-symbolic', 'n-grams', 'XAI'] | /pdf/8efb3befff58c10df3a363b56d398c90a6cb45f4.pdf | interpretability and explainable AI | /attachment/d1fcedecb740aa500679647c78d3ceb61fd9ba56.pdf | ['ICLR.cc/2026/Conference/Submission25587/Authors'] |
5sEj8EL8J4 | 25,586 | 5sEj8EL8J4 | Cross-Modal Syntax-NL Attention for Multi-Agent Reinforcement Learning in Collaborative Coding | We propose a new communication protocol for multi-agent reinforcement learning (MARL) in collaborative coding, where agents must coordinate across two modalities (structured code syntax and natural language (NL) messages). Conventional approaches treat these modalities separately, resulting in suboptimal alignment betwee... | null | ['Syntax-NL Attention'] | /pdf/8a71ac8f7fa5f22c51930fd557350c8b8411dd47.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25586/Authors'] |
fjH9raahDC | 25,585 | fjH9raahDC | Less is More: Improving Molecular Force Fields with Minimal Temporal Information | Accurate prediction of energy and forces for 3D molecular systems is one of fundamental challenges at the core of AI for Science applications. Many powerful and data-efficient neural networks predict molecular energies and forces from single atomic configurations. However, one crucial aspect of the data generation proc... | We show that using an auxiliary loss on just two consecutive molecular dynamics frames is an optimal and counter-intuitive strategy for significantly improving the accuracy of neural network | ['Molecular prediction', 'AI for Science', 'graph neural networks', 'computational physics', 'Temporal information'] | /pdf/d47bbb0d1309a01ec8c285d3f309871166bc45b6.pdf | learning on graphs and other geometries & topologies | null | ['ICLR.cc/2026/Conference/Submission25585/Authors'] |
P8EhH6ypA5 | 25,582 | P8EhH6ypA5 | Silver Stepsize for Faster Zeroth-Order Optimization | We study gradient-free minimization of smooth convex functions through Silver stepsizes (a non-monotone, 2-adic schedule that accelerates gradient descent) and show how to compose it with two-point zeroth-order (ZO) estimators on a smoothed objective. We apply Silver's multi-step Lyapunov analysis to smoothed objectives ... | null | ['Zeroth-Order Optimization', 'Silver Stepsize', 'Gradient-Free'] | /pdf/7d21d0aa5dab5415004d9ad097cad12d34bc893c.pdf | optimization | null | ['ICLR.cc/2026/Conference/Submission25582/Authors'] |
rNdU8XkCsk | 25,581 | rNdU8XkCsk | Additive Coupling of Liquid Neural Networks and Modern Hopfield Layer for Regression | Regression tasks on complex datasets often involve diverse feature interactions, long-range dependencies, and structured patterns that must be recalled across examples for accurate prediction. Conventional models, such as MLPs, tree ensembles, or standard continuous-time networks, struggle to maintain predictions and st... | null | ['liquid neural networks', 'modern hopfield network', 'biologically inspired neural models'] | /pdf/83e06c9dd1ba7233e68a3ef422b5a36ef80da8b9.pdf | learning on time series and dynamical systems | null | ['ICLR.cc/2026/Conference/Submission25581/Authors'] |
uq6UWRgzMr | 25,580 | uq6UWRgzMr | Neuron-Aware Data Selection in Instruction Tuning for Large Language Models | Instruction Tuning (IT) has been proven to be an effective approach to unlock the powerful capabilities of large language models (LLMs). Recent studies indicate that excessive IT data can degrade LLMs performance, while carefully selecting a small subset of high-quality IT data can significantly enhance their capabili... | NAIT is an efficient algorithm that selects high-quality instruction tuning data by analyzing neuron activation pattern similarity, enhancing large language models' performance and general capabilities. | ['Instruction Tuning', 'Data Selection', 'Large Language Models'] | /pdf/18bd38fd6481cddfc7387a35e40feda9a8a92462.pdf | interpretability and explainable AI | /attachment/2601be187f12cc1ece53c0f8c3f523866a3b2a69.zip | ['ICLR.cc/2026/Conference/Submission25580/Authors'] |
dpHw6PFKio | 25,579 | dpHw6PFKio | GUIrilla: A Scalable Framework for Automated Desktop UI Exploration | Autonomous agents capable of operating complex graphical user interfaces (GUIs) have the potential to transform desktop automation. While recent advances in large language models (LLMs) have significantly improved UI understanding, navigating full-window, multi-application desktop environments remains a major challenge... | We present GUIrilla, an automated framework for macOS GUI exploration, and GUIrilla-Task, the first large-scale macOS dataset (27,171 tasks, 1,108 apps) pairing GUI screenshots with detailed accessibility metadata. | ['Multimodal Autonomous Agents', 'GUI Automation', 'Task Collection Framework', 'macOS Task Benchmark', 'UI Relationship Graphs'] | /pdf/f554c631613faba03725824ba5ef437a37bd4bab.pdf | datasets and benchmarks | null | ['ICLR.cc/2026/Conference/Submission25579/Authors'] |
PLO1gjCMk5 | 25,577 | PLO1gjCMk5 | Diffusion-Advection Transformer for Air Quality Prediction | Air pollution is a major concern for public health and the environment globally, which highlights the need for effective monitoring and predictive modeling to mitigate its impact. Although data-driven models have shown promising results in air quality prediction, they still struggle to model the underlying physical mec... | A physics-informed Transformer that learns temperature-conditioned diffusion and wind-driven advection to improve long-horizon PM2.5 forecasting across regions. | ['Spatiotemporal Forecasting', 'Air Quality', 'Physics-informed Learning', 'Transformers', 'Diffusion and Advection'] | /pdf/e159a98d7df32c21638799cb787ba60fd05cb848.pdf | learning on time series and dynamical systems | /attachment/127ec3d3ecc63d58a3d88e1354dd42e816827af4.zip | ['ICLR.cc/2026/Conference/Submission25577/Authors'] |
lyxHZSCX6o | 25,575 | lyxHZSCX6o | Curricular Adversarial Training for Robust Code Generation via Hierarchical Reinforcement Learning | In this paper, we propose a novel approach to boost the robustness of code generation models via curricular adversarial training driven by hierarchical reinforcement learning. Existing code generation systems are vulnerable to adversarial perturbations, so we propose a two-tiered approach in which a high-level curr... | null | ['Robust Code Generation'] | /pdf/4efcab801bf6d8cd92d19c3e6927809778a1f342.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25575/Authors'] |
vpO8n9AqEG | 25,573 | vpO8n9AqEG | Quadratic Direct Forecast for Training Multi-Step Time-Series Forecast Models | The design of training objective is central to training time-series forecasting models. Existing training objectives such as mean squared error mostly treat each future step as an independent, equally weighted task, which we found leading to the following two issues: (1) overlook the *label autocorrelation effect* amon... | null | ['Time-series', 'time-series forecast'] | /pdf/f07cf480eac8f65bcda30bf9b63f312aabda2853.pdf | learning on time series and dynamical systems | null | ['ICLR.cc/2026/Conference/Submission25573/Authors'] |
fOmX9aaQD3 | 25,572 | fOmX9aaQD3 | Triple-S: A Sticker Semantic Similarity Benchmark with General Sticker Encoder | Stickers have become a popular form of visual communication, yet understanding their semantic relationships remains challenging due to their highly diverse and symbolic content. In this work, we formally define the Sticker Semantic Similarity task and introduce Triple-S, the first benchmark for this task, consisting of... | null | ['dataset', 'benchmark', 'sticker', 'sticker semantic', 'general sticker encoder'] | /pdf/f05b078b8a49011f2fccd38e4ea2992d12ebfb96.pdf | datasets and benchmarks | null | ['ICLR.cc/2026/Conference/Submission25572/Authors'] |
ogKDAjoyy8 | 25,570 | ogKDAjoyy8 | Unsupervised Dynamic Graph Multi-Model Representation Learning for Temporal Patterns Discovery: Uncovering Parkinson’s Disease Stages Using Cerebrospinal Fluid Longitudinal Profiles | Existing dynamic graph learning methods typically encode node features at each time step by leveraging local (spatial/ structural) and/or short-range temporal dependencies. In contrast, we propose a novel multi-model framework that generates a representation for each node at every graph snapshot, where each representat... | We created a multi-model graph learning method that integrates node representations across graph snapshots, capturing temporal trajectories and spatial context. | ['Learning Representation', 'Dynamic Graphs', 'Parkinson’s Disease', 'deep learning', 'unsupervised learning'] | /pdf/60ad8270ea31ff16a1727f611b05c327014ea5cf.pdf | learning on graphs and other geometries & topologies | null | ['ICLR.cc/2026/Conference/Submission25570/Authors'] |
3AoeNlw5MF | 25,569 | 3AoeNlw5MF | D-MOE-EVAL: A Dynamic Mixture Of Experts Framework For Human-Aligned Nuanced Large Language Model Evaluation | The growing paradigm of using Large Language Models (LLMs) as evaluators, known as LLM-as-a-Judge, offers significant scalability for automated assessment. However, this approach suffers from certain limitations. The different architectures and training of LLMs lead them to develop varied expertise, making any sing... | This paper proposes a scenario-aware, multi-dimensional LLM evaluation framework using a MoE approach, across multiple domains and profiling dimension-specific experts, deliberating through a Panel of Judges ensuring human-aligned nuanced evaluation. | ['Large Language Models', 'Fine Grained Evaluation', 'Multi-Dimensional Evaluation', 'Mixture of Experts', 'Scenario Aware Evaluation'] | /pdf/dce8069e973deb65a4b5cf1276bbc2a5d5938dac.pdf | datasets and benchmarks | /attachment/271e70f1decb97e57b9ae42d23e1999a6d9da4e7.zip | ['ICLR.cc/2026/Conference/Submission25569/Authors'] |
2ePvhEKxQj | 25,568 | 2ePvhEKxQj | Causal Reasoning Favors Encoders: Limits of Decoder-Only Models | In-context learning (ICL) underpins recent advances in large language models (LLMs), yet its role in causal reasoning remains unclear. Causal reasoning demands multi-hop composition and strict conjunctive control, and reliance on spurious lexical relations of the input could provide misleading results. We hypothesize ... | null | ['Causal Reasoning', 'LLM', 'In-Context Learning'] | /pdf/8ed3e4c965800fa0963107e1926208b7053d565e.pdf | causal reasoning | null | ['ICLR.cc/2026/Conference/Submission25568/Authors'] |
ax6oQWQmeR | 25,567 | ax6oQWQmeR | Hierarchies over Pixels: A Benchmark for Cognitive Geospatial Reasoning for Agents | Beyond perception, reasoning is crucial in remote sensing, enabling advanced interpretation, inference, and decision-making. Recent advances in large language models (LLMs) have given rise to tool-augmented agents that enhance reasoning by leveraging external tools for complex analytical tasks. However, existing resear... | We introduce GeoHOP, a benchmark for hierarchical geospatial reasoning in remote sensing, and GeoPlanner, a tool-augmented LLM agent that excels in structured, fault-tolerant geospatial analysis. | ['Benchmark evaluation', 'Remote sensing imagery', 'Tool-augmented LLMs'] | /pdf/a38d7f31a32b845fd62e962f10c15bd934ea0ce2.pdf | datasets and benchmarks | null | ['ICLR.cc/2026/Conference/Submission25567/Authors'] |
3KqASrNJGK | 25,566 | 3KqASrNJGK | But what is your honest answer? Aiding LLM-judges with honest alternatives using steering vectors | Detecting subtle forms of dishonesty like sycophancy and manipulation in Large Language Models (LLMs) remains challenging for both humans and automated evaluators, as these behaviors often appear through small biases rather than clear false statements. We introduce Judge Using Safety-Steered Alternatives (JUSSA), a nov... | We use steering vectors to obtain alternative, honest responses, helping external LLM-judges detect subtle instances of dishonest or manipulative behavior. | ['LLM-as-a-judge', 'steering vectors', 'safety', 'manipulation'] | /pdf/2a65cb9fcf2da8afeaac51846dbfb003661d61b9.pdf | alignment, fairness, safety, privacy, and societal considerations | null | ['ICLR.cc/2026/Conference/Submission25566/Authors'] |
eIlgfA962J | 25,565 | eIlgfA962J | LaMbDA: Local Latent Embedding Alignment for Cross-modal Time-Series Diffusion | We present a mutually aligned diffusion framework for cross‑modal time‑series generation that treats paired modalities X and Y as complementary observations of a shared latent dynamical process and couples their denoising trajectories through stepwise alignment of local latent embeddings. We instantiate this as LaMbDA ... | null | ['Cross-modal diffusion', 'multimodal time-series generation', 'local latent alignment'] | /pdf/8cdddd6e50a81b50b7587d7069cb8116486326d1.pdf | learning on time series and dynamical systems | null | ['ICLR.cc/2026/Conference/Submission25565/Authors'] |
Q40JCBKW1q | 25,563 | Q40JCBKW1q | Curriculum-Based Termination Critic for Scalable Program Decomposition in Hierarchical Reinforcement Learning | We introduce a Curriculum-Based Termination Critic (CBTC) for hierarchical reinforcement learning (HRL) to solve the problem of program decomposition for scalable programming in complex task environments. Traditional termination critics yet make some static heuristics on the other side that have difficulties to cope w... | null | ['Hierarchical Reinforcement Learning'] | /pdf/54ae6daa233231f1215b7aed36aa928be86ecc13.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25563/Authors']
bkhsrCOZTu | 25,560 | bkhsrCOZTu | Riemannian Geometry: Speech Detection from MEG Brain Signals Towards Non-Invasive BCI | Non-invasive brain--computer interfaces (BCIs) need fast, reliable speech vs.\ non-speech detection from neural time series. We propose a hybrid MEG decoder that fuses a compact temporal CNN with a geometry-aware covariance branch operating on symmetric positive-definite (SPD) sensor--sensor matrices. The CNN is stabil... | Riemannian covariance modeling (SPD) fused with a compact CNN lifts non-invasive MEG speech detection at negligible overhead. | ['Brain-Computer Interface (BCI)', 'Magnetoencephalography (MEG)', 'Riemannian geometry', 'SPD covariance', 'Speech detection'] | /pdf/82cc5de69ce3e378fac7f4322b4dbbbbf0dcdddc.pdf | applications to neuroscience & cognitive science | null | ['ICLR.cc/2026/Conference/Submission25560/Authors'] |
r7OlaSw8xb | 25,559 | r7OlaSw8xb | MCCE: A Framework for Multi-LLM Collaborative Co-Evolution | Multi-objective discrete optimization problems, such as molecular design, pose significant challenges due to their vast and unstructured combinatorial spaces. Traditional evolutionary algorithms often get trapped in local optima, while expert knowledge can provide crucial guidance for accelerating convergence. Large la... | MCCE is a framework for collaboration of large and small language models, combining knowledge-driven exploration with experience-driven learning | ['reinforcement learning', 'large language models', 'model collaboration', 'evolutionary algorithms'] | /pdf/0521c6df0acbd41d9c7c79c8855881f06af7d9fb.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25559/Authors'] |
uuCQJtKMqS | 25,556 | uuCQJtKMqS | AlienLM: Alienization of Language for Privacy-Preserving API Interaction with LLMs | We introduce $\textbf{\textit{AlienLM}}$, a framework that reinterprets encryption as language translation for large language models accessed exclusively through black-box APIs. Existing approaches based on secure inference or differential privacy and federated learning offer limited protection in API-only scenarios. $... | null | ['Encryption', 'Obfuscation', 'LLMs'] | /pdf/0ba8aa4c34e3c77d90efc9a0bcf572ddb3a8dfec.pdf | alignment, fairness, safety, privacy, and societal considerations | /attachment/1cbd73a2f5c259b75a14027365f72be5c57aa2e6.zip | ['ICLR.cc/2026/Conference/Submission25556/Authors']
ozzMu93fxx | 25,555 | ozzMu93fxx | HTR for Russian Empire Period Manuscripts: A Two-Stage Framework with New Annotated Resources | Historical handwritten documents represent a valuable source of information about the language, culture, and society of earlier periods. In the context of globalized scholarship, the development of automatic handwriting recognition tools for a wide range of languages has become increasingly important to ensure broader ... | First general HTR for pre-reform Russian handwriting (pre-1918): a two-stage YOLOv8 line segmenter + TrOCR recognizer that outperforms general-purpose HTR baselines on Imperial-era manuscripts. | ['Handwritten Text Recognition', 'Low-Resource Languages', 'Historical Documents'] | /pdf/8fc2738629b918329dae9d7765f7d31e9cdb2dc7.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission25555/Authors'] |
OqZFfDks0Q | 25,554 | OqZFfDks0Q | Efficient High-Resolution Image Editing with Hallucination-Aware Loss and Adaptive Tiling | High-resolution (4K) image-to-image synthesis has become increasingly important for mobile applications. Existing diffusion models for image editing face significant challenges, in terms of memory and image quality, when deployed on resource-constrained devices. In this paper, we present MobilePicasso, a novel system t... | Resource-efficient on-device, high-resolution (4K) image editing with improved image quality | ['On-device ML', 'Image Editing', 'Diffusion Modeling'] | /pdf/c3cba7a6aa09366ebb5df1c257d5e12dbc2386c3.pdf | infrastructure, software libraries, hardware, systems, etc. | null | ['ICLR.cc/2026/Conference/Submission25554/Authors'] |
BeMtzSH1d7 | 25,553 | BeMtzSH1d7 | Submodular Function Minimization with Dueling Oracle | We consider submodular function minimization using a \textit{dueling oracle}, a noisy pairwise comparison oracle that provides relative feedback on function values between two queried sets. The oracle's responses are governed by a \textit{transfer function}, which characterizes the relationship between differences in f... | We study submodular minimization with a dueling oracle giving noisy pairwise feedback. | ['submodular minimization', 'dueling oracle', 'preference-based optimization'] | /pdf/4ee4e028488c1abefefeb85b43c71ac9871634f2.pdf | optimization | /attachment/c3d2da960c02289167a102dd67fcf63442a64f3e.zip | ['ICLR.cc/2026/Conference/Submission25553/Authors']
NvKvW5k6Kk | 25,552 | NvKvW5k6Kk | Improving Semantic Proximity in English-Centric Information Retrieval through Cross-Lingual Alignment | With the increasing accessibility and utilization of multilingual documents, Cross-Lingual Information Retrieval (CLIR) has emerged as an important research area. Conventionally, CLIR tasks have been conducted under settings where the language of documents differs from that of queries, and typically, the documents are ... | This paper identifies multilingual embedding gaps in cross-lingual retrieval, proposes scenario and Max@R metric, and introduces a training strategy combining JSD and InfoNCE loss, significantly improving cross-lingual alignment with minimal data. | ['Cross-Lingual Alignment', 'Information Retrieval', 'Multilingual Embedding', 'Cross-Lingual Information Retrieval'] | /pdf/a02978c72883874e1d3a1240902f7a15bede4dff.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | null | ['ICLR.cc/2026/Conference/Submission25552/Authors'] |
4Nsx2kZkex | 25,549 | 4Nsx2kZkex | Differentiable Verification for Safe Reinforcement Learning in Verifiable Code Synthesis | We propose a novel framework for safe reinforcement learning (RL) in verifiable code synthesis where formal verification constraints are integrated in the form of differentiable parts as components in the policy optimization loop. Traditional approaches to verification are seen as a post-hoc filter or a black-box rewar... | null | ['Code Synthesis'] | /pdf/b05e3fe0c675f1766a26d8613bf42a3874c05e20.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25549/Authors'] |
HuuCWjlJuQ | 25,544 | HuuCWjlJuQ | Dissecting Mahalanobis: How Feature Geometry and Normalization Shape OOD Detection | Out-of-distribution (OOD) detection is critical for the reliable deployment and better understanding of deep learning models. To address this challenge, various methods relying on Mahalanobis distance were proposed and widely employed. However, the impact of representation geometry and feature normalization on the OOD ... | null | ['Out-of-Distribution Detection', 'Deep Learning', 'Feature Representation', 'Normalization', 'Model Robustness', 'Empirical Study', 'Representation Geometry'] | /pdf/18dccafdbd8986b5ee0bb5b29435ec0be3159508.pdf | interpretability and explainable AI | null | ['ICLR.cc/2026/Conference/Submission25544/Authors'] |
Y5p4voRSBj | 25,543 | Y5p4voRSBj | Learning Flexible Generalization in Video Quality Assessment by Bringing Device and Viewing Condition Distributions | Video quality assessment (VQA) plays a critical role in optimizing video delivery systems. While numerous objective metrics have been proposed to approximate human perception, the perceived quality strongly depends on viewing conditions and display characteristics. Factors such as ambient lighting, display brightness, ... | null | ['video quality assessment', 'subjective dataset', 'robust perceptual models', 'human-centered machine learning'] | /pdf/a9cfa8459ce01fc6a1d4382f3b7b9c3929d4264a.pdf | datasets and benchmarks | null | ['ICLR.cc/2026/Conference/Submission25543/Authors'] |
C35E46tK6T | 25,541 | C35E46tK6T | Timber: Training-free Instruct Model Refining with Base via Effective Rank | Post-training, which elicits a pretrained Base model into the corresponding Instruct model, is widely considered to be superficial. In this work, we first reinforce this hypothesis by providing novel quantitative evidence from the weight level that the effective rank (eRank) remains negligibly changed. However, this su... | We propose a novel Timber, which is a training-free method to enhance Instruct model with paired Base model via effective rank. | ['LLM', 'training-free', 'effective rank'] | /pdf/07b5488f2298a2f1080b6d74c85f46fc16a569d2.pdf | foundation or frontier models, including LLMs | /attachment/4d9d0091320e755b94d61d515520f9765d8dec06.zip | ['ICLR.cc/2026/Conference/Submission25541/Authors']
27fc8hXB5N | 25,540 | 27fc8hXB5N | Geometric Compression in Grokking: The Three-Stage Modular Dynamics of Transformers | A central mystery in deep learning is how generalizable algorithms emerge from the complex dynamics of training. The phenomenon of grokking serves as a canonical example of this puzzle. While mechanistic reverse engineering has successfully identified the final algorithms networks discover, the dynamic process of their... | We show that grokking in Transformers is not monotonic simplification, but a "construct-then-compress" algorithm where the Self-Attention module must first increase its geometric complexity to enable a subsequent, rapid compression in the FFN. | ['Grokking', 'Geometric Deep Learning', 'Transformers'] | /pdf/f0b2ed6db55030eed8cb1e7520d8453a47c3de74.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | null | ['ICLR.cc/2026/Conference/Submission25540/Authors'] |
dcqnFZAczW | 25,534 | dcqnFZAczW | Disentangled Code Embedding for Multi-Task Reinforcement Learning: A Dual-Encoder Approach with Dynamic Gating | We propose a disentangled code embedding module (DCEM) for Multi-task reinforcement learning (RL), which explicitly separates task-agnostic and task-specific features in code representations, to achieve better generalization on diverse tasks. The module makes use of a dual encoder architecture, which uses a transformer... | null | ['Dynamic Gating'] | /pdf/c13199ea2830fe24f6eed681752f7bbea8246410.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25534/Authors'] |
mRjzWksbGR | 25,532 | mRjzWksbGR | AC-ODM: Actor–Critic Online Data Mixing for Sample-Efficient LLM Pretraining | Pretraining data coverage and composition strongly influence the generalization of large language models (LLMs). While recent data-mixing approaches transfer domain weights learned by a small proxy model to a larger one to reduce computational costs and carbon footprint, they are typically static and ignore training dy... | AC-ODM dynamically mixes data with an actor–critic policy, speeding LLM pretraining (up to 71% fewer steps) and improving accuracy (+27.5% MMLU). | ['Large Language Models', 'Online Data Mixing', 'Pretraining data mixing', 'Reinforcement learning'] | /pdf/84f3ade0bea47aadfd5023d2820367a59596b686.pdf | foundation or frontier models, including LLMs | /attachment/4d7aa7aaca23784d5f012a10e0f3b28efb3c7b0a.zip | ['ICLR.cc/2026/Conference/Submission25532/Authors'] |
62DZyNWRgv | 25,531 | 62DZyNWRgv | TAROT: Test-Driven and Capability-Adaptive Curriculum Reinforcement Fine-Tuning for Code Generation | Large Language Models (LLMs) are fundamentally changing the coding paradigm, known as vibe coding, yet synthesizing algorithmically sophisticated and robust code still remains a critical challenge. Incentivizing the deep reasoning capabilities of LLMs is essential to overcome this hurdle. Reinforcement Fine-Tuning (RFT... | null | ['Large Language Model', 'Code Generation', 'Curriculum Learning', 'Reinforcement Learning'] | /pdf/76623d8f515997b9a6bd0b80a2c71cdf8c549efb.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission25531/Authors'] |
DdGCjvrFs0 | 25,530 | DdGCjvrFs0 | BioSensGraph: Predicting Biopolymer Interactions via Knowledge Graph Embedding on a Property Graph of Molecular Entities | Existing biomedical knowledge graphs are primarily geared toward drug repurposing and pathway analysis (gene–disease–drug). For biosensing, however, the primary early-stage task is different: selecting recognition elements (RE) that bind selectively to a given analyte. We present a large-scale biomolecular knowledge gr... | The large-scale biomolecular knowledge graph (1.3M entities, 43M edges) is constructed by integrating heterogeneous data sources and evaluated with PyTorch-BigGraph embeddings for link prediction. | ['knowledge graph', 'knowledge graph embedding', 'link prediction', 'biosensor'] | /pdf/f0587437998062399c9ca7c91feef2cbf4d76824.pdf | applications to physical sciences (physics, chemistry, biology, etc.) | null | ['ICLR.cc/2026/Conference/Submission25530/Authors'] |
FRp8cu1aKF | 25,529 | FRp8cu1aKF | On the (In)Significance of Feature Selection in High-Dimensional Datasets | Feature selection (FS) is assumed to improve predictive performance and highlight meaningful features. We systematically evaluate this across $30$ diverse datasets, including RNA-Seq, mass spectrometry, and imaging. Surprisingly, tiny random subsets of features (0.02-1\%) consistently match or outperform full feature s... | Tiny random subsets of features match or outperform feature-selected sets across 27 out of 30 high-dimensional datasets, challenging conventional feature selection and highlighting the need for rigorous validation. | ['feature selection', 'null hypothesis testing', 'negative result', 'high-dimensional data', 'computational biology'] | /pdf/b1568d4caaea8b0a11da0d7dfc28829f108f3352.pdf | unsupervised, self-supervised, semi-supervised, and supervised representation learning | /attachment/74313f94cb5cba52bc324eefe9dd8552f43dcfa4.pdf | ['ICLR.cc/2026/Conference/Submission25529/Authors'] |
9f1MuExEbF | 25,528 | 9f1MuExEbF | Training-Free Spectral Fingerprints of Voice Processing in Transformers | Different transformer architectures implement identical linguistic computations via distinct connectivity patterns, yielding model imprinted ``computational fingerprints'' detectable through spectral analysis. Using graph signal processing on attention induced token graphs, we track changes in algebraic connectivity (F... | Graph signal processing on attention reveals model-family specific shifts in algebraic connectivity (Fiedler value) for voice alternation across 20 languages, aligning with tokenization effects, behavioral fit, and head-ablation evidence. | ['transformer interpretability', 'graph signal processing', 'attention analysis', 'cross-linguistic analysis', 'spectral connectivity', 'voice alternation', 'tokenizer effects', 'Fiedler eigenvalue'] | /pdf/f1904893b983974d6b271b03218088c12a578d04.pdf | interpretability and explainable AI | null | ['ICLR.cc/2026/Conference/Submission25528/Authors'] |
FgDmszDBKb | 25,527 | FgDmszDBKb | StaQ: a Finite Memory Approach to Discrete Action Policy Mirror Descent | In Reinforcement Learning (RL), regularization with a Kullback-Leibler divergence that penalizes large deviations between successive policies has emerged as a popular tool both in theory and practice. This family of algorithms, often referred to as Policy Mirror Descent (PMD), has the property of averaging out policy e... | We study a variant of PMD that keeps in memory the last M Q-functions, showing that it does not bias convergence and retains the averaging of error effect of PMD | ['reinforcement learning; entropy regularization; policy mirror descent; function approximators'] | /pdf/9278b28f29ef51db9112abf7975a7bc35c5e4ee7.pdf | reinforcement learning | null | ['ICLR.cc/2026/Conference/Submission25527/Authors']
U2j9ZNgHqw | 25,526 | U2j9ZNgHqw | Test-Time Accuracy-Cost Control in Neural Simulators via Recurrent-Depth | Accuracy-cost trade-offs are a fundamental aspect of scientific computing. Classical numerical methods inherently offer such a trade-off: increasing resolution, order, or precision typically yields more accurate solutions at higher computational cost. We introduce \textbf{Recurrent-Depth Simulator} (\textbf{RecurrSim})... | null | ['Neural Simulator', 'Recurrent Depth', 'AI4Simulation'] | /pdf/f138544e39027a33cabb9a34859c50a3a4bfc4a1.pdf | applications to physical sciences (physics, chemistry, biology, etc.) | /attachment/970df3daddd12ce1a9205158ffc5da9c9f6445b1.zip | ['ICLR.cc/2026/Conference/Submission25526/Authors'] |
gOf2ht5O0d | 25,525 | gOf2ht5O0d | Domain-Adaptive Syntax Tree Repair via Cross-Corpus Transfer with Adversarially Aligned Transformers | We propose a domain-adaptive syntax tree repair system that meets the challenges of code correction tasks of cross corpus generalization. The natural heterogeneity of code corpora in terms of domains biases the average algorithmic repair model most of the time to the extent that the performance is not optimal when appl... | null | ['Adversarially Aligned Transformers'] | /pdf/001c6cdaa89f5c4094186d4ba570e09c6ab6d08f.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25525/Authors'] |
CrGxvyppMS | 25,524 | CrGxvyppMS | Data Passports: Confidentially Provable Provenance for Onboarding Verifiable ML | Recent advances in ML have leveraged Zero Knowledge Proof protocols to enable institutions to cryptographically commit to a dataset and subsequently prove, to external auditors, the integrity of training and the trustworthiness of the resulting model on the committed data, all while protecting model confidentiality. Su... | We introduce tamper-proof Data Passports that bind data to verifiable and confidential proofs of authenticity through a co-design of ZKP and TEE. | ['Data Provenance', 'Zero Knowledge Proof', 'Trusted Execution Environments', 'Auditing', 'Verifiable'] | /pdf/cdbbf08a2d8daa2e9d865f07beb4c86a209689f0.pdf | alignment, fairness, safety, privacy, and societal considerations | /attachment/2b1916ef3ef65eb18500b97d9c603c1cbaa95bcc.zip | ['ICLR.cc/2026/Conference/Submission25524/Authors'] |
cy7YVhpW4u | 25,520 | cy7YVhpW4u | SeedThink: Test-Time Control via Seed-Thought Initialization | Large reasoning models (LRMs) achieve impressive performance via extended chains of thought, but this substantially increases inference overhead, making efficiency a critical bottleneck. In this paper, we first show that initializing the reasoning process with high-quality seed thoughts can steer the model away from un... | null | ['Large Reasoning Models', 'Speculative Decoding', 'Efficient Reasoning'] | /pdf/43c660e875ad2d00d5079a58e751ad9ee1272af2.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission25520/Authors'] |
RObkOKADBU | 25,519 | RObkOKADBU | CORDS - Continuous Representations of Discrete Structures | Many learning problems require predicting sets of objects without knowing their number in advance. Examples include object detection, molecular modeling, and a variety of inference problems for scientific data, such as astrophysical source detection. Existing methods often rely on padded representations, or must explic... | We turn discrete objects into continuous fields that implicitly encode their count, offering a simple way to handle variable cardinality across tasks and domains. | ['Continuous set representations', 'Neural fields', 'Variable-cardinality prediction', 'Invertible encoding/decoding', 'Diffusion and flow matching', 'Object detection', 'Molecular generation', 'Simulation-based inference'] | /pdf/fa1ccc49d1a4be2b4ed8db90c5cdb47f094341ec.pdf | learning on graphs and other geometries & topologies | null | ['ICLR.cc/2026/Conference/Submission25519/Authors']
U7pWkp90qA | 25,517 | U7pWkp90qA | DyCodeExplainer: Explainable Dynamic Graph Attention for Multi-Agent Reinforcement Learning in Collaborative Coding | We propose \textbf{DyCodeExplainer}, a novel multi-agent reinforcement learning (MARL) framework that integrates dynamic graph attention with explainability techniques to improve collaborative coding. Existing MARL systems typically depend on static communication protocols which are not flexible and transparent in perf... | null | ['Collaborative Coding'] | /pdf/249d7b76e9fb81e4a5ae577b6181697594ce1191.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25517/Authors'] |
sz2gtTBVIq | 25,514 | sz2gtTBVIq | TrustGen: Benchmarking Trustworthiness in Generative Models for Russian Language Processing Tasks | Large Language Models (LLMs) are increasingly used in autonomous agents and multi-agent systems to handle complex tasks, making their trustworthiness a critical concern. However, most existing benchmarks focus on English, limiting their relevance for other languages, particularly Russian. In this study, we introduce th... | TrustGen — the first Russian-language benchmark for evaluating the trustworthiness of large language models | ['trustworthiness', 'robustness', 'security and privacy', 'model bias/fairness evaluation'] | /pdf/1c35834213ba0267c79ff99cf2cc21895c9e1ed2.pdf | alignment, fairness, safety, privacy, and societal considerations | /attachment/0e9a03337e259a19eac91a3983c4fa5f4c7f97e5.zip | ['ICLR.cc/2026/Conference/Submission25514/Authors'] |
JWQtXbVRbs | 25,513 | JWQtXbVRbs | Non-Additive Time-Series Forecasting via Cross-Decomposition and Linear Attention | Many multivariate forecasters model additive effects well but miss non-additive interactions among temporal bases, variables, and exogenous drivers, which harms long-horizon accuracy and attribution. We present time-series interaction machine (${TIM}$), an all-MLP forecaster designed from the ANOVA/Hoeffding target: t... | null | ['multivariate time series', 'deep & cross networks', 'linear attention'] | /pdf/cdfb285a0b98cac45511aa6794b153764148d9b3.pdf | learning on time series and dynamical systems | null | ['ICLR.cc/2026/Conference/Submission25513/Authors'] |
R9MzJjvzXv | 25,511 | R9MzJjvzXv | HealthSLM-Bench: Benchmarking Small Language Models for On-device Healthcare Monitoring | On-device healthcare monitoring plays a vital role in facilitating timely interventions, managing chronic health conditions, and ultimately improving individuals’ quality of life. Previous studies on large language models (LLMs) have highlighted their impressive generalization abilities and effectiveness in healthcare p... | This paper benchmarks Small Language Models (SLMs) for mobile health applications, demonstrating efficient, privacy-preserving predictions on wearable devices, with performance comparable to larger language models (LLMs) in health task predictions. | ['Small Language Models (SLMs)', 'Mobile Health', 'On-Device LLMs'] | /pdf/ca4ab04794359616aa065ece1fa1b6f97c822d0b.pdf | datasets and benchmarks | null | ['ICLR.cc/2026/Conference/Submission25511/Authors']
Z0jDtLL7aM | 25,510 | Z0jDtLL7aM | Efficient Spectral Graph Diffusion based on Symmetric Normalized Laplacian | Graph distribution learning and generation are fundamental challenges with applications in drug discovery, materials science, and network analysis. While diffusion-based approaches have shown promise, existing spectral methods suffer from eigenvalue imbalance and limited scalability. We introduce Efficient Spectral Gra... | null | ['Efficient Graph Generation', 'Spectral Diffusion', 'Eigenvalue Normalization'] | /pdf/4de5cd1d3054f2186c6cacd42331f145608047f3.pdf | generative models | /attachment/ffca4ac02c763ba52eef83867a7deca8bca31fb1.zip | ['ICLR.cc/2026/Conference/Submission25510/Authors'] |
kbxjkoF42x | 25,509 | kbxjkoF42x | Ensemble Learning for AUC Maximization via Surrogate Loss | In classification tasks, the area under the ROC curve (AUC) is a key metric for evaluating a model’s ability to discriminate between positive and negative samples. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or when the outcome is rare. While ensemble learni... | This paper proposes a novel stacking framework that linearly combines base models via a surrogate loss function designed to maximize AUC. The resulting ensemble is asymptotically optimal and its effectiveness is verified by empirical studies. | ['AUC Maximization', 'Ensemble Learning', 'Machine Learning', 'Binary Classification', 'Surrogate Loss', 'Asymptotic Optimality'] | /pdf/9ee967d96af6a9efac51d353960d45a8a838da18.pdf | learning theory | null | ['ICLR.cc/2026/Conference/Submission25509/Authors'] |
1BXojAgNrg | 25,508 | 1BXojAgNrg | MedAraBench: Large-scale Arabic Medical Question Answering Dataset and Benchmark | Arabic remains one of the most underrepresented languages in natural language processing research, particularly in medical applications, due to the limited availability of open-source data and benchmarks. The lack of resources hinders efforts to evaluate and advance the multilingual capabilities of Large Language Model... | null | ['Dataset Benchmark', 'Large Language Models', 'Arabic Natural Language Processing', 'Medical Question Answering'] | /pdf/6fd11f6243e10a4727f8831dc5dc6ecc43837990.pdf | datasets and benchmarks | null | ['ICLR.cc/2026/Conference/Submission25508/Authors'] |
uajSG0jubM | 25,506 | uajSG0jubM | MouseDTB: A Mouse Digital Twin Brain at Single-neuron Resolution | Accurate whole-brain computational modeling grounded in single-neuron resolution connectivity is crucial for understanding how large-scale brain structures give rise to complex behaviors and cognition. Conventional mouse whole-brain models are typically constructed from coarse-grained regional or voxel-level connectivi... | null | ['digital twin brain', 'whole-brain modelling', 'mouse brain connectome'] | /pdf/8dd8e15d9d367470bcd443d35cd3a4aeae7adb94.pdf | applications to neuroscience & cognitive science | /attachment/3afc90813fd7a3922f65441177b2ffc57d0a74a2.zip | ['ICLR.cc/2026/Conference/Submission25506/Authors'] |
wfTt4wZtaj | 25,505 | wfTt4wZtaj | A Bilingual Acupuncture Question Answering System via Lightweight LLMs and Retrieval-Augmented Generation | Large language models (LLMs) are prone to hallucinations and often lack reliable access to structured, domain-specific knowledge in Traditional Chinese Medicine (TCM). We present the first bilingual (Chinese--English) acupuncture question answering system built on lightweight LLM backbones and retrieval-augmented gener... | We propose the first bilingual acupuncture QA system that combines lightweight LLMs with retrieval-augmented generation and constraint-based decoding, achieving strong factuality and clinical reliability. | ['retrieval-augmented generation', 'large language models', 'acupuncture', 'traditional Chinese medicine', 'hallucination mitigation', 'bilingual QA', 'clinical validation'] | /pdf/eddd877533d058c77ba14b93a98c354eea037034.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission25505/Authors'] |
zcAwK50ft0 | 25,504 | zcAwK50ft0 | Fracture-GS: Dynamic Fracture Simulation with Physics-Integrated Gaussian Splatting | This paper presents a unified framework for simulating and visualizing dynamic fracture phenomena in extreme mechanical collisions using multi-view image inputs. While existing methods primarily address elastic deformations at contact surfaces, they fail to capture the complex physics of extreme collisions, often produ... | null | ['3D vision', 'Physics-based Simulation'] | /pdf/a9a70f81dda15129b19546285811277011bdb027.pdf | applications to robotics, autonomy, planning | /attachment/bac12d0d17fe8b3ee0e4ea49c7833320acc51bc5.zip | ['ICLR.cc/2026/Conference/Submission25504/Authors'] |
KirKWFPYJA | 25,501 | KirKWFPYJA | High Probability Bounds for Non-Convex Stochastic Optimization with Momentum | Stochastic gradient descent with momentum (SGDM) has been widely used in machine learning. However, in non-convex domains, high probability learning bounds for SGDM are scarce. In this paper, we provide high probability convergence bounds and generalization bounds for SGDM. Firstly, we establish these bounds for the gr... | null | ['Momentum', 'nonconvex learning', 'generalization'] | /pdf/2e05ad5f0bb41097b874a4fc65a2ff358f797ed4.pdf | learning theory | null | ['ICLR.cc/2026/Conference/Submission25501/Authors'] |
dGxAYNK6JU | 25,500 | dGxAYNK6JU | PoinnCARE: Hyperbolic Multi-Modal Learning for Enzyme Classification | Enzyme Commission (EC) number prediction is vital for elucidating enzyme functions and advancing biotechnology applications. However, current methods struggle to capture the hierarchical relationships among enzymes and often overlook critical structural and active site features. To bridge this gap, we introduce PoinnCA... | null | ['EC number prediction', 'enzyme function', 'hyperbolic space learning', 'multi-modal learning', 'enzyme structure', 'enzyme active site'] | /pdf/b157b7c614a6a899b8c3d62fab43678ede072846.pdf | applications to physical sciences (physics, chemistry, biology, etc.) | null | ['ICLR.cc/2026/Conference/Submission25500/Authors'] |
0wJuW3snwU | 25,499 | 0wJuW3snwU | SD-MAD: Sign-Driven Few-shot Multi-Anomaly Detection in Medical Images | Medical anomaly detection (AD) is crucial for early clinical intervention, yet it faces challenges due to limited access to high-quality medical imaging data, caused by privacy concerns and data silos. Few-shot learning has emerged as a promising approach to alleviate these limitations by leveraging the large-scale pri... | null | ['Anomaly Detection', 'Medical Image', 'Few-shot Learning'] | /pdf/033ee3a2bea6c41bed5f86e4d11487a5197a556b.pdf | applications to physical sciences (physics, chemistry, biology, etc.) | null | ['ICLR.cc/2026/Conference/Submission25499/Authors'] |
pKKtSi88fH | 25,496 | pKKtSi88fH | ObjexMT: Objective Extraction and Metacognitive Calibration for LLM-as-a-Judge under Multi-Turn Jailbreaks | LLM-as-a-Judge (LLMaaJ) now underpins scalable evaluation, yet we lack a decisive test of a judge's qualification: can it recover a conversation's latent objective and know when that inference is trustworthy? LLMs degrade under irrelevant or long context; multi-turn jailbreaks further hide goals across turns. We introd... | ObjexMT benchmarks whether LLM judges can recover a dialogue’s hidden objective and calibrate their confidence under multi-turn jailbreaks, revealing frequent overconfident misinference and guiding confidence‑gated, objective‑exposed evaluation. | ['ObjexMT', 'LLM-as-a-Judge', 'objective extraction', 'multi-turn jailbreaks', 'latent intent inference', 'metacognitive calibration', 'confidence estimation', 'Expected Calibration Error', 'Brier score', 'selective prediction', 'risk-coverage', 'safety evaluation'] | /pdf/dfc0348ade93046f4ac752043a40a430f225cdd0.pdf | alignment, fairness, safety, privacy, and societal considerations | /attachment/5e5a93ca591fea0d04ee81050c1477657bbd911e.zip | ['ICLR.cc/2026/Conference/Submission25496/Authors'] |
WkGnZsnCDR | 25,493 | WkGnZsnCDR | Hierarchical Graph-coding Diffusion Model with Adaptive Information Bottleneck for Multichannel Speech Enhancement | Diffusion models have achieved strong performance in multichannel speech enhancement, especially in unseen noisy scenarios. However, most existing diffusion method rely on globally consistent guidance applied either to the output or uniformly across denoiser layers, which fails to provide layer-specific adaptation and ... | null | ['hierarchical graph-coding', 'diffusion model', 'layer modulation', 'adaptive information bottleneck', 'multichannel speech enhancement'] | /pdf/bdb5e477105dda0016c2dfaed9ac41c94632786d.pdf | generative models | null | ['ICLR.cc/2026/Conference/Submission25493/Authors'] |
S2vVSNJhFw | 25,491 | S2vVSNJhFw | Dynamic Contrastive Reinforcement Learning for Adaptive Code-Text Alignment via Multi-Modal Fusion | We propose Dynamic Contrastive Reinforcement Learning (DCRL), a new structure for end-to-end adaptive code-text alignment with a multi-modal fusion. The proposed method overcomes the shortcomings of static fusion methods by dynamically tuning contrastive learning parameters depending on the reinforcement learning agent... | null | ['Multi-Modal Fusion'] | /pdf/2a662b803c7850ea8110bdb67c0827688376bec1.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25491/Authors'] |
8t5TlFzUAU | 25,486 | 8t5TlFzUAU | When Names Disappear: Revealing What LLMs Actually Understand About Code | Large Language Models (LLMs) achieve strong results on code tasks, but how they derive program meaning remains unclear. We argue that code communicates through two channels: structural semantics, which define formal behavior, and human-interpretable naming, which conveys intent. Removing the naming channel severely deg... | This paper studies the effect of structure and naming on LLM code understanding, showing that removing naming harms intent comprehension and exposes memorization in current benchmarks. | ['large language model', 'code summarization', 'code execution understanding', 'name obfuscation', 'datasets and benchmarks'] | /pdf/b98db74c0124e52264daa8b6ada84de45c3dcb37.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission25486/Authors'] |
lFaLBotlag | 25,483 | lFaLBotlag | Dynamic Incremental Code Embeddings (DICE): A Real-Time Communication Protocol for Multi-Agent Reinforcement Learning | We propose Dynamic Incremental Code Embeddings (DICE), a real-time communication protocol to address the inefficiency of static or periodically updated embeddings in dynamic coding environments for multi-agent reinforcement learning (MARL) in collaborative code completion. The proposed method combines two novel mechani... | null | ['Multi-Agent Reinforcement Learning'] | /pdf/187e57ae449129e7c5e7f5ffb8b203466efe4837.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25483/Authors'] |
60Vj3aBnjw | 25,482 | 60Vj3aBnjw | Position-Aware Modeling for Next-Token Prediction | Next-token prediction (NTP) serves as the dominant training paradigm for large language models (LLMs), enabling strong autoregressive (AR) generation capabilities. Despite its success, models trained with vanilla NTP often exhibit counterintuitive failure patterns, such as the reversal curse, factorization curse, and s... | null | ['Next-Token Prediction', 'Large language models', 'Position-aware'] | /pdf/668dbccaa4698ef7d2734ce006f857e0db5b4d45.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission25482/Authors'] |
L0DTflYss0 | 25,481 | L0DTflYss0 | A simple contraction criterion for the Sinkhorn mirror flow | We give a concise condition for contraction of the continuous-time mirror dynamics which was recently shown to be the vanishing-step-size limit of the Sinkhorn algorithm. This condition is essentially coercivity of a conditional expectation operator. | A simple contraction criterion for the Sinkhorn mirror flow | ['Schrodinger Bridge', 'Entropy-Regularized Optimal Transport', 'MirrorDescent'] | /pdf/0144b76af2bbc5d099375ceb57b5feb27e2e6317.pdf | generative models | null | ['ICLR.cc/2026/Conference/Submission25481/Authors']
6XdT4NuIMz | 25,480 | 6XdT4NuIMz | Dynamic $k$-shot In-Context Learning | In-context learning (ICL) allows large language models (LLMs) to learn new tasks from demonstrations and to predict unseen inputs without parameter updates. Existing studies typically fix the number of demonstrations as a static hyperparameter (e.g., 5 or 10), overlooking the variability across models and inputs. We e... | null | ['In-context learning'] | /pdf/820ee9fbff9d3f05facdd6c5341457728ae2665a.pdf | applications to computer vision, audio, language, and other modalities | null | ['ICLR.cc/2026/Conference/Submission25480/Authors'] |
FFIn2TH7aU | 25,479 | FFIn2TH7aU | Supra-Tuning: Combining Outlier and Low-Rank Adaptation for Sparse and Efficient LLM Fine-Tuning | Large language models (LLMs) have demonstrated remarkable capabilities but remain expensive to fine-tune due to their size. Recent parameter-efficient tuning methods, such as Low-Rank Adaptation (LoRA), reduce the number of trainable parameters while maintaining performance. In this work, we introduce Super, a novel sp... | We propose Super, a sparse fine-tuning method that updates only key outlier weights, and Supra, a hybrid that combines Super with LoRA. | ['PEFT', 'Fine Tuning', 'LLM', 'Training', 'Deep Learning', 'AI', 'Language Models', 'Llama', 'Wanda', 'Outliers'] | /pdf/ef2d5d42522f6fb793cf9016ec50951c01523469.pdf | foundation or frontier models, including LLMs | /attachment/e78122283e5a148788fcb1be05f92ee3aaae6bcd.zip | ['ICLR.cc/2026/Conference/Submission25479/Authors'] |
kuaZXtReJ0 | 25,477 | kuaZXtReJ0 | Self-Organizing Resonant Network | We introduce the Self-Organizing Resonant Network (SORN), a novel learning paradigm that operates without backpropagation. To address core challenges in representation quality, learning stability, and adaptability faced by existing continual learning models, SORN operates within a robust feature space encoded online. I... | A novel, non-backpropagation learning paradigm where a network self-organizes by dynamically creating neurons for novel concepts and learning their associations via local rules. | ['Continual Learning', 'Self-Organizing Networks', 'Hebbian Learning', 'Structural Plasticity', 'Online Learning', 'Representation Learning'] | /pdf/4fdb3fde8af71495cb28af9ce80bc3033434f999.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25477/Authors'] |
mBxFCTlFmW | 25,476 | mBxFCTlFmW | Learning When to Plan: Efficiently Allocating Test-Time Compute for LLM Agents | Training large language models (LLMs) to reason via reinforcement learning (RL) significantly improves their problem-solving capabilities. In agentic settings, existing methods like ReAct prompt LLMs to explicitly plan before every action; however, we demonstrate that always planning is computationally expensive and de... | null | ['LLM Agents', 'Planning', 'Test-Time Compute'] | /pdf/a411541fb59f4f329aa365321e7ee5c26305bd52.pdf | foundation or frontier models, including LLMs | null | ['ICLR.cc/2026/Conference/Submission25476/Authors'] |
95bwpkVPuR | 25,475 | 95bwpkVPuR | Dynamic Role-Graph Reinforcement Learning for Multi-Agent Collaborative Coding Systems | We propose \textbf{Dynamic Role-Graph Reinforcement Learning (DRGRL)}, a novel framework for multi-agent collaborative coding systems that addresses the challenges of evolving team dynamics and role-based coordination. Traditional multi-agent reinforcement learning (MARL) approaches are often ineffective for static rep... | null | ['Role-Graph Reinforcement Learning'] | /pdf/7f50fa552d0ac5cb13742fdbd4d1549c8cfef46b.pdf | transfer learning, meta learning, and lifelong learning | null | ['ICLR.cc/2026/Conference/Submission25475/Authors'] |
fDfctZ8Fhg | 25,471 | fDfctZ8Fhg | Not All Who Wander Are Lost: Hallucinations as Neutral Dynamics in Residual Transformers | We separate onset from persistence and prove that persistence follows from the neutral dynamics of pre-LayerNorm residual transformers. Exact operator norms for LayerNorm, residual blocks, and the softmax decoder yield conservative upper bounds showing the absence of contractive or expansive bias at the decoded level. ... | null | ['Transformer architectures', 'Mean-field Games', 'Hallucinations', 'Stability and Dynamics'] | /pdf/7906252147b467ff9388cb00c4067d4d1301a417.pdf | generative models | /attachment/fb59cd2610a6e6e6e850625d16c5a1ad81e5e289.zip | ['ICLR.cc/2026/Conference/Submission25471/Authors'] |
lbEUvx1ILN | 25,470 | lbEUvx1ILN | OVA-LP: A Simple and Efficient Framework for Federated Learning on Non-IID Data | Federated fine-tuning (FFT) adapts foundation models to decentralized data but remains fragile under heterogeneous client distributions due to local drift, i.e., client-level update divergences that induce systematic bias and amplified variance in the global model. Existing aggregation and personalization methods large... | null | ['federated learning', 'non-iid', 'noisy labels', 'One-vs-All', 'Linear Probing'] | /pdf/768ab6e2fcc967049e35301c340813458461406d.pdf | other topics in machine learning (i.e., none of the above) | /attachment/a3122d2fdeb40e8b238eb021002578ed9e2a78f8.zip | ['ICLR.cc/2026/Conference/Submission25470/Authors'] |
7w9GUhqSnN | 25,469 | 7w9GUhqSnN | BayesENDS: Bayesian Electrophysiological Neural Dynamical Systems for Alzheimer’s Disease Diagnosis | Alzheimer’s disease (AD) alters Electroencephalogram (EEG) through slowed oscillations and diminished neural drive, yet most AD-EEG pipelines are black-box classifiers, lacking a unifying mathematical account of how both neural activity and its interaction dynamics evolve over time. We introduce BayesENDS, a Bayesian e... | null | ['Bayesian Neural Dynamical Systems', 'Alzheimer’s Disease Diagnosis', 'EEG'] | /pdf/cf93dd45f679958c1e05b29a3535f14cde1259ab.pdf | applications to neuroscience & cognitive science | null | ['ICLR.cc/2026/Conference/Submission25469/Authors'] |
n3iFV0gLMc | 25,468 | n3iFV0gLMc | FingerTip 20K: A Benchmark for Proactive and Personalized Mobile LLM Agents | Mobile GUI agents are becoming critical tools to improve user experience on smart devices, with multimodal large language models (MLLMs) emerging as the dominant paradigms in this domain. Current agents, however, rely on explicit human instructions, overlooking the potential to leverage the contextual information (like... | null | ['Mobile Agent', 'LLM Agent', 'GUI', 'Proactive Agent', 'Personalization'] | /pdf/abef640637242cc6429b772a70223e075677ceb1.pdf | datasets and benchmarks | null | ['ICLR.cc/2026/Conference/Submission25468/Authors'] |
AI-Conf collects papers from CVPR, ACL, AAAI, ICLR, and other conferences.