CARLoS: Retrieval via Concise Assessment Representation of LoRAs at Scale

TL;DR: We evaluate LoRAs, their effects and properties, and provide an efficient retrieval system for a large corpus collected from CivitAI.

CARLoS Teaser

CARLoS retrieves LoRAs by analyzing their generative behavior rather than relying on creators' descriptions. Our retrieval pipeline (blue) matches query semantics to LoRA effects, outperforming text-based baselines (orange) in visual similarity.

Abstract

The rapid proliferation of generative components, such as LoRAs, has created a vast but unstructured ecosystem. Existing discovery methods depend on unreliable user descriptions or biased popularity metrics, hindering usability. We present CARLoS, a large-scale framework for characterizing LoRAs without requiring additional metadata. Analyzing over 650 LoRAs, we employ each in image generation across a variety of prompts and seeds, assessing its behavior directly from its outputs. From the differences between CLIP embeddings of LoRA and base-model generations, we define a concise three-part representation: Direction, capturing the semantic shift; Strength, quantifying how significant the effect is; and Consistency, quantifying how stable it is. Building on this representation, we develop an efficient retrieval framework that semantically matches textual queries to relevant LoRAs while filtering out overly strong or unstable ones, outperforming textual baselines in both automated and human evaluations. While retrieval is our primary focus, the same representation also supports analyses linking Strength and Consistency to the legal notions of substantiality and volition, key considerations in copyright, positioning CARLoS as a practical system with broader relevance for LoRA analysis.

How does it work?

CARLoS Overview

Given a set of curated LoRAs, we represent each one as a three-part vector for efficient retrieval. To create this representation (top), we generate images using N = 280 prompts and M = 16 seeds. We measure the semantic difference in CLIP space to define:

  • Direction: The average semantic shift (what the LoRA does).
  • Strength: The mean magnitude of the shift (how intense it is).
  • Consistency: The variance of the shift (how stable the effect is).
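The three statistics above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact implementation: the specific norms and the mean-squared-deviation form of Consistency are assumptions, and the embedding arrays stand in for real CLIP features of the N × M generations.

```python
import numpy as np

def lora_representation(lora_embs, base_embs):
    """Sketch of the three-part CARLoS representation.

    lora_embs, base_embs: (N*M, D) arrays of CLIP image embeddings for the
    same prompt/seed pairs, generated with and without the LoRA. The exact
    statistics (L2 norms, mean squared deviation) are assumptions.
    """
    diffs = lora_embs - base_embs                  # per-generation semantic shift
    mean_shift = diffs.mean(axis=0)
    direction = mean_shift / (np.linalg.norm(mean_shift) + 1e-8)  # unit Direction
    strength = np.linalg.norm(diffs, axis=1).mean()               # mean magnitude
    # Spread of shifts around the mean; lower = more stable effect.
    consistency = ((diffs - mean_shift) ** 2).sum(axis=1).mean()
    return direction, strength, consistency
```

A LoRA that applies the same semantic shift to every generation yields a unit Direction, a Strength equal to the shift's magnitude, and a Consistency score near zero.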

During retrieval (bottom), we project the user's query into this same space and retrieve LoRAs with similar Direction vectors, while filtering out those with excessive Strength or poor Consistency.
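The retrieval step can be sketched as cosine similarity between the query embedding and the stored Direction vectors, with a hard filter on Strength and Consistency. The threshold parameters `s_max` and `c_max` are hypothetical stand-ins for the paper's filtering criteria.

```python
import numpy as np

def retrieve(query_emb, directions, strengths, consistencies,
             s_max, c_max, k=3):
    """Rank LoRAs by cosine similarity of the query to their Direction,
    after filtering out overly strong or unstable ones (sketch; the
    thresholds s_max and c_max are assumed, not from the paper)."""
    q = query_emb / (np.linalg.norm(query_emb) + 1e-8)
    D = directions / (np.linalg.norm(directions, axis=1, keepdims=True) + 1e-8)
    sims = D @ q                                   # cosine similarity per LoRA
    keep = (strengths <= s_max) & (consistencies <= c_max)
    sims = np.where(keep, sims, -np.inf)           # drop filtered LoRAs
    order = np.argsort(-sims)[:k]
    return [i for i in order if np.isfinite(sims[i])]
```

Because the representation is a single vector plus two scalars per LoRA, the whole corpus fits in one matrix multiply, which is what makes retrieval at scale cheap.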

CARLoS Metrics - Distribution and Visualization

CARLoS Overview

Our LoRA dataset plotted by Consistency Rank vs. Strength Rank. Overly strong or overly inconsistent LoRAs (red regions) are filtered out. Example generations for two prompts are shown for LoRAs of differing Strength and Consistency.
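Since the figure filters on ranks rather than raw scores, one way to reproduce the red regions is a percentile-rank cutoff. The 0.95 cutoffs below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def rank_filter(strengths, consistencies, s_pct=0.95, c_pct=0.95):
    """Keep LoRAs whose Strength and inconsistency percentile ranks fall
    below assumed cutoffs (the red regions in the figure). Sketch only."""
    def pct_rank(x):
        ranks = np.argsort(np.argsort(x))          # 0 .. len(x)-1
        return ranks / (len(x) - 1)                # map to [0, 1]
    return (pct_rank(strengths) < s_pct) & (pct_rank(consistencies) < c_pct)
```

Rank-based thresholds make the filter robust to the absolute scale of the CLIP-space statistics, which varies with the embedding model.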

Comparisons

Qualitative comparisons of textual description-based retrieval (bottom rows) to CARLoS (top row). Some effects are well described in text (e.g., Pixel Art) and are therefore retrieved well, but more elaborate queries (such as celestial beings or futuristic games) are not, forcing text-based retrieval to match similar wording rather than similar effects (e.g., clouds, cartoons).

Quantitative Comparison

Quantitative comparisons. Retrieval performance evaluated by different VLMs. Scores indicate the quality of the top-3 retrieved LoRAs as judged by state-of-the-art Vision-Language Models; CARLoS consistently yields results preferred by all evaluators. Scores are min-max normalized across all queries and retrievers.
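The min-max normalization mentioned above is the standard rescaling of a score matrix to [0, 1] over all queries and retrievers jointly, sketched here for clarity:

```python
import numpy as np

def minmax_normalize(scores):
    """Min-max normalize a (queries x retrievers) score matrix jointly,
    so VLM judges on different raw scales become comparable (sketch)."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-8)
```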

User Study

Aggregated results of our subjective user study. Participants compared CARLoS against four strong textual retrieval baselines (QWEN3, E5, BGE, GTE) across three criteria. CARLoS was consistently preferred in all categories.

Legal Application

Legal Application

Legal considerations. LoRAs expose users to both liability and rights. Weak or inconsistent LoRAs (top) are unlikely to give rise to infringement or authorship claims. Strong, consistent LoRAs may infringe, depending on replication of protected elements or distinct styles (bottom-right, bottom-middle), but do not necessarily do so (bottom-left).

BibTeX

If you find our work useful, please cite our paper:

@misc{sarfaty2025carlosretrievalconciseassessment,
      title={CARLoS: Retrieval via Concise Assessment Representation of LoRAs at Scale}, 
      author={Shahar Sarfaty and Adi Haviv and Uri Hacohen and Niva Elkin-Koren and Roi Livni and Amit H. Bermano},
      year={2025},
      eprint={2512.08826},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2512.08826}, 
    }