Is RAG obsolete? 2025 roadmap from “long context” to GraphRAG, Retrieval‑Aware Training and KBLaM

0. What problem does this article solve?

Over the last year, a recurring debate has been: “If LLMs can read a million tokens in one go, do we still need retrieval‑augmented generation (RAG)?” Some argue we can just stuff everything into the prompt; others see RAG as a toolchain with plenty of room to grow.

This article goes beyond a yes/no answer. We explain why long context cannot replace RAG, how to modernize RAG so it stays central in 2025 and beyond, and what it takes to make it work in real projects. It’s our synthesis after surveying the literature and tracking enterprise practice.

We'll cover:

  • Real‑world constraints: why long context can’t solve everything.
  • Postmortems: why naïve RAG fails in practice.
  • The “five‑piece set”: a modern engineering solution.
  • Deep dives: GraphRAG and Retrieval‑Aware Training.
  • Deployment path: how to run RAG in constrained environments (e.g., domestic servers, air‑gapped).
  • Improvements and outlook: where RAG should evolve next.

1. Why “more context” isn’t the finish line

From Gemini 1.5 to Claude 3, context windows have exploded—some to a million tokens. At first glance, that looks like “no more retrieval.” In practice, a bigger backpack doesn’t make a better trip.

1.1 Cost and latency: your budget is finite

More tokens mean more cost and latency. Teams report that cramming a 20k‑character document into 32k+ windows can make a single inference cost >5× more. Under concurrency, P95 latency can jump from <1s to several seconds—unacceptable for interactive apps (support bots, incident triage, etc.).

1.2 Timeliness: knowledge updates outrun parameter updates

Enterprise knowledge changes daily; finance/news/social scenarios can change by the minute. Baking facts into parameters forces frequent fine‑tuning—costly and risky. RAG keeps knowledge external and hot‑swappable without touching the model.

1.3 Compliance and auditability: provenance beats eloquence

Regulated domains (nuclear, power, water, finance) need not just correct answers but provenance: which doc, which clause, which release. Dump‑prompting mixes versions and loses traceability. RAG logs retrievals, snippets, versions, and timestamps—an auditable trail you can replay.

1.4 Privacy and compartmentalization: isolate by clearance

Enterprises segment knowledge by clearance. Shoving everything into one window over‑exposes data. RAG retrieves per‑request within access control boundaries, enforcing isolation.

Bottom line: long context is a bigger backpack; RAG is a live map. Real systems need both: they are complements, not substitutes.


2. Why naïve RAG crashes—postmortems

The first RAG attempt often looks like: chunk PDFs → embed → k‑NN → take top‑k → stuff into the prompt. It “works” in demos but breaks in production. A few real‑world failure modes:

2.1 Vector‑only, structure‑blind

After chunking equipment manuals, a team used pure vector search. Asked “What’s valve B’s maintenance interval?”, the retriever returned a “valve size comparison table”—semantically related but not answering the numeric policy. Tables and time‑series need dedicated retrievers.

2.2 Brutal chunking fragments facts

An SOP chunked by fixed length sliced definitions in half. “Safety conditions: (1) pressure ≥ 0.35 MPa; (2) temperature 35–55 ℃” was split across chunks, so the model saw only (1) and missed (2).

2.3 Context stuffing adds noise

Feeding 10–20 chunks “just in case” dilutes attention. In one alarm-triage case, the volume of “principles and history” text buried the actionable “steps,” yielding vague answers.

2.4 Freshness, re‑ranking, auditability

  • Freshness: no incremental indexing → outdated laws/notices; version policy unclear.
  • Re‑ranking: no cross‑encoder → “related but not sufficient” comes first.
  • No audit: answers lack evidence trails regulators can inspect.

The issue isn’t retrieval per se—it’s using the wrong method. Separate retrieval, evidence packaging, generation, and verification to unlock RAG’s value.


3. The new “five‑piece set”: skeleton, page‑flipping, tools, freshness, and minimal parameter updates

From research and deployments, mature 2025 stacks converge on five pieces, each fixing a naïve‑RAG pain point:

| Component | Role | Where it shines | Challenges |
| --- | --- | --- | --- |
| Structured retrieval (GraphRAG) | Build the knowledge skeleton | Multi‑doc hops, regulations, SOPs | KG extraction and upkeep |
| Retrieval‑aware training | Learn when/what to retrieve | QA and summarization | Training cost and data |
| Agentic orchestration | Plan multi‑step tools | Plans, queries, calculations | Safety and efficiency |
| Freshness management | Keep the index alive | News, markets, high‑timeliness | Monitoring and version policy |
| Parameter updates (optional) | Bake small, stable facts | Templates, disambiguation | Hallucinations and scope |

We’ll unpack each piece and how to land it.


4. Structured retrieval (GraphRAG): build the skeleton for stability

Graph‑shaped retrieval isn’t new; it’s finally practical at scale. The idea: add a structured skeleton (a KG or entity‑relation graph) alongside unstructured text “flesh.” Complex tasks—regulations, processes, root‑cause analysis—often follow paths along entities and relations.

4.1 Building the graph: from text to skeleton

Chunk + extract: chunk by headings/sections/paragraphs with a sliding window; run NER/RE to extract entities and relations into a light KG. Keep entity/edge types simple (2–3 types) to start, e.g., “article → term → applicability” or “device → component → failure mode.”

Disambiguate/merge: handle same‑name/different‑entity and synonyms; use fingerprint similarity (Jaccard/Cosine) and rules (geo codes, equipment IDs). Queue low‑confidence items for review.
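
Below is a minimal sketch of this “light KG” step, assuming entity/relation triples already come out of an upstream NER/RE pass; networkx, the Jaccard threshold, and the toy triples are illustrative choices, not a prescribed stack.

import networkx as nx

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def build_light_kg(triples, merge_threshold=0.8):
    """triples: iterable of (head, relation, tail) strings."""
    g = nx.MultiDiGraph()
    canonical = []                        # canonical entity names seen so far

    def resolve(name):
        for c in canonical:
            if jaccard(name, c) >= merge_threshold:
                return c                  # merge near-duplicate entity mentions
        canonical.append(name)
        return name

    for head, rel, tail in triples:
        g.add_edge(resolve(head), resolve(tail), relation=rel)
    return g

kg = build_light_kg([
    ("valve B", "part_of", "cooling loop"),
    ("Valve B", "maintenance_interval", "6 months"),   # merged with "valve B"
])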

4.2 Multi‑retriever + cross‑encoder re‑rank

Combine BM25 (keyword precision) and dense retrieval (semantic recall), then re‑rank top‑K with a cross‑encoder (e.g., bge‑reranker). This slashes “sounds relevant but doesn’t answer” mistakes.
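
A minimal hybrid-retrieval sketch under assumed tooling (rank_bm25 and sentence-transformers); the model names, the crude score fusion, and the toy corpus are illustrative, not recommendations.

from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, CrossEncoder, util

# Toy corpus; in production these are your chunked documents.
docs = [
    "Valve B maintenance interval: service every 6 months.",
    "Valve size comparison table for types A, B, and C.",
]
query = "What is valve B's maintenance interval?"

# Sparse candidates (keyword precision).
bm25 = BM25Okapi([d.lower().split() for d in docs])
sparse_scores = bm25.get_scores(query.lower().split())

# Dense candidates (semantic recall).
encoder = SentenceTransformer("BAAI/bge-small-en-v1.5")
dense_scores = util.cos_sim(encoder.encode(query), encoder.encode(docs))[0]

# Crude score fusion just to form a candidate pool; use RRF or normalization in practice.
pool = sorted(range(len(docs)),
              key=lambda i: max(float(sparse_scores[i]), float(dense_scores[i])),
              reverse=True)[:10]

# Cross-encoder re-rank decides the final order.
reranker = CrossEncoder("BAAI/bge-reranker-base")
scores = reranker.predict([(query, docs[i]) for i in pool])
best = max(zip(scores, pool))[1]
print(docs[best])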

4.3 Path search: turn QA into route finding

Augment candidates via KG paths. For “startup conditions of device A,” expand along components, actions, interlocks → pull matching clauses → then fetch the text spans. “Graph‑then‑text” or the inverse both work.
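
A sketch of the “graph-then-text” direction: expand the query entity along KG edges, then treat each reached triple as an extra query for text spans. The tiny inline graph and the hop limit are illustrative; in practice the graph comes from the 4.1 pipeline.

import networkx as nx

kg = nx.MultiDiGraph()
kg.add_edge("device A", "valve B", relation="has_component")
kg.add_edge("valve B", "pressure >= 0.35 MPa", relation="interlock_condition")

def expand_paths(kg, seed, max_hops=2):
    frontier, triples = {seed}, []
    for _ in range(max_hops):
        nxt = set()
        for node in frontier:
            for _, nbr, data in kg.out_edges(node, data=True):
                triples.append((node, data["relation"], nbr))
                nxt.add(nbr)
        frontier = nxt
    return triples            # each triple becomes an extra query for text spans

print(expand_paths(kg, "device A"))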

4.4 Evidence packs: paths + spans, packaged

Return a structured evidence pack: KG path, text spans, source/ID, version, timestamp, offsets. The generator cites the pack rather than free‑for‑all context, reducing noise and yielding a clear provenance chain.
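
A minimal evidence-pack schema sketched as dataclasses; the field names mirror the list above and are an assumption about one reasonable layout, not a fixed format.

from dataclasses import dataclass, field

@dataclass
class EvidenceSpan:
    source_id: str            # document or clause identifier
    version: str              # release/version of the source
    timestamp: str            # when this version took effect or was indexed
    text: str                 # the cited span itself
    offset: tuple             # (start, end) character offsets in the source

@dataclass
class EvidencePack:
    question: str
    kg_path: list = field(default_factory=list)   # e.g. [("device A", "has_interlock", "valve B")]
    spans: list = field(default_factory=list)     # list[EvidenceSpan]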


5. Retrieval‑aware training: teach models to “flip pages,” not memorize

Instead of passively swallowing context, make models aware of when/what to retrieve and how to cite.

5.1 Self‑RAG: self‑check + re‑retrieve loops

Draft → self‑evaluate → retrieve more → refine until confidence or step cap. Great for open‑domain QA/long‑form, but control loops and budget carefully.
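
A control-loop sketch of the draft → self-evaluate → re-retrieve pattern, with a step cap as the budget guard. This is not the actual Self‑RAG reflection-token mechanism; retrieve, generate, and self_score are hypothetical callables you would supply.

def self_rag_answer(question, retrieve, generate, self_score,
                    max_steps=3, confidence_target=0.8):
    # Draft once, then loop: score the draft, fetch what is missing, refine.
    evidence = retrieve(question)
    draft = generate(question, evidence)
    for _ in range(max_steps):                        # hard step cap controls cost
        score, missing = self_score(question, draft, evidence)
        if score >= confidence_target:
            break
        evidence = evidence + retrieve(missing or question)   # targeted re-retrieval
        draft = generate(question, evidence)
    return draft, evidence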

5.2 RAFT: label noise vs. evidence in training

Label which retrieved chunks are distractors vs. valid evidence; require inline citations during training. The model learns to ignore noise and cite correctly.
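
A sketch of how RAFT-style training examples can be assembled: each question gets its gold (“oracle”) chunks plus sampled distractors, and the target answer carries inline citations. The field names and citation format are illustrative, not the paper's exact recipe.

import random

def make_raft_example(question, answer, oracle_chunks, corpus, n_distractors=3):
    pool = [c for c in corpus if c not in oracle_chunks]
    distractors = random.sample(pool, min(n_distractors, len(pool)))
    context = oracle_chunks + distractors
    random.shuffle(context)                      # the model must learn to ignore the noise
    cited_answer = answer + " " + " ".join(
        f"[{context.index(c)}]" for c in oracle_chunks)   # inline citations to evidence
    return {"question": question, "context": context, "target": cited_answer}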

5.3 RA‑DIT: two‑way learning between retriever and generator

First fine‑tune the LLM to use and cite retrieved evidence; then fine‑tune the retriever with feedback from the model so retrieval matches what the generator actually needs, rather than tuning raw similarity thresholds alone. Best gains, but the most compute.

5.4 Practical path: start small

If you have annotations, start with RAFT‑style fine‑tuning on Q–A–evidence triples. With little data, bootstrap via synthetic labels then human‑review a slice. For RA‑DIT, iteratively tune recall and re‑rank stages—no need to do everything at once.


6. Agentic orchestration: multi‑step plans + tool calls

Many tasks are not single‑turn. Think: check manual → pull telemetry → compute thresholds → compare maintenance plan → produce steps. RAG handles knowledge; an Agent handles planning and tool calls.

6.1 Tool registry and routing

Register tools (SQL, logs, spreadsheets, external APIs) with I/O schemas and permissions. The Agent chooses tools based on intent, feeds outputs back to retrieval/LLM.
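
A minimal tool-registry sketch: each tool declares an I/O schema and a required clearance, and the agent routes by intent and permissions. Everything here is illustrative, including the stub tool bodies.

def sql_query(query: str):
    return f"(stub) rows for: {query}"               # stand-in for a real DB call

def fetch_telemetry(device_id: str, window: str):
    return f"(stub) series for {device_id} over {window}"

TOOLS = {
    "sql_query": {"fn": sql_query, "schema": {"query": str}, "clearance": "internal"},
    "telemetry": {"fn": fetch_telemetry, "schema": {"device_id": str, "window": str},
                  "clearance": "ops"},
}

def call_tool(name, user_clearances, **kwargs):
    tool = TOOLS.get(name)
    if tool is None or tool["clearance"] not in user_clearances:
        return None                                  # deny -> caller falls back to evidence-only
    return tool["fn"](**kwargs)

print(call_tool("telemetry", {"ops"}, device_id="valve-B", window="24h"))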

6.2 Safety caps and budgets

Hard caps: max 4–6 tool calls; budget per query; P95 latency limits with safe fallbacks (“evidence‑only” answers if over cap).
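
A sketch of those hard caps: a bounded number of tool calls, a per-query token budget, and a fall-through to an evidence-only answer. plan_next_call, execute, and tokens_used are hypothetical hooks.

def run_agent_loop(question, evidence, plan_next_call, execute, tokens_used,
                   max_tool_calls=5, token_budget=8000):
    for _ in range(max_tool_calls):                  # cap on tool calls (4-6 in practice)
        if tokens_used() > token_budget:
            break                                    # over budget: stop and answer from evidence
        call = plan_next_call(question, evidence)
        if call is None:                             # planner says no more tools are needed
            break
        evidence.append(execute(call))               # feed the result back as evidence
    return evidence                                  # caller generates an evidence-only answer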

6.3 Caching and replay

Cache common intents via (intent summary + evidence hash). Log tool inputs/outputs for reproducibility and audits.
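
A sketch of the cache key described above (intent summary + evidence hash); the hashing scheme is one reasonable choice, not the only one.

import hashlib, json

def cache_key(intent_summary: str, evidence_spans: list) -> str:
    evidence_hash = hashlib.sha256(
        json.dumps(sorted(evidence_spans), ensure_ascii=False).encode()).hexdigest()
    return hashlib.sha256(f"{intent_summary}|{evidence_hash}".encode()).hexdigest()

cache = {}
key = cache_key("valve B maintenance interval", ["Valve B: service every 6 months."])
cache[key] = {"answer": "...", "tool_log": []}       # replayable record for audits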


7. Freshness and streaming indexes: keep knowledge alive

7.1 CDC and incremental embeddings

Capture inserts/updates/deletes; chunk/embed/update indexes hourly or faster for volatile domains.
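
A sketch of applying one CDC batch to the index: re-chunk and re-embed changed documents, delete removed ones. The chunk, embed, and index objects are hypothetical stand-ins for your embedding model and vector store.

def apply_cdc_batch(events, index, chunk, embed):
    for ev in events:                                # ev: {"op": ..., "doc_id": ..., "text": ...}
        if ev["op"] in ("insert", "update"):
            index.delete(doc_id=ev["doc_id"])        # drop stale chunks first
            for i, piece in enumerate(chunk(ev["text"])):
                index.upsert(id=f'{ev["doc_id"]}#{i}', vector=embed(piece),
                             metadata={"doc_id": ev["doc_id"], "version": ev.get("version")})
        elif ev["op"] == "delete":
            index.delete(doc_id=ev["doc_id"])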

7.2 TTL and version policy

Different TTLs per content type and switchable policies: “latest first,” “stable first,” or “historical snapshot.”

7.3 Rolling eval and monitoring

Maintain a last‑7‑days eval set; track recall hit rate, NDCG, P95 latency, and cost. Investigate drift quickly.
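
A sketch of the two retrieval metrics named above, computed over eval records of the form {"retrieved_ids": [...], "relevant_ids": [...]}; binary relevance is assumed for simplicity.

import math

def hit_rate_at_k(records, k=5):
    hits = sum(1 for r in records if set(r["retrieved_ids"][:k]) & set(r["relevant_ids"]))
    return hits / len(records)

def ndcg_at_k(records, k=5):
    total = 0.0
    for r in records:
        gains = [1.0 if doc in r["relevant_ids"] else 0.0 for doc in r["retrieved_ids"][:k]]
        dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains))
        ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(r["relevant_ids"]))))
        total += dcg / ideal if ideal else 0.0
    return total / len(records)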


8. Parameter‑level updates (optional)

Use LoRA/adapters for style/templates; surgical editing for rare fact fixes. Always keep evidence‑first generation to avoid overconfidence.
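
A minimal LoRA setup for the style/template use case, using Hugging Face peft and transformers; the base model name and the hyperparameters are placeholders.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")    # any causal LM
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"],            # attention projections only
                    task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()       # only a small fraction of weights is trained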


9. KBLaM: a trustworthy baseline for constrained environments

We’ve deployed KBLaM on domestic servers, pairing external retrieval with structural knowledge encodings. Its ideas are worth borrowing for “trustworthy, controllable, explainable” RAG.

9.1 Unified knowledge layer: text, tables, graphs, metadata

Unify text, tables, graphs, and metadata; embed all modalities into one shared space; record provenance metadata (source/version/time/clearance) for every item.

9.2 Evidence chain: question → evidence → answer

Route by intent; retrieve/re‑rank; expand via KG; build evidence packs (source/version/path/offsets); generate; verify via SQL/rules if needed; log for audits.

9.3 Minimal viable flow (pseudo)


def answer(question):
    # 1) Route by intent (lookup vs. computation vs. multi-step).
    intent = classify(question)
    route = select_route(intent)
    # 2) Retrieve and re-rank candidates along that route.
    candidates = retrieve(question, route)
    # 3) Expand via KG paths when the task needs structure.
    if intent.requires_graph:
        path = graph_search(candidates)
        candidates = merge(candidates, path)
    # 4) Package evidence (source/version/path/offsets).
    evidence = pack(candidates)
    # 5) Generate, then verify via SQL/rules when needed.
    draft = generate(question, evidence)
    ans = verify_and_refine(draft) if need_verify(draft) else draft
    # 6) Log question -> evidence -> answer for audits and replay.
    log(question, evidence, ans)
    return ans

10. Cost model and examples: do the math

10.1 Input tokens dominate

“Dump context” may push inputs to 30k+ tokens; an evidence‑pack approach typically needs only a few spans (~500–700 tokens each), roughly 1.5k–2.1k tokens for three spans, often a 10×+ reduction in input.
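
A back-of-the-envelope check of that comparison; the per-token price is a placeholder, so substitute your provider's numbers.

PRICE_PER_1K_INPUT = 0.003             # assumed USD per 1k input tokens (placeholder)

dump_tokens = 30_000                   # "dump the document" prompt
pack_tokens = 3 * 700                  # 3 evidence spans x ~700 tokens

print(dump_tokens / pack_tokens)                         # ~14x more input tokens
print(dump_tokens / 1000 * PRICE_PER_1K_INPUT)           # cost per call, dump style
print(pack_tokens / 1000 * PRICE_PER_1K_INPUT)           # cost per call, evidence packs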

10.2 Retrieval adds a little cost, saves a lot

BM25/dense retrieval is cheap; cross‑encoder re‑rankers are small and CPU‑friendly. Overall, “retrieve then generate” wins in most cases.

10.3 Capability vs. cost trade‑offs

For content creation or truly long citations, hybrid strategies shine: RAG for facts, long context for free‑form prose. Count tokens/calls/costs and choose per need.


11. Implementation checklist: from prototype to production

  1. Data governance: scrub PII; classify by clearance; track source/version.
  2. Chunking: section‑based with sliding windows.
  3. Retrieval baseline: BM25 + dense + re‑rank; add table/code/figure retrievers.
  4. Evidence packs: include source_id, url, version, timestamp, offset, path, with a unique answer_id.
  5. Generation templates: claim‑evidence style with inline citations.
  6. Agent caps: tool list, step/budget limits; cache frequent queries.
  7. Freshness: incremental crawl/embeddings; TTL and version policies; rolling eval.
  8. Monitoring/replay: dashboards for recall/NDCG/P95/cost; replay question→evidence→answer→tools.
  9. Gray release: canary new models/strategies; compare to baseline; ramp up gradually.
  10. Team ops: clear owners for retrieval/KG/training/infra; fast feedback loops.

12. Next stops: where RAG should evolve

12.1 Dynamic retrieval and gating

Scale brings cost; add ExpertRAG‑style gating: only retrieve when internal knowledge is insufficient, and activate sparse “experts” per query.

12.2 Hierarchical or hybrid retrieval

Two‑stage “coarse → fine” pipelines (doc‑level sparse → in‑doc dense → cross‑encoder) help multi‑hop tasks (cf. HiRAG).

12.3 Preserve structure in knowledge encodings

Avoid compressing KGs into a single vector; encode triples or subgraphs; add numeric/date encodings; chain paths with CoT.

12.4 Adaptive compression and knowledge selection

Allocate vector capacity by importance (access frequency, confidence, business value); drop irrelevant tokens—MoE‑style routing.

12.5 Multilingual and cross‑lingual retrieval

Use multilingual embeddings (e.g., gtr/multi‑qa‑mpnet) with language tags/gates; cross‑lingual reasoning is the hard part.

12.6 External checks and self‑consistency

Post‑answer verification via graph/SQL; re‑retrieve or abstain on contradictions; check KG path coherence; optional web cross‑checks.


13. Conclusion: the map won’t go out of style

RAG addresses “finding and citing knowledge,” largely orthogonal to window size. Long context helps, but not with cost, timeliness, audits, or privacy—and not with multi‑step structured tasks.

The 2025 “five‑piece set” emerges: skeleton + page‑flipping + tools + freshness + modest parameter updates. GraphRAG guides complex paths; retrieval‑aware training teaches citation; agents manage plans; freshness prevents staleness; small parameter updates help templates. KBLaM’s unified layer, evidence packs, and audit trails offer a deployment blueprint for regulated settings.

With dynamic gating, hierarchical retrieval, structured encodings, adaptive compression, and multilingual capability, RAG will keep evolving—likely fusing deeper with knowledge‑injection models like KBLaM. One constant remains: under budget and in complex settings, retrieval is the LLM’s most reliable partner. Bring a live map; any backpack gets lighter.


  1. Akari Asai et al., Self‑RAG: Learning to Retrieve, Generate, and Critique through Self‑Reflection, arXiv, 2023.
  2. Microsoft Research, GraphRAG, 2024.
  3. Naman Bansal, Best Open‑Source Embedding Models Benchmarked and Ranked, Supermemory Blog, 2025.
  4. Haoyu Huang et al., HiRAG: Retrieval‑Augmented Generation with Hierarchical Knowledge, arXiv, 2025.
  5. Esmail Gumaan, ExpertRAG: Efficient RAG with Mixture of Experts, arXiv, 2025.
  6. Hang Luo et al., Causal Graphs Meet Thoughts: Enhancing Complex Reasoning in Graph‑Augmented LLMs, arXiv, 2025.
  7. Wei Liu et al., XRAG: Cross‑lingual Retrieval‑Augmented Generation, arXiv, 2025.