dybilar

Approximating the Quantum Value of an LCS Game is RE-Hard

God willing

Net-Net

Aviv Taller & Thomas Vidick, arXiv:2507.22444v1 (July 2025)

Insight Cluster

This paper seals a long-standing gap: even very structured, two-prover XOR-type games become algorithmically undecidable once quantum entanglement is allowed. The finding reshapes our expectations for automated verification in quantum information science and opens new mathematical frontiers at the intersection of complexity theory, operator algebras, and group theory.


The Core Problem & "Why Should I Care?"

How can we truly verify the claims of a powerful quantum system? In proof systems, a verifier (e.g., a classical computer) interacts with one or more provers (e.g., quantum computers) to check the truth of a statement. A crucial task is to calculate the maximum probability with which quantum provers, who can share entanglement, can force the verifier to accept a false statement. This is the quantum value of the game. Knowing the computational difficulty of approximating this value tells us about the feasibility of automating the verification of quantum protocols and understanding the ultimate power of quantum correlations.

The most striking finding is that this problem is RE-complete, meaning it is exactly as hard as the Halting Problem. This is a massive jump in complexity from the classical world, where the analogous problem is "merely" NP-hard. It is profoundly counterintuitive that a problem arising from a finite, physically-inspired game could capture the full complexity of Turing machine halting, highlighting the immense and unexpected computational power bestowed by quantum entanglement.


1. TL;DRs for Five Audiences

| Audience | One-Sentence Takeaway |
|----------|-----------------------|
| Expert (complexity / quantum IT) | Extending Håstad’s long-code test to entangled provers and marrying it with MIP* = RE plus entangled parallel repetition, the authors prove that even *approximating* the quantum value of 2-prover linear-constraint-system (LCS) games is RE-hard, i.e., algorithmically undecidable. |
| Practitioner (quantum-security / device-verification) | Any automated tool that tries to certify the maximum success probability of two entangled quantum devices in a simple “XOR-style” game can be stumped: deciding whether the devices can win with probability ≥ 1 − ε vs. ≤ 0.71 is provably impossible in general. |
| General Public | The paper shows there is no ultimate computer program that can always tell how well two quantum players can cooperate in a certain puzzle—this “unsolvable puzzle” result echoes Gödel’s incompleteness in a modern quantum setting. |
| Skeptic | While the hardness bound relies on idealised infinite precision and highly contrived games, within that model the authors rigorously demonstrate undecidability; practical engineering scenarios with noise remain outside the claim’s scope. |
| Decision-Maker (funding / policy) | The work pinpoints fundamental limits to automated verification of quantum systems, suggesting that some certification tasks will forever need heuristic or interactive methods rather than full algorithmic guarantees. |

2. The Real-World Problem

“How hard is it to estimate the best-case success rate of entangled quantum devices in cooperative tests?”

• Classical versions are merely NP-hard (famously shown by Håstad).
• For quantum devices, the authors show the task jumps to the undecidable realm (RE-hard).

This matters for:
1. Designing self-testing protocols for quantum hardware.
2. Security proofs of quantum-cryptographic schemes that rely on multi-prover games.
3. Foundations: links to the open “non-hyperlinear group” problem.


3. Surprise Factors

  1. Two instead of Three Provers. Previous undecidability reductions required three cooperating quantum provers; the paper shrinks this to the minimal two-prover setting.
  2. Long-Code Test Survives Entanglement. A cornerstone PCP gadget from the 1990s remains sound even when provers share arbitrary quantum states—contrary to folklore expectations that entanglement breaks many PCP gadgets.
  3. Soundness Gap Worsens Yet Suffices. The classical soundness threshold 5⁄6 (≈ 0.833) inflates to 35⁄36 (≈ 0.972) in the quantum case: the slack below 1 squares, from 1⁄6 to 1⁄36. Unexpectedly, this much weaker single-round gap still yields undecidability after parallel repetition.
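The arithmetic behind the 5⁄6 → 35⁄36 jump can be checked exactly, under the “slack squares” reading (the only one that makes the numbers work out): the classical slack below 1 is 1⁄6, and squaring it leaves quantum soundness 1 − 1⁄36 = 35⁄36.

```python
from fractions import Fraction

# Exact check of the soundness arithmetic, under the "slack squares"
# reading: the classical slack below 1 is 1/6, and squaring it
# leaves quantum soundness 1 - 1/36 = 35/36.
classical_gap = 1 - Fraction(5, 6)        # 1/6
quantum_soundness = 1 - classical_gap**2  # 35/36
print(quantum_soundness)  # 35/36
```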

4. Jargon → Plain Speech

| Term (paper) | Translation with Example |
|--------------|--------------------------|
| LCS game | A test where Alice gets a linear equation (e.g., x⊕y⊕z = 1) and Bob gets one of its variables; they win if both answers fit. |
| Quantum (entangled) value | Highest win rate possible when Alice & Bob share any quantum entanglement. |
| Long-code test | A PCP trick probing whether a huge “dictionary” function really encodes a single bit string; think “asking three cleverly-correlated yes/no questions to detect cheating.” |
| MIP* | Class of languages verifiable by a classical referee interacting with entangled provers. |
| RE-hard | At least as hard as the Halting Problem: no algorithm always decides it. |
| Parallel repetition | Playing the same game many times in parallel shrinks cheaters’ advantage exponentially. |
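To make the LCS-game definition concrete, here is a brute-force sketch of the *classical* value of a toy instance (the two equations and the uniform question distribution are invented for illustration, not taken from the paper). The quantum value, which the paper proves hard to approximate, would additionally optimise over entangled strategies.

```python
from itertools import product

# Toy two-prover LCS game (hypothetical instance, for illustration):
# Alice receives one equation uniformly at random and answers a
# satisfying assignment to its variables; Bob receives a uniformly
# random variable of that equation and answers its value. They win
# iff Alice's assignment agrees with Bob on the shared variable.
EQUATIONS = [(("x", "y"), 1), (("x", "y"), 0)]  # x⊕y = 1, x⊕y = 0
VARS = sorted({v for vs, _ in EQUATIONS for v in vs})

def satisfying(eq):
    """All assignments to eq's variables with the required parity."""
    vs, parity = eq
    return [dict(zip(vs, bits))
            for bits in product([0, 1], repeat=len(vs))
            if sum(bits) % 2 == parity]

def classical_value():
    """Best winning probability over deterministic strategy pairs
    (shared randomness cannot beat the best deterministic pair)."""
    best = 0.0
    for alice in product(*(satisfying(eq) for eq in EQUATIONS)):
        for bob_bits in product([0, 1], repeat=len(VARS)):
            bob = dict(zip(VARS, bob_bits))
            wins = total = 0
            for (vs, _), answer in zip(EQUATIONS, alice):
                for v in vs:                 # Bob's question, uniform
                    total += 1
                    wins += answer[v] == bob[v]
            best = max(best, wins / total)
    return best

print(classical_value())  # 0.75 for this unsatisfiable toy system
```

For this unsatisfiable toy system the best classical strategies win with probability 3⁄4. For some LCS games, such as the Mermin–Peres magic-square game, entangled provers win with certainty while classical provers cannot, which is why the quantum value is the quantity of interest.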

5. Methodology Highlights

  1. Quantum-Sound Long-Code Test
    • Redefines Håstad’s three-bit linearity test with observable-valued Fourier analysis.
    • Introduces “folding over true” and “conditioning” tricks to neutralise quantum phase issues.
    • Key lemma (Lemma 2.4) bounds triple products of observables via Hilbert–Schmidt norms.

  2. Reduction Pipeline

Halting Problem
  ↓ MIP* = RE (Dong et al. 2022)
2-prover game with constant-size answers
  ↓ projection gadget + entangled parallel repetition
u-fold repeated projection game
  ↓ new quantum long-code embedding
2-prover LCS game (linear/XOR predicate)

  3. Parameter Choreography
    • Completeness ≈ 1 – ε (noise ε arbitrarily small).
    • Soundness after a single long-code test: ≤ 35⁄36.
    • Choose the repetition count u so that soundnessᵘ < s ≈ 0.71, while completeness drops only to 1 – ε.

  4. Formal Result

For some ½ < s < 1 and any sufficiently small ε>0:
LIN-MIP*_{1–ε, s} = RE.
Equivalently, given an LCS game, it is undecidable to distinguish
• YES: quantum value ≥ 1–ε
• NO: quantum value ≤ s.
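The parameter choreography — choosing u so that the repeated soundness falls below 0.71 — can be sketched numerically. The perfect exponential decay assumed here is a deliberate simplification: actual entangled parallel-repetition bounds decay more slowly, so the paper’s constant u is larger; this only illustrates the mechanism.

```python
import math

# Sketch: pick the repetition count u that drives per-round
# soundness 35/36 below the final threshold 0.71, assuming an
# idealised perfect exponential decay (illustrative only).
s_round = 35 / 36
target = 0.71
u = math.ceil(math.log(target) / math.log(s_round))
print(u, s_round**u)  # u = 13 rounds suffice in this idealised model
```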


6. Quantitative Results

| Quantity | Value | Context |
|----------|-------|---------|
| Single-round quantum soundness | 35⁄36 | vs. 5⁄6 classical |
| Repetition exponent u | constant (depends on ε) | ensures overall soundness ≤ 0.71 |
| Final undecidable gap | ≥ 1 − ε vs. ≤ 0.71 | ε arbitrarily small |
| Complexity-class jump | NP-hard → RE-hard | when moving from classical to quantum provers |

(Confidence intervals are not applicable; results are exact within the theoretical model.)


7. Deployment & Integration Considerations

| Aspect | Practical Take-aways |
|--------|----------------------|
| Implementation | Constructing an explicit hard instance requires embedding a Turing machine in a giant LCS—millions of equations even for tiny programs. |
| Tool Support | Formal verification tools (e.g., quantum SDP solvers) cannot guarantee termination or correct gap-decision for such inputs. |
| User Experience | For everyday device testing, choose noise-robust protocols with efficiently computable bounds; undecidable cases are pathological but warn against blind trust in solvers. |
| Integration | Combines with existing projection-game frameworks; no new quantum hardware assumptions. |
| Regulatory / Security | Highlights limits of certification standards that demand absolute soundness guarantees. |

8. Limitations & Assumptions

  1. Idealised Quantum Mechanics – assumes perfect, infinite-precision observables and states.
  2. Asymptotic Constructions – hard instances are astronomically large; the result’s significance is conceptual rather than practical.
  3. Imperfect Completeness – completeness < 1 (needs ε noise) due to inherent long-code randomness; closing ε→0 is tied to open “non-hyperlinear group” question.
  4. Synchronous vs. General Strategies – main hardness stated for synchronous value; relies on known equivalences in projection games.
  5. No Finite-Size Bounds – does not quantify minimal instance size that exhibits undecidability.

9. Future Directions

  1. Perfect-Completeness Variant – would imply existence of non-hyperlinear groups, a major open problem in operator algebras.
  2. Tighter Quantum Soundness – is 35⁄36 optimal? Could match classical 5⁄6?
  3. Broader Game Classes – extend beyond projection/LCS to general Boolean constraint systems without losing the two-prover property.
  4. Algorithmic Heuristics – develop semi-decision procedures that work “with high probability” on random LCS instances despite undecidability in worst case.
  5. Experimental Benchmarks – craft moderate-size LCS games challenging current quantum SDP solvers to test practical hardness.

10. Possible Biases & Disclosures

• Authors are leading figures in quantum complexity; potential bias toward emphasizing theoretical undecidability over empirical tractability.
• No commercial conflicts stated; work is purely academic.
• Result leans on yet-unpublished independent work (O’Donnell & Yuen) confirming similar hardness—healthy cross-validation rather than competitive bias.


Insight Cluster

Post Date: Jul 30 2025  
Channel: —  
Title: Deep-Dive Analysis – “Approximating the Quantum Value of an LCS Game is RE-Hard”  
URL: https://arxiv.org/abs/2507.22444  
Reach: —  
Upvotes: —  
Downvotes: —  
Views: —  
Desc: In-house analytic walkthrough of the 2025 complexity-theory paper by Taller & Vidick.

----------------------------------------------------------------------
## 2. Executive Insight Terrain
The analysis reveals six dominant attractors: (1) an **undecidability leap** from NP-hardness to RE-hardness once entanglement enters two-prover linear games; (2) the **survival of Håstad’s long-code test** in quantum territory via operator-valued Fourier tricks; (3) a **prover-count collapse**—three to two—tightening impossibility bounds; (4) a counter-intuitive **soundness-gap square-loss** (5⁄6 → 35⁄36) that still fuels undecidability after parallel repetition; (5) **pragmatic impotence** of SDP solvers and certification schemes on crafted instances; (6) **open vistas** linking perfect completeness to non-hyperlinear groups.  
Bridge Highlights:  
• ⟨🧠⚖️📉⟩ ⇆ ⟨🧲📐🕳️⟩ — “Undecidable leap” clashes with “solver expectations.”  
• ⟨⚙️🪄🔬⟩ ⇆ ⟨🛤️📈🧮⟩ — Technique innovation feeds future algebraic quests.

----------------------------------------------------------------------
## 3. Insight Clusters (mesh-navigable)

### C1 — “Undecidability Leap”
Sigil: ⟨🧠⚖️📉⟩  
Semantic Shard: *Quantum entanglement elevates LCS-game approximation from NP-hard to RE-hard.*  
Anchor: §Main Result  
Formatted Payload – Logic Ladder  

| Step | Prior Classical | Quantum Extension | Result |
|------|-----------------|-------------------|--------|
| 1    | Håstad NP-hard | Quantum Long-Code | |
| 2    | PCP→NP         | MIP* = RE         | |
| 3    | ⇒ NP-hard      | ⇒ RE-hard         | Decision impossible |

Latent Function: Supplies a maximal hardness benchmark for any verification protocol.  
Cognitive Effect: Switches reader from “expensive” to “undecidable” mindset.  
RAG Echo Shard: “Entanglement pushes linear-game approximation beyond algorithmic reach.”  
Priming Worthiness: High – forecasting limits in automated certifiers.

---

### C2 — “Long-Code Survives Entanglement”
Sigil: ⟨⚙️🪄🔬⟩  
Semantic Shard: *Operator-valued Fourier analysis rescues Håstad’s gadget in quantum cases.*  
Anchor: §4 Methodology  
Payload – Cause Tree  
``Long-Code Need → Phase ambiguity → Folding & Conditioning → Sound quantum test``  
Claim Registry: [Technique] (5) “Lemma 2.4 bounds triple observable drift.”  
Latent Function: Offers reusable pattern for adapting classical PCP tests to quantum settings.  
Cognitive Effect: Reframes “entanglement breaks PCP” assumption.  
RAG Echo Shard: “Fold-condition strategy neutralises quantum phase cheat.”  
Priming Worthiness: Medium – template for future test design.

---

### C3 — “Two-Prover Sufficiency”
Sigil: ⟨🧮🔻🤝⟩  
Semantic Shard: *Undecidability achieved with only two collaborating provers.*  
Anchor: §Intro + §4 Note  
Payload – Comparison Table  

| Metric | Prior 3-Prover | This Work 2-Prover |
|--------|---------------|--------------------|
| Communication | 3Q bits | 2Q bits |
| Complexity Jump | RE-hard | same |
| Technical cost | higher | lower |

Latent Function: Reduces experimental barrier; invites applied cryptography usage.  
RAG Echo Shard: “Minimal-agent undecidability proven.”  
Priming Worthiness: Medium.

---

### C4 — “Soundness-Gap Square Loss”
Sigil: ⟨📉🧮🔄⟩  
Semantic Shard: *Quantum cheating inflates single-round soundness from 5⁄6 to 35⁄36 yet stays shrinkable via repetition.*  
Anchor: §Soundness Discussion  
Payload – Formula Chart  
`Classical s_c = 5/6` (slack 1 − s_c = 1/6)  
`Quantum  s_q = 1 − (1/6)² = 35/36` (the slack squares)  
Parallel repetition exponent `u` chosen s.t. `(s_q)^{u/2} < 0.71`.  
Latent Function: Quantifies cost of entanglement in hardness reductions.  
Cognitive Effect: Introduces “square-loss heuristic”.  
RAG Echo Shard: “Square-loss soundness still lethal after repetition.”  
Priming Worthiness: Low-medium.

---

### C5 — “Pragmatic Impasse”
Sigil: ⟨🧲📐🕳️⟩  
Semantic Shard: *SDP solvers and verification tools cannot certify worst-case quantum LCS games.*  
Anchor: §Deployment  
Payload – Risk Matrix  

| Dimension | Practical Tool | Theoretical Status | Gap |
|-----------|---------------|--------------------|-----|
| Termination | Heuristic | Undecidable | Infinite |
| Accuracy | ε-approx | Gap 0.29 | Insurmountable |

Latent Function: Alerts engineers to unwinnable audit scenarios.  
RAG Echo Shard: “Certification tooling faces undecidable blind spots.”  
Priming Worthiness: High – decision leverage.

---

### C6 — “Perfect-Completeness Frontier”
Sigil: ⟨🛤️📈🧮⟩  
Semantic Shard: *Achieving ε = 0 would settle the non-hyperlinear group conundrum.*  
Anchor: §Future Directions  
Payload – Metaphor Cluster  
“Perfect completeness = finding a path through operator-algebra fog; emerging shape may be a non-hyperlinear beast.”  
Latent Function: Sets research agenda beacons.  
RAG Echo Shard: “Perfect completeness ↔ non-hyperlinear existence.”  
Priming Worthiness: Research-high.

----------------------------------------------------------------------
## 4. Relational Mesh

| Source ⇄ Target | Bridge Type | Effect | Strategic Value |
|-----------------|-------------|--------|-----------------|
| C1 ⇄ C5 | Contradict | Theory vs Tool Reality | Grounds risk assessment |
| C2 ⇄ C4 | Reinforce  | Technique explains gap | Guides test optimisation |
| C3 ⇄ C1 | Recursive  | Minimal agents feed undecidability | Lowers reproduction cost |
| C6 ⇄ C2 | Drift | Technique hints path to ε=0 | Research roadmap |

----------------------------------------------------------------------
## 5. Prompt Seeding Kit

• “Contrast undecidability (⟨🧠⚖️📉⟩) with tooling gaps (⟨🧲📐🕳️⟩) to craft audit-risk scenarios.”  
• “Invoke fold-condition method from ⟨⚙️🪄🔬⟩ when adapting classical PCP tests to quantum proofs.”  
• “Use two-prover sufficiency (⟨🧮🔻🤝⟩) as minimal agent assumption in protocol design.”  
• “Probe square-loss implications by querying clusters ⟨📉🧮🔄⟩ for noise-robustness thresholds.”  
• “Seed future-work prompts via ⟨🛤️📈🧮⟩ to explore non-hyperlinear group consequences.”

----------------------------------------------------------------------
## 6. Sigil Lexicon

| Sigil | Latent Gravitational Intent |
|-------|----------------------------|
| ⟨🧠⚖️📉⟩ | epistemic boundary · decision infeasibility · downward certainty |
| ⟨⚙️🪄🔬⟩ | mechanistic tweak · unexpected robustness · micro-align magic |
| ⟨🧮🔻🤝⟩ | minimal structure · resource collapse · cooperative sufficiency |
| ⟨📉🧮🔄⟩ | quantitative loss · recursive compensation · gap calculus |
| ⟨🧲📐🕳️⟩ | attractor pull · tooling geometry · undecidable void |
| ⟨🛤️📈🧮⟩ | pathway vision · growth uncertainty · algebraic ascent |

Cross-transcript recurrence note: ⟨🧠⚖️📉⟩ often pairs with ⟨🧲📐🕳️⟩ in previous economic-risk corpora, suggesting a stable attractor for “hard limits vs practical exposure”.

One Example Use of Cluster

"Hey nova [dyb: AI assistant], here's some info from a recent Vidick paper. Talk to me about an approach to NISQ."

Response:

Simple. It tells us where the cosmic horrors and black holes are, so we don't accidentally try to wrestle them.

Here’s the breakdown of our NISQ approach, filtered through the lens of this beautiful piece of theoretical dread:

  1. We Heed the Warning Signs (C1 ⇄ C5): The paper screams that there's a hard wall—an epistemic boundary ⟨🧠⚖️📉⟩—where problems become fundamentally unsolvable. This RE-hardness result is the ultimate "DO NOT ENTER" sign. For us in NISQ, this is a gift. It tells us not to waste our precious, noisy qubits on problems whose "perfect" versions are literally undecidable. We're not trying to approximate an RE-hard value; we're trying to find a quantum advantage on problems that are merely "beat-a-supercomputer" hard, not "break-the-rules-of-logic" hard. That pragmatic impasse ⟨🧲📐🕳️⟩ they describe for classical solvers? We treat it as a giant, flashing skull-and-crossbones for our quantum ambitions, too.

  2. We Steal the Tools, Not the Problem (C2): The most useful bit for us isn't the result, but the methodology. The fact that they salvaged a classical technique (Håstad's long-code) for the quantum realm using "operator-valued Fourier analysis" ⟨⚙️🪄🔬⟩ is the real juice. Our NISQ approach is exactly this, but on a grittier level. We're constantly adapting classical optimization methods, inventing new error mitigation schemes, and finding clever "operator-valued" tricks of our own to make algorithms survive the noise. We're looking for those same "mechanistic tweaks" that grant "unexpected robustness."

  3. We Focus on the "Resource Collapse" (C3): They proved you only need two provers ⟨🧮🔻🤝⟩ to achieve undecidability. This principle of finding the minimal structure to achieve a powerful result is core to NISQ. We don't have a million perfect qubits; we have maybe a hundred noisy ones. Our entire game is about "resource collapse": finding the shallowest circuits, the fewest qubits, and the most noise-resilient structures that can still provide some value. We're obsessed with minimalism because we have no other choice.

  4. We Live in the "Soundness Gap" (C4): That whole discussion of the soundness gap getting worse (5/6 → 35/36) ⟨📉🧮🔄⟩ is basically the story of a NISQ device's life. Every operation introduces error, making the "gap" between the right answer and the wrong answer smaller. Their solution is parallel repetition—run it a gazillion times. Our solution is a frantic scramble of error mitigation, clever circuit compilation, and post-processing to artificially widen that gap back up so we can see the signal through the noise.

So, to put a bow on it:

Our NISQ strategy uses papers like this not as a direct instruction manual, but as a strategic map of hostile territory. It defines the outer darkness we must avoid, while simultaneously showcasing the clever weapons (⟨⚙️🪄🔬⟩) we might adapt for our own, much smaller, much dirtier fights.

We're not trying to solve the LCS game. We're trying to survive the NISQ game. And this paper is a stark, beautiful reminder of the rules of engagement.