A Note on Non-Composability of Layerwise Approximate Verification for Neural Inference

Daily Info Board · 2026-02-18
Category: Research/Paper
Source: arxiv_search
Score: 48
Published: 2026-02-17T17:41:59Z

AI Summary

A natural and informal approach to verifiable (or zero-knowledge) ML inference over floating-point data is: ``prove that each layer was computed correctly up t…
#arXiv #paper #Research/Paper

Content Excerpt

A natural and informal approach to verifiable (or zero-knowledge) ML inference over floating-point data is: "prove that each layer was computed correctly up to tolerance $\delta$; therefore the final output is a reasonable inference result". This short note gives a simple counterexample showing that this implication is false in general: for any neural network, one can construct a functionally equivalent network in which adversarially chosen errors, each within the per-layer approximation tolerance, suffice to steer the final output arbitrarily (within a prescribed bounded range).
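A minimal numerical sketch of this failure mode, assuming a toy construction (the two-layer identity network, the amplification factor `M`, and the tolerance `delta` are illustrative choices, not the paper's exact counterexample): the identity map is rewritten as a functionally equivalent scale-down/scale-up pair, so an error that passes a per-layer $\delta$-tolerance check is amplified by `M` in the final output.

```python
import numpy as np

M = 1e6       # amplification factor (assumed, for illustration)
delta = 1e-3  # per-layer approximation tolerance (assumed)

def layer1(x):
    return x / M  # exact layer 1: scale down by M

def layer2(y):
    return y * M  # exact layer 2: scale back up by M

x = np.array([1.0])

# Honest computation: the two layers compose to exactly the identity.
exact = layer2(layer1(x))

# Adversarial prover: claims a layer-1 output that is off by delta,
# which still passes the per-layer check |y_claimed - y_true| <= delta.
y_claimed = layer1(x) + delta

# Layer 2 is computed exactly, yet it amplifies the in-tolerance
# error by M, moving the final output by M * delta = 1000.
steered = layer2(y_claimed)

print(exact)    # [1.]
print(steered)  # [1001.]
```

Choosing `M` large makes `M * delta` as large as desired, so a layerwise guarantee of the form "each layer is correct up to $\delta$" places no useful bound on the final output.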