<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>open-bench — rounds &amp; writeups</title>
    <description>Weekly battle royale benchmark for open-weight coding LLMs. Hidden tests, peer review, cost and wall-clock tracked.</description>
    <link>https://openbenchmark.dev/</link>
    <language>en</language>
    <item>
      <title>Round 2026-05-08 — mimo</title>
      <link>https://openbenchmark.dev/model-royale/round/2026-05-08/</link>
      <guid isPermaLink="true">https://openbenchmark.dev/model-royale/round/2026-05-08/</guid>
      <description>Round 2026-05-08: 7 models on sandbox. Winner mimo at 29.0/30. $0.26 total spend.</description>
      <pubDate>Fri, 08 May 2026 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Round 2026-05-05 — glm</title>
      <link>https://openbenchmark.dev/model-royale/round/2026-05-05/</link>
      <guid isPermaLink="true">https://openbenchmark.dev/model-royale/round/2026-05-05/</guid>
      <description>Round 2026-05-05: 7 models on sandbox. Winner glm at 27.5/30. $0.97 total spend.</description>
      <pubDate>Tue, 05 May 2026 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>