A high-level look at RoomSharp performance on realistic query and batch workloads. All figures below come from repeatable BenchmarkDotNet runs (warmup plus multiple measured iterations) using the SQLite provider, with the same schema and workload for both libraries. Means are reported in μs; allocations are in bytes.
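
For reference, the class below is a minimal sketch of how such a comparison can be set up, assuming an in-memory SQLite database. The BenchmarkDotNet, Dapper, and Microsoft.Data.Sqlite calls are real APIs; the RoomSharp benchmark is left as a commented placeholder because its DAO surface is not shown in this document.

```csharp
using BenchmarkDotNet.Attributes;
using Dapper;
using Microsoft.Data.Sqlite;

// Minimal measurement sketch against an in-memory SQLite database.
[MemoryDiagnoser] // produces the "Allocated" column in the tables below
public class QueryByIdBenchmarks
{
    private SqliteConnection _connection = null!;

    [GlobalSetup]
    public void Setup()
    {
        _connection = new SqliteConnection("Data Source=:memory:");
        _connection.Open();
        _connection.Execute("CREATE TABLE Users (Id INTEGER PRIMARY KEY, Name TEXT)");
        _connection.Execute("INSERT INTO Users (Id, Name) VALUES (1, 'Ada')");
    }

    [Benchmark(Baseline = true)]
    public User? Dapper_QueryById() =>
        _connection.QuerySingleOrDefault<User>(
            "SELECT Id, Name FROM Users WHERE Id = @id", new { id = 1 });

    // Hypothetical placeholder: substitute the equivalent RoomSharp DAO call here.
    // [Benchmark]
    // public User? RoomSharp_QueryById() => _userDao.GetById(1);

    [GlobalCleanup]
    public void Cleanup() => _connection.Dispose();
}

public class User
{
    public long Id { get; set; }
    public string? Name { get; set; }
}
```

Run it from a Release build with `BenchmarkRunner.Run<QueryByIdBenchmarks>()` (in `BenchmarkDotNet.Running`); the warmup and iteration counts mentioned above are handled by BenchmarkDotNet's default job.
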
| Operation | Tool | Mean (μs) | Allocated (Bytes) | Improvement |
|---|---|---|---|---|
| Query By ID (avg) | RoomSharp | ~7.1 | 1,645 | 58% faster |
| Query By ID (avg) | Dapper | ~17.7 | 3,424 | - |
| Update Single | RoomSharp | 28.5 | 704 | 98% faster |
| Update Single | Dapper | 1,229.6 | 3,408 | - |

| Operation | Records | Tool | Mean (μs) | Allocated (Bytes) |
|---|---|---|---|---|
| Batch Insert | 100 | RoomSharp | 3,029.8 | 115,856 |
| Batch Insert | 100 | Dapper | 4,295.7 | 299,760 |
| Batch Insert | 1,000 | RoomSharp | 16,238.0 | 1,092,664 |
| Batch Insert | 1,000 | Dapper | 20,793.3 | 2,963,768 |
| Batch Insert | 5,000 | RoomSharp | 43,295.5 | 5,236,568 |
| Batch Insert | 5,000 | Dapper | 55,819.6 | 14,595,624 |

RoomSharp comes out faster in every measured scenario: roughly 58% on the query-by-ID benchmark, 98% on single updates, and 22-30% on batch inserts compared to Dapper. The query-side gains come primarily from its IL-based row mapping.
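
To make that concrete, the sketch below shows the general shape of an ordinal-based mapper of the kind that IL emission or source generation produces, as opposed to reflecting over property metadata for every row. It illustrates the technique only; it is not RoomSharp's actual generated output, and the `User` entity and column order are assumptions.

```csharp
using System.Data.Common;

// Same User entity as in the benchmark sketch above.
public class User
{
    public long Id { get; set; }
    public string? Name { get; set; }
}

// Illustrative shape of a compile-time-generated row mapper: columns are
// read by ordinal with typed getters, so there is no per-row reflection
// lookup and no boxing of values through object[] buffers.
public static class UserRowMapper
{
    public static User Map(DbDataReader reader) => new User
    {
        Id = reader.GetInt64(0),
        Name = reader.IsDBNull(1) ? null : reader.GetString(1),
    };
}
```
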
The zero-allocation design of the hot loops translates into markedly lower memory pressure (roughly 2x to 5x fewer allocated bytes per operation in the runs above), which matters most for high-throughput applications and batch operations.
In these runs, batch insert time grows more slowly than the record count (about 30 μs per row at 100 records versus about 8.7 μs per row at 5,000), and RoomSharp keeps a 22-30% advantage in mean time and roughly 2.6-2.8x lower allocations even at 5,000 records, thanks to prepared statement reuse and optimized buffer management, as sketched below.
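
As an illustration of the technique (not RoomSharp's internals), a batch insert built on prepared-statement reuse with Microsoft.Data.Sqlite looks roughly like this: the statement is compiled once, the parameter objects are created once and only re-bound per row, and the whole loop runs inside a single transaction. The `Users` table and tuple shape are assumptions for the example.

```csharp
using System.Collections.Generic;
using Microsoft.Data.Sqlite;

public static class BatchInsertSketch
{
    // General prepared-statement technique for batch inserts; illustrative only.
    public static void BulkInsert(
        SqliteConnection connection,
        IReadOnlyList<(long Id, string Name)> rows)
    {
        using var transaction = connection.BeginTransaction();
        using var command = connection.CreateCommand();
        command.Transaction = transaction;
        command.CommandText = "INSERT INTO Users (Id, Name) VALUES ($id, $name)";

        // Create the parameter objects once; only their values change per row,
        // which avoids allocating a fresh parameter set for every record.
        var idParam = command.CreateParameter();
        idParam.ParameterName = "$id";
        var nameParam = command.CreateParameter();
        nameParam.ParameterName = "$name";
        command.Parameters.Add(idParam);
        command.Parameters.Add(nameParam);

        command.Prepare(); // compile the statement once, execute it many times

        foreach (var (id, name) in rows)
        {
            idParam.Value = id;
            nameParam.Value = name;
            command.ExecuteNonQuery();
        }

        transaction.Commit();
    }
}
```

Committing once per batch rather than once per row matters as much for SQLite as statement preparation does, since each auto-committed insert would otherwise pay its own journal write.
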
Source generation eliminates runtime reflection overhead entirely. What you see is what you get: the mapping code that runs is the generated code you can read, with no hidden costs at runtime.