Preview Release: RoomSharp is currently in Preview.
BenchmarkDotNet Results

RoomSharp vs Dapper

High-level look at RoomSharp performance on realistic query and batch workloads. All numbers are from repeatable BenchmarkDotNet runs with SQLite providers and the same schema.

Compile-time mapping · Zero reflection · Batch + query workloads · SQLite provider

- Queries: ~40% faster than Dapper (average)
- Batch inserts: 22% higher throughput
- Allocations: 64% lower memory pressure

Test Environment

Same workload, same schema, measured via BenchmarkDotNet with warmups and multiple iterations. All figures below come from these runs.

Hardware

- CPU: Intel Core i5-11300H @ 3.10GHz
- Cores: 4 physical / 8 logical
- OS: Windows 11 (24H2)

Software

- Runtime: .NET 9.0.11
- JIT: RyuJIT x86-64-v4
- Framework: BenchmarkDotNet v0.15.7

BenchmarkDotNet Configuration

- Iterations: 8 measured
- Warmup: 3 iterations
- Launch count: 1
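This configuration maps directly onto BenchmarkDotNet's job attributes. A minimal sketch of such a setup (the benchmark class, method names, and bodies are illustrative, not taken from the RoomSharp suite):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Mirrors the configuration above: 1 launch, 3 warmup iterations, 8 measured iterations.
// MemoryDiagnoser produces the allocated-bytes columns shown in the tables below.
[SimpleJob(launchCount: 1, warmupCount: 3, iterationCount: 8)]
[MemoryDiagnoser]
public class QueryBenchmarks
{
    [GlobalSetup]
    public void Setup()
    {
        // Open the SQLite connection and seed the shared schema here.
    }

    [Benchmark(Baseline = true)]
    public object? QueryByIdDapper() => null;    // Dapper query goes here

    [Benchmark]
    public object? QueryByIdRoomSharp() => null; // RoomSharp query goes here
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<QueryBenchmarks>();
}
```

Marking one method as `Baseline = true` makes BenchmarkDotNet emit ratio columns, which is how "X% faster" figures like those below are typically derived.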

Query By ID Performance

Chart: single record fetched by primary key (lower is better).

Batch Insert Scaling

Chart: bulk inserts in a single transaction (lower is better).

Memory Allocations

Chart: allocations during batch insert operations (lower is better).

Detailed Results

Means are in μs. Allocation values are bytes.

Query Operations

| Operation | Tool | Mean (μs) | Allocated (bytes) | Improvement |
|---|---|---|---|---|
| Query By ID (avg) | RoomSharp | ~7.1 | 1,645 | 58% faster |
| Query By ID (avg) | Dapper | ~17.7 | 3,424 | - |
| Update Single | RoomSharp | 28.5 | 704 | 98% faster |
| Update Single | Dapper | 1,229.6 | 3,408 | - |

Insert Operations

| Operation | Records | Tool | Mean (μs) | Allocated (bytes) |
|---|---|---|---|---|
| Batch Insert | 100 | RoomSharp | 3,029.8 | 115,856 |
| Batch Insert | 100 | Dapper | 4,295.7 | 299,760 |
| Batch Insert | 1,000 | RoomSharp | 16,238.0 | 1,092,664 |
| Batch Insert | 1,000 | Dapper | 20,793.3 | 2,963,768 |
| Batch Insert | 5,000 | RoomSharp | 43,295.5 | 5,236,568 |
| Batch Insert | 5,000 | Dapper | 55,819.6 | 14,595,624 |

Key Takeaways

Superior query performance

RoomSharp's IL-based mapping delivers consistently faster query operations across all scenarios, with improvements ranging from 40% to 98% compared to Dapper.

Efficient memory usage

The zero-allocation design in hot loops means significantly lower memory pressure, especially important for high-throughput applications and batch operations.

Excellent scaling

Batch insert times grow smoothly with record count, and RoomSharp keeps its performance advantage even at 5,000+ records, thanks to prepared statement reuse and optimized buffer management.
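The prepared-statement-reuse pattern behind these numbers can be sketched with Microsoft.Data.Sqlite: prepare one parameterized command, then reuse it for every row inside a single transaction. This is a general illustration of the technique, not RoomSharp's actual generated code, and the `users` table is hypothetical:

```csharp
using Microsoft.Data.Sqlite;

public static class BatchInsert
{
    public static void InsertUsers(SqliteConnection conn, (long Id, string Name)[] rows)
    {
        using var tx = conn.BeginTransaction();
        using var cmd = conn.CreateCommand();
        cmd.Transaction = tx;
        cmd.CommandText = "INSERT INTO users (id, name) VALUES ($id, $name)";

        // Create the parameters once; only their values change per row.
        var id = cmd.Parameters.Add("$id", SqliteType.Integer);
        var name = cmd.Parameters.Add("$name", SqliteType.Text);
        cmd.Prepare(); // compile the statement a single time

        foreach (var row in rows)
        {
            id.Value = row.Id;
            name.Value = row.Name;
            cmd.ExecuteNonQuery(); // reuses the prepared statement
        }

        tx.Commit(); // one fsync for the whole batch instead of one per row
    }
}
```

Wrapping the loop in a single transaction matters as much as statement reuse: SQLite otherwise commits (and syncs) after every individual `INSERT`.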

Compile-time advantage

Source generation eliminates runtime reflection overhead entirely. What you see is what you get: fully optimized code with zero hidden costs.
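As a rough illustration, a source-generated mapper boils down to plain, ordinal-based reads with no reflection or boxing. The shape below is hypothetical (RoomSharp's actual generated output is not public yet), using a made-up `User` record:

```csharp
using Microsoft.Data.Sqlite;

public sealed record User(long Id, string Name, string? Email);

// Hypothetical shape of generated mapping code: column ordinals are fixed at
// compile time, so each row is materialized with direct typed reads.
public static class UserMapper
{
    public static User Map(SqliteDataReader reader) =>
        new User(
            reader.GetInt64(0),                               // id
            reader.GetString(1),                              // name
            reader.IsDBNull(2) ? null : reader.GetString(2)); // email (nullable)
}
```

A reflection-based mapper must instead discover properties and convert values at runtime for every row; generating this code ahead of time removes that per-row cost.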

Methodology: All benchmarks were run using BenchmarkDotNet with warmup iterations, multiple runs, and statistical analysis. The source code for these benchmarks will be published after the Preview stage.

Ready to Experience the Performance?