# Batch Insert Engine

*Ultra-fast bulk operations with zero allocations.*
## Overview

RoomSharp 0.4.1 introduced a completely redesigned High-Performance Batch Insert Engine that delivers 2X-5X faster bulk inserts than a naive row-by-row loop, matching or exceeding Dapper's speed in the benchmarks below.
## Key Features
- Zero allocations inside hot loops
- Reusable prepared statements for maximum speed
- Dedicated fast paths for different collection types
- Transaction reuse with safe fallback behavior
- Automatic detection by the DAO source generator
## Supported Collection Types

The BatchInsertEngine automatically detects and optimizes for:

- `IEnumerable<T>` - general enumerable support
- `IAsyncEnumerable<T>` - async streaming
- `List<T>` - direct list optimization
- `T[]` - array fast path
- `ReadOnlySpan<T>` - span-based zero-copy
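How the fast paths are chosen is an implementation detail, but as a rough illustration, dispatch by runtime collection type could look like the following sketch (the `BatchDispatcher` type and its methods are hypothetical names, not RoomSharp's actual internals):

```csharp
// Hypothetical sketch of per-collection-type dispatch.
using System.Collections.Generic;

public sealed class BatchDispatcher<T>
{
    public void Execute(IEnumerable<T> entities)
    {
        switch (entities)
        {
            case T[] array:
                // Array fast path: indexed loop, no enumerator object
                for (int i = 0; i < array.Length; i++) Insert(array[i]);
                break;
            case List<T> list:
                // List fast path: indexed access avoids interface dispatch
                for (int i = 0; i < list.Count; i++) Insert(list[i]);
                break;
            default:
                // General IEnumerable<T> fallback
                foreach (var entity in entities) Insert(entity);
                break;
        }
    }

    private void Insert(T entity) { /* bind parameters + ExecuteNonQuery */ }
}
```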
## Usage

### Basic Batch Insert
```csharp
[Dao]
public interface IUserDao
{
    // Automatically uses BatchInsertEngine
    [Insert]
    void InsertAll(List<User> users);

    // Async variant
    [Insert]
    Task InsertAllAsync(IEnumerable<User> users);
}

// Usage
var users = new List<User>
{
    new User { Email = "user1@test.com", Name = "User 1" },
    new User { Email = "user2@test.com", Name = "User 2" },
    // ... 1000s more
};

db.UserDao.InsertAll(users); // Lightning fast!
```
### Async Streaming
```csharp
[Insert]
Task InsertStreamAsync(IAsyncEnumerable<User> users);

// Usage with async stream
async IAsyncEnumerable<User> GetUsersFromApi()
{
    await foreach (var user in apiClient.StreamUsersAsync())
    {
        yield return user;
    }
}

await db.UserDao.InsertStreamAsync(GetUsersFromApi());
```
## Performance Characteristics

### Benchmark Results

Based on RoomSharp vs Dapper benchmarks (100 to 5000 records):
| Operation | RoomSharp | Dapper | Improvement |
|---|---|---|---|
| Batch Insert (100 items) | ~2.5ms | ~4.0ms | ~35% faster |
| Batch Insert (1000 items) | ~18ms | ~35ms | ~48% faster |
| Batch Insert (5000 items) | ~82ms | ~170ms | ~52% faster |
| Memory Allocated | Minimal | Higher | 5X less |
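Exact numbers vary with hardware, ADO.NET provider, and journal settings. A BenchmarkDotNet harness along the following lines could reproduce the comparison; the DAO wiring in `Setup` is an assumption and not part of this page:

```csharp
// Hypothetical BenchmarkDotNet harness for the table above.
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]
public class BatchInsertBenchmark
{
    private List<User> _users = null!;
    private IUserDao _dao = null!; // resolved from the database in Setup (assumed)

    [Params(100, 1000, 5000)]
    public int Count;

    [GlobalSetup]
    public void Setup()
    {
        _users = new List<User>(Count);
        for (int i = 0; i < Count; i++)
            _users.Add(new User { Email = $"user{i}@test.com", Name = $"User {i}" });
        // _dao = ... open an in-memory database and resolve the DAO here
    }

    [Benchmark]
    public void RoomSharpBatch() => _dao.InsertAll(_users);
}
```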
### Zero Allocations in Hot Loops
The engine achieves zero allocations by:

- Pre-allocating buffers for large batches
- Reusing command and parameter objects
- Using `static readonly` column name arrays
- Employing `BindValues` delegates with zero closures
## How It Works

### Generated Code
When you define a batch insert method, the generator creates optimized code:
```csharp
// Generated code

// Column names as static readonly for reuse
private static readonly ImmutableArray<string> s_UserColumns =
    ImmutableArray.Create("Id", "Email", "Name");

// Static bind method (zero closures, zero allocations)
private static void BindUserValues(IDbCommand cmd, User entity, int paramOffset)
{
    var p = cmd.Parameters;
    ((IDataParameter)p[paramOffset + 0]).Value = entity.Id;
    ((IDataParameter)p[paramOffset + 1]).Value = entity.Email;
    ((IDataParameter)p[paramOffset + 2]).Value = entity.Name ?? (object)DBNull.Value;
}

public void InsertAll(List<User> users)
{
    var engine = new BatchInsertEngine<User>(
        _database,
        tableName: "users",
        columnNames: s_UserColumns,
        bindValues: BindUserValues
    );
    engine.ExecuteBatch(users);
}
```
### BatchInsertEngine Internals
```csharp
public void ExecuteBatch(IEnumerable<T> entities)
{
    // Reuse the ambient transaction if one is already open
    var inExistingTxn = _database.InTransaction;
    if (!inExistingTxn)
        _database.BeginTransaction();

    try
    {
        // Prepare the statement once and reuse it for every row
        using var cmd = PrepareInsertCommand();

        foreach (var entity in entities)
        {
            // Bind values with zero allocations
            _bindValues(cmd, entity, 0);
            cmd.ExecuteNonQuery();
        }

        if (!inExistingTxn)
            _database.CommitTransaction();
    }
    catch
    {
        if (!inExistingTxn)
            _database.RollbackTransaction();
        throw;
    }
}
```
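`PrepareInsertCommand` itself is not shown on this page. A minimal sketch of what it might do, assuming `_connection`, `_tableName`, and `_columnNames` fields and named SQL parameters (all names illustrative, not RoomSharp's actual internals):

```csharp
// Hypothetical sketch: build the INSERT once, create one parameter
// per column, and Prepare() so the hot loop only rebinds .Value.
private IDbCommand PrepareInsertCommand()
{
    var cmd = _connection.CreateCommand();

    var columns = string.Join(", ", _columnNames);
    var values = string.Join(", ", _columnNames.Select(c => "@" + c));
    cmd.CommandText = $"INSERT INTO {_tableName} ({columns}) VALUES ({values})";

    // One parameter object per column, created exactly once
    foreach (var column in _columnNames)
    {
        var parameter = cmd.CreateParameter();
        parameter.ParameterName = "@" + column;
        cmd.Parameters.Add(parameter);
    }

    cmd.Prepare(); // compile the statement once, execute many times
    return cmd;
}
```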
## Best Practices

### ✅ Do
- Use batch inserts for inserting multiple records (10+ items)
- Prefer `List<T>` or `T[]` for best performance
- Use `IAsyncEnumerable<T>` for streaming scenarios
- Let RoomSharp manage transactions automatically
- Pre-size lists when possible (`new List<User>(capacity)`)
### ❌ Don't
- Don't use batch insert for single items (use a regular Insert)
- Don't materialize `IAsyncEnumerable<T>` to a `List<T>` unnecessarily
- Don't start manual transactions around batch inserts (handled automatically)
## Transaction Behavior
The BatchInsertEngine intelligently handles transactions:
- No existing transaction: Automatically creates and manages one
- Existing transaction: Reuses it without creating nested transactions
- Safe fallback: Rolls back on exception if it created the transaction
> ⚡ **Performance Tip:** Combining batch inserts with transactions is already optimized. The engine creates a single transaction for the entire batch, minimizing disk writes.
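When several batches must commit atomically, the caller can own the transaction and each batch will simply join it. A minimal sketch, assuming the database exposes the same Begin/Commit/Rollback surface as the internal `_database` shown above (hypothetical public API, not confirmed by this page):

```csharp
// Hypothetical: assumes db publicly exposes BeginTransaction /
// CommitTransaction / RollbackTransaction, mirroring the internals above.
db.BeginTransaction();
try
{
    db.UserDao.InsertAll(users);      // joins the outer transaction
    db.UserDao.InsertAll(moreUsers);  // same transaction, no nesting
    db.CommitTransaction();           // a single commit covers both batches
}
catch
{
    db.RollbackTransaction();
    throw;
}
```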
## Large Dataset Handling
For very large datasets (100K+ records):
```csharp
// Process in chunks for better memory usage (Enumerable.Chunk, .NET 6+)
const int ChunkSize = 10000;
var users = GetMillionsOfUsers(); // IEnumerable<User>

foreach (var chunk in users.Chunk(ChunkSize))
{
    db.UserDao.InsertAll(chunk.ToList()); // Chunk yields User[] arrays

    // Optional: log progress
    Console.WriteLine($"Inserted {chunk.Length} users");
}
```
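Each `InsertAll` call above commits its own transaction (no outer transaction is open around the loop), so a larger `ChunkSize` means fewer commits at the cost of holding more rows in memory per chunk; treat 10,000 as a starting point and tune for your workload.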
## Async Streaming Pattern
For processing large files or API responses:
```csharp
async IAsyncEnumerable<User> ReadUsersFromCsvAsync(string filePath)
{
    // StreamReader is not IAsyncDisposable, so a plain using suffices
    using var reader = new StreamReader(filePath);
    await reader.ReadLineAsync(); // Skip header

    string? line;
    while ((line = await reader.ReadLineAsync()) != null)
    {
        var parts = line.Split(',');
        yield return new User
        {
            Email = parts[0],
            Name = parts[1]
        };
    }
}

// Inserts as rows stream in - never loads the whole file into memory
await db.UserDao.InsertStreamAsync(ReadUsersFromCsvAsync("users.csv"));
```
## Comparison with Alternatives
| Feature | RoomSharp Batch | Loop with Insert | Dapper |
|---|---|---|---|
| Performance | Excellent (2-5X faster) | Poor | Good |
| Memory Usage | Minimal | Minimal | Moderate |
| Transaction Handling | Automatic | Manual | Manual |
| Type Safety | Full | Full | Partial |
| Code Generation | Yes | Yes | No |
## Next Steps
- Performance Notes - Deep dive into optimizations
- Transactions - Understanding transaction behavior
- Benchmarks - See detailed performance comparisons

