Database · Advanced
16 min read · Nov 24, 2025

Database Transactions and Isolation Levels Explained

Deep dive into ACID transactions, isolation levels (Read Committed, Serializable), and handling concurrent data access.


Rithy Tep

Author


ACID Properties

Atomicity

All operations succeed or all fail - no partial transactions.

SET NOCOUNT ON;
SET XACT_ABORT ON; -- Roll the transaction back automatically on error

BEGIN TRANSACTION;

UPDATE [dbo].[Accounts] WITH(ROWLOCK)
SET [Balance] = [Balance] - 100
WHERE [Id] = 1;

UPDATE [dbo].[Accounts] WITH(ROWLOCK)
SET [Balance] = [Balance] + 100
WHERE [Id] = 2;

-- Both updates or neither
COMMIT; -- ROLLBACK occurs if an error is raised

Consistency

Database moves from one valid state to another.

SET NOCOUNT ON;

-- Check constraint ensures consistency
ALTER TABLE [dbo].[Accounts]
ADD CONSTRAINT [CK_Accounts_Balance_Positive] CHECK ([Balance] >= 0);

-- This update will fail because it violates the constraint
UPDATE [dbo].[Accounts] WITH(ROWLOCK)
SET [Balance] = -50
WHERE [Id] = 1;

Isolation

Concurrent transactions don't interfere with each other.

Durability

Committed transactions persist even after system failure.
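Durability is typically implemented with a write-ahead log (WAL): changes are appended to persistent log storage before they are applied to the data pages, so committed work can be replayed after a crash. Here is a toy JavaScript sketch of the idea; the arrays and function names are illustrative stand-ins, not a real database API:

```javascript
// Toy write-ahead log: log entries are written BEFORE the data pages,
// so committed transactions can be replayed after a crash.
const log = [];          // stands in for the durable log file on disk
let state = { 1: 500 };  // stands in for data pages (account id -> balance)

function commit(txId, changes) {
  // 1. Append all changes plus a commit marker to the log first
  for (const c of changes) log.push({ txId, ...c });
  log.push({ txId, commit: true });
  // 2. Only then apply the changes to the data pages
  for (const c of changes) state[c.id] = c.balance;
}

// Crash recovery: rebuild state by replaying only committed transactions
function recover(initialState) {
  const committed = new Set(log.filter(e => e.commit).map(e => e.txId));
  const recovered = { ...initialState };
  for (const e of log) {
    if (!e.commit && committed.has(e.txId)) recovered[e.id] = e.balance;
  }
  return recovered;
}

commit('tx1', [{ id: 1, balance: 400 }, { id: 2, balance: 100 }]);
// Simulate losing the in-memory state, then replaying the log:
state = recover({ 1: 500 });
```

Real engines add checkpoints and fsync the log before acknowledging COMMIT, but the ordering shown here is the essence of durability.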

Isolation Levels

1. Read Uncommitted (Level 0)

Lowest isolation - Can read uncommitted changes (dirty reads).

SET NOCOUNT ON;

-- Session 1
BEGIN TRANSACTION;
UPDATE [dbo].[Products] WITH(ROWLOCK)
SET [Price] = 100
WHERE [Id] = 1;
-- Not committed yet

-- Session 2 (can see the uncommitted change!)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT [Price] FROM [dbo].[Products] WHERE [Id] = 1; -- Returns 100

-- Session 1 rolls back
ROLLBACK;
-- Session 2 read invalid data!

Problems:

  • ❌ Dirty reads
  • ❌ Non-repeatable reads
  • ❌ Phantom reads

2. Read Committed (Level 1)

Default for SQL Server, PostgreSQL, and Oracle - only committed data can be read.

SET NOCOUNT ON;

-- Session 1
BEGIN TRANSACTION;
UPDATE [dbo].[Products] WITH(ROWLOCK)
SET [Price] = 100
WHERE [Id] = 1;
-- Not committed

-- Session 2
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- MVCC engines (PostgreSQL, Oracle, SQL Server with READ_COMMITTED_SNAPSHOT)
-- return the old value; locking READ COMMITTED (SQL Server default) blocks here
SELECT [Price] FROM [dbo].[Products] WHERE [Id] = 1; -- Returns old value (50)

-- Session 1 commits
COMMIT;

-- Session 2 now sees the new value
SELECT [Price] FROM [dbo].[Products] WHERE [Id] = 1; -- Returns 100

Problems:

  • ✅ No dirty reads
  • ❌ Non-repeatable reads (same query returns different results)
  • ❌ Phantom reads

3. Repeatable Read (Level 2)

Default for MySQL (InnoDB) - the same query returns the same results within a transaction.

SET NOCOUNT ON;

-- Session 1
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT [Id], [Name], [Price], [Category]
FROM [dbo].[Products]
WHERE [Price] > 50; -- Returns 10 rows

-- Session 2 inserts a new row
INSERT INTO [dbo].[Products] ([Name], [Price], [Category])
VALUES (N'New Product', 75, N'Electronics');
COMMIT;

-- Session 1: the rows read earlier are stable, but the new row is a phantom.
-- Snapshot-based engines (MySQL InnoDB, PostgreSQL) still return the original
-- 10 rows; SQL Server REPEATABLE READ may return 11, since it does not
-- prevent phantoms.
SELECT [Id], [Name], [Price], [Category]
FROM [dbo].[Products]
WHERE [Price] > 50;
COMMIT;

Problems:

  • ✅ No dirty reads
  • ✅ No non-repeatable reads
  • ⚠️ Phantom reads possible (PostgreSQL's snapshot prevents them; MySQL InnoDB prevents them for plain SELECTs but not for locking reads; SQL Server allows them)

4. Serializable (Level 3)

Highest isolation - Transactions execute as if serial (one after another).

-- T-SQL Example: Serializable with explicit locks
-- Session 1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
-- This locks the range of rows matching the predicate
SELECT id, order_number, customer_id, status, total_amount
FROM orders WITH (HOLDLOCK)
WHERE status = 'pending'; -- Returns 5 rows

-- Session 2 tries to insert (will block!)
BEGIN TRANSACTION;
INSERT INTO orders (order_number, customer_id, status, total_amount)
VALUES ('ORD-001', 123, 'pending', 150.00);
-- Waits for Session 1 to complete

-- Session 1 completes
COMMIT;

-- Now Session 2 can proceed
-- Session 2
COMMIT;

Problems:

  • ✅ No dirty reads
  • ✅ No non-repeatable reads
  • ✅ No phantom reads
  • ⚠️ Performance impact
  • ⚠️ More deadlocks possible

Choosing the Right Isolation Level

| Level            | Dirty Read | Non-Repeatable Read | Phantom Read | Performance | Use Case            |
|------------------|------------|---------------------|--------------|-------------|---------------------|
| Read Uncommitted | ❌ Yes     | ❌ Yes              | ❌ Yes       | 🚀 Fastest  | Read-only analytics |
| Read Committed   | ✅ No      | ❌ Yes              | ❌ Yes       | ⚡ Fast     | Most applications   |
| Repeatable Read  | ✅ No      | ✅ No               | ⚠️ Maybe    | 🐢 Slower   | Financial reports   |
| Serializable     | ✅ No      | ✅ No               | ✅ No        | 🐌 Slowest  | Critical operations |

Handling Deadlocks

Deadlock Example

SET NOCOUNT ON;

-- Session 1
BEGIN TRANSACTION;
UPDATE [dbo].[Accounts] WITH(ROWLOCK)
SET [Balance] = [Balance] - 50
WHERE [Id] = 1;
-- Waiting for Session 2...
UPDATE [dbo].[Accounts] WITH(ROWLOCK)
SET [Balance] = [Balance] + 50
WHERE [Id] = 2;

-- Session 2
BEGIN TRANSACTION;
UPDATE [dbo].[Accounts] WITH(ROWLOCK)
SET [Balance] = [Balance] - 30
WHERE [Id] = 2;
-- Waiting for Session 1...
UPDATE [dbo].[Accounts] WITH(ROWLOCK)
SET [Balance] = [Balance] + 30
WHERE [Id] = 1;
-- DEADLOCK! The database kills one transaction as the victim

Avoiding Deadlocks

  1. Consistent lock order
SET NOCOUNT ON;

-- Always lock accounts in ascending ID order in every transaction.
-- (UPDATE does not support ORDER BY, so touch the lower ID first.)
BEGIN TRANSACTION;
UPDATE [dbo].[Accounts] WITH(ROWLOCK)
SET [Balance] = [Balance] - 50
WHERE [Id] = 1;

UPDATE [dbo].[Accounts] WITH(ROWLOCK)
SET [Balance] = [Balance] + 50
WHERE [Id] = 2;
COMMIT; -- Prevents deadlock: every session acquires locks in the same order
  2. Keep transactions short
// BAD - long transaction
await db.transaction(async (trx) => {
  await trx('orders').insert(order);
  await sendEmail(customer); // Slow!
  await trx('inventory').update({ quantity: qty - 1 });
});

// GOOD - minimal transaction
await db.transaction(async (trx) => {
  await trx('orders').insert(order);
  await trx('inventory').update({ quantity: qty - 1 });
});
await sendEmail(customer); // Outside transaction
  3. Use row-level locking
SET NOCOUNT ON;

-- T-SQL: Pessimistic locking with UPDLOCK
BEGIN TRANSACTION;
SELECT [Id], [Name], [Quantity], [Price]
FROM [dbo].[Products] WITH(UPDLOCK, ROWLOCK)
WHERE [Id] = 1;
-- We now hold an update lock on the row, safe to update
UPDATE [dbo].[Products] WITH(ROWLOCK)
SET [Quantity] = [Quantity] - 1
WHERE [Id] = 1;
COMMIT;

-- T-SQL: Optimistic locking with a version column
DECLARE @productId INT = 1;
DECLARE @expectedVersion INT = 5;

UPDATE [dbo].[Products] WITH(ROWLOCK)
SET [Quantity] = [Quantity] - 1,
    [Version] = [Version] + 1,
    [ModifiedDate] = GETDATE()
WHERE [Id] = @productId
  AND [Version] = @expectedVersion;
-- Affects 0 rows if the version changed in the meantime

-- Check whether the update succeeded
IF @@ROWCOUNT = 0
BEGIN
    RAISERROR(N'Concurrent modification detected. Please retry.', 16, 1);
END
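On the application side, optimistic locking becomes a read-compare-retry loop around the versioned UPDATE above. A minimal in-memory sketch of that loop (the row object and helper names are illustrative, not a driver API):

```javascript
// In-memory stand-in for a Products row with a version column
const row = { id: 1, quantity: 10, version: 5 };

// Mimics "UPDATE ... WHERE Id = @id AND Version = @expectedVersion":
// the write succeeds only if the version still matches, and bumps it.
function tryDecrement(expectedVersion) {
  if (row.version !== expectedVersion) return 0; // 0 rows affected
  row.quantity -= 1;
  row.version += 1;
  return 1; // 1 row affected
}

// Retry loop: re-read the current version after each conflict
function decrementWithRetry(maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const expected = row.version; // "SELECT Version FROM Products WHERE Id = 1"
    if (tryDecrement(expected) === 1) return true;
  }
  throw new Error('Concurrent modification detected. Please retry.');
}

decrementWithRetry();
```

In this single-threaded sketch the first attempt always wins; under real concurrency another writer can bump the version between the read and the update, which is exactly the case the retry loop absorbs.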

Real-World Transaction Example

// E-commerce order placement
async function placeOrder(userId, items) {
  return await db.transaction(async (trx) => {
    // 1. Lock the user's cart
    const cart = await trx('carts')
      .where({ user_id: userId })
      .forUpdate()
      .first();

    // 2. Check inventory and lock products
    for (const item of items) {
      const product = await trx('products')
        .where({ id: item.product_id })
        .forUpdate()
        .first();

      if (product.stock < item.quantity) {
        throw new Error(`Insufficient stock for ${product.name}`);
      }

      // 3. Decrease inventory
      await trx('products')
        .where({ id: item.product_id })
        .decrement('stock', item.quantity);
    }

    // 4. Create the order
    const [orderId] = await trx('orders').insert({
      user_id: userId,
      total: calculateTotal(items),
      status: 'pending'
    });

    // 5. Clear the cart
    await trx('carts').where({ user_id: userId }).del();

    return orderId;
  }); // Auto-commits if successful, rolls back on error
}

Best Practices

  1. Use Read Committed by default - Good balance of consistency and performance
  2. Use Serializable for financial transactions - Money transfers, billing
  3. Keep transactions short - Minimize lock duration
  4. Handle deadlocks gracefully - Retry with exponential backoff
  5. Test concurrent scenarios - Use tools like Apache JMeter
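Point 4 above, retrying deadlock victims with exponential backoff, can be sketched as a small wrapper. The `isDeadlock` flag and delay values are illustrative; with a real driver you would inspect the vendor error code instead (SQL Server reports deadlock victims as error 1205):

```javascript
// Retry a function that may be killed as a deadlock victim,
// doubling the wait after each failed attempt (exponential backoff).
async function withDeadlockRetry(fn, { maxRetries = 3, baseDelayMs = 50 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Retry only deadlock errors, and only up to maxRetries times
      if (!err.isDeadlock || attempt >= maxRetries) throw err;
      const delayMs = baseDelayMs * 2 ** attempt; // 50ms, 100ms, 200ms, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage sketch: the transaction is chosen as the deadlock victim twice,
// then commits on the third attempt.
let attempts = 0;
const resultPromise = withDeadlockRetry(async () => {
  attempts++;
  if (attempts < 3) {
    const err = new Error('Transaction was deadlocked (victim)');
    err.isDeadlock = true; // real code: check the driver's error number (e.g. 1205)
    throw err;
  }
  return 'committed';
});
```

Because deadlocks are transient by nature, a bounded retry with backoff usually resolves them without user-visible failures; anything still failing after the budget should surface as an error.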
#SQL #Transactions #ACID #IsolationLevels #Concurrency