Tuesday, May 21, 2013

MongoDb vs. MS SQL Server in 'durable insert' benchmark

  Recently I was thinking about an append-only storage for an audit log. It seems that both MS SQL Server and MongoDb fit my needs, but I wanted some real numbers. Here is my test environment:

  • Windows 7 Professional SP1 x64
  • Intel Core i7-2600 @ 3.40 GHz
  • SSD Corsair Force 3 (only for OS, all database files are on HDD)
  • HDD Seagate ST320DM000 320GB @ 7200 rpm (rated as average access time 15.6 ms, 64 IOPS @ 4K block by HDD Tune)
  • MS SQL 2008 R2 SP1
  • MongoDb v2.4.3
The benchmark is very simple: insert small records as fast as possible. The code is written in C# and available here.
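The original C# code is linked above. As a minimal stand-in, the same idea can be sketched in Python: append a small record and force it to the device with os.fsync before the next one, which is essentially what "durable" means for a transaction log or journal. The file path and record size here are arbitrary choices for illustration.

```python
import os
import tempfile
import time

def durable_append_benchmark(path, record=b"x" * 100, iterations=100):
    """Append small records, forcing each one to disk before the next.

    This mimics a durable insert: the write is not considered done
    until it has reached the journal / transaction log on disk.
    """
    with open(path, "ab", buffering=0) as f:
        start = time.perf_counter()
        for _ in range(iterations):
            f.write(record)
            os.fsync(f.fileno())  # block until the OS flushes to the device
        elapsed = time.perf_counter() - start
    return iterations / elapsed  # durable appends per second

path = os.path.join(tempfile.gettempdir(), "append_bench.log")
if os.path.exists(path):
    os.remove(path)  # start from an empty log file
rate = durable_append_benchmark(path, iterations=100)
print(f"{rate:.0f} durable appends/second")
```

Run it on an HDD and on an SSD and you will see very different numbers, because each fsync pays the device's write latency.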

  First I ran my benchmark against MS SQL and got around 2,500 inserts per second. I understand that this is essentially just appending to the transaction log file (.ldf), so I expected near-identical results from MongoDb. MongoDb was benchmarked in 'durable' mode, i.e. with the journal turned on (I'm not sure whether it is even possible to turn the journal off in recent versions). The first results were so surprising that I was sure I had done something wrong - 29 inserts per second! In fact, to get any results at all I had to reduce the number of test iterations for MongoDb from 10,000 to just 100; otherwise I couldn't wait for the test to complete.

Look at the times between writes during the MS SQL benchmark:


And compare them with the MongoDb times:


Fractions of a millisecond between MS SQL write operations and 34 ms between MongoDb writes! It took me a few hours to figure out what was going on, but I'm going to save you that time. Did you notice I gave the detailed characteristics of my spinning drive? 64 IOPS with a 15.6 ms average access time - and 29 inserts per second with 34 milliseconds between writes in the benchmark...
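The numbers line up suspiciously well. Assuming (as I did at the time) that every durable MongoDb insert costs roughly two random accesses on this drive - one for the journal and one for the data file - the rated 15.6 ms average access time predicts almost exactly the observed rate:

```python
avg_access_ms = 15.6   # rated average access time of the Seagate drive
seeks_per_insert = 2   # assumption: journal file + data file per insert

latency_ms = seeks_per_insert * avg_access_ms   # predicted ms per insert
inserts_per_second = 1000 / latency_ms          # predicted throughput

print(f"predicted latency: {latency_ms:.1f} ms, "
      f"throughput: {inserts_per_second:.0f} inserts/s")
```

About 31 ms per insert and ~32 inserts per second - close enough to the measured 34 ms and 29 inserts/second to look like proof, which is exactly why this theory was so convincing.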

  I used PerfView to verify my theory. Look at the disk activity while MS SQL performs 10K test iterations:


It does 10K+ writes (the extra writes are probably caused by the test preparation phase) to a single file - the transaction log. And here are the MongoDb results for 100 iterations:

It does twice as many I/O write operations, touching two files! I assumed it was updating some file metadata, and I quickly found "proof": MS SQL doesn't update the last-write time of the InsertBenchmark_log.ldf file, but MongoDb does update it for its files. That's why two files are accessed in the latter scenario. So I decided that MongoDb's performance was being penalized by an HDD seek on every write operation.

  At the time of writing I understand that I was totally blinded by my "proofs". Later I ran the MongoDb benchmark on my SSD and got the same results - only 29 inserts per second! Obviously it is not related to HDD seek times...

  Let's summarize my findings before continuing:
  • MongoDb is very slow in my append-only durable-storage benchmark
  • It updates two files on disk for every write operation
  • The results are the same on SSD and HDD, which means the problem is inside the client bindings or in mongod itself.
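One mongod knob worth checking in a benchmark like this (I have not verified that it explains these exact numbers) is the journal commit interval: in the 2.4 series the journal is group-committed on a timer rather than per write, and the interval can be lowered from its default. A hypothetical invocation:

```shell
# MongoDB 2.4: group-commit interval for the journal, default 100 ms,
# allowed range 2-300 ms (lower = less data lost on crash, more I/O)
mongod --dbpath /data/db --journal --journalCommitInterval 2
```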
