I was recently answering a few questions on our internal group email distribution list about topics related to iSCSI-backed storage. An interesting topic came up, and I could not resist turning it into an article.
Here is what we were discussing:
Anyone seen any best practices or done any testing and found better performance by reducing the maximum allowable IO size with VMware?
Well, to my understanding, VMware has nothing to do with the I/O size or the I/O block size. This is determined purely by the GOS (Guest Operating System) and the application installed on top of it.
In my understanding, it depends entirely on the kind of workload you are going to virtualize. In this context, if you are virtualizing SQL Server or Exchange, the I/O sizes will be totally different; and in the SQL Server case they vary further depending on the front-end application. Let us examine this with some examples.
Common Types of Applications and Their General I/O Patterns for SQL Server
Online transaction processing (OLTP) workloads tend to select a small number of rows at a time. These transfers occur all over the data set, and each is fairly small in size – typically between 8 KB and 64 KB. This makes the I/O pattern random in nature.
Data warehouse applications tend to issue scan-intensive operations that access large portions of the data at a time, and they also commonly perform bulk loading operations. These operations result in larger I/O sizes than OLTP workloads do, and they need a storage subsystem that can deliver the required throughput.
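The practical consequence of these two patterns is the relationship between IOPS and throughput: for the same number of I/Os per second, a larger I/O size moves far more data. Here is a minimal sketch of that arithmetic; the IOPS and block-size figures below are hypothetical, purely illustrative numbers, not measurements from any particular array.

```python
def throughput_mbps(iops, io_size_kb):
    """Throughput in MB/s implied by a given IOPS rate and I/O size."""
    return iops * io_size_kb / 1024

# Hypothetical numbers: an OLTP workload doing 5,000 random 8 KB I/Os
# versus a data-warehouse scan doing 500 sequential 512 KB I/Os.
oltp = throughput_mbps(5000, 8)    # ~39 MB/s from 5,000 IOPS
dw = throughput_mbps(500, 512)     # 250 MB/s from only 500 IOPS
print(f"OLTP: {oltp:.0f} MB/s, DW: {dw:.0f} MB/s")
```

So an OLTP array is sized for IOPS and latency, while a data warehouse array is sized for sequential bandwidth, even though the second example issues ten times fewer I/Os.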
So, if you carefully observe the I/O pattern, it all depends on the kind of workload – or, should I say, the kind of front-end application writing the data onto the disk (in this case, a VMFS volume).
Now let us take a close look at the I/O size pattern for the Exchange Server.
Exchange 2010 I/O Pattern
Today, Microsoft Exchange forces pages within the Store to be laid out contiguously rather than having pages scattered around the database. It expands the default page size from 8 KB to 32 KB and ensures that the Store allocates pages in large contiguous chunks rather than randomly within the database. So, basically, it moves away from many small random disk I/Os to fewer, larger sequential I/Os.
Microsoft increased the default page size from 4 KB to 8 KB in Exchange 2007 (it was 4 KB in Exchange 2003), but the internal organization remained as before. Exchange 2010 increases the page size to 32 KB, so it is more likely that a single item will fit in one page, and – more importantly – it ensures that pages required to hold new items are allocated contiguously in large chunks.
So now the question arises: concerning the Exchange 2010 pattern, does the move to "large contiguous chunks rather than randomly" mean the 32 KB I/Os are laid down sequentially where possible, or are the 32 KB I/Os still mostly random?
Let us look at the answer carefully:
To achieve a reduction in I/O, the Store had to move away from forcing disks to do many small random I/Os to fetch data, instead using larger sequential I/Os. The physical performance difference between random and sequential I/O almost guarantees better performance and lower I/O activity for any application whose code is written to move away from random I/O. To make the change, Exchange 2010 introduces a new schema that generates fewer I/Os by emphasizing contiguity of storage, essentially by keeping mailbox content together. Because more of the data is contiguous, the Store can read it in large sequential chunks rather than in many smaller random chunks.
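The shape of this change can be sketched in a few lines of code. The sketch below reads the same scratch file two ways: many 8 KB reads at shuffled offsets (the Exchange 2007-style pattern) versus fewer, larger 32 KB reads in file order (the Exchange 2010-style pattern). It is only an access-pattern illustration, not Exchange's actual storage engine; the file size and names are made up for the example.

```python
import os
import random
import tempfile

PAGE_8K, PAGE_32K = 8 * 1024, 32 * 1024
FILE_SIZE = 4 * 1024 * 1024  # 4 MB scratch file standing in for a database

# Create a scratch file to read back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

def read_random(path, page):
    """Many small reads at random offsets: the 8 KB random pattern."""
    offsets = list(range(0, FILE_SIZE, page))
    random.shuffle(offsets)
    total = 0
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(page))
    return total

def read_sequential(path, page):
    """Fewer, larger reads in file order: the 32 KB sequential pattern."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(page):
            total += len(chunk)
    return total

rand_total = read_random(path, PAGE_8K)      # 512 seeks + reads
seq_total = read_sequential(path, PAGE_32K)  # 128 reads, no seeking
os.remove(path)
```

Both calls move exactly the same amount of data, but the sequential version issues a quarter of the I/Os and lets the disk avoid head seeks, which is precisely where the Exchange 2010 I/O reduction comes from.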
I/O characteristics of Exchange 2007 and Exchange 2010
So, to conclude: it all depends on the kind of workload you are virtualizing. Also, the backend storage disks have to support the total number of IOPS required for optimum performance.
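That last point can be made concrete with the common rule-of-thumb spindle-sizing calculation. The sketch below is a simplified estimate only, using assumed example values (a RAID-5 write penalty of 4 and roughly 180 IOPS per 15K disk are conventional planning figures, not numbers from this article or from any vendor measurement).

```python
import math

def disks_needed(total_iops, iops_per_disk, raid_write_penalty=1, write_pct=0.0):
    """Rough spindle-count estimate for a required IOPS load.

    Writes are multiplied by the RAID write penalty (e.g. 4 for RAID-5),
    reads pass through unamplified.
    """
    effective = (total_iops * (1 - write_pct)
                 + total_iops * write_pct * raid_write_penalty)
    return math.ceil(effective / iops_per_disk)

# Hypothetical workload: 3,000 IOPS, 30% writes, RAID-5, 180 IOPS/disk.
# Effective load = 2,100 reads + 3,600 penalized writes = 5,700 IOPS.
print(disks_needed(3000, 180, raid_write_penalty=4, write_pct=0.3))  # 32
```

Running the same workload through a RAID-10 penalty of 2 instead of 4 roughly halves the write amplification, which is why write-heavy workloads are usually placed on RAID-10.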