Wouldn't it make sense that the more VMs you run, the larger the stripe would need to be for performance? I just don't understand the VMFS layer, especially with the changes in VMFS5. I would think the more disk access each VM needs, the more it would be queuing up reads and writes, so larger stripes would work better. I just benchmarked an 8-disk RAID5 under Iometer, and got slower numbers with a 64k stripe than with an 8k stripe.
That being said, you'll notice I'm using StorMagic and mirroring the 6TB iSCSI target. The iSCSI target re-synchronized much faster under the 64k stripe compared to the 8k stripe (everything else the same; the stripe is all that changed).
We will be selling this same configuration for mass distribution, so I want to get it right from the start. I'd also like to know if there is an IOPS calculator for how many VMs can run off a RAID set with x number of IOPS.
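For what it's worth, the usual back-of-envelope IOPS math can be done in a few lines. Here's a sketch in Python; the per-disk IOPS figure, the RAID5 write penalty of 4, and the 70/30 read/write mix are all assumptions for illustration, not numbers from my setup:

```python
# Rough, hypothetical IOPS estimate for a RAID set.
# Assumed numbers: ~175 IOPS per 15K spindle, RAID5 write penalty of 4.

def usable_iops(disks, iops_per_disk, write_penalty, read_fraction):
    """Front-end IOPS the array can sustain for a given read/write mix."""
    raw = disks * iops_per_disk
    # Each front-end write costs `write_penalty` back-end I/Os (RAID5 = 4).
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

def max_vms(disks, iops_per_disk, write_penalty, read_fraction, iops_per_vm):
    """How many VMs fit if each one needs `iops_per_vm` on average."""
    return int(usable_iops(disks, iops_per_disk, write_penalty,
                           read_fraction) // iops_per_vm)

# Example: 8-disk RAID5, 70% reads, VMs averaging 50 IOPS each.
print(usable_iops(8, 175, 4, 0.70))   # usable front-end IOPS
print(max_vms(8, 175, 4, 0.70, 50))   # rough VM count
```

Obviously this ignores caching, stripe size, and burst behavior, so treat it as a floor estimate rather than a sizing tool.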
Thanks, just stuff I've been wondering about.