November 19, 2012

Using the EC2 High I/O instance SSDs

At $3.10 per hour, the Amazon EC2 SSD option (called High I/O Quadruple Extra Large) isn't exactly cheap. What it is, though, is fast. That makes it a great way to try out what large(-ish) storage with low, predictable latency does for your product or application.

I had a chance to give it a whirl today so here are my first impressions.
Amazon makes the two 1 TB SSD drives available as /dev/sdf and /dev/sdg. For the sake of sheer performance, we went with a RAID 0 setup.

[root@ip-10-155-240-195 ~]# mdadm --create --verbose /dev/md/ssd --level=stripe --raid-devices=2 /dev/sdf /dev/sdg
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/ssd started.
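One caveat I'll add here: an array assembled by hand like this won't come back on its own after a reboot unless it's recorded in mdadm.conf. (These are ephemeral instance-store volumes, so the data is gone on a stop/start regardless, but for a plain reboot this saves reassembling by hand.) A rough sketch, assuming the common config paths — some distros use /etc/mdadm/mdadm.conf instead:

```shell
# Record the array so boot scripts can reassemble it
# (path may be /etc/mdadm/mdadm.conf on Debian/Ubuntu):
mdadm --detail --scan >> /etc/mdadm.conf

# And add the mount to /etc/fstab so /ssd comes back automatically:
echo '/dev/md/ssd /ssd ext3 defaults,noatime 0 0' >> /etc/fstab
```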

[root@ip-10-155-240-195 ~]# mkfs.ext3 /dev/md/ssd 
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
134217728 inodes, 536870400 blocks
26843520 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
16384 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
102400000, 214990848, 512000000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done       

[root@ip-10-155-240-195 ~]# mount /dev/md/ssd /ssd
[root@ip-10-155-240-195 ~]# df -h /ssd
Filesystem            Size  Used Avail Use% Mounted on
/dev/md127            2.0T  199M  1.9T   1% /ssd

Tadaaaa! Now we have 2 TB of SSD goodness.

On with the testing now...
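For a first smoke test before reaching for a real benchmarking tool like fio, a quick dd sequential write gives a ballpark number. This is my own sketch, not part of the original setup; TARGET_DIR defaults to /tmp so it's safe to paste anywhere, so point it at /ssd to actually hit the array:

```shell
# Quick sequential-write check (not a rigorous benchmark).
# Set TARGET_DIR=/ssd to test the new array; defaults to /tmp.
TARGET_DIR="${TARGET_DIR:-/tmp}"
TESTFILE="$TARGET_DIR/dd-throughput-test"

# conv=fdatasync flushes data to disk before dd reports its rate,
# so the number reflects the device rather than the page cache.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync

# Clean up the test file.
rm -f "$TESTFILE"
```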
