Friday, 17 April 2015

vCloud Air

http://vcloud.vmware.com/au/
https://www.vmware.com/support/pubs/vca_pubs.html
VMware vCloud Air has reached Australia. Very exciting!

vCloud Air is available in the Australia South 1 region, which currently has a single data centre located in Melbourne inside a Telstra facility. So far only IaaS is available, with two options:
  • On Demand VMs in Shared Cloud
  • Dedicated Cloud

Here is an overview of vCloud Air:


On demand pricing:

Dedicated pricing:


I signed up for the On Demand service as they are offering $300 in service credit for your first 90 days. This is when I hit my first problem: upon signing up I received an email:


Signing up with AWS or Azure takes 5-12 minutes, so I was surprised that vCloud Air took over three days. After waiting three days I lodged a support call; they told me they had hit a technical problem, resolved it, and within a few hours my vCloud Air service was active.

I created a CentOS 6 VM, which was very easy using their portal. I logged into the VM and everything looked good until I tried to reach the internet. I worked out that I needed to attach a public IP address to my gateway and then enable internet access on the VM (this created an SNAT rule from the VM's private IP to the edge gateway's public IP, plus firewall rules allowing outbound TCP ports 53/80/443). After adding Google DNS to /etc/resolv.conf I could reach the internet outbound, yet I still could not reach the VM from the internet over ssh/http. I followed the instructions in this excellent blog post, which describes how to add a DNAT rule from the edge gateway's public IP to the VM and open the required firewall ports:
https://maroskukan.wordpress.com/2015/03/17/vcloud-air-ondemand-iaas-by-vmware/
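
For reference, here is a minimal sketch of the in-guest side of this setup; the SNAT/DNAT and firewall rules themselves are created in the vCloud Air portal, and the public IP below is a placeholder, not my real address:

# point the VM at Google public DNS so names resolve
echo "nameserver 8.8.8.8" >> /etc/resolv.conf

# verify outbound HTTP/HTTPS now flows through the edge gateway SNAT
curl -I http://mirror.internode.on.net
curl -I https://www.vmware.com

# then, from a machine out on the internet, confirm the DNAT and firewall
# rules by connecting to the edge gateway's public IP (203.0.113.10 is a placeholder)
ssh zorang@203.0.113.10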

Unfortunately I still could not get it to work, but after a quick support call to VMware I did. I am still unsure what went wrong; their excellent support guided me through deleting and recreating the NAT and firewall rules, and after that everything worked as expected and I could ssh to the VM from the internet.

Here are my initial performance results:
Geekbench 3 results:

ID      | Name                                                              | Processor             | MHz  | Cores | Platform | Arch   | Bits | Single-core score | Multi-core score
2337369 | VMware, Inc. VMware Virtual Platform - vCloud Air - 16 vCPU, 32GB | Intel Xeon E5-2650 v2 | 2600 |  16   | Linux    | x86_64 | 64   | 2664              | 31952
2337347 | VMware, Inc. VMware Virtual Platform - vCloud Air - 8 vCPU, 16GB  | Intel Xeon E5-2650 v2 | 2599 |   8   | Linux    | x86_64 | 64   | 2665              | 17627
2337322 | VMware, Inc. VMware Virtual Platform - vCloud Air - 4 vCPU, 8GB   | Intel Xeon E5-2650 v2 | 2599 |   4   | Linux    | x86_64 | 64   | 2676              | 9908
2337217 | VMware, Inc. VMware Virtual Platform - vCloud Air - 2 vCPU, 4GB   | Intel Xeon E5-2650 v2 | 2599 |   2   | Linux    | x86_64 | 64   | 2649              | 5214
2335833 | VMware, Inc. VMware Virtual Platform - vCloud Air - 1 vCPU, 2GB   | Intel Xeon E5-2650 v2 | 2599 |   1   | Linux    | x86_64 | 64   | 2638              | 2513
1694602 | Amazon AWS EC2 c4.8xlarge                                         | Intel Xeon E5-2666 v3 | 3500 |  18   | Linux    | x86_64 | 64   | 3499              | 53985


Simple CPU benchmark:
[zorang@CentOS64-64bit ~]$ dd if=/dev/zero bs=1M count=1024 | md5sum
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.79177 s, 385 MB/s
cd573cfaace07e7949bc0c46028904ff  -

Bandwidth benchmark:
[zorang@CentOS64-64bit ~]$  wget freevps.us/downloads/bench.sh -O - -o /dev/null|bash
CPU model :  Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
Number of cores : 1
CPU frequency :  2600.000 MHz
Total amount of ram : 1869 MB
Total amount of swap : 1983 MB
System uptime :   34 min,
Download speed from CacheFly: 13.6MB/s
Download speed from Coloat, Atlanta GA: 8.09MB/s
Download speed from Softlayer, Dallas, TX: 7.38MB/s
Download speed from Linode, Tokyo, JP: 1.17MB/s
Download speed from i3d.net, Rotterdam, NL: 3.61MB/s
Download speed from Leaseweb, Haarlem, NL: 176KB/s
Download speed from Softlayer, Singapore: 18.0MB/s
Download speed from Softlayer, Seattle, WA: 9.12MB/s
Download speed from Softlayer, San Jose, CA: 9.77MB/s
Download speed from Softlayer, Washington, DC: 8.37MB/s
I/O speed :  439 MB/s

wget Bandwidth benchmark:
[zorang@CentOS64-64bit ~]$ wget http://mirror.internode.on.net/pub/centos/7.1.1503/isos/x86_64/CentOS-7-x86_64-DVD-1503-01.iso
--2015-04-16 19:37:58--  http://mirror.internode.on.net/pub/centos/7.1.1503/isos/x86_64/CentOS-7-x86_64-DVD-1503-01.iso
Resolving mirror.internode.on.net... 150.101.135.3
Connecting to mirror.internode.on.net|150.101.135.3|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4310695936 (4.0G) [application/octet-stream]
Saving to: “CentOS-7-x86_64-DVD-1503-01.iso”
100%[=========================================================================================================================================================>] 4,310,695,936 56.7M/s   in 69s
2015-04-16 19:39:07 (59.2 MB/s) - “CentOS-7-x86_64-DVD-1503-01.iso” saved [4310695936/4310695936]

IO benchmark script:
[root@CentOS64-64bit ~]# cat iobench.sh
# usage: iobench.sh <block device>, e.g. /dev/sdc
# format and mount the target device
mkfs.ext4 -F $1
mount $1 /mnt
cd /mnt
# sequential write: 10GB of zeros, flushed to disk before dd reports the rate
dd if=/dev/zero of=tempfile bs=1M count=10240 conv=fdatasync,notrunc
# drop the page cache so the read test is not served from memory
echo 3 > /proc/sys/vm/drop_caches
# sequential read of the same 10GB file
dd if=tempfile of=/dev/null bs=1M count=10240
cd /
umount /mnt
# random 8k 70/30 read/write mix, 16 jobs at iodepth 16, 10 minutes
fio --filename=$1 --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=600 --group_reporting --name=8k7030test
# 512-byte random read IOPS, 4 jobs at iodepth 32, 10 minutes
fio --name=readiops --filename=$1 --direct=1 --rw=randread --bs=512 --numjobs=4 --iodepth=32 --direct=1 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=600 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting
# 512-byte random write IOPS, 4 jobs at iodepth 32, 10 minutes
fio --name=writeiops --filename=$1 --direct=1 --rw=randwrite --bs=512 --numjobs=4 --iodepth=32 --direct=1 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=600 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting

IO benchmarks on 100GB SSD accelerated storage:
[root@CentOS64-64bit ~]# bash -x iobench.sh /dev/sdc
+ mkfs.ext4 -F /dev/sdc
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26214400 blocks
1310720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 20 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
+ mount /dev/sdc /mnt
+ cd /mnt
+ dd if=/dev/zero of=tempfile bs=1M count=10240 conv=fdatasync,notrunc
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 16.0584 s, 669 MB/s
+ echo 3
+ dd if=tempfile of=/dev/null bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 27.7736 s, 387 MB/s
+ cd /
+ umount /mnt
+ fio --filename=/dev/sdc --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=600 --group_reporting --name=8k7030test
8k7030test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
...
fio-2.1.10
Starting 16 processes

8k7030test: (groupid=0, jobs=16): err= 0: pid=12639: Fri Apr 17 09:14:22 2015
  read : io=27416MB, bw=46772KB/s, iops=5846, runt=600230msec
    slat (usec): min=6, max=2936.7K, avg=1149.66, stdev=14651.10
    clat (usec): min=1, max=3486.1K, avg=30529.95, stdev=67844.95
     lat (usec): min=310, max=3487.1K, avg=31702.97, stdev=69337.79
    clat percentiles (usec):
     |  1.00th=[  668],  5.00th=[  924], 10.00th=[ 1096], 20.00th=[ 2096],
     | 30.00th=[ 5984], 40.00th=[11840], 50.00th=[17792], 60.00th=[22912],
     | 70.00th=[27520], 80.00th=[33024], 90.00th=[64768], 95.00th=[134144],
     | 99.00th=[214016], 99.50th=[284672], 99.90th=[864256], 99.95th=[1286144],
     | 99.99th=[2244608]
    bw (KB  /s): min=    2, max= 6595, per=6.52%, avg=3048.49, stdev=950.76
  write: io=11768MB, bw=20077KB/s, iops=2509, runt=600230msec
    slat (usec): min=8, max=3421.9K, avg=1191.73, stdev=15373.83
    clat (usec): min=1, max=3455.9K, avg=26729.82, stdev=62270.91
     lat (usec): min=469, max=3455.1K, avg=27944.47, stdev=64012.29
    clat percentiles (usec):
     |  1.00th=[  764],  5.00th=[  940], 10.00th=[ 1048], 20.00th=[ 1480],
     | 30.00th=[ 4384], 40.00th=[ 9024], 50.00th=[15040], 60.00th=[20352],
     | 70.00th=[25216], 80.00th=[30336], 90.00th=[48896], 95.00th=[122368],
     | 99.00th=[199680], 99.50th=[252928], 99.90th=[757760], 99.95th=[1204224],
     | 99.99th=[2179072]
    bw (KB  /s): min=    5, max= 2922, per=6.52%, avg=1308.74, stdev=422.91
    lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
    lat (usec) : 100=0.01%, 250=0.04%, 500=0.14%, 750=1.36%, 1000=5.64%
    lat (msec) : 2=13.25%, 4=6.09%, 10=12.02%, 20=17.31%, 50=32.71%
    lat (msec) : 100=4.46%, 250=6.35%, 500=0.44%, 750=0.07%, 1000=0.04%
    lat (msec) : 2000=0.05%, >=2000=0.02%
  cpu          : usr=0.49%, sys=3.95%, ctx=563883, majf=0, minf=493
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=3509245/w=1506325/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: io=27416MB, aggrb=46772KB/s, minb=46772KB/s, maxb=46772KB/s, mint=600230msec, maxt=600230msec
  WRITE: io=11768MB, aggrb=20076KB/s, minb=20076KB/s, maxb=20076KB/s, mint=600230msec, maxt=600230msec

Disk stats (read/write):
  sdc: ios=3509243/1506325, merge=1/0, ticks=39469471/12480440, in_queue=51447919, util=100.00%
+ fio --name=readiops --filename=/dev/sdc --direct=1 --rw=randread --bs=512 --numjobs=4 --iodepth=32 --direct=1 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=600 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting
readiops: (g=0): rw=randread, bs=512-512/512-512/512-512, ioengine=libaio, iodepth=32
...
fio-2.1.10
Starting 4 processes

readiops: (groupid=0, jobs=4): err= 0: pid=13032: Fri Apr 17 09:24:28 2015
  read : io=3138.6MB, bw=5355.6KB/s, iops=10710, runt=600092msec
    slat (usec): min=98, max=827005, avg=4490.52, stdev=20829.79
    clat (usec): min=1, max=1228.4K, avg=7348.23, stdev=24664.04
     lat (usec): min=507, max=1229.2K, avg=11839.28, stdev=32123.29
    clat percentiles (usec):
     |  1.00th=[    4],  5.00th=[    4], 10.00th=[  844], 20.00th=[  980],
     | 30.00th=[ 1112], 40.00th=[ 2864], 50.00th=[ 4256], 60.00th=[ 5088],
     | 70.00th=[ 6048], 80.00th=[ 7072], 90.00th=[ 8640], 95.00th=[11072],
     | 99.00th=[124416], 99.50th=[162816], 99.90th=[342016], 99.95th=[460800],
     | 99.99th=[602112]
    bw (KB  /s): min=    1, max= 2671, per=25.53%, avg=1367.06, stdev=408.67
    lat (usec) : 2=0.01%, 4=0.76%, 10=5.36%, 20=0.01%, 50=0.08%
    lat (usec) : 100=0.17%, 250=0.13%, 500=0.04%, 750=0.73%, 1000=14.59%
    lat (msec) : 2=13.34%, 4=12.67%, 10=45.80%, 20=3.61%, 50=0.42%
    lat (msec) : 100=0.62%, 250=1.49%, 500=0.14%, 750=0.03%, 1000=0.01%
    lat (msec) : 2000=0.01%
  cpu          : usr=0.95%, sys=21.90%, ctx=277345, majf=0, minf=138
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.1%, 32=100.8%, >=64=0.0%
     submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=6427536/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=3138.6MB, aggrb=5355KB/s, minb=5355KB/s, maxb=5355KB/s, mint=600092msec, maxt=600092msec

Disk stats (read/write):
  sdc: ios=6479964/0, merge=0/0, ticks=15028228/0, in_queue=14750598, util=99.72%
+ fio --name=writeiops --filename=/dev/sdc --direct=1 --rw=randwrite --bs=512 --numjobs=4 --iodepth=32 --direct=1 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=600 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting
writeiops: (g=0): rw=randwrite, bs=512-512/512-512/512-512, ioengine=libaio, iodepth=32
...
fio-2.1.10
Starting 4 processes

writeiops: (groupid=0, jobs=4): err= 0: pid=13473: Fri Apr 17 09:34:34 2015
  write: io=2885.5MB, bw=4924.6KB/s, iops=9848, runt=600006msec
    slat (usec): min=103, max=1148.3K, avg=4462.55, stdev=23248.22
    clat (usec): min=1, max=1148.3K, avg=8406.47, stdev=28443.37
     lat (usec): min=570, max=1152.4K, avg=12869.20, stdev=36480.62
    clat percentiles (usec):
     |  1.00th=[    4],  5.00th=[  788], 10.00th=[  892], 20.00th=[ 1012],
     | 30.00th=[ 1976], 40.00th=[ 3280], 50.00th=[ 4640], 60.00th=[ 5280],
     | 70.00th=[ 6304], 80.00th=[ 7200], 90.00th=[ 9024], 95.00th=[11968],
     | 99.00th=[160768], 99.50th=[183296], 99.90th=[329728], 99.95th=[497664],
     | 99.99th=[716800]
    bw (KB  /s): min=    1, max= 2848, per=25.60%, avg=1260.50, stdev=368.66
    lat (usec) : 2=0.01%, 4=0.20%, 10=3.01%, 20=0.01%, 50=0.04%
    lat (usec) : 100=0.08%, 250=0.21%, 500=0.10%, 750=0.73%, 1000=14.72%
    lat (msec) : 2=10.96%, 4=14.33%, 10=48.11%, 20=4.71%, 50=0.24%
    lat (msec) : 100=0.37%, 250=2.02%, 500=0.11%, 750=0.04%, 1000=0.01%
    lat (msec) : 2000=0.01%
  cpu          : usr=1.07%, sys=20.93%, ctx=252884, majf=0, minf=117
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.1%, 32=100.9%, >=64=0.0%
     submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=5909337/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=2885.5MB, aggrb=4924KB/s, minb=4924KB/s, maxb=4924KB/s, mint=600006msec, maxt=600006msec

Disk stats (read/write):
  sdc: ios=0/5958605, merge=0/0, ticks=0/17307428, in_queue=17039253, util=99.30%

IO benchmarks on 100GB standard storage:
[root@CentOS64-64bit ~]# bash -x iobench.sh /dev/sdb
+ mkfs.ext4 -F /dev/sdb
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26214400 blocks
1310720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
+ mount /dev/sdb /mnt
+ cd /mnt
+ dd if=/dev/zero of=tempfile bs=1M count=10240 conv=fdatasync,notrunc
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 1068.78 s, 10.0 MB/s <-- bad sequential write

+ echo 3
+ dd if=tempfile of=/dev/null bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 6529.85 s, 1.6 MB/s <-- ridiculously bad sequential read
+ cd /
+ umount /mnt
+ fio --filename=/dev/sdb --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=600 --group_reporting --name=8k7030test
8k7030test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
...
fio-2.1.10
Starting 16 processes

8k7030test: (groupid=0, jobs=16): err= 0: pid=16079: Fri Apr 17 11:55:32 2015
  read : io=1000.2MB, bw=1705.3KB/s, iops=213, runt=600583msec
    slat (usec): min=8, max=1502.7K, avg=33032.99, stdev=116023.58
    clat (usec): min=1, max=3749.6K, avg=775194.11, stdev=401008.75
     lat (usec): min=71, max=3750.5K, avg=808228.31, stdev=408064.42
    clat percentiles (msec):
     |  1.00th=[    8],  5.00th=[  249], 10.00th=[  265], 20.00th=[  498],
     | 30.00th=[  502], 40.00th=[  717], 50.00th=[  750], 60.00th=[  758],
     | 70.00th=[  996], 80.00th=[ 1004], 90.00th=[ 1254], 95.00th=[ 1500],
     | 99.00th=[ 2008], 99.50th=[ 2245], 99.90th=[ 2704], 99.95th=[ 2769],
     | 99.99th=[ 3261]
    bw (KB  /s): min=    4, max=  448, per=6.62%, avg=112.90, stdev=65.15
  write: io=438352KB, bw=747394B/s, iops=91, runt=600583msec
    slat (usec): min=8, max=1746.9K, avg=33369.44, stdev=116917.69
    clat (msec): min=1, max=3500, avg=883.06, stdev=397.86
     lat (msec): min=1, max=3500, avg=916.43, stdev=409.99
    clat percentiles (msec):
     |  1.00th=[  249],  5.00th=[  379], 10.00th=[  498], 20.00th=[  502],
     | 30.00th=[  734], 40.00th=[  750], 50.00th=[  750], 60.00th=[  988],
     | 70.00th=[ 1004], 80.00th=[ 1237], 90.00th=[ 1483], 95.00th=[ 1663],
     | 99.00th=[ 2212], 99.50th=[ 2278], 99.90th=[ 2737], 99.95th=[ 2999],
     | 99.99th=[ 3261]
    bw (KB  /s): min=    5, max=  256, per=7.71%, avg=56.17, stdev=32.58
    lat (usec) : 2=0.01%, 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
    lat (usec) : 1000=0.02%
    lat (msec) : 2=0.09%, 4=0.18%, 10=0.63%, 20=0.71%, 50=0.25%
    lat (msec) : 100=0.11%, 250=3.00%, 500=17.12%, 750=27.17%, 1000=25.10%
    lat (msec) : 2000=24.40%, >=2000=1.18%
  cpu          : usr=0.03%, sys=0.07%, ctx=43173, majf=0, minf=509
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=128021/w=54794/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: io=1000.2MB, aggrb=1705KB/s, minb=1705KB/s, maxb=1705KB/s, mint=600583msec, maxt=600583msec
  WRITE: io=438352KB, aggrb=729KB/s, minb=729KB/s, maxb=729KB/s, mint=600583msec, maxt=600583msec

Disk stats (read/write):
  sdb: ios=128006/54787, merge=0/0, ticks=61426377/31979618, in_queue=93420213, util=100.00%
+ fio --name=readiops --filename=/dev/sdb --direct=1 --rw=randread --bs=512 --numjobs=4 --iodepth=32 --direct=1 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=600 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting
readiops: (g=0): rw=randread, bs=512-512/512-512/512-512, ioengine=libaio, iodepth=32
...
fio-2.1.10
Starting 4 processes

readiops: (groupid=0, jobs=4): err= 0: pid=16504: Fri Apr 17 12:05:38 2015
  read : io=91758KB, bw=156472B/s, iops=305, runt=600492msec
    slat (usec): min=100, max=2639, avg=147.97, stdev=93.66
    clat (msec): min=2, max=2001, avg=418.80, stdev=211.55
     lat (msec): min=3, max=2002, avg=418.95, stdev=211.55
    clat percentiles (msec):
     |  1.00th=[   15],  5.00th=[  104], 10.00th=[  186], 20.00th=[  249],
     | 30.00th=[  285], 40.00th=[  343], 50.00th=[  400], 60.00th=[  465],
     | 70.00th=[  502], 80.00th=[  553], 90.00th=[  734], 95.00th=[  766],
     | 99.00th=[ 1012], 99.50th=[ 1156], 99.90th=[ 1385], 99.95th=[ 1467],
     | 99.99th=[ 1680]
    bw (KB  /s): min=    1, max=   93, per=25.42%, avg=38.63, stdev=11.82
    lat (msec) : 4=0.03%, 10=0.41%, 20=1.00%, 50=1.05%, 100=2.31%
    lat (msec) : 250=15.98%, 500=49.49%, 750=22.85%, 1000=5.73%, 2000=1.18%
    lat (msec) : >=2000=0.01%
  cpu          : usr=0.05%, sys=0.46%, ctx=128598, majf=0, minf=138
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.1%, 32=100.6%, >=64=0.0%
     submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=183392/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=91758KB, aggrb=152KB/s, minb=152KB/s, maxb=152KB/s, mint=600492msec, maxt=600492msec

Disk stats (read/write):
  sdb: ios=184406/0, merge=0/0, ticks=62067138/0, in_queue=62089302, util=100.00%
+ fio --name=writeiops --filename=/dev/sdb --direct=1 --rw=randwrite --bs=512 --numjobs=4 --iodepth=32 --direct=1 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=600 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting
writeiops: (g=0): rw=randwrite, bs=512-512/512-512/512-512, ioengine=libaio, iodepth=32
...
fio-2.1.10
Starting 4 processes

writeiops: (groupid=0, jobs=4): err= 0: pid=16690: Fri Apr 17 12:15:45 2015
  write: io=59950KB, bw=102199B/s, iops=199, runt=600676msec
    slat (usec): min=104, max=4062, avg=192.40, stdev=264.93
    clat (msec): min=168, max=2249, avg=641.23, stdev=207.29
     lat (msec): min=168, max=2249, avg=641.43, stdev=207.28
    clat percentiles (msec):
     |  1.00th=[  249],  5.00th=[  273], 10.00th=[  498], 20.00th=[  498],
     | 30.00th=[  502], 40.00th=[  502], 50.00th=[  627], 60.00th=[  750],
     | 70.00th=[  750], 80.00th=[  750], 90.00th=[  750], 95.00th=[ 1004],
     | 99.00th=[ 1254], 99.50th=[ 1500], 99.90th=[ 1500], 99.95th=[ 1745],
     | 99.99th=[ 1745]
    bw (KB  /s): min=    1, max=   48, per=25.64%, avg=25.38, stdev= 6.95
    lat (msec) : 250=1.86%, 500=33.00%, 750=45.03%, 1000=16.31%, 2000=3.85%
    lat (msec) : >=2000=0.01%
  cpu          : usr=0.04%, sys=0.28%, ctx=19702, majf=0, minf=115
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.1%, 32=100.9%, >=64=0.0%
     submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=119776/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=59950KB, aggrb=99KB/s, minb=99KB/s, maxb=99KB/s, mint=600676msec, maxt=600676msec

Disk stats (read/write):
  sdb: ios=0/120857, merge=0/0, ticks=0/67944991, in_queue=67965809, util=100.00%

Total cost to evaluate and benchmark the platform was only $2.58:



Comparison between AWS and vCloud Air based on similar performance (On Demand, Australian region, prices in US$/month):

AWS instance | vCPU | Memory (GB) | Multi-core perf | US$/month  ||  vCloud Air: vCPU | Memory (GB) | Multi-core perf (est) | US$/month
t2.micro     |  1   |  1          | Burst           | $14        ||  Nothing available
m3.medium    |  1   |  3.75       | 1,315           | $71        ||  Nothing available
c3.large     |  2   |  3.75       | 3,351           | $98        ||   1 |  4 |  2,513 | $87
c3.xlarge    |  4   |  7.5        | 6,676           | $197       ||   2 |  8 |  5,214 | $174
c3.2xlarge   |  8   | 15          | 12,367          | $394       ||   5 | 15 | 11,017 | $342
c3.4xlarge   | 16   | 30          | 23,160          | $787       ||  10 | 30 | 22,034 | $684
c3.8xlarge   | 32   | 60          | 44,092          | $1,575     ||  Nothing available
c4.8xlarge   | 36   | 60          | 53,985          | $1,751     ||  Nothing available

The good:
  • Same hypervisor as most enterprises already use on premises, making migration easy
  • Easy transition for staff as it still uses the same hypervisor
  • Fantastic performance on SSD accelerated storage
  • Very simple pricing model (all in US$ for the Australian region; see the quick cost sketch after this list):
      • vCPU: $0.017/hr -> $12.24/month
      • 1GB RAM: $0.026/hr -> $18.72/month
      • Public IP: $0.048/hr -> $34.56/month
      • 1GB SSD: $0.00018/hr -> $0.1296/month
  • You only pay for the public IP; network bandwidth is free
  • Pricing for IaaS On Demand seems very competitive 
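
To make the arithmetic concrete, here is a small helper (my own sketch, not a VMware tool) that turns those hourly rates into the monthly figures used in the comparison table above, assuming a 720-hour month:

#!/bin/bash
# vcacost.sh - rough vCloud Air On Demand monthly cost from the hourly rates above
# monthly <vCPUs> <GB RAM> <public IPs> <GB SSD>
monthly() {
    awk -v c="$1" -v r="$2" -v i="$3" -v s="$4" 'BEGIN {
        printf "%s vCPU / %s GB RAM / %s IP / %s GB SSD = $%.2f/month\n",
               c, r, i, s, (c*0.017 + r*0.026 + i*0.048 + s*0.00018) * 720 }'
}

monthly 1 4 0 0     # $87.12  - the 1 vCPU / 4GB row in the comparison table
monthly 2 8 0 0     # $174.24 - the 2 vCPU / 8GB row
monthly 1 2 1 100   # $97.20  - a 1 vCPU / 2GB VM with a public IP and 100GB of SSD
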
The bad:
  • Only a single data centre in Australia and no DR capability. This is not enterprise ready
  • No low-cost micro VMs or high-performance large VMs like AWS and Azure offer
  • No transparent discounts for customers willing to make a long-term commitment, unlike AWS Reserved Instances, which offer 50%+ discounts.
  • Ridiculously bad performance on standard storage (I get almost 100x better sequential read performance on a desktop PC with a single SATA drive)
  • No backups; snapshots are not backups. Where is the Data Protection service for the On Demand cloud?
  • No simple, easy-to-use APIs with examples for Linux (CLI commands that can be added to scripts) or Windows (PowerShell), at least according to the documentation. Such tooling lets DevOps teams automate a software-defined data centre far faster, and with fewer errors, than traditional management. The gap also shows on GitHub, the largest code repository, which has very few projects compared to the other clouds (search for 'vcloud air' = 421, 'aws' = 2,538,098, 'Azure' = 2,256,265)
  • No services. The promise of cloud is "as a service", yet other than IaaS vCloud Air seems to be missing the point. When they eventually bring out services (which they will), it will be a version 1.0 that, like any early-release product, will be full of bugs, missing features and stability problems. As they are a decade behind on heavy use of automation and services, it will take a long time to catch up (if ever!).
  • IaaS is more expensive than an on-premises private cloud. Enterprises already run VMware private clouds on premises, so why move to a more expensive vCloud Air that is still missing the services and easy automation that make cloud so attractive?
Update 19/4/2015 - Correction - VMware does have a CLI tool for vCloud Air: vca-cli.
It is open sourced by VMware and requires Python and pip, so it will work on any platform that supports these. Looks great! This should really be a VMware-supported product and part of the official vCloud Air documentation.
http://blog.pacogomez.com/overview-of-vca-cli-features/
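
Getting started is simple; a minimal sketch, assuming Python and pip are already installed (the exact login flags may differ between vca-cli releases):

# install the open-source CLI from PyPI
$ pip install vca-cli

# log in to vCloud Air On Demand (credentials below are placeholders)
$ vca login email@example.com --password 'MySecret1!'

# list vApps and VMs
$ vca vapp
$ vca vm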

# The following commands show how to create, customize and power on, list, power off and delete a vApp:
$ vca vapp create --vapp myvapp --vm myvm \
           --catalog 'Public Catalog' --template 'Ubuntu Server 12.04 LTS (amd64 20150127)' \
           --network default-routed-network --mode POOL 

$ vca vapp customize --vapp myvapp --vm myvm \
           --file add_public_ssh_key.sh
$ vca vapp
$ vca vm
$ vca vapp power.off --vapp myvapp
$ vca vapp delete --vapp myvapp

# Creates an independent disk (d1) of 100GB and attaches it to a virtual machine. Disks can also be detached, listed and deleted.
$ vca disk create -d d1 -s 100
$ vca vapp attach -d d1 -a myvapp -v myvm 
