Friday 26 June 2015

AWS EFS

http://aws.amazon.com/efs/
http://docs.aws.amazon.com/efs/latest/ug/how-it-works.html

Amazon has announced a new service: AWS Elastic File System (EFS). EFS provides shared access to fully managed file systems. Connecting to EFS is similar to connecting to a network drive, since it supports the NFS protocol, the standard for network-attached storage (NAS) devices.

Elastic File System Features:
  • Elastic File System is designed to be simple and scalable.
  • Designed for use with AWS EC2 instances.
  • Shareable across multiple EC2 instances.
  • Storage capacity (and cost) is automatically scaled up or down as you add or remove files.
  • Like most AWS services, you pay only for what you use.
  • Elastic File System files are stored across multiple Availability Zones within a region.
  • Amazon VPC security groups and network access control lists allow you to control network access to your EFS resources.
  • The cost of storage is based on the average monthly storage space used, at a rate of $0.30/GB-month (about twice the price of a standard EBS volume).
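To make the pricing bullet concrete, here is a back-of-the-envelope calculation for a hypothetical workload averaging 500 GB over a month (the 500 GB figure is just an illustration, not from AWS):

```shell
# Monthly EFS cost = average GB-months used x $0.30/GB-month.
# Work in cents to keep the arithmetic integer-only.
avg_gb=500                                     # hypothetical average usage
rate_cents=30                                  # $0.30/GB-month in cents
monthly_cost_cents=$(( avg_gb * rate_cents ))  # 500 * 30 = 15000 cents
echo "\$$(( monthly_cost_cents / 100 )) per month"
```

Because billing tracks the average space actually used, there is no need to pre-provision capacity the way you would with an EBS volume.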
Technical Specifications:
  • SSD-based storage that grows or shrinks as needed.
  • Can grow to petabyte scale, with throughput and IOPS scaled accordingly.
  • Amazon EFS supports the Network File System version 4 (NFSv4) protocol.
  • Uses standard file and directory permissions (chown and chmod) to control access to directories and files.
  • Setup and configuration are managed through the AWS Console, CLI, or SDKs.
  • EFS supports action-level and resource-level permissions.
  • Data can be accessed from any availability zone within a region.
  • Can be used seamlessly with database instances as storage – throughput and IOPS are scaled accordingly.
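Since EFS honors standard POSIX permissions, restricting a shared directory to one user works exactly as it would on a local file system. A minimal sketch, assuming the file system is mounted at a hypothetical mount point of your choosing:

```shell
# Create a directory on the shared file system and restrict access to one
# user, using plain POSIX tools -- nothing EFS-specific is required.
setup_shared_dir() {
  local dir="$1" owner="$2"
  mkdir -p "$dir"
  chown "$owner" "$dir"
  chmod 750 "$dir"   # owner: rwx, group: r-x, others: no access
}

# Example (hypothetical path; substitute your actual EFS mount point):
#   setup_shared_dir /efs/appdata ec2-user
```

Every EC2 instance that mounts the file system sees the same ownership and mode bits, so permissions only need to be set once.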

Setting up and accessing the Elastic File System from EC2 instances:

Setting up your EC2 instance
  1. Using the Amazon EC2 console, associate your EC2 instance with a VPC security group that enables access to your mount target. For example, if you assigned the "default" security group to your mount target, assign the "default" security group to your EC2 instance as well.
  2. Open an SSH client and connect to your EC2 instance.
  3. Install the NFS client on your EC2 instance.
    • On an Amazon Linux, Red Hat Enterprise Linux, or SuSE Linux instance:
      sudo yum install -y nfs-utils
    • On an Ubuntu instance:
      sudo apt-get install nfs-common
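The two install commands above can be folded into one distro-agnostic helper by probing for the package manager. This is a sketch under the assumption that yum-based distros package the client as nfs-utils and apt-based ones as nfs-common, as in the steps above:

```shell
# Install the NFS client using whichever package manager is present.
install_nfs_client() {
  if command -v yum >/dev/null 2>&1; then
    sudo yum install -y nfs-utils        # Amazon Linux, RHEL, SuSE
  elif command -v apt-get >/dev/null 2>&1; then
    sudo apt-get install -y nfs-common   # Ubuntu, Debian
  else
    echo "install_nfs_client: no supported package manager found" >&2
    return 1
  fi
}
```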
Mounting your file system
  1. Open an SSH client and connect to your EC2 instance.
  2. Create a new directory on your EC2 instance, such as "efs".
    • sudo mkdir efs
  3. Mount your file system using the DNS name. The following command looks up your EC2 instance's Availability Zone (AZ) via the EC2 instance metadata endpoint at 169.254.169.254, then mounts the file system using the DNS name for that AZ.
    • sudo mount -t nfs4 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).fs-58e804f1.efs.us-west-2.amazonaws.com:/ efs
If you are unable to connect, please see our troubleshooting documentation.
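To make the mount survive reboots, the same DNS name can go into /etc/fstab. A small helper that builds the fstab line (the file system ID fs-58e804f1 is the one from this post; the "defaults" options are an assumption, not an AWS recommendation):

```shell
# Build an /etc/fstab entry for an NFSv4 mount of an EFS file system.
fstab_entry() {
  local fs_dns="$1" mount_point="$2"
  echo "$fs_dns:/ $mount_point nfs4 defaults 0 0"
}

# Example usage (requires a resolvable EFS mount target):
#   AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
#   fstab_entry "$AZ.fs-58e804f1.efs.us-west-2.amazonaws.com" /efs | sudo tee -a /etc/fstab
```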


EFS testing on an r3.2xlarge instance:
mount -t nfs4 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).fs-58e804f1.efs.us-west-2.amazonaws.com:/ /efs
mount.nfs4: Failed to resolve server us-west-2b.fs-58e804f1.efs.us-west-2.amazonaws.com: Name or service not known <- Looks like a bug in the EFS preview; mounting by IP address works fine

# mount 172.30.1.198:/ /efs  # Note IP address of EFS on same VPC as instance
# mount 172.30.1.148:/ /efs2 # Note each EFS has its own separate IP address
[root@ip-172-30-1-162 ec2-user]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      985G  1.6G  983G   1% /
devtmpfs         30G   60K   30G   1% /dev
tmpfs            31G     0   31G   0% /dev/shm
172.30.1.198:/  8.0E  214G  8.0E   1% /efs <- Filesystem size 8 Exabytes (8,000 Petabytes!!!)
172.30.1.148:/  8.0E     0  8.0E   0% /efs2 


Benchmarks on an r3.2xlarge instance, sequential reads or writes ~100 MB/s, random ~250 IOPS:
fio Random 8K 70/30 qd=16:
8k7030test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
...
8k7030test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
fio-2.1.5
Starting 16 processes
8k7030test: Laying out IO file(s) (1 file(s) / 204800MB)

8k7030test: (groupid=0, jobs=16): err= 0: pid=23904: Thu Jun 25 08:15:32 2015
 read : io=181216KB, bw=1498.7KB/s, iops=186, runt=120922msec
   slat (usec): min=1, max=57, avg= 7.55, stdev= 4.10
   clat (msec): min=523, max=1638, avg=967.39, stdev=94.76
    lat (msec): min=523, max=1638, avg=967.40, stdev=94.76
   clat percentiles (msec):
    |  1.00th=[  758],  5.00th=[  824], 10.00th=[  857], 20.00th=[  889],
    | 30.00th=[  914], 40.00th=[  938], 50.00th=[  963], 60.00th=[  988],
    | 70.00th=[ 1012], 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1123],
    | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1385], 99.95th=[ 1467],
    | 99.99th=[ 1598]
   bw (KB  /s): min=    1, max=  219, per=6.27%, avg=93.95, stdev=38.97
 write: io=72128KB, bw=610799B/s, iops=73, runt=120922msec
   slat (usec): min=2, max=38, avg= 8.32, stdev= 4.46
   clat (msec): min=604, max=1508, avg=1021.17, stdev=95.66
    lat (msec): min=604, max=1508, avg=1021.18, stdev=95.66
   clat percentiles (msec):
    |  1.00th=[  799],  5.00th=[  873], 10.00th=[  906], 20.00th=[  938],
    | 30.00th=[  971], 40.00th=[  996], 50.00th=[ 1020], 60.00th=[ 1045],
    | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1188],
    | 99.00th=[ 1254], 99.50th=[ 1254], 99.90th=[ 1287], 99.95th=[ 1385],
    | 99.99th=[ 1516]
   bw (KB  /s): min=    1, max=  127, per=6.72%, avg=40.03, stdev=20.94
   lat (msec) : 750=0.75%, 1000=58.11%, 2000=41.90%
 cpu          : usr=0.00%, sys=0.04%, ctx=37010, majf=0, minf=355
 IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=103.9%, 32=0.0%, >=64=0.0%
    submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=22511/w=8917/d=0, short=r=0/w=0/d=0
    latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  READ: io=181216KB, aggrb=1498KB/s, minb=1498KB/s, maxb=1498KB/s, mint=120922msec, maxt=120922msec
 WRITE: io=72128KB, aggrb=596KB/s, minb=596KB/s, maxb=596KB/s, mint=120922msec, maxt=120922msec
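The fio command lines themselves are not included in the post. Reading the parameters off the output above (16 jobs, 8 KB blocks, 70/30 read/write mix, queue depth 16, ~120 s runtime, 200 GB file), a plausible reconstruction as a fio job file would look roughly like this; the direct and time_based settings are assumptions, and the other tests below vary only rw, bs, iodepth, and numjobs:

```ini
; 8k7030.fio -- reconstructed from the output above, not from the original post
[global]
directory=/efs
ioengine=libaio
direct=1
time_based
runtime=120
group_reporting

[8k7030test]
rw=randrw
rwmixread=70
bs=8k
iodepth=16
numjobs=16
size=200g
```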

fio Sequential read 1MB qd=32:
readbw: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
...
readbw: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
fio-2.1.5
Starting 4 processes

readbw: (groupid=0, jobs=4): err= 0: pid=23922: Thu Jun 25 08:17:38 2015
 read : io=12156MB, bw=102992KB/s, iops=99, runt=120861msec
   slat (usec): min=967, max=44395, avg=1392.67, stdev=2065.40
   clat (msec): min=1118, max=1438, avg=1277.70, stdev= 6.62
    lat (msec): min=1120, max=1440, avg=1278.93, stdev= 6.62
   clat percentiles (msec):
    |  1.00th=[ 1254],  5.00th=[ 1270], 10.00th=[ 1270], 20.00th=[ 1270],
    | 30.00th=[ 1287], 40.00th=[ 1287], 50.00th=[ 1287], 60.00th=[ 1287],
    | 70.00th=[ 1287], 80.00th=[ 1287], 90.00th=[ 1287], 95.00th=[ 1287],
    | 99.00th=[ 1287], 99.50th=[ 1287], 99.90th=[ 1303], 99.95th=[ 1303],
    | 99.99th=[ 1434]
   bw (KB  /s): min= 2627, max=26684, per=24.64%, avg=25378.93, stdev=2238.44
   lat (msec) : 2000=100.53%
 cpu          : usr=0.00%, sys=0.22%, ctx=25539, majf=0, minf=32870
 IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.5%, 32=105.7%, >=64=0.0%
    submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=12032/w=0/d=0, short=r=0/w=0/d=0
    latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  READ: io=12156MB, aggrb=102992KB/s, minb=102992KB/s, maxb=102992KB/s, mint=120861msec, maxt=120861msec

fio Sequential write 1MB qd=32:
writebw: (g=0): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
...
writebw: (g=0): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
fio-2.1.5
Starting 4 processes

writebw: (groupid=0, jobs=4): err= 0: pid=23928: Thu Jun 25 08:19:45 2015
 write: io=12348MB, bw=104358KB/s, iops=100, runt=121163msec
   slat (usec): min=884, max=9070, avg=1467.11, stdev=446.48
   clat (msec): min=549, max=2065, avg=1259.65, stdev=202.18
    lat (msec): min=550, max=2066, avg=1261.09, stdev=202.08
   clat percentiles (msec):
    |  1.00th=[  603],  5.00th=[ 1156], 10.00th=[ 1188], 20.00th=[ 1205],
    | 30.00th=[ 1237], 40.00th=[ 1254], 50.00th=[ 1270], 60.00th=[ 1270],
    | 70.00th=[ 1287], 80.00th=[ 1287], 90.00th=[ 1336], 95.00th=[ 1467],
    | 99.00th=[ 1926], 99.50th=[ 1942], 99.90th=[ 2008], 99.95th=[ 2040],
    | 99.99th=[ 2057]
   bw (KB  /s): min= 2692, max=30007, per=24.85%, avg=25932.15, stdev=2126.56
   lat (msec) : 750=4.88%, 1000=0.03%, 2000=95.47%, >=2000=0.15%
 cpu          : usr=0.14%, sys=0.19%, ctx=25694, majf=0, minf=94
 IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.5%, 32=104.7%, >=64=0.0%
    submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=0/w=12224/d=0, short=r=0/w=0/d=0
    latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
 WRITE: io=12348MB, aggrb=104358KB/s, minb=104358KB/s, maxb=104358KB/s, mint=121163msec, maxt=121163msec

fio Random read 8K qd=16:
readiops: (g=0): rw=randread, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
...
readiops: (g=0): rw=randread, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
fio-2.1.5
Starting 4 processes

readiops: (groupid=0, jobs=4): err= 0: pid=23934: Thu Jun 25 08:21:51 2015
 read : io=377696KB, bw=3143.1KB/s, iops=392, runt=120136msec
   slat (usec): min=23, max=265, avg=125.28, stdev=47.52
   clat (msec): min=14, max=415, avg=162.87, stdev=39.90
    lat (msec): min=14, max=415, avg=162.99, stdev=39.91
   clat percentiles (msec):
    |  1.00th=[   30],  5.00th=[   66], 10.00th=[  110], 20.00th=[  147],
    | 30.00th=[  159], 40.00th=[  165], 50.00th=[  172], 60.00th=[  178],
    | 70.00th=[  184], 80.00th=[  190], 90.00th=[  200], 95.00th=[  210],
    | 99.00th=[  227], 99.50th=[  233], 99.90th=[  255], 99.95th=[  273],
    | 99.99th=[  416]
   bw (KB  /s): min=   24, max= 3535, per=24.91%, avg=782.92, stdev=283.54
   lat (msec) : 20=0.20%, 50=3.05%, 100=5.46%, 250=91.18%, 500=0.10%
 cpu          : usr=0.01%, sys=0.16%, ctx=96167, majf=0, minf=225
 IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=105.1%, 32=0.0%, >=64=0.0%
    submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=47152/w=0/d=0, short=r=0/w=0/d=0
    latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  READ: io=377696KB, aggrb=3143KB/s, minb=3143KB/s, maxb=3143KB/s, mint=120136msec, maxt=120136msec

fio Random write 8K qd=16:
writeiops: (g=0): rw=randwrite, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
...
writeiops: (g=0): rw=randwrite, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
fio-2.1.5
Starting 4 processes

writeiops: (groupid=0, jobs=4): err= 0: pid=23942: Thu Jun 25 08:23:57 2015
 write: io=170976KB, bw=1423.9KB/s, iops=177, runt=120084msec
   slat (usec): min=27, max=419, avg=158.18, stdev=43.81
   clat (msec): min=278, max=482, avg=360.33, stdev=23.31
    lat (msec): min=278, max=482, avg=360.49, stdev=23.31
   clat percentiles (msec):
    |  1.00th=[  306],  5.00th=[  322], 10.00th=[  330], 20.00th=[  343],
    | 30.00th=[  347], 40.00th=[  355], 50.00th=[  359], 60.00th=[  367],
    | 70.00th=[  371], 80.00th=[  379], 90.00th=[  388], 95.00th=[  400],
    | 99.00th=[  416], 99.50th=[  429], 99.90th=[  469], 99.95th=[  482],
    | 99.99th=[  482]
   bw (KB  /s): min=   22, max=  420, per=24.85%, avg=353.60, stdev=30.26
   lat (msec) : 500=100.00%
 cpu          : usr=0.02%, sys=0.06%, ctx=43455, majf=0, minf=90
 IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=104.4%, 32=0.0%, >=64=0.0%
    submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=0/w=21312/d=0, short=r=0/w=0/d=0
    latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
 WRITE: io=170976KB, aggrb=1423KB/s, minb=1423KB/s, maxb=1423KB/s, mint=120084msec, maxt=120084msec

fio Write bandwidth - 1MB random write qd=32:
writebw: (g=0): rw=randwrite, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
...
writebw: (g=0): rw=randwrite, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
fio-2.1.5
Starting 4 processes

writebw: (groupid=0, jobs=4): err= 0: pid=23948: Thu Jun 25 08:26:03 2015
 write: io=12156MB, bw=102982KB/s, iops=99, runt=120873msec
   slat (usec): min=839, max=9467, avg=1459.82, stdev=469.60
   clat (msec): min=625, max=1929, avg=1276.70, stdev=45.86
    lat (msec): min=626, max=1930, avg=1278.13, stdev=45.84
   clat percentiles (msec):
    |  1.00th=[ 1237],  5.00th=[ 1270], 10.00th=[ 1270], 20.00th=[ 1270],
    | 30.00th=[ 1270], 40.00th=[ 1270], 50.00th=[ 1270], 60.00th=[ 1287],
    | 70.00th=[ 1287], 80.00th=[ 1287], 90.00th=[ 1287], 95.00th=[ 1287],
    | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1926], 99.95th=[ 1926],
    | 99.99th=[ 1926]
   bw (KB  /s): min= 2615, max=27215, per=24.78%, avg=25520.56, stdev=1700.49
   lat (msec) : 750=0.24%, 2000=100.29%
 cpu          : usr=0.15%, sys=0.18%, ctx=24545, majf=0, minf=90
 IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.5%, 32=105.7%, >=64=0.0%
    submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=0/w=12032/d=0, short=r=0/w=0/d=0
    latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
 WRITE: io=12156MB, aggrb=102982KB/s, minb=102982KB/s, maxb=102982KB/s, mint=120873msec, maxt=120873msec

fio Read Max IOPS - 512B random read qd=32:
readiops: (g=0): rw=randread, bs=512-512/512-512/512-512, ioengine=libaio, iodepth=32
...
readiops: (g=0): rw=randread, bs=512-512/512-512/512-512, ioengine=libaio, iodepth=32
fio-2.1.5
Starting 4 processes

readiops: (groupid=0, jobs=4): err= 0: pid=23954: Thu Jun 25 08:28:09 2015
 read : io=26998KB, bw=229732B/s, iops=447, runt=120340msec
   slat (usec): min=21, max=393, avg=130.70, stdev=49.04
   clat (msec): min=56, max=436, avg=285.21, stdev=85.18
    lat (msec): min=56, max=437, avg=285.34, stdev=85.18
   clat percentiles (msec):
    |  1.00th=[   98],  5.00th=[  116], 10.00th=[  135], 20.00th=[  202],
    | 30.00th=[  243], 40.00th=[  302], 50.00th=[  318], 60.00th=[  330],
    | 70.00th=[  343], 80.00th=[  355], 90.00th=[  371], 95.00th=[  383],
    | 99.00th=[  400], 99.50th=[  404], 99.90th=[  429], 99.95th=[  437],
    | 99.99th=[  437]
   bw (KB  /s): min=    1, max=  142, per=24.99%, avg=55.99, stdev=22.04
   lat (msec) : 100=1.27%, 250=29.71%, 500=69.14%
 cpu          : usr=0.02%, sys=0.15%, ctx=114600, majf=0, minf=113
 IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.1%, 32=108.6%, >=64=0.0%
    submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=53872/w=0/d=0, short=r=0/w=0/d=0
    latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  READ: io=26998KB, aggrb=224KB/s, minb=224KB/s, maxb=224KB/s, mint=120340msec, maxt=120340msec

fio Read bandwidth - 1MB random read qd=32:
readbw: (g=0): rw=randread, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
...
readbw: (g=0): rw=randread, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
fio-2.1.5
Starting 4 processes

readbw: (groupid=0, jobs=4): err= 0: pid=23962: Thu Jun 25 08:30:16 2015
 read : io=12140MB, bw=103013KB/s, iops=99, runt=120678msec
   slat (usec): min=878, max=30936, avg=1290.61, stdev=1386.69
   clat (msec): min=1102, max=1449, avg=1277.51, stdev=13.33
    lat (msec): min=1103, max=1450, avg=1278.69, stdev=13.33
   clat percentiles (msec):
    |  1.00th=[ 1237],  5.00th=[ 1270], 10.00th=[ 1270], 20.00th=[ 1270],
    | 30.00th=[ 1287], 40.00th=[ 1287], 50.00th=[ 1287], 60.00th=[ 1287],
    | 70.00th=[ 1287], 80.00th=[ 1287], 90.00th=[ 1287], 95.00th=[ 1287],
    | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1434], 99.95th=[ 1434],
    | 99.99th=[ 1450]
   bw (KB  /s): min= 2642, max=31568, per=24.59%, avg=25330.26, stdev=2508.30
   lat (msec) : 2000=100.53%
 cpu          : usr=0.00%, sys=0.21%, ctx=24996, majf=0, minf=32866
 IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.5%, 32=105.7%, >=64=0.0%
    submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=12016/w=0/d=0, short=r=0/w=0/d=0
    latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  READ: io=12140MB, aggrb=103012KB/s, minb=103012KB/s, maxb=103012KB/s, mint=120678msec, maxt=120678msec

fio Max Write IOPS - 512B random write qd=32:
writeiops: (g=0): rw=randwrite, bs=512-512/512-512/512-512, ioengine=libaio, iodepth=32
...
writeiops: (g=0): rw=randwrite, bs=512-512/512-512/512-512, ioengine=libaio, iodepth=32
fio-2.1.5
Starting 4 processes

writeiops: (groupid=0, jobs=4): err= 0: pid=23970: Thu Jun 25 08:32:22 2015
 write: io=12430KB, bw=105478B/s, iops=204, runt=120672msec
   slat (usec): min=23, max=509, avg=145.42, stdev=50.72
   clat (msec): min=424, max=816, avg=621.69, stdev=110.31
    lat (msec): min=424, max=817, avg=621.84, stdev=110.32
   clat percentiles (msec):
    |  1.00th=[  445],  5.00th=[  469], 10.00th=[  482], 20.00th=[  502],
    | 30.00th=[  519], 40.00th=[  537], 50.00th=[  676], 60.00th=[  701],
    | 70.00th=[  717], 80.00th=[  734], 90.00th=[  750], 95.00th=[  758],
    | 99.00th=[  783], 99.50th=[  799], 99.90th=[  816], 99.95th=[  816],
    | 99.99th=[  816]
   bw (KB  /s): min=    1, max=   42, per=24.76%, avg=25.51, stdev= 5.50
   lat (msec) : 500=17.79%, 750=73.74%, 1000=8.73%
 cpu          : usr=0.00%, sys=0.08%, ctx=50478, majf=0, minf=90
 IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.3%, 32=105.2%, >=64=0.0%
    submit    : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=0/w=24736/d=0, short=r=0/w=0/d=0
    latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
 WRITE: io=12430KB, aggrb=103KB/s, minb=103KB/s, maxb=103KB/s, mint=120672msec, maxt=120672msec


Network is the bottleneck:
#  dd if=/dev/zero of=tempfile2 bs=1M count=10240 conv=fdatasync,notrunc
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 105.664 s, 102 MB/s

Yet if I create two EFS file systems and run the above test on both simultaneously, I get:
/efs1 - 10737418240 bytes (11 GB) copied, 186.399 s, 57.6 MB/s
/efs2 - 10737418240 bytes (11 GB) copied, 185.159 s, 58.0 MB/s
Clearly a limitation of the r3.2xlarge instance's network bandwidth.
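The two-file-system test above can be scripted by launching the dd streams in the background and waiting for both; because they share the instance's NIC, each stream gets roughly half the single-stream throughput. A sketch, parameterized so the mount points and size are easy to swap:

```shell
# Write count MiB of zeros to each target directory simultaneously;
# conv=fdatasync forces data to stable storage before dd reports its rate.
parallel_dd() {
  local count="$1"; shift
  local t
  for t in "$@"; do
    dd if=/dev/zero of="$t/tempfile" bs=1M count="$count" conv=fdatasync,notrunc &
  done
  wait   # block until every background dd has finished
}

# As used in the test above (assuming both file systems are mounted):
#   parallel_dd 10240 /efs1 /efs2
```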
