Thursday, 28 September 2017

Lambda Performance

I read that AWS Lambda performance can be poor for functions with small memory settings. This is because Lambda allocates CPU power in proportion to the configured memory. To test this I ran a simple benchmark with the following function:
from __future__ import print_function

import os

print('Loading function')

def lambda_handler(event, context):
    # Generate 1 GiB of zeros and hash it to create a CPU-bound task
    os.system("dd if=/dev/zero bs=1M count=1024 | md5sum")
    return 'Hello from Lambda'
This uses the md5sum command to create an MD5 hash of some data. We generate the data on the fly with dd and pipe it into md5sum, which gives us a computationally expensive task.
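With os.system, the command's output goes straight to the function's log stream. A variant using subprocess (Python 3.7+ for capture_output; a sketch, not the code used in the test above) could instead capture the output and return it to the caller. Note that dd writes its timing report to stderr, and I use a smaller 64 MB workload here purely for illustration:

```python
import subprocess

def lambda_handler(event, context):
    # Run the benchmark pipeline and capture its output instead of
    # letting it spill into the log stream. md5sum writes the hash to
    # stdout; dd reports its record counts and timing on stderr.
    result = subprocess.run(
        "dd if=/dev/zero bs=1M count=64 | md5sum",
        shell=True,
        capture_output=True,
        text=True,
    )
    return {"md5": result.stdout.strip(), "timing": result.stderr.strip()}
```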

Here is the outcome of the test with a 128 MB function:
01:54:46 Loading function
01:54:46 START RequestId: 2b966183-a4b9-11e7-b7d0-49f3d131a6ae Version: $LATEST
01:55:25 1024+0 records in
01:55:25 1024+0 records out
01:55:25 cd573cfaace07e7949bc0c46028904ff -
01:55:25 1073741824 bytes (1.1 GB) copied, 38.9593 s, 27.6 MB/s
01:55:25 END RequestId: 2b966183-a4b9-11e7-b7d0-49f3d131a6ae
01:55:25 REPORT RequestId: 2b966183-a4b9-11e7-b7d0-49f3d131a6ae Duration: 38993.44 ms Billed Duration: 39000 ms Memory Size: 128 MB Max Memory Used: 21 MB
Here are the results of all of my tests across varying memory sizes; they show that CPU performance is directly proportional to the memory setting:


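If CPU allocation really does scale linearly with the memory setting, the single 128 MB measurement lets us extrapolate expected durations at other settings. This is an assumption-driven sketch, not measured data; only the 128 MB run above was observed:

```python
# Extrapolate the measured 128 MB duration to other memory settings,
# assuming CPU share is directly proportional to configured memory.
baseline_mb = 128
baseline_ms = 38993.44  # measured Duration at 128 MB (from the REPORT line)

def estimated_duration_ms(memory_mb):
    # Double the memory -> double the CPU -> half the duration
    return baseline_ms * baseline_mb / memory_mb

for mb in (128, 256, 512, 1024):
    print("%4d MB -> ~%.0f ms" % (mb, estimated_duration_ms(mb)))
```

Under this model, a 1024 MB function should finish the same workload in roughly 4.9 seconds instead of 39.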
Performance is clearly very poor at small memory settings. Just as a comparison, here is the output from my c4.large EC2 instance:
12:26:12|ec2-user@ip-1.2.3.4:[tmp]> dd if=/dev/zero bs=1M count=1024 | md5sum
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.14463 s, 501 MB/s
cd573cfaace07e7949bc0c46028904ff  -
12:26:12|ec2-user@ip-1.2.3.4:[tmp]>
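Taking the throughput figures that dd reported in the two runs, a quick calculation puts the gap in perspective:

```python
# Throughput ratio between the c4.large run and the 128 MB Lambda run,
# using the MB/s figures from the two dd reports above.
lambda_mb_per_s = 27.6   # 128 MB Lambda
ec2_mb_per_s = 501.0     # c4.large
speedup = ec2_mb_per_s / lambda_mb_per_s
print("c4.large is roughly %.0fx faster" % speedup)
```

That is roughly an 18x difference on this workload.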
