# help
d
hello there infracost friends! We are trying to create a post-plan Lambda TFE run task and we use the commands `infracost diff` and `infracost output` to calculate the diff and prepare the output in `github-comment` format. However, we noticed that when the Lambda is given 2048 MB of memory, it takes around 18 seconds to finish cost estimating a workspace that manages 2246 AWS resources. We have workspaces that manage way more resources. Do you have any idea how we can speed things up? 🙏
w
Unrelated to the above, just FYI @damp-baker-46244: in the last few days we’ve updated the Run Tasks integration so it works with FinOps/Tagging policies too, see the updated docs.
d
We are aware of the TFE run tasks that you offer. However, due to certain Infosec policies, we cannot send the TFE plan JSON to your servers. I hope you understand. That is why we are trying to create our own run task.
w
Yep, so parsing the plan JSON yourself with the CLI now also works with FinOps/Tagging policies - that’s more what I meant 🙂 (you can email hello@infracost.io if you’d like to discuss those as they’re part of the paid product)
b
@damp-baker-46244 Hello! How big are your plan JSONs? Maybe you could split them per project and run `infracost breakdown`/`infracost diff` for each?
d
I am about to find out the size of these plans
we use Go in Lambda and execute the 2 commands using the CLI. Sorry Ali, but we are not interested in the policies you mention. I already had a call with Hassan about it
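For context, the Lambda handler shells out to the CLI roughly like this (a simplified sketch; the file names are just illustrative):
```go
// Simplified sketch of the post-plan step: diff the TFE plan JSON, then format
// the result as a GitHub comment. File names here are illustrative.
package main

import (
	"fmt"
	"os/exec"
)

func runInfracost(planJSON string) error {
	// Produce the cost diff from the plan JSON in Infracost's JSON format.
	diff := exec.Command("infracost", "diff",
		"--path", planJSON,
		"--format", "json",
		"--out-file", "/tmp/diff.json")
	if out, err := diff.CombinedOutput(); err != nil {
		return fmt.Errorf("infracost diff: %w\n%s", err, out)
	}

	// Render the diff as a github-comment formatted report.
	output := exec.Command("infracost", "output",
		"--path", "/tmp/diff.json",
		"--format", "github-comment",
		"--out-file", "/tmp/comment.md")
	if out, err := output.CombinedOutput(); err != nil {
		return fmt.Errorf("infracost output: %w\n%s", err, out)
	}
	return nil
}

func main() {
	if err := runInfracost("/tmp/plan.json"); err != nil {
		fmt.Println(err)
	}
}
```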
I locally performed a number of tests using 2 different JSON plans:
1. The 1st JSON plan is 483K (52 resources)
   ◦ Running `infracost diff` takes 3 seconds and produces an 11K output file
   ◦ Running `infracost output` takes 2 seconds
2. The 2nd JSON plan is 19M (2246 resources)
   ◦ Running `infracost diff` takes 12 seconds and produces a 1.6M output file
   ◦ Running `infracost output` takes 2 seconds
Are the above numbers expected? Do you have any ideas on how to speed things up?
b
Seems right - more resources means more time to evaluate. Even with request caching, the CLI needs to perform multiple API requests to the pricing API. Splitting the plans and running breakdowns in parallel might speed things up, but it's extra overhead for you to code this logic.
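In case it helps, here's a rough sketch of what parallel breakdowns could look like in Go (the per-project plan paths are placeholders):
```go
// Rough sketch: run `infracost breakdown` for several per-project plan JSONs in
// parallel, writing one Infracost JSON file per project. Paths are placeholders.
package main

import (
	"fmt"
	"os/exec"
	"sync"
)

func main() {
	plans := []string{"project-a/plan.json", "project-b/plan.json"} // placeholder paths

	var wg sync.WaitGroup
	for _, plan := range plans {
		wg.Add(1)
		go func(plan string) {
			defer wg.Done()
			cmd := exec.Command("infracost", "breakdown",
				"--path", plan,
				"--format", "json",
				"--out-file", plan+".infracost.json")
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("breakdown failed for %s: %v\n%s", plan, err, out)
			}
		}(plan)
	}
	wg.Wait()
	// The per-project JSON files can then be combined, e.g. with
	// `infracost output --path "*.infracost.json" --format github-comment`.
}
```
You'd still pay the pricing API latency per project, but the requests would overlap instead of running back to back.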
d
I only tested using a single TFE workspace. I guess it makes a single API request to the pricing API to get the prices? Keep in mind that in our case, multiple TFE workspaces with different sets of variables can point to the same folder in a GitHub repo.
b
12 sec looks pretty good though. Anything less than a minute and under 8 GB RAM seems fine just now
d
I will increase the memory size to see if that helps at all