You are probably paying AWS too much

By anan
May 5, 2024

8 Strategies to Optimize Your DynamoDB and Save Over 73% on Cloud Costs

Reducing cloud costs without sacrificing high performance is a real challenge most startups face at some point. Optimizing your cloud database will significantly reduce cloud expenses. Here's how to get started with DynamoDB.

You are probably paying AWS too much
DynamoDB pricing is primarily based on the amount of read/write operations and the storage used in tables. Optional features like data transfer, backups and restores, global tables, and Streams come with their own charges. DynamoDB offers two capacity modes, each with specific billing options for processing reads and writes:

On-Demand Capacity Mode: This mode charges you only for the data reads and writes your application performs, automatically adjusting to workload traffic. It is best for unpredictable application traffic and new tables with unknown workloads, offering a pay-for-what-you-use model.

Provisioned Capacity Mode: Here, you specify the expected number of reads and writes per second, with the option to adjust capacity via Auto Scaling. This mode suits applications with predictable traffic and allows for better cost control, but requires accurate capacity planning.

AWS also provides a free tier for DynamoDB, making the initial hands-on experience more accessible.

Go over these 8 easy-to-implement suggestions to save on your cloud costs:

1. Choose Capacity Modes Wisely — Use on-demand capacity for new or unpredictable workloads and switch to provisioned capacity once traffic patterns become clear to save costs.

2. Utilize Auto Scaling — For provisioned capacity tables, Auto Scaling adjusts your capacity settings based on actual usage, reducing costs by avoiding over-provisioning.

3. Implement Table Tagging — For granular cost analysis, tag your DynamoDB tables to track expenses at the table level.

4. Select the Appropriate Table Class — DynamoDB offers Standard and Standard-Infrequent Access classes, each balancing storage and read/write costs differently. Choose the one that fits your application's needs:

Standard table class: The default table class, designed to balance storage and read/write costs.

Standard-Infrequent Access table class: Offers up to 60% lower storage costs and 25% higher read/write costs than the Standard table class.

5. Find unused tables or GSIs — Unused DynamoDB tables and GSIs (Global Secondary Indexes) can generate unnecessary costs even when not actively used. To identify unused resources, check the CloudWatch metrics console for tables or GSIs with no consumed read/write capacity. Regularly review your tables and GSIs to confirm they are still in active use, and periodically scan your database for orphaned data; deleting unused resources reduces costs, eliminates waste, and boosts database performance.

6. Store large items efficiently — Storing large values or images can quickly drive up your DynamoDB costs, but you can remedy that with the following strategies:

Compress large attribute values. Consider using compression algorithms such as GZIP or LZO to make items smaller before saving them.

Store large objects in Amazon S3. Instead of keeping large objects in DynamoDB, store them in S3, which is well suited for large object storage thanks to its cost-efficiency and high durability. Write the large object into an S3 bucket and create a DynamoDB item that stores the object identifier (for instance, a pointer to the S3 URL of the object), as in the sketch below.
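Below is a minimal sketch of that pointer pattern using boto3, combined with GZIP compression. The bucket name, table name, and attribute names (order_id, payload_s3_key) are placeholders for illustration, not part of any existing schema.

```python
import gzip
import json
import uuid

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

# Placeholder names -- replace with your own bucket and table.
BUCKET = "my-large-objects-bucket"
table = dynamodb.Table("orders")


def put_large_item(order_id: str, payload: dict) -> None:
    """Compress the large payload, store it in S3, and keep only a pointer in DynamoDB."""
    body = gzip.compress(json.dumps(payload).encode("utf-8"))
    key = f"orders/{order_id}/{uuid.uuid4()}.json.gz"

    # The heavy data goes to S3, which is cheaper per GB and built for large objects.
    s3.put_object(Bucket=BUCKET, Key=key, Body=body, ContentEncoding="gzip")

    # The DynamoDB item stays small: just the key attributes and a pointer to the S3 object.
    table.put_item(
        Item={
            "order_id": order_id,
            "payload_s3_key": key,
            "payload_size_bytes": len(body),
        }
    )


def get_large_item(order_id: str) -> dict:
    """Read the pointer from DynamoDB, then fetch and decompress the payload from S3."""
    item = table.get_item(Key={"order_id": order_id})["Item"]
    obj = s3.get_object(Bucket=BUCKET, Key=item["payload_s3_key"])
    return json.loads(gzip.decompress(obj["Body"].read()))
```

The same pattern applies to images or any blob approaching DynamoDB's 400 KB item size limit: the table holds only small, cheap-to-read pointers, while the heavy bytes live in S3.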
7. Identify sub-optimal usage patterns — Sub-optimal usage patterns can incur unintentional expenses. Evaluate how you are using your tables and determine whether any of the patterns below apply:

Using only strongly consistent read operations. A strongly consistent read operation consumes 1 RCU/RRU per 4 KB, while an eventually consistent read operation (the default) consumes 0.5 RCU/RRU per 4 KB.

Using transactions for all read operations. Transactional reads cost 4 times as much as eventually consistent reads. You can check your table utilization in CloudWatch to identify whether everything is done as a transaction. Ensure you only perform transactions when your application requires all-or-nothing atomicity for all items written.

Scanning tables for analytics. Data analysis via scans leads to high costs, as you are charged for all data read from the table. Consider using DynamoDB's Export to S3 functionality to perform analytics or scan operations on the data in S3.

Using Global Tables for Disaster Recovery of a single region. Ensure that your global table usage serves its intended purpose and that global tables are not used just for data replication. They offer low RPO/RTO (Recovery Point Objective/Recovery Time Objective), but there may be cheaper alternatives if you have more flexible RPO/RTO requirements.

8. Lower your stream costs — DynamoDB Streams capture item modifications as stream records, so you can configure DynamoDB Streams as an event source to trigger Lambda functions. These functions can process stream records to perform tasks such as aggregating data for analytics in Amazon Redshift or making your data searchable by indexing it with Amazon OpenSearch Service.

Filter DynamoDB stream events for Lambda. If you know a Lambda function only needs to process a subset of DynamoDB item changes, you can define an event filter so the function is triggered only for specific events instead of every stream record. This does not directly reduce DynamoDB costs, but it reduces the number of Lambda invocations and thereby your Lambda costs.
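As a rough illustration of such a filter, the boto3 sketch below (with a placeholder stream ARN, function name, and status attribute) creates an event source mapping that only forwards MODIFY records whose new image has a status of "SHIPPED"; all other stream records are discarded before Lambda is invoked.

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# Placeholder ARN and function name -- replace with your own stream and function.
STREAM_ARN = (
    "arn:aws:dynamodb:us-east-1:123456789012:table/orders/stream/2024-05-05T00:00:00.000"
)
FUNCTION_NAME = "process-shipped-orders"

# Only invoke the function for MODIFY events where the new image's "status"
# attribute is the string "SHIPPED"; records that do not match are dropped
# before Lambda runs, so they generate no invocations.
filter_pattern = {
    "eventName": ["MODIFY"],
    "dynamodb": {"NewImage": {"status": {"S": ["SHIPPED"]}}},
}

lambda_client.create_event_source_mapping(
    EventSourceArn=STREAM_ARN,
    FunctionName=FUNCTION_NAME,
    StartingPosition="LATEST",
    BatchSize=100,
    FilterCriteria={"Filters": [{"Pattern": json.dumps(filter_pattern)}]},
)
```

The same FilterCriteria can be attached to an existing mapping with update_event_source_mapping or declared in SAM/CloudFormation; records that do not match the filter never trigger, or bill, an invocation.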