As a backend developer on AWS, optimizing code to save memory and reduce costs is a critical aspect of efficient operations. In this article, we'll explore various strategies and provide Java code examples before and after optimization, demonstrating how you can save memory and reduce CPU usage on EC2 instances, minify JSON responses stored in Amazon S3, and implement caching to reduce requests to Amazon Aurora.
Three Code Examples I Have Applied in Real Projects
1. Optimizing Memory and CPU on EC2 Instances
Inefficient Memory Usage
Before Optimization (Java Code):
import java.util.ArrayList;
import java.util.List;

public class InefficientMemoryUsage {
    public List<Integer> processList(List<Integer> data) {
        // Eagerly allocates a second full-size list alongside the input.
        List<Integer> result = new ArrayList<>();
        for (Integer num : data) {
            result.add(num * 2);
        }
        return result;
    }
}
In this code, the processList method eagerly builds a second full-size list alongside the input, doubling the working set, which can be memory-intensive for large data sets.
Efficient Memory Usage
After Optimization (Java Code):
import java.util.List;
import java.util.stream.Collectors;

public class EfficientMemoryUsage {
    public List<Integer> processList(List<Integer> data) {
        // The pipeline transforms elements one at a time as they flow through.
        return data.stream()
                   .map(num -> num * 2)
                   .collect(Collectors.toList());
    }
}
In the optimized code, we use Java Streams to express the transformation as a pipeline that processes one element at a time. Be aware that collect(Collectors.toList()) still materializes the full result; the real memory win comes when the source and sink are streaming too, so that no complete copy of the data is ever held at once.
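To make that concrete, here is a minimal sketch of truly lazy stream processing. It is not from the original example; the file path handling and the countLongLines name are illustrative. Files.lines reads a file lazily, so only the current line is held in memory no matter how large the file is:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class LazyStreamProcessing {
    // Streams the file line by line; the whole file is never loaded at once.
    public long countLongLines(Path file) throws IOException {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.filter(line -> line.length() > 80)
                        .count();
        }
    }
}

Because nothing is collected, the memory footprint stays constant regardless of file size.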
2. Minifying JSON Responses in S3
Without Minification
Before Optimization (Java Code):
import com.amazonaws.services.s3.AmazonS3;

public class InefficientJsonStorage {
    public void storeJsonInS3(AmazonS3 s3, String bucketName, String key, String jsonData) {
        // Uploads the JSON string as-is, pretty-printing whitespace included.
        s3.putObject(bucketName, key, jsonData);
    }
}
In this code, JSON data is stored in Amazon S3 exactly as received; if an upstream producer pretty-prints it, every indent, space, and newline adds to storage and data transfer costs.
Minify JSON Before Storing in S3
After Optimization (Java Code):
import com.amazonaws.services.s3.AmazonS3;
import com.fasterxml.jackson.databind.ObjectMapper;

public class EfficientJsonStorage {
    // ObjectMapper is thread-safe and expensive to create, so reuse one instance.
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public void storeJsonInS3(AmazonS3 s3, String bucketName, String key, Object data) {
        try {
            // Jackson's default output is compact: no indentation or extra whitespace.
            String jsonData = MAPPER.writeValueAsString(data);
            s3.putObject(bucketName, key, jsonData);
        } catch (Exception e) {
            // In real code, log and rethrow rather than silently swallowing the failure.
            throw new RuntimeException("Failed to store JSON in S3", e);
        }
    }
}
In the optimized code, we serialize the object with Jackson's ObjectMapper, whose default output is compact, with no indentation or line breaks. The JSON therefore lands in S3 minified, trimming storage and transfer costs.
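Jackson's compact default already helps, but payloads often arrive as pretty-printed strings from upstream services. The following sketch goes one step further under that assumption: it re-serializes such a string compactly and gzips it before upload. The class and method names (CompressedJsonStorage, storeMinifiedGzippedJson) are illustrative:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.fasterxml.jackson.databind.ObjectMapper;

public class CompressedJsonStorage {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public void storeMinifiedGzippedJson(AmazonS3 s3, String bucketName, String key, String prettyJson) throws Exception {
        // readTree + writeValueAsString drops all insignificant whitespace.
        String minified = MAPPER.writeValueAsString(MAPPER.readTree(prettyJson));

        // Gzip the minified bytes before upload.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(minified.getBytes(StandardCharsets.UTF_8));
        }
        byte[] compressed = buffer.toByteArray();

        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentType("application/json");
        metadata.setContentEncoding("gzip");
        metadata.setContentLength(compressed.length);
        s3.putObject(bucketName, key, new ByteArrayInputStream(compressed), metadata);
    }
}

Because the object is stored with Content-Encoding: gzip, most HTTP clients will decompress it transparently on download.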
3. Implementing Caching to Reduce Requests to Aurora
Without Caching
Before Optimization (Java Code):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class InefficientAuroraQuery {
    public ResultSet executeQuery(Connection connection, String sql) throws SQLException {
        // Every call round-trips to the database, even for an identical query,
        // and the statement is never closed, leaking resources.
        PreparedStatement statement = connection.prepareStatement(sql);
        return statement.executeQuery();
    }
}
In this code, there is no caching mechanism in place, resulting in potentially excessive requests to Amazon Aurora and increased costs.
With Caching
After Optimization (Java Code):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class EfficientAuroraQuery {
    // It's only example code; in production you could push results to a Redis cache instead.
    // We cache materialized rows rather than ResultSet objects: a ResultSet is tied to its
    // statement and connection and cannot be reused once they are closed.
    private final Map<String, List<Map<String, Object>>> queryCache = new ConcurrentHashMap<>();

    public List<Map<String, Object>> executeQuery(Connection connection, String sql) throws SQLException {
        List<Map<String, Object>> cached = queryCache.get(sql);
        if (cached != null) {
            return cached;
        }
        try (PreparedStatement statement = connection.prepareStatement(sql);
             ResultSet rs = statement.executeQuery()) {
            List<Map<String, Object>> rows = new ArrayList<>();
            ResultSetMetaData meta = rs.getMetaData();
            while (rs.next()) {
                Map<String, Object> row = new LinkedHashMap<>();
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    row.put(meta.getColumnLabel(i), rs.getObject(i));
                }
                rows.add(row);
            }
            queryCache.put(sql, rows);
            return rows;
        }
    }
}
In the optimized code, we materialize each query's rows into plain Java maps and cache those, so repeated identical queries are answered from memory instead of hitting Amazon Aurora, potentially saving costs. (Caching raw ResultSet objects, as is sometimes attempted, does not work: a ResultSet is bound to its statement and connection and becomes unusable once they close.)
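A hypothetical usage sketch, assuming a connection variable already exists; the SQL here is a placeholder:

EfficientAuroraQuery dao = new EfficientAuroraQuery();
// First call executes against Aurora and populates the cache.
List<Map<String, Object>> first = dao.executeQuery(connection, "SELECT id, name FROM users");
// Second identical call is served from memory; no database round trip.
List<Map<String, Object>> second = dao.executeQuery(connection, "SELECT id, name FROM users");

Note that this cache never evicts or expires entries, so it only suits data that rarely changes; production code would add a TTL or invalidate on writes, or move the cache to Redis as the comment suggests.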
Some Other Notes
Here are some ways I've optimized backend code to reduce costs on AWS:
- Optimize and compress payload sizes for APIs, apps, and databases. Smaller payloads mean faster processing and lower data transfer fees. Use minification, gzipping, and smaller file formats where possible.
- Use exponential backoff and retry strategies for failed requests. This avoids hammering databases or backends with repeated requests that waste money; progressively increase the wait time between retries (see the first sketch after this list).
- Distribute demanding work across threads/workers. Leverage parallelization to maximize resource utilization on single machines before scaling wider.
- Offload resource-intensive processes like image processing, PDF generation, and Excel reports to dedicated services like S3, Lambda, or SQS. Don't bog down your core app servers.
- Analyze and optimize algorithms for efficiency. Sometimes re-architecting algorithms can provide huge gains, like using a hash table instead of a linear array search (see the second sketch after this list).
- Monitor performance trends and bottlenecks. Continuously optimize slow code paths, queries, repetitive processes that waste compute cycles.
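The exponential backoff point deserves a concrete shape. This is a minimal, generic sketch that assumes any exception is retryable; in real code you would retry only transient failures such as throttling or timeouts, and the RetryWithBackoff name is illustrative:

import java.util.Random;
import java.util.concurrent.Callable;

public class RetryWithBackoff {
    private static final Random RANDOM = new Random();

    // Retries an operation with exponential backoff and full jitter:
    // the sleep ceiling doubles each attempt (100ms, 200ms, 400ms, ...).
    public static <T> T execute(Callable<T> operation, int maxAttempts) throws Exception {
        long ceilingMillis = 100;
        for (int attempt = 1; ; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) {
                    throw e; // retries exhausted, surface the last failure
                }
                // Sleep a random duration up to the current ceiling, then raise the ceiling.
                Thread.sleep(1 + RANDOM.nextInt((int) ceilingMillis));
                ceilingMillis *= 2;
            }
        }
    }
}

Usage might look like RetryWithBackoff.execute(() -> s3.getObjectAsString(bucketName, key), 5), giving a flaky S3 read up to five attempts with increasing pauses in between.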
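And a small illustration of the hash-table point: replacing repeated linear scans of a list with lookups in a HashSet turns O(n) membership checks into O(1) on average. The names here are illustrative:

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MembershipCheck {
    public static void main(String[] args) {
        List<String> ids = Arrays.asList("user-1", "user-2", "user-3");

        // O(n) per lookup: List.contains scans every element.
        boolean slow = ids.contains("user-3");

        // One-time O(n) build of the set, then O(1) average per lookup.
        Set<String> idSet = new HashSet<>(ids);
        boolean fast = idSet.contains("user-3");

        System.out.println(slow + " " + fast); // prints: true true
    }
}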
Conclusion
Backend developers on AWS have the opportunity to optimize code to save memory, reduce CPU usage, and minimize costs. By adopting efficient coding practices and leveraging AWS services, you can achieve better cost-efficiency while maintaining application performance. The Java code examples provided demonstrate how to optimize memory and CPU usage on EC2 instances, minify JSON data before storing it in S3, and implement caching to reduce requests to Amazon Aurora. Implement these strategies and keep an eye on your AWS bill to ensure you are making the most of your cloud infrastructure while minimizing expenses.