Microservices with DynamoDB: should you use a single table or one table per microservice?
DynamoDB is a great database for microservices. You can use it for small services with little data or for large, data-heavy applications. AWS recommends maintaining as few tables as possible in a DynamoDB application. But if you build a microservice architecture, should you use a single table design or one DynamoDB table per microservice?
This blog post explains the idea of a single table design and why it is not always applicable to a microservice environment. Still, there are situations where you might want to choose a single table for your microservice architecture.
Why does AWS recommend a single table design?
When you use a relational database, it is recommended to normalize the data. Simplified, this means you create one table per entity to reduce redundancy in your data. For example, in an e-commerce system you would have one table for the user entity and one table for the order entity. When you want to get the order history of a user, you have to join the order table with the user table and filter by user id. This makes data access very flexible because it allows views with ad hoc queries, but it reduces scalability: the database has to scan multiple tables and sometimes has to create very complicated query plans to optimize query performance.
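As a rough illustration, here is a minimal, hypothetical schema in SQLite with a users and an orders table. Reading the order history of a user requires a join across both tables:

```python
import sqlite3

# Hypothetical normalized schema: one table per entity.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id TEXT PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (order_id TEXT PRIMARY KEY, user_id TEXT, total REAL)")
conn.execute("INSERT INTO users VALUES ('u1', 'Alice')")
conn.execute("INSERT INTO orders VALUES ('o1', 'u1', 19.99)")
conn.execute("INSERT INTO orders VALUES ('o2', 'u1', 5.50)")

# The order history of a user is assembled at read time with a join.
rows = conn.execute(
    """
    SELECT u.name, o.order_id, o.total
    FROM orders o JOIN users u ON u.user_id = o.user_id
    WHERE u.user_id = ?
    """,
    ("u1",),
).fetchall()
print(rows)
```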
DynamoDB was built with web scale in mind. It can grow almost infinitely without degrading performance. To achieve this, DynamoDB removed joins completely. Instead of using joins, you have to model the data in such a way that you can read it in a single request by denormalizing it. So you would not only save the user data and the order data but also the joined data describing the order history of a user in a single table. This may introduce redundant data for the benefit of performance and scalability.
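A minimal sketch of what such a denormalized, single-table layout could look like with boto3. The table name, the generic PK/SK key schema and the USER#/ORDER# key format are assumptions for illustration, not a prescribed model:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical single table with a generic partition key (PK) and sort key (SK).
table = boto3.resource("dynamodb").Table("app-table")

# The user item and its order items share the same partition key,
# so the "order history" view is effectively pre-joined at write time.
table.put_item(Item={"PK": "USER#u1", "SK": "PROFILE", "name": "Alice"})
table.put_item(Item={"PK": "USER#u1", "SK": "ORDER#o1", "total": "19.99"})
table.put_item(Item={"PK": "USER#u1", "SK": "ORDER#o2", "total": "5.50"})

# One Query returns the profile and all orders in a single request, no join needed.
response = table.query(KeyConditionExpression=Key("PK").eq("USER#u1"))
for item in response["Items"]:
    print(item["SK"], item)
```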
So should you use a single table design for microservices?
The best practice for microservices is that every microservice owns its own data. In this sense you should not create a single table with denormalized data owned by multiple microservices. Instead, you should use a single table design per microservice. If read performance is an issue with multiple tables, you can stream the changes to a read service using DynamoDB Streams. The read service denormalizes the data for optimal read access. This way, every microservice still reads and writes only its own data.
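A sketch of what such a read service could look like: a Lambda function triggered by the DynamoDB Stream of a hypothetical order service's table, copying order items into the read service's own, read-optimized table. The table and attribute names are made up for illustration, and the stream is assumed to be configured with new images:

```python
import boto3
from decimal import Decimal

# Hypothetical read-optimized table owned by the read service.
read_table = boto3.resource("dynamodb").Table("order-history-view")

def handler(event, context):
    """Lambda sketch triggered by the DynamoDB Stream of the order service's table.

    Copies new or changed order items into the read service's own table,
    keyed by user, so the order history can be read with a single Query.
    """
    for record in event["Records"]:
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue
        # NewImage is present when the stream view type includes new images.
        new_image = record["dynamodb"]["NewImage"]
        read_table.put_item(
            Item={
                "PK": f"USER#{new_image['user_id']['S']}",
                "SK": f"ORDER#{new_image['order_id']['S']}",
                "total": Decimal(new_image["total"]["N"]),
            }
        )
```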
Edit: Usually it is better to use EventBridge Pipes and an EventBridge event bus to decouple the services instead of using DynamoDB Streams directly. You can read more about this pattern in my blog post Decoupling Microservices with AWS EventBridge Pipes.
Alternative: use a single table with namespaces
Although the operational overhead of a DynamoDB table is small compared to a relational database like MySQL or Postgres, as your microservice environment grows it might become a burden to have too many DynamoDB tables. An alternative to using one DynamoDB table per microservice is to share one table but use namespaces so that each service can only see, modify and delete its own data.
You can create a single table and model the data in such a way that every service can only operate on its own data. You would still not create data joined from different microservices, so you don't get the performance benefit, but it might reduce the operational overhead because you only have to monitor and configure one database table.
To create a namespace, prefix every primary key with the name of the service and add a fine-grained access control policy to your service that allows access only to items with the service's prefix. If the service accidentally tries to query, modify or delete an item that it does not own, it will get an access denied error.
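One way to express such a policy, shown here as a Python dict for illustration, is with the dynamodb:LeadingKeys condition key, which restricts access to items whose partition key starts with a given value. The service prefix, table ARN and action list are placeholders you would adapt to your setup:

```python
import json

SERVICE_PREFIX = "orderservice#"  # hypothetical namespace prefix for this service

# Fine-grained access control: only allow items whose partition key
# starts with the service's prefix (dynamodb:LeadingKeys).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
                "dynamodb:Query",
            ],
            "Resource": "arn:aws:dynamodb:eu-central-1:123456789012:table/shared-table",
            "Condition": {
                "ForAllValues:StringLike": {
                    "dynamodb:LeadingKeys": [f"{SERVICE_PREFIX}*"]
                }
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```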
Conclusion
The idea of a single table design is to enhance performance and scalability by using denormalized data. Because a microservice should own its data and not access data from other microservices, it should use its own table. So you should use a single table design per microservice. If performance is an issue when reading data from multiple microservices, the data can be streamed to a service that creates a view optimized for reads. To reduce the operational overhead of too many DynamoDB tables, microservices can share a single table where the data is namespaced with a prefix on the primary key and access is secured with a fine-grained access control policy that ensures each microservice can only operate on its own data.
For further reading I recommend the blog post from Alex DeBrie: The What, Why, and When of Single-Table Design with DynamoDB.