You are writing to a DynamoDB table and receive the following exception:
"ProvisionedThroughputExceededException". However, according to your CloudWatch metrics
for the table, you are not exceeding your provisioned throughput. What could be an
explanation for this?
A.
You haven’t provisioned enough DynamoDB storage instances
B.
You’re exceeding your capacity on a particular Range Key
C.
You’re exceeding your capacity on a particular Hash Key
D.
You’re exceeding your capacity on a particular Sort Key
E.
You haven’t configured DynamoDB Auto Scaling triggers
Explanation:
C
http://stackoverflow.com/questions/29101371/getting-provisionedthroughputexceededexception-error-when-iterating-over-dynamod
If you have checked in the AWS Management Console and verified that throttling events are occurring even when read capacity is well below provisioned capacity, the most likely answer is that your hash keys are not evenly distributed. As your DynamoDB table grows in size and capacity, the DynamoDB service automatically splits your table into partitions. It then uses the hash key of each item to determine which partition stores the item. In addition, your provisioned read capacity is split evenly among the partitions.
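The assignment step above can be sketched in a few lines. This is illustrative only: DynamoDB's internal partitioning scheme is not public, so the hash function and partition count here are assumptions; any stable hash taken modulo the partition count demonstrates the idea that a given hash key always lands on the same partition.

```python
import hashlib

def partition_for(hash_key: str, num_partitions: int) -> int:
    """Map a hash key to a partition index (illustrative sketch only).

    DynamoDB does not expose its real partitioning function; this just
    shows that the same key deterministically maps to one partition.
    """
    digest = hashlib.md5(hash_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# The same key always hits the same partition, so a "hot" key
# concentrates all of its traffic on a single partition's capacity.
print(partition_for("user123", 10))
```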
If you have a well-distributed hash key, this all works fine. But if your hash key is not well distributed, it can cause all or most of your reads to come from a single partition. For example, if you had 10 partitions and a provisioned read capacity of 1000 on the table, each partition would have a read capacity of 100. If all of your reads hit one partition, you will be throttled at 100 read units rather than 1000.
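The arithmetic in that example can be made explicit. The partition count and capacity figures below are the hypothetical values from the paragraph above, not numbers DynamoDB reports directly:

```python
# Hypothetical figures from the example: DynamoDB divides a table's
# provisioned read capacity evenly across its partitions.
provisioned_rcu = 1000
partitions = 10

per_partition_rcu = provisioned_rcu / partitions  # capacity each partition gets

# If every read targets the same partition (a hot hash key), the
# effective throughput is the per-partition share, not the table total.
effective_rcu_hot_key = per_partition_rcu

print(per_partition_rcu, effective_rcu_hot_key)  # 100.0 100.0
```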
Unfortunately, the only way to really fix this problem is to pick a better-distributed hash key and rewrite the table using those hash values.
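One common way to redistribute a hot key is write sharding: append a suffix to the hash key so items spread across several synthetic keys (and therefore several partitions). A minimal sketch of that pattern, assuming hypothetical helper names; note that readers must then scatter-gather across all shard keys:

```python
import random

def sharded_hash_key(base_key: str, shard_count: int) -> str:
    """Spread writes for a hot key across shard_count synthetic keys.

    Hypothetical helper: a random shard suffix sends successive writes
    to different hash keys, and hence to different partitions.
    """
    shard = random.randrange(shard_count)
    return f"{base_key}#{shard}"

def all_shard_keys(base_key: str, shard_count: int) -> list:
    """Every synthetic key a reader must query and merge results from."""
    return [f"{base_key}#{i}" for i in range(shard_count)]

# Writes scatter across user123#0 .. user123#3; reads fan out over all four.
print(sharded_hash_key("user123", 4))
print(all_shard_keys("user123", 4))
```

The trade-off is extra read cost: a query for the logical key now becomes shard_count queries whose results are merged in the application.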
C. You’re exceeding your capacity on a particular Hash Key
The provisioned throughput associated with a table is also divided evenly among the partitions, with no sharing of provisioned throughput across partitions.