Data bucket record access limitation


Hi fellows,
I’ve seen that there is a limit of 200 records max when accessing a data bucket. Can this be increased? Do paid accounts have the same limitation?

Thanks in advance


Hey @alvarolb, can this number be increased on cloud and local server deployments?


You can read your whole bucket over the API, but you have to iterate over the data items based on their timestamps. Each query can return a maximum of 200 records; some buckets hold several megabytes/gigabytes of data, so it cannot all be returned at once.

So, the idea is to query a fixed chunk size, e.g., 100 items. If all items are populated, there is probably more data remaining. So, in the next query you can adjust the maximum/minimum timestamp according to the latest returned element to get the next chunk of data, and so on.
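For reference, the iteration could look something like this in Python. Note that `fetch_chunk` here is just a stand-in for the real bucket API call (the actual endpoint and parameters are not shown here), and the in-memory `BUCKET` list simulates the server-side data:

```python
# Sketch of timestamp-based pagination over a bucket. fetch_chunk() is a
# hypothetical stand-in for the real REST call; names are illustrative.

# Fake bucket: 450 records with increasing timestamps (e.g., milliseconds).
BUCKET = [{"ts": 1000 + i, "value": i} for i in range(450)]

MAX_ITEMS = 200  # server-side cap per query

def fetch_chunk(min_ts, max_items=MAX_ITEMS):
    """Simulates a bucket query: records with ts > min_ts, capped at max_items."""
    rows = [r for r in BUCKET if r["ts"] > min_ts]
    return rows[:max_items]

def read_all(start_ts=0):
    """Read the whole bucket in chunks, advancing min_ts after each query."""
    items = []
    min_ts = start_ts
    while True:
        chunk = fetch_chunk(min_ts)
        if not chunk:
            break
        items.extend(chunk)
        # Continue from the timestamp of the latest returned element.
        min_ts = chunk[-1]["ts"]
        if len(chunk) < MAX_ITEMS:
            # A partially filled chunk means we reached the end.
            break
    return items

all_items = read_all()
print(len(all_items))  # 450
```

The same loop works regardless of the chunk size the server enforces, as long as each query's lower bound is moved past the last record already received.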

Hope it helps!


I see, but couldn’t this number be increased, for example, in private server deployments? I understand that making a huge query against the shared cloud is not allowed, but against my own local server I don’t see why there should be any limitation.

I’m speaking from an uninformed point of view; I don’t have any formal training in databases and this stuff, so sorry if it’s obvious that what I’m suggesting is not good practice.

Thanks in advance