Data bucket record access limitation

Hi everyone,
I’ve seen that there is a limit of 200 records max when accessing a data bucket. Can this be increased? Do paid accounts have the same limitation?

Thanks in advance

Hey @alvarolb, can this number be increased on the cloud and on local server deployments?

You can read your whole bucket over the API, but you have to iterate over the data items based on their timestamp. Each query can return a maximum of 200 records, since some buckets hold several megabytes or gigabytes of data, so it cannot all be returned at once.

So, the idea is to query a fixed chunk size, e.g., 100 items. If the query returns a full chunk, there is probably more data remaining, so in the next query you adjust the maximum/minimum timestamp according to the last returned element to fetch the next chunk, and so on. A minimal sketch of this pattern is shown below.
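Here is a rough Python sketch of that pagination loop. The endpoint path, the query parameter names (`items`, `min_ts`, `sort`), the `ts` field name, and all credentials below are assumptions for illustration, not confirmed by this thread; check the REST calls your own server/console actually makes and adapt accordingly.

```python
import requests

# All of the following values are hypothetical placeholders.
SERVER = "https://api.thinger.io"   # or your self-hosted server URL
USER = "my_user"
BUCKET = "my_bucket"
TOKEN = "my_access_token"           # bucket read access token
CHUNK = 100                         # fixed chunk size per query (server caps at 200)

def read_all_items():
    """Read the whole bucket by paging on the timestamp of the last item."""
    url = f"{SERVER}/v1/users/{USER}/buckets/{BUCKET}/data"
    headers = {"Authorization": f"Bearer {TOKEN}"}
    min_ts = 0                      # start from the oldest record
    items = []
    while True:
        params = {"items": CHUNK, "min_ts": min_ts, "sort": "asc"}
        chunk = requests.get(url, headers=headers, params=params).json()
        if not chunk:
            break
        items.extend(chunk)
        if len(chunk) < CHUNK:      # partial chunk -> no more data left
            break
        # Move the window just past the last returned timestamp
        min_ts = chunk[-1]["ts"] + 1
    return items

if __name__ == "__main__":
    data = read_all_items()
    print(f"Fetched {len(data)} records")
```

The key point is only the loop structure: request a fixed chunk, and when a full chunk comes back, shift the minimum timestamp past the last element and repeat until a partial or empty chunk is returned.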

Hope it helps!

I see, but couldn’t this number be increased, for example, on self-hosted server deployments? I understand that running a huge query against the shared cloud is not allowed, but against my own local server I don’t see why there would be any limitation.

I’m speaking from an uninformed point of view, as I don’t have any formal training in databases and this kind of thing, so sorry if what I’m suggesting is obviously not good practice.

Thanks in advance

Hi @alvarolb, related to this topic, I’ve got an issue with a data bucket.
I have my own hosted Thinger.io server and have already deployed around 10 devices connected to it. Those devices measure temperature & humidity, display them on the dashboard, and store them in the data bucket.
I’ve checked the data bucket and it only stores around 9 hours of measurements, with the storage interval set to 5 minutes per measurement.
Have a look at the attachment: when I try to check earlier timestamps, it stays stuck at roughly the previous ± 9 hours of data.
How can we tackle this issue?