New Node-RED Plugin! Version 1.1.0

Today we are announcing the release of a new version of the Node-RED plugin! :partying_face: :star_struck:

It can be installed either directly in a self-hosted Node-RED instance or as a plugin on private IoT Cloud Instances.


  • Now it is possible to use the Bucket Reading functionality to extract data periodically from Data Buckets and perform any required analysis, e.g., aggregating data every hour to check trends, detect minimums and maximums, and apply custom IoT rules to your data.


  • We have created a device callback node that supports auto-provisioning. It is possible to call the node with a device identifier and a payload, and the node will automatically create the device (if it does not exist) and its bucket, and insert the data into it. This makes it easy to create devices from external sources.

  • The plugin UI has been greatly improved to support dynamic resource selection from your account, so it is possible to search for and select your devices, groups, types, endpoints, resources, properties, buckets, etc. from a dropdown.


  • There are now nodes for reading and writing properties of your devices or assets.

  • Improved node documentation within Node-RED

  • Last but not least, it is now possible to scale flows much more easily. In previous versions, nodes had to be configured from the UI input boxes to set a fixed device, a given endpoint, etc. Now it is possible to leave the node configuration empty in the UI and pass the configuration parameters in an input message, so the same flow can be used for unlimited resources.
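As a sketch of this message-based configuration (the `msg.bucket` property name is an assumption based on the bucket-read examples later in this thread), a function node placed before the configurable node could prepare the message like this:

```javascript
// Hypothetical sketch: instead of fixing a bucket in the node's edit form,
// attach the target resource to the message so one flow serves any resource.
function configureForResource(msg, bucketId) {
  msg.bucket = bucketId; // dynamic per-message configuration
  return msg;
}

const msg = configureForResource({ payload: {} }, "bucket_6789");
console.log(msg.bucket); // "bucket_6789"
```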


  • Updated Node-RED version to 2.0.6
  • Updated Node.js to version 14
  • New bucket create node
  • New bucket read node
  • New property read node
  • New property write node
  • New device create node
  • New device callback node
  • Dynamic selection for existing devices, groups, types, endpoints, resources, properties and buckets
  • Filtering over the dynamic selection of resources


  • Endpoint call node now returns the output of the call
  • Standardized node technical documentation for the help dialog
  • Device write node now has an output
  • Ordered nodes in palette
  • Added paletteLabel to nodes


  • Migrated the deprecated request dependency to the internal http/https modules (see the Node-RED documentation)


I am unable to find any documentation on what parameters to pass into a bucket read. I have successfully guessed at msg.bucket, msg.aggregation, and msg.aggregation_type, but I am unable to guess what to use for the start and end timestamps.
The info/more panel on the node in Node-RED gives you the input property names, but these don't work when they contain capitals; for instance, aggregation_type works, but the panel says it is aggregationType.

Hi @Intel1 ,

The documentation regarding all available fields of the nodes can be found in the help tab of Node-RED's sidebar. By selecting the desired node you'll get a description of the input fields, outputs, and any other information. Here you can find a description of how to find the documentation.

I’ll quote from the help menu regarding the timestamp values, which would be msg.max_ts and msg.min_ts:

The fields msg.max_ts and msg.min_ts allow any string which JavaScript Date objects can interpret; find more info here. It is recommended to use an Epoch timestamp or a UTC date for clarity.
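For example (a minimal sketch, assuming the msg.min_ts/msg.max_ts field names quoted above), a function node feeding the bucket-read node could set the time window like this:

```javascript
// Sketch of a function-node body setting the time window for bucket-read.
// Both epoch milliseconds and UTC date strings work, since anything a
// JavaScript Date can parse is accepted.
const msg = {};
const now = Date.now();
msg.min_ts = now - 60 * 60 * 1000;        // one hour ago, as epoch milliseconds
msg.max_ts = new Date(now).toISOString(); // or equivalently a UTC date string
```

In a real function node you would finish with `return msg;` so the message reaches the bucket-read node.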

The HTML names of the fields in the editor don't necessarily match the names allowed as message fields, due to different naming conventions. Most do match, as in your aggregation_type vs. aggregationType example, but as you've already found this is not the case for the timestamp fields, because in the editor each of them is composed of two separate inputs, one for the date and another for the time.

Let me know if you need any more help.

hello @jaimebs

I started learning Node-RED and I'm excited about the possibilities.

In the “Device Disconnection Alert” example, I would like to know how I can define a timing instruction to call the endpoint.

  • If the device has been disconnected for more than 30 minutes, please send an email.

I would like to know how I can set a time condition (eg 30 minutes) to trigger the endpoint only once (without causing an infinite loop).

The other question is:
On the email endpoint, I managed to extract the time field using: {{ts}}
But the format is a Unix timestamp. How can I convert it to a pattern like "2021-11-01T12:54:53+00:00" or similar?

I found a solution to the second question.
But I didn’t get an answer to the first question.

// Shift the timestamp to the target timezone (-03:00 = -10800000 ms)
var tz_br = new Date(msg.payload.ts - 10800000);

var dia_utc_br = tz_br.toLocaleDateString("pt-br");
var hora_utc_br = tz_br.toLocaleTimeString("pt-br");

var date_time = hora_utc_br + " - " + dia_utc_br;

msg.payload = { "device": msg.payload.device,
                "ts": date_time };
return msg;


To solve your first question I would use three variables: a boolean storing the device status (0 → not connected, 1 → connected), another storing the timestamp of the last connection (updated from the server state), and a last one storing the status of the notification (0 → notification not sent, 1 → notification sent). A "connected" status should reset it back to "notification not sent". The variables can be stored as a "Property" for each device. I guess this flow can be done easily in Node-RED. I'm in the same place as you, just started to use it and learning more about this tool each day, it's exciting!!
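The three-variable approach above could look roughly like this in a function node (a sketch only; `ctx` stands in for Node-RED flow context or a device Property, and all names are illustrative):

```javascript
// Sketch of the three-variable disconnection alert. ctx holds the state:
// lastSeen (timestamp of last connection) and notified (alert already sent).
const THIRTY_MIN = 30 * 60 * 1000;

function checkDevice(ctx, connected, now) {
  if (connected) {
    ctx.lastSeen = now;    // remember the last time the device was seen
    ctx.notified = false;  // reset so the next outage can alert again
    return null;           // connected: nothing to send
  }
  if (!ctx.notified && now - ctx.lastSeen > THIRTY_MIN) {
    ctx.notified = true;   // fire the endpoint exactly once per outage
    return { payload: { offlineSince: ctx.lastSeen } };
  }
  return null;             // already notified, or not offline long enough
}
```

Because `notified` flips to true after the first alert, the endpoint is not called again until the device reconnects, which avoids the infinite loop.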

Hope this helps.

Hi, @ega and @jaimebs

I’m trying another approach that can solve two similar situations.

  • Monitor devices that have been offline for a certain time
  1. Read the last value of all buckets every 30 minutes;
  2. Identify the time difference between the “ts” of the bucket and the current time date;
  3. If the difference ("server datetime" - bucket "ts" record) is between 40 and 60 minutes, or 6 and 7 hours;
  4. Trigger the "endpoint_alert_email" (the time restrictions prevent an "infinite loop" of endpoint triggers)

In this example, I’m having some difficulties. In step 1, I’m triggering the routine every 30 minutes with the node “Inject”. But I’m not getting EVERY bucket read. The “bucket read” node requires that the “Bucket ID” be set. How can “Inject” feed “bucket read” with the “Bucket ID” of all buckets?
In step 2, I think it will be easy to define the code in the “function” node.
In step 3 I think we can make a “function” node with two outputs. One for each endpoint.

What do you think? Any idea?

  • Send data from buckets to corresponding device
  1. Read the last value of all buckets every 10 minutes;
  2. convert the value of “ts” from Time Stamp Unix to UTC ISO 8601, eg: 2021-11-02 15:07Z;
  3. Send the value of each bucket to its respective device (ex: monitor).

In this example, steps 1 and 2 are similar to the previous example. The difficulty I encountered is in step 3. How will we know which device_monitor is linked to a particular bucket? Would I have to use a CSV file (in File Storages) with data linking the bucket (eg 30 different buckets) to device_monitor (eg 30 different devices)? Ex: Does device_monitor_12345 receive data from bucket_6789?

What do you think? Any idea?

Hi @George_Santiago,

I don't know how to retrieve all "bucket_ids" and iterate over each value; maybe a file, as you say, or an array may work, but I don't know the best way to do it.

The function node has just one connection output, and any output of the function node will trigger the next one. What I would do is use a "Switch" node: establish in the function node a variable that identifies which endpoint to trigger, and select it with the "Switch" node.
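Something like this (a sketch using the 40-60 min and 6-7 h windows from the earlier post; `msg.route` is an illustrative property name the Switch node would match on):

```javascript
// Sketch: set a routing key in the function node and let a Switch node
// route on msg.route, one output per endpoint.
function classify(msg, minutesOffline) {
  if (minutesOffline >= 40 && minutesOffline <= 60) {
    msg.route = "alert_40_60min";
  } else if (minutesOffline >= 360 && minutesOffline <= 420) {
    msg.route = "alert_6_7h";
  } else {
    msg.route = "none"; // outside the alert windows: Switch drops it
  }
  return msg;
}
```

(Note that Node-RED function nodes can also be configured with multiple outputs directly, returning an array with the message in the position of the desired output.)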

Please note that I'm learning just like you; I don't know the best practices with Node-RED.

Kind regards,

The only way I know to retrieve the bucket list is via the REST API.

Hi @George_Santiago,

About your first question, at this moment there is no way to retrieve all buckets ids with a Node-RED node. We are however working on this feature at the moment and we hope to be able to release it soon.
If you are not able to wait for this feature you could query the API in order to retrieve all buckets and loop over them. Here is an example flow:

[{"id":"1e216fc54478c35d","type":"tab","label":"bucket_loop","disabled":false,"info":""},{"id":"298e6842def1cbcf","type":"http request","z":"1e216fc54478c35d","name":"","method":"GET","ret":"txt","paytoqs":"ignore","url":"{user}/buckets","tls":"78f749b1f3a5b9e0","persist":false,"proxy":"","authType":"bearer","x":390,"y":260,"wires":[["45fba8fbffc63c01"]]},{"id":"1cf46b8b9ec0f2c8","type":"inject","z":"1e216fc54478c35d","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"","payloadType":"date","x":190,"y":260,"wires":[["298e6842def1cbcf"]]},{"id":"f2b917d67e2c31ba","type":"debug","z":"1e216fc54478c35d","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":770,"y":260,"wires":[]},{"id":"45fba8fbffc63c01","type":"function","z":"1e216fc54478c35d","name":"","func":"\nvar payload = JSON.parse(msg.payload);\n\nvar outputMsgs = [];\n\nfor ( var i in payload) {\n    outputMsgs.push({payload: payload[i].bucket});\n}\n\nreturn [ outputMsgs ];","outputs":1,"noerr":0,"initialize":"","finalize":"","libs":[],"x":580,"y":260,"wires":[["f2b917d67e2c31ba"]]},{"id":"78f749b1f3a5b9e0","type":"tls-config","name":"Local","cert":"","key":"","ca":"","certname":"","keyname":"","caname":"","servername":"","verifyservercert":true,"alpnprotocol":""}]

You would need to set your user in the URL and the Access Token as a Bearer Token for authentication.

Then pass this message to the bucket-read node as msg.bucket, keeping the rest of the configuration in the edit form (and leaving the Bucket ID field empty).
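For reference, the function node in the flow above does roughly this, rewritten here as a standalone sketch and adapted so the id ends up on msg.bucket as described:

```javascript
// Readable sketch of the flow's function node: parse the JSON array
// returned by the /buckets API call and emit one message per bucket id.
function splitBuckets(msg) {
  const buckets = JSON.parse(msg.payload);          // http request returned text
  return buckets.map(b => ({ bucket: b.bucket }));  // one message per bucket
}
```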

As you said, for step 2 a simple subtraction over the timestamps would work, keeping in mind that the ts field is expressed in milliseconds.
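For instance (a minimal sketch), since ts is in milliseconds the offline time in minutes is a subtraction followed by a division by 60000:

```javascript
// Minutes elapsed since a bucket's last record, given both in epoch ms.
function minutesSince(tsMillis, nowMillis) {
  return Math.floor((nowMillis - tsMillis) / 60000);
}

const mins = minutesSince(0, 45 * 60000); // a record written 45 minutes ago
console.log(mins); // 45
```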

For your second question, the solution would be similar to the above on how to retrieve and loop over the buckets, but in this case you would need to query an additional endpoint:

The result of this query would contain the link with the device you are looking for:

  {
    "backend": "influxdb",
    "bucket": "Climastick",
    "config": {
      "device": "climastick",
      "interval": 15,
      "resource": "compass",
      "source": "device",
      "update": "interval",
      "user": "{user}"
    },
    "description": "Climastick",
    "enabled": true,
    "modified": 1636364234656,
    "name": "Climastick"
  }

I hope this helps; let me know if you succeed or need any additional help. Stay tuned for the upcoming release, as it could solve some of these issues with the API endpoints.

Good luck!



I am failing to advance in my objective.

I would like Node-RED to run a routine every 40 minutes to identify whether any buckets have not received records in the last 30-40 minutes, and, at the end, send a single email with a table of the buckets that have not received records in that window, as in the example email below:

[image: últimos registros]

With the example available at this link (device_state_change → disconnectionAlert), I received 62 emails in 12 hours. This has flooded my inbox and is inefficient.
The best approach would be to make it possible to send emails at a certain time interval (every 30 minutes, every 1 hour, every 6 hours…), returning a table with the device or bucket IDs.

As I don't know how to program in JavaScript and know very little about Node-RED, maybe I'll be able to run these reports and alerts on the Thinger server when the "Alerts manager" and "Reporting tool" functionalities indicated in the roadmap become available.

At this point, as I know how to program in the R language, I will create an AWS EC2 instance with RStudio to perform these tasks through the Thinger APIs:

  • Schedule runs
  • Request data from Thinger Server API
  • Manipulate data from Thinger Server API
  • Create reports
  • Send alerts or email communications…

Thanks for the suggestions

Hi @George_Santiago

We have released a new version of the plugin with some improvements. For your use case it may resolve some of your issues and make your flows simpler. Check it out and let us know how we can improve.

Indeed, this flow is not really adequate for your use case, and you would need to have a number of intermediate nodes that can bring all the necessary information together before calling the endpoint.

I’ve tried to achieve your requirements with a flow, I’ll leave it here so you can test it out and import it into your instance.

[{"id":"a14c5e72386ffa8c","type":"tab","label":"Flow 2","disabled":false,"info":"","env":[]},{"id":"e3fb8ac7b3202db0","type":"inject","z":"a14c5e72386ffa8c","name":"every 40 mins","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"2400","crontab":"","once":false,"onceDelay":0.1,"topic":"","payloadType":"date","x":140,"y":60,"wires":[["fe1bc17ed8808139"]]},{"id":"fe1bc17ed8808139","type":"asset-iterator","z":"a14c5e72386ffa8c","name":"get all buckets","asset":"bucket","filter":".*","assetType":"","assetGroup":"","server":"e227a9126e2fd92c","x":380,"y":60,"wires":[["d81471d9b3334a4b"]]},{"id":"8642e90719bdeaa3","type":"function","z":"a14c5e72386ffa8c","name":"set 'last_ts' for not records between 30/40 mins?","func":"console.log(msg.payload);\n\nif (msg.payload.length > 0) {\n    \n    let millis = Date.now() - msg.payload[0].ts;\n    let minutes = Math.floor(millis/60000);\n\n    payload = {};\n    let lastTs;\n    if (minutes > 30 && minutes < 40) {\n        lastTs = (new Date(msg.payload[0].ts)).toUTCString();\n    }\n    payload[\"last_ts\"] = lastTs;\n    payload[\"complete\"] = true;\n    return payload;\n}\n\n","outputs":1,"noerr":0,"initialize":"","finalize":"","libs":[],"x":600,"y":240,"wires":[["594569c280f3f784"]]},{"id":"61b6a49484dc24ed","type":"bucket-read","z":"a14c5e72386ffa8c","name":"read last","bucket":"","filter":"simple","timespanSequence":"","timespanValue":"","timespanUnits":"","maxTs":"","minTs":"","items":"1","limit":"","aggregation":"","aggregationType":"","sort":"desc","server":"e227a9126e2fd92c","x":560,"y":180,"wires":[["8642e90719bdeaa3"]]},{"id":"d81471d9b3334a4b","type":"change","z":"a14c5e72386ffa8c","name":"get bucket & device","rules":[{"t":"set","p":"bucket","pt":"msg","to":"payload.bucket","tot":"msg","dc":true},{"t":"set","p":"device","pt":"msg","to":"payload.config.device","tot":"msg","dc":true},{"t":"delete","p":"payload","pt":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":610,"y":60,"wires":[["016d2d35099bed1a"]]},{"id":"594569c280f3f784","type":"join","z":"a14c5e72386ffa8c","name":"join bucket with last_ts","mode":"custom","build":"merged","property":"","propertyType":"full","key":"topic","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"30","count":"","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"num","reduceFixup":"","x":540,"y":320,"wires":[["237f60f677cc51d2"]]},{"id":"016d2d35099bed1a","type":"switch","z":"a14c5e72386ffa8c","name":"Only buckets containing device","property":"device","propertyType":"msg","rules":[{"t":"nempty"}],"checkall":"true","repair":false,"outputs":1,"x":890,"y":60,"wires":[["11ba1e2ac6291973"]]},{"id":"11ba1e2ac6291973","type":"delay","z":"a14c5e72386ffa8c","name":"one msg every 5s","pauseType":"rate","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"5","rateUnits":"second","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"allowrate":false,"outputs":1,"x":190,"y":180,"wires":[["61b6a49484dc24ed","594569c280f3f784"]]},{"id":"2df54b4067186659","type":"change","z":"a14c5e72386ffa8c","name":"clean message","rules":[{"t":"delete","p":"bucket","pt":"msg"},{"t":"delete","p":"device","pt":"msg"},{"t":"delete","p":"last_ts","pt":"msg"},{"t":"delete","p":"payload.complete","pt":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":520,"y":400,"wires":[["07dfdb481f4e6306"]]},{"id":"237f60f677cc51d2","type":"switch","z":"a14c5e72386ffa8c","name":"remove messages without last_ts","property":"last_ts","propertyType":"msg","rules":[{"t":"nempty"}],"checkall":"true","repair":false,"outputs":1,"x":860,"y":320,"wires":[["2df54b4067186659"]]},{"id":"07dfdb481f4e6306","type":"batch","z":"a14c5e72386ffa8c","name":"sets messages batch by time","mode":"interval","count":10,"overlap":0,"interval":"20","allowEmptySequence":false,"topics":[],"x":800,"y":400,"wires":[["9adf2fb97d12f982"]]},{"id":"2ad545ea5a725de6","type":"comment","z":"a14c5e72386ffa8c","name":"also sets trigger time","info":"","x":810,"y":440,"wires":[]},{"id":"9adf2fb97d12f982","type":"join","z":"a14c5e72386ffa8c","name":"joins batch","mode":"auto","build":"object","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":true,"timeout":"","count":"","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":1040,"y":400,"wires":[["64e174db32df1d24"]]},{"id":"64e174db32df1d24","type":"endpoint-call","z":"a14c5e72386ffa8c","name":"","endpoint":"test_endpoint","server":"e227a9126e2fd92c","x":740,"y":540,"wires":[[]]},{"id":"89df0fa5d86c3b23","type":"comment","z":"a14c5e72386ffa8c","name":"sets last_ts","info":"","x":860,"y":240,"wires":[]},{"id":"57335e6983a5d71c","type":"comment","z":"a14c5e72386ffa8c","name":"checks last_ts existence","info":"","x":1120,"y":320,"wires":[]},{"id":"e227a9126e2fd92c","type":"thinger-server","host":"","name":"","ssl":true}]

The result message would be something like this:

    {
      "bucket": "drybox",
      "device": "esp32_example",
      "last_ts": "Mon, 22 Nov 2021 19:28:40 GMT"
    },
    {
      "bucket": "number",
      "device": "esp32_example",
      "last_ts": "Mon, 22 Nov 2021 19:48:41 GMT"
    },
    {
      "bucket": "string",
      "device": "esp32_example",
      "last_ts": "Mon, 22 Nov 2021 19:48:41 GMT"
    }

I encourage you to check it out, as well as the new possibilities of the new Node-RED plugin version.

