Energy providers understand that customers tend to lose patience when there is not enough power for them to complete day-to-day activities. Task one is keeping the lights on. If you are transitioning to renewable energy, it helps to know when you need to produce energy so you can choose a suitable energy source. Real-time analytics refers to the process of collecting, analyzing, and interpreting data instantly as it is generated. This approach enables you to track and monitor activity, make decisions based on real-time insights from data stored in a Tiger Cloud service, and keep those lights on. Grafana is a popular data visualization tool that enables you to create customizable dashboards and effectively monitor your systems and applications. This page shows you how to integrate Grafana with a Tiger Cloud service and gain insights from visualizations of data optimized for size and speed in the columnstore.

Prerequisites

To follow the steps on this page:
  • Create a target Tiger Cloud service with the Real-time analytics capability enabled.

    You need your connection details. This procedure also works for self-hosted TimescaleDB.

Optimize time-series data in hypertables

Hypertables are tables in TimescaleDB that automatically partition your time-series data by time. Time-series data represents the way a system, process, or behavior changes over time. Hypertables enable TimescaleDB to work efficiently with time-series data. Each hypertable is made up of child tables called chunks. Each chunk is assigned a range of time, and only contains data from that range. When you run a query, TimescaleDB identifies the correct chunk and runs the query on it, instead of going through the entire table. Hypercore is the hybrid row-columnar storage engine in TimescaleDB, used by hypertables. Traditional databases force a trade-off between fast inserts (row-based storage) and efficient analytics (columnar storage). Hypercore eliminates this trade-off, allowing real-time analytics without sacrificing transactional capabilities. Hypercore dynamically stores data in the most efficient format for its lifecycle:
  • Row-based storage for recent data: the most recent chunk (and possibly more) is always stored in the rowstore, ensuring fast inserts, updates, and low-latency single-record queries. Additionally, row-based storage is used as a writethrough for inserts and updates to columnar storage.
  • Columnar storage for analytical performance: chunks are automatically compressed into the columnstore, optimizing storage efficiency and accelerating analytical queries.
Unlike traditional columnar databases, hypercore allows data to be inserted or modified at any stage, making it a flexible solution for both high-ingest transactional workloads and real-time analytics, within a single database. Because TimescaleDB is 100% PostgreSQL, you can use all the standard PostgreSQL tables, indexes, stored procedures, and other objects alongside your hypertables. This makes creating and working with hypertables similar to working with standard PostgreSQL.
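To see this partitioning in action, you can list the chunks behind a hypertable. The following is a minimal sketch, assuming the metrics hypertable you create later on this page; it uses the timescaledb_information.chunks view, whose exact columns can vary slightly between TimescaleDB versions:

    -- List each chunk of the metrics hypertable, with its time range
    -- and whether it has been converted to columnar storage.
    SELECT chunk_name, range_start, range_end, is_compressed
    FROM timescaledb_information.chunks
    WHERE hypertable_name = 'metrics'
    ORDER BY range_start;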
  1. Import time-series data into a hypertable
    1. Unzip metrics.csv.gz to a <local folder>. This test dataset contains energy consumption data. To import up to 100GB of data directly from your current PostgreSQL-based database, migrate with downtime using native PostgreSQL tooling. To seamlessly import 100GB-10TB+ of data, use the live migration tooling supplied by Tiger Cloud. To add data from non-PostgreSQL data sources, see Import and ingest data.
    2. In Terminal, navigate to <local folder> and update the following string with your connection details to connect to your Tiger Cloud service.
      psql -d "postgres://<username>:<password>@<host>:<port>/<database-name>?sslmode=require"
      
    3. Create a hypertable optimized for your time-series data:
      1. Create a hypertable with the columnstore enabled by default for your time-series data using CREATE TABLE. For efficient queries on data in the columnstore, remember to segmentby the column you will use most often to filter your data. In your SQL client, run the following command:
        CREATE TABLE "metrics"(
          created timestamp with time zone default now() not null,
          type_id integer                                not null,
          value   double precision                       not null
        ) WITH (
          tsdb.hypertable,
          tsdb.partition_column='created',
          tsdb.segmentby = 'type_id',
          tsdb.orderby = 'created DESC'
        );
        
        If you are self-hosting TimescaleDB v2.19.3 and below, create a relational table, then convert it to a hypertable using create_hypertable. You then enable the columnstore with a call to ALTER TABLE; see the sketch at the end of this procedure.
    4. Upload the dataset to your service
      \COPY metrics FROM metrics.csv CSV;
      
  2. Have a quick look at your data. You query hypertables in exactly the same way as you would a relational table. Use one of the following SQL editors to run a query and see the data you uploaded:
    • Data mode: write queries, visualize data, and share your results in Tiger Cloud Console for all your services.
    • SQL editor: write, fix, and organize SQL faster and more accurately in Tiger Cloud Console for a single service.
    • psql: easily run queries on your services or self-hosted TimescaleDB deployment from Terminal.
    SELECT time_bucket('1 day', created, 'Europe/Berlin') AS "time",
    round((last(value, created) - first(value, created)) * 100.) / 100. AS value
    FROM metrics
    WHERE type_id = 5
    GROUP BY 1;
    
    On this amount of data, this query on data in the rowstore takes about 3.6 seconds. You see something like:
    time                   | value
    -----------------------+------
    2023-05-29 22:00:00+00 | 23.1
    2023-05-28 22:00:00+00 | 19.5
    2023-05-30 22:00:00+00 | 25
    2023-05-31 22:00:00+00 | 8.1
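If you are self-hosting TimescaleDB v2.19.3 or below, as noted in the hypertable creation step above, the equivalent setup uses create_hypertable and ALTER TABLE. A minimal sketch, assuming the classic compression syntax available in those versions:

    CREATE TABLE "metrics"(
      created timestamp with time zone default now() not null,
      type_id integer                                not null,
      value   double precision                       not null
    );

    -- Convert the relational table to a hypertable partitioned on the created column
    SELECT create_hypertable('metrics', 'created');

    -- Enable columnar compression, segmenting by the column you filter on most often
    ALTER TABLE metrics SET (
      timescaledb.compress,
      timescaledb.compress_segmentby = 'type_id',
      timescaledb.compress_orderby   = 'created DESC'
    );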

Optimize your data for real-time analytics

When TimescaleDB converts a chunk to the columnstore, it automatically creates a different schema for your data. TimescaleDB creates and uses custom indexes to incorporate the segmentby and orderby parameters when you write to and read from the columnstore. To increase the speed of your analytical queries by a factor of 10 and reduce storage costs by up to 90%, convert data to the columnstore:
  1. Connect to your service. In Tiger Cloud Console, open an SQL editor. The in-Console editors display the query speed. You can also connect to your service using psql.
  2. Add a policy to convert chunks to the columnstore at a specific time interval. For example, 8 days after the data was added to the table:
    CALL add_columnstore_policy('metrics', INTERVAL '8 days');
    
    See add_columnstore_policy.
  3. Faster analytical queries on data in the columnstore. Now run the analytical query again:
     SELECT time_bucket('1 day', created, 'Europe/Berlin') AS "time",
     round((last(value, created) - first(value, created)) * 100.) / 100. AS value
     FROM metrics
     WHERE type_id = 5
     GROUP BY 1;
    
    On this amount of data, this analytical query on data in the columnstore takes about 250ms.
Just to hit this one home, by converting cooling data to the columnstore, you have increased the speed of your analytical queries by a factor of 10, and reduced storage by up to 90%.
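The policy converts chunks on a schedule. To convert eligible chunks immediately instead of waiting for the policy to run, you can compress them manually. A minimal sketch, assuming the compress_chunk and show_chunks functions available in your TimescaleDB version:

    -- Compress every chunk of metrics whose data is older than 8 days
    SELECT compress_chunk(c)
    FROM show_chunks('metrics', older_than => INTERVAL '8 days') AS c;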

Write fast analytical queries

Aggregation is a way of combining data to get insights from it. Average, sum, and count are all examples of simple aggregates. However, with large amounts of data, aggregation slows things down, quickly. Continuous aggregates are a kind of hypertable that is refreshed automatically in the background as new data is added, or old data is modified. Changes to your dataset are tracked, and the hypertable behind the continuous aggregate is automatically updated in the background. By default, querying continuous aggregates provides you with real-time data. Pre-aggregated data from the materialized view is combined with recent data that hasn’t been aggregated yet. This gives you up-to-date results on every query. You create continuous aggregates on uncompressed data in high-performance storage. They continue to work on data in the columnstore and rarely accessed data in tiered storage. You can even create continuous aggregates on top of your continuous aggregates.
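If you want to read only the data that has already been materialized, rather than the real-time combination with recent raw rows, you can toggle this behavior per continuous aggregate. A minimal sketch, assuming the kwh_day_by_day aggregate you create in the first step below:

    -- Return only pre-aggregated data that has already been materialized
    ALTER MATERIALIZED VIEW kwh_day_by_day SET (timescaledb.materialized_only = true);

    -- Switch back to real-time aggregation, the default
    ALTER MATERIALIZED VIEW kwh_day_by_day SET (timescaledb.materialized_only = false);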
  1. Monitor energy consumption on a day-to-day basis
    1. Create a continuous aggregate kwh_day_by_day for energy consumption:
      CREATE MATERIALIZED VIEW kwh_day_by_day(time, value)
         with (timescaledb.continuous) as
      SELECT time_bucket('1 day', created, 'Europe/Berlin') AS "time",
             round((last(value, created) - first(value, created)) * 100.) / 100. AS value
      FROM metrics
      WHERE type_id = 5
      GROUP BY 1;
      
    2. Add a refresh policy to keep kwh_day_by_day up-to-date:
      SELECT add_continuous_aggregate_policy('kwh_day_by_day',
         start_offset => NULL,
         end_offset => INTERVAL '1 hour',
         schedule_interval => INTERVAL '1 hour');
      
  2. Monitor energy consumption on an hourly basis
    1. Create a continuous aggregate kwh_hour_by_hour for energy consumption:
      CREATE MATERIALIZED VIEW kwh_hour_by_hour(time, value)
        with (timescaledb.continuous) as
      SELECT time_bucket('01:00:00', metrics.created, 'Europe/Berlin') AS "time",
             round((last(value, created) - first(value, created)) * 100.) / 100. AS value
      FROM metrics
      WHERE type_id = 5
      GROUP BY 1;
      
    2. Add a refresh policy to keep the continuous aggregate up-to-date:
      SELECT add_continuous_aggregate_policy('kwh_hour_by_hour',
       start_offset => NULL,
          end_offset => INTERVAL '1 hour',
          schedule_interval => INTERVAL '1 hour');
      
  3. Analyze your data. Now that you have created continuous aggregates, use them to perform analytics on your data. For example, to see how median energy consumption changes during weekdays over the last year, run the following query:
      WITH per_day AS (
       SELECT
         time,
         value
       FROM kwh_day_by_day
       WHERE "time" at time zone 'Europe/Berlin' > date_trunc('month', time) - interval '1 year'
       ORDER BY 1
      ), daily AS (
          SELECT
             to_char(time, 'Dy') as day,
             value
          FROM per_day
      ), percentile AS (
          SELECT
              day,
              approx_percentile(0.50, percentile_agg(value)) as value
          FROM daily
          GROUP BY 1
          ORDER BY 1
      )
      SELECT
          d.day,
          d.ordinal,
          pd.value
      FROM unnest(array['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']) WITH ORDINALITY AS d(day, ordinal)
      LEFT JOIN percentile pd ON lower(pd.day) = lower(d.day);
    
    You see something like:
    day | ordinal | value
    ----+---------+--------------------
    Mon | 2       | 23.08078714975423
    Sun | 1       | 19.511430831944395
    Tue | 3       | 25.003118897837307
    Wed | 4       | 8.09300571759772
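The approx_percentile and percentile_agg hyperfunctions used in this query come from the timescaledb_toolkit extension. It is available in Tiger Cloud services; on a self-hosted deployment you may need to enable it yourself. A minimal sketch:

    -- Enable the toolkit hyperfunctions such as approx_percentile and percentile_agg
    CREATE EXTENSION IF NOT EXISTS timescaledb_toolkit;

    -- Check the installed version
    SELECT extversion FROM pg_extension WHERE extname = 'timescaledb_toolkit';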

Connect Grafana to Tiger Cloud

To visualize the results of your queries, enable Grafana to read the data in your service:
  1. Log in to Grafana. In your browser, log in to either:
    • Self-hosted Grafana: at http://localhost:3000/. The default credentials are admin, admin.
    • Grafana Cloud: use the URL and credentials you set when you created your account.
  2. Add your service as a data source
    1. Open Connections > Data sources, then click Add new data source.
    2. Select PostgreSQL from the list.
    3. Configure the connection:
      • Host URL, Database name, Username, and Password: configure these using your connection details. Host URL is in the format <host>:<port>.
      • TLS/SSL Mode: select require.
      • PostgreSQL options: enable TimescaleDB.
      • Leave the default setting for all other fields.
    4. Click Save & test.
    Grafana checks that your details are set correctly.
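The TimescaleDB option tells Grafana that TimescaleDB functions such as time_bucket are available when it generates queries. If you are unsure whether the extension is installed on the database Grafana connects to, you can check from any SQL editor. A quick sketch:

    -- Confirm the timescaledb extension and its version on this database
    SELECT extname, extversion FROM pg_extension WHERE extname = 'timescaledb';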

Visualize energy consumption

A Grafana dashboard represents a view into the performance of a system, and each dashboard consists of one or more panels, which represent information about a specific metric related to that system. To visually monitor the volume of energy consumption over time:
  1. Create the dashboard
    1. On the Dashboards page, click New and select New dashboard.
    2. Click Add visualization, then select the data source that connects to your service and the Bar chart visualization.
    3. In the Queries section, select Code, then run the following query based on your continuous aggregate:
      WITH per_hour AS (
          SELECT
              time,
              value
          FROM kwh_hour_by_hour
          WHERE "time" at time zone 'Europe/Berlin' > date_trunc('month', time) - interval '1 year'
          ORDER BY 1
      ), hourly AS (
          SELECT
              extract(HOUR FROM time) * interval '1 hour' as hour,
              value
          FROM per_hour
      )
      SELECT
          hour,
          approx_percentile(0.50, percentile_agg(value)) as median,
          max(value) as maximum
      FROM hourly
      GROUP BY 1
      ORDER BY 1;
      
      This query aggregates the results by hour for households in a specific time zone and orders them by time. Because you use a continuous aggregate, this data is always correct in real time. You see that energy consumption is highest in the evening and at breakfast time. You also know that the wind drops off in the evening. This data proves that you need to supply a supplementary power source for peak times, or plan to store energy during the day for peak times.
  2. Click Save dashboard
You have integrated Grafana with a Tiger Cloud service and gained insights from visualizations of your data.
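To make a panel respond to the dashboard's time picker rather than a fixed interval, you can filter with Grafana's built-in time range macro. A minimal sketch against the kwh_hour_by_hour continuous aggregate, using the $__timeFilter macro from Grafana's PostgreSQL data source:

    -- Return hourly consumption within the time range selected in the dashboard
    SELECT
      time AS "time",
      value
    FROM kwh_hour_by_hour
    WHERE $__timeFilter(time)
    ORDER BY 1;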