Real-time analytics refers to the process of collecting, analyzing, and interpreting data instantly as it is generated. This approach enables you to track and monitor activity, and make decisions based on real-time insights on data stored in a Tiger Cloud service. This page shows you how to integrate Grafana with a Tiger Cloud service and gain insights based on visualization of data optimized for size and speed in the columnstore.

Prerequisites

To follow the steps on this page:
  • Create a target Tiger Cloud service with the Real-time analytics capability enabled.

    You need your connection details. This procedure also works for self-hosted TimescaleDB.

Optimize time-series data in hypertables

Hypertables are PostgreSQL tables in TimescaleDB that automatically partition your time-series data by time. Time-series data represents the way a system, process, or behavior changes over time. Hypertables enable TimescaleDB to work efficiently with time-series data. Each hypertable is made up of child tables called chunks. Each chunk is assigned a range of time, and only contains data from that range. When you run a query, TimescaleDB identifies the correct chunk and runs the query on it, instead of going through the entire table.

Hypercore is the hybrid row-columnar storage engine in TimescaleDB. Traditional databases force a trade-off between fast inserts (row-based storage) and efficient analytics (columnar storage). Hypercore eliminates this trade-off, allowing real-time analytics without sacrificing transactional capabilities. Hypercore dynamically stores data in the most efficient format for its lifecycle:
  • Row-based storage for recent data: the most recent chunk (and possibly more) is always stored in the rowstore, ensuring fast inserts, updates, and low-latency single-record queries. Additionally, row-based storage is used as a write-through for inserts and updates to columnar storage.
  • Columnar storage for analytical performance: older chunks are automatically compressed into the columnstore, optimizing storage efficiency and accelerating analytical queries.
Unlike traditional columnar databases, hypercore allows data to be inserted or modified at any stage, making it a flexible solution for both high-ingest transactional workloads and real-time analytics—within a single database. Because TimescaleDB is 100% PostgreSQL, you can use all the standard PostgreSQL tables, indexes, stored procedures, and other objects alongside your hypertables. This makes creating and working with hypertables similar to standard PostgreSQL.
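
For example, once you create the rides hypertable in the procedure below, you can see this partitioning at work by listing its chunks. This optional check is a sketch that uses TimescaleDB's standard informational functions and views; it is not part of the original procedure:
  -- List the child chunks behind the hypertable
  SELECT show_chunks('rides');

  -- Inspect each chunk's time range: queries that filter on
  -- pickup_datetime only scan the chunks whose range matches the filter
  SELECT chunk_name, range_start, range_end
  FROM timescaledb_information.chunks
  WHERE hypertable_name = 'rides';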
  1. Import time-series data into a hypertable
    1. Unzip nyc_data.tar.gz to a <local folder>. This test dataset contains historical data from New York’s yellow taxi network. To import up to 100GB of data directly from your current PostgreSQL-based database, migrate with downtime using native PostgreSQL tooling. To seamlessly import 100GB-10TB+ of data, use the supplied live migration tooling. To add data from non-PostgreSQL data sources, see Import and ingest data.
    2. In Terminal, navigate to <local folder> and update the following string with your connection details to connect to your service.
      psql -d "postgres://<username>:<password>@<host>:<port>/<database-name>?sslmode=require"
      
    3. Create a hypertable optimized for your time-series data:
      1. Create a hypertable with the columnstore enabled by default for your time-series data using CREATE TABLE. For efficient queries on data in the columnstore, remember to segmentby the column you will use most often to filter your data. In your SQL client, run the following command:
        CREATE TABLE "rides"(
          vendor_id TEXT,
          pickup_datetime TIMESTAMP WITHOUT TIME ZONE NOT NULL,
          dropoff_datetime TIMESTAMP WITHOUT TIME ZONE NOT NULL,
          passenger_count NUMERIC,
          trip_distance NUMERIC,
          pickup_longitude  NUMERIC,
          pickup_latitude   NUMERIC,
          rate_code         INTEGER,
          dropoff_longitude NUMERIC,
          dropoff_latitude  NUMERIC,
          payment_type INTEGER,
          fare_amount NUMERIC,
          extra NUMERIC,
          mta_tax NUMERIC,
          tip_amount NUMERIC,
          tolls_amount NUMERIC,
          improvement_surcharge NUMERIC,
          total_amount NUMERIC
        ) WITH (
          tsdb.hypertable,
          tsdb.partition_column='pickup_datetime',
          tsdb.create_default_indexes=false,
          tsdb.segmentby='vendor_id',
          tsdb.orderby='pickup_datetime DESC'
        );
        
        If you are self-hosting TimescaleDB v2.19.3 and below, create a relational table, then convert it using create_hypertable. You then enable the columnstore with a call to ALTER TABLE, as sketched below.
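        A minimal sketch of that older pattern, assuming TimescaleDB v2.18+ syntax for the columnstore options (earlier versions use the equivalent timescaledb.compress, timescaledb.compress_segmentby, and timescaledb.compress_orderby settings):
          -- Convert the plain rides table into a hypertable partitioned by time
          SELECT create_hypertable('rides', by_range('pickup_datetime'));

          -- Enable the columnstore with the same segmentby/orderby settings as above
          ALTER TABLE rides SET (
            timescaledb.enable_columnstore = true,
            timescaledb.segmentby = 'vendor_id',
            timescaledb.orderby = 'pickup_datetime DESC'
          );
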
      2. Add another dimension to partition your hypertable more efficiently:
        SELECT add_dimension('rides', by_hash('payment_type', 2));
        
      3. Create an index to support efficient queries by vendor, rate code, and passenger count:
        CREATE INDEX ON rides (vendor_id, pickup_datetime DESC);
        CREATE INDEX ON rides (rate_code, pickup_datetime DESC);
        CREATE INDEX ON rides (passenger_count, pickup_datetime DESC);
        
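        Optionally, confirm that a filtered query combines chunk pruning with one of these indexes by checking the query plan. This is a quick sketch, not part of the original procedure:
          -- Plan only; shows which chunks and indexes the query touches
          EXPLAIN (costs off)
          SELECT * FROM rides
          WHERE vendor_id = '1'
            AND pickup_datetime > '2016-01-01'::timestamp;
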
    4. Create tables for relational data:
      1. Add a table to store the payment types data:
        CREATE TABLE IF NOT EXISTS "payment_types"(
          payment_type INTEGER,
          description TEXT
        );
        INSERT INTO payment_types(payment_type, description) VALUES
          (1, 'credit card'),
          (2, 'cash'),
          (3, 'no charge'),
          (4, 'dispute'),
          (5, 'unknown'),
          (6, 'voided trip');
        
      2. Add a table to store the rates data:
        CREATE TABLE IF NOT EXISTS "rates"(
         rate_code   INTEGER,
         description TEXT
        );
        INSERT INTO rates(rate_code, description) VALUES
         (1, 'standard rate'),
         (2, 'JFK'),
         (3, 'Newark'),
         (4, 'Nassau or Westchester'),
         (5, 'negotiated fare'),
         (6, 'group ride');
        
    5. Upload the dataset to your service:
      \COPY rides FROM nyc_data_rides.csv CSV;
      
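      Optionally, as a quick sanity check that is not part of the original procedure, confirm the number of rows loaded:
        -- The full dataset referenced later in this tutorial has 10,906,860 rows
        SELECT count(*) FROM rides;
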
  2. Have a quick look at your data. You query hypertables in exactly the same way as you would a relational table. Use one of the following SQL editors to run a query and see the data you uploaded:
    • Data mode: write queries, visualize data, and share your results in Console for all your Tiger Cloud services.
    • SQL editor: write, fix, and organize SQL faster and more accurately in Console for a Tiger Cloud service.
    • psql: easily run queries on your Tiger Cloud services or self-hosted TimescaleDB deployment from Terminal.
    For example:
    • Display the number of rides for each fare type:
      SELECT rate_code, COUNT(vendor_id) AS num_trips
      FROM rides
      WHERE pickup_datetime < '2016-01-08'
      GROUP BY rate_code
      ORDER BY rate_code;
      
      This simple query runs in 3 seconds. You see something like:
       rate_code | num_trips
      -----------+-----------
               1 |   2266401
               2 |     54832
               3 |      4126
               4 |       967
               5 |      7193
               6 |        17
              99 |        42
    • To select all rides taken in the first week of January 2016, and return the total number of trips taken for each rate code:
      SELECT rates.description, COUNT(vendor_id) AS num_trips
      FROM rides
      JOIN rates ON rides.rate_code = rates.rate_code
      WHERE pickup_datetime < '2016-01-08'
      GROUP BY rates.description
      ORDER BY LOWER(rates.description);
      
      On this large amount of data, this analytical query on data in the rowstore takes about 59 seconds. You see something like:
       description           | num_trips
      -----------------------+-----------
       group ride            |        17
       JFK                   |     54832
       Nassau or Westchester |       967
       negotiated fare       |      7193
       Newark                |      4126
       standard rate         |   2266401

Optimize your data for real-time analytics

When TimescaleDB converts a chunk to the columnstore, it automatically creates a different schema for your data. TimescaleDB creates and uses custom indexes to incorporate the segmentby and orderby parameters when you write to and read from the columnstore. To increase the speed of your analytical queries by a factor of 10 and reduce storage costs by up to 90%, convert data to the columnstore:
  1. Connect to your service. In Console, open an SQL editor. The in-Console editors display the query speed. You can also connect to your service using psql.
  2. Add a policy to convert chunks to the columnstore at a specific time interval. For example, convert data older than 8 days to the columnstore:
    CALL add_columnstore_policy('rides', INTERVAL '8 days');
    
    See add_columnstore_policy. The data you imported for this tutorial is from 2016, so it was already added to the columnstore by default. However, you get the idea. To see the space savings in action, run the sketch below or follow Try the key features.
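    To check the space savings yourself, one option is a stats query. This is a sketch that assumes TimescaleDB's hypertable_compression_stats function (newer versions also expose the equivalent hypertable_columnstore_stats):
      -- Compare hypertable size before and after conversion to the columnstore
      SELECT pg_size_pretty(before_compression_total_bytes) AS before,
             pg_size_pretty(after_compression_total_bytes) AS after
      FROM hypertable_compression_stats('rides');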
Just to hit this one home: by converting cooling data to the columnstore, you have increased the speed of your analytical queries by a factor of 10, and reduced storage by up to 90%.

Connect Grafana to Tiger Cloud

To visualize the results of your queries, enable Grafana to read the data in your service:
  1. Log in to Grafana. In your browser, log in to either:
    • Self-hosted Grafana: at http://localhost:3000/. The default credentials are admin, admin.
    • Grafana Cloud: use the URL and credentials you set when you created your account.
  2. Add your service as a data source
    1. Open Connections > Data sources, then click Add new data source.
    2. Select PostgreSQL from the list.
    3. Configure the connection:
      • Host URL, Database name, Username, and Password: configure these using your connection details. Host URL is in the format <host>:<port>.
      • TLS/SSL Mode: select require.
      • PostgreSQL options: enable TimescaleDB.
      • Leave the default setting for all other fields.
    4. Click Save & test.
    Grafana checks that your details are set correctly.

Monitor performance over time

A Grafana dashboard represents a view into the performance of a system, and each dashboard consists of one or more panels, which represent information about a specific metric related to that system. To visually monitor the volume of taxi rides over time:
  1. Create the dashboard
    1. On the Dashboards page, click New and select New dashboard.
    2. Click Add visualization.
    3. Select the data source that connects to your service. The Time series visualization is chosen by default.
    4. In the Queries section, select Code, then select Time series in Format.
    5. Select the date range for your visualization: the data set is from 2016. Click the date range above the panel and set:
      • From: 2016-01-01 01:00:00
      • To: 2016-01-30 01:00:00
  2. Combine TimescaleDB and Grafana functionality to analyze your data. Combine a time_bucket with the Grafana $__timeFilter() function to set the pickup_datetime column as the filtering range for your visualizations.
    SELECT
      time_bucket('1 day', pickup_datetime) AS "time",
      COUNT(*)
    FROM rides
    WHERE $__timeFilter(pickup_datetime)
    GROUP BY time
    ORDER BY time;
    
    This query groups the results by day and orders them by time.
  3. Click Save dashboard

Optimize revenue potential

Having all this data is great, but how do you use it? Monitoring data is useful to check what has happened, but how can you analyze this information to your advantage? This section explains how to create a visualization that shows how you can maximize potential revenue.

Set up your data for geospatial queries

To add geospatial analysis to your ride count visualization, you need geospatial data to work out which trips originated where. As TimescaleDB is compatible with all PostgreSQL extensions, use PostGIS to slice data by time and location.
  1. Connect to your service and add the PostGIS extension:
    CREATE EXTENSION postgis;
    
  2. Add geometry columns for pick up and drop off locations:
    ALTER TABLE rides ADD COLUMN pickup_geom geometry(POINT,2163);
    ALTER TABLE rides ADD COLUMN dropoff_geom geometry(POINT,2163);
    
  3. Convert the latitude and longitude points into geometry coordinates that work with PostGIS:
    UPDATE rides SET pickup_geom = ST_Transform(ST_SetSRID(ST_MakePoint(pickup_longitude,pickup_latitude),4326),2163),
       dropoff_geom = ST_Transform(ST_SetSRID(ST_MakePoint(dropoff_longitude,dropoff_latitude),4326),2163);
    
    This updates 10,906,860 rows of data on both columns, so it takes a while. Coffee is your friend. While the update happens, you might run into the following error:
      ERROR: tuple decompression limit exceeded by operation
      Error Code: 53400
      Details: current limit: 100000, tuples decompressed: 10906860
      Hint: Consider increasing timescaledb.max_tuples_decompressed_per_dml_transaction or set to 0 (unlimited).
    To fix this, run:
    SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 0;
    
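    If you prefer not to change this setting for the whole session, a scoped alternative (standard PostgreSQL behavior, sketched here rather than taken from the original procedure) is to apply it to a single transaction with SET LOCAL:
      BEGIN;
      -- The override only lasts until COMMIT or ROLLBACK
      SET LOCAL timescaledb.max_tuples_decompressed_per_dml_transaction TO 0;
      UPDATE rides
      SET pickup_geom = ST_Transform(ST_SetSRID(ST_MakePoint(pickup_longitude,pickup_latitude),4326),2163),
          dropoff_geom = ST_Transform(ST_SetSRID(ST_MakePoint(dropoff_longitude,dropoff_latitude),4326),2163);
      COMMIT;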

Visualize the area where you can make the most money

In this section you visualize a query that returns rides longer than 5 miles for trips taken within 2 km of Times Square. The data includes the distance travelled, and the query groups results by trip_distance and location so that Grafana can plot the data properly. This enables you to see where a taxi driver is most likely to pick up a passenger who wants a longer ride, and make more money.
  1. Create a geolocation dashboard
    1. In Grafana, create a new dashboard that is connected to your data source with a Geomap visualization.
    2. In the Queries section, select Code, then select Time series in Format.
    3. To find rides longer than 5 miles in Manhattan, paste the following query:
      SELECT time_bucket('5m', rides.pickup_datetime) AS time,
             rides.trip_distance AS value,
             rides.pickup_latitude AS latitude,
             rides.pickup_longitude AS longitude
      FROM rides
      WHERE rides.pickup_datetime BETWEEN '2016-01-01T01:41:55.986Z' AND '2016-01-01T07:41:55.986Z' AND
        ST_Distance(pickup_geom,
                    ST_Transform(ST_SetSRID(ST_MakePoint(-73.9851,40.7589),4326),2163)
        ) < 2000
      GROUP BY time,
               rides.trip_distance,
               rides.pickup_latitude,
               rides.pickup_longitude
      ORDER BY time
      LIMIT 500;
      
      You see a world map with a dot on New York.
    4. Zoom into your map to see the visualization clearly.
  2. Customize the visualization
    1. In the Geomap options, under Map Layers, click + Add layer and select Heatmap. You now see the areas where a taxi driver is most likely to pick up a passenger who wants a longer ride, and make more money.
You have integrated Grafana with a Tiger Cloud service and gained insights based on visualization of your data.