
Author: John White

Using Power BI Incremental Refresh with Azure Data Explorer

Azure Data Explorer (ADX) is a great platform for storing large amounts of transactional data. The Incremental Refresh feature (now available for Pro users!) in Power BI makes it much faster to keep data models based on that data current. Unfortunately, if you follow the standard guidance from Microsoft for configuring Incremental Refresh, you’ll quickly bump into a roadblock. Luckily, it’s not that difficult to get around.

Incremental Refresh works by setting up data partitions in the dataset in the service. These partitions are based on time slices. Once data has been loaded into the dataset, only the data in the most recent partition is refreshed.

To set this up in Power BI Desktop, you need to configure two parameters, RangeStart and RangeEnd. These parameters must be of the Date/Time type. Once set, they are used to filter the Date/Time columns in your tables, and, once the model is published to the service, to define the partitions that the data is loaded into.
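Behind the Manage Parameters dialog, the two parameters are just small M queries. A minimal sketch is shown below, with arbitrary default values; the exact meta record that Desktop generates may differ slightly from this.

// RangeStart parameter query
#datetime(2020, 1, 1, 0, 0, 0) meta [IsParameterQuery = true, Type = "DateTime", IsParameterQueryRequired = true]

// RangeEnd parameter query
#datetime(2021, 1, 1, 0, 0, 0) meta [IsParameterQuery = true, Type = "DateTime", IsParameterQueryRequired = true]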

When Power Query connects to ADX, all Date/Time fields come in as the Date/Time/Timezone type. This is a bit of a problem. When you use the column filters to filter your dates, the two range parameters won’t show up, because they are of a different type (Date/Time). Well, that’s not a big problem, right? Power Query lets us change a column’s data type simply by using the type picker on the column header.

Indeed, doing this does in fact allow you to use your range parameters in the column filters. Unfortunately, data type conversions don’t get folded back to the source ADX query. You can see this by right-clicking on a subsequent step in the Power Query editor. The “View Native Query” option is greyed out, which indicates that the query cannot be folded.

Query folding is critical to Incremental Refresh. Without it, the entire dataset is brought down locally so that it can be filtered, rather than having the filter applied at the data source. This would completely defeat the purpose of implementing Incremental Refresh in the first place.

The good news is that you can in fact filter a Date/Time/Timezone column with a Date/Time parameter, but the Power Query user interface doesn’t know that. The solution is to simply remove the type conversion Power Query step AFTER performing the filter in the Power Query UI.

Alternatively, if you’re comfortable with the M language, you can simply insert something like the following line using the Advanced Editor in Power Query (where CreatedLocal is the name of the column being filtered).

#"Filtered Rows" = Table.SelectRows(Source, each [CreatedLocal] >= RangeStart and [CreatedLocal] < RangeEnd),

If the filter step can be folded back into the source, Incremental Refresh should work properly. You can continue setting up Incremental Refresh using the Incremental refresh dialog in Power BI Desktop. You will likely see some warning messages indicating that folding can’t be detected, but these can safely be ignored.


Exceed the 500,000 row limit in Application Insights and Log Analytics with Power BI

The combination of Power BI and Application Insights (AI)/Log Analytics (LA) is a powerful one. These tools provide a quick, convenient, and relatively cheap way to collect and analyze telemetry on a wide variety of applications. One drawback of AI/LA is that any data query will return a maximum of 500,000 rows, which can be quite constraining in some cases. This article describes a way to work around this limit.

In this example, we’ll be working with an Application Insights instance that is being populated by the WordPress Application Insights plugin – in fact, it’s the one used on this very blog. There are a couple of ways to connect Power BI Desktop to AI data. The Power Query code is downloadable directly from Application Insights, and you can also use the Azure Data Explorer proxy address, as outlined in my post on the topic here. This approach will work for both methods, and for our purposes, we’ll be using the generated Power Query code approach.

To begin, access your Application Insights instance, and open the Logs window. If necessary, dismiss the “Queries” window that pops up. Next, form your query using Kusto Query Language (KQL). In our case, we want a simple dump of all rows in the “pageViews” table, so the query is simple – just pageViews.

Once we have the query the way that we want it, we select the Export button, and choose “Export to Power BI (M query)”. M is the name of the language that Power Query uses. Once chosen, a text file will be downloaded that contains the Power Query code that we will need in Power BI Desktop.

At this point, we launch Power BI Desktop, and choose “Get Data”. Since we already have the query that we need, we will choose “Blank Query”.

Next, we name our query “Page Views”, and select the Advanced Editor. This is where we paste in the query generated by Application Insights: open the file that was downloaded above, copy its contents, and paste them into this window (the comments at the top can be excluded).

Of note here is the value that is automatically set for timespan. By default, this is set to P1D, which means that data will be retrieved only for the previous day. In our example, we have changed it to retrieve data for the past 365 days.
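The exact shape of the exported code can vary, but the timespan shows up as a query option passed to Web.Contents, roughly like the fragment below (P365D is the ISO 8601 notation for a period of 365 days).

// Inside the exported query - the default P1D (one day) changed to P365D (365 days)
[Query = [#"query" = "pageViews", #"timespan" = "P365D"]]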

Selecting “Done” will load a preview of our data into Power Query. However, if we want to then load it into the data model, it will do so in a single pull, and we will be subject to the 500,000 row limit. What we need to do is break up our query into multiple queries, and Power Query lets us do this through the use of functions.

The first thing that we’ll need to do is decide how to segment the AI data. In our case, it is unlikely that we will have more than 500,000 page views per month, so if we perform one query per month, we should be able to retrieve all of our data. To do this, we go back to Application Insights and build a query that returns the list of years and months present in our data. In our case, this query is:

pageViews
| where timestamp > now(-365d)
| summarize by 
    Year = datetime_part('Year',timestamp), 
    Month = datetime_part('Month',timestamp)

Note that the number of days in the where clause above should match the number of days used in the larger query. Next, export this query to Power BI, and create another query in Power Query. Leave the name as the default for now. Selecting Done should return a list of the years and months for your data. These values are numbers, and Power Query recognizes them as such. However, we need to work with them as text later on, so we change their types to text.
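If you prefer to make the change in the Advanced Editor rather than with the type picker, it’s a single step. The previous step name below is a placeholder for whatever your query generated; the Year and Month column names come from the summarize query above.

// Convert the numeric Year and Month columns to text so they can be passed to the function later on
#"Changed Type" = Table.TransformColumnTypes(#"Previous Step", {{"Year", type text}, {"Month", type text}})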

Now we will return to our original query, and modify it so that it only returns data for a single month. Reopen the advanced editor and replace the query “pageViews” with:

pageViews | where datetime_part('Month',timestamp) == 10 and datetime_part('Year',timestamp) == 2020

The values chosen don’t matter, but they should return data. The only change to the exported code is the KQL query text itself; everything else stays as it was downloaded.

Selecting Done, we verify that we have data restricted to the specified month. This is where the fun begins. We are now going to turn this query into a function. To do so, we right-click on our pageViews query, and select “Create Function”.

We are then presented with a dialog box that asks if we want to create the function without parameters. We can go ahead and select “Create”. We are then prompted to name the function, and we’ll call it “GetViewsByMonthAndYear”. We now need to edit the function. To do so, with the function selected in the query pane, we select the Advanced Editor once again and dismiss the warning that appears. We then edit the function in two places: first, we define the two parameters to be passed to the function, Month and Year, and then we add them to our query.

In the function declaration we add “Month as text” and “Year as text”. We then replace the explicit month and year that we originally queried for with these new variables, Month and Year. Our function code now appears as below:
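Because the exported code differs a little from export to export, the sketch below is a simplified stand-in rather than the exact downloaded query: the app id is a placeholder, and the response handling assumes the standard tables/columns/rows shape of the Application Insights REST API response. In practice, you keep the body of your downloaded query and change only the declaration and the KQL text.

(Month as text, Year as text) =>
let
    // The Month and Year parameters are concatenated into the KQL text
    // "<your app id>" is a placeholder for your Application Insights application id
    Source = Json.Document(
        Web.Contents(
            "https://api.applicationinsights.io/v1/apps/<your app id>/query",
            [Query = [
                #"query" = "pageViews | where datetime_part('Month',timestamp) == " & Month
                    & " and datetime_part('Year',timestamp) == " & Year,
                #"timespan" = "P365D"
            ]]
        )
    ),
    // The first table in the response holds the query results
    Result = Source[tables]{0},
    ColumnNames = List.Transform(Result[columns], each [name]),
    Data = Table.FromRows(Result[rows], ColumnNames)
in
    Data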

Now we are ready to use our function. We select our query that contains the list of years and months, select the “Add Column” tab from the ribbon, and choose “Invoke Custom Function”. We give the new column a name “Views”, select our function from the dropdown, and then we select our column containing years and the column containing months to be passed to the function.

At this point, selecting “OK” will cause the function to be executed for each of the listed months. These are individual queries to AI, not one large one. Each query is still subject to the 500,000 row limit, but provided that no specific month exceeds that limit, all of the data will be returned.
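In M terms, the Invoke Custom Function dialog writes a single Table.AddColumn step, something like the line below (the previous step name is illustrative and will match whatever your query calls it).

// Call the function once per row, passing the Month and Year values from that row
#"Invoked Custom Function" = Table.AddColumn(#"Changed Type", "Views", each GetViewsByMonthAndYear([Month], [Year]))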

Initially, the data is returned as a single table per month, but selecting the expand icon at the right of the column header allows us to retrieve the row values. It’s also a good idea to turn off the “Use original column name as prefix” option.
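The expansion is also a single step in M. The column names below are just a few of the pageViews columns as an example; yours will reflect whichever columns you choose to expand, and repeating the names as the new column names is the equivalent of turning off the prefix option.

// Expand the nested monthly tables into rows; keep the original column names (no prefix)
#"Expanded Views" = Table.ExpandTableColumn(#"Invoked Custom Function", "Views", {"timestamp", "name", "url", "duration"}, {"timestamp", "name", "url", "duration"})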

Selecting OK at this point displays all of the appropriate column values. We can then remove the “Year” and “Month” columns, as well as the original Page Views table that we used to create the function. We also need to set the data types for all of our columns because Power Query is unable to detect them using this approach.

Finally, we rename our combined query to Views.

We still have a single table, but there is no longer a 500,000 row limit. At this point, we can load the data into the model and build our report.


Dynamic time zone conversion using Power BI

Have you ever wanted to show your time data in different time zones simultaneously? Or to allow users of the same report to display time values in their own time zone? This article outlines one approach for doing so.

If you’ve spent much time building reports for users in more than one time zone, you’ve likely come across a few of the idiosyncrasies of Power BI and date/time values. In fact, if you’ve worked with time zone values in Power Query and you don’t happen to live in the UK, you’ve likely noticed that your reports show different time-based values when they get published to the service. This is because the Power BI service operates in the UTC time zone and evaluates all locale-based time functions in that time zone, while Power BI Desktop evaluates them according to the locale of the user.

For that reason, UTC date/time values are paramount. Luckily, most source data is available in UTC format, and it’s up to report designers to convert it as necessary. However, what happens when a single report is meant to serve users in different time zones? Alternatively, what if you want to use a single data model to serve reports in different time zones?

Time calculations can be performed both in Power Query and in DAX. However, if we want our users to be able to select their time zone from filters or slicers, we’re going to be restricted to using DAX. We’re also going to need a good source of time zone data. In the end, we need the time offset from UTC so that our calculations can adjust time values accordingly.

One good source of time zone offsets is the Time Zone Database. You can register for an API key (it’s free), and call the API directly using Power BI’s Web connector. This means that when we refresh our data, we will get up-to-date offsets when daylight saving time changes, or when there are local changes to the time zone rules.

To retrieve the time zone data, connect to it using Get Data in Power BI Desktop, then select the Web connector. If prompted, choose “Anonymous” as the authentication type, and enter the following for the URL:

http://api.timezonedb.com/v2.1/list-time-zone?key=XXXXXXXX&format=json 

where XXXXXXXX is the API key that you received when registering at the Time Zone Database.

As of October 2020, Power Query will convert the resulting JSON data into a simple table. Some of the columns are unnecessary, and we can safely remove status, message, and timestamp. I like to rename the remaining columns to something a little more user friendly. The offset value returned is in seconds, but DAX does its date calculations in days, so I create another column with the same value converted to days (the listed value divided by 86,400). It’s also a good idea to rename the query – the measures below assume that it is called Time Zones.
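If you’d rather script these steps than click through them, the equivalent M looks roughly like the following. The “Expanded Zones” step name is a placeholder for whatever your query generated, and gmtOffset is assumed to be the offset column name returned by the API.

// Drop the columns we don't need, then express the offset in days for DAX date arithmetic
#"Removed Columns" = Table.RemoveColumns(#"Expanded Zones", {"status", "message", "timestamp"}),
#"Added Offset Days" = Table.AddColumn(#"Removed Columns", "Offset (days)", each [gmtOffset] / 86400, type number)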

At this point, we can select Close and Apply to load the data into the model.

Our report will show the current time for any selected time zone, so we need to know which time zone is selected. We will assume that a slicer, filter, or row filter has been applied, and that there is only one currently selected value. We need to use an aggregate function to return the offset value, so in this case we will use MAX. We can therefore create a measure to hold the selected offset value:

Current Offset = MAX('Time Zones'[Offset (days)])

Next, we need the current time. DAX has a NOW() function that will return this value, but it is evaluated in the locale of the user, and when it runs in the service, it will return UTC time. We want this to work properly everywhere, so instead of NOW() we will use UTCNOW(), which always returns the current time in UTC. We will next create two measures – Current Time (UTC) and Current Time (Local).

Current Time (UTC) = UTCNOW()
Current Time (Local) = UTCNOW() + [Current Offset]

Now we can add a slicer to our report page, and use the “Zone” dimension. Next, we add two card visuals: one displaying the current time in the UTC time zone, and the other displaying the current time in the zone selected in the slicer. It’s a good idea to set the slicer’s selection control to “Single select” to prevent multiple zones from being selected. Every slicer selection will update the two “clocks”, and the local time should reflect the currently selected time zone.

To see row filters in action, simply open a new page, and add a table that displays the Zone name (and any other relevant dimensions) along with the Current time (Local).

Given that slicer selections and filter values can be set by users and persisted, a single report can be used by multiple users in different time zones, with each user seeing the data in their own local time zone right in the Power BI service.


Introducing the new Power BI Premium “Gen-2” Architecture

At Microsoft Ignite this week, the Power BI team unveiled the next generation of the architecture for their “dedicated capacity” customers. This architecture promises to resolve many of the issues experienced with the first generation of Premium, and opens up several possibilities moving forward.

Gen-1

The Power BI dedicated capacity SKUs (which include Premium) were introduced three years ago in order to provide a scalable pricing model and predictable performance. Unlike the Pro SKU, which is licensed per user, these SKUs represent specific sets of resources running in Azure. There are currently three SKUs in this category: the A SKU (purchased hourly from Azure), the EM SKU (for embedding), and the well-known P, or Premium, SKU.

When an organization purchases one of these SKUs, they are essentially purchasing memory, CPU cores, and storage. The isolation allows for predictability, but it also means that when the resources become over-allocated, catastrophic errors can occur. Refreshing a dataset can be particularly memory intensive, and memory usage during a refresh can be more than double what is normally required. This has an impact on normal operations during refresh, and means that capacities need to be over-sized to accommodate refresh in some cases.

Once acquired, Gen-1 capacities need to be configured, with decisions made as to which services to allow and how many resources to allocate to them. It’s not always obvious what those settings should be. I’ve also seen situations where an overzealous administrator gets so excited about the new Premium capacity that they assign hundreds of workspaces to it, and bring reports to their knees.

Gen-2 – A Different Approach

The new “Gen-2” architecture aims to deliver the isolation and predictability that dedicated capacity brings, without the drawbacks. Gen-2 is, in fact, not dedicated, as resources are drawn from a massive pool as needed. However, the performance level is guaranteed, and it is based on CPU cycles. In fact, memory is not even a consideration, apart from per-dataset overall size limits.

Memory will be allocated to refreshes as needed, ending the need to worry about refreshes failing or impacting the end-user experience. The CPU charge for a refresh will be incurred immediately, but its usage allocation will be spread out over the day. This provides consistently fast performance without requiring capacity to be over-allocated to handle refresh peaks; memory is no longer a factor in refresh at all.

This architecture also allows for automatic scale-up/down for intensive or unpredictable workloads. Administrators will no longer need to make as many decisions up front, or react to changes as frequently. If autoscale is not enabled, queries can potentially be slowed down, but a refresh kicking off can no longer make reports unavailable. The new architecture moves back toward a SaaS model, after Gen-1 sat rather close to IaaS.

In the past, services that required full isolation, like paginated reports, were not available on some of the lower-end A and EM SKUs. With this new architecture, they will be available on all of them. In fact, with the newly announced Premium per user SKU, they will even be available on a per-user basis.

This new architecture will be available to all of the “dedicated” SKUs: A, EM, and P. The preview of the new architecture for P SKUs will begin later in 2020. As an ISV, I am particularly interested in this new architecture for the A SKUs. The memory spikes caused by large refreshes have been particularly problematic for us in the past, and the Gen-2 architecture appears to be just what the doctor ordered.

I have often referred to this group of SKUs as the dedicated capacity SKUs in the past, but with this change, that term is no longer accurate. With the term Premium being so pervasive, I think we’ll just have to call them all Premium SKUs, whether they are P or otherwise.
