
Author: John White

Power BI Announces Premium-per-User licensing

The license for the rest of us

Today at Microsoft Ignite, Microsoft announced the upcoming availability of a new licensing model for Premium features in Power BI called “Premium per User”. With this model, individual users, or subsets of users, can subscribe to most of the capabilities of Power BI Premium for an additional monthly fee.

For a preliminary FAQ about this new license, read this official blog post from the Power BI team.

Three years ago, Microsoft introduced the capacity-based Premium license. Prior to this, the only license available for Power BI was the user-based Pro license, which listed at $10 per user per month. The problem with this model was that large organizations found it prohibitively expensive, especially for casual users. The capacity-based SKUs (Premium) solved that problem: an organization could purchase its own dedicated resources and then allocate them in any way it saw fit. Report consumers do not need a Pro license with this model.

While Premium went a long way toward solving the cost problem for large organizations, it introduced a significant new issue for small to mid-sized organizations: the price tag. The entry-level Premium SKU (P1) carries a list price of $5,000 US per month. At $10 per user per month for Pro, that means an organization needs more than 500 regular Power BI users before the cost of Premium starts to make sense from a sharing-only perspective.

Compounding the price tag issue, since the release of Premium, more and more features have been released that require it to function. Paginated reports, AI capabilities, deployment pipelines, and the XMLA endpoint all require Premium. A small organization may need this type of feature, but cannot justify the cost of a Premium license.

The new Premium per user (PPU) license promises to solve this problem. Premium per user is a new license that includes all of the capabilities of the Pro license, plus almost all of the features available in Premium. Details about which features are included can be found here. It will NOT include unlimited sharing. Users with this license will be able to publish content to a PPU workspace, and that content can be consumed by other users that also have a PPU license.

The next question is of course going to be “great, so how much is it?”. Therein lies the rub. Microsoft is not saying, at least not at this point. From the official blog post announcing the PPU license, Microsoft says:

Stay tuned for the official pricing announcement as we get closer to the GA timeframe.  I guarantee you won’t want to miss it

Arun Ulag, Corporate Vice President, Power BI

It does seem awfully odd to announce a new license without stating the price, but that is where things stand today. However, given that the goal of this SKU is clearly to make Premium features more accessible across the board, I fully expect it to be quite reasonable.

If, as I expect, the price is reasonable, the PPU license will unlock a lot of doors, making Premium far more widely available. In fact, I expect that PPU will become the go-to license generally. Now we simply have to wait for the price before we get too excited.


Connect to Application Insights and Log Analytics with Direct Query in Power BI

Application Insights (AI) and Log Analytics (LA) from Microsoft Azure provide easy and inexpensive ways to instrument applications. Using just an instrumentation key, any application can send operational data to AI which can then provide a rich array of tools to monitor the operation of the application. In fact, the blog that you are reading uses an Application Insights plugin for WordPress that registers each view of a page into an instance of AI in my Azure tenant.

Application Insights data can be queried directly in the Azure portal to provide rich insights. In addition, the data can be exported to Excel for further analysis, or it can be queried using Power Query in either Excel or Power BI. The procedure for using Power Query can be found in this article. That approach uses the Web connector in Power Query, which can be refreshed automatically on a regular basis. The Web connector does not, however, support Direct Query, so the latency of the data in this scenario is limited by the refresh schedule configured in Power BI. Any features that depend on Direct Query (aggregations, Automatic Page Refresh) will also not work.

If you’ve worked with AI or LA and dropped down to the query editor, you’ve been exposed to KQL, the Kusto Query Language. This is the language used by Azure Data Explorer (ADX), code-named “Kusto”. That is of course not a coincidence, as the Kusto engine powers both AI and LA.

Power BI contains a native connector for ADX, and you can configure an ADX cluster for yourself, populate it, and work with it in Power BI for both imported and Direct Query datasets. Given that ADX is what powers AI and LA, it should be possible to use this connector to query the data in AI and LA. It turns out that a new feature known as the ADX proxy allows us to do just that.

The ADX proxy is designed to allow the ADX user interface to connect to instances of AI and LA and run queries from the same screens as native ADX clusters. The entire process is described in the document Query data in Azure Monitor using Azure Data Explorer. What we are particularly interested in is the syntax used to express an AI or LA instance as an ADX cluster. Multiple variations are described in the document, but the ones that we are most interested in are here:

For LA: https://ade.loganalytics.io/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>

For AI: https://ade.applicationinsights.io/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<ai-app-name>

By substituting in your subscription ID, resource group name, and resource name, you can treat these resources as if they were ADX clusters and query them in Power BI using Direct Query. As an example, a simple query against this blog’s Application Insights data can be formed using the ADX connector:
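As a minimal sketch, a query like the following can be built with the ADX connector in Power Query (M); the subscription ID, resource group, and resource names are placeholders, the database name is assumed to be the AI resource name, and the KQL simply counts page views per day over the last 30 days.

let
    // Application Insights resource expressed as an ADX cluster via the ADX proxy (placeholder IDs)
    Cluster = "https://ade.applicationinsights.io/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/MyResourceGroup/providers/microsoft.insights/components/MyAppInsights",
    // KQL query: page views per day for the last 30 days
    Query = "pageViews | where timestamp > ago(30d) | summarize Views = count() by bin(timestamp, 1d)",
    // The database name for an AI instance accessed through the proxy is the AI resource name
    Source = AzureDataExplorer.Contents(Cluster, "MyAppInsights", Query)
in
    Source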

The query itself is entered in the Query section of the connector dialog, and the results appear in Power BI just as they would for any other source.

Once the report is built, it can be deployed to the Power BI service and refreshed using AAD credentials.

It is important to note that this method does NOT require you to configure an ADX cluster of your own. We are simply utilizing the cluster that backs every instance of AI and LA. We therefore do not have any control over performance levels, as we would in a full ADX cluster. However, if the performance is adequate (and the queries are designed appropriately), this can be a good approach to working with AI and LA data that has low-latency (near real-time) requirements.


Creating Data Driven Subscriptions for Power BI Reports

One of the features that has never made the leap from SQL Server Reporting Services (SSRS) on-premises to the cloud is data-driven subscriptions. Users can subscribe to reports, but a data-driven subscription allows individual subscriptions to be stored in a central location and parameterized, while delivering the reports to multiple locations. This article will describe a pattern for accomplishing this using SharePoint lists as the subscription store, and Power Automate as the automation tool, for a no-code solution to this requirement.

**Updated – Sept 24 2020** The new Power Automate “Export to File” Power BI actions completely remove the need to create custom connectors (outlined below). I am leaving the steps in this post because the approach can be used for other things, but these new actions make this whole process significantly easier and cheaper. The Export to file actions are NOT premium actions in Power Automate.

Requirements

In order to implement this pattern it is necessary to have access to Power Automate and to SharePoint, both of which are available in Office 365. The custom connector described below uses the Power BI REST API and the ExportTo function, which require a dedicated capacity (Premium) in Power BI to work. This pattern works with both interactive (pbix) and paginated reports. Paginated reports also require the use of a dedicated capacity. Data-driven subscriptions in SSRS were always an Enterprise feature on-premises, so this requirement should come as no surprise.

Custom Connector

Currently, there are a number of actions available for Power BI within Power Automate. Unfortunately, none of these actions can render and save a report, but that is something the Power BI REST API can do. It is possible, however, to call this API using a custom connector in Power Automate.

Chris Webb recently put together a series of articles on using the Export function in the Power BI REST API with Power Automate. The first article outlines the process of creating the connector, and includes a downloadable Swagger (OpenAPI) definition file that this pattern is based on. The second describes using it within Power Automate.
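For reference, the three REST API calls that the connector wraps look roughly like this (the workspace, report, and export IDs are placeholders); the first starts an export job, the second polls its status, and the third retrieves the rendered file, corresponding to the “Export to File”, “Export Status”, and “Get Export File” actions used later in this post.

POST https://api.powerbi.com/v1.0/myorg/groups/{workspaceId}/reports/{reportId}/ExportTo
GET https://api.powerbi.com/v1.0/myorg/groups/{workspaceId}/reports/{reportId}/exports/{exportId}
GET https://api.powerbi.com/v1.0/myorg/groups/{workspaceId}/reports/{reportId}/exports/{exportId}/file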

I won’t reinvent the wheel on the custom connector creation instructions here; I’ll just point you to the posts above to create a connector. Once the custom connector is created, it will be possible to implement data-driven subscriptions.

Subscriptions

Subscriptions can be stored just about anywhere, but for the purposes of this example we’re going to use a SharePoint list. What we want is the ability to specify the title of a report, the format we want it rendered in, and the destination. The custom connector will require the workspace ID and the report ID of the report in Power BI, in addition to the output format. We also want to be able to take advantage of parameters in paginated reports, so our subscription definition needs to contain a parameter/value pair as well.

The following SharePoint Columns will be used in a custom list:

| Column Name | Column Type |
| --- | --- |
| Title | Single line of text |
| Workspace GUID | Single line of text |
| Report GUID | Single line of text |
| File Format | Choice |
| Destination Type | Choice |
| Destination | Single line of text |
| ParameterName | Single line of text |
| ParameterValue | Single line of text |

The choices for file format are the different output formats supported by the Export API. They are CSV, DOCX, IMAGE, MHTML, PDF, PNG, PPTX, XLSX, and XML. In my case I set the default to PDF as that is the most common format, but that choice is optional.

Power Automate supports a wide variety of file storage mechanisms, so the choices for destination type really depend on which destinations you want to support. In my case, I chose OneDrive for Business, SharePoint libraries, and email recipients. Therefore, one subscription could save to SharePoint while another delivers a file to an email user. These destinations will be reflected in the Power Automate flow created below.

Once the list is created, it can be populated with a few entries. In my example below, I am rendering reports from tyGraph for Twitter. The first three are paginated reports going to each of the above destinations, and the last is an interactive (pbix) report being delivered to a SharePoint library.

The first three in the list are passing in a different parameter value to each report. Report parameters are not available to interactive reports, so these values are left empty for the interactive report.

The workspace GUID and the report GUID can be obtained by opening the report in a browser, and then inspecting the URL. This is true for both paginated and interactive reports.
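For reference, a report URL generally follows the pattern below, where the first GUID identifies the workspace and the second identifies the report (the trailing segment is the report page):

https://app.powerbi.com/groups/<workspace-guid>/reports/<report-guid>/...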

Power Automate

Chris Webb’s post referenced above describes a pattern for rendering an export file from a Power Automate flow. We will use this within the pattern here.

The flow will iterate through the subscription list, and for each item found will render the report and save it to the desired output location. It can be created with any trigger, and for our purposes we are using the Recurrence trigger.

The first action in the flow is the SharePoint Get items action. Configure it to get all of the items from the subscription list created above.

We will need a name for the output file in multiple saving steps. It’s a good idea to create a variable for the output file name for ease of maintainability. We therefore initialize “Output File Name” as a String variable next.

We then create an “Apply to Each” Action from the control group and apply it to the “value” output from the “Get items” step above. This will iterate through each of our subscriptions.

Within the loop, we next apply the “Export to File” action from the custom connector created above. Instead of hardcoding the values however, we supply the values saved in the subscription. In addition, we pass in the parameter values taken from the subscription.

The same action can be used for both interactive and paginated reports. Interactive reports will simply ignore the paginated-specific options. Many options are available here; we are just utilizing a few of them. It should also be noted that this pattern only supports a single parameter/value pair. This is for simplicity’s sake, as the action will support multiple pairs.
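To give a rough idea of how multiple pairs would be supported, the underlying ExportTo request body accepts an array of parameter values for paginated reports, along these lines (the parameter names and values are just placeholders):

{
  "format": "PDF",
  "paginatedReportConfiguration": {
    "parameterValues": [
      { "name": "Parameter1", "value": "Value1" },
      { "name": "Parameter2", "value": "Value2" }
    ]
  }
}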

It is also important to note that the settings of each of these custom actions must be changed to turn off the “Asynchronous Pattern” for the action. Without doing this, the action will fail at run time, even though it may test successfully when creating the custom connector.

In the next step, we set the value of the output file name variable that we initialized above. This will be used when we send the file to the destination.

In this case, we use the title, the current time, and the file format extension to create the file name. The exact formula is completely optional, but it’s a good idea to make the names unique to avoid overwriting past reports.
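As a rough sketch, an expression along the following lines can be used to build the name; the loop name (Apply_to_each) and the SharePoint column internal names (Title, FileFormat) are assumptions that need to match your own list and flow:

concat(items('Apply_to_each')?['Title'], '-', formatDateTime(utcNow(), 'yyyyMMdd-HHmmss'), '.', toLower(items('Apply_to_each')?['FileFormat']?['Value']))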

In the next step, we wait. Rendering takes some time, and one of the outputs above gives us an indication of how long we need to wait. In order to do so, we use the built-in “Delay” action in Power Automate.

For the value of “Count” we select the “retry-after” output from the Export to file action above. It returns the number of seconds that the service estimates the rendering of the report will take. This is just an estimate, not a guarantee, so it is possible that when we check on the status of the report, it will not be complete. Therefore, we need to repeat the check until it is. For that, we use a “Do Until” action, available from the “Controls” section of the flow.

We check for the status of the report using the “Export Status” action of our custom connector. Therefore, we add this action into our loop, configure it appropriately, and turn off the “Asynchronous Pattern” option as above. The “Export Status” action takes 3 arguments: the workspace and report GUIDs (which we get from the SharePoint list item) and exportId, which can be retrieved from the output of our “Export to File” action above as the “id” field.

The status reported as an output of this action will have 1 of 4 possible values: Succeeded, Failed, Running, or NotStarted. We want to continue checking until the status is either “Succeeded” or “Failed”. This is an advanced condition for the loop, so the Advanced option for it must be selected and the following code added:

@or(equals(body('Export_Status')?['status'], 'Succeeded'),equals(body('Export_Status')?['status'], 'Failed'))

Where Export_Status is the name of the action. Keep in mind that the language here is case sensitive.

The next action added is a condition where we inspect the value of the “status” output from the “Export Status” action. The two conditions that we look for are Running or NotStarted. If either of these is true, we need to wait for another estimated time interval. The entire loop will appear as below when configured.
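If you prefer to express that check in advanced mode rather than with the basic condition editor, it is essentially the inverse of the loop’s exit condition above (again assuming the action is named Export_Status):

@or(equals(body('Export_Status')?['status'], 'Running'),equals(body('Export_Status')?['status'], 'NotStarted'))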

Once the loop completes, we need to inspect the status field to see whether it succeeded or failed. If it failed, we do nothing, but if it succeeded, we need to retrieve the report for storage in our destination. For this, we add another condition AFTER the “Do Until” loop to inspect the status output.

Along the no branch, we add nothing, but if the output was successful, we retrieve the contents of the report with the “Get Export File” action of our custom connector. The “Get Export File” accepts the same arguments as the “Export Status” action, and has a single output – Body, which will contain the body of the report.

Once the body of the report has been retrieved, we need to send it to the destination. The destination will be determined from the “Destination Type” and “Destination” values in our subscription. For this, we use the “Switch” action from the Control section. In our case we have case branches for OneDrive for Business, SharePoint, and email. Fully configured, these branches appear as below.

Of course, your branches will reflect your possible destinations. The number of possible destinations is large and constantly evolving. In this way, this approach is much less constrained than the classic “data driven subscription” feature in SSRS which supported a fixed number of outputs.

Final Thoughts

While the classic Data Driven Subscriptions feature from SSRS Enterprise will likely not be returning, it is possible to recreate the capability with this approach. Its decoupled nature means that it is more flexible, allowing designers to add their own logic and destinations into the process.


Working with Time Zones in the Power BI Relative Time Slicer and Filter

In the April 2020 release of Power BI Desktop, a new preview feature debuted that provides an easy way to filter your report down to a rolling time period: the relative time slicer and filter. If you’ve tried this feature, you may have noticed that the results are not exactly what you might expect, unless you live in a very specific time zone. This article will show you how to design around this behaviour.

The problem is that the relative time evaluated by these two controls is always compared against UTC. It therefore assumes that the time that you provide to it is also in UTC. If you are a report author working with local time values in a single location, this behaviour may seem confusing. Both the filter and the slicer work the same way, so for our purposes we’ll just be showing the filter here.

As an example of this, I collect data from a weather sensor in 1-minute increments, and have done so for several months. I have a report that shows the temperature over time, and I want to build a report page that shows this information for the past 24 hours. The relative time filter, applied to the page, is the perfect control for doing this. It should also be noted that this data is collected in the Eastern Time Zone, which in the summer (as I write) is offset from UTC by -4 hours. The result is a report page that looks something like the below, with the filter applied.

You can see above that although the current time is 9:39 AM, and a 24 hour relative time filter is applied, we are only seeing results after 1:39 PM from the previous day. This is because the supplied value for time is local, not UTC, and 9:39 AM EDT corresponds to 1:39 PM UTC. The filter is working, but it’s not showing the results that we expected.

The solution to this problem is straightforward: we need to use a field that has been converted to UTC for the filter, while continuing to display local time in the chart. There are many ways to do this with Power BI, and the best one will depend on your model design, but if you want to make the change using DAX, you can create a calculated column with a formula similar to the one below (DAX stores datetime values in days, so an offset in hours is divided by 24):

TimeFieldUTC = TimeFieldLocal + UTCOffset/24

In my specific case above (EDT), the formula is:

ReadingTimeUTC = Time + 4/24

Another approach is to use Power Query to create a new column at refresh time. The equivalent Power Query formula for the new column is:

[TimeFieldLocal] + #duration(0,UTCOffset,0,0)

Or again, in my specific case:

[Time] + #duration(0,4,0,0)

Once you have your corresponding UTC time values, simply replace the field used by the relative time filter with the UTC field. The filter will be comparing UTC values to UTC values, and all will be well. The charts and display values can continue to use local times.

This approach does have a flaw in that the report needs to be edited to account for Daylight Saving Time/Standard Time transitions. Taking the Power Query approach allows us to use parameters, which can be changed in the service without editing the data model, but that still requires manual intervention. I would really like to see Power BI understand time zones by name and account for daylight saving time changes automatically; in other words, to call a function with a time and a time zone name, and have it return a time using a -4 offset in the summer and -5 in the winter. In the absence of that, this approach will have to do.
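As a minimal sketch of that parameter approach, the query below adds a UTC column based on an offset value; the parameter name UTCOffset and the Time column name are assumptions, and in a real query the Source step would point at your own data rather than an inline table.

let
    // In practice, UTCOffset would be a Power Query parameter so it can be changed in the service
    UTCOffset = 4,
    // Placeholder source table with a local-time column named Time
    Source = #table(type table [Time = datetime], {{#datetime(2020, 8, 1, 9, 39, 0)}}),
    // Add a UTC column by shifting the local time by the offset (in hours)
    AddUTC = Table.AddColumn(Source, "ReadingTimeUTC", each [Time] + #duration(0, UTCOffset, 0, 0), type datetime)
in
    AddUTC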


Formatting the X Axis in Power BI Charts for Date and Time

Dates and times are probably the most commonly used dimensions in Power BI charts, or any charts for that matter. Power BI contains a number of features that help to display data with respect to time. Features like the automated date hierarchy reduce the need for users to construct or connect to a date dimension table (even though they likely should), which helps casual users get to a solution more quickly. This is particularly true when using date/time on the axis of a chart. There are a lot of options for displaying this data, and they may not all be that well understood. This article will attempt to explain a number of them.

The scenario

We will be working with thermostat data taken in 5-minute increments over several months. The shape of the data is relatively simple. There are measures for outdoor temperature and heating/cooling system run times in seconds, as well as a date/time dimension named DateAndTime. An example can be seen below.

We want to plot these runtimes over time, and we will be working with a “Line and clustered column chart” to do this. The 4 different heating/cooling runtimes are used for the column values, and the outdoor temperature is used for the line values (with average being the default aggregation behaviour). This gets us to our point – what is the best way to display time on the X axis?

Plotting with DateTime

When the DateAndTime column is added to the X axis, by default it is converted to a date hierarchy. This behaviour is on by default but can be turned off (and in many cases, should be). We initially want to work with the raw datetime value, so we can control that by setting the dropdown option in the shared axis section of the chart and selecting the name of the dimension instead of “Date Hierarchy”.

Doing this with our data results in a rather messy looking chart.

The data here is far too granular to display all of it across all of the available times. By default, using a date or datetime dimension on an X axis will display continuously like this. However, we can control this behaviour through an X axis property on the chart itself.

Opening up the chart display properties and then opening the X axis section reveals that “Continuous” is selected for the Type property. This is the display mode that scales the axis to include all available date/time values. The other option is “Categorical”, which displays each date/time value as a discrete data element. Changing the axis type property to categorical results in the chart appearing as follows.

The continuous and categorical options are only available for date and date/time dimensions. If the dimension in the X axis is not one of these types, the “Type” option will not appear.

Using the categorical option, each and every date and time value is displayed on the X axis, and the data values are clearly resolved. However, in our case there are far too many values to make this useful; finding what we’re after would take a lot of scrolling. It’s best in this case (and in most cases) to view the data in aggregate, which is to say totals and averages across different time periods: years, months, days, etc. This is where the date hierarchy shows its value.

Formatting with Date Hierarchy

Switching our “DateAndTime” dimension back to “Date Hierarchy” immediately changes the chart to show all of the data aggregated by year. It is also possible to see the detail of the hierarchy in the Shared axis property for the chart.

The top level of the hierarchy is shown, which is all of the data aggregated to the Year level.

I rarely use the “Quarter” level of the hierarchy, so I simply remove it, and have done so for the remainder of the operations. It can be removed simply by selecting the x beside it in the Shared axis property box.

If we want to see our data in a more granular fashion, we have three options: Drill down, Go down one level, and Expand all down one level, which are the icons listed left to right in the highlighted section of the image above. Drilling down is meant to be interactive. With Drill down selected, clicking on a data point in the chart will go down to the next level in the chart for that data point. It replaces the standard cross-filtering or cross-highlighting that would normally happen when selecting a data point. For example, with drill down turned on, clicking on any column for 2019 results in the chart below.

Notice that the X axis now shows month names instead of the year. This chart is showing our measures by month, but only for the year 2019. The up arrow in the upper left corner can be selected at any time to go back up to year, or selecting one of the months will drill down further to show the values for all of the days in the selected month.

The second option, Go down one level, behaves in a similar fashion, but it does not filter to the year; it simply takes the chart down one level in the hierarchy without first filtering by year. This could be useful when comparing months to each other in aggregate. The X axis changes in the same way as drill down, showing the values for that level of the hierarchy.

If we want to show the data more granularly than the year level, but we don’t want to aggregate all of the same month names together, we can use the third option – Expand all down one level, or as I like to call it, “drill down and out”. Selecting this option results in the chart below.

We can see the data broken out by both year and month in order. This is a much richer view and still understandable. For example, you can see at a glance that 2018 was generally warmer than 2019 from the amount of cooling necessary. The title is automatically changed (if it wasn’t set manually) to reflect this configuration, and the X axis also shows both year and month.

In this particular example the X axis is still readable, but drilling down and out more than one level can be cumbersome and very wordy. At the same time, you do need to know which year, month, and day a particular data point pertains to. The X axis formatting pane has some further options that help with this. By default, all of the hierarchy levels are concatenated together when a hierarchy is expanded in this way. Going into the chart format tab and selecting the X axis, we can see an option for this: “Concatenate Labels”. Turning this off presents each level categorically on different lines. This, to my mind, is much easier to read, and is the configuration that I use.

The concatenate labels option only takes effect when a hierarchy is expanded past its root level.

The examples used above utilize a “Line and clustered column chart” but pertain to all of the standard visuals that employ an X and Y axis.
