

Dynamic time zone conversion using Power BI

Have you ever wanted to show your time data in different time zones simultaneously? Or allow users of the same report to display time values in their own time zone? This article outlines one approach for doing so.

If you’ve spent much time building reports for users in more than one time zone, you’ve likely come across a few of the idiosyncrasies of Power BI and date/time values. In fact, if you’ve worked with time zone values in Power Query (and you don’t happen to live in the UK), you’ve likely noticed that your reports show different time-based values once they are published to the service. This is because the Power BI service operates in the UTC time zone and evaluates all locale-based time functions in that time zone, whereas Power BI Desktop evaluates them according to the locale of the user.

For that reason, UTC date/time values are paramount. Luckily, most source data is available in UTC format, and it’s up to report designers to convert it as necessary. However, what happens when a single report is meant to serve users in different time zones? Alternatively, what if you want to use a single data model to serve reports in different time zones?

Time calculations can be performed both in Power Query and in DAX. However, if we want our users to be able to select their time zone from filters or slicers, we’re going to be restricted to using DAX. We’re also going to need a good source of time zone data. In the end, we need the time offset from UTC so that our calculations can adjust the time accordingly.

One good source of time zone offsets is the Time Zone Database. You can register for an API key (it’s free) and call the API directly using Power BI’s web connector. This means that when we refresh our data, we get up-to-date offsets when daylight saving time changes or when there are local changes to the time zone rules.

To retrieve the time zone data, connect to it using Get Data in Power BI Desktop, then select the Web connector. If prompted, choose “Anonymous” as the authentication type, and enter the following for the URL:

http://api.timezonedb.com/v2.1/list-time-zone?key=XXXXXXXX&format=json 

Where key is the API key that you received when registering at the Time Zone Database.

As of October 2020, Power Query will then convert the resulting JSON data into a simple table. Some of the columns are unnecessary; status, message, and timestamp can be safely removed. I like to rename the remaining columns to something a little more user friendly. The offset value is returned in seconds, but DAX does its date calculations in days, so I create another column with the same value converted to days (the listed value divided by 86,400). It’s also a good idea to rename the query. When complete, your table should look something like below.
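
For reference, the query that the UI generates might look something like the minimal M sketch below. It assumes the API returns a zones list with zoneName and gmtOffset fields and that you substitute your own API key; the exact steps will differ depending on what you do in the Power Query editor.

let
    // Illustrative sketch only: replace XXXXXXXX with the API key you registered for
    Source = Json.Document(Web.Contents("http://api.timezonedb.com/v2.1/list-time-zone?key=XXXXXXXX&format=json")),
    // The zones field holds one record per time zone
    Zones = Table.FromRecords(Source[zones]),
    // Keep only the zone name and the offset (in seconds)
    Kept = Table.SelectColumns(Zones, {"zoneName", "gmtOffset"}),
    Renamed = Table.RenameColumns(Kept, {{"zoneName", "Zone"}, {"gmtOffset", "Offset (seconds)"}}),
    // DAX works in days, so convert the offset from seconds to days
    #"Time Zones" = Table.AddColumn(Renamed, "Offset (days)", each [#"Offset (seconds)"] / 86400, type number)
in
    #"Time Zones"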

At this point, we can select Close and Apply to load the data into the model.

Our report will show the current time for any selected time zone, so we need to know which time zone is selected. We will assume that a filter, slicer, or row filter has been applied and that there is only one currently selected value. We need to use an aggregate function in order to return the offset value, so in this case we will use MAX. We can therefore create a calculated measure to hold the selected offset value:

Current Offset = MAX('Time Zones'[Offset (days)])
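
As a variation (not part of the original walkthrough), SELECTEDVALUE can do the same job while falling back to an offset of zero, i.e. UTC, whenever no single zone is selected:

Current Offset = SELECTEDVALUE('Time Zones'[Offset (days)], 0)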

Next, we need the current time. DAX has a NOW() function that will return this value, but it is returned in the locale of the user; when it runs in the service, it returns UTC time. We want this to work properly everywhere, so instead of NOW() we will use UTCNOW(), which always returns the current time in UTC. We will next create two calculated measures – Current Time (UTC) and Current Time (Local).

Current Time (UTC) = UTCNOW()
Current Time (Local) = UTCNOW() + [Current Offset]
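
If you want more control over how the cards display these values, a formatted-text variant (a sketch, not required for the approach) could look like this:

Current Time (Local, Text) = FORMAT(UTCNOW() + [Current Offset], "yyyy-mm-dd hh:nn:ss")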

Now we can add a slicer to our report page and use the “Zone” dimension. Next, we add two card visuals: one displaying the current time in the UTC time zone, and the other displaying the current time in the zone selected in the slicer. It’s a good idea to set the slicer’s selection control to “Single select” to prevent multiple zones from being selected. Every slicer selection will update the two “clocks”, and the local time should reflect the currently selected time zone.

To see row filters in action, simply open a new page and add a table that displays the zone name (and any other relevant dimensions) along with the Current Time (Local) measure.

Because slicer selections and filter values can be selected by users and persisted, a single report can be used by multiple users in different time zones, with each user seeing the data in their own local time zone right in the Power BI service.


Introducing the new Power BI Premium “Gen-2” Architecture

At Microsoft Ignite this week, the Power BI team unveiled the next generation of the architecture for their “dedicated capacity” customers. This architecture promises to resolve many of the issues experienced with the first generation of Premium, and opens up several possibilities moving forward.

Gen-1

The Power BI dedicated capacity SKUs (which include Premium) were introduced three years ago to provide a scalable pricing model and predictable performance. Unlike the Pro SKU, which is licensed per user, these SKUs represent specific sets of resources running in Azure. There are currently three SKUs in this category: the A SKU (purchased hourly from Azure), the EM SKU (for embedding), and the well-known P, or Premium, SKU.

When an organization purchases one of these SKUs, they are essentially purchasing memory, CPU cores, and storage. The isolation allows for predictability, but it also means that when the resources become over-allocated, catastrophic errors can occur. Refreshing a dataset can also be particularly memory intensive, and memory usage during a refresh can grow to more than double what is normally required. This has an impact on normal operations during refresh, and means that capacities sometimes need to be over-sized to accommodate it.

Once acquired, Gen-1 capacities need to be configured, with decisions made as to which services to allow and how many resources to allocate to them. It’s not always obvious what those settings should be. I’ve also seen situations where an overzealous administrator gets so excited about the new Premium capacity that they assign hundreds of workspaces to it and bring reports to their knees.

Gen-2 – A Different Approach

The new “Gen-2” architecture aims to deliver the isolation and predictability that dedicated capacity brings, without the drawbacks. Gen-2 is, in fact, not dedicated, as resources are drawn from a massive pool as needed. However, the performance level is guaranteed, and it is focused on CPU cycles. In fact, memory is not even a consideration, apart from per-dataset overall size limits.

Memory will be allocated to refreshes as needed, ending the need to worry about refreshes failing or impacting end-user experiences. The CPU charge for a refresh will be incurred immediately, but its usage allocation will be spread out over the day. This provides consistent, fast performance without requiring capacity to be over-allocated to handle refresh peaks. Memory is no longer a factor for refresh whatsoever.

This architecture also allows for automatic scale up/down for intensive or unpredictable workloads. Administrators will no longer need to make so many decisions up front, or react to changes as frequently. If autoscale is not enabled, queries can potentially be slowed down, but a refresh kicking off can no longer make reports unavailable. The new architecture moves back toward a SaaS model, after Gen-1 sat rather close to IaaS.

In the past, services that required full isolation, like paginated reports, were not available on some of the lower-end A and EM SKUs. With this new architecture, they will be available on all of them. In fact, with the newly announced Premium per user SKU, they will even be available on a per-user basis.

This new architecture will be available to all of the “dedicated” SKUs, A, EM and P. The preview of the new P SKU will begin later in 2020. As an ISV, I am particularly interested in this new architecture for the A SKUs. The memory spikes caused by large refreshes have been particularly problematic for us in the past. Gen-2 architecture appears to be just what the doctor ordered.

I have often referred to this group of SKUs as the dedicated capacity SKUs in the past, but with this change, that term is no longer accurate. With the term Premium being so pervasive, I think we’ll just have to call them all Premium SKUs, whether they are P or otherwise.


Power BI Announces Premium-per-User licensing

The license for the rest of us

Today at Microsoft Ignite, Microsoft announced the upcoming availability of a new licensing model for Premium features in Power BI entitled “Premium per User”. With this model, individual users, or subsets of users can subscribe to most of the capabilities of Power BI Premium for an additional monthly fee.

For a preliminary FAQ about this new license, read this official blog post from the Power BI team.

Three years ago, Microsoft introduced the capacity-based Premium license. Prior to this, the only license available for Power BI was the user-based Pro license, which listed at $10 per user per month. The problem with this model was that large organizations found it to be prohibitively expensive, especially for casual users. The capacity-based SKUs (Premium) solved that problem: an organization could purchase its own dedicated resources and then allocate them in any way it saw fit. Report consumers do not need a Pro license with this model.

While Premium went a long way toward solving the cost problem for large organizations, it introduced a significant new issue for small to mid-sized organizations: the price tag. The entry-level Premium SKU (P1) carries a list price of $5,000 US per month. This means that an organization needs more than 500 regular Power BI users before the cost of Premium starts to make sense from a sharing-only perspective.

Compounding the price tag issue, more and more features have been released since Premium launched that require it to function. Features like paginated reports, AI capabilities, deployment pipelines, and the XMLA endpoint all require Premium. A small organization may need this type of feature but cannot justify the spend on a Premium license.

The new Premium per user (PPU) license promises to solve this problem. Premium per user will be a new license that will include all of the capabilities of the Pro license, but will also include almost all of the features available in Premium. Details about which features are available can be found here. It will NOT include unlimited sharing. Users with this license will be able to publish content to a PPU workspace, and that content can be consumed by other users that have a PPU license.

The next question is of course going to be “great, so how much is it?”. Therein lies the rub. Microsoft is not saying, at least not at this point. From the official blog post announcing the PPU license, Microsoft says:

Stay tuned for the official pricing announcement as we get closer to the GA timeframe.  I guarantee you won’t want to miss it

Arun Ulag, Corporate Vice President, Power BI

It does seem awfully odd to announce a new license without stating the price, but that’s the situation that stands today. However, given that the goal of this SKU is clearly to make Premium features more accessible across the board, I fully expect it to be quite reasonable.

If, as I expect, the price is reasonable, the PPU license will unlock a lot of doors, making Premium far more widely available. In fact, I expect that PPU will become the go-to license generally. Now, we simply have to wait for the price, before we get too excited.


Connect to Application Insights and Log Analytics with Direct Query in Power BI

Application Insights (AI) and Log Analytics (LA) from Microsoft Azure provide easy and inexpensive ways to instrument applications. Using just an instrumentation key, any application can send operational data to AI which can then provide a rich array of tools to monitor the operation of the application. In fact, the blog that you are reading uses an Application Insights plugin for WordPress that registers each view of a page into an instance of AI in my Azure tenant.

Application Insights data can be queried directly in the Azure portal to provide rich insights. In addition, the data can be exported to Excel for further analysis, or it can be queried using Power Query in either Excel or Power BI. The procedure for using Power Query can be found in this article. That approach uses the Web connector in Power Query, which can be refreshed automatically on a regular basis. The Web connector does not, however, support Direct Query, so the latency of the data in this scenario is limited by the refresh schedule configured in Power BI. Any features that depend on Direct Query (aggregations, automatic page refresh) will also not work.

If you’ve worked with AI or LA and dropped down to the query editor, you’ve been exposed to KQL – the Kusto Query Language. This is the language used by Azure Data Explorer (ADX), code-named “Kusto”. This is of course not a coincidence, as the Kusto engine powers both AI and LA.

Power BI contains a native connector for ADX, and you can configure an ADX cluster for yourself, populate it, and work with it in Power BI for both imported and Direct Query datasets. Given that ADX is what powers AI and LA, it should be possible to use this connector to query the data in AI and LA as well. It turns out that the introduction of a new feature known as the ADX proxy allows us to do just that.

The ADX proxy is designed to allow the ADX user interface to connect to instances of AI and LA and run queries from the same screens as native ADX clusters. The entire process is described in the document Query data in Azure Monitor using Azure Data Explorer. The document explains the process, but what we are particularly interested in is the syntax used to express an AI or LA instance as an ADX cluster. Multiple variations are described in the document, but the ones that we are most interested in are here:

For LA: https://ade.loganalytics.io/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>

For AI: https://ade.applicationinsights.io/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<ai-app-name>

By substituting in your subscription ID, resource group name, and resource name, you can treat these resources as if they were ADX clusters, and query them in Power BI using Direct Query. As an example, a simple query on this blog can be formed using the ADX connector:

And the result will appear as:

The precise query is provided in the query section of the connector above.
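
Since that screenshot isn’t reproduced here, the M sketch below shows roughly what the ADX connector produces for such a connection. The subscription ID, resource group, app name, and the KQL query itself are all placeholders for illustration, and the exact call your connector generates may differ.

let
    // Placeholder values: substitute your own subscription, resource group, and AI app name
    Cluster = "https://ade.applicationinsights.io/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<ai-app-name>",
    // An illustrative KQL query: page views per day over the last 30 days
    Query = "pageViews | where timestamp > ago(30d) | summarize Views = count() by bin(timestamp, 1d)",
    Source = AzureDataExplorer.Contents(Cluster, "<ai-app-name>", Query)
in
    Source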

Once the report is built, it can be deployed to the Power BI service and refreshed using AAD credentials.

It is important to note that this method does NOT require you to configure an ADX cluster of your own. We are simply utilizing the cluster provided to all instances of AI and LA. We therefore do not have any control over performance levels, as we would with a full ADX cluster. However, if the performance is adequate (and the queries are designed appropriately), this can be a good approach to working with AI and LA data that has low-latency (near real-time) requirements.


Creating Data Driven Subscriptions for Power BI Reports

One of the features that has never made the leap from SQL Server Reporting Services (SSRS) on-premises to the cloud is data-driven subscriptions. Users can subscribe to reports, but a data-driven subscription allows individual subscriptions to be stored in a central location and parameterized, while delivering the reports to multiple locations. This article will describe a pattern for accomplishing this using SharePoint lists as the subscription store, and Power Automate as the automation tool, for a no-code solution to this requirement.

**Updated – Sept 24 2020** The new Power Automate “Export to File” Power BI actions completely remove the need to create a custom connector (outlined below). I am leaving the steps in this post because the approach can be used for other things, but these new actions make this whole process significantly easier and cheaper. The “Export to file” actions are NOT premium actions in Power Automate.

Requirements

In order to implement this pattern it is necessary to have access to Power Automate and to SharePoint, both of which are available in Office 365. The custom connector described below uses the Power BI Rest API and the ExportTo function, which require a dedicated capacity (Premium) in Power BI to work. This pattern works with both interactive (pbix) and paginated reports. Paginated reports also require the use of a dedicated capacity. Data-driven subscriptions in SSRS were always an Enterprise feature on premises, so this requirement should come as no surprise.

Custom Connector

Currently, there are a number of actions available for Power BI within Power Automate. Unfortunately, none of these actions has the ability to render and save a report, but that is something that the Power BI REST API can do. It is possible, however, to call this API using a custom connector in Power Automate.

Chris Webb recently put together a series of articles on using the Export function in the Power BI REST API with Power Automate. The first article outlines the process of creating the connector, as well as a downloadable Swagger (Open API) definition file that this pattern is based on. The second describes using it within Power Automate.

I won’t re-invent the wheel with custom connector creation instructions here; the blogs above walk through creating the connector. Once the custom connector is created, it will be possible to implement data-driven subscriptions.

Subscriptions

Subscriptions can be stored just about anywhere, but for the purposes of this example we’re going to use a SharePoint list. What we want is the ability to specify the title of a report, the format we want it rendered in, and the destination. The custom connector will require the workspace ID and the report ID of the report in Power BI, in addition to the output format. We also want to be able to take advantage of parameters in paginated reports, so our subscription definition needs to contain a parameter/value pair as well.

The following SharePoint Columns will be used in a custom list:

| Column Name | Column Type |
| --- | --- |
| Title | Single line of text |
| Workspace GUID | Single line of text |
| Report GUID | Single line of text |
| File Format | Choice |
| Destination Type | Choice |
| Destination | Single line of text |
| ParameterName | Single line of text |
| ParameterValue | Single line of text |

The choices for file format are the different output formats supported by the Export API. They are CSV, DOCX, IMAGE, MHTML, PDF, PNG, PPTX, XLSX, and XML. In my case I set the default to PDF as that is the most common format, but that choice is optional.

Power Automate supports a wide variety of file storage mechanisms, so the choices for destination type really depend on which destinations you want to support. In my case, I chose OneDrive for Business, SharePoint libraries, and email recipients. Therefore, one subscription could save to SharePoint while another delivers a file to an email user. These destinations will be reflected in the Power Automate flow created below.

Once the list is created, it can be populated with a few entries. In my example below, I am rendering reports from tyGraph for Twitter. Three are paginated reports going to each of the above destinations, and the last is an interactive (pbix) report being delivered to a SharePoint library.

The first three in the list are passing in a different parameter value to each report. Report parameters are not available to interactive reports, so these values are left empty for the interactive report.

The workspace GUID and the report GUID can be obtained by opening the report in a browser, and then inspecting the URL. This is true for both paginated and interactive reports.

Power Automate

Chris Webb’s post referenced above describes a pattern for rendering an export file from a Power Automate flow. We will use this within the pattern here.

The flow will iterate through the subscription list, and for each item found will render the report and save it to the desired output location. It can be created with any trigger, and for our purposes we are using the Recurrence trigger.

The first action in the flow is the SharePoint Get items action. Configure it to get all of the items from the subscription list created above.

We will need a name for the output file in multiple saving steps. It’s a good idea to create a variable for the output file name for ease of maintainability. We therefore initialize “Output File Name” as a String variable next.

We then create an “Apply to Each” Action from the control group and apply it to the “value” output from the “Get items” step above. This will iterate through each of our subscriptions.

Within the loop, we next apply the “Export to File” action from the custom connector created above. Instead of hardcoding the values however, we supply the values saved in the subscription. In addition, we pass in the parameter values taken from the subscription.

The same action can be used for both interactive and paginated reports. Interactive reports will simply ignore paginated specific options. Many options are available here, we are just utilizing a few of them. It should also be noted that this pattern only supports a single parameter/value pair. This is for simplicity’s sake, as the action will support multiple pairs.

It is also important to note that the settings of each of these custom actions must be changed to turn off the “Asynchronous Pattern” for the action. Without doing this, the action will fail at run time, even though it may test successfully when creating the custom connector.

In the next step, we set the value of the output file name variable that we initialized above. This will be used when we send the file to the destination.

In this case, we use the title, the current time, and the file format extension to create the file name. The exact formula is completely optional, but it’s a good idea to make the names unique to avoid overwriting past reports.
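
As an illustration only, an expression along the following lines could build such a name. The loop name (“Apply_to_each”) and the column references are assumptions that depend on how your own flow and SharePoint list are set up.

concat(items('Apply_to_each')?['Title'], ' - ', utcNow('yyyy-MM-dd-HHmmss'), '.', toLower(items('Apply_to_each')?['FileFormat']?['Value']))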

In the next step, we wait. Rendering takes some time, and one of the outputs above gives us an indication of how long we need to wait. To do so, we use the built-in “Delay” action in Power Automate.

For the value of “Count” we select the “retry-after” output from the Export to file action above. It returns the number of seconds that the service estimates for the rendering of the report. This is just an estimate, and no guarantee, so it is possible that when we check on the status of the report, it will not be complete. Therefore, we need to repeat checks until it is. For that, we use a “Do Until” Action, available from the “Controls” section of the flow.

We check on the status of the report using the “Export Status” action of our custom connector. We therefore add this action into our loop, configure it appropriately, and turn off the “Asynchronous Pattern” option as above. The “Export Status” action takes three arguments: the workspace and report GUIDs (which we get from the SharePoint list item) and the exportId, which can be retrieved from the “id” field in the output of our “Export to File” action above.

The status reported as an output of this action will have one of four possible values: Succeeded, Failed, Running, or NotStarted. We want to continue checking as long as the status is neither “Succeeded” nor “Failed”. This is an advanced condition for the loop, so the Advanced option must be selected and the following code added:

@or(equals(body('Export_Status')?['status'], 'Succeeded'),equals(body('Export_Status')?['status'], 'Failed'))

Where Export_Status is the name of the action. Keep in mind that the language here is case sensitive.

The next action added is a condition where we inspect the value of the “status” output from the “Export Status” action. The two conditions that we look for are Running and NotStarted. If either of these is true, we need to wait for another estimated time interval. The entire loop will appear as below when configured.

Once the loop completes, we need to inspect the status field to see if it was successful, or if it failed. If it failed, we do nothing, but if it succeeds, we need to retrieve the report for storage in our destination. For this, we will add another condition AFTER the “Do Until” loop to inspect the status output.

Along the no branch, we add nothing, but if the output was successful, we retrieve the contents of the report with the “Get Export File” action of our custom connector. The “Get Export File” accepts the same arguments as the “Export Status” action, and has a single output – Body, which will contain the body of the report.

Once the body of the report has been retrieved, we need to send it to the destination. The destination will be determined from the “Destination Type” and “Destination” values from our subscription. For this, we use the “Switch” action from the Control section. In our case we have case branches for OneDrive for Business, SharePoint, and eMail. Fully configured, these branches appear as below.

Of course, your branches will reflect your possible destinations. The number of possible destinations is large and constantly evolving. In this way, this approach is much less constrained than the classic “data driven subscription” feature in SSRS which supported a fixed number of outputs.

Final Thoughts

While the classic data-driven subscriptions feature from SSRS Enterprise will likely not be returning, it is possible to recreate the capability with this approach. Its decoupled nature means that it is more flexible, allowing designers to add their own logic and destinations into the process.
