
Tag: SharePoint 2013

Analysis Services Power Pivot Mode in SQL Server 2016

It is fairly well known that ever since SQL Server 2012, users have been able to install SQL Server Analysis Services (SSAS) in one of two modes – either “Multidimensional and Data Mining Mode” or “Tabular Mode”. Multidimensional is the traditional SSAS mode that supports OLAP cubes, and Tabular is the newer engine that supports in-memory xVelocity (or PowerPivot) models. Any given instance can only run in one of the two modes, but multiple instances/modes can exist on the same server. SQL Server 2016 (available as CTP2 as of this writing) is adding a third mode to SSAS – PowerPivot Mode.

What’s PowerPivot mode all about? Well, to start with, it’s not new – in fact, it’s older than Tabular mode. It has been available since SQL Server 2008 R2, but was never installed the same way as the other two modes, and it has gone by a couple of other names. To my mind, this has led to a great deal of confusion that this change aims to fix. In truth, it is a special instance of Tabular mode that exists solely to support PowerPivot for SharePoint. Up until now, this instance was installed during the installation of SQL Server Power Pivot for SharePoint.

From the initial release of SQL Server 2008 R2 until (but not including) SQL Server 2012 SP1, this component needed to be installed on a SharePoint server, which is to say a server that had the basic SharePoint bits installed and had been joined to the farm. This requirement led to more than a few instances of SharePoint being installed on SQL Servers, but I digress. The correct place to install this was on a SharePoint server that was already part of the farm. When this option was selected, two things actually got installed: a special instance of Analysis Services (it didn’t really have a name at the time), and a SharePoint service application that handled automatic data refresh, among other things.

The problem with this is that it forced the SharePoint server in question to also perform all of the Analysis Services functions. It wasn’t possible to scale out the analysis capabilities separately from the SharePoint server(s) themselves. This changed with the combination of SharePoint 2013 and SQL Server 2012 SP1. SharePoint Server 2013 allows Excel Services to utilize one or more instances of what was now known as Analysis Services SharePoint Mode. This can be configured through Excel Services in the Data Model configuration options.

One can add any number of SSAS servers to the service, allowing you to scale out as far as needed. These SSAS servers must be running in “SharePoint mode”. The problem is that it’s not immediately obvious how to run an SSAS server in SharePoint mode. The way that this is done is by installing “PowerPivot for SharePoint” on a server. With SQL Server 2012 SP1, this installation no longer requires a SharePoint server. However, if it is installed on a SharePoint server, its behaviour is different. When installed on a server without SharePoint, a standalone instance of Analysis Services SharePoint mode will be installed. You can then connect to it using the Excel Services configuration shown above. However, when installed on a server with SharePoint, both the SSAS SharePoint mode instance and the service application will be installed (as with prior versions).

While this behaviour makes sense, it’s certainly not intuitively obvious what’s going on. The Data Model settings in Excel Services only refer to registering “SQL Server Analysis Services” servers, and make no mention of SharePoint mode. Regular SSAS servers will not work for this capability. On the SQL side, it’s also not obvious that “PowerPivot for SharePoint” is the installation option for SSAS SharePoint mode, or that it behaves differently depending on whether or not the server is farm joined. Finally, the name SSAS SharePoint mode isn’t particularly descriptive – the server in question exists exclusively to support PowerPivot for SharePoint.
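If you’re ever unsure which mode an existing instance is running in, one quick check is the ServerMode property exposed by Analysis Management Objects (AMO). Below is a minimal PowerShell sketch; the instance name is hypothetical, and it assumes the AMO assembly (installed with the SQL Server management tools) is present on the machine.

```powershell
# Minimal sketch: report the mode of an SSAS instance via AMO.
# The instance name below is hypothetical.
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("SSASSERVER\POWERPIVOT")

# ServerMode returns Multidimensional, Tabular, or SharePoint (SQL Server 2012 and later)
$server.ServerMode

$server.Disconnect()
```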

Given all of this, the new installation option in SQL Server 2016 becomes clear, and should go a long way to help clear up confusion. PowerPivot mode is that same specific instance of SSAS Tabular mode that was previously referred to as “Analysis Services SharePoint Mode”. To complete the picture, the language in the PowerPivot for SharePoint configuration tool has also been updated for clarity. It may not be completely consistent, but it’s easier to understand.

While none of this represents any major shift in functionality or capability, it does help in understanding the various components of the overall solution. Hopefully the language in Excel Services in SharePoint 2016 will also be updated accordingly.


Schedule Data Refresh for SSAS Connected Excel Workbooks with PowerPivot for SharePoint

Using Excel Services, SharePoint users have been able to share workbooks that are connected to back end data since SharePoint 2007. Typically, the connection is made to SQL Server or to Analysis Services, although a wide variety of sources are available. It’s also possible to publish individual components from these workbooks anywhere within the site collection through the Excel Web Access web part. Users can navigate to a dashboard page that contains all sorts of elements, including an Excel chart that is connected to back end data. Well, to be precise, it was connected to back end data the last time the workbook was saved. The workbook itself can be refreshed, but only manually.

When you open an Excel workbook in a browser through Excel Services, by default you’ll see the visualizations and any stored data precisely as they were when the workbook was last saved. If you need to see more up to date data, you can select “Refresh Connections”. If (and sometimes that’s a big if) the server and connections are set up properly, the server will fetch updated data and update the workbook.

This works well enough, but the problem is that when you, or anyone else, open the workbook again, you’ll still see the old version of the workbook and will need to manually refresh the data again. In addition, any visualizations published elsewhere on a dashboard will also continue to show old data unless manually refreshed. If the amount of data is significant, this poses a serious performance issue for the server(s). There’s also a significant usability impact, in that it’s a pretty big ask of an end user to have them constantly hitting a refresh button.

To get around this issue, one option is to set the refresh options in the data connections of the workbook. Excel Services respects these options. There are two settings that we need to be aware of: periodic refresh and refresh on open. Connection properties can be accessed within the Excel client by selecting the Data tab, choosing Connections, then highlighting the connection in question and selecting Properties.

Periodic refresh will allow the workbook to be automatically refreshed in the background while it is opened in the browser. This can be useful when the source data is changing frequently. Refresh on opening will have the greatest impact in our scenario, as it will automatically refresh the data in the workbook whenever the file is opened. This will also work with published objects (Excel Web Access web parts) – every time that the web part is opened, the data will be automatically refreshed. This solves the usability problem above because the user no longer needs to manually update the data. However, it does not affect the server load problem.
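For the curious, these options are stored inside the workbook itself. An .xlsx file is an Open XML package, and connection settings live in xl/connections.xml, where “refresh data when opening the file” maps to the refreshOnLoad attribute and periodic refresh maps to the interval attribute (in minutes). The following PowerShell sketch (with a hypothetical file path) dumps those settings for a workbook:

```powershell
# Sketch: list the refresh settings stored in a workbook's xl/connections.xml.
# An .xlsx file is just a zip package; the file path below is hypothetical.
Add-Type -AssemblyName System.IO.Compression.FileSystem

$zip   = [System.IO.Compression.ZipFile]::OpenRead("C:\Reports\Dashboard.xlsx")
$entry = $zip.Entries | Where-Object { $_.FullName -eq "xl/connections.xml" }

$reader = New-Object System.IO.StreamReader($entry.Open())
[xml]$connXml = $reader.ReadToEnd()

# refreshOnLoad = refresh when the file opens; interval = periodic refresh, in minutes
$connXml.connections.connection | Select-Object name, refreshOnLoad, interval

$reader.Dispose()
$zip.Dispose()
```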

Because the data and visualizations retain the state that they had when the workbook was last saved, this also affects search. When the search indexer runs, it will only index the data that is saved in the workbook; it has no means of refreshing the data. Finally, in addition to the load imposed on the servers by constant refreshes, if the quantity of data being refreshed is large, users can experience significant lags when loading the file. This obviously introduces another usability problem. While the refresh options in Excel are helpful, they don’t fully solve the problem. What is needed is a way to automatically open the file for editing, refresh the data, and resave it to SharePoint.

If you have ever used PowerPivot for SharePoint, you know that it can do exactly that. PowerPivot for SharePoint contains two primary elements – a specialized instance of SQL Server Analysis Services that allows users to interact with workbooks that contain embedded PowerPivot models, and a SharePoint service application that, among other things, keeps those embedded models refreshed. Using the PowerPivot Gallery (enabled when PowerPivot for SharePoint is installed), you can configure a workbook’s refresh options by clicking on the icon in the Gallery view, or by selecting “Manage PowerPivot Data Refresh” in the simple All Documents view.

Data Refresh options in PowerPivot Gallery View

Data Refresh options in All Documents View

Once configured, the PowerPivot for SharePoint service will refresh the data model in the workbook on a periodic basis (no more than once per day). The service essentially opens the workbook in edit mode, refreshes all of the data connections, and saves the workbook back to the library. If versioning is enabled, it will be saved as a new version. Unfortunately, if you’re not using a PowerPivot data model, the options are unavailable: in Gallery view, the icons simply don’t appear, and while the option is still shown in the All Documents view, selecting it results in an error.

On the surface, it would seem that using workbooks with PowerPivot is the only option for keeping large volumes of back-end data up to date in Excel visualizations. However, there is a small loophole that you can take advantage of.

The refresh function in PowerPivot for SharePoint refreshes all of the connections in a workbook. The option is unavailable when the workbook has no embedded PowerPivot model, but when it has one, the refresh touches ALL of the data connections in the workbook, whether they connect to the model, a back end SSAS server, SQL Server, or whatever. Therefore, if you want to keep your connected data refreshed, the solution is to add a dummy PowerPivot model to your workbook.

Simply open up the PowerPivot window, import some small amount of data from an external source, and save it. Once saved, the PowerPivot refresh options will appear, and you’ll be able to schedule data refresh for your workbook. You can even deselect the refresh of the source data for your dummy model, and the other connections will work just fine.

Once your workbooks are being updated automatically, your users will be presented with up-to-date data on load with no delays, all dashboard visualizations will be up to date and quick to render, and the visible data will be picked up by your search crawler. All will be well with the world.


Calculate a Due Date Based on Business Hours With Nintex Workflow

More than a few times, I have come across the requirement to calculate a due date based on business hours. Recently I needed such a capability for a help desk application that had been built with Nintex Workflow. It’s not quite as simple as it sounds – it’s the business hours part that makes it relatively tricky. After a cursory search, I found a few solutions that would calculate based on days, but none that suited this requirement, so I decided to build my own.

The help desk workflow looks up the type of issue from a list, retrieves the expected close time from a Service Level Agreement (SLA) field, and uses that to calculate the due date (along with alert times). SLAs are in business hours, and the business is measured on overdue times, so it is important that this is accurate.

The workflow itself had been built originally with Nintex Workflow 2007, and a web service had been utilized that served this purpose, but that option was no longer available in the customer’s new SharePoint 2013 environment due to authentication issues. The good news is, Nintex Workflow 2013 contains a feature called User Defined Actions (UDAs) that would allow me to get the job done.

In a nutshell, a UDA allows a workflow developer to encapsulate a workflow, have it accept input parameters, and have it output results as parameters. Then, that UDA can be used in other workflows as required. It is essentially a workflow function. I was able to successfully build a UDA to do this, and I am providing it here:

Nintex UDA to calculate a due date based on business hours

The basic logic of the UDA goes something like this. Starting with a date and a number of days, hours, and minutes, it first calculates the total number of business minutes that the issue is to be open. The number of days is multiplied by the number of business minutes in a day (derived from workflow constants), the hours by 60, and these are added to the minutes parameter.

Next, we test to see if it is a business day (determined from a list of holidays and whether the day is a Saturday or Sunday). We then check whether the date is today, and if we are within business hours. If so, we subtract the remaining minutes in the business day from the minutes outstanding. We then increment the day, and repeat the loop until there are more minutes in the current day than are outstanding. Once that happens, we add the outstanding minutes to the current date, and output it as the due date/time.
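To make the loop concrete, here is the same basic logic sketched in PowerShell rather than as a UDA. The constants and the holiday list below are simplified stand-ins for the workflow constants and the Holidays list described later – this is an illustration of the algorithm, not the UDA itself.

```powershell
# Illustration of the UDA's logic (not the UDA itself).
# WorkdayStart/WorkdayEnd mirror the workflow constants: minutes from midnight.
$WorkdayStart = 480    # 8:00 AM
$WorkdayEnd   = 1020   # 5:00 PM
$Holidays     = @([datetime]"2015-12-25", [datetime]"2016-01-01")  # stand-in for the Holidays list

function Test-BusinessDay([datetime]$date) {
    $date.DayOfWeek -notin 'Saturday', 'Sunday' -and $Holidays -notcontains $date.Date
}

function Get-DueDate([datetime]$start, [int]$days, [int]$hours, [int]$minutes) {
    # Total business minutes that the issue is to remain open
    $remaining = $days * ($WorkdayEnd - $WorkdayStart) + $hours * 60 + $minutes
    $current   = $start

    while ($true) {
        if (Test-BusinessDay $current) {
            $startOfDay = $current.Date.AddMinutes($WorkdayStart)
            $endOfDay   = $current.Date.AddMinutes($WorkdayEnd)
            if ($current -lt $startOfDay) { $current = $startOfDay }
            if ($current -lt $endOfDay) {
                # Business minutes left in the current day
                $available = ($endOfDay - $current).TotalMinutes
                if ($remaining -le $available) {
                    return $current.AddMinutes($remaining)   # due date falls on this day
                }
                $remaining -= $available
            }
        }
        # Move to the start of the next business day's window and keep looping
        $current = $current.Date.AddDays(1).AddMinutes($WorkdayStart)
    }
}

Get-DueDate -start (Get-Date) -days 1 -hours 4 -minutes 30
```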

The package above contains two files – the UDA itself, and a list template. In order to get the UDA working, you first need to create the Holidays list and two Nintex Workflow constants. The package contains a list template, Holidays.stp. I’m going to assume that adding a list template to the list template gallery is a well known procedure and not step through it here, but it needs to be done. Once added, create a new list using the Holidays template, and populate it as appropriate. It comes with 3 annual holidays already defined: Christmas, Boxing Day, and New Year’s Day. If the holiday falls on the same date every year, it only needs to be added once, with the “Annual” option checked (the year can be ignored). Otherwise, each holiday must be explicitly added.

Next, two workflow constants need to be created – WorkdayEnd and WorkdayStart. They use the Number type and represent the number of minutes from midnight at which the work day starts and ends. For example, 8:00 AM is 8 × 60 minutes from midnight, so the value is 480.


We then need to import our UDA. To do so, go to Settings – Nintex Workflow – Manage User Actions.


Next, import the UDA from the file extracted from the package above (Calculate_Due_Date.uda). Once imported, it will likely be necessary to edit a couple of the UDA actions to connect them to the list and the constants. These actions are easy to identify, as they will display a yellow alert flag.


You will not be able to publish the UDA until all of these connections are made.

Once published, you can use the UDA in your workflows. To do so, open the User Defined Actions section in your Nintex Workflow Designer, and drag the “Calculate Due Date” action onto the workflow design surface. Next, configure the action.


Supply a starting date/time, the number of days, hours, and minutes to leave it open, and then assign the Due Date parameter to your workflow variable. That’s all there is to it.

There is nothing compiled in the UDA, and you can modify it to meet your needs. Hopefully this helps out a few people looking for this capability.


This Column Name is not What You Think – Be Careful With Column Names in SharePoint 2013 / Office 365

 

I’ve been involved with SharePoint for a very, very long time – since 2001, in fact. One of the basic truths that we old timers learned very early on was that an internal list column name isn’t necessarily the same as its display name. For the most part, when a column is created, the two names are the same, but any subsequent changes to the name only affect the way that it is displayed, and the original internal name stays the same. In fact, even on creation, if a space is used in the name, the internal name replaces the space with “_x0020_” when saved. This is why it is a best practice to create all entities using no spaces, and then edit them later to add any desired spaces. It just makes life easier for people working with internal names.

Who works with internal names? Anyone that needs to extend SharePoint, really. Any explicit reference to a SharePoint column normally requires its internal name. Given that display names are editable, this makes sense. Because of this, I’ve long been careful about column names when creating them, so that they’re easy to reference later.
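If you have server access to an on premises farm, a quick way to see display names and internal names side by side is a couple of lines of PowerShell (the site URL and list name below are hypothetical):

```powershell
# Sketch: list display name vs. internal name for every column in a list.
# Run from the SharePoint Management Shell; the URL and list name are hypothetical.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web  = Get-SPWeb "http://intranet/sites/demo"
$list = $web.Lists["Banner Images"]

$list.Fields | Sort-Object Title | Select-Object Title, InternalName

$web.Dispose()
```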

Recently, I ran into a very odd behavioural change with SharePoint 2013 (and also with Office 365). My situation was that we needed to display a rotating banner on a page. I decided to use Marc Anderson’s SPServices to do the heavy lifting. Also, since I’m not that great at jQuery (or JavaScript, for that matter), I turned to Mark Rackley’s blog, which had an excellent example of an image slider using SPServices.

The way that the slider works is by reading entries from a custom list. This list contains a couple of columns named HTML and Picture. I added a third named DestinationURL to make the image clickable. I created this new list, and then edited it using the “traditional” method of accessing List settings. The HTML and DestinationURL fields were simple text fields, and the Picture field was a Picture column. Once created and populated, everything worked great.

Next up, I had to repeat this on a different site. I couldn’t save the custom list as a template because the destination site was in a different language. I therefore decided to just go ahead and create another list manually, but this time I added the fields with the new SharePoint 2013 “Quick List Editor”. If you haven’t seen it before, it shows up as a “+” symbol in the list view.


You simply click the “+” symbol, choose the column type, type the name of your column, and keep going. If you choose the “More Column Types…” option, you are presented with the traditional column creation dialog box. I needed to use this for my “Picture” column.


Once complete, everything looked great.


I then went ahead and populated the list with content, and implemented the slider code on the home page. The trouble was that it wouldn’t work. After tearing out my hair a fair bit, it appeared that while the Picture column was returning data, the HTML and DestinationURL columns were not. I found this very odd, as I had been careful with the names, and this was a new list. Additionally, the Picture type column is the more complex of the three – if any column was going to give me grief, I’d have thought it would be that one.

Finally, I decided to confirm the column names, because I don’t trust myself. The easiest way to do that is to go into List Settings, hover the mouse over the column definition, and check the destination URL that pops up, as it contains “Field=fieldname”. I then did so for my HTML field.


What? The field name is “vtwo”. Upon checking, the DestinationURL field had an equally random name (“vn1m”). However, the Picture field was “Picture”, as expected. How did this happen? And why did it happen to only two columns?

As it turns out, the Quick List Editor is the culprit. When you add columns using it, it has no mechanism for checking your column name against existing columns, so it doesn’t even try – it just assigns a random internal column name. However, when you choose “More Column Types…”, it loads the traditional column editor, which does perform such checks, and that is why my Picture field worked as expected.

After going back, deleting and recreating my two columns (and repopulating the content), everything worked as expected.

Lesson learned – when changing a list schema, stick to the list settings interface. Don’t use the Quick List Editor for adding new columns.


Reporting Services Web Part Error After SharePoint Upgrade

I recently completed an upgrade of SharePoint 2010 to 2013 for a customer that was using Reporting Services integrated mode fairly heavily. After the initial upgrade, however, I was getting the following error whenever I tried to access a page that contained a Reporting Services report viewer web part:

Web Part Error: A Web Part or Web Form Control on this Page cannot be displayed or imported. The type Microsoft.ReportingServices.SharePoint.UI.WebParts.ReportViewerWebPart,Microsoft.ReportingServices.SharePoint.UI.WebParts, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 could not be found or it is not registered as safe.

The version of Reporting Services on the originating SharePoint 2010 farm was from SQL Server 2008 R2 (version 10), and the version in the new farm was SQL Server 2012 SP1 (version 11). The message pretty clearly indicates that it was having trouble loading a version 10 instance of the web part assembly, so I opened up the web.config file for my application and, sure enough, while there was a SafeControls entry for version 11 of the assembly, there was none for version 10. There was a binding redirect for the assembly itself (redirecting from version 10 to 11), so this was pretty clearly a bug in the installation process (SQL Server install team, take note!).

This only affects content brought forward from an older SSRS system, which is what I had. One solution would be to re-add all of the web parts onto the relevant pages. That would explicitly use the new assembly references, but would also take a great deal of time. Another would be to directly edit the web.config files of all of the farm applications. While not generally considered best practice, this would solve the problem, especially since it corrects an oversight.

Hunting around, I came across my friend Chris O’Connor’s blog post on this very topic. Chris had this exact issue, and a rather large farm with multiple applications to deal with, so he built some PowerShell script to add these entries back in. The complete script is in Chris’ post.
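For reference, the shape of such a fix looks something like the sketch below. It uses the SPWebConfigModification API so that the change is reapplied whenever SharePoint rebuilds the web.config files; the web application URL is hypothetical, and Chris’s post has the complete, tested script.

```powershell
# Sketch: register the v10 ReportViewer web part as safe via SPWebConfigModification.
# The web application URL below is hypothetical.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$assembly  = "Microsoft.ReportingServices.SharePoint.UI.WebParts, Version=10.0.0.0, " +
             "Culture=neutral, PublicKeyToken=89845dcd8080cc91"
$namespace = "Microsoft.ReportingServices.SharePoint.UI.WebParts"

$webApp = Get-SPWebApplication "http://intranet"

$mod = New-Object Microsoft.SharePoint.Administration.SPWebConfigModification
$mod.Path  = "configuration/SharePoint/SafeControls"
$mod.Name  = "SafeControl[@Assembly='$assembly']"
$mod.Owner = "SSRS-v10-SafeControl-Fix"
$mod.Type  = [Microsoft.SharePoint.Administration.SPWebConfigModification+SPWebConfigModificationType]::EnsureChildNode
$mod.Value = "<SafeControl Assembly=`"$assembly`" Namespace=`"$namespace`" TypeName=`"*`" Safe=`"True`" />"

$webApp.WebConfigModifications.Add($mod)
$webApp.Update()
$webApp.Parent.ApplyWebConfigModifications()   # push the change out to every server in the farm
```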

In any event, regardless of the method used, adding the safe controls entry into the web.config file(s) fixes this particular issue. On to the next problem…..
