Suggestion to Improve the Pivot Table Experience

This is a special post to discuss a suggestion to improve the Pivot Table experience, especially for Power Pivot users.

This week I’m at the 2015 MVP Summit in Redmond, WA.  It’s a trip I’m lucky enough to make every year, and certainly one of the annual events that I look forward to the most.  It’s a chance to reunite with my friends in the global community of Excel experts, as well as make some new friends there too.  In addition, we get the opportunity to meet with the Microsoft Excel engineers, give our feedback, and talk about the things that are/aren’t working in the program.

Of course, this doesn’t mean that they can or will implement the suggestions we have.  Excel is a massive program, and every feature change can cause bigger issues elsewhere.  But they do listen, and they do want this product to be the best it can be.  Like every company, they have to work out what they can afford to do, and where the best investments are for their limited resources.

In the spirit of the summit, I thought I’d share one of the ideas I have that I think would be really beneficial to Power Pivot users.  Maybe it makes the radar, maybe it doesn’t, but I think it would be a really useful change.  I’m fairly certain it could be implemented without causing any issues with other features in the product.

The Issue

For those working with Power Pivot, you know the power of DAX.  This leads to creating many different DAX measures, each of which lands in the columns of the Pivot Table.  This is awesome, but it brings up a challenge with the usability of the Pivot Table field list:


Back when we just dropped singular fields into the Values area, things weren’t so bad.  I generally only ran with a few fields, and I didn’t feel super constrained by the size of the window.  Yes, I overran the limit on occasion, but it wasn’t a big deal.

With Power Pivot, things have changed.  I have so much more flexibility to write the DAX measures I need, which leads to many more columns being defined.  If you think about things like forecasting an annual cash flow statement, I’ll write at least 13 different measures (one for each month), plus a total.  And that’s just one scenario.  For a regular financial statement it’s the same thing: Actual, Budget, Variances, Year to date Actuals, Year to date Budgets, and so on.  Again, it’s not uncommon to see a statement with over 12 columns.

This proliferation of measures leads us to the issue… the Values area of the Pivot Table field list is too small today.  It only shows 3-4 visible fields at a time.  Trying to move a measure into the right place is a real pain, especially if you add a new measure to the bottom and have to drag it up.  I’m sure you’ve had massive “overscroll” problems where the thing seems to speed up to mach 5 JUST as you are trying to move it up that one last row…

The Slightly Better View

The Pivot Table field list has an alternate view called “Field Section and Areas Section Side-By-Side”.



This is a bit better, as we can at least see more fields in the area on the left.  But that’s only helpful for scrolling and finding the fields we need, not placing them on the Pivot:


You see?  I’ve still only got three rows showing (four when my Excel is maximized on screen).

But here’s the thing…

When I’m building my Pivot, I rarely end up putting anything in the Filters area, as I tend to use Slicers.  I might have a few fields in there that I don’t want users messing with (I hide the top rows of the Pivot Table), but generally I’m looking at between zero and two fields in there.

And when I build my Rows and Columns, I tend to drag them onto the Pivot and call it a day.  I could use more space on occasion when I’m layering on my Row fields, but Columns are usually sufficient.  Especially now that I’m writing DAX formulas.  The measure gets dragged into the Values area, and doesn’t need anything in the Columns area at all.  It’s partly for this reason that the small size of the Values area is killing me.  The old logic for how the Pivot was built has essentially changed, with the description moving from the Columns area to the Values area.

What that means is that I’ve got a ton of wasted whitespace in my Filters and Columns area.  So why not reclaim that whitespace?

Suggestion to Improve the Pivot Table Experience

So here’s my suggestion to improve the Pivot Table experience: modify the “Field Section and Areas Section Side-By-Side” view as follows (excuse the rough mockup…)


The key changes here are really about the arrows to the right of the Filters, Rows, Columns and Values areas.  These are the same arrows as used in the Field List on the left, where the white arrow pointing to the right shows the area collapsed, and the black arrow shows the area expanded.

To be clear, the proportions aren’t correct here, but my thought is that the expanded areas consume an equal share of the remaining whitespace.  So if all four areas are expanded, they each get a 25% share of the remaining space, which is what we see in the current implementation.

But collapse one field (let’s say Filters), and each remaining area expands, as it now gets a 33% share of the remaining space.  Collapse two (as I’ve shown above), and the remaining two get 50% each.  Collapse three, and all remaining whitespace goes to the final area:


This would be fantastic, as it would let me build my Pivot much more easily.  I’d be able to see what I’m working with, especially on Pivot Tables with higher levels of Row or Values fields.

I didn’t scope this in, but it would probably also be a good idea to append a number in parentheses to each area header, indicating how many fields exist in each area.

Naturally, when you’re first building a Pivot, it should open with all areas expanded to a 25% share… but bonus points if there is a way to save the default view for a configured Pivot.  The reason I say this is that my guess is that 75% of the time when I’m modifying a Pivot it’s the Values area I’m working in, 20% of the time it’s Rows, 4% Columns, and the remaining 1% of the time I’m modifying Filters.  Respecting that others have different uses though, the ability to choose which areas are expanded/collapsed by default on an already existing Pivot would be incredible.

At any rate, that’s my idea.  Here’s hoping a program manager on the Excel team thinks there’s merit to it and starts to look at the feasibility.  Feel free to share your thoughts on the subject below.  :)

If you like this idea...

Please throw it some votes at Excel UserVoice.  The more votes it gets there, the more likely it will be implemented!

Power Query Errors: Please Rebuild This Data Combination

A friend sent me this today.  He was a bit frustrated, as he got a message from Power Query that read “Formula.Firewall: Query 'QueryName' (step 'StepName') references other queries or steps and so may not directly access a data source. Please rebuild this data combination.”

What Does This Mean?

This message is a bit confusing at first.  For anyone who has worked with Power Query for a bit, you know that it’s perfectly acceptable to merge two queries together.  So why does Power Query throw this error for some merges?


The answer is that you cannot combine an external data source with another query.

Looking At Some Code

Let’s have a quick look at the code that my friend sent:


Notice the issue here?  The Merge step near the end references a query called DimShipper.  No issue there.  But it tries to merge that to an external data source which is called in the first line.

This is actually a bit irritating.  But let’s break this down a bit further.  Looking at the first line, it is made up as follows:


The Filename() portion is calling a function to return the desired filename from a workbook table (based somewhat on this approach).  We already know that works, but we also know that this is definitely pulling data from an external workbook (even if the file path is for the current workbook!)  And to be fair, this would be the same if we were pulling from a web page, database or whatever.  As long as it’s an external source being combined with another query, we’re going to get smacked.
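To make the pattern concrete, a query that trips this error looks something like the following.  This is only a rough sketch of the shape of the problem, not the actual code from the workbook; the sheet name, the ShipperID key and the step names are all assumptions on my part:

let
    // external data source, located via the Filename() function
    Source = Excel.Workbook(File.Contents(Filename())),
    Purchase_Sheet = Source{[Item="Purchase",Kind="Sheet"]}[Data],
    // ...merged against another query inside the same query
    Merge = Table.NestedJoin(Purchase_Sheet, {"ShipperID"}, DimShipper, {"ShipperID"}, "DimShipper", JoinKind.LeftOuter)
in
    Merge

The external call (File.Contents, fed by Filename()) and the reference to the DimShipper query both live in the same query, and that combination is exactly what the firewall objects to.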

So it’s suddenly starting to become a bit clearer what Power Query is complaining about (even if we think it’s frustrating that it does complain about it!)

Let’s “rebuild this data combination”

Right.  Who cares why; we care about how to deal with it.  No problem.  It’s actually fairly straightforward.  Here’s how I dealt with it:

  • Right click the Query in Excel’s Query window
  • Choose Edit

I’m now in the Power Query window.  On the left side, I found the Queries pane and clicked the arrow to expand it:


Duplicate the Existing Query

The query in question here is “Purchase”…

  • Right click “Purchase” and choose Duplicate
  • Immediately rename the query to PurchaseList
  • Go to View –> Advanced Editor
  • Select everything from the comma on the second line down to the last row of the query:


  • Press Delete
  • Change the final line from “Merge” to “Purchase_Sheet”

Perfect… so my new PurchaseList query looks like this:
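In plain M it boils down to just the connection and navigation steps, something along these lines (a sketch only; the sheet name is assumed):

let
    // connect to the external workbook returned by the Filename() function
    Source = Excel.Workbook(File.Contents(Filename())),
    // navigate to the Purchase sheet and return it
    Purchase_Sheet = Source{[Item="Purchase",Kind="Sheet"]}[Data]
in
    Purchase_Sheet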


  • Click Done

This query will now load.  Even though it is pointing to an external data source, it’s not being combined with anything else.  So it’s essentially just a staging query.

Modify the Existing Query

Now we need to go back and modify our existing query.  So in that left pane we’ll again select the Purchase query.

  • Go to View –> Advanced Editor
  • Select the first two lines and delete them
  • Put a new line after the let statement that reads as follows

Source = PurchaseList,

NOTE:  Don’t forget that comma!

And what we end up with is as follows:
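Roughly speaking (again, the step names and the ShipperID key are assumptions), the rebuilt Purchase query now reads:

let
    // pull from the staging query instead of hitting the file directly
    Source = PurchaseList,
    // merge with the DimShipper query; no external source in sight
    Merge = Table.NestedJoin(Source, {"ShipperID"}, DimShipper, {"ShipperID"}, "DimShipper", JoinKind.LeftOuter)
in
    Merge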


So What’s The Difference?

All we’ve really done is strip the “external” data source and put it in its own query.  Yet that is enough for Power Query to be happy.  The new Purchase query above now has two references that it is comfortable with, and it works.  (And that’s probably the main thing.)

Designing To Avoid This Issue

I make it a practice to land all of my data sources into specific “Staging Tables”, which are set to load as connections only.  From there I build my “finalization” tables before pushing them into my Data Model (or worksheet).  You can visualize it like this:


The key takeaways here are:

  • I always go from data source into a Staging Query (and do a bit of reshaping here)
  • My staging queries are always set to load to “Connection Only” so that they aren’t executed until called upon
  • All combining of data from disparate sources is done in queries that reference other queries, not external data sources

I’ve found this approach works quite well, and always avoids the “rebuild this data combination” error.

Date Formats in Power Query

Date formats in Power Query are one of those little issues that drive me nuts… you have a query of different information in Power Query, and at least one of its columns is a date.  But when you complete the query, it doesn’t show up as a date.  Why is this?

Demonstrating the Issue

Have a look at the following table from Excel, and how it loads in to Power Query:


That looks good… plainly it’s a date/time type in Power Query, correct?  But now let’s try an experiment.  Load this to the worksheet:


Why, when we have something that plainly renders as a date/time FROM a date format, are we getting the date serial number?  Yes, I’m aware that this is the true value in the original cell, but it’s pretty misleading, I think.

It gets even better

I’m going to modify this query to load to BOTH the worksheet and the Excel data model.  As soon as I do, the format of the Excel table changes:


Huh?  So what’s in Power Pivot then?


Curious… they match, but Power Pivot is formatted as Text, not a date?

(I’ve missed this in the past and spent HOURS trying to figure out why my time intelligence functions in Power Pivot weren’t working.  They LOOK so much like datetimes it’s hard to notice at first!)

Setting Date Formats in Power Query

When we go back and look at our Power Query, we can discover the source of the issue by looking at the Data Type on the Transform tab:


By default the date gets formatted as an “Any”.  What this means to you – as an Excel user – is that you could get anything out the other end.  No… that’s not quite true.  It means that it will be formatted as Text if Power Pivot is involved anywhere, or a Number if it isn’t.  I guess at least it’s consistent… sort of.

Fixing this issue is simple; it’s just annoying that we have to.  In order to take care of it, we simply select the column in Power Query, then change the data type to Date.

Unfortunately it’s not good enough to just say that you’ve set it somewhere in the query.  I have seen scenarios where – even though a column was declared as a date – a later step gets it set back to Any.


I’ve been irritated by this enough that I now advise people to make it a habit to set the data types for all of their columns in the very last step of the query.  This ensures that you always know EXACTLY what is coming out after all of your hard work and eliminates any surprises.
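For what it’s worth, that habit amounts to one Table.TransformColumnTypes call as the very last step of the query.  Here’s a minimal sketch, assuming a source table named Table1 with Date and Amount columns:

let
    Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
    // declare every column's type in one final step so nothing sneaks out as Any
    SetTypes = Table.TransformColumnTypes(Source, {{"Date", type date}, {"Amount", type number}})
in
    SetTypes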

Slicers For Value Fields

Earlier this week I received an email asking for help with a Power Pivot model.  The issue was that the individual had built a model, and wanted to add slicers for value fields.  In other words, they’d built the DAX required to generate their output, and wanted to use those values in their slicers.  Which you can’t do.  Except maybe you can…  :)

My approach to solve this issue is to use Power Query to load my tables.  This gives me the ability to re-shape my data and load it into the data model the way I need it.  I’m not saying this is the only way, by any means, but it’s an approach that I find works for me.  Here’s how I looked at it in Excel 2013.  (For Excel 2010 users, you have to run your queries through the worksheet and into Power Pivot as a linked table.)


The scenario we’re looking at is a door manufacturer.  They have a few different models of doors, each of which uses different materials in their production.  The question that we want to solve is “how many unique materials are used in the construction of each door?”  And secondarily, we then want to be able to filter the list by the number of materials used.

The first question is a classic Power Pivot question.  And the setup is basically as follows:


  • Create a PivotTable with models on rows and material on columns
  • Create a DAX measure to return the distinct count of materials:
    • DistinctMaterials:=  DISTINCTCOUNT(MaterialsList[material])
  • Add a little conditional formatting to the PivotTable if you want it to look like this:


The secret to the formatting is to select the values and set up an icon set.  Modify it to ensure that it is set up as follows:


Great stuff, we’ve got a nice looking Pivot, and you can see that our grand total on the right side is showing the correct count of materials used in fabricating each door.

Creating Slicers For Value Fields

Now, click in the middle of your Pivot, and choose to insert a slicer.  We want to slice by the DistinctMaterials measure that we created… except… it’s not available.  Grr…


Okay, it’s not surprising, but it is frustrating.  I’ve wanted this ability a lot, but it’s just not there.  Let’s see if we can use Power Query to help us with this issue.

Creating Queries via the Editor

We already have a great query that has all of our data, so it would be great if we could just build a query off of that.  We obviously need the original still, as the model needs that data to feed our pivot, but can we base a query off a query?  Sure we can!

  • In the Workbook Queries pane, right click the existing “MaterialsList” query and choose Edit.
  • You’ll be taken into the Power Query editor, and on the right side you’ll see this little collapsed “Queries” window trying to hide from you:


  • When you expand that arrow, you’ll see your existing query there!
  • Right click your MaterialsList query and choose “Reference”.

You’ve now got a new query that is referring to your original.  Awesome.  This will let us preserve our existing table in the Power Pivot data model, but reshape this table into the format that we need.

Building the Query we need

Let’s modify this puppy and get it into the format that will serve us.  First thing, we need to make sure it’s got a decent name…

  • On the right side, rename it to MaterialsCount

Now we need to narrow this down to a list of unique material/model combinations, then count them:

  • Go to Add Column –> Add Custom Column
  • Leave the default name of “Custom” and use the following formula:  [model]&[material]
  • Sort the model column in ascending order
  • Sort the material column in ascending order

We’ve now got a nicely ordered list, but there are a few duplicates in it.


Those won’t help, so let’s get rid of them:

  • Select the “Custom” column
  • Go to Home –> Remove Duplicates

Now, let’s get that Distinct Count we’re looking for:

  • Select the “model” column
  • Go to Transform –> Group By
  • Set up the Group By window to count distinct rows as follows:


Very cool!  We’ve now got a nice count of the number of distinct materials that are used in the production of each door.
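For reference, the finished MaterialsCount query ends up doing something along these lines.  It’s a sketch based on the steps above; the step names and the DistinctMaterials output column are my own labels:

let
    // reference the original query rather than the external source
    Source = MaterialsList,
    // build the model & material key, sort, and strip duplicate combinations
    AddedKey = Table.AddColumn(Source, "Custom", each [model] & [material]),
    Sorted = Table.Sort(AddedKey, {{"model", Order.Ascending}, {"material", Order.Ascending}}),
    RemovedDuplicates = Table.Distinct(Sorted, {"Custom"}),
    // count the distinct rows remaining for each model
    Grouped = Table.Group(RemovedDuplicates, {"model"}, {{"DistinctMaterials", each Table.RowCount(Table.Distinct(_)), type number}})
in
    Grouped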

The final step we need to do in Power Query is load this to the data model, so let’s do that now:

  • File –> Close & Load To…
  • Only create the connection and load it to the Data Model

Linking Things in Power Pivot

We now need to head into Power Pivot to link this new table into the Data Model structure.  Jump into the Manage window, and set up the relationships between the model fields of both tables:


And that’s really all we need to do here.  Let’s jump back out of Power Pivot.

Add Slicers for Value Fields

Let’s try this again now. Click in the middle of your Pivot and choose to insert a slicer.  We’ve certainly got more options than last time!  Choose both fields from the “MaterialsCount” table:


And look at that… we can now slice by the total number of materials in each product!


PowerXL Course Live in Victoria, BC

We are very excited to announce that we will be hosting an "Introduction to Power Excel" session in Victoria, BC on June 6, 2014.

PowerPivot is revolutionizing the way that we look at data inside Microsoft Excel. Allowing us to link multiple tables together without a single VLOOKUP statement, it enables us to pull data together from different tables and databases where we never could before. But linking data from multiple sources, while powerful, only scratches the surface of the impact that it is making in the business intelligence landscape. Not only do we look at PowerPivot in this session, but we'll also explore the incredible companion product Power Query; a tool that will surely blow your mind. Come join Ken as he walks you through the process of building a Business Intelligence system out of text files, databases and so much more.

Full details, including an early bird signup offer, can be found at

From TXT files to BI solution

Several years ago, when Power Pivot first hit the scene, I was talking to a buddy of mine about it. The conversation went something like this:

My Friend: “Ken, I’d love to use PowerPivot, but you don’t understand the database culture in my country. Our DB admins won’t let us connect directly to the databases, they will only email us text files on a daily, weekly or monthly basis.”

Me: “So what? Take the text file, suck it into PowerPivot and build your report. When they send you a new file, suck it into PowerPivot too, and you’re essentially building your own business intelligence system. You don’t need access to the database, just the data you get on a daily/weekly basis!”

My Friend: “Holy #*$&! I need to learn PowerPivot!”

Now, factually everything I said was true. Practically though, it was a bit more complicated than that. See, PowerPivot is great at importing individual files into tables and relating them. The issue here is that we really needed each new file appended to the existing data, rather than related to it as a separate table. It could certainly be done, but it was a bit of a pain.

But now that we have Power Query the game changes in a big way for us; Power Query can actually import and append every file in a folder, and import it directly into the data model as one big contiguous table. Let’s take a look in a practical example…

Example Background

Assume we’ve got a corporate IT policy that blocks us from having direct access to the database, but our IT department sends us monthly listings of our sales categories. In the following case we’ve got 4 separate files which have common headers, but the content is different:


With PowerPivot it’s easy to link these as four individual tables into the model, but it would be better to have a single table that has all the records in it, as that would be a better Pivot source setup. So how do we do it?

Building the Query

To begin, we need to place all of our files in a single folder. This folder will be the home for all current and future text files the IT department will ever send us.

Next we:

  • Go to Power Query --> From File --> From Folder
  • Browse to select our folder
  • Click OK

At this point we’ll be taken into PowerQuery, and will be presented with a view similar to this:


Essentially it’s a list of all the files, and where they came from. The key piece here though, is the tiny little icon next to the “Content” header: the icon that looks like a double down arrow. Click it and something amazing will happen:


What’s so amazing is not that it pulled in the content. It’s that it has actually pulled in 128 rows of data, which represent the entire content of all four files in this example. Now, to be fair, my data set here is small. I’ve also used this to pull in over 61,000 records from 30 csv files into the data model and it works just as well, although it only pulls the first 750 lines into Power Query’s preview for me to shape the actual query itself.

Obviously not all is right with the world though, as all the data came in as a single column. These are all tab-delimited files (something that can easily be checked by opening a source file in Notepad), so this should be pretty easy to fix:

  • Go to Split Column --> Delimited --> Tab --> OK
  • Click Use First Row as Headers

We should now have this:


Much better, but we should still do a bit more cleanup:

  • Highlight the Date column and change the Data Type to Date
  • Highlight the Amount column and change the Data Type to Number

At this point it’s a good idea to scroll down the table to have a look at all the data. And it’s a good thing we did too, as there are some issues in this file:


Uh oh… errors. What gives there?

The issue is the headers in each text file. Because we changed the first column’s data type to Date, Power Query can’t render the text of “Date” in that column, which results in the error. The same is true of “Amount” which can’t be rendered as a number either since it is also text.

The manifestation as errors, while problematic at first blush, is actually a good thing. Why? Because now we have a method to filter them out:

  • Select the Date column
  • Click “Remove Errors”

We’re almost done here. The last things to do are:

  • Change the name (in the Query Settings task pane at right) from “Query 1” to “Transactions”
  • Uncheck “Load to worksheet” (in the bottom right corner)
  • Check “Load to data model” (in the bottom right corner)
  • Click “Apply & Close” in the top right
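For the curious, the finished Transactions query amounts to something along these lines. It’s a rough sketch only: the folder path is made up and the step names are assumptions, but the flow mirrors the clicks above:

let
    // list every file in the folder and glue their contents together
    Source = Folder.Files("C:\Monthly Files"),
    Combined = Binary.Combine(Source[Content]),
    // turn the combined text into a single-column table, then split on tabs
    AsTable = Table.FromColumns({Lines.FromBinary(Combined)}, {"Column1"}),
    Split = Table.SplitColumn(AsTable, "Column1", Splitter.SplitTextByDelimiter("#(tab)"), {"Column1.1", "Column1.2", "Column1.3"}),
    Promoted = Table.PromoteHeaders(Split),
    // set data types, then drop the error rows created by the repeated headers
    Typed = Table.TransformColumnTypes(Promoted, {{"Date", type date}, {"Amount", type number}}),
    NoErrors = Table.RemoveRowsWithErrors(Typed, {"Date"})
in
    NoErrors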

At this point, the data will all be streamed directly into a table in the Data Model! In fact, if we open PowerPivot and sort the dates from Oldest to Newest, we can see that we have indeed aggregated the records from the four individual files into one master table:



Because PowerQuery is pointed to the folder, it will stream in all files in the folder. This means that every month, when IT sends you new or updated files, you just need to save them into that folder. At that point refreshing the query will pull in all of the original files, as well as any you’ve added. So if IT sends me four files for February, then the February records get appended to the January ones that are already in my PowerPivot solution. If they send me a new file for a new category (providing the file setup is three columns of tab-delimited data), it will also get appended to the table.

The implications of this are huge. You see, the issue that my friend complained about is not limited to his country at all. Every time I’ve taught PowerPivot to date, the same issue comes up from at least one participant. IT holds the database keys and won’t share, but they’ll send you a text file. No problem at all. Let them protect the integrity of the database, all you now need to do is make sure you receive your regular updates and pull them into your own purpose built solution.

Power Query and Power Pivot for the win here. That’s what I’m thinking. :)

PowerPivot training live in Victoria, BC!

Anyone who follows my website or Facebook Fan Page knows that I’m a huge fan of PowerPivot. Well great news now, that you can come and learn not only why this is the best thing to happen to Excel in 20 years, but also how to take advantage of it yourself!

I’ll be teaching a course on PowerPivot and DAX in Victoria, BC on November 22nd, 2013. While the course is hosted by the Chartered Professional Accountants, it’s open to anyone who wishes to subscribe.

If you’ve been trying to figure out how to get started with Power Pivot, you’re confused as to how and why things work, or you want to master date/time intelligence in PowerPivot, this course is for you.

100% hands on, we’ll start with basic pivot tables (just as a refresher.) Next we’ll look at how to build the PowerPivot versions and why they are so much more powerful. From linking multiple tables together without a single VLOOKUP to gaining a solid understanding of relating data, you’ll learn the key aspects to building solid PowerPivot models. We’ll work through understanding filter context, measures and relationships, and finish the day by building measures that allow you to pull great stats such as Month to Date for the same month last year, among others.

Without question, you’ll get the most out of this course if you’re experienced with PivotTables. If you don’t feel like you’re a pro in that area though, don’t worry! As an added bonus, if you sign up via this notice, just let me know and I’ll provide you with a free copy of my Magic of PivotTables course so that you can make sure you’re 100% Pivot Table compatible before you arrive.

This is a hands on course, so you need to bring a laptop that is pre-loaded with either:

  • Excel 2010 and the free PowerPivot download
  • Excel 2013 with Office 2013 Professional Plus installed (yes the PLUS is key!)
  • The standalone version of Excel 2013 installed.

If you have any questions as to which you have installed, simply drop me a note via the contact form on my website, and I’ll help you figure it out.

Full details of the course contents, as well as a registration link, can be found at Don’t wait too long though, as the registration deadline is November 14th!

Hope to see you there!

A review of Add-in Express

A while ago I started transitioning from VBA to VB.NET again, attempting to build a tool to manipulate the new Power Pivot components in Excel 2013. While I was able to get part way to a working solution using Visual Studio 2012 and VSTO, I ran into two key issues:

1. VSTO seems to be “bit” specific, meaning that I would need to keep one version of the solution for 32 bit versions of Office, and another for 64 bit versions, and

2. While I could run my code, the Power Pivot engine would crash on me unless I opened Power Pivot BEFORE I ran any of my code.

Both of these issues were rather severe to me, as I didn’t want to maintain multiple versions of the same code, nor could I release something and expect users to open Power Pivot before running my project. A friend of mine suggested I try Add-in Express to deal with these issues.

In the spirit of full disclosure, I contacted them and asked if they had a trial version. They don’t, but offered to let me trial it if I’d blog on my experiences. I agreed to do that, and what follows is my honest observations of the software.

I do want to preface this by saying that, like my friend Rob Collie, I am not a “read the manual” kind of guy. Ironic, since I write a lot of material, but I take the approach of diving in and hitting Bing for a quick pointer when I get stuck. It usually causes me a lot of pain, but I tend to learn better that way.

So, here’s how it all came together for me…

When you install Add-in Express, you get a new set of templates. I started my project by creating a new ADX COM Add-in targeting the .NET Framework 2.0. (For those not in the know, in order to target Excel 2010 with VSTO, you need to use .NET 4.0 and Excel 2013 is .NET 4.5. So .NET 2.0 is way too early a framework to have ever even heard of Power Pivot!)


From there you pretty much follow the prompts through the setup. I called it “Connect”, I set the minimum supported version to Office 2010 (since PowerPivot didn’t exist in 2007), I chose a Visual Basic Project, I selected Excel, and I left the rest of the settings at their defaults. Pretty easy, that part.

Once I had a project to work with, I created a new VB Module, just like normal, and built the code I would need to refresh my PowerPivot model and all the PivotTables. (Be aware that, for simplicity of the post, this is 2013 specific code, and will not work with Excel 2010.)

The key piece in this is making sure the xlApp declaration is correct, as you need to refer to the AddinModule portion in order to bind it to Add-in Express’s handlers, instead of just binding to the Interop.Excel objects. That change is what makes Add-in Express work:

[vb]Imports Excel = Microsoft.Office.Interop.Excel

Module Module1

    'Bind to Add-in Express's Excel instance rather than a plain Interop object
    Public xlApp = Connect.AddinModule.CurrentInstance.ExcelApp

    Public Sub RefreshPowerPivotTables()

        Dim xlWorkbook As Excel.Workbook = xlApp.ActiveWorkbook
        Dim ws As Excel.Worksheet
        Dim pvt As Excel.PivotTable

        'Attempt to connect and refresh the data model
        Try
            'Assumed: the refresh call elided from the original post
            xlWorkbook.Model.Refresh()
        Catch ex As Exception
            'Data connection could not be refreshed.
            MsgBox("Sorry, but I could not refresh the model! Are you sure this workbook has one?")
            Exit Sub
        End Try

        'Refresh all PivotTables
        For Each ws In xlWorkbook.Worksheets
            For Each pvt In ws.PivotTables
                With pvt
                    'Assumed: refresh each PivotTable from the freshly refreshed model
                    .RefreshTable()
                End With
            Next
        Next

    End Sub

End Module[/vb]

So far so good. Now I needed a user interface.

To be fair, it took me a bit to figure this one out. Once I finally realized that you need to open the “AddinModule.vb” portion in the solution explorer and then add a Ribbon tab to the canvas, things got easier.

Despite reading very little documentation, with a little help from the blogs and articles on the Add-in Express site when I did get stuck, I was very quickly able to build a simple UI. There’s a good article on doing this here, which I wish I’d read earlier in the process.

As I say, it’s a really simple UI: a tab called “Model”, a single group, and a button with an image on it.

One criticism I do have is that it would be nice to be able to link the buttons and other controls to their callbacks inside the visual designer. As is, it’s a bit clunky, as you have to select the designer, then choose the other controls in the properties window. It’s not totally intuitive, but once you know where to look (read the article linked to above), it is workable.

My callback code for my button (which I didn’t bother renaming) is as follows:

[vb]Private Sub AdxRibbonButton1_OnClick(sender As Object, control As IRibbonControl, pressed As Boolean) Handles AdxRibbonButton1.OnClick

    Call RefreshPowerPivotTables()

End Sub[/vb]

And with that done, it came time to debug. Again, fairly straight forward:

  • Build -> Build the project
  • Build -> Register ADX project
  • Make sure Excel is closed
  • Start the debugging engine

My “Model” tab showed up, with the command I’d built, as shown below:


And with a single click (and a bit of a wait since Power Pivot is so slow), my Power Pivot data was refreshed, and the PivotTable updated to reflect the changes I made in my database. Notice the new customer and the new sales transactions for 8/1/2013:


But the best part is this:

  • The solution is deployable to both 32 bit and 64 bit Office platforms, and
  • I can open my Power Pivot project even if I run my code first, and it doesn’t crash.

I can honestly say that I fought that Power Pivot crash issue for about 2 months with VSTO, and I was really worried that it was going to kill my project completely. No amount of searching would turn up a fix, and other help calls didn’t yield any gold either; where they were answered, it was with a “don’t know” answer. Add-in Express has actually made this goal achievable.

I’ll also tell you that, while refreshing Power Pivot isn’t the focus of my full project, I have been able to use Add-in Express to successfully target and manipulate Power Pivot in both 32 and 64 bit versions of both Excel 2010 and 2013. I.e. multi-version deployment with one code base. Pretty damn awesome.

I should also mention that their support has been phenomenal as well. Not only have they answered my emails, but I even ended up on a call with one of their lead people to examine why I didn’t seem able to use the debugging tools at first (a blog post for another day). 30 minutes, problem solved, and I’m good to go. Again, pretty damn awesome.

Readers of this blog will know that I don’t endorse very many products at all. Sure, I use Google Adwords and stuff, but I don’t write too many blog posts talking about how awesome a product is. Here’s my word on Add-in Express:

I’m sold. This product has been a life saver, and I won’t develop using VSTO.

Review – Creating Data Models with PowerPivot How-to

A couple of weeks back I was approached by Packt Publishing, asking me if I’d be willing to read and post a review of Creating Data Models with PowerPivot How-to, by Leo Taehyung Lee.  As I’m always interested to see what others are saying in this space, I agreed.

The e-Book is 40 pages of content (after you discount the copyright, author profiles and other stuff that goes with every book), and is intended to cover PowerPivot 2010 from install to basic usage.  Being an author, and having written many how-to webpages, I can say that it’s a lot to cover in so few pages.

Overall, you get a pretty basic glimpse into PowerPivot.  Topics covered include:

  • Installing PowerPivot
  • Installing SQL Express
  • Importing a bunch of tables
  • Making a basic Pivot
  • Making a Pivot Chart
  • Adding another table
  • Creating a relationship
  • Creating calculated columns
  • Optimization

The PowerPivot install is pretty simplistic; it basically points you to Microsoft’s site and tells you to follow the instructions.  I guess you can’t complain about that too much, although I think I would have highlighted the pre-requisites a bit more myself.

Honestly, I was more than a little surprised to see the installation of SQL Express in here.  Again, it was a very short coverage, and all in an effort to get access to the AdventureWorks database.  (I’ve never understood the fascination with AdventureWorks myself.)  Of all the pieces covered in this e-Book, I seriously have to question this one.

PowerPivot connects to Access databases, whether Access is installed or not, so that would have been a far easier route to demonstrate connecting to databases in my opinion.  Yes, I understand that corporate users don’t like Access, but that’s not the point.  Using an Access file requires no extra software be installed, unlike SQL Express, and therefore leaves no unneeded software on your machine once you are finished with the e-Book subject.  To my mind, it would have made much more sense to provide a downloadable Access sample database, or even push the user to download data from an online feed that didn’t require SQL Express installation.

Back to the actual PowerPivot specific stuff though… The Author’s approach to pulling in tables was quite interesting to me; he connects to the database and pulls in a whole bunch of them… way more than he needs.  I could criticize that, but I won’t, as that is EXACTLY how an Excel pro builds a PowerPivot solution.  Suck in as much data as possible, and then figure out what you need to link to get what you want.  The reason why this is interesting to me is that I had a conversation at the last MVP summit with some SQL MVPs, and they were horrified at this development approach.  SQL pros explore data first and publish as little as possible, so that the solutions are optimized from the beginning.  Excel people generally only optimize when their file is too big to email, or when it gets so slow they can’t stand waiting for it to refresh any more.

Cleaning up these unused columns was covered, although to my mind there were some key points that probably should have been made.  I would HIGHLY recommend that before any user starts pruning columns from their workbook, they save it first.  Then I’d refresh all of my pivots every 2-3 columns I remove.  Yes, it’s painful.  Yes, PowerPivot is slow to refresh.  But the first day you accidentally delete a column that has a dependency and catch it, you’ll be thankful.  Rebuilding PowerPivot model logic sucks.  (Trust me, I’ve been there.)

The only true technical issue I found in the book relates to the final statements regarding version compatibility.  The Author states that “… Excel files created with the older version of PowerPivot will be accessible in the new version…”  While that can be argued to be technically true, it is quite misleading.  PowerPivot files created in 2010 must be upgraded to be used in Excel 2013, and that writes permanent changes to the model structure, preventing the file from being used in 2010 again.  I think that the author should have been much clearer in this regard, as mixed-version Office environments will find no joy when trying to share/co-author PowerPivot files.

Overall, the e-Book gives a fairly decent intro level coverage to the topic.  It starts assuming no PowerPivot knowledge at all, and I can certainly see many things in here that remind me of the way I navigated my way through the basics of the PowerPivot learning curve.

If you’ve never used PowerPivot before, this e-book will walk you through some of the basics to get you up and running.  But be aware that PowerPivot is not easy, and you’re very quickly going to require a more in depth book to create good models, let alone master DAX (the real power in PowerPivot.)

If you have used PowerPivot, created a relationship between two tables and based a PivotTable off the related fields, then this book won’t do anything for you.

To buy directly from Packt, you can do so from the following link: