Power BI Boot Camp

I'm pretty excited to announce that we have our very first Power BI Boot Camp coming up in Vancouver, BC on Feb 22-24.  This is something that I've wanted to do for a while, and it's going to be awesome!


The boot camp is going to be 3 full days of hands-on Power BI, and is intended to get you up and running, building cool solutions that will impact your business.

Why do you care about the Power BI Boot Camp?

Okay, I get it… you're an Excel person, and while you've heard of Power BI, you're not quite sure whether you need it, and you certainly don't know whether you can justify the cost of a 3 day Power BI Boot Camp...

I honestly believe that the answer to both questions is an emphatic yes!

Top 5 reasons you need to learn about Power BI:

We'll cover the "how do I convince my boss" near the end of the post, but first, let's talk about what Power BI is, and why it is relevant to you.

  1. A major reason that companies are behind in their Business Intelligence development is that they are using outdated versions of Excel and cannot upgrade (or are afraid to).   Power BI Desktop is a separate program that sits completely outside Excel, meaning that you can run the most up to date business intelligence tool REGARDLESS of the version of Excel you're stuck on.  I'm looking at you, Excel 2003 & Excel 2007 users. 😉
  2. You can do a TON of stuff with Power BI for free.  Yes, there are services that cost a nominal fee, but you'll be surprised at how much you can do for absolutely nothing.  The investment you make here will pay back in spades as you begin to automate tasks that were taking you huge amounts of time, and get the ability to deliver true business intelligence to your stakeholders.
  3. Power BI's data collection experience is based on Power Query, and its modelling experience is based on Power Pivot.  So if you have those skills from your Excel journey, they will be totally relevant in Power BI.  And if you don't?  The new skills you learn at this course related to these areas are portable back to Excel as well.  (So again, if you're stuck on an older version of Excel, this is your chance to upskill BEFORE your Excel is updated.)
  4. If you've ever tried to share Excel dashboards, you know how hard it can be to do so and maintain them.  And if your end users want their reports accessible and interactive on mobile?  Forget it.  But Power BI solves these issues, including delivering interactive reports to your mobile phone (yes, even your non-Windows phones!)  Rest assured, however, that if your company is cloud averse, this tool is not dead to you, and it's still worth coming.
  5. You'll be learning in a small class environment from a world class expert who cares about delivering value.  I've been there.  I know how hard your job is.  And I want to help you become more effective.  That is what my company is all about, and I'm very serious about it.

What will the Power BI Boot Camp cover?

To be completely honest, the better question is probably "what won't we look at?"  I've got all kinds of stuff cooking for this.  When I first started drafting the Power BI Boot Camp outline I was wondering how I'd ever fill 3 full days.  Now I'm trying to figure out what to cut so that I can fit it all in.  Keep in mind that it's not necessarily going to be in the order below, as some topics are MUCH bigger than the summary I'm providing, but here's a bit of an overview…

Where it all starts…

We'll start by building a basic Power BI solution on top of an Excel file.  With some minor data modifications, some cool visuals, and then publishing to the cloud service, we'll walk through the most basic of Power BI journeys.  I want to do this so that you can see just how easy it is to build a cool solution that works and interacts, like this one.  (Go ahead... click any of the bars and see how it cross filters in your web browser!)

More data, more transformations, and some modelling…

From there we'll start to dive deeper, teaching the techniques that allow you to build more robust models and solutions.  We'll spend a bunch of time looking at different data sources, as well as some of the more complex methods for fixing bad data to make sure that you can use it in your solutions.  We'll also show you how to link these tables together so that they filter nicely across multiple visuals.


Let's get visual…

Power BI is all about getting visual with your data.  It has a big collection of built-in visuals, as well as an ever-growing gallery of custom visuals.

While I won't promise that we'll look at every visual in the product (we only have 3 days), we are going to look at a lot of them, including some of my favourite custom visuals.  I want to show you how to create them, how to format them, and what data you need to get the best out of them.


We're building dashboards to tell a story here, so I want to try and get that as close to real life as possible for you.  From bar, column and line charts, to maps and bullet charts, I really want to get you comfortable with these, as well as how to control the interactions between them.

And speaking of Dashboards… do you know the difference between a report and a dashboard in Power BI?  That will get clearly explained here as well.

Getting the numbers right…

We'll get into working with the DAX formula language, and understand how to add and override filter context to make your formulas show exactly what you want when you filter any visual.

How about Calendar intelligence?  I have a huge passion for this area too.  The Power BI Boot Camp won't just teach you how to build your own calendar table, you'll also leave armed with a collection of DAX calendar intelligence measures.  Those measure patterns are super important, as they'll allow you to report results correctly when you ask for Month to Date data, or Month to Date for 2 months ago.


Sharing is caring!

We'll look at sharing via the web first, both for internal and external users on an invite-only basis.  But even more, I'm also going to walk you through embedding your reports in a web page so that you can show the world how awesome your company is.

In addition, you'll want to bring your phone with the Power BI app installed.  Why?  Because I'm going to make sure you get set up with your reports and dashboards delivered to your phone in a mobile optimized fashion where you can even annotate your reports by hand.


And if you'd prefer not to be cloud based?  We'll look at creating Power BI templates that you can distribute within your organization as well.

You know… while we're talking about sharing… have you ever sent a report to a manager only to have them come back with one of those "can you show me this?" type questions?  Let's try and solve that little problem by allowing our manager to ask their own questions using natural language queries.  In other words, how cool would it be if your manager could just type "Bar chart total stays by city" and it did it, even if you've never built that report?  Because they can.


And what about updates and security?

Wait… how do I keep this stuff up to date?  And how do I ensure that only the right people are looking at their data?  Well of course we'll cover those topics!

Not only will we look at scheduling refresh for cloud hosted data sources, but also your on-premises data sets via gateways and 3rd party applications.  Ooohh… connecting data to the cloud… can I hear anyone saying "Security!"  Well yes, so let's talk about both encryption and also how to restrict the data sets to show only relevant rows to the correct people.  (And let's face it, you will want that last part even if you keep your data local!)

So why Ken's Power BI Boot Camp then?

My goal is to deliver real value to you.  Anyone who has been to any of my courses in the past knows this:

  • It's virtually impossible to find someone who has more passion for what they teach than I do
  • We limit the class size so that it doesn't get too big.  A huge part of the reason for this is because…
  • My style is to roam the room and help people keep up with the content so that everyone learns
  • I provide completed examples for all the work we go through, with catch up points along the way, just in case the above isn't achievable
  • I provide a ton of supplemental resources as well

I've been to classes before, and I know that it can be somewhat difficult to apply the course materials to your own work.  For that reason the Power BI Boot Camp is designed like all my courses: we will build solutions the way you'd expect to build them in the office, exposing the pitfalls where the software doesn't work as you'd logically expect.  This is intentional, as we teach you not only how to do things correctly, but also how to debug problems as they happen.

What skill set do you need?

Do you need experience working with Power BI in order to come to the Power BI Boot Camp?  Heck no, that's what this course is about!  And while some Excel knowledge is helpful, you're certainly not going to see a test on Excel functions here.

Ultimately what you need is a hunger to learn; a desire to change your reporting world.  If you can bring me that enthusiasm, I'll teach you what you need to know in order to make that happen.

Making the Case to Your Boss

I managed an accounting department in my former life, so I totally get this.  We're all about data here, so here are the up-front details:

The cost of the course is $1,495 plus GST.  It includes your breakfast and lunch all 3 days, but you need to look after your dinner and hotel room (if necessary).  So now you can work out the total cost.

I'll let you do the math and work out how many hours per week you need to save yourself in order to break even.  If you spend a significant amount of time cleaning up data to be used in Excel, you'll find that you'll break even on this alone.

Honestly though, the true value proposition of this course isn't the ability to break even on a labour basis, it's about the opportunity costs.  Opportunities like:

  • Identifying programs that are costing your company money which don't earn returns.
  • Finding the best paying leads to pursue for greater sales.
  • Delivering value to your stakeholders when and where they need it.
  • Reducing inventory during slow seasons to reduce carrying costs.
  • Highlighting key metrics that are under performing.
  • Building solutions to blow the minds of your clients or stakeholders.

Every business is different, but ultimately it's the opportunity to get better information into the hands of those who make the decisions, be it you or your boss.  That benefit FAR outweighs the price we charge, and will break even for you VERY quickly.

I hope to see you at the course.  It's going to be awesome, and it will change your data life forever. 🙂

Register for the Boot Camp here.

Make the Sample Binary file path dynamic

In this post we will explore how to make the Sample Binary file path dynamic when combining files using the new Combine Binaries experience in Excel and Power BI Desktop.

Sample Files

If you'd like some sample data files to play with, you can download them here.

Why do we even need to talk about this?

I've covered the new Combine Binaries experience in my last couple of posts.

But one thing I didn't dig deep into was the Sample Binary file path, and the fact that it actually gets hard coded.  To replicate the issue, here's how I set up my quick test:

  • Open the application of choice (I'm going to use Power BI Desktop here)
  • Get Data (create a new query) From File --> From Folder
  • Browse to the folder path and click Edit

In this case I've browsed to the following folder path, which only contains a single file (so far):  C:\Users\KenPuls\Desktop\CSVs.  And here's what it looks like in Power BI Desktop or Excel:


And now I click the Combine Binaries button at the top right of the Content column, resulting in this:


Now, as I discussed in the first post in this series, we know we can modify the "Transform Sample Binary From…" step to see those changes in the final output.  So what's the issue here?  I'm going to right click the Sample Binary and choose to view it in the Advanced Editor:


Note:  I did add a line break between the 3rd and 4th lines, so read that as one.

The key part to notice here is that the file path, despite being in the original CSVs query, is also hard coded into this query TWICE.  That makes it very difficult to port this from one location to another, as simply changing the file path in the CSVs query is not sufficient; the Sample Binary will still break upon refresh.  It's for this reason that we need to make the Sample Binary file path dynamic: so that we only have to change it in one place.
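To give a concrete picture, here's roughly what the generated Sample Binary query looks like in M for my setup (a sketch; the auto-generated step names will vary by build).  Note the folder path appearing twice:

    let
        // hard coded path #1: the folder itself
        Source = Folder.Files("C:\Users\KenPuls\Desktop\CSVs"),
        // hard coded path #2: the row lookup for the sample file
        #"Jan 2008.csv" = Source{[#"Folder Path"="C:\Users\KenPuls\Desktop\CSVs\", Name="Jan 2008.csv"]}[Content]
    in
        #"Jan 2008.csv"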

How to make the Sample Binary file path dynamic

To start with, I'm going to throw this solution away and start over completely.  And again, while I'm using Power BI Desktop to illustrate the method to make the Sample Binary file path dynamic, this will work the same in Excel with only one exception.  (Once you have the new Combine Binaries method in Excel, anyway.)

Step 1 - Launch the Power Query editor

To get started, I'm going to launch myself into the Power Query editor, ideally without creating a new query.  This is easy to do in Power BI Desktop, simply go to the Home Tab and click the top of the Edit Queries button.  You'll be launched into the editor without creating any new queries:


In Excel, if you've never opened the Power Query editor before, there is no way to get in there without creating a new query.  You'll need to create a New Query --> From Other Sources --> Blank Query.  Then you can expand the Queries pane on the left, right click Query1 and Delete it.  Silly, but that's pretty much the way to accomplish this.  (If you have created other queries, you just need to edit any one to get into the editor, as it won't create a new one for you.)

Step 2 - Create a Parameter for your file path

Before we get started, we need to create a single place to update our file path.  For this we'll use one of the Power BI/Excel parameters.  To do that:

  • Go to Home --> Manage Parameters --> New Parameter
  • Set up the Parameter as follows:
    • Name:  FolderPath
    • Required:  yes
    • Type:  Text
    • Current Value:  <your folder path>  (mine is C:\Users\KenPuls\Desktop\CSVs)
  • Click OK

This will result in a rather simple little parameter that looks like this:
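If you're curious, the parameter itself is just a single line of M with some metadata tacked on.  Opening it in the Advanced Editor shows something along these lines (a sketch; your path will obviously differ):

    "C:\Users\KenPuls\Desktop\CSVs" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true]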


Step 3 - Create a new query against the folder

Now that we have our parameter, we are ready to actually create the query we need against the folder.  So let's do that now.  (Oh, and if you're working with Excel, just stay in the Power Query editor - no need to go back to Excel first.)

  • Power BI Desktop:  Home --> New Source --> More --> Folder
  • Excel:  Home --> New Source (near the end of the ribbon) --> File --> Folder

When prompted for the folder path, instead of clicking Browse, click the ABC on the left and choose Parameter:


It will automatically populate with the FolderPath parameter and, upon clicking OK, will take you to the preview window where you can click OK (Power BI Desktop) or Edit (Excel).
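The nice thing is that the resulting query's Source step now references the parameter instead of a literal path.  In M it's simply:

    let
        Source = Folder.Files(FolderPath)
    in
        Source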

Step 4 - Combine the Binary Files

Now we'll combine the binary files… all one of them.

  • Rename the Query to "Transactions"
  • Click the Combine Binaries icon on the top right of the Content column

Just a quick side note here… in the current build of Excel we don't see this, but in Power BI Desktop, we are taken to this window where we can control how the data types are determined:


This is pretty cool, and I assume it will be coming to Excel in the future too.  If your data types are consistent most of the time, you generally won't have to worry about this.  If, on the other hand, you've got strange things that happen (say, once every 10,000 transactions you get a fractional sales unit), you may want to choose the "Entire Dataset" option to avoid truncated decimals.

For now, just click OK with the default to blow past this dialog.

Step 5 - Make the Sample Binary file path dynamic

And finally, here we are, it's time to make the magic happen and actually make the sample binary file path dynamic.  To do this we're going to make a couple of small edits to the Sample Binary's M code.

  • Right Click the Sample Binary --> Advanced Editor

NOTE:  Be sure not to accidentally hit the Sample Binary Parameter1… we don't want that one!

Now, first thing to notice is that the very first line no longer points to a hard coded file path, instead it points to our parameter.  That's very cool as a single update to the parameter means that both this query and the original one to pull the files from the folder will be changed when our parameter is updated.  One place to fix them both.

Now, there are still issues here, but I just want to do a bit of cosmetic cleanup first.  The second and last lines still start with the name of the file.  This is just something inside the M code that you'll probably never read again, but it's still good practice to clean it up:


I've changed both highlighted parts to read "SampleFile" to make the code a bit shorter.  And now I can focus on the real issue:


The highlighted portion above still holds both the hardcoded file path and file name.  What this means is that even though the folder path is dynamic, if I change the parameter and update the file path on a new computer, it will still be pointing to the older source.  That is far from ideal.

Interestingly, it's really simple to fix when you know how.  You simply grab everything from the [ to the ] and replace it with 0 so that it looks like this:
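In M terms, the before and after of that edit look something like this (using the shortened "SampleFile" step name from the cosmetic cleanup above):

    // before: looks up the file by its hard coded folder path and name
    SampleFile = Source{[#"Folder Path"="C:\Users\KenPuls\Desktop\CSVs\", Name="Jan 2008.csv"]}[Content],

    // after: a positional lookup that always grabs the first file in the folder
    SampleFile = Source{0}[Content],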


To show that it updates properly, I'm going to click Done, and throw a new file in the folder called "Feb 2008.csv".  Since this sorts earlier in the alphabet, we'd expect it to show up before "Jan 2008.csv", and it does when I refresh the preview window:


The End Result

The biggest reason I want to make the sample binary file path dynamic is the scenario where I email the solution to someone else, and they have a different file path to the data files.  In this case they now only need to edit the project, update the FolderPath parameter, and everything will work again.

Why Not Edit in the Formula Bar?

In truth, you don't actually have to go into the Advanced Editor to update your code.  I made a cosmetic fix in there, as I actually do go back and read code later.  Since the default step name leaves a red herring in the code, I wanted to nail that down.  If you're never going to read the code though, it's cosmetic.

In effect, all that is really necessary is to replace the code from [ to ] with 0 as we did above:


The problem is that if you do this here, it automatically kicks off 3 new steps that have to be deleted:


Granted it's not the end of the world, but since I want to clean up the code anyway…

Final Thoughts

You're not alone if you think this should be unnecessary.  In my opinion, this dynamic nature should be standard, and I think it would be an easy fix for the team to implement.  Marcel even posted a suggestion to modify this feature here, which you should consider voting for.

Combine Excel Files

If you've wanted to use Power Query to combine Excel files with a single click - like you could for TXT and CSV files - the feature is finally here*.  The Combine Binaries feature has been re-written (as I discussed yesterday), and it now allows for easy combination of Excel files.

* This new feature is in the O365 Fast Insider preview today and in Power BI Desktop's November update.  I'm expecting to see it show up in the January Power Query update for Excel 2010/2013.

Just a quick caveat, I'm going to cover the items specific to the Combine Excel Files experience here, and not dive deep into the methods of how to modify and adjust the created queries.  For that reason, if you haven't read yesterday's post, I highly recommend doing that first.

The Classic Combine Excel Files Experience

For anyone who has tried to combine Excel files in the past, you'll be familiar with this.  You create a new query to pull From File --> From Folder and navigate to the folder full of Excel files.  Then you hopefully click the Combine Binaries button:


And you get this:


Obviously that's not so helpful.  The way to deal with this was to go to Add Column --> Add Custom Column and use the formula =Excel.Workbook([Content]) to convert the data into tables.  Then some filtering and more user interface driven work was required in order to get your data.  The experience was a bit painful, and certainly not for beginner users.
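For reference, the classic pattern boiled down to a couple of manual steps like these (a sketch of the idea, not a full query):

    // convert each file's binary content into a navigable workbook table
    #"Added Custom" = Table.AddColumn(Source, "Workbook", each Excel.Workbook([Content])),
    // then expand it to expose each sheet/table, filter to the objects you want, and combine
    #"Expanded Workbook" = Table.ExpandTableColumn(#"Added Custom", "Workbook", {"Name", "Item", "Kind", "Data"})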

The New Combine Excel Files Experience

To start with, I'm going to take two Excel files with sales transactions in them and combine them easily using the new Combine Excel Files experience.  Here are the file characteristics:

  • Each file has a small set of data (2-3 rows)
  • The data is stored on Sheet 1 in a table
  • Both files are stored in the same folder

To get the data, I'm going to do the following:

  • Get Data using a new query "From File --> From Folder"
  • Browse to and select the folder
  • Click Edit at the preview window

As you can see, we've got a couple of Excel files here:


So I'll click the Combine Binaries button (at the top right of the Content column.)

And where this would have triggered an error in the past, it now kicks out a preview:


And what happens next depends on what you select (which is why they are numbered above.)

Combine Excel Files - Method 1

For the first kick at this, I'll select the Sample Binary Parameter 1 folder icon (indicated by the number 1 in the Combine Binaries preview).

Nothing will ever show in the preview window, but upon selecting the folder I can click OK, which will result in this:


As I showed in yesterday's post on the new Combine Binaries experience, we now get a bunch of new queries and a few steps along the way.  The end result of this query, however, is a listing of all the worksheets, tables and ranges in each workbook.  This is the view that will allow you to go back and granularly pick out what you need and transform it.  In other words, this is kind of the detailed, hard-core view, the equivalent of writing the custom columns that we used to have to write.

Because this is so similar to the classic method, I'm not going to do much more with this.  The real point was to expose that selecting the folder in the preview window will bring you to this setup.

Combine Excel Files - Method 2

Method 2 revolves around selecting the table in the preview window; in this case the Sales table.  When we select that branch in the preview window we WILL get a preview of the data:


And when we click OK, we actually get the data combined nicely:


As discussed in the previous post, if we wanted to modify and/or change:

  • The Source columns (Source.Name or others): We modify the Removed Other Columns1 step in this query.
  • The data before it is imported and combined: We modify the Transform Sample on the left side.

Now this section is MUCH easier than what we used to have to do!

Combine Excel Files - Method 3

But what if your data is not in an official Excel Table?  What if it's just data in a worksheet range?  Well, then you select the worksheet icon instead:


And the results are virtually identical to the previous method:


Why does that work?  It works because the Transform Sample is smart enough to automatically promote the first row to headers, so it doesn't actually need the table.  On the other hand, if that data wasn't in the first row, you may need to go back to the Transform Sample and tweak it to do what you need (like remove top rows, promote rows to headers, delete columns, filter, etc.)
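If you peek at the Transform Sample's M code in this scenario, it looks something like the sketch below (the generated parameter and step names vary a bit between builds):

    let
        // the sample parameter holds the binary content of the first file in the folder
        Source = Excel.Workbook(#"Sample File Parameter1", null, true),
        // navigate to the worksheet rather than a named table
        Sheet1 = Source{[Item="Sheet1", Kind="Sheet"]}[Data],
        // this is the step that makes it work without an official Excel Table
        #"Promoted Headers" = Table.PromoteHeaders(Sheet1)
    in
        #"Promoted Headers"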

Caveats When Trying to Combine Excel Files

This experience will work well for many things, but as always there are some caveats.

Single Object Only

The default experience here is to combine based on the object you select.  In other words, if you select Sheet 1, it will combine Sheet 1 from each file.  It won't combine all sheets in each file based on the Sheet 1 code.  If you want to do that, you need to go back to Method 1 above, filter to the objects you want, and deal with them using classic import methods.  (Unless you try to get really techy and build the function, then repurpose it to use in that table - something I have not done yet.)

Preview Based On First Item In the List

The preview and import templates are based on the first file in the list.  You can see how that affects things when I throw a new file into my structure that has different worksheet and table names:


While the two Sales workbooks have Sheet1 in them, this one doesn't, making it impossible to use this function to combine the first worksheet in each file. (The template would be based on Dec and would return errors for the other two files.)

If the order is important, you'll need to sort the file list first to get the correct file to the top before you trigger the Combine Binaries function.

For the record, I have sent an email to the Power Query team suggesting that it would be nice to get the option to pick the file here which the template should be based upon.  That would make this much easier to work through, I think.

Inconsistent Columns Are Bad News

Let's say that you have two files with tables in them that have different column names (or different numbers of columns).  The transformations generated will actually deal with this correctly, resulting in a column of tables which have different headers.  All good so far, but when the main query gets to the last step, it expands the column of tables based on the headers for the table in the first row only.  This is actually standard behaviour, so not a surprise; I just want to make sure you don't think this is a holy grail that will solve the differing column issue.  (We have more work to do in that case.)
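You can see why when you look at the final step of the main query; it's an expand with the first file's column names written out explicitly, something like this (the column and step names here are purely illustrative):

    = Table.ExpandTableColumn(#"Removed Other Columns1", "Transform File", {"Date", "Dept", "Amount"})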

Overall Thoughts

At the end of the day, I have to say that this is a pretty welcome addition.  I'm still not a fan of the names of the generated queries, and I would add something to change the template file, but I think this is going to make it a LOT easier for people to import and transform Excel files than it has been in the past.

New Combine Binaries Experience

One of the updates in the latest Excel 2016 Insider preview build is a new Combine Binaries experience for Power Query.  This is both good and bad, and in this post we'll take a detailed look at both sides before it hits you in your Excel implementation.  (With this new feature in the O365 Fast Insider preview and with it already existing in Power BI Desktop's November update I'd expect to see this show up in the January Power Query update for Excel 2010/2013.)

A Little History Lesson

Before we start, let's get everyone on the same page.  If you didn't know this, the Combine Binaries feature is one of the coolest things in Power Query or Power BI Desktop, as it lets you do a one click consolidation of "flat" files (txt, csv and the like).

How Combine Binaries (Used to) Work

In order to make this work, you'd set up a new query to get data From File --> From Folder, browse to the folder that contained your files, select it and clear the preview window.  At that point all the magic happens when you click the Combine Binaries button at the top of the Content column:


That "one-click" experience would take your source folder and add four steps to your query:


And it puts it all together nicely.  So cool, so slick, so easy.

Why did we need a new Combine Binaries experience?

So the first real question here is "Why even modify this experience at all?"  As it happens, there were a few issues in the original experience:

  1. You lost the original file name details.  If you wanted to keep that, you needed to roll your own custom function.
  2. Headers from each file were not removed, so you'd have to filter those out manually.
  3. It could only work with flat files (csv, txt, etc.), but not more complex files like Excel workbooks.

So for this reason, the team decided to build a more robust system that could deal with more files and more data.

The New Combine Binaries Experience

So let's look at what happens in the new experience.  We start the same way as we always did:

  • Set up a new query to get data From File --> From Folder
  • Browse to the folder that contained your files, select it and click OK
  • Click Edit at the preview window
  • Click the Combine Binaries button at the top of the Content column

At this point a whole bunch of crazy stuff now happens, and your query chain looks totally different than in the past:


There are 3 main areas that are different here:

  1. A whole bunch of new queries and parameters with very similar names,
  2. The original source name is retained, and
  3. Different query steps than in the past

If your first reaction to this is being overwhelmed, you're not alone.  I'll admit that my first reaction to this was not a happy one.  There is a huge amount of stuff injected in the file, it's difficult to follow the relationships in the M code (even if you do know how to read it all), and it isn't intuitive as to what to do with it.

At the end of the day, the biggest change here is that things happen in a different order than they did in the past.  In the original implementation of the Combine Binaries feature set, it combined the files first, then applied some other Power Query steps.

The new method actually examines the individual files, formats them via a custom function, then appends them.  This is very different, and it actually gives us a bit more flexibility with the files.
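Conceptually, the new pattern reduces to something like the sketch below.  This is not the exact generated code (the real queries use much longer auto-generated names, and "TransformSample" here stands in for the generated function):

    let
        Source = Folder.Files("C:\YourFolder"),
        // run every file's binary through the generated transform function
        Invoked = Table.AddColumn(Source, "Data", each TransformSample([Content])),
        // keep the file name plus the transformed table
        Kept = Table.SelectColumns(Invoked, {"Name", "Data"}),
        // append the per-file tables into one output, using the first file's headers
        Combined = Table.ExpandTableColumn(Kept, "Data", Table.ColumnNames(Kept{0}[Data]))
    in
        Combined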

What's the end effect that will be different for you?  Simply this:

  • More queries in the chain (maybe you don't care), and
  • The file name is preserved by default (which was not there in the past)


Now if you are good with everything here, then no worries.  Continue on, and you're good to go.  But what if you want to make changes?

Making Changes in the new Combine Binaries Experience

The biggest challenge I have with this new implementation is that if you are a user hitting this for the first time, how and what can you change?

Changing or Preserving File Details

What if I wanted to keep more than just the file name… maybe I wanted to keep the file path too?  You'll be surprised, but this is actually pretty easy now.  Since Power Query wrote a custom function for us to import the data, all the file properties were already preserved for us.  But it also made a choice to keep only the source file name.

To modify this, we just look at the steps in the Applied Steps window, click the gear next to "Removed Other Columns", and choose the ones we do want to keep:


So in this case, I can actually uncheck the box next to Source.Name and remove that from the output.  (We want to keep the very last step, as that is what actually appends the data from the files).
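That step is just a Table.SelectColumns call under the hood, so unchecking Source.Name simply trims the list of columns it keeps (the names here are illustrative):

    = Table.SelectColumns(#"Renamed Column1", {"Transform File"})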

Also… after looking at the Applied Steps window, it turns out the Renamed Column1 step was only in place to avoid a potential name conflict if you had a column called Name (there is a really good reason for this, which I'll look at in another post).  In this case it is totally unnecessary, so we can just delete it.

So now our code looks as shown below, and the output looks similar to what we would see in the old experience:


Notice I said that it looks similar - not that it is identical.  This is actually better as there are no headers repeating in the data at all, so we don't need to get rid of those.  That is an improvement.

Modifying the Import Function

Now, that's all good for the details about the file, but what about the data?  What if I only want records for department 120?

To understand the impact of this next piece, we need to understand that there are two ways to combine data:

  1. Bring ALL the data into Excel, then filter out the records you don't want
  2. Filter out the records you don't want, and ONLY bring in the ones you do

Guess which will consume fewer resources overall?  Method 2.  But the challenge here is that Power Query encourages you to use Method 1.  You're presented with a full table that is just begging you to filter it at this point… but you are better off dealing with it using Method 2… only it's not obvious how to do that.

The kicker here is that the logical flow of Power Query's Applied Steps window has been interrupted with the "Import Custom Function1" step.  And if you've ever used Power Query before, you know that modifying a Custom Function is far from a beginner friendly thing to do.

As it happens though, the Power Query team has given us a way to easily modify the custom function, it's just… well… let's just say that the name isn't as intuitive as it could be.

So this is the secret… if you select the "Transform Sample Binary from Combine Binaries" (what a mouthful!) it takes you to the template that the function is actually using.  To put this another way… any changes you make here will be used by the custom function when it imports the data.

So here's the changes I made in the Transform Sample:

  • Removed the Change Type step
  • Filtered Dept to 120
  • Renamed TranDate to Date
  • Renamed Sum of Amount to Amount
  • Change Type with Locale on the Date column to force it to US dates
  • Removed Errors from the Date column
  • Change Type on the other columns

In other words, I do the majority of the cleanup work here based on the single file.  The end result for the Transform Sample query looks like this:
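As a rough M sketch, the Transform Sample ends up along these lines (assuming the generated sample parameter is called "Sample File Parameter1"; your column names may differ):

    let
        Source = Csv.Document(#"Sample File Parameter1", [Delimiter=","]),
        #"Promoted Headers" = Table.PromoteHeaders(Source),
        // values are still text at this point, since the automatic Changed Type step was removed
        #"Filtered Rows" = Table.SelectRows(#"Promoted Headers", each [Dept] = "120"),
        #"Renamed Columns" = Table.RenameColumns(#"Filtered Rows", {{"TranDate", "Date"}, {"Sum of Amount", "Amount"}}),
        // force US date interpretation regardless of the system locale
        #"Changed Type with Locale" = Table.TransformColumnTypes(#"Renamed Columns", {{"Date", type date}}, "en-US"),
        #"Removed Errors" = Table.RemoveRowsWithErrors(#"Changed Type with Locale", {"Date"}),
        #"Changed Type" = Table.TransformColumnTypes(#"Removed Errors", {{"Dept", Int64.Type}, {"Amount", type number}})
    in
        #"Changed Type"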


Hopefully this makes sense.  It's a fairly straightforward transformation of a CSV file, but rather than doing the work on the files that have already been combined, I'm doing it in the Transform Sample query.  Why?  Because it pre-processes my data before it gets combined.  And the cool thing?  Now we go back to the main query I started with:


The results have already propagated, and we're good to go.

Thoughts and Strategies

Overall, I think the new Combine Binaries experience is a good thing.  Once we know how to modify it properly, it allows us some great flexibility that we didn't have before - at least not without writing our own custom functions.  There are a couple of things we do need to think about now though.

Name Your Queries Early

Under this new method, I'd highly recommend that you change the name of your query BEFORE you click that Combine Binaries button.  All those samples on the left side inherit their name from the name of the query, and Power Query is not smart enough to rename them when you rename your query.  Here's a comparison of two differently named queries:


So while I used to think about renaming my query near the end of the process, we really need to think about that up front (or go back and rename all the newly created queries later.)

Where Should You Transform Your Data?

The next big thought is where you should run your transforms… should you do it on the master query (the one with the consolidated results), or the sample used for the transformation function?

The answer to that kind of depends.  A few instances where this could really matter:

  1. Filtering: I would always filter in the sample.  Why spend the processing time to bring in ALL of the data, only to filter it at the end?  Much better to modify the Transform Sample to avoid bringing in the records in the first place.
  2. Sorting: I would suggest that most times you'd want to run your sorts in the master query.  Let's say you have a file for each month, and you sort by date.  If you do that in the master query, things will be sorted sequentially.  If you do it in the Transform Sample, it will be sorted sequentially within each file, but if the files are out of order, your master will still be out of order until you re-sort by date.  (Apr comes before Jan in the alphabet.)
  3. Grouping:  This one is tricky… most of the time you'll probably want to load all of your transactions into the master query, then group the data.  So if I want to group all transactions by month and account code, I'd do it in the master query.  But there are definitely instances where you'll want to pre-process your data, grouping the data inside the individual file before it lands in your master query.  A case in point might be the daily payments list for payments you issue to vendors.  If they are batched up then sent to the bank, you'd want to import the file with all daily records, group them, then land that pre-grouped data into your master query.  That is an operation that you may wish to do at the Transform Sample level.

The good news is that it is easy to experiment here and switch things up as you need to.

And One More Major Impact…

This blog post has really focussed on how the new Combine Binaries experience impacts importing text files.  What it doesn't cover, however, is why it was REALLY built.  I'm going to cover this in tomorrow's blog post: how the new Combine Binaries experience allows one click consolidation of Excel files!

Power Query Dependencies Viewer

The November 2016 update is now out, and it finally brings the Power Query dependencies viewer to Excel.  While it’s been out in Power BI Desktop for a while (as Matt posted about a while ago), this is huge news for Excel, as this feature has been badly needed.

Viewing Power Query Dependencies

To get to the Power Query Dependencies view, you simply need to perform the following steps:

  • Edit any query (just to get into the Power Query editor)
  • Go to the View tab
  • Click the Query Dependencies button


Once you do so, you’ll be launched into the Power Query dependencies window as shown below:


At first glance…

So at first glance, this is pretty exciting and yet – if you work with complicated Power Query setups like I do – you’ll find the Query dependencies view a bit… lacking in some areas too.

First off, if your query is complicated, it really does open that small.  Wow.  Now there is a scaling button down the bottom, but that quickly scales so that stuff is off-screen.  No problem, right?  We’ll just drag some stuff around then… oh… except we can’t.  Dragging any box around drags the entire model, not just that one box. 🙁

What can you do with the Query Dependencies viewer?

Rather than focus on the stuff we can’t do, I want to take a look at what we can (although I won’t be able to help making a couple of suggestions here as well.)

Maximizing the model layout

The first thing to point out is that despite the fact that it isn’t obvious, the window is resizable.  If you mouse over any border or corner you’ll get the arrows that indicate you can left click and drag to make the window bigger.

So normally the first thing I do is:

  • Move the window to the upper left of the screen
  • Drag the bottom right corner to make the model fill the entire screen
  • Click the little box icon in the bottom right corner by the scroll bar to “Fit to Screen”

After all, the reason I’m using this view is because the models are big!

Some things that would be really useful here:

  • It would be awesome if there was a Maximize button near the X in the top right (like the Power Query window and every other app has.)
  • It would also be good if we could double click the title bar and have it maximize the window (again, like so many apps out there.)

Either (or both) of those features would save me a lot of time.

Alternate Views for Tracing Query Dependencies

In the default view, the data sources are plotted at the top, and the queries cascade down below.  Fortunately you’re not stuck with this view, there are four different ways to display the model:


In this instance I’ve chosen Left to Right, which puts the data sources on the left and fans the query dependencies out to the right hand side.

Honestly, if I had my preferred way it would probably be to use Bottom to Top (data sources at the bottom and data model tables on the top.)  To me this should basically “bubble up” the model tables to the top of the screen.  Unfortunately it doesn’t quite work like that… all we can guarantee is that the data sources will be at the bottom, but the model tables could end up anywhere.

Ideally, I’d love to have an option to force the Data Sources to be lined up based on the first choice in that menu, and the Load Destinations (whether table or data model) be lined up in the viewer based on the option chosen for the second choice.  This would allow me to easily see the “From” and “To”, with the chain of what happened in between.

Tracing Query Dependencies

In the image below (click on it to see the larger version), I’ve selected one of the tables in the middle of the query dependencies tree:


The effect is that it highlights all child and dependent queries in the data flow.  That’s cool, and I’m okay with this being the default behaviour when I select a query step.  Wouldn’t it be cool though, if we also had:

  • A right click option to trace precedent queries only
  • A right click option to trace dependent queries only

Those would be super helpful in tracing a query’s flow without the extra noise, something that is really important when you need to quickly dig in to the key factors you want to know about your query dependencies.

Identifying Load Destinations

So the very first thing I did when I threw this specific model into the query dependencies view was identify two queries that were not in the query chain.  “Awesome,” I thought, so I went and deleted them.  Then I restored from backup, as one of them was in use!

Don’t get me wrong, the view was correct, it’s just that the distinction for load destinations is so weak that I saw no arrows and assumed it was good to be removed.  As it turns out, the words actually matter here:


The Day Types table is created from a hard coded list.  Since there are no queries flowing in or out of it (it is floating above the lines) I nuked it.  I missed the fact – especially with it being on the left – that it was actually loaded to the data model.

Raw Data-Departments, on the other hand, is pulling from the Current Workbook and is loaded as “Connection Only”.

So here’s my thoughts here:

  • I’d love to see nodes that are loaded to worksheets or the data model identified.  Either an icon in the top right or a shading in place would be ideal.  Something that makes them a bit less subtle than they are today.
  • I’m not a fan of the “Not loaded” term… it’s about as awesome as the “Load Disabled” that Power Query used to use about two years ago.  This should – in my opinion – be consistent with the UI and should read “Connection only”.  Not loaded makes it look like it isn’t working.

Navigating Query Dependencies

One of the issues I had above is that my Day Types table – being standalone – should not sit on top of any arrows… that’s just scary bad and misleading… but that’s actually part of a much bigger issue as this is kind of the style used throughout the entire tool:


This also leads me to another issue, in that I need to be able to follow these arrows.  Today the only ability you have – because you can’t move the boxes – is to essentially print the query dependencies window (you’ll need screen capture software for that, since there isn’t a print button) and trace with a highlighter.

What I’d love to see in this instance is the ability to select one or more arrows and have them turn bold.  It would be an even bigger bonus if selecting an arrow also shaded the tables on each end of it.  That would actually solve a few issues mentioned earlier too, allowing us to really drill into the relationships we need to trace.

Overall Impressions of the Query Dependencies Viewer

Overall it’s a good first version.  I’d really love to see some (or all) of the improvements I mentioned above, but it’s a HUGE amount better than what we had a month ago.  🙂

Extract Data from a Mixed Column

More and more I’m seeing examples where people are trying to extract data from a mixed column.  In other words, they have two data types in a single column, but need to find a way to extract one from the other.

Examining the issue

The sample data I’m using can be downloaded from this link.

I’m going to use Power BI Desktop for this, but the results will look identical in Excel using Power Query (except for the colour, of course.)

So let’s get started:

  • Get Data (new Query in Excel) –> From CSV –> MixedDataInColumn1.csv
  • Promote First Row as Headers

The issue can be seen in the red circles below… the report author injected the name of each vendor above that vendor’s first part in the list.


So the issue here is how to extract the vendor name from Part No column.  The problem is that there isn’t any obvious way to do this.  We have different textual values in all columns, which could change over time.  There’s really nothing that we can test for reliably in this case.

How to Extract Data from a Mixed Column

There are actually a few different ways to extract data from a mixed column… a few of which we demonstrate in our Power Query workshop.  I’m going to show just one here.

Step 1 – Identify a column with a pattern you can exploit

The key we are really looking for is a column which has values or dates for all rows other than the ones with our vendors.  In this case we actually have two: Part No and Cost.  Both have text on the Vendor lines, but what look like values on the rest.  The challenge we have here is that we can’t always guarantee that Part No won’t have text in it.  It’s completely possible we could see a part number like TH-6715 or something.  So this leaves us with the Cost column.

Step 2 – Duplicate the identified column

This next set of steps is actually the trick that lets us work this out.

  • Right click the column in question and choose Duplicate Column
  • Right click the Cost – Copy column –> Change Type –> Whole Number
  • Right click the Cost – Copy column –> Replace Errors –> null

You should now have null values where the textual values were located:
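In M, those three steps come out roughly like this (step names as the UI generates them; column names from the sample data):

    #"Duplicated Column" = Table.DuplicateColumn(#"Promoted Headers", "Cost", "Cost - Copy"),
    // converting a textual vendor name to a whole number produces an error...
    #"Changed Type" = Table.TransformColumnTypes(#"Duplicated Column", {{"Cost - Copy", Int64.Type}}),
    // ...which we then convert to null, leaving nulls exactly where the vendor names were
    #"Replaced Errors" = Table.ReplaceErrorValues(#"Changed Type", {{"Cost - Copy", null}})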


Step 3 – Use a little conditional logic

We now have something that we can use in order to extract the Vendor name.  So let’s build a little bit of conditional logic:

  • Add Column –> Conditional Column
  • Configure the Conditional Column as follows:


The only trick here is to make sure you change the Output to a column so that you can select from the list of columns.

  • Click OK
  • Right click the Vendor column –> Fill Down  (both steps are sketched in M below)
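Behind the UI, the conditional column and the fill are two short steps, along these lines:

    // where the numeric copy is null, the Part No column holds a vendor name
    #"Added Conditional Column" = Table.AddColumn(#"Replaced Errors", "Vendor",
        each if [#"Cost - Copy"] = null then [#"Part No"] else null),
    // push each vendor name down over the rows beneath it
    #"Filled Down" = Table.FillDown(#"Added Conditional Column", {"Vendor"})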

The result is shown below:


Step 4 – Clean up

We’re now at the point of clean up, which entails the following (sketched in M after the list):

  • Filter the Cost – Copy column to remove null values
  • Delete the Cost – Copy column
  • Set the data types on all columns
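A quick sketch of those cleanup steps in M (the final data types are assumed for illustration):

    #"Filtered Rows" = Table.SelectRows(#"Filled Down", each [#"Cost - Copy"] <> null),
    #"Removed Columns" = Table.RemoveColumns(#"Filtered Rows", {"Cost - Copy"}),
    // set the final types; the exact types depend on your data
    #"Changed Type1" = Table.TransformColumnTypes(#"Removed Columns", {{"Part No", type text}, {"Cost", type number}, {"Vendor", type text}})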

The results now look as follows:


At this point we can commit the query and we are good to go.

Final Thoughts

This is not a new trick by any means; I’ve been using it for a long time.  The biggest key is really about identifying patterns and thinking outside the box.

It’s unfortunately very easy to get focused on the primary column we want to solve, and lose sight of the others.  (Trust me, I’ve been there too.)  Sometimes though, when a column is particularly tough to deal with, we need to just step back and take a look at the bigger picture to see if there is a pattern in another column that we can exploit.  In fact, I’d say that this is probably one of the most important things to master when working with Power Query.

Visual Interactions in Power BI

In this post I’m going to explore the options for Visual Interactions in Power BI… in other words, the options to control what happens to the other visuals on the page when you select one in Power BI.

Visual Interactions in Power BI – The default experience

Let’s take a quick look at a report and see what happens when we select a visual.  Here’s a simple report with 3 charts and a card:


And, when we click one of the visuals, it cross filters each of the others.  In the case below, I’ve clicked “Vancouver” in the “Course Attendees by City” visual, and it has cross filtered all the rest:


Okay, so no secret there.  The important things to remember here are that:

  1. I didn’t need to do anything to set up the linkage for the visual interactions in Power BI, and
  2. The cross filtering is applied to show the currently selected portion of the whole

But what if we didn’t want this?

The 3 Options for Visual Interactions in Power BI

There are actually three different states for visual interactions in Power BI:

  • Highlight (the default experience of cross-filtering with shading)
  • Filter (cross-filtering to show the contextual values only)
  • None (do not filter)

You can find each by selecting any visual on your report in Power BI Desktop, then go to Visual Tools –> Format –> Edit Interactions.

Let’s take a look at each of them.

Visual Interactions in Power BI – Highlight

As mentioned above, this is the default of the visual interactions in Power BI.  You don’t need to set up anything to get this behaviour.  If you monkey with it, however, you can get back to it by selecting a different visual, then clicking the little pie chart icon that appears above the visual you want to modify.


The only other thing I want to call out here is what happens when we select a set of data that filters all records out of another visual.  In the case below, I’ve selected Kelowna in the Course Attendees by City chart.  As you can see only four of our courses have been led in Kelowna:


Notice that the last three courses still show, even though we never ran them in this city.  Why?  Because the visuals indicate that we’ve led all our other courses somewhere, but obviously not in Kelowna.

Visual Interactions in Power BI – Filter

The next icon to the left of the pie chart is the Filter icon.  This toggles the visual slightly:


The key difference here is that the shaded portion is gone.  This gives the appearance of drilling in to the data a little more, without preserving the concept of how this data relates to the whole.

Now, check out what happens when we select Kelowna:


The Courses Run by Name visual no longer holds any data about the whole, allowing it to remove the irrelevant courses.  End result here is that we’re able to focus on the data that exists in this context only, without contaminating it with irrelevant data.

To be fair, most of the time I actually quite like the version with the shaded values.  But if you have a long list of data then this can certainly help trim it down so you don’t have to scroll the visuals as much.

Visual Interactions in Power BI – None

The last method we can configure for visual interactions in Power BI is to set the filter behaviour to None.  This prevents any filtering from taking place on a visual with this property set:


At first this looks quite similar to the Filter setting, but the key here is that the data in the Courses Run by Name visual has not been filtered at all, unlike the other two chart visuals and the card visual.  To display the effects just a bit further, the image below shows the card visual set to None, and the city filtered to Kelowna:


Notice that this time the Attendees card shows the true total for All Attendees.  The Courses Run by City visual, however, is filtering, as I left this in “Highlight” mode.

A key observation

I haven’t called this out yet, but should: we can set up different actions for each visual on the report.  That adds a fair amount of flexibility in order to get your report filtering working just the way you want.

A weird Visual Interaction

Before you look at the next visualization, I want you to be keenly aware of this fact:

All visual interactions were set up as shown in the last image above.  I changed nothing else.

Keeping that in mind, look what happens when I click on the Creating Vibrant Dashboards course:


It cross filters all other visuals using the default interactions!

This is kind of a key thing to be aware of.  Just because I customized the visual interactions for the Course Attendees by City visual, it doesn’t force those relationships back the other way.  This means that I can customize how each visual affects the rest of the visuals on the page.  How cool is that?

The one missing setting

There is one setting that is badly missing, and that is the ability to persist selections when you select multiple visual filters.  What I mean by this is that I should be able to click on Vancouver, then click on the Creating Vibrant Dashboards selection, to see only records that meet both of those criteria.  Alas, there is no setting to do this today.  You can set this up using drill down, but this means you need to think about what the user wants in advance and build it out, which is pretty tough if you have lots of potential filter combinations.

MailChimp and Power BI

This week I was playing around with MailChimp and Power BI.  If you’re not familiar with MailChimp, it’s the service that we use to send out our Monthly-ish newsletter.  I thought I’d show how incredibly easy it is to get a dashboard from this service.

Software as a Service

MailChimp is what we refer to as a “Software as a Service” (SaaS) setup.  Basically what this means is that it is a subscription model where I have no need to set up my own server or host anything myself.  I simply subscribed, set up the forms on my website for you to fill out, and then use their services to build and send out the newsletter.  It’s pretty slick, and they don’t even charge me until I hit 2,000 subscribers or want to set up auto replies and such.  I’m a huge fan of this model, as it lets you do a “try before you buy” test, only upgrading when you want to.

What do you need?

There are only two requirements here:

  1. A MailChimp account of your own
  2. A PowerBI.com account which you can create for free (seriously, no Pro Subscription required)

Getting Started with MailChimp and Power BI

Step 1: Connect to MailChimp

So the process begins with signing in to your Power BI.com account.  Once there, you look for this button in the bottom left corner of your screen:


Click that, and you’ll be taken to another screen that looks like this:


Click Get on the Services tab.

Now, to be fair this is still pretty new, so there’s a lot of sources that you would “hope” would be here (like Survey Monkey) which currently aren’t.  Despite this, the easiest way to find what you’re looking for is to start typing the name of the SaaS source in the search pane in the top right.  It auto-filters the app list live.  Shown below are the results of “Ma” today:


So click the Get button at the bottom of the MailChimp entry.

You’ll now be asked to sign in.  If you have multiple domains under your MailChimp profile, just select one for now.  What you’ll see is that it takes you back to the portal and lets you know that it’s loading your data.

Step 2: Wait

Seriously?  Yes.  When you’ve chosen the account to log in, you’ll see that you get a new entry in the Dataset, Report and Dashboard sections of the Power BI portal:


See how the source is greyed out and has a star?  The star means it’s a new item, but the greyed out status means it’s loading.  It takes a while, and you won’t be able to do anything with it until it’s finished the initial load.  So go get a coffee and wait for the chimp to finish its trek to your portal.

Step 3: Explore the Dashboard

Wait, what?  I don’t need to build the dashboard myself?

No!  It’s already built for you.  MailChimp and Power BI together in a few easy clicks.  Check it out!


All fully clickable, and Q&A works like a charm:


What about Scheduled Refresh?

Right, I hear you.  That data set took a long time to load, so you don’t really want to have to come in and refresh it every day.  (I mean, unless you need an excuse to go for coffee…)

Well check this out.  This is shot from an account which only has a Power BI Free subscription.  I access this by clicking the … next to the data set then choosing Schedule Refresh (normally a Power BI Pro feature only):


Oh… and did I mention that this was created on Oct 27?  This report has already refreshed, and is scheduled to do it again with no configuration or pro license needed.  Ah, the beauty of connecting to a pre-built SaaS dashboard!

What about my other MailChimp Accounts?

If you have more than one domain in your MailChimp account, you might be surprised to see that when you try to add another MailChimp dashboard it just creates the same one again without prompting you to log in.  This is because it uses the cached credentials.  So how do you make this work?

Basically, you create the same report again then, once it’s finished loading, you go and edit the data source credentials on the Dataset for the second report.

To do this:

  • Go to Datasets –> Ellipses –> Schedule Refresh
  • Expand Data source credentials
  • Click Edit credentials
  • When prompted for OAuth2 credentials, click Sign In
  • Insert the username & password
  • Then choose the new data set when prompted


I’d also recommend going to each of the datasets, reports and dashboards and renaming them (again via the ellipses), just to keep it clear which is which.

Final Thoughts

The one thing I can’t do here is download the pbix file that was used to create this dashboard.  I’m sure it’s heavily parameterized (how else could they deploy it for whoever adds it?), but I’d sure love to be able to.  Why?  So I could connect directly to MailChimp from Power BI Desktop or Excel… maybe so that I could merge other data in with it.  (Once it’s published, I cannot get in and examine or change the M code in any way.)

Having said that, this is still pretty darned cool stuff!  I really hope other vendors jump on this train as well.  Building dashboards can be hard, and this makes it super easy.  I’d love to see one for my Facebook account, LinkedIn and other SaaS sources.

For reference, Google DOES provide a dashboard too; it’s just as easy to set up and auto-refreshes in the same way.

Power BI Slicers

For those coming from the Excel 2013 world, you’ll surely want to create filters using the Power BI Slicers.  After all, you know that Slicers and Timelines are two of the sexiest filters we have for controlling filter context in Power Pivot models.  In this post we’re going to explore the slicer visual, and how to get closer to what you’re used to in Excel.

The Goal

I’m going to fall back to my last project here, and have created a single visual on a blank report page.  You probably remember this one; it’s the map of where I’ve stayed so far this year:


What I’d like to do now is add my slicers and timelines.  I’d like a slicer each for Country and Province.  Should be easy, no?  Err… no.  Sad smile

The default Power BI Slicers

Now, Power BI Desktop has a Slicer visualization, as you can see here:


So what’s wrong with it?  I’ll build two to show you why I’m less than satisfied…  Here’s what I did:

  • Created a Slicer visual
  • Added the Country field
  • Created another Slicer visual
  • Added the Province field

Do these look like Excel slicers to you?  They look a lot more like checkboxes to me, not slicers…


Now don’t get me wrong, Power BI Slicers certainly work, as you can see here where I’ve drilled in to USA:


So let’s look at the difference between the Power BI Slicers and the Excel slicers that my expectations are based upon:

  • The Province field hides all irrelevant items by default, unlike Excel.  I could not find a configuration option to change this.
  • The checkbox thing drives me crazy.  I hate checkboxes in my Windows files list, I don’t like them here, and again it is inconsistent with Excel.  I could not find a way to turn those off.
  • Originally I wanted to show the provinces in a two column slicer, like I can in Excel.  I could not figure out how to make this happen either.
  • Finally, I wanted to show the bubbles like an Excel slicer.  The closest I could get was the image below (WARNING! SUPER UGLY GRAPHIC AHEAD!)  Should you feel the need to create this abomination you need to select the slicer, go to the Formatting roller –> Items –> Outline –> Frame.


Gross.  That is just gross.  Honestly, I really don’t understand why the slicer is so different from Excel’s.  That slicer is pretty, and people are used to it.

Not happy with these, I deleted both slicers.

Is all hope lost for attractive Power BI Slicers?

Thankfully, the answer is no.  The Power BI team has given developers the ability to create and distribute their own visuals into the Power BI custom visuals gallery.  So let’s go and pull in a couple of those to fill this gap.

Locating the Custom Visuals Gallery

To be fair, the steps for this could be MUCH easier.  To get there the first time, you can either just click this link or follow these steps:

  • Click the ellipses in the Visualizations gallery to import a custom visualization


  • Choose to Import a Custom Visual
  • Click the Learn More link


  • To be fair, you should probably read the page you’re taken to, as it talks about the risks of using a custom visual.  (Remember, not all custom visuals are provided by Microsoft; many are provided by 3rd parties.)
  • I scrolled straight to the bottom and clicked the link in the See Also section to go to the Power BI custom visuals gallery

You’ll be taken to the gallery, which has a lot of pretty visuals that can be imported into your project.

To make it easier to find custom visuals, I’d recommend you do a couple of things here:

  1. Bookmark this page (making it a bit easier to get back to it.)
  2. Choose to sort the gallery by Name rather than by Most Recent (which is the default)

When you click on a visual it will offer to download a pbiviz file that you can store in a folder.  You’ll want to remember the location, as you’ll need to import the visuals into every new PBI file you create.

I downloaded a specific visual here: the Chiclet Slicer which, ironically, is published by Microsoft.

Importing the Chiclet Slicer

When I returned to Power BI Desktop, it was still sitting at the Import Custom Visual dialog, which was convenient.  So I was able to just click the big yellow Import button and select the ChicletSlicer file.  Doing so adds a new option to the Visualizations gallery:


I created two new Chiclet slicers, one for Country and one for Province, and was pleased to end up with the following:


Now that’s more like it!  Certainly needs some tweaking, but better than the past iteration.  So let’s get to that tweaking…  I changed the formatting options for each of the slicers as follows:

  • The Country slicer
    • General –> Columns –> 1
    • Header –> Off
    • Chiclets
      • Unselected Color = very light grey
    • Title –> On
      • Text = Country, Font Color = Black, Alignment = Center, Text Size = 10
  • The Province slicer
    • General –> Columns –> 2
    • Header –> Off
    • Chiclets
      • Unselected Color = very light grey
    • Title –> On
      • Text = Province, Font Color = Black, Alignment = Center, Text Size = 10

And, as you can see, the results are pretty good:


A couple of things that I couldn’t figure out here though:

  • I wanted to align the text in my “chiclets” to the left, like in Excel.  Can’t seem to find an option for that.
  • There is a tantalizing option in the “General” section to show disabled items “Inplace”, and an option in the chiclets to set the colour for those items.  I would have expected it to be equivalent to Excel’s “Show Disabled”, but it doesn’t seem to do that.  I have not figured out how to replicate that effect.

Final Thoughts

To be fair, there are a ton of configuration options for the Chiclet slicer, far more than I’m going to cover.  Why this slicer isn’t part of Power BI’s default install is beyond me… especially since it’s published by Microsoft.

Values Become Text After UnPivoting Other Columns

Have you ever set up a nice query to UnPivot other columns, only to find that the query data types change when you add new columns?  This post will cover why values become text after unpivoting other columns.


We’ve got a nice little table called “Data” showing here.  Nothing special, it just summarizes sales by region by month, and our goal is to unpivot this so that we can use it in future Pivot Tables.  (You can download the source file here.)


Now, you will notice that April’s sales are outside the table. This is by design, and we’ll pull it in to the table later when we want to break things.  Smile

UnPivoting Other Columns – The Hopeful Start

If you’ve been following my blog for any period of time, you’ve seen this, but let’s quickly go over how to unpivot this:

  • Select a cell in the table
  • Go to Power Query (or Data in Excel 2016) –> From Table

We’re now looking at the Power Query preview of the table:


Great, now to unpivot…

  • Hold down the Shift key and select the Country and Prov/State columns
  • Right click the header of either of the selected columns and choose Unpivot Other Columns
  • Right click the headers of the two new columns and rename them as follows:
    • Attribute –> Month
    • Value –> Sales
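
For those who like to peek behind the curtain, here’s roughly what those clicks generate in M.  This is a minimal sketch (assuming the source table is named “Data” and Power Query’s default step names; your generated code may differ slightly), viewable in the Advanced Editor:

    let
        // Connect to the Excel table named "Data"
        Source = Excel.CurrentWorkbook(){[Name="Data"]}[Content],
        // Power Query adds this step automatically when the data lands
        #"Changed Type" = Table.TransformColumnTypes(Source,
            {{"Country", type text}, {"Prov/State", type text},
             {"Jan", Int64.Type}, {"Feb", Int64.Type}, {"Mar", Int64.Type}}),
        // Keep Country and Prov/State; unpivot everything else
        #"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Changed Type",
            {"Country", "Prov/State"}, "Attribute", "Value"),
        // Rename the Attribute/Value pair that the unpivot created
        #"Renamed Columns" = Table.RenameColumns(#"Unpivoted Other Columns",
            {{"Attribute", "Month"}, {"Value", "Sales"}})
    in
        #"Renamed Columns"

Keep an eye on that Changed Type step; it will matter shortly.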

Re-Pivoting from the Data Model

With the table complete, I’m going to load this to the data model and create a Pivot Table:

  • Go to Home –> Close & Load –> Close & Load To…
  • Choose to Load to the Data Model

The steps to create the Pivot depend on your version of Excel:

  • Excel 2013: Go in to Power Pivot –> Home –> PivotTable and choose a location to create it
  • Excel 2016: Click any blank cell and go to Insert –> PivotTable.  As you have no data source selected, it will default to using the data model as your source:


With the PivotTable created, I’ve configured it as follows:

  • Rows:  Country, Prov/State
  • Columns:  Month
  • Values:  Sales

And that gives me a nice Pivot like this:


Let’s Break This…

Okay, so all is good so far, what’s the issue?  Now we’re going to break things.  To do that, we’re going to go back to our original data table and expand the range:


In the picture above, I’ve left-clicked and dragged the tiny little widget in the bottom right corner of the table to the right.  The table frame is expanding, and when I let go, the Apr column turns blue, indicating that it is now inside the boundaries of the table.

With that done, I’m going to right click and refresh my Pivot Table, leaving me with this:


Huh?  Why was the Sales measure removed?  And if I drag it back to the table, I get a COUNT, not a SUM of the values?  And even worse, when I try to flip it back to SUM, I’m told that I can’t?  What the heck is going on here?


Importance of Power Query Step Order

To cut to the chase, the issue here is that when we first created the table in the data model, the Sales column was passed as values.  But when we updated the data to include the new column, the Sales column was passed entirely as text, not values.  Naturally, Power Pivot freaks out when you ask for the SUM of a textual column.

The big question though, is why.  So let’s look back at our query.

Our original data set

If we edit our query, we see that the steps look like this:


To review this quickly, here’s what happened originally:

  • Source is the connection that streams in the source data with the following columns:


  • Changed Type set the data type for all the columns.  In this case the Country and Prov/State fields were set to text, and the Jan, Feb & Mar columns were set to whole number.  We can see this by looking at the icons in the header:


Note that if you don’t have these icons, you should download a newer version of Power Query, as this feature is available to you and is SUPER handy.


  • We then selected the Country and Prov/State columns and chose to Unpivot Other Columns.  Doing so returned a table with the following headers


Notice that the first three columns are all textual, but Sales is showing a numeric format?  Interestingly, it’s now showing a decimal format rather than a whole number, but it carries a numeric format at all only because every column we unpivoted already had an explicitly defined numeric type.

The final steps we did were to rename our columns and load to the data model.  Since the data types had already been defined, the columns were sent to the data model with Sales carrying a numeric type.

Why Values Become Text After UnPivoting Other Columns

Okay, so now that we know what happened, let’s look at what we get when we step through the updated data set.

  • First we pulled in all the columns.  We can plainly see that we have the new Apr column:


  • The Changed Type step is then applied:


Hmm… do you see that last data type?  Something is off here…

So when we originally created this query, Power Query helpfully pulled in the data and applied data types to all the existing columns.  The problem here is two-fold: first, the Apr column didn’t exist at the time; and second, Power Query’s M language hard-codes the column names when it sets the data types.  The net effect is that upon refresh, only the original columns have data types defined, leaving any new column with a data type of “any” (or undefined, if you prefer).
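
To make that concrete, the auto-generated step is hard-coded along these lines (a sketch using the default step names):

    // Only the columns that existed when the query was first built are listed.
    // "Apr" appears nowhere here, so on refresh it is left as type "any".
    = Table.TransformColumnTypes(Source,
        {{"Country", type text}, {"Prov/State", type text},
         {"Jan", Int64.Type}, {"Feb", Int64.Type}, {"Mar", Int64.Type}})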

  • We then unpivoted the data, but now we see a difference in the output:


Check out that Value column.  Previously this was a decimal number; now it’s an “any” data type.  Why?  Because there were multiple data types across the columns being unpivoted, so Power Query doesn’t know which one is correct.  If one column were legitimately text and Power Query forced a numeric type on it, you’d get errors, so it errs on the side of caution here.  The problem is that this has a serious effect on the eventual load to Power Pivot…
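
Note that the unpivot step itself hasn’t changed at all; it’s the inputs flowing into it that have (again, a sketch with default step names):

    // Jan, Feb & Mar arrive typed as whole numbers, but Apr arrives as "any".
    // Faced with mixed types in the single new Value column, Power Query
    // falls back to "any" rather than guessing which type is correct.
    = Table.UnpivotOtherColumns(#"Changed Type",
        {"Country", "Prov/State"}, "Attribute", "Value")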

  • Finally, we renamed the last two columns… which works nicely, but it doesn’t change the data type:


Okay, so who cares, right?  There is still a number in the “any” format, so what gives?

What you get here depends on where you load your data.  If you load it to the Excel worksheet, these will all be interpreted as values.  But Power Pivot is a totally different case: it defaults any column defined as “any” to the Text data type, resulting in the problems we’ve already seen.

Fixing the Issue

For as long as we’ve been teaching our Power Query Workshop, we’ve advocated defining data types as the last step in your query, and this is exactly why.  In fact, you don’t even need to define data types at the midpoint of this one; that’s just Power Query trying to be helpful.  To fix this query, here’s what I recommend:

  • Delete the existing Changed Type step
  • Select the final step in the query (Renamed Columns)
  • Set the data type for each column to Text, except the Sales column, which should be Decimal Number (or Currency if you prefer)
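
The resulting final step ends up looking something like this (a sketch, assuming the Renamed Columns step kept its default name):

    // Declared as the LAST step, after the unpivot, so any new source
    // columns are already folded into Month/Sales before types are set
    = Table.TransformColumnTypes(#"Renamed Columns",
        {{"Country", type text}, {"Prov/State", type text},
         {"Month", type text}, {"Sales", type number}})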


When this is re-loaded to the Data Model, you’ll again be able to get the values showing on the Pivot Table as Sum of Sales.

Avoiding the Issue

Now, if you don’t want Power Query automatically choosing data types for you, there is a setting to toggle this.  The only problem is that it is controlled at the workbook level, not globally for Excel.  So if you don’t mind setting it for every new workbook, you can do so under the Power Query settings:


Is Changed Type Designed in the Correct Way?

It’s a tough call to figure out the best way to handle this.  Should the data types be automatically hard-coded each time you add a new column?  If the UnPivot command injected a Changed Type step automatically, we wouldn’t have seen this issue.  On the other hand, if a textual value did creep in there, we’d get an error, which would show up as a blank value when loaded to Power Pivot.  Maybe that’s fine in this case, but I can certainly see where it might not be desirable.

Personally, I’d prefer to get a prompt when leaving a query whose final step doesn’t define data types.  Something along the lines of “We noticed your final step doesn’t declare data types.  Would you like to do this now? (Recommended)” or similar.  I see this as an alternative to the up-front data type declaration, and to be honest, I think it would be a more logical place for it.