Combine Excel Files

If you've wanted to use Power Query to combine Excel files with a single click - like you could for TXT and CSV files - the feature is finally here*.  The Combine Binaries feature has been re-written (as I discussed yesterday), and it now allows for easy combination of Excel files.

* This new feature is in the O365 Fast Insider preview today and in Power BI Desktop's November update.  I'm expecting to see it show up in the January Power Query update for Excel 2010/2013.

Just a quick caveat: I'm going to cover the items specific to the Combine Excel Files experience here, and not dive deep into the methods of how to modify and adjust the created queries.  For that reason, if you haven't read yesterday's post, I highly recommend doing that first.

The Classic Combine Excel Files Experience

For anyone who has tried to combine Excel files in the past, you'll be familiar with this.  You create a new query to pull From File --> From Folder and navigate to the folder full of Excel files.  Then you hopefully click the Combine Binaries button:

image

And you get this:

image

Obviously that's not so helpful.  The way to deal with this was to go to Add Column --> Add Custom Column and use the formula =Excel.Workbook([Content]) to convert the binaries into tables. Then some filtering and more user-interface-driven work was required in order to get your data.  The experience was a bit painful, and certainly not for beginner users.
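For context, here's a minimal M sketch of that classic pattern (the folder path and column names are placeholders, not values from any real query):

    let
        Source = Folder.Files("C:\Data"),  // hypothetical folder full of Excel files
        AddCustom = Table.AddColumn(Source, "Custom", each Excel.Workbook([Content]))  // one table of workbook objects per file
    in
        AddCustom

From there you'd expand the Custom column, filter down to the sheets or tables you wanted, and expand again to get at the actual data.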

The New Combine Excel Files Experience

To start with, I'm going to take two Excel files with sales transactions in them and combine them easily using the new Combine Excel Files experience.  Here are the file characteristics:

  • Each file has a small set of data (2-3 rows)
  • The data is stored on Sheet 1 in a table
  • Both files are stored in the same folder

To get the data, I'm going to do the following:

  • Get Data using a new query "From File --> From Folder"
  • Browse to and select the folder
  • Click Edit at the preview window

As you can see, we've got a couple of Excel files here:

SNAGHTML9cbff3d

So I'll click the Combine Binaries button (at the top right of the Content column.)

And where this would have triggered an error in the past, it now kicks out a preview:

image

And what happens next depends on what you select (which is why they are numbered above.)

Combine Excel Files - Method 1

For the first kick at this, I'll select the Sample Binary Parameter 1 folder icon (indicated by the number 1 in the Combine Binaries preview.)

Nothing will ever show in the preview window, but upon selecting the folder I can click OK, which will result in this:

image

As I showed in yesterday's post on the new Combine Binaries experience, we now get a bunch of new queries and a few steps along the way.  The end result of this query, however, is a listing of all the worksheets, tables and ranges in each workbook.  This is the view that will allow you to go back and granularly pick out what you need and transform it.  In other words, this is the detailed, hard-core view: the equivalent of writing the custom columns that we used to have to do.

Because this is so similar to the classic method, I'm not going to do much more with this.  The real point was to expose that selecting the folder in the preview window will bring you to this setup.

Combine Excel Files - Method 2

Method 2 revolves around selecting the table in the preview window; in this case the Sales table.  When we select that branch in the preview window we WILL get a preview of the data:

image

And when we click OK, we actually get the data combined nicely:

image

As discussed in the previous post, if we wanted to modify and/or change:

  • The Source columns (Source.Name or others): We modify the Removed Other Columns1 step in this query.
  • The data before it is imported and combined: We modify the Transform Sample on the left side.

Now this section is MUCH easier than what we used to have to do!

Combine Excel Files - Method 3

But what if your data is not in an official Excel Table?  What if it's just data in a worksheet range?  Well, then you select the worksheet icon instead:

image

And the results are virtually identical to the previous method:

image

Why does that work?  It works because the Transform Sample is smart enough to automatically promote the first row to headers, so it doesn't actually need the table.  On the other hand, if that data wasn't in the first row, you may need to go back to the Transform Sample and tweak it to do what you need (like remove top rows, promote rows to headers, delete columns, filter, etc.)

Caveats When Trying to Combine Excel Files

This experience will work well for many things, but as always there are some caveats.

Single Object Only

The default experience here is to combine based on the object you select.  In other words, if you select Sheet 1, it will combine Sheet 1 from each file.  It won't combine all of the sheets in each file using the Sheet 1 template.  If you want to do that, you need to go back to Method 1 above, filter to the objects you want, and deal with them using classic import methods.  (Unless you try to get really techy and build the function, then repurpose it to use in that table - something I have not done yet.)
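If you did want to combine every sheet the classic way, a rough M sketch might look like the following.  This bypasses the generated function entirely; the folder path is a placeholder, and it assumes every sheet carries its own header row:

    let
        Source = Folder.Files("C:\Data"),  // hypothetical folder path
        AddObjects = Table.AddColumn(Source, "Objects", each Excel.Workbook([Content])),
        Expanded = Table.ExpandTableColumn(AddObjects, "Objects", {"Name", "Kind", "Data"}),
        SheetsOnly = Table.SelectRows(Expanded, each [Kind] = "Sheet"),            // filter to the objects you want
        Promoted = Table.TransformColumns(SheetsOnly, {{"Data", Table.PromoteHeaders}}),
        Combined = Table.Combine(Promoted[Data])                                   // append them all
    in
        Combined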

Preview Based On First Item In the List

The preview and import templates are based on the first file in the list.  You can see how that affects things when I throw a new file into my structure that has different worksheet and table names:

image

While the two Sales workbooks have Sheet1 in them, this one doesn't, making it impossible to use this function to combine the first worksheet in each file. (The template would be based on Dec and would return errors for the other two files.)

If the order is important, you'll need to sort the file list first to get the correct file to the top before you trigger the Combine Binaries function.
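In M terms, that sort is just one extra step slipped in before the combine.  A hedged sketch, assuming the folder listing step is called Source and you want to sort by file name:

    SortedFiles = Table.Sort(Source, {{"Name", Order.Ascending}})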

For the record, I have sent an email to the Power Query team suggesting that it would be nice to get the option to pick the file here which the template should be based upon.  That would make this much easier to work through, I think.

Inconsistent Columns Are Bad News

Let's say that you have two files with tables in them that have different column names (or a different number of columns).   The transformations generated will actually deal with this correctly, resulting in a column of tables which have different headers.  All good so far, but when the main query gets to the last step, it expands the column of tables based on the headers for the table in the first row only.  This is actually standard behaviour, so not a surprise; I just want to make sure you don't think this is a holy grail that will solve the differing column issue.  (We have more work to do in that case.)
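If you do hit that case, one common workaround (a sketch of the general technique, not something the generated query does for you) is to expand using the union of every nested table's column names instead of just the first one.  Here, Prev and the "Data" column are placeholders for the step holding your column of tables:

    AllColumns = List.Union(List.Transform(Prev[Data], Table.ColumnNames)),
    Expanded = Table.ExpandTableColumn(Prev, "Data", AllColumns)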

Overall Thoughts

At the end of the day, I have to say that this is a pretty welcome addition.  I'm still not a fan of the names of the generated queries, and I would add something to change the template file, but I think this is going to make it a LOT easier for people to import and transform Excel files than it has been in the past.

New Combine Binaries Experience

One of the updates in the latest Excel 2016 Insider preview build is a new Combine Binaries experience for Power Query.  This is both good and bad, and in this post we'll take a detailed look at both sides before it hits you in your Excel implementation.  (With this new feature in the O365 Fast Insider preview, and it already existing in Power BI Desktop's November update, I'd expect to see it show up in the January Power Query update for Excel 2010/2013.)

A Little History Lesson

Before we start, let's get everyone on the same page.  If you didn't know this, the Combine Binaries feature is one of the coolest things in Power Query or Power BI Desktop, as it lets you do a one-click consolidation of "flat" files (txt, csv and the like).

How Combine Binaries (Used to) Work

To make this work, you set up a new query to get data From File --> From Folder, browse to the folder that contains your files, select it and clear the preview window.  At that point all the magic happens when you click the Combine Binaries button at the top of the Content column:

SNAGHTML56229e0

That "one-click" experience would take your source folder and add four steps to your query:

image

And it puts it all together nicely.  So cool, so slick, so easy.
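From memory, the generated M followed roughly this pattern (a sketch, not the exact generated code; the path and CSV options are placeholders):

    let
        Source = Folder.Files("C:\Data"),                    // hypothetical folder path
        CombinedBinaries = Binary.Combine(Source[Content]),  // glue all the file binaries end to end
        ImportedCSV = Csv.Document(CombinedBinaries, [Delimiter=","]),
        PromotedHeaders = Table.PromoteHeaders(ImportedCSV)
    in
        PromotedHeaders

The key detail is that the binaries were combined first and parsed second, which is exactly why the file names were lost and each file's header row ended up in the data.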

Why did we need a new Combine Binaries experience?

So the first real question here is "Why even modify this experience at all?"  As it happens, there were a few issues in the original experience:

  1. You lost the original file name details.  If you wanted to keep that, you needed to roll your own custom function.
  2. Headers from each file were not removed, so you'd have to filter those out manually.
  3. It could only work with flat files (csv, txt, etc.), not more complex files like Excel files.

So for these reasons, the team decided to build a more robust system that could deal with more file types and more data.

The New Combine Binaries Experience

So let's look at what happens in the new experience.  We start the same way as we always did:

  • Set up a new query to get data From File --> From Folder
  • Browse to the folder that contains your files, select it and click OK
  • Click Edit at the preview window
  • Click the Combine Binaries button at the top of the Content column

At this point a whole bunch of crazy stuff now happens, and your query chain looks totally different than in the past:

SNAGHTML610c47b

There are 3 main areas that are different here:

  1. A whole bunch of new queries and parameters with very similar names
  2. The original source name is retained
  3. Different query steps than in the past

If your first reaction to this is being overwhelmed, you're not alone.  I'll admit that my first reaction was not a happy one.  There is a huge amount of stuff injected in the file, it's difficult to follow the relationships in the M code (even if you do know how to read it all), and it isn't intuitive as to what to do with it.

At the end of the day, the biggest change here is that things happen in a different order than they did in the past.  In the original implementation of the Combine Binaries feature set, it combined the files first, then applied some other Power Query steps.

The new method actually examines the individual files, formats them via a custom function, then appends them.  This is very different, and it actually gives us a bit more flexibility with the files.
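Conceptually, the new main query reads something like this (all names are illustrative; the generated queries use longer ones):

    let
        Source = Folder.Files("C:\Data"),                    // hypothetical folder path
        Invoked = Table.AddColumn(Source, "Transform File",
            each TransformSample([Content])),                // TransformSample stands in for the generated custom function
        Kept = Table.SelectColumns(Invoked, {"Name", "Transform File"}),
        Expanded = Table.ExpandTableColumn(Kept, "Transform File",
            Table.ColumnNames(Invoked{0}[Transform File]))   // append the per-file tables into one
    in
        Expanded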

What's the end effect that will be different for you?  Simply this:

  • More queries in the chain (maybe you don't care), and
  • The file name is preserved by default (which was not there in the past)

SNAGHTML61d909b

Now if you are good with everything here, then no worries.  Continue on, and you're good to go.  But what if you want to make changes?

Making Changes in the new Combine Binaries Experience

The biggest challenge I have with this new implementation is this: if you are a user hitting it for the first time, how and what can you change?

Changing or Preserving File Details

What if I wanted to keep more than just the file name… maybe I wanted to keep the file path.  You'll be surprised, but this is actually pretty easy now.  Since Power Query wrote a custom function for us to import the data, all the file properties were already preserved for us.  But it also made a choice to keep only the source file name.

To modify this, we just look at the steps in the Applied Steps window, click the gear next to "Removed Other Columns", and choose the ones we do want to keep:

SNAGHTML623e100

So in this case, I can actually uncheck the box next to Source.Name and remove that from the output.  (We want to keep the very last step, as that is what actually appends the data from the files).
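Under the covers, that gear dialog is just editing a Table.SelectColumns step.  Keeping the folder path as well might look something like this (the step and column names are assumptions about the generated query, not taken from it):

    = Table.SelectColumns(#"Import Custom Function1", {"Source.Name", "Folder Path", "Transform File"})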

Also… after looking at the Applied Steps window, I realized the Renamed Column1 step was only in place to avoid a potential name conflict if you had a column called Name (there is a really good reason for this which I'll look at in another post.)  In this case it is totally unnecessary, so we can just delete it.

So now our code looks as shown below, and the output looks similar to what we would see in the old experience:

SNAGHTML629962e

Notice I said that it looks similar - not that it is identical.  This is actually better as there are no headers repeating in the data at all, so we don't need to get rid of those.  That is an improvement.

Modifying the Import Function

Now, that's all good for the details about the file, but what about the data?  What if I only want records for department 120?

To understand the impact of this next piece, we need to understand that there are two ways to combine data:

  1. Bring ALL the data into Excel, then filter out the records you don't want
  2. Filter out the records you don't want, and ONLY bring in the ones you do

Guess which will consume fewer resources overall?  Method 2.  But the challenge here is that Power Query encourages you to use Method 1.  You're presented with a full table that is just begging you to filter it at this point… but you're better off dealing with it using Method 2… only it's not obvious how to do that.

The kicker here is that the logical flow of Power Query's Applied Steps window has been interrupted with the "Import Custom Function1" step.  And if you've ever used Power Query before, you know that modifying a custom function is far from a beginner-friendly thing to do.

As it happens though, the Power Query team has given us a way to easily modify the custom function, it's just… well… let's just say that the name isn't as intuitive as it could be:

image

So this is the secret… if you select the "Transform Sample Binary from Combine Binaries" (what a mouthful!) it takes you to the template that the function is actually using.  To put this another way… any changes you make here will be used by the custom function when it imports the data.

So here are the changes I made in the Transform Sample (pulled together in the M sketch after the list):

  • Removed the Change Type step
  • Filtered Dept to 120
  • Renamed TranDate to Date
  • Renamed Sum of Amount to Amount
  • Change Type with Locale on the Date column to force it to US dates
  • Removed Errors from the Date column
  • Change Type on the other columns
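Expressed in M, the modified Transform Sample might read roughly like this (a sketch; the delimiter, locale and types are assumptions based on the steps above):

    let
        Source = Csv.Document(#"Sample Binary Parameter 1", [Delimiter=","]),
        Promoted = Table.PromoteHeaders(Source),
        Filtered = Table.SelectRows(Promoted, each [Dept] = "120"),          // values are still text at this point
        Renamed = Table.RenameColumns(Filtered, {{"TranDate", "Date"}, {"Sum of Amount", "Amount"}}),
        TypedDates = Table.TransformColumnTypes(Renamed, {{"Date", type date}}, "en-US"),  // Change Type with Locale
        NoErrors = Table.RemoveRowsWithErrors(TypedDates, {"Date"}),
        Typed = Table.TransformColumnTypes(NoErrors, {{"Amount", type number}})
    in
        Typed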

In other words, I do the majority of the cleanup work here based on the single file.  The end result for the Transform Sample query looks like this:

SNAGHTML63c4468

Hopefully this makes sense.  It's a fairly straightforward transformation of a CSV file, but rather than doing the work on the files that have already been combined, I'm doing it in the Transform Sample query.  Why?  Because it pre-processes my data before it gets combined.  And the cool thing?  Now we go back to the main query I started with:

SNAGHTML63e5508

The results have already propagated, and we're good to go.

Thoughts and Strategies

Overall, I think the new Combine Binaries experience is a good thing.  Once we know how to modify it properly, it allows us some great flexibility that we didn't have before - at least not without writing our own custom functions.  There are a couple of things we do need to think about now though.

Name Your Queries Early

Under this new method, I'd highly recommend that you change the name of your query BEFORE you click that Combine Binaries button.  All those samples on the left side inherit their name from the name of the query, and Power Query is not smart enough to rename them when you rename your query.  Here's a comparison of two differently named queries:

SNAGHTML988b327

So while I used to think about renaming my query near the end of the process, we really need to think about that up front (or go back and rename all the newly created queries later.)

Where Should You Transform Your Data?

The next big thought is where you should run your transforms… should you do it on the master query (the one with the consolidated results), or the sample used for the transformation function?

The answer to that kind of depends.  A few instances where this could really matter:

  1. Filtering: I would always filter in the sample.  Why spend the processing time to bring in ALL of the data, only to filter it at the end?  Much better to modify the Transform Sample to avoid bringing in the records in the first place.
  2. Sorting: I would suggest that most times you'd want to run your sorts in the master query.  Let's say you have a file for each month, and you sort by date.  If you do that in the master query, things will be sorted sequentially.  If you do it in the Transform Sample, it will be sorted sequentially within each file, but if the files are out of order, your master will still be out of order until you re-sort by date.  (Apr comes before Jan in the alphabet.)
  3. Grouping:  This one is tricky… most of the time you'll probably want to load all of your transactions into the master query, then group the data.  So if I want to group all transactions by month and account code, I'd do it in the master query.  But there are definitely instances where you'll want to pre-process your data, grouping the data inside the individual file before it lands in your master query.  A case in point might be the daily payments list for payments you issue to vendors.  If they are batched up then sent to the bank, you'd want to import the file with all daily records, group them, then land that pre-grouped data into your master query.  That is an operation you may wish to do at the Transform Sample level (see the sketch after this list).
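For that last, grouping-in-the-sample case, the extra step added to the Transform Sample might look something like this hedged sketch (the prior step and column names are made up for illustration):

    Grouped = Table.Group(#"Promoted Headers", {"PaymentDate", "Vendor"},
        {{"Amount", each List.Sum([Amount]), type number}})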

The good news is that it is easy to experiment here and switch things up as you need to.

And One More Major Impact…

This blog post has really focussed on how the new Combine Binaries experience impacts importing text files.  What it doesn't cover, however, is why it was REALLY built.  I'm going to cover this in tomorrow's blog post: how the new Combine Binaries experience allows one-click consolidation of Excel files!

Power Query Dependencies Viewer

The November 2016 update is now out and it finally brings the Power Query dependencies viewer to Excel.  While it’s been out in Power BI Desktop for a while (as Matt posted about a while ago), this is huge news for Excel, as this feature has been badly needed.

Viewing Power Query Dependencies

To get to the Power Query Dependencies view, you simply need to perform the following steps:

  • Edit any query (just to get into the Power Query editor)
  • Go to the View tab
  • Click the Query Dependencies button

image

Once you do so, you’ll be launched into the Power Query Dependencies window as shown below:

image

At first glance…

So at first glance, this is pretty exciting and yet – if you work with complicated Power Query setups like I do – you’ll find the Query dependencies view a bit… lacking in some areas too.

First off, if your query is complicated, it really does open that small.  Wow.  Now there is a scaling button down at the bottom, but it quickly scales the view so that stuff is off-screen.  No problem, right?  We’ll just drag some stuff around then… oh… except we can’t.  Dragging any box around drags the entire model, not just that one box.

What can you do with the Query Dependencies viewer?

Rather than focus on the stuff we can’t do, I want to take a look at what we can (although I won’t be able to help making a couple of suggestions here as well.)

Maximizing the model layout

The first thing to point out is that despite the fact that it isn’t obvious, the window is resizable.  If you mouse over any border or corner you’ll get the arrows that indicate you can left click and drag to make the window bigger.

So normally the first thing I do is:

  • Move the window to the upper left of the screen
  • Drag the bottom right corner to make the model fill the entire screen
  • Click the little box icon in the bottom right corner by the scroll bar to “Fit to Screen”

After all, the reason I’m using this view is because the models are big!

Some things that would be really useful here:

  • It would be awesome if there was a Maximize button near the X in the top right (like the Power Query window and every other app has.)
  • It would also be good if we could double click the title bar and have it maximize the window (again, like so many apps out there.)

Either (or both) of those features would save me a lot of time.

Alternate Views for Tracing Query Dependencies

In the default view, the data sources are plotted at the top, and the queries cascade down below.  Fortunately you’re not stuck with this view; there are four different ways to display the model:

image

In this instance I’ve chosen Left to Right, which puts the data sources on the left and fans the query dependencies out to the right hand side.

Honestly, if I had my way, my preference would probably be to use Bottom to Top (data sources at the bottom and data model tables on the top.)  To me this should basically “bubble up” the model tables to the top of the screen.  Unfortunately it doesn’t quite work like that… all we can guarantee is that the data sources will be at the bottom, but the model tables could end up anywhere.

Ideally, I’d love to have an option to force the Data Sources to be lined up based on the first choice in that menu, and the Load Destinations (whether table or data model) be lined up in the viewer based on the option chosen for the second choice.  This would allow me to easily see the “From” and “To”, with the chain of what happened in between.

Tracing Query Dependencies

In the image below (click on it to see the larger version), I’ve selected one of the tables in the middle of the query dependencies tree:

image

The effect is that it highlights all child and dependent queries in the data flow.  That’s cool, and I’m okay with this being the default behaviour when I select a query step.  Wouldn’t it be cool though, if we also had:

  • A right click option to trace precedent queries only
  • A right click option to trace dependent queries only

Those would be super helpful in tracing a query’s flow without the extra noise, something that is really important when you’re trying to quickly dig in to the key factors you want to know about your query dependencies.

Identifying Load Destinations

So the very first thing I did when I threw this specific model into the query dependencies view was identify two queries that were not in the query chain.  “Awesome,” I thought, so I went and deleted them.  Then I restored from backup, as one of them was in use!

Don’t get me wrong, the view was correct, it’s just that the distinction for load destinations is so weak that I saw no arrows and assumed it was good to be removed.  As it turns out, the words actually matter here:

image

The Day Types table is created from a hard coded list.  Since there are no queries flowing in or out of it (it is floating above the lines) I nuked it.  I missed the fact – especially with it being on the left – that it was actually loaded to the data model.

Raw Data-Departments, on the other hand, is pulling from the Current Workbook and is loaded as “Connection Only”.

So here’s my thoughts here:

  • I’d love to see nodes that are loaded to worksheets or the data model identified.  Either an icon in the top right, or some shading in place would be ideal.  Something that makes them a bit less subtle than they are today.
  • I’m not a fan of the “Not loaded” term… it’s about as awesome as the “Load Disabled” that Power Query used to use about two years ago.  This should – in my opinion – be consistent with the UI and should read “Connection only”.  “Not loaded” makes it look like it isn’t working.

Navigating Query Dependencies

One of the issues I had above is that my Day Types table – being standalone – should not sit on top of any arrows… that’s just scary bad and misleading… but that’s actually part of a much bigger issue as this is kind of the style used throughout the entire tool:

image

This also leads me to another issue: I need to be able to follow these arrows.  Today the only ability you have – because you can’t move the boxes – is to essentially print the query dependencies window (you’ll need screen capture software for that, since there isn’t a print button) and trace the arrows with a highlighter.

What I’d love to see in this instance is the ability to select one or more arrows and have them turn bold.  It would be an even bigger bonus if the tables on each end of the arrows were shaded as well.  That would actually solve a few issues mentioned earlier too, allowing us to really drill into the relationships we need to trace.

Overall Impressions of the Query Dependencies Viewer

Overall it’s a good first version.  I’d really love to see some (or all) of the improvements I mentioned above, but it’s a HUGE amount better than what we had a month ago.

Extract Data from a Mixed Column

More and more I’m seeing examples where people are trying to extract data from a mixed column.  In other words, they have two data types in a single column, but need to find a way to extract one from the other.

Examining the issue

The sample data I’m using can be downloaded from this link.

I’m going to use Power BI Desktop for this, but the results will look identical in Excel using Power Query (except for the colour, of course.)

So let’s get started:

  • Get Data (new Query in Excel) –> From CSV –> MixedDataInColumn1.csv
  • Promote First Row as Headers

The issue can be seen in the red circles below… the report author injected the name of each vendor for the parts above their first part in the list.

image

So the issue here is how to extract the vendor name from the Part No column.  The problem is that there isn’t any obvious way to do this.  We have different textual values in all columns, which could change over time.  There’s really nothing that we can test for reliably in this case.

How to Extract Data from a Mixed Column

There are actually a few different ways to extract data from a mixed column… a few of which we demonstrate in our Power Query workshop.  I’m going to show just one here.

Step 1 – Identify a column with a pattern you can exploit

The key we are really looking for is a column which has values or dates for all rows other than the ones with our vendors.  In this case we actually have two: Part No and Cost.  Both have text on the Vendor lines, but what look like values on the rest.  The challenge we have here is that we can’t always guarantee that Part No won’t have text in it.  It’s completely possible we could see a part number like TH-6715 or something. So this leaves us with the Cost column.

Step 2 – Duplicate the identified column

This next set of steps is actually the trick that lets us work this out.

  • Right click the column in question and choose Duplicate Column
  • Right click the Cost – Copy column –> Change Type –> Whole Number
  • Right click the Cost – Copy column –> Replace Errors –> null

You should now have null values where the textual values were located:

image

Step 3 – Use a little conditional logic

We now have something that we can use in order to extract the Vendor name.  So let’s build a little bit of conditional logic:

  • Add Column –> Conditional Column
  • Configure the Conditional Column as follows:

image

The only trick here is to make sure you change the Output to a column so that you can select from the list of columns.

  • Click OK
  • Right click the Vendor column –> Fill Down

The result is shown below:

image

Step 4 – Clean up

We’re now at the point of clean up which entails:

  • Filter the Cost – Copy column to remove null values
  • Delete the Cost – Copy column
  • Set the data types on all columns

The results now look as follows:

image

At this point we can commit the query and we are good to go.
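For reference, the finished query can be expressed in M roughly as follows (a sketch; the file path and the exact conditional logic are assumptions based on the steps above):

    let
        Source = Csv.Document(File.Contents("C:\MixedDataInColumn1.csv")),   // hypothetical path
        Promoted = Table.PromoteHeaders(Source),
        // Step 2: duplicate Cost, force it to whole number, turn errors into nulls
        Duplicated = Table.DuplicateColumn(Promoted, "Cost", "Cost - Copy"),
        Typed = Table.TransformColumnTypes(Duplicated, {{"Cost - Copy", Int64.Type}}),
        NoErrors = Table.ReplaceErrorValues(Typed, {{"Cost - Copy", null}}),
        // Step 3: vendor rows are the ones where the copy is null
        AddVendor = Table.AddColumn(NoErrors, "Vendor",
            each if [#"Cost - Copy"] = null then [Part No] else null),
        Filled = Table.FillDown(AddVendor, {"Vendor"}),
        // Step 4: clean up
        Filtered = Table.SelectRows(Filled, each [#"Cost - Copy"] <> null),
        Cleaned = Table.RemoveColumns(Filtered, {"Cost - Copy"})
    in
        Cleaned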

Final Thoughts

This is not a new trick by any means; I’ve been using it for a long time.  The biggest key is really about identifying patterns and thinking outside the box.

It’s unfortunately very easy to get focused on the primary column we want to solve, and lose sight of the others.  (Trust me, I’ve been there too.)  Sometimes though, when a column is particularly tough to deal with, we need to just step back and take a look at the bigger picture to see if there is a pattern in another column that we can exploit.  In fact, I’d say that this is probably one of the most important things to master when working with Power Query.

Visual Interactions in Power BI

In this post I’m going to explore the options for Visual Interactions in Power BI … in other words, I’m going to explore the options to control what happens to other visuals when you select one in Power BI.

Visual Interactions in Power BI – The default experience

Let’s take a quick look at a report and see what happens when we select a visual.  Here’s a simple report with 3 charts and a card:

image

And, when we click one of the visuals, it cross filters each of the others.  In the case below, I’ve clicked “Vancouver” in the “Course Attendees by City” visual, and it has cross filtered all the rest:

image

Okay, so no secret there.  The important things to remember here are that:

  1. I didn’t need to do anything to set up the linkage for the visual interactions in Power BI, and
  2. The cross filtering is applied to show the currently selected portion of the whole

But what if we didn’t want this?

The 3 Options for Visual Interactions in Power BI

There are actually three different states for visual interactions in Power BI:

  • Highlight (the default experience of cross-filtering with shading)
  • Filter (cross-filtering to show the contextual values only)
  • None (do not filter)

You can find each by selecting any visual on your report in Power BI Desktop, then go to Visual Tools –> Format –> Edit Interactions.

Let’s take a look at each of them.

Visual Interactions in Power BI – Highlight

As mentioned above, this is the default of the visual interactions in Power BI.  You don’t need to set up anything to get this behaviour.  If you monkey with it, however, you can get back to it by selecting a different visual, then clicking the little pie chart icon that appears above the visual you want to modify.

image

The only other thing I want to call out here is what happens when we select a set of data that filters all records out of another visual.  In the case below, I’ve selected Kelowna in the Course Attendees by City chart.  As you can see only four of our courses have been led in Kelowna:

image

Notice that the last three courses still show, even though we never ran them in this city.  Why?  Because the visuals indicate that we’ve led all our other courses somewhere, but obviously not in Kelowna.

Visual Interactions in Power BI – Filter

The next icon to the left of the pie chart is the Filter icon.  This toggles the visual slightly:

image

The key difference here is that the shaded portion is gone.  This gives the appearance of drilling in to the data a little more, without preserving the concept of how this data relates to the whole.

Now, check out what happens when we select Kelowna:

image

The Courses Run by Name visual no longer holds any data about the whole, allowing it to remove the irrelevant courses.  The end result here is that we’re able to focus on the data that exists in this context only, without contaminating it with irrelevant data.

To be fair, most of the time I actually quite like the version with the shaded values.  But if you have a long list of data then this can certainly help trim it down so you don’t have to scroll the visuals as much.

Visual Interactions in Power BI – None

The last method we can configure for visual interactions in Power BI is to set the filter behaviour to None.  This prevents any filtering from taking place on a visual with this property set:

image

At first this looks quite similar to the Filter setting, but the key here is that the data in the Courses Run by Name visual has not been filtered at all, unlike the other two chart visuals and the card visual.  To display the effects just a bit further, the image below shows the card visual set to None, and the city filtered to Kelowna:

image

Notice that this time the Attendees card shows the true total for All Attendees.  The Courses Run by City visual, however, is filtering, as I left this in “Highlight” mode.

A key observation

I haven’t called this out yet, but should: we can set up different actions for each visual on the report.  That adds a fair amount of flexibility, helping you get your report filtering working just the way you want.

A weird Visual Interaction

Before you look at the next visualization, I want you to be keenly aware of this fact:

All visual interactions were set up as shown in the last image above.  I changed nothing else.

Keeping that in mind, look what happens when I click on the Creating Vibrant Dashboards course:

image

It cross filters all other visuals using the default interactions!

This is kind of a key thing to be aware of.  Just because I customized the visual interactions for the Course Attendees by City visual, it doesn’t force those relationships back the other way.  This means that I can customize how each visual affects the rest of the visuals on the page.  How cool is that?

The one missing setting

There is one setting that is badly missing, and that is the ability to persist selections when you select multiple visual filters.  What I mean by this is that I should be able to click on Vancouver, then click on the Creating Vibrant Dashboards selection to show only records that meet both of those criteria.  Alas, there is no setting to do this today. You can set this up using drill down, but this means you need to think about what the user wants in advance and build it out, which is pretty tough if you have lots of potential filter combinations.

Using Aggregate to Count Visible Rows

In this post I’m going to show one of my favourite financial modeling tricks: how to use Aggregate to Count Visible Rows.

Background

Often, when I’m building models in Excel, I like to group key assumptions at the top of the worksheet in one area. This allows me to change them easily from a centralized location. The problem is that sometimes I need to collapse them to see more of the model.  Of course, you can use this trick to collapse any block of rows (or columns) in your worksheet, so it’s applicable to all kinds of uses.

Let’s take a look at the basic setup:

image

So it’s essentially a block of cells to capture key rates and stats.  No secret there.  And on the left I’ve added some outlining so that I can collapse it easily.  To do that we simply select rows 3:6 and go to Data –> Outline –> Group.

The Trick in Action

Now, check this out… I click the – on the left, and the rows collapse.

image

But check out the message in cell A7.  It wasn’t there before, but now we’ve got a nice message that not only tells you there is an area that is collapsed, it also tells the user how to show the rows again.

Using Aggregate to Count Visible Rows

The trick to this is using the AGGREGATE function (which works in Excel 2013 or higher).  So let’s check out how this works.

As AGGREGATE gives us back a count of rows, we will be able to test if the number of visible rows equals zero, and then react to it using an IF function.  So let’s get started.

AGGREGATE's first parameter: the Aggregation Type

=IF(AGGREGATE(

When we open the parenthesis, we are prompted for the first parameter.  There are a variety of options here, but the one I want is COUNTA() (option 3), which allows us to count the number of completed cells (either text or values):

image

Next up we put in the comma and we’re on to the second parameter.

AGGREGATE's second parameter: What to aggregate

image

Aha!  So using 5 will allow us to apply the COUNTA(), but ignore any hidden rows.  So it’s this parameter here that allows us to use AGGREGATE to count visible rows only.

AGGREGATE's third parameter: The data to aggregate

On to the next comma and now we need to select the range to count.  Now in this part we have two options.  Personally, I prefer to provide the range of the cells that will be hidden.  In truth though, you only really need to refer to a single cell in the range that will be collapsed.  Here’s what I went with:

=IF(AGGREGATE(3,5,A3:A6

Wrapping up the IF test

Perfect, and now we can just close the parenthesis and complete the test:

=IF(AGGREGATE(3,5,A3:A6)=0,

So, if the count of visible cells equals zero then… what do we want to do?

The IF test: If there are no visible rows...

This is the part that I think really makes this trick work.  I really like providing an arrow pointing to the + icon that shows up, and adding additional wording as needed.  This allows my users to know not only that there is hidden data, but how to display it again.  So for me, that message might look like:

  • “<-- Show assumptions”
  • “<--Click to expand Revenue assumptions”

You get the idea.  For this example I’ve gone with the following:

=IF(AGGREGATE(3,5,A3:A6)=0,"<-- Show assumptions",

The IF test: If there are visible rows...

And finally, we round it off with the messaging to provide if the count of visible rows is greater than zero (i.e. if the section is expanded).  Depending on what you want your model to do and how you want to display things for your end users, this could be something like:

  • “End of Assumptions”
  • “Total Revenue”
  • “Please insert new rows above this line”
  • “”

I think the first three are fairly self explanatory, but the last one is essentially two sets of double quotes.  Since everything between the quotes is returned to the cell as text, and there is nothing between the quotes, we get a blank cell.

The complete formula to use Aggregate to Count Visible Rows

Using that method, the finalized formula reads as follows:

=IF(AGGREGATE(3,5,A3:A6)=0,"<-- Show assumptions","")

Final Thoughts

My clients love this little trick. It’s fairly easy to set up, and is super useful for allowing people to hide/show the model sections that they want/need to review, without having them bogged down with all the info.

I also find it very useful when we’ve got multiple scenarios laid out on the worksheet. Say I need to look at scenarios 1 and 3 at the same time: I can collapse scenario 2 and just focus on the stuff I need to look at, avoiding scrolling up and down.

Can users see my raw data in Power BI?

One of the most important questions we have when publishing our data models and Power BI reports is Can users see my raw data?  The answer to this should virtually always be no unless you’ve explicitly stated you want this to be the case.  In this post we’re going to look at this problem and show you why this is a really serious question.

What kicked off this post?

At last week’s Power BI meetup in Vancouver, Peter Myers was demoing the feature to create a public facing web page (as I showed in this post.)  He quickly took his raw data, threw together a few visuals (without writing a single custom measure) and published the page to the web.  All in all it took less than 3 minutes for him to do this.

And naturally, as he demoed to everyone that they could indeed get to the dashboard and play with it, the question of “Can users see my raw data” came up.  I confidently said no, and then it happened… someone pointed out they could drill in to all the records that made up the data point.

It was a mic drop moment as that completely violated what we were expecting.  Part of the whole mantra of Power BI that we’ve been celebrating is that users only see what you want, and can’t get back to the raw data.

The “See Records” Feature in Power BI

Sure enough, this person was able to right click a chart and say “See Records”.  So what does that do?  It pulls up the full details of each transaction line that led to the data point:

image

Even though it doesn’t pull the records for the entire data set, it was still pretty horrifying.  With enough time, a reader could drill in and discover things about the underlying data that had never been intended to be revealed.

EDIT:  It's 2 days after the initial post date, and something has already changed here.  While See Records still shows the raw record set, it now only shows the columns that contribute to the chart, not the extra columns.  There could still be an issue, but it's not as bad as it used to be.

EDIT2: After a bit more diving in here, I discovered that it was me who changed something, by hiding some of the model columns in Power BI Desktop.  Hiding columns in the underlying table prevents them from appearing, but you still see all raw data points.

If you want to try this yourself, you can do so on the public version of the report I used for this post, which you can find right here.

The “See Data” Feature in Power BI

Seeing this, I immediately jumped over to the Hotel Stays report that I had published and checked my visuals.  Oddly, none of my visuals had the See Records feature.  They did however, have a feature called “See Data” which concerned me a bit at first.  But once you dig into it you’ll find it yields results like the following:

image

So basically it’s just a table view of the data that is already shown in my chart anyway.  It doesn’t expose anything new.  To be completely honest, it doesn’t add a ton of value as you don’t seem to be able to copy the table from the page or anything.  So the good news here is that no confidential data is being exposed.

Can “See Data” accidentally expose confidential data?

There is one case that I’ve found where the See Data function could actually expose confidential data, and that is when using map visuals.  On my Nights Away From Home report I used a map visual to plot the data points.  If you right click any of those data points and choose See Data, it gives you a list of all the data points.

SNAGHTML2ce4f25

Because I used a full street address to plot these on the map, I get a full list of detailed addresses. So if privacy is a big concern here, I’d suggest you plot your data points using a postal code or city.  Something with less granularity. In my case I’m not totally concerned, as these are all public hotels… except for my friends’ locations, where I had already adjusted the addresses to preserve their privacy at their request.

The Big Question: Can users see my raw data in Power BI?

So the simple answer would appear to be “Yes”, they can.  But it’s actually a bit more complicated than this.  The See Data feature is not – in my opinion – a data security risk.  The data is already shown in a visual; it’s just a different way of stating it.  So the real issue is around that See Records option.

There are actually two big questions here…

  1. Can I disable the See Records option?
  2. Why did my dashboard NOT have the See Records option where Peter’s did?

As you can see, the second question actually answers the first.  It IS possible to disable the See Records feature, but how?  As Peter and I were scratching our heads on this, I decided that I needed to find out.

Controlling the ability to see my raw data in Power BI

As it turns out, the secret comes down to two things:

  • The type of visual you choose
  • The type of measure you create

I should also point out that the See Records and See Data features can be triggered from within Power BI Desktop.  This is fantastic as it means that you don’t need to publish to web to test your dashboard.  (Yay!)

The Data Set

To demo these features and issues, I’m going to use a subset of the data I had for my Nights Away From Home report:

image

To quickly explain this, I have three data columns in the table:  City, Country and StayCounter.  The first two are obviously text columns, and the last is a numeric column where each row contains the value of 1.

The last two items are DAX measures defined as follows:

  • CityCount = COUNTA(Stays[City])
  • ExplicitStayCounter = SUM(Stays[StayCounter])

Can users see my raw data in Power BI if I use a Card Visual?

To answer this, I created two card visuals.  The first was created using an Implicit measure.  I.e. I dragged the text based City field onto a card and let Power BI create an implicit “Count of City” measure:

image

The good news here is that there is no right click context menu for a Card.  (Give it a try!)

So using a Card visual means that your audience can never drill in to the underlying data.  I’m happy with that part.

Can users see my raw data in Power BI if I use a Bar Chart Visual?

Once again, I created two visuals here:

  Field | Red Visual (on left) | Green Visual (on right)
  Axis  | Country              | Country
  Value | City                 | CityCount

In the case of the green visual, I have explicitly created my own measure by writing DAX.  In the case of the red visual, I have an implicit measure where Power BI has done the measure creation for me.  (Interestingly it reports the measure as Count of City which is not technically correct… it is actually a COUNTA of City, as Count can only count numeric values, where City is a text based field.)

Semantics aside, have a look at what you see for options:

SNAGHTML2ebf2ab

This was a bit shocking to me… so Power BI defaults to allowing you to drill into the underlying records… at least for implicitly generated measures that count text based entries.

Can users see my raw data in Power BI if I use an implicit SUM measure?

So the next question I had to explore was whether this issue (where users can see my raw data) is limited to implicit measures based on text fields.  If the field is numeric and I use an implicitly created measure to count or sum it, will I have the same issue where the underlying records can be exposed?  Let’s take a look:

To avoid confusion here, I created two new visuals, but column charts this time.  Here’s how they were configured:

  Field | Red Visual (on left) | Green Visual (on right)
  Axis  | Country              | Country
  Value | StayCounter          | ExplicitStayCounter

The key difference here is that I just dragged the raw StayCounter column on to the red visual and let Power BI implicitly create the SUM measure for me.  In the case of the green visual, I wrote the DAX to say exactly what I wanted.  The two measures are mathematically equivalent which was important for an apples to apples comparison.  And here are the results:

SNAGHTML2f74c7e

(Yes, they look almost identical to the previous version, but if you check the titles – or play with the public version of the report - you’ll see that they indeed use different configurations.)

Observations and Thoughts

As you can see, the results are consistent with implicit versus explicit measures.  So if you want control of whether your audience can drill in to your underlying data records, you need to know this:

  Measure               | Implicit – Power BI | Explicit – DAX
  Exposes “See Records” | Yes                 | No
  Exposes “See Data”    | Yes                 | Yes

This is a design issue

In my opinion, this is not good.  This basically means that the “quick and easy” way to create a report sets you up to potentially expose data that should not be exposed.  And it’s not only applicable to reports that are shared publicly; it’s applicable to all reports distributed via Power BI Desktop, organizational level sharing and external sharing.  My feeling is that the See Records feature should be disabled by default, with a flag that you need to set in order to enable it.

Having said this, the Power BI team has a problem.  I’d say that the vast majority of their audience do NOT want the See Records feature enabled by default, but some people rely on it.  This is one of the pain points of having something out there in the wilds, then realizing that something isn’t working the way you want.  To fix this, they’d need to release a method to configure the visibility of the See Records feature (probably in Power BI Desktop), then switch the default in Power BI.  Someone will no doubt complain about losing the feature, but as long as there is a way to get it back, I’d say the risk to Microsoft’s entire customer base of accidentally exposing company confidential information outweighs the risk of upsetting a customer in this manner.

My personal feeling is that I’d like to see both the See Records and See Data features configurable by the report author.  I’d like to see both off by default, with the ability to turn them on as needed.

In the mean time…

While the current implementation is ultimately not what I think should happen, I’m actually happy it is consistent.  Why?

As it turns out, the reason I never saw this is that I’m in the practice of taking explicit control of my data and measures.  Since my earliest days of working with Power Pivot in Excel, I have never relied on the implicit measures, electing to always write my own using DAX.  This is true even for simple measures like SUM(Stays[StayCounter]).

Part of my reason for this is just habit, part is because I want to learn, part is because I don’t trust some of the implicit stuff that happens and don’t trust defaults.  In VBA we have a specific command to force variable declarations which is called “Option Explicit”.  I’ve always adopted that as not just a code word, but rather as a development standard.  In this case I’m fortunate that it saved my data.

If you don’t know DAX, you should learn it.  Not for this reason alone, but it is certainly another reason.

MailChimp and Power BI

This week I was playing around with MailChimp and Power BI.  If you’re not familiar with MailChimp, it’s the service that we use to send out our Monthly-ish newsletter.  I thought I’d show how incredibly easy it is to get a dashboard from this service.

Software as a Service

MailChimp is what we refer to as a “Software as a Service” (SaaS) setup.  Basically what this means is that it is a subscription model where I have no need to set up my own server or host anything myself.  I simply subscribed, set up the forms on my website for you to fill out, and then use their services to build and send out the newsletter.  It’s pretty slick, and they don’t even charge me until I hit 2,000 subscribers or want to set up auto replies and such.  I’m a huge fan of this model, as it lets you do a “try before you buy” test, only upgrading when you want to.

What do you need?

There are only two requirements here:

  1. A MailChimp account of your own
  2. A PowerBI.com account which you can create for free (seriously, no Pro Subscription required)

Getting Started with MailChimp and Power BI

Step 1: Connect to MailChimp

So the process begins with signing in to your Power BI.com account.  Once there, you look for this button in the bottom left corner of your screen:

image

Click that, and you’ll be taken to another screen that looks like this:

image

Click Get on the Services tab.

Now, to be fair this is still pretty new, so there are a lot of sources that you would “hope” would be here (like Survey Monkey) which currently aren’t.  Despite this, the easiest way to find what you’re looking for is to start typing the name of the SaaS source in the search pane in the top right.  It auto-filters the app list live.  Shown below are the results of “Ma” today:

image

So click the Get button at the bottom of the MailChimp entry.

You’ll now be asked to sign in.  If you have multiple domains under your MailChimp profile, just select one for now.  What you’ll see is that it takes you back to the portal and lets you know that it’s loading your data.

Step 2: Wait

Seriously?  Yes.  When you’ve chosen the account to log in, you’ll see that you get a new entry in the Dataset, Report and Dashboard sections of the Power BI portal:

image

See how the source is greyed out and has a star?  The star means it’s a new item, but the greyed out status means it’s loading.  It takes a while, and you won’t be able to do anything with it until it’s finished the initial load.  So go get a coffee and wait for the chimp to finish its trek to your portal.

Step 3: Explore the Dashboard

Wait, what?  I don’t need to build the dashboard myself?

No!  It’s already built for you.  MailChimp and Power BI together in a few easy clicks.  Check it out!

image

All fully clickable, and Q&A works like a charm:

image

What about Scheduled Refresh?

Right, I hear you.  That data set took a long time to load, so you don’t really want to have to come in and refresh it every day.  (I mean, unless you need an excuse to go for coffee…)

Well check this out.  This is shot from an account which only has a Power BI Free subscription.  I access this by clicking the … next to the data set then choosing Schedule Refresh (normally a Power BI Pro feature only):

image

Oh… and did I mention that this was created on Oct 27?  This report has already refreshed, and is scheduled to do it again with no configuration or pro license needed.  Ah the beauty of connecting to a pre-built SAS dashboard!

What about my other MailChimp Accounts?

If you have more than one domain in your MailChimp account, you might be surprised to see that when you try to add another MailChimp dashboard it just creates the same one again without prompting you to log in.  This is because it uses the cached credentials.  So how do you make this work?

Basically, you create the same report again then, once it’s finished loading, you go and edit the data source credentials on the Dataset for the second report.

To do this:

  • Go to Datasets –> Ellipses –> Schedule Refresh
  • Expand Data source credentials
  • Click Edit credentials
  • When prompted for oAuth2 credentials, click Sign In
  • Insert the username & password
  • Then choose the new data set when prompted

image

I’d also recommend going to each of the datasets, reports and dashboards and renaming them using the ellipses as well, just to keep it clear which is which.

Final Thoughts

The one thing I can’t do here is download the pbix file that was used to create this dashboard.  I’m sure it’s heavily parameterized (how else would they deploy it for whoever adds it), but I’d sure love to do that.  Why?  So I could connect directly to MailChimp easily from Power BI Desktop or Excel… maybe so that I could merge other data in with it.  (Once it’s published, I cannot get in and examine or change the M code in any way.)

Having said that, this is still pretty darned cool stuff!  I really hope other vendors jump on this train as well.  Building dashboards can be hard, and this just makes it super easy.  I’d love to see one for my Facebook account, LinkedIn and other SaaS sources.

For reference, Google DOES provide a dashboard too; it’s just as easy to set up, and it auto refreshes like this as well.

Power BI Slicers

For those coming from the Excel 2013 world, you’ll surely want to create filters using the Power BI Slicers.  After all, you know that Slicers and Timelines are two of the sexiest filters we have for controlling filter context in Power Pivot models.  In this post we’re going to explore the slicer visual, and how to get closer to what you’re used to in Excel.

The Goal

I’m going to fall back to my last project here, and have created a single visual on a blank report page.  You probably remember this one, it’s the map of where I’ve stayed so far this year:

image

What I’d like to do now is add my slicers and timelines.  I’d like slicers for Country and Province.  Should be easy, no?  Err… no.

The default Power BI Slicers

Now, Power BI Desktop has a Slicer visualization, as you can see here:

image

So what’s wrong with it?  I’ll build two to show you why I’m less than satisfied…  Here’s what I did:

  • Created a Slicer visual
  • Added the Country field
  • Created another Slicer visual
  • Added the Province field

Do these look like Excel slicers to you?  They look a lot more like check boxes to me, not slicers…

image

Now don’t get me wrong, Power BI Slicers certainly work, as you can see here where I’ve drilled in to USA:

image

So let’s look at the difference between the Power BI Slicers and the Excel slicers that my expectations are based upon:

  • The Province field hides all irrelevant items by default, unlike Excel.  I could not find a configuration option to change this.
  • The checkbox thing drives me crazy.  I hate checkboxes in my Windows files list, I don’t like them here, and again it is inconsistent with Excel.  I could not find a way to turn those off.
  • Originally I wanted to show the provinces in a two column slicer, like I can in Excel.  I could not figure out how to make this happen either.
  • Finally, I wanted to show the bubbles like an Excel slicer.  The closest I could get was the image below (WARNING! SUPER UGLY GRAPHIC AHEAD!)  Should you feel the need to create this abomination you need to select the slicer, go to the Formatting roller –> Items –> Outline –> Frame.

image

Gross.  That is just gross.  Honestly, I really don’t understand why this slicer is so different from Excel’s.  Excel’s slicer is pretty, and people are used to it.

Not happy with these, I deleted both slicers.

Is all hope lost for attractive Power BI Slicers?

Thankfully, the answer is no.  The Power BI team has given developers the ability to create their own visuals and distribute them through the Power BI custom visuals gallery.  So let’s go and pull in a couple of those to fill this gap.

Locating the Custom Visuals Gallery

To be fair, the steps for this could be MUCH easier.  To get here the first time you can either just click this link or follow these steps:

  • Click the ellipses in the Visualizations gallery to import a custom visualization

image

  • Choose to Import a Custom Visual
  • Click the Learn More link

image

  • To be fair, you should probably read the page you’re taken to, as it talks about all the risks of using a custom visual.  (Remember, not all custom visuals are provided by Microsoft; many are provided by 3rd parties.)
  • I scrolled straight to the bottom and clicked the link in the See Also section to go to the Power BI custom visuals gallery

You’ll be taken to the gallery, which has a lot of pretty visuals that can be imported into your project.

To make it easier to find custom visuals, I’d recommend you do a couple of things here:

  1. Bookmark this page (making it a bit easier to get back to it)
  2. Choose to sort the gallery by Name rather than by Most Recent (which is the default)

When you click on a visual it will offer to download a pbiviz file that you can store in a folder.  You’ll want to remember the location, as you’ll need to import the visuals into every new PBI file you create.

I downloaded a specific visual here: the Chiclet Slicer which, ironically, is published by Microsoft.

Importing the Chiclet Slicer

When I returned to Power BI Desktop, it was still sitting at the Import Custom Visual dialog, which was convenient.  So I was able to just click the big yellow Import button and select the ChicletSlicer file.  Doing so adds a new option to the Visualizations gallery:

image

I created two new Chiclet slicers, one for Country and one for Province, and was pleased to end up with the following:

image

Now that’s more like it!  Certainly needs some tweaking, but better than the past iteration.  So let’s get to that tweaking…  I changed the formatting options for each of the slicers as follows:

  • The Country slicer
    • General –> Columns –> 1
    • Header –> Off
    • Chiclets
      • Unselected Color = very light grey
    • Title –> On
      • Text = Country, Font Color = Black, Alignment = Center, Text Size = 10
  • The Province slicer
    • General –> Columns –> 2
    • Header –> Off
    • Chiclets
      • Unselected Color = very light grey
    • Title –> On
      • Text = Province, Font Color = Black, Alignment = Center, Text Size = 10

And, as you can see, the results are pretty good:

image

A couple of things that I couldn’t figure out here though:

  • I wanted to align the text in my “chiclets” to the left, like in Excel.  Can’t seem to find an option for that.
  • There is a tantalizing option in the “General” section to show disabled items “Inplace”, and an option in the chiclets to set the colour for those items.  I would have expected it to be equivalent to Excel’s “Show Disabled”, but it doesn’t seem to do that.  I have not figured out how to replicate that effect.

Final Thoughts

To be fair, there are a ton of configuration options for the Chiclet Slicer, far more than I’m going to cover here.  Why this slicer isn’t part of Power BI’s default install is beyond me… especially since it’s published by Microsoft.

Values Become Text After UnPivoting Other Columns

Have you ever set up a nice query to UnPivot other columns, only to find that the query data types change when you add new columns?  This post will cover why values become text after unpivoting other columns.

Background

We’ve got a nice little table called “Data” shown here.  Nothing special, it just summarizes sales by region by month, and our goal is to unpivot this so that we can use it in future Pivot Tables.  (You can download the source file here.)

SNAGHTML552a2a7

Now, you will notice that April’s sales are outside the table.  This is by design; we’ll pull it into the table later when we want to break things.

UnPivoting Other Columns – The Hopeful Start

If you’ve been following my blog for any period of time, you’ve seen this, but let’s quickly go over how to unpivot this:

  • Select a cell in the table
  • Go to Power Query (or Data in Excel 2016) –> From Table

We’re now looking at the Power Query preview of the table:

image

Great, now to unpivot…

  • Hold down the Shift key and select the Country and Prov/State columns
  • Right click the header of either of the selected columns and choose Unpivot Other Columns
  • Right click the headers of the two new columns and rename them as follows:
    • Attribute –> Month
    • Value –> Sales
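
For reference, here’s roughly the M code that Power Query generates behind the scenes for these steps.  Treat it as a sketch: it assumes the source table is named Data (as shown above) and that the default step names haven’t been changed:

```
let
    // Stream in the Excel table named "Data"
    Source = Excel.CurrentWorkbook(){[Name="Data"]}[Content],
    // Power Query auto-inserts this step, hard-coding today's column names
    #"Changed Type" = Table.TransformColumnTypes(Source,
        {{"Country", type text}, {"Prov/State", type text},
         {"Jan", Int64.Type}, {"Feb", Int64.Type}, {"Mar", Int64.Type}}),
    // Everything except the two selected columns gets unpivoted
    #"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Changed Type",
        {"Country", "Prov/State"}, "Attribute", "Value"),
    #"Renamed Columns" = Table.RenameColumns(#"Unpivoted Other Columns",
        {{"Attribute", "Month"}, {"Value", "Sales"}})
in
    #"Renamed Columns"
```

Keep an eye on that auto-inserted Changed Type step; it becomes important later in this post.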

Re-Pivoting from the Data Model

With the table complete, I’m going to load this to the data model and create a Pivot Table:

  • Go to Home –> Close & Load –> Close & Load To…
  • Choose to Load to the Data Model

The steps to create the Pivot depend on your version of Excel:

  • Excel 2013: Go into Power Pivot –> Home –> PivotTable and choose a location to create it
  • Excel 2016: Click any blank cell and go to Insert –> PivotTable.  As you have no data source selected, it will default to using the data model as your source:

image

With the PivotTable created, I’ve configured it as follows:

  • Rows:  Country, Prov/State
  • Columns:  Month
  • Values:  Sales

And that gives me a nice Pivot like this:

image

Let’s Break This…

Okay, so all is good so far, what’s the issue?  Now we’re going to break things.  To do that, we’re going to go back to our original data table and expand the range:

image

In the picture above, I’ve left-clicked and dragged the tiny little widget in the bottom right corner of the table to the right.  The table frame expands and, when I let go, the Apr column turns blue, indicating that it is now within the boundaries of the table.

With that done, I’m going to right click and refresh my Pivot Table, leaving me with this:

image

Huh?  Why was the Sales measure removed?  And if I drag it back to the table, why do I get a COUNT, not a SUM, of the values?  And even worse, when I try to flip it back to SUM, why am I told that I can’t?  What the heck is going on here?

image

Importance of Power Query Step Order

To cut to the chase, the issue here is that when we first created the table in the data model, the Sales column was passed as values.  But when we updated the data to include the new column, the Sales column was passed entirely as text, not values.  Naturally, Power Pivot freaks out when you ask for the SUM of a textual column.

The big question, though, is why.  So let’s look back at our query.

Our original data set

If we edit our query, we see that the steps look like this:

image

To review this quickly, here’s what happened originally:

  • Source is the connection that streams in the source data with the following columns:

image

  • Changed Type set the data type for all the columns.  In this case the Country and Prov/State fields were set to text, and the Jan, Feb & Mar columns were set to whole number.  We can see this by looking at the icons in the header:

image

Note that if you don’t have these icons, you should download a newer version of Power Query, as this feature is available to you and is SUPER handy.

  • We then selected the Country and Prov/State columns and chose to Unpivot Other Columns.  Doing so returned a table with the following headers:

image

Notice that the first three columns are all textual, but the Value column is showing a numeric format?  Interestingly, it now shows as a decimal number rather than a whole number, but it only carries a numeric type because every one of the unpivoted columns already had an explicitly defined numeric data type.

The final steps were to rename our columns and load to the data model.  Since the data types had already been defined, the columns were sent to the data model with Sales carrying a numeric type.

Why Values Become Text After UnPivoting Other Columns

Okay, so now that we know what happened, let’s look at what we get when we step through the updated data set.

  • First we pulled in all the columns.  We can plainly see that we have the new Apr column:

image

  • The Changed Type step is then applied:

image

Hmm… do you see that last data type?  Something is off here…

So when we originally created this query, Power Query helpfully pulled in the data and applied data types to all the existing columns.  The problem here is two-fold: first, the Apr column didn’t exist at the time; second, Power Query’s M language hard-codes the column names when it sets the data types.  The end effect is that upon refresh, only the original columns have data types defined, leaving any new columns with a data type of “any” (or undefined, if you prefer).
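
If you select the Changed Type step and check the formula bar, you’ll see something along these lines (a sketch based on our column names):

```
= Table.TransformColumnTypes(Source,
    {{"Country", type text}, {"Prov/State", type text},
     {"Jan", Int64.Type}, {"Feb", Int64.Type}, {"Mar", Int64.Type}})
```

Notice that “Apr” appears nowhere in that list, so on refresh it slips through with no defined data type.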

  • We then unpivoted the data, but now we see a difference in the output:

image

Check out that Value column.  Previously this was a decimal number; now it’s an “any” data type.  Why?  Because there were multiple data types across the columns to be unpivoted, Power Query doesn’t know which one is correct.  If one column were legitimately text and Power Query forced a numeric type on it, you’d get errors, so it errs on the side of caution here.  The problem is that this has a serious effect on the end load to Power Pivot…
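
If you want to prove this to yourself, here’s a small throw-away query (with entirely hypothetical data) that reproduces the effect:

```
let
    // Jan is explicitly typed as number; Apr is left as "any"
    Source = #table(
        type table [Country = text, Jan = number, Apr = any],
        {{"Canada", 100, 200}}),
    // Unpivoting columns with mismatched types yields an untyped Value column
    Unpivoted = Table.UnpivotOtherColumns(Source, {"Country"}, "Attribute", "Value")
in
    Unpivoted
```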

  • Finally, we renamed the last two columns… which works nicely, but it doesn’t change the data type:

image

Okay, so who cares, right?  There is still a number in the “any” format, so what gives?

What you get here depends on where you load your data.  If you load it to the Excel worksheet, these will all be interpreted as values.  But Power Pivot is a totally different case.  Power Pivot defaults any column defined as “any” to a Text data type, resulting in the problems we’ve already seen.

Fixing the Issue

For as long as we’ve been teaching our Power Query Workshop, we’ve advocated defining data types as the last step in your query, and this is exactly the reason why.  In fact, you don’t even need to define your data types at the midpoint of this query; that’s just Power Query trying to be helpful.  To fix this query, here’s what I would recommend doing (a sketch of the resulting M follows the list):

  • Delete the existing Changed Type step
  • Select the final step in the query (Renamed Columns)
  • Set the data type for each column to Text except the Sales column, which should be Decimal Number (or currency if you prefer)
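
The repaired query would look something like this sketch, with the type declarations moved to the very end:

```
let
    Source = Excel.CurrentWorkbook(){[Name="Data"]}[Content],
    // No Changed Type step up front: new month columns flow straight through
    #"Unpivoted Other Columns" = Table.UnpivotOtherColumns(Source,
        {"Country", "Prov/State"}, "Attribute", "Value"),
    #"Renamed Columns" = Table.RenameColumns(#"Unpivoted Other Columns",
        {{"Attribute", "Month"}, {"Value", "Sales"}}),
    // Typing as the LAST step covers every unpivoted column, no matter
    // how many new months show up in the source table later
    #"Changed Type" = Table.TransformColumnTypes(#"Renamed Columns",
        {{"Country", type text}, {"Prov/State", type text},
         {"Month", type text}, {"Sales", type number}})
in
    #"Changed Type"
```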

image

When this is re-loaded to the Data Model, you’ll again be able to get the values showing on the Pivot Table as Sum of Sales.

Avoiding the Issue

Now, if you don’t want Power Query automatically choosing data types for you, there is a setting to toggle this.  The only problem is that it is controlled at a Workbook level, not at a global Excel level.  So if you don’t mind setting it for every new workbook, you can do so under the Power Query settings:

image

Is Changed Type Designed in the Correct Way?

It’s a tough call to figure out the best way to handle this.  Should the data types be automatically hard-coded each time you add a new column?  If the UnPivot command had injected a Changed Type step automatically, we wouldn’t have seen this issue happen.  On the other hand, if a textual value did creep in there, we’d get an error, which would show up as a blank value when loaded to Power Pivot.  Maybe that’s fine in this case, but I can certainly see situations where that might not be desirable.
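
For what it’s worth, if you’re comfortable editing the M yourself, one workaround (not something the user interface will generate for you) is to declare the data types dynamically, so that new columns are picked up automatically on refresh:

```
let
    Source = Excel.CurrentWorkbook(){[Name="Data"]}[Content],
    // Grab every column name after the first two, whatever they're called
    MonthColumns = List.Skip(Table.ColumnNames(Source), 2),
    // Type the known columns explicitly, and every remaining column as number
    Typed = Table.TransformColumnTypes(Source,
        {{"Country", type text}, {"Prov/State", type text}}
            & List.Transform(MonthColumns, each {_, type number}))
in
    Typed
```

Of course, this assumes every column after the first two really is numeric, which is exactly the kind of judgment call the UI can’t make for you.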

Personally, I’d prefer to get a prompt when leaving a query if my final step doesn’t define data types.  Something along the lines of “We noticed your final step doesn’t declare data types.  Would you like me to do this for you now? (Recommended)” or something similar.  I see this as an alternative to the up-front data type declaration and, to be honest, I think it would be a more logical place for it.