Reduce Development Lag

How we reduce development lag when building queries in Power Query is a question that came up in my blog post at PowerPivotPro last week, even though that wasn't the main focus of the post.  I thought it might be a good idea to share a development methodology that can help with this if you're struggling in this area.

Also, if you haven't read "Why Excel's Power Query Refresh Speeds Suck" on Rob's site, please do, as it really highlights a key issue that can affect Power Query refresh times, and we really need you to vote it up.

Why do we need to reduce development lag?

This isn't a problem that affects small data sets, but the bigger the data set is, the more you feel it.  The issue comes up because - during query development - Power Query doesn't load all of the data into the local system.  (If it did, you'd REALLY scream about crappy performance!)  What it does instead is pull in a preview of the data.

The number of rows varies based on the number of columns and data types, but for now, let's pretend that:

  • the Power Query preview pulls exactly 1000 rows
  • your data set is 70 million rows

I do want to be clear here though… this issue isn't restricted to data sets of this size.  If you're consolidating 30 CSV files with 50k rows each (a total of 1.5 million rows), you're certainly going to feel this pain.  Why?

To illustrate this, let's go through a workflow:

  • You connect to the data set
  • Power Query pulls the first 1000 rows and loads them to your preview
  • You filter out all records for dept 105, removing half the rows in your data set

At this point, Power Query says "Hey, you eliminated half the rows.  I'll go pull in some more for you, so that you can keep operating on a full preview window."  It goes back to the data source, and essentially streams in more data, tossing the records you don't need, in order to fill up the preview of 1000 rows.

Now you drop 20 columns, allowing the data set to expand to 1200 rows in the preview.  Naturally, Power Query needs to go back to the data set, pull values again, and run them through all the steps to date in order to land you 1200 rows.

Now to be fair, I'm not certain if something like a "Change Type" or "Replace Values" step causes a refresh, but it very well may.  On large data sets it certainly feels like it does.

If you're not nodding your head at this point, I can pretty much assure you that the data sets you've been working with are tiny.  The bigger those sets get, the more this hurts.

And before anyone says "why not just add a Table.Buffer() command in there?"… in my experience this doesn't help.  From what I've seen, if you are in dev mode, this causes the buffering to get re-executed, which just adds time to the process.

A Strategy to Reduce Development Lag

So how do we reduce development lag and avoid the time wasted when developing our query chain?  Here's a strategy that may work for you.  It's a 5 step process and requires Excel (although you could certainly develop in Excel, then copy your queries to Power BI Desktop afterwards).  The reason?  Power BI Desktop has no grid to land your data to…

Step 1: Connect to the Data Source

The first part is easy.  You simply connect to your data source, whether that be a file, database, folder, or whatever.  The key thing here is that your output needs to be "flat".  (No nested lists, tables, binaries or values in any columns.)  Each column that contains nested lists/tables or values needs to be expanded before you can move to step 2.

In the example below, you can see that I pulled data from a database which has related fields:

image

As the related "Value" is a complex data type, it needs to either be removed or expanded.  In this case, I wanted some data from within the tblItems, so I performed the following steps:

  • Expanded the tblItems column to extract the tblCategories column
  • Expanded the tblCategories column to extract the POSCategoryDescription field
  • Removed the tblChitHeaders column

And the result looks like this:

image

And now, let's pretend that every step I add is causing the preview to refresh and taking ages to do so.  (Ages is, of course, a relative term, but let's just assume that it has exceeded my tolerance for wait times.)  So what can I do?

What I do now is:

  • Stop
  • Give the query a name like "Raw-<tablename>" (where <tablename> is your table.  Mine is Raw-ChitDetails)
  • Choose Close & Load To…
  • Load to "Connection Only"

Once done, I can build myself a temporary stage to reduce development lag during construction.

Step 2: Reference the Data Source query and land to an Excel Table

To create our temporary stage, we go to the Queries pane, right click the Raw-ChitDetails query, and choose Reference.  (And if you've read the post on Why Excel's Power Query Refresh Speeds Suck, you're cringing.  Go vote if you haven't already).

When launched into a new query, you're going to do two things and two things only:

  • Rename the Query to something like "Temp-<tablename>" (again, where <tablename> is your table.  I'm using Temp-ChitDetails.)
  • Go to Home --> Keep Rows --> Keep Top Rows --> 10,000

And now, go to Home --> Close & Load and choose to load to an Excel table.  This is your temporary data stage.

The key here… never do more than just the Keep Top Rows part.  You don't want to do any additional filtering or manipulations in this query.  You'll see why a bit later.
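For reference, the finished Temp query boils down to something this simple (the query and step names here are from my example, so yours will differ):

let
    // Reference the Raw query, freeze the row count, and nothing else
    Source = #"Raw-ChitDetails",
    #"Kept First Rows" = Table.FirstN(Source, 10000)
in
    #"Kept First Rows"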

Step 3: Load the Excel Table to Power Query and develop your solution

Next, you'll need to click a cell in the new table, and choose to load from Table or Range.  This will create a new query that you can start to manipulate and clean up as you need.

The beauty here is that the table you've loaded from Excel needs to be manually refreshed, so we have essentially frozen our data preview.  The preview will still get refreshed (we can't avoid that), but since the data has been loaded to a worksheet in Excel, Power Query will treat that table as the data source.

The reason we needed 10,000 rows is to have enough data to identify the data patterns we are working with.  If that's not enough because you filter out larger quantities, then up the 10,000 to something larger or filter in the Raw query (and suffer the lag time there) before loading to the table.

So this is the query where you do all of your development work.  With a smaller "frozen" data set, this should allow you to work while suffering much less refresh lag time.

As a sample here, I ran a Group By on the smaller data set.  Is that a good query to illustrate this?  Probably not; I just wanted to show that I did something here.

image

So this query only has two steps, but hopefully you get the idea that this could be a few dozen (or more) steps long, each taking MUCH less time to update than working against the whole data set.

Quick note… I deleted the Changed Type step that is automatically added when pulling data from an Excel table.  I did this as the Changed Type step often breaks Query Folding when you pull from a database.

With my development work done, I've named my query with the name I want to use for the final load destination.  In this case it's "ChitDetails", and I've set it to Close & Load To… --> Connection Only and Data Model to load the table for Power Pivot.

Of course, this query is only working against my frozen 10,000 row data set, which isn't going to cut it long term, so I need to fix that.

Step 4: Re-point the working query against the Data Source query

With the query loaded, I'll go back and edit it.  The key is that I need to select the Source Step.  In the formula bar we'll see the code used to call from the Temp data table:

image
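In my workbook, the Source step reads something like the line below (the name in the brackets depends on what your landed Excel table was called; mine is an assumption for this example):

= Excel.CurrentWorkbook(){[Name="Temp_ChitDetails"]}[Content]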

This formula needs to be updated to pull from the Raw-ChitDetails table, so I'm going to do exactly that:

image

Notice that the formula is actually:

=#"Raw-ChitDetails"

This is to deal with the fact that I used a hyphen in the name, so it needs to be escaped with the hash-quotes in order to read correctly. If you forget to do that and just type =Raw-ChitDetails, you'll be told you've got a cyclical expression.

At this point, you've done your development work more quickly, and have now pointed back to the original data source.  (This is exactly why we only restrict the number of rows in Step 2.  If we did more, we'd need to retrofit this query.)

When you click Close & Load now, it will load to the data source you chose earlier.  (Power Pivot for me.)

Step 5: Delete the Temp Queries and Excel Table

Notice that we actually have two Temp queries now.  The first was used to reference the Raw query and land the data in the Excel table.  The second was pulling from the Excel table into Power Query.  Both can now be deleted, in addition to the worksheet that held the temporary table, as their jobs are done.

What if I need to do this again?

Easy enough.  You can simply:

  1. Create a new query by referencing the Raw table.  (Queries Pane --> Right Click --> Reference)
  2. Name the query Temp-<tablename> and Close & Load to an Excel table
  3. Select a cell in the table and create a new query From Table or Range
  4. Rename this new query as "TempStage" (or something) and choose to Close & Load To… --> Connection Only
  5. Edit your "final" query's Source step to =TempStage.  (You only need the #"…" wrapper if the name includes a space, hyphen or other character that Power Query uses for something else.)

And at that point you're back in development mode.

What if my Query is already several steps long?  Do I have to start over?

Absolutely not!  If you've been working away building a very long query, you can split it apart into the staging setup I used above.  Open your query and walk through the steps to find where you wish you'd broken it apart.  Right click that step and choose "Extract Previous".

image

You'll be prompted for a new query name, so enter "Raw-<tablename>".  What this will do is extract all the previous steps into a new query for you, and set your current query's Source step to point to that query.  Then you can follow the steps outlined in the previous section to get back into development mode.

What about Power BI Desktop?

The problem with Power BI Desktop is that it has no grid to land the data to, which means we can't get that "frozen preview" setup.  But we can adapt the solution to use Excel, provided you've got Power Query in Excel.  You do this by:

  • Creating the same "Raw Data" stage in Power BI Desktop
  • Referencing the Raw Data query to create your "Final" query

Then create your data stage:

  • In Excel, replicate the Raw Data staging query EXACTLY as it is in Power BI desktop
  • Filter the query to 10,000 rows
  • Close & Load to an Excel table
  • Save & Close the file

Back in Power BI Desktop

  • Create a new query to pull from the data in Excel
  • Repoint your "Final" query to reference the Excel query
  • Do your development
  • Repoint to the original query when done

It's a bit clunky, but if you're really suffering performance, you may find it worth it.

Should Previews Always Refresh?

One thing I've thought about setting up a UserVoice item for is a setting in the Excel/Power BI UI that allows me to turn off preview refreshes until I click the button inside the PQ editor.  My initial thought is that this would help, as it would essentially cache the preview, and we could operate on it without tripping a recalculation to slow things down.

Naturally, when we first create the query, it would need to create a preview.  But after that, I would hope that it just worked with the data loaded in the preview until I hit the Refresh Preview button.  That would be the key to re-execute all query steps to date - in full - to generate a revised preview.

Having said that, I have a feeling that this would be pretty complicated to implement.  Some steps (like combine binaries and grouping) would require the preview to be updated or you'd get nothing.  But others, like filtering rows out, really shouldn't.  It could require a significant change to the underlying architecture as well.

Regardless, something to "freeze" the preview would certainly be welcome as the development experience with larger data sets can certainly be painful.

Adding null to values returns null

Today a user brought up something in the forums that I've seen before: adding null to values returns null.  Simply put, they wanted to add the values in two columns together, but didn't get the results they expected.

The issue: Adding null to values returns null

Have a look at the following data:

image

Never mind that it's pivoted; pretend that you want to use Power Query or Power BI Desktop to sum across the columns and put the total in each row.  The problem, of course, is that we can't sum across columns, as Power Query doesn't do that.  So our first temptation is to reach for a custom column and use the formula:

=[ProductA] + [ProductB] + [ProductC]

But notice how on the second and third rows we get null instead of the totals we'd expect:

image

The issue is obviously the null, which totally screws up the math.  (This still surprises me, as Excel ignores null values, which is what I believe should happen here.)
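If you want to see this behaviour in isolation, drop the following into a blank query (this is purely illustrative):

= [
    AddWithNull = 1 + null,            // returns null: arithmetic propagates nulls
    SumWithNull = List.Sum({1, null})  // returns 1: List.Sum skips nulls
]

That List.Sum behaviour becomes important later in this post.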

My original suggestion (was poor)

My original thought (and recommendation) was to select the columns and go to Transform --> Replace Values and replace null with 0.  It seems to work in this case.

Having said that, there's a potential issue here.  Null and zero are not the same things at all.  A null is effectively an empty data point, where 0 is a value.  In a Power Pivot or Power BI model, that could be the difference between reporting zero dollars of sales for a day and reporting an empty set.  Are those different?  To me they are… empty means we weren't even open, where 0 means that we were open and sold nothing.  Maybe it's subtle, but it's a real difference.

Another concern comes up when we are dealing with columns that have real values in them.  Simply put:

  • Average(null,null,null,1,2,3,4,5) = 3
  • Average(0,0,0,1,2,3,4,5) = 1.875

So while my answer worked mathematically for a SUM, it was in truth not a good one.  And, in fact, the poster replied and said that he couldn't replace nulls with zero, as this compromised his data.

The evolution of the solution

Bill Sysyz dropped in and posted a bit of M code that would sum all the values in the row, which looks like this:

= Table.AddColumn(YourPreviousStep, "Sum", each List.Sum(Record.ToList(_)))

I think this is really cool, but it also had an issue in that it summed ALL the values in the row, and there are columns that include text too.  Now we can adjust this to only pick up certain columns, but it involves writing some code that provides each column name to the List.Sum() function.

That kicked off a discussion between Bill and me about complexity, which ended up with my stumbling into a solution that worked.  And it's simple…

Adding null to values and returning values

So here's all we need to do:

  • Select the columns you want to sum (hold down CTRL to select non-contiguous columns)
  • Go to Add Column --> Standard --> Add

I did exactly that for the ProductA, ProductB and ProductC columns from the data above, and here's the results (compared with the custom column method):

image

And just for reference, here is the formula that was generated:

= Table.AddColumn(#"Added Custom", "Sum", each List.Sum({[ProductA], [ProductB], [ProductC]}), Int64.Type)

So pretty much what Bill provided, but with specific hard coded columns, and 100% user interface driven, just the way it should be!

Final thoughts

The amusing part to me about this is that I have no idea how long this has worked.  I stumbled on this solution as I was typing "you should be able to just do this…" and it worked!  I've been working with this tool so long that I sometimes miss that some of the old gaps got filled in.  At any rate, it works, it's awesome, and hopefully it helps someone.

Protect Power Queries

How you protect Power Queries is a question that will come up after you've built a solution that relies heavily on Power Query, especially if you're going to release it to other users.

(This is a quick post, as I'm in Australia at the Unlock Excel conference, but still wanted to get a post out this week.)

Can you Protect Power Queries?

The answer to this is yes, you can.  It's actually very easy, and prevents your users not only from modifying your queries, but also from adding new queries to the workbook.  Essentially, it shuts the door on any additions or modifications to query logic, while still allowing queries to be refreshed… at least, it should.

So how do we Protect Power Queries?

To protect Power Queries we simply need to take advantage of the Protect Workbook Structure settings:

  • In Excel (not Power Query), go to the Review tab
  • Choose Protect Workbook
  • Ensure that Structure is checked
  • Provide a password (optional)
  • Confirm the password (if provided)

Once you’ve done this, the Power Query toolsets will be greyed out, and there is no way for the user to get into the editor.

image

Does refresh work when you Protect Power Queries?

This part kills me.  Seriously.

The answer to this question depends on whether or not you use Power Pivot.  If you don't, then yes, you're good to go.  As long as all your tables land on worksheets or as connections, then a refresh will work even when you protect Power Queries via the Protect Workbook method.

If, however, you have a single Power Query that lands in the data model, you're stuffed.  If Power Pivot is involved, then the refresh seems to silently fail when you protect Power Queries using this method (and I don't know of another method short of employing VBA, which is a non-starter for a lot of people).

It's my feeling that this is a bug, and I've sent it off to Microsoft, hoping that they agree and will fix it.  We need a method to protect both Power Query and Power Pivot solutions, and this would do it, as long as the refresh will consistently work.

Caveats about locking your workbook structure:

Some caveats that are pretty standard with protection:

  • Losing your password can be detrimental to your solution long-term. Make sure you have some kind of independent system to log your passwords so this doesn’t happen to you. And if your team is doing this, make sure you audit them so you don’t get locked out when a staff member leaves for any reason.
  • Be aware that locking the workbook structure also locks the ability for users to get into Power Pivot.
  • Workbook security is hackable with brute force macro code available on the internet for free. (Please don’t bother emailing me asking for copies or links to this code. I don’t help in disseminating code which can be used to hack security.) While protecting the workbook structure will stop the majority of users from accessing your queries, it should not be mistaken for perfect security.

Create Column From Examples

I knew that the create Column From Examples feature had been released to Power BI Desktop, but it just showed up in my Excel 2016 build today.  And man is it cool!

A scenario for Create Column From Examples

A user on my forum asked how to build a function that would extract the "Show Name" from the following format:

DV1511H, Episode Name ( Show Name, SeriesNumber)

As the user pointed out, this can be done via the following Excel formula:

=LEFT((MID(A1,SEARCH(" ( ",A1)+3,200)), (SEARCH(", S",(MID(A1,SEARCH(" ( ",A1)+3,200)))-1))

But how do you do something similar in Power Query?

My initial thought

My first expectation was to use one of the techniques from M is for Data Monkey, using the equivalents of Excel's SEARCH and MID functions, basically emulating the Excel logic.  And while that would totally work, I got distracted by something when I opened up my Excel today:

image

The new create Column From Examples button was there on my ribbon.  So naturally, I had to see what it would come up with.

How to create Column From Examples

Naturally, it starts with clicking the create Column From Examples button, which gives you two options:

  1. From All Columns
  2. From Selection

In this case, they would do the same thing, but I'm going to choose "Selection" anyway, as I only need to look at one column.  When I do, I get a new message across the top, and a new column.  I put in the pattern I wanted to get:

image

And once I hit Enter, it actually shows me the pattern it used:

image

As I'm happy with it, I click OK.  The formula that it provided (which I passed on to the questioner) is:

Text.BetweenDelimiters([Column1], " ", ",", 3, 0)
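To see why that works with the sample text, remember that the index arguments are zero-based:

// The 4th space (index 3) is the one right after the "(", and the
// 1st comma after it (index 0) is the one before SeriesNumber
Text.BetweenDelimiters("DV1511H, Episode Name ( Show Name, SeriesNumber)", " ", ",", 3, 0)
// returns "Show Name"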

Potential Improvements for create Column From Examples

Let me first say that I think this is fantastic.  I would not have come up with this function on my own, as I didn't even know that this function existed.

Some things that I wish we could change though:

  1. The column is created with a generic name.  I really wish we could have changed this during the creation phase instead of ending up with the generic "Part of Column 1" text.  In order to fix this, we either need to edit the M code formula or do another rename step, both of which could be avoided if we could simply rename the column during the creation phase.
  2. There is no gear icon in the applied steps window to take us back into the interface.  I'm sure that would be really hard to implement, but if you mess it up today, well… delete it and try again.
  3. There is no way to copy the function during the creation phase, and with the gear icon not available, the only way to copy/change the formula is via the formula bar.  Not a big deal if you know your M code, but for a novice/intermediate user, picking out the correct parts with all the commas, quotes and parentheses here could be a bit tricky.

image

Overall, despite what I would change here, this is a fantastic new function that is going to make life a lot easier for people.  Very cool!

‘DIY BI’ e-Book Launches Tomorrow!

Last week I announced that we are working on a series of free 'DIY BI' e-Books.  We've been hard at work on polishing it up, and I'm pleased to announce that the first DIY BI e-Book launches tomorrow!  It will be emailed at 9:00 AM Pacific Time to everyone on our newsletter list.

Sign up to get the free 'DIY BI' e-Book series

If you haven't already, sign up for our mailing list to receive your copy!  You can do so at the bottom of this post.

Creating the 'DIY BI' e-Book

I'm really thankful that I have a team of people behind me for this.  For me, technical writing is actually the easy part.  It certainly takes time, don't get me wrong, but the magic of copy editing, proof reading and graphic design is a whole other story.

Deanna has done a great job of proofing the book, and making me re-write any paragraphs that sounded good in my head, but maybe didn't translate so well beyond that.  And Rebekah has done a phenomenal job on the graphic design and layout.

Each book will be themed as shown below:

image

Blue for Excel, based on the Excelguru website colour scheme; dark green for Power Query (like the powerquery.training site); light green for Power Pivot (like the Power Pivot logo); and yellow for Power BI, like its colour scheme.

The 'DIY BI' e-Book Cover

We wanted to create a cool cover, but most of the stock images for sale out there have a Mac in the picture.  Since 3/4 of these technologies won't work on the Mac, that plainly wasn't something we wanted to put out there.  So that led to us staging our own photo shoot to generate our cover - which I'll admit is a lot harder than I thought it would be.  Here's the finished cover for the first 'DIY BI' e-Book.

image

Our next e-Book will use the same cover image, but will be themed in the dark green of the Power Query series.

And yes, before you all ask, that IS a Pie Chart in the bottom left. And no, I don't love pie charts.  But sometimes you have to have one, because your boss asks for it.  (Just don't expect to find one INSIDE any of the e-Books!)

Reserve Your Free 'DIY BI' e-Book Now

If you're already receiving our newsletter, there is nothing else you need to do.  It will show up in your inbox shortly after 9:00 AM Pacific time on Apr 7, 2017.  If you're not on our newsletter list yet though, just sign up. It's that easy!






Free ‘DIY BI’ e-Books

Today I wanted to just make a quick announcement that we are currently working on a series of free 'DIY BI' e-Books.

Free 'DIY BI' e-Books? Tell me more!

Over the past few years of working with Excel and Power BI, I've obviously picked up a few different methods, tips and tricks for working with the software.  And looking at how successful our free e-Book "Magic Tricks for Data Wizards" has been through the Power Query Training site, I thought it would be nice to do something similar for Excelguru readers.

One of the cool things about the Excelguru audience at this site is the diversity.  A lot of people originally came here for Excel, but we've been exploring Power Query, Power Pivot and Power BI for the past few years as well.  The one thing that ties us all together is that we are building "Do it Yourself Business Intelligence" or "DIY BI".

My original plan was to release one e-Book with 20 different tips, tricks and techniques; 5 each for Excel, Power Query, Power Pivot and Power BI.  After getting started, however, I realized that it was going to take me a bit longer to get that all done than I wanted.  But since I want to get information out to our readers, I've decided to break this down into four separate e-Books which will be collected under the umbrella of "DIY BI Tips, Tricks and Techniques".  Each e-Book will focus on one specific area of the DIY BI story.

What will the free 'DIY BI' e-Books include?

Well… tips, tricks and techniques, of course.  Okay, seriously, each is fully illustrated and written to give you some great examples and ideas that I hope will help you in your DIY BI journey.

Here is what is covered in DIY BI Tips, Tricks and Techniques for Excel:

  • The easiest formula to return the end of the month
  • Show a message when cells are hidden
  • Quick alignment of objects
  • Easy to read variances
  • Show a message if your Pivot data is stale

Sample image from DIY BI Tips, Tricks & Techniques for Excel

What areas will the free 'DIY BI' e-Books cover (and when will they be released)?

The e-Books will be released in the following order:

  • DIY BI Tips, Tricks and Techniques for Excel
  • DIY BI Tips, Tricks and Techniques for Power Query
  • DIY BI Tips, Tricks and Techniques for Power Pivot
  • DIY BI Tips, Tricks and Techniques for Power BI

The first is already written; we just need to lay it out and make it look a bit more awesome.  Our target is to get it released by the end of next week.

With regards to the rest, I'll go as fast as I can on them, but as you can imagine, doing things right does take time.  I would expect that each will take 2-3 weeks to build out properly, but if I can get them out faster I most certainly will.

Am I going to need Excel 2016 to get value from the free 'DIY BI' e-Books?

No.  While I highly advocate being on a subscription version of Excel 2016, you'll find content in each of the first three e-Books which can be used in prior versions of Excel.

How do I receive the free 'DIY BI' e-Books?

You sign up for the Excelguru newsletter.  It's just that easy.  As soon as each e-Book is finished, we'll be emailing it to everyone who is currently subscribed to our newsletter.

And in the meantime, you also get a monthly email from us which now includes news about the latest updates to both Excel and Power BI.

Longer term, once all four e-Books are written, any new subscribers will receive the first e-Book upon signup, and then the next in the series will arrive every couple of days until you have the full set.

So what are you waiting for?  Sign up right here and don't miss out on free DIY BI Tips, Tricks and Techniques for your work!






Calculate Start and End Dates

I got an email from a reader this morning who asked how to calculate start and end dates for a given employee when they have had multiple terms of employment.  Since it's been a while since we've had a technical post on the blog, I thought that this would be a good one to cover.

The Challenge

In this case (which you can download here) we are given the table shown below on the left, and we need to create the table shown on the right:

image

As you can see, John's start date needs to be listed as Jan 1, 2013 and his end date needs to be listed based on the last date he worked here: Oct 31, 2016.

How to Calculate Start and End Dates using Power Query

My first thought was "we'll need a custom function to do this", but as it turns out, there is a MUCH easier way to accomplish this, and it's 100% user interface driven as well.  I'm virtually certain that the performance will also be much better over larger data sets (although I haven't specifically tested this.)

Let's take a look:

Step 1: Connect to the data

This is pretty easy, just select the table and use Power Query to connect to the data:

  • Excel 2010/2013:  Power Query --> From Table
  • Excel 2016: Data --> From Table/Range

We'll be launched into Power Query and will be looking at our short little table:

image

Step 2: Calculate Start and End Dates via Grouping

The trick here is to actually use Power Query's Grouping feature to calculate the start and end dates.  To do this:

  • Go to Transform --> Group By

The dialog will open, already offering to group by Name, which is what we need.  Now we just need to set up the aggregations.  The first is going to be our Start Date, so we'll rename it as such and set it to calculate a Min of the From column:

image

The effect here is that this will provide the lowest value from the "From" column for each employee.  A perfect start.

Next, we need to add a new grouping level to get the End Date.  To do that:

  • Click Add Aggregation
  • Configure the new column as follows:
    • New column name: End Date
    • Operation: Max
    • Column: To

It should look as follows:

image

And believe it or not, you're done!

image
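For reference, the M code behind that Group By step should look something like this (the "Source" step name and the type annotations are assumptions based on my example):

= Table.Group(
    Source,
    {"Name"},
    {
        {"Start Date", each List.Min([From]), type date},
        {"End Date", each List.Max([To]), type date}
    }
)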

How this works

The key here is that the grouping dialog in Power Query works on all records in the group.  This is really important, as the first aggregation column has no bearing on subsequent columns… if it did, we'd get the Max for the first record only, which is not at all what we'd be looking for.  Instead, the Group By first restricts the data to all records for John, then pulls the Min and Max out of those three rows, returning them as the values.

The other thing that is worth noting here is that the order of the source data is irrelevant.  We could have provided either of these options and the answers would still have been calculated correctly:

image

Final Notes

It's also worth mentioning that this technique to calculate start and end dates will work in both Excel and Power BI, as the feature set is identical between the two products.

Sometimes things that look hard are actually really easy when we have the right tools in our hands, and this happens to be one of those situations.

Make the Sample Binary file path dynamic

In this post we will explore how to make the Sample Binary file path dynamic when combining files using the new Combine Binaries experience in Excel and Power BI Desktop.

Sample Files

If you'd like some sample data files to play with, you can download them here.

Why do we even need to talk about this?

I've covered the new Combine Binaries experience in my last couple of posts.

But one thing I didn't dig deep into was the Sample Binary, and the fact that it actually gets a hard coded file path.  To replicate the issue, here's how I set up my quick test:

  • Open the application of choice (I'm going to use Power BI Desktop here)
  • Get Data (create a new query) From File --> From Folder
  • Browse to the folder path and click Edit

In this case I've browsed to the following file path, which only contains a single file (so far):  C:\Users\KenPuls\Desktop\CSVs.  And here's what it looks like in Power BI Desktop or Excel:

image

And now I click the Combine Binaries button at the top right of the Content column, resulting in this:

image

Now, as I discussed in the first post in this series, we know we can modify the "Transform Sample Binary From…" step to see those changes in the final output.  So what's the issue here?  I'm going to right click the Sample Binary and choose to view it in the Advanced Editor:

image

Note:  I did add a line break between the 3rd and 4th lines, so read that as one.

The key part to notice here is that the file path, despite being in the original CSVs query, is also hard coded into this query TWICE.  That makes it very difficult to port this from one location to another, as simply changing the file path in the CSVs query is not sufficient, it will still break upon refresh.  It's for this reason that we need to make the Sample Binary file path dynamic: so that we only have to change it in one place.
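To make that concrete, here is roughly what the Sample Binary query's code boils down to in my example (your path, file name and step names will differ):

let
    Source = Folder.Files("C:\Users\KenPuls\Desktop\CSVs"),
    // The path appears AGAIN here, along with the sample file's name
    #"Jan 2008 csv" = Source{[#"Folder Path"="C:\Users\KenPuls\Desktop\CSVs\", Name="Jan 2008.csv"]}[Content]
in
    #"Jan 2008 csv"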

How to make the Sample Binary file path dynamic

To start with, I'm going to throw this solution away and start over completely.  And again, while I'm using Power BI Desktop to illustrate the method to make the Sample Binary file path dynamic, this will work the same in Excel with only one exception.  (Once you have the new Combine Binaries method in Excel, anyway.)

Step 1 - Launch the Power Query editor

To get started, I'm going to launch myself into the Power Query editor, ideally without creating a new query.  This is easy to do in Power BI Desktop, simply go to the Home Tab and click the top of the Edit Queries button.  You'll be launched into the editor without creating any new queries:

image

In Excel, if you've never opened the Power Query editor before, there is no way to get in there without creating a new query.  You'll need to create a New Query --> From Other Sources --> Blank Query.  Then you can expand the Queries pane on the left, right click Query1 and Delete it.  Silly, but that's pretty much the way to accomplish this.  (If you have created other queries, you just need to edit any one to get into the editor, as it won't create a new one for you.)

Step 2 - Create a Parameter for your file path

Before we get started, we need to create a single place to update our file path.  For this we'll use one of the Power BI/Excel parameters.  To do that:

  • Go to Home --> Manage Parameters --> New Parameter
  • Set up the Parameter as follows:
    • Name:  FolderPath
    • Required:  yes
    • Type:  Text
    • Current Value:  <your file path>  (mine is C:\Users\KenPuls\Desktop\CSVs)
  • Click OK

This will result in a rather simple little parameter that looks like this:

image

Step 3 - Create a new query against the folder

Now that we have our parameter, we are ready to actually create the query we need against the folder.  So let's do that now.  (Oh, and if you're working with Excel, just stay in the Power Query editor - no need to go back to Excel first.)

  • Power BI Desktop:  Home --> New Source --> More --> Folder
  • Excel:  Home --> New Source (near the end of the ribbon) --> File --> Folder

When prompted for the folder path, instead of clicking Browse, click the ABC on the left and choose Parameter:

image

It will automatically populate with the FolderPath parameter and, upon clicking OK, will take you to the preview window where you can click OK (Power BI Desktop) or Edit (Excel).
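If you check the new query's Source step in the formula bar, it should now read something like this, referencing the parameter instead of a literal path:

= Folder.Files(FolderPath)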

Step 4 - Combine the Binary Files

Now we'll combine the binary files… all one of them.

  • Rename the Query to "Transactions"
  • Click the Combine Binaries icon on the top right of the Content column

Just a quick side note here… in the current build of Excel we don't see this, but in Power BI desktop, we are taken to this window where we can control how the data types are determined:

image

This is pretty cool, and I assume it will be coming to Excel in the future too.  If your data types are consistent most of the time, you generally won't have to worry about this.  If, on the other hand, you've got strange things that happen (say, once every 10,000 transactions you get a fractional sales unit), you may want to choose the "Entire Dataset" option to avoid truncated decimals.

For now, just click OK with the default to blow past this dialog.

Step 5 - Make the Sample Binary file path dynamic

And finally, here we are, it's time to make the magic happen and actually make the sample binary file path dynamic.  To do this we're going to make a couple of small edits to the Sample Binary's M code.

  • Right Click the Sample Binary --> Advanced Editor

NOTE:  Be sure not to accidentally hit the Sample Binary Parameter1… we don't want that one!

Now, the first thing to notice is that the very first line no longer points to a hard coded file path; instead, it points to our parameter.  That's very cool, as a single update to the parameter changes both this query and the original one that pulls the files from the folder.  One place to fix them both.

Now, there are still issues here, but I just want to do a bit of cosmetic cleanup first.  The second and last lines still start with the name of the file.  This is just something inside the M code that you'll probably never read again, but it's still good practice to clean it up:

image

I've changed both highlighted parts to read "SampleFile" to make the code a bit shorter.  And now I can focus on the real issue:

image

The highlighted portion above still holds both the hardcoded file path and file name.  What this means is that even though the folder path is dynamic, if I change the parameter and update the file path on a new computer, it will still be pointing to the older source.  That is far from ideal.

Interestingly, it's really simple to fix when you know how.  You simply grab everything from the [ to the ] and replace it with 0 so that it looks like this:

image
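In code terms, the edit amounts to this (the "before" line is a sketch based on my example):

// Before: hard coded to one specific file
Source{[#"Folder Path"="C:\Users\KenPuls\Desktop\CSVs\", Name="Jan 2008.csv"]}[Content]

// After: positional, so it always grabs the first file the folder query returns
Source{0}[Content]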

To show that it updates properly, I'm going to click Done, and throw a new file into the folder called "Feb 2008.csv".  Since this comes earlier in the alphabet, we'd expect it to show up before "Jan 2008.csv", and it does when I refresh the preview window:

image

The End Result

The biggest reason I want to make the sample binary file path dynamic is the scenario where I email the solution to someone else, and they have a different file path to the data files.  In this case they now only need to edit the project, update the FolderPath parameter, and everything will work again.

Why Not Edit in the Formula Bar?

In truth, you don't actually have to go into the Advanced Editor to update your code.  I made a cosmetic fix in there, as I actually do go back and read code later.  Since the default step name leaves a red herring in the code, I wanted to nail that down.  If you're never going to read the code though, it's cosmetic.

In effect, all that is really necessary is to replace the code from [ to ] with 0 as we did above:

image

The problem is that if you do this here, it automatically kicks off 3 new steps that have to be deleted:

image

Granted it's not the end of the world, but since I want to clean up the code anyway…

Final Thoughts

You're not alone if you think this should be unnecessary.  In my opinion, this dynamic nature should be standard, and I think it would be an easy fix for the team to implement.  Marcel even posted a suggestion to modify this feature here, which you should consider voting for.

Combine Excel Files

If you've wanted to use Power Query to combine Excel files with a single click - like you could for TXT and CSV files - the feature is finally here*.  The Combine Binaries feature has been re-written (as I discussed yesterday), and it now allows for easy combination of Excel files.

* This new feature is in the O365 Fast Insider preview today and in Power BI Desktop's November update.  I'm expecting to see it show up in the January Power Query update for Excel 2010/2013.

Just a quick caveat, I'm going to cover the items specific to the Combine Excel Files experience here, and not dive deep into the methods of how to modify and adjust the created queries.  For that reason, if you haven't read yesterday's post, I highly recommend doing that first.

The Classic Combine Excel Files Experience

For anyone who has tried to combine Excel files in the past, you'll be familiar with this.  You create a new query to pull From File --> From Folder and navigate to the folder full of Excel files.  Then you hopefully click the Combine Binaries button:

image

And you get this:

image

Obviously that's not so helpful.  The way to deal with this was to go to Add Column --> Add Custom Column and use the formula =Excel.Workbook([Content]) to convert the data into tables. Then some filtering and more user interface driven work was required in order to get your data.  The experience was a bit painful, and certainly not for beginner users.
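For anyone who built that workaround, the custom column step amounted to something like the line below (the "Data" column name is just my choice here), followed by manually filtering and expanding the resulting table column:

= Table.AddColumn(Source, "Data", each Excel.Workbook([Content]))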

The New Combine Excel Files Experience

To start with, I'm going to take two Excel files with sales transactions in them and combine them easily using the new Combine Excel Files experience.  Here are the file characteristics:

  • Each file has a small set of data (2-3 rows)
  • The data is stored on Sheet 1 in a table
  • Both files are stored in the same folder

To get the data, I'm going to do the following:

  • Get Data using a new query "From File --> From Folder"
  • Browse to and select the folder
  • Click Edit at the preview window

As you can see, we've got a couple of Excel files here:

image

So I'll click the Combine Binaries button (at the top right of the Content column).

And where this would have triggered an error in the past, it now kicks out a preview:

image

And what happens next depends on what you select (which is why they are numbered above.)

Combine Excel Files - Method 1

For the first kick at this, I'll select the Sample Binary Parameter 1 folder icon (indicated by the number 1 in the Combine Binaries preview).

Nothing will ever show in the preview window, but upon selecting the folder I can click OK, which will result in this:

image

As I showed in yesterday's post on the new Combine Binaries Experience, we now get a bunch of new queries and a few steps along the way.  The end result of this query, however, is a listing of all the worksheets, tables and ranges in each workbook.  This is the view that will allow you to go back and granularly pick out what you need and transform it.  In other words, this is kind of the detailed hard core view which was the equivalent of writing the custom columns that we used to have to do.

Because this is so similar to the classic method, I'm not going to do much more with this.  The real point was to expose that selecting the folder in the preview window will bring you to this setup.

Combine Excel Files - Method 2

Method 2 revolves around selecting the table in the preview window; in this case the Sales table.  When we select that branch in the preview window we WILL get a preview of the data:

image

And when we click OK, we actually get the data combined nicely:

image

As discussed in the previous post, if we wanted to modify and/or change:

  • The Source columns (Source.Name or others): We modify the Removed Other Columns1 step in this query.
  • The data before it is imported and combined: We modify the Transform Sample on the left side.

Now this section is MUCH easier than what we used to have to do!

Combine Excel Files - Method 3

But what if your data is not in an official Excel Table?  What if it's just data in a worksheet range?  Well, then you select the worksheet icon instead:

image

And the results are virtually identical to the previous method:

image

Why does that work?  It works because the Transform Sample is smart enough to automatically promote the first row to headers, so it doesn't actually need the table.  On the other hand, if that data wasn't in the first row, you may need to go back to the Transform Sample and tweak it to do what you need (like remove top rows, promote rows to headers, delete columns, filter, etc.)
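In M terms, that means the Transform Sample presumably contains a promote-headers step along these lines (the source step name here is just a guess):

= Table.PromoteHeaders(#"Imported Worksheet", [PromoteAllScalars=true])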

Caveats When Trying to Combine Excel Files

This experience will work well for many things, but as always there are some caveats.

Single Object Only

The default experience here is to combine based on the object you select.  In other words, if you select Sheet 1, it will combine Sheet 1 from each file.  It won't combine all sheets in the file based on the Sheet 1 code.  If you want to do that, you need to go back to Method 1 above, filter to the objects you want, and deal with them using classic import methods.  (Unless you try to get real techy and build the function then repurpose it to use in that table - something I have not done yet.)

Preview Based On First Item In the List

The preview and import templates are based on the first file in the list.  You can see how that affects things when I throw a new file into my structure that has different worksheet and table names:

image

While the two Sales workbooks have Sheet1 in them, this one doesn't, making it impossible to use this function to combine the first worksheet in each file. (The template would be based on Dec and would return errors for the other two files.)

If the order is important, you'll need to sort the file list first to get the correct file to the top before you trigger the Combine Binaries function.

For the record, I have sent an email to the Power Query team suggesting that it would be nice to get the option to pick the file here which the template should be based upon.  That would make this much easier to work through, I think.

Inconsistent Columns Are Bad News

Let's say that you have two files with tables in them which have different column names (or quantities).  The transformations generated will actually deal with this correctly, resulting in a column of tables which have different headers.  All good so far, but when the main query gets to the last step, it expands the column of tables based on the headers of the table in the first row only.  This is actually standard behaviour, so not a surprise; I just want to make sure you don't think this is a holy grail that will solve the differing column issue.  (We have more work to do in that case.)

Overall Thoughts

At the end of the day, I have to say that this is a pretty welcome addition.  I'm still not a fan of the names of the generated queries, and I would add something to change the template file, but I think this is going to make it a LOT easier for people to import and transform Excel files than it has been in the past.

New Combine Binaries Experience

One of the updates in the latest Excel 2016 Insider preview build is a new Combine Binaries experience for Power Query.  This is both good and bad, and in this post we'll take a detailed look at both sides before it hits your Excel implementation.  (With this new feature in the O365 Fast Insider preview, and with it already existing in Power BI Desktop's November update, I'd expect to see this show up in the January Power Query update for Excel 2010/2013.)

A Little History Lesson

Before we start, let's get everyone on the same page.  If you didn't know this, the Combine Binaries feature is one of the coolest things in Power Query or Power BI Desktop, as it lets you do a one click consolidation of "flat" files (txt, csv and the like).

How Combine Binaries (Used to) Work

In order to make this work, set up a new query to get data From File --> From Folder, browse to the folder that contained your files, select it and clear the preview window.  At that point all the magic happens when you click the Combine Binaries button at the top of the Content column:

image

That "one-click" experience would take your source folder and add four steps to your query:

image

And it puts it all together nicely.  So cool, so slick, so easy.

Why did we need a new Combine Binaries experience?

So the first real question here is "Why even modify this experience at all?"  As it happens, there were a few issues in the original experience:

  1. You lost the original file name details.  If you wanted to keep that, you needed to roll your own custom function.
  2. Headers from each file were not removed, so you'd have to filter those out manually.
  3. It could only work with flat files (csv, txt, etc..) but not more complex files like Excel files.

So for this reason, the team decided to build a more robust system that could deal with more files and more data.

The New Combine Binaries Experience

So let's look at what happens in the new experience.  We start the same way as we always did:

  • Set up a new query to get data From File --> From Folder
  • Browse to the folder that contained your files, select it and click OK
  • Click Edit at the preview window
  • Click the Combine Binaries button at the top of the Content column

At this point a whole bunch of crazy stuff now happens, and your query chain looks totally different than in the past:

image

There are 3 main areas that are different here:

  1. A whole bunch of new queries and parameters with very similar names,
  2. The original source name is retained
  3. Different query steps than in the past

If your first reaction to this is being overwhelmed, you're not alone.  I'll admit that my first reaction to this was not a happy one.  There is a huge amount of stuff injected into the file, it's difficult to follow the relationships in the M code (even if you do know how to read it all), and it isn't intuitive as to what to do with it.

At the end of the day, the biggest change here is the order in which things happen.  In the original implementation of the Combine Binaries feature set, it combined the files first, then applied some other Power Query steps.

The new method actually examines the individual files, formats them via a custom function, then appends them.  This is very different, and it actually gives us a bit more flexibility with the files.

What's the end effect that will be different for you?  Simply this:

  • More queries in the chain (maybe you don't care), and
  • The file name is preserved by default (which was not there in the past)

image

Now if you are good with everything here, then no worries.  Continue on, and you're good to go.  But what if you want to make changes?

Making Changes in the new Combine Binaries Experience

The biggest challenge I have with this new implementation is that if you are a user hitting this for the first time, how and what can you change?

Changing or Preserving File Details

What if I wanted to keep more than just the file name… maybe I wanted to keep the file path too.  You'll be surprised, but this is actually pretty easy now.  Since Power Query wrote a custom function for us to import the data, all the file properties were already preserved for us.  But it also made a choice to keep only the source file name.

To modify this, we just look at the steps in the Applied Steps window, click the gear next to "Removed Other Columns", and choose the ones we do want to keep:

image

So in this case, I can actually uncheck the box next to Source.Name and remove that from the output.  (We want to keep the very last step, as that is what actually appends the data from the files).

Also… after looking at the Applied Steps window, it turns out the Renamed Column1 step was only in place to avoid a potential name conflict if you had a column called Name (there is a really good reason for this, which I'll look at in another post).  In this case it is totally unnecessary, so we can just delete it.

So now our code looks as shown below, and the output looks similar to what we would see in the old experience:

image

Notice I said that it looks similar - not that it is identical.  This is actually better as there are no headers repeating in the data at all, so we don't need to get rid of those.  That is an improvement.

Modifying the Import Function

Now, that's all good for the details about the file, but what about the data?  What if I only want records for department 120?

To understand the impact of this next piece, we need to understand that there are two ways to combine data:

  1. Bring ALL the data into Excel, then filter out the records you don't want
  2. Filter out the records you don't want, and ONLY bring in the ones you do

Guess which will consume fewer resources overall?  Method 2.  But the challenge here is that Power Query encourages you to use Method 1.  You're presented with a full table that is just begging you to filter it at this point… but you're better off dealing with it using Method 2… only it's not obvious how to do that.

The kicker here is that the logical flow of Power Query's Applied Steps window has been interrupted with the "Import Custom Function1" step.  And if you've ever used Power Query before, you know that modifying a Custom Function is far from a beginner friendly thing to do.

As it happens though, the Power Query team has given us a way to easily modify the custom function, it's just… well… let's just say that the name isn't as intuitive as it could be:

image

So this is the secret… if you select the "Transform Sample Binary from Combine Binaries" query (what a mouthful!), it takes you to the template that the function is actually using.  To put this another way… any changes you make here will be used by the custom function when it imports the data.

So here's the changes I made in the Transform Sample:

  • Removed the Change Type step
  • Filtered Dept to 120
  • Renamed TranDate to Date
  • Renamed Sum of Amount to Amount
  • Change Type with Locale on the Date column to force it to US dates
  • Removed Errors from the Date column
  • Change Type on the other columns

In other words, I do the majority of the cleanup work here based on the single file.  The end result for the Transform Sample query looks like this:

image
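Expressed in M, those steps boil down to something like this sketch (the step names, column names and exact source call are assumptions; check your own Transform Sample for the real ones):

let
    // #"Sample Binary" is the sample file parameter created by the combine step
    Source = Csv.Document(#"Sample Binary", [Delimiter=","]),
    Promoted = Table.PromoteHeaders(Source),
    // Filter BEFORE combining, so unwanted records never get imported
    Filtered = Table.SelectRows(Promoted, each [Dept] = "120"),
    Renamed = Table.RenameColumns(Filtered, {{"TranDate", "Date"}, {"Sum of Amount", "Amount"}}),
    // Change Type with Locale to force US-style dates
    DatesTyped = Table.TransformColumnTypes(Renamed, {{"Date", type date}}, "en-US"),
    ErrorsRemoved = Table.RemoveRowsWithErrors(DatesTyped, {"Date"}),
    FinalTypes = Table.TransformColumnTypes(ErrorsRemoved, {{"Dept", Int64.Type}, {"Amount", type number}})
in
    FinalTypes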

Hopefully this makes sense.  It's a fairly straight forward transformation of a CSV file, but rather than doing the work on the files that have already been combined, I'm doing it in the Transform Sample query.  Why?  Because it pre-processes my data before it gets combined.  And the cool thing?  Now we go back to the main query I started with:

image

The results have already propagated, and we're good to go.

Thoughts and Strategies

Overall, I think the new Combine Binaries experience is a good thing.  Once we know how to modify it properly, it allows us some great flexibility that we didn't have before - at least not without writing our own custom functions.  There are a couple of things we do need to think about now though.

Name Your Queries Early

Under this new method, I'd highly recommend that you change the name of your query BEFORE you click that Combine Binaries button.  All those samples on the left side inherit their name from the name of the query, and Power Query is not smart enough to rename them when you rename your query.  Here's a comparison of two differently named queries:

image

So while I used to think about renaming my query near the end of the process, we really need to think about that up front (or go back and rename all the newly created queries later.)

Where Should You Transform Your Data?

The next big thought is where you should run your transforms… should you do it on the master query (the one with the consolidated results), or the sample used for the transformation function?

The answer to that kind of depends.  A few instances where this could really matter:

  1. Filtering: I would always filter in the sample.  Why spend the processing time to bring in ALL of the data, only to filter it at the end?  Much better to modify the Transform Sample to avoid bringing in the records in the first place.
  2. Sorting: I would suggest that most times you'd want to run your sorts in the master query.  Let's say you have a file for each month, and you sort by date.  If you do that in the master query, things will be sorted sequentially.  If you do it in the Transform Sample, it will be sorted sequentially within each file, but if the files are out of order, your master will still be out of order until you re-sort by date.  (Apr comes before Jan in the alphabet.)
  3. Grouping:  This one is tricky… most of the time you'll probably want to load all of your transactions into the master query, then group the data.  So if I want to group all transactions by month and account code, I'd do it in the master query.  But there are definitely instances where you'll want to pre-process your data, grouping the data inside the individual file before it lands in your master query.  A case in point might be the daily payments list for payments you issue to vendors.  If they are batched up then sent to the bank, you'd want to import the file with all daily records, group them, then land that pre-grouped data into your master query.  That is an operation that you may wish to do at the Transform Sample level.

The good news is that it is easy to experiment here and switch things up as you need to.

And One More Major Impact…

This blog post has really focussed on how the new Combine Binaries experience impacts importing text files.  What it doesn't cover, however, is why it was REALLY built.  I'm going to cover this in tomorrow's blog post: how the new Combine Binaries experience allows one click consolidation of Excel files!