Creating a VLOOKUP Function in Power Query

Tonight I decided to actually follow through on something I’d been musing about for a while:  building a full-fledged VLOOKUP function in Power Query.  Why?  Yeah… that’s probably a good question!

Replicating VLOOKUP’s exact match is REALLY easy in Power Query.  You simply take two tables and merge them together.  But the approximate match is a little harder to do, since you don’t have matching records on each side to merge together.

Now, to be fair, you could go the route of building a kind of case statement, as Chris Webb has done here.  In actual fact, you probably should do that if you want something that is lean and mean, and the logic won’t change.  But what if you wanted to maintain a table in Excel that holds your lookup values, making it easy to update? Shouldn’t we be able to take that and use it just like a VLOOKUP with an approximate match?  I don’t see why not.  So here’s my take on it.

Practical Use Cases

I see this as having some helpful use cases.  They’ll mostly come from Excel users who are experienced with VLOOKUP, maintain lookup tables in their workbooks, and want to reach back to that familiarity.  They’d probably be tempted to do something like this:

image

The concern, of course, is that landing data in the worksheet during this cycle contributes to file size, memory usage and ultimately re-calc speed, so if you can avoid this step on the way to getting it into Power Pivot, you plainly want to do that.

The cool thing is that by building this the way I’ve done it, you’re not restricted to landing your data in the worksheet to use VLOOKUP with it.  You can pull data into Power Query from any source (csv, text file, database, web page) and perform your VLOOKUP against your Excel table without that worksheet round trip.

Let’s Take a Look…

Now, I AM going to use Excel based data for this, only because I have a specific scenario to demonstrate.  You can download a sample file – containing just the data – from this link.  (The completed file is also available at the end of the post.)

So, we have a series of numbers, and want to look them up in this table:

image

I really used my imagination for this one and called it “LookupTable”.  Remember that, as we need that name later.  Note also that the first record is 1, not 0.  This was done to demonstrate that an approximate match can return a #N/A value, as you’ll see in a minute.

Now here’s what things would look like using standard Excel VLOOKUP formulas against that table:

image

Hopefully this makes sense.  The formulas in columns 2, 3 and 4 are:

  • =VLOOKUP([@Values],LookupTable,2,TRUE)
  • =VLOOKUP([@Values],LookupTable,3)
  • =VLOOKUP([@Values],LookupTable,2,FALSE)

Just to recap the high points here… column 2 declares the final parameter as ,TRUE which will give us an approximate match.  Column 3 doesn’t declare the final parameter, which will default to ,TRUE and give an approximate match.  Column 4 declares the final parameter as ,FALSE which means we want an exact match.  The end result is that only one value matches, which is why we get all those #N/A results.

Standard VLOOKUP stuff so far, right?

Creating the VLOOKUP function in Power Query

Before we get to using the function, we need to create it.  To do that we’re going to go to:

  • Power Query –> From Other Sources –> Blank Query
  • View –> Advanced Editor

Highlight all the code in that window and replace it with this… (yes, it’s not short)

let pqVLOOKUP = (lookup_value as any, table_array as table, col_index_number as number, optional approximate_match as logical) as any =>
let
    /* Provide the optional match type if the user didn't */
    matchtype =
        if approximate_match = null
        then true
        else approximate_match,

    /* Get the name of the return column */
    Cols = Table.ColumnNames(table_array),
    ColTable = Table.FromList(Cols, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    ColName_match = Record.Field(ColTable{0}, "Column1"),
    ColName_return = Record.Field(ColTable{col_index_number - 1}, "Column1"),

    /* Find the closest match */
    SortData = Table.Sort(table_array, {{ColName_match, Order.Descending}}),
    RenameLookupCol = Table.RenameColumns(SortData, {{ColName_match, "Lookup"}}),
    RemoveExcess = Table.SelectRows(RenameLookupCol, each [Lookup] <= lookup_value),
    ClosestMatch =
        if Table.IsEmpty(RemoveExcess) = true
        then "#N/A"
        else Record.Field(RemoveExcess{0}, "Lookup"),

    /* What should be returned in case of an approximate match? */
    ClosestReturn =
        if Table.IsEmpty(RemoveExcess) = true
        then "#N/A"
        else Record.Field(RemoveExcess{0}, ColName_return),

    /* Modify the result if we need an exact match */
    Return =
        if matchtype = true
        then ClosestReturn
        else
            if lookup_value = ClosestMatch
            then ClosestReturn
            else "#N/A"
in
    Return
in
    pqVLOOKUP

Now:

  • Click Done
  • Rename the query to “pqVLOOKUP”

All right… the function is there.  Now let’s go make use of it… (we’ll come back to how it works in a bit.)

Using the VLOOKUP function in Power Query

Now, before we go any further, I want to ask you a favour.  I need you to pretend for a second.  Pretend that the data we are connecting to next is a database, not an Excel table.  You’ll see how this can be useful if you’ll play along here.  (The only reason I’m using an Excel table for my source data is that it’s easier to share than a database.)

Let’s go click in the DataTable table.  (This one:)

image

Now, let’s upload this “database” into Power Query…

  • Go to Power Query –> From Table

You should have something like this now:

image

Funny how Power Query reads the #N/A values as errors, but whatever.  Let’s get rid of those columns so that we’re left with just the Values column.

  • Right click Values –> Remove Other Columns

Now, we’re going to make a really small M code edit.

  • Go to View –> Advanced Editor
  • Copy the second line (starts with Source =…)
  • Paste it immediately above the line you just copied
  • Modify it to read as follows:
    • Source –> LookupSource
    • DataTable –> LookupTable

Your M code should now look as follows:

image
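In other words, the query should now read something like this (a sketch – the exact step names depend on how Power Query generated your original code):

let
    LookupSource = Excel.CurrentWorkbook(){[Name="LookupTable"]}[Content],
    Source = Excel.CurrentWorkbook(){[Name="DataTable"]}[Content],
    #"Removed Other Columns" = Table.SelectColumns(Source, {"Values"})
in
    #"Removed Other Columns"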

  • Click Done

Nothing really appears to look different right now, but you’ll notice that you have an extra step called “LookupSource” on the right.  If you switch back and forth between that and Source, you’ll see we are looking at the original DataTable and the LookupTable.  The reason we do this is to make the next step really easy.

  • Go to Add Column –> Add Custom Column
  • Call the column 2 True
  • Enter the following formula:
    • pqVLOOKUP([Values],LookupSource,2,true)

Okay, so what’s what?

  • pqVLOOKUP is the name of our function we added above
  • [Values] is the value we want to look up
  • LookupSource is the table we want to look in to find our result
  • 2 is the column we want to return
  • true is defining that we want an approximate match

And, as you can see when you click OK, it works!

image

Let’s do the next two columns:

  • Go to Add Column –> Add Custom Column
  • Call the column 3 default
  • Enter the following formula:
    • pqVLOOKUP([Values],LookupSource,3)

So this time we asked for a return from the 3rd column, and we omitted the final parameter.  Notice that it defaulted to true for us:

image

Last one…

  • Go to Add Column –> Add Custom Column
  • Call the column 2 false
  • Enter the following formula:
    • pqVLOOKUP([Values],LookupSource,2,false)

And how about that, all but one comes back with #N/A:

image

And with that you can load this into a table in the worksheet:

image

Notice that the results are identical to those of the original Excel table, with one exception… the #N/A I have provided is text, not an equivalent to the =NA() function.

The completed file is available here.

How Does the VLOOKUP Function in Power Query Actually Work?

This VLOOKUP actually has some advantages over the VLOOKUP we all know and love.  The most important is that we don’t need to worry if the list is sorted or not, as the function takes care of it for you.  It essentially works like this:

  • Pull in the data table
  • Sort it descending by the first column
  • Remove all records greater than the value being searched for
  • Return the value in the requested column for the first remaining record UNLESS we asked for an Exact match
  • If we asked for an Exact match then it tests to see if the return is a match and returns #N/A if it’s not
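To make that concrete, here’s what direct calls would return (a sketch – it assumes LookupSource holds our lookup table, and that 25 doesn’t appear in its first column):

/* Approximate match: returns column 2 for the largest first-column value <= 25 */
ApproxResult = pqVLOOKUP(25, LookupSource, 2, true),
/* Exact match: returns "#N/A" because 25 isn't in the first column */
ExactResult = pqVLOOKUP(25, LookupSource, 2, false)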

Some key design principles I used here:

  • The parameters are all in EXACTLY the same order as Excel’s VLOOKUP
  • The required, optional and default parameters match what you already know and use in Excel
  • The function is dynamic in that it will work no matter what your lookup table’s column names are, or how many rows or columns it has
  • It returns results that are in parallel with Excel’s output
  • The function is pretty much a drag’n’drop for your project.  The only thing you need to remember is to define the lookup table in the first part of your query

So how cool is that?  You love VLOOKUP, and you can now use it in Power Query to perform VLOOKUPs from your Power Query sourced database queries against tables of Excel data without hitting the worksheet first!  (In fact, if your database has an appropriate table, you could VLOOKUP from one database table against another!)

How to Reference other Power Query queries

One of the things I really like to do with Power Query is shape data into optimized tables. In order to accomplish that goal, I’ve begun using Power Query to source data over Power Pivot’s built in methods. But in order to build things the way I want, I need an easy way to reference other Power Query queries.

Why would I go to the effort of feeding through Power Query first? I’m no SQL ninja, and I find Power Query allows me to easily re-shape data in ways that would be hard with my SQL knowledge. I can leverage this new tool to optimize my tables and build Power Pivot solutions that require less tricky and funky DAX measures to compensate for less than ideal data structure. (I’d rather have easy to understand relationships and simple DAX measures!)

Methodology

My methodology generally goes something like this:

  • Load a base table into Power Query. I then set it to only create a connection. Let’s call this my Base Connection.
  • Next I’ll create as many queries as I need to re-shape the data in the Base Connection into the forms I need, then load those into the data model.

It’s that second part that is key. I need to be able to reference other Power Query queries (namely my Base Connection) so that I can prune/trim/re-shape the data.

Reference other Power Query queries - The Old Way

Until recently, I would create my Base Connection, then I’d do the following to create the new query to reference that one.

  • Go to the Power Query tab
  • Show the Workbook Queries pane
  • Right click the Base Connection query and choose Reference

The problem was this… my intention was to reference and customize my query. Instead, Power Query immediately loads the new query into a worksheet, and I have to wait for that to finish before I can edit the new query and customize it the way I want.

Reference other Power Query queries - The New Way

I learned a new method last week from one of the Power Query team members which is much better (thanks Miguel!). I included it in my last post, but I thought this was worth calling out on its own.

Instead of following the method above, this time we will:

  • Go to the Power Query tab
  • Show the Workbook Queries pane
  • Right click the Base Connection query and Edit

Now we’re taken into the Power Query window. On the left side we can see a collapsed “Queries” pane. When you expand that, you get a list of all Power Queries in the workbook.

image

  • Right click the Base Connection query and choose “Reference”

We now have a new query in the editor that we can edit, without loading it into a worksheet first. Faster, and more in line with my goals.
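Under the hood, that referencing query is a single line of M (a sketch, assuming your original query is named Base Connection):

let
    Source = #"Base Connection"
in
    Source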

The other thing I like about this method is that it immediately gives me access to that Queries pane. Why is that important? Because I can drill through the other queries and get at their M code without having to close the window and go back to Excel first. So if I have some funky M code I need to re-use, it’s way easier to review and copy it.

Slicers For Value Fields

Earlier this week I received an email asking for help with a Power Pivot model.  The issue was that the individual had built a model, and wanted to add slicers for value fields.  In other words, they’d built the DAX required to generate their output, and wanted to use those values in their slicers.  Which you can’t do.  Except maybe you can…  :)

My approach to solve this issue is to use Power Query to load my tables.  This gives me the ability to re-shape my data and load it into the data model the way I need it.  I’m not saying this is the only way, by any means, but it’s an approach that I find works for me.  Here’s how I looked at it in Excel 2013.  (For Excel 2010 users, you have to run your queries through the worksheet and into Power Pivot as a linked table.)

Background

The scenario we’re looking at is a door manufacturer.  They have a few different models of doors, each of which uses different materials in their production.  The question that we want to solve is “how many unique materials are used in the construction of each door?”  And secondarily, we then want to be able to filter the list by the number of materials used.

The first question is a classic Power Pivot question.  And the setup is basically as follows:

image

  • Create a PivotTable with models on rows and material on columns
  • Create a DAX measure to return the distinct count of materials:
    • DistinctMaterials:=  DISTINCTCOUNT(MaterialsList[material])
  • Add a little conditional formatting to the PivotTable if you want it to look like this:

image

The secret to the formatting is to select the values and set up an icon set.  Modify it to ensure that it is set up as follows:

image

Great stuff, we’ve got a nice looking Pivot, and you can see that our grand total on the right side is showing the correct count of materials used in fabricating each door.

Creating Slicers For Value Fields

Now, click in the middle of your Pivot, and choose to insert a slicer.  We want to slice by the DistinctMaterials measure that we created… except… it’s not available.  Grr…

image

Okay, it’s not surprising, but it is frustrating.  I’ve wanted this ability a lot, but it’s just not there.  Let’s see if we can use Power Query to help us with this issue.

Creating Queries via the Editor

We already have a great query that has all of our data, so it would be great if we could just build a query off of that.  We obviously need the original still, as the model needs that data to feed our pivot, but can we base a query off a query?  Sure we can!

  • In the Workbook Queries pane, right click the existing “MaterialsList” query and choose Edit.
  • You’ll be taken into the Power Query editor, and on the right side you’ll see this little collapsed “Queries” window trying to hide from you:

image

  • When you expand that arrow, you’ll see your existing query there!
  • Right click your MaterialsList query and choose “Reference”.

You’ve now got a new query that is referring to your original.  Awesome.  This will let us preserve our existing table in the Power Pivot data model, but reshape this table into the format that we need.

Building the Query we need

Let’s modify this puppy and get it into the format that will serve us.  First thing, we need to make sure it’s got a decent name…

  • On the right side, rename it to MaterialsCount

Now we need to narrow this down to a list of unique material/model combinations, then count them:

  • Go to Add Column –> Add Custom Column
  • Leave the default name of “Custom” and use the following formula:  [model]&[material]
  • Sort the model column in ascending order
  • Sort the material column in ascending order

We’ve now got a nicely ordered list, but there are a few duplicates in it.

image

Those won’t help, so let’s get rid of them:

  • Select the “Custom” column
  • Go to Home –> Remove Duplicates

Now, let’s get that Distinct Count we’re looking for:

  • Select the “model” column
  • Go to Transform –> Group By
  • Set up the Group By window to count distinct rows as follows:

image

Very cool!  We’ve now got a nice count of the number of distinct materials that are used in the production of each door.
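For reference, the M behind those steps comes out something like this (a sketch – I’m using Power Query’s default step names, “MaterialCount” as the aggregated column name, and assuming the query starts by referencing MaterialsList):

let
    Source = MaterialsList,
    #"Added Custom" = Table.AddColumn(Source, "Custom", each [model]&[material]),
    #"Sorted Rows" = Table.Sort(#"Added Custom", {{"model", Order.Ascending}, {"material", Order.Ascending}}),
    #"Removed Duplicates" = Table.Distinct(#"Sorted Rows", {"Custom"}),
    #"Grouped Rows" = Table.Group(#"Removed Duplicates", {"model"}, {{"MaterialCount", each Table.RowCount(Table.Distinct(_)), type number}})
in
    #"Grouped Rows"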

The final step we need to do in Power Query is load this to the data model, so let’s do that now:

  • File –> Close & Load To…
  • Only create the connection and load it to the Data Model

Linking Things in Power Pivot

We now need to head into Power Pivot to link this new table into the Data Model structure.  Jump into the Manage window, and set up the relationships between the model fields of both tables:

image

And that’s really all we need to do here.  Let’s jump back out of Power Pivot.

Add Slicers for Value Fields

Let’s try this again now. Click in the middle of your Pivot and choose to insert a slicer.  We’ve certainly got more options than last time!  Choose both fields from the “MaterialsCount” table:

image

And look at that… we can now slice by the total number of materials in each product!

image

Transpose Stacked Tables

For the first post of the new year, I thought I’d tackle an interesting problem: how to Transpose Stacked Tables in Power Query.  What do I mean by Stacked Tables?  It’s when your data looks like this:

image

Notice that we’ve got 3 tables stacked on top of each other with gaps.  The question is, how do we deal with this?

There are actually a variety of ways we could accomplish this, but this time I want to show a neat trick that allows us to refer to data on the next row(s) in Power Query.  We may revisit this in future with some other techniques as well, but for now… I think you’ll find this interesting.

Sample File

If you’d like to play along, click here to download the sample file, with a mock-up of a fictional Visa statement.

Getting Started

The first thing we need to do is pull the data into Power Query, so let’s go to Power Query –> From Table, and set the range to pull in all the data from A1:A17:

image

We end up with the table in the Power Query window, and we’re now going to add an Index column to it; something you’re going to see can be very useful!  To do this, go to Add Column –> Add Index Column (you can start from 0 or 1, your preference.  I’m just going to go with 0):

image

Now, for simplicity, I am going to make an unnecessary change to start with.  In the “Applied Steps” section, I’m going to right click the “Added Index” line, choose Rename, and rename this step to “AddedIndex” with no space:

image

Transpose Stacked Tables - The Tricky Part

Go to Add Column –> Add Custom Column.  In the window that pops up:

  • Name the column “Location”
  • In the formula area, enter:  AddedIndex{[Index]+1}[Transactions]

And the result:

image

Wow… how cool is that?  We’ve referred to the value on the next row!  But how?  The secret is in the syntax.  It basically works like this:

Name of previous step{[Index] + 1}[Name of Column to Return]

Watch all those brackets carefully too.  The curly ones go around the index row you want to return (the index number of the current row plus 1), and the square brackets around the name of the column you want.
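In raw M, the step the dialog generates looks something like this (a sketch using the step and column names from this example):

#"Added Custom" = Table.AddColumn(AddedIndex, "Location", each AddedIndex{[Index]+1}[Transactions])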

Now, let’s do the next row.  Add a new column again:

  • Name the column “TransactionID”
  • In the formula area, enter:  #"Added Custom"{[Index]+2}[Transactions]

Okay, so what’s with that?  Why the # and quotes around the previous step this time?  The answer is that, in order to refer to a step name that contains a space, we need to wrap the step’s name in quotes and preface it with the # mark.  This tells Power Query to interpret everything between the quotes as a literal (or literally the same as what we wrote.)  As you can see, it works nicely:

image

Just to circle back on the unnecessary step I mentioned before, it was renaming the “Added Index” step.  Doing that saved me having to type #"Added Index".  Personally I can’t stand all the #"" in my code, so I tend to modify my steps to drop the spaces.  It looks cleaner to me when I’m reading the M code.

At any rate, let’s do the last piece we need too.  Add another column:

  • Name the column “Value”
  • In the formula area, enter:  #"Added Custom1"{[Index]+3}[Transactions]

image

Beautiful… I’ve got each row of data transposed into the table the way I need it, but I’ve still got a bunch of garbage rows…

Taking Out The Trash

As it stands, we really only need the rows that start with the dates.  Again, we have multiple options for this, but I see a pattern I can exploit.  I need to:

  • Keep 1 row
  • Remove 5 rows
  • Repeat

How do we do that easily?  Go to the Home Tab –> Remove Rows and choose Remove Alternate Rows!

image
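Behind the scenes, the dialog writes a single line of M, something like this (a sketch – I’m assuming the previous step was the third custom column, #"Added Custom2"; the parameters say keep the first row, then alternately skip 5 rows and keep 1):

#"Removed Alternating Rows" = Table.AlternateRows(#"Added Custom2", 1, 5, 1)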

And finally we can get rid of the Index column, set our Data Types, and we’re all set:

image

And there you have it.  Just one of a few ways to Transpose Stacked Tables using Power Query.

Treat Consecutive Delimiters As One

A couple of weeks back, I received a comment on my “Force Power Query to Import as a Text File” blog post.  Rudi mentioned that he’d really like to see a method that would let us Treat Consecutive Delimiters as One; something that is actually pretty easy to find in Excel’s old Import as Text file feature.

…the option for “Treat consecutive delimiters as one” should be made available too. The text import eventually imported but the info was all out of sync due to the tabbed nature of the data – names longer and shorter cause other data to not line up.

So let’s take a look at this issue today.  You can access and download the sample file I’ll be working with from this link.

The issue at hand

The contents of the text file, when viewed in Notepad, are fairly simple:

image

But the challenge shows up when we go to import it into Power Query.  Let’s do that now:

  • Power Query –> From File –> From Text
  • Browse to where you stored the file and double click it to Open it up

The first indication that we have an issue is that we’ve got a single column, the second is that it’s a bit scattered:

image

Well no problem, it’s a Tab delimited file, so we’ll just split it by Tabs, and all should be good, right?

  • Home –> Split Column –> By Delimiter
  • Choose Tab and click OK

image

Uh oh…  What happened here?

The issue is that we’ve got multiple delimiters acting between fields here.  They look like one, but they’re not.  Between the Date and Vendor headers, for example, there are two Tabs.  But between the Date values and the Vendor values, only one Tab is needed to line things up in the text document.  The result is this crazy mess.

Approaches to the Issue

I can see three potential routes to deal with this problem:

  1. We could replace all instances of 2 Tabs with a single Tab.  We’d need to do that a few times to make sure we don’t turn 4 Tabs into 2 and leave it at that, though.
  2. We could write a function to try and handle this.  I’m sure that this can be done, although I decided not to go that route this time.
  3. We could split the columns using the method that I’m going to outline below.

Before we start, let’s kill off the last two steps in the “Applied Steps” section, and get back to the raw import that we had at the very beginning.  (Delete all steps except the “Source” step in the Applied Steps window.)

How to Treat Consecutive Delimiters As One

The method to do this is a three step process:

  1. Split by the Left Most delimiter (only)
  2. Trim the new column
  3. Repeat until all columns are separated

So let’s do it:

  • Select Column1
  • Split Column –> By Delimiter –> Tab –> At the left-most delimiter

image

Good, but see that blank space before Vendor?  That’s one of those delimiters that we don’t want.  But check what happens when you trim it…

  • Select Column1.2
  • Transform –> Format –> Trim

And look at that… it’s gone!

image
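In M terms, each split-and-trim round is just two steps, roughly like this (a sketch – the splitter options and step names are Power Query’s defaults and may differ slightly in your version):

#"Split Column by Delimiter" = Table.SplitColumn(Source, "Column1", Splitter.SplitTextByEachDelimiter({"#(tab)"}, QuoteStyle.Csv, false), {"Column1.1", "Column1.2"}),
#"Trimmed Text" = Table.TransformColumns(#"Split Column by Delimiter", {{"Column1.2", Text.Trim}})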

Let’s try this again:

  • Select Column1.2
  • Split Column –> By Delimiter –> Tab –> At the left-most delimiter

image

And trim the resulting column again:

  • Select Column1.2.2
  • Transform –> Format –> Trim

And we’re good.  Now to just do a final bit of cleanup:

  • Transform –> Use First Row as Headers
  • Name the Query

And we’re finished:

image

Easy as Pie?

In this case, yes it was.  Easy, if a bit painful.  But what about spaces?  If this file was space delimited I couldn’t have done this, as my Vendors all have spaces in them too.  So then what?

I’d modify my procedure just a bit:

  1. Replace all instances of double spaces with the | character
  2. Split by the left-most | character
  3. Replace all | characters with spaces
  4. Trim the new column
  5. Repeat until finished
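As an M sketch, step 1 of that modified procedure might look like this (assuming the raw data is still in a column named Column1):

#"Replaced Value" = Table.ReplaceValue(Source, "  ", "|", Replacer.ReplaceText, {"Column1"})

From there the split-and-trim cycle works exactly as above, just splitting on | instead of Tab.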

Final Thought

I haven’t received too much in the way of data like this, as most of my systems dump data into properly delimited text or csv files, but I can certainly see where this is an issue for people.  So I totally agree that it would be nice if there were an easier way to treat consecutive delimiters as one in Power Query.

I do like the versatility of the choices we have currently, but adding this as another option that works in combination with the existing ones would be fantastic.

Power Query team: here’s my suggestion…

image

:)

Force Power Query to Import as a Text File

I’ve run into this issue in the past, and also got an email about this issue this past week as well, so I figured it’s worth taking a look.  Power Query takes certain liberties when importing a file, assuming it knows what type of file it is.  The problem is that sometimes this doesn’t work as expected, and you need to be able to force Power Query to import as a text file, not the file format that Power Query assumes you have.

IT standards are generally a beautiful thing, especially in programming, as you can rely on them, knowing that certain rules will always be followed.  CSV files are a prime example of this, and we should be able to assume that any CSV file will contain a list of Comma Separated Values, one record per line, followed by a new line character.  Awesome… until some bright spark decides to inject a line or two of information above the CSV contents which doesn’t contain any commas.  (If you do that, please stop.  It is NOT A GOOD IDEA.)

The Issue in the Real World

If you’d like to follow along, you can click here to download MalformedCSV.csv (the sample file).

If you open the sample file in Notepad, you’ll see that it contains the following rows:

image

Notice the first row… that’s where our issue is.  There are no commas.  Yet when you look at the data in the rows below, they are plainly separated by commas.  Well yay, so what, right?  Who cares about the guts of a CSV file?  The answer is “you” if you ever get one that is built like this…

Let’s try importing the sample file into Power Query:

  • Power Query –> From File –> From CSV
  • Browse to MalformedCSV.csv

And the result is as follows:

image

One header, and lots of errors.  Not so good!

The Source Of The Error

If I click the white space beside one of those errors, I get this:

image

What the heck does that even mean?

Remember that CSV is a standard.  Every comma indicates a column break, every carriage return a new line.  And we also know that every CSV file has a consistent number of columns (and therefore commas) on every row.  (That’s why you’ll see records in some CSVs that read ,, – because there isn’t anything for that record, but we still need the same number of commas to denote the columns.)

And now some joker builds us a file masquerading as a CSV that really isn’t.  In this case:

  • Our first row has no commas before the line feed.  We therefore must have a one column table.  Power Query sets us up for a one column table.
  • But our second row has three commas, which means three columns… That’s not the same number of columns as our header row, so Power Query freaks out and throws an error for every subsequent record.

So Now What?

If we can’t rely on the file format being correct, why don’t we just import it as a text file?  That would allow us to bring all the rows in, remove the first one, then split by commas.  That sounds like a good plan.  Let’s do it.

  • Power Query –> From File –> From Text
  • Browse to the folder that holds MalformedCSV.csv

Uh oh… our file is not showing.  Ah… we’re filtered to only show *.txt and *.prn files…

  • Click the file filter list in the bottom right and change “Text File (*.txt;*.prn)” to “All Files (*.*)”
  • Open MalformedCSV.csv

And the result…

image

Damn.  See, Power Query is too smart.  It looks at the file and says “Hey!  That’s not a text file, it’s a CSV file!” and then imports it as a CSV file... which we already know has issues.  Grr…

Force Power Query to Import as a Text File

Let’s try this again, this time from scratch.  We need two things here…

  1. The full file path to the file you want to import (including the file extension).  On my system it is "D:\Test\MalformedCSV.csv"
  2. A little bit of a code template, which is conveniently included below.

What we’re going to do is this:

  • Go to Power Query –> From Other Sources –> Blank Query
  • View –> Advanced Editor
  • Paste in the following code

let
    /* Get the raw line-by-line contents of the file, preventing PQ from interpreting it */
    fnRawFileContents = (fullpath as text) as table =>
    let
        Value = Table.FromList(Lines.FromBinary(File.Contents(fullpath)), Splitter.SplitByNothing())
    in
        Value,

    /* Use the function to load the file contents */
    Source = fnRawFileContents("D:\Test\MalformedCSV.csv")
in
    Source

  • Update the file path in the “Source” step to the path to your file.
  • Click Done

And the output is remarkably different… in fact, it’s the entire contents of the file!

image

This is awesome, as we now have the ability to clean up our data and use it as we were hoping to do from the beginning!  So let’s do just that…starting with killing off that first line that’s been screwing us up:

  • Home –> Remove Rows –> Remove Top Rows –> 1
  • Transform –> Split Column By Delimiter –> Comma –> At each occurrence of the delimiter
  • Transform –> Use First Row As Headers
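Those three steps translate to M along these lines (a sketch – I’ve guessed at three output columns for the split; adjust the names to match the number of columns in your file):

#"Removed Top Rows" = Table.Skip(Source, 1),
#"Split Column by Delimiter" = Table.SplitColumn(#"Removed Top Rows", "Column1", Splitter.SplitTextByDelimiter(",", QuoteStyle.Csv), {"Column1.1", "Column1.2", "Column1.3"}),
#"Promoted Headers" = Table.PromoteHeaders(#"Split Column by Delimiter")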

And now we’ve got a pretty nice table of data that actually looks like our CSV file:

image

The final steps to wrap this up would essentially be:

  • set our data types,
  • name the query, and
  • load the query to the final destination.

Final Thoughts

This seems too hard.  I know that Power Query is designed for ease of use, and to that end it’s great that it can and does make assumptions about your file.  Most of the time it gets it exactly right and there is no issue.  But when things do go wrong it's really hard to get back to a raw import format so that we can take control.

I really think there should be an easy and discoverable way to do a raw import of data without making import/formatting assumptions.  A button for “Raw Text Import” would be a useful addition for those scenarios where stuff goes wrong and needs a careful hand.

I should also mention that this function will work on txt files or prn files as well.  In fact, it also works on Excel files to some degree, although the results aren’t exactly useful!

image

The key here is that, whether you’re dealing with one or more extra header lines in a csv file, or with a tab delimited, colon delimited or any other kind of delimited file, the small function template above will help you get past that dreaded message that reads “DataFormat.Error:  There were more columns in the result than expected.”

Addendum To This Post

Miguel Escobar has recorded a nice video of the way to do this without using any code.  You can find that here:


Building a Parameter Table for Power Query

One of the things that I’ve complained about in the past is that there is no built in way for Power Query to be able to pull up the path for the current workbook.  Today I’m going to show you how I solved that problem by building a parameter table for Power Query in Excel, then link it into my queries.

To do this successfully, we need two pieces: an Excel table, and a Power Query function.  So let’s get to it.

Building a Parameter Table

The table is quite simple, really.  It’s a proper Excel Table, and it has a header row with the following two columns:

  • Parameter
  • Value

Once created, I also make sure that I go to the Table Tools tab, and rename the table to “Parameters”.

image

Pretty bare bones, but as you’ll see, this becomes very useful.

Now we add something to the table that we might need.  Since I’ve mentioned it, let’s work out the file path:

  • A8:     File Path
  • B8:     =LEFT(CELL("filename",B6),FIND("[",CELL("filename",B6),1)-1)

Now, as you can see, column A essentially gives us a “friendly name” for our parameter, while the actual value ends up in the second column:

image

While we’re here, let’s add another parameter that we might have use for.  Maybe I want to base my Power Query reports off the data for the current day.  Let’s inject that as well:

  • A9:     Start Date
  • B9:     =TODAY()

image

Good stuff.  We’ve now got a table with a couple of useful parameters that we might want when we’re building a query.

Adding the Parameter Function

Next, we’re going to add the function that we can reference later.  To do that:

  • Go to Power Query –> From Other Sources –> Blank Query
  • Go to View –> Advanced Editor
  • Replace all the code with the following:

(ParameterName as text) =>
let
    ParamSource = Excel.CurrentWorkbook(){[Name="Parameters"]}[Content],
    ParamRow = Table.SelectRows(ParamSource, each ([Parameter] = ParameterName)),
    Value =
        if Table.IsEmpty(ParamRow) = true
        then null
        else Record.Field(ParamRow{0}, "Value")
in
    Value

  • Click Done
  • Rename the function to “fnGetParameter”
  • Go to Home –> Close & Load To…
  • Choose to create the connection only, avoiding loading to a table or the Data Model

And just in case you think this means it didn’t work: a connection-only function is supposed to show that nothing loaded in the queries window:

image

Making Use of the Parameter Table

Okay, so now that we have this in place, how do we go about using it?

Let’s create a ridiculously simple table:

image

Now, click in the table and go to Power Query –> From Table.

We’ll be taken into the Power Query window and will be looking at our very simple data.  Let’s pull the contents of my parameters into columns:

  • Go to Add Column –> Add Custom Column
  • Change the title to “File Path”
  • Enter the following formula: =fnGetParameter("File Path")

Check it out!

image

Do you see that?  This is the path to the folder that holds the workbook on my system.  The formula we used in the table retrieves that, and I can pass it in to Power Query, then reference it as needed!

How about we do the same for our date?

  • Go to Add Column –> Add Custom Column
  • Change the title to “First date”
  • Enter the following formula: =fnGetParameter("Start Date")

image

The key here is to make sure that the value passed to the parameter function is spelled (and cased) the same as the entry in the first column of the parameter table.  That is, you could use “FilePath”, “File Path”, “Folder”, “My File Path” or whatever, so long as that’s the name you gave it in the first column of the Parameters Excel table.

And what happens if you pass an invalid value?  Say you ask for fnGetParameter("Moldy Cheese") and it’s not in the table?  Simple, you’ll get null returned instead.  :)

Implications of Building a Parameter Table for Power Query

The implications for this technique are huge.  Consider this scenario… you create your workbook, and store it in a folder.  But within that folder you have a subfolder called “Data”.  Your intention is to store all of your csv files in that folder.  And, for argument’s sake, let’s say that it’s a mapped drive, with the path to your solution being “H:\My Solution\”

No problem, you build it all up, and it’s working great.  You keep dropping your text files in the data folder, and you can consolidate them with some M code like this:

let
    Source = Folder.Files("H:\My Solution\Data"),
    #"Combined Binaries" = Binary.Combine(Source[Content]),
    #"Imported CSV" = Csv.Document(#"Combined Binaries", null, ",", null, 1252),
    #"First Row as Header" = Table.PromoteHeaders(#"Imported CSV")
in
    #"First Row as Header"

Things run along for ages, and that’s awesome, but then you need to go on vacation.  No worries, it’s Power Query and easy to use, you can just get your co-worker to update it… except… on your co-worker’s machine that same drive is mapped not to the H:\ drive, but the F:\ drive.  Doh!

We could recode the path, but what a pain.  So how about we use the parameter table to make this more robust so that we don’t have to?  All we need to do is modify the first couple of lines of the query.  We’ll pull in a variable to retrieve the file path from our parameter table, then stuff that into the file path, like this:

let
    SolutionPath = fnGetParameter("File Path"),
    Source = Folder.Files(SolutionPath & "Data"),
    #"Combined Binaries" = Binary.Combine(Source[Content]),
    #"Imported CSV" = Csv.Document(#"Combined Binaries", null, ",", null, 1252),
    #"First Row as Header" = Table.PromoteHeaders(#"Imported CSV")
in
    #"First Row as Header"

How awesome is that?  Even better, the SolutionPath shows as a step in the Applied Steps section.  That means you can select it and make sure the value is showing up as you’d expect!

Practical Use Case

Several months back I built a solution for a client where we stored his Power Query solution in a folder, and we had data folders that were created on a bi-weekly basis.  Each of those folders was named based on the pay period end date (in ISO’s yyyy-mm-dd format), and was stored in a path relative to the master solution.

Naturally, I needed a way to make the queries dynamic to the master folder path, as I did the development in my home office, then emailed the updated solution files to him in his New York office.  He had different drive mappings than I do, and his team had different drive mappings than he did.  With half a dozen Power Queries in the solution, manually updating the code each time a new user wanted to work with the file just wasn’t an option.

This technique became invaluable for making the solution portable.  In addition, by having a formula to generate the correct end date, we could also pull the data files from the right place as well.

I still want a Power Query CurrentWorkbook.FilePath method in M, but even if I get it, this technique is still super useful, as there will always be some dynamic parameter I need to send up to Power Query.

I hope you find this as useful as I have.

Combine Multiple Worksheets Using Power Query

In last week’s post we looked at how to combine multiple files together using Power Query.  This week we’re going to stay within the same workbook, and combine multiple worksheets using Power Query.

Background Scenario

Let’s consider a case where the user has been creating a transactional history in an Excel file.  It is all structured as per the image below, but resides across multiple worksheets; one for each month:

image

As you can see, they’ve carefully named each sheet with the month and year.  But unfortunately, they haven’t formatted any of the data using Excel tables.

Now the file lands in our hands (you can download a copy here if you’d like to follow along), and we’d like to turn this into one consolidated table so that we can do some analysis on it.

Accessing Worksheets in Power Query

Naturally we’re going to reach to Power Query to do this, but how do we get started?  We could just go and format the data on each worksheet as a table, but what if there were hundreds?  That would take way too much work!

But so far we’ve only seen how to pull Tables, Named Ranges or files into Power Query.  How do we get at the worksheets?

Basically, we’re going to start with two lines of code:

  • Go to Power Query –> From Other Sources –> Blank Query
  • View –> Advanced Editor

You’ll now see the following blank query:

let
    Source = ""
in
    Source

What we need to do is replace the second line (Source = "") with the following two lines of code:

FullFilePath = "D:\Temp\Combine Worksheets.xlsx",
Source = Excel.Workbook(File.Contents(FullFilePath))

Of course, you’ll want to update the path to the full file path for where the file is saved on your system.
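Once edited, the full query should read like this (with your own path substituted, of course):

let
    FullFilePath = "D:\Temp\Combine Worksheets.xlsx",
    Source = Excel.Workbook(File.Contents(FullFilePath))
in
    Source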

Once you click Done, you should see the following:

image

Cool!  We’ve got a list of all the worksheets in the file!

Consolidating the Worksheets

The next step is to prep the fields we want to preserve as we combine the worksheets.  Obviously the Name and Item columns are redundant, so let’s do a bit of cleanup here.

  • Remove the Kind column
  • Select the Name column –> Transform –> Data Type –> Date
  • Select the Name column –> Transform –> Date –> Month –> End of Month
  • Rename the Name column to “Date”

At this point, the query should look like so:

image

Next we’ll click the little double headed arrow to the top right of the data column to expand our records, and commit to expanding all the columns offered:

image

Hmm… well that’s a bit irritating.  It looks like we’re going to need to promote the top row to headers, but that means we’re going to overwrite the Date column header in column 1.  Oh well, nothing to be done about it now, so:

  • Transform –> Use First Row As Headers –> Use First Row As Headers
  • Rename Column1 (the header won’t accept 1/31/2008 as a column name) to “Date” again
  • Rename the Jan 2008 column (far right) to “Original Worksheet”

Final Cleanup

We’re almost done, but let’s just do a bit of final cleanup here.  As we set the data types correctly, let’s also make sure that we remove any errors that might come up from invalid data types.

  • Select the Date column
  • Home –> Remove Errors
  • Set Account and Dept to Text
  • Set Amount to Decimal Number
  • Select the Amount column
  • Home –> Remove Errors
  • Set Original Worksheet to Text

Rename the query to “Consolidated”, and load it to a worksheet.

Something Odd

Before you do anything else, Save the File.

To be fair, our query has enough safeguards in it that we don’t actually have to do this, but I always like to play it safe.  Let’s review the completed query…

Edit the Consolidated query, and step into the Source line step.  Check out that preview pane:

image

Interesting… two more objects!  This makes sense, as we created a new table and worksheet when we retrieved this into a worksheet.  We need to filter those out.

Getting rid of the table is easy:

  • Select the drop down arrow on the Kind column
  • Uncheck “Table”, then confirm when asked if you’d like to insert a step

Select the next couple of steps as well, and take a look at the output as you do.

Aha!  When you hit the “ChangedType” step, something useful happens… we generate an error:

image

Let’s remove that error from the Name column.

  • Select the Name column –> Home –> Remove Errors

And we’re done.  We’ve managed to successfully combine all the data worksheets in our file into one big table!

Some Thoughts

This method creates a bit of a loop in that I’m essentially having to reach outside Excel to open a copy of the workbook to pull the sheet listing in.  And it causes issues for us, since Power Query only reads from the last save point of the external file we’re connecting to (in this case this very workbook.)  I’d way rather have an Excel.CurrentWorkbook() style method to read from inside the file, but unfortunately that method won’t let you read your worksheets.

It would also be super handy to have an Excel.CurrentWorkbookPath() method.  Hard coding the path here is a real challenge if you move the file.  I’ve asked Microsoft for this, but if you think it is a good idea as well, please leave a comment on the post.  (They’ll only count one vote from me, but they’ll count yours if you leave it here!)

Merge Multiple Files With Properties

This post illustrates a cool technique that I learned at the MVP summit last week, allowing us to use Power Query to merge multiple files with properties from the file in the output.  A specific example of where this is useful is where you have several files with transactional data, saved with the month as the file name, but no date records in the file itself.  What we’d want to do is merge all the contents, but also inject the filename into the records as well.

The funny thing about this technique is that it’s eluded me for a long time, mainly because I way overthought the methods needed to pull it off.  Once you know how, it’s actually ridiculously simple, and gives us some huge flexibility to do other things.  Let’s take a look at how it works.

If you’d like to download the files I’m using for this example, you can get them here.  You'll find that there are 3 files in total: Jan 2008.csv, Feb 2008.csv and Mar 2008.csv.

Step 1: Create your query for one file

The first thing we need to do is connect to the Jan 2008.csv file and pull in its contents.  So let’s do that:

  • Power Query –> From File –> From CSV
  • Browse to the Jan 2008.csv file and import it
  • Rename the “Sum of Amount” column to “Amount”

Perfect, we now have a basic query:

image

Notice here how the January file has no dates in it?  That’s not good, so that’s what we’re trying to fix.

Step 2: Turn the query into a function

At this point we need to do a little code modification.  Let’s go into the Advanced editor:

  • View –> Advanced Editor

We need to do two things to the M code here:

  • Insert a line at the top that reads:  (filepath) =>
  • Replace the file path in the Source step (including the quotes) with:  filepath

At that point our M will read as follows:

image
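The result should look something like this (a sketch – your generated steps will reflect however Power Query originally imported and renamed things):

(filepath) =>
let
    Source = Csv.Document(File.Contents(filepath), null, ",", null, 1252),
    #"First Row as Header" = Table.PromoteHeaders(Source),
    #"Renamed Columns" = Table.RenameColumns(#"First Row as Header", {{"Sum of Amount", "Amount"}})
in
    #"Renamed Columns"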

We can now:

  • Click Done
  • Rename our Query to something like fnGetFileContents
  • Save and close the query

Power Query doesn’t do a lot for us yet, just giving us this in the Workbook Queries pane:

image

Step 3: List all files in the folder

Now we’re ready to make something happen.  Let’s create a new query…

  • Power Query –> From File –> From Folder
  • Browse to the folder that holds the CSV files
  • Remove all columns except the Name and Folder Path

Wait, what?  I removed the column with the binary content?  The column that holds the details of the files?  You bet I did!  You should now have a nice, concise list of files like this:

image

Next, we need to add a column to pull in our content, via our function.  So go to:

  • Add Column –> Add Custom Column
  • Enter the following formula:  =fnGetFileContents([Folder Path]&[Name])

Remember it is case sensitive, but when you get it right, some magic happens.  We get a bunch of “Table” objects in our new column… and those Table objects hold the contents of the files!

I know you’re eager to expand them, but let’s finish prepping the rest of the data first.

Step 4: Prep the rest of the data you want on the table rows

Ultimately, what I want to do here is convert the file name into the last day of the month.  In addition, I don’t need the Folder Path any more. So let’s take care of business here:

  • Remove the “Folder Path” column
  • Select the “Name” column –> Transform –> Replace Values –> “.csv” with nothing
  • Select the “Name” column –> Transform –> Data Type –> Date
  • Select the “Name” column –> Transform –> Date –> Month –> End of Month

And we’ve now got a pretty table with our dates all ready to go:

image
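The generated M for those steps looks roughly like this (a sketch with Power Query’s default step names, picking up from the step that added the Custom column):

#"Removed Columns1" = Table.RemoveColumns(#"Added Custom", {"Folder Path"}),
#"Replaced Value" = Table.ReplaceValue(#"Removed Columns1", ".csv", "", Replacer.ReplaceText, {"Name"}),
#"Changed Type" = Table.TransformColumnTypes(#"Replaced Value", {{"Name", type date}}),
#"Calculated End of Month" = Table.TransformColumns(#"Changed Type", {{"Name", Date.EndOfMonth, type date}})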

Step 5: Expand the table

The cool thing here is that, when we expand the table, each row of the table will inherit the appropriate value in the first column.  (So all rows of the table in row 1 will inherit 2/29/2008 as their date.)

  • Click the little icon to the top right of the Custom column
  • Click OK (leaving the default of expanding all columns)
  • Rename each of the resulting columns to remove the Custom. prefix

And that’s it!  You can save it and close it, and it’s good to go.

A little bit of thanks

I want to throw a shout out to Miguel Llopis and Faisal Mohamood from the Power Query team for demonstrating this technique for the MVPs at the summit.  I’ve been chasing this for months, and for some reason tried to make it way more complicated than it needs to be.

What’s the next step?

The next logical step is to extend this to working with Excel files, consolidating multiple Excel files together; something we can’t do through the UI right now.  Watch this space, as that technique is coming soon!

Name Columns During a Merge

The timing for the release of the October Power Query update couldn’t really have been much better for me, especially since it’s got a cool new feature in that you can now name columns during a merge.  Why is this timing so cool?

The reason is that I’m at the MVP Summit in Redmond this week, an event where the MVPs and Microsoft product teams get together to discuss the things that are/aren’t working in the products, and discuss potential new features.  The tie in here is that I already blogged about the new method for Merging Columns With Power Query that came into effect with the August update and, in that blog post, I said:

I do wish this interface had a way to name the column in advance (like exists when you create a custom column.)  Hopefully the PQ team will retrofit us with that ability at some point in the future.

The October Power Query update (released Oct 27, 2014), includes the ability to name columns during a merge, rather than making you do this in two steps.  How cool is that?

image

While I’m under no illusions that this change was made based on my feedback alone, I do know that the Power Query team reads my blog, so it played a part.  This is one of the big reasons I go to the summit every year, to share ideas that make the product more intuitive/usable, so it’s cool to see one of those changes make it into the product a few days before I arrive.

By the time this post is published on Wednesday morning, we’ll already be into our final day of the 2014 Summit.  I’m going in pretty jazzed though, as I know that the product teams are listening and reacting to our feedback.  :)