How do You Design Your Data Model?

On this blog, I showcase a lot of different techniques for manipulating and reshaping data.  For anyone who follows the blog, you already know this, and you know it's a pretty important topic to me.  But the thing we shouldn't lose sight of is WHY we do this: it's to drive analytics. I'm fairly convinced that the majority of the loyal readers here already know this. Thus, I wanted to ask your opinion on something...

Raw data to data model, passing through the human brain

How do you design your data model?

What I'm specifically interested in is how you approach designing the Fact and Dimension tables you use for your Power Pivot model.  And I'm not specifically talking about Power Query here. We all know you should be using what you learned from our recently relaunched Power Query Academy to do the technical parts.  😉

What I'm more interested in is the thought process you go through before you get to the technical bit of doing the data reshaping.

If you read books on setting up a data model, you'll probably be told that you need to do the following four steps:

  1. Identify the business process
  2. Determine the grain of the model
  3. Design your Dimension tables
  4. Design the Fact tables

So if you're asked "how do you design your data model", do these steps resonate with you, and why?

Do you consciously sit down, and work through each of these steps in order?  I suspect that many self-service BI analysts skip the first step entirely as they are implicitly familiar with their business process.  (As a consultant, I ask a lot of questions in this area to try and understand this before building anything.)

Do you design the reports on paper, then work backwards to the data you'll need, go find it and reshape it?  Or do you go the other way, trying to collect and reshape the data, then build reports once you think you have what you need?

Do you explicitly define the model grain?  And if you do, what does that mean to you?  Is it restricted to "I want transactions at a monthly/daily/hourly basis"? Or do you go deeper, like "I want transactions at a daily basis and want to break them down by customer, region and product"?

Why the question?

There are actually two reasons why I'm asking this question:

Reason 1 is that I think healthy discussion makes all of us better.  I'd like to hear your thoughts on this as I'm probably going to learn something that I haven't discovered in my own learning journey.

Reason 2 is that my whole business is around teaching people how to do these things, and I'm always looking to make things clearer.  The more opinions I hear (even if they contrast with each other), the more I can help people understand this topic.

So sound off, please!  We'd all love to hear how you approach the task of building a data model.

Column Quality and Distribution Features are Finally Here

If you've worked with Power Query in Power BI, you've seen the Column Quality and Distribution features already.  But I'm super excited to say that I've finally received them in the Insider build of Excel!

What are the Column Quality and Distribution Features??

If you're not familiar with these, let's break them down a bit:

Data Quality Indicators in Column Headers

In its most simplistic form, you'll see that you now get an indicator in the header of each column as shown here:

Column quality in Power Query Headers

Notice that red bar at the top of column C?  That's happening because there is an error in the column!  This is SUPER awesome, as we've been teaching for years that you have to scroll down looking for errors each time you change from text to a numeric data type.  This should really help, as the column header will make it obvious that something went wrong.  All green?  You should be good to go.  See red?  You've got potential problems to deal with.  (To be fair, you may still get errors after loading, as it is working with a data preview, but still, this will be a BIG help.)

The nice thing about this is that it is always on.  Nothing needs to be set up to enable this by default.  (And so far as I know, you can't turn it off - but then why would you want to?)

The Column Quality Feature

This one is enabled by going to the View tab and choosing "Column Quality", resulting in a view that looks like this:

Power Query Column Preview Settings

Which adds 3 new lines to the top of our columns:

Column Quality in Power Query

Now, I don't plan on leaving this on all the time. However, the cool thing here is that I can very quickly identify that 20% of my data rows in column C are errors.  And while I don't have any empty rows, it would show me that at a glance if I did.

The Column Distribution Feature

The idea behind the Column Distribution feature is to give you a quick glance at the distribution of values in your columns.  Is there a single value that shows up more than others, or is the data uniformly distributed?  This can help you quickly identify if the data has any consistency to it or not:

Column Distribution in Power Query

Notice that in column a, we have 3 distinct values (b, d, e), but only one value that is truly unique (b occurs only once in the entire data set).  Meanwhile, in column b, each value occurs in the data set exactly once.  On a data set this small, this isn't going to be super insightful. But say you are pulling in data that spans your monthly summaries for the last year. If suddenly you see 13 values where you are expecting 12... that's a good indicator you might want to look a little deeper.

Something else that is worth noting is that you don't have to keep the Column Quality on to get the Column Distribution chart.  These features can be used together or individually.
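By the way, if you ever want these kinds of statistics inside the query itself rather than just in the UI, M has a Table.Profile function that returns similar numbers (count, null count, distinct count, min, max and so on).  Here's a minimal sketch using a made-up stand-in table:

let
    // A tiny stand-in table, similar to the one pictured above
    Source = #table(type table [a = text, b = number],
        {{"b", 1}, {"d", 2}, {"d", 3}, {"e", 4}, {"e", 5}}),
    // Returns one row per column: Min, Max, Average, Count, NullCount, DistinctCount...
    Profile = Table.Profile(Source)
in
    Profile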

The less obvious Column Profile Feature

The final checkbox in the Data Preview menu is the Column Profile feature.  This one is very cool as it only activates when you select the entire column.  And when you do so, it takes over the preview window providing most of the information from the previous two settings.

Column Profile in Power Query

We've got even more stats (although I wish it also showed the % of values like in Column Quality), as well as the value distribution chart.

What I love about this is that it can be called up on-demand by selecting the column, but doesn't get in my way when I don't need it.

My Thoughts on this Feature

I've been waiting for this to hit Excel for over a year now, and am SUPER happy to see it finally show up.  Personally, I'll be running with Column Profile (the last option) selected, but not Column Quality or Column Distribution. The reason is pretty simple... I want to minimize the amount of info between me and my data.  The Column Profile feature will give me what I need in an on-demand fashion when I see the data colours change at the top of my column. 🙂

How do I Get These Features?

This feature is so hot and fresh that it showed up for Office Insiders this morning (v1910 build 12112.x), so you need to get to that version as a minimum to get these features.  Hopefully it will start rolling through the regular update channels next month!

 

Power Query Challenge 7 – Phone Words

Yesterday, Nick (one of our forum users) posted a Phone Words challenge in our forum, which looks like a great candidate for Power Query Challenge #7.  I haven't had time to tackle it myself, but if you're up for a Friday challenge, why not give it a go?

1-800-Get-Data converts to 1-800-438-3282

Here's the Challenge (word for word):

Take any Phone # and convert it to all possible Letters aka Phone Words.
Phone # can be input any format (strip out all other characters)

Example:

536-7857 = Ken Puls, etc...

Here's the thread where you can post your solution.

Thanks Nick!

PS, if anyone else has challenges they'd like to post, I'm more than happy to set them up in this series.  Obviously it's been a bit quiet here lately, for reasons that we hope to announce soon!

Do Data Types Matter in Power Query?

One of the things that I find a bit misunderstood in Power Query is whether or not Data Types matter. I mean, I’m sure everyone agrees to some point that they do, but do you know just how much they matter, and why?

Over the weekend, I received an email from one of our loyal students which read, in part:

I am going under the assumption that in Power BI nothing is free. Steps, calculated columns, measures and so on could be cheap but they are never free. It is with this premise in mind that I pose the following theory.

Users should sparingly use Change Type in Power Query. It is fine to Change Type when converting Date/Time to Date, Decimal Number to Whole Number, and others that actually change the value. It is a waste of resources to Change Type from Decimal or Whole Number to Currency. Even if you want the column to look like Currency, you can apply that format in the Data Model view and save one Power Query step.

On the face of it, this theory seems somewhat reasonable. After all, adding additional steps is bound to add some overhead to the Power Query process in most cases. And let’s be honest, in small models, it may make no difference to you at all. But when things get bigger…

Data Types vs Formats

To understand why Data Types matter, we need to get something very clear right off the bat: Data Types and Formats are not the same thing. Data Types dictate what kind of data you have and determine how much memory is allocated to store a value. Formatting, on the other hand, tells you how you want the values to appear. To see this in practice, have a look at the following values in Power Query, where the Data Type has been set to Currency:

Column set to Currency Data Type

Notice that the only place you see a $ sign is in the header. And see how the decimal numbers do not line up? I can tell you from many years of teaching accountants that this drives them bonkers. That 1.7 needs to be 1.70! But you don’t do this here; you do that in the Excel worksheet, Power Pivot model or Power BI visual.

The key to remember here:

  • In Power Query, you define Data TYPES
  • In Power Pivot, Excel or Power BI, you define Data FORMATTING
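
In M terms, the Power Query half of that split is just a type transformation step. As a quick sketch (the Source step and [Sales] column here are purely illustrative):

// Power Query sets the data TYPE, which controls how the value is stored:
= Table.TransformColumnTypes(Source, {{"Sales", Currency.Type}})

// The $ signs, separators and displayed decimals are FORMATTING,
// and get applied later in Excel, Power Pivot or Power BI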

Excel’s Data Types

In the classic days of Excel, we only had four data types that we had to worry about. Those were:

  • Text,
  • Numbers,
  • Blanks, and
  • Errors

(Errors, although represented by textual patterns like #REF!, #N/A, #NAME?, and the like, actually count as a different data type.) Today it gets a bit more complicated with the Stock and Geography data types, as I blogged about here, but classically, anything else was just a number that was formatted with a number formatting string. Examples of these include:

  • Date: format it as a number, and you’ll see the number of days since Jan 1, 1900
  • Time: a little trickier, but this is just a fraction of a day where .25 is 6:00 AM, .5 is 12:00 PM and .75 is 6:00 PM. And if all you have is the time, and you format it as a DateTime, you get that time on Jan 1, 1900.
  • True/False (Boolean): while this shows in the cell as TRUE/FALSE, which looks like text, if you multiply them by 1 you’ll see that TRUE equates to 1, where FALSE equates to 0.

But that was Excel, where data types and number formats were the same thing. That’s not Power Query.

Power Query’s Data Types

Unlike Excel, which has a pretty short list of data types, Power Query seemingly has a ton:

  • Numeric Data Types:
    • Decimal Number
    • Currency (Fixed Decimal)
    • Whole Number
    • Percentage
  • DateTime Data Types:
    • Date/Time
    • Date
    • Time
    • Date/Time/Zone
    • Duration
  • True/False (Boolean)
  • Text
  • Others (Binary, Tables, Records, Lists, and more)
  • Any (the dangerous “undefined” type which allows the application to determine the correct data type)

The key to recognize is that each of these data types is DISTINCT, meaning that each is different in some way from every other. (While we won’t get into it in this post, unlike Excel, which implicitly converts data from one type to another, Power Query requires explicit type conversion via functions like Number.From() or Date.ToText(), which can be frustrating at times!)
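
To make that concrete, here’s a hypothetical example of the explicit conversions M forces on you, where Excel would have just coerced the values silently:

// This errors in M - there is no implicit conversion between text and numbers:
// = "Invoice " & 1234

// These work, because the conversions are explicit:
= "Invoice " & Text.From(1234)
= Number.FromText("1234") + 1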

For the purpose of this post, however, I want to focus on the first three numeric types: Decimal Number, Currency and Whole Number, and ask the question: Does the Data Type matter?

Illustration Background

Let’s have a look at an example. For the illustration, I set up a Power Query chain that looks like this:

Viewing our Query Chain

The Data table had 100,000 randomly generated [Sales] records that vary between 1.27317262341058 and 100000.017761279, and randomly generated [Customers] that vary between 1 and 1000.

The only change I made in the Whole, Currency and Decimal tables was to set the Data Type for the [Sales] column accordingly. In the Customers table, I removed the [Sales] column and then removed duplicates from the [Customers] column.

I then created 3 quick measures to sum the column up, and dropped them on a Pivot:

Summing up our 3 measures

I don’t think it would surprise anyone here that the [Whole Sum] is slightly different than the [Currency Sum] and [Decimal Sum]. After all, the numbers were rounded at the source before being added up. And so far, we can see that the [Currency Sum] and [Decimal Sum] look pretty much the same. At least until we expand them a bit:

Decimal differences between Currency and Decimal Data Types

The only thing that should be a surprise here is that currency only holds up to four decimals, not two as most people expect. This actually makes sense when you start thinking about foreign exchange transactions, and how they are always carried to four decimal places.

But is that it? Is a Data Type just about rounding? As it turns out the answer to that is both yes and no.

Testing the Model Memory

The next step here was to test the model memory and see how Power Pivot is storing the data. To that end, here’s a little table that shows exactly that:

Table illustrating how Power Pivot is storing the data

Before we get into this, I want to call out something important here. The Data Type that is showing has been read from Power Pivot. Notice that Whole Number shows as such, consistent with Power Query. But the Currency and Decimal tables both show Decimal. As it turns out, Power Pivot doesn’t make a distinction between these two data types. However, the distinction between these two Data Types matters to you anyway, as I’ll explain.

So, what does that all mean?

In the Whole Number table, I had rounded off all the decimals. This left 63,815 unique values. And because Power Pivot compresses based on unique values, it deals with this column very well, resulting in a total of 260.54 KB to store these values.

In the Currency table, I effectively rounded all the values off to four decimal places. This left 99,996 unique values in the table (only 4 values were repeats). Despite this, Power Pivot did a good job of compressing the values in memory, resulting in 390.75 KB to store them.

Then we get to the Decimal column. There are only 4 more unique values than in the Currency column, but the memory takes a colossal 5,234.47 KB to store the values, vs the 390.75 KB of Currency. What the heck is going on?

The answer lies in the fact that Power Pivot has to carry all of those decimal places, and once it does, it flips to storing the data using Hash encoded memory. That’s a deep topic for another time, but suffice it to say that this is a bad thing, as Value encoding is much more efficient. (Hash is generally used for Text, and Value for… you guessed it… values!)

Interestingly, if you round the Decimal Number to 5 decimals, you end up with 99,999 unique values in the column and a very minor change to the memory used. But if you round it to 4 decimals, the Decimals column compresses the same as Currency, and the memory needed drops to the same 390.75 KB.

Table illustrating memory requirements of the different Data Types
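
Incidentally, if you’d rather force that rounding in the query itself instead of relying on the Currency type to truncate for you, a step along these lines does the trick. This is just a sketch, assuming a prior step called Source with a [Sales] column:

= Table.TransformColumns(Source, {{"Sales", each Number.Round(_, 4), Currency.Type}})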

Why Model Memory Matters

And this is the final part of the equation to me. Not all time is created equal. My users will accept a 2-minute refresh of the data model. They might say it takes time, but +/-30 seconds in a refresh isn’t anything that they’ll get overly concerned with. They’ll click Refresh, go grab a coffee, then come back to work with the file.

But if they click a slicer and it takes 10 seconds to redraw the Pivot Table or Power BI visuals? Watch out! They’ll claim the model is too slow, ineffective, wrong and useless. I’d MUCH rather push resource consumption into the initial refresh in order to build a memory-efficient model that performs well when being used for analysis.

Wait, what happened to the Data Types?

To be fair, Power Pivot’s compression mechanism is more about unique values and the length of precision than it is about Data Types. But it’s up to you to choose the correct Data Type to future-proof your model and make sure that the compression algorithms can be applied.

But due to the way Excel has always worked, the way the Data Types are named, and the fact that most modellers don’t have a clear understanding of Formatting vs Data Types… users are more likely to pick Decimal over Currency. I mean, why would I ever format my units as Currency? (In Power BI this is called a Fixed Decimal, although it still shows with the currency symbol.)

We need to recognize that trailing decimals really do happen in business. Let’s look at that Units column for a second. Naturally we never sell a partial unit… or do we? I’ll give you 3 examples of things I’ve seen in the Food and Beverage industry that forever changed my opinion on this:

  1. We had a couple of customers who insisted that they be able to purchase half a muffin. I’m not even kidding here. The killer for me was that it only showed up every 10,000 rows of transactions or so, meaning that the column often got set to Whole Number incorrectly.
  2. The salesperson agrees to provide those 300 units for $5,000. No big deal, except that they monkey with the price field in your database to make it work, and suddenly you have a sales price of $16.666667. Fortunately, this one usually gets rounded via setting it to a Currency Data Type, as that’s logical. But what if you set it to Decimal or left it undefined?
  3. Things go the other way and the customer agrees to a package price of $5,000 for something that usually carries a price of $17.00 per unit. It gets rung into the system with a reverse-engineered quantity of 294.1176470588235 to get it exactly right.

The last is the real kicker as the memory suddenly flips from Value to Hash, the RAM needed to refresh the data expands exponentially and brings your model to its knees. And now every slicer click has gone from 1 second to update your visuals to the better part of 10 seconds. And your audience is screaming that the model is “unusable”.

My Recommended Practice

The final step in every query I build which loads to a destination (an Excel Table or the Data Model) is to correctly define my Data Types. This does two things for me:

  1. It ensures that I never accidentally load a column with an undefined Data Type. (In the case of Dates, they load to Excel as numbers, and to the Data Model as text!)
  2. It ensures that I’m cutting down to the minimum number of decimals I’ll ever need for my analysis.
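
As a rough sketch, that final step usually looks something like this in M (the column names are just examples, not a prescription):

// Explicitly type every column as the last step before loading
= Table.TransformColumnTypes(Source, {
    {"InvoiceID", Int64.Type},      // Whole Number
    {"Date", type date},            // Date, not DateTime
    {"Sales", Currency.Type},       // Fixed Decimal - four decimal places
    {"Customer", type text}
})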

Does it take more processing time? Maybe marginally. But does it future-proof my solution to keep it performant? Definitely. And it ensures the time to do so happens in the refresh, not when the model is being used for analysis.

Some Final Thoughts

The issues you saw here with memory also affect DateTimes in a big way, as they are quite similar to decimal numbers, where time is represented as the fraction of a day.
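
While I didn’t test it in this article, the classic mitigation is worth a mention: split a DateTime column into separate Date and Time columns in Power Query, so each column compresses over far fewer unique values. A sketch, with a made-up [Created] column standing in for your data:

let
    // Stand-in data; in practice, Source is whatever step precedes this one
    Source = #table(type table [Created = datetime],
        {{#datetime(2019, 10, 1, 8, 30, 0)}, {#datetime(2019, 10, 1, 14, 0, 0)}}),
    AddDate = Table.AddColumn(Source, "Created Date", each DateTime.Date([Created]), type date),
    AddTime = Table.AddColumn(AddDate, "Created Time", each DateTime.Time([Created]), type time),
    Result = Table.RemoveColumns(AddTime, {"Created"})
in
    Result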

The article touches on some things from a much bigger topic: how to optimize a Power Pivot/Power BI data model. I actually teach a full-day course on this topic for CPABC, where we cover how the Vertipaq engine compresses its data, and why you want to avoid calculated columns and fall in love with measures. We make extensive use of Power Query in this course to reshape tables (applying correct data types along the way) so that they can be consumed by the data model in a more efficient manner. We dissect a poorly built model, test its memory and then rebuild it in stages, seeing how our changes made an impact.

And if you’re looking for tools to help with this process… stay tuned to this blog. I’m hoping to have an announcement on that front within the next couple of months.

Solutions for Power Query Challenge 6

This morning I logged in to check the solutions for Power Query Challenge 6 that were submitted and... Wow!  There were a bunch, and some cool variety there.  So now it's time to show you all what I came up with here.

What was Power Query Challenge 6?

The full description and source data can be found in yesterday's post, but in short it was to convert this:

Data table with nested data sets

Data table with multiple data points per cell

To this:

Data in 1NF

Data shown in First Normal Form (1NF)

So how do we do it?

Two Solutions for Power Query Challenge 6

Wait, two solutions?  Why?

As it turns out, I put together two for this one. My first attempt was cooked up to solve the issue on short notice.  Then I built another that seems a bit more elegant.  So I'll show them both, albeit quickly.

Solution 1 - Splitting Individual Columns and Merging Back Together

The first of my solutions for Power Query Challenge 6 is actually quite similar to what Ali posted in the solution thread.  It basically follows this method:

  • Step 1:
    • Create a Data query that connects to the source data, and load it as connection only
  • Step 2:
    • Create 3 new queries for ItemID, Quantity and Price which
      • Reference the data query
      • Keep the InvoiceID column and the other relevant column
      • Split the relevant column by delimiter, ensuring it splits to rows (not columns as it defaults to)
      • Add an Index column
  • Step 3:
    • Reference one of the Step 2 tables, and merge the other two tables to it, based on matching the Index column in each

So when complete, the query chain looks like this:

And returns the table we're after:

The only real trick to this one is that - when you are building the Price query - the Price column will pick the decimal as the delimiter, so you have to force it to a line feed.  So building the Price query would go through the following steps:

  • Right click the Data query --> Reference
  • Select the InvoiceID and Price columns --> Right click --> Remove Other Columns
  • Right click the Price column --> Split column --> By Delimiter
    • Clear the decimal from the Custom area
    • Click the arrow to open the Advanced area
    • Change the selection to split to Rows
    • Check "Split using special characters"
    • Choose to insert a Line Feed character
    • Click OK
  • Set the Data Types
  • Go to Add Column --> Add Index Columns

Resulting in this:

The ItemID and Quantity queries follow the same steps, except that Power Query now correctly identifies the Line Feed as the character to split on.
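
For the curious, the M code that the UI generates for that split-to-rows step looks roughly like this (simplified a little, and your step and column names may differ):

= Table.ExpandListColumn(
    Table.TransformColumns(Source,
        {{"Price", Splitter.SplitTextByDelimiter("#(lf)", QuoteStyle.Csv)}}),
    "Price")

The "#(lf)" is M's escape sequence for the Line Feed character you inserted through the dialog.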

Solution 2 - Group and Split

While the first solution to Power Query Challenge 6 worked, it left me less than satisfied as it took a bunch of queries.  While effective, it didn't feel elegant.  So I cooked up another solution that uses Grouping.  It's actually quite similar to the first solution that Bill Szysz posted.

The basic method is as follows:

  • Connect to the data
  • Right click the InvoiceID column --> Unpivot Other Columns
  • Right click the Value column --> Split Column --> By Delimiter --> OK

Following the steps above gets us to this state:

To unwind this, we group it:

  • Go to Transform --> Group By
    • Group By InvoiceID, Attribute
    • Aggregate a "Data" column using the All Rows operation

Grouping using the All Rows feature

At this point, we need to number these rows, so I just busted out the pattern to do that from our Power Query Recipe cards (recipe 50.125).

  • Go to Add Column --> Custom
    • Column Name:  Custom
    • Formula:  =Table.AddIndexColumn( [Data], "Row", 1, 1)
  • Right click the Custom column --> Remove Other Columns
  • Expand all fields from the Custom column

Leaving us with this data:

Data Grouped with Numbered Rows

The next step is to Pivot it:

  • Select the Attribute column --> Transform --> Pivot Column
    • Choose Value for the Values
    • Expand the Advanced arrow
    • Change the Aggregation to "Don't Aggregate"
    • Click OK
  • Select the "Row" column and Remove it.  (Yes, it was needed to unpivot this correctly, but now adds no value.)
  • Set the data types
  • Load it to the desired destination

At this point, the query (again) looks perfect:

The desired output
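
If you peek at the formula bar, the "Don't Aggregate" choice generates a Table.Pivot step roughly like this (the #"Expanded Custom" step name is just what my query happened to call the previous step):

= Table.Pivot(#"Expanded Custom", List.Distinct(#"Expanded Custom"[Attribute]), "Attribute", "Value")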

Now, I must admit, this felt far more professional and left me feeling good about it.

Which Solution to Power Query Challenge 6 is Better?

Naturally, solution 2 is better.  It takes fewer queries, and looks way cooler.  Right?  Not so fast...

The real question is in the performance.  And for this one I thought I'd test it.  But I needed more data.  I expanded the set to 11,000 rows and then used a tool we're working on to time the refreshes.  Privacy was left on, and all times shown are in seconds:

  • Solution 1:  1.43, 1.48, 1.11, 1.27  Avg ~1.32 seconds
  • Solution 2:  2.77, 2.65, 2.63, 2.68  Avg ~2.68 seconds

I'll be honest, this surprised me.  So I went back and added the GroupKind.Local parameter into the Grouped Rows step, like this (as that often speeds things up):

Table.Group(#"Changed Type1", {"InvoiceID", "Attribute"}, {{"Data", each _, type table [InvoiceID=number, Attribute=text, Value=number]}}, GroupKind.Local)

The revised timing for Solution 2 now gave me this:

  • Solution 2A:  2.54, 2.49, 2.56, 2.61.  Avg ~2.55 seconds

So while the local grouping did have a small impact, the message became pretty clear here.  Splitting this into smaller chunks was actually way more efficient than building a more elegant "all in one" solution!

My solution (including 5,000 rows of the data), can be found in the solution thread here.

Power Query Challenge 6

Are you ready for Power Query Challenge 6?  In this challenge we'll look at splitting nested data sets into a nice table in First Normal Form.

Let's take a look at Power Query Challenge 6:

Where did the challenge come from?

The inspiration for this challenge came from last week's Self Service BI Boot Camp that we hosted in Vancouver, BC (you should have been there, it was awesome).  At the outset, we talked about how to identify if your data is in First Normal Form (the format that is ideal for a PivotTable), and I showed this data set:

Data table with nested data sets

Data table with multiple data points per cell

Notice how the invoice has multiple ItemID, multiple Quantity and multiple Price fields per cell?  That's not good at all.

What we Really Need - Data in First Normal Form

As I explained in the Boot Camp, it is essential that the data source be in First Normal Form in order for a PivotTable to consume it.  What does that mean?  It means that it needs to look like this:

Data in 1NF

Data shown in First Normal Form (1NF)

Notice that in this case the data is atomic - it only has one data point per cell.  In addition, there is now one complete record per row.  The InvoiceID shows on every row now, and each data point has been correctly split up and applied to them.

So what's the thrust of the Power Query Challenge 6?

Well, as it turns out this is a bit tricky.  There are a few issues at play here:

  • The data is separated by line feeds within the cells
  • There are a different number of line feeds in each row
  • At least the number of line feeds is consistent for each cell in the entire row though!

So the challenge is this: break the table apart so that the data is in First Normal Form like the second image.

You can download the data source file (and post your solutions) in our forum here.  I'll give you my version tomorrow.

Power Query Intellisense and Highlighting in Excel

I'm super excited that Power Query Intellisense and Highlighting features are finally here in Excel!  It's been a long time coming, with these features debuting in Power BI Desktop months ago.

Where do Intellisense and Highlighting Show up?

Let's look at all three places that Intellisense and Highlighting are now exposed in Excel's Power Query.

Intellisense and Highlighting in the Formula Bar

The first thing you should notice about the image below is that Excel now adds colour highlighting to the text in the formula bar.  "Text" values are shown in red, keywords in blue, and data types in green:

Intellisense and Highlighting in Excel's Power Query Formula Bar

In addition, you can see Intellisense showing in the picture above as well.  In this case the cursor is immediately after the opening parenthesis character.  Unlike in the past, we now get a nice syntax popup box that tells us what needs to come next in this function.

Intellisense in Custom Columns

This is the place where I really wanted to see this happen.  Here's a look at Intellisense working live as I started to build a column using a Text function:

Intellisense in Excel's Custom Column Dialog

Pretty sweet, as this actually starts to display the options.

(Now, to be fair, in my build of Excel, this feature only seems to consistently work if I rename the column first, then tab into the formula builder.  If I just click straight into the formula, bypassing the tab order, it doesn't seem to kick in.)

Syntax Highlighting in Custom Columns

How about syntax highlighting?  The formula here is only meant for demo purposes, but you get the idea of how it can look:

Syntax Highlighting in Excel's Custom Column Dialog

Blue keywords for if, then and else.  Hard-coded values (like 3) show up in green like data types, and text shows in red.
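
The formula in the image is a throwaway, but any conditional column along these lines will light up all three colours at once (a made-up example):

= if [Quantity] > 3 then "Big order" else "Small order"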

Intellisense and Highlighting in the Advanced Editor

And finally, Intellisense and highlighting work in the Advanced Editor as well.  Here's a look at the syntax highlighting:

Syntax Highlighting in Excel's Advanced Editor

And here's what happens when we start manually adding a new step, which kicks in the Intellisense in the Advanced Editor window:

Intellisense in Excel's Advanced Editor

How do you get these features?

First, you'll need to be on Office 365.  Sorry Excel 2016 and 2019, but my understanding is that these new features are only coming to subscription users.  (That's another reason that - in my opinion - you should never buy another 4 digit numbered release of Excel ever again.)

If you are on subscription, you get them with your regular updates.  This feature set hit the Insider build channel last month in version 1907.  It has also been deployed to those on the Monthly Targeted channel running version 1907, build 11901.20080 or newer.

If you're not on the Monthly Targeted channel, you'll need some patience, as it's coming.  Just keep checking those updates!

Microsoft Business Applications Summit Recordings

Ken had a blast in June going down to Atlanta for the Microsoft Business Applications Summit (MBAS). There, he and co-author Miguel Escobar led an in-person workshop together for the first time. Ken also presented a breakout session on best practices for spreadsheet modeling, data shaping and data analysis. However, his real highlight was helping people with their Excel questions at the Ask the Experts booth.

Ken at the Microsoft Business Applications Summit 2019

Ken Puls hanging out at the Ask The Experts booth with his 'M is for Data Monkey' co-author Miguel Escobar and Excel legend Bill Jelen (aka Mr. Excel).

At MBAS, Microsoft also unveiled their new Power Query website. It's wonderful to finally have an official Microsoft site dedicated to this amazing tool. In addition, we're extremely proud that the Excelguru blog, along with the books and website created as part of our Power Query Training project, are listed in the Resources section!

Microsoft Power Query Website

We are thrilled to be included on the Resources page of the new Power Query official website!

On-demand Session Recordings

If you weren't able to make it to MBAS, or didn't get to all the sessions you wanted to, Microsoft has tons of on-demand recordings available for FREE! This is an amazing resource to help you continue to learn about and explore the Microsoft ecosystem. You can check them all out at the Microsoft Business Applications Summit website. Microsoft broke the sessions down into 4 streams: Power BI, PowerApps, Microsoft Flow, and Microsoft Dynamics.

Excel Sessions

Microsoft included Excel sessions in the Power BI stream under the "Other" category. Thus, they may be a bit tricky to find. Luckily, the Excel team gathered together a list of available recordings that we wanted to share. This includes Ken's session with David Monroy of Microsoft on data modeling and analysis best practices:

Power Query Sessions

The Power Query sessions at Microsoft Business Applications Summit were also part of the Power BI stream. Hence, we've compiled a list of available recordings to make them easier to find:

Unfortunately, we do not have a recording of Ken and Miguel's workshop on Working with Data in the Power Platform and Excel.

Microsoft Business Applications Summit 2020

Mark your calendar - next year's event will be held in Anaheim, CA on April 20 and 21. Additionally, you can sign up to receive updates for MBAS 2020 info on the event site. Perhaps we'll see you there!

One Solution to Power Query Challenge 5

Hopefully you’ve had a chance to try and build a solution to Power Query Challenge 5.  In this thread, I’ll show you how I approached it.

Challenge Goals

The challenge in this case was to allocate a fixed amount for a series of account IDs based on the data in this list:

Can you make a series from one to the other?

Creating this…

Desired Power Query Output

Here is the data we need to create

And as a reminder, the rules were:

  • The first 7 digits won’t change for the series
  • If there is no “To” value, then we only need the single value
  • If the final character is text, it will never exceed Z. (i.e. we’ll never go from 10A to 11C)

And you can click here to get the original data file (as well as see solutions posted by myself and others).

I would also like to quickly restate that another of my main goals was to do as much using the UI as possible, and to make certain that the students in my workshop could maintain this solution.  During the three-day workshop (similar to my upcoming Self Service BI Boot Camp), we did a full day of Power Query work, but mainly around reshaping data through the user interface.  The deepest we got into M code was creating a list using a structure like {1..10}.  So moving to crazy conditional logic and List.Generate solutions was out of scope for where I wanted to take this for them. 😉
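
That {1..10} structure is actually the heart of my solution, since M’s list ranges work with both numbers and single-character text. A quick illustration:

= {1..5}         // {1, 2, 3, 4, 5}
= {"A".."E"}     // {"A", "B", "C", "D", "E"} - single characters only, which is why rule 3 matters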

One solution to Power Query Challenge 5

I broke my solution down into four distinct queries as follows:

Listing of all queries used

Let’s look at each of those steps.

The Base Query

The point of this query was basically to carve up the “From” and “To” portions into the components that I would need in order to get my solution.  With the preface remaining the same for every ID in the list, it was all about separating the last 3 digits to go from this table:

Can you make a series from one to the other?

To this one:

Adding From and To columns

The steps I used were as follows:

  • Select the [From] column --> Add Column --> Extract --> First Characters --> 7
  • Name the new column “Preface”
  • Select the [From] column --> Add Column --> Extract --> Last Characters --> 3
  • Name the new column “From3”
  • Add a new conditional column called “To3” with the following formula:
    • =if [To] = null then [From3] else Text.End([To],3)
  • Remove the [From] and [To] columns
  • Set the data types for each column (notice [From3] and [To3] are text, not numbers)
  • Load the query as a Connection Only query
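
If you prefer to see it as M, the Base query comes out roughly like this. It’s only a sketch (the sample IDs are made up, and your generated step names will differ):

let
    // Stand-in for the raw table; in practice this is the connection to the source data
    Source = #table(type table [From = text, To = text],
        {{"1234567001", "1234567005"}, {"765432110A", "765432110D"}, {"999999912B", null}}),
    Preface = Table.AddColumn(Source, "Preface", each Text.Start([From], 7), type text),
    From3 = Table.AddColumn(Preface, "From3", each Text.End([From], 3), type text),
    To3 = Table.AddColumn(From3, "To3",
        each if [To] = null then [From3] else Text.End([To], 3), type text),
    Result = Table.RemoveColumns(To3, {"From", "To"})
in
    Result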

The Numeric Query

Next up, I needed to generate the numeric series.  This was easy, as it works much like the Power Query Calendar table recipe.  (Recipe card 60.105.x, which you can find in our Power Query Recipe Cards.)

The steps I used to do this were:

  • Reference the Base query
  • Change the type of the [From3] and [To3] columns to Whole Number (both ends of the range need to be numeric for the list formula below to work)
  • Filter the [From3] column to remove errors
  • Add a new Custom Column with the following formula:
    • ={ [From3]..[To3] }
  • Remove the [From3] and [To3] columns
  • Expand the new [Custom] column
  • Merge the [Preface] and [Custom] columns into a new “ID” column
  • Load this query as Connection only

This left me with the query output shown here:

Creating the Numeric Series

And the cool thing is that using this technique, the final series – which only had a “From” ID and no “To” ID – just creates the single item.

The Alpha Query

Next up was the alpha query.  Once again, I referenced the Base query, meaning that I started from here:

Adding From and To columns

The steps I followed here were:

  • Duplicate the [From3] column
  • Change the data type on the [From3 – Copy] column to Whole Number
  • Filter the [From3 – Copy] column to KEEP errors
  • Remove the [From3 – Copy] column
  • Split the [From3] column by Number of Characters, splitting at the left-most 2 characters
  • Split the [To3] column by Number of Characters, splitting at the left-most 2 characters
  • Add a new Custom Column with the following formula:
    • = { [From3.2]..[To3.2] }

So at this point, I’m looking at this:

Creating a text based list

Continuing on:

  • Remove the [From3.2], [To3.1] and [To3.2] columns
  • Expand the list from the [Custom] column
  • Merge the [Preface], [From3.1] and [Custom] columns into a new “ID” column
  • Load this query as Connection only

And these steps left me with a nice list of the alphanumeric series as follows:

Creating the Alphanumeric Series

The IDs Query

At this point, things became super simple.  There’s no real magic to this query, as the data has already been prepared and normalized in the Numeric and Alpha queries.  So all that needs to be done here is:

  • Reference the Numeric query
  • Append the Alpha query
  • Set the data types (just to be sure)
  • Load the query to the intended destination

And that’s it.

That’s One Solution to Power Query Challenge 5

Of course, there are countless other ways to solve this.  My goal in this case was specifically to avoid a custom function, as I didn’t want to obfuscate the code for a new Power Query user.  Could I have done this all in one query with use of more complicated Custom Columns?  Sure.  But again, would it be maintainable by people newly introduced to Power Query?  Hopefully intellisense in the Custom Column Dialog will change my answer, but right now I’d say that’s unlikely.

One of the things I love about Power Query is the ability to break this down into separate queries, then put them back together.  The reality is that this series of data was mixed, needing slightly different treatment in each case.  This setup allowed me to clearly focus on the goal, getting it done without noise from the other type of data in the set, then just append them afterwards.  I call that a win as it’s easily auditable by someone closer to the beginner side of the Power Query learning curve.

Power Query Challenge 5

It’s been a while since our last challenge, so hopefully you’re ready for Power Query Challenge 5!  This is one that came up as I was teaching one of our 3-day Excel BI Boot Camp courses for a client (similar to this public course!), which actually made it doubly tricky.  Why?  Because I needed the person to be able to both understand how I approached the task, as well as be able to maintain it after I left.  I didn’t want to just kick up some magic voodoo and walk away, as that isn’t really fair.

The issue for Power Query Challenge 5

The challenge in this case was to allocate a fixed amount for a series of account IDs.  They provided a starting ID, an ending ID, and a value.  Sounds easy so far, right?  Here’s the problem… the key part of some of those IDs was purely numeric, but the key part of others was alphanumeric!  An example of the source data:

Can you make a series from one to the other?

And the required output:

Desired Power Query Output

Here is the data we need to create

The rules – as they were explained to me – were as follows:

  • The first 7 digits won’t change for the series
  • If there is no “To” value, then we only need the single value
  • If the final character is text, it will never exceed Z. (i.e. we’ll never go from 10A to 11C)

How would you solve Power Query Challenge 5?

My first thought was that I’d have to reach for a custom function to do this, but again, I didn’t feel that was fair to my audience with only a day of Power Query training under their belt.  (With Dimensional Modeling and DAX to cover in our workshop, we don’t have time to take people deep into custom functions.)  After playing with it for a bit though, I was able to work past that problem and come up with a UI-driven solution that works.  (Having said that, how you decide to solve this is really up to you!)

Now, last time I posted my solution the same day, along with my steps.  The result was that we had less engagement, possibly because that gave too much away.  So this time, I’ve created the thread for you to post your solutions in the forum, but I’m going to hold off on posting my approach until tomorrow.

Good luck!