Name Worksheets After Queries

Have you ever loaded a Power Query to a worksheet and then changed the name to match the query? It's a shame that there is no option to name worksheets after queries, as this would be handy.

Well, after seeing this request come up in the forums last week, we thought that this would be a great feature to add to the DestinationSleuth in Monkey Tools.  So as of build 1.0.7433.38066... it's done!

How to Name Worksheets After Queries

We wish that we could add this as an option in the Close & Load dialog, but sadly that's not possible.  So we did the next best thing...  Once you've loaded your queries, you simply need to open our DestinationSleuth and:

  1. Select the queries you're after
  2. Click the Rename Sheets button

Using DestinationSleuth to select queries and change the names of their host worksheets

At that point, we'll quickly loop through the host worksheets and rename each one to match the query loaded to it.

Name Worksheets After Queries While Changing Load Destinations

You might also notice a new checkbox called "Name Sheets After Queries".  This checkbox allows you to name worksheets after queries while changing a load destination to create a new table.  It's also super easy to use:

  1. Select the query (or queries) you wish to change
  2. Choose to change the load destination to a Table
  3. Check the Name Sheets After Queries checkbox
  4. Click the Update Load Destinations button

Using DestinationSleuth to change a load destination from Connection Only to Table and update the worksheet name to match the query at the same time.

We'll change the Load Destinations, creating the worksheets AND naming them to match the query in one step.

This is also a "Forever Free" Feature

As mentioned in my last post, while the DestinationSleuth's colour highlighting is only available in the trial and pro versions of Monkey Tools, the ability to Change Multiple Load Destinations at Once is a "forever free" feature.  And so is the ability to rename worksheets after queries!  All you need is Monkey Tools version 1.0.7433.38066 or higher, and you'll have that ability at your disposal.

If you haven’t already, head over to the Monkey Tools product page to download a copy.

If you already have Monkey Tools installed, it will automatically update within a couple of weeks, or you can request the update now by going to Monkey Tools -> Options -> Check For Update Now…

Change Multiple Load Destinations at Once

Have you ever built multiple queries in one Power Query session?  You get to choose one load destination, then have to change each of the incorrect ones... one by one.  Have you ever wished you could change multiple load destinations at once?  Well now you can - you just need Monkey Tools!

Check out the New and Improved DestinationSleuth!

Following on the heels of last week's feature updates, we've added functionality to the DestinationSleuth that allows you to change load destinations of any one or more queries.

Naturally, DestinationSleuth still gives you a very visual view of the different query load destinations all in one place, but do you see that new option just to the left of the Exit button?

DestinationSleuth showing the different load destinations in colour

Assume that you've created 5 new queries in the workbook and loaded them all as Connection Only.  But now you want to change 4 of them to the Data Model.  Rather than clicking each query, changing the load destination, waiting, and then moving on to the next one... now you can just launch DestinationSleuth and:

  1. Select the load destination
  2. Select each of the queries you want to repoint (hold down your CTRL key)
  3. Click Change

DestinationSleuth being used to change multiple load destinations at once

Sure, it still takes time, but at least you can walk away and let it cook, rather than slowly shepherd it through the entire process.

Delete Host Worksheets

You'll also notice a little checkbox called "Delete Host Worksheets".  You know that issue where you accidentally load a table to the worksheet, then change the query to Connection Only?  It leaves the worksheet behind.  Now true, you can always just delete the worksheet (which will actually set the query to Connection Only automatically). But what if you need to change 5 of them?  It's easy with DestinationSleuth:

  1. Select the load destination
  2. Check the "Delete Host Worksheets" checkbox
  3. Select each of the queries you want to repoint (hold down your CTRL key)
  4. Click Change

We will repoint your queries AND remove the worksheets that were holding the query results.  (Naturally, you want to be really sure you actually want to do this, but it's handy if you do.)

This is a "Forever Free" Feature

While the DestinationSleuth's colour highlighting is only available in the trial and pro versions of Monkey Tools, the ability to Change Multiple Load Destinations at Once is a "forever free" feature.  All you need is Monkey Tools version 1.0.7423.41125 or higher, and you'll have that ability at your disposal.

If you haven’t already, head over to the Monkey Tools product page to download a copy.

If you already have Monkey Tools installed, it will automatically update within a couple of weeks, or you can request the update now by going to Monkey Tools -> Options -> Check For Update Now…

Monkey Tools Update Now Available

We’re pleased to announce the first Monkey Tools update is now available for download! This one contains a new feature, some new logic and an update to one of our data connectors. Read on for more information!

I can’t believe it’s been a month since our initial release, but here we are.  Since that release, a couple of notable things happened:

  • We published a blog post on PivotSleuth and Measure Tables. This was a learning experience for me, as I discovered something new about Measure Tables.  I always knew that you could mark your Fact tables as Measure tables by hiding all the unaggregated columns, but I didn’t realize that this also means that disconnected Measure tables will then suppress the “Relationships between tables may be needed” error.
  • One of my friends hit me with an interesting curve ball: he turned on “Store datasets using enhanced metadata format” in the Power BI preview features. And as it turned out, some of the methods we’d been using to analyze the Power BI model disappear when you do that.

Both of these have led to some improvements in the software, which we’re proud to say are finally available to you.

What’s new in version 1.0.7418.29970?

There are three major things that are new in this version:

A New Power BI Connector

This was actually a huge amount of work for us, as not only did we have to build a new connector to read the new Power BI file format, but we also had to analyze the file as it was opening to see if it was in the classic format or not.  And to make it harder, if you have the Enhanced Metadata Format turned on, a legacy file requires using our initial connector, but any refresh must be done with the new connector.  Fun times for a developer and, as you can imagine, it took us a bit of effort to pull it off.  It’s actually this piece that held us back on the other features, as the connector MUST work and it impacted everything.

While most of this work is invisible to the end user, there are two things that we hope you do notice:

  • Connecting to the new file format is much faster than using the legacy format.
  • We also took the time to remove the reliance on configuring the version of Power BI that launches for you by default. We now just launch Power BI using your default version, then bind to it, no matter how many versions you have on your machine or which they are.

Updated Functionality in the PivotSleuth

As we learned in the PivotSleuth and Measure Tables blog post, hiding all unaggregated columns on a disconnected measure table flags the table as an official Measure table and suppresses the “Relationships between tables may be needed” message.  For that reason, PivotSleuth needed to recognize that this is acceptable.

In other words, when the Measures table is a properly formatted disconnected table, it needed to (and now does) show that there are no issues:

PivotSleuth giving a clean bill of health for measures from a disconnected measure table

But when that disconnected Measures table has a visible column, PivotSleuth should not only show the issues, but also tell you what needs to be done to fix them.  And now it does:

PivotSleuth showing issues for measures used from an improperly formatted measure table

New QueryMonkey Feature: Add Measure Table

As mentioned before, our philosophy is “Build better, faster”.  For that reason, we’ve added a new QueryMonkey feature for you:  Add Measure Table.

This feature will prompt you for a name for the table…

Prompting the user to enter a name for the new Measures table

And then create a new empty table in the data model for you:

An empty table in the data model

Unfortunately, the Excel team hasn’t given us a way to programmatically hide columns in the data model (we could REALLY use that ability), so we can’t take that last critical step for you:  Hiding the Measures column to prevent the "Relationships between tables may be needed" message.  But never fear, we do tell you exactly what needs to be done:

Advice from PivotSleuth on what to do to turn the new table into a proper measure table

So while we typically store our measures on the Fact tables rather than a disconnected Measure table, we totally get that a lot of people like this approach.  Hopefully this makes it a lot easier for you!

How do I get the Monkey Tools Update?

The answer to this depends on whether or not you’ve installed Monkey Tools yet.

If you haven’t, then head over to the Monkey Tools product page to download a copy.

If you already have Monkey Tools installed, it will automatically update within a couple of weeks, or you can request the update now by going to Monkey Tools -> Options -> Check For Update Now…

Happy sleuthing! 🙂

Monkey Tools is Here

We are super excited to announce that we’ve (at last) released the first version of our Monkey Tools software!  Ken has been working on this software on and off for the better part of 8 years now.  But after showing it to a friend in Wellington last year, we decided it was finally time to get serious.  We hired a full-time developer last summer and are finally ready to go live with the initial release!

What is Monkey Tools?

Monkey Tools is an Excel add-in (supported in Excel 2016 and higher) which provides tools for you - as a business intelligence author/reviewer - to:

  • Build models more rapidly
  • Follow recommended practices
  • Document your work
  • Audit files that you receive

It is targeted primarily at modelers and analysts who work mainly in Excel, but also push their models into Power BI.  (Our philosophy at Excelguru is to model in Excel first, then export to Power BI for reporting, sharing and security control.)

Oh, and super important… it installs on your system without requiring admin rights on your PC.  How cool is that?

What does Monkey Tools actually do?

Well… lots!  We’ve collected all the cool features under some themed buttons including:

  • QueryMonkey (for inserting new queries)
  • DestinationSleuth (to provide information on query load destinations)
  • QuerySleuth (helping understand your actual queries)
  • TimeSleuth (to benchmark query load times)
  • PivotSleuth (helping you diagnose Pivot Table field issues)
  • DAXSleuth (tools especially for working with DAX measures)
  • ModelSleuth (reporting on the properties of your queries and data model)

Cute names, right?  The Monkey builds things, and the Sleuths investigate things.  Here’s a high-level view of what they each contain.

QueryMonkey

QueryMonkey gives you the ability to insert key queries like:

  • The famous “fnGetParameter” query and table (from Chapter 24 of M is for Data Monkey)
  • A “From Folder” setup that works with local and/or SharePoint hosted files
  • Dynamic calendar tables based on your data (for custom calendars, it even provides the option to insert the "periodicity" columns for Rob Collie's GFITW DAX pattern!)

The QueryMonkey provides a Dynamic Calendar generator

DestinationSleuth

Today, this is simply a viewer to visually indicate the load destinations of your tables (better than just “Connection Only” or “x Rows Loaded”).

The DestinationSleuth user form displays four different load destination types

QuerySleuth

This is a single form, packed with information and features such as:

  • A dependency/precedent tree view layout
  • Full colour display based on load destination
  • Colourful and indented M code
  • The ability to modify the M code and write it back to the editor WITHOUT LOCKING YOUR EXCEL User Interface!

The QuerySleuth shows a query dependency tree as well as indented and colourful M code

TimeSleuth

This feature allows you to time query execution in Excel, and even chart comparisons between them with or without privacy settings enabled.  If you’ve ever wondered which query is slowing down your workbook, or wanted to time test two different approaches, you may find this helpful!

A chart generated by Monkey Tools TimeSleuth user form

PivotSleuth

Have you ever seen that irritating “relationships may be needed” error when building a Power Pivot based Pivot Table, and wondered why?  Pivot Sleuth can tell you…

  • See the real, fully qualified names of the fields used in your Pivot Tables
  • Highlight potential or current issues in Pivot Table configurations
  • Debug cross filtering issues, “relationships may be needed” errors and errors where grand totals are returned for all rows on the Pivot Table

Debugging PivotTable errors with the PivotSleuth

DAXSleuth

We believe that measure dependencies are just as important as query dependencies, and this is the reason we built the DAXSleuth.  This form:

  • Displays a dependency/precedent treeview of your DAX measures
  • Provides a full colour display of Implicit and Explicit measures (with or without children), as well as Calculated Columns
  • Shows your DAX measures with colour highlighting in an indented format
  • Allows you to Indent, Un-Indent, Flatten, Duplicate and even Update measures without leaving the DAXSleuth
  • Exposes all locations a DAX Measure has been used (Pivot Tables, Pivot Charts, OLAP Formulae and Named Ranges), and even allows you to select those objects right from the DAX Sleuth!

Monkey Tools DAXSleuth user form in action

ModelSleuth

Have you ever had to provide documentation for your model?  Or picked up a model from someone else and had to review it?  The ModelSleuth provides reports and utilities such as:

  • A full model summary report showing key statistics about your tables, relationships, columns, measures and queries. (Trial and Free licenses are limited to every other record in this report.)
  • A model memory usage report, complete with how much memory is recoverable (for Excel based data models).
  • An unused columns report (for Excel based data models).
  • A DMV Explorer (for those who want to build their own reports).

Showing the impact of unused columns on memory via Monkey Tools ModelSleuth feature

Monkey Tools Supported File Types

The Monkey Tools add-in is compatible with Excel 2016 or higher, and can read from:

  • Excel files
  • Power BI Desktop files
  • Backup files (that you can export from the Monkey Tools software)

Will Monkey Tools get updates?

Oh yes, we have plans for many more features!

Our intended model is to deliver features (and bug fixes) as we develop them.  That means that there could be periods with no updates as we work on something big, or periods with multiple updates delivered in a single week.  We know that some people love frequent updates and some people don’t, so we let you control how often you get them:

Monkey Tools allows you to control update frequency

The key thing to recognize here is that we are not holding new features for a vNext. They’ll be delivered when they’re ready.

Can I try Monkey Tools before I buy it?

Ken did not become or remain a Microsoft MVP without contributing a large portion of tools and help to the community for free, and that won’t change.  Having said that, we’re paying a developer to work on this product full time and need to recoup those costs.  For that reason, we will always have both a Free version, as well as a Pro version.

Naturally, we want you to look at it, as we're convinced you'll like it.  And that's why we have a two-week trial that provides access to almost the full feature set.  Once your trial expires, your license will automatically revert to a free license.  You’ll still get fixes and new features, they’ll just render in free mode (without colour, without field advice, etc.).  We do believe that you’ll still find the tool useful, just maybe not as useful without a Pro license.

Ready to learn more about pricing options and download the free trial?  Click here!

The Data Insights 2 Day Master Class

I’m super excited to be presenting a Data Insights 2 Day Master Class in Wellington, NZ with my good friend Matt Allington.  This is the first time we’ll be working together to bring our unique strengths to our participants in a joint session format, and it’s going to be AWESOME!

Ad for the Data Insights Masterclass in Wellington NZ

How is the event going to work?

We think you’ll love this.  We’re going to divide our group in two.  You’ll get one full day with me on Dimensional Modeling, and one full day with Matt focusing on the DAX formula language.  These two components are essential to understand when you want to build truly dynamic, scalable and stable data models, and we're going to cover both in detail.

What is covered in the Dimensional Modeling day?

Ken will be looking deeply at how to structure your data for a successful Excel/Power BI data model.  You’ll learn how your data should be shaped, what the data model expects in its tables, and a variety of techniques and patterns to work around common join problems.  Our goal here is very simple: to teach you everything you need to lay the foundation for a data model that will stand the test of time.

But not only will you learn practical, hands-on techniques to lay this groundwork, you’ll learn the key terminology at play.  By the time you leave this session you’ll be able to identify things like ‘facts’, ‘dimensions’, ‘relationships’, ‘schemas’, ‘slowly changing dimensions’ and much more.  Armed with this knowledge, you will not only be able to design your own models properly, but you’ll also be able to understand other materials you reference during your career.

As you might expect from one of the world’s leading voices on Power Query, there’s going to be a heavy focus on Power Query in this course.  But it's Power Query with a purpose: to feed a Power Pivot Data Model.

What is covered in the DAX Formula day?

Matt will take you into the world of DAX formulas, exploring how this incredible language can be used to summarize virtually any statistic you want to know.  He’s one of the world’s experts in the DAX language and will teach you not only what you SHOULD do with DAX, but what you SHOULDN’T.

When Is This?

Soon!  It’s going to be hosted in Wellington, NZ on Feb 24 and 25, 2020.  But the good news is that there are still seats available, and we’d LOVE to see you there with us.

How Much and Where Do I Sign Up?

Great questions!  Head over to ExceleratorBI for all those details.

Remove Dynamic Number of Top Rows

Removing the top five rows from a data set is easy in Power Query, but what do you do when the number of rows changes?  There isn’t a built-in Remove Dynamic Number of Top Rows function.  In this post we’ll look at how to set this up.

Illustration of the issue

Assume you have the following report, and you’re only interested in the Cider sales:

With Cider starting in row 9, we’d need to remove the top 8 rows.  That’s fairly easy.  You just need to:

  • Go to Remove Rows -> Remove Top Rows -> 8
  • Promote headers
  • Do whatever else you need to do to the data

But then you get an updated version of the data set, and it looks like this:

Uh oh.  Best case, if you run the previously generated Power Query script, you’ll end up with the following result:

But more likely, if you promoted the clean header row from the original data set, you’ll get a step level error since the revised data set doesn’t yield a “Cider” column when row 1 (shown above) is promoted to header:

Regardless of which one of these scenarios appears worse to you, I think we can agree that neither one is desired.  So how do we make this work on a dynamic basis?

Solution Architecture

The way I approach this issue is to split the job into 3 queries: one to connect to the raw data, one to generate the dynamic row number, and one to remove the top rows and carry on with the cleanup.

Let’s look at how this works in practice…

Query 1:    Raw Data

The purpose of this query is quite simple:

  • Connect to the Raw Data source
  • Perform any preliminary cleanup
  • Rename the query as “Raw Data” (add something descriptive if you have many data sources)
  • Set the query to load as a Connection Only query (disable the load in Power BI)

The key thing to note here is that we’re not doing any work to remove top rows beyond things that we know will ALWAYS occur.  We may want to drop columns and other things to reduce our data set; we just don’t want to touch anything we can’t guarantee will be exactly the same when we get updated data.

In the case of the data sample I showed above, I’m just going to connect to the data set and load it as connection only.  (While I could make an argument that the first 3 rows will always need to go, I will get rid of those when filtering to just the cider header anyway.)
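If it helps to see it in code, here is a minimal sketch of what that Raw Data query might look like in M.  The file path, sheet name and step names are illustrative assumptions only; your own connection steps will differ:

    // "Raw Data" query: connect to the source and do only the cleanup that will ALWAYS apply
    let
        Source = Excel.Workbook(File.Contents("C:\Data\Beverage Sales.xlsx"), null, true),
        RawReport = Source{[Item="Report", Kind="Sheet"]}[Data]
    in
        RawReport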

Query 2:    Generate the Dynamic Row Number

The next step is to generate the number that indicates the dynamic number of top rows we are looking for.  Despite the fact that the row which holds our data is changing, this is actually relatively easy once you know how:

  • Right click the Raw Data query -> Reference
  • Go to Add Column -> Add Index Column -> From 0
  • Filter one of the columns to the data you are looking for
  • Right click the [Index] Column -> Remove Other Columns
  • Go to Home -> Keep Rows -> Keep Top Rows -> 1
  • Right click the value in the cell -> Drill Down
  • Rename the query as “HeaderRows”
  • Set the query to load as a Connection Only query (disable the load in Power BI)

You now have a query that will dynamically pick up the number of rows to be removed from the top of the data set before it encounters the text you are looking for.
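For reference, the resulting HeaderRows query ends up looking something like the sketch below.  It assumes the first query is named RawData (no space) and that the product labels land in a column called Column1 — both assumptions for illustration:

    // HeaderRows query: count how many rows sit above the "Cider" header row
    let
        Source = RawData,
        AddedIndex = Table.AddIndexColumn(Source, "Index", 0, 1),            // number the rows from 0
        FilteredToCider = Table.SelectRows(AddedIndex, each [Column1] = "Cider"),
        RemovedOtherColumns = Table.SelectColumns(FilteredToCider, {"Index"}),
        KeptTopRow = Table.FirstN(RemovedOtherColumns, 1),
        HeaderRows = KeptTopRow{0}[Index]                                     // drill down to the single value
    in
        HeaderRows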

Query 3:    Remove Dynamic Number of Top Rows

So now comes the magical part:

  • Right click the Raw Data query -> Reference
  • Go to Home -> Remove Rows -> Remove Top Rows
  • Type in the current number of rows to remove (for this example, we’ll assume it is 8 rows)

The formula bar will now be showing the formula =Table.Skip(Source, x) where x is the value you typed in:

  • Replace the value with “HeaderRows”

CAUTION!  Power Query is case sensitive.  You must spell and case HeaderRows EXACTLY as you did previously.  And if you separated those two words with a space, you need to escape it with hash marks and quotes:  #"Header Rows"
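Put together, the final query might look something like this sketch (the step names and the promote-headers step are illustrative; the key line is the Table.Skip call that references HeaderRows instead of a hard-coded number):

    // Final query: remove a dynamic number of top rows, then promote the "Cider" row to headers
    let
        Source = RawData,
        RemovedTopRows = Table.Skip(Source, HeaderRows),        // HeaderRows comes from the query built above
        PromotedHeaders = Table.PromoteHeaders(RemovedTopRows, [PromoteAllScalars=true])
    in
        PromotedHeaders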

If you’ve replaced everything correctly, you should see that everything still works:

Does it Work?

Here’s what we see when we point RawData to the second data set I showed earlier:

The sample file for this example can be downloaded here.

How do You Design Your Data Model?

On this blog, I showcase a lot of different techniques for manipulating and reshaping data.  For anyone who follows the blog, you already know this, and you know it's a pretty important topic to me.  But the thing we shouldn't lose sight of is WHY we do this. It's to drive analytics. I'm fairly convinced that the majority of the loyal readers here already know this. Thus, I wanted to ask your opinion on something...

Raw data to data model, passing through the human brain

How do you design your data model?

What I'm specifically interested in is how you approach designing the Fact and Dimension tables you use for your Power Pivot model.  And I'm not specifically talking about Power Query here. We all know you should be using what you learned from our recently relaunched Power Query Academy to do the technical parts.  😉

What I'm more interested in is the thought process you go through before you get to the technical bit of doing the data reshaping.

If you read books on setting up a data model, you'll probably be told that you need to do the following four steps:

  1. Identify the business process
  2. Determine the grain of the model
  3. Design your Dimension tables
  4. Design the Fact tables

So if you're asked "how do you design your data model", do these steps resonate with you, and why?

Do you consciously sit down, and work through each of these steps in order?  I suspect that many self-service BI analysts skip the first step entirely as they are implicitly familiar with their business process.  (As a consultant, I ask a lot of questions in this area to try and understand this before building anything.)

Do you design the reports on paper, then work backwards to the data you'll need, go find it and reshape it?  Or do you go the other way, trying to collect and reshape the data, then build reports once you think you have what you need?

Do you explicitly define the model grain?  And if you do, what does that mean to you?  Is it restricted to "I want transactions at a monthly/daily/hourly basis"? Or do you go deeper, like "I want transactions at a daily basis and want to break them down by customer, region and product"?

Why the question?

There's actually two reasons why I'm asking this question:

Reason 1 is that I think healthy discussion makes all of us better.  I'd like to hear your thoughts on this as I'm probably going to learn something that I haven't discovered in my own learning journey.

Reason 2 is that my whole business is around teaching people how to do these things, and I'm always looking to make things clearer.  The more opinions I hear (even if they contrast with each other), the more I can help people understand this topic.

So sound off, please!  We'd all love to hear how you approach the task of building a data model.

Self Service BI with Excel and Power BI

This week Ken is at the Microsoft Business Applications Summit in Atlanta. However, we're super excited that he'll be back on home turf in July for our next public training session. The 3-day Self Service BI Boot Camp in Vancouver will change your company's reporting game forever!

Is Self Service BI a Double-Edged Sword?

Matthew Roche, a member of Microsoft's Power BI CAT Team and noted sword enthusiast, recently posted about a trend he's seeing. His global customers often describe self service BI as a "double-edged sword." After thinking about this comparison, he felt the simile held up. But perhaps not in the way you think!

In the post on his blog, Matthew summed it up nicely. Having two sharp edges is only dangerous to the wielder if they don't know how to effectively use each of the distinct and complementary edges together.

Approaching Self Service BI for the First Time

Perhaps you've been thinking about implementing some self service BI solutions within your organization. And you've been hearing a lot of buzz about Power BI. But you're unsure where to start or how Power BI fits in with your existing data platforms and reporting tools.

Our Self Service BI Boot Camp, with 3 full days of immersive hands-on learning, might be just the place to begin. Guided by the Excelguru, Ken Puls, you'll work with both Excel and Power BI to build some really cool self service BI solutions! We'll not only show you which tool is right for which kind of job, but how to use them together in one solution.

What will the Boot Camp Cover?

We know that data is not usually stored in nicely curated databases and often - even when it is - the data analyst doesn’t have access to it. Instead, we must piece together data provided in text files, Excel files, web pages, and even the body of emails to build the business intelligence solutions we need. Until now this has been a painful process with a great deal of cut and paste effort, and updates have been tedious and error prone. That stops with this workshop!

Building Modern BI Solutions

In the Self-Service BI Boot Camp we'll dive into:

  • a deep exploration of Power Query for gathering, cleaning, and preparing your ugly data
  • the benefits, concepts, and key terminology of Dimensional Modeling
  • an introduction to DAX, including:
    • how to create a variety of DAX measures
    • how to control their Filter Context
    • understanding how DAX measures are calculated

Learn how CALCULATE works in our Self Service BI Boot Camp

  • calendar intelligence, such as:
    • building calendar tables
    • using the “Golden Date” pattern
    • extending our model to report based on our own year-end
  • how to leverage the strengths of Power BI for sharing and reporting
  • specific features of Excel and Power BI that every analyst should know, plus:
    • which tool to use for which job
    • how they can be used together
  • recommended best practices for boosting the performance of your self service BI solution

You can read more about the class on the Excelguru website. Likewise, check out the course outline to see what will be covered in each of the 3 days.

Top 5 Reasons to Attend

  1. Our Self Service BI Boot Camp is loaded with hands-on, practical experience. As Ken's friend, Thomas LaRock, recently commented on Twitter, "Telling isn't teaching." Our philosophy is that you need to actually work with a tool in order to really learn it.
  2. The small class size provides a more intimate learning environment. Working with a small group allows Ken to interact directly with attendees, answering your questions and fielding discussions as they come up. You get to pose questions and discuss scenarios with one of the world's leading self service BI experts. However, this means that spots are limited, so don't  miss out on your chance to sign up.
  3. There's no better time to come to Vancouver than in July - it's a beautiful city all year round, but is in its prime this time of year. That being said, while we can't 100% guarantee great weather, we CAN guarantee you'll get some GREAT training!
  4. Early registration pricing is now in effect. You can save $200 if you sign up before June 24, 2019. Simply go to our website and enter the following coupon code at checkout: SSBIJULY2019.
  5. Registration includes some valuable resources to take with you, so you can refer to them later. You'll get a copy of Ken's slides, many of which contain handy recipes to walk you step-by-step through the techniques. Additionally, you receive copies of all the class example files, including completed versions for later review. These examples are based on real-world scenarios and provide you with techniques that you can begin applying to your own data right away.

Unpivot Subcategorized Tables with Power Query

So what are you waiting for? Come and join us to revolutionize your reporting process!

Building BI in Excel Course

Are you interested in learning how to clean and shape data with Power Query, as well as how to model it using Power Pivot? Don’t know which of these mysterious skills to tackle first? Want to learn about building BI in Excel where you create refreshable and maintainable solutions?

Good news: Ken Puls will be in Wellington, New Zealand on February 25-26, 2019 leading a live 2-day, hands-on session covering these essential skills!

What does Building BI in Excel cover?

In Day 1, you’ll learn how Power Query can clean up, reshape and combine your data with ease – no matter where it comes from. You can convert ASCII files into tables, combine multiple text files in one shot, and even un-pivot data. These techniques are not only simple, but an investment in the future! With Power Query’s robust feature set at your fingertips, and your prepared data, you can begin building BI in Excel using Power Pivot. The best part is that these dynamic business intelligence models are refreshable with a single click.

Un-pivoting Subcategorized Data

Un-pivoting subcategorized data is easy with Power Query

Day 2 focuses on Power Pivot, a technology that is revolutionizing the way that we look at data inside Microsoft Excel. Power Pivot allows you to link multiple tables together without a single VLOOKUP statement. It also enables you to pull data together from different tables, databases, the web, and other sources like never before. But this just scratches the surface! We'll also focus on proper dimensional modeling techniques and working with DAX formulas to report on your data the way you need to see it.

Top Selling Servers Report

Build dynamic reports that are easy to filter and refresh

Who is this course for?

Building BI in Excel is for anyone who regularly spends hours gathering, cleaning and/or consolidating data in Excel. It's also valuable for anyone responsible for building and maintaining reports. Participants must have experience using PivotTables. Some exposure to Power Pivot and Power Query is not required but is a bonus.

Where do I sign up?

We are offering this course in conjunction with Auldhouse, a leading computer training company in New Zealand. Go to the Auldhouse site and use the following promo code EARLYBIRD20 to give yourself a 20% discount.

This will knock $300 NZ off the course, bringing it down to $1200. That’s $600 per day for pretty much the best introduction to both Power Query and Power Pivot that money can buy! Then use your new skills to free up 90% of your data-wrangling time, giving you time to negotiate a 20% pay increase*. Unbeatable ROI!

Don't miss out, the early bird discount is only available until January 31, 2019! Visit the Auldhouse site today for full details and registration.

*Numbers are indicative only, your mileage may vary. Heck, it may be way better than that!

Power Pivot eBook Coming Soon

It's been a long time coming, but we are putting the finishing touches on the third installment of our free 'DIY BI' series. Consequently, we are excited to announce that the Power Pivot eBook will be officially released on Tuesday, July 3, 2018!

Power Pivot eBook

This brand new book will feature five of Ken's top tips, tricks, and techniques for Power Pivot, including:

  • Hiding fields from a user
  • Hiding zeros in a measure
  • Using DAX variables
  • Retrieving a value from an Excel slicer
  • Comparing data using one field on multiple slicers

About the 'DIY BI' Series

This free eBook series is available to anyone who signs up for the monthly(ish) Excelguru email newsletter. The series includes four books, one edition each for Excel, Power Query, Power Pivot, and Power BI. Each book contains five of our favourite tips, tricks, and techniques which Ken developed over years of research and real-world experience.

DIYBI eBook Series

We first launched this series in the spring of 2017 with the Excel Edition, and the Power Query edition followed later that summer. You can read some more about why Ken decided to create this series in his initial blog post about it.

The Excelguru Newsletter

The monthly Excelguru email newsletter features the latest updates for Excel and Power BI, as well as upcoming training sessions and events, new products, and other information that might be of interest to the Excel and Power BI community.

Don't Miss Out, Get Your Free Copy of the Series

If you're not already a newsletter subscriber, you can sign up here. We will send you the Excel Edition right away, and the Power Query Edition a few days later. All of our current and new subscribers will receive the Power Pivot edition once it is released on July 3, 2018. Be sure to keep an eye on your inbox for the new book.

We will be continuing to work on the fourth and final book, the Power BI Edition, over the coming months so stay tuned for details!