This past weekend we published a major redesign at Skillwave.Training. Months in the making, this has been a total overhaul to focus on delivering the best online learning experience for our clients. Check out some of the images from the new site:
When you log in, you’ll be taken to your Dashboard immediately. This is the one-stop console that lets you access any of your active course subscriptions, review forum posts, download your files, and manage your billing and profile details. We’ve worked hard to make this dashboard as intuitive and easy to use as possible, and to make it look great on mobile as well.
Re-Designed Course Player
The course player is completely custom built as well. Of course, you’d expect to see your navigation menu on the left to get to your lessons, but we’ve also added a “Materials” fly-out menu on the right where you can access files specific to any given lesson.
Community Forum Overhaul
We said it was a major redesign at Skillwave.Training, and we meant it. One of our big goals here was to do a better job with the Skillwave help forum and foster a sense of community within it. Our belief is that learning is great, but there can be another hurdle when trying to convert theory into practice with your own data. We see the forum experience and Skillwave community as a crucial part of solving this issue, giving students the ability to:
Ask questions about the course materials,
Get help with applying techniques to their own data,
Interact with other people in the same training,
Practice applying their skills to other data sets, and
Reinforce their knowledge and help others in the process.
Any of our clients with an active subscription to one of our paid products will find a completely revamped forum experience. As forum posters ourselves, there were a few key activities we wanted to make sure our community had a good set of tools for:
Asking: To this end, we’ve made sure that we support topic tags, image and file uploads, code tags, and a variety of rich formatting options. (Our old forum was quite weak in this regard.)
Answering: In addition to the tools above, we’ve added the ability to mark questions as solved. Our forums are searchable by topic tag, answered status, solved status, and more.
Ensuring high-quality answers: Our forum is private and monitored by our admin team. Even if Matt, Miguel, or I aren’t the ones answering specific questions, we have a special “Recommended Answer” tag that we can apply to answers. This serves two purposes for us: the first is assuring the asker that they got a great answer, while the second is validating a poster who has provided a high-quality response.
Course to Question Integration
There’s one more really cool thing though… We also now give you the ability to post a forum question directly from a given lesson and provide links to all other questions that have been posted in this manner. This serves both askers and answerers as it links directly back to the source of the question. We’re super proud of this little feature and feel that it sets us apart from other platforms out there. Not because other platforms don’t offer the ability to ask questions – they do. But we serve all of that up right inside the lesson page.
Check Out the Major Redesign at Skillwave.Training
If you haven’t checked out Skillwave.Training yet, you really should. We’ve got all kinds of great courses related to Excel, Power BI, Power Query and DAX. You can even try out the platform via our free Power Query Fundamentals course. You won’t have access to the forums on the free tier, but you’ll be able to experience the rest of our new platform.
As we've just launched the site, we'd love to get your feedback. For the next month or so, you can do that by clicking the little Feedback widget on the right side of any site page. Let us know what you think!
You know the drill… extract, transform and load your data, relate your tables, then create basic DAX measures. All work that needs to be done before you can really get started on analyzing your data. Today we’ve unleashed the Measure Monkey to help speed up that process a bit for you. (You can think of the Measure Monkey as Quick Measures for Excel.)
If you follow Monkey Tools already, you’ll know that our goal is to help you build better models faster. We already include helpful functions such as:
injecting a query that can automatically switch between local folders and SharePoint folders (see the sketch after this list),
managing your queries via our QuerySleuth,
building calendar tables on the fly against your data,
and so much more...
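To give a flavour of that first item, here's a minimal M sketch of the switching pattern (the flag, local path and SharePoint URL are hypothetical placeholders, not the exact query Monkey Tools injects):

let
    // Hypothetical flag: true when the source files live on SharePoint
    UseSharePoint = false,
    // Pick the matching connector for the current environment
    Source = if UseSharePoint
        then SharePoint.Files("https://contoso.sharepoint.com/sites/Data")
        else Folder.Files("C:\Data")
in
    Source

Either branch returns the same folder-style table of files, so every step downstream of the Source step works unchanged.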
But while we’ve had a nice tool to trace DAX query chains, we haven’t included a lot of DAX functionality to date. That is changing today. And oh… before we dive into it, I want to be clear that this feature will be available to ALL users of Monkey Tools. Yes, even those of you using a Free license!
The Sample Model
Before we dive into this, let’s take a look at a sample data model:
Notice that everything is nicely created and linked (by the way - we created that calendar in a few seconds with Monkey Tools’ Calendar Monkey…) but that there are no DAX measures on our Sales and Budget tables. Date and Category are both foreign keys that link each of those tables to the Calendar and Categories tables. However, we really want explicit measures to sum both the Sales[Amount] and Budget[Amount] columns.
Of course, these measures are easy to write, but what if your model is a bit more complicated and there are a ton of them to do?
Creating Explicit Measures in Bulk with the Measure Monkey
As of version 1.0.7599.31348, you’ll find a new Measure Monkey menu on the Monkey Tools ribbon for this exact purpose:
Step 1A: Which Tables Host The Columns To Aggregate?
When you launch the new feature, you’ll be taken to a screen that looks like this:
This screen is intended to let you tell the Measure Monkey which tables hold the columns you need to aggregate. Our aim in this screen is to give you the highest possible chance of just clicking "Next". That being said, we realize that this may not work for everyone, so we also allow some flexibility here.
In the top left, we pre-select the tables which we believe have the highest chance of needing aggregation: your fact tables. (Those tables with only ‘many’ sides of relationships attached to them.) But if we get this wrong for you, you simply need to check the other boxes to include basic aggregations for other tables. (Ideally, you shouldn’t be aggregating dimensions, but there are – of course – exceptions to every rule.) You’ll get immediate feedback in the box in the bottom left, as we show all the tables that will be included based on your checkbox selections.
Step 1B: Tell the Measure Monkey Where to Store Your New Measures
In the top right, we also allow you to tell us where you want to store the measures. If you have created a specific “Measures” table, we’ll select that by default. If you haven’t, we’ll offer to store the measures on the Host Table. (In other words, all measures created to aggregate columns from the Budget table will be stored on that table, whereas measures for columns from Sales will be stored on Sales.)
Forgot to set up a new Measures table before doing this? No worries, click the + to add a measures table on the fly, give it a name, and we’ll create it for you:
There are a couple of Advanced options as well, but we believe most people will want to leave these at their defaults. So let’s click Next to go to page 2…
Step 2: Choose Your Aggregations
This page contains a ton of info, but again, we’re trying to give you the best chance of clicking “Create” right away. Unfortunately, that’s something we can’t do in the image above…
The reason our Create button is disabled is that we have two measures offered with the name “Sum of Amount”. The blue one is the first instance, and any subsequent measures with the same name will highlight in red. So let’s fix those, and choose a default data type format:
It’s all good to go now, except that I want to add a “Transactions” measure that counts the rows of the Sales table. So I’m going to click the “Add another aggregation” button in the Sales table, then choose the name of the table from the drop-down list:
That will give me a new row with a “Count Rows of Sales” measure, which I can quickly rename to “Transactions” before clicking “Create”.
During this process, the Measure Monkey will create your measures for you. Plus, if you created a Measures table, it gives you some advice on how to make it an “official” measures table. You can see the results in my data model here:
That was Easy…
The demo above was obviously a fairly simple model, yet it still cut the time to create these two explicit measures down to less than a minute. Now consider the time savings when your model gets a bit more complicated:
So how do you get the Measure Monkey menu?
This update to Monkey Tools is available in Monkey Tools 1.0.7599.31348 or higher. And it will be a “forever free” feature, so you’ll be able to use it on either a Free or Pro license!
If you already have Monkey Tools installed, it will automatically update within a couple of weeks. Alternatively, you can request the update now by going to Monkey Tools -> Options -> Check For Update Now…
We've been kind of quiet here, but we're excited to announce that we've just published an update to Monkey Tools' QuerySleuth feature. It now offers a "tabbed" experience so that you can easily flip back and forth between queries, "pinning" the ones you want to see and compare.
The Updated QuerySleuth Interface
In this case, you'll notice that I pinned the ChitDetails and ChitHeaders queries, then selected the Locations query from the left menu.
Why does this matter? Did you notice that the ChitDetails and Locations tab names are both red? That's because I made changes to both of them to update a data type... I can now hold onto those changes as I flip back and forth between JUST the queries I want to keep in focus.
Updating Multiple Queries
But now, of course, I want to commit my changes and force the data model to update to reflect those changes. In this image, I'm doing just that, with three queries:
And due to the selection pointed out by the arrow, each of these queries will not only get saved back to the Power Query engine, but a refresh of each query will be triggered as well.
So how do you get this update to Monkey Tools QuerySleuth?
This update to Monkey Tools QuerySleuth is available in Monkey Tools 1.0.7553.5975 or higher. And it's available in both the free and Pro versions of the tool. (Of course, you will still need a Pro version in order to actually save your queries.)
On this blog, I showcase a lot of different techniques for manipulating and reshaping data. Anyone who follows the blog already knows this, and knows it's a pretty important topic to me. But the thing we shouldn't lose sight of is WHY we do this: to drive analytics. I'm fairly convinced that the majority of the loyal readers here already know this. Thus, I wanted to ask your opinion on something...
How do you design your data model?
What I'm specifically interested in is how you approach designing the Fact and Dimension tables you use for your Power Pivot model. And I'm not specifically talking about Power Query here. We all know you should be using what you learned from our recently relaunched Power Query Academy to do the technical parts. 😉
What I'm more interested in is the thought process you go through before you get to the technical bit of doing the data reshaping.
If you read books on setting up a data model, you'll probably be told that you need to do the following four steps:
Identify the business process
Determine the grain of the model
Design your Dimension tables
Design the Fact tables
So if you're asked "how do you design your data model", do these steps resonate with you, and why?
Do you consciously sit down, and work through each of these steps in order? I suspect that many self-service BI analysts skip the first step entirely as they are implicitly familiar with their business process. (As a consultant, I ask a lot of questions in this area to try and understand this before building anything.)
Do you design the reports on paper, then work backwards to the data you'll need, go find it and reshape it? Or do you go the other way, trying to collect and reshape the data, then build reports once you think you have what you need?
Do you explicitly define the model grain? And if you do, what does that mean to you? Is it restricted to "I want transactions on a monthly/daily/hourly basis"? Or do you go deeper, like "I want transactions on a daily basis and want to break them down by customer, region and product"?
Why the question?
There are actually two reasons why I'm asking this question:
Reason 1 is that I think healthy discussion makes all of us better. I'd like to hear your thoughts on this, as I'm probably going to learn something that I haven't discovered in my own learning journey.
Reason 2 is that my whole business is built around teaching people how to do these things, and I'm always looking to make things clearer. The more opinions I hear (even if they contrast with each other), the more I can help people understand this topic.
So sound off, please! We'd all love to hear how you approach the task of building a data model.
Are you interested in learning how to clean and shape data with Power Query, as well as how to model it using Power Pivot? Don’t know which of these mysterious skills to tackle first? Want to learn about building BI in Excel where you create refreshable and maintainable solutions?
Good news: Ken Puls will be in Wellington, New Zealand on February 25-26, 2019 leading a live 2-day, hands-on session covering these essential skills!
What does Building BI in Excel cover?
In Day 1, you’ll learn how Power Query can clean up, reshape and combine your data with ease – no matter where it comes from. You can convert ASCII files into tables, combine multiple text files in one shot, and even un-pivot data. These techniques are not only simple, but an investment in the future! With Power Query’s robust feature set at your fingertips, and your prepared data, you can begin building BI in Excel using Power Pivot. The best part is that these dynamic business intelligence models are refreshable with a single click.
Un-pivoting subcategorized data is easy with Power Query
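If you haven't seen that last trick before, un-pivoting is a one-step operation in Power Query's M language. Here's a minimal sketch (the table and column names are hypothetical):

let
    // Hypothetical source: one row per category, one column per month
    Source = Excel.CurrentWorkbook(){[Name = "Budget"]}[Content],
    // Collapse the month columns into Month/Amount pairs, keeping Category fixed
    Unpivoted = Table.UnpivotOtherColumns(Source, {"Category"}, "Month", "Amount")
in
    Unpivoted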
Day 2 focuses on Power Pivot, a technology that is revolutionizing the way that we look at data inside Microsoft Excel. Power Pivot allows you to link multiple tables together without a single VLOOKUP statement. It also enables you to pull data together from different tables, databases, the web, and other sources like never before. But this just scratches the surface! We'll also focus on proper dimensional modeling techniques and working with DAX formulas to report on your data the way you need to see it.
Build dynamic reports that are easy to filter and refresh
Who is this course for?
Building BI in Excel is for anyone who regularly spends hours gathering, cleaning and/or consolidating data in Excel. It's also valuable for anyone responsible for building and maintaining reports. Participants must have experience using PivotTables. Prior exposure to Power Pivot and Power Query is a bonus, but not required.
Where do I sign up?
We are offering this course in conjunction with Auldhouse, a leading computer training company in New Zealand. Go to the Auldhouse site and use the promo code EARLYBIRD20 to give yourself a 20% discount.
This will knock NZ$300 off the course, bringing it down to $1,200. That’s $600 per day for pretty much the best introduction to both Power Query and Power Pivot that money can buy! Then use your new skills to free up 90% of your data-wrangling time, giving you time to negotiate a 20% pay increase*. Unbeatable ROI!
Don't miss out, the early bird discount is only available until January 31, 2019! Visit the Auldhouse site today for full details and registration.
*Numbers are indicative only, your mileage may vary. Heck, it may be way better than that!
It's been a long time coming, but we are putting the finishing touches on the third installment of our free 'DIY BI' series, and we are excited to announce that the Power Pivot eBook will be officially released on Tuesday, July 3, 2018!
Power Pivot eBook
This brand new book will feature five of Ken's top tips, tricks, and techniques for Power Pivot, including:
Hiding fields from a user
Hiding zeros in a measure
Using DAX variables
Retrieving a value from an Excel slicer
Comparing data using one field on multiple slicers
About the 'DIY BI' Series
This free eBook series is available to anyone who signs up for the monthly(ish) Excelguru email newsletter. The series includes four books, one edition each for Excel, Power Query, Power Pivot, and Power BI. Each book contains five of our favourite tips, tricks, and techniques which Ken developed over years of research and real-world experience.
We first launched this series in the spring of 2017 with the Excel Edition, and the Power Query edition followed later that summer. You can read some more about why Ken decided to create this series in his initial blog post about it.
The Excelguru Newsletter
The monthly Excelguru email newsletter features the latest updates for Excel and Power BI, as well as upcoming training sessions and events, new products, and other information that might be of interest to the Excel and Power BI community.
Don't Miss Out, Get Your Free Copy of the Series
If you're not already a newsletter subscriber, you can sign up here. We will send you the Excel Edition right away, and the Power Query Edition a few days later. All of our current and new subscribers will receive the Power Pivot edition once it is released on July 3, 2018. Be sure to keep an eye on your inbox for the new book.
We will be continuing to work on the fourth and final book, the Power BI Edition, over the coming months so stay tuned for details!
Yes, you read that right… Power Pivot is coming to all Office SKUs and the rollout has already started for those on subscription versions of Office.
This is something I've been championing (along with many others) since Power Pivot was initially rolled into Excel 2013. The whole need for Pro Plus licensing, which was initially even restricted to enterprise licensing, was a huge mistake in my opinion.
Today, the Excel team updated the Excel UserVoice request to make Power Pivot available in all SKUs of Excel to say "we're doing it". Finally! So yes, if you're running the Home and Student or Business Premium plans, you'll finally have access to Power Pivot!
When is Power Pivot coming to all Office SKUs?
How long will it take for Power Pivot to show up in your version of Excel? That depends upon what you purchased…
Excel 2013/2016 non-subscription users
Unfortunately, you need to upgrade to a subscription version of Office or to Office 2019 (whenever it comes out). They're not going to backport it to those versions, as far as I'm aware.
Excel Subscription users
You have two options:
Wait till it shows up. Microsoft has said that they are already rolling this out to people on the April Current Channel (build 9330 and higher). So depending on where you are in the cycle, it will just show up one day.
Install an Insider Preview in order to jump the queue. Keep in mind that this isn't for everyone, as this is beta software, but if you're interested you'll also get access to newer features like the new data types, Insights, and more.
How do you know your Excel SKU, version and what channel you are on?
(Please note that Channel is only applicable to Subscription users)
Go to File --> About
How do you get on the Insider Channel?
It depends on the SKU of Office you purchased.
Consumer Office Versions (Office 365 Home, Personal, and University)
What is with Excel Tables and the Data Model? Believe it or not, this is not the question I started with today; it was actually "which is faster: loading from CSV files or Excel?" My initial results brought up a surprising - and very different - question, which has become the subject of this post.
The testing stage:
Let's start by setting the background of my test setup…
What does the test data look like?
I wanted to test the difference in load speeds between data stored in an Excel table and a CSV file. To do that, I started with a CSV file containing 1,044,000 rows of data, which looks like this:
What does my test query actually do?
The query to collect this data only has a few steps:
Connect to the data source
Promote headers (if needed)
Set data types
Load to the Data Model
Nothing fancy, and virtually no transformations.
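For reference, the CSV version of that query looks something like this in M (the file path, delimiter settings and column names are placeholders, not my actual test file):

let
    // Connect to the data source (hypothetical path)
    Source = Csv.Document(File.Contents("C:\Test\SpeedTest.csv"), [Delimiter = ",", Encoding = 65001]),
    // Promote the first row to headers
    Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // Set data types on the (placeholder) columns
    Typed = Table.TransformColumnTypes(Promoted, {{"Date", type date}, {"Category", type text}, {"Amount", type number}})
in
    Typed

The other test cases essentially just swap the Source step; everything downstream stays the same.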
I decided to load the data into the Data Model, as I figured that would be fastest. And during testing, I decided to expand the locations from which I was pulling the source data. What I ended up testing as data sources (using the same data in each case) were:
A table in the same workbook
A named range in the same workbook
A CSV file
A table in a different workbook
A named range in a different workbook
And just for full transparency here, I turned Privacy settings off and turned on Fast Data Load, trying to get the best performance possible. (I wanted to run the tests multiple times, and hate waiting…)
Your turn to play along…
All right, enough about the test setup, let's get into this.
Just for fun, which do you think would be the fastest of these to load the data to the Data Model? Try ranking them from what you expect would be the best performing to the worst performing, i.e. which will refresh the quickest, and which will refresh the slowest?
For me, I expected the order to be exactly what I listed above. My thinking was that data within the workbook would be "closest" and therefore perform better, since Excel already knows about it. I'd expect the table to be more efficient than the range, since Excel already knows the table's data structure. But I could see CSV having less overhead than an external Excel file, since there are fewer parts to a CSV file than an Excel file.
And now for the great reveal!
These were generated by averaging the refresh times of 10 refreshes, excluding the initial refresh. (I wanted a refresh, not the overhead of creating the model.) I shut down all other applications, paused all file syncing, and did nothing else on the PC while the timing tests were running. The main reason is that I didn't want anything impacting the tests from an external process.
Okay, I hear you… "what am I seeing here?" It's a Box & Whisker plot, intended to show some statistics about the refresh times. The boxes show the 2nd and 3rd quartiles of the refresh times, and the whiskers show the spread of the remaining times. The fact that you can barely see those tells you that there wasn't a lot of significant variation in the testing times. To make the impact a bit easier to see, I also added data labels showing the mean refresh time for each data source in seconds.
So basically the time to refresh 1,044,000 rows breaks down like this:
Pulling from CSV was fastest at 8.1 seconds
Pulling from a table in a different Excel file took 11.5 seconds
Pulling from a regular range in a different Excel file took 11.8 seconds
And then we hit the laggards: pulling from a named range in the current Excel file took 67.3 seconds, and finally, pulling up the tail end of this performance test, loading data from a local Excel table into the Data Model took 67.5 seconds.
I even changed the order in which the queries refreshed (not included in the plotted data set), but still saw no noticeable difference.
Wow. Just wow.
Let's be honest, the table vs range is a negligible performance variance. At 0.2 to 0.3 seconds, I'd just call those the same. I'll even buy that pulling from a CSV is quicker than from an external Excel workbook. Less structure, less overhead, that makes sense to me.
But the rest… what is going on there? That's CRAZY slow. Why would it be almost 6 times slower to grab data from the file you already have open than to grab it from an external source? That's mind-boggling to me.
Is there a Data Model impact?
So this got me wondering… is there an impact due to the Data Model? I set it up this way in order to be consistent, but what if I repointed all of these so that they loaded into tables instead of the Data Model?
Here's the results of those tests - again in a Box & Whisker chart. The data labels are calling out the average refresh time over those 10 tests, and the error bars show how much variation I experienced (the largest spread being about 2.3 seconds):
To be honest, I actually expected loading to a table to be slower than loading directly into the Data Model. My reasoning is that Excel needs to set up the named ranges, table styles and such, which the Data Model doesn't really need. And these tests actually support that theory to a degree. When loading from CSV, it was almost 10% faster to go direct to the Data Model (8.1 seconds) rather than to a worksheet table (8.8 seconds). (There is also virtually no variation in the refresh times for CSV, so it's quite consistent.)
Loading from tables and ranges in other workbooks also saw some slight performance gains by going directly to the Data Model, rather than landing in an Excel table.
But the real jaw-dropper is the range and table from the current workbook. Now don't get me wrong, I can't see ever needing to grab a table and load it to another table with no manipulation; that's not the point here. All I was trying to do is isolate the Data Model overhead.
What is with Excel Tables and the Data Model?
So what is with Excel Tables and the Data Model? I'm actually not sure. I've always felt that Power Pivot adds refresh overhead, but holy smokes, that's crazy. And why does it only happen when reading from the current file? I don't have an answer. That's the last place I'd expect to see it.
So what do we do about it?
If performance is a major concern, you may not want to pull your data from an Excel table in the same workbook. Instead, land the data in an Excel table in a separate workbook, save and close it, then use Power Query to pull that file into the Data Model. If you're pushing a million rows, it may be worth your time.
Something else I tried, although only in a limited test, was landing my query in a worksheet and then linking that table to the Data Model. Oddly, this doesn't seem to have a huge impact on the Data Model refresh (meaning it doesn't carry the massive overhead of loading from a table to the Data Model via Power Query). Of course, it limits your table to 1,048,575 rows of data, which sucks.
I'm not sure if this is a bug or not (it certainly feels like one), but it certainly gives you something to think about when pulling data into your Power Pivot solution.
Working around the issue...
First off, thanks to AlexJ and Lars Schreiber for this idea... they asked what would happen if we pulled the data via Excel.Workbook() instead of using the Excel.CurrentWorkbook() method in Power Query. (The difference is that you get Excel.Workbook() when you start your query from Get Data --> Excel, and you get Excel.CurrentWorkbook() when you start your query via Get Data --> From Table or Range.)
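To make the difference concrete, here's a minimal sketch of the two patterns (the file path and table name are hypothetical):

let
    // Excel.CurrentWorkbook() reads the live, in-memory copy of this workbook
    FromCurrent = Excel.CurrentWorkbook(){[Name = "Sales"]}[Content],
    // Excel.Workbook() reads the last saved copy on disk - even for the file you're in
    FromSaved = Excel.Workbook(File.Contents("C:\Test\SpeedTest.xlsx"), true){[Item = "Sales", Kind = "Table"]}[Data]
in
    FromSaved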
Using Excel.Workbook() to pull the contents from the file itself, in a single test, returned results of 11.4 seconds, which is right in line with pulling from an external source. So it's starting to look like Excel.CurrentWorkbook() doesn't play nicely with the Data Model. (It doesn't seem to have an impact when loading to tables, as shown above.)
Of course, one big caveat is that Excel.Workbook() doesn't read from the current data set; it reads from the most recently saved copy of the file.
So this gives us an opportunity here... if we cook up a macro to save the file, then refresh the query via the external connector, we should get the best performance possible. To test this, I did exactly that: saved the file, then refreshed the data via the Excel.Workbook() route. In two tests I ended up at 12.02 seconds and 12.39 seconds, so it looks like it works. While that's not an extensive study, the saving process adds only a bit of overhead, which is more than made up for by avoiding the refresh lag.
Here's a copy of the macro I used to do this:
Sub SaveThenRefresh()
    ' Save first, so the external connector reads the current data
    ThisWorkbook.Save
    ' Then refresh the query that pulls this file via Excel.Workbook()
    ' (the connection name follows Power Query's "Query - <name>" convention)
    ThisWorkbook.Connections("Query - Current_via_ExternalLink").Refresh
End Sub
How to protect Power Queries is a question that will come up after you've built a solution that relies heavily on Power Query, especially if you're going to release it to other users.
(This is a quick post, as I'm in Australia at the Unlock Excel conference, but still wanted to get a post out this week.)
Can you Protect Power Queries?
The answer to this is yes, you can. It’s actually very easy, and it prevents your users not only from modifying your queries, but also from adding new queries to the workbook. Essentially, it shuts the door on any additions or modifications to query logic, while still allowing queries to be refreshed… at least, it should.
So how do we Protect Power Queries?
To protect Power Queries we simply need to take advantage of the Protect Workbook Structure settings:
In Excel (not Power Query), go to the Review tab
Choose Protect Workbook
Ensure that Structure is checked
Provide a password (optional)
Confirm the password (if provided)
Once you’ve done this, the Power Query toolsets will be greyed out, and there is no way for the user to get into the editor.
Does refresh work when you Protect Power Queries?
This part kills me. Seriously.
The answer to this question depends on whether or not you use Power Pivot. If you don't, then yes, you're good to go. As long as all your tables land on worksheets or as connections, then a refresh will work even when you protect Power Queries via the Protect Workbook method.
If, however, you have even a single Power Query that lands in the data model, you're stuffed. If Power Pivot is involved, then the refresh seems to silently fail when you protect Power Queries using this method (and I don't know of another method short of employing VBA, which is a non-starter for a lot of people).
It's my feeling that this is a bug, and I've sent it off to Microsoft, hoping that they agree and will fix it. We need a method to protect both Power Query and Power Pivot solutions, and this would do it, as long as the refresh will consistently work.
Caveats about locking your workbook structure:
Some caveats that are pretty standard with protection:
Losing your password can be detrimental to your solution long-term. Make sure you have some kind of independent system to log your passwords so this doesn’t happen to you. And if your team is doing this, make sure you audit them so you don’t get locked out when a staff member leaves for any reason.
Be aware that locking the workbook structure also locks the ability for users to get into Power Pivot.
Workbook security is hackable with brute force macro code available on the internet for free. (Please don’t bother emailing me asking for copies or links to this code. I don’t help in disseminating code which can be used to hack security.) While protecting the workbook structure will stop the majority of users from accessing your queries, it should not be mistaken for perfect security.
Last week I announced that we are working on a series of free 'DIY BI' e-Books. We've been hard at work polishing the first one up, and I'm pleased to announce that it launches tomorrow! It will be emailed at 9:00 AM Pacific Time to everyone on our newsletter list.
Sign up to get the free 'DIY BI' e-Book series
If you haven't already, sign up for our mailing list to receive your copy! You can do so at the bottom of this post.
Creating the 'DIY BI' e-Book
I'm really thankful that I have a team of people behind me for this. For me, the technical writing is actually the easy part. It certainly takes time, don't get me wrong, but the magic of copy editing, proofreading and graphic design is a whole other story.
Deanna has done a great job of proofing the book, and making me re-write any paragraphs that sounded good in my head, but maybe didn't translate so well beyond that. And Rebekah has done a phenomenal job on the graphic design and layout.
Each book will be themed as shown below:
Blue for Excel, based on the Excelguru website colour scheme. Dark green for Power Query (like the powerquery.training site), light green for Power Pivot (like the Power Pivot logo), and yellow for Power BI, like its colour scheme.
The 'DIY BI' e-Book Cover
We wanted to create a cool cover, but most of the stock images for sale out there have a Mac in the picture. Since three of these four technologies won't work on a Mac, that plainly wasn't something we wanted to put out there. So we staged our own photo shoot to generate our cover - which I'll admit was a lot harder than I thought it would be. Here's the finished cover for the first 'DIY BI' e-Book.
Our next e-Book will use the same cover image, but will be themed in the dark green of the Power Query series.
And yes, before you all ask, that IS a Pie Chart in the bottom left. And no, I don't love pie charts. But sometimes you have to have one, because your boss asks for it. (Just don't expect to find one INSIDE any of the e-Books!)
Reserve Your Free 'DIY BI' e-Book Now
If you're already receiving our newsletter, there is nothing else you need to do. It will show up in your inbox shortly after 9:00 AM Pacific time on April 7, 2017. If you're not on our newsletter list yet, just sign up. It's that easy!