D A T A P L A N T

The Return of Data Science Foundations: What's Changed + What's New for This Decade

My, oh my, have things changed in the world of customer success. While on a Data Science World Tour back in 2018, I started a blog series about the foundations of applied data science in the customer success industry. Back then I shared some breakthrough insights that many of my customers leveraged when building out their customer success programs. Fast forward to 2020: we're now in a post-COVID-19 world where companies and people alike have had to adapt and change. CS is now essential. I have led customer-facing technical teams, managed large account portfolios, and delivered success for many organizations over the years using its tenets. Take advantage of new episodes on subjects including step-by-step breakdowns of how to define true north metrics, emerging methodologies like the ODO Framework, and more.

But first, follow me for a sec for a bigger payoff at the end. For those who do not know, I got my start in the startup/tech industry by participating in 48-hour hackathons. In late 2014, a few friends and I participated in a hackathon called Globalhack II, the second one I had attended. Although the problem we had to solve in the competition was obscure at the time, the solution was essentially to create a system that could process many types of data into one insightful, topic-based view. Sound familiar? One of the key elements of the solution I convinced my team to build was a way to categorize data sources into 4 core types, which would then allow the system to process each accordingly. For example, any .doc files would be transcribed and analysed using natural language processing, whereas any videos loaded into the system would be analysed by image tracking and object recognition models. This macro-categorization-by-source-type approach was the main pitch I brought to my team. Unfortunately, we did not win the competition, but fundamental approaches to solutioning were born that I would leverage later. 120+ customer success configuration projects later, I saw the same thing emerge: the wild landscape of untamed CS data could be tamed into a few core source types.

Side note: I cannot recommend hackathons enough. Some people do call out their flaws and how they can be exploitative of those who participate in them. But one of my number-one reasons for recommending that people participate in hackathons is to learn new problem-solving techniques that can be helpful for new problems in their lives, even if they lose the competition. But I digress. Since writing that blog in 2018, I have found it to be one of the most helpful frameworks I have used as a data scientist and a CSM. So today, join me as we begin to expand on what was started in 2018 and sprinkle in some learnings from the last few years along the way.

RECAP - For those who want the TL;DR of the first article, or who hate reading anything from before 2020: call it another thing to blame on COVID-19, but many methodologies from before 2020 don't hold up the same these days, so that's understandable. Having completed projects right before and in the quarters after the first round of lockdowns, I have some updates and new revelations to share. The frameworks I built and applied in the 2010s throughout my early career still help organizations build better customer success programs today in 2020.

The Primary Data Source Types framework stated that although all customer success data might vary in size, source system, and timeframe, at the core there are only 7 fundamental data source types that all possible sources roll up under. Customer success data problems are uniquely hard because they rely on highly dimensional relationship data. Even the most straightforward questions, like "which customers are most risky," require many interconnected relationships between systems and stakeholders, which in practice leads to tons of custom objects and data sources being managed by companies. Having 7 primary source types, compared to what can feel like an infinite number, simplifies customer success problems. The 7 core types are: Master Records, plus six that are usually time-series based: Subscription, Support, Preparedness, Sentiment, Engagement, and Usage.
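To make the rollup idea concrete, here is a minimal sketch in Python of how a messy catalog of custom objects might be mapped onto the 7 primary source types. The catalog entries (e.g. "zendesk_tickets", "nps_responses") are hypothetical examples of my own, not objects from any specific CRM:

```python
# The 7 primary source types from the framework.
SOURCE_TYPES = {
    "Master Records", "Subscription", "Support",
    "Preparedness", "Sentiment", "Engagement", "Usage",
}

# Hypothetical mapping of real-world (often custom) objects to a primary type.
source_catalog = {
    "accounts": "Master Records",
    "contacts": "Master Records",
    "invoices": "Subscription",
    "zendesk_tickets": "Support",
    "onboarding_milestones": "Preparedness",
    "nps_responses": "Sentiment",
    "qbr_meetings": "Engagement",
    "product_events": "Usage",
}

def primary_type(source_name: str) -> str:
    """Resolve a raw source name to one of the 7 primary types."""
    source_type = source_catalog.get(source_name)
    if source_type not in SOURCE_TYPES:
        raise ValueError(f"Uncategorized source: {source_name}")
    return source_type

print(primary_type("zendesk_tickets"))  # Support
```

Once every source resolves to one of 7 buckets, downstream logic (risk scoring, reporting, pipelines) only has to handle 7 shapes instead of one per custom object.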

The first of the data source types is the most foundational. Master Records are all of the named things that show up in your other data sources: Accounts, Contacts, Products, Campaigns, etc. No matter what your universe consists of, all of the named things will be represented as Master Records in any CRM/database. When data architectures are built right, Master Records have unique identifiers that show up as foreign keys in the other 6 time-series based data sources: Subscription, Support, Preparedness, Sentiment, Engagement, and Usage. The time-series based source types are all similar because they all represent relationships between different types of Master Records over time. For example, if you submit a ticket to your cellphone provider because your cellphone is not working, the ticket record created in your cellphone provider's database has an ID for you as a user as well as an ID for the product you called in about, capturing the relationship between you and the product you have. There is also a timestamp of when the ticket was created. This means any data that shows metrics and dimensions on the interaction between a company or person and your product's support can essentially be treated the same. This principle is what underpins the power of having 7 fundamental source types. Custom object chaos becomes order.
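The cellphone-ticket example above can be sketched as a few small records. This is an illustrative Python data model under my own assumptions, not any vendor's schema; the class and field names are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Contact:          # Master Record: a named person
    contact_id: str
    name: str

@dataclass
class Product:          # Master Record: a named product
    product_id: str
    name: str

@dataclass
class SupportTicket:    # Time-series record (Support source type)
    ticket_id: str
    contact_id: str     # foreign key -> Contact
    product_id: str     # foreign key -> Product
    created_at: datetime
    summary: str

you = Contact("C-001", "Jordan")
phone = Product("P-100", "Model X Handset")
ticket = SupportTicket(
    ticket_id="T-555",
    contact_id=you.contact_id,
    product_id=phone.product_id,
    created_at=datetime(2020, 6, 1, 9, 30),
    summary="Phone will not power on",
)

# The ticket points back to the Master Records it relates, plus a timestamp.
print(ticket.contact_id == you.contact_id)  # True
```

Every time-series record, whatever the source system, reduces to the same shape: who, what, and when. That shared shape is why wildly different custom objects can all be processed as one source type.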

What has surprised me the most since first sharing my journey back in 2018 is how well the concepts I introduced in the Data Science Foundations series have held up. I am no Einstein or Newton by any stretch, but I did not expect how well all of the learnings would stand the test of dozens of customer success data projects. As you can imagine, being in a high-volume, customer-facing technical environment for many years could be stressful, but the good part was that it allowed me to rapidly iterate on my frameworks in a relatively short time frame. And while the primary data source types framework has been mostly solid, and it is great to highlight an old framework beating the odds and holding up in a new economic world, there are some areas that have evolved and improved over the last few years. So without further ado, let me give you the latest and greatest since the 2018 release of the Data Science Foundations series.

Look out for Pt. 2 HERE and Pt. 3, which break down what has changed in the Primary Data Source Types framework and introduce a new methodology that will define this decade, called Intentionality.