This course is as practical as it gets and it's based on the assumption that AI is a platform shift, similar to mobile 15 years ago and the web before that. For that reason, its main goal is to teach how to build useful and monetizable apps on top of this new platform.
The course is split into three sections:
The first one is about foundational ChatGPT knowledge. You want to go through this section in detail to understand the main concepts. This is the knowledge that will allow you to build apps.
The second section is about real-world projects that focus on business applications. For each project in this section, there is at least one successful business today (often more than one). Basically, I picked a few successful AI companies and reverse-engineered how to implement their AI component. These projects are offline scripts, similar to a Jupyter Notebook or an R Markdown data science analysis. The main goal of this section is to give ideas about possible projects by showing concrete examples of what people are successfully doing. I would read through the projects fairly quickly, making sure everything is clear. Then, if you build a project similar to one of them, use that code as a template.
The last section takes a couple of the projects from the previous section and turns them into production apps, essentially making them monetizable. This is written from the perspective of a data scientist. That is, the production apps are in either R Shiny or Python. You should be able to reuse most of the code in this section for your own project.
Speed of execution is the most important thing, by far. I'd advise going through the course as quickly as possible in order to get to the point where you can build something fully functional. This is always good advice in tech, but it is particularly true now.
When you are early in a platform shift, the opportunities are incredible because (a) there are so many new things that can be built and (b) the competition is much lower. Speed is also crucial because right now text data is easily accessible via scraping. But given that text data has suddenly become so valuable, websites might make scraping much harder or even impossible. So it's better to take advantage of it while it lasts.
It wasn't a platform shift, but just to give an example of the importance of being early: in 2013 I interviewed for data scientist roles at various tech companies (Airbnb, FB, etc.). Those were the early days of data science. The hardest questions were about grouping by and finding max, min, etc. in R, and self-joins in SQL. Today, that would be considered so easy it's almost unthinkable. Product sense questions, window functions, etc. didn't even exist until two years later. It's very likely that a few years from now people will look back at 2023 and think: wow, could these people really make money with AI apps that simple?
The course is largely based on what I learned by building and selling a ChatGPT app to Colgate. The app was about extracting information from online reviews. A project very similar to that app is in the second section (the two lessons whose title is "Product Data Science via ChatGPT - Identify Weaknesses/Strengths").
You might be hearing a lot about B2C ChatGPT apps and random solo developers making tons of money. However, what's happening in B2B is arguably much bigger. It's just that people don't openly talk about it. One revolutionary aspect is that there is no integration needed. That is, all by yourself you can scrape text data that's valuable to a given company, build a ChatGPT app on top of it, put it on your server, and show it to them. You don't need anything from that company in order to actually build the product: no PII data approval, no integration with their infrastructure, no need to figure out how to extract data from their messy tables, etc.
In the course I use OpenAI/ChatGPT because at the moment it is clearly the best choice in terms of result quality and ease of use. Its cost is very low and definitely worth it. I did look into self-hosting my own model, but eventually it didn't make much sense to me. There are so many opportunities right now in building ChatGPT apps that the added complexity of hosting my own model didn't make sense. I am not saying that it doesn't make sense in general. I am just saying that if your goal is to get something to production fast and ChatGPT can fulfill the requirements (e.g., it is OK to send that kind of data to their API), I would stick with it. If this changes and, for instance, someone creates something better, I will update the course accordingly.
ChatGPT answers are not fully deterministic. That is, if you re-run the code in the course, it is likely that you won't get exactly the same answers word for word. However, the differences are going to be minor and the main concepts are going to be consistent (if there are major API updates that make the answers significantly better, I will update the course).
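A minimal sketch of how you can reduce (though not eliminate) this run-to-run variance: the chat completions API exposes a `temperature` parameter, and setting it to 0 makes the model pick the most likely token at each step. This example assumes the official `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name and prompt are illustrative, not from the course.

```python
import os

# Request payload for the chat completions API. The model name and
# prompt below are placeholders, not the ones used in the course.
request = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Summarize this review: ..."},
    ],
    # temperature=0 means greedy (most likely) token selection,
    # which makes answers far more repeatable across re-runs,
    # though the API still does not guarantee identical output.
    "temperature": 0,
}

# Only call the API if a key is actually configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(**request)
    print(response.choices[0].message.content)
```

Even at temperature 0, wording can still drift slightly between runs, which is why the consistency caveat above matters more than chasing exact reproducibility.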