Is ChatGPT a certified Tableau Desktop Specialist?
What ChatGPT got wrong and what it got right.
ChatGPT by OpenAI has been put to the test in many different scenarios since its release at the end of November last year. The Wall Street Journal tested the program a couple of weeks ago in the context of AP Lit; you can check out the video here. It got me thinking about testing the chatbot's knowledge of Tableau by having it take a Tableau Specialist practice exam provided by Learning Tableau.
Out of a total of 53 points, ChatGPT scored 37, or about 70%, falling short of the 75% required to pass the real exam. You can view the prompts and the model's output here. So what did it get right, and what did it get wrong? Let's take a look.
The Tableau Specialist Exam is designed to test a person's knowledge and understanding of the Tableau software and its capabilities. It covers the foundations of Tableau Desktop and is recommended for candidates with at least three months of experience using the program.
So, how well did ChatGPT do on the practice exam? A score of 37 out of 53 is certainly not perfect, but it is not a complete failure either. One area where ChatGPT excelled was explaining when a live connection is more appropriate than an extract. It also correctly named and explained the file types used in Tableau (e.g. .twb, .twbx, .tde, and .hyper).
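As a quick aside for readers who haven't worked with extracts: .hyper is the extract format that replaced .tde in Tableau 10.5, and Tableau publishes a Python Hyper API for building these files outside of Tableau Desktop. The snippet below is a minimal, illustrative sketch of creating a .hyper extract with the tableauhyperapi package; the file name, table, and sample rows are made up for demonstration.

```python
# Minimal sketch: build a small .hyper extract with Tableau's Hyper API.
# (pip install tableauhyperapi; file/column names here are illustrative only.)
from tableauhyperapi import (
    Connection, CreateMode, HyperProcess, Inserter,
    SqlType, TableDefinition, TableName, Telemetry,
)

# Describe a simple table for the extract.
sales_table = TableDefinition(
    table_name=TableName("Extract", "Sales"),
    columns=[
        TableDefinition.Column("Region", SqlType.text()),
        TableDefinition.Column("Sales", SqlType.double()),
    ],
)

# Start the local Hyper engine and create (or replace) the extract file.
with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(endpoint=hyper.endpoint,
                    database="sales_extract.hyper",
                    create_mode=CreateMode.CREATE_AND_REPLACE) as connection:
        connection.catalog.create_schema(schema=sales_table.table_name.schema_name)
        connection.catalog.create_table(table_definition=sales_table)

        # Insert a couple of sample rows.
        with Inserter(connection, sales_table) as inserter:
            inserter.add_rows(rows=[("East", 1200.0), ("West", 950.0)])
            inserter.execute()
```

Unlike a live connection, which queries the source database every time a view refreshes, an extract like this is a static snapshot that Tableau can query locally, which is exactly the trade-off ChatGPT explained correctly.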
ChatGPT also demonstrated the ability to identify the visualization type most appropriate for a given context. For example, it knew that a bar chart is the best choice for comparing data across categories and that a line graph works best for data over time.
However, ChatGPT was not without its flaws. It missed several questions on topics such as join types, manual sorting, and table calculations. It also failed to correctly answer questions that relied on descriptions of the images that accompany the exam questions.
For the questions it got wrong, ChatGPT often provided false explanations. For example, it explained how to add multiple copies of the same sheet to a single dashboard, something Tableau does not allow. This highlights the need for caution when using the model, as it may not always provide correct answers or valid justifications for them.
Overall, ChatGPT's score on the Tableau Specialist practice exam is a testament to the challenges of using language models to perform specialized tasks. That being said, it will be interesting to see how ChatGPT and other language models continue to improve and evolve in the future. As they become more sophisticated, it's possible that they will be able to perform even more specialized tasks with greater accuracy and efficiency.
Despite its limitations, ChatGPT can still be useful for Tableau developers who need quick answers to questions, but those answers should be treated with caution. Tableau users should continue to rely on the resources provided by Tableau and its community of experts, which offer a depth of knowledge in data analysis and visualization that ChatGPT cannot reliably match at this time.
This is my second post on using ChatGPT for Tableau. My first post is on my website, VizKid.org. New to Tableau? Here is a previous post I wrote that includes tips for new learners. Check it out!