I had a chance to get access to the GPT-3 API and to explore its capabilities. Among the many GPT-3 demos, I haven't come across any that address GPT-3's ability to detect logical fallacies. A logical fallacy is an error in reasoning that can impair the logic of an argument. Fallacies are commonly divided into formal and informal. A formal fallacy can be expressed neatly in a standard system of logic, such as propositional logic, while an informal fallacy originates in an error in reasoning other than an improper logical form. Arguments containing informal fallacies may therefore be formally valid, but still fallacious. In this post, we will explore how effectively GPT-3 recognizes some of the common informal fallacies, which in turn sheds light on its reasoning capabilities. In the following, I will first try to shed light on crucial details of the GPT-3 model, and then ask GPT-3 about some common logical fallacies through the Playground of the OpenAI API (a minimal sketch of such an API call appears at the end of this section).

GPT-3 stands for Generative Pre-trained Transformer, 3rd generation, and comes in eight sizes. The largest model has 175 billion parameters, which expands the capacity of GPT-3's predecessor GPT-2 by two orders of magnitude. GPT-3 175B is trained on an unlabeled text dataset that contains almost everything present on the internet: 499 billion tokens from multiple sources, including Wikipedia (3%), books (16%), and Common Crawl (60%), among others. All GPT-3 models use a transformer-based neural network, like their predecessors (the popular NLP model BERT also uses transformers), but with more and wider layers and more data: the largest model has 96 attention layers, each with 96 heads of dimension 128.
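As a quick sanity check on those architecture numbers: 96 heads of dimension 128 imply a hidden size of 12,288, and the standard decoder-only transformer parameter approximation lands very close to the quoted 175 billion. The breakdown below is a back-of-the-envelope sketch using the published hyperparameters, not an official OpenAI accounting.

```python
# Rough parameter count for a decoder-only transformer with GPT-3 175B's
# published hyperparameters. The formula is the standard approximation
# (attention + MLP blocks plus token embeddings), ignoring biases,
# layer norms, and positional embeddings.
n_layers = 96
n_heads = 96
d_head = 128
d_model = n_heads * d_head          # 96 * 128 = 12288
vocab_size = 50257                  # GPT-2/GPT-3 BPE vocabulary

attention = 4 * d_model * d_model   # Q, K, V, and output projections
mlp = 2 * d_model * (4 * d_model)   # two linear layers with a 4x expansion
per_layer = attention + mlp
embeddings = vocab_size * d_model   # token embedding matrix

total = n_layers * per_layer + embeddings
print(f"{total / 1e9:.1f}B parameters")  # ~174.6B, close to the quoted 175B
```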
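And here is a minimal sketch of the kind of query I run against GPT-3 in the rest of this post, using the OpenAI Python client of the GPT-3 era (v0.x). The prompt wording, the `davinci` engine choice, and the example argument are my own illustration rather than the exact Playground settings used later.

```python
# Minimal sketch: asking GPT-3 to identify a logical fallacy via the
# OpenAI Python client (v0.x, contemporary with GPT-3). Prompt, engine,
# and example argument are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

prompt = (
    "Identify the logical fallacy in the following argument, if any:\n\n"
    '"We can\'t trust his climate research -- he drives a gas-guzzling SUV."\n\n'
    "Fallacy:"
)

response = openai.Completion.create(
    engine="davinci",   # base GPT-3 model available in the Playground
    prompt=prompt,
    max_tokens=32,
    temperature=0.0,    # deterministic output for a classification-style task
)

print(response.choices[0].text.strip())  # e.g. "Ad hominem"
```

With the temperature set to 0, the completion behaves roughly like a classifier, which makes it easier to compare GPT-3's answers across the fallacy examples that follow.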