Can AI perform SEO? Experimenting with OpenAI's GPT-3

Artificial intelligence (AI) technology has made tremendous progress in recent years, and it is now possible to assess its capacity to perform specific tasks such as generating text, images, and sound.

Now, what if we went even further with a more complicated test, such as evaluating a job, or more precisely, evaluating an AI system on its ability to do SEO?

Below we will test Generative Pre-trained Transformer 3 (GPT-3) created by OpenAI.

Keep in mind that the AI system will mimic the data on which it is trained.

SEO has developed alongside the advancement of search engines, and all of it is well documented in blogs, books, and interviews.

So, AI will probably learn from all the available SEO content.

The tests will therefore measure the following indicators:

- The consistency to continue an SEO topic.
- The relevance of SEO modifications.
- The ability to answer SEO questions.

The Consistency To Continue An SEO Topic

The following tests will focus on whether the AI system understands an SEO situation and can treat the topic coherently, in an understandable and consistent way.

The first test will take examples of SEO horror stories – that is, short stories about SEO disasters – and see how the AI handles them.

I took two examples from Aleyda Solis' site, and here is what the AI generated:

Results generated by OpenAI GPT-3 AI:

Analysis: The resulting texts are diverse and coherent.

Moreover, after a little research we can conclude that they are unique.

So the AI realized that it needed to mimic a short text about SEO and append the hashtag #seohorrorstories every time.

Here is a screenshot of the complete example; we can generate endless #seohorrorstories.

Screenshot from OpenAI, April 2022
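To give an idea of how such a test is set up, here is a minimal sketch of a few-shot completion request, assuming the OpenAI Python library as it existed in early 2022; the engine name, sampling settings, and example stories are illustrative, not the article's exact prompt:

```python
# Few-shot generation sketch: prime GPT-3 with a couple of short
# #seohorrorstories and let it continue the pattern.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Two invented examples standing in for the ones taken from Aleyda Solis' site.
prompt = (
    "When the client deleted all the old URLs without redirects... #seohorrorstories\n\n"
    "When a noindex tag was pushed to production on a Friday evening... #seohorrorstories\n\n"
)

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3 engine available in April 2022
    prompt=prompt,
    max_tokens=60,
    temperature=0.9,  # higher temperature encourages diverse stories
    n=3,              # ask for three continuations at once
)

for choice in response["choices"]:
    print(choice["text"].strip())
```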

In terms of creativity, we can go a little further in our testing and try to measure the consistency of creating a dialogue between two SEO experts.

For the next test, we will give the AI a situation and instructions and then ask it to start a discussion.

Here is a situation in which the AI will have to create a dialogue between two experts – one experienced and one beginner: When someone forgot to block the site testing environment … which started to be indexed.

Results generated by OpenAI GPT-3 AI:

Aleyda: Oh no! When did you realize this was happening?

Paul: I just checked Google Analytics this morning and saw that we have traffic coming from google.com.

Aleyda: That’s not good. Did you check to see if the site test environment is still indexed?

Paul: Yes, I did. And it is.

Aleyda: All right, we need to take care of this right now. First, let’s check to see if there are any pages that are indexed.

Paul: There are several.

Aleyda: Great. Now we need to block the test environment so that Google stops indexing it.

Paul: How do we do that?

Aleyda: We can add a robots.txt file to the test environment. This will tell Google not to index any pages on the page.

Paul: Okay, that makes sense. Let’s do it.

Screenshot from OpenAI, April 2022

Analysis: The dialogue remains very coherent.

The AI system even explains how the problem was detected and suggests robots.txt as a solution to the problem of a pre-production environment being indexed.
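For reference, the fix the dialogue describes, blocking crawlers from the whole test environment via robots.txt, is a one-file change. This is a standard directive, not something shown in the article; note that pages already indexed may additionally need a noindex or a removal request to drop out of the index:

```
# robots.txt served at the root of the staging host,
# e.g. https://staging.example.com/robots.txt (hypothetical host)
User-agent: *
Disallow: /
```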

Next, let’s take one of the #seohorrorstories invented by AI and generate a related dialog.

Here’s a new situation: When a client created thousands of pages with thin, valueless content.

Results generated by OpenAI GPT-3 AI:

Aleyda: Well, this is a small problem. You’ve created a lot of pages with thin content that don’t really offer any value to users.

Paul: Yes, I know. I was just trying to put as many keywords as possible into the website.

Aleyda: Unfortunately, that won’t help your SEO. In fact, it could hurt your ranking.

Paul: So what do I do?

Aleyda: You need to create better quality content that gives users value. This is what will help you rank better in search engines.

Screenshot from OpenAI, April 2022

Analysis: Once again, the AI manages to create the right dialogue, explaining why creating thin content doesn’t work for SEO and even recommending solutions.

We can conclude that the AI system can talk about SEO rigorously and accurately, but can it also make relevant SEO text modifications?

The Relevance Of SEO Modifications

As it happens, GPT-3 has offered a way to edit text since March 2022.

The first test will ask the AI to replace a link’s anchor text with something more marketable, to make the product stand out better.

Results generated by OpenAI GPT-3 AI:

Screenshot from OpenAI, April 2022

Analysis: The test is relatively successful.

Here, artificial intelligence replaces “mahogany-coated steakhouses” with “New York’s best steaks,” given the context.

The solution works regardless of the number of links.
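As a rough illustration of this kind of request, here is a sketch using the dedicated edits endpoint OpenAI released in March 2022, via the era’s Python library; the HTML snippet and instruction wording are illustrative, not the article’s exact input:

```python
# Text-editing sketch: ask GPT-3 to rewrite a link's anchor text in place.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

html = '<p>Book a table at one of our <a href="/steaks">mahogany-coated steakhouses</a>.</p>'

response = openai.Edit.create(
    model="text-davinci-edit-001",  # the edit model available at the time
    input=html,
    instruction="Replace the link anchor text with something more marketable.",
)

print(response["choices"][0]["text"])  # the full document with the anchor rewritten
```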

Next, the second test consists of two simultaneous tasks: asking the AI to change the link anchor while also adding an H1 tag with a title at the beginning of the document.

Results generated by OpenAI GPT-3 AI:

Screenshot from OpenAI, April 2022

Analysis: Once again, the test was successful, and you can apply the results to thousands of documents. The title summarizing the document is added in the right place.

Finally, we will ask the AI to add a list of three relevant links using <ul> and <li> tags.

Screenshot from OpenAI, April 2022

Analysis: Once again, the format is well respected and the anchors are consistent.

The links are fictitious, but GPT-3 could easily choose from a database of links in order to use real ones.

In short, GPT-3 is good at contextual text editing when given the right instructions. This could be a big time saver for large-scale modifications.
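Scaling this to thousands of documents is then just a loop over the same call; a rough sketch under the same assumptions (the directory layout and instruction are hypothetical, and a real run would need rate limiting and error handling):

```python
# Batch-editing sketch: apply one instruction to every HTML file in a folder.
import pathlib
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

instruction = "Add an H1 tag with a title summarizing the document at the top."

for path in pathlib.Path("pages").glob("*.html"):  # hypothetical folder of documents
    result = openai.Edit.create(
        model="text-davinci-edit-001",
        input=path.read_text(encoding="utf-8"),
        instruction=instruction,
    )
    # Overwrite the file with the edited version.
    path.write_text(result["choices"][0]["text"], encoding="utf-8")
```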

The Ability To Answer SEO Questions

We will now test the AI system’s knowledge of SEO.

I chose to use the BrightEdge SEO test, but I could have used any test:

As with each of the previous tests, we used OpenAI’s GPT-3 to answer the questions of an SEO multiple-choice quiz. The number of correct answers measures how well the AI performs.

Screenshot from OpenAI, April 2022

Below, we see how each question is presented to the AI; the text highlighted in green represents the answer it has chosen.

BrightEdge provides a table of questions with the correct answers in one column and the answer selected by the AI in the adjacent column:

Screenshot of the author, April 2022

The results of GPT-3 are encouraging: the AI scored 16/20, and even then, the questions it missed lacked context.
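For readers who want to reproduce this kind of quiz run, here is a minimal sketch of scoring one multiple-choice question with a completion call; the question and options are invented stand-ins, not BrightEdge’s actual items:

```python
# MCQ sketch: have GPT-3 pick a single answer letter at temperature 0.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Answer the following SEO multiple-choice question with A, B, C, or D.\n\n"
    "Q: Which file tells crawlers which URLs they may not fetch?\n"
    "A) sitemap.xml\n"
    "B) robots.txt\n"
    "C) .htaccess\n"
    "D) humans.txt\n\n"
    "Answer:"
)

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3 engine available in April 2022
    prompt=prompt,
    max_tokens=1,
    temperature=0,  # deterministic choice for grading
)

print(response["choices"][0]["text"].strip())  # expected: B
```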

Now, let’s ask the AI about recent updates from Google, such as the announcement of Google MUM on May 18, 2021.

Screenshot from OpenAI, April 2022

What is the main purpose of Google MUM?

Again, as you can see, the AI handles it perfectly.

In short, we get the right SEO answers if we ask very specific questions.

The Limitations

We can now look at the limitations of AI-led SEO:

- The ability to fix unknown or poorly documented problems. In SEO, it is often time-consuming and tedious to understand what actually happened when a problem arises, and sometimes there will never be an explanation. The AI cannot find solutions with data that is not part of its analysis.

- The ability to take into account everything that exists. There are often many constraints on what has already been done or tested on a site or set of sites. It is currently impossible to provide all the source code and project documentation to the AI system in order to get an accurate answer that takes into account all existing information.

- The ability to measure the true quality of content or ideas according to their originality or their representation of the current situational context.

AI systems do not yet work in real time, and they lack originality and creativity. As a result, their ability to measure content quality is currently minimal. It is usually people who rate the quality of content; AIs are generally limited to detecting spam and poor-quality content.

In short, an AI system can surpass an SEO beginner or a person with little knowledge of SEO, but it will still have a long way to go to replace an experienced SEO expert.

All in all, the AI and human duo will always remain the best combination, with the human focusing on the tasks with the greatest added value.

In conclusion, experienced SEO experts are still far ahead of AI-driven SEO.

Featured image: Shutterstock / metamorworks
