Internal Google search algorithm documents have leaked. We are constantly being fed information served up by a larger mechanism.

【SEO Industry Shaken; Internal Google Search Algorithm Data Leaked】
https://www.gizmodo.jp/2024/06/google-search-algorithm-leaked.html

 

・Google keeps information related to its algorithms a closely guarded secret, but a 2,500-page document titled “Google Search’s Content Warehouse API” has been leaked online, sending shockwaves through the SEO industry.

Rumors that “search rankings are determined by the click-through rate of links in search results,” “subdomains are treated differently in ranking evaluation,” “newer sites are less likely to appear higher in search results,” and “the age of a domain is also collected and evaluated” had been denied by Google at every turn, but all have turned out to be true.

The documents also revealed that, contrary to Google’s official explanation that “Chrome browser data will not be used to determine rankings,” such data is in fact used.

 

The above is quoted from the article.

 



 

 

We are constantly being fed information served up by a larger mechanism.

 

We casually search for something on Google every day, but the mechanism (search algorithm) that determines the order in which results are displayed has long been a black box.

 

People had guessed to some extent how it worked, but Google had denied those guesses.

However, the internal documents revealed that those guesses were largely correct.

At any rate, here are the points about Google’s search algorithm revealed in the article above.

・Search rankings are determined by the click-through rate of links in search results.
・Subdomains are treated differently in ranking evaluations.
・New sites are less likely to rank high in search results.
・The age of a domain is also taken into account when evaluating rankings.
・Chrome browser data is used to determine rankings.
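To make the list above concrete, here is a purely illustrative toy sketch of how signals like these might be combined into a single ranking score. To be clear: every field name, weight, and formula below is hypothetical, invented for illustration; none of it comes from the leaked documents or Google's actual system.

```python
# Purely illustrative toy model: combines the kinds of signals listed above
# into one score. All names and weights are hypothetical assumptions,
# NOT taken from the leaked documents.
from dataclasses import dataclass


@dataclass
class PageSignals:
    click_through_rate: float  # clicks / impressions in search results
    is_subdomain: bool         # e.g. blog.example.com vs example.com
    site_age_days: int         # how long the site has existed
    domain_age_days: int       # how long the domain has been registered
    chrome_visits: int         # hypothetical browser-derived traffic signal


def toy_rank_score(p: PageSignals) -> float:
    score = 0.0
    score += 50.0 * p.click_through_rate        # CTR influences ranking
    if p.is_subdomain:                          # subdomains treated differently
        score -= 5.0
    score += min(p.site_age_days / 365, 5.0)    # newer sites score lower (capped)
    score += min(p.domain_age_days / 365, 5.0)  # domain age considered (capped)
    score += min(p.chrome_visits / 1000, 10.0)  # browser data as a signal (capped)
    return score


page = PageSignals(click_through_rate=0.12, is_subdomain=True,
                   site_age_days=200, domain_age_days=3000,
                   chrome_visits=4500)
print(round(toy_rank_score(page), 2))
```

The point of the sketch is only that pages compete on many signals at once, so a new subdomain can still rank if other signals (such as click-through rate) are strong enough.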

 

Well, I guess that’s true.

 

Of course, there is no good or bad in this kind of algorithm itself.

I just think that the vaguer the search query, the more likely we are to be shown what the algorithm wants to show us, and the more specific the query, the more likely we are to see what we actually want to see.

 

Algorithmic mechanisms like this are not limited to Google.

The same may be true of YouTube and various social networking sites.

In a broader sense, the same may be true of TV and other news media. (Since every channel broadcasts much the same news, I suspect there is some algorithm-like process deciding what gets aired.)

In light of all this,

I can’t help but think that we are constantly being fed information served up by some larger mechanism, like an algorithm.

 

There is nothing inherently wrong with receiving and consuming information served up by a big mechanism.

However, it might be a good idea to pause now and then and ask whether what you are about to see or consume is what you really wanted.

That’s all for today: musings prompted by the leaked internal documents on Google’s search algorithm, once again blown up into a bigger story than it is.

 

See you then.

 

 

Even the generative AI now permeating our lives is an algorithm. We will go on living on top of larger mechanisms like these.

 

 

 

 

 
