Google Strongly Recommends Using HTML to Get Content Quickly Indexed
by @MattGSouthern


Google's John Mueller says content must be in HTML so it can be indexed quickly.

This is especially true for sites that frequently produce new and/or updated content.

Mueller gave his opinion in a Twitter chat about Google's two-pass indexing system.

When Google crawls and indexes content, it makes two passes. The first pass looks only at the HTML. Then, some time later, it makes a second pass to render the full page, including content generated by JavaScript.

Mueller says there is "no fixed time" between the first and second pass. In some cases rendering can happen quickly; in others it may take days or weeks.

Yes, there's no fixed timeframe for that – the rendering can happen fairly quickly in some cases, but it can also take a few weeks. If your site produces new/updated content frequently and you want it indexed quickly, you need that content in the HTML.

– 🍌 John 🍌 (@JohnMu) September 13, 2018

This is especially important for the SEO of web pages that rely heavily on client-side JavaScript for rendering.

Some content may be missed during the first pass over a JavaScript-heavy web page, which means the page will not be indexed in its entirety until the second pass.
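To make the point concrete, here is a hypothetical illustration (the page markup, headline text, and `firstPassSees` helper are invented for this sketch, not from the article): the same article delivered with its content in the initial HTML versus injected by client-side JavaScript, and a crude check of what an HTML-only first pass would see.

```javascript
// Version A: the content is in the initial HTML, so an HTML-only
// first pass can see it immediately.
const htmlVersion = `
  <article>
    <h1>New Product Launch</h1>
    <p>Full announcement text here.</p>
  </article>`;

// Version B: the initial HTML is an empty shell; the content only
// exists after client-side JavaScript runs, i.e. on the second pass.
const jsVersion = `
  <div id="app"></div>
  <script>
    document.getElementById('app').innerHTML =
      '<h1>New Product Launch</h1><p>Full announcement text here.</p>';
  </script>`;

// A crude stand-in for the first pass: strip <script> blocks (no
// rendering happens) and check whether the main content is present
// in the raw HTML.
function firstPassSees(html, text) {
  const withoutScripts = html.replace(/<script>[\s\S]*?<\/script>/g, '');
  return withoutScripts.includes(text);
}

console.log(firstPassSees(htmlVersion, 'New Product Launch')); // true
console.log(firstPassSees(jsVersion, 'New Product Launch'));   // false
```

Version B's headline is invisible until rendering happens, which is exactly the gap between the two passes that Mueller describes.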

As Mueller says, that second pass could take weeks. In other words, a JavaScript-heavy page may not be fully indexed in Google Search until several weeks after its publication.

This is obviously not ideal, which is why it is essential for Googlebot to see a page's main content on the first pass.

Veteran SEO Alan Bleiweiss added his expertise to the discussion, saying he recently audited a site that saw great success after eliminating client-side JavaScript rendering on critical pages.

And if it takes weeks to crawl the entire site, all updates get thrown off by JavaScript. I just did a review audit on a site that eliminated all client-side JS rendering on critical pages, and they have had great success.

– Alan Bleiweiss (@AlanBleiweiss) September 14, 2018

Why doesn't Googlebot render a whole page at a time?

The reason Googlebot does not parse and index a complete web page in the first pass is that rendering JavaScript requires significant processing resources.

When a page contains JavaScript code, rendering is postponed until Googlebot has the resources ready to render the client-side content.

That means Googlebot may index a page before rendering is complete, with the final render arriving some time later.

When the final rendering arrives, Google will perform a second wave of indexing on the content rendered on the client side.
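The two-wave process described above can be sketched as a simplified simulation (this is an illustrative model under assumed names like `needsJsRendering`, not Google's actual pipeline): wave one indexes raw HTML for every page immediately, while pages that need JavaScript rendering are queued and re-indexed later, once rendering resources are available.

```javascript
// A simplified sketch of two-wave indexing (illustrative only).
function indexInTwoWaves(pages) {
  const index = new Map();
  const renderQueue = [];

  // Wave 1: index whatever is present in the raw HTML right away.
  for (const page of pages) {
    index.set(page.url, page.rawHtmlContent);
    if (page.needsJsRendering) renderQueue.push(page);
  }

  // Wave 2 (later, when rendering resources free up): render the
  // queued pages and re-index them with their client-side content.
  for (const page of renderQueue) {
    index.set(page.url, page.renderedContent);
  }
  return index;
}

const searchIndex = indexInTwoWaves([
  { url: '/plain', rawHtmlContent: 'full article', needsJsRendering: false },
  { url: '/spa', rawHtmlContent: '', needsJsRendering: true,
    renderedContent: 'full article' },
]);

console.log(searchIndex.get('/plain')); // 'full article' after wave 1
console.log(searchIndex.get('/spa'));   // 'full article', but only after wave 2
```

In this sketch, the `/spa` page sits in the index with empty content until wave two runs, which mirrors why Mueller recommends putting important content in the HTML.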

To hear about this subject in much more detail, see the 40-minute Google I/O session on delivering search-friendly JavaScript-powered websites.

Subscribe to SEJ

Get our weekly newsletter from SEJ founder Loren Baker about the latest news in the industry!