This article explains the value of Google Search Console and how it helps a webmaster or website owner optimize a site for search. The data from Google Search Console is also instrumental in optimizing Google Ads campaigns and Schema Markup for websites.
What is Google Search Console?
Google Search Console is a free service offered by Google that helps you monitor, maintain, and troubleshoot your site’s presence in Google Search results. You don’t have to sign up for Search Console to be included in Google Search results, but Search Console helps you understand and improve how Google sees your site.
Search Console offers tools and reports for the following actions:
- Confirm that Google can find and crawl your site.
- Fix indexing problems and request re-indexing of new or updated content.
- View Google Search traffic data for your site: how often your site appears in Google Search, which search queries show your site, how often searchers click through for those queries, and more.
- Receive alerts when Google encounters indexing, spam, or other issues on your site.
- See which sites link to your website.
- Troubleshoot issues for AMP, mobile usability, and other Search features.
How Does Google Search Console Work?
Google says that these are the sources of Google Search information:
- User-submitted content such as Google My Business and Maps user submissions
- Public databases on the Internet
- Additional sources of data
The first step is finding out what pages exist on the web. There isn’t a central registry of all web pages, so Google must constantly search for new pages and add them to its list of known pages. This process of discovery is called crawling.
Some pages are known because Google has already crawled them before. Other pages are discovered when Google follows a link from a known page to a new page. Still other pages are discovered when a website owner submits a list of pages (a sitemap) for Google to crawl. If you’re using a managed web host, such as Wix or Blogger, they might tell Google to crawl any updated or new pages that you make.
How do I improve Website Crawling?
- For changes to a single page, you can submit an individual URL to Google.
- Get your page linked to by another page that Google already knows about. However, be warned that links in advertisements, links that you pay for on other sites, links in comments, and other links that don’t follow the Google Webmaster Guidelines won’t be followed.
- If you ask Google to crawl only one page, make it your home page. Your home page is the most important page on your site, as far as Google is concerned. To encourage a complete site crawl, be sure that your home page (and all pages) contain a good site navigation system that links to all the important sections and pages on your site; this helps users (and Google) find their way around your site.
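As a sketch of the navigation advice above, a menu built from plain crawlable links might look like the following (all URLs and labels are placeholders):

```html
<!-- A simple navigation menu: every link is a plain <a> tag with an
     href attribute, so both users and Google's crawler can follow it.
     URLs and labels are illustrative placeholders. -->
<nav>
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/services/">Services</a></li>
    <li><a href="/blog/">Blog</a></li>
    <li><a href="/contact/">Contact</a></li>
  </ul>
</nav>
```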
Google doesn’t accept payment to crawl a site more frequently, or rank it higher. If anyone tells you otherwise, they’re wrong.
After a page is discovered, Google tries to understand what the page is about. This process is called indexing. Google analyzes the content of the page, catalogs images and video files embedded on the page, and otherwise tries to understand the page. This information is stored in the Google index, a huge database stored in many, many (many!) computers.
How do I improve Page Indexing on my Website?
- Create short, meaningful page titles.
- Use page headings that convey the subject of the page.
- Use text rather than images to convey content. (Google can understand some image and video content, but not as well as it can understand text. At minimum, annotate your videos and images with alt text and other attributes as appropriate.)
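Put together, a page that follows these indexing tips might be sketched like this (the shop name and file names are made up for illustration):

```html
<!-- Short, meaningful title; a heading that states the subject;
     content conveyed in text; alt text on the image. All names
     below are examples, not recommendations. -->
<head>
  <title>Bicycle Repair in Paris – Example Cycle Shop</title>
</head>
<body>
  <h1>Bicycle Repair Services</h1>
  <p>We fix flat tires, brakes, and gears, usually on the same day.</p>
  <img src="workshop.jpg" alt="Mechanic truing a bicycle wheel in our workshop">
</body>
```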
Serving (and ranking) in Google
When a user types a query, Google tries to find the most relevant answer from its index based on many factors. Google tries to determine the highest quality answers, and factor in other considerations that will provide the best user experience and most appropriate answer, by considering things such as the user’s location, language, and device (desktop or phone). For example, searching for “bicycle repair shops” would show different answers to a user in Paris than it would to a user in Hong Kong. Google doesn’t accept payment to rank pages higher, and ranking is done programmatically.
To improve your serving and ranking:
- Make your page fast to load, and mobile-friendly.
- Put useful content on your page and keep it up to date.
- Follow the Google Webmaster Guidelines, which help ensure a good user experience.
- Read more tips and best practices in our SEO starter guide.
- You can find more information in Google’s documentation on how Search works, including the guidelines that Google provides to its quality raters to ensure good results.
Who should use Google Search Console?
Anyone with a website! From generalist to specialist, from newbie to advanced, Search Console can help you.
- Business owners: Even if you won’t be using Search Console yourself, you should be aware of it, become familiar with the basics of optimizing your site for search engines, and know what features are available in Google Search.
- SEO Marketers: As someone focused on online marketing, Search Console will help you monitor your website traffic, optimize your ranking, and make informed decisions about the appearance of your site’s search results. You can use the information in Search Console to influence technical decisions for the website and do sophisticated marketing analysis in conjunction with other Google tools like Google Analytics, Google Trends, and Google Ads.
- Site administrators: As a site admin, you care about the healthy operation of your site. Search Console lets you easily monitor and in some cases resolve server errors, site load issues, and security issues like hacking and malware. You can also use it to ensure any site maintenance or adjustments you make happen smoothly with respect to search performance.
- Web developers: If you are creating the actual markup and/or code for your site, Search Console helps you monitor and resolve common issues with markup, such as errors in structured data.
Control How Google Indexes Your Site
Read the long version of How Google Search Works; if you don’t understand the crawl/index/serving pipeline well, it will be difficult to debug issues or anticipate Search behavior on your site.
Be sure that you understand what canonical pages are, and how they affect crawling and indexing of your site. Also understand how to remove or handle duplicate content on your site, when it is merited.
Be sure that any resources (images, CSS files, and so on) or pages that Google is meant to crawl are accessible to Google; that is, they are not blocked by any robots.txt rules and are accessible to an anonymous user. Inaccessible pages will not appear in the Index Coverage report, and the URL Inspection tool will show them as not crawled. Blocked resources are shown only at the individual URL level, in the URL Inspection tool. If important resources on a page are blocked, this can prevent Google from crawling your page properly. Use the URL Inspection tool to render the live page to verify whether Google sees the page as you expect.
Use robots.txt rules to prevent crawling, and sitemaps to encourage crawling. Block crawling of duplicate content on your site, or unimportant resources (such as small, frequently used graphics such as icons or logos) that might overload your server with requests. Don’t use robots.txt as a mechanism to prevent indexing; use noindex or login requirements for that. Read more about blocking access to your content.
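As an illustration (the paths here are placeholders, not recommendations for any particular site), a robots.txt file that blocks crawling of duplicate or unimportant areas while advertising a sitemap might look like this:

```
# Illustrative robots.txt — all paths are placeholders.
User-agent: *
# Block duplicate, auto-generated pages.
Disallow: /search-results/
# Block small, frequently requested graphics.
Disallow: /icons/

# Encourage crawling by pointing crawlers at the sitemap.
Sitemap: https://www.example.com/sitemap.xml
```

Note that Disallow only prevents crawling; a blocked URL can still end up indexed if other sites link to it. To keep a page out of the index, use a noindex directive on a crawlable page, or put the page behind a login.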
What are Sitemaps?
Sitemaps are a very important way to tell Google which pages are important to your site, and also provide additional information (such as update frequency), and are very important for crawling non-textual content (such as images or video). Although Google won’t limit crawling to pages listed in your sitemaps, it will prioritize crawling these pages. This is especially important for sites with rapidly changing content, or with pages that might not be discovered through links. Using sitemaps helps Google discover and prioritize which pages to crawl on your site. Read all about sitemaps here.
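A minimal XML sitemap, with placeholder URLs and dates, looks roughly like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal illustrative sitemap; URLs and dates are placeholders. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2020-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/new-post/</loc>
    <lastmod>2020-01-20</lastmod>
  </url>
</urlset>
```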
Site Migration or Page Migrations
On the occasion that you might need to move a single URL or even a whole site, follow these guidelines:
Migrating a single URL
If you move a page permanently to another location, don’t forget to implement a 301 redirect for the page. If the move is only temporary for some reason, return a 302 instead to indicate to Google that it should continue to crawl the original URL.
When a user requests a page that has been removed, you can create a custom 404 page to provide a better experience. Just be sure that when a user requests a page that is no longer there, you return a true 404 error, not a soft 404.
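How you return these status codes depends on your server. Assuming an Apache server with .htaccess support (other servers have equivalent directives), a sketch might be:

```apache
# Illustrative Apache .htaccess rules — all paths are placeholders.

# Permanent move: a 301 tells Google to transfer signals to the new URL.
Redirect 301 /old-page.html /new-page.html

# Temporary move: a 302 tells Google to keep crawling the original URL.
Redirect 302 /sale.html /holiday-sale.html

# Serve a custom 404 page while still returning a true 404 status,
# not a soft 404.
ErrorDocument 404 /custom-404.html
```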
Migrating a site
If you’re migrating an entire site, implement all the 301 and sitemap changes you need, then tell Google about the move so that Google can start crawling the new site and forwarding your signals to it. Learn how to migrate your site.
Search Console Best Practices
Make your links crawlable. Google can follow links only if they are an <a> tag with an href attribute; links that use other formats won’t be followed by Google’s crawlers. Google cannot follow <a> tags that lack an href attribute, or other elements that act as links only through scripted click events.
Use rel=nofollow for paid links, links that require login, or untrusted content (such as user-submitted content) to avoid passing your quality signals on to them, or having their bad quality reflect on you.
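The two points above can be illustrated with a few HTML fragments (the URLs are placeholders):

```html
<!-- Crawlable: an <a> tag with an href attribute. -->
<a href="https://www.example.com/pricing/">Pricing</a>

<!-- Not crawlable: no href attribute; navigation happens only in script. -->
<span onclick="window.location='/pricing/'">Pricing</span>

<!-- Paid or untrusted link: rel="nofollow" avoids passing quality signals. -->
<a href="https://sponsor.example.com/" rel="nofollow">Our sponsor</a>
```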
Managing your crawl budget: If your site is particularly large (hundreds of millions of pages that change periodically, or perhaps tens of millions of pages that change frequently), Google might not be able to crawl your entire site as often as you’d like, so you might need to point Google to the most important pages on your site. The best mechanism for doing so at present is to list your most recently updated or most important pages in your sitemaps, and (possibly temporarily) hiding your less important pages using robots.txt rules.
AJAX-based sites: If you use AJAX for your site, read up on how Google crawls AJAX pages.
Multi-page articles: If you have an article broken into several pages, be sure that there are prominent next and previous links for users to click (and that these are crawlable links). That’s all you need for the page set to be crawled by Google.
Infinite scroll pages: Google can have trouble scrolling through infinite scroll pages; you should provide a paginated version if you want the page to be crawled. Learn more about search-friendly infinite scroll pages.
See the list of file types that are indexable by Google. If your file type is not natively crawlable, be sure that it is linked with text that describes it, or (for video, image, or podcast content) provide metadata in a sitemap.
In the unlikely situation that Google seems to be crawling your site too much, you can turn down the crawl rate for your site. However, this should be a rare occurrence.
Help Google Understand What is on Your Website
Put key information in text, not graphics, on the site. Although Google can parse and index many file types, text is still the safest bet to help Google understand the content of the page. If you use non-text content, or if you want to provide additional guidance about the content of the site, add structured data to your pages to help Google understand your content (and in some cases, provide special search features such as rich results).
If you feel comfortable with HTML and basic coding, you can add structured data by hand following the developer guidelines. If you want a little help, the WYSIWYG Structured Data Markup Helper can generate basic structured data for you.
If you don’t have the ability to add structured data to your pages, you might use the Data Highlighter tool, which lets you highlight portions of a page and tell Google what each section represents (an event, a date, a price, and so on). This is simple, but it can break if you change the layout of your page.
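Structured data added by hand is commonly written as JSON-LD in a script tag on the page. A sketch for a local business, with every value a placeholder:

```html
<!-- Illustrative JSON-LD structured data; all values are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Cycle Shop",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Street",
    "addressLocality": "Paris"
  },
  "telephone": "+33-1-23-45-67-89"
}
</script>
```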
Follow Google’s Guidelines
Important: Be sure to follow Google’s Webmaster Guidelines. Some of these are recommendations and best practices; others can lead to a site being removed from the Google index if you do not follow them.
Google Search Console Content-specific Guidelines
If you have specific types of content on your site, here are some recommendations for getting them on Google in the best way:
How to index Video: Be sure to follow our video best practices to enable Google to find, crawl, and show results for videos hosted on your site.
How to index Podcasts: You can expose podcasts to Google by following these guidelines. Google finds podcast RSS feeds if they are exposed in structured data as described in those guidelines, or if you follow the home page requirements on your podcast’s home page. Google detects and recrawls updated RSS feeds.
How to index Images: Follow our image best practices to get your images to appear in Search. You can show additional information about your image in Image search by providing image rights metadata on the image host page. To block an image from being indexed, use a robots.txt Disallow rule.
Google automatically optimizes image sizes for users on slow networks; if, for some reason, you do not want this, you can opt out of mobile optimization for image results.
Indexing content for Adult sites: If your site (or specific pages) contain adult-only content, you might consider tagging it as adult content, which will filter it in SafeSearch results.
Indexing Blogs: If your site is a blog, here are some tips for creating a useful blog and helping Google crawl it.
Indexing News: If you run a news site, here are some important considerations:
- If you have news content, be sure to read the Google Publisher Center help documentation.
- In addition, create a News sitemap to help Google discover content more quickly.
- Be sure to prevent comment spam in your site.
- If you want to provide a limited number of free views to visitors, read about flexible sampling to learn some best practices about providing limited free access to your content.
- See how to indicate subscription and paywalled content on your site to Google while still enabling crawling.
- See how to use meta tags to limit text or image use when generating search result snippets.
- Consider using AMP or AMP stories for fast-loading content.
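For instance, the snippet-limiting rules mentioned above are expressed with a robots meta tag; the values here are illustrative:

```html
<!-- Limit text snippets to 50 characters and keep image previews small. -->
<meta name="robots" content="max-snippet:50, max-image-preview:standard">
```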
Other sites (businesses, books, apps, scholarly works): See other Google services where you can post your information.
See if Google supports a search feature specific for your content type. Google supports specialized search features for recipes, events, job posting sites, and more.
Manage your User Experience on your Website
Providing a good user experience should be your site’s top goal, and a good user experience is a ranking factor. There are many elements to providing a good user experience; here are a few of them.
Google recommends that websites use HTTPS, rather than HTTP, to improve user and site security. Sites that use HTTP can be marked as “not secure” in the Chrome browser. Read guidelines on securing your site with HTTPS.
Ensure that your site works in different browsers and different platforms.
A fast page usually beats a slow page in user satisfaction. You can use the new Speed report to see your site-wide performance numbers, or PageSpeed Insights to test performance for individual pages. You can learn more about building fast pages on the web.dev site. Also consider using AMP for fast pages.
Help with Google Search Console
Google Search Console is an ever-evolving service with many uses for the end user. There is a ton of information on the web explaining how Google Search Console works, and we have simplified it for you with some of the major points. As the World Wide Web continues to grow, we can anticipate that tools like Google Search Console will become even more important for webmasters to manage Schema Markup and Voice Search, and to judge the effectiveness of Google Ads campaigns.