The Technical SEO Audit of 2017: Essential Marketing Guide

As a digital marketer, it is likely that you will be responsible for formulating technical recommendations for a client’s website, probably at the beginning of the campaign. The cornerstone of this process will be the technical SEO audit. Identifying issues across the many technical elements, and developing recommendations for them, will inform the next steps in making the valuable pages of a website more visible in search results and more accessible to users.

Below, we will break down the fundamental technical elements to audit in 2017, the nuances of each element and their influence on how a website performs.

Meta-data

Title tags and meta descriptions are HTML snippets, two attributes that serve to inform users and search engines of the topic of a page. When optimized correctly, the information they provide should be relevant to a user’s search query, which should in turn improve click-through rates.

To establish appropriate optimization, there are only a few guidelines for each element that should be taken into account when auditing a website (a sample snippet follows each list below):

Title tags

• Place the most relevant keywords toward the front of the title tag
• To take advantage of brand awareness, incorporate the brand name at the end, maintaining a uniform format across the site
• A title tag should not exceed 50-60 characters (or 512 pixels). Search engines may truncate additional characters if that length is exceeded
• Overall, they should be unique and compelling to improve visibility and user experience
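
For illustration, a hypothetical title tag following these guidelines might look like the snippet below; the page name and the “Example Agency” brand are placeholders, not a prescribed format.

    <head>
      <!-- Primary keyword up front, brand name at the end, under ~60 characters -->
      <title>Technical SEO Audit Checklist | Example Agency</title>
    </head>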

Meta Descriptions

• 1-2 focus keywords must be included, consistent with those used in the title tag, header tags and, in most cases, the URL
• The text should match the subject of the page
• The optimal length is 150-160 characters
• Meta descriptions must be unique
• A meta description should be written as compelling ad copy that inspires curiosity and invokes users’ desire to click
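
A hypothetical meta description for that same placeholder page, written as ad-style copy and kept under the recommended length, could read as follows:

    <!-- Roughly 140-160 characters, echoing the focus keywords used in the title tag -->
    <meta name="description" content="Run a technical SEO audit in 2017: check meta data, crawlability, HTTPS, canonical tags and site speed with this step-by-step checklist.">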

Crawlability & Indexation

Search engine robots crawl web pages and then store copies of those pages in their indexes. The crawlability of a page is one of the factors that determine which pages are returned for a Google search. If a search engine cannot crawl a page, it will simply not be able to show that page to a user who might find its information relevant to their query.

To avoid such problems, let’s look at the three components you should review to facilitate the crawlability and indexation of a web page:

XML Sitemap:

This document provides an easily digestible menu of the value-driven pages you have chosen, at your discretion, to surface to search engines, which streamlines the crawling and indexing process. It is also a quick way to tell Google when new content has been published on a website, so that the content elements that will help improve rankings are picked up promptly.
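
As a minimal sketch, an XML sitemap listing two placeholder URLs (example.com is an assumption) might look like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/</loc>
        <lastmod>2017-02-01</lastmod>
      </url>
      <url>
        <loc>https://www.example.com/services/technical-seo/</loc>
        <lastmod>2017-02-01</lastmod>
      </url>
    </urlset>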

HTML Sitemap:

Unlike the XML sitemap, HTML sitemaps are designed for human consumption, not for search engines. If a user cannot find a particular piece of content through the navigation, this sitemap will help improve that user’s experience.
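
In its simplest form, an HTML sitemap is just a page of organized links, for example (all paths are placeholders):

    <h1>Sitemap</h1>
    <ul>
      <li><a href="/services/">Services</a></li>
      <li><a href="/services/technical-seo/">Technical SEO</a></li>
      <li><a href="/blog/">Blog</a></li>
      <li><a href="/contact/">Contact</a></li>
    </ul>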

Robots.txt:

This file lives in the root directory of your site and tells search engine robots which pages to access and index, and which pages they should avoid. The functionality of this file is essential not only for SEO, but also for keeping private areas of a site out of search results.

When inspecting a robots.txt file, you should make sure that the directives are aligned with how visitors should access the site. For example, a “disallow” directive can be applied to page, category, or site content, as well as resource and image files as required. Additionally, the robots.txt file should always point the search robots to the XML sitemap.
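
As an illustrative sketch (the disallowed paths and domain are assumptions to adapt per site), a robots.txt file that blocks a private area and points to the XML sitemap could read:

    User-agent: *
    Disallow: /admin/
    Disallow: /cart/

    Sitemap: https://www.example.com/sitemap.xml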

URL structure

The structure of a URL helps describe a page to visitors as well as search engines, just like the page’s title tag. Depending on a page’s location within a site, the URL’s paths and taxonomy should match the site’s navigation and reflect what the page is about. Some additional guidelines to follow are listed below, with a before-and-after example afterward:

• URLs should be kept as succinct as possible, avoiding excessive use of folders
• Keywords that best represent a page are still useful
• Words in URLs should be separated by hyphens
• Unnecessary punctuation characters should be removed
• Use of certain prepositions or ‘stop’ words should be avoided (of; or; and; a; etc.)
• Dynamic parameters may be excluded, if advisable
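
For example, applying these guidelines, a cluttered dynamic URL and its cleaner, hypothetical equivalent might look like:

    Before: https://www.example.com/index.php?id=873&cat=12&sessionid=XYZ123
    After:  https://www.example.com/services/technical-seo-audit/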

Secure protocol

Since Google launched its HTTPS push in 2014, advocacy for the initiative has only grown. In 2016, Google representatives said that securing websites should be on webmasters’ radar for 2017. In practice, this simply means migrating your website from HTTP to HTTPS.
HTTP (Hypertext Transfer Protocol) is the structure for transferring and receiving data over the Internet. This is an application layer protocol that provides information to a user regardless of the channel required to do so.

With Hypertext Transfer Protocol Secure (HTTPS), the exchange of authorizations and transactions is protected by an additional layer, known as Secure Sockets Layer (SSL), for transferring sensitive data.
As Google pushes for a more secure web experience for its users, it has announced that HTTPS is indeed a ranking signal, albeit a small one. Moreover, Google’s search bots have begun to prioritize secure pages over non-secure pages. Encrypting a website helps demonstrate its integrity and authenticity.
When compiling a technical audit, make sure that this item is addressed and that next-step recommendations for obtaining a security certificate are included.
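
As one possible illustration, on an Apache server a site-wide 301 redirect from HTTP to HTTPS can be sketched in the .htaccess file (assuming mod_rewrite is enabled and the security certificate is already installed):

    RewriteEngine On
    RewriteCond %{HTTPS} off
    # Permanently redirect every HTTP request to its HTTPS equivalent
    RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]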

Canonicalization and Redirects

When conducting a technical audit, crawling the website will be one of the first fundamental steps to help you move forward and make valuable assessments. A popular tool of choice among SEOs is Screaming Frog, which will provide a complete breakdown of file type, meta data, status code, canonical URL and so on for every live page of a site.
So let’s get to the point.

Canonicalization

If a search engine is able to crawl multiple pages that contain virtually the same content, it will have difficulty determining which page version should be selected to appear in search results for a query, and may end up choosing none of them.
To make matters worse, if different sources link to these multiple page versions, the authority of the main page version will be diluted, reducing that page’s trust and credibility.
Make sure that if a website contains multiple variants of a page, they all point to a common preferred URL, which is designated via the canonical tag. Examples of duplicate page versions are listed below, followed by a sample canonical tag:
1. Pages with filter parameters
2. Paginated pages
3. Secure HTTPS pages
4. Non-secure HTTP pages
5. WWW pages
6. Non-WWW pages
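
For example, each duplicate version above could declare the preferred URL (a placeholder here) in its <head> via the canonical tag:

    <link rel="canonical" href="https://www.example.com/shoes/running/">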

Redirect Chains

A redirect chain is defined as a series of redirects that keep passing from one URL to another, causing users and search engines to wait through these unnecessary hops before they reach the final URL. The implication is that roughly 10% of a page’s authority is lost with each redirect.
A feature available in Screaming Frog makes it quite easy to identify redirect chains from a single report. From there, filter out the URLs that require their redirect chains to be eliminated, so that once your development team has made the updates, the site will see improvements in user experience, crawlability and page load time.
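
As a hedged illustration on an Apache server (all paths are placeholders), the fix is to collapse the chain so every legacy URL points directly to the final destination in a single 301:

    # Before: /old-page -> /newer-page -> /final-page (two hops)
    # After: both legacy URLs jump straight to the final destination
    Redirect 301 /old-page https://www.example.com/final-page
    Redirect 301 /newer-page https://www.example.com/final-page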

Site Speed Optimization

Beyond first impressions, several factors contribute to the speed and performance of a website. In the digital age, how quickly our devices serve us is critical to our productivity at work, school and home, so we expect them to respond as quickly as possible.

Since page load time dictates user behavior, Google also wants our web pages to load quickly and has stated that site speed provides a small ranking advantage.

A very useful tool that will provide insight into website performance is GTmetrix. Following a quick analysis of your website, you will be able to discern how fast (or slow) it loads and prioritize areas of concern, such as reducing image file sizes, leveraging browser caching, deferring the parsing of JavaScript files, and so on.

Assessments and recommendations can easily be made because the report also provides a granular look at specific resources that require special attention.
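
For instance, one common recommendation, leveraging browser caching, can be sketched on an Apache server with mod_expires; the cache lifetimes below are assumptions to tune per site:

    <IfModule mod_expires.c>
      ExpiresActive On
      # Cache static assets in the visitor's browser to speed up repeat page loads
      ExpiresByType image/jpeg "access plus 1 month"
      ExpiresByType image/png "access plus 1 month"
      ExpiresByType text/css "access plus 1 week"
      ExpiresByType application/javascript "access plus 1 week"
    </IfModule>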

Rich Snippets / Schema Structured Data

Structured data is a type of HTML markup implemented on relevant content across a website which, when read by search engines, helps them interpret the information quickly. The result, although not guaranteed, is rich snippets populating in search results, which enhances the searcher’s experience and can drive additional click-throughs.

Some of the most common schema types are:
• Organization
• Corporation
• Local Business
• Government Organization
• Sports Organization
• Educational Organization

A comprehensive resource outlining the areas of a website that can be included for markup can be found here.

To determine whether or not a web page has structured schema data implemented, and whether it has been configured properly, Google’s Structured Data Testing Tool is a complete resource; a minimal markup example is sketched below.
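
As a minimal sketch (the organization details are placeholders), Organization markup in JSON-LD, which can be pasted straight into the testing tool, might look like:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "Organization",
      "name": "Example Agency",
      "url": "https://www.example.com/",
      "logo": "https://www.example.com/images/logo.png",
      "sameAs": [
        "https://www.facebook.com/exampleagency",
        "https://twitter.com/exampleagency"
      ]
    }
    </script>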

Now that you are equipped with a grounding in the most fundamental technical components, you can dive into the audit phase! Remember that a technical SEO audit is only as valuable as the dedicated research put into it. No component should be neglected, regardless of how labor-intensive this deliverable may be. Once the recommendations are implemented, over time you should start to see the positive impact of the above changes and the effect they have on user experience and search engine performance.


