As a marketer, you are probably well aware that great content on your company website and a few decent links pointing to it can reap great rewards. But what about the more technical side of your site? You can have the greatest content in the world and the best links going to your site, but if the site has technical issues, they will damage your rankings AND your traffic!
We know that not every marketing department has a resident “techie”, but unfortunately, search is about more than content and links nowadays. General marketers need to understand technical details as much as web developers need to know a little bit about marketing. We aren’t saying you need to go out and learn to code, but it will definitely help you to understand a few of the more technical aspects of SEO, which you can check yourself and then ask your web team to fix if necessary. Please note the following is not an exhaustive list, but it is a good place to start!
You may have heard of the robots.txt file. This little file provides directions to search engines, and every website should have one in the root directory (e.g. example.com/robots.txt). It is important that the file is formatted in the right way, meaning it should only block files or directories you DON’T want indexed. It should also reference your XML sitemap. By the way, the XML sitemap is essentially a list of your website’s URLs. It tells search engines what content you have, and how to find it.
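To make this concrete, here is a minimal sketch of what a robots.txt file might look like. The domain and directory names are placeholders, not a recommendation for your specific site:

```
# Served at https://www.example.com/robots.txt
User-agent: *
Disallow: /admin/
Disallow: /search-results/

Sitemap: https://www.example.com/sitemap.xml
```

The Disallow lines block only the areas you don’t want indexed, and the Sitemap line tells search engines where to find your list of URLs.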
You can find out more about the robots.txt file and learn about Google’s recommendations here.
Sometimes, when you are redesigning your website, the dev site might be blocked in robots.txt using “Disallow: /”. When you launch the new site, make sure this disallow rule is removed! If you don’t, your site may be taken out of the index. It happens surprisingly often… You can check your robots.txt file in Google Search Console.
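If you want a quick sanity check at launch, a small script can scan your robots.txt for a blanket “Disallow: /” rule. This is a rough sketch (a real parser would also track which user-agent each rule applies to):

```python
def robots_blocks_everything(robots_txt: str) -> bool:
    """Return True if any rule disallows the entire site."""
    for line in robots_txt.splitlines():
        rule = line.split("#", 1)[0].strip()  # ignore comments
        # A whole-site block looks like "Disallow: /" (spacing may vary)
        if rule.lower().replace(" ", "") == "disallow:/":
            return True
    return False

# A dev-site robots.txt that would deindex everything if left in place:
print(robots_blocks_everything("User-agent: *\nDisallow: /"))       # True
# A normal file that only blocks one directory:
print(robots_blocks_everything("User-agent: *\nDisallow: /admin/")) # False
```

You could fetch your live robots.txt and run it through this check as part of your launch checklist.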
It sometimes happens that website owners accidentally create different URLs which generate identical, or nearly identical, content. This can be a big problem for SEO, as search engines do not like “duplicate content” – content which is the same on different pages of a site or across sites. Canonical link elements solve the problem. Lots of websites use canonical link elements to make sure the correct page (the preferred version of a page) is indexed by search engines.
When using canonical link elements, you need to make sure that you reference a URL which does not redirect and is indexed. You should use the full path (e.g. https://www.example.com/).
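In practice, a canonical link element is a single tag in the page’s head section. Here is an illustrative example (the URL is a placeholder); every duplicate variant of the page would carry the same tag pointing at the one preferred URL:

```html
<head>
  <!-- All URL variants of this page declare the preferred version -->
  <link rel="canonical" href="https://www.example.com/shoes/red-trainers/" />
</head>
```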
Redirects are a way to tell search engines when a webpage has moved to a new location, usually because a page has been deleted or its URL has changed. There are different types of redirect for different purposes, but for SEO it is generally recommended that you use a 301 redirect. This tells search engines that a page has permanently moved.
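On an Apache server, for example, a 301 redirect can be set up with a single line in the site’s .htaccess file. This is a sketch with made-up paths, and your web team may use a different server or method:

```
# Permanently redirect the old URL to its new home
Redirect 301 /old-page/ https://www.example.com/new-page/
```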
It is important that each redirect points straight to the final destination. For example, if your page A redirects to page B, which then redirects to page C, this forms a redirect chain. Instead, the better approach is to redirect page A straight to page C, and have page B also redirect to page C. This minimises the number of redirects a visitor (and a search engine) has to follow.
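The flattening step above can be sketched in a few lines of code. Here, known redirects are modelled as a simple mapping from old URL to new URL (the page names are placeholders), and each one is resolved to its final destination:

```python
def final_destination(url: str, redirects: dict) -> str:
    """Follow a chain of redirects to its end, with a loop guard."""
    seen = set()
    while url in redirects:
        if url in seen:
            raise ValueError(f"Redirect loop at {url}")
        seen.add(url)
        url = redirects[url]
    return url

redirects = {
    "/page-a": "/page-b",  # A -> B -> C is a chain...
    "/page-b": "/page-c",
}
# ...so both A and B should redirect straight to C:
flattened = {old: final_destination(old, redirects) for old in redirects}
print(flattened)  # {'/page-a': '/page-c', '/page-b': '/page-c'}
```

The flattened mapping is what you would hand to your web team: every old URL 301s directly to its final page, with no intermediate hops.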
We already touched on this earlier, and we know that duplicate content is not a good thing! When duplicate content is found by search engines, it is generally filtered out so only pages with unique information are shown. This can mean that your page is not seen by the public, as it is hidden away because Google thinks it is a copy of another page. Duplication of content is often accidental, such as when your site migrates from a non-secure domain to a secure one (http to https). If redirects are not set up correctly, you end up with two identical sets of pages, one with http and the other with https. Duplication can also occur with online shops, where product pages are found under more than one URL. This situation can be fixed using canonicals.
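For the http-to-https case, the usual fix is a server-level rule that redirects every http URL to its https equivalent, so only one set of pages exists. As a sketch, on an Apache server this might look like the following (your web team may use a different server or configuration):

```
# Redirect all http traffic to https with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```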